[ { "msg_contents": "While hacking on zedstore, we needed to get a list of the columns to\nbe projected--basically all of the columns needed to satisfy the\nquery. The two use cases we have for this are\n1) to pass this column list down to the AM layer for the AM to leverage it\n2) for use during planning to improve costing\nIn other threads, such as [1], there has been discussion about the\npossible benefits for all table types of having access to this set of\ncolumns.\n\nFocusing on how to get this used cols list (as opposed to how to pass\nit down to the AM), we have tried a few approaches to constructing it\nand wanted to get some ideas on how best to do it.\n\nWe are trying to determine in which phase to get the columns -- after\nparsing, after planning, or during execution right before calling the\nAM.\n\nApproach A: right before calling AM\n\n Leverage expression_tree_walker() right before calling beginscan()\n and collect the columns into a needed-columns context. This\n approach is what is currently in the zedstore patch mentioned in\n this thread [2].\n\n The benefit of this approach is that it walks the tree right\n before the used column set will be used--which makes it easy to\n skip this walk for queries or AMs that don't benefit from this\n used columns list.\n\nApproach B: after parsing and/or after planning\n\n Add a new member 'used_cols' to PlannedStmt which contains the\n attributes for each relation present in the query. Construct\n 'used_cols' at the end of planning using the PathTargets in the\n RelOptInfos in the PlannerInfo->simple_rel_array and the\n RangeTblEntries in PlannerInfo->simple_rte_array.\n\n The nice thing about this is that it does not require a full walk\n of the plan tree. Approach A could be more expensive if the tree\n is quite large. 
I'm not sure, however, if just getting the\n PathTargets from the RelOptInfos is sufficient for obtaining the\n whole set of columns used in the query.\n\n Approach B, however, does not work for utility statements, which do\n not go through planning.\n\n One potential solution to this that we tried was getting the\n columns from the query tree after parse analysis and then, in\n exec_simple_query(), adding the column list to the PlannedStmt.\n\n This turned out to be as messy as, or messier than, Approach A because\n each kind of utility statement has its own data structure that is\n composed of elements taken from the Query tree but does not\n directly include the original PlannedStmt created for the query\n (the PlannedStmt doesn't contain anything except the query tree\n for utility statements since they do not go through planning). So,\n for each type of utility statement, we would have to separately\n copy over the column list from the PlannedStmt in its own way.\n\n It is worth noting that Approach A also requires special handling\n for each type of utility statement.\n\nWe are wondering about specific benefits of Approach B--that is, is\nthere some use case (maybe plan re-use) for having the column set\naccessible in the PlannedStmt itself?\n\nOne potential benefit of Approach B could be for scans of partitioned\ntables. Collecting the used column list could be done once for the\nquery instead of once for each partition.\n\nNeither approach, however, addresses our second use case, as we\nwould not have the column list during planning for non-utility\nstatements. To satisfy this, we would likely have to extract the\ncolumns from the query tree after parse analysis for non-utility\nstatements as well.\n\nAn approach which extracted this list before planning and saved it\nsomewhere would help avoid having to do the same walk during planning\nand then again during execution. 
Though, using the list constructed\nafter parsing may not be ideal when some columns were able to be\neliminated during planning.\n\nMelanie & Ashwin\n\n[1]\nhttps://www.postgresql.org/message-id/20190409010440.bqdikgpslh42pqit%40alap3.anarazel.de\n[2]\nhttps://www.postgresql.org/message-id/CALfoeiuuLe12PuQ%3DzvM_L7B5qegBqGHYENHUGbLOsjAnG%3Dmi4A%40mail.gmail.com", "msg_date": "Fri, 14 Jun 2019 18:45:51 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Extracting only the columns needed for a query" }, { "msg_contents": "On Sat, 15 Jun 2019 at 13:46, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> While hacking on zedstore, we needed to get a list of the columns to\n> be projected--basically all of the columns needed to satisfy the\n> query. 
The two use cases we have for this is\n> 1) to pass this column list down to the AM layer for the AM to leverage it\n> 2) for use during planning to improving costing\n> In other threads, such as [1], there has been discussion about the\n> possible benefits for all table types of having access to this set of\n> columns.\n>\n> Focusing on how to get this used cols list (as opposed to how to pass\n> it down to the AM), we have tried a few approaches to constructing it\n> and wanted to get some ideas on how best to do it.\n>\n> We are trying to determine which phase to get the columns -- after\n> parsing, after planning, or during execution right before calling the\n> AM.\n\nFor planning, isn't this information already available via either\ntargetlists or from the RelOptInfo->attr_needed array combined with\nmin_attr and max_attr?\n\nIf it's going to be too much overhead to extract vars from the\ntargetlist during executor startup then maybe something can be done\nduring planning to set a new Bitmapset field of attrs in\nRangeTblEntry. Likely that can be built easily by looking at\nattr_needed in RelOptInfo. Parse is too early to set this as which\nattributes are needed can change during planning. For example, look at\nremove_rel_from_query() in analyzejoins.c. If you don't need access to\nthis during planning then maybe setrefs.c is a good place to set\nsomething. Although, it would be nice not to add this overhead when\nthe executor does not need this information. 
I'm unsure how the\nplanner could know that though, other than by having another tableam\ncallback function to tell it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Sat, 15 Jun 2019 16:58:23 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> While hacking on zedstore, we needed to get a list of the columns to\n> be projected--basically all of the columns needed to satisfy the\n> query. The two use cases we have for this is\n> 1) to pass this column list down to the AM layer for the AM to leverage it\n> 2) for use during planning to improving costing\n> In other threads, such as [1], there has been discussion about the\n> possible benefits for all table types of having access to this set of\n> columns.\n\nThe thing that most approaches to this have fallen down on is triggers ---\nthat is, a trigger function might access columns mentioned nowhere in the\nSQL text. (See 8b6da83d1 for a recent example :-() If you have a plan\nfor dealing with that, then ...\n\n> Approach B: after parsing and/or after planning\n\nIf we wanted to do something about this, making the planner record\nthe set of used columns seems like the thing to do. We could avoid\nthe expense of doing it when it's not needed by setting up an AM/FDW/\netc property or callback to request it.\n\nAnother reason for having the planner do this is that presumably, in\nan AM that's excited about this, the set of fetched columns should\nplay into the cost estimates for the scan. I've not been paying\nenough attention to the tableam work to know if we've got hooks for\nthe AM to affect scan costing ... 
but if we don't, that seems like\na hole that needs plugged.\n\n> I'm not sure, however, if just getting the\n> PathTargets from the RelOptInfos is sufficient for obtaining the\n> whole set of columns used in the query.\n\nThe PathTarget only records the columns that need to be *emitted*\nby the relation scan plan node. You'd also have to look at the\nbaserestrictinfo conditions to see what columns are inspected by\nfilter conditions. But that seems fine to me anyway since the AM\nmight conceivably have different requirements for filter variables than\nemitted variables. (As a concrete example, filter conditions that are\nhandled by non-lossy index checks might not give rise to any requirement\nfor the table AM to do anything.) So that line of thought confirms that\nwe want to do this at the end of planning when we know the shape of the\nplan tree; the parser can't do it usefully.\n\n> Approach B, however, does not work for utility statements which do\n> not go through planning.\n\nI'm not sure why you're excited about that case? Utility statements\ntend to be pretty much all-or-nothing as far as data access goes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jun 2019 13:01:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": ">\n> The thing that most approaches to this have fallen down on is triggers ---\n> that is, a trigger function might access columns mentioned nowhere in the\n> SQL text. 
(See 8b6da83d1 for a recent example :-() If you have a plan\n> for dealing with that, then ...\n>\n\nWell, if we had a trigger language that compiled to <something> at creation\ntime, and that trigger didn't do any dynamic/eval code, we could store\nwhich attributes and rels were touched inside the trigger.\n\nI'm not sure if that trigger language would be sql, plpgsql with a\n\"compile\" pragma, or maybe we exhume PSM, but it could have some side\nbenefits:\n\n 1. This same issue haunts any attempts at refactoring triggers and\nreferential integrity, so narrowing the scope of what a trigger touches\nwill help there too\n 2. additional validity checks\n 3. (this is an even bigger stretch) possibly a chance to combine multiple\ntriggers into one statement, or combine multiple row-based triggers into a\nstatement-level trigger\n\nOf course, this all falls apart with one dynamic SQL or one SELECT *, but\nit would be an incentive for the users to refactor code to not do things that\nimpede trigger optimization.", "msg_date": "Sun, 16 Jun 2019 11:47:45 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Sat, Jun 15, 2019 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Approach B: after parsing and/or after planning\n>\n> If we wanted to do something about this, making the planner record\n> the set of used columns seems like the thing to do. We could avoid\n> the expense of doing it when it's not needed by setting up an AM/FDW/\n> etc property or callback to request it.\n>\n\nSounds good. In the Zedstore patch, we have added an AM property to convey\nthat the AM leverages column projection, and we currently skip the physical\ntlist optimization based on it. So, yes, it can similarly be leveraged for\nother planning needs.\n\n\n> Another reason for having the planner do this is that presumably, in\n> an AM that's excited about this, the set of fetched columns should\n> play into the cost estimates for the scan. I've not been paying\n> enough attention to the tableam work to know if we've got hooks for\n> the AM to affect scan costing ... but if we don't, that seems like\n> a hole that needs plugged.\n>\n\nThe AM callback relation_estimate_size() exists currently, which the planner\nleverages. Via this callback it fetches tuples, pages, etc. So, our thought\nis to extend this API if possible to pass down the needed columns and help\nperform better costing for the query. 
Though we think that, if we wish to leverage this function, we need to know\nthe list of columns before planning, and hence might need to use the query\ntree.\n\n\n> > Approach B, however, does not work for utility statements which do\n> > not go through planning.\n>\n> I'm not sure why you're excited about that case? Utility statements\n> tend to be pretty much all-or-nothing as far as data access goes.\n>\n\nStatements like COPY, CREATE INDEX, CREATE CONSTRAINTS, etc. can benefit\nfrom a subset of columns for the scan. For example, in Zedstore currently,\nfor CREATE INDEX we extract needed columns by walking\nindexInfo->ii_Predicate and indexInfo->ii_Expressions. For COPY, we\ncurrently use cstate->attnumlist to know which columns need to be\nscanned.", "msg_date": "Sun, 16 Jun 2019 11:26:33 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "We implemented Approach B in the attached patch set (patch 0001) and\nthen implemented Approach A (patch 0002) to sanity check the pruned\nlist of columns to scan we were getting at plan-time.\nWe emit a notice in SeqNext() if the two column sets differ.\nCurrently, for all of the queries in the regression suite, no\ndifferences are found.\nWe did notice that none of the current regression tests for triggers\nare showing a difference between columns that can be extracted at plan\ntime and those that can be extracted at execution time--though we\nhaven't dug into this at all.\n\nIn our implementation of Approach B, we extract the columns to scan in\nmake_one_rel() after set_base_rel_sizes() and before\nset_base_rel_pathlists() so that the scan cols can be used in costing.\nWe do it after calling set_base_rel_sizes() because it eventually\ncalls set_append_rel_size() which adds PathTarget exprs for the\npartitions with Vars having the correct varattno and varno.\n\nWe added the scanCols to RangeTblEntries because it seemed like the\neasiest way to make sure the 
information was available at scan time\n(as suggested by David).\n\nA quirk in the patch for Approach B is that, in inheritance_planner(),\nwe copy over the scanCols which we populated in each subroot's rtable\nto the final rtable.\nThe rtables for all of the subroots seem to be basically unused after\nfinishing with inheritance_planner(). That is, many other members of\nthe modified child PlannerInfos are copied over to the root\nPlannerInfo, but the rtables seem to be an exception.\nIf we want to get at them later, we would have had to go rooting\naround in the pathlist of the RelOptInfo, which seemed very complex.\n\nOne note: we did not add any logic to make the extraction of scan\ncolumns conditional on the AM. We have code to do it conditionally in\nthe zedstore patch, but we made it generic here.\n\nWhile we were implementing this, we found that in the columns\nextracted at plan-time, there were excess columns for\nUPDATE/DELETE...RETURNING on partition tables.\nVars for these columns are being added to the targetlist in\npreprocess_targetlist(). Eventually targetlist items are added to\nRelOptInfo->reltarget exprs.\nHowever, when result_relation is 0, this means all of the vars from\nthe returningList will be added to the targetlist, which seems\nincorrect. 
We included a patch (0003) to only add the vars if\nresult_relation is not 0.\n\nAdding the scanCols in RangeTblEntry, we noticed that a few other\nmembers of RangeTblEntry are consistently initialized to NULL whenever\na new RangeTblEntry is being made.\nThis is confusing because makeNode() for a RangeTblEntry does\npalloc0() anyway, so, why is this necessary?\nIf it is necessary, why not create some kind of generic initialization\nfunction to do this?\n\nThanks,\nAshwin & Melanie", "msg_date": "Tue, 16 Jul 2019 18:49:10 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Sat, Jun 15, 2019 at 10:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Melanie Plageman <melanieplageman@gmail.com> writes:\n> > While hacking on zedstore, we needed to get a list of the columns to\n> > be projected--basically all of the columns needed to satisfy the\n> > query. The two use cases we have for this is\n> > 1) to pass this column list down to the AM layer for the AM to leverage\n> it\n> > 2) for use during planning to improving costing\n> > In other threads, such as [1], there has been discussion about the\n> > possible benefits for all table types of having access to this set of\n> > columns.\n>\n> The thing that most approaches to this have fallen down on is triggers ---\n> that is, a trigger function might access columns mentioned nowhere in the\n> SQL text. (See 8b6da83d1 for a recent example :-() If you have a plan\n> for dealing with that, then ...\n>\n>\nFor triggers, there's not much we can do since we don't know what\ncolumns the trigger function requires. All of the columns will have to\nbe scanned for GetTupleForTrigger().\nThe initial scan can still use the scanCols, though.\n\nCurrently, when we use the scanCols, triggers work because\nGetTupleForTrigger() will call table_tuple_lock(). 
table_tuple_lock()\nexpects the return slot to be populated with the latest fetched\ntuple--which will have all of the columns.\nSo, you don't get any kind of optimization, but, really, with the\ncurrent TRIGGER/FUNCTION syntax, it doesn't seem like we could do\nbetter than that.\n\nAshwin & Melanie", "msg_date": "Wed, 17 Jul 2019 16:46:37 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "> On Tue, Jul 16, 2019 at 06:49:10PM -0700, Melanie Plageman wrote:\n>\n> We implemented Approach B in the attached patch set (patch 0001) and\n> then implemented Approach A (patch 0002) to sanity check the pruned\n> list of columns to scan we were getting at plan-time.\n> We emit a notice in SeqNext() if the two column sets differ.\n> Currently, for all of the queries in the regression suite, no\n> differences are found.\n> We did notice that none of the current regression tests for triggers\n> are showing a difference between columns that can be extracted at plan\n> time and those that can be extracted at execution time--though we\n> haven't dug into this at all.\n\nThanks for the patch! If I understand correctly from this thread,\napproach B is preferable, so I've concentrated on patch 0001\nand have a few comments/questions:\n\n* The idea is to collect columns that are being used for selects/updates\n (where it makes sense for columnar storage to avoid extra work), do I see it\n right? If that's the case, then scanCols could be a bit misleading, since it\n gives the impression that it's only about reads.\n\n* After a quick experiment, it seems that extract_used_columns is invoked for\n updates, but collects too many columns, e.g.:\n\n create table test (id int primary key, a text, b text, c text);\n update test set a = 'something' where id = 1;\n\n collects into scanCols all columns (varattno from 1 to 4) + again the first\n column from baserestrictinfo. 
Is it correct?\n\n* Not sure if it supposed to be covered by this functionality, but if we do\n\n insert ... on conflict (uniq_col) do update set other_col = 'something'\n\n and actually have to perform an update, extract_used_columns is not called.\n\n* Probably it also makes sense to check IS_DUMMY_REL in extract_used_columns?\n\n> > On Sat, Jun 15, 2019 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Another reason for having the planner do this is that presumably, in\n> > an AM that's excited about this, the set of fetched columns should\n> > play into the cost estimates for the scan. I've not been paying\n> > enough attention to the tableam work to know if we've got hooks for\n> > the AM to affect scan costing ... but if we don't, that seems like\n> > a hole that needs plugged.\n>\n> AM callback relation_estimate_size exists currently which planner leverages.\n> Via this callback it fetches tuples, pages, etc.. So, our thought is to extend\n> this API if possible to pass down needed column and help perform better costing\n> for the query. Though we think if wish to leverage this function, need to know\n> list of columns before planning hence might need to use query tree.\n\nI believe it would be beneficial to add this potential API extension patch into\nthe thread (as an example of an interface defining how scanCols could be used)\nand review them together.\n\n\n", "msg_date": "Tue, 17 Dec 2019 11:59:50 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Tue, Dec 17, 2019 at 2:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n>\n> Thanks for the patch! 
If I understand correctly from this thread,\n> approach B is more preferable, so I've concentrated on the patch 0001\n> and have a few commentaries/questions:\n>\n\nThanks so much for the review!\n\n\n>\n> * The idea is to collect columns that have being used for selects/updates\n> (where it makes sense for columnar storage to avoid extra work), do I\n> see it\n> right? If it's the case, then scanCols could be a bit misleading, since\n> it\n> gives an impression that it's only about reads.\n>\n\nThe \"scanCols\" columns are only what will need to be scanned in order\nto execute a query, so, even if a column is being \"used\", it may not\nbe set in \"scanCols\" if it is not required to scan it.\n\nFor example, a column which does not need to be scanned but is \"used\"\n-- e.g. in UPDATE x SET col = 2; \"col\" will not be in \"scanCols\" because\nit is known that it will be 2.\n\nThat makes me think that maybe the function name,\nextract_used_columns() is bad, though. Maybe extract_scan_columns()?\nI tried this in the attached, updated patch.\n\n\n>\n> * After a quick experiment, it seems that extract_used_columns is invoked\n> for\n> updates, but collects too many colums, e.g:\n>\n> create table test (id int primary key, a text, b text, c text);\n> update test set a = 'something' where id = 1;\n>\n> collects into scanCols all columns (varattno from 1 to 4) + again the\n> first\n> column from baserestrictinfo. Is it correct?\n>\n\nFor UPDATE, we need all of the columns in the table because of the\ntable_lock() API's current expectation that the slot has all of the\ncolumns populated. 
If we want UPDATE to only need to insert the column\nvalues which have changed, table_tuple_lock() will have to change.\n\nCollecting columns from the baserestrictinfo is important for when\nthat column isn't present in another part of the query, but it won't\ndouble count it in the bitmap (when it is already present).\n\n\n>\n> * Not sure if it supposed to be covered by this functionality, but if we do\n>\n> insert ... on conflict (uniq_col) do update set other_col = 'something'\n>\n> and actually have to perform an update, extract_used_columns is not\n> called.\n>\n>\nFor UPSERT, you are correct that it will not extract scan columns.\nIt wasn't by design. It is because that UPDATE is planned as part of\nan INSERT.\nFor an INSERT, in query_planner(), because the jointree has only one\nrelation and that relation is an RTE_RESULT planner does not continue\non to make_one_rel() and thus doesn't extract scan columns.\n\nThis means that for INSERT ON CONFLICT DO UPDATE, \"scanCols\" is not\npopulated, however, since UPDATE needs to scan all of the columns\nanyway, I don't think populating \"scanCols\" would have any impact.\nThis does mean that that bitmap would be different for a regular\nUPDATE vs an UPSERT, however, I don't think that doing the extra work\nto populate it makes sense if it won't be used. What do you think?\n\n\n> * Probably it also makes sense to check IS_DUMMY_REL in\n> extract_used_columns?\n>\n>\nI am wondering, when IS_DUMMY_REL is true for a relation, do we\nreference the associated RTE later? It seems like if it is a dummy\nrel, we wouldn't scan it. It still makes sense to add it to\nextract_used_columns(), I think, to avoid any wasted loops through the\nrel's expressions. 
Thanks for the idea!\n\n-- \nMelanie Plageman", "msg_date": "Thu, 2 Jan 2020 17:21:55 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "I just wanted to address a question we got about making scanCols\ninstead of using RelOptInfo->attr_needed.\n\nDavid Rowley had asked:\n\n> For planning, isn't this information already available via either\n> targetlists or from the RelOptInfo->attr_needed array combined with\n> min_attr and max_attr?\n\nattr_needed does not have all of the attributes needed set. Attributes\nare only added in add_vars_to_targetlist() and this is only called for\ncertain query classes.\n\nAlso, Jeff Davis had asked off-list why we didn't start using\nRelOptInfo->attr_needed for our purpose (marking which columns will\nneed to be scanned for the use of table access methods) instead of\nadding a new member to RangeTblEntry.\n\nThe primary reason is that RelOptInfos are used during planning and\nnot execution. 
We need access to this information somewhere in the\nPlannedStmt.\n\nEven if we used attr_needed, we would, at some point, need to transfer\nthe columns needed to be scanned to a data structure available during\nexecution.\n\nHowever, the next question was why not use attr_needed during costing\n(which is the other time the table access method can use scanCols).\n\nDavid Kimura and I dug into how attr_needed is used currently in\nPostgres to understand if its meaning and usage is consistent with\nusing it instead of scanCols during costing.\n\nWe could set attr_needed in the same way we are now setting scanCols\nand then use it during costing, however, besides the fact that we\nwould then have to create a member like scanCols anyway during\nexecution, it seems like adding an additional meaning to attr_needed\nduring planning is confusing and dangerous.\n\nattr_needed is documented as being used to determine the highest\njoinrel in which attribute is needed (when it was introduced\n835bb975d8d).\nSince then it has been extended to be used for removing joins and\nrelations from queries b78f6264eba33 and to determine if whole row\nvars are used in a baserel (which isn't supported as a participant in\na partition-wise join) 7cfdc77023ad507317.\n\nIt actually seems like attr_needed might be too general a name for\nthis field.\nIt isn't set for every attribute in a query -- only in specific cases\nfor attributes in certain parts of the query. If a developer checks\nattr_needed for the columns in his/her query, many times those\ncolumns will not be present.\nIt might actually be better to rename attr_needed to clarify its\nusage.\nscanCols, on the other hand, has a clear meaning and usage. 
For table\naccess methods, scanCols are the columns that need to be scanned.\nThere might even be cases where this ends up being a different set\nthan attr_needed for a base rel.\n\nMelanie & David", "msg_date": "Tue, 7 Jan 2020 15:40:52 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "> On Thu, Jan 02, 2020 at 05:21:55PM -0800, Melanie Plageman wrote:\n>\n> That makes me think that maybe the function name,\n> extract_used_columns() is bad, though. Maybe extract_scan_columns()?\n> I tried this in the attached, updated patch.\n\nThanks, I'll take a look at the new version. Following your explanation\nextract_scan_columns sounds better, but at the same time scan is pretty\nbroad term and one can probably misunderstand what kind of scan is that\nin the name.\n\n> For UPDATE, we need all of the columns in the table because of the\n> table_lock() API's current expectation that the slot has all of the\n> columns populated. If we want UPDATE to only need to insert the column\n> values which have changed, table_tuple_lock() will have to change.\n\nCan you please elaborate on this part? 
I'm probably missing something,\nsince I don't immediately see where those expectations are expressed.\n\n> > AM callback relation_estimate_size exists currently which planner leverages.\n> > Via this callback it fetches tuples, pages, etc.. So, our thought is to extend\n> > this API if possible to pass down needed column and help perform better costing\n> > for the query. Though we think if wish to leverage this function, need to know\n> > list of columns before planning hence might need to use query tree.\n>\n> I believe it would be beneficial to add this potential API extension patch into\n> the thread (as an example of an interface defining how scanCols could be used)\n> and review them together.\n\nI still find this question from my previous email interesting to explore.\n\n\n", "msg_date": "Wed, 15 Jan 2020 16:54:27 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "I'm bumping this to the next commitfest because I haven't had a chance\nto address the questions posed by Dmitry. 
I'm sure I'll get to it in\nthe next few weeks.\n\n> I believe it would be beneficial to add this potential API extension\npatch into\n> the thread (as an example of an interface defining how scanCols could be\nused)\n> and review them together.\n\nAs for including some code that uses the scanCols, after discussing\noff-list with a few folks, there are three options I would like to\npursue for doing this.\n\nOne option I will pursue is using the scanCols to inform the columns\nneeded to be spilled for memory-bounded hashagg (mentioned by Jeff\nhere [1]).\n\nThe second is potentially using the scanCols in the context of FDW.\nCorey, would you happen to have any specific examples handy of when\nthis might be useful?\n\nThe third is exercising it with a test only but providing an example\nof how a table AM API user like Zedstore uses the columns during\nplanning.\n\n[1]\nhttps://www.postgresql.org/message-id/e5566f7def33a9e9fdff337cca32d07155d7b635.camel%40j-davis.com\n\n-- \nMelanie Plageman", "msg_date": "Fri, 31 Jan 2020 09:52:09 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Wed, Jan 15, 2020 at 7:54 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > For UPDATE, we need all of the columns in the table because of the\n> > table_lock() API's current expectation that the slot has all of the\n> > columns populated. If we want UPDATE to only need to insert the column\n> > values which have changed, table_tuple_lock() will have to change.\n>\n> Can you please elaborate on this part? I'm probably missing something,\n> since I don't see immediately where are those expectations expressed.\n>\n\ntable_tuple_lock() has a TupleTableSlot as an output argument. The comment for\nthe function states \"*slot: contains the target tuple\". Usages of\ntable_tuple_lock() in places like ExecUpdate() and GetTupleForTrigger() use\nthe returned tuple for EvalPlanQual. Also, it's unknown\nwithin table_tuple_lock() what the expectation is and what would be\nperformed on the returned slot/tuple. 
Hence, it becomes tricky for any AM\nto guess, and it currently needs to return the full tuple, as the API doesn't\nstate which columns the caller would be interested in, or whether it just\nwishes to take the lock and needs nothing back in the slot.", "msg_date": "Fri, 31 Jan 2020 10:05:23 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "> > > On Sat, Jun 15, 2019 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Another reason for having the planner do this is that presumably, in\n> > > an AM that's excited about this, the set of fetched columns should\n> > > play into the cost estimates for the scan. I've not been paying\n> > > enough attention to the tableam work to know if we've got hooks for\n> > > the AM to affect scan costing ... 
but if we don't, that seems like\n> > > a hole that needs plugged.\n> >\n> > AM callback relation_estimate_size exists currently which planner\n> leverages.\n> > Via this callback it fetches tuples, pages, etc.. So, our thought is to\n> extend\n> > this API if possible to pass down needed column and help perform better\n> costing\n> > for the query. Though we think if wish to leverage this function, need\n> to know\n> > list of columns before planning hence might need to use query tree.\n>\n> I believe it would be beneficial to add this potential API extension patch\n> into\n> the thread (as an example of an interface defining how scanCols could be\n> used)\n> and review them together.\n>\n\nThanks for your suggestion, we paste one potential API extension change\nbelow for zedstore to use scanCols.\n\nThe change contains 3 patches to clarify our idea.\n0001-ANALYZE.patch is a generic patch for the ANALYZE API extension; we\ndeveloped it to make the analysis of zedstore tables more accurate. 
It is more flexible now, e.g.,\nTableAm can provide a logical block number as the random sample seed;\nTableAm can analyze only specified columns; TableAm can provide extra\ninfo besides the data tuple.\n\n0002-Planner.patch is the real patch to show how we use rte->scanCols for\ncost estimates. The main idea is to add a new metric 'stadiskfrac' to the\ncatalog pg_statistic; 'stadiskfrac' is the physical size ratio of a column\nand is calculated when ANALYZE is performed (0001-ANALYZE.patch helps to\nprovide the extra disk size info). So when set_plain_rel_size() is called\nby the planner, it uses rte->scanCols and 'stadiskfrac' to adjust\nrel->pages, please see set_plain_rel_page_estimates().\n\n0003-ZedStore.patch is an example of how zedstore uses the extended\nANALYZE API; I paste it here anyway, in case someone is interested in it.\n\nThanks,\nPengzhou", "msg_date": "Fri, 14 Feb 2020 17:59:39 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Fri, Jan 31, 2020 at 9:52 AM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n> I'm bumping this to the next commitfest because I haven't had a chance\n> to address the questions posed by Dmitry. 
I'm sure I'll get to it in\n> the next few weeks.\n>\n> > I believe it would be beneficial to add this potential API extension\n> patch into\n> > the thread (as an example of an interface defining how scanCols could be\n> used)\n> > and review them together.\n>\n> As for including some code that uses the scanCols, after discussing\n> off-list with a few folks, there are three options I would like to\n> pursue for doing this.\n>\n> One option I will pursue is using the scanCols to inform the columns\n> needed to be spilled for memory-bounded hashagg (mentioned by Jeff\n> here [1]).\n>\n>\n> The third is exercising it with a test only but providing an example\n> of how a table AM API user like Zedstore uses the columns during\n> planning.\n>\n\nOutside of the use case that Pengzhou has provided in [1], we started\nlooking into using scanCols for extracting the subset of columns\nneeded in two cases:\n\n1) columns required to be spilled for memory-bounded hashagg\n2) referenced CTE columns which must be materialized into tuplestore\n\nHowever, these optimizations wouldn't work with the scanCols patch in\nits current form.\n\nThe scanCols are extracted from PlannerInfo->simple_rel_array and\nPlannerInfo->simple_rte_array, at which point, we have no way of\nknowing if the column was aggregated or if it was part of a CTE or\nanything else about how it is used in the query.\n\nWe could solve this by creating multiple bitmaps at the time that we\ncreate the scanCols field -- one for aggregated columns, one for\nunaggregated columns, one for CTEs. However, that seems like it would\nadd a lot of extra complexity to the common code path during planning.\n\nBasically, scanCols are simply columns that need to be scanned. 
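As a rough illustration of that trade-off, here is a hypothetical Python sketch (not planner code; relation names and attribute numbers are made up):

```python
# Hypothetical sketch (Python, not planner code) of the trade-off above;
# relation names and attribute numbers are made up.
from collections import defaultdict

# Current shape: one bitmap of needed attribute numbers per relation.
scan_cols = defaultdict(set)
scan_cols["rel1"].update({1, 2})   # every column that must be scanned

# The categorized variant would need one bitmap per usage class, and
# every collection site would have to classify each column it records:
categorized = {
    "aggregated": defaultdict(set),
    "unaggregated": defaultdict(set),
    "cte": defaultdict(set),
}
categorized["aggregated"]["rel1"].add(1)
categorized["unaggregated"]["rel1"].add(2)

# The plain scanCols bitmap is just the union of the categorized ones,
# which is why a single per-relation bitmap keeps the common planning
# path cheap.
union = set().union(*(bitmaps["rel1"] for bitmaps in categorized.values()))
assert union == scan_cols["rel1"]
```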
It is\nprobably okay if it is only used by table access method API users, as\nPengzhou's patch illustrates.\n\nGiven that we have addressed the feedback about showing a use case,\nthis patch is probably ready for a once over from Dmitry again. (It is\nregistered for the March fest).\n\n[1]\nhttps://www.postgresql.org/message-id/CAG4reAQc9vYdmQXh%3D1D789x8XJ%3DgEkV%2BE%2BfT9%2Bs9tOWDXX3L9Q%40mail.gmail.com\n\n-- \nMelanie Plageman", "msg_date": "Tue, 18 Feb 2020 15:26:16 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "> On Tue, Feb 18, 2020 at 03:26:16PM -0800, Melanie Plageman wrote:\n>\n> > > I believe it would be beneficial to add this potential API extension\n> > patch into\n> > > the thread (as an example of an interface defining how scanCols could be\n> > used)\n> > > and review them together.\n> >\n> > As for including some code that uses the scanCols, after discussing\n> > off-list with a few folks, there are three options I would like to\n> > pursue for doing this.\n> >\n> > One option I will pursue is using the scanCols to inform the columns\n> > needed to be spilled for memory-bounded hashagg (mentioned by Jeff\n> > here [1]).\n> >\n> >\n> > The third is exercising it with a test only but providing an example\n> > of how a table AM API user like Zedstore uses the columns during\n> > planning.\n> >\n>\n> Basically, scanCols are simply columns that need to be scanned. It is\n> probably okay if it is only used by table access method API users, as\n> Pengzhou's patch illustrates.\n\nThanks for the update. Sure, that would be fine. At the moment I have a\ncouple of intermediate comments.\n\nIn general the implemented functionality looks good. I've checked how it\nworks on the existing tests, almost everywhere required columns were not\nmissing in scanCols (which is probably the most important part).\nSometimes expressions were checked multiple times, which could\npotentially introduce some overhead, but I believe this effect is\nnegligible. 
Just to mention some counterintuitive examples, for this\nkind of query\n\n    SELECT min(q1) FROM INT8_TBL;\n\nthe same column was checked 5 times in my tests, since it's present also\nin baserestrictinfo, and build_minmax_path does one extra round of\nplanning and invoking make_one_rel. I've also noticed that for\npartitioned tables every partition is evaluated separately. IIRC their\nstructure cannot differ, so does it make sense then? Another interesting\nexample is Values Scan (e.g. in insert statements with multiple\nrecords), can an abstract table AM user leverage information about\ncolumns in it?\n\nOne case, where I believe columns were missing, is statements with\nreturning:\n\n    INSERT INTO foo (col1)\n      VALUES ('col1'), ('col2'), ('col3')\n      RETURNING *;\n\nLooks like in this situation the only expression in reltarget is\nfor col1, but the returning result contains all columns.\n\nAnd just out of curiosity, what do you think about table AM specific\ncolumns e.g. ctid, xmin/xmax etc? I mean, they're not included into\nscanCols and should not be since they're heap AM related. But is there a\nchance that there would be some AM specific columns relevant to e.g.\nthe columnar storage that would also make sense to put into scanCols?\n\n\n", "msg_date": "Fri, 13 Mar 2020 20:10:48 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Fri, Mar 13, 2020 at 12:09 PM Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n\n> In general the implemented functionality looks good. I've checked how it\n> works on the existing tests, almost everywhere required columns were not\n> missing in scanCols (which is probably the most important part).\n> Sometimes expressions were checked multiple times, which could\n> potentially introduce some overhead, but I believe this effect is\n> negligible. 
Just to mention some counterintuitive examples, for this\n> kind of query\n>\n> SELECT min(q1) FROM INT8_TBL;\n>\n> the same column was checked 5 times in my tests, since it's present also\n> in baserestrictinfo, and build_minmax_path does one extra round of\n> planning and invoking make_one_rel.\n\n\nThanks so much for the very thorough review, Dmitry.\n\nThese extra calls to extract_scan_cols() should be okay in this case.\nAs you mentioned, for min/max queries, planner attempts an optimization\nwith an indexscan and, to do this, it modifies the querytree and then\ncalls query_planner() on it.\nIt tries it with NULLs first and then NULLs last -- each of which\ninvokes query_planner(), so that is two out of three calls. The\nthird is the normal invocation. I'm not sure how you would get five,\nthough.\n\n\n> Another interesting\n> example is Values Scan (e.g. in an insert statements with multiple\n> records), can an abstract table AM user leverage information about\n> columns in it?\n>\n> One case, where I believe columns were missing, is statements with\n> returning:\n>\n> INSERT INTO foo (col1)\n> VALUES ('col1'), ('col2'), ('col3')\n> RETURNING *;\n>\n> Looks like in this situation there is only expression in reltarget is\n> for col1, but returning result contains all columns.\n>\n\nThis relates to both of your above points:\nFor this RETURNING query, it is a ValuesScan, so no columns have to be\nscanned. We actually do add the reference to col1 to the scanCols\nbitmap, though. I'm not sure we want to do so, since we don't want to\nscan col1 in this case. I wonder what cases we would miss if we special\ncased RTEKind RTE_VALUES when doing extract_scan_cols().\n\n-- \nMelanie", "msg_date": "Thu, 18 Jun 2020 17:46:09 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extracting only the columns needed for a query" }, { "msg_contents": "On Fri, Mar 13, 2020 at 12:09 PM Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n> I've also noticed that for partitioned tables every partition is\n> evaluated separately. IIRC they structure cannot differ, does it\n> makes sense then?\n\nGood point. At some point, we had discussed only collecting the columns\nfor one of the child partitions and then using that for all partitions.\n\nIt is definitely worthwhile to try that optimization.\n\n> Another interesting\n> example is Values Scan (e.g. in an insert statements with multiple\n> records), can an abstract table AM user leverage information about\n> columns in it?\n>\n\nFor ValuesScan, I can't see a use case yet for including those in\nscanCols, since it is not required to scan the existing columns in the\ntable, but I am open to a use case.\n\n\n>\n> One case, where I believe columns were missing, is statements with\n> returning:\n>\n>     INSERT INTO foo (col1)\n>       VALUES ('col1'), ('col2'), ('col3')\n>       RETURNING *;\n>\n> Looks like in this situation there is only expression in reltarget is\n> for col1, but returning result contains all columns.\n>\n>\nSo, you are correct, for INSERT and DELETE queries with RETURNING, the\nscanCols should only include columns needed to INSERT or DELETE data.\n\nFor DELETE RETURNING, the RETURNING expression is executed separately in\nExecDelete, so scanCols should basically reflect only what needs to be\nscanned to evaluate the qual.\n\nFor INSERT RETURNING, the scanCols should only reflect those columns\nneeding to be scanned to perform the INSERT.\n\nThere are several reasons why different kinds of RETURNING queries have\ntoo many scanCols. 
Soumyadeep and I did some investigation into this.\n\nGiven:\nt1(a, b)\nt2(a, b, c)\n\nFor:\nINSERT INTO t1(a) VALUES (1) RETURNING *;\n\nThere is no need to scan t1(a) in order to satisfy the RETURNING\nexpression here. Because this INSERT doesn't go through make_one_rel(),\nscanCols shouldn't be populated.\n\nFor:\nINSERT INTO t2 SELECT a,a,a FROM t1 RETURNING *;\n\nFor this INSERT, the patch correctly identifies that no columns from t2\nneed to be scanned and only t1(a) needs to be scanned.\n\nFor:\nDELETE FROM t1 WHERE a = 2 RETURNING *;\n\nscanCols correctly has only t1(a) which is needed to evaluate the qual.\n\nFor:\nDELETE FROM t1 USING t2 where a = a RETURNING *;\n\nscanCols should have t1(a) and t2(a), however, it has t1(a) and t2(a,b,c).\n\nThis is because in preprocess_targetlist(), all of the returningList\nvars from t2 are added to the query tree processed_tlist to make sure\nthe RETURNING expression can be evaluated later.\n\nHowever, the query tree processed_tlist items for each rel seem to be\nadded to the RelOptInfo for t2 later, so, in extract_scan_columns(), we\nmistakenly add these to the scanCols.\n\nThis is tricky to change because we don't want to change what gets added\nto the RelOptInfo->reltarget->exprs (I think), so, we'd need some way to\nknow which columns are from the RETURNING expression, which are from the\nqual, etc. And, we'd need to know the query was a DELETE (since an\nUPDATE would need all the columns anyway with the current API, for\nexample). 
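Schematically, the over-collection for that last DELETE ... USING ... RETURNING * example looks like this (a Python sketch with made-up structures, not planner code):

```python
# Schematic sketch (made-up structures, not planner code) of the
# over-collection for: DELETE FROM t1 USING t2 WHERE a = a RETURNING *;

qual_cols = {"t1": {"a"}, "t2": {"a"}}       # all the qual really needs

# preprocess_targetlist() adds every returningList Var for t2 to the
# query tree's processed_tlist, and those later show up among t2's
# reltarget expressions:
reltarget_exprs = {"t1": set(), "t2": {"a", "b", "c"}}

# extract_scan_columns() cannot tell qual Vars from RETURNING-only Vars,
# so it ends up unioning both sources per relation:
scan_cols = {
    rel: qual_cols.get(rel, set()) | reltarget_exprs.get(rel, set())
    for rel in ("t1", "t2")
}
assert scan_cols["t1"] == {"a"}              # correct
assert scan_cols["t2"] == {"a", "b", "c"}    # over-collected: only {"a"} needed
```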
This is pretty different than the current logic in\nextract_scan_cols().\n\nA separate issue with DELETE USING RETURNING queries scanCols arises\nwith partition tables.\n\nGiven this additional table:\nt(a, b, c) partition table with partitions\ntp1(a, b, c) and\ntp2(a, b, c)\n\nthe same query above with different relations\nDELETE FROM t USING t1 WHERE a = a RETURNING *;\n\nscanCols will say it requires t(a,b,c) and t1(a,b) (all of the columns\nin both relations).\nt1 columns are wrong for the same reason as in the non-partition example\nquery described above.\n\nHowever, the partition table scanCols are wrong for a related but\ndifferent reason. This, however, is much more easily fixed. The same\ncode to add returningList to processed_tlist for a querytree in\npreprocess_targetlist() doesn't handle the case of an inherited table or\npartition child (since their result_relation is set to imitate a SELECT\ninstead of an UPDATE/DELETE/INSERT). I started a different thread on\nthis topic [1].\n\nLastly is UPDATE...RETURNING queries.\nUPDATE queries will require a scan of all of the columns with the current\ntuple_table_lock() API (mentioned upthread).\n\n\n> And just out of curiosity, what do you think about table AM specific\n> columns e.g. ctid, xmin/xmax etc? I mean, they're not included into\n> scanCols and should not be since they're heap AM related. But is there a\n> chance that there would be some AM specific columns relevant to e.g.\n> the columnar storage that would also make sense to put into scanCols?\n>\n\nMaybe tid? But, I'm not sure. What do you think?\n\n[1]\nhttps://www.postgresql.org/message-id/CAAKRu_Y+7qX4JzvfovsBE9T9_2c4kK1Bdda139oQ6cA5B-LTZA@mail.gmail.com\n\nOn Fri, Mar 13, 2020 at 12:09 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>  I've also noticed that for partitioned tables every partition is\n>  evaluated separately. IIRC their structure cannot differ, does it\n>  make sense then?\n\nGood point. At some point, we had discussed only collecting the columns\nfor one of the child partitions and then using that for all partitions.\nIt is definitely worthwhile to try that optimization.\n\n>  Another interesting example is Values Scan (e.g. in an insert statement\n>  with multiple records), can an abstract table AM user leverage\n>  information about columns in it?\n\nFor ValuesScan, I can't see a use case yet for including those in\nscanCols, since it is not required to scan the existing columns in the\ntable, but I am open to a use case.\n\n>  One case, where I believe columns were missing, is statements with\n>  returning:\n>\n>      INSERT INTO foo (col1)\n>        VALUES ('col1'), ('col2'), ('col3')\n>        RETURNING *;\n>\n>  Looks like in this situation the only expression in reltarget is\n>  for col1, but the returning result contains all columns.\n\nSo, you are correct, for INSERT and DELETE queries with RETURNING, the\nscanCols should only include columns needed to INSERT or DELETE data.\n\nFor DELETE RETURNING, the RETURNING expression is executed separately in\nExecDelete, so scanCols should basically reflect only what needs to be\nscanned to evaluate the qual.\n\nFor INSERT RETURNING, the scanCols should only reflect those columns\nneeding to be scanned to perform the INSERT.\n\nThere are several reasons why different kinds of RETURNING queries have\ntoo many scanCols; the investigation Soumyadeep and I did into this is\ndescribed above.
[ { "msg_contents": "Hello hackers,\n\nPlease consider fixing the following typos and inconsistencies living in\nthe source code starting from v11:\n3.1 add_placeholders_to_child_joinrel -> remove (orphaned after 7cfdc770)\n3.2 AttachIndexInfo -> IndexAttachInfo (an internal inconsistency)\n3.3 BlockRecordInfo -> BlockInfoRecord (an internal inconsistency)\n3.4 bount -> bound (a typo)\n3.5 CopyBoth -> CopyBothResponse (an internal inconsistency)\n3.6 directy -> directory (a typo)\n3.7 ExecCreateSlotFromOuterPlan -> ExecCreateScanSlotFromOuterPlan (an\ninternal inconsistency)\n3.8 es_epqTuple -> es_epqTupleSlot (an internal inconsistency)\n3.9 ExecHashTableParallelInsert -> ExecParallelHashTableInsert (an\ninternal inconsistency)\n3.10 ExecMakeFunctionResult -> ExecMakeFunctionResultSet (an internal\ninconsistency)\n3.11 fmgr_builtins_oid_index -> fmgr_builtin_oid_index (an internal\ninconsistency)\n3.12 freeScanStack -> remove (irrelevant after 2a636834, v12 only)\n3.13 GetTupleTrigger -> GetTupleForTrigger (an internal inconsistency)\n3.14 grow_barrier -> grow_batches_barrier (an internal inconsistency)\n3.15 HAVE__BUIILTIN_CLZ -> HAVE__BUILTIN_CLZ (a typo, v12 only)\n3.16 ignored_killed_tuples -> ignore_killed_tuples (a typo)\n3.17 intset_tests_stats -> intset_test_stats (an internal inconsistency,\nv12 only)\n3.18 is_aggregate -> objtype (needed to determine error handling and\nrequired result type) (an internal inconsistency)\n3.19 iterate_json_string_values -> iterate_json_values (renamed in 1c1791e0)\n3.20 $log_number -> remove (not used since introduction in ed8a7c6f)\n3.21 mechinism -> mechanism (a typo)\n3.22 new_node, new_node_item -> child, child_key (an internal\ninconsistency, v12 only)\n3.23 new_part_constaints -> new_part_constraints (a typo)\n3.24 parentIndexOid -> parentIndexId (for the sake of consistency, but\nthis argument is still unused since 8b08f7d4)\n3.25 particiant -> participant (a typo)\n3.26 PathNameCreateShared -> SharedFileSetCreate 
(an internal inconsistency)\n3.27 PathnameCreateTemporaryDir -> PathNameCreateTemporaryDir (an\ninconsistent case)\n3.28 pg_access_server_files -> pg_read_server_files or\npg_write_server_files (non-existing role referenced)\n3.29 pg_beginmessage_reuse -> pq_beginmessage_reuse (a typo)\n3.30 Form_pg_fdw & pg_fdw -> Form_pg_foreign_data_wrapper &\npg_foreign_data_wrapper (an internal inconsistency)\n3.31 PG_MCV_LIST -> pg_mcv_list (an internal inconsistency, v12 only)\n3.32 pg_partition_table -> pg_partitioned_table (an internal inconsistency)\n3.33 pg_write -> pg_pwrite (an internal inconsistency, v12 only)\n3.34 PLyObject_FromJsonb -> PLyObject_FromJsonbContainer (an internal\ninconsistency)\n3.35 port_win32.h -> win32_port.h (an internal inconsistency)\n3.36 PruneCtxStateIdx -> PruneCxtStateIdx (an internal inconsistency)\n3.37 SetErrormode -> SetErrorMode (an internal inconsistency)\n3.38 SharedRecordTypemodRegistry -> SharedRecordTypmodRegistry (an\ninternal inconsistency)\n3.39 SharedTupleStore -> SharedTuplestore (an internal inconsistency)\n3.40 shm_mq_get_receive_bytes -> shm_mq_receive_bytes (an internal\ninconsistency)\n3.41 t_natts -> number-of-attributes (questionable) (renamed in\nstorage.sgml with 3e23b68d, but one reference is left)\n3.42 tts_buffer -> remove (orphaned after 4da597ed, v12 only)\n3.43 tts_flag -> tts_flags (an internal inconsistency, v12 only)\n3.44 tts_off -> remove (orphaned after 4da597ed, v12 only)\n3.45 _vaues -> _values (a typo)\n3.46 wait_event_class -> wait_event_type (an internal inconsistency)\n3.47 WarnNoTranactionBlock -> WarnNoTransactionBlock (a typo)\n3.48 with-wal-segsize -> remove (orphaned after fc49e24f)\n3.49 XLOG_SEG_SIZE -> WAL segment size (orphaned after fc49e24fa)\n\nTwo summary patches for REL_11_STABLE and master are attached.\n\nBest regards,\nAlexander", "msg_date": "Sat, 15 Jun 2019 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Fix typos and 
inconsistencies for v11+" }, { "msg_contents": "On Sat, Jun 15, 2019 at 06:00:00PM +0300, Alexander Lakhin wrote:\n> Two summary patches for REL_11_STABLE and master are attached.\n\nThanks. I have committed to HEAD most of the inconsistencies you have\npointed out.\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 16:16:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos and inconsistencies for v11+" }, { "msg_contents": "17.06.2019 10:16, Michael Paquier wrote:\n> On Sat, Jun 15, 2019 at 06:00:00PM +0300, Alexander Lakhin wrote:\n>> Two summary patches for REL_11_STABLE and master are attached.\n> Thanks. I have committed to HEAD most of the inconsistencies you have\n> pointed out.\nThank you, Michael.\nThen I will go deeper for v10 and beyond. If older versions are not\ngoing to be fixed, I will prepare patches only for the master branch.\n\nBest regards,\nAlexander\n\n\n\n", "msg_date": "Mon, 17 Jun 2019 10:32:13 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix typos and inconsistencies for v11+" }, { "msg_contents": "On Mon, Jun 17, 2019 at 10:32:13AM +0300, Alexander Lakhin wrote:\n> Then I will go deeper for v10 and beyond. If older versions are not\n> going to be fixed, I will prepare patches only for the master branch.\n\nWhen it comes to fixing typos in in anything which is not directly\nuser-visible like the documentation or error strings, my take is to\nbother only about HEAD. There is always a risk of conflicts with\nback-branches, but I have never actually bumped into this as being a\nproblem. There is an argument for me to be less lazy of course..\n--\nMichael", "msg_date": "Tue, 18 Jun 2019 09:52:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos and inconsistencies for v11+" } ]
[ { "msg_contents": "Once in a blue moon I get this assertion failure on server start:\n\n2019-06-15 12:00:29.650 -04 [30080] LOG: iniciando PostgreSQL 12beta1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04) 7.4.0, 64-bit\n2019-06-15 12:00:29.650 -04 [30080] LOG: escuchando en la dirección IPv4 «127.0.0.1», port 55432\n2019-06-15 12:00:29.650 -04 [30080] LOG: escuchando en el socket Unix «/tmp/.s.PGSQL.55432»\n2019-06-15 12:00:29.658 -04 [30956] LOG: el sistema de bases de datos fue apagado en 2019-06-15 12:00:24 -04\n2019-06-15 12:00:29.659 -04 [30080] LOG: proceso de servidor (PID 30107) terminó con código de salida 15\n2019-06-15 12:00:29.659 -04 [30080] LOG: terminando todos los otros procesos de servidor activos\nTRAP: FailedAssertion(«!(AbortStartTime == 0)», Archivo: «/pgsql/source/master/src/backend/postmaster/postmaster.c», Línea: 2957)\nAborted (core dumped)\n\nApologies for the Spanish -- I cannot readily reproduce this. In\nessence, this shows a normal startup, until suddenly process 30107\nterminates with exit code 15, and then while shutting everything down,\npostmaster hits the aforementioned assertion and terminates.\n\nOne problem with debugging this is that I don't know what process 30107\nis, since the logs don't mention it.\n\nNo idea what is going on. 
But I'm going to set my script to start the\nserver with log_min_messages=debug1, in case I hit it again ...\n\nHas anybody else seen this?\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Sat, 15 Jun 2019 12:09:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "assertion at postmaster start" }, { "msg_contents": "Hi,\n\nOn 2019-06-15 12:09:50 -0400, Alvaro Herrera wrote:\n> Once in a blue moon I get this assertion failure on server start:\n> \n> 2019-06-15 12:00:29.650 -04 [30080] LOG: iniciando PostgreSQL 12beta1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04) 7.4.0, 64-bit\n> 2019-06-15 12:00:29.650 -04 [30080] LOG: escuchando en la dirección IPv4 «127.0.0.1», port 55432\n> 2019-06-15 12:00:29.650 -04 [30080] LOG: escuchando en el socket Unix «/tmp/.s.PGSQL.55432»\n> 2019-06-15 12:00:29.658 -04 [30956] LOG: el sistema de bases de datos fue apagado en 2019-06-15 12:00:24 -04\n> 2019-06-15 12:00:29.659 -04 [30080] LOG: proceso de servidor (PID 30107) terminó con código de salida 15\n> 2019-06-15 12:00:29.659 -04 [30080] LOG: terminando todos los otros procesos de servidor activos\n> TRAP: FailedAssertion(«!(AbortStartTime == 0)», Archivo: «/pgsql/source/master/src/backend/postmaster/postmaster.c», Línea: 2957)\n> Aborted (core dumped)\n> \n> Apologies for the Spanish -- I cannot readily reproduce this. In\n> essence, this shows a normal startup, until suddenly process 30107\n> terminates with exit code 15, and then while shutting everything down,\n> postmaster hits the aforementioned assertion and terminates.\n\nI assume this is on master as of a few days ago? This doesn't even look\nto be *after* a crash-restart? 
And I assume core files weren't enabled?\n\n\n> One problem with debugging this is that I don't know what process 30107\n> is, since the logs don't mention it.\n\nHm - it probably can't be that many processes, it looks like 30107 has\nto have started pretty soon after the startup process (which IIRC is the\none emitting \"el sistema de bases de datos fue apagado en\"), and as soon\nas that's done 30107 is noticed as having crashed.\n\nUnfortunately, as this appears to be a start in a clean database, we\ndon't really know which phase of startup this is. There are IIRC no\nmessages to be expected before \"database system is ready to accept\nconnections\" in a clean start.\n\nWhat is a bit odd is that:\n\n> 2019-06-15 12:00:29.659 -04 [30080] LOG: proceso de servidor (PID 30107) terminó con código de salida 15\n\ncomes from:\n#. translator: %s is a noun phrase describing a child process, such as\n#. \"server process\"\n#: postmaster/postmaster.c:3656\n#, c-format\nmsgid \"%s (PID %d) exited with exit code %d\"\nmsgstr \"%s (PID %d) terminó con código de salida %d\"\n\n#: postmaster/postmaster.c:3301 postmaster/postmaster.c:3321\n#: postmaster/postmaster.c:3328 postmaster/postmaster.c:3346\nmsgid \"server process\"\nmsgstr \"proceso de servidor\"\n\nAnd \"server process\" is afaict only used for actual backends, not other\ntypes of processes. But we've not yet seen \"database system is ready to\naccept connections\", so IIRC it could only be a \"dead_end\" type\nbackend? But we didn't yet see an error from that...\n\n\n> No idea what is going on.\n\nSeems to indicate a logic error in postmaster's state machine. 
Perhaps\nsomething related to dead_end processes?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2019 10:28:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: assertion at postmaster start" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-06-15 12:09:50 -0400, Alvaro Herrera wrote:\n>> Once in a blue moon I get this assertion failure on server start:\n>> TRAP: FailedAssertion(«!(AbortStartTime == 0)», Archivo: «/pgsql/source/master/src/backend/postmaster/postmaster.c», Línea: 2957)\n\n> And \"server process\" is afaict only used for actual backends, not other\n> types of processes. But we've not yet seen \"database system is ready to\n> accept accept connections\", so IIRC it could only be a \"dead_end\" type\n> backend? But we didn't yet see an error from that...\n> Seems to indicate a logic error in postmaster's state machine. Perhaps\n> something related to dead_end processes?\n\nSo if Andres is guessing right, this must be from something trying to\nconnect before the postmaster is ready? Seems like that could be\ntested for ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jun 2019 13:38:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assertion at postmaster start" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-06-15 12:09:50 -0400, Alvaro Herrera wrote:\n>>> Once in a blue moon I get this assertion failure on server start:\n>>> TRAP: FailedAssertion(\"!(AbortStartTime == 0)\", Archivo: \"/pgsql/source/master/src/backend/postmaster/postmaster.c\", Linea: 2957)\n\n>> And \"server process\" is afaict only used for actual backends, not other\n>> types of processes. But we've not yet seen \"database system is ready to\n>> accept accept connections\", so IIRC it could only be a \"dead_end\" type\n>> backend? 
But we didn't yet see an error from that...\n>> Seems to indicate a logic error in postmaster's state machine. Perhaps\n>> something related to dead_end processes?\n\n> So if Andres is guessing right, this must be from something trying to\n> connect before the postmaster is ready? Seems like that could be\n> tested for ...\n\nI got around to trying to test for this, and I find that I can reproduce\nthe symptom exactly by applying the attached hack and then trying to\nconnect a couple of seconds after starting the postmaster.\n\nBasically what seems to be happening in Alvaro's report is that\n\n(1) before the startup process is done, something tries to connect,\ncausing a dead_end child to be launched;\n\n(2) for reasons mysterious, that child exits with exit code 15 rather\nthan the expected 0 or 1;\n\n(3) the postmaster therefore initiates a system-wide restart cycle;\n\n(4) the startup process completes normally anyway, indicating that the\nSIGQUIT arrived too late to affect it;\n\n(5) then we hit the Assert, since we reach the transition-to-normal-run\ncode even though HandleChildCrash set AbortStartTime in step (3).\n\nThe timing window for (4) to happen is extremely tight normally. The\nattached patch makes it wider by the expedient of just not sending the\nSIGQUIT to the startup process ;-). 
Then you just need enough of a delay\nin startup to perform a manual connection, plus some hack to make the\ndead_end child exit with an unexpected exit code.\n\nI think what this demonstrates is that that Assert is just wrong:\nwe *can* reach PM_RUN with the flag still set, so we should do\n\n\t\t\tStartupStatus = STARTUP_NOT_RUNNING;\n\t\t\tFatalError = false;\n-\t\t\tAssert(AbortStartTime == 0);\n+\t\t\tAbortStartTime = 0;\n\t\t\tReachedNormalRunning = true;\n\t\t\tpmState = PM_RUN;\n\nProbably likewise for the similar Assert in sigusr1_handler.\n\nA larger question is whether we should modify the postmaster logic\nso that crashes of dead_end children aren't treated as reasons to\nperform a system restart. I'm dubious about this, because frankly,\nsuch crashes shouldn't be happening. There is very little code\nthat a dead_end child will traverse before exiting ... so how the\ndevil did it reach an exit(15)? Alvaro, are you running any\nnonstandard code in the postmaster (shared_preload_libraries, maybe)?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 24 Aug 2019 13:30:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assertion at postmaster start" }, { "msg_contents": "I wrote:\n> I think what this demonstrates is that that Assert is just wrong:\n> we *can* reach PM_RUN with the flag still set, so we should do\n> \t\t\tStartupStatus = STARTUP_NOT_RUNNING;\n> \t\t\tFatalError = false;\n> -\t\t\tAssert(AbortStartTime == 0);\n> +\t\t\tAbortStartTime = 0;\n> \t\t\tReachedNormalRunning = true;\n> \t\t\tpmState = PM_RUN;\n> Probably likewise for the similar Assert in sigusr1_handler.\n\nPoking further at this, I noticed that the code just above here completely\nfails to do what the comments say it intends to do, which is restart the\nstartup process after we've SIGQUIT'd it. 
That's because the careful\nmanipulation of StartupStatus in reaper (lines 2943ff in HEAD) is stomped\non by HandleChildCrash, which will just unconditionally reset it to\nSTARTUP_CRASHED (line 3507). So we end up shutting down the database\nafter all, which is not what the intention seems to be. Hence,\ncommit 45811be94 was still a few bricks shy of a load :-(.\n\nI propose the attached. I'm inclined to think that the risk/benefit\nof back-patching this is not very good, so I just want to stick it in\nHEAD, unless somebody can explain why dead_end children are likely to\ncrash in the field.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 24 Aug 2019 16:55:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assertion at postmaster start" }, { "msg_contents": "I wrote:\n> I propose the attached. I'm inclined to think that the risk/benefit\n> of back-patching this is not very good, so I just want to stick it in\n> HEAD, unless somebody can explain why dead_end children are likely to\n> crash in the field.\n\nPushed at ee3278239.\n\nI'm still curious as to the explanation for a dead_end child exiting\nwith code 15, but I have no way to pursue the point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Aug 2019 16:07:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assertion at postmaster start" }, { "msg_contents": "On 2019-Aug-26, Tom Lane wrote:\n\n> I wrote:\n> > I propose the attached. 
I'm inclined to think that the risk/benefit\n> > of back-patching this is not very good, so I just want to stick it in\n> > HEAD, unless somebody can explain why dead_end children are likely to\n> > crash in the field.\n> \n> Pushed at ee3278239.\n> \n> I'm still curious as to the explanation for a dead_end child exiting\n> with code 15, but I have no way to pursue the point.\n\nMany thanks for all the investigation and fix!\n\nSadly, I have *no* idea what could have happened that would have caused\na connection at that point (my start scripts don't do it). It is\npossible that I had a terminal running some shell loop on psql (\"watch\npsql -c something\" perhaps). But I'm sure I didn't notice that when I\nreported this, or I would have mentioned it. However, I have no idea\nwhy it would have died with code 15. From my notes of what I was doing\nthat day, I can't find any evidence that I would have had anything in\nshared_preload_libraries. (I don't have Frost's complete timestamped\nshell history, however.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 28 Aug 2019 10:43:58 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: assertion at postmaster start" } ]
[ { "msg_contents": "Hi hackers,\n\nI see evidence on this list that it's sort of a rite of passage\nto ask the flinfo->fn_extra question, and my time has come.\n\nSo please let me know if I seem to correctly understand the limits\non its use.\n\nI gather that various extensions use it to stash various things. But\n(I assume) ... they will only touch fn_extra in FmgrInfo structs that\npertain to *their own functions*. (Please say that's true?)\n\nIOW, it is assured that, if I am a language handler, when I am called\nto handle a function in my language, fn_extra is mine to use as I please ...\n\n... with the one big exception, if I am handling a function in my language\nthat returns a set, and I will use SFRM_ValuePerCall mode, I have to leave\nfn_extra NULL before SRF_FIRSTCALL_INIT(), which plants its own gunk there,\nand then I can stash my stuff in gunk->user_fctx for the duration of that\nSRF call.\n\nDoes that seem to catch the essentials?\n\nThanks,\n-Chap\n\n\np.s.: noticed in fmgr/README: \"Note that simple \"strict\" functions can\nignore both isnull and args[i].isnull, since they won't even get called\nwhen there are any TRUE values in args[].isnull.\"\n\nI get why a strict function can ignore args[i].isnull, but is the part\nabout ignoring isnull a mistake? A strict function can be passed all\nnon-null arguments and still return null if it wants to, right?\n\n\n", "msg_date": "Sat, 15 Jun 2019 21:04:04 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "The flinfo->fn_extra question, from me this time." }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> So please let me know if I seem to correctly understand the limits\n> on its use.\n\n> I gather that various extensions use it to stash various things. But\n> (I assume) ... they will only touch fn_extra in FmgrInfo structs that\n> pertain to *their own functions*. 
(Please say that's true?)\n\n> IOW, it is assured that, if I am a language handler, when I am called\n> to handle a function in my language, fn_extra is mine to use as I please ...\n\nYup.\n\n> ... with the one big exception, if I am handling a function in my language\n> that returns a set, and I will use SFRM_ValuePerCall mode, I have to leave\n> fn_extra NULL before SRF_FIRSTCALL_INIT(), which plants its own gunk there,\n> and then I can stash my stuff in gunk->user_fctx for the duration of that\n> SRF call.\n\nYup. (Of course, you don't have to use the SRF_FIRSTCALL_INIT\ninfrastructure.)\n\nKeep in mind that in most contexts, whatever you cache in fn_extra\nwill only be there for the life of the current query.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jun 2019 21:21:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "On 06/15/19 21:21, Tom Lane wrote:\n> Yup. (Of course, you don't have to use the SRF_FIRSTCALL_INIT\n> infrastructure.)\n\nThat had crossed my mind ... but it seems there's around 80 or 100\nlines of good stuff there that'd be a shame to duplicate. If only\ninit_MultiFuncCall() took an extra void ** argument, and the stock\nSRF_FIRSTCALL_INIT passed &(fcinfo->flinfo->fn_extra), seems like\nmost of it would be reusable. shutdown_MultiFuncCall would need to work\nslightly differently, and a caller who wanted to be different would need\na customized variant of SRF_PERCALL_SETUP, but that's two lines.\n\nCheers,\n-Chap\n\n\n", "msg_date": "Sat, 15 Jun 2019 21:46:55 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "On 06/15/19 21:46, Chapman Flack wrote:\n> On 06/15/19 21:21, Tom Lane wrote:\n>> Yup. 
(Of course, you don't have to use the SRF_FIRSTCALL_INIT\n>> infrastructure.)\n> \n> That had crossed my mind ... but it seems there's around 80 or 100\n> lines of good stuff there that'd be a shame to duplicate. If only\n\nI suppose that's only if I want to continue using SFRM_ValuePerCall mode.\n\nSFRM_Materialize mode could remove a good deal of complexity currently\nin PL/Java around managing memory contexts, SPI_connect, etc. through\nmultiple calls ... and I'd also have fn_extra all to myself.\n\nUntil now, I had assumed that SFRM_ValuePerCall mode might offer some\nbenefits, such as the possibility of pipelining certain queries and not\nbuilding up a whole tuplestore in advance.\n\nBut looking in the code, I'm getting the impression that those\nbenefits are only theoretical future ones, as ExecMakeTableFunctionResult\nimplements SFRM_ValuePerCall mode by ... repeatedly calling the function\nto build up a whole tuplestore in advance.\n\nAm I right about that? Are there other sites from which a SRF might be\ncalled that I haven't found, where ValuePerCall mode might actually\nsupport some form of pipelining? Are there actual cases where allowedModes\nmight not contain SFRM_Materialize?\n\nOr is the ValuePerCall variant currently there just to support possible\nfuture such cases, none of which exist at the moment?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 21 Jul 2019 16:44:37 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." 
}, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Until now, I had assumed that SFRM_ValuePerCall mode might offer some\n> benefits, such as the possibility of pipelining certain queries and not\n> building up a whole tuplestore in advance.\n\n> But looking in the code, I'm getting the impression that those\n> benefits are only theoretical future ones, as ExecMakeTableFunctionResult\n> implements SFRM_ValuePerCall mode by ... repeatedly calling the function\n> to build up a whole tuplestore in advance.\n\nYes, that's the case for a SRF in FROM. A SRF in the targetlist\nactually does get the chance to pipeline, if it implements ValuePerCall.\n\nThe FROM case could be improved perhaps, if somebody wanted to put\ntime into it. You'd still need to be prepared to build a tuplestore,\nin case of rescan or backwards fetch; but in principle you could return\nrows immediately while stashing them aside in a tuplestore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jul 2019 17:54:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "On 21 Jul 2019, at 22:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Chapman Flack <chap@anastigmatix.net> writes:\n>> Until now, I had assumed that SFRM_ValuePerCall mode might offer some\n>> benefits, such as the possibility of pipelining certain queries and not\n>> building up a whole tuplestore in advance.\n> \n>> But looking in the code, I'm getting the impression that those\n>> benefits are only theoretical future ones, as ExecMakeTableFunctionResult\n>> implements SFRM_ValuePerCall mode by ... repeatedly calling the function\n>> to build up a whole tuplestore in advance.\n> \n> Yes, that's the case for a SRF in FROM. 
A SRF in the targetlist\n> actually does get the chance to pipeline, if it implements ValuePerCall.\n> \n> The FROM case could be improved perhaps, if somebody wanted to put\n> time into it.\n\nWhile looking at whether REFCURSOR output could be pipelined into the executor [1], I’ve stumbled upon the same.\n\nBy any chance, do either of you know if there are initiatives to make the changes mentioned?\n\n> You'd still need to be prepared to build a tuplestore,\n> in case of rescan or backwards fetch; but […]\n\nI’m also interested in your comment here. If the function was STABLE, could not the function scan simply be restarted? (Rather than needing to create the tuplestore for all cases.)\n\nI guess perhaps the backwards scan is where it falls down though...\n\n> […] in principle you could return\n> rows immediately while stashing them aside in a tuplestore.\n\nDoes the planner have any view on this? When I first saw what was going on, I presumed the planner had decided the cost of multiple function scans was greater than the cost of materialising it in a temporary store.\n\nIt occurs to me that, if we made a switch towards pipelining the function scan results directly out, then we might lose efficiency where there are a significant number of scans and/or the function cost is high. Is that why you were suggesting to stash them aside as well?\n\ndenty.\n\n[1] https://www.postgresql.org/message-id/B2AFCAB5-FACD-44BF-963F-7DD2735FAB5D%40QQdd.eu\n\n", "msg_date": "Sun, 22 Sep 2019 11:40:29 +0100", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." 
}, { "msg_contents": "Dent John <denty@QQdd.eu> writes:\n> On 21 Jul 2019, at 22:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Chapman Flack <chap@anastigmatix.net> writes:\n>>> But looking in the code, I'm getting the impression that those\n>>> benefits are only theoretical future ones, as ExecMakeTableFunctionResult\n>>> implements SFRM_ValuePerCall mode by ... repeatedly calling the function\n>>> to build up a whole tuplestore in advance.\n\n>> Yes, that's the case for a SRF in FROM. A SRF in the targetlist\n>> actually does get the chance to pipeline, if it implements ValuePerCall.\n>> The FROM case could be improved perhaps, if somebody wanted to put\n>> time into it.\n\n> By any chance, do either of you know if there are initiatives to make the changes mentioned?\n\nI don't know of anybody working on it.\n\n>> You'd still need to be prepared to build a tuplestore,\n>> in case of rescan or backwards fetch; but […]\n\n> I’m also interested in your comment here. If the function was STABLE, could not the function scan simply be restarted? (Rather than needing to create the tuplestore for all cases.)\n> I guess perhaps the backwards scan is where it falls down though...\n\nMy point was that you can't simply remove the tuplestore-building code\npath. The exact boundary conditions for that might be negotiable.\nBut I'd be very dubious of an assumption that re-running the function\nwould be cheaper than building a tuplestore, regardless of whether it's\nsafe.\n\n> Does the planner have any view on this?\n\ncost_functionscan and cost_rescan would likely need some adjustment if\npossible. However, I'm not sure that the planner has any way to know\nwhether a given SRF will support ValuePerCall or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Sep 2019 11:01:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." 
}, { "msg_contents": "On Sun, Jul 21, 2019 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The FROM case could be improved perhaps, if somebody wanted to put\n> time into it. You'd still need to be prepared to build a tuplestore,\n> in case of rescan or backwards fetch; but in principle you could return\n> rows immediately while stashing them aside in a tuplestore.\n\nBut you could skip it if you could prove that no rescans or backward\nfetches are possible for a particular node, something that we also\nwant for Gather, as discussed not long ago.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Sep 2019 14:09:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "> On 22 Sep 2019, at 16:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nHi Tom,\n\n> I don't know of anybody working on it.\n\nOkay. I had a look at this. I tried to apply Andre’s patch [1] from some time ago, but that turned out not so easy. I guess the code has moved on since. So I’ve attempted to re-invent the same spirit, stealing from his patch, and from how the tSRF code does things. The patch isn’t final, but it demonstrates a concept.\n\nHowever, given your comments below, I wonder if you might comment on the approach before I go further?\n\n(Patch is presently still against 12beta2.)\n\n>>> You'd still need to be prepared to build a tuplestore,\n>>> in case of rescan or backwards fetch; but […]\n\nI do recognise this. 
The patch teaches ExecMaterializesOutput() and ExecSupportsBackwardScan() that T_FunctionScan nodes don't materialise their output.\n\n(Actually, Andre’s patch did the educating of ExecMaterializesOutput() and ExecSupportsBackwardScan() — it’s not my invention.)\n\nI haven’t worked out how to easily demonstrate the backward scan case, but joins (which presumably are the typical cause of rescan) now yield an intermediate Materialize node.\n\npostgres=# explain (analyze, buffers) select * from unnest (array_fill ('scanner'::text, array[10])) t1, unnest (array_fill ('dummy'::text, array[10000000])) t2 limit 100;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.01..1.36 rows=100 width=64) (actual time=0.009..0.067 rows=100 loops=1)\n -> Nested Loop (cost=0.01..1350000.13 rows=100000000 width=64) (actual time=0.008..0.049 rows=100 loops=1)\n -> Function Scan on unnest t2 (cost=0.00..100000.00 rows=10000000 width=32) (actual time=0.003..0.006 rows=10 loops=1)\n -> Materialize (cost=0.00..0.15 rows=10 width=32) (actual time=0.000..0.002 rows=10 loops=10)\n -> Function Scan on unnest t1 (cost=0.00..0.10 rows=10 width=32) (actual time=0.001..0.004 rows=10 loops=1)\n Planning Time: 127.875 ms\n Execution Time: 0.102 ms\n(7 rows)\n\n> My point was that you can't simply remove the tuplestore-building code\n> path. The exact boundary conditions for that might be negotiable.\n> But I'd be very dubious of an assumption that re-running the function\n> would be cheaper than building a tuplestore, regardless of whether it's\n> safe.\n\nUnderstood, and I agree. I think it’s preferable to allow the planner control over when to explicitly materialise.\n\nBut if I’m not wrong, at present, the planner doesn’t really trade-off the cost of rescan versus materialisation, but instead adopts a simple heuristic of materialising one or other side during a join. 
We can see this in the plans if the unnest()s are moved into the target list and buried in a subquery. For example:\n\npostgres=# explain (analyze, buffers) select * from (select unnest (array_fill ('scanner'::text, array[10]))) t1, (select unnest (array_fill ('dummy'::text, array[10000000]))) t2 limit 100;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.40 rows=100 width=64) (actual time=0.011..0.106 rows=100 loops=1)\n -> Nested Loop (cost=0.00..1400000.21 rows=100000000 width=64) (actual time=0.010..0.081 rows=100 loops=1)\n -> ProjectSet (cost=0.00..50000.02 rows=10000000 width=32) (actual time=0.004..0.024 rows=10 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)\n -> Materialize (cost=0.00..0.22 rows=10 width=32) (actual time=0.001..0.002 rows=10 loops=10)\n -> ProjectSet (cost=0.00..0.07 rows=10 width=32) (actual time=0.001..0.004 rows=10 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)\n Planning Time: 180.482 ms\n Execution Time: 0.148 ms\n(9 rows)\n\nI am tempted to stop short of educating the planner about the possibility to re-scan (thus dropping the materialise node) during a join. It seems feasible, and sometimes advantageous. (Perhaps if the join quals would cause a huge amount of the output to be filtered??) But it also seems better to treat it as an entirely separate issue.\n\n> cost_functionscan and cost_rescan would likely need some adjustment if\n> possible.\n\nI looked at cost_functionscan(), but I think it is already of the view that a function can pipeline. It computes a startup_cost and a run_cost, where run_cost is the per-tuple cost * num_rows. With this understanding, it is actually wrong given the current materialisation-always behaviour. 
I think this means I don’t need to make any fundamental changes in order to correctly cost the new behaviour.\n\n> However, I'm not sure that the planner has any way to know\n> whether a given SRF will support ValuePerCall or not.\n\nYes. There is a flaw. But with the costing support function interface, it’s possible to supply costs that correctly relate to the SRF’s abilities.\n\nI guess there can be a case where the SRF supports ValuePerCall, and supplies costs accordingly, but at execution time, decides to not to use it. That seems a curious situation, but it will, at worst, cost us a bit more buffer space.\n\nIn the opposite case, where the SRF can’t support ValuePerCall, the risk is that the planner has decided it wants to interject a Materialize node, and the result will be buffer-to-buffer copying. If the function has a costing support function, it should all be costed correctly, but it’s obviously not ideal. Currently, my patch doesn’t do anything about this case. My plan would be to allow the Materialize node to be supplied with a tuplestore from the FunctionScan node at execution time. 
I guess this optimisation would similarly help non-ValuePerCall tSRFs.\n\nAfter all this, I’m wondering how you view the proposal?\n\nFor sake of comparison, 12beta1 achieves the following plans:\n\npostgres=# create or replace function test1() returns setof record language plpgsql as $$ begin return query (select 'a', generate_series (1, 1e6)); end; $$; -- using plpgsql because it can’t pipeline\nCREATE FUNCTION\npostgres=# explain (verbose, analyse, buffers) select key, count (value), sum (value) from test1() as (key text, value numeric) group by key;\n...\n Planning Time: 0.068 ms\n Execution Time: 589.651 ms\n\npostgres=# explain (verbose, analyse, buffers) select * from test1() as (key text, value numeric) limit 50;\n...\n Planning Time: 0.059 ms\n Execution Time: 348.334 ms\n\npostgres=# explain (analyze, buffers) select count (a.a), sum (a.a) from unnest (array_fill (1::numeric, array[10000000])) a;\n...\n Planning Time: 165.502 ms\n Execution Time: 5629.094 ms\n\npostgres=# explain (analyze, buffers) select * from unnest (array_fill (1::numeric, array[10000000])) limit 50;\n...\n Planning Time: 110.952 ms\n Execution Time: 1080.609 ms\n\nVersus 12beta2+patch, which seem favourable in the main, at least for these pathological cases:\n\npostgres=# explain (verbose, analyse, buffers) select key, count (value), sum (value) from test1() as (key text, value numeric) group by key;\n...\n Planning Time: 0.068 ms\n Execution Time: 591.749 ms\n\npostgres=# explain (verbose, analyse, buffers) select * from test1() as (key text, value numeric) limit 50;\n...\n Planning Time: 0.051 ms\n Execution Time: 289.820 ms\n\npostgres=# explain (analyze, buffers) select count (a.a), sum (a.a) from unnest (array_fill (1::numeric, array[10000000])) a;\n...\n Planning Time: 169.260 ms\n Execution Time: 4759.781 ms\n\npostgres=# explain (analyze, buffers) select * from unnest (array_fill (1::numeric, array[10000000])) limit 50;\n...\n Planning Time: 163.374 ms\n Execution Time: 0.051 
ms\ndenty.\n\n[1] https://www.postgresql.org/message-id/20160822214023.aaxz5l4igypowyri%40alap3.anarazel.de <https://www.postgresql.org/message-id/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de>", "msg_date": "Sat, 5 Oct 2019 11:27:49 +0100", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "Hi,\n\nTurns out — to my embarrassment — that pretty much all of the regression tests failed with my patch. No idea if anyone spotted that and withheld reply in revenge, but I wouldn’t blame if you did!\n\nI have spent a bit more time on it. The attached patch is a much better show, though there are still a few regressions and undoubtedly it’s still rough.\n\n(Attached patch is against 12.0.)\n\nAs was perhaps predictable, some of the regression tests do indeed break in the case of rescan. To cite the specific class of fail, it’s this:\n\nSELECT * FROM (VALUES (1),(2),(3)) v(r), ROWS FROM( rngfunc_sql(11,11), rngfunc_mat(10+r,13) );\n r | i | s | i | s \n ---+----+---+----+—\n 1 | 11 | 1 | 11 | 1\n 1 | | | 12 | 2\n 1 | | | 13 | 3\n- 2 | 11 | 1 | 12 | 4\n+ 2 | 11 | 2 | 12 | 4\n 2 | | | 13 | 5\n- 3 | 11 | 1 | 13 | 6\n+ 3 | 11 | 3 | 13 | 6\n (6 rows)\n\nThe reason for the change is that ’s' comes from rngfunc_mat(), which computes s as nextval(). The patch currently prefers to re-execute the function in place of materialising it into a tuplestore.\n\nTom suggested not dropping the tuplestore creation logic. I can’t fathom a way of avoiding change for folk that have gotten used to the current behaviour without doing that. So I’m tempted to pipeline the rows back from a function (if it returns ValuePerCall), and also record it in a tuplestore, just in case rescan happens. 
There’s still wastage in this approach, but it would save the current behaviour, while still enabling the early abort of ValuePerCall SRFs at relatively low cost, which is certainly one of my goals.\n\nI’d welcome opinion on whether there are downsides to that approach, as I might move to integrate that next.\n\nBut I would also like to kick around ideas for how to avoid entirely the tuplestore.\n\nEarlier, I suggested that we might make the decision logic prefer to materialise a tuplestore for VOLATILE functions, and prefer to pipeline directly from STABLE (and IMMUTABLE) functions. The docs on volatility categories describe that the optimiser will evaluate a VOLATILE function for every row it is needed, whereas it might cache STABLE and IMMUTABLE with greater aggression. It’s basically the polar opposite of what I want to achieve.\n\nIt is arguably also in conflict with current behaviour. I think we should make the docs clearer about that.\n\nSo, on second thoughts, I don’t think overloading the meaning of STABLE, et al., is the right thing to do. I wonder if we could invent a new modifier to CREATE FUNCTION, perhaps “PIPELINED”, which would simply declare a function's ability and preference for ValuePerCall mode.\n\nOr perhaps modify the ROWS FROM extension, and adopt WITH’s [ NOT ] MATERIALIZED clause. For example, the following would achieve the above proposed behaviour:\n\nROWS FROM( rngfunc_sql(11,11) MATERIALIZED, rngfunc_mat(10+r,13) MATERIALIZED ) \n\nOf course, NOT MATERIALIZED would achieve ValuePerCall mode, and omit materialisation. I guess MATERIALIZED would have to be the default.\n\nI wonder if another alternative would be to decide materialization based on what the outer plan includes. I guess we can tell if we’re part of a join, or if the plan requires the ability to scan backwards. Could that work?\n\ndenty.\n", "msg_date": "Sat, 2 Nov 2019 22:42:56 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "(And here’s aforementioned attachment… doh.)", "msg_date": "Sun, 3 Nov 2019 11:51:14 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "Hi\n\nne 3. 11. 2019 v 12:51 odesílatel Dent John <denty@qqdd.eu> napsal:\n\n> (And here’s aforementioned attachment… doh.)\n>\n\ncan be nice, if patch has some regress tests - it is good for memory\nrefreshing what is target of patch.\n\nRegards\n\nPavel\n\n", "msg_date": "Sun, 3 Nov 2019 14:33:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "> On 3 Nov 2019, at 13:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> can be nice, if patch has some regress tests - it is good for memory refreshing what is target of patch.\n\nWith a suitably small work_mem constraint, it is possible to show the absence of buffers resulting from the tuplestore. It’ll need some commentary explaining what is being looked for, and why. But it’s a good idea.\n\nI’ll take a look.\n\ndenty.\n\n", "msg_date": "Sun, 3 Nov 2019 15:53:26 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": ">> On 3 Nov 2019, at 13:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> \n>> can be nice, if patch has some regress tests - it is good for memory refreshing what is target of patch.\n\nI’ve updated the patch, and added some regression tests.\n\ndenty.", "msg_date": "Sat, 9 Nov 2019 10:51:58 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "Hi folks,\n\nI’ve updated the patch, addressed the rescan issue, and restructured the tests.\n\nI’ve taken a slightly different approach this time, re-using the (already pipeline-supporting) machinery of the Materialize node, and extended it to allow an SFRM_Materialize SRF to donate the tuplestore it returns. I feel this yields a better code structure, as well as getting more reuse.\n\nIt also opens up more informative and transparent EXPLAIN output.
For example, the following shows Materialize explicitly, whereas previously a FunctionScan would have silently materialised the result of both generate_series() invocations.\n\npostgres=# explain (analyze, costs off, timing off, summary off) \nselect * from generate_series(11,15) r, generate_series(11,14) s;\n QUERY PLAN \n------------------------------------------------------------------\n Nested Loop (actual rows=20 loops=1)\n -> Function Scan on generate_series s (actual rows=4 loops=1)\n -> SRF Scan (actual rows=4 loops=1)\n SFRM: ValuePerCall\n -> Function Scan on generate_series r (actual rows=5 loops=4)\n -> Materialize (actual rows=5 loops=4)\n -> SRF Scan (actual rows=5 loops=1)\n SFRM: ValuePerCall\n\nI also thought again about when to materialise, and particularly Robert’s suggestion[1] (which is in also this thread, but I didn’t originally understand the implication of). If I’m not wrong, between occasional explicit use of a Materialize node by the planner, and more careful observation of EXEC_FLAG_REWIND and EXEC_FLAG_BACKWARD in FunctionScan’s initialisation, we do actually have what is needed to pipeline without materialisation in at least some cases. There is not a mechanism to preferentially re-execute a SRF rather than materialise it, but because materialisation only seems to be necessary in the face of a join or a scrollable cursor, I’m not considering much of a problem anymore.\n\nThe EXPLAIN output needs a bit of work, costing is still a sore point, and it’s not quite as straight-line performant as my first attempt, as well as there undoubtedly being some unanticipated breakages and rough edges.\n\nBut the concept seems to work roughly as I intended (i.e., allowing FunctionScan to pipeline). 
Unless there are any objections, I will push it into the January commit fest for progressing.\n\n(Revised patch attached.)\n\ndenty.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmobw%2BPhNVciLesd-mQQ4As9D8L2-F7AiKqv465RhDkPf2Q%40mail.gmail.com <https://www.postgresql.org/message-id/CA+Tgmobw+PhNVciLesd-mQQ4As9D8L2-F7AiKqv465RhDkPf2Q@mail.gmail.com>", "msg_date": "Sun, 8 Dec 2019 20:33:02 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "Dent John <denty@QQdd.eu> writes:\n> I’ve updated the patch, addressed the rescan issue, and restructured the tests.\n> [ pipeline-functionscan-v4.patch ]\n\nFWIW, this patch doesn't apply to HEAD anymore.
The cfbot\n> has failed to notice because it is still testing the v3 patch.\n> Apparently the formatting of this email is weird enough that\n> neither the archives nor the CF app notice the embedded patch.\n> \n> Please fix and repost.\n> \n> regards, tom lane\n\n\n", "msg_date": "Tue, 28 Jan 2020 08:58:51 +0000", "msg_from": "Dent John <denty@qqdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "On Tue, Jan 28, 2020 at 9:59 PM Dent John <denty@qqdd.eu> wrote:\n> I’ll look at it. Probably won’t be able to until after the commitfest closes though.\n\n(We've seen that hidden attachment problem from Apple Mail before,\ndiscussion of the MIME details in the archives somewhere. I have no\nidea what GUI interaction causes that, but most Apple Mail attachments\nseem to be fine.)\n\nHere's a quick rebase in case it helps. I mostly applied fine (see\nbelow). The conflicts were just Makefile and expected output files,\nwhich I tried to do the obvious thing with. I had to add a #include\n\"access/tupdesc.h\" to plannodes.h to make something compile (because\nit uses TupleDesc). 
Passes check-world here.\n\n$ gpatch --merge -p1 < ~/pipeline-functionscan-v4.patch\npatching file src/backend/access/common/tupdesc.c\npatching file src/backend/commands/explain.c\npatching file src/backend/executor/Makefile\nHunk #1 NOT MERGED at 19-29.\npatching file src/backend/executor/execAmi.c\npatching file src/backend/executor/execProcnode.c\npatching file src/backend/executor/execSRF.c\npatching file src/backend/executor/nodeFunctionscan.c\nHunk #1 merged at 4-20.\npatching file src/backend/executor/nodeMaterial.c\npatching file src/backend/executor/nodeNestloop.c\npatching file src/backend/executor/nodeProjectSet.c\npatching file src/backend/executor/nodeSRFScan.c\npatching file src/include/access/tupdesc.h\npatching file src/include/executor/executor.h\npatching file src/include/executor/nodeFunctionscan.h\npatching file src/include/executor/nodeMaterial.h\npatching file src/include/executor/nodeSRFScan.h\npatching file src/include/nodes/execnodes.h\npatching file src/include/nodes/nodes.h\npatching file src/include/nodes/plannodes.h\npatching file src/test/regress/expected/aggregates.out\npatching file src/test/regress/expected/groupingsets.out\npatching file src/test/regress/expected/inherit.out\npatching file src/test/regress/expected/join.out\nHunk #1 NOT MERGED at 3078-3087.\nHunk #3 NOT MERGED at 3111-3120, merged at 3127.\npatching file src/test/regress/expected/misc_functions.out\npatching file src/test/regress/expected/pg_lsn.out\npatching file src/test/regress/expected/plpgsql.out\npatching file src/test/regress/expected/rangefuncs.out\npatching file src/test/regress/expected/union.out\npatching file src/test/regress/sql/plpgsql.sql\npatching file src/test/regress/sql/rangefuncs.sql", "msg_date": "Tue, 28 Jan 2020 22:56:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." 
}, { "msg_contents": "> On 28 Jan 2020, at 09:56, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> ([…] I have no\n> idea what GUI interaction causes that, but most Apple Mail attachments\n> seem to be fine.)\n\nI gathered from the other thread that posting plain text seems to attach the patches in a way that’s more acceptable. Seems to work, but doesn’t explain exactly what the issue is, and I’m pretty sure I’ve not always had to go via the “make plain text” menu item before.\n\n> Here's a quick rebase in case it helps. I mostly applied fine (see\n> below). The conflicts were just Makefile and expected output files,\n> which I tried to do the obvious thing with. I had to add a #include\n> \"access/tupdesc.h\" to plannodes.h to make something compile (because\n> it uses TupleDesc). Passes check-world here.\n\nThanks a lot for doing that. I tried it against 530609a, and indeed it seems to work.\n\nI’m also watching the polymorphic table functions light thread[0], which at first glance would also seems to make useful SRF RECORD-returning functions when employed in the SELECT list. It’s not doing what this patch does, but people might happy enough to transform their queries into SELECT … FROM (SELECT fn(…)) to achieve pipelining, at least in the short term.\n\n[0] https://www.postgresql.org/message-id/46a1cb32-e9c6-e7a8-f3c0-78e6b3f70cfe@2ndquadrant.com\n\ndenty.\n\n", "msg_date": "Sat, 1 Feb 2020 09:09:32 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "The cfbot is still not happy with this, because you're ignoring the\nproject style rule against C99-like mixing of code and declarations.\nI went to fix that, and soon found that the code doesn't compile,\nmuch less pass regression tests, with --enable-cassert. 
That's\nreally a serious error on your part: basically, nobody should ever\ndo backend code development in non-cassert builds, because there is\ntoo much useful error checking you forego that way. (Performance\ntesting is a different matter ... but you need to make the code\nwork before you worry about speed.)\n\nAnyway, attached is a marginal update that gets this to the point\nwhere it should compile in the cfbot, but it'll still fail regression\ntests there. (At least on the Linux side. I guess the cfbot's\nWindows builds are sans cassert, which seems like an odd choice.)\n\nI didn't want to spend any more effort on it than that, because I'm\nnot really on board with this line of attack. This patch seems\nawfully invasive for what it's accomplishing, both at the code level\nand in terms of what users will see in EXPLAIN. No, I don't think\nthat adding additional \"SRF Scan\" nodes below FunctionScan is an\nimprovement, nor do I like your repurposing/abusing of Materialize.\nIt might be okay if you were just using Materialize as-is, but if\nit's sort-of-materialize-but-not-always, I don't think that's going\nto make anyone less confused.\n\nMore locally, this business with creating new \"plan nodes\" below the\nFunctionScan at executor startup is a real abuse of a whole lot of stuff,\nand I suspect that it's not unrelated to the assertion failures I'm\nseeing. Don't do that. If you want to build some data structures at\nexecutor start, fine, but they're not plans and shouldn't be mislabeled as\nthat. 
On the other hand, if they do need to be plan nodes, they should be\nmade by the planner (which in turn would require a lot of infrastructure\nyou haven't built, eg copyfuncs/outfuncs/readfuncs/setrefs/...).\n\nThe v3 patch seemed closer to the sort of thing I was expecting\nto get out of this (though I've not read it in any detail).\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 12 Mar 2020 14:51:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "On Fri, Mar 13, 2020 at 7:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ... (At least on the Linux side. I guess the cfbot's\n> Windows builds are sans cassert, which seems like an odd choice.)\n\nI tried turning that on by adding $config{asserts} = 1 in the build\nscript and adding some scripting to dump all relevant logs on\nappveyor. It had the desired effect, but I had some trouble getting\nany useful information out of it. Somehow the FailedAssertion message\nis not making it to the log, which seems to be the bare minimum you'd\nneed for this to be useful, and ideally you'd also want a backtrace.\nI'll look into that next week with the help of a Windows-enabled\ncolleague.\n\n\n", "msg_date": "Fri, 13 Mar 2020 17:28:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." }, { "msg_contents": "> On 12 Mar 2020, at 18:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> […]\n> \n> I didn't want to spend any more effort on it than that, because I'm\n> not really on board with this line of attack.\n\nAppreciate that. It was about the approach that I was most keen to get feedback upon.\n\n> This patch seems\n> awfully invasive for what it's accomplishing, both at the code level\n> and in terms of what users will see in EXPLAIN. 
No, I don't think\n> that adding additional \"SRF Scan\" nodes below FunctionScan is an\n> improvement, nor do I like your repurposing/abusing of Materialize.\n> It might be okay if you were just using Materialize as-is, but if\n> it's sort-of-materialize-but-not-always, I don't think that's going\n> to make anyone less confused.\n\nOkay. Makes sense.\n\nI wonder whether you think it's valuable to retain in the EXPLAIN output the mode the SRF operated in?\n\nThat information is not available to end users, yet it is important to understand when trying to create a pipeline-able plan.\n\n> More locally, this business with creating new \"plan nodes\" below the\n> FunctionScan at executor startup is a real abuse of a whole lot of stuff,\n> and I suspect that it's not unrelated to the assertion failures I'm\n> seeing. Don't do that. If you want to build some data structures at\n> executor start, fine, but they're not plans and shouldn't be mislabeled as\n> that.\n\nI felt that FunctionScan was duplicating a bunch of stuff that the Materialize node could be doing for it. But in the end, I agree. Actually making re-use of Materialize turned out quite invasive.\n\n> On the other hand, if they do need to be plan nodes, they should be\n> made by the planner (which in turn would require a lot of infrastructure\n> you haven't built, eg copyfuncs/outfuncs/readfuncs/setrefs/...).\n> \n> The v3 patch seemed closer to the sort of thing I was expecting\n> to get out of this (though I've not read it in any detail).\n\nI did a bit more exploration down the route of pushing it into the planner. I figured perhaps some of the complexities would shake out by approaching it at the planner level, but I learned enough along the way to realise that it is a long journey.\n\nI’ll dust off the v3 approach and resubmit. 
While I’m doing that, I'll pull it back from the CF.\n\n\n\n", "msg_date": "Mon, 23 Mar 2020 00:13:17 +0000", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: The flinfo->fn_extra question, from me this time." } ]
[ { "msg_contents": "I encountered the following segfault when running against a PG 12 beta1\n\nduring a analyze against a table.\n\n\n#0 0x000056008ad0c826 in update_attstats (vacattrstats=0x0, \nnatts=2139062143, inh=false,\n relid=<error reading variable: Cannot access memory at address \n0x40>) at analyze.c:572\n#1 do_analyze_rel (onerel=onerel@entry=0x7f0bc59a7a38, \nparams=params@entry=0x7ffe06aeabb0, va_cols=va_cols@entry=0x0,\n acquirefunc=<optimized out>, relpages=8, inh=inh@entry=false, \nin_outer_xact=false, elevel=13) at analyze.c:572\n#2 0x000056008ad0d2e0 in analyze_rel (relid=<optimized out>, \nrelation=<optimized out>,\n params=params@entry=0x7ffe06aeabb0, va_cols=0x0, \nin_outer_xact=<optimized out>, bstrategy=<optimized out>)\n at analyze.c:260\n#3 0x000056008ad81300 in vacuum (relations=0x56008c4d1110, \nparams=params@entry=0x7ffe06aeabb0,\n bstrategy=<optimized out>, bstrategy@entry=0x0, \nisTopLevel=isTopLevel@entry=true) at vacuum.c:413\n#4 0x000056008ad8197f in ExecVacuum \n(pstate=pstate@entry=0x56008c5c2688, vacstmt=vacstmt@entry=0x56008c3e0428,\n isTopLevel=isTopLevel@entry=true) at vacuum.c:199\n#5 0x000056008af0133b in standard_ProcessUtility (pstmt=0x56008c982e50,\n queryString=0x56008c3df368 \"select \n\\\"_disorder_replica\\\".finishTableAfterCopy(3); analyze \n\\\"disorder\\\".\\\"do_inventory\\\"; \", context=<optimized out>, params=0x0, \nqueryEnv=0x0, dest=0x56008c9831d8, completionTag=0x7ffe06aeaef0 \"\")\n at utility.c:670\n#6 0x000056008aefe112 in PortalRunUtility (portal=0x56008c4515f8, \npstmt=0x56008c982e50, isTopLevel=<optimized out>,\n setHoldSnapshot=<optimized out>, dest=<optimized out>, \ncompletionTag=0x7ffe06aeaef0 \"\") at pquery.c:1175\n#7 0x000056008aefec91 in PortalRunMulti \n(portal=portal@entry=0x56008c4515f8, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, \ndest=dest@entry=0x56008c9831d8, altdest=altdest@entry=0x56008c9831d8,\n completionTag=completionTag@entry=0x7ffe06aeaef0 
\"\") at pquery.c:1328\n#8 0x000056008aeff9e9 in PortalRun (portal=portal@entry=0x56008c4515f8, \ncount=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, \ndest=dest@entry=0x56008c9831d8,\n altdest=altdest@entry=0x56008c9831d8, completionTag=0x7ffe06aeaef0 \n\"\") at pquery.c:796\n#9 0x000056008aefb6bb in exec_simple_query (\n query_string=0x56008c3df368 \"select \n\\\"_disorder_replica\\\".finishTableAfterCopy(3); analyze \n\\\"disorder\\\".\\\"do_inventory\\\"; \") at postgres.c:1215\n\n\nWith master from today(aa087ec64f703a52f3c48c) I still get segfaults \nunder do_analyze_rel\n\ncompute_index_stats (onerel=0x7f84bf1436a8, col_context=0x55a5d3d56640, \nnumrows=<optimized out>, rows=0x55a5d4039520,\n nindexes=<optimized out>, indexdata=0x3ff0000000000000, \ntotalrows=500) at analyze.c:711\n#1 do_analyze_rel (onerel=onerel@entry=0x7f84bf1436a8, \nparams=0x7ffdde2b5c40, params@entry=0x3ff0000000000000,\n va_cols=va_cols@entry=0x0, acquirefunc=<optimized out>, \nrelpages=11, inh=inh@entry=false, in_outer_xact=true,\n elevel=13) at analyze.c:552\n\n\n\n\n", "msg_date": "Sat, 15 Jun 2019 22:05:42 -0400", "msg_from": "Steve Singer <steve@ssinger.info>", "msg_from_op": true, "msg_subject": "PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Steve Singer <steve@ssinger.info> writes:\n> I encountered the following segfault when running against a PG 12 beta1\n> during a analyze against a table.\n\nNobody else has reported this, so you're going to have to work on\nproducing a self-contained test case, or else debugging it yourself.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jun 2019 22:18:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "On 6/15/19 10:18 PM, Tom Lane wrote:\n> Steve Singer <steve@ssinger.info> writes:\n>> I encountered the following segfault when running against a PG 12 beta1\n>> 
during a analyze against a table.\n> Nobody else has reported this, so you're going to have to work on\n> producing a self-contained test case, or else debugging it yourself.\n>\n> \t\t\tregards, tom lane\n>\n>\n>\nThe attached patch fixes the issue.\n\n\nSteve", "msg_date": "Mon, 17 Jun 2019 21:46:02 -0400", "msg_from": "Steve Singer <steve@ssinger.info>", "msg_from_op": true, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Steve Singer <steve@ssinger.info> writes:\n> The attached patch fixes the issue.\n\nHmm, that's a pretty obvious mistake :-( but after some fooling around\nI've not been able to cause a crash with it. I wonder what test case\nyou were using, on what platform?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2019 00:32:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Hi,\n\nOn 2019-06-18 00:32:17 -0400, Tom Lane wrote:\n> Steve Singer <steve@ssinger.info> writes:\n> > The attached patch fixes the issue.\n>\n> Hmm, that's a pretty obvious mistake :-( but after some fooling around\n> I've not been able to cause a crash with it. I wonder what test case\n> you were using, on what platform?\n\nI suspect that's because the bug is \"only\" in the\nHEAPTUPLE_DELETE_IN_PROGRESS case. And it's \"harmless\" as far as\ncrashing goes in the !TransactionIdIsCurrentTransactionId() case,\nbecause as the tuple is sampled, we just return. And then there still\nneeds to be an actually dead row afterwards, to actually trigger\ndereferencing the modified deadrows. 
And then acquire_sample_rows()'s\ndeadrows actually needs to point to something that causes crashes when\nmodified.\n\nI can definitely get it to do a \"wild\" pointer write:\n\nBreakpoint 2, heapam_scan_analyze_next_tuple (scan=0x55f8fcb92728, OldestXmin=512, liverows=0x7fff56159850,\n deadrows=0x7fff56159f50, slot=0x55f8fcb92b40) at /home/andres/src/postgresql/src/backend/access/heap/heapam_handler.c:1061\n1061\t\t\t\t\t*deadrows += 1;\n(gdb) p deadrows\n$9 = (double *) 0x7fff56159f50\n(gdb) up\n#1 0x000055f8fad922c5 in table_scan_analyze_next_tuple (scan=0x55f8fcb92728, OldestXmin=512, liverows=0x7fff56159850,\n deadrows=0x7fff56159848, slot=0x55f8fcb92b40) at /home/andres/src/postgresql/src/include/access/tableam.h:1467\n1467\t\treturn scan->rs_rd->rd_tableam->scan_analyze_next_tuple(scan, OldestXmin,\n(gdb) p deadrows\n$10 = (double *) 0x7fff56159848\n\nmaking a question of a crash just a question of the exact stack layout\nand the number of deleted tuples.\n\nWill fix tomorrow morning.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2019 00:23:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "On 6/18/19 12:32 AM, Tom Lane wrote:\n> Steve Singer <steve@ssinger.info> writes:\n>> The attached patch fixes the issue.\n> Hmm, that's a pretty obvious mistake :-( but after some fooling around\n> I've not been able to cause a crash with it. I wonder what test case\n> you were using, on what platform?\n>\n> \t\t\tregards, tom lane\n>\n>\n\nI was running the slony regression tests.  The crash happened when it \ntries to replicate a particular table that already has data in it on the \nreplica.  
It doesn't happen with every table and I haven't been able to \nreplicate the crash in a self-contained test by manually doing similar \nsteps to just that table with psql.\n\nThis is on x64.\n\n\n\n\n", "msg_date": "Tue, 18 Jun 2019 07:57:36 -0400", "msg_from": "Steve Singer <steve@ssinger.info>", "msg_from_op": true, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-06-18 00:32:17 -0400, Tom Lane wrote:\n>> Hmm, that's a pretty obvious mistake :-( but after some fooling around\n>> I've not been able to cause a crash with it. I wonder what test case\n>> you were using, on what platform?\n\n> I suspect that's because the bug is \"only\" in the\n> HEAPTUPLE_DELETE_IN_PROGRESS case. And it's \"harmless\" as far as\n> crashing goes in the !TransactionIdIsCurrentTransactionId() case,\n> because as the tuple is sampled, we just return. And then there still\n> needs to be an actually dead row afterwards, to actually trigger\n> dereferencing the modified deadrows.\n\nRight, I'd come to the same conclusions last night, but failed to make\na crasher example. Not sure why though, because my first try today\nblew up real good:\n\n---\n\\set N 10\n\ncreate table bug as select generate_series(1,:N) f1;\ndelete from bug where f1 = :N;\n\nbegin;\ndelete from bug;\nanalyze verbose bug;\nrollback;\n\ndrop table bug;\n---\n\nOn my machine, N smaller than 10 doesn't do it, but of course that\nwill be very platform-specific.\n\n> Will fix tomorrow morning.\n\nOK.
To save you the trouble of \"git blame\", it looks like\n737a292b5de296615a715ddce2b2d83d1ee245c5 introduced this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2019 10:35:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Hi,\n\nOn 2019-06-17 21:46:02 -0400, Steve Singer wrote:\n> On 6/15/19 10:18 PM, Tom Lane wrote:\n> > Steve Singer <steve@ssinger.info> writes:\n> > > I encountered the following segfault when running against a PG 12 beta1\n> > > during a analyze against a table.\n> > Nobody else has reported this, so you're going to have to work on\n> > producing a self-contained test case, or else debugging it yourself.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> > \n> The attached patch fixes the issue.\n\nThanks for the bug report, diagnosis and patch. Pushed.\n\nI included a small testcase for ANALYZE running in a transaction that\nalso modified a few rows, after going back and forth on it for a\nwhile. Seems unlikely that we'll reintroduce this specific bug, but it\nseems good to have test coverage of at least some of the\nHEAPTUPLE_DELETE_IN_PROGRESS path.
We currently have none...\n\nI think the testcase would catch the issue at hand on most machines, by\nmixing live/dead/deleted-by-current-transaction rows.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2019 16:02:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2019-06-17 21:46:02 -0400, Steve Singer wrote:\n>> On 6/15/19 10:18 PM, Tom Lane wrote:\n>> > Steve Singer <steve@ssinger.info> writes:\n>> > > I encountered the following segfault when running against a PG 12 beta1\n>> > > during a analyze against a table.\n>> > Nobody else has reported this, so you're going to have to work on\n>> > producing a self-contained test case, or else debugging it yourself.\n>> > \n>> > \t\t\tregards, tom lane\n>> > \n>> > \n>> > \n>> The attached patch fixes the issue.\n>\n> Thanks for the bug report, diagnosis and patch. Pushed.\n\nI was going to suggest trying to prevent similar bugs by declaring these\nand other output parameters as `double *const foo` in tableam.h, but\ndoing that without adding the corresponding `const` in heapam_handler.c\ndoesn't even raise a warning.\n\nStill, declaring them as *const in both places might serve as an\nexample/reminder for people writing their own table AMs.\n\n- ilmari\n-- \n\"I use RMS as a guide in the same way that a boat captain would use\n a lighthouse. It's good to know where it is, but you generally\n don't want to find yourself in the same spot.\" - Tollef Fog Heen\n\n\n", "msg_date": "Wed, 19 Jun 2019 11:19:39 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: PG 12 beta 1 segfault during analyze" } ]
[ { "msg_contents": "There are a few places in configure and the makefiles that are looking\nat $host_cpu to decide what to do. As far as I can tell, almost all of\nthem are wrong and should be looking at $target_cpu instead. (The\nlack of complaints indicates that nobody is trying very hard to test\ncross-compilation.)\n\nI'm not too sure about this case in makefiles/Makefile.hpux:\n\nifeq ($(host_cpu), ia64)\n DLSUFFIX = .so\nelse\n DLSUFFIX = .sl\nendif\n\nDoes HPUX even support cross-compiling, and if so what shlib extension\ndo you get in that case?\n\nThe other references seem definitely wrong ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2019 12:56:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "$host_cpu -> $target_cpu in configure?" }, { "msg_contents": "On Sun, Jun 16, 2019 at 12:56:52PM -0400, Tom Lane wrote:\n> There are a few places in configure and the makefiles that are looking\n> at $host_cpu to decide what to do. As far as I can tell, almost all of\n> them are wrong and should be looking at $target_cpu instead. (The\n> lack of complaints indicates that nobody is trying very hard to test\n> cross-compilation.)\n\nhttps://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Specifying-Target-Triplets.html\ndescribes the intended usage. When cross-compiling, $host_cpu is the machine\nable to run the resulting PostgreSQL installation, and $build_cpu is the\nmachine creating that installation. PostgreSQL does not contain a compiler\nthat emits code as output to the user, so $target_cpu is meaningless. Every\nuse of $host_cpu looks correct.\n\n\n", "msg_date": "Sun, 16 Jun 2019 20:33:54 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: $host_cpu -> $target_cpu in configure?" 
}, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Specifying-Target-Triplets.html\n> describes the intended usage. When cross-compiling, $host_cpu is the machine\n> able to run the resulting PostgreSQL installation, and $build_cpu is the\n> machine creating that installation. PostgreSQL does not contain a compiler\n> that emits code as output to the user, so $target_cpu is meaningless. Every\n> use of $host_cpu looks correct.\n\nHmph ... okay, but that's sure a confusing usage of \"host\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2019 16:36:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: $host_cpu -> $target_cpu in configure?" } ]
[ { "msg_contents": "Hi all,\n(Adding in CC relevant committer and author, Teodor and Alexander)\n\nI have been looking today at a crash of pg_waldump on one of the test\nbuilds keeping running in some internal environment. Luckily, I have\nbeen able to put my hands on a core file with 11.2 running. The\nbacktrace is not that interesting:\n(gdb) bt\n#0 btree_desc (buf=0x0, record=0x22ce590) at nbtdesc.c:103\n#1 0x000000000040419f in XLogDumpDisplayRecord (config=0x7fff1ccfd360,\nrecord=0x22ce590) at\n/build/mts/release/bora-13598999/vpostgres/src/postgres/src/bin/pg_waldump/pg_waldump.c:558\n#2 main (argc=<optimized out>, argv=<optimized out>) at\n/build/mts/release/bora-13598999/vpostgres/src/postgres/src/bin/pg_waldump/pg_waldump.c:1170\n(gdb) down 1\n#0 btree_desc (buf=0x0, record=0x22ce590) at nbtdesc.c:103\n103 nbtdesc.c: No such file or directory.\n(gdb) p record\n$1 = (XLogReaderState *) 0x22ce590\n(gdb) p *record\n$2 = {wal_segment_size = 16777216, read_page = 0x405220\n<XLogDumpReadPage>, system_identifier = 0, private_data =\n0x7fff1ccfd380, ReadRecPtr = 67109592, EndRecPtr = 67109672,\ndecoded_record = 0x22cf178, main_data = 0x0, main_data_len = 0,\nmain_data_bufsz = 0, record_origin = 0, blocks = {{in_use = true,\nrnode = {spcNode = 16399, dbNode = 16386, relNode = 19907}, forknum =\nMAIN_FORKNUM, blkno = 0, flags = 96 '`', has_image = false,\napply_image = false, bkp_image = 0x0, hole_offset = 0, hole_length =\n0, bimg_len = 0, bimg_info = 0 '\\000', has_data = true, data =\n0x22db2c0 \"\\003\", data_len = 32, data_bufsz = 8192}, {in_use = false,\nrnode = {spcNode = 0, dbNode = 0, relNode = 0}, forknum =\nMAIN_FORKNUM, blkno = 0, flags = 0 '\\000', has_image = false,\napply_image = false, bkp_image = 0x0, hole_offset = 0, hole_length =\n0, bimg_len = 0, bimg_info = 0 '\\000', has_data = false, data = 0x0,\ndata_len = 0, data_bufsz = 0} <repeats 32 times>}, max_block_id = 0,\nreadBuf = 0x22ceea0 \"\\230\\320\\a\", readLen = 8192, readSegNo = 4,\nreadOff 
= 0, readPageTLI = 0, latestPagePtr = 67108864, latestPageTLI\n= 1, currRecPtr = 67109592, currTLI = 0, currTLIValidUntil = 0,\nnextTLI = 0, readRecordBuf = 0x22d12b0 \"L\", readRecordBufSize = 40960, \nerrormsg_buf = 0x22d0eb0 \"\"}\n(gdb) p xlrec\n$5 = (xl_btree_metadata *) 0x0\n\nAnyway, after looking at the code relevant to XLOG_BTREE_META_CLEANUP,\nI have noticed that the meta-data associated to the first buffer is\nregistered via XLogRegisterBufData() (this is correct because we want\nto associate this data to the metadata buffer). However, nbtdesc.c\nassumes that xl_btree_metadata is from the main record data, causing a\ncrash because we have nothing in this case.\n\nI think that we could just patch nbtpage.c so as we fetch the\nmetadata using XLogRecGetBlockData(), and log its data. The error\ncomes from 3d92796, which is one year-old and has been visibly\ncommitted untested. I am surprised that we have not seen that\ncomplain yet. Attached is a patch, which looks right to me and should\nbe back-patched down to v11. I have not taken the time to actually\ntest it though.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 10:30:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "nbtdesc.c and nbtpage.c are inconsistent with\n XLOG_BTREE_META_CLEANUP (11~)" }, { "msg_contents": "On Sun, Jun 16, 2019 at 6:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I think that we could just patch nbtpage.c so as we fetch the\n> metadata using XLogRecGetBlockData(), and log its data.\n\nDon't you mean nbtdesc.c?\n\n> The error\n> comes from 3d92796, which is one year-old and has been visibly\n> committed untested. 
I am surprised that we have not seen that\n> complain yet.\n\nWhy is that surprising?\n\nhttps://coverage.postgresql.org/src/backend/access/rmgrdesc/index.html\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 16 Jun 2019 18:54:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtdesc.c and nbtpage.c are inconsistent with\n XLOG_BTREE_META_CLEANUP\n (11~)" }, { "msg_contents": "On Sun, Jun 16, 2019 at 06:54:57PM -0700, Peter Geoghegan wrote:\n> On Sun, Jun 16, 2019 at 6:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I think that we could just patch nbtpage.c so as we fetch the\n>> metadata using XLogRecGetBlockData(), and log its data.\n> \n> Don't you mean nbtdesc.c?\n\nYeah I meant nbtdesc.c, sorry. This will have to wait until after this\nweek's release for a fix by the way...\n\n> Why is that surprising?\n> \n> https://coverage.postgresql.org/src/backend/access/rmgrdesc/index.html\n\nI would have supposed that more people scan WAL records using the\ndescription callbacks.\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 11:05:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: nbtdesc.c and nbtpage.c are inconsistent with\n XLOG_BTREE_META_CLEANUP (11~)" }, { "msg_contents": "On Sun, Jun 16, 2019 at 7:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I would have supposed that more people scan WAL records using the\n> description callbacks.\n\nThe WAL record in question, XLOG_BTREE_META_CLEANUP, is certainly one\nof the less common record types used by nbtree. I agree that this\nshould have been tested when it went in, but I'm not surprised that\nthe bug remained undetected for a year.
Not that many people use\npg_waldump.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 16 Jun 2019 19:14:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtdesc.c and nbtpage.c are inconsistent with\n XLOG_BTREE_META_CLEANUP\n (11~)" }, { "msg_contents": "On Sun, Jun 16, 2019 at 07:14:05PM -0700, Peter Geoghegan wrote:\n> The WAL record in question, XLOG_BTREE_META_CLEANUP, is certainly one\n> of the less common record types used by nbtree. I agree that this\n> should have been tested when it went in, but I'm not surprised that\n> the bug remained undetected for a year. Not that many people use\n> pg_waldump.\n\nActually, a simple installcheck generates a handful of them. I have\nnot actually run into a crash, but this causes pg_waldump to describe\nthe record incorrectly. Committed down to 11 after cross-checking\nthat the data inserted in the WAL record and what gets described are\nboth consistent.\n\n_bt_restore_meta() does the right thing by the way when restoring the\npage.\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 11:04:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: nbtdesc.c and nbtpage.c are inconsistent with\n XLOG_BTREE_META_CLEANUP (11~)" } ]
[ { "msg_contents": "Hi all,\n\nAlvaro has reported a rather rare buildfarm failure involving\n007_sync_rep.pl to which I have responded here:\nhttps://www.postgresql.org/message-id/20190613060123.GC1643@paquier.xyz\n\nThe buildfarm failure is here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-05-12%2020%3A37%3A11\n\nIt involves a race condition related to the way the standbys of the\ntest are stopped and restarted to ensure that they appear in the\ncorrect order in the WAL sender array of the primary, but feel free to\nlook at the message above for all the details.\n\nAttached is a patch to improve the stability of the test. The fix I\nam proposing is very simple: in order to make sure that a standby is\nadded into the WAL sender array of the primary, let's check after\npg_stat_replication after a standby is started. This can be done\nconsistently with a small wrapper in the tests.\n\nAny thoughts?\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 14:51:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Race conditions with TAP test for syncrep" }, { "msg_contents": "On 2019-Jun-17, Michael Paquier wrote:\n\n> Attached is a patch to improve the stability of the test. The fix I\n> am proposing is very simple: in order to make sure that a standby is\n> added into the WAL sender array of the primary, let's check after\n> pg_stat_replication after a standby is started. This can be done\n> consistently with a small wrapper in the tests.\n> \n> Any thoughts?\n\nHmm, this introduces a bit of latency: it waits for each standby to be\nfully up before initializing the next standby. Maybe it would be more\nconvenient to split the primitives: keep the current one to start the\nstandby, and add a separate one to wait for it to be registered. 
Then\nwe could do\nstandby1->start;\nstandby2->start;\nstandby3->start;\nforeach my $sby (@standbys) {\n\t$sby->wait_for_standby\n}\n\nso they all start in parallel, saving a bit of time.\n\n> +\tprint \"### Waiting for standby \\\"$standby_name\\\" on \\\"$master_name\\\"\\n\";\n\nI think this should be note() rather than print(), or maybe diag(). (I\nsee that we have a couple of other cases which use print() in the tap\ntests, which I think should be note() as well.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Jun 2019 10:50:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On Mon, Jun 17, 2019 at 10:50:39AM -0400, Alvaro Herrera wrote:\n> Hmm, this introduces a bit of latency: it waits for each standby to be\n> fully up before initializing the next standby. Maybe it would be more\n> convenient to split the primitives: keep the current one to start the\n> standby, and add a separate one to wait for it to be registered. Then\n> we could do\n> standby1->start;\n> standby2->start;\n> standby3->start;\n> foreach my $sby (@standbys) {\n> \t$sby->wait_for_standby\n> }\n\nIt seems to me that this sequence could still lead to inconsistencies:\n1) standby 1 starts, reaches consistency so pg_ctl start -w exits.\n2) standby 2 starts, reaches consistency.\n3) standby 2 starts a WAL receiver, gets the first WAL sender slot of\nthe primary.\n4) standby 1 starts a WAL receiver, gets the second slot.\n\n> I think this should be note() rather than print(), or maybe diag(). (I\n> see that we have a couple of other cases which use print() in the tap\n> tests, which I think should be note() as well.)\n\nOK. Let's change it for this patch. For the rest, I can always send\na different patch. 
Just writing down your comment..\n--\nMichael", "msg_date": "Tue, 18 Jun 2019 09:59:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On 2019-Jun-18, Michael Paquier wrote:\n\n> On Mon, Jun 17, 2019 at 10:50:39AM -0400, Alvaro Herrera wrote:\n> > Hmm, this introduces a bit of latency: it waits for each standby to be\n> > fully up before initializing the next standby. Maybe it would be more\n> > convenient to split the primitives: keep the current one to start the\n> > standby, and add a separate one to wait for it to be registered. Then\n> > we could do\n> > standby1->start;\n> > standby2->start;\n> > standby3->start;\n> > foreach my $sby (@standbys) {\n> > \t$sby->wait_for_standby\n> > }\n> \n> It seems to me that this sequence could still lead to inconsistencies:\n> 1) standby 1 starts, reaches consistency so pg_ctl start -w exits.\n> 2) standby 2 starts, reaches consistency.\n> 3) standby 2 starts a WAL receiver, gets the first WAL sender slot of\n> the primary.\n> 4) standby 1 starts a WAL receiver, gets the second slot.\n\nHo ho .. you know what misled me into thinking that that would work?\nJust look at the name of the test that failed, \"asterisk comes before\nanother standby name\". 
That doesn't seem to be what the test is\ntesting!\n\n# poll_query_until timed out executing this query:\n# SELECT application_name, sync_priority, sync_state FROM pg_stat_replication ORDER BY application_name;\n# expecting this output:\n# standby1|1|sync\n# standby2|2|sync\n# standby3|2|potential\n# standby4|2|potential\n# last actual query output:\n# standby1|1|sync\n# standby2|2|potential\n# standby3|2|sync\n# standby4|2|potential\n# with stderr:\n\n# Failed test 'asterisk comes before another standby name'\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Jun 2019 16:08:44 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On Wed, Jun 19, 2019 at 04:08:44PM -0400, Alvaro Herrera wrote:\n> Ho ho .. you know what misled me into thinking that that would work?\n> Just look at the name of the test that failed, \"asterisk comes before\n> another standby name\". That doesn't seem to be what the test is\n> testing!\n\nI agree that the wording is poor here. Perhaps a better description\nin the comment block would be \"standby1 is selected as sync as it has\nthe highest priority, and is followed by a second standby listed first\nin the WAL sender array, in this case standby2\". We could change the\ndescription like that \"second standby chosen as sync is the first one\nin WAL sender array\". The follow-up test using '2(*)' is actually\nworse in terms of ordering dependency as all standbys could be\nselected. 
The last test with a quorum lookup on all the standbys is\nfine from this perspective thanks to the ORDER BY on application_name\nwhen doing the lookup of pg_stat_replication.\n--\nMichael", "msg_date": "Thu, 20 Jun 2019 15:07:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On Tue, Jun 18, 2019 at 09:59:08AM +0900, Michael Paquier wrote:\n> On Mon, Jun 17, 2019 at 10:50:39AM -0400, Alvaro Herrera wrote:\n> > I think this should be note() rather than print(), or maybe diag(). (I\n> > see that we have a couple of other cases which use print() in the tap\n> > tests, which I think should be note() as well.)\n> \n> OK. Let's change it for this patch.\n\nPostgresNode uses \"print\" the same way. The patch does close the intended\nrace conditions, and its implementation choices look fine to me.\n\n\n", "msg_date": "Mon, 22 Jul 2019 23:45:53 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On Mon, Jul 22, 2019 at 11:45:53PM -0700, Noah Misch wrote:\n> PostgresNode uses \"print\" the same way. The patch does close the intended\n> race conditions, and its implementation choices look fine to me.\n\nThanks Noah for the review. I have reviewed the thread and improved a\ncouple of comments based on Alvaro's previous input. Attached is v2.\nIf there are no objections, I would be fine to commit it.\n--\nMichael", "msg_date": "Tue, 23 Jul 2019 17:04:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Race conditions with TAP test for syncrep" }, { "msg_contents": "On Tue, Jul 23, 2019 at 05:04:32PM +0900, Michael Paquier wrote:\n> Thanks Noah for the review. I have reviewed the thread and improved a\n> couple of comments based on Alvaro's previous input. 
Attached is v2.\n> If there are no objections, I would be fine to commit it.\n\nApplied and back-patched down to 9.6 where it applies. Thanks for the\nreviews.\n--\nMichael", "msg_date": "Wed, 24 Jul 2019 10:56:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Race conditions with TAP test for syncrep" } ]
[ { "msg_contents": "Hi all,\n\nI have just bumped into $subject, which makes no sense now as this is\nan init-time option. Any objections if I remove this code as per the\nattached?\n\nThanks,\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 16:32:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Still some references to configure-time WAL segment size option in\n msvc scripts" }, { "msg_contents": "On Mon, Jun 17, 2019 at 04:32:28PM +0900, Michael Paquier wrote:\n> I have just bumped into $subject, which makes no sense now as this is\n> an init-time option. Any objections if I remove this code as per the\n> attached?\n\nAnd committed.\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 11:19:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Still some references to configure-time WAL segment size option\n in msvc scripts" } ]
[ { "msg_contents": "Hello,\n\nI had a trouble report in which the reporter got the following error\nmessages.\n\nFATAL: XX000: requested timeline 175 is not a child of this server's history\nDETAIL: Latest checkpoint is at 1A/D6000028 on timeline 172, but in\nthe history of the requested timeline, the server forked off from that\ntimeline at 1C/29074DB8.\n\nThis message doesn't make sense. Perhaps timeline 172 started\nafter 1A/D6000028 instead.\n\nThe attached patch makes the error messages for both cases make sense.\n\nFATAL: requested timeline 4 is not a child of this server's history\nDETAIL: Latest checkpoint is at 0/3000060 on timeline 2, but in the\nhistory of the requested timeline, the server forked off from that\ntimeline at 0/22000A0.\n\nFATAL: requested timeline 4 is not a child of this server's history\nDETAIL: Latest checkpoint is at 0/3000060 on timeline 2, but in the\nhistory of the requested timeline, the server entered that timeline at\n0/40000A0.\n\nIntentional corruption of the timeline history is required to\nexercise this. Do we need a regression test that does that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 17 Jun 2019 17:31:03 +0900", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Strange error message in xlog.c" } ]
[ { "msg_contents": "Hi all,\n\nDuring the reorder of grouping sets into correct prefix order, if only\none aggregation pass is needed, we follow the order of the ORDER BY\nclause to the extent possible, to minimize the chance that we add\nunnecessary sorts. This is implemented in preprocess_grouping_sets -->\nreorder_grouping_sets.\n\nHowever, current codes fail to do that. For instance:\n\n# set enable_hashagg to off;\nSET\n\n# explain verbose select * from t group by grouping sets((a,b,c),(c)) order\nby c,b,a;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Sort (cost=184.47..185.48 rows=404 width=12)\n Output: a, b, c\n Sort Key: t.c, t.b, t.a\n -> GroupAggregate (cost=142.54..166.98 rows=404 width=12)\n Output: a, b, c\n Group Key: t.c, t.a, t.b\n Group Key: t.c\n -> Sort (cost=142.54..147.64 rows=2040 width=12)\n Output: a, b, c\n Sort Key: t.c, t.a, t.b\n -> Seq Scan on public.t (cost=0.00..30.40 rows=2040\nwidth=12)\n Output: a, b, c\n(12 rows)\n\nThis sort node in the above plan can be avoided if we reorder the\ngrouping sets more properly.\n\nAttached is a patch for the fixup. With the patch, the above plan would\nbecome:\n\n# explain verbose select * from t group by grouping sets((a,b,c),(c)) order\nby c,b,a;\n QUERY PLAN\n-------------------------------------------------------------------------\n GroupAggregate (cost=142.54..166.98 rows=404 width=12)\n Output: a, b, c\n Group Key: t.c, t.b, t.a\n Group Key: t.c\n -> Sort (cost=142.54..147.64 rows=2040 width=12)\n Output: a, b, c\n Sort Key: t.c, t.b, t.a\n -> Seq Scan on public.t (cost=0.00..30.40 rows=2040 width=12)\n Output: a, b, c\n(9 rows)\n\nThe fix happens in reorder_grouping_sets and is very simple. In each\niteration to reorder one grouping set, if the next item in sortclause\nmatches one element in new_elems, we add that item to the grouing set\nlist and meanwhile remove it from the new_elems list. 
When all the\nelements in new_elems have been removed, we know we are done with the\ncurrent grouping set and should break out to continue with the next grouping\nset.\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Mon, 17 Jun 2019 17:23:11 +0800", "msg_from": "Richard Guo <riguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Fix up grouping sets reorder" }, { "msg_contents": "Hi,\n\nOn 2019-06-17 17:23:11 +0800, Richard Guo wrote:\n> During the reorder of grouping sets into correct prefix order, if only\n> one aggregation pass is needed, we follow the order of the ORDER BY\n> clause to the extent possible, to minimize the chance that we add\n> unnecessary sorts. This is implemented in preprocess_grouping_sets -->\n> reorder_grouping_sets.\n\nThanks for finding!\n\nAndrew, could you take a look at that?\n\n- Andres\n\n\n", "msg_date": "Mon, 17 Jun 2019 10:33:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix up grouping sets reorder" }, { "msg_contents": ">>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n\n >> During the reorder of grouping sets into correct prefix order, if\n >> only one aggregation pass is needed, we follow the order of the\n >> ORDER BY clause to the extent possible, to minimize the chance that\n >> we add unnecessary sorts.
This is implemented in\n >> preprocess_grouping_sets --> reorder_grouping_sets.\n\n Andres> Thanks for finding!\n\n Andres> Andrew, could you take a look at that?\n\nYes.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Mon, 17 Jun 2019 19:47:38 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Fix up grouping sets reorder" }, { "msg_contents": ">>>>> \"Richard\" == Richard Guo <riguo@pivotal.io> writes:\n\n Richard> Hi all,\n\n Richard> During the reorder of grouping sets into correct prefix order,\n Richard> if only one aggregation pass is needed, we follow the order of\n Richard> the ORDER BY clause to the extent possible, to minimize the\n Richard> chance that we add unnecessary sorts. This is implemented in\n Richard> preprocess_grouping_sets --> reorder_grouping_sets.\n\n Richard> However, current codes fail to do that.\n\nYou're correct, thanks for the report.\n\nYour fix works, but I prefer to refactor the conditional logic slightly\ninstead, removing the outer if{}. So I didn't use your exact patch in\nthe fix I just committed.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Mon, 01 Jul 2019 00:00:33 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Fix up grouping sets reorder" } ]
[ { "msg_contents": "Can anyone please give me a hint (and possibly add some comments to the code)\nwhen pg_log_fatal() should be used in frontend code and when it's appropriate\nto call pg_log_error()? The current use does not seem very consistent.\n\nI'd expect that the pg_log_fatal() should be called when the error is serious\nenough to cause premature exit, but I can see cases where even pg_log_error()\nis followed by exit(1). pg_waldump makes me feel that pg_log_error() is used\nto handle incorrect user input (before the actual execution started) while\npg_log_fatal() handles error conditions that user does not fully control\n(things that happen during the actual execution). But this is rather a guess.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 17 Jun 2019 14:19:30 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "pg_log_fatal vs pg_log_error" }, { "msg_contents": "On Mon, Jun 17, 2019 at 02:19:30PM +0200, Antonin Houska wrote:\n> I'd expect that the pg_log_fatal() should be called when the error is serious\n> enough to cause premature exit, but I can see cases where even pg_log_error()\n> is followed by exit(1). pg_waldump makes me feel that pg_log_error() is used\n> to handle incorrect user input (before the actual execution started) while\n> pg_log_fatal() handles error conditions that user does not fully control\n> (things that happen during the actual execution). But this is rather a guess.\n\nI agree with what you say when pg_log_fatal should be used for an\nerror bad enough that the binary should exit immediately. In the case\nof pg_waldump, not using pg_log_fatal() makes the code more readable\nbecause there is no need to repeat the \"Try --help for more\ninformation on a bad argument\". 
Have you spotted other areas of the\ncode where it makes sense to change a pg_log_error() + exit to a\nsingle pg_log_fatal()?\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 21:43:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_log_fatal vs pg_log_error" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jun 17, 2019 at 02:19:30PM +0200, Antonin Houska wrote:\n> > I'd expect that the pg_log_fatal() should be called when the error is serious\n> > enough to cause premature exit, but I can see cases where even pg_log_error()\n> > is followed by exit(1). pg_waldump makes me feel that pg_log_error() is used\n> > to handle incorrect user input (before the actual execution started) while\n> > pg_log_fatal() handles error conditions that user does not fully control\n> > (things that happen during the actual execution). But this is rather a guess.\n> \n> I agree with what you say when pg_log_fatal should be used for an\n> error bad enough that the binary should exit immediately.
In the case\n> of pg_waldump, not using pg_log_fatal() makes the code more readable\n> because there is no need to repeat the \"Try --help for more\n> information on a bad argument\".\n\nI'd understand this if pg_log_fatal() called exit() itself, but it does not\n(unless I miss something).\n\n> Have you spotted other areas of the code where it makes sense to change a\n> pg_log_error() + exit to a single pg_log_fatal()?\n\nI haven't done an exhaustive search so far, but as I mentioned above,\npg_log_fatal() does not seem to be \"pg_log_error() + exit()\".\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 17 Jun 2019 15:39:49 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: pg_log_fatal vs pg_log_error" }, { "msg_contents": "On Mon, Jun 17, 2019 at 03:39:49PM +0200, Antonin Houska wrote:\n> I'd understand this if pg_log_fatal() called exit() itself, but it does not\n> (unless I miss something).\n\nOops. My apologies. I have my own wrapper of pg_log_fatal() for an\ninternal tool which does an exit on top of the logging in this case.\nYou are right the PG code does not exit() in this case.\n--\nMichael", "msg_date": "Mon, 17 Jun 2019 22:44:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_log_fatal vs pg_log_error" }, { "msg_contents": "On 2019-06-17 14:19, Antonin Houska wrote:\n> Can anyone please give me a hint (and possibly add some comments to the code)\n> when pg_log_fatal() should be used in frontend code and when it's appropriate\n> to call pg_log_error()?
The current use does not seem very consistent.\n\nFor a program that runs in a loop, like for example psql or\npg_receivewal, use error if the program keeps running and fatal if not.\nFor one-shot programs like for example createdb, there is no difference,\nso we have used error in those cases.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Jun 2019 16:34:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_log_fatal vs pg_log_error" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-06-17 14:19, Antonin Houska wrote:\n> > Can anyone please give me a hint (and possibly add some comments to the code)\n> > when pg_log_fatal() should be used in frontend code and when it's appropriate\n> > to call pg_log_error()? The current use does not seem very consistent.\n> \n> For a program that runs in a loop, like for example psql or\n> pg_receivewal, use error if the program keeps running and fatal if not.\n> For one-shot programs like for example createdb, there is no difference,\n> so we have used error in those cases.\n\nThat makes sense, but shouldn't then pg_log_fatal() perform exit(EXIT_FAILURE)\ninternally? Just like elog(FATAL) does on backend side.\n\nActually there are indications that someone would appreciate such behaviour\neven in frontends.\n\nIn pg_rewind.h I see:\n\n/* logging support */\n#define pg_fatal(...) 
do { pg_log_fatal(__VA_ARGS__); exit(1); } while(0)\n\nor this in pg_upgrade/util.c:\n\nvoid\npg_fatal(const char *fmt,...)\n{\n\tva_list\t\targs;\n\n\tva_start(args, fmt);\n\tpg_log_v(PG_FATAL, fmt, args);\n\tva_end(args);\n\tprintf(_(\"Failure, exiting\\n\"));\n\texit(1);\n}\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 10 Jul 2019 10:58:57 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: pg_log_fatal vs pg_log_error" } ]
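Editorial aside between threads: the convention the thread converges on (error-level reports let a looping program keep running; a pg_fatal-style wrapper pairs a fatal-level report with an immediate exit(1), as pg_rewind's macro and pg_upgrade's pg_fatal() quoted above already do) can be shown in a tiny shim. This is an illustrative sketch only, not the real src/common/logging.c code; log_error, log_fatal_exit, and error_count are invented names.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

static int error_count = 0;     /* lets a looping caller keep going */

/* error: report the problem, count it, and return to the caller */
static void log_error(const char *fmt, ...)
{
    va_list ap;

    fputs("error: ", stderr);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
    error_count++;
}

/* fatal: report, then terminate the program unconditionally */
#define log_fatal_exit(...) \
    do { \
        fputs("fatal: ", stderr); \
        fprintf(stderr, __VA_ARGS__); \
        fputc('\n', stderr); \
        exit(1); \
    } while (0)
```

A one-shot tool can use either and follow log_error with its own exit(1); a looping tool such as a receiver would call log_error and continue, reserving log_fatal_exit for conditions it cannot survive.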
[ { "msg_contents": "A small new feature in SQL:2016 allows attaching a table alias to a\nJOIN/USING construct:\n\n <named columns join> ::=\n USING <left paren> <join column list> <right paren>\n [ AS <join correlation name> ]\n\n(The part in brackets is new.)\n\nThis seems quite useful, and it seems the code would already support\nthis if we allow the grammar to accept this syntax.\n\nPatch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Jun 2019 16:40:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On Tue, Jun 18, 2019 at 2:41 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> A small new feature in SQL:2016 allows attaching a table alias to a\n> JOIN/USING construct:\n>\n> <named columns join> ::=\n> USING <left paren> <join column list> <right paren>\n> [ AS <join correlation name> ]\n>\n> (The part in brackets is new.)\n>\n> This seems quite useful, and it seems the code would already support\n> this if we allow the grammar to accept this syntax.\n\nNeat. That's a refreshingly short patch to get a sql_features.txt\nline bumped to YES.\n\n> Patch attached.\n\nIt does what it says on the tin.\n\nI see that USING is the important thing here; for (a NATURAL JOIN b)\nAS ab or (a JOIN b ON ...) AS ab you still need the parentheses or\n(respectively) it means something different (alias for B only) or\ndoesn't parse. That makes sense.\n\nI noticed that the HINT when you accidentally use a base table name\ninstead of a table alias is more helpful than the HINT you get when\nyou use a base table name instead of a join alias. 
That seems like a\npotential improvement that is independent of this syntax change.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sat, 13 Jul 2019 18:29:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "\nHello Peter,\n\n> A small new feature in SQL:2016 allows attaching a table alias to a\n> JOIN/USING construct:\n>\n> <named columns join> ::=\n> USING <left paren> <join column list> <right paren>\n> [ AS <join correlation name> ]\n>\n> (The part in brackets is new.)\n>\n> This seems quite useful, and it seems the code would already support\n> this if we allow the grammar to accept this syntax.\n>\n> Patch attached.\n\nA few more comments.\n\nPatch v1 applies cleanly, compiles. make check ok. Doc gen ok.\n\nThe patch allows an AS clause (alias) attached to a JOIN USING, which seems\nto be SQL feature F404, apparently a new feature in SQL:2016.\n\nThe feature implementation only involves parser changes, so the underlying\ninfrastructure seems to be already available.\n\nAbout the code:\n\nThe removal from the grammar of the dynamic type introspection to distinguish\nbetween ON & USING is a relief in itself:-)\n\nAbout the feature:\n\nWhen using aliases both on tables and on the unifying using clause, the former\nare hidden from view.
I cannot say that I understand why, and this makes it\nimpossible to access some columns in some cases if there is an ambiguity, eg:\n\n postgres=# SELECT t.filler\n FROM pgbench_tellers AS t\n \t JOIN pgbench_branches AS b USING (bid) AS x;\n ERROR: invalid reference to FROM-clause entry for table \"t\"\n LINE 1: SELECT t.filler FROM pgbench_tellers AS t JOIN pgbench_branc...\n ^\n HINT: There is an entry for table \"t\", but it cannot be referenced from this\n part of the query.\n\nBut then:\n\n postgres=# SELECT x.filler\n FROM pgbench_tellers AS t\n \t JOIN pgbench_branches AS b USING (bid) AS x;\n ERROR: column reference \"filler\" is ambiguous\n LINE 1: SELECT x.filler FROM pgbench_tellers AS t JOIN pgbench_branc...\n ^\n\nIs there a good reason to forbid several aliases covering the same table?\n\nMore precisely, is this behavior expected from the spec or a side effect \nof pg implementation?\n\nGiven that the executor detects that the underlying alias exists, could it \njust let it pass instead of raising an error, and it would simply just \nwork?\n\nI'm wondering why such an alias could not be attached also to an ON \nclause. Having them in one case but not the other looks strange.\n\nAbout the documentation:\n\nThe documentation changes only involve the synopsis. ISTM that maybe aliases\nshadowing one another could deserve some caveat. The documentation in its\n\"alias\" paragraph only talks about hiding table and function names.\n\nAlso, the USING paragraph could talk about its optional alias and its \nhiding effect.\n\nAbout tests:\n\nMaybe an alias hiding case could be added.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 15 Jul 2019 22:58:33 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ...
USING" }, { "msg_contents": "On Tue, Jul 16, 2019 at 8:58 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> About the feature:\n>\n> When using aliases both on tables and on the unifying using clause, the former\n> are hidden from view. I cannot say that I understand why, and this makes it\n> impossible to access some columns in some cases if there is an ambiguity, eg:\n>\n> postgres=# SELECT t.filler\n> FROM pgbench_tellers AS t\n> JOIN pgbench_branches AS b USING (bid) AS x;\n> ERROR: invalid reference to FROM-clause entry for table \"t\"\n> LINE 1: SELECT t.filler FROM pgbench_tellers AS t JOIN pgbench_branc...\n> ^\n> HINT: There is an entry for table \"t\", but it cannot be referenced from this\n> part of the query.\n>\n> But then:\n>\n> postgres=# SELECT x.filler\n> FROM pgbench_tellers AS t\n> JOIN pgbench_branches AS b USING (bid) AS x;\n> ERROR: column reference \"filler\" is ambiguous\n> LINE 1: SELECT x.filler FROM pgbench_tellers AS t JOIN pgbench_branc...\n> ^\n>\n> Is there a good reason to forbid several aliases covering the same table?\n>\n> More precisely, is this behavior expected from the spec or a side effect\n> of pg implementation?\n\nIndeed, that seems like a problem, and it's a good question. You can\nsee this on unpatched master with SELECT x.filler FROM\n(pgbench_tellers AS t JOIN b USING (bid)) AS x.\n\nI'm moving this to the next CF.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 18:33:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2019-Aug-01, Thomas Munro wrote:\n\n> Indeed, that seems like a problem, and it's a good question. 
You can\n> see this on unpatched master with SELECT x.filler FROM\n> (pgbench_tellers AS t JOIN b USING (bid)) AS x.\n\nI'm not sure I understand why that problem is a blocker for this patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Sep 2019 14:37:42 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2019-09-17 19:37, Alvaro Herrera wrote:\n> On 2019-Aug-01, Thomas Munro wrote:\n> \n>> Indeed, that seems like a problem, and it's a good question. You can\n>> see this on unpatched master with SELECT x.filler FROM\n>> (pgbench_tellers AS t JOIN b USING (bid)) AS x.\n> \n> I'm not sure I understand why that problem is a blocker for this patch.\n\nI tried to analyze the spec for what the behavior should be here, but I\ngot totally lost. I'll give it another look.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Sep 2019 22:14:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "\nOn Tue, 17 Sep 2019, Alvaro Herrera wrote:\n\n>> Indeed, that seems like a problem, and it's a good question.
You can\n>> see this on unpatched master with SELECT x.filler FROM\n>> (pgbench_tellers AS t JOIN b USING (bid)) AS x.\n>\n> I'm not sure I understand why that problem is a blocker for this patch.\n\nAs discussed on another thread,\n\n https://www.postgresql.org/message-id/flat/2aa57950-b1d7-e9b6-0770-fa592d565dda@2ndquadrant.com\n\nthe patch does not conform to spec\n\n SQL:2016 Part 2 Foundation Section 7.10 <joined table>\n\nBasically \"x\" is expected to include *ONLY* joined attributes with USING, \ni.e. above only x.bid should exist, and per-table aliases are expected to \nstill work for other attributes.\n\nISTM that this patch could be \"returned with feedback\".\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 24 Dec 2019 19:13:32 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2019-12-24 19:13, Fabien COELHO wrote:\n>>>> Indeed, that seems like a problem, and it's a good question. You can\n>>>> see this on unpatched master with SELECT x.filler FROM\n>>>> (pgbench_tellers AS t JOIN b USING (bid)) AS x.\n>>>\n>>> I'm not sure I understand why that problem is a blocker for this patch.\n>>\n>> As discussed on another thread,\n>>\n>> https://www.postgresql.org/message-id/flat/2aa57950-b1d7-e9b6-0770-fa592d565dda@2ndquadrant.com\n>>\n>> the patch does not conform to spec\n>>\n>> SQL:2016 Part 2 Foundation Section 7.10 <joined table>\n>>\n>> Basically \"x\" is expected to include *ONLY* joined attributes with USING,\n>> i.e. above only x.bid should exist, and per-table aliases are expected to\n>> still work for other attributes.\n\nI took another crack at this. Attached is a new patch that addresses \nthe semantic comments from this and the other thread.
It's all a bit \ntricky, comments welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Dec 2019 22:25:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 30/12/2019 22:25, Peter Eisentraut wrote:\n> On 2019-12-24 19:13, Fabien COELHO wrote:\n>>>>> Indeed, that seems like a problem, and it's a good question.  You can\n>>>>> see this on unpatched master with SELECT x.filler FROM\n>>>>> (pgbench_tellers AS t JOIN b USING (bid)) AS x.\n>>>>\n>>>> I'm not sure I understand why that problem is a blocker for this patch.\n>>>\n>>> As discussed on another thread,\n>>>\n>>>      \n>>> https://www.postgresql.org/message-id/flat/2aa57950-b1d7-e9b6-0770-fa592d565dda@2ndquadrant.com\n>>>\n>>> the patch does not conform to spec\n>>>\n>>>     SQL:2016 Part 2 Foundation Section 7.10 <joined table>\n>>>\n>>> Basically \"x\" is expected to include *ONLY* joined attributes with\n>>> USING,\n>>> i.e. above only x.bid should exist, and per-table aliases are\n>>> expected to\n>>> still work for other attributes.\n>\n> I took another crack at this.  Attached is a new patch that addresses\n> the semantic comments from this and the other thread.  It's all a bit\n> tricky, comments welcome.\n\n\nExcellent!  Thank you for working on this, Peter.\n\n\nOne thing I notice is that the joined columns are still accessible from\ntheir respective table names when they should not be per spec.  That\nmight be one of those \"silly restrictions\" that we choose to ignore, but\nit should probably be noted somewhere, at the very least in a code\ncomment if not in user documentation.
(This is my reading of SQL:2016 SR\n11.a.i)\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Tue, 31 Dec 2019 00:07:55 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "\nHello Peter,\n\n> I took another crack at this. Attached is a new patch that addresses \n> the semantic comments from this and the other thread. It's all a bit \n> tricky, comments welcome.\n\nIt seems that this patch does not apply anymore after Tom's 5815696.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 3 Jan 2020 16:04:04 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2019-12-31 00:07, Vik Fearing wrote:\n> One thing I notice is that the joined columns are still accessible from\n> their respective table names when they should not be per spec.  That\n> might be one of those \"silly restrictions\" that we choose to ignore, but\n> it should probably be noted somewhere, at the very least in a code\n> comment if not in user documentation. (This is my reading of SQL:2016 SR\n> 11.a.i)\n\nHere is a rebased patch.\n\nThe above comment is valid. One reason I didn't implement it is that it \nwould create inconsistencies with existing behavior, which is already \nnonstandard.\n\nFor example,\n\ncreate table a (id int, a1 int, a2 int);\ncreate table b (id int, b2 int, b3 int);\n\nmakes\n\nselect a.id from a join b using (id);\n\ninvalid. Adding an explicit alias for the common column names doesn't \nchange that semantically, because an implicit alias also exists if an \nexplicit one isn't specified.\n\nI agree that some documentation would be in order if we decide to leave \nit like this.\n\nAnother reason was that it seemed \"impossible\" to implement it before \nTom's recent refactoring of the parse namespace handling.
Now we also \nhave parse namespace columns tracked separately from range table \nentries, so it appears that this would be possible. If we want to do it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 27 Jan 2020 10:19:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "> On 27 Jan 2020, at 10:19, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-12-31 00:07, Vik Fearing wrote:\n>> One thing I notice is that the joined columns are still accessible from\n>> their respective table names when they should not be per spec. That\n>> might be one of those \"silly restrictions\" that we choose to ignore, but\n>> it should probably be noted somewhere, at the very least in a code\n>> comment if not in user documentation. (This is my reading of SQL:2016 SR\n>> 11.a.i)\n> \n> Here is a rebased patch.\n\nThis thread has stalled for a bit, let's try to bring it to an end.\n\nVik: having shown interest in, and been actively reviewing, this patch; do you\nhave time to review this latest version from Peter during this commitfest?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 9 Jul 2020 13:38:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "Peter Eisentraut:\n> On 2019-12-31 00:07, Vik Fearing wrote:\n>> One thing I notice is that the joined columns are still accessible from\n>> their respective table names when they should not be per spec.  That\n>> might be one of those \"silly restrictions\" that we choose to ignore, but\n>> it should probably be noted somewhere, at the very least in a code\n>> comment if not in user documentation.
(This is my reading of SQL:2016 SR\n>> 11.a.i)\n> \n> Here is a rebased patch.\n> \n> The above comment is valid.  One reason I didn't implement it is that it \n> would create inconsistencies with existing behavior, which is already \n> nonstandard.\n> \n> For example,\n> \n> create table a (id int, a1 int, a2 int);\n> create table b (id int, b2 int, b3 int);\n> \n> makes\n> \n> select a.id from a join b using (id);\n> \n> invalid.  Adding an explicit alias for the common column names doesn't \n> change that semantically, because an implicit alias also exists if an \n> explicit one isn't specified.\nI just looked through the patch without applying or testing it - but I \ncouldn't find anything that would indicate that this is not going to \nwork for e.g. a LEFT JOIN as well. First PG patch I looked at, so tell \nme if I missed something there.\n\nSo given this:\n\nSELECT x.id FROM a LEFT JOIN b USING (id) AS x\n\nwill this return NULL or a.id for rows that don't match in b? This \nshould definitely be mentioned in the docs and I guess a test wouldn't \nbe too bad as well?\n\nIn any case: If a.id and b.id would not be available anymore, but just \nx.id, either the id value itself or the NULL value (indicating the \nmissing row in b) is lost. So this seems like a no-go.\n\n > I agree that some documentation would be in order if we decide to leave\n > it like this.\n\nKeep it like that!\n\n\n", "msg_date": "Mon, 3 Aug 2020 19:44:53 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ...
USING" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this patch fails on the cfbot.\r\nFor this, I changed the status to: 'Waiting on Author'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:15:55 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2020-11-10 16:15, Georgios Kokolatos wrote:\n> I noticed that this patch fails on the cfbot.\n> For this, I changed the status to: 'Waiting on Author'.\n> \n> Cheers,\n> //Georgios\n> \n> The new status of this patch is: Waiting on Author\n\nHere is a rebased and lightly retouched patch.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Sat, 14 Nov 2020 09:49:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "On 2020-08-03 19:44, Wolfgang Walther wrote:\n> So given this:\n> \n> SELECT x.id FROM a LEFT JOIN b USING (id) AS x\n> \n> will this return NULL or a.id for rows that don't match in b? This\n> should definitely be mentioned in the docs and I guess a test wouldn't\n> be too bad as well?\n\nThis issue is independent of the presence of the alias \"x\", so I don't \nthink it has to do with this patch.\n\nThere is a fair amount of documentation on outer joins, so I expect that \nthis is discussed there.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Sat, 14 Nov 2020 09:52:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... 
USING" }, { "msg_contents": "On 11/14/20 3:49 AM, Peter Eisentraut wrote:\n> On 2020-11-10 16:15, Georgios Kokolatos wrote:\n>> I noticed that this patch fails on the cfbot.\n>> For this, I changed the status to: 'Waiting on Author'.\n>>\n>> Cheers,\n>> //Georgios\n>>\n>> The new status of this patch is: Waiting on Author\n> \n> Here is a rebased and lightly retouched patch.\n\nThere don't seem to be any objections to just documenting the slight \ndivergence from the spec.\n\nSo, does it make sense to just document that and proceed?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 5 Mar 2021 12:00:20 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "\nOn 05.03.21 18:00, David Steele wrote:\n> On 11/14/20 3:49 AM, Peter Eisentraut wrote:\n>> On 2020-11-10 16:15, Georgios Kokolatos wrote:\n>>> I noticed that this patch fails on the cfbot.\n>>> For this, I changed the status to: 'Waiting on Author'.\n>>>\n>>> Cheers,\n>>> //Georgios\n>>>\n>>> The new status of this patch is: Waiting on Author\n>>\n>> Here is a rebased and lightly retouched patch.\n> \n> There don't seem to be any objections to just documenting the slight \n> divergence from the spec.\n> \n> So, does it make sense to just document that and proceed?\n\nYeah, I think that is not a problem.\n\nI think Tom's input on the guts of this patch would be most valuable, \nsince it intersects a lot with the parse namespace refactoring he did.\n\n\n", "msg_date": "Fri, 19 Mar 2021 08:12:00 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... 
USING" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I think Tom's input on the guts of this patch would be most valuable, \n> since it intersects a lot with the parse namespace refactoring he did.\n\nYeah, I've been meaning to take a look. I'll try to get it done in\nthe next couple of days.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 10:00:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I think Tom's input on the guts of this patch would be most valuable, \n> since it intersects a lot with the parse namespace refactoring he did.\n\nI really didn't like the way you'd done that :-(. My primary complaint\nis that any one ParseNamespaceItem can describe only one table alias,\nbut here we have the potential for two aliases associated with the same\njoin:\n\n\tselect * from (t1 join t2 using(a) as tu) tx;\n\nAdmittedly that's not hugely useful since tx hides the tu alias, but\nit should behave in a sane fashion. (BTW, after reading the SQL spec\nagain along the way to reviewing this, I am wondering if hiding the\nlower aliases is really what we want; though it may be decades too late\nto change that.)\n\nHowever, ParseNamespaceItem as it stands needs some help for this.\nIt has a wired-in assumption that p_rte->eref describes the table\nand column aliases exposed by the nsitem. 0001 below fixes this by\ncreating a separate p_names field in an nsitem. (There are some\ncomments in 0001 referencing JOIN USING aliases, but no actual code\nfor the feature.) That saves one indirection in common code paths,\nso it's possibly a win on its own. Then 0002 is your patch rebased\nonto that infrastructure, and with some cleanup of my own.\n\nOne thing I ran into is that a whole-row Var for the JOIN USING\nalias did the wrong thing. 
It should have only the common columns,\nbut we were getting all the join columns in examples such as the\nrow_to_json() test case I added. This is difficult to fix given\nthe existing whole-row Var infrastructure, unless we want to make a\nseparate RTE for the JOIN USING alias, which I think is overkill.\nWhat I did about this was to make transformWholeRowRef produce a\nROW() construct --- which is something that a whole-row Var for a\njoin would be turned into by the planner anyway. I think this is\nsemantically OK since the USING construct has already nailed down\nthe number and types of the join's common columns; there's no\nprospect of those changing underneath a stored view query. It's\nslightly ugly because the ROW() construct will be visible in a\ndecompiled view instead of \"tu.*\" like you wrote originally,\nbut I'm willing to live with that.\n\nSpeaking of decompiled views, I feel like ruleutils.c could do with\na little more work to teach it that these aliases are available.\nRight now, it resorts to ugly workarounds:\n\nregression=# create table t1 (a int, b int, c int);\nCREATE TABLE\nregression=# create table t2 (a int, x int, y int);\nCREATE TABLE\nregression=# create view vvv as select tj.a, t1.b from t1 full join t2 using(a) as tj, t1 as tx;\nCREATE VIEW\nregression=# \\d+ vvv\n View \"public.vvv\"\n Column | Type | Collation | Nullable | Default | Storage | Description \n--------+---------+-----------+----------+---------+---------+-------------\n a | integer | | | | plain | \n b | integer | | | | plain | \nView definition:\n SELECT a,\n t1.b\n FROM t1\n FULL JOIN t2 USING (a) AS tj,\n t1 tx(a_1, b, c);\n\nThat's not wrong, but it could likely be done better if ruleutils\nrealized it could use the tj alias to reference the column, instead\nof having to force unqualified \"a\" to be a globally unique name.\n\nI ran out of steam to look into that, though, and it's probably\nsomething that could be improved later.\n\nOne other cosmetic thing is that 
this:\n\nregression=# select tu.* from (t1 join t2 using(a) as tu) tx;\nERROR: missing FROM-clause entry for table \"tu\"\nLINE 1: select tu.* from (t1 join t2 using(a) as tu) tx;\n ^\n\nis a relatively dumb error message, compared to\n\nregression=# select t1.* from (t1 join t2 using(a) as tu) tx;\nERROR: invalid reference to FROM-clause entry for table \"t1\"\nLINE 1: select t1.* from (t1 join t2 using(a) as tu) tx;\n ^\nHINT: There is an entry for table \"t1\", but it cannot be referenced from this part of the query.\n\nI didn't look into why that isn't working, but maybe errorMissingRTE\nneeds to trawl all of the ParseNamespaceItems not just the RTEs.\n\nAnyway, since these remaining gripes are cosmetic, I'll mark this RFC.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 22 Mar 2021 19:18:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" }, { "msg_contents": "\nOn 23.03.21 00:18, Tom Lane wrote:\n> However, ParseNamespaceItem as it stands needs some help for this.\n> It has a wired-in assumption that p_rte->eref describes the table\n> and column aliases exposed by the nsitem. 0001 below fixes this by\n> creating a separate p_names field in an nsitem. (There are some\n> comments in 0001 referencing JOIN USING aliases, but no actual code\n> for the feature.) That saves one indirection in common code paths,\n> so it's possibly a win on its own. Then 0002 is your patch rebased\n> onto that infrastructure, and with some cleanup of my own.\n\nMakes sense. I've committed it based on that.\n\n> Speaking of decompiled views, I feel like ruleutils.c could do with\n> a little more work to teach it that these aliases are available.\n> Right now, it resorts to ugly workarounds:\n\nYeah, the whole has_dangerous_join_using() can probably be unwound and \nremoved with this. 
But it's a bit of work.\n\n> One other cosmetic thing is that this:\n> \n> regression=# select tu.* from (t1 join t2 using(a) as tu) tx;\n> ERROR: missing FROM-clause entry for table \"tu\"\n> LINE 1: select tu.* from (t1 join t2 using(a) as tu) tx;\n> ^\n> \n> is a relatively dumb error message, compared to\n> \n> regression=# select t1.* from (t1 join t2 using(a) as tu) tx;\n> ERROR: invalid reference to FROM-clause entry for table \"t1\"\n> LINE 1: select t1.* from (t1 join t2 using(a) as tu) tx;\n> ^\n> HINT: There is an entry for table \"t1\", but it cannot be referenced from this part of the query.\n> \n> I didn't look into why that isn't working, but maybe errorMissingRTE\n> needs to trawl all of the ParseNamespaceItems not just the RTEs.\n\nYes, I've prototyped that and it would have the desired effect. Might \nneed some code rearranging, like either change searchRangeTableForRel() \nto not return an RTE or make a similar function for ParseNamespaceItem \nsearch. Needs some more thought. I have left a test case in that would \nshow any changes here.\n\n\n", "msg_date": "Wed, 31 Mar 2021 17:49:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow an alias to be attached directly to a JOIN ... USING" } ]
[ { "msg_contents": "There is a fair amount of collation-related functionality that is only\nbeing tested by sql/collate.icu.utf8.sql and sql/collate.linux.utf8.sql,\nwhich are not run by default. There is more functionality planned in\nthis area, so making the testing more straightforward would be useful.\n\nThe reason these tests cannot be run by default (other than that they\ndon't apply to each build, which is easy to figure out) is that\n\na) They contain UTF8 non-ASCII characters that might not convert to\nevery server-side encoding, and\n\nb) The error messages mention the encoding name ('ERROR: collation\n\"foo\" for encoding \"UTF8\" does not exist')\n\nThe server encoding can be set more-or-less arbitrarily for each test\nrun, and moreover it is computed from the locale, so it's not easy to\ndetermine ahead of time from a makefile, say.\n\nWhat would be a good way to sort this out? None of these problems are\nterribly difficult on their own, but I'm struggling to come up with a\ncoherent solution.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Jun 2019 16:56:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "how to run encoding-dependent tests by default" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> There is a fair amount of collation-related functionality that is only\n> being tested by sql/collate.icu.utf8.sql and sql/collate.linux.utf8.sql,\n> which are not run by default. 
There is more functionality planned in\n> this area, so making the testing more straightforward would be useful.\n> The reason these tests cannot be run by default (other than that they\n> don't apply to each build, which is easy to figure out) is that\n> a) They contain UTF8 non-ASCII characters that might not convert to\n> every server-side encoding, and\n> b) The error messages mention the encoding name ('ERROR: collation\n> \"foo\" for encoding \"UTF8\" does not exist')\n> The server encoding can be set more-or-less arbitrarily for each test\n> run, and moreover it is computed from the locale, so it's not easy to\n> determine ahead of time from a makefile, say.\n\n> What would be a good way to sort this out? None of these problems are\n> terribly difficult on their own, but I'm struggling to come up with a\n> coherent solution.\n\nPerhaps set up a separate test run (not part of the core tests) in which\nthe database is forced to have UTF8 encoding? That could be expanded\nto other encodings too if anyone cares.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jun 2019 11:32:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "\nOn 6/17/19 11:32 AM, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> There is a fair amount of collation-related functionality that is only\n>> being tested by sql/collate.icu.utf8.sql and sql/collate.linux.utf8.sql,\n>> which are not run by default. 
There is more functionality planned in\n>> this area, so making the testing more straightforward would be useful.\n>> The reason these tests cannot be run by default (other than that they\n>> don't apply to each build, which is easy to figure out) is that\n>> a) They contain UTF8 non-ASCII characters that might not convert to\n>> every server-side encoding, and\n>> b) The error messages mention the encoding name ('ERROR: collation\n>> \"foo\" for encoding \"UTF8\" does not exist')\n>> The server encoding can be set more-or-less arbitrarily for each test\n>> run, and moreover it is computed from the locale, so it's not easy to\n>> determine ahead of time from a makefile, say.\n>> What would be a good way to sort this out? None of these problems are\n>> terribly difficult on their own, but I'm struggling to come up with a\n>> coherent solution.\n> Perhaps set up a separate test run (not part of the core tests) in which\n> the database is forced to have UTF8 encoding? That could be expanded\n> to other encodings too if anyone cares.\n>\n> \t\t\t\n\n\n\nI should point out that the buildfarm does run these tests for every\nutf8 locale it's configured for if the TestICU module is enabled. At the\nmoment the only animal actually running those tests is prion, for\nen_US.utf8.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 17 Jun 2019 12:36:10 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "Hi,\n\nOn 2019-06-17 16:56:00 +0200, Peter Eisentraut wrote:\n> There is a fair amount of collation-related functionality that is only\n> being tested by sql/collate.icu.utf8.sql and sql/collate.linux.utf8.sql,\n> which are not run by default. 
There is more functionality planned in\n> this area, so making the testing more straightforward would be useful.\n> \n> The reason these tests cannot be run by default (other than that they\n> don't apply to each build, which is easy to figure out) is that\n> \n> a) They contain UTF8 non-ASCII characters that might not convert to\n> every server-side encoding, and\n> \n> b) The error messages mention the encoding name ('ERROR: collation\n> \"foo\" for encoding \"UTF8\" does not exist')\n> \n> The server encoding can be set more-or-less arbitrarily for each test\n> run, and moreover it is computed from the locale, so it's not easy to\n> determine ahead of time from a makefile, say.\n> \n> What would be a good way to sort this out? None of these problems are\n> terribly difficult on their own, but I'm struggling to come up with a\n> coherent solution.\n\nI wonder if using alternative output files and psql's \\if could be good\nenough here. It's not that hard to maintain an alternative output file\nif it's nearly empty.\n\nBasically something like:\n\n\\gset SELECT my_encodings_are_compatible() AS compatible\n\\if :compatible\ntest;\ncontents;\n\\endif\n\nThat won't get rid of b) in its entirety, but even just running the test\nautomatically on platforms it works without problems would be an\nimprovement.\n\nWe probably also could just have a wrapper function in those tests that\ncatch the exception and print a more anodyne message.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2019 09:39:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "On 2019-06-17 18:39, Andres Freund wrote:\n> Basically something like:\n> \n> \\gset SELECT my_encodings_are_compatible() AS compatible\n> \\if :compatible\n> test;\n> contents;\n> \\endif\n\nCool, that works out quite well. See attached patch. 
I flipped the\nlogic around to make it \\quit if not compatible. That way the\nalternative expected file is shorter and doesn't need to be updated all\nthe time. But it gets the job done either way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 23 Jun 2019 21:44:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "On 2019-06-23 21:44, Peter Eisentraut wrote:\n> On 2019-06-17 18:39, Andres Freund wrote:\n>> Basically something like:\n>>\n>> \\gset SELECT my_encodings_are_compatible() AS compatible\n>> \\if :compatible\n>> test;\n>> contents;\n>> \\endif\n> \n> Cool, that works out quite well. See attached patch. I flipped the\n> logic around to make it \\quit if not compatible. That way the\n> alternative expected file is shorter and doesn't need to be updated all\n> the time. But it gets the job done either way.\n\nSmall patch update: The collate.linux.utf8 test also needs to check in a\nsimilar manner that all the locales it is using are installed. This\nshould get the cfbot run passing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 5 Jul 2019 13:33:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Cool, that works out quite well. See attached patch. I flipped the\n>> logic around to make it \\quit if not compatible. That way the\n>> alternative expected file is shorter and doesn't need to be updated all\n>> the time. But it gets the job done either way.\n\nI took a look at this and did some light testing. 
It seems to work\nas advertised, but I do have one gripe, which is the dependency on\nthe EXTRA_TESTS mechanism. There are a few things not to like about\ndoing it that way:\n\n* need additional hacking for Windows (admittedly, moot for\ncollate.linux.utf8, but I hope it's not for collate.icu.utf8).\n\n* can't put these tests into a parallel group, they run by themselves;\n\n* if user specifies EXTRA_TESTS on make command line, that overrides\nthe Makefile so these tests aren't run.\n\nSo I wish we could get rid of the Makefile changes, have the test\nscripts be completely responsible for whether to run themselves or\nnot, and put them into the schedule files normally.\n\nIt's pretty obvious how we might do this for collate.icu.utf8:\nmake it look to see if there are any ICU-supplied collations in\npg_collation.\n\nI'm less clear on a reasonable way to detect a glibc platform\nfrom SQL. The best I can think of is to see if the string\n\"linux\" appears in the output of version(), and that's probably\nnone too robust. Can we do anything based on the content of\npg_collation? Probably not :-(.\n\nStill, even if you only fixed collate.icu.utf8 this way, that\nwould be a step forward since it would solve the Windows aspect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2019 14:12:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "I wrote:\n> I'm less clear on a reasonable way to detect a glibc platform\n> from SQL. The best I can think of is to see if the string\n> \"linux\" appears in the output of version(), and that's probably\n> none too robust. Can we do anything based on the content of\n> pg_collation? 
Probably not :-(.\n\nActually, scraping the buildfarm database suggests that checking\nversion() for \"linux\" or even \"linux-gnu\" would work very well.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 28 Jul 2019 14:31:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "Oh ... one other thought, based on forcing the collate.linux.utf8\ntest to run on platforms where it can be expected to fail: I think\nyou'd be well advised to make that test verify that the required\ncollations are present, the same as you did in the collate.icu.utf8\ntest. I noticed for instance that it fails if en_US.utf8 is not\npresent (or not spelled exactly like that), but I doubt that that\nlocale is necessarily present on every Linux platform. tr_TR is\neven more likely to be subject to packagers' whims.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2019 15:42:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "On 2019-07-28 20:12, Tom Lane wrote:\n> So I wish we could get rid of the Makefile changes, have the test\n> scripts be completely responsible for whether to run themselves or\n> not, and put them into the schedule files normally.\n> \n> It's pretty obvious how we might do this for collate.icu.utf8:\n> make it look to see if there are any ICU-supplied collations in\n> pg_collation.\n> \n> I'm less clear on a reasonable way to detect a glibc platform\n> from SQL. The best I can think of is to see if the string\n> \"linux\" appears in the output of version(), and that's probably\n> none too robust. Can we do anything based on the content of\n> pg_collation? Probably not :-(.\n> \n> Still, even if you only fixed collate.icu.utf8 this way, that\n> would be a step forward since it would solve the Windows aspect.\n\nGood points. 
Updated patch attached.\n\n(The two tests create the same schema name, so they cannot be run in\nparallel. I opted against changing that here, since it would blow up\nthe patch and increase the diff between the two tests.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 29 Jul 2019 07:34:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "On 2019-07-28 21:42, Tom Lane wrote:\n> Oh ... one other thought, based on forcing the collate.linux.utf8\n> test to run on platforms where it can be expected to fail: I think\n> you'd be well advised to make that test verify that the required\n> collations are present, the same as you did in the collate.icu.utf8\n> test. I noticed for instance that it fails if en_US.utf8 is not\n> present (or not spelled exactly like that), but I doubt that that\n> locale is necessarily present on every Linux platform. tr_TR is\n> even more likely to be subject to packagers' whims.\n\nThis was already done in my v2 test posted in this thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jul 2019 07:34:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-07-28 20:12, Tom Lane wrote:\n>> So I wish we could get rid of the Makefile changes, have the test\n>> scripts be completely responsible for whether to run themselves or\n>> not, and put them into the schedule files normally.\n\n> Good points. Updated patch attached.\n\nv3 looks good and passes local testing. 
I've marked it RFC.\n\n> (The two tests create the same schema name, so they cannot be run in\n> parallel. I opted against changing that here, since it would blow up\n> the patch and increase the diff between the two tests.)\n\nThis does create one tiny nit, which is that the order of the\nparallel and serial schedule files don't match. Possibly I'm\noverly anal-retentive about that, but I think it's confusing\nwhen they don't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2019 10:47:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to run encoding-dependent tests by default" }, { "msg_contents": "On 2019-07-29 16:47, Tom Lane wrote:\n>> (The two tests create the same schema name, so they cannot be run in\n>> parallel. I opted against changing that here, since it would blow up\n>> the patch and increase the diff between the two tests.)\n> \n> This does create one tiny nit, which is that the order of the\n> parallel and serial schedule files don't match. Possibly I'm\n> overly anal-retentive about that, but I think it's confusing\n> when they don't.\n\nRight. Committed with adjustment to keep these consistent.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 31 Jul 2019 13:54:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: how to run encoding-dependent tests by default" } ]
[ { "msg_contents": "Hi. If I'm using psql, and type for example:\n\nUPDATE my_table SET my_field\n(with a trailing space)\n\nand then hit Tab, it will expand that to an =, and then another tab will\nexpand to DEFAULT, so that I then have:\n\nUPDATE my_table SET my_field = DEFAULT\n\nIf I'm tabbing out in this situation, it's going to be after the =, and I\nwill have typed \"myreal\"[tab] in the vain hope that psql will complete that\nto \"myreallylongfieldname,\" but instead it gets replaced with DEFAULT.\n\nSo I'm curious if this is intended behavior, if it's considered useful,\nand/or if it's a placeholder for something in the future that will be\nuseful. Also, is this new, as I've never noticed it before?\n\nThanks in advance,\nKen\n\np.s., Version 9.6.13\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\nken.tanzer@agency-software.org\n(253) 245-3801\n\nSubscribe to the mailing list\n<agency-general-request@lists.sourceforge.net?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nHi.  If I'm using psql, and type for example:UPDATE my_table SET my_field (with a trailing space)and then hit Tab, it will expand that to an =, and then another tab will expand to DEFAULT, so that I then have:UPDATE my_table SET my_field = DEFAULTIf I'm tabbing out in this situation, it's going to be after the =, and I will have typed \"myreal\"[tab] in the vain hope that psql will complete that to \"myreallylongfieldname,\" but instead it gets replaced with DEFAULT.So I'm curious if this is intended behavior, if it's considered useful, and/or if it's a placeholder for something in the future that will be useful.  
Also, is this new, as I've never noticed it before?Thanks in advance,Kenp.s.,  Version 9.6.13-- AGENCY Software  A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/clientken.tanzer@agency-software.org(253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.", "msg_date": "Mon, 17 Jun 2019 15:03:11 -0700", "msg_from": "Ken Tanzer <ken.tanzer@gmail.com>", "msg_from_op": true, "msg_subject": "psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "On 6/17/19 3:03 PM, Ken Tanzer wrote:\n> Hi.  If I'm using psql, and type for example:\n> \n> UPDATE my_table SET my_field\n> (with a trailing space)\n> \n> and then hit Tab, it will expand that to an =, and then another tab will \n> expand to DEFAULT, so that I then have:\n> \n> UPDATE my_table SET my_field = DEFAULT\n> \n> If I'm tabbing out in this situation, it's going to be after the =, and \n> I will have typed \"myreal\"[tab] in the vain hope that psql will complete \n> that to \"myreallylongfieldname,\" but instead it gets replaced with DEFAULT.\n> \n> So I'm curious if this is intended behavior, if it's considered useful, \n> and/or if it's a placeholder for something in the future that will be \n> useful.  
Also, is this new, as I've never noticed it before?\n\nNot sure how long that has been around.\n\nMy cheat for dealing with many/long column names is:\n\ntest=# \\d up_test\n Table \"public.up_test\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n id | integer | | |\n col1 | boolean | | |\n col2 | integer | | |\n\n\n\ntest=# \\pset format unaligned\nOutput format is unaligned.\ntest=# \\pset fieldsep ','\nField separator is \",\".\n\nselect * from up_test limit 0;\nid,col1,col2\n\nCut and paste above.\n\ntest=# \\pset fieldsep '|'\nField separator is \"|\".\n\ntest=# \\pset format 'aligned'\nOutput format is aligned.\n\n\n> \n> Thanks in advance,\n> Ken\n> \n> p.s.,  Version 9.6.13\n> \n> -- \n> AGENCY Software\n> A Free Software data system\n> By and for non-profits\n> /http://agency-software.org//\n> /https://demo.agency-software.org/client/\n> ken.tanzer@agency-software.org <mailto:ken.tanzer@agency-software.org>\n> (253) 245-3801\n> \n> Subscribe to the mailing list \n> <mailto:agency-general-request@lists.sourceforge.net?body=subscribe> to\n> learn more about AGENCY or\n> follow the discussion.\n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Mon, 17 Jun 2019 16:24:43 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "On Mon, Jun 17, 2019 at 4:24 PM Adrian Klaver <adrian.klaver@aklaver.com>\nwrote:\n\n> On 6/17/19 3:03 PM, Ken Tanzer wrote:\n> >\n> > So I'm curious if this is intended behavior, if it's considered useful,\n> > and/or if it's a placeholder for something in the future that will be\n> > useful. Also, is this new, as I've never noticed it before?\n>\n> Not sure how long that has been around.\n>\n> My cheat for dealing with many/long column names is:\n>\n>\nThanks Adrian, though I wasn't really seeking tips for column names. 
I was\ninstead trying to understand whether this particular tab expansion was\nintentional and considered useful, and if so what that usefulness was,\nbecause it's rather escaping me!\n\nCheers,\nKen\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\nken.tanzer@agency-software.org\n(253) 245-3801\n\nSubscribe to the mailing list\n<agency-general-request@lists.sourceforge.net?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n", "msg_date": "Mon, 17 Jun 2019 16:33:44 -0700", "msg_from": "Ken Tanzer <ken.tanzer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" 
}, { "msg_contents": "On Tue, 18 Jun 2019 at 09:34, Ken Tanzer <ken.tanzer@gmail.com> wrote:\n\n> On Mon, Jun 17, 2019 at 4:24 PM Adrian Klaver <adrian.klaver@aklaver.com>\n> wrote:\n>\n>> On 6/17/19 3:03 PM, Ken Tanzer wrote:\n>> >\n>> > So I'm curious if this is intended behavior, if it's considered useful,\n>> > and/or if it's a placeholder for something in the future that will be\n>> > useful. Also, is this new, as I've never noticed it before?\n>>\n>> Not sure how long that has been around.\n>>\n>> My cheat for dealing with many/long column names is:\n>>\n>>\n> Thanks Adrian, though I wasn't really seeking tips for column names. I\n> was instead trying to understand whether this particular tab expansion was\n> intentional and considered useful, and if so what that usefulness was,\n> because it's rather escaping me!\n>\n> Cheers,\n> Ken\n>\n>\n>\nHave to say, I find that behaviour unusual as well. I would expect that once\nI've typed some characters, the completion mechanism would attempt to\ncomplete based on the characters I've typed and if it cannot, to do\nnothing. Instead, what happens is that what I have typed is replaced by\n'default'. For example, if I type\n\nupdate my_table set my_col = other_t\n\nand hit tab, 'other_t' is replaced by 'default', which is of no use. What I\nwould expect is for tab to either complete (possibly only partially if\nthere are multiple candidates) what it could for candidates which start with\n'other_t' e.g. 'other_table' or it would do nothing i.e. 
no completion\ncandidates found, telling me there is no match based on the prefix I've typed.\n\n\n-- \nregards,\n\nTim\n\n--\nTim Cross\n", "msg_date": "Tue, 18 Jun 2019 10:09:11 +1000", "msg_from": "Tim Cross <theophilusx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" 
}, { "msg_contents": "On 6/17/19 4:33 PM, Ken Tanzer wrote:\n> On Mon, Jun 17, 2019 at 4:24 PM Adrian Klaver <adrian.klaver@aklaver.com \n> <mailto:adrian.klaver@aklaver.com>> wrote:\n> \n> On 6/17/19 3:03 PM, Ken Tanzer wrote:\n> >\n> > So I'm curious if this is intended behavior, if it's considered\n> useful,\n> > and/or if it's a placeholder for something in the future that\n> will be\n> > useful.  Also, is this new, as I've never noticed it before?\n> \n> Not sure how long that has been around.\n> \n> My cheat for dealing with many/long column names is:\n> \n> \n> Thanks Adrian, though I wasn't really seeking tips for column names.  I \n> was instead trying to understand whether this particular tab expansion \n> was intentional and considered useful, and if so what that usefulness \n\nIf I am following the below correctly it is intentional:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/psql/tab-complete.c;h=68a2ba27aec22302625c5481a8f74cf866f4dc23;hb=d22ca701a39dfd03cdfa1ca238370d34f4bc4ac4\n\nLine 2888\n\nUseful, that is in the eye of the beholder:)\n\n> was, because it's rather escaping me!\n> \n> Cheers,\n> Ken\n> \n> \n\n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Mon, 17 Jun 2019 17:22:53 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "On Mon, Jun 17, 2019 at 6:03 PM Ken Tanzer <ken.tanzer@gmail.com> wrote:\n\n> Hi. 
If I'm using psql, and type for example:\n>\n> UPDATE my_table SET my_field\n> (with a trailing space)\n>\n> and then hit Tab, it will expand that to an =, and then another tab will\n> expand to DEFAULT, so that I then have:\n>\n> UPDATE my_table SET my_field = DEFAULT\n>\n> If I'm tabbing out in this situation, it's going to be after the =, and I\n> will have typed \"myreal\"[tab] in the vain hope that psql will complete that\n> to \"myreallylongfieldname,\" but instead it gets replaced with DEFAULT.\n>\n\nYeah, it is especially annoying to delete what I actually typed to replace\nit with something else. I've been irked by that before. I think the\ngeneral behavior of replacing something already typed with (what it\nbelieves to be) the only proper completion is part of the underlying\nreadline/libedit library, not something psql goes out of its way to do.\n\n\n> So I'm curious if this is intended behavior, if it's considered useful,\n> and/or if it's a placeholder for something in the future that will be\n> useful. Also, is this new, as I've never noticed it before?\n>\n\nThe tab completion doesn't have a SQL parser/analyzer, it is just driven off\ngeneral rules of looking at the preceding N words. In this case, it is\nhitting the rule for \"SET anything TO\", which is intended to catch the\nsetting of parameters, it is only accidentally hitting on the SET part of\nUPDATE statements.\n\nThis goes back at least to 9.3.\n\nWe could improve it by making a higher priority rule which looks back a few\nmore words to:\n\nUPDATE <tablename> SET <colname> TO\n\nBut what would we complete with? Any expression can go there, and we can't\nmake it tab complete any arbitrary expression, like function names or\nliterals. If we tab complete, but only with a restricted set of choices,\nthat could be interpreted as misleadingly suggesting no other things are\npossible. 
(Of course the current accidental behavior is also misleading,\nthen)\n\nIf we are willing to offer an incomplete list of suggestions, what would\nthey be? NULL, DEFAULT, '(' and all the column names present in\n<tablename>, with appropriate quotes where necessary? But what to do if\n<tablename> doesn't actually exist as the name of a table?\n\nOr, we could have it implement the more precise higher priority rule, and\nhave it just refuse to offer any suggestions, but at least not delete what\nis already there.\n\nCheers,\n\nJeff\n\n>", "msg_date": "Mon, 17 Jun 2019 20:23:47 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "On Mon, Jun 17, 2019 at 8:23 PM Adrian Klaver <adrian.klaver@aklaver.com>\nwrote:\n\n> On 6/17/19 4:33 PM, Ken Tanzer wrote:\n> >\n> > Thanks Adrian, though I wasn't really seeking tips for column names. 
I\n> > was instead trying to understand whether this particular tab expansion\n> > was intentional and considered useful, and if so what that usefulness\n>\n> If I am following the below correctly it is intentional:\n>\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/psql/tab-complete.c;h=68a2ba27aec22302625c5481a8f74cf866f4dc23;hb=d22ca701a39dfd03cdfa1ca238370d34f4bc4ac4\n>\n> Line 2888\n>\n\nBut that portion doesn't offer the DEFAULT completion. It stops at\noffering '=', and goes no further.\n\nIt is at line 2859 which accidentally offers to complete DEFAULT, and that\nis not part of the UPDATE-specific code.\n\nCheers,\n\nJeff\n", "msg_date": "Mon, 17 Jun 2019 20:34:33 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "Tim Cross <theophilusx@gmail.com> writes:\n> On Tue, 18 Jun 2019 at 09:34, Ken Tanzer <ken.tanzer@gmail.com> wrote:\n>> Thanks Adrian, though I wasn't really seeking tips for column names. 
I\n>> was instead trying to understand whether this particular tab expansion was\n>> intentional and considered useful, and if so what that usefulness was,\n>> because it's rather escaping me!\n\n> Have to say, I fid that behaviour unusual as well.\n\nI don't think it's intentional. A look into tab-complete.c shows that it\nmakes no attempt to offer completions beyond the \"=\" part of the syntax;\nso there's room for improvement there. But then what is producing the\n\"DEFAULT\" completion? After looking around a bit, I think it's\naccidentally matching the pattern for a GUC \"set\" command:\n\n else if (TailMatches(\"SET\", MatchAny, \"TO|=\"))\n {\n /* special cased code for individual GUCs */\n ...\n else\n COMPLETE_WITH(\"DEFAULT\");\n }\n\nSo perhaps that needs to look more like this other place where somebody\nalready noticed the conflict against UPDATE:\n\n else if (TailMatches(\"SET|RESET\") && !TailMatches(\"UPDATE\", MatchAny, \"SET\"))\n COMPLETE_WITH_QUERY(Query_for_list_of_set_vars);\n\nMore generally, though, I'm inclined to think that offering DEFAULT\nand nothing else, which is what this code does if it doesn't recognize\nthe \"GUC name\", is just ridiculous. If the word after SET is not a known\nGUC name then we probably have misconstrued the context, as indeed is\nhappening in your example; and in any case DEFAULT is about the least\nlikely thing for somebody to be trying to enter here. (They'd probably\nhave selected RESET not SET if they were trying to do that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jun 2019 20:39:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" 
}, { "msg_contents": "On Tue, 18 Jun 2019 at 10:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tim Cross <theophilusx@gmail.com> writes:\n> > On Tue, 18 Jun 2019 at 09:34, Ken Tanzer <ken.tanzer@gmail.com> wrote:\n> >> Thanks Adrian, though I wasn't really seeking tips for column names. I\n> >> was instead trying to understand whether this particular tab expansion\n> was\n> >> intentional and considered useful, and if so what that usefulness was,\n> >> because it's rather escaping me!\n>\n> > Have to say, I fid that behaviour unusual as well.\n>\n> I don't think it's intentional. A look into tab-complete.c shows that it\n> makes no attempt to offer completions beyond the \"=\" part of the syntax;\n> so there's room for improvement there. But then what is producing the\n> \"DEFAULT\" completion? After looking around a bit, I think it's\n> accidentally matching the pattern for a GUC \"set\" command:\n>\n> else if (TailMatches(\"SET\", MatchAny, \"TO|=\"))\n> {\n> /* special cased code for individual GUCs */\n> ...\n> else\n> COMPLETE_WITH(\"DEFAULT\");\n> }\n>\n> So perhaps that needs to look more like this other place where somebody\n> already noticed the conflict against UPDATE:\n>\n> else if (TailMatches(\"SET|RESET\") && !TailMatches(\"UPDATE\", MatchAny,\n> \"SET\"))\n> COMPLETE_WITH_QUERY(Query_for_list_of_set_vars);\n>\n> More generally, though, I'm inclined to think that offering DEFAULT\n> and nothing else, which is what this code does if it doesn't recognize\n> the \"GUC name\", is just ridiculous. If the word after SET is not a known\n> GUC name then we probably have misconstrued the context, as indeed is\n> happening in your example; and in any case DEFAULT is about the least\n> likely thing for somebody to be trying to enter here. 
(They'd probably\n> have selected RESET not SET if they were trying to do that.)\n>\n> regards, tom lane\n>\n\n\nGiven that identifying legitimate candidates following a '=' in an update\nstatement would require a full blown sql parser, my suggestion would be to\nrefine the rules so that no completion is attempted after the =. I would\nrather have tab do nothing than have it replace what I've already typed\nwith 'default'.\n\n-- \nregards,\n\nTim\n\n--\nTim Cross", "msg_date": "Tue, 18 Jun 2019 10:52:44 +1000", "msg_from": "Tim Cross <theophilusx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "On Mon, Jun 17, 2019 at 4:24 PM Adrian Klaver <adrian.klaver@aklaver.com>\nwrote:\n\n>\n> My cheat for dealing with many/long column names is:\n>\n> test=# \\d up_test\n> Table \"public.up_test\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> id | integer | | |\n> col1 | boolean | | |\n> col2 | integer | | |\n>\n>\n>\n> test=# \\pset format unaligned\n> Output format is unaligned.\n> test=# \\pset fieldsep ','\n> Field separator is \",\".\n>\n> select * from up_test limit 0;\n> id,col1,col2\n>\n> Cut and paste above.\n>\n> test=# \\pset fieldsep '|'\n> Field separator is \"|\".\n>\n> test=# \\pset format 'aligned'\n> Output format is aligned.\n>\n>\nJust curious, but if you really do that often, wouldn't you be better off\nwith something like this?\n\nCREATE OR REPLACE FUNCTION field_list( name ) RETURNS text AS $$\n\nSELECT array_to_string(array_agg(column_name::text ORDER BY\nordinal_position),',') FROM information_schema.columns WHERE table_name =\n$1;\n\n$$ LANGUAGE sql STABLE;\n\nCheers,\nKen\n\n\n\n-- 
\nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\nken.tanzer@agency-software.org\n(253) 245-3801\n\nSubscribe to the mailing list\n<agency-general-request@lists.sourceforge.net?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.", "msg_date": "Tue, 18 Jun 2019 15:23:40 -0700", "msg_from": "Ken Tanzer <ken.tanzer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" 
}, { "msg_contents": "On 6/18/19 3:23 PM, Ken Tanzer wrote:\n> On Mon, Jun 17, 2019 at 4:24 PM Adrian Klaver <adrian.klaver@aklaver.com \n> <mailto:adrian.klaver@aklaver.com>> wrote:\n> \n> \n> My cheat for dealing with many/long column names is:\n> \n> test=# \\d up_test\n>                Table \"public.up_test\"\n>   Column |  Type   | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n>   id     | integer |           |          |\n>   col1   | boolean |           |          |\n>   col2   | integer |           |          |\n> \n> \n> \n> test=# \\pset format unaligned\n> Output format is unaligned.\n> test=# \\pset fieldsep ','\n> Field separator is \",\".\n> \n> select * from up_test limit 0;\n> id,col1,col2\n> \n> Cut and paste above.\n> \n> test=# \\pset fieldsep '|'\n> Field separator is \"|\".\n> \n> test=# \\pset format 'aligned'\n> Output format is aligned.\n> \n> \n> Just curious, but if you really do that often, wouldn't you be better \n> off with something like this?\n\nI could/should I just don't do the above enough to get motivated to \nbuild a function. Most cases where I'm doing complicated updates I am \nnot using psql I am building then in Python from a dict.\n\n> \n> CREATE OR REPLACE FUNCTION field_list( name ) RETURNS text AS $$\n> \n> SELECT array_to_string(array_agg(column_name::text ORDER BY \n> ordinal_position),',') FROM information_schema.columns WHERE table_name \n> = $1;\n> \n> $$ LANGUAGE sql STABLE;\n> \n> Cheers,\n> Ken\n> \n\n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:03:18 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "[ moving thread to -hackers ]\n\nSo I propose the attached patch for fixing the clear bugs that have\nemerged in this discussion: don't confuse UPDATE ... SET ... 
with\nGUC-setting commands, and don't offer just DEFAULT in contexts where\nthat's unlikely to be the only valid completion.\n\nNosing around in tab-complete.c, I notice a fair number of other\nplaces where we're doing COMPLETE_WITH() with just a single possible\ncompletion. Knowing what we know now, in each one of those places\nlibreadline will suppose that that completion is the only syntactically\nlegal continuation, and throw away anything else the user might've typed.\nWe should probably inspect each of those places to see if that's really\ndesirable behavior ... but I didn't muster the energy to do that this\nmorning.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2019 10:39:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "I wrote:\n> Nosing around in tab-complete.c, I notice a fair number of other\n> places where we're doing COMPLETE_WITH() with just a single possible\n> completion. Knowing what we know now, in each one of those places\n> libreadline will suppose that that completion is the only syntactically\n> legal continuation, and throw away anything else the user might've typed.\n> We should probably inspect each of those places to see if that's really\n> desirable behavior ... but I didn't muster the energy to do that this\n> morning.\n\nI took a closer look and realized that this isn't some magic behavior of\narcane parts of libreadline; it's more like self-inflicted damage. It\nhappens because tab-complete.c's complete_from_const() is doing exactly\nwhat its comment says it does:\n\n/*\n * This function returns one fixed string the first time even if it doesn't\n * match what's there, and nothing the second time. This should be used if\n * there is only one possibility that can appear at a certain spot, so\n * misspellings will be overwritten. 
The string to be passed must be in\n * completion_charp.\n */\n\nThis is unlike complete_from_list(), which will only return completions\nthat match the text-string-so-far.\n\nI have to wonder whether complete_from_const()'s behavior is really\na good idea; I think there might be an argument for getting rid of it\nand using complete_from_list() even for one-element lists.\n\nWe certainly didn't do anybody any favors in the refactoring we did in\n4f3b38fe2, which removed the source-code difference between calling\ncomplete_from_const() and calling complete_from_list() with just one list\nitem. But even before that, I really doubt that many people hacking on\ntab-complete.c had internalized the idea that COMPLETE_WITH_CONST()\nimplied a higher degree of certainty than COMPLETE_WITH_LIST() with one\nlist item. I'm pretty sure I'd never understood that.\n\nBoth of those functions go back to the beginnings of tab-complete.c,\nso there's not much available in the history to explain the difference\nin behavior (and the discussion of the original patch, if any, is lost\nto the mists of time --- our archives for pgsql-patches only go back to\n2000). But my own feeling about this is that there's no situation in\nwhich I'd expect tab completion to wipe out text I'd typed.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2019 14:51:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" }, { "msg_contents": "I wrote:\n> I took a closer look and realized that this isn't some magic behavior of\n> arcane parts of libreadline; it's more like self-inflicted damage. It\n> happens because tab-complete.c's complete_from_const() is doing exactly\n> what its comment says it does:\n\n> /*\n> * This function returns one fixed string the first time even if it doesn't\n> * match what's there, and nothing the second time. 
This should be used if\n> * there is only one possibility that can appear at a certain spot, so\n> * misspellings will be overwritten. The string to be passed must be in\n> * completion_charp.\n> */\n\n> This is unlike complete_from_list(), which will only return completions\n> that match the text-string-so-far.\n\n> I have to wonder whether complete_from_const()'s behavior is really\n> a good idea; I think there might be an argument for getting rid of it\n> and using complete_from_list() even for one-element lists.\n\nI experimented with ripping out complete_from_const() altogether, and\nsoon found that there's still one place where we need it: down at the\nend of psql_completion, where we've failed to find any useful completion.\nIf that instance of COMPLETE_WITH(\"\") is implemented by complete_from_list\nthen readline will happily try to do filename completion :-(.\n(I don't quite understand why we don't get the wiping-out behavior there;\nmaybe an empty-string result is treated differently from\nnot-empty-string?)\n\nSo I propose the attached instead, which doesn't get rid of\ncomplete_from_const but ensures that it's only used in that one place.\n\nThis is independent of the other patch shown upthread. I'm proposing\nthis one for HEAD only but would back-patch the other.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 25 Jun 2019 11:59:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql UPDATE field [tab] expands to DEFAULT?" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15858\nLogged by: William Allen\nEmail address: williamedwinallen@live.com\nPostgreSQL version: 11.3\nOperating system: Windows Server 2012 R2\nDescription: \n\nIssue using copy from command for files over 4GB.\r\n\r\nERROR: could not stat file \"E:\\file.txt\": Unknown error\r\nSQL state: XX000", "msg_date": "Tue, 18 Jun 2019 10:02:53 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Tue, Jun 18, 2019 at 10:02:53AM +0000, PG Bug reporting form wrote:\n> Issue using copy from command for files over 4GB.\n> \n> ERROR: could not stat file \"E:\\file.txt\": Unknown error\n> SQL state: XX000\n\nWindows is known for having limitations in its former implementations\nof stat(), and the various _stat structures they use make actually\nthat much harder from a compatibility point of view:\nhttps://www.postgresql.org/message-id/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24\n\nNobody has actually dug enough into this set of issues to get a patch\nout of the ground, which basically requires more tweaks that one may\nthink at first sight (look at pgwin32_safestat() in src/port/dirmod.c\nfor example).\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 10:26:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Wed, Jun 19, 2019 at 3:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Windows is known for having limitations in its former implementations\n> of stat(), and the various _stat structures they use make actually\n> that much harder from a compatibility point of view:\n> https://www.postgresql.org/message-id/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24\n>\n\nGoing through this discussion it is not clear to me if there was 
a\nconsensus about the shape of an acceptable patch. Would something like\nthe attached be suitable?\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 19 Jun 2019 18:07:14 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Wed, Jun 19, 2019 at 3:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Windows is known for having limitations in its former implementations\n>> of stat(), and the various _stat structures they use make actually\n>> that much harder from a compatibility point of view:\n>> https://www.postgresql.org/message-id/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24\n\n> Going through this discussion it is not clear to me if there was a\n> consensus about the shape of an acceptable patch. Would something like\n> the attached be suitable?\n\nI think there's general agreement that the correct fix involves somehow\nmapping stat() to _stat64() and mapping \"struct stat\" to \"struct __stat64\"\nto go along with that. Beyond that, things get murky.\n\n1. Can we assume that _stat64() and struct __stat64 exist on every Windows\nversion and build toolchain that we care about? Windows itself is\nprobably OK --- googling found a (non-authoritative) statement that these\nwere introduced in Windows 2K. But it's less clear whether they'll work\non builds with Cygwin, or Mingw, or Mingw-64, or how far back that support\ngoes. I found one statement that Mingw declares them only \"#if\n__MSVCRT_VERSION__ >= 0x0601\".\n\n2. Mapping stat() to _stat64() seems easy enough: we already declare\nstat(a,b) as a macro on Windows, so just change it to something else.\n\n3. What about the struct name? 
I proposed just \"define stat __stat64\",\nbut Robert thought that was too cute, and he's got a point --- in\nparticular, it's not clear to me how nicely it'd play to have both\nfunction and object macros for the same name \"stat\". I see you are\nproposing fixing this angle by suppressing the system definition of\nstruct stat and then defining it ourselves with the same contents as\nstruct __stat64. That might work. Ordinarily I'd be worried about\nbit-rot in a struct that has to track a system definition, but Microsoft\nare so religiously anal about never breaking ABI that it might be safe\nto assume we don't have to worry about that.\n\nI don't like the specific way you're proposing suppressing the system\ndefinition of struct stat, though. \"#define _CRT_NO_TIME_T\" seems\nlike it's going to be a disaster, both because it likely has other\nside-effects and because it probably doesn't do what you intend at all\non non-MSVC toolchains. We have precedents for dealing with similar\nissues in, eg, plperl; and what those precedents would suggest is\ndoing something like\n\n#define stat microsoft_native_stat\n#include <sys/stat.h>\n#undef stat\n\nafter which we could do\n\nstruct stat {\n ... same contents as __stat64\n};\n\n#define stat(a,b) _stat64(a,b)\n\nAnother issue here is that pgwin32_safestat() probably needs revisited\nas to its scope and purpose. Its use of GetFileAttributesEx() can\npresumably be dropped. I don't actually believe the header comment\nclaiming that stat() is not guaranteed to update the st_size field;\nthere's no indication of that in the Microsoft documentation. What\nseems more likely is that that's a garbled version of the truth,\nthat you won't get a correct value of _st_size for files over 4GB.\nBut the test for ERROR_DELETE_PENDING might be worth keeping. So\nthat would lead us to\n\nstruct stat {\n ... 
same contents as __stat64\n};\n\nextern int\tpgwin32_safestat(const char *path, struct stat *buf);\n#define stat(a,b) pgwin32_safestat(a,b)\n\nand something like\n\nint\npgwin32_safestat(const char *path, struct stat *buf)\n{\n int r;\n\n /*\n * Don't call stat(), that would just recurse back to here.\n * We really want _stat64().\n */\n r = _stat64(path, buf);\n\n if (r < 0)\n {\n if (GetLastError() == ERROR_DELETE_PENDING)\n {\n /*\n * File has been deleted, but is not gone from the filesystem yet.\n * This can happen when some process with FILE_SHARE_DELETE has it\n * open and it will be fully removed once that handle is closed.\n * Meanwhile, we can't open it, so indicate that the file just\n * doesn't exist.\n */\n errno = ENOENT;\n }\n }\n return r;\n}\n\nNot sure if we'd need an explicit cast to override passing struct\nstat * to _stat64(). If so, a StaticAssert that sizeof(struct stat)\nmatches sizeof(struct __stat64) seems like a good idea.\n\nI'd also be very strongly inclined to move pgwin32_safestat into its\nown file in src/port and get rid of UNSAFE_STAT_OK. There wouldn't\nbe a good reason to opt out of using it once we got to this point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2019 13:40:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "I wrote:\n> Another issue here is that pgwin32_safestat() probably needs revisited\n> as to its scope and purpose. Its use of GetFileAttributesEx() can\n> presumably be dropped. I don't actually believe the header comment\n> claiming that stat() is not guaranteed to update the st_size field;\n> there's no indication of that in the Microsoft documentation. What\n> seems more likely is that that's a garbled version of the truth,\n> that you won't get a correct value of _st_size for files over 4GB.\n\nSo after further digging around, it seems that this is wrong. 
The\nexistence of pgwin32_safestat() can be traced back to these threads:\n\nhttps://www.postgresql.org/message-id/flat/528853D3C5ED2C4AA8990B504BA7FB850106DF10%40sol.transas.com\nhttps://www.postgresql.org/message-id/flat/528853D3C5ED2C4AA8990B504BA7FB850106DF2F%40sol.transas.com\n\nin which it's stated that\n\n It seems I've found the cause and the workaround of the problem.\n MSVC's stat() is implemented by using FindNextFile().\n MSDN contains the following suspicious paragraph about FindNextFile():\n \"In rare cases, file attribute information on NTFS file systems\n may not be current at the time you call this function. To obtain\n the current NTFS file system file attributes, call\n GetFileInformationByHandle.\"\n Since we generally cannot open an examined file, we need another way.\n\nI'm wondering though why we adopted the existing coding in the face of\nthat observation. Couldn't the rest of struct stat be equally out of\ndate?\n\nIn short it seems like maybe we should be doing something similar to the\npatch that Sergey actually submitted in that discussion:\n\nhttps://www.postgresql.org/message-id/528853D3C5ED2C4AA8990B504BA7FB850658BA5C%40sol.transas.com\n\nwhich reimplements stat() from scratch on top of GetFileAttributesEx(),\nand thus doesn't require any assumptions at all about what's available\nfrom the toolchain's <sys/stat.h>.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2019 14:02:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Wed, Jun 19, 2019 at 8:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In short it seems like maybe we should be doing something similar to the\n> patch that Sergey actually submitted in that discussion:\n>\n> https://www.postgresql.org/message-id/528853D3C5ED2C4AA8990B504BA7FB850658BA5C%40sol.transas.com\n>\n\nI will not have much time for this list in the next couple of weeks,\nso I will 
send this patch in its current WIP state rather than\nstalling without a reply.\n\nMost of its functionality comes from Sergey's patch with some cosmetic\nchanges, and the addition of the 64 bits struct stat and fstat().\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 25 Jun 2019 12:00:45 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Tue, Jun 25, 2019 at 12:00:45PM +0200, Juan José Santamaría Flecha wrote:\n> I will not have much time for this list in the next couple of weeks,\n> so I will send this patch in its current WIP state rather than\n> stalling without a reply.\n> \n> Most of its functionality comes from Sergey's patch with some cosmetic\n> changes, and the addition of the 64 bits struct stat and fstat().\n\nThe former patch was rather impressive. Or scary. Or both. At which\nextent have you tested it? I think that we really need to make sure\nof a couple of things which satisfy our needs:\n1) Are we able to fix the issues with stat() calls on files larger\nthan 2GB and report a correct size?\n2) Are we able to detect properly that files pending for deletion are\ndiscarded with ENOENT?\n3) Are frontends able to use the new layer?\n\nIt seems to me that you don't need the configure changes.\n\nInstead of stat_pg_fixed which is confusing because it only involves\nWindows, I would rename the new file to stat.c or win32_stat.c. The\nlocation in src/port/ is adapted. I would also move out of\nwin32_port.h the various inline declarations and keep only raw\ndeclarations. That could be much cleaner.\n\nThe code desperately needs more comments to help understand its\nlogic. Don't we have in the tree an equivalent of cvt_ft2ut? What\ndoes cvt_attr2uxmode do? 
It would be nice to avoid conversion\nwrappers as much as possible, and find out system-related equivalents\nif any, and actually if necessary.\n\n+static unsigned short\n+cvt_attr2uxmode(int attr, const _TCHAR * name)\nThis looks rather bug-prone...\n\nI think that this stuff has not been tested and would break at\ncompilation. If src/tools/msvc/Mkvcbuild.pm is not changed, then the\nnew file won't get included in the compiled set. \n--\nMichael", "msg_date": "Wed, 26 Jun 2019 11:22:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Wed, Jun 26, 2019 at 4:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> The former patch was rather impressive. Or scary. Or both. At which\n> extent have you tested it? I think that we really need to make sure\n> of a couple of things which satisfy our needs:\n\nI wanted to make a quick test on the previous patch. So let me state\nwhat I have tested and what I have not: it builds and passes tests on\nWindows and Cygwin, but I have not set up a MinGW environment.\n\n> 1) Are we able to fix the issues with stat() calls on files larger\n> than 2GB and report a correct size?\n\nI have successfully tested a COPY with large files.\n\n> 2) Are we able to detect properly that files pending for deletion are\n> discarded with ENOENT?\n\nI cannot reproduce this reliably, but it uses the same logic as pgwin32_safestat().\n\n> 3) Are frontends able to use the new layer?\n\nAfter removing UNSAFE_STAT_OK, is this still an issue?\n\n> It seems to me that you don't need the configure changes.\n\nThe changes in configuration are meant for gcc compilations in Windows\n(Cygwin and Mingw).\n\n> Instead of stat_pg_fixed which is confusing because it only involves\n> Windows, I would rename the new file to stat.c or win32_stat.c. The\n> location in src/port/ is adapted. 
I would also move out of\n> win32_port.h the various inline declarations and keep only raw\n> declarations. That could be much cleaner.\n\nOk.\n\n> The code desperately needs more comments to help understand its\n> logic. Don't we have in the tree an equivalent of cvt_ft2ut? What\n> does cvt_attr2uxmode do? It would be nice to avoid conversion\n> wrappers as much as possible, and find out system-related equivalents\n> if any, and actually if necessary.\n\nI have only found something similar in ./src/port/gettimeofday.c, but\nnot sure if this patch should touch that code.\n\n\n> +static unsigned short\n> +cvt_attr2uxmode(int attr, const _TCHAR * name)\n> This looks rather bug-prone...\n\nI wanted to keep as much of the original code as possible, but if this\nis found as a viable solution, what shape should it have?\n\n> I think that this stuff has not been tested and would break at\n> compilation. If src/tools/msvc/Mkvcbuild.pm is not changed, then the\n> new file won't get included in the compiled set.\n\nThe previous patch was broken, taken from the wrong local branch\n(sorry about that). The attached is still a WIP but it has to do the\nthings above-mentioned.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 28 Jun 2019 23:34:38 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Fri, Jun 28, 2019 at 11:34:38PM +0200, Juan José Santamaría Flecha wrote:\n> I wanted to make a quick test on the previous patch. So let me state\n> what have I tested and what I have not: it builds and pass tests in\n> Windows and Cygwin, but I have not setup a MinGW environment.\n\nThanks. Could you attach this patch to the next commit fest? 
We had\nmany complaints with the current limitations with large files (pg_dump\nsyncs its result files, so that breaks on Windows actually if the dump\nis larger than 2GB..), and we are going to need to do something. I\nfind that stuff rather hard to backpatch, but let's see.\n--\nMichael", "msg_date": "Sat, 29 Jun 2019 11:30:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Jun 29, 2019 at 4:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Thanks. Could you attach this patch to the next commit fest? We had\n> many complaints with the current limitations with large files (pg_dump\n> syncs its result files, so that breaks on Windows actually if the dump\n> is larger than 2GB..), and we are going to need to do something. I\n> find that stuff rather hard to backpatch, but let's see.\n\nDone. [1]\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1] https://commitfest.postgresql.org/23/2189/\n\n\n", "msg_date": "Sat, 29 Jun 2019 08:19:18 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Wed, Jun 26, 2019 at 4:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> It seems to me that you don't need the configure changes.\n\n> The changes in configuration are meant for gcc compilations in Windows\n> (Cygwin and Mingw).\n\nDirectly editing the configure script is Not Done ... or at least,\nsuch changes wouldn't survive the next correctly-done configure\nupdate. 
You have to edit configure.in (or one of the sub-files in\nconfig/) and then regenerate configure using autoconf.\n\nIt seems likely that we *don't* need or want this for Cygwin;\nthat should be providing a reasonable stat() emulation already.\nSo probably you just want to add \"AC_LIBOBJ(win32_stat)\" to\nthe stanza beginning\n\n\t# Win32 (really MinGW) support\n\tif test \"$PORTNAME\" = \"win32\"; then\n\t AC_CHECK_FUNCS(_configthreadlocale)\n\t AC_REPLACE_FUNCS(gettimeofday)\n\t AC_LIBOBJ(dirmod)\n\n\nI'd also recommend that stat() fill all the fields in struct stat,\neven if you don't have anything better to put there than zeroes.\nOtherwise you're just opening things up for random misbehavior.\n\nI'm not in a position to comment on the details of the conversion from\nGetFileAttributesEx results to struct stat, but in general this\nseems like a reasonable way to proceed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2019 17:49:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Thanks for looking into this.\n\nOn Fri, Aug 23, 2019 at 11:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Directly editing the configure script is Not Done ... or at least,\n> such changes wouldn't survive the next correctly-done configure\n> update. 
You have to edit configure.in (or one of the sub-files in\n> config/) and then regenerate configure using autoconf.\n>\n> It seems likely that we *don't* need or want this for Cygwin;\n> that should be providing a reasonable stat() emulation already.\n> So probably you just want to add \"AC_LIBOBJ(win32_stat)\" to\n> the stanza beginning\n>\n> I'd also recommend that stat() fill all the fields in struct stat,\n> even if you don't have anything better to put there than zeroes.\n> Otherwise you're just opening things up for random misbehavior.\n>\n\nFixed.\n\n> I'm not in a position to comment on the details of the conversion from\n> GetFileAttributesEx results to struct stat, but in general this\n> seems like a reasonable way to proceed.\n>\n\nActually, due to the behaviour of GetFileAttributesEx with symbolic\nlinks I think that using GetFileInformationByHandle instead can give a\nmore resilient solution. Also, by using a handle we get a good test\nfor ERROR_DELETE_PENDING. This is the approach for the attached patch.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 4 Sep 2019 23:47:47 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Hi - is this likely to be applied to an upcoming release? 
/ How does a novice apply a patch..?\r\n\r\nThanks\r\n\r\n-----Original Message-----\r\nFrom: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> \r\nSent: 04 September 2019 22:48\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: Michael Paquier <michael@paquier.xyz>; williamedwinallen@live.com; pgsql-bugs@lists.postgresql.org; Magnus Hagander <magnus@hagander.net>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: BUG #15858: could not stat file - over 4GB\r\n\r\nThanks for looking into this.\r\n\r\nOn Fri, Aug 23, 2019 at 11:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>\r\n> Directly editing the configure script is Not Done ... or at least, \r\n> such changes wouldn't survive the next correctly-done configure \r\n> update. You have to edit configure.in (or one of the sub-files in\r\n> config/) and then regenerate configure using autoconf.\r\n>\r\n> It seems likely that we *don't* need or want this for Cygwin; that \r\n> should be providing a reasonable stat() emulation already.\r\n> So probably you just want to add \"AC_LIBOBJ(win32_stat)\" to the stanza \r\n> beginning\r\n>\r\n> I'd also recommend that stat() fill all the fields in struct stat, \r\n> even if you don't have anything better to put there than zeroes.\r\n> Otherwise you're just opening things up for random misbehavior.\r\n>\r\n\r\nFixed.\r\n\r\n> I'm not in a position to comment on the details of the conversion from \r\n> GetFileAttributesEx results to struct stat, but in general this seems \r\n> like a reasonable way to proceed.\r\n>\r\n\r\nActually, due to the behaviour of GetFileAttributesEx with symbolic links I think that using GetFileInformationByHandle instead can give a more resilient solution. Also, by using a handle we get a good test for ERROR_DELETE_PENDING. 
This is the approach for the attached patch.\r\n\r\nRegards,\r\n\r\nJuan José Santamaría Flecha\r\n", "msg_date": "Mon, 28 Oct 2019 14:28:59 +0000", "msg_from": "william allen <williamedwinallen@live.com>", "msg_from_op": false, "msg_subject": "RE: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Mon, Oct 28, 2019 at 3:29 PM william allen <williamedwinallen@live.com>\nwrote:\n\n> Hi - is this likely to be applied to an upcoming release? / How does a\n> novice apply a patch..?\n>\n>\nAt this moment it is missing review, so it is probably far from being\ncommittable. Any attention is appreciated and might help push it forward.\nAs a personal note, I have to check that it still applies before the\nupcoming commitfest.\n\nAs for applying this patch you would need a Windows development\nenvironment. I would recommend Visual Studio as a starting point [1]. You\nalso have a very visual guide in the wiki [2].\n\n[1] https://www.postgresql.org/docs/current/install-windows.html\n[2] https://wiki.postgresql.org/wiki/Working_With_VisualStudio\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 28 Oct 2019 18:13:58 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI ran into this problem when using psql.exe and the copy command. \r\n\r\nI have checked out the 11.6-release tarball and applied the patch. \r\nThe patch does not apply cleanly, but can be easily modified to apply. See Note 1.\r\nAfter applying the patch I built using \"build psql\" and ran the new psql.exe binary.\r\n\r\nIn order to test I have done the following: \r\nAgainst a PostgreSQL 11 server run two commands: \r\n\"COPY public.table FROM 'C:/file'\" and \"\\copy public.table FROM 'C:/file'\"\r\nThe first one runs in the context of the server, and does not work. It aborts with an error saying \"cannot stat file\", as expected. \r\nThe second one runs in the context of the new binary and does work. It copies data as expected. 
\r\n\r\n\r\n\r\nNote 1: \r\nsrc/tools/msvc/Mkvcbuild.pm should be \r\n\r\n-\t sprompt.c strerror.c tar.c thread.c getopt.c getopt_long.c dirent.c\r\n-\t win32env.c win32error.c win32security.c win32setlocale.c);\r\n+\t sprompt.c tar.c thread.c getopt.c getopt_long.c dirent.c\r\n+\t win32env.c win32error.c win32security.c win32setlocale.c win32_stat.c);\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 05 Feb 2020 11:46:33 +0000", "msg_from": "Emil Iggland <emil@iggland.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Wed, Feb 5, 2020 at 12:47 PM Emil Iggland <emil@iggland.com> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n\nThe latest version of this patch could benefit from an update. Please find\nattached a new version.\n\nMost changes are cosmetic, but they have been more extensive than a simple\nrebase so I am changing the status back to 'needs review'.\n\nTo summarize those changes:\n- Rename 'win32_stat.c' file to 'win32stat.c', as a better match of\nproject files.\n- Improve indentation and comments.\n- Remove cruft about old Windows versions.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 28 Feb 2020 10:15:45 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> The latest version of this patch could benefit from an update. Please find\n> attached a new version.\n\nThe cfbot thinks this doesn't compile on Windows [1]. 
Looks like perhaps\na missing-#include problem?\n\n\t\t\tregards, tom lane\n\n[1] https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81541\n\n\n", "msg_date": "Fri, 28 Feb 2020 18:44:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Feb 29, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> The cfbot thinks this doesn't compile on Windows [1]. Looks like perhaps\n> a missing-#include problem?\n\n\nThe define logic for _WIN32_WINNT includes testing of _MSC_VER, and is not\na proper choice for MSVC 2013 as the cfbot is showing.\n\nPlease find attached a new version addressing this issue.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>", "msg_date": "Sat, 29 Feb 2020 09:40:44 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Feb 29, 2020 at 9:40 AM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> On Sat, Feb 29, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>>\n>> The cfbot thinks this doesn't compile on Windows [1]. Looks like perhaps\n>> a missing-#include problem?\n>\n>\n> The define logic for _WIN32_WINNT includes testing of _MSC_VER, and is not\n> a proper choice for MSVC 2013 as the cfbot is showing.\n>\n\nThe cfbot is not happy yet. 
I will backtrack a bit on the cruft cleanup.\n\nPlease find attached a new version addressing this issue.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>>\n>", "msg_date": "Sat, 29 Feb 2020 12:36:05 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "I assigned myself as a reviewer for this patch, as I hit this bug today and had to perform a workaround. I have never reviewed a patch before but will try to update within the next 5 days. I intend on performing \"Implements Feature\" reviewing.", "msg_date": "Thu, 10 Sep 2020 14:30:54 +0000", "msg_from": "Greg Steiner <greg.steiner89@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Mon, Oct 28, 2019 at 06:13:58PM +0100, Juan José Santamaría Flecha wrote:\n> At this moment is missing review, so it is probably far from being\n> commitable. Any attention is appreciated and might help pushing it forward.\n> As a personal note, I have to check that is still applies before the\n> upcoming commitfest.\n\nCould you send a rebase of the patch? Thanks!\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 16:45:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Thu, Sep 17, 2020 at 9:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Could you send a rebase of the patch? Thanks!\n>\n\nThanks for the reminder. 
Please find attached a rebased version.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Thu, 17 Sep 2020 17:16:15 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> Thanks for the reminder. Please find attached a rebased version.\n\n(This hasn't shown up on -hackers yet, maybe caught in moderation?)\n\nI took a quick look through this. I'm not qualified to review the\nactual Windows code in win32stat.c, but as far as the way you're\nplugging it into the system goes, it looks good and seems to comport\nwith the discussion so far.\n\nOne thing I noticed, which is a pre-existing problem but maybe now\nis a good time to consider it, is that we're mapping lstat() to be\nexactly stat() on Windows. That made sense years ago when (we\nbelieved that) Windows didn't have symlinks, but surely it no longer\nmakes sense.\n\nAnother more trivial point is that it'd be good to run the new code\nthrough pgindent before committing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 12:04:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Thu, Sep 17, 2020 at 2:37 PM, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Thu, Sep 17, 2020 at 9:46 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>>\n>> Could you send a rebase of the patch? Thanks!\n>>\n>\n> Thanks for the reminder. 
Please find attached a rebased version.\n>\nSorry, I'm missing something?\nWhat's wrong with _stat64?\n\n Directory of C:\\tmp\n\n18/08/2020 16:51 6.427.512.517 macOS_Catalina.7z\n 1 File(s) 6.427.512.517 bytes\n 0 Dir(s) 149.691.797.504 bytes free\n\nC:\\usr\\src\\tests\\stat>crt_stat\nFile size : 6427512517\nDrive : C:\nTime modified : Tue Aug 18 16:51:47 2020\n\nregards,\nRanier Vilela", "msg_date": "Thu, 17 Sep 2020 15:13:44 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> What's wrong with _stat64?\n\nSee upthread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 14:26:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Thu, Sep 17, 2020 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <\n> juanjo.santamaria@gmail.com> writes:\n> > Thanks for the reminder. Please find attached a rebased version.\n>\n> (This hasn't shown up on -hackers yet, maybe caught in moderation?)\n>\n\nThanks for looking into it. Finally, it went through. I will be removing\nbug-list from now on.\n\n>\n> I took a quick look through this. I'm not qualified to review the\n> actual Windows code in win32stat.c, but as far as the way you're\n> plugging it into the system goes, it looks good and seems to comport\n> with the discussion so far.\n>\n> One thing I noticed, which is a pre-existing problem but maybe now\n> is a good time to consider it, is that we're mapping lstat() to be\n> exactly stat() on Windows. 
That made sense years ago when (we\n> believed that) Windows didn't have symlinks, but surely it no longer\n> makes sense.\n>\n\nI will have to take a better look at it, but from a quick look at it, all\nlstat() calls seem to test just if the file exists, and that can be done\nwith a cheap call to GetFileAttributes(). Would a limited (but fast)\nlstat(), where only st_mode could be informed, be acceptable?\n\n>\n> Another more trivial point is that it'd be good to run the new code\n> through pgindent before committing.\n>\n\nI do not have pgindent in the WIN32 machine, but I will try to run it for the next\nversion.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Thu, 17 Sep 2020 20:47:39 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Thu, Sep 17, 2020 at 8:47 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> On Thu, Sep 17, 2020 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>>\n>> One thing I noticed, which is a pre-existing problem but maybe now\n>> is a good time to consider it, is that we're mapping lstat() to be\n>> exactly stat() on Windows. That made sense years ago when (we\n>> believed that) Windows didn't have symlinks, but surely it no longer\n>> makes sense.\n>>\n>\n> I will have to take a better look at it, but from a quick look at it, all\n> lstat() calls seem to test just if the file exists, and that can be done\n> with a cheap call to GetFileAttributes(). Would a limited (but fast)\n> lstat(), where only st_mode could be informed, be acceptable?\n>\n\nAfter thinking more about this, that approach would be problematic for\nDELETE_PENDING files. The proposed patch logic is meant to maintain current\nbehaviour, which is not broken for WIN32 symlinks AFAICT.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 18 Sep 2020 12:47:06 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI tested the patch at hand, and it performs as expected. Files larger than 4GB can be imported.\r\n\r\nSteps: \r\n0) create a csv-file that is sufficiently big (>4GB), and one that is small. Use these files to test.\r\n1a) Attempt to import the small file using devel-version.\r\n1b) EXPECTED: success, ACTUAL: success\r\n2a) Attempt to import the big file using devel-version.\r\n2b) EXPECTED: failure, ACTUAL: failure\r\n3) Apply patch and build new version\r\n4a) Attempt to import the small file using patched-version.\r\n4b) EXPECTED: success, ACTUAL: success\r\n5a) Attempt to import the big file using patched-version.\r\n5b) EXPECTED: success, ACTUAL: success\r\n\r\nThe code looks sensible, it is easy to read and follow. The code uses appropriate win32 functions to perform the task. 
\r\n\r\nThe code calculates file size using the following method: buf->st_size = ((__int64) fiData.nFileSizeHigh) << 32 | (__int64)(fiData.nFileSizeLow);\r\nThe hard-coded constant 32 is fine; nFileSizeHigh is defined as a DWORD in the Win32 API, which is a 32-bit unsigned integer. There is no need for a dynamic calculation.\r\n\r\nThere are minor \"nit-picks\" that I would change if it were my code, but they do not change the functionality of the code. \r\n\r\n1) \r\nif (GetFileAttributes(name) == INVALID_FILE_ATTRIBUTES)\r\n{\r\n errno = ENOENT;\r\n return -1;\r\n}\r\n\r\nHere I would call _dosmaperr(GetLastError()) instead, just to take account of the possibility that some other error occurred. Following this change there is a slight inconsistency in the order of \"CloseHandle(hFile), errno = ENOENT; return -1\" and \"_dosmaperr(GetLastError()); CloseHandle(hFile); return -1\". I would prefer consistent ordering, but that is not important.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 07 Oct 2020 19:13:29 +0000", "msg_from": "Emil Iggland <emil@iggland.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Emil Iggland <emil@iggland.com> writes:\n> I tested the patch at hand, and it performs as expected. 
Files larger than 4GB can be imported.\n\nThanks for testing!\n\nI'd been expecting one of our Windows-savvy committers to pick this up,\nbut since nothing has been happening, I took it on myself to push it.\nI'll probably regret that :-(\n\nI made a few cosmetic changes, mostly reorganizing comments in a way\nthat made more sense to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 16:22:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Fri, Oct 9, 2020 at 10:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Emil Iggland <emil@iggland.com> writes:\n> > I tested the patch at hand, and it performs as expected. Files larger\n> than 4GB can be imported.\n>\n> Thanks for testing!\n>\n\n Thanks for testing! +1\n\n>\n> I'd been expecting one of our Windows-savvy committers to pick this up,\n> but since nothing has been happening, I took it on myself to push it.\n> I'll probably regret that :-(\n>\n\nThanks for taking care of this. I see no problems in the build farm, but\nplease reach me if I missed something.\n\n>\n> I made a few cosmetic changes, mostly reorganizing comments in a way\n> that made more sense to me.\n>\n> I was working on a new version, which was pgindent-friendlier and clearer\nabout reporting an error when 'errno' is not informed. Please find attached\na patch with those changes.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sat, 10 Oct 2020 13:31:21 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Oct 10, 2020 at 01:31:21PM +0200, Juan José Santamaría Flecha wrote:\n> Thanks for taking care of this. I see no problems in the build farm, but\n> please reach me if I missed something.\n\nThanks for continuing your work on this patch. 
I see no related\nfailures in the buildfarm.\n\n- _dosmaperr(GetLastError());\n+ DWORD err = GetLastError();\n+\n+ /* report when not ERROR_SUCCESS */\n+ if (err == ERROR_FILE_NOT_FOUND || err == ERROR_PATH_NOT_FOUND)\n+ errno = ENOENT;\n+ else\n+ _dosmaperr(err);\nWhy are you changing that? The original coding is fine, as\n_dosmaperr() already maps ERROR_FILE_NOT_FOUND and\nERROR_PATH_NOT_FOUND to ENOENT.\n\n- _dosmaperr(GetLastError());\n+ DWORD err = GetLastError();\n+\n CloseHandle(hFile);\n+ _dosmaperr(err);\nThese parts are indeed incorrect. CloseHandle() could overwrite\nerrno.\n--\nMichael", "msg_date": "Sat, 10 Oct 2020 21:23:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Oct 10, 2020 at 2:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> - _dosmaperr(GetLastError());\n> + DWORD err = GetLastError();\n> +\n> + /* report when not ERROR_SUCCESS */\n> + if (err == ERROR_FILE_NOT_FOUND || err ==\n> ERROR_PATH_NOT_FOUND)\n> + errno = ENOENT;\n> + else\n> + _dosmaperr(err);\n> Why are you changing that? The original coding is fine, as\n> _dosmaperr() already maps ERROR_FILE_NOT_FOUND and\n> ERROR_PATH_NOT_FOUND to ENOENT.\n>\n\nIf the file does not exist there is no need to call _dosmaperr() and log\nthe error.\n\n>\n> - _dosmaperr(GetLastError());\n> + DWORD err = GetLastError();\n> +\n> CloseHandle(hFile);\n> + _dosmaperr(err);\n> These parts are indeed incorrect. CloseHandle() could overwrite\n> errno.\n>\n\nThe meaningful error should come from the previous call, and an error from\nCloseHandle() could mask it. 
Not sure it makes a difference anyhow.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sat, 10 Oct 2020 16:29:38 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> If the file does not exist there is no need to call _dosmaperr() and log\n> the error.\n\nI concur with Michael that it's inappropriate to make an end run around\n_dosmaperr() here.  If you think that the DEBUG5 logging inside that\nis inappropriate, you should propose removing it outright.\n\nPushed the rest of this.\n\n(pgindent behaved differently around PFN_NTQUERYINFORMATIONFILE today\nthan it did yesterday.
No idea why.)\n\n> The meaningful error should come from the previous call, and an error from\n> CloseHandle() could mask it. Not sure it makes a difference anyhow.\n\nWould CloseHandle() really touch errno at all?  But this way is\ncertainly safer, so done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Oct 2020 13:42:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Oct 10, 2020 at 7:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> I concur with Michael that it's inappropriate to make an end run around\n> _dosmaperr() here. If you think that the DEBUG5 logging inside that\n> is inappropriate, you should propose removing it outright.\n>\n> Pushed the rest of this.\n>\n\nGreat, thanks again to everyone who has taken some time to look into this.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sat, 10 Oct 2020 21:00:27 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Oct 10, 2020 at 09:00:27PM +0200, Juan José Santamaría Flecha wrote:\n> Great, thanks again to everyone who has taken some time to look into this.\n\nWe are visibly not completely out of the woods yet, jacana is\nreporting a compilation error:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-10-10%2018%3A00%3A28\nOct 10 14:04:40\nc:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/port/win32stat.c:\nIn function 'fileinfo_to_stat':\nOct 10 14:04:40\nc:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/port/win32stat.c:151:13:\nerror: 'BY_HANDLE_FILE_INFORMATION {aka struct\n_BY_HANDLE_FILE_INFORMATION}' has no member named 'nFileSizeLowi'; did\nyou mean 'nFileSizeLow'?\nOct 10 14:04:40 fiData.nFileSizeLowi);\nOct 10 14:04:40 ^~~~~~~~~~~~~\nOct 10 14:04:40 nFileSizeLow\n\nI don't have the time to check MinGW and HEAD now, so that's just a\nheads-up.\n--\nMichael", "msg_date": "Sun, 11 Oct 2020 09:24:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> We are visibly not completely out of the woods yet, jacana is\n> reporting a compilation error:\n\nNah, I fixed that hours ago (961e07b8c).
jacana must not have run again\nyet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Oct 2020 20:34:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Sat, Oct 10, 2020 at 08:34:48PM -0400, Tom Lane wrote:\n> Nah, I fixed that hours ago (961e07b8c). jacana must not have run again\n> yet.\n\nIndeed, thanks. I have missed one sync here.\n\n+ hFile = CreateFile(name,\n+ GENERIC_READ,\n+ (FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE),\n+ &sa,\n+ OPEN_EXISTING,\n+ (FILE_FLAG_NO_BUFFERING | FILE_FLAG_BACKUP_SEMANTICS |\n+ FILE_FLAG_OVERLAPPED),\n+ NULL);\n+ if (hFile == INVALID_HANDLE_VALUE)\n+ {\n+ CloseHandle(hFile);\n+ errno = ENOENT;\n+ return -1;\n+ }\nWhy are we forcing errno=ENOENT here? Wouldn't it be correct to use\n_dosmaperr(GetLastError()) to get the correct errno? This code would\nfor example consider as non-existing a file even if we fail getting it\nbecause of ERROR_SHARING_VIOLATION, which should map to EACCES. This\ncase can happen with virus scanners taking a non-share handle on files\nbeing looked at in parallel of this code path.\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 10:01:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Why are we forcing errno=ENOENT here? Wouldn't it be correct to use\n> _dosmaperr(GetLastError()) to get the correct errno?\n\nFair question. 
Juan, was there some good reason not to look at\nGetLastError() in this step?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Oct 2020 23:27:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Mon, Oct 12, 2020 at 5:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Why are we forcing errno=ENOENT here? Wouldn't it be correct to use\n> > _dosmaperr(GetLastError()) to get the correct errno?\n>\n> Fair question. Juan, was there some good reason not to look at\n> GetLastError() in this step?\n>\n\nUhm, a good question indeed, forcing errno serves no purpose there.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 12 Oct 2020 14:33:32 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Mon, Oct 12, 2020 at 5:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>> Why are we forcing errno=ENOENT here?  Wouldn't it be correct to use\n>>> _dosmaperr(GetLastError()) to get the correct errno?\n\n>> Fair question.
Juan, was there some good reason not to look at\n>> GetLastError() in this step?\n\n> Uhm, a good question indeed, forcing errno serves no purpose there.\n\nOK, changed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Oct 2020 11:13:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Mon, Oct 12, 2020 at 11:13:38AM -0400, Tom Lane wrote:\n> Juan José Santamaría Flecha wrote:\n>> Uhm, a good question indeed, forcing errno serves no purpose there.\n> \n> OK, changed.\n\nThanks!\n--\nMichael", "msg_date": "Tue, 13 Oct 2020 09:25:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this modification only commit into master branch, \r\nthere is still have a problem on 12.6 or 13.2 on Windows.\r\n\r\nDo you have a plan to backpatch this commit into REL_12_STABLE or REL_13_STABLE ?\r\n\r\nThe commit:\r\nhttps://github.com/postgres/postgres/commit/bed90759fcbcd72d4d06969eebab81e47326f9a2\r\nhttps://github.com/postgres/postgres/commit/ed30b1a60dadf2b7cc58bce5009ad8676b8fe479\r\n\r\n\r\n------\r\nBest regards\r\nShenhao Wang\r\n", "msg_date": "Thu, 25 Feb 2021 06:07:06 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com> writes:\n> Do you have a plan to backpatch this commit into REL_12_STABLE or REL_13_STABLE ?\n\nhttps://www.postgresql.org/message-id/YCsZIX2A2Ilsvfnl@paquier.xyz\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Feb 2021 01:21:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "On Thu, Feb 25, 2021 
at 06:07:06AM +0000, wangsh.fnst@fujitsu.com wrote:\n> I noticed that this modification only commit into master branch, \n> there is still have a problem on 12.6 or 13.2 on Windows.\n> \n> Do you have a plan to backpatch this commit into REL_12_STABLE or REL_13_STABLE ?\n\nThe change to be able to fix that stuff is invasive. So, while I\ndon't really object to a backpatch of this change in the future, I\nthink that it would be wiser to wait until we get more feedback with\nthe release of Postgres 14 before doing a backpatch to older\nversions. So we are in a wait phase for the moment.\n\nThanks,\n--\nMichael", "msg_date": "Thu, 25 Feb 2021 15:21:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15858: could not stat file - over 4GB" }, { "msg_contents": "Thank you for sharing\r\n\r\nBest regards\r\nShenhao Wang\r\n\r\n-----Original Message-----\r\nFrom: Michael Paquier <michael@paquier.xyz> \r\nSent: Thursday, February 25, 2021 2:22 PM\r\nTo: Wang, Shenhao/王 申豪 <wangsh.fnst@fujitsu.com>\r\nCc: Tom Lane <tgl@sss.pgh.pa.us>; Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>; Emil Iggland <emil@iggland.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: BUG #15858: could not stat file - over 4GB\r\n\r\nOn Thu, Feb 25, 2021 at 06:07:06AM +0000, wangsh.fnst@fujitsu.com wrote:\r\n> I noticed that this modification only commit into master branch, \r\n> there is still have a problem on 12.6 or 13.2 on Windows.\r\n> \r\n> Do you have a plan to backpatch this commit into REL_12_STABLE or REL_13_STABLE ?\r\n\r\nThe change to be able to fix that stuff is invasive. So, while I\r\ndon't really object to a backpatch of this change in the future, I\r\nthink that it would be wiser to wait until we get more feedback with\r\nthe release of Postgres 14 before doing a backpatch to older\r\nversions. 
So we are in a wait phase for the moment.\r\n\r\nThanks,\r\n--\r\nMichael\r\n", "msg_date": "Thu, 25 Feb 2021 06:39:14 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: BUG #15858: could not stat file - over 4GB" } ]
[ { "msg_contents": "The current hardcoded EDH parameter fallback uses the old SKIP primes, for which\nthe source disappeared from the web a long time ago. Referencing a known dead\nsource seems a bit silly, so I think we should either switch to a non-dead\nsource of MODP primes or use an archive.org link for SKIP. Personally I prefer\nthe former.\n\nThis was touched upon, but never really discussed AFAICT, back when the EDH\nparameters were reworked a few years ago. Instead of replacing with custom\nones, as suggested in [1], we might as well replace with standardized ones as\nthis is a fallback. Custom ones won't make it more secure, just add more work\nfor the project. The attached patch replaces the SKIP prime with the 2048 bit\nMODP group from RFC 3526, which is the same change that OpenSSL did a few years\nback [2].\n\ncheers ./daniel\n\n[1] https://www.postgresql.org/message-id/54f44984-2f09-8744-927f-140a90c379dc%40ohmu.fi\n[2] https://github.com/openssl/openssl/commit/fb015ca6f05e09b11a3932f89d25bae697c8af1e", "msg_date": "Tue, 18 Jun 2019 14:05:00 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Replacing the EDH SKIP primes" }, { "msg_contents": "On Tue, Jun 18, 2019 at 02:05:00PM +0200, Daniel Gustafsson wrote:\n> The current hardcoded EDH parameter fallback use the old SKIP primes, for which\n> the source disappeared from the web a long time ago. Referencing a known dead\n> source seems a bit silly, so I think we should either switch to a non-dead\n> source of MODP primes or use an archive.org link for SKIP. Personally I prefer\n> the former.\n\nI agree with you that it sounds more sensible to switch to a new prime\ninstead of relying on an archive of the past one.\n\n> This was touched upon, but never really discussed AFAICT, back when then EDH\n> parameters were reworked a few years ago.
Instead of replacing with custom\n> ones, as suggested in [1] it we might as well replace with standardized ones as\n> this is a fallback. Custom ones wont make it more secure, just add more work\n> for the project. The attached patch replace the SKIP prime with the 2048 bit\n> MODP group from RFC 3526, which is the same change that OpenSSL did a few years\n> back [2].\n\nFine by me. Let's stick with the 2048b-long one for now as we did in\nc0a15e0. I am wondering if we should sneak that into v12, but I'd\nrather just wait for v13 to open.\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 12:40:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "> On 19 Jun 2019, at 05:40, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Fine by me. Let's stick with the 2048b-long one for now as we did in\n> c0a15e0. I am wondering if we should sneak that into v12, but I'd\n> rather just wait for v13 to open.\n\nI think this is v13 material, I’ll stick it in the next commitfest.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 19 Jun 2019 07:44:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On Wed, Jun 19, 2019 at 07:44:46AM +0200, Daniel Gustafsson wrote:\n> I think this is v13 material, I’ll stick it in the next commitfest.\n\nThanks!\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 14:52:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On 2019-06-18 13:05, Daniel Gustafsson wrote:\n> This was touched upon, but never really discussed AFAICT, back when then EDH\n> parameters were reworked a few years ago. Instead of replacing with custom\n> ones, as suggested in [1] it we might as well replace with standardized ones as\n> this is a fallback. 
Custom ones wont make it more secure, just add more work\n> for the project. The attached patch replace the SKIP prime with the 2048 bit\n> MODP group from RFC 3526, which is the same change that OpenSSL did a few years\n> back [2].\n\nIt appears that we have consensus to go ahead with this.\n\n<paranoia>\nI was wondering whether the provided binary blob contained any checksums\nor other internal checks. How would we know whether it contains\ntransposed characters or replaces a 1 by a I or a l? If I just randomly\nedit the blob, the ssl tests still pass. (The relevant load_dh_buffer()\ncall does get called by the tests.) How can we make sure we actually\ngot a good copy?\n</paranoia>\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jul 2019 08:14:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On Tue, Jul 02, 2019 at 08:14:25AM +0100, Peter Eisentraut wrote:\n> It appears that we have consensus to go ahead with this.\n\nYeah, I was planning to look at that one next. Or perhaps you would\nlike to take care of it, Peter?\n\n> <paranoia>\n> I was wondering whether the provided binary blob contained any checksums\n> or other internal checks. How would we know whether it contains\n> transposed characters or replaces a 1 by a I or a l? If I just randomly\n> edit the blob, the ssl tests still pass. (The relevant load_dh_buffer()\n> call does get called by the tests.) How can we make sure we actually\n> got a good copy?\n> </paranoia>\n\nPEM_read_bio_DHparams() has some checks on the Diffie-Hellman key, but\nit is up to the caller to make sure that it is normally providing a\nprime number in this case to make the cracking harder, no? 
RFC 3526\nhas a small formula in this case, which we can use to double-check the\npatch.\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 16:49:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "> On 2 Jul 2019, at 09:49, Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 02, 2019 at 08:14:25AM +0100, Peter Eisentraut wrote:\n\n>> <paranoia>\n>> I was wondering whether the provided binary blob contained any checksums\n>> or other internal checks. How would we know whether it contains\n>> transposed characters or replaces a 1 by a I or a l? If I just randomly\n>> edit the blob, the ssl tests still pass. (The relevant load_dh_buffer()\n>> call does get called by the tests.) How can we make sure we actually\n>> got a good copy?\n>> </paranoia>\n> \n> PEM_read_bio_DHparams() has some checks on the Diffie-Hellman key, but\n> it is up to the caller to make sure that it is normally providing a\n> prime number in this case to make the cracking harder, no?\n\nOpenSSL provides DH_check() which we use in load_dh_file() to ensure that the\nuser is passing a valid prime in the DH file. Adding this to the hardcoded\nblob seems overkill though, once the validity has been verified before it being\ncommitted.\n\n> RFC 3526\n> has a small formula in this case, which we can use to double-check the\n> patch.\n\nA DH param in PEM (or DER) format can be checked with the openssl dhparam tool.\nAssuming the PEM is extracted from the patch into a file, one can do:\n\n\topenssl dhparam -inform PEM -in /tmp/dh.pem -check -text\n\nThe prime is returned and can be diffed against the one in the RFC. If you\nmodify the blob you will see that the check complains about it not being prime.\n\nThere is an expected warning in the output however: \"the g value is not a\ngenerator” (this is also present when subjecting the PEM for the 2048 MODP in\nOpenSSL). 
From reading RFC 2412, which outlines how the primes are generated,\nthis is by design. In Appendix E:\n\n \"Because these two primes are congruent to 7 (mod 8), 2 is a quadratic\n residue of each prime. All powers of 2 will also be quadratic\n residues. This prevents an opponent from learning the low order bit\n of the Diffie-Hellman exponent (AKA the subgroup confinement\n problem). Using 2 as a generator is efficient for some modular\n exponentiation algorithms. [Note that 2 is technically not a\n generator in the number theory sense, because it omits half of the\n possible residues mod P. From a cryptographic viewpoint, this is a\n virtue.]\"\n\nI’m far from a cryptographer, but AFAICT from reading it, this essentially means that\nthe RFC authors chose to limit the search space of the shared secret rather\nthan leaking a bit of it, and OpenSSL has chosen in DH_check() that leaking a\nbit is preferable. (This makes me wonder if we should downgrade the check in\nload_dh_file() \"codes & DH_NOT_SUITABLE_GENERATOR” to WARNING, but the lack of\nreports of it being a problem either shows that most people are just using\nopenssl dhparam generated parameters which can leak a bit, or aren’t using DH\nfiles at all.)\n\ncheers ./daniel\n\n", "msg_date": "Wed, 3 Jul 2019 10:56:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On Wed, Jul 03, 2019 at 10:56:41AM +0200, Daniel Gustafsson wrote:\n> OpenSSL provides DH_check() which we use in load_dh_file() to ensure that the\n> user is passing a valid prime in the DH file. Adding this to the hardcoded\n> blob seems overkill though, once the validity has been verified before it being\n> committed.\n\nAgreed, and I didn't notice this check...
There could be an argument\nfor having DH_check within an assertion block but as this changes once\nper decade there is little value.\n\n> A DH param in PEM (or DER) format can be checked with the openssl dhparam tool.\n> Assuming the PEM is extracted from the patch into a file, one can do:\n> \n> \topenssl dhparam -inform PEM -in /tmp/dh.pem -check -text\n> \n> The prime is returned and can be diffed against the one in the RFC. If you\n> modify the blob you will see that the check complains about it not being prime.\n\nAh, thanks. I can see that the new key matches the RFC.\n\n> There is an expected warning in the output however: \"the g value is not a\n> generator” (this is also present when subjecting the PEM for the 2048 MODP in\n> OpenSSL).\n\nIndeed, I saw that also from OpenSSL. That looks to come from dh.c\n(there are two other code paths, haven't checked in details). Thanks\nfor the pointer.\n\n> I’m far from a cryptographer, but AFAICT from reading it essentially means that\n> the RFC authors chose to limit the search space of the shared secret rather\n> than leaking a bit of it, and OpenSSL has chosen in DH_check() that leaking a\n> bit is preferrable. (This makes me wonder if we should downgrade the check in\n> load_dh_file() \"codes & DH_NOT_SUITABLE_GENERATOR” to WARNING, but the lack of\n> reports of it being a problem either shows that most people are just using\n> openssl dhparam generated parameters which can leak a bit, or aren’t using DH\n> files at all.)\n\nYeah, no objections per the arguments from the RFC. I am not sure if\nwe actually need to change that part though. We'd still get a\ncomplaint for a key which is not a prime, and for the default one this\nis not something we care much about, because we know its properties,\nno? 
It would be nice to add a comment on that though, perhaps in\nlibpq-be.h where the key is defined.\n--\nMichael", "msg_date": "Wed, 3 Jul 2019 19:11:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "> On 3 Jul 2019, at 12:11, Michael Paquier <michael@paquier.xyz> wrote:\n\n> It would be nice to add a comment on that though, perhaps in\n> libpq-be.h where the key is defined.\n\nAgreed, I’ve updated the patch with a comment on this formulated such that it\nshould stand the test of time even as OpenSSL changes etc.\n\ncheers ./daniel", "msg_date": "Wed, 3 Jul 2019 20:56:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On Wed, Jul 03, 2019 at 08:56:42PM +0200, Daniel Gustafsson wrote:\n> Agreed, I’ve updated the patch with a comment on this formulated such that it\n> should stand the test of time even as OpenSSL changes etc.\n\nI'd like to think that we had rather mention the warning issue\nexplicitely, so as people don't get surprised, like that for example:\n\n * This is the 2048-bit DH parameter from RFC 3526. The generation of the\n * prime is specified in RFC 2412, which also discusses the design choice\n * of the generator. 
Note that when loaded with OpenSSL this causes\n * DH_check() to fail on with DH_NOT_SUITABLE_GENERATOR, where leaking\n * a bit is preferred.\n\nNow this makes an OpenSSL-specific issue pop up within a section of\nthe code where we want to make things more generic with SSL, so your\nsimpler version has good arguments as well.\n\nI have just rechecked the shape of the key, and we have an exact\nmatch.\n--\nMichael", "msg_date": "Thu, 4 Jul 2019 09:58:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "\n\n> On 04 Jul 2019, at 02:58, Michael Paquier <michael@paquier.xyz> wrote:\n> \n>> On Wed, Jul 03, 2019 at 08:56:42PM +0200, Daniel Gustafsson wrote:\n>> Agreed, I’ve updated the patch with a comment on this formulated such that it\n>> should stand the test of time even as OpenSSL changes etc.\n> \n> I'd like to think that we had rather mention the warning issue\n> explicitely, so as people don't get surprised, like that for example:\n> \n> * This is the 2048-bit DH parameter from RFC 3526. The generation of the\n> * prime is specified in RFC 2412, which also discusses the design choice\n> * of the generator. 
Note that when loaded with OpenSSL this causes\n> * DH_check() to fail on with DH_NOT_SUITABLE_GENERATOR, where leaking\n> * a bit is preferred.\n> \n> Now this makes an OpenSSL-specific issue pop up within a section of\n> the code where we want to make things more generic with SSL, so your\n> simpler version has good arguments as well.\n> \n> I have just rechecked the shape of the key, and we have an exact\n> match.\n\nLGTM, thanks.\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 4 Jul 2019 08:24:13 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Replacing the EDH SKIP primes" }, { "msg_contents": "On Thu, Jul 04, 2019 at 08:24:13AM +0200, Daniel Gustafsson wrote:\n> LGTM, thanks.\n\nOkay, done, after rechecking the shape of the key. Thanks!\n--\nMichael", "msg_date": "Fri, 5 Jul 2019 11:04:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing the EDH SKIP primes" } ]
[ { "msg_contents": "In a case of a corrupted database, I saw an error message like\n\n Could not read from file ...: Success.\n\nfrom the SLRU module. This is because it checks that it reads or writes\nexactly BLCKSZ, and else goes to the error path. The attached patch\ngives a different error message in this case.\n\nBecause of the structure of this code, we don't have the information to\ndo the usual \"read %d of %zu\", but at least this is better than\nreporting a \"success\" error.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 18 Jun 2019 14:35:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "fix \"Success\" error messages" }, { "msg_contents": "On Tue, Jun 18, 2019 at 02:35:02PM +0200, Peter Eisentraut wrote:\n> --- a/src/backend/access/transam/slru.c\n> +++ b/src/backend/access/transam/slru.c\n> @@ -923,15 +923,19 @@ SlruReportIOError(SlruCtl ctl, int pageno, TransactionId xid)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t\t errmsg(\"could not access status of transaction %u\", xid),\n> -\t\t\t\t\t errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n> -\t\t\t\t\t\t\t path, offset)));\n> +\t\t\t\t\t errno == 0\n> +\t\t\t\t\t ? 
errdetail(\"Short read from file \\\"%s\\\" at offset %u.\", path, offset)\n> +\t\t\t\t\t : errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n> +\t\t\t\t\t\t\t\t path, offset)));\n\nPerhaps using \"Encountered partial read from file ...\" would be better \nsuited for the error message.\n\n> \t\t\tbreak;\n> \t\tcase SLRU_WRITE_FAILED:\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t\t errmsg(\"could not access status of transaction %u\", xid),\n> -\t\t\t\t\t errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n> -\t\t\t\t\t\t\t path, offset)));\n> +\t\t\t\t\t errno == 0\n> +\t\t\t\t\t ? errdetail(\"Short write to file \\\"%s\\\" at offset %u.\", path, offset)\n> +\t\t\t\t\t : errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n> +\t\t\t\t\t\t\t\t path, offset)));\n\nSimilarly, \"Encountered partial write to file ...\" would be better? Not \na 100% on using \"Encountered\" but \"partial\" seems to be the right word \nto use here.\n\nDo note that SlruPhysicalWritePage() will always set errno to ENOSPC if \nerrno was 0 during a write attempt, see [0]. 
Not sure this is a good \nassumption to make since write(2) should return ENOSPC if the storage \nmedium is out of space [1].\n\n[0] \nhttps://github.com/postgres/postgres/blob/master/src/backend/access/transam/slru.c#L854\n[1] https://linux.die.net/man/2/write\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Tue, 18 Jun 2019 09:13:19 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On Tue, Jun 18, 2019 at 09:13:19AM -0700, Shawn Debnath wrote:\n>> \t\tcase SLRU_WRITE_FAILED:\n>> \t\t\tereport(ERROR,\n>> \t\t\t\t\t(errcode_for_file_access(),\n>> \t\t\t\t\t errmsg(\"could not access status of transaction %u\", xid),\n>> -\t\t\t\t\t errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n>> -\t\t\t\t\t\t\t path, offset)));\n>> +\t\t\t\t\t errno == 0\n>> +\t\t\t\t\t ? errdetail(\"Short write to file \\\"%s\\\" at offset %u.\", path, offset)\n>> +\t\t\t\t\t : errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n>> +\t\t\t\t\t\t\t\t path, offset)));\n\nThere is no point to call errcode_for_file_access() if we know that\nerrno is 0. Not a big deal, still.. The last time I looked at that,\nthe best way I could think of would be to replace slru_errcause with a\nproper error string generated at error time. Perhaps the partial\nread/write case does not justify this extra facility. I don't know.\nAt least that would be more flexible.\n\n> Similarly, \"Encountered partial write to file ...\" would be better? 
Not \n> a 100% on using \"Encountered\" but \"partial\" seems to be the right word \n> to use here.\n\n811b6e36 has done some work so as we have more consistency in the\nerror messages and I don't think we should introduce more flavors of\nthis stuff as that makes the life of translators harder.\n\n- if (CloseTransientFile(fd))\n+ if (CloseTransientFile(fd) < 0)\nSome spots are missing:\n$ git grep \"CloseTransientFile\" | grep \"))\" | wc -l\n30\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 11:51:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On 2019-06-19 04:51, Michael Paquier wrote:\n> On Tue, Jun 18, 2019 at 09:13:19AM -0700, Shawn Debnath wrote:\n>>> \t\tcase SLRU_WRITE_FAILED:\n>>> \t\t\tereport(ERROR,\n>>> \t\t\t\t\t(errcode_for_file_access(),\n>>> \t\t\t\t\t errmsg(\"could not access status of transaction %u\", xid),\n>>> -\t\t\t\t\t errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n>>> -\t\t\t\t\t\t\t path, offset)));\n>>> +\t\t\t\t\t errno == 0\n>>> +\t\t\t\t\t ? errdetail(\"Short write to file \\\"%s\\\" at offset %u.\", path, offset)\n>>> +\t\t\t\t\t : errdetail(\"Could not write to file \\\"%s\\\" at offset %u: %m.\",\n>>> +\t\t\t\t\t\t\t\t path, offset)));\n> \n> There is no point to call errcode_for_file_access() if we know that\n> errno is 0. Not a big deal, still.. The last time I looked at that,\n> the best way I could think of would be to replace slru_errcause with a\n> proper error string generated at error time. Perhaps the partial\n> read/write case does not justify this extra facility. 
I don't know.\n> At least that would be more flexible.\n\nHere is an updated patch set that rearranges this a bit according to\nyour suggestions, and also fixes some similar issues in pg_checksums.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 26 Aug 2019 21:40:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On Mon, Aug 26, 2019 at 09:40:23PM +0200, Peter Eisentraut wrote:\n> Here is an updated patch set that rearranges this a bit according to\n> your suggestions, and also fixes some similar issues in pg_checksums.\n\nThanks for the new patch, and you are right that pg_checksums has been\nslacking here. There is the same issue with pg_verify_checksums in\n11. Not sure that's worth a back-patch though. Those parts could\nfind their way to v12 easily.\n\n> - ereport(ERROR,\n> - (errcode_for_file_access(),\n> - errmsg(\"could not access status of transaction %u\", xid),\n> - errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n> - path, offset)));\n> + if (errno)\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not access status of transaction %u\", xid),\n> + errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n> + path, offset)));\n> + else\n> + ereport(ERROR,\n> + (errmsg(\"could not access status of transaction %u\", xid),\n> + errdetail(\"Could not read from file \\\"%s\\\" at offset %u: read too few bytes.\", path, offset)));\n\nLast time I worked on that, the following suggestion was made for\nerror messages with shorter reads or writes:\ncould not read file \\\"%s\\\": read %d of %zu\nStill this is clearly an improvement and I that's not worth the extra\ncomplication, so +1 for this way of doing things.\n\n> if (r == 0)\n> break;\n> - if (r != BLCKSZ)\n> + else if (r < 0)\n> + {\n> + 
pg_log_error(\"could not read block %u in file \\\"%s\\\": %m\",\n> + blockno, fn);\n> + exit(1);\n> + }\n> + else if (r != BLCKSZ)\n> {\n> pg_log_error(\"could not read block %u in file \\\"%s\\\": read %d of %d\",\n> blockno, fn, r, BLCKSZ);\n\nOther code areas (xlog.c, pg_waldump.c, etc.) prefer doing it this\nway, after checking the size read:\nif (r != BLCKSZ)\n{\n if (r < 0)\n pg_log_error(\"could not read blah: %m\");\n else\n pg_log_error(\"could not read blah: read %d of %d\")\n}\n\n> /* Write block with checksum */\n> - if (write(f, buf.data, BLCKSZ) != BLCKSZ)\n> + w = write(f, buf.data, BLCKSZ);\n> + if (w != BLCKSZ)\n> {\n> - pg_log_error(\"could not write block %u in file \\\"%s\\\": %m\",\n> - blockno, fn);\n> + if (w < 0)\n> + pg_log_error(\"could not write block %u in file \\\"%s\\\": %m\",\n> + blockno, fn);\n> + else\n> + pg_log_error(\"could not write block %u in file \\\"%s\\\": wrote %d of %d\",\n> + blockno, fn, w, BLCKSZ);\n> exit(1);\n> }\n> }\n\nThis is consistent.\n--\nMichael", "msg_date": "Tue, 27 Aug 2019 15:27:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On 2019-08-27 08:27, Michael Paquier wrote:\n> Thanks for the new patch, and you are right that pg_checksums has been\n> slacking here. There is the same issue with pg_verify_checksums in\n> 11. Not sure that's worth a back-patch though. 
Those parts could\n> find their way to v12 easily.\n\nCommitted to master and PG12 with your suggested changes.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Sep 2019 08:38:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On Tue, Sep 3, 2019 at 08:38:22AM +0200, Peter Eisentraut wrote:\n> On 2019-08-27 08:27, Michael Paquier wrote:\n> > Thanks for the new patch, and you are right that pg_checksums has been\n> > slacking here. There is the same issue with pg_verify_checksums in\n> > 11. Not sure that's worth a back-patch though. Those parts could\n> > find their way to v12 easily.\n> \n> Committed to master and PG12 with your suggested changes.\n\nThis \"Success\" has happened so many times I think we should tell people\nto report any such error message as a bug by emitting a special error\nmessage line.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 27 Sep 2019 11:52:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "Hello, pgsql-hackers\n\nI'm gathering information about the following error.\n\n FATAL: could not access status of transaction ..\n DETAIL: Could not read from file (pg_clog/.... or pg_xact/....) ...: Success.\n\nThis error has caused the server to fail to start with recovery.\nI got a report that it happened repeatedly at the newly generated\nstandby cluster. I gave them advice to confirm the low level server\nenvironment.\n\nHowever, in addition to improving the message, should we retry to read \nthe rest of the data in the case reading too few bytes? 
\nWhat about a limited number of retries instead of a complete loop?\n\n---\n Haruka Takatsuka / SRA OSS, Inc. Japan\n\nOn Fri, 27 Sep 2019 11:52:39 -0400\nBruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Sep 3, 2019 at 08:38:22AM +0200, Peter Eisentraut wrote:\n> > On 2019-08-27 08:27, Michael Paquier wrote:\n> > > Thanks for the new patch, and you are right that pg_checksums has been\n> > > slacking here. There is the same issue with pg_verify_checksums in\n> > > 11. Not sure that's worth a back-patch though. Those parts could\n> > > find their way to v12 easily.\n> > \n> > Committed to master and PG12 with your suggested changes.\n> \n> This \"Success\" has happened so many times I think we should tell people\n> to report any such error message as a bug by emitting a special error\n> message line.\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n\n\n\n", "msg_date": "Thu, 21 Nov 2019 10:42:57 +0900", "msg_from": "TAKATSUKA Haruka <harukat@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "On 2019-11-21 02:42, TAKATSUKA Haruka wrote:\n> FATAL: could not access status of transaction ..\n> DETAIL: Could not read from file (pg_clog/.... or pg_xact/....) ...: Success.\n> \n> This error has caused the server to fail to start with recovery.\n> I got a report that it happend repeatedly at the newly generated\n> standby cluster. I gave them advice to comfirm the low level server\n> environment.\n> \n> However, in addition to improving the message, should we retry to read\n> the rest of the data in the case reading too few bytes?\n> What about a limited number of retries instead of a complete loop?\n\nIf we thought that would help, there are probably hundreds or more other \nplaces where we read files that would need to be fixed up in the same \nway. 
That doesn't seem reasonable.\n\nAlso, it is my understanding that short reads can in practice only \nhappen if the underlying storage is having a serious problem, so \nretrying wouldn't actually help much.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 Nov 2019 10:40:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fix \"Success\" error messages" }, { "msg_contents": "\nOn Thu, 21 Nov 2019 10:40:36 +0100\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-11-21 02:42, TAKATSUKA Haruka wrote:\n> > FATAL: could not access status of transaction ..\n> > DETAIL: Could not read from file (pg_clog/.... or pg_xact/....) ...: Success.\n> > \n> > This error has caused the server to fail to start with recovery.\n> > I got a report that it happend repeatedly at the newly generated\n> > standby cluster. I gave them advice to comfirm the low level server\n> > environment.\n> > \n> > However, in addition to improving the message, should we retry to read\n> > the rest of the data in the case reading too few bytes?\n> > What about a limited number of retries instead of a complete loop?\n> \n> If we thought that would help, there are probably hundreds or more other \n> places where we read files that would need to be fixed up in the same \n> way. 
That doesn't seem reasonable.\n> \n> Also, it is my understanding that short reads can in practice only \n> happen if the underlying storage is having a serious problem, so \n> retrying wouldn't actually help much.\n\nOK, I understand.\nIn our case, the standby DB cluster space is on DRBD.\nI will report the clear occurrence condition if it is found.\n\nthanks,\nHaruka Takatsuka\n\n\n\n", "msg_date": "Fri, 22 Nov 2019 12:23:50 +0900", "msg_from": "TAKATSUKA Haruka <harukat@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: fix \"Success\" error messages" } ]
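For anyone puzzled by the original symptom in this thread: a short read() returns a byte count without touching errno, so an errno-based format such as %m renders whatever errno already held, usually 0, which strerror() maps to "Success" on glibc. A sketch of the before/after detail strings (illustrative only: the helper names and the sample path/offset are invented here, not PostgreSQL's):

```python
import os

def errdetail_old(path: str, offset: int, err: int) -> str:
    # Pre-patch shape: unconditionally appends strerror(errno); after a short
    # read errno is still 0, yielding a detail ending in ": Success." on glibc.
    return f'Could not read from file "{path}" at offset {offset}: {os.strerror(err)}.'

def errdetail_new(path: str, offset: int, err: int) -> str:
    # Post-patch shape: errno == 0 after a failed read means a partial read,
    # so say that instead of consulting strerror() at all.
    if err == 0:
        return f'Could not read from file "{path}" at offset {offset}: read too few bytes.'
    return f'Could not read from file "{path}" at offset {offset}: {os.strerror(err)}.'

path, offset = "pg_xact/0000", 16384   # sample values only
print(errdetail_old(path, offset, 0))  # confusing on a short read
print(errdetail_new(path, offset, 0))  # ends in ": read too few bytes."
```

The "read too few bytes" wording matches the detail string the committed patch uses; a genuine I/O failure (nonzero errno) still goes through the strerror() branch in both shapes.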
[ { "msg_contents": "-----\r\n To avoid planning the subquery again later on, I want to keep a pointer of\r\n the subplan in SubLink so that we can directly reuse the subplan when\r\n needed. However, this change breaks initdb for some reason and I'm trying to\r\n figure it out.\r\n-----\r\n\r\n\"make clean\" solved the initdb issue. This new patch keeps a pointer of the subplan\r\n in SubLink so that we can directly reuse the subplan when needed. When the subplan\r\nis not hashable (too big to fit in work_mem), the NOT IN query will be flattened to\r\nan ANTI JOIN and we won't need to use subplan again. However, when the subplan\r\nis hashable, we don't do the conversion and will need to use subplan later, patch v2.1\r\navoids planning the subquery twice in this case.\r\n\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL", "msg_date": "Tue, 18 Jun 2019 20:22:30 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "Hi,\r\n\r\nI'm submitting patch v2.2.\r\n\r\nThis version fixed an issue that involves CTE. Because we call subquery_planner before deciding whether to proceed with the transformation, we need to setup access to upper level CTEs at this point if the subquery contains any CTE RangeTblEntry.\r\n\r\nAlso added more test cases of NOT IN accessing CTEs, including recursive CTE. 
It's nice that CTE can use index now!\r\n\r\nLet me know if you have any comments.\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL", "msg_date": "Wed, 26 Jun 2019 21:26:16 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "On Wed, Jun 26, 2019 at 09:26:16PM +0000, Li, Zheng wrote:\n> Let me know if you have any comments.\n\nI have one: the latest patch visibly applies, but fails to build\nbecause of the recent API changes around lists in the backend code.\nSo a rebase is in order. The discussion has not moved a iota in the\nlast couple of months, still as the latest patch has not received\nreviews, I have moved it to next CF waiting on author.\n--\nMichael", "msg_date": "Sun, 1 Dec 2019 12:43:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "Hi Michael,\r\n\r\nHere is the latest rebased patch.\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL\r\n \r\n\r\nOn 11/30/19, 10:43 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n\r\n On Wed, Jun 26, 2019 at 09:26:16PM +0000, Li, Zheng wrote:\r\n > Let me know if you have any comments.\r\n \r\n I have one: the latest patch visibly applies, but fails to build\r\n because of the recent API changes around lists in the backend code.\r\n So a rebase is in order. 
The discussion has not moved a iota in the\r\n last couple of months, still as the latest patch has not received\r\n reviews, I have moved it to next CF waiting on author.\r\n --\r\n Michael", "msg_date": "Mon, 2 Dec 2019 16:25:20 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "At the top of the thread your co-author argued the beginning of this \nwork with \"findings about the performance of PostgreSQL, MySQL, and \nOracle on various subqueries:\"\n\nhttps://antognini.ch/2017/12/how-well-a-query-optimizer-handles-subqueries/\n\nI launched this test set with your \"not_in ...\" patch. Your optimization \nimproves only results of D10-D13 queries. Nothing has changed for bad \nplans of the E20-E27 and F20-F27 queries.\n\nFor example, we can replace E20 query:\nSELECT * FROM large WHERE n IN (SELECT n FROM small WHERE small.u = \nlarge.u); - execution time: 1370 ms, by\nSELECT * FROM large WHERE EXISTS (SELECT n,u FROM small WHERE (small.u = \nlarge.u) AND (large.n = small.n\n)) AND n IS NOT NULL; - execution time: 0.112 ms\n\nE21 query:\nSELECT * FROM large WHERE n IN (SELECT nn FROM small WHERE small.u = \nlarge.u); - 1553 ms, by\nSELECT * FROM large WHERE EXISTS (SELECT nn FROM small WHERE (small.u = \nlarge.u) AND (small.nn = large.n)); - 0.194 ms\n\nF27 query:\nSELECT * FROM large WHERE nn NOT IN (SELECT nn FROM small WHERE small.nu \n= large.u); - 1653.048 ms, by\nSELECT * FROM large WHERE NOT EXISTS (SELECT nn,nu FROM small WHERE \n(small.nu = large.u) AND (small.nn = large.nn)); - 274.019 ms\n\nAre you planning to make another patch for these cases?\n\nAlso i tried to increase work_mem up to 2GB: may be hashed subqueries \ncan improve situation? But hashing is not improved execution time of the \nqueries significantly.\n\nOn your test cases (from the comments of the patch) the subquery hashing \nhas the same execution time with queries No.13-17. 
At the queries \nNo.1-12 it is not so slow as without hashing, but works more slowly (up \nto 3 orders) than NOT IN optimization.\n\nOn 12/2/19 9:25 PM, Li, Zheng wrote:\n> Here is the latest rebased patch.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 5 Jan 2020 11:11:19 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "Hi Andrey,\r\n\r\nThanks for the comment!\r\n\r\nThe unimproved cases you mentioned all fall into the category “correlated subquery”. This category is explicitly disallowed by existing code to convert to join in convert_ANY_sublink_to_join:\r\n /*\r\n * The sub-select must not refer to any Vars of the parent query. (Vars of\r\n * higher levels should be okay, though.)\r\n */\r\n if (contain_vars_of_level((Node *) subselect, 1))\r\n return NULL;\r\n\r\nI think this is also the reason why hashed subplan is not used for such subqueries.\r\n\r\nIt's probably not always safe to convert a correlated subquery to join. We need to find out/prove when it’s safe/unsafe to convert such ANY subquery if we were to do so.\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL\r\n \r\n\r\nOn 1/5/20, 1:12 AM, \"Andrey Lepikhov\" <a.lepikhov@postgrespro.ru> wrote:\r\n\r\n At the top of the thread your co-author argued the beginning of this \r\n work with \"findings about the performance of PostgreSQL, MySQL, and \r\n Oracle on various subqueries:\"\r\n \r\n https://antognini.ch/2017/12/how-well-a-query-optimizer-handles-subqueries/\r\n \r\n I launched this test set with your \"not_in ...\" patch. Your optimization \r\n improves only results of D10-D13 queries. 
Nothing has changed for bad \r\n plans of the E20-E27 and F20-F27 queries.\r\n \r\n For example, we can replace E20 query:\r\n SELECT * FROM large WHERE n IN (SELECT n FROM small WHERE small.u = \r\n large.u); - execution time: 1370 ms, by\r\n SELECT * FROM large WHERE EXISTS (SELECT n,u FROM small WHERE (small.u = \r\n large.u) AND (large.n = small.n\r\n )) AND n IS NOT NULL; - execution time: 0.112 ms\r\n \r\n E21 query:\r\n SELECT * FROM large WHERE n IN (SELECT nn FROM small WHERE small.u = \r\n large.u); - 1553 ms, by\r\n SELECT * FROM large WHERE EXISTS (SELECT nn FROM small WHERE (small.u = \r\n large.u) AND (small.nn = large.n)); - 0.194 ms\r\n \r\n F27 query:\r\n SELECT * FROM large WHERE nn NOT IN (SELECT nn FROM small WHERE small.nu \r\n = large.u); - 1653.048 ms, by\r\n SELECT * FROM large WHERE NOT EXISTS (SELECT nn,nu FROM small WHERE \r\n (small.nu = large.u) AND (small.nn = large.nn)); - 274.019 ms\r\n \r\n Are you planning to make another patch for these cases?\r\n \r\n Also i tried to increase work_mem up to 2GB: may be hashed subqueries \r\n can improve situation? But hashing is not improved execution time of the \r\n queries significantly.\r\n \r\n On your test cases (from the comments of the patch) the subquery hashing \r\n has the same execution time with queries No.13-17. 
At the queries \r\n No.1-12 it is not so slow as without hashing, but works more slowly (up \r\n to 3 orders) than NOT IN optimization.\r\n \r\n On 12/2/19 9:25 PM, Li, Zheng wrote:\r\n > Here is the latest rebased patch.\r\n \r\n -- \r\n Andrey Lepikhov\r\n Postgres Professional\r\n https://postgrespro.com\r\n The Russian Postgres Company\r\n \r\n\r\n", "msg_date": "Mon, 6 Jan 2020 19:34:08 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "\n\nOn 1/7/20 12:34 AM, Li, Zheng wrote:\n> Hi Andrey,\n> \n> Thanks for the comment!\n> \n> The unimproved cases you mentioned all fall into the category “correlated subquery”. This category is explicitly disallowed by existing code to convert to join in convert_ANY_sublink_to_join:\n> /*\n> * The sub-select must not refer to any Vars of the parent query. (Vars of\n> * higher levels should be okay, though.)\n> */\n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL;\n> \n> I think this is also the reason why hashed subplan is not used for such subqueries.\n> \n> It's probably not always safe to convert a correlated subquery to join. We need to find out/prove when it’s safe/unsafe to convert such ANY subquery if we were to do so.\n> \n\nMaybe this part of code contains logical error?\nYou optimize only the special case of the \"NOT IN\" expression, equal to \nNOT EXISTS. 
The convert_EXISTS_sublink_to_join() routine can contain \nvars of the parent query.\nMaybe you can give a trivial example of this problem?\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 2 Apr 2020 09:50:58 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "\"Li, Zheng\" <zhelli@amazon.com> writes:\n> Here is the latest rebased patch.\n\nI noticed that the cfbot is failing to test this because of some trivial\nmerge conflicts, so here's a re-rebased version.\n\nI haven't reviewed this in any detail, but here's a couple of notes\nfrom having quickly looked through the patch:\n\n* I find it entirely unacceptable to stick some planner temporary\nfields into struct SubLink. If you need that storage you'll have\nto find some other place to put it. But in point of fact I don't\nthink you need it; it doesn't look to me to be critical to generate\nthe subquery's plan any earlier than make_subplan would have done it.\nMoreover, you should really strive to *not* do that, because it's\nlikely to get in the way of other future optimizations. As the\nexisting comment in make_subplan already suggests, we might want to\ndelay subplan planning even further than that in future.\n\n* I'm also not too happy with the (undocumented) rearrangement of\nreduce_outer_joins. There's a specific sequence of processing that\nthat's involved in, as documented at the top of prepjointree.c, and\nI doubt that you can just randomly call it from other places and expect\ngood results. In particular, since JOIN alias var flattening won't have\nhappened yet when this code is being run from pull_up_sublinks, it's\nunlikely that reduce_outer_joins will reliably get the same answers it\nwould get normally. 
I also wonder whether it's safe to make the\nparsetree changes it makes earlier than normal, and whether it will be\nproblematic to run it twice on the same tree, and whether its rather\nindirect connection to distribute_qual_to_rels is going to misbehave.\n\n* The proposed test additions seem to about triple the runtime of\nsubselect.sql. This seems excessive. I also wonder why it's necessary\nfor this test to build its own large tables; couldn't it re-use ones\nthat already exist in the regression database?\n\n* Not really sure that we need a new planner GUC for this, but if we\ndo, it needs to be documented.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 24 Mar 2020 16:29:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "Hi Tom,\r\n\r\nThanks for the feedback.\r\n \r\n * I find it entirely unacceptable to stick some planner temporary\r\n fields into struct SubLink. If you need that storage you'll have\r\n to find some other place to put it. But in point of fact I don't\r\n think you need it; it doesn't look to me to be critical to generate\r\n the subquery's plan any earlier than make_subplan would have done it.\r\n Moreover, you should really strive to *not* do that, because it's\r\n likely to get in the way of other future optimizations. As the\r\n existing comment in make_subplan already suggests, we might want to\r\n delay subplan planning even further than that in future.\r\n\r\n The reason for calling make_subplan this early is that we want to\r\nCall subplan_is_hashable(plan), to decide whether to proceed with the proposed\r\ntransformation. 
We try to stick with the fast hashed subplan when possible to avoid\r\npotential performance degradation from the transformation which may restrict the\r\nplanner to choose Nested Loop Anti Join in order to handle Null properly,\r\nthe following is an example from subselect.out:\r\nexplain (costs false) select * from s where n not in (select u from l);\r\n QUERY PLAN\r\n-----------------------------------------------\r\n Nested Loop Anti Join\r\n InitPlan 1 (returns $0)\r\n -> Seq Scan on l l_1\r\n -> Seq Scan on s\r\n Filter: ((n IS NOT NULL) OR (NOT $0))\r\n -> Index Only Scan using l_u on l\r\n Index Cond: (u = s.n)\r\n\r\nHowever, if the subplan is not hashable, the above Nested Loop Anti Join is\r\nactually faster.\r\n \r\n * I'm also not too happy with the (undocumented) rearrangement of\r\n reduce_outer_joins. There's a specific sequence of processing that\r\n that's involved in, as documented at the top of prepjointree.c, and\r\n I doubt that you can just randomly call it from other places and expect\r\n good results. In particular, since JOIN alias var flattening won't have\r\n happened yet when this code is being run from pull_up_sublinks, it's\r\n unlikely that reduce_outer_joins will reliably get the same answers it\r\n would get normally. I also wonder whether it's safe to make the\r\n parsetree changes it makes earlier than normal, and whether it will be\r\n problematic to run it twice on the same tree, and whether its rather\r\n indirect connection to distribute_qual_to_rels is going to misbehave.\r\n\r\n The rearrangement of reduce_outer_joins was to make the null test function\r\nis_node_nonnullable() more accurate. Later we added strict predicates logic in\r\nis_node_nonnullable(), so I think we can get rid of the rearrangement of\r\nreduce_outer_joins now without losing accuracy.\r\n \r\n * The proposed test additions seem to about triple the runtime of\r\n subselect.sql. This seems excessive. 
I also wonder why it's necessary\r\n for this test to build its own large tables; couldn't it re-use ones\r\n that already exist in the regression database?\r\n\r\n I added a lot of test cases. But yes, I can reuse the existing large table if\r\nthere is one that doesn't fit in 64KB work_mem.\r\n \r\n * Not really sure that we need a new planner GUC for this, but if we\r\n do, it needs to be documented.\r\n\r\n The new GUC is just in case if anything goes wrong, the user can\r\neasily turn it off.\r\n\r\nRegards,\r\nZheng \r\n \r\n\r\n", "msg_date": "Tue, 24 Mar 2020 22:32:00 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "\"Li, Zheng\" <zhelli@amazon.com> writes:\n> * I find it entirely unacceptable to stick some planner temporary\n> fields into struct SubLink. If you need that storage you'll have\n> to find some other place to put it. But in point of fact I don't\n> think you need it; it doesn't look to me to be critical to generate\n> the subquery's plan any earlier than make_subplan would have done it.\n> Moreover, you should really strive to *not* do that, because it's\n> likely to get in the way of other future optimizations. As the\n> existing comment in make_subplan already suggests, we might want to\n> delay subplan planning even further than that in future.\n\n> The reason for calling make_subplan this early is that we want to\n> Call subplan_is_hashable(plan), to decide whether to proceed with the proposed\n> transformation.\n\nWell, you're going to have to find another way, because this one won't do.\n\nIf you really need to get whacked over the head with a counterexample for\nthis approach, consider what happens if some part of the planner decides\nto pass the SubLink through copyObject, expression_tree_mutator, etc\nin between where you've done the planning and where make_subplan looks\nat it. 
Since you haven't taught copyfuncs.c about these fields, they'll\nsemi-accidentally wind up as NULL afterwards, meaning you lost the\ninformation anyway. (In fact, I wouldn't be surprised if that's happening\nalready in some cases; you couldn't really tell, since make_subplan will\njust repeat the work.) On the other hand, you can't have copyfuncs.c\ncopying such fields either --- we don't have copyfuncs support for\nPlannerInfo, and if we did, the case would end up as infinite recursion.\nNor would it be particularly cool to try to fake things out by copying the\npointers as scalars; that will lead to dangling pointers later on.\n\nBTW, so far as I can see, the only reason you're bothering with the whole\nthing is to compare the size of the subquery output with work_mem, because\nthat's all that subplan_is_hashable does. I wonder whether that\nconsideration is even still necessary in the wake of 1f39bce02. If it is,\nI wonder whether there isn't a cheaper way to figure it out. (Note\nsimilar comment in make_subplan.)\n\nAlso ...\n\n> We try to stick with the fast hashed subplan when possible to avoid\n> potential performance degradation from the transformation which may\n> restrict the planner to choose Nested Loop Anti Join in order to handle\n> Null properly,\n\nBut can't you detect that case directly? It seems like you'd need to\nfigure out the NULL situation anyway to know whether the transformation\nto antijoin is valid in the first place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Mar 2020 19:49:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": " >BTW, so far as I can see, the only reason you're bothering with the whole\r\n thing is to compare the size of the subquery output with work_mem, because\r\n that's all that subplan_is_hashable does. I wonder whether that\r\n consideration is even still necessary in the wake of 1f39bce02. 
If it is,\r\n I wonder whether there isn't a cheaper way to figure it out. (Note\r\n similar comment in make_subplan.)\r\n\r\n The comment in make_subplan says there is no cheaper way to figure out:\r\n /* At present, however, we can only check hashability after\r\n * we've made the subplan :-(. (Determining whether it'll fit in work_mem\r\n * is the really hard part.)\r\n */\r\n\r\n I don't see why commit 1f39bce02 is related to this problem. Can you expand on this?\r\n \r\n >But can't you detect that case directly? It seems like you'd need to\r\n figure out the NULL situation anyway to know whether the transformation\r\n to antijoin is valid in the first place.\r\n \r\n Yes, we do need to figure out the NULL situation, and there is always valid transformation\r\n to antijoin, it's just in the NULL case we need to stuff additional clause to the anti join\r\n condition, and in these cases the transformation actually outperforms Subplan (non-hashed),\r\n but underperforms the hashed Subplan. The unmodified anti hash join has similar performance\r\n compared to hashed Subplan.\r\n\r\n", "msg_date": "Thu, 26 Mar 2020 20:58:24 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "You should do small rebase (conflict with 911e7020770) and pgindent of \nthe patch to repair problems with long lines and backspaces.\n\nI am reviewing your patch in small steps. Questions:\n1. In the find_innerjoined_rels() routine you stop descending on \nJOIN_FULL node type. I think it is wrong because if var has NOT NULL \nconstraint, full join can't change it to NULL.\n2. The convert_NOT_IN_to_join() routine is ok, but its name is \nmisleading. May be you can use something like make_NOT_IN_to_join_quals()?\n3. pull_up_sublinks_qual_recurse(). Comment:\n\"Return pullout predicate (x is NOT NULL)...\"\nmay be change to\n\"Return pullout predicate (x is NOT NULL or NOT EXISTS...)\"?\n4. 
is_node_nonnullable():\nI think one more case of non-nullable var may be foreign key constraint.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 2 Apr 2020 09:50:58 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "On 3/26/20 4:58 PM, Li, Zheng wrote:\n> >BTW, so far as I can see, the only reason you're bothering with the whole\n> thing is to compare the size of the subquery output with work_mem, because\n> that's all that subplan_is_hashable does. I wonder whether that\n> consideration is even still necessary in the wake of 1f39bce02. If it is,\n> I wonder whether there isn't a cheaper way to figure it out. (Note\n> similar comment in make_subplan.)\n> \n> The comment in make_subplan says there is no cheaper way to figure out:\n> /* At present, however, we can only check hashability after\n> * we've made the subplan :-(. (Determining whether it'll fit in work_mem\n> * is the really hard part.)\n> */\n> \n> I don't see why commit 1f39bce02 is related to this problem. Can you expand on this?\n> \n> >But can't you detect that case directly? It seems like you'd need to\n> figure out the NULL situation anyway to know whether the transformation\n> to antijoin is valid in the first place.\n> \n> Yes, we do need to figure out the NULL situation, and there is always valid transformation\n> to antijoin, it's just in the NULL case we need to stuff additional clause to the anti join\n> condition, and in these cases the transformation actually outperforms Subplan (non-hashed),\n> but underperforms the hashed Subplan. 
The unmodified anti hash join has similar performance\n> compared to hashed Subplan.\n\nThere seem to be enough questions about this implementation that I think \nit makes sense to mark this patch Returned with Feedback.\n\nFeel free to resubmit it to a future CF when there is more of a \nconsensus on the implementation.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:32:09 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" } ]
[ { "msg_contents": "Hi,\n\nThe current implementation of multi-column MCV lists (added in this\ncycle) uses a fairly simple algorithm to pick combinations to include in\nthe MCV list. We just compute a minimum number of occurences, and then\ninclude all entries sampled more often. See get_mincount_for_mcv_list().\n\nBy coincidence I received a real-world data set where this does not work\nparticularly well, so I'm wondering if we can improve this somehow. It\ndoes not make the estimates worse than without the MCV lists, so I don't\nthink we have an issue, but I wonder if we could do better.\n\nThe data set is very simple - it's a single table of valid addresses\n(city/street name and street number):\n\n CREATE TABLE addresses (\n city_name text,\n street_name text,\n street_no int\n ); \n\nData with Czech addresses are available here (it's ~9.3MB, so I'm not\nattaching it directly)\n\n https://drive.google.com/file/d/1EiZGsk6s5hqzZrL7t5KiaqByJnBsndfA\n\nand attached is a SQL script I used to compute a \"virtual\" MCV list.\n\nThere are about 3M records, so let's query for a street name in one of\nthe cities:\n\n EXPLAIN ANALYZE\n SELECT * FROM addresses\n WHERE city_name = 'Most' AND street_name = 'Pionýrů'\n\n\n QUERY PLAN\n ----------------------------------------------------------------------------\n Seq Scan on addresses (cost=0.00..62645.38 rows=1 width=25)\n (actual time=117.225..204.291 rows=779 loops=1)\n Filter: ((city_name = 'Most'::text) AND (street_name = 'Pionýrů'::text))\n Rows Removed by Filter: 2923846\n Planning Time: 0.065 ms\n Execution Time: 204.525 ms\n (5 rows)\n\nIt's true 779 rows is only a tiny fraction of the data set (~0.025%),\nbut this data set is a bit weird in one other aspect - about half of the\nrecords has empty (NULL) street_name, it's just city + number. 
Small\nvillages simply don't have streets at all, and large cities probably\nstarted as small villages, so they have such addresses too.\n\nThis however means that by choosing the MCV entries solely based on the\nnumber of occurrences in the sample, we end up with MCV lists where vast\nmajority of entries has NULL street name.\n\nThat's why we got such poor estimate in the example query, despite the\nfact that the city/street combination is the most frequent in the data\nset (with non-NULL street name).\n\nThe other weird thing is that frequency of NULL street names is fairly\nuniform in the whole data set. In total about 50% addresses match that,\nand for individual cities it's generally between 25% and 100%, so the\nestimate is less than 2x off in those cases.\n\nBut for addresses with non-NULL street names, the situation is very\ndifferent. Some street names are unique to a single city, etc.\n\nOverall, this means we end up with MCV list with entries representing\nthe mostly-uniform part of the data set, instead of prefering the\nentries that are truly skewed.\n\nSo I'm wondering if we could/should rethink the algorithm, so look at\nthe frequency and base_frequency, and maybe pick entries based on their\nratio, or something like that.\n\nFor example, we might sort the entries by\n\n abs(freq - base_freq) / freq\n\nwhich seems like a reasonable \"measure of mis-estimate\". Or maybe we\nmight look just at abs(freq - base_freq)? I think the first option would\nbe better, because (1 row vs. 100.000 rows) is probably worse than (1M\nrows vs. 2M rows).\n\nOf course, this is a challenging problem, for a couple of reasons.\n\nFirstly, picking simply the most frequent groups is very simple and it\ngives us additional information about the largest group (which may be\nuseful elsewhere, e.g. the incremental sort patch).\n\nSecondly, if the correlated combinations are less frequent, how reliably\ncan we even estimate the frequency from a sample? 
The combination in the\nexample query was ~0.02% of the data, so how likely it's to be sampled?\n\nAlternatively, it's possible to increase statistics target to make the\nsample larger, but that also keeps more entries in the MCV list,\nincluding the correlated ones. So maybe the best thing we can do is to\njust rely on that?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 18 Jun 2019 22:59:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Choosing values for multivariate MCV lists" }, { "msg_contents": "On Tue, 18 Jun 2019 at 21:59, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> The current implementation of multi-column MCV lists (added in this\n> cycle) uses a fairly simple algorithm to pick combinations to include in\n> the MCV list. We just compute a minimum number of occurences, and then\n> include all entries sampled more often. See get_mincount_for_mcv_list().\n>\n> [snip]\n>\n> This however means that by choosing the MCV entries solely based on the\n> number of occurrences in the sample, we end up with MCV lists where vast\n> majority of entries has NULL street name.\n>\n> That's why we got such poor estimate in the example query, despite the\n> fact that the city/street combination is the most frequent in the data\n> set (with non-NULL street name).\n>\n\nI think the fact that they're NULL is a bit of a red herring because\nwe're treating NULL just like any other value. The same thing would\nhappen if there were some other very common non-NULL value that\ndominated the dataset.\n\n> The other weird thing is that frequency of NULL street names is fairly\n> uniform in the whole data set. 
In total about 50% addresses match that,\n> and for individual cities it's generally between 25% and 100%, so the\n> estimate is less than 2x off in those cases.\n>\n> But for addresses with non-NULL street names, the situation is very\n> different. Some street names are unique to a single city, etc.\n>\n> Overall, this means we end up with MCV list with entries representing\n> the mostly-uniform part of the data set, instead of prefering the\n> entries that are truly skewed.\n>\n> So I'm wondering if we could/should rethink the algorithm, so look at\n> the frequency and base_frequency, and maybe pick entries based on their\n> ratio, or something like that.\n>\n\nHmm, interesting. I think I would like to see a more rigorous\njustification for changing the algorithm deciding which values to\nkeep.\n\nIf I've understood correctly, I think the problem is this: The\nmincount calculation is a good way of identifying MCV candidates to\nkeep, because it ensures that we don't keep values that don't appear\nsufficiently often to produce accurate estimates, and ideally we'd\nkeep everything with count >= mincount. However, in the case were\nthere are more than stats_target items with count >= mincount, simply\nordering by count and keeping the most commonly seen values isn't\nnecessarily the best strategy in the case of multivariate statistics.\n\nTo identify what the best strategy might be, I think you need to\nexamine the errors that would occur as a result of *not* keeping a\nvalue in the multivariate MCV list. Given a value that appears with\ncount >= mincount, N*freq ought to be a reasonable estimate for the\nactual number of occurrences of that value in the table, and\nN*base_freq ought to be a reasonable estimate for the\nunivariate-stats-based estimate that it would be given if it weren't\nkept in the multivariate MCV list. 
So the absolute error resulting\nfrom not keeping that value would be\n\n N * Abs(freq - base_freq)\n\nBut then I think we ought to take into account how often we're likely\nto get that error. If we're simply picking values at random, the\nlikelihood of getting that value is just its frequency, so the average\naverage absolute error would be\n\n Sum( N * freq[i] * Abs(freq[i] - base_freq[i]) )\n\nwhich suggests that, if we wanted to reduce the average absolute error\nof the estimates, we should order by freq*Abs(freq-base_freq) and keep\nthe top n in the MCV list.\n\nOn the other hand, if we wanted to reduce the average *relative* error\nof the estimates, we might instead order by Abs(freq-base_freq).\n\n> For example, we might sort the entries by\n>\n> abs(freq - base_freq) / freq\n>\n\nI'm not sure it's easy to justify ordering by Abs(freq-base_freq)/freq\nthough, because that would seem likely to put too much weight on the\nleast commonly occurring values.\n\n> Of course, this is a challenging problem, for a couple of reasons.\n>\n> Firstly, picking simply the most frequent groups is very simple and it\n> gives us additional information about the largest group (which may be\n> useful elsewhere, e.g. the incremental sort patch).\n>\n\nYes, you would have to keep in mind that changing the algorithm would\nmean that the MCV list no longer represented all the most common\nvalues. For example, it would no longer be valid to assume that no\nvalue appeared more often than the first value in the MCV list. I'm\nnot sure that we currently do that though.\n\n> Secondly, if the correlated combinations are less frequent, how reliably\n> can we even estimate the frequency from a sample? The combination in the\n> example query was ~0.02% of the data, so how likely it's to be sampled?\n>\n\nI think that's OK as long as we keep the mincount filter in the new\nalgorithm. 
I think some experimentation is definitely worthwhile here,\nbut it looks plausible that a decent approach might be:\n\n1). Discard all values with count < mincount.\n2). Order by freq*Abs(freq-base_freq) (or possibly just\nAbs(freq-base_freq)) and keep the top n, where n is the stats target.\n3). Perhaps re-sort by count, so the final list is in frequency order\nagain? Not sure if that's a desirable property to keep.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 20 Jun 2019 06:55:41 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Thu, Jun 20, 2019 at 06:55:41AM +0100, Dean Rasheed wrote:\n>On Tue, 18 Jun 2019 at 21:59, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> The current implementation of multi-column MCV lists (added in this\n>> cycle) uses a fairly simple algorithm to pick combinations to include in\n>> the MCV list. We just compute a minimum number of occurences, and then\n>> include all entries sampled more often. See get_mincount_for_mcv_list().\n>>\n>> [snip]\n>>\n>> This however means that by choosing the MCV entries solely based on the\n>> number of occurrences in the sample, we end up with MCV lists where vast\n>> majority of entries has NULL street name.\n>>\n>> That's why we got such poor estimate in the example query, despite the\n>> fact that the city/street combination is the most frequent in the data\n>> set (with non-NULL street name).\n>>\n>\n>I think the fact that they're NULL is a bit of a red herring because\n>we're treating NULL just like any other value. The same thing would\n>happen if there were some other very common non-NULL value that\n>dominated the dataset.\n>\n\nI wasn't really suggesting the NULL is an issue - sorry if that wasn't\nclear. It might be any other value, as long as it's very common (and\nroughly uniform) in all cities. 
So yes, I agree with you here.\n\n>> The other weird thing is that frequency of NULL street names is fairly\n>> uniform in the whole data set. In total about 50% addresses match that,\n>> and for individual cities it's generally between 25% and 100%, so the\n>> estimate is less than 2x off in those cases.\n>>\n>> But for addresses with non-NULL street names, the situation is very\n>> different. Some street names are unique to a single city, etc.\n>>\n>> Overall, this means we end up with MCV list with entries representing\n>> the mostly-uniform part of the data set, instead of prefering the\n>> entries that are truly skewed.\n>>\n>> So I'm wondering if we could/should rethink the algorithm, so look at\n>> the frequency and base_frequency, and maybe pick entries based on their\n>> ratio, or something like that.\n>>\n>\n>Hmm, interesting. I think I would like to see a more rigorous\n>justification for changing the algorithm deciding which values to\n>keep.\n>\n\nSure, I'm not going to pretend my proposals were particularly rigorous, it\nwas more a collection of random ideas.\n\n>If I've understood correctly, I think the problem is this: The\n>mincount calculation is a good way of identifying MCV candidates to\n>keep, because it ensures that we don't keep values that don't appear\n>sufficiently often to produce accurate estimates, and ideally we'd\n>keep everything with count >= mincount. However, in the case were\n>there are more than stats_target items with count >= mincount, simply\n>ordering by count and keeping the most commonly seen values isn't\n>necessarily the best strategy in the case of multivariate statistics.\n>\n\nYes.\n\n>To identify what the best strategy might be, I think you need to\n>examine the errors that would occur as a result of *not* keeping a\n>value in the multivariate MCV list. 
Given a value that appears with\n>count >= mincount, N*freq ought to be a reasonable estimate for the\n>actual number of occurrences of that value in the table, and\n>N*base_freq ought to be a reasonable estimate for the\n>univariate-stats-based estimate that it would be given if it weren't\n>kept in the multivariate MCV list. So the absolute error resulting\n>from not keeping that value would be\n>\n> N * Abs(freq - base_freq)\n>\n\nYeah. Considering N is the same for all groups in the sample, this\nwould mean the same thing as Abs(freq - base_freq).\n\n>But then I think we ought to take into account how often we're likely\n>to get that error. If we're simply picking values at random, the\n>likelihood of getting that value is just its frequency, so the average\n>average absolute error would be\n>\n> Sum( N * freq[i] * Abs(freq[i] - base_freq[i]) )\n>\n>which suggests that, if we wanted to reduce the average absolute error\n>of the estimates, we should order by freq*Abs(freq-base_freq) and keep\n>the top n in the MCV list.\n>\n\nInteresting idea. But I'm not sure it makes much sense to assume the rows\nare picked randomly - OTOH we don't really know anything about how the\ndata will be queries, so we may just as well do that.\n\n>On the other hand, if we wanted to reduce the average *relative* error\n>of the estimates, we might instead order by Abs(freq-base_freq).\n>\n\nHmmm, yeah. I don't know what's the right choice here, TBH.\n\n>> For example, we might sort the entries by\n>>\n>> abs(freq - base_freq) / freq\n>>\n>\n>I'm not sure it's easy to justify ordering by Abs(freq-base_freq)/freq\n>though, because that would seem likely to put too much weight on the\n>least commonly occurring values.\n>\n\nBut would that be an issue, or a good thing? I mean, as long as the item\nis above mincount, we take the counts as reliable. As I explained, my\nmotivation for proposing that was that both\n\n ... (cost=... rows=1 ...) (actual=... rows=1000001 ...)\n\nand\n\n ... 
(cost=... rows=1000000 ...) (actual=... rows=2000000 ...)\n\nhave exactly the same Abs(freq - base_freq), but I think we both agree\nthat the first misestimate is much more dangerous, because it's off by six\norders of magnitude. I think the MCV algorithm should reflect this.\n\n\n>> Of course, this is a challenging problem, for a couple of reasons.\n>>\n>> Firstly, picking simply the most frequent groups is very simple and it\n>> gives us additional information about the largest group (which may be\n>> useful elsewhere, e.g. the incremental sort patch).\n>>\n>\n>Yes, you would have to keep in mind that changing the algorithm would\n>mean that the MCV list no longer represented all the most common\n>values. For example, it would no longer be valid to assume that no\n>value appeared more often than the first value in the MCV list. I'm\n>not sure that we currently do that though.\n>\n\nWe don't, but I've proposed using some of this knowledge in the\nincremental sort patch. But I think we might actually extend the MCV list\nto store some extra information (e.g. frequency of the largest groups,\neven if we don't store it).\n\n>> Secondly, if the correlated combinations are less frequent, how reliably\n>> can we even estimate the frequency from a sample? The combination in the\n>> example query was ~0.02% of the data, so how likely it's to be sampled?\n>>\n>\n>I think that's OK as long as we keep the mincount filter in the new\n>algorithm. I think some experimentation is definitely worthwhile here,\n>but it looks plausible that a decent approach might be:\n>\n>1). Discard all values with count < mincount.\n>2). Order by freq*Abs(freq-base_freq) (or possibly just\n>Abs(freq-base_freq)) and keep the top n, where n is the stats target.\n>3). Perhaps re-sort by count, so the final list is in frequency order\n>again? Not sure if that's a desirable property to keep.\n>\n\nWill try. 
Thanks for the feedback.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 21 Jun 2019 00:35:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Thu, 20 Jun 2019 at 23:35, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Jun 20, 2019 at 06:55:41AM +0100, Dean Rasheed wrote:\n>\n> >I'm not sure it's easy to justify ordering by Abs(freq-base_freq)/freq\n> >though, because that would seem likely to put too much weight on the\n> >least commonly occurring values.\n>\n> But would that be an issue, or a good thing? I mean, as long as the item\n> is above mincount, we take the counts as reliable. As I explained, my\n> motivation for proposing that was that both\n>\n> ... (cost=... rows=1 ...) (actual=... rows=1000001 ...)\n>\n> and\n>\n> ... (cost=... rows=1000000 ...) (actual=... rows=2000000 ...)\n>\n> have exactly the same Abs(freq - base_freq), but I think we both agree\n> that the first misestimate is much more dangerous, because it's off by six\n> orders of magnitude.\n>\n\nHmm, that's a good example. 
That definitely suggests that we should be\ntrying to minimise the relative error, but also perhaps that what we\nshould be looking at is actually just the ratio freq / base_freq,\nrather than their difference.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 21 Jun 2019 08:50:33 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Fri, Jun 21, 2019 at 08:50:33AM +0100, Dean Rasheed wrote:\n>On Thu, 20 Jun 2019 at 23:35, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Thu, Jun 20, 2019 at 06:55:41AM +0100, Dean Rasheed wrote:\n>>\n>> >I'm not sure it's easy to justify ordering by Abs(freq-base_freq)/freq\n>> >though, because that would seem likely to put too much weight on the\n>> >least commonly occurring values.\n>>\n>> But would that be an issue, or a good thing? I mean, as long as the item\n>> is above mincount, we take the counts as reliable. As I explained, my\n>> motivation for proposing that was that both\n>>\n>> ... (cost=... rows=1 ...) (actual=... rows=1000001 ...)\n>>\n>> and\n>>\n>> ... (cost=... rows=1000000 ...) (actual=... rows=2000000 ...)\n>>\n>> have exactly the same Abs(freq - base_freq), but I think we both agree\n>> that the first misestimate is much more dangerous, because it's off by six\n>> orders of magnitude.\n>>\n>\n>Hmm, that's a good example. That definitely suggests that we should be\n>trying to minimise the relative error, but also perhaps that what we\n>should be looking at is actually just the ratio freq / base_freq,\n>rather than their difference.\n>\n\nAttached are patches that implement this (well, the first one optimizes\nhow the base frequency is computed, which helps for high statistic target\nvalues). The 0002 part picks the items based on\n\n Max(freq/base_freq, base_freq/freq)\n\nIt did help with the addresses data set quite a bit, but I'm sure it needs\nmore testing. 
I've also tried using\n\n freq * abs(freq - base_freq)\n\nbut the estimates were not as good.\n\nOne annoying thing I noticed is that the base_frequency tends to end up\nbeing 0, most likely due to getting too small. It's a bit strange, though,\nbecause with statistic target set to 10k the smallest frequency for a\nsingle column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\nsomething the float8 can represent).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 22 Jun 2019 16:10:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> One annoying thing I noticed is that the base_frequency tends to end up\n> being 0, most likely due to getting too small. It's a bit strange, though,\n> because with statistic target set to 10k the smallest frequency for a\n> single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n> something the float8 can represent).\n>\n\nYeah, it should be impossible for the base frequency to underflow to\n0. However, it looks like the problem is with mcv_list_items()'s use\nof %f to convert to text, which is pretty ugly.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 23 Jun 2019 20:48:26 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> One annoying thing I noticed is that the base_frequency tends to end up\n>> being 0, most likely due to getting too small. 
It's a bit strange, though,\n>> because with statistic target set to 10k the smallest frequency for a\n>> single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>> something the float8 can represent).\n>>\n>\n>Yeah, it should be impossible for the base frequency to underflow to\n>0. However, it looks like the problem is with mcv_list_items()'s use\n>of %f to convert to text, which is pretty ugly.\n>\n\nYeah, I realized that too, eventually. One way to fix that would be\nadding %.15f to the sprintf() call, but that just adds ugliness. It's\nprobably time to rewrite the function to build the tuple from datums,\ninstead of relying on BuildTupleFromCStrings.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 23 Jun 2019 22:23:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Sun, Jun 23, 2019 at 10:23:19PM +0200, Tomas Vondra wrote:\n>On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n>>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>One annoying thing I noticed is that the base_frequency tends to end up\n>>>being 0, most likely due to getting too small. It's a bit strange, though,\n>>>because with statistic target set to 10k the smallest frequency for a\n>>>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>>>something the float8 can represent).\n>>>\n>>\n>>Yeah, it should be impossible for the base frequency to underflow to\n>>0. However, it looks like the problem is with mcv_list_items()'s use\n>>of %f to convert to text, which is pretty ugly.\n>>\n>\n>Yeah, I realized that too, eventually. One way to fix that would be\n>adding %.15f to the sprintf() call, but that just adds ugliness. 
It's\n>probably time to rewrite the function to build the tuple from datums,\n>instead of relying on BuildTupleFromCStrings.\n>\n\nOK, attached is a patch doing this. It's pretty simple, and it does\nresolve the issue with frequency precision.\n\nThere's one issue with the signature, though - currently the function\nreturns null flags as bool array, but values are returned as simple\ntext value (formatted in array-like way, but still just a text).\n\nIn the attached patch I've reworked both to proper arrays, but obviously\nthat'd require a CATVERSION bump - and there's not much apetite for that\npast beta2, I suppose. So I'll just undo this bit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 24 Jun 2019 01:42:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Sat, Jun 22, 2019 at 04:10:52PM +0200, Tomas Vondra wrote:\n>On Fri, Jun 21, 2019 at 08:50:33AM +0100, Dean Rasheed wrote:\n>>On Thu, 20 Jun 2019 at 23:35, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>On Thu, Jun 20, 2019 at 06:55:41AM +0100, Dean Rasheed wrote:\n>>>\n>>>>I'm not sure it's easy to justify ordering by Abs(freq-base_freq)/freq\n>>>>though, because that would seem likely to put too much weight on the\n>>>>least commonly occurring values.\n>>>\n>>>But would that be an issue, or a good thing? I mean, as long as the item\n>>>is above mincount, we take the counts as reliable. As I explained, my\n>>>motivation for proposing that was that both\n>>>\n>>> ... (cost=... rows=1 ...) (actual=... rows=1000001 ...)\n>>>\n>>>and\n>>>\n>>> ... (cost=... rows=1000000 ...) (actual=... 
rows=2000000 ...)\n>>>\n>>>have exactly the same Abs(freq - base_freq), but I think we both agree\n>>>that the first misestimate is much more dangerous, because it's off by six\n>>>orders of magnitude.\n>>>\n>>\n>>Hmm, that's a good example. That definitely suggests that we should be\n>>trying to minimise the relative error, but also perhaps that what we\n>>should be looking at is actually just the ratio freq / base_freq,\n>>rather than their difference.\n>>\n>\n>Attached are patches that implement this (well, the first one optimizes\n>how the base frequency is computed, which helps for high statistic target\n>values). The 0002 part picks the items based on\n>\n> Max(freq/base_freq, base_freq/freq)\n>\n>It did help with the addresses data set quite a bit, but I'm sure it needs\n>more testing. I've also tried using\n>\n> freq * abs(freq - base_freq)\n>\n>but the estimates were not as good.\n>\n>One annoying thing I noticed is that the base_frequency tends to end up\n>being 0, most likely due to getting too small. It's a bit strange, though,\n>because with statistic target set to 10k the smallest frequency for a\n>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>something the float8 can represent).\n>\n\nFWIW while doing more tests on this, I've realized a rather annoying\nbehavior while increasing the statistics target.\n\nWith the current algorithm picking values merely based on frequency, the\nMCV list is expanded in a stable way. Increasing the statistics target\nmeans the MCV list may grow, but the larger MCV list will contain the\nsmaller MCV one (ignoring changes due to a differences in the sample).\n\nAfter switching to the two-phase algorithm (first picking candidates\nbased on mincount, then picking the items based on error) that's no\nlonger true. 
I've repeatedly seen cases when increasing the target\nlowered mincount, adding candidates with larger frequency errors,\nremoving some of the items from the \"smaller\" MCV list.\n\nIn practice this means that increasing the statistics target may easily\nmake the estimates much worse (for some of the values).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 24 Jun 2019 01:54:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Mon, Jun 24, 2019 at 01:42:32AM +0200, Tomas Vondra wrote:\n>On Sun, Jun 23, 2019 at 10:23:19PM +0200, Tomas Vondra wrote:\n>>On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n>>>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>>One annoying thing I noticed is that the base_frequency tends to end up\n>>>>being 0, most likely due to getting too small. It's a bit strange, though,\n>>>>because with statistic target set to 10k the smallest frequency for a\n>>>>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>>>>something the float8 can represent).\n>>>>\n>>>\n>>>Yeah, it should be impossible for the base frequency to underflow to\n>>>0. However, it looks like the problem is with mcv_list_items()'s use\n>>>of %f to convert to text, which is pretty ugly.\n>>>\n>>\n>>Yeah, I realized that too, eventually. One way to fix that would be\n>>adding %.15f to the sprintf() call, but that just adds ugliness. It's\n>>probably time to rewrite the function to build the tuple from datums,\n>>instead of relying on BuildTupleFromCStrings.\n>>\n>\n>OK, attached is a patch doing this. 
It's pretty simple, and it does\n>resolve the issue with frequency precision.\n>\n>There's one issue with the signature, though - currently the function\n>returns null flags as bool array, but values are returned as simple\n>text value (formatted in array-like way, but still just a text).\n>\n>In the attached patch I've reworked both to proper arrays, but obviously\n>that'd require a CATVERSION bump - and there's not much apetite for that\n>past beta2, I suppose. So I'll just undo this bit.\n>\n\nMeh, I forgot to attach the patch, of course ...\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 24 Jun 2019 01:56:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Mon, 24 Jun 2019 at 00:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Jun 23, 2019 at 10:23:19PM +0200, Tomas Vondra wrote:\n> >On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n> >>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >>>One annoying thing I noticed is that the base_frequency tends to end up\n> >>>being 0, most likely due to getting too small. It's a bit strange, though,\n> >>>because with statistic target set to 10k the smallest frequency for a\n> >>>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n> >>>something the float8 can represent).\n> >>>\n> >>\n> >>Yeah, it should be impossible for the base frequency to underflow to\n> >>0. However, it looks like the problem is with mcv_list_items()'s use\n> >>of %f to convert to text, which is pretty ugly.\n> >>\n> >\n> >Yeah, I realized that too, eventually. One way to fix that would be\n> >adding %.15f to the sprintf() call, but that just adds ugliness. 
It's\n> >probably time to rewrite the function to build the tuple from datums,\n> >instead of relying on BuildTupleFromCStrings.\n> >\n>\n> OK, attached is a patch doing this. It's pretty simple, and it does\n> resolve the issue with frequency precision.\n>\n> There's one issue with the signature, though - currently the function\n> returns null flags as bool array, but values are returned as simple\n> text value (formatted in array-like way, but still just a text).\n>\n> In the attached patch I've reworked both to proper arrays, but obviously\n> that'd require a CATVERSION bump - and there's not much apetite for that\n> past beta2, I suppose. So I'll just undo this bit.\n>\n\nHmm, I didn't spot that the old code was using a single text value\nrather than a text array. That's clearly broken, especially since it\nwasn't even necessarily constructing a valid textual representation of\nan array (e.g., if an individual value's textual representation\nincluded the array markers \"{\" or \"}\").\n\nIMO fixing this to return a text array is worth doing, even though it\nmeans a catversion bump.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 24 Jun 2019 14:54:01 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Mon, Jun 24, 2019 at 02:54:01PM +0100, Dean Rasheed wrote:\n>On Mon, 24 Jun 2019 at 00:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sun, Jun 23, 2019 at 10:23:19PM +0200, Tomas Vondra wrote:\n>> >On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n>> >>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >>>One annoying thing I noticed is that the base_frequency tends to end up\n>> >>>being 0, most likely due to getting too small. 
It's a bit strange, though,\n>> >>>because with statistic target set to 10k the smallest frequency for a\n>> >>>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>> >>>something the float8 can represent).\n>> >>>\n>> >>\n>> >>Yeah, it should be impossible for the base frequency to underflow to\n>> >>0. However, it looks like the problem is with mcv_list_items()'s use\n>> >>of %f to convert to text, which is pretty ugly.\n>> >>\n>> >\n>> >Yeah, I realized that too, eventually. One way to fix that would be\n>> >adding %.15f to the sprintf() call, but that just adds ugliness. It's\n>> >probably time to rewrite the function to build the tuple from datums,\n>> >instead of relying on BuildTupleFromCStrings.\n>> >\n>>\n>> OK, attached is a patch doing this. It's pretty simple, and it does\n>> resolve the issue with frequency precision.\n>>\n>> There's one issue with the signature, though - currently the function\n>> returns null flags as bool array, but values are returned as simple\n>> text value (formatted in array-like way, but still just a text).\n>>\n>> In the attached patch I've reworked both to proper arrays, but obviously\n>> that'd require a CATVERSION bump - and there's not much apetite for that\n>> past beta2, I suppose. So I'll just undo this bit.\n>>\n>\n>Hmm, I didn't spot that the old code was using a single text value\n>rather than a text array. That's clearly broken, especially since it\n>wasn't even necessarily constructing a valid textual representation of\n>an array (e.g., if an individual value's textual representation\n>included the array markers \"{\" or \"}\").\n>\n>IMO fixing this to return a text array is worth doing, even though it\n>means a catversion bump.\n>\n\nYeah :-(\n\nIt used to be just a \"debugging\" function, but now that we're using it\ne.g. in pg_stats_ext definition, we need to be more careful about the\noutput. Presumably we could keep the text output and make sure it's\nescaped properly etc. 
We could even build an array internally and then\nrun it through an output function. That'd not require catversion bump.\n\nI'll cleanup the patch changing the function signature. If others think\nthe catversion bump would be too significant annoyance at this point, I\nwill switch back to the text output (with proper formatting).\n\nOpinions?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 25 Jun 2019 11:18:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Tue, Jun 25, 2019 at 11:18:19AM +0200, Tomas Vondra wrote:\n>On Mon, Jun 24, 2019 at 02:54:01PM +0100, Dean Rasheed wrote:\n>>On Mon, 24 Jun 2019 at 00:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>On Sun, Jun 23, 2019 at 10:23:19PM +0200, Tomas Vondra wrote:\n>>>>On Sun, Jun 23, 2019 at 08:48:26PM +0100, Dean Rasheed wrote:\n>>>>>On Sat, 22 Jun 2019 at 15:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>>>>One annoying thing I noticed is that the base_frequency tends to end up\n>>>>>>being 0, most likely due to getting too small. It's a bit strange, though,\n>>>>>>because with statistic target set to 10k the smallest frequency for a\n>>>>>>single column is 1/3e6, so for 2 columns it'd be ~1/9e12 (which I think is\n>>>>>>something the float8 can represent).\n>>>>>>\n>>>>>\n>>>>>Yeah, it should be impossible for the base frequency to underflow to\n>>>>>0. However, it looks like the problem is with mcv_list_items()'s use\n>>>>>of %f to convert to text, which is pretty ugly.\n>>>>>\n>>>>\n>>>>Yeah, I realized that too, eventually. One way to fix that would be\n>>>>adding %.15f to the sprintf() call, but that just adds ugliness. 
It's\n>>>>probably time to rewrite the function to build the tuple from datums,\n>>>>instead of relying on BuildTupleFromCStrings.\n>>>>\n>>>\n>>>OK, attached is a patch doing this. It's pretty simple, and it does\n>>>resolve the issue with frequency precision.\n>>>\n>>>There's one issue with the signature, though - currently the function\n>>>returns null flags as bool array, but values are returned as simple\n>>>text value (formatted in array-like way, but still just a text).\n>>>\n>>>In the attached patch I've reworked both to proper arrays, but obviously\n>>>that'd require a CATVERSION bump - and there's not much apetite for that\n>>>past beta2, I suppose. So I'll just undo this bit.\n>>>\n>>\n>>Hmm, I didn't spot that the old code was using a single text value\n>>rather than a text array. That's clearly broken, especially since it\n>>wasn't even necessarily constructing a valid textual representation of\n>>an array (e.g., if an individual value's textual representation\n>>included the array markers \"{\" or \"}\").\n>>\n>>IMO fixing this to return a text array is worth doing, even though it\n>>means a catversion bump.\n>>\n>\n>Yeah :-(\n>\n>It used to be just a \"debugging\" function, but now that we're using it\n>e.g. in pg_stats_ext definition, we need to be more careful about the\n>output. Presumably we could keep the text output and make sure it's\n>escaped properly etc. We could even build an array internally and then\n>run it through an output function. That'd not require catversion bump.\n>\n>I'll cleanup the patch changing the function signature. If others think\n>the catversion bump would be too significant annoyance at this point, I\n>will switch back to the text output (with proper formatting).\n>\n>Opinions?\n>\n\nAttached is a cleaned-up version of that patch. The main difference is\nthat instead of using construct_md_array() this uses ArrayBuildState to\nconstruct the arrays, which is much easier. 
The docs don't need any\nupdate because those were already using text[] for the parameter; the\ncode was inconsistent with it.\n\nThis does require a catversion bump, but as annoying as it is, I think\nit's worth it (and there's also the thread discussing the serialization\nissues). Barring objections, I'll get it committed later next week, once\nI get back from PostgresLondon.\n\nAs I mentioned before, if we don't want any additional catversion bumps,\nit's possible to pass the arrays through output functions - that would\nallow us to keep the text output (but correct, unlike what we have now).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 15:01:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Sat, 29 Jun 2019 at 14:01, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> >>>>>However, it looks like the problem is with mcv_list_items()'s use\n> >>>>>of %f to convert to text, which is pretty ugly.\n> >>>>\n> >>>There's one issue with the signature, though - currently the function\n> >>>returns null flags as bool array, but values are returned as simple\n> >>>text value (formatted in array-like way, but still just a text).\n> >>>\n> >>IMO fixing this to return a text array is worth doing, even though it\n> >>means a catversion bump.\n> >>\n> Attached is a cleaned-up version of that patch. The main difference is\n> that instead of using construct_md_array() this uses ArrayBuildState to\n> construct the arrays, which is much easier. 
The docs don't need any\n> update because those were already using text[] for the parameter, the\n> code was inconsistent with it.\n>\n\nCool, this looks a lot neater and fixes the issues discussed with both\nfloating point values no longer being converted to text, and proper\ntext arrays for values.\n\nOne minor nitpick -- it doesn't seem to be necessary to build the\narrays *outfuncs and *fmgrinfo. You may as well just fetch the\nindividual column output function information at the point where it's\nused (in the \"if (!item->isnull[i])\" block) rather than building those\narrays.\n\n\n> This does require catversion bump, but as annoying as it is, I think\n> it's worth it (and there's also the thread discussing the serialization\n> issues). Barring objections, I'll get it committed later next week, once\n> I get back from PostgresLondon.\n>\n> As I mentioned before, if we don't want any additional catversion bumps,\n> it's possible to pass the arrays through output functions - that would\n> allow us keeping the text output (but correct, unlike what we have now).\n>\n\nI think this is a clear bug fix, so I'd vote for fixing it properly\nnow, with a catversion bump.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 1 Jul 2019 12:02:28 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Choosing values for multivariate MCV lists" }, { "msg_contents": "On Mon, Jul 01, 2019 at 12:02:28PM +0100, Dean Rasheed wrote:\n>On Sat, 29 Jun 2019 at 14:01, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> >>>>>However, it looks like the problem is with mcv_list_items()'s use\n>> >>>>>of %f to convert to text, which is pretty ugly.\n>> >>>>\n>> >>>There's one issue with the signature, though - currently the function\n>> >>>returns null flags as bool array, but values are returned as simple\n>> >>>text value (formatted in array-like way, but still just a text).\n>> >>>\n>> >>IMO fixing this to return a text array is worth 
doing, even though it\n>> >>means a catversion bump.\n>> >>\n>> Attached is a cleaned-up version of that patch. The main difference is\n>> that instead of using construct_md_array() this uses ArrayBuildState to\n>> construct the arrays, which is much easier. The docs don't need any\n>> update because those were already using text[] for the parameter, the\n>> code was inconsistent with it.\n>>\n>\n>Cool, this looks a lot neater and fixes the issues discussed with both\n>floating point values no longer being converted to text, and proper\n>text arrays for values.\n>\n>One minor nitpick -- it doesn't seem to be necessary to build the\n>arrays *outfuncs and *fmgrinfo. You may as well just fetch the\n>individual column output function information at the point where it's\n>used (in the \"if (!item->isnull[i])\" block) rather than building those\n>arrays.\n>\n\nOK, thanks for the feedback. I'll clean-up the function lookup.\n\n>\n>> This does require catversion bump, but as annoying as it is, I think\n>> it's worth it (and there's also the thread discussing the serialization\n>> issues). Barring objections, I'll get it committed later next week, once\n>> I get back from PostgresLondon.\n>>\n>> As I mentioned before, if we don't want any additional catversion bumps,\n>> it's possible to pass the arrays through output functions - that would\n>> allow us keeping the text output (but correct, unlike what we have now).\n>>\n>\n>I think this is a clear bug fix, so I'd vote for fixing it properly\n>now, with a catversion bump.\n>\n\nI agree.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jul 2019 12:25:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Choosing values for multivariate MCV lists" } ]
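The two-phase MCV item selection that the thread above converges on can be sketched as follows. This is a hypothetical, simplified model in plain Python, not the C implementation from the attached patches; the function name `select_mcv_items`, the helper `relative_error`, and the sample data are illustrative only, while `mincount`, `freq`, and `base_freq` follow the terminology used in the discussion.

```python
def select_mcv_items(items, mincount, list_size):
    """items: list of (value, count, freq, base_freq) tuples."""
    # Phase 1: keep only items whose sample count is considered reliable.
    candidates = [it for it in items if it[1] >= mincount]

    # Phase 2: rank by the relative error between the observed frequency
    # and the base frequency (product of per-column frequencies), using
    # max(freq/base, base/freq) so that a misestimate by a large *factor*
    # in either direction scores high; plain Abs(freq - base_freq) would
    # score "1 vs 1000001" the same as "1000000 vs 2000000".
    def relative_error(item):
        _value, _count, freq, base_freq = item
        if freq == 0 or base_freq == 0:  # guard against underflow to zero
            return float("inf")
        return max(freq / base_freq, base_freq / freq)

    candidates.sort(key=relative_error, reverse=True)
    return candidates[:list_size]

items = [
    ("(1,1)", 500, 0.50, 0.25),   # ratio 2.0
    ("(2,2)", 300, 0.30, 0.30),   # ratio 1.0, well predicted by base freq
    ("(3,9)", 100, 0.10, 0.001),  # ratio 100, the most interesting item
    ("(4,4)", 5, 0.005, 0.004),   # below mincount, dropped in phase 1
]
picked = select_mcv_items(items, mincount=50, list_size=2)
print([v for v, *_ in picked])   # ['(3,9)', '(1,1)']
```

As the thread notes, a consequence of this two-phase scheme is that increasing the statistics target can lower `mincount` and change which items clear phase 1, so the resulting list is not guaranteed to be a superset of the list built with a smaller target.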
[ { "msg_contents": "Hi,\n\nOne slightly inconvenient thing I realized while playing with the\naddress data set is that it's somewhat difficult to set the desired size\nof the multi-column MCV list.\n\nAt the moment, we simply use the maximum statistic target for attributes\nthe MCV list is built on. But that does not allow keeping default size\nfor per-column stats, and only increase size of multi-column MCV lists.\n\nSo I'm thinking we should allow tweaking the statistics for extended\nstats, and serialize it in the pg_statistic_ext catalog. Any opinions\nwhy that would be a bad idea?\n\nI suppose it should be part of the CREATE STATISTICS command, but I'm\nnot sure what'd be the best syntax. We might also have something more\nsimilar to ALTER COLUMN, but perhaps\n\n ALTER STATISTICS s SET STATISTICS 1000;\n\nlooks a bit too weird.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Jun 2019 23:33:57 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Tue, 18 Jun 2019 at 22:34, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> One slightly inconvenient thing I realized while playing with the\n> address data set is that it's somewhat difficult to set the desired size\n> of the multi-column MCV list.\n>\n> At the moment, we simply use the maximum statistic target for attributes\n> the MCV list is built on. But that does not allow keeping default size\n> for per-column stats, and only increase size of multi-column MCV lists.\n>\n> So I'm thinking we should allow tweaking the statistics for extended\n> stats, and serialize it in the pg_statistic_ext catalog. Any opinions\n> why that would be a bad idea?\n>\n\nSeems reasonable to me. 
This might not be the only option we'll ever\nwant to add though, so perhaps a \"stxoptions text[]\" column along the\nlines of a relation's reloptions would be the way to go.\n\n> I suppose it should be part of the CREATE STATISTICS command, but I'm\n> not sure what'd be the best syntax. We might also have something more\n> similar to ALTER COLUMNT, but perhaps\n>\n> ALTER STATISTICS s SET STATISTICS 1000;\n>\n> looks a bit too weird.\n>\n\nYes it does look a bit weird, but that's the natural generalisation of\nwhat we have for per-column statistics, so it's probably preferable to\ndo that rather than invent some other syntax that wouldn't be so\nconsistent.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 20 Jun 2019 08:08:44 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Thu, Jun 20, 2019 at 08:08:44AM +0100, Dean Rasheed wrote:\n>On Tue, 18 Jun 2019 at 22:34, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> One slightly inconvenient thing I realized while playing with the\n>> address data set is that it's somewhat difficult to set the desired size\n>> of the multi-column MCV list.\n>>\n>> At the moment, we simply use the maximum statistic target for attributes\n>> the MCV list is built on. But that does not allow keeping default size\n>> for per-column stats, and only increase size of multi-column MCV lists.\n>>\n>> So I'm thinking we should allow tweaking the statistics for extended\n>> stats, and serialize it in the pg_statistic_ext catalog. Any opinions\n>> why that would be a bad idea?\n>>\n>\n>Seems reasonable to me. 
This might not be the only option we'll ever\n>want to add though, so perhaps a \"stxoptions text[]\" column along the\n>lines of a relation's reloptions would be the way to go.\n>\n\nI don't know - I kinda dislike the idea of stashing stuff like this into\ntext[] arrays unless there's a clear need for such flexibility (i.e.\nvision to have more such options). Which I'm not sure is the case here.\nAnd we kinda have a precedent in pg_attribute.attstattarget, so I'd use\nthe same approach here.\n\n>> I suppose it should be part of the CREATE STATISTICS command, but I'm\n>> not sure what'd be the best syntax. We might also have something more\n>> similar to ALTER COLUMNT, but perhaps\n>>\n>> ALTER STATISTICS s SET STATISTICS 1000;\n>>\n>> looks a bit too weird.\n>>\n>\n>Yes it does look a bit weird, but that's the natural generalisation of\n>what we have for per-column statistics, so it's probably preferable to\n>do that rather than invent some other syntax that wouldn't be so\n>consistent.\n>\n\nYeah, I agree.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 21 Jun 2019 00:12:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Thu, 20 Jun 2019 at 23:12, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Jun 20, 2019 at 08:08:44AM +0100, Dean Rasheed wrote:\n> >On Tue, 18 Jun 2019 at 22:34, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> So I'm thinking we should allow tweaking the statistics for extended\n> >> stats, and serialize it in the pg_statistic_ext catalog. Any opinions\n> >> why that would be a bad idea?\n> >\n> >Seems reasonable to me. 
This might not be the only option we'll ever\n> >want to add though, so perhaps a \"stxoptions text[]\" column along the\n> >lines of a relation's reloptions would be the way to go.\n>\n> I don't know - I kinda dislike the idea of stashing stuff like this into\n> text[] arrays unless there's a clear need for such flexibility (i.e.\n> vision to have more such options). Which I'm not sure is the case here.\n> And we kinda have a precedent in pg_attribute.attstattarget, so I'd use\n> the same approach here.\n>\n\nHmm, maybe. I can certainly understand your dislike of using text[].\nI'm not sure that we can confidently say that multivariate statistics\nwon't ever need additional tuning knobs, but I have no idea at the\nmoment what they might be, and nothing else has come up so far in all\nthe time spent considering MCV lists and histograms, so maybe this is\nOK.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 21 Jun 2019 08:09:18 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Fri, Jun 21, 2019 at 08:09:18AM +0100, Dean Rasheed wrote:\n>On Thu, 20 Jun 2019 at 23:12, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Thu, Jun 20, 2019 at 08:08:44AM +0100, Dean Rasheed wrote:\n>> >On Tue, 18 Jun 2019 at 22:34, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >>\n>> >> So I'm thinking we should allow tweaking the statistics for extended\n>> >> stats, and serialize it in the pg_statistic_ext catalog. Any opinions\n>> >> why that would be a bad idea?\n>> >\n>> >Seems reasonable to me. 
This might not be the only option we'll ever\n>> >want to add though, so perhaps a \"stxoptions text[]\" column along the\n>> >lines of a relation's reloptions would be the way to go.\n>>\n>> I don't know - I kinda dislike the idea of stashing stuff like this into\n>> text[] arrays unless there's a clear need for such flexibility (i.e.\n>> vision to have more such options). Which I'm not sure is the case here.\n>> And we kinda have a precedent in pg_attribute.attstattarget, so I'd use\n>> the same approach here.\n>>\n>\n>Hmm, maybe. I can certainly understand your dislike of using text[].\n>I'm not sure that we can confidently say that multivariate statistics\n>won't ever need additional tuning knobs, but I have no idea at the\n>moment what they might be, and nothing else has come up so far in all\n>the time spent considering MCV lists and histograms, so maybe this is\n>OK.\n>\n\nOK, attached is a patch implementing this - it adds\n\n ALTER STATISTICS ... SET STATISTICS ...\n\nmodifying a new stxstattarget column in pg_statistic_ext catalog,\nfollowing the same logic as pg_attribute.attstattarget.\n\nDuring analyze, the per-ext-statistic value is determined like this:\n\n1) When pg_statistic_ext.stxstattarget != (-1), we just use this value\nand we're done.\n\n2) Otherwise we inspect per-column attstattarget values, and use the\nlargest value. This is what we do now, so it's backwards-compatible\nbehavior.\n\n3) If the value is still (-1), we use default_statistics_target.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 12:41:21 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Fri, 21 Jun 2019 at 19:09, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Hmm, maybe. 
I can certainly understand your dislike of using text[].\n> I'm not sure that we can confidently say that multivariate statistics\n> won't ever need additional tuning knobs, but I have no idea at the\n> moment what they might be, and nothing else has come up so far in all\n> the time spent considering MCV lists and histograms, so maybe this is\n> OK.\n\nI agree with having the stxstattarget column. Even if something did\ncome up in the future, then we could consider merging the\nstxstattarget column with a new text[] column at that time.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Sun, 30 Jun 2019 19:34:32 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "Hi,\n\napparently v1 of the ALTER STATISTICS patch was a bit confused about\nlength of the VacAttrStats array passed to statext_compute_stattarget,\ncausing segfaults. Attached v2 patch fixes that, and it also makes sure\nwe print warnings about ignored statistics only once.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 7 Jul 2019 00:02:38 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Sun, Jul 07, 2019 at 12:02:38AM +0200, Tomas Vondra wrote:\n>Hi,\n>\n>apparently v1 of the ALTER STATISTICS patch was a bit confused about\n>length of the VacAttrStats array passed to statext_compute_stattarget,\n>causing segfaults. Attached v2 patch fixes that, and it also makes sure\n>we print warnings about ignored statistics only once.\n>\n\nv3 of the patch, adding pg_dump support - it works just like when you\ntweak statistics target for a column, for example. 
When the value stored\nin the catalog is not -1, pg_dump emits a separate ALTER STATISTICS\ncommand setting it (for the already created statistics object).\n\nI've considered making it part of CREATE STATISTICS itself, but it seems\na bit cumbersome and we don't do it for columns either. I'm not against\nadding it in the future, but at this point I don't see a need.\n\nAt this point I'm not aware of any missing or broken pieces, so I'd\nwelcome feedback.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 8 Jul 2019 20:18:14 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Tuesday, July 9, 2019, Tomas Vondra wrote:\n> >apparently v1 of the ALTER STATISTICS patch was a bit confused about\n> >length of the VacAttrStats array passed to statext_compute_stattarget,\n> >causing segfaults. Attached v2 patch fixes that, and it also makes sure\n> >we print warnings about ignored statistics only once.\n> >\n> \n> v3 of the patch, adding pg_dump support - it works just like when you tweak\n> statistics target for a column, for example. When the value stored in the\n> catalog is not -1, pg_dump emits a separate ALTER STATISTICS command setting\n> it (for the already created statistics object).\n> \n\nHi Tomas, I stumbled upon your patch.\n\nAccording to the CF bot, your patch applies cleanly, builds successfully, and\npasses make world. Meaning, the pg_dump tap test passed, but there was no\ntest for the new SET STATISTICS yet. So you might want to add a regression\ntest for that and integrate it in the existing alter_generic file.\n\nUpon quick read-through, the syntax and docs are correct because it's similar\nto the format of ALTER TABLE/INDEX... SET STATISTICS... 
:\n ALTER [ COLUMN ] column_name SET STATISTICS integer\n\n+\t\t/* XXX What if the target is set to 0? Reset the statistic? */\n\nThis also makes me wonder. I haven't looked deeply into the code, but since 0 is\na valid value, I believe it should reset the stats.\nAfter lookup though, this is how it's tested in ALTER TABLE:\n/test/regress/sql/stats_ext.sql:-- Ensure things work sanely with SET STATISTICS 0\n\n> I've considered making it part of CREATE STATISTICS itself, but it seems a\n> bit cumbersome and we don't do it for columns either. I'm not against adding\n> it in the future, but at this point I don't see a need.\n\nI agree. Perhaps that's for another patch should you decide to add it in the future.\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 19 Jul 2019 06:12:20 +0000", "msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Fri, Jul 19, 2019 at 06:12:20AM +0000, Jamison, Kirk wrote:\n>On Tuesday, July 9, 2019, Tomas Vondra wrote:\n>> >apparently v1 of the ALTER STATISTICS patch was a bit confused about\n>> >length of the VacAttrStats array passed to statext_compute_stattarget,\n>> >causing segfaults. Attached v2 patch fixes that, and it also makes sure\n>> >we print warnings about ignored statistics only once.\n>> >\n>>\n>> v3 of the patch, adding pg_dump support - it works just like when you tweak\n>> statistics target for a column, for example. When the value stored in the\n>> catalog is not -1, pg_dump emits a separate ALTER STATISTICS command setting\n>> it (for the already created statistics object).\n>>\n>\n>Hi Tomas, I stumbled upon your patch.\n>\n>According to the CF bot, your patch applies cleanly, builds successfully, and\n>passes make world. Meaning, the pg_dump tap test passed, but there was no\n>test for the new SET STATISTICS yet. 
So you might want to add a regression\n>test for that and integrate it in the existing alter_generic file.\n>\n>Upon quick read-through, the syntax and docs are correct because it's similar\n>to the format of ALTER TABLE/INDEX... SET STATISTICS... :\n> ALTER [ COLUMN ] column_name SET STATISTICS integer\n>\n>+\t\t/* XXX What if the target is set to 0? Reset the statistic? */\n>\n>This also makes me wonder. I haven't looked deeply into the code, but since 0 is\n>a valid value, I believe it should reset the stats.\n\nI agree resetting the stats after setting the target to 0 seems quite\nreasonable. But that's not what we do for attribute stats, because in\nthat case we simply skip the attribute during the future ANALYZE runs -\nwe don't reset the stats, we keep the existing stats. So I've done the\nsame thing here, and I've removed the XXX comment.\n\nIf we want to change that, I'd do it in a separate patch for both the\nregular and extended stats.\n\n>After lookup though, this is how it's tested in ALTER TABLE:\n>/test/regress/sql/stats_ext.sql:-- Ensure things work sanely with SET STATISTICS 0\n>\n\nWell, yeah. But that tests we skip building the extended statistic\n(because we excluded the column from the ANALYZE run).\n\n>> I've considered making it part of CREATE STATISTICS itself, but it seems a\n>> bit cumbersome and we don't do it for columns either. I'm not against adding\n>> it in the future, but at this point I don't see a need.\n>\n>I agree. 
Perhaps that's for another patch should you decide to add it in the future.\n>\n\nRight.\n\nAttached is v4 of the patch, with a couple more improvements:\n\n1) I've renamed the if_not_exists flag to missing_ok, because that's\nmore consistent with the \"IF EXISTS\" clause in the grammar (the old flag\nwas kinda the exact opposite), and I've added a NOTICE about the skip.\n\n2) I've renamed ComputeExtStatsTarget to ComputeExtStatsRows, because\nthat's what the function was doing anyway (computing sample size).\n\n3) I've added a couple of regression tests to stats_ext.sql\n\nAside from that, I've cleaned up a couple of places and improved a bunch\nof comments. Nothing huge.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 20 Jul 2019 01:12:21 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:\n\n> >+\t\t/* XXX What if the target is set to 0? Reset the statistic?\n> */\n> >\n> >This also makes me wonder. I haven't looked deeply into the code, but\n> >since 0 is a valid value, I believe it should reset the stats.\n> \n> I agree resetting the stats after setting the target to 0 seems quite\n> reasonable. But that's not what we do for attribute stats, because in that\n> case we simply skip the attribute during the future ANALYZE runs - we don't\n> reset the stats, we keep the existing stats. So I've done the same thing here,\n> and I've removed the XXX comment.\n> \n> If we want to change that, I'd do it in a separate patch for both the regular\n> and extended stats.\n\nHi, Tomas\n\nSorry for my late reply.\nYou're right. I have no strong opinion whether we'd want to change that behavior.\nI've also confirmed the change in the patch where setting statistics target 0\nskips the statistics. 
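For the archives, this is easy to reproduce by hand on a patched server (illustrative names; with a target of 0 later ANALYZE runs skip the object while keeping whatever was built before):

```sql
CREATE TABLE t0 (a int, b int);
INSERT INTO t0 SELECT i % 10, i % 10 FROM generate_series(1, 10000) i;
CREATE STATISTICS s0 (mcv) ON a, b FROM t0;
ANALYZE t0;                            -- builds the MCV list
ALTER STATISTICS s0 SET STATISTICS 0;
ANALYZE t0;                            -- s0 is skipped, old contents remain
SELECT stxstattarget FROM pg_statistic_ext WHERE stxname = 's0';
```

The last query should show the stored target of 0, and -1 again after reverting with ALTER STATISTICS s0 SET STATISTICS -1.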
\n\nMaybe only some minor nitpicks in the source code comments below:\n1. \"it's\" should be \"its\":\n> +\t\t * Compute statistic target, based on what's set for the statistic\n> +\t\t * object itself, and for it's attributes.\n\n2. Consistency whether you'd use either \"statistic \" or \"statisticS \".\nEx. statistic target vs statisticS target, statistics object vs statistic object, etc.\n\n> Attached is v4 of the patch, with a couple more improvements:\n>\n> 1) I've renamed the if_not_exists flag to missing_ok, because that's more\n> consistent with the \"IF EXISTS\" clause in the grammar (the old flag was kinda\n> the exact opposite), and I've added a NOTICE about the skip.\n\n+\tbool\t\tmissing_ok; /* do nothing if statistics does not exist */\n\nConfirmed. So we ignore if statistic does not exist, and skip the error.\nMaybe to make it consistent with other data structures in parsernodes.h,\nyou can change the comment of missing_ok to: \n/* skip error if statistics object does not exist */\n\n> 2) I've renamed ComputeExtStatsTarget to ComputeExtStatsRows, because that's\n> what the function was doing anyway (computing sample size).\n>\n> 3) I've added a couple of regression tests to stats_ext.sql\n> \n> Aside from that, I've cleaned up a couple of places and improved a bunch of\n> comments. Nothing huge.\n\nI have a question though regarding ComputeExtStatisticsRows.\nI'm just curious with the value 300 when computing sample size.\nWhere did this value come from?\n\n+\t/* compute sample size based on the statistic target */\n+\treturn (300 * result);\n\nOverall, the patch is almost already in good shape for commit.\nI'll wait for the next update.\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 26 Jul 2019 07:03:41 +0000", "msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:\n>On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:\n>\n>> >+\t\t/* XXX What if the target is set to 0? Reset the statistic?\n>> */\n>> >\n>> >This also makes me wonder. I haven't looked deeply into the code, but\n>> >since 0 is a valid value, I believe it should reset the stats.\n>>\n>> I agree resetting the stats after setting the target to 0 seems quite\n>> reasonable. But that's not what we do for attribute stats, because in that\n>> case we simply skip the attribute during the future ANALYZE runs - we don't\n>> reset the stats, we keep the existing stats. So I've done the same thing here,\n>> and I've removed the XXX comment.\n>>\n>> If we want to change that, I'd do it in a separate patch for both the regular\n>> and extended stats.\n>\n>Hi, Tomas\n>\n>Sorry for my late reply.\n>You're right. I have no strong opinion whether we'd want to change that behavior.\n>I've also confirmed the change in the patch where setting statistics target 0\n>skips the statistics.\n>\n\nOK, thanks.\n\n>Maybe only some minor nitpicks in the source code comments below:\n>1. \"it's\" should be \"its\":\n>> +\t\t * Compute statistic target, based on what's set for the statistic\n>> +\t\t * object itself, and for it's attributes.\n>\n>2. Consistency whether you'd use either \"statistic \" or \"statisticS \".\n>Ex. statistic target vs statisticS target, statistics object vs statistic object, etc.\n>\n>> Attached is v4 of the patch, with a couple more improvements:\n>>\n>> 1) I've renamed the if_not_exists flag to missing_ok, because that's more\n>> consistent with the \"IF EXISTS\" clause in the grammar (the old flag was kinda\n>> the exact opposite), and I've added a NOTICE about the skip.\n>\n>+\tbool\t\tmissing_ok; /* do nothing if statistics does not exist */\n>\n>Confirmed. 
So we ignore if statistic does not exist, and skip the error.\n>Maybe to make it consistent with other data structures in parsernodes.h,\n>you can change the comment of missing_ok to:\n>/* skip error if statistics object does not exist */\n>\n\nThanks, I've fixed all those places in the attached v5.\n\n>> 2) I've renamed ComputeExtStatsTarget to ComputeExtStatsRows, because that's\n>> what the function was doing anyway (computing sample size).\n>>\n>> 3) I've added a couple of regression tests to stats_ext.sql\n>>\n>> Aside from that, I've cleaned up a couple of places and improved a bunch of\n>> comments. Nothing huge.\n>\n>I have a question though regarding ComputeExtStatisticsRows.\n>I'm just curious with the value 300 when computing sample size.\n>Where did this value come from?\n>\n>+\t/* compute sample size based on the statistic target */\n>+\treturn (300 * result);\n>\n>Overall, the patch is almost already in good shape for commit.\n>I'll wait for the next update.\n>\n\nThat's how we compute number of rows to sample, based on the statistics\ntarget. See std_typanalyze() in analyze.c, which also cites the paper\nwhere this comes from.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 27 Jul 2019 00:05:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Saturday, July 27, 2019 7:06 AM(GMT+9), Tomas Vondra wrote:\n> On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:\n> >On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:\n> >\n> >> >+\t\t/* XXX What if the target is set to 0? Reset the statistic?\n> >> */\n> >> >\n> >> >This also makes me wonder. 
I haven't looked deeply into the code,\n> >> >but since 0 is a valid value, I believe it should reset the stats.\n> >>\n> >> I agree resetting the stats after setting the target to 0 seems quite\n> >> reasonable. But that's not what we do for attribute stats, because in\n> >> that case we simply skip the attribute during the future ANALYZE runs\n> >> - we don't reset the stats, we keep the existing stats. So I've done\n> >> the same thing here, and I've removed the XXX comment.\n> >>\n> >> If we want to change that, I'd do it in a separate patch for both the\n> >> regular and extended stats.\n> >\n> >Hi, Tomas\n> >\n> >Sorry for my late reply.\n> >You're right. I have no strong opinion whether we'd want to change that\n> behavior.\n> >I've also confirmed the change in the patch where setting statistics\n> >target 0 skips the statistics.\n> >\n> \n> OK, thanks.\n> \n> >Maybe only some minor nitpicks in the source code comments below:\n> >1. \"it's\" should be \"its\":\n> >> +\t\t * Compute statistic target, based on what's set for the\n> statistic\n> >> +\t\t * object itself, and for it's attributes.\n> >\n> >2. Consistency whether you'd use either \"statistic \" or \"statisticS \".\n> >Ex. statistic target vs statisticS target, statistics object vs statistic\n> object, etc.\n> >\n> >> Attached is v4 of the patch, with a couple more improvements:\n> >>\n> >> 1) I've renamed the if_not_exists flag to missing_ok, because that's\n> >> more consistent with the \"IF EXISTS\" clause in the grammar (the old\n> >> flag was kinda the exact opposite), and I've added a NOTICE about the skip.\n> >\n> >+\tbool\t\tmissing_ok; /* do nothing if statistics does\n> not exist */\n> >\n> >Confirmed. 
So we ignore if statistic does not exist, and skip the error.\n> >Maybe to make it consistent with other data structures in\n> >parsernodes.h, you can change the comment of missing_ok to:\n> >/* skip error if statistics object does not exist */\n> >\n> \n> Thanks, I've fixed all those places in the attached v5.\n\nI've confirmed the fix.\n\n> >> 2) I've renamed ComputeExtStatsTarget to ComputeExtStatsRows, because\n> >> that's what the function was doing anyway (computing sample size).\n> >>\n> >> 3) I've added a couple of regression tests to stats_ext.sql\n> >>\n> >> Aside from that, I've cleaned up a couple of places and improved a\n> >> bunch of comments. Nothing huge.\n> >\n> >I have a question though regarding ComputeExtStatisticsRows.\n> >I'm just curious with the value 300 when computing sample size.\n> >Where did this value come from?\n> >\n> >+\t/* compute sample size based on the statistic target */\n> >+\treturn (300 * result);\n> >\n> >Overall, the patch is almost already in good shape for commit.\n> >I'll wait for the next update.\n> >\n> \n> That's how we compute number of rows to sample, based on the statistics target.\n> See std_typanalyze() in analyze.c, which also cites the paper where this comes\n> from.\nNoted. Found it. Thank you for the reference.\n\n\nThere's just a small whitespace (extra space) below after running git diff --check.\n>src/bin/pg_dump/pg_dump.c:7226: trailing whitespace.\n>+ \nIt would be better to post an updated patch,\nbut other than that, I've confirmed the fixes.\nThe patch passed the make-world and regression tests as well.\nI've marked this as \"ready for committer\".\n\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Mon, 29 Jul 2019 01:53:08 +0000", "msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "Hi, \n\n> From: Jamison, Kirk [mailto:k.jamison@jp.fujitsu.com]\n> Sent: Monday, July 29, 2019 10:53 AM\n> To: 'Tomas Vondra' <tomas.vondra@2ndquadrant.com>\n> Cc: Dean Rasheed <dean.a.rasheed@gmail.com>; PostgreSQL Hackers\n> <pgsql-hackers@lists.postgresql.org>\n> Subject: RE: Multivariate MCV list vs. statistics target\n> \n> On Saturday, July 27, 2019 7:06 AM(GMT+9), Tomas Vondra wrote:\n> > On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:\n> > >On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:\n> > >\n> > >> >+\t\t/* XXX What if the target is set to 0? Reset the statistic?\n> > >> */\n> > >> >\n> > >> >This also makes me wonder. I haven't looked deeply into the code,\n> > >> >but since 0 is a valid value, I believe it should reset the stats.\n> > >>\n> > >> I agree resetting the stats after setting the target to 0 seems\n> > >> quite reasonable. But that's not what we do for attribute stats,\n> > >> because in that case we simply skip the attribute during the future\n> > >> ANALYZE runs\n> > >> - we don't reset the stats, we keep the existing stats. So I've\n> > >> done the same thing here, and I've removed the XXX comment.\n> > >>\n> > >> If we want to change that, I'd do it in a separate patch for both\n> > >> the regular and extended stats.\n> > >\n> > >Hi, Tomas\n> > >\n> > >Sorry for my late reply.\n> > >You're right. I have no strong opinion whether we'd want to change\n> > >that\n> > behavior.\n> > >I've also confirmed the change in the patch where setting statistics\n> > >target 0 skips the statistics.\n> > >\n> >\n> > OK, thanks.\n> >\n> > >Maybe only some minor nitpicks in the source code comments below:\n> > >1. \"it's\" should be \"its\":\n> > >> +\t\t * Compute statistic target, based on what's set for the\n> > statistic\n> > >> +\t\t * object itself, and for it's attributes.\n> > >\n> > >2. Consistency whether you'd use either \"statistic \" or \"statisticS \".\n> > >Ex. 
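In case it helps later readers: the rule is linear in the target, exactly as for per-column statistics, so the effect of the command on the sample is easy to predict (hypothetical object name; 300 is the constant discussed above, from analyze.c):

```sql
-- a target of 1000 means ANALYZE samples 300 * 1000 = 300000 rows for s1
ALTER STATISTICS s1 SET STATISTICS 1000;
ALTER STATISTICS s1 SET STATISTICS -1;   -- back to the default-derived target
```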
statistic target vs statisticS target, statistics object vs\n> > >statistic\n> > object, etc.\n> > >\n> > >> Attached is v4 of the patch, with a couple more improvements:\n> > >>\n> > >> 1) I've renamed the if_not_exists flag to missing_ok, because\n> > >> that's more consistent with the \"IF EXISTS\" clause in the grammar\n> > >> (the old flag was kinda the exact opposite), and I've added a NOTICE\n> about the skip.\n> > >\n> > >+\tbool\t\tmissing_ok; /* do nothing if statistics does\n> > not exist */\n> > >\n> > >Confirmed. So we ignore if statistic does not exist, and skip the error.\n> > >Maybe to make it consistent with other data structures in\n> > >parsernodes.h, you can change the comment of missing_ok to:\n> > >/* skip error if statistics object does not exist */\n> > >\n> >\n> > Thanks, I've fixed all those places in the attached v5.\n> \n> I've confirmed the fix.\n> \n> > >> 2) I've renamed ComputeExtStatsTarget to ComputeExtStatsRows,\n> > >> because that's what the function was doing anyway (computing sample\n> size).\n> > >>\n> > >> 3) I've added a couple of regression tests to stats_ext.sql\n> > >>\n> > >> Aside from that, I've cleaned up a couple of places and improved a\n> > >> bunch of comments. Nothing huge.\n> > >\n> > >I have a question though regarding ComputeExtStatisticsRows.\n> > >I'm just curious with the value 300 when computing sample size.\n> > >Where did this value come from?\n> > >\n> > >+\t/* compute sample size based on the statistic target */\n> > >+\treturn (300 * result);\n> > >\n> > >Overall, the patch is almost already in good shape for commit.\n> > >I'll wait for the next update.\n> > >\n> >\n> > That's how we compute number of rows to sample, based on the statistics\n> target.\n> > See std_typanalyze() in analyze.c, which also cites the paper where\n> > this comes from.\n> Noted. Found it. 
Thank you for the reference.\n> \n> \n> There's just a small whitespace (extra space) below after running git diff\n> --check.\n> >src/bin/pg_dump/pg_dump.c:7226: trailing whitespace.\n> >+\n> It would be better to post an updated patch, but other than that, I've confirmed\n> the fixes.\n> The patch passed the make-world and regression tests as well.\n> I've marked this as \"ready for committer\".\n\nThe patch does not apply anymore.\nIn addition to the whitespace detected,\nkindly rebase the patch as there were changes from recent commits\nin the following files:\n src/backend/commands/analyze.c\n src/bin/pg_dump/pg_dump.c\n src/bin/pg_dump/t/002_pg_dump.pl\n src/test/regress/expected/stats_ext.out\n src/test/regress/sql/stats_ext.sql\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Thu, 1 Aug 2019 00:15:48 +0000", "msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Thu, Aug 1, 2019 at 12:16 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n> > On Saturday, July 27, 2019 7:06 AM(GMT+9), Tomas Vondra wrote:\n> > > On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:\n> > > >Overall, the patch is almost already in good shape for commit.\n> > > >I'll wait for the next update.\n\n> > The patch passed the make-world and regression tests as well.\n> > I've marked this as \"ready for committer\".\n>\n> The patch does not apply anymore.\n\nBased on the above, it sounds like this patch is super close and the\nonly problem is bitrot, so I've set it back to Ready for Committer.\nOver to Tomas to rebase and commit, or move to the next CF if that's\nmore appropriate.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 17:25:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "Hello.\n\nAt Thu, 1 Aug 2019 00:15:48 +0000, \"Jamison, Kirk\" <k.jamison@jp.fujitsu.com> wrote in <D09B13F772D2274BB348A310EE3027C6518F94@g01jpexmbkw24>\n> The patch does not apply anymore.\n> In addition to the whitespace detected,\n> kindly rebase the patch as there were changes from recent commits\n> in the following files:\n> src/backend/commands/analyze.c\n> src/bin/pg_dump/pg_dump.c\n> src/bin/pg_dump/t/002_pg_dump.pl\n> src/test/regress/expected/stats_ext.out\n> src/test/regress/sql/stats_ext.sql\n\nThe patch finally failed only for stats_ext.out, where 14ef15a222\nis hitting. (for b2a3d706b8)\n\nI looked through this patch and have some comments.\n\n\n\n+++ b/src/backend/commands/statscmds.c\n+#include \"access/heapam.h\"\n..\n+#include \"utils/fmgroids.h\"\n\nThese don't seem needed.\n\n\n+ Assert(stmt->missing_ok);\n\nPerhaps we shouldn't Assert on this condition. Isn't it better we\njust \"elog(ERROR\" here?\n\n\n+ DeconstructQualifiedName(stmt->defnames, &schemaname, &statname);\n\nMaybe we don't need detailed analysis that the function emits on\nerror. Couldn't we use NameListToString() instead? That reduces\nthe number of ereport()s and considerably simplify the code\naround.\n\n+ oldtup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(stxoid));\n+\n+ /* Must be owner of the existing statistics object */\n+ if (!pg_statistics_object_ownercheck(stxoid, GetUserId()))\n\nThis repeats the SearchSysCache twice in a quite short\nduration. 
I suppose it'd be better that ACL (and validity) checks\ndone directly using oldtup.\n\n\n+ /* replace the stxstattarget column */\n+ repl_repl[Anum_pg_statistic_ext_stxstattarget - 1] = true;\n+ repl_val[Anum_pg_statistic_ext_stxstattarget - 1] = Int32GetDatum(newtarget)\n\nWe usually do this kind of work using SearchSysCacheCopyN(),\nwhich simplifies the code around, too.\n\n\n+++ b/src/backend/statistics/mcv.c\n> * Maximum number of MCV items to store, based on the attribute with the\n> * largest stats target (and the number of groups we have available).\n> */\n- nitems = stats[0]->attr->attstattarget;\n- for (i = 1; i < numattrs; i++)\n- {\n- if (stats[i]->attr->attstattarget > nitems)\n- nitems = stats[i]->attr->attstattarget;\n- }\n+ nitems = stattarget;\n\nMaybe you forgot to modify the comment.\n\n\ncheck_xact_readonly() returns false for this command. As the\nresult it emits a somewhat pointless error message.\n\n=# alter statistics s1 set statistics 0;\nERROR: cannot assign TransactionIds during recovery\n\n\n+++ b/src/bin/pg_dump/pg_dump.c\n+++ b/src/bin/pg_dump/pg_dump.h\n> i_stxname = PQfnumber(res, \"stxname\");\n> i_stxnamespace = PQfnumber(res, \"stxnamespace\");\n> i_rolname = PQfnumber(res, \"rolname\");\n+ i_stattarget = PQfnumber(res, \"stxstattarget\");\n\nI'm not sure whether it is the convention here, but variable name\nis different from column name only for the added line.\n\n\n+++ b/src/bin/psql/tab-complete.c\n-\t\tCOMPLETE_WITH(\"OWNER TO\", \"RENAME TO\", \"SET SCHEMA\");\n+\t\tCOMPLETE_WITH(\"OWNER TO\", \"RENAME TO\", \"SET SCHEMA\", \"SET STATISTICS\");\n\nALTER STATISTICS s2 SET STATISTICS<tab> is suggested with\next-stats names, but it's the place for target value.\n\n\n+++ b/src/include/nodes/nodes.h\n T_CallStmt,\n+ T_AlterStatsStmt,\n \nI think it should be immediately below T_CreateStatsStmt.\n\n\n+++ b/src/include/nodes/parsenodes.h\n+ bool missing_ok; /* skip error if statistics object is missing */\n\nShould be very trivial, 
but many bool members especially\nmissing_ok have a comment having \"?\" at the end.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 01 Aug 2019 17:29:20 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Thu, Aug 01, 2019 at 05:25:31PM +1200, Thomas Munro wrote:\n>On Thu, Aug 1, 2019 at 12:16 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>> > On Saturday, July 27, 2019 7:06 AM(GMT+9), Tomas Vondra wrote:\n>> > > On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:\n>> > > >Overall, the patch is almost already in good shape for commit.\n>> > > >I'll wait for the next update.\n>\n>> > The patch passed the make-world and regression tests as well.\n>> > I've marked this as \"ready for committer\".\n>>\n>> The patch does not apply anymore.\n>\n>Based on the above, it sounds like this patch is super close and the\n>only problem is bitrot, so I've set it back to Ready for Committer.\n>Over to Tomas to rebase and commit, or move to the next CF if that's\n>more appropriate.\n>\n\nI'll move it to the next CF. Aside from the issues pointed by Kyotaro-san\nin his review, I still haven't made my mind about whether to base the use\nstatistics targets set for the attributes. That's what we're doing now,\nbut I'm not sure it's a good idea after adding separate statistics target.\nI wonder what Dean's opinion on this is, as he added the current logic.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 1 Aug 2019 12:30:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "On Thu, 1 Aug 2019 at 11:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> I'll move it to the next CF. Aside from the issues pointed by Kyotaro-san\n> in his review, I still haven't made my mind about whether to base the use\n> statistics targets set for the attributes. That's what we're doing now,\n> but I'm not sure it's a good idea after adding separate statistics target.\n> I wonder what Dean's opinion on this is, as he added the current logic.\n>\n\nIf this were being released in the same version as MCV stats first\nappeared, I'd say that there's not much point basing the default\nmultivariate stats target on the per-column targets, when it has its\nown knob to control it. However, since this won't be released for a\nyear, those per-column-based defaults will be in the field for that\nlong, and so I'd say that we shouldn't change the default when adding\nthis, otherwise users who don't use this new feature might be\nsurprised by the change in behaviour.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 1 Aug 2019 15:42:35 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On 2019-Aug-01, Tomas Vondra wrote:\n\n> I'll move it to the next CF. Aside from the issues pointed by Kyotaro-san\n> in his review, I still haven't made my mind about whether to base the use\n> statistics targets set for the attributes. That's what we're doing now,\n> but I'm not sure it's a good idea after adding separate statistics target.\n> I wonder what Dean's opinion on this is, as he added the current logic.\n\nLatest patch no longer applies. Please update. 
And since you already\nseem to have handled all review comments since it was Ready for\nCommitter, and you now know Dean's opinion on the remaining question,\nplease get it pushed.\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Sep 2019 14:38:56 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Tue, Sep 03, 2019 at 02:38:56PM -0400, Alvaro Herrera wrote:\n>On 2019-Aug-01, Tomas Vondra wrote:\n>\n>> I'll move it to the next CF. Aside from the issues pointed by Kyotaro-san\n>> in his review, I still haven't made my mind about whether to base the use\n>> statistics targets set for the attributes. That's what we're doing now,\n>> but I'm not sure it's a good idea after adding separate statistics target.\n>> I wonder what Dean's opinion on this is, as he added the current logic.\n>\n>Latest patch no longer applies. Please update. And since you already\n>seem to have handled all review comments since it was Ready for\n>Committer, and you now know Dean's opinion on the remaining question,\n>please get it pushed.\n>\n\nOK, I've pushed this the patch, after some minor polishing.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 11 Sep 2019 00:28:12 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "Hello,\n\nI found a missing column description in the pg_statistic_ext catalog document for this new feature.\nThe attached patch adds a description of the 'stxstattarget' column to pg_statistic_ext catalog's document. 
\nIf there is a better explanation, please correct it.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com] \nSent: Wednesday, September 11, 2019 7:28 AM\nTo: Alvaro Herrera <alvherre@2ndquadrant.com>\nCc: Thomas Munro <thomas.munro@gmail.com>; Jamison, Kirk <k.jamison@jp.fujitsu.com>; Dean Rasheed <dean.a.rasheed@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Multivariate MCV list vs. statistics target\n\nOn Tue, Sep 03, 2019 at 02:38:56PM -0400, Alvaro Herrera wrote:\n>On 2019-Aug-01, Tomas Vondra wrote:\n>\n>> I'll move it to the next CF. Aside from the issues pointed by \n>> Kyotaro-san in his review, I still haven't made my mind about whether \n>> to base the use statistics targets set for the attributes. That's \n>> what we're doing now, but I'm not sure it's a good idea after adding separate statistics target.\n>> I wonder what Dean's opinion on this is, as he added the current logic.\n>\n>Latest patch no longer applies. Please update. And since you already \n>seem to have handled all review comments since it was Ready for \n>Committer, and you now know Dean's opinion on the remaining question, \n>please get it pushed.\n>\n\nOK, I've pushed this the patch, after some minor polishing.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 18 Mar 2020 04:28:34 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 18, 2020 at 04:28:34AM +0000, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n>Hello,\n>\n>I found a missing column description in the pg_statistic_ext catalog document for this new feature.\n>The attached patch adds a description of the 'stxstattarget' column to pg_statistic_ext catalog's document.\n>If there is a better explanation, please correct it.\n>\n\nThanks for the report. Yes, this is clearly an omission. I think it\nwould be better to use wording similar to attstattarget, per the\nattached patch. That is, without reference to ALTER STATISTICS and\nbetter explaination of what positive/negative values do. Do you agree?\n\n\nregards\n\n>Regards,\n>Noriyoshi Shinoda\n>\n>-----Original Message-----\n>From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n>Sent: Wednesday, September 11, 2019 7:28 AM\n>To: Alvaro Herrera <alvherre@2ndquadrant.com>\n>Cc: Thomas Munro <thomas.munro@gmail.com>; Jamison, Kirk <k.jamison@jp.fujitsu.com>; Dean Rasheed <dean.a.rasheed@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\n>Subject: Re: Multivariate MCV list vs. statistics target\n>\n>On Tue, Sep 03, 2019 at 02:38:56PM -0400, Alvaro Herrera wrote:\n>>On 2019-Aug-01, Tomas Vondra wrote:\n>>\n>>> I'll move it to the next CF. Aside from the issues pointed by\n>>> Kyotaro-san in his review, I still haven't made my mind about whether\n>>> to base the use statistics targets set for the attributes. That's\n>>> what we're doing now, but I'm not sure it's a good idea after adding separate statistics target.\n>>> I wonder what Dean's opinion on this is, as he added the current logic.\n>>\n>>Latest patch no longer applies. Please update. 
And since you already\n>>seem to have handled all review comments since it was Ready for\n>>Committer, and you now know Dean's opinion on the remaining question,\n>>please get it pushed.\n>>\n>\n>OK, I've pushed this the patch, after some minor polishing.\n>\n>\n>regards\n>\n>-- \n>Tomas Vondra http://www.2ndQuadrant.com\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 18 Mar 2020 13:36:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. statistics target" }, { "msg_contents": "Hi, \n\n>Thanks for the report. Yes, this is clearly an omission. I think it would be better to use wording >similar to attstattarget, per the attached patch. That is, without reference to ALTER STATISTICS and >better explaination of what positive/negative values do. Do you agree?\n\nThank you for your comment.\nI agree with the text you suggested.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com] \nSent: Wednesday, March 18, 2020 9:36 PM\nTo: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>\nCc: Alvaro Herrera <alvherre@2ndquadrant.com>; Thomas Munro <thomas.munro@gmail.com>; Jamison, Kirk <k.jamison@jp.fujitsu.com>; Dean Rasheed <dean.a.rasheed@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Multivariate MCV list vs. 
statistics target\n\nHi,\n\nOn Wed, Mar 18, 2020 at 04:28:34AM +0000, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n>Hello,\n>\n>I found a missing column description in the pg_statistic_ext catalog document for this new feature.\n>The attached patch adds a description of the 'stxstattarget' column to pg_statistic_ext catalog's document.\n>If there is a better explanation, please correct it.\n>\n\nThanks for the report. Yes, this is clearly an omission. I think it would be better to use wording similar to attstattarget, per the attached patch. That is, without reference to ALTER STATISTICS and better explaination of what positive/negative values do. Do you agree?\n\n\nregards\n\n>Regards,\n>Noriyoshi Shinoda\n>\n>-----Original Message-----\n>From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n>Sent: Wednesday, September 11, 2019 7:28 AM\n>To: Alvaro Herrera <alvherre@2ndquadrant.com>\n>Cc: Thomas Munro <thomas.munro@gmail.com>; Jamison, Kirk \n><k.jamison@jp.fujitsu.com>; Dean Rasheed <dean.a.rasheed@gmail.com>; \n>PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\n>Subject: Re: Multivariate MCV list vs. statistics target\n>\n>On Tue, Sep 03, 2019 at 02:38:56PM -0400, Alvaro Herrera wrote:\n>>On 2019-Aug-01, Tomas Vondra wrote:\n>>\n>>> I'll move it to the next CF. Aside from the issues pointed by \n>>> Kyotaro-san in his review, I still haven't made my mind about \n>>> whether to base the use statistics targets set for the attributes. \n>>> That's what we're doing now, but I'm not sure it's a good idea after adding separate statistics target.\n>>> I wonder what Dean's opinion on this is, as he added the current logic.\n>>\n>>Latest patch no longer applies. Please update. 
And since you already \n>>seem to have handled all review comments since it was Ready for \n>>Committer, and you now know Dean's opinion on the remaining question, \n>>please get it pushed.\n>>\n>\n>OK, I've pushed the patch, after some minor polishing.\n>\n>\n>regards\n>\n>-- \n>Tomas Vondra http://www.2ndQuadrant.com \n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:32:19 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Multivariate MCV list vs. statistics target" }, { "msg_contents": "On Wed, Mar 18, 2020 at 01:32:19PM +0000, Shinoda, Noriyoshi (PN Japan\nA&PS Delivery) wrote:\n>Hi,\n>\n>>Thanks for the report. Yes, this is clearly an omission. I think it\n>>would be better to use wording >similar to attstattarget, per the\n>>attached patch. That is, without reference to ALTER STATISTICS and\n>>>better explanation of what positive/negative values do. Do you\n>>>agree?\n>\n>Thank you for your comment. I agree with the text you suggested.\n>\n\nThank you for the report, I've pushed the reworded fix.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 16:53:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multivariate MCV list vs. 
statistics target" }, { "msg_contents": "I think the docs are inconsistent with the commit message and the code\n(d06215d03) and docs should be corrected, something like so:\n\ndiff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\nindex b135c89005..cd10a6a6fc 100644\n--- a/doc/src/sgml/catalogs.sgml\n+++ b/doc/src/sgml/catalogs.sgml\n@@ -7302,7 +7302,8 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\n of statistics accumulated for this statistics object by\n <xref linkend=\"sql-analyze\"/>.\n A zero value indicates that no statistics should be collected.\n- A negative value says to use the system default statistics target.\n+ A negative value says to use the maximum of the statistics targets of\n+ the referenced columns, if set, or the system default statistics target.\n Positive values of <structfield>stxstattarget</structfield>\n determine the target number of <quote>most common values</quote>\n to collect.\ndiff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml\nindex be4c3f1f05..f2e8a93166 100644\n--- a/doc/src/sgml/ref/alter_statistics.sgml\n+++ b/doc/src/sgml/ref/alter_statistics.sgml\n@@ -101,7 +101,8 @@ ALTER STATISTICS <replaceable class=\"parameter\">name</replaceable> SET STATISTIC\n The statistic-gathering target for this statistics object for subsequent\n <xref linkend=\"sql-analyze\"/> operations.\n The target can be set in the range 0 to 10000; alternatively, set it\n- to -1 to revert to using the system default statistics\n+ to -1 to revert to using the maximum statistics target of the\n+ referenced column's, if set, or the system default statistics\n target (<xref linkend=\"guc-default-statistics-target\"/>).\n For more information on the use of statistics by the\n <productname>PostgreSQL</productname> query planner, refer to\n-- \n2.17.0\n\n\n\n", "msg_date": "Sat, 5 Sep 2020 20:12:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": 
false, "msg_subject": "Re: Multivariate MCV list vs. statistics target" } ]
[ { "msg_contents": "Hi,\n\nWhen experimenting with multi-column MCV lists with statistic target set\nto high value (e.g. 10k), I've realized there's an O(N^2) issue in\nstatext_mcv_build() when computing base frequencies.\n\nWe do this:\n\n for (i = 0; i < nitems; i++)\n {\n ...\n item->base_frequency = 1.0;\n for (j = 0; j < numattrs; j++)\n {\n int count = 0;\n int k;\n\n for (k = 0; k < ngroups; k++)\n {\n if (multi_sort_compare_dim(j, &groups[i], &groups[k], mss) == 0)\n count += groups[k].count;\n }\n\n ...\n }\n }\n\n\nThat is, for each item on the MCV list, we walk through all the groups\n(for each dimension independently) to determine the total frequency of\nthe value.\n\nWith many groups (which can easily happen for high statistics target),\nthis can easily get very expensive.\n\nI think the best solution here is to pre-compute frequencies for values\nin all dimensions, and then just access that instead of looping through\nthe groups over and over.\n\nIMHO this is something we should fix for PG12, so I'll put that on the\nopen items list, and produce a fix shortly.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Jun 2019 23:43:13 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "O(N^2) when building multi-column MCV lists" }, { "msg_contents": "Hi,\n\nAttached is a WIP/PoC fix addressing the O(N^2) behavior in ANALYZE with\nhigh statistic target values. It needs more work, but it's good enough to\nshow some measurements.\n\nFor benchmark, I've created a simple 2-column table, with MCV list on\nthose two columns:\n\n CREATE TABLE t (a int, b int);\n CREATE STATISTICS s (mcv) ON a,b FROM t;\n\nand then loaded data sets with different numbers of random combinations,\ndetermined by number of values in each column. 
For example with 10 values\nin a column, you get ~100 combinations.\n\n INSERT INTO t\n SELECT 10*random(), 10*random() FROM generate_series(1,3e6);\n\nThe 3M rows is picked because that's the sample size with target 10000.\n\nThe results with different statistic targets look like this:\n\n1) master\n\n values 100 1000 5000 10000\n ====================================================\n 10 103 586 2419 3041\n 100 116 789 4778 8934\n 1000 116 690 3162 499748\n\n2) patched\n\n values 100 1000 5000 10000\n ====================================================\n 10 113 606 2460 3716\n 100 143 711 3371 5231\n 1000 156 994 3836 6002\n\n3) comparison (patched / master)\n\n values 100 1000 5000 10000\n ====================================================\n 10 110% 103% 102% 122%\n 100 123% 90% 71% 59%\n 1000 134% 144% 121% 1%\n\n\nSo clearly, the issue for large statistic targets is resolved (duration\ndrops from 500s to just 6s), but there is measurable regression for the\nother cases. That needs more investigation & fix before commit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 21 Jun 2019 12:25:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: O(N^2) when building multi-column MCV lists" } ]
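The fix discussed in this thread — pre-computing per-dimension value frequencies once instead of rescanning all groups for every MCV item — can be illustrated with a small standalone sketch. This is hypothetical C, not the actual statext_mcv_build() code: `SortItem`, the fixed `NDIM`/`MAXVAL` bounds, and direct integer comparison (instead of multi_sort_compare_dim) are simplifying assumptions.

```c
#include <assert.h>

#define NDIM   2    /* number of statistics dimensions (columns) */
#define MAXVAL 16   /* toy assumption: attribute values are small ints */

typedef struct
{
    int values[NDIM];   /* one value per dimension */
    int count;          /* number of sampled rows in this group */
} SortItem;

/* Naive lookup: scan every group for each (item, dimension) pair,
 * which is what makes the original loop O(nitems * numattrs * ngroups). */
static int
dim_frequency_naive(const SortItem *groups, int ngroups, int dim, int value)
{
    int total = 0;

    for (int k = 0; k < ngroups; k++)
        if (groups[k].values[dim] == value)
            total += groups[k].count;
    return total;
}

/* Pre-computation: a single pass over the groups fills a per-dimension
 * frequency table, after which each lookup is O(1). */
static void
precompute_dim_frequencies(const SortItem *groups, int ngroups,
                           int freq[NDIM][MAXVAL])
{
    for (int d = 0; d < NDIM; d++)
        for (int v = 0; v < MAXVAL; v++)
            freq[d][v] = 0;

    for (int k = 0; k < ngroups; k++)
        for (int d = 0; d < NDIM; d++)
            freq[d][groups[k].values[d]] += groups[k].count;
}
```

With the table built, computing an item's base frequency becomes a product of O(1) lookups per dimension, which matches the ~500s-to-6s improvement reported above for the largest target.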
[ { "msg_contents": "I noticed that the old NetBSD 5.1.5 installation I had on my G4 Mac\nwas no longer passing our regression tests, because it has a strtof()\nthat is sloppy about underflow. Rather than fight with that I decided\nto update it to something shinier (well, as shiny as you can get on\nhardware that's old enough to apply for a driver's license). I stuck in\nNetBSD/macppc 8.0, and things seem to work, except that PL/Python\ncrashes on launch. I see something like this in the postmaster log:\n\nTraceback (most recent call last):\n File \"<frozen importlib._bootstrap>\", line 1162, in _install_external_importers\n File \"<frozen importlib._bootstrap>\", line 980, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 149, in __enter__\n File \"<frozen importlib._bootstrap>\", line 84, in acquire\nRuntimeError: no current thread ident\nFatal Python error: initexternalimport: external importer setup failed\n\nCurrent thread 0xffffffff (most recent call first):\n2019-06-18 17:40:59.629 EDT [20764] LOG: server process (PID 23714) was terminated by signal 6: Abort trap\n2019-06-18 17:40:59.629 EDT [20764] DETAIL: Failed process was running: CREATE FUNCTION stupid() RETURNS text AS 'return \"zarkon\"' LANGUAGE plpython3u;\n\nand a stack trace like\n\n#0 0xfddd383c in _lwp_kill () from /usr/lib/libc.so.12\n#1 0xfddd3800 in raise () from /usr/lib/libc.so.12\n#2 0xfddd2e38 in abort () from /usr/lib/libc.so.12\n#3 0xf4c371dc in fatal_error () from /usr/pkg/lib/libpython3.7.so.1.0\n#4 0xf4c38370 in _Py_FatalInitError () from /usr/pkg/lib/libpython3.7.so.1.0\n#5 0xf4c38f7c in Py_InitializeEx () from /usr/pkg/lib/libpython3.7.so.1.0\n#6 0xf4c38fc0 in Py_Initialize () from /usr/pkg/lib/libpython3.7.so.1.0\n#7 0xfdc8d548 in PLy_initialize () at plpy_main.c:135\n#8 0xfdc8da0c in plpython3_validator (fcinfo=<optimized out>)\n at plpy_main.c:192\n#9 0x01d4a904 in FunctionCall1Coll (flinfo=0xffffd608, \n collation=<optimized out>, arg1=<optimized out>) at 
fmgr.c:1140\n#10 0x01d4b03c in OidFunctionCall1Coll (functionId=functionId@entry=16464, \n collation=collation@entry=0, arg1=arg1@entry=32774) at fmgr.c:1418\n#11 0x0196a9d0 in ProcedureCreate (\n procedureName=procedureName@entry=0xfdb0aac0 \"transaction_test1\", \n procNamespace=procNamespace@entry=2200, replace=replace@entry=false, \n returnsSet=returnsSet@entry=false, returnType=returnType@entry=2278, \n proowner=10, languageObjectId=languageObjectId@entry=16465, \n languageValidator=languageValidator@entry=16464, \n prosrc=prosrc@entry=0xfdb0abf8 \"\\nfor i in range(0, 10):\\n plpy.execute(\\\"INSERT INTO test1 (a) VALUES (%d)\\\" % i)\\n if i % 2 == 0:\\n plpy.commit()\\n else:\\n plpy.rollback()\\n\", probin=probin@entry=0x0, \n...\n\nThe \"no current thread ident\" error rings some vague bells, but I could\nnot find any previous discussion matching that in our archives.\n\nThis is with today's HEAD of our code and the python37-3.7.1 package from\nNetBSD 8.0.\n\nAny ideas? I'm not so wedded to PL/Python that I'll spend a lot of time\nmaking it go on this old box ... but seeing that 3.7 is still pretty\nbleeding-edge Python, I wonder if other people will start getting this\ntoo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:16:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "Awhile back I wrote:\n> I noticed that the old NetBSD 5.1.5 installation I had on my G4 Mac\n> was no longer passing our regression tests, because it has a strtof()\n> that is sloppy about underflow. Rather than fight with that I decided\n> to update it to something shinier (well, as shiny as you can get on\n> hardware that's old enough to apply for a driver's license). I stuck in\n> NetBSD/macppc 8.0, and things seem to work, except that PL/Python\n> crashes on launch. 
I see something like this in the postmaster log:\n\n> Traceback (most recent call last):\n> File \"<frozen importlib._bootstrap>\", line 1162, in _install_external_importers\n> File \"<frozen importlib._bootstrap>\", line 980, in _find_and_load\n> File \"<frozen importlib._bootstrap>\", line 149, in __enter__\n> File \"<frozen importlib._bootstrap>\", line 84, in acquire\n> RuntimeError: no current thread ident\n> Fatal Python error: initexternalimport: external importer setup failed\n> \n> Current thread 0xffffffff (most recent call first):\n> 2019-06-18 17:40:59.629 EDT [20764] LOG: server process (PID 23714) was terminated by signal 6: Abort trap\n> 2019-06-18 17:40:59.629 EDT [20764] DETAIL: Failed process was running: CREATE FUNCTION stupid() RETURNS text AS 'return \"zarkon\"' LANGUAGE plpython3u;\n\nSo ... I just got this identical failure on NetBSD 8.1 on a shiny\nnew Intel NUC box. So that removes the excuse of old unsupported\nhardware, and leaves us with the conclusion that PL/Python is\nflat out broken on recent NetBSD.\n\nThis is with today's HEAD of our code and the python37-3.7.4/amd64\npackage from NetBSD 8.1.\n\nBTW, the only somewhat-modern NetBSD machine in our buildfarm is\ncoypu, which is running NetBSD/macppc 8.0 ... but what it is testing\nPL/Python against is python 2.7.15, so the fact that it doesn't\nfail can probably be explained as a python 2 vs python 3 thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 27 Oct 2019 21:11:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "None of the output provides any clue to me but I do know that Python 3.7\nhas some issues with a lot of versions of openssl that is based on a\ndisagreement between devs in both projects. This was a problem for me when\ntrying to build python 3.7 on my Kubuntu 14.04 system. 
I've seen this issue\nreported across all targets for Python including Freebsd so I expect it's\nlikely to also happen for NetBSD.\n\nPerhaps this might be related to the problem?\n\nOn Mon, Oct 28, 2019, 8:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Awhile back I wrote:\n> > I noticed that the old NetBSD 5.1.5 installation I had on my G4 Mac\n> > was no longer passing our regression tests, because it has a strtof()\n> > that is sloppy about underflow. Rather than fight with that I decided\n> > to update it to something shinier (well, as shiny as you can get on\n> > hardware that's old enough to apply for a driver's license). I stuck in\n> > NetBSD/macppc 8.0, and things seem to work, except that PL/Python\n> > crashes on launch. I see something like this in the postmaster log:\n>\n> > Traceback (most recent call last):\n> > File \"<frozen importlib._bootstrap>\", line 1162, in\n> _install_external_importers\n> > File \"<frozen importlib._bootstrap>\", line 980, in _find_and_load\n> > File \"<frozen importlib._bootstrap>\", line 149, in __enter__\n> > File \"<frozen importlib._bootstrap>\", line 84, in acquire\n> > RuntimeError: no current thread ident\n> > Fatal Python error: initexternalimport: external importer setup failed\n> >\n> > Current thread 0xffffffff (most recent call first):\n> > 2019-06-18 17:40:59.629 EDT [20764] LOG: server process (PID 23714) was\n> terminated by signal 6: Abort trap\n> > 2019-06-18 17:40:59.629 EDT [20764] DETAIL: Failed process was running:\n> CREATE FUNCTION stupid() RETURNS text AS 'return \"zarkon\"' LANGUAGE\n> plpython3u;\n>\n> So ... I just got this identical failure on NetBSD 8.1 on a shiny\n> new Intel NUC box. 
So that removes the excuse of old unsupported\n> hardware, and leaves us with the conclusion that PL/Python is\n> flat out broken on recent NetBSD.\n>\n> This is with today's HEAD of our code and the python37-3.7.4/amd64\n> package from NetBSD 8.1.\n>\n> BTW, the only somewhat-modern NetBSD machine in our buildfarm is\n> coypu, which is running NetBSD/macppc 8.0 ... but what it is testing\n> PL/Python against is python 2.7.15, so the fact that it doesn't\n> fail can probably be explained as a python 2 vs python 3 thing.\n>\n> regards, tom lane\n>\n>\n>\n\nNone of the output provides any clue to me but I do know that Python 3.7 has some issues with a lot of versions of openssl that is based on a disagreement between devs in both projects. This was a problem for me when trying to build python 3.7 on my Kubuntu 14.04 system. I've seen this issue reported across all targets for Python including Freebsd so I expect it's likely to also happen for NetBSD. Perhaps this might be related to the problem? On Mon, Oct 28, 2019, 8:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Awhile back I wrote:\n> I noticed that the old NetBSD 5.1.5 installation I had on my G4 Mac\n> was no longer passing our regression tests, because it has a strtof()\n> that is sloppy about underflow.  Rather than fight with that I decided\n> to update it to something shinier (well, as shiny as you can get on\n> hardware that's old enough to apply for a driver's license).  I stuck in\n> NetBSD/macppc 8.0, and things seem to work, except that PL/Python\n> crashes on launch.  
I see something like this in the postmaster log:\n\n> Traceback (most recent call last):\n>   File \"<frozen importlib._bootstrap>\", line 1162, in _install_external_importers\n>   File \"<frozen importlib._bootstrap>\", line 980, in _find_and_load\n>   File \"<frozen importlib._bootstrap>\", line 149, in __enter__\n>   File \"<frozen importlib._bootstrap>\", line 84, in acquire\n> RuntimeError: no current thread ident\n> Fatal Python error: initexternalimport: external importer setup failed\n> \n> Current thread 0xffffffff (most recent call first):\n> 2019-06-18 17:40:59.629 EDT [20764] LOG:  server process (PID 23714) was terminated by signal 6: Abort trap\n> 2019-06-18 17:40:59.629 EDT [20764] DETAIL:  Failed process was running: CREATE FUNCTION stupid() RETURNS text AS 'return \"zarkon\"' LANGUAGE plpython3u;\n\nSo ... I just got this identical failure on NetBSD 8.1 on a shiny\nnew Intel NUC box.  So that removes the excuse of old unsupported\nhardware, and leaves us with the conclusion that PL/Python is\nflat out broken on recent NetBSD.\n\nThis is with today's HEAD of our code and the python37-3.7.4/amd64\npackage from NetBSD 8.1.\n\nBTW, the only somewhat-modern NetBSD machine in our buildfarm is\ncoypu, which is running NetBSD/macppc 8.0 ... but what it is testing\nPL/Python against is python 2.7.15, so the fact that it doesn't\nfail can probably be explained as a python 2 vs python 3 thing.\n\n                        regards, tom lane", "msg_date": "Mon, 28 Oct 2019 08:33:52 +0700", "msg_from": "Benjamin Scherrey <scherrey@proteus-tech.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "Benjamin Scherrey <scherrey@proteus-tech.com> writes:\n> None of the output provides any clue to me but I do know that Python 3.7\n> has some issues with a lot of versions of openssl that is based on a\n> disagreement between devs in both projects. 
This was a problem for me when\n> trying to build python 3.7 on my Kubuntu 14.04 system. I've seen this issue\n> reported across all targets for Python including Freebsd so I expect it's\n> likely to also happen for NetBSD.\n\nThanks for looking! It doesn't seem to be related to this issue though.\nI've now tracked this problem down, and what I'm finding is that:\n\n1. The proximate cause of the crash is that pthread_self() is\nreturning ((pthread_t) -1), which Python interprets as a hard\nfailure. Now on the one hand, I wonder why Python is even\nchecking for a failure, given that POSIX is totally clear that\nthere are no failures:\n\n The pthread_self() function shall always be successful and no\n return value is reserved to indicate an error.\n\n\"Shall\" does not allow wiggle room. But on the other hand,\npthread_t is a pointer on this platform, so that's a pretty\nstrange value to be returning if it's valid.\n\nAnd on the third hand, NetBSD's own man page for pthread_self()\ndoesn't admit the possibility of failure either, though it does\nsuggest that you should link with -lpthread [1].\n\n2. Testing pthread_self() standalone on this platform provides\nilluminating results:\n\n$ cat test.c\n#include <stdio.h>\n#include <pthread.h>\n\nint main()\n{\n pthread_t id = pthread_self();\n\n printf(\"self = %p\\n\", id);\n return 0;\n}\n$ gcc test.c\n$ ./a.out\nself = 0xffffffffffffffff\n$ gcc test.c -lpthread\n$ ./a.out\nself = 0x754ae5a2b800\n\n3. libpython.so on this platform has a dependency on libpthread,\nbut we don't link the postgres executable to libpthread. I surmise\nthat pthread_self() actually exists in core libc, but what it returns\nis only valid if libpthread was linked into the main executable so\nthat it could initialize some static state at execution start.\n\n4. If I add -lpthread to the LIBS for the main postgres executable,\nPL/Python starts passing its regression tests. 
I haven't finished\na complete check-world run, but at least the core regression tests\nshow no ill effects from doing this.\n\n\nSo one possible answer for us is \"if we're on NetBSD and plpython3\nis to be built, add -lpthread to the core LIBS list\". I do not\nmuch like this answer though; it's putting the responsibility in\nthe wrong place.\n\nWhat I'm inclined to do is go file a bug report saying that this\nbehavior contradicts both POSIX and NetBSD's own man page, and\nsee what they say about that.\n\n\t\t\tregards, tom lane\n\n[1] https://netbsd.gw.com/cgi-bin/man-cgi?pthread_self+3+NetBSD-current\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:25:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "On Wed, Oct 30, 2019 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I'm inclined to do is go file a bug report saying that this\n> behavior contradicts both POSIX and NetBSD's own man page, and\n> see what they say about that.\n\n From a quick look at the relevant trees, isn't the problem here that\ncpython thinks it can reserve pthread_t value -1 (or rather, that\nnumber cast to unsigned long, which is the type it uses for its own\nthread IDs):\n\nhttps://github.com/python/cpython/blob/master/Include/pythread.h#L21\n\n... and then use that to detect lack of initialisation:\n\nhttps://github.com/python/cpython/blob/master/Modules/_threadmodule.c#L1149\n\n... and that NetBSD also chose the same arbitrary value for their\nthreading stub library:\n\nhttps://github.com/NetBSD/src/blob/trunk/lib/libc/thread-stub/thread-stub.c#L392\n\n... as they are entirely within their rights to do? Assuming the stub\nlibrary can do whatever it has to do with that value, like answer\nquestions like pthread_equal(), as it clearly can. 
I think libc is\nallowed to implement pthread_t as an integer type and reserve -1, but\napplication code is not allowed to assume that pthread_t is even\ncastable to an integer type, let alone that it can reserve magic\nvalues.\n\nFurther evidence that this is Python's fault is the admission in the\nsource code itself that it is \"inherently hosed\":\n\nhttps://github.com/python/cpython/blob/master/Python/thread_pthread.h#L299\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:38:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Oct 30, 2019 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm inclined to do is go file a bug report saying that this\n>> behavior contradicts both POSIX and NetBSD's own man page, and\n>> see what they say about that.\n\n> From a quick look at the relevant trees, isn't the problem here that\n> cpython thinks it can reserve pthread_t value -1 (or rather, that\n> number cast to unsigned long, which is the type it uses for its own\n> thread IDs):\n> https://github.com/python/cpython/blob/master/Include/pythread.h#L21\n\nPossibly. A value of -1 would be quite likely to crash any other\nlibpthread code it might be passed to, though, since it's evidently\nsupposed to be a pointer on this implementation. 
Note that the\npoint here is that libpython should get a *valid* thread ID that it\ncan use for other purposes, independently of what the host executable\ndid, and that we can expect that libpython's calls are not being\nrouted to the stub implementations.\n\nI've been experimenting with that test program on other platforms,\nand I find that FreeBSD 11.0, OpenBSD 6.4, and Fedora 30 all return\nplausible-looking pointers with or without -lpthread.\n\nInterestingly, RHEL6 (glibc 2.12) acts more like NetBSD is acting: you get\nNULL without -lpthread and a valid pointer with it. Given the lack of\nother problem reports about pl/python, I surmise that the glibc\nimplementation does manage to produce a valid pointer as soon as\nlibpthread is loaded. Or maybe they fixed glibc far enough back that\nnobody has tried recent python with a glibc that worked the old way.\n\n> Further evidence that this is Python's fault is the admission in the\n> source code itself that it is \"inherently hosed\":\n> https://github.com/python/cpython/blob/master/Python/thread_pthread.h#L299\n\nI'm not here to defend Python's choices in this area. I'm just\nobserving that libpthread should produce valid results in a\ncorrectly-linked dynamically loaded library, whether or not the\nhost executable linked libpthread. NetBSD's code is failing that\ntest, and nobody else's is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 18:05:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... and that NetBSD also chose the same arbitrary value for their\n> threading stub library:\n> https://github.com/NetBSD/src/blob/trunk/lib/libc/thread-stub/thread-stub.c#L392\n> ... 
as they are entirely within their rights to do?\n\nI poked around in that repo, and found the non-stub version of\npthread_self:\n\nhttps://github.com/NetBSD/src/blob/trunk/lib/libpthread/pthread.c#L863\n\nRelevant to this discussion is that it actually redirects to the\nstub version if __uselibcstub is still set, and that variable\nappears to be cleared by pthread__init,\n\nhttps://github.com/NetBSD/src/blob/trunk/lib/libpthread/pthread.c#L187\n\nwhose header comment is pretty telling:\n\n/*\n * This needs to be started by the library loading code, before main()\n * gets to run, for various things that use the state of the initial thread\n * to work properly (thread-specific data is an application-visible example;\n * spinlock counts for mutexes is an internal example).\n */\n\nI've not found the mechanism by which pthread__init gets called, but\nthis sure smells like they think it only has to happen before main().\n\nInterestingly, some of the other files in that directory have recent\nCVS log entries specifically mentioning bug fixes for cases where\nlibpthread is dlopen'd. So it's not like they don't want to support\nthe case. I wonder if they just need to fix pthread_self to forcibly\ninit the library if __uselibcstub is still set.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 19:00:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Oct 30, 2019 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm inclined to do is go file a bug report saying that this\n>> behavior contradicts both POSIX and NetBSD's own man page, and\n>> see what they say about that.\n\nSo I went and filed that bug,\n\nhttp://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=54661\n\nand the answer seems to be that netbsd's libpthread is operating as\ndesigned. 
They don't support creating new threads if libpthread\nwasn't present at main program start, so redirecting all the\nentry points to the libc stub functions in that case is actually\npretty sane, self-consistent behavior.\n\nThis behavior is actually kinda useful from our standpoint: it means\nthat a perlu/pythonu/tclu function *can't* cause a backend to become\nmultithreaded, even if it tries. So I definitely don't want to\n\"fix\" this by linking libpthread to the core backend; that would\nopen us up to problems we needn't have, on this platform anyway.\n\n> From a quick look at the relevant trees, isn't the problem here that\n> cpython thinks it can reserve pthread_t value -1 (or rather, that\n> number cast to unsigned long, which is the type it uses for its own\n> thread IDs):\n\nYeah, this. I shall now go rant at the Python people about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 11:30:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> From a quick look at the relevant trees, isn't the problem here that\n>> cpython thinks it can reserve pthread_t value -1 (or rather, that\n>> number cast to unsigned long, which is the type it uses for its own\n>> thread IDs):\n\n> Yeah, this. I shall now go rant at the Python people about that.\n\nDone at\n\nhttps://bugs.python.org/issue38646\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 13:33:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: PL/Python fails on new NetBSD/PPC 8.0 install" } ]
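The portable alternative to CPython's magic-value scheme discussed in this thread can be sketched as follows. This is an illustrative example, not code from CPython or PostgreSQL: POSIX reserves no sentinel pthread_t value (it may be a pointer or a struct), so "no thread recorded yet" is tracked with a separate flag and IDs are compared only through pthread_equal().

```c
#include <assert.h>
#include <pthread.h>

/*
 * Instead of reserving a magic value such as (unsigned long) -1 -- which
 * collides with NetBSD's perfectly legal stub implementation -- keep an
 * explicit validity flag next to the opaque pthread_t.
 */
typedef struct
{
    pthread_t thread;
    int       valid;      /* 0 = no thread recorded */
} MaybeThread;

static MaybeThread recorded;

static void
record_current_thread(void)
{
    recorded.thread = pthread_self();
    recorded.valid = 1;
}

static int
is_current_thread_recorded(void)
{
    /* pthread_equal() is the only portable way to compare thread IDs. */
    return recorded.valid && pthread_equal(recorded.thread, pthread_self());
}
```

The flag removes any need to assume pthread_t is castable to an integer type, which is exactly the assumption the "inherently hosed" comment in thread_pthread.h admits to.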
[ { "msg_contents": "A customer's DB crashed due to OOM. While investigating the issue in our\nreport, I created MV stats, which causes this error:\n\nts=# CREATE STATISTICS sectors_stats (dependencies) ON site_id,sect_id FROM sectors;\nCREATE STATISTICS\nts=# ANALYZE sectors;\nERROR: XX000: tuple already updated by self\nLOCATION: simple_heap_update, heapam.c:4613\n\nThe issue goes away if I drop the stats object and comes back if I recreate it.\n\nWe're running 11.3 ; most of the (very few) reports from this error are from\nalmost 10+ years ago, running pg7.3 like.\n\nI've taken a couple steps to resolve the issue (vacuum full and then reindex\npg_statistic and its toast and the target table, which doesn't have a toast).\n\nI'm guessing the issue is with pg_statistic_ext, which I haven't touched.\n\nNext step seems to be to truncate pg_statistic{,ext} and re-analyze the DB.\n\nDoes anyone want debugging/diagnostic info before I do that ?\n\nJustin\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:12:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-06-18 18:12:33 -0500, Justin Pryzby wrote:\n> A customer's DB crashed due to OOM. 
While investigating the issue in our\n> report, I created MV stats, which causes this error:\n> \n> ts=# CREATE STATISTICS sectors_stats (dependencies) ON site_id,sect_id FROM sectors;\n> CREATE STATISTICS\n> ts=# ANALYZE sectors;\n> ERROR: XX000: tuple already updated by self\n> LOCATION: simple_heap_update, heapam.c:4613\n> \n> The issue goes away if I drop the stats object and comes back if I recreate it.\n> \n> We're running 11.3 ; most of the (very few) reports from this error are from\n> almost 10+ years ago, running pg7.3 like.\n> \n> I've taken a couple steps to resolve the issue (vacuum full and then reindex\n> pg_statistic and its toast and the target table, which doesn't have a toast).\n> \n> I'm guessing the issue is with pg_statistic_ext, which I haven't touched.\n> \n> Next step seems to be to truncate pg_statistic{,ext} and re-analyze the DB.\n> \n> Does anyone want debugging/diagnostic info before I do that ?\n\nAny chance to get a backtrace for the error?\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\nYou should be able to set a breakpoint to just the location pointed out\nin the error message.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2019 16:30:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jun 18, 2019 at 06:12:33PM -0500, Justin Pryzby wrote:\n> A customers DB crashed due to OOM. 
While investigating the issue in our\n> report, I created MV stats, which causes this error:\n> \n> ts=# CREATE STATISTICS sectors_stats (dependencies) ON site_id,sect_id FROM sectors;\n> CREATE STATISTICS\n> ts=# ANALYZE sectors;\n> ERROR: XX000: tuple already updated by self\n> LOCATION: simple_heap_update, heapam.c:4613\n\n> I'm guessing the issue is with pg_statistic_ext, which I haven't touched.\n> \n> Next step seems to be to truncate pg_statistic{,ext} and re-analyze the DB.\n\nConfirmed the issue is there.\n\nts=# analyze sectors;\nERROR: tuple already updated by self\nts=# begin; delete from pg_statistic_ext; analyze sectors;\nBEGIN\nDELETE 87\nANALYZE\n\nOn Tue, Jun 18, 2019 at 04:30:33PM -0700, Andres Freund wrote:\n> Any chance to get a backtrace for the error?\n\nSure:\n\n(gdb) bt\n#0 errfinish (dummy=0) at elog.c:414\n#1 0x000000000085e834 in elog_finish (elevel=<value optimized out>, fmt=<value optimized out>) at elog.c:1376\n#2 0x00000000004b93bd in simple_heap_update (relation=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at heapam.c:4613\n#3 0x000000000051bdb7 in CatalogTupleUpdate (heapRel=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at indexing.c:234\n#4 0x000000000071e5ca in statext_store (onerel=0x7fee16140de8, totalrows=100843, numrows=100843, rows=0x1fd4028, natts=33260176, vacattrstats=0x1fb7ef0) at extended_stats.c:344\n#5 BuildRelationExtStatistics (onerel=0x7fee16140de8, totalrows=100843, numrows=100843, rows=0x1fd4028, natts=33260176, vacattrstats=0x1fb7ef0) at extended_stats.c:130\n#6 0x0000000000588346 in do_analyze_rel (onerel=0x7fee16140de8, options=2, params=0x7ffe5b6bf8b0, va_cols=0x0, acquirefunc=0x492b4, relpages=36, inh=true, in_outer_xact=false, elevel=13) at analyze.c:627\n#7 0x00000000005891e1 in analyze_rel (relid=<value optimized out>, relation=0x1ea22a0, options=2, params=0x7ffe5b6bf8b0, va_cols=0x0, in_outer_xact=false, bstrategy=0x1f38090) at analyze.c:317\n#8 0x00000000005fb689 in vacuum (options=2,
relations=0x1f381f0, params=0x7ffe5b6bf8b0, bstrategy=<value optimized out>, isTopLevel=<value optimized out>) at vacuum.c:357\n#9 0x00000000005fbafe in ExecVacuum (vacstmt=<value optimized out>, isTopLevel=<value optimized out>) at vacuum.c:141\n#10 0x0000000000757a30 in standard_ProcessUtility (pstmt=0x1ea2410, queryString=0x1ea18c0 \"ANALYZE sectors;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1ea26d0, completionTag=0x7ffe5b6bfdf0 \"\")\n at utility.c:670\n#11 0x00007fee163a4344 in pgss_ProcessUtility (pstmt=0x1ea2410, queryString=0x1ea18c0 \"ANALYZE sectors;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1ea26d0, completionTag=0x7ffe5b6bfdf0 \"\")\n at pg_stat_statements.c:1005\n#12 0x0000000000753779 in PortalRunUtility (portal=0x1f1a8e0, pstmt=0x1ea2410, isTopLevel=<value optimized out>, setHoldSnapshot=<value optimized out>, dest=0x1ea26d0, completionTag=<value optimized out>) at pquery.c:1178\n#13 0x000000000075464d in PortalRunMulti (portal=0x1f1a8e0, isTopLevel=true, setHoldSnapshot=false, dest=0x1ea26d0, altdest=0x1ea26d0, completionTag=0x7ffe5b6bfdf0 \"\") at pquery.c:1331\n#14 0x0000000000754de8 in PortalRun (portal=0x1f1a8e0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1ea26d0, altdest=0x1ea26d0, completionTag=0x7ffe5b6bfdf0 \"\") at pquery.c:799\n#15 0x0000000000751987 in exec_simple_query (query_string=0x1ea18c0 \"ANALYZE sectors;\") at postgres.c:1145\n#16 0x0000000000752931 in PostgresMain (argc=<value optimized out>, argv=<value optimized out>, dbname=0x1edbad8 \"ts\", username=<value optimized out>) at postgres.c:4182\n#17 0x00000000006e1ba7 in BackendRun (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4358\n#18 BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4030\n#19 ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1707\n#20 PostmasterMain (argc=<value optimized out>,
argv=<value optimized out>) at postmaster.c:1380\n#21 0x0000000000656210 in main (argc=3, argv=0x1e9c4d0) at main.c:228\n\n#3 0x000000000051bdb7 in CatalogTupleUpdate (heapRel=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at indexing.c:234\n indstate = 0x1fb84a0\n#4 0x000000000071e5ca in statext_store (onerel=0x7fee16140de8, totalrows=100843, numrows=100843, rows=0x1fd4028, natts=33260176, vacattrstats=0x1fb7ef0) at extended_stats.c:344\n stup = 0x1fb7f40\n oldtup = 0x7fee16158530\n values = {0, 0, 0, 0, 0, 0, 0, 33260544}\n nulls = {true, true, true, true, true, true, true, false}\n replaces = {false, false, false, false, false, false, true, true}\n#5 BuildRelationExtStatistics (onerel=0x7fee16140de8, totalrows=100843, numrows=100843, rows=0x1fd4028, natts=33260176, vacattrstats=0x1fb7ef0) at extended_stats.c:130\n stat = <value optimized out>\n stats = <value optimized out>\n lc2 = <value optimized out>\n ndistinct = <value optimized out>\n dependencies = <value optimized out>\n pg_stext = 0x7fee161700c8\n lc = 0x1fb8290\n stats = 0xfb6a172d\n cxt = 0x1fb7de0\n oldcxt = 0x1f6dd60\n __func__ = \"BuildRelationExtStatistics\"\n\n\nAh: the table is an inheritence parent. If I uninherit its child, there's no\nerror during ANALYZE.
MV stats on the child are ok:\n\nts=# CREATE STATISTICS vzw_sectors_stats (dependencies) ON site_id,sect_id FROM vzw_sectors;\nCREATE STATISTICS\nts=# ANALYZE vzw_sectors;\nANALYZE\n\nI'm not sure what the behavior is intended to be, and probably the other parent\ntables I've added stats are all relkind=p.\n\nFWIW, we also have some FKs, like:\n\n \"sectors_site_id_fkey\" FOREIGN KEY (site_id) REFERENCES sites(site_id)\n\nJustin\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:48:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jun 18, 2019 at 06:48:58PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 18, 2019 at 06:12:33PM -0500, Justin Pryzby wrote:\n> > ts=# ANALYZE sectors;\n> > ERROR: XX000: tuple already updated by self\n> > LOCATION: simple_heap_update, heapam.c:4613\n\n> Ah: the table is an inheritence parent. If I uninherit its child, there's no\n> error during ANALYZE.\n\npostgres=# CREATE TABLE t(i int,j int); CREATE TABLE u() INHERITS (t); CREATE STATISTICS t_stats ON i,j FROM t; INSERT INTO t VALUES(1,1);ANALYZE t;\nCREATE TABLE\nCREATE TABLE\nCREATE STATISTICS\nINSERT 0 1\nERROR: tuple already updated by self\n\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:57:55 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jun 18, 2019 at 4:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Sure:\n>\n> (gdb) bt\n> #0 errfinish (dummy=0) at elog.c:414\n> #1 0x000000000085e834 in elog_finish (elevel=<value optimized out>, fmt=<value optimized out>) at elog.c:1376\n> #2 0x00000000004b93bd in simple_heap_update (relation=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at heapam.c:4613\n> #3 0x000000000051bdb7 in CatalogTupleUpdate (heapRel=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at indexing.c:234\n\nIt might be
interesting to set a breakpoint within heap_update(),\nwhich is called by simple_heap_update() --technically, this is where\nthe reported failure occurs. From there, you could send an image of\nthe page to the list by following the procedure described here:\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n\nYou'll have to hit \"next\" a few times, until heap_update()'s \"page\"\nvariable is initialized.\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jun 2019 17:00:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-06-18 18:48:58 -0500, Justin Pryzby wrote:\n> Ah: the table is an inheritence parent. If I uninherit its child, there's no\n> error during ANALYZE. MV stats on the child are ok:\n\nIt's a \"classical\" inheritance parent, not a builtin-partitioning type\nof parent, right? And it contains data?\n\nI assume it ought to not be too hard to come up with a reproducer\nthen...\n\nI think the problem is pretty plainly that for inheritance tables we'll\ntry to store extended statistics twice. And thus end up updating the\nsame row twice.\n\n> #6 0x0000000000588346 in do_analyze_rel (onerel=0x7fee16140de8, options=2, params=0x7ffe5b6bf8b0, va_cols=0x0, acquirefunc=0x492b4, relpages=36, inh=true, in_outer_xact=false, elevel=13) at analyze.c:627\n\n\t\t/* Build extended statistics (if there are any).
*/\n\t\tBuildRelationExtStatistics(onerel, totalrows, numrows, rows, attr_cnt,\n\t\t\t\t\t\t\t\t vacattrstats);\n\nNote that for plain statistics we a) pass down the 'inh' flag to the\nstorage function, b) stainherit is part of pg_statistics' key:\n\nIndexes:\n \"pg_statistic_relid_att_inh_index\" UNIQUE, btree (starelid, staattnum, stainherit)\n\n\nTomas, I think at the very least extended statistics shouldn't be built\nwhen building inherited stats. But going forward I think we should\nprobably have it as part of the key for pg_statistic_ext.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2019 17:08:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-06-18 17:00:09 -0700, Peter Geoghegan wrote:\n> On Tue, Jun 18, 2019 at 4:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Sure:\n> >\n> > (gdb) bt\n> > #0 errfinish (dummy=0) at elog.c:414\n> > #1 0x000000000085e834 in elog_finish (elevel=<value optimized out>, fmt=<value optimized out>) at elog.c:1376\n> > #2 0x00000000004b93bd in simple_heap_update (relation=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at heapam.c:4613\n> > #3 0x000000000051bdb7 in CatalogTupleUpdate (heapRel=0x7fee161700c8, otid=0x1fb7f44, tup=0x1fb7f40) at indexing.c:234\n> \n> It might be interesting to set a breakpoint within heap_update(),\n> which is called by simple_heap_update() --technically, this is where\n> the reported failure occurs.
From there, you could send an image of\n> the page to the list by following the procedure described here:\n> \n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Dumping_a_page_image_from_within_GDB\n> \n> You'll have to hit \"next\" a few times, until heap_update()'s \"page\"\n> variable is initialized.\n\nHm, what are you hoping to glean by doing so?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2019 17:09:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the problem is pretty plainly that for inheritance tables we'll\n> try to store extended statistics twice. And thus end up updating the\n> same row twice.\n\nThey shouldn't be the same row though. If we're to try to capture\next-stats on inheritance trees --- and I think that's likely a good\nidea --- then we need a bool corresponding to pg_statistic's stainherit\nas part of pg_statistic_ext's primary key.\n\nSince there is no such bool there now, and I assume that nobody wants\nyet another pg_statistic_ext-driven catversion bump for v12, the only\nfix is to get the stats machinery to not compute or store such stats.\nFor now. But I think we ought to change that in v13.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2019 20:16:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jun 18, 2019 at 5:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > It might be interesting to set a breakpoint within heap_update(),\n> > which is called by simple_heap_update() --technically, this is where\n> > the reported failure occurs.
From there, you could send an image of\n> > the page to the list by following the procedure described here:\n\n> Hm, what are you hoping to glean by doing so?\n\nNothing in particular. I see no reason to assume that we know what\nthat looks like, though. It's easy to check.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jun 2019 17:20:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jun 18, 2019 at 06:48:58PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 18, 2019 at 06:12:33PM -0500, Justin Pryzby wrote:\n> > A customers DB crashed due to OOM. While investigating the issue in our\n> > report, I created MV stats, which causes this error:\n> > \n> > ts=# CREATE STATISTICS sectors_stats (dependencies) ON site_id,sect_id FROM sectors;\n> > CREATE STATISTICS\n> > ts=# ANALYZE sectors;\n> > ERROR: XX000: tuple already updated by self\n> > LOCATION: simple_heap_update, heapam.c:4613\n> \n> > I'm guessing the issue is with pg_statistic_ext, which I haven't touched.\n> > \n> > Next step seems to be to truncate pg_statistic{,ext} and re-analyze the DB.\n> \n> Confirmed the issue is there.\n> \n> ts=# analyze sectors;\n> ERROR: tuple already updated by self\n> ts=# begin; delete from pg_statistic_ext; analyze sectors;\n> BEGIN\n> DELETE 87\n> ANALYZE\n\nWhy this works seems to me to be unexplained..\n\nJustin\n\n\n", "msg_date": "Tue, 18 Jun 2019 19:38:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn June 18, 2019 5:38:34 PM PDT, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>On Tue, Jun 18, 2019 at 06:48:58PM -0500, Justin Pryzby wrote:\n>> On Tue, Jun 18, 2019 at 06:12:33PM -0500, Justin Pryzby wrote:\n>> > A customers DB crashed due to OOM.
While investigating the issue\n>in our\n>> > report, I created MV stats, which causes this error:\n>> > \n>> > ts=# CREATE STATISTICS sectors_stats (dependencies) ON\n>site_id,sect_id FROM sectors;\n>> > CREATE STATISTICS\n>> > ts=# ANALYZE sectors;\n>> > ERROR: XX000: tuple already updated by self\n>> > LOCATION: simple_heap_update, heapam.c:4613\n>> \n>> > I'm guessing the issue is with pg_statistic_ext, which I haven't\n>touched.\n>> > \n>> > Next step seems to be to truncate pg_statistic{,ext} and re-analyze\n>the DB.\n>> \n>> Confirmed the issue is there.\n>> \n>> ts=# analyze sectors;\n>> ERROR: tuple already updated by self\n>> ts=# begin; delete from pg_statistic_ext; analyze sectors;\n>> BEGIN\n>> DELETE 87\n>> ANALYZE\n>\n>Why this works seems to me to be unexplained..\n\nThere's no extended stats to compute after that, thus we don't try to update the extended stats twice.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 18 Jun 2019 18:06:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-06-18 17:08:37 -0700, Andres Freund wrote:\n> On 2019-06-18 18:48:58 -0500, Justin Pryzby wrote:\n> > Ah: the table is an inheritence parent. If I uninherit its child, there's no\n> > error during ANALYZE. MV stats on the child are ok:\n> \n> It's a \"classical\" inheritance parent, not a builtin-partitioning type\n> of parent, right? And it contains data?\n> \n> I assume it ought to not be too hard to come up with a reproducer\n> then...\n> \n> I think the problem is pretty plainly that for inheritance tables we'll\n> try to store extended statistics twice.
And thus end up updating the\n> same row twice.\n> \n> > #6 0x0000000000588346 in do_analyze_rel (onerel=0x7fee16140de8, options=2, params=0x7ffe5b6bf8b0, va_cols=0x0, acquirefunc=0x492b4, relpages=36, inh=true, in_outer_xact=false, elevel=13) at analyze.c:627\n> \n> \t\t/* Build extended statistics (if there are any). */\n> \t\tBuildRelationExtStatistics(onerel, totalrows, numrows, rows, attr_cnt,\n> \t\t\t\t\t\t\t\t vacattrstats);\n> \n> Note that for plain statistics we a) pass down the 'inh' flag to the\n> storage function, b) stainherit is part of pg_statistics' key:\n> \n> Indexes:\n> \"pg_statistic_relid_att_inh_index\" UNIQUE, btree (starelid, staattnum, stainherit)\n> \n> \n> Tomas, I think at the very least extended statistics shouldn't be built\n> when building inherited stats. But going forward I think we should\n> probably have it as part of the key for pg_statistic_ext.\n\nTomas, ping?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jul 2019 13:01:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Tue, Jul 23, 2019 at 01:01:27PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-06-18 17:08:37 -0700, Andres Freund wrote:\n>> On 2019-06-18 18:48:58 -0500, Justin Pryzby wrote:\n>> > Ah: the table is an inheritence parent. If I uninherit its child, there's no\n>> > error during ANALYZE. MV stats on the child are ok:\n>>\n>> It's a \"classical\" inheritance parent, not a builtin-partitioning type\n>> of parent, right? And it contains data?\n>>\n>> I assume it ought to not be too hard to come up with a reproducer\n>> then...\n>>\n>> I think the problem is pretty plainly that for inheritance tables we'll\n>> try to store extended statistics twice.
And thus end up updating the\n>> same row twice.\n>>\n>> > #6 0x0000000000588346 in do_analyze_rel (onerel=0x7fee16140de8, options=2, params=0x7ffe5b6bf8b0, va_cols=0x0, acquirefunc=0x492b4, relpages=36, inh=true, in_outer_xact=false, elevel=13) at analyze.c:627\n>>\n>> \t\t/* Build extended statistics (if there are any). */\n>> \t\tBuildRelationExtStatistics(onerel, totalrows, numrows, rows, attr_cnt,\n>> \t\t\t\t\t\t\t\t vacattrstats);\n>>\n>> Note that for plain statistics we a) pass down the 'inh' flag to the\n>> storage function, b) stainherit is part of pg_statistics' key:\n>>\n>> Indexes:\n>> \"pg_statistic_relid_att_inh_index\" UNIQUE, btree (starelid, staattnum, stainherit)\n>>\n>>\n>> Tomas, I think at the very least extended statistics shouldn't be built\n>> when building inherited stats. But going forward I think we should\n>> probably have it as part of the key for pg_statistic_ext.\n>\n>Tomas, ping?\n>\n\nApologies, I kinda missed this thread until now. The volume of messages\non pgsql-hackers is getting pretty insane ...\n\nAttached is a patch fixing the error by not building extended stats for\nthe inh=true case (as already proposed in this thread). That's about the\nsimplest way to resolve this issue for v12. It should add a simple\nregression test too, I guess.\n\nTo fix this properly we need to add a flag similar to stainherit\nsomewhere. And I've started working on that, thinking that maybe we\ncould do that even for v12 - it'd be yet another catversion bump, but\nthere's already been one since beta2 so maybe it would be OK.\n\nBut it's actually a bit more complicated than just adding a column to\nthe catalog, for two reasons:\n\n1) The optimizer part has to be tweaked to pick the right object, with\nthe flag set to either true/false. Not trivial, but doable.\n\n2) We don't actually have a single catalog - we have two catalogs, one\nfor definition, one for built statistics.
The flag does not seem to be\npart of the definition, and we don't know whether there will be child\nrels added sometime in the future, so presumably we'd store it in the\ndata catalog (pg_statistic_ext_data). Which means the code gets more\ncomplex, because right now it assumes 1:1 relationship between those\ncatalogs.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 28 Jul 2019 12:15:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-07-28 12:15:20 +0200, Tomas Vondra wrote:\n> Attached is a patch fixing the error by not building extended stats for\n> the inh=true case (as already proposed in this thread). That's about the\n> simplest way to resolve this issue for v12. It should add a simple\n> regression test too, I guess.\n\nDoesn't this also apply to v11?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 28 Jul 2019 09:42:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Sun, Jul 28, 2019 at 09:42:44AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-07-28 12:15:20 +0200, Tomas Vondra wrote:\n>> Attached is a patch fixing the error by not building extended stats for\n>> the inh=true case (as already proposed in this thread). That's about the\n>> simplest way to resolve this issue for v12. It should add a simple\n>> regression test too, I guess.\n>\n>Doesn't this also apply to v11?\n>\n\nAFAICS it applies to 10+ versions, because that's where extended stats\nwere introduced.
We certainly can't mess with catalogs there, so this is\nabout the only backpatchable fix I can think of.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 28 Jul 2019 21:21:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "Hi,\n\nOn 2019-07-28 21:21:51 +0200, Tomas Vondra wrote:\n> AFAICS it applies to 10+ versions, because that's where extended stats\n> were introduced. We certainly can't mess with catalogs there, so this is\n> about the only backpatchable fix I can think of.\n\nAFAIU the inh version wouldn't be used anyway, and this has never\nworked. So I think it's imo fine to fix it that way for < master. For\nmaster we should have something better, even if perhaps not immediately.\n\n- Andres\n\n\n", "msg_date": "Sun, 28 Jul 2019 21:53:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Sun, 28 Jul 2019 at 11:15, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Attached is a patch fixing the error by not building extended stats for\n> the inh=true case (as already proposed in this thread). That's about the\n> simplest way to resolve this issue for v12. It should add a simple\n> regression test too, I guess.\n>\n\nSeems like a reasonable thing to do for 10, 11 and possibly also 12\n(actually, as you noted, I think it's the only thing that can be done\nfor 10 and 11).\n\n> To fix this properly we need to add a flag similar to stainherit\n> somewhere.
And I've started working on that, thinking that maybe we\n> could do that even for v12 - it'd be yet another catversion bump, but\n> there's already been one since beta2 so maybe it would be OK.\n>\n\nYeah, I think that makes sense, if it's not too hard. Since v12 is\nwhere the stats definition is split out from the stats data, this\nmight work out quite neatly, since the inh flag would apply only to\nthe stats data.\n\n> But it's actually a bit more complicated than just adding a column to\n> the catalog, for two reasons:\n>\n> 1) The optimizer part has to be tweaked to pick the right object, with\n> the flag set to either true/false. Not trivial, but doable.\n>\n\nIsn't it just a matter of passing the inh flag to\nget_relation_statistics() from get_relation_info(), so then the\noptimiser would get the right kind of stats data, depending on whether\nor not inheritance was requested in the query.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 29 Jul 2019 10:15:36 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Mon, Jul 29, 2019 at 10:15:36AM +0100, Dean Rasheed wrote:\n>On Sun, 28 Jul 2019 at 11:15, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Attached is a patch fixing the error by not building extended stats for\n>> the inh=true case (as already proposed in this thread). That's about the\n>> simplest way to resolve this issue for v12. It should add a simple\n>> regression test too, I guess.\n>>\n>\n>Seems like a reasonable thing to do for 10, 11 and possibly also 12\n>(actually, as you noted, I think it's the only thing that can be done\n>for 10 and 11).\n>\n\nOK, will do.\n\n>> To fix this properly we need to add a flag similar to stainherit\n>> somewhere.
And I've started working on that, thinking that maybe we\n>> could do that even for v12 - it'd be yet another catversion bump, but\n>> there's already been one since beta2 so maybe it would be OK.\n>>\n>\n>Yeah, I think that makes sense, if it's not too hard. Since v12 is\n>where the stats definition is split out from the stats data, this\n>might work out quite neatly, since the inh flag would apply only to\n>the stats data.\n>\n\nAgreed, we need to add the inh flag to the pg_statistic_ext_data\ncatalog. The trouble is this makes the maintenance somewhat more\ncomplicated, because we suddenly don't have 1:1 mapping :-(\n\nBut if we want to address this in master only, I think that's fine.\n\n>> But it's actually a bit more complicated than just adding a column to\n>> the catalog, for two reasons:\n>>\n>> 1) The optimizer part has to be tweaked to pick the right object, with\n>> the flag set to either true/false. Not trivial, but doable.\n>>\n>\n>Isn't it just a matter of passing the inh flag to\n>get_relation_statistics() from get_relation_info(), so then the\n>optimiser would get the right kind of stats data, depending on whether\n>or not inheritance was requested in the query.\n>\n\nYes, you're right. I've only skimmed how the existing code uses the inh\nflag (for regular stats) and it seemed somewhat more complex, but you're\nright for extended stats it'd be much simpler.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 29 Jul 2019 12:17:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Sun, Jul 28, 2019 at 09:53:20PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-07-28 21:21:51 +0200, Tomas Vondra wrote:\n>> AFAICS it applies to 10+ versions, because that's where extended stats\n>> were introduced.
We certainly can't mess with catalogs there, so this is\n>> about the only backpatchable fix I can think of.\n>\n>AFAIU the inh version wouldn't be used anyway, and this has never\n>worked. So I think it's imo fine to fix it that way for < master. For\n>master we should have something better, even if perhaps not immediately.\n>\n\nAgreed. I'll get the simple fix committed soon, and put a TODO on my\nlist for pg13.\n\n\nthanks\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 29 Jul 2019 12:18:33 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" }, { "msg_contents": "On Mon, Jul 29, 2019 at 12:18:33PM +0200, Tomas Vondra wrote:\n>On Sun, Jul 28, 2019 at 09:53:20PM -0700, Andres Freund wrote:\n>>Hi,\n>>\n>>On 2019-07-28 21:21:51 +0200, Tomas Vondra wrote:\n>>>AFAICS it applies to 10+ versions, because that's where extended stats\n>>>were introduced. We certainly can't mess with catalogs there, so this is\n>>>about the only backpatchable fix I can think of.\n>>\n>>AFAIU the inh version wouldn't be used anyway, and this has never\n>>worked. So I think it's imo fine to fix it that way for < master. For\n>>master we should have something better, even if perhaps not immediately.\n>>\n>\n>Agreed. I'll get the simple fix committed soon, and put a TODO on my\n>list for pg13.\n>\n\nI've pushed the simple fix - I've actually simplified it a bit further by\nsimply not calling the BuildRelationExtStatistics() at all when inh=true,\ninstead of passing the flag to BuildRelationExtStatistics() and making the\ndecision there.
The function is part of public API, so this would be an\nABI break (although it's unlikely anyone else is calling the function).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 30 Jul 2019 20:11:34 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ANALYZE: ERROR: tuple already updated by self" } ]
[ { "msg_contents": "Hello,\n\nIf you run a lot of parallel queries that use big parallel hash joins\nsimultaneously, you can run out of DSM slots (for example, when\ntesting many concurrent parallel queries). That's because we allow 64\nslots + 2 * MaxBackends, but allocating seriously large amounts of\ndynamic shared memory requires lots of slots.\n\nOriginally the DSM system was designed to support one segment per\nparallel query, but now we also use one for the session and any number\nfor parallel executor nodes that want space limited by work_mem.\n\nThe number of slots it takes for a given total amount of shared memory\ndepends on the macro DSA_NUM_SEGMENTS_AT_EACH_SIZE. Since DSM slots\nare relatively scarce (we use inefficient algorithms to access them,\nand we think that some operating systems won't like us if we create\ntoo many, so we impose this scarcity on ourselves), each DSA area\nallocates bigger and bigger segments as it goes, starting with 1MB.\nThe approximate number of segments required to allocate various sizes\nincrementally using different values of DSA_NUM_SEGMENTS_AT_EACH_SIZE\ncan be seen in this table:\n\n N = 1 2 3 4\n\n 1MB 1 1 1 1\n 64MB 6 10 13 16\n512MB 9 16 22 28\n 1GB 10 18 25 32\n 8GB 13 24 34 44\n 16GB 14 26 37 48\n 32GB 15 28 40 52\n 1TB 20 38 55 72\n\nIt's currently set to 4, but I now think that was too cautious. It\ntries to avoid fragmentation by ramping up slowly (that is, memory\nallocated and in some cases committed by the operating system that we\ndon't turn out to need), but it's pretty wasteful of slots. 
Perhaps\nit should be set to 2?\n\nPerhaps also the number of slots per backend should be dynamic, so\nthat you have the option to increase it from the current hard-coded\nvalue of 2 if you don't want to increase max_connections but find\nyourself running out of slots (this GUC was a request from Andres but\nthe name was made up by me -- if someone has a better suggestion I'm\nall ears).\n\nAlso, there are some outdated comments near\nPG_DYNSHMEM_SLOTS_PER_BACKEND's definition that we might as well drop\nalong with the macro.\n\nDraft patch attached.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Wed, 19 Jun 2019 13:07:23 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Tweaking DSM and DSA limits" }, { "msg_contents": "On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It's currently set to 4, but I now think that was too cautious. It\n> tries to avoid fragmentation by ramping up slowly (that is, memory\n> allocated and in some cases committed by the operating system that we\n> don't turn out to need), but it's pretty wasteful of slots. Perhaps\n> it should be set to 2?\n\n+1. I think I said at the time that I thought that was too cautious...\n\n> Perhaps also the number of slots per backend should be dynamic, so\n> that you have the option to increase it from the current hard-coded\n> value of 2 if you don't want to increase max_connections but find\n> yourself running out of slots (this GUC was a request from Andres but\n> the name was made up by me -- if someone has a better suggestion I'm\n> all ears).\n\nI am not convinced that we really need to GUC-ify this. How about\njust bumping the value up from 2 to say 5? 
Between the preceding\nchange and this one we ought to buy ourselves more than 4x, and if\nthat is not enough then we can ask whether raising max_connections is\na reasonable workaround, and if that's still not enough then we can\nrevisit this idea, or maybe come up with something better. The\nproblem I have with a GUC here is that nobody without a PhD in\nPostgreSQL-ology will have any clue how to set it, and while that's\ngood for your employment prospects and mine, it's not so great for\nPostgreSQL users generally.\n\nAs Andres observed off-list, it would also be a good idea to allow\nthings that are going to gobble memory like hash joins to have some\ninput into how much memory gets allocated. Maybe preallocating the\nexpected size of the hash is too aggressive -- estimates can be wrong,\nand it could be much smaller. But maybe we should allocate at least,\nsay, 1/64th of that amount, and act as if\nDSA_NUM_SEGMENTS_AT_EACH_SIZE == 1 until the cumulative memory\nallocation is more than 25% of that amount. So if we think it's gonna\nbe 1GB, start by allocating 16MB and double the size of each\nallocation thereafter until we get to at least 256MB allocated. 
So\nthen we'd have 16MB + 32MB + 64MB + 128MB + 256MB + 256MB + 512MB = 7\nsegments instead of the 32 required currently or the 18 required with\nDSA_NUM_SEGMENTS_AT_EACH_SIZE == 2.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 20 Jun 2019 14:20:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tweaking DSM and DSA limits" }, { "msg_contents": "Hi,\n\nOn 2019-06-20 14:20:27 -0400, Robert Haas wrote:\n> On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Perhaps also the number of slots per backend should be dynamic, so\n> > that you have the option to increase it from the current hard-coded\n> > value of 2 if you don't want to increase max_connections but find\n> > yourself running out of slots (this GUC was a request from Andres but\n> > the name was made up by me -- if someone has a better suggestion I'm\n> > all ears).\n> \n> I am not convinced that we really need to GUC-ify this. How about\n> just bumping the value up from 2 to say 5?\n\nI'm not sure either. Although it's not great if the only way out for a\nuser hitting this is to increase max_connections... But we should really\nincrease the default.\n\n\n> As Andres observed off-list, it would also be a good idea to allow\n> things that are going to gobble memory like hash joins to have some\n> input into how much memory gets allocated. Maybe preallocating the\n> expected size of the hash is too aggressive -- estimates can be wrong,\n> and it could be much smaller.\n\nAt least for the case of the hashtable itself, we allocate that at the\npredicted size immediately. So a mis-estimation wouldn't change\nanything. 
For the entries, yea, something like you suggest would make\nsense.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 20 Jun 2019 11:52:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Tweaking DSM and DSA limits" },
{ "msg_contents": "On Thu, Jun 20, 2019 at 02:20:27PM -0400, Robert Haas wrote:\n> On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > It's currently set to 4, but I now think that was too cautious. It\n> > tries to avoid fragmentation by ramping up slowly (that is, memory\n> > allocated and in some cases committed by the operating system that we\n> > don't turn out to need), but it's pretty wasteful of slots. Perhaps\n> > it should be set to 2?\n> \n> +1. I think I said at the time that I thought that was too cautious...\n> \n> > Perhaps also the number of slots per backend should be dynamic, so\n> > that you have the option to increase it from the current hard-coded\n> > value of 2 if you don't want to increase max_connections but find\n> > yourself running out of slots (this GUC was a request from Andres but\n> > the name was made up by me -- if someone has a better suggestion I'm\n> > all ears).\n> \n> I am not convinced that we really need to GUC-ify this. How about\n> just bumping the value up from 2 to say 5? Between the preceding\n> change and this one we ought to buy ourselves more than 4x, and if\n> that is not enough then we can ask whether raising max_connections is\n> a reasonable workaround,\n\nIs there perhaps a way to make raising max_connections not require a\nrestart? 
There are plenty of situations out there where restarts\naren't something that can be done on a whim.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 20 Jun 2019 23:00:34 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Tweaking DSM and DSA limits" }, { "msg_contents": "On Thu, Jun 20, 2019 at 5:00 PM David Fetter <david@fetter.org> wrote:\n> Is there perhaps a way to make raising max_connections not require a\n> restart? There are plenty of situations out there where restarts\n> aren't something that can be done on a whim.\n\nSure, if you want to make this take about 100x more work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2019 09:57:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tweaking DSM and DSA limits" }, { "msg_contents": "On Fri, Jun 21, 2019 at 6:52 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-20 14:20:27 -0400, Robert Haas wrote:\n> > On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Perhaps also the number of slots per backend should be dynamic, so\n> > > that you have the option to increase it from the current hard-coded\n> > > value of 2 if you don't want to increase max_connections but find\n> > > yourself running out of slots (this GUC was a request from Andres but\n> > > the name was made up by me -- if someone has a better suggestion I'm\n> > > all ears).\n> >\n> > I am not convinced that we really need to GUC-ify this. How about\n> > just bumping the value up from 2 to say 5?\n>\n> I'm not sure either. Although it's not great if the only way out for a\n> user hitting this is to increase max_connections... 
But we should really\n> increase the default.\n\nOk, hard-to-explain GUC abandoned. Here is a patch that just adjusts\nthe two constants. DSM's array allows for 5 slots per connection (up\nfrom 2), and DSA doubles its size after every two segments (down from\n4).\n\n> > As Andres observed off-list, it would also be a good idea to allow\n> > things that are going to gobble memory like hash joins to have some\n> > input into how much memory gets allocated. Maybe preallocating the\n> > expected size of the hash is too aggressive -- estimates can be wrong,\n> > and it could be much smaller.\n>\n> At least for the case of the hashtable itself, we allocate that at the\n> predicted size immediately. So a mis-estimation wouldn't change\n> anything. For the entires, yea, something like you suggest would make\n> sense.\n\nAt the moment the 32KB chunks are used as parallel granules for\nvarious work (inserting, repartitioning, rebucketing). I could\ncertainly allocate a much bigger piece based on estimates, and then\ninvent another kind of chunks inside that, or keep the existing\nlayering but find a way to hint to DSA what allocation stream to\nexpect in the future so it can get bigger underlying chunks ready.\nOne problem is that it'd result in large, odd sized memory segments,\nwhereas the current scheme uses power of two sizes and might be more\namenable to a later segment reuse scheme; or maybe that doesn't really\nmatter.\n\nI have a long wish list of improvements I'd like to investigate in\nthis area, subject for future emails, but while I'm making small\ntweaks, here's another small thing: there is no \"wait event\" while\nallocating (in the kernel sense) POSIX shm on Linux, unlike the\nequivalent IO when file-backed segments are filled with write() calls.\nLet's just reuse the same wait event, so that you can see what's going\non in pg_stat_activity.", "msg_date": "Mon, 21 Oct 2019 12:21:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": 
true, "msg_subject": "Re: Tweaking DSM and DSA limits" }, { "msg_contents": "On Mon, Oct 21, 2019 at 12:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jun 21, 2019 at 6:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-06-20 14:20:27 -0400, Robert Haas wrote:\n> > > I am not convinced that we really need to GUC-ify this. How about\n> > > just bumping the value up from 2 to say 5?\n> >\n> > I'm not sure either. Although it's not great if the only way out for a\n> > user hitting this is to increase max_connections... But we should really\n> > increase the default.\n>\n> Ok, hard-to-explain GUC abandoned. Here is a patch that just adjusts\n> the two constants. DSM's array allows for 5 slots per connection (up\n> from 2), and DSA doubles its size after every two segments (down from\n> 4).\n\nPushed. No back-patch for now: the risk/reward ratio doesn't seem\nright for that.\n\n> I have a long wish list of improvements I'd like to investigate in\n> this area, subject for future emails, but while I'm making small\n> tweaks, here's another small thing: there is no \"wait event\" while\n> allocating (in the kernel sense) POSIX shm on Linux, unlike the\n> equivalent IO when file-backed segments are filled with write() calls.\n> Let's just reuse the same wait event, so that you can see what's going\n> on in pg_stat_activity.\n\nAlso pushed.\n\n\n", "msg_date": "Fri, 31 Jan 2020 17:33:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Tweaking DSM and DSA limits" } ]
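The segment-growth arithmetic traded back and forth in this thread can be checked with a short simulation. The sketch below is not dsa.c itself: it assumes a hypothetical 1 MB first segment and a plain "N segments at each size, then double" policy, so its totals only approximate the 32/18-segment figures quoted in the messages above, but it shows the shape of the trade-off.

```python
def segments_needed(target_mb, initial_mb=1, per_size=4):
    """Count segments allocated before the cumulative size reaches
    target_mb, when per_size segments of each size are created before
    the segment size doubles (models DSA_NUM_SEGMENTS_AT_EACH_SIZE).
    The 1 MB starting size is an assumption for illustration only."""
    total, size, count = 0, initial_mb, 0
    while total < target_mb:
        for _ in range(per_size):
            total += size
            count += 1
            if total >= target_mb:
                return count
        size *= 2
    return count

# Reaching ~1 GB with 4 segments per size class vs. 2:
print(segments_needed(1024, per_size=4))  # 33 segments in this model
print(segments_needed(1024, per_size=2))  # 19 segments in this model

# Robert's proposed estimate-driven ramp from the message above:
# start at 16 MB and double each allocation thereafter.
ramp = [16, 32, 64, 128, 256, 256, 512]
print(len(ramp), sum(ramp))  # 7 segments covering 1264 MB
```

In this toy model the per-size-4 policy needs 33 segments to cover 1 GB and the per-size-2 policy needs 19, close to the 32 and 18 quoted in the thread, while the estimate-driven ramp gets there in 7, which is why starting larger and doubling sooner spares so many DSM slots.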
[ { "msg_contents": "Hi all,\n\nWhile looking at this code, I have noticed that a couple of reloptions\nwhich are not toast-specific don't get properly initialized.\ntoast_tuple_target and parallel_workers are the ones standing out.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 19 Jun 2019 13:53:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Some reloptions non-initialized when loaded" }, { "msg_contents": "Hello,\n\nOn Wed, Jun 19, 2019 at 10:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> While looking at this code, I have noticed that a couple of reloptions\n> which are not toast-specific don't get properly initialized.\n> toast_tuple_target and parallel_workers are the ones standing out.\n>\nDo we also need to initialize vacuum_cleanup_index_scale_factor?\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2019 13:21:19 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some reloptions non-initialized when loaded" } ]
[ { "msg_contents": "Hello, hackers.\n\nAny body know how to produce a Soft Block case of Deadlock Detection?\nI have produced the Hard Block case, but can't produce the Soft Block case.\n\n\nI read the design: src/backend/storage/lmgr/README. It reads,\n\n\"If a process A is behind a process B in some lock's wait queue, and\ntheir requested locks conflict, then we must say that A waits for B, since\nProcLockWakeup will never awaken A before B. This creates additional\nedges in the WFG. We call these \"soft\" edges, as opposed to the \"hard\"\nedges induced by locks already held. Note that if B already holds any\nlocks conflicting with A's request, then their relationship is a hard edge\nnot a soft edge.\"\n\n\nBut after trying many testing, I couldn't figure out how to produce a Soft\nBlock.\n\nFollowing is what I did.\n\n* Hard Block Case\n\n** Prepare\n\ncreate table t1 ( id int primary key, test int );\ncreate table t2 ( id int primary key, test int );\n\ninsert into t1 values (10,10);\ninsert into t2 values (20,20);\n\n** test\n\nstep1/backend1:\n begin;\n update t1 set test=11 where id=10;\n\nstep2/backend2:\n begin;\n update t2 set test=21 where id=20;\n\nstep3/backend1:\n update t2 set test=21 where id=20;\n\nstep4/process2: /*deadlock detected*/\n update t1 set test=11 where id=10;\n\n\n\n\n* Soft Block Case\n\n** Prepare\n\ncreate table t1 ( id int primary key, test int );\n\ncreate table t2 ( id int primary key, test int );\n\ncreate table t3 ( id int primary key, test int );\n\ninsert into t1 values (10,10);\ninsert into t2 values (20,20);\ninsert into t3 values (30,30);\n\n** test\n\nstep1/backend1: /*lock t1.row1*/\n begin;\n select * from t1 where id=10 for update;\n\n\nstep2/backend2: /*lock t2.row1*/\n begin;\n select * from t2 where id=20 for no key update;\n\nstep3/backend3: /*lock t2.row1*/\n begin;\n select * from t2 where id=20 for key share;\n\nstep4/backend4:/*lock t3.row1*/\n begin;\n select * from t3 where id=30 for 
update;\n\nstep5/backend4:/*try to lock t1.row1*/\n    update t1 set id=11 where id=10;\n\nstep6/backend3:/*try to lock t3.row1*/\n    update t3 set id=31 where id=30;\n\nstep7/backend5:/*try to lock t2.row1, hope to create a soft edge*/\n    begin;\n    update t2 set id=21 where id=20;\n\nstep8/backend1:/*try to lock t2.row1*/   /*Expect to detect soft block, but\nthere seems no soft block*/\n    update t2 set test=21 where id=20;\n\n", "msg_date": "Wed, 19 Jun 2019 19:18:39 +0800", "msg_from": "Rui Hai Jiang <ruihaij@gmail.com>", "msg_from_op": true, "msg_subject": "How to produce a Soft Block case of Deadlock Detection?" },
{ "msg_contents": "I finally found this.\n\nhttps://www.postgresql.org/message-id/29104.1182785028%40sss.pgh.pa.us\n\nThis is very useful to understand the Soft Block.\n\nOn Wed, Jun 19, 2019 at 7:18 PM Rui Hai Jiang <ruihaij@gmail.com> wrote:\n\n> Hello, hackers.\n>\n> Any body know how to produce a Soft Block case of Deadlock Detection?\n> I have produced the Hard Block case, but can't produce the Soft Block case.\n>\n>\n> I read the design: src/backend/storage/lmgr/README. It reads,\n>\n> \"If a process A is behind a process B in some lock's wait queue, and\n> their requested locks conflict, then we must say that A waits for B, since\n> ProcLockWakeup will never awaken A before B. This creates additional\n> edges in the WFG. We call these \"soft\" edges, as opposed to the \"hard\"\n> edges induced by locks already held. 
Note that if B already holds any\n> locks conflicting with A's request, then their relationship is a hard edge\n> not a soft edge.\"\n>\n>\n> But after trying many testing, I couldn't figure out how to produce a Soft\n> Block.\n>\n> Following is what I did.\n>\n> * Hard Block Case\n>\n> ** Prepare\n>\n> create table t1 ( id int primary key, test int );\n> create table t2 ( id int primary key, test int );\n>\n> insert into t1 values (10,10);\n> insert into t2 values (20,20);\n>\n> ** test\n>\n> step1/backend1:\n> begin;\n> update t1 set test=11 where id=10;\n>\n> step2/backend2:\n> begin;\n> update t2 set test=21 where id=20;\n>\n> step3/backend1:\n> update t2 set test=21 where id=20;\n>\n> step4/process2: /*deadlock detected*/\n> update t1 set test=11 where id=10;\n>\n>\n>\n>\n> * Soft Block Case\n>\n> ** Prepare\n>\n> create table t1 ( id int primary key, test int );\n>\n> create table t2 ( id int primary key, test int );\n>\n> create table t3 ( id int primary key, test int );\n>\n> insert into t1 values (10,10);\n> insert into t2 values (20,20);\n> insert into t3 values (30,30);\n>\n> ** test\n>\n> step1/backend1: /*lock t1.row1*/\n> begin;\n> select * from t1 where id=10 for update;\n>\n>\n> step2/backend2: /*lock t2.row1*/\n> begin;\n> select * from t2 where id=20 for no key update;\n>\n> step3/backend3: /*lock t2.row1*/\n> begin;\n> select * from t2 where id=20 for key share;\n>\n> step4/backend4:/*lock t3.row1*/\n> begin;\n> select * from t3 where id=30 for update;\n>\n> step5/backend4:/*try to lock t1.row1*/\n> update t1 set id=11 where id=10;\n>\n> step6/backend3:/*try to lock t3.row1*/\n> update t3 set id=31 where id=30;\n>\n> step7/backend5:/*try to lock t2.row1, hope to create a soft edge*/\n> begin;\n> update t2 set id=21 where id=20;\n>\n> step8/backend1:/*try to lock t2.row1*/ /*Expect to detect soft block,\n> but there seems no soft block*/\n> update t2 set test=21 where id=20;\n>\n>\n", "msg_date": "Wed, 19 Jun 2019 22:27:03 +0800", "msg_from": "Rui Hai Jiang <ruihaij@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to produce a Soft Block case of Deadlock Detection?" } ]
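The hard/soft edge distinction that the lmgr README describes in this thread can be sketched as a toy wait-for-graph builder for a single lock's wait queue. This is only an illustration, not PostgreSQL's real algorithm (which lives in src/backend/storage/lmgr/deadlock.c and handles queue rearrangement and the full set of lock modes); the two-mode conflict table here is a made-up simplification.

```python
def conflicts(m1, m2):
    """Toy conflict table: exclusive ("X") conflicts with everything,
    shared ("S") locks are compatible with each other."""
    return m1 == "X" or m2 == "X"

def wait_for_edges(holders, queue):
    """Build wait-for-graph edges for one lock.

    holders: dict of proc -> mode already held on the lock.
    queue:   list of (proc, requested mode) in wait-queue order.

    A waiter gets a hard edge to any holder of a conflicting mode, and a
    soft edge to any process queued ahead of it whose *request* conflicts,
    since ProcLockWakeup will never wake the later waiter first.  Per the
    README, a hard edge to the same process takes precedence over a soft one.
    """
    edges = set()
    for i, (a, amode) in enumerate(queue):
        for b, bmode in holders.items():
            if conflicts(amode, bmode):
                edges.add((a, b, "hard"))
        for b, bmode in queue[:i]:
            if conflicts(amode, bmode) and (a, b, "hard") not in edges:
                edges.add((a, b, "soft"))
    return edges

# P1 holds the lock exclusively; P2 (shared) then P3 (exclusive) wait.
edges = wait_for_edges({"P1": "X"}, [("P2", "S"), ("P3", "X")])
print(sorted(edges))
```

In this example P3 waits hard on P1, which already holds the lock in a conflicting mode, but only soft on P2, which is merely queued ahead of it; the deadlock checker can sometimes break a cycle through a soft edge by rearranging the wait queue, which is why the distinction matters.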
[ { "msg_contents": "A Twitter thread today regarding the use of master/slave [1] made me curious\nand so I had a look. It seems that commit a1ef920e27ba6ab3602aaf6d6751d8628\nreplaced most instances but missed at least one which is fixed in the attached.\n\ncheers ./daniel\n\n[1] https://twitter.com/Xof/status/1141040942645776384", "msg_date": "Wed, 19 Jun 2019 14:35:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Remove one last occurrence of \"replication slave\" in comments" },
{ "msg_contents": "On Wed, Jun 19, 2019 at 2:35 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> A Twitter thread today regarding the use of master/slave [1] made me\n> curious\n> and so I had a look. It seems that commit\n> a1ef920e27ba6ab3602aaf6d6751d8628\n> replaced most instances but missed at least one which is fixed in the\n> attached.\n>\n\nApplied, thanks.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 19 Jun 2019 14:39:24 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Remove one last occurrence of \"replication slave\" in comments" },
{ "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> A Twitter thread today regarding the use of master/slave [1] made me curious\n> and so I had a look. 
It seems that commit a1ef920e27ba6ab3602aaf6d6751d8628\n> replaced most instances but missed at least one which is fixed in the attached.\n>\n> cheers ./daniel\n\nThere were some more master/slave references in the plpgsql foreign key\ntests, which the attached chages to base/leaf instead.\n\nI didn't touch the last mention of \"slave\", in the pltcl code, because\nit's calling the Tcl_CreateSlave() API function.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law", "msg_date": "Wed, 19 Jun 2019 18:04:27 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Remove one last occurrence of \"replication slave\" in comments" }, { "msg_contents": "On 2019-06-19 19:04, Dagfinn Ilmari Mannsåker wrote:\n> There were some more master/slave references in the plpgsql foreign key\n> tests, which the attached chages to base/leaf instead.\n\nbase/leaf doesn't sound like a good pair. I committed it with root/leaf\ninstead.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 21 Aug 2019 12:06:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove one last occurrence of \"replication slave\" in comments" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n\n> On 2019-06-19 19:04, Dagfinn Ilmari Mannsåker wrote:\n>> There were some more master/slave references in the plpgsql foreign key\n>> tests, which the attached chages to base/leaf instead.\n>\n> base/leaf doesn't sound like a good pair. I committed it with root/leaf\n> instead.\n\nThanks! 
You're right, that is a better name pair.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Wed, 21 Aug 2019 14:54:25 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Remove one last occurrence of \"replication slave\" in comments" } ]
[ { "msg_contents": "s/hte/the/ fixed in the attached.\n\ncheers ./daniel", "msg_date": "Wed, 19 Jun 2019 14:56:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Typo in tableamapi.c" },
{ "msg_contents": "On Wed, Jun 19, 2019 at 2:57 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> s/hte/the/ fixed in the attached.\n>\n\nMight as well keep being a commit-pipeline for you today :) applied,\nthanks!\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 19 Jun 2019 15:00:00 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Typo in tableamapi.c" } ]
[ { "msg_contents": "Hi hackers!\nThis proposal aims to provide the ability to de-TOAST a fully TOAST'd and\ncompressed field using an iterator and then update the appropriate parts of\nthe code to use the iterator where possible instead of de-TOAST'ing and\nde-compressing the entire value. Examples where this can be helpful include\nusing position() from the beginning of the value, or doing a pattern or\nsubstring match.\n\nde-TOAST iterator overview:\n1. The caller requests the slice of the attribute value from the de-TOAST\niterator.\n2. The de-TOAST iterator checks if there is a slice available in the output\nbuffer, if there is, return the result directly,\n otherwise goto the step3.\n3. The de-TOAST iterator checks if there is the slice available in the\ninput buffer, if there is, goto step44. Otherwise,\n call fetch_datum_iterator to fetch datums from disk to input buffer.\n4. If the data in the input buffer is compressed, extract some data from\nthe input buffer to the output buffer until the caller's\n needs are met.\n\nI've implemented the prototype and apply it to the position() function to\ntest performance.\nTest tables:\n-----------------------------------------------------------------------------------------------------\ncreate table detoast_c (id serial primary key,\na text\n);\ninsert into detoast_c (a) select\nrepeat('1234567890-=abcdefghijklmnopqrstuvwxyz', 1000000)||'321' as a from\ngenerate_series(1,100);\n\ncreate table detoast_u (id serial primary key,\na text\n);\nalter table detoast_u alter a set storage external;\ninsert into detoast_u (a) select\nrepeat('1234567890-=abcdefghijklmnopqrstuvwxyz', 1000000)||'321' as a from\ngenerate_series(1,100);\n**************************************************************************************\n-----------------------------------------------------------------------------------------------------\n query |\n master (ms) | patch (ms) 
|\n-----------------------------------------------------------------------------------------------------\nselect position('123' in a) from detoast_c; | 4054.838 |\n1440.735 |\n-----------------------------------------------------------------------------------------------------\nselect position('321' in a) from detoast_c; | 25549.270 |\n 27696.245 |\n-----------------------------------------------------------------------------------------------------\nselect position('123' in a) from detoast_u; | 8116.996 |\n1386.802 |\n-----------------------------------------------------------------------------------------------------\nselect position('321' in a) from detoast_u | 28442.116 |\n 27672.319 |\n-----------------------------------------------------------------------------------------------------\n**************************************************************************************\nIt can be seen that the iterator greatly improves the efficiency of partial\nde-TOAST when it has almost no degradation in full de-TOAST efficiency.\nNext, I will continue to study how to apply iterators to more queries\nand improve iterator efficiency, such as using macros instead of function\ncalls.\n\nThe patch is also available on github[1].\nAny suggestions or comments would be much appreciated:)\n\nBest regards, Binguo Bao.\n\n[1] https://github.com/djydewang/postgres/pull/1/files", "msg_date": "Wed, 19 Jun 2019 21:51:27 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "[proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Thu, Jun 20, 2019 at 1:51 AM Binguo Bao <djydewang@gmail.com> wrote:\n> Hi hackers!\n> This proposal aims to provide the ability to de-TOAST a fully TOAST'd and compressed field using an iterator and then update the appropriate parts of the code to use the iterator where possible instead of de-TOAST'ing and de-compressing the entire value. 
Examples where this can be helpful include using position() from the beginning of the value, or doing a pattern or substring match.\n>\n> de-TOAST iterator overview:\n> 1. The caller requests the slice of the attribute value from the de-TOAST iterator.\n> 2. The de-TOAST iterator checks if there is a slice available in the output buffer, if there is, return the result directly,\n> otherwise goto the step3.\n> 3. The de-TOAST iterator checks if there is the slice available in the input buffer, if there is, goto step44. Otherwise,\n> call fetch_datum_iterator to fetch datums from disk to input buffer.\n> 4. If the data in the input buffer is compressed, extract some data from the input buffer to the output buffer until the caller's\n> needs are met.\n>\n> I've implemented the prototype and apply it to the position() function to test performance.\n\nHi Binguo,\n\nInteresting work, and nice performance improvements so far. Just by\nthe way, the patch currently generates warnings:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/554345719\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jul 2019 16:21:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Hi Thomas,\nI've fixed the warnings.\n\nThomas Munro <thomas.munro@gmail.com> 于2019年7月5日周五 下午12:21写道:\n\n> On Thu, Jun 20, 2019 at 1:51 AM Binguo Bao <djydewang@gmail.com> wrote:\n> > Hi hackers!\n> > This proposal aims to provide the ability to de-TOAST a fully TOAST'd\n> and compressed field using an iterator and then update the appropriate\n> parts of the code to use the iterator where possible instead of\n> de-TOAST'ing and de-compressing the entire value. Examples where this can\n> be helpful include using position() from the beginning of the value, or\n> doing a pattern or substring match.\n> >\n> > de-TOAST iterator overview:\n> > 1. 
The caller requests the slice of the attribute value from the\n> de-TOAST iterator.\n> > 2. The de-TOAST iterator checks if there is a slice available in the\n> output buffer, if there is, return the result directly,\n> > otherwise goto the step3.\n> > 3. The de-TOAST iterator checks if there is the slice available in the\n> input buffer, if there is, goto step44. Otherwise,\n> > call fetch_datum_iterator to fetch datums from disk to input buffer.\n> > 4. If the data in the input buffer is compressed, extract some data from\n> the input buffer to the output buffer until the caller's\n> > needs are met.\n> >\n> > I've implemented the prototype and apply it to the position() function\n> to test performance.\n>\n> Hi Binguo,\n>\n> Interesting work, and nice performance improvements so far. Just by\n> the way, the patch currently generates warnings:\n>\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/554345719\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>", "msg_date": "Wed, 10 Jul 2019 22:18:24 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "This is the patch that fix warnings.\n\nBest Regards,\nBinguo Bao\n\nBinguo Bao <djydewang@gmail.com> 于2019年7月10日周三 下午10:18写道:\n\n> Hi Thomas,\n> I've fixed the warnings.\n>\n> Thomas Munro <thomas.munro@gmail.com> 于2019年7月5日周五 下午12:21写道:\n>\n>> On Thu, Jun 20, 2019 at 1:51 AM Binguo Bao <djydewang@gmail.com> wrote:\n>> > Hi hackers!\n>> > This proposal aims to provide the ability to de-TOAST a fully TOAST'd\n>> and compressed field using an iterator and then update the appropriate\n>> parts of the code to use the iterator where possible instead of\n>> de-TOAST'ing and de-compressing the entire value. Examples where this can\n>> be helpful include using position() from the beginning of the value, or\n>> doing a pattern or substring match.\n>> >\n>> > de-TOAST iterator overview:\n>> > 1. 
The caller requests the slice of the attribute value from the\n>> de-TOAST iterator.\n>> > 2. The de-TOAST iterator checks if there is a slice available in the\n>> output buffer, if there is, return the result directly,\n>> > otherwise goto the step3.\n>> > 3. The de-TOAST iterator checks if there is the slice available in the\n>> input buffer, if there is, goto step44. Otherwise,\n>> > call fetch_datum_iterator to fetch datums from disk to input buffer.\n>> > 4. If the data in the input buffer is compressed, extract some data\n>> from the input buffer to the output buffer until the caller's\n>> > needs are met.\n>> >\n>> > I've implemented the prototype and apply it to the position() function\n>> to test performance.\n>>\n>> Hi Binguo,\n>>\n>> Interesting work, and nice performance improvements so far. Just by\n>> the way, the patch currently generates warnings:\n>>\n>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/554345719\n>>\n>> --\n>> Thomas Munro\n>> https://enterprisedb.com\n>>\n>", "msg_date": "Thu, 11 Jul 2019 00:39:05 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "I have set the local build configuration to be the same as on the CI. 
This\npatch should be correct.\n\nBest regards,\nBinguo Bao\n\nBinguo Bao <djydewang@gmail.com> 于2019年7月11日周四 上午12:39写道:\n\n> This is the patch that fix warnings.\n>\n> Best Regards,\n> Binguo Bao\n>\n> Binguo Bao <djydewang@gmail.com> 于2019年7月10日周三 下午10:18写道:\n>\n>> Hi Thomas,\n>> I've fixed the warnings.\n>>\n>> Thomas Munro <thomas.munro@gmail.com> 于2019年7月5日周五 下午12:21写道:\n>>\n>>> On Thu, Jun 20, 2019 at 1:51 AM Binguo Bao <djydewang@gmail.com> wrote:\n>>> > Hi hackers!\n>>> > This proposal aims to provide the ability to de-TOAST a fully TOAST'd\n>>> and compressed field using an iterator and then update the appropriate\n>>> parts of the code to use the iterator where possible instead of\n>>> de-TOAST'ing and de-compressing the entire value. Examples where this can\n>>> be helpful include using position() from the beginning of the value, or\n>>> doing a pattern or substring match.\n>>> >\n>>> > de-TOAST iterator overview:\n>>> > 1. The caller requests the slice of the attribute value from the\n>>> de-TOAST iterator.\n>>> > 2. The de-TOAST iterator checks if there is a slice available in the\n>>> output buffer, if there is, return the result directly,\n>>> > otherwise goto the step3.\n>>> > 3. The de-TOAST iterator checks if there is the slice available in the\n>>> input buffer, if there is, goto step44. Otherwise,\n>>> > call fetch_datum_iterator to fetch datums from disk to input\n>>> buffer.\n>>> > 4. If the data in the input buffer is compressed, extract some data\n>>> from the input buffer to the output buffer until the caller's\n>>> > needs are met.\n>>> >\n>>> > I've implemented the prototype and apply it to the position() function\n>>> to test performance.\n>>>\n>>> Hi Binguo,\n>>>\n>>> Interesting work, and nice performance improvements so far. 
Just by\n>>> the way, the patch currently generates warnings:\n>>>\n>>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/554345719\n>>>\n>>> --\n>>> Thomas Munro\n>>> https://enterprisedb.com\n>>>\n>>", "msg_date": "Thu, 11 Jul 2019 17:23:24 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Wed, Jun 19, 2019 at 8:51 PM Binguo Bao <djydewang@gmail.com> wrote:\n> [v4 patch]\n\nHi Binguo,\n\nI can verify I get no warnings with the v4 patch. I've done some\nadditional performance testing. First, to sum up your results:\n\n> insert into detoast_c (a) select repeat('1234567890-=abcdefghijklmnopqrstuvwxyz', 1000000)||'321' as a from generate_series(1,100);\n\nWhen the search pattern was at the beginning, the patch was several\ntimes faster , and when the pattern was at the end, it was 3% slower\nwhen uncompressed and 9% slower when compressed.\n\nFirst, I'd like to advocate for caution when using synthetic\nbenchmarks involving compression. Consider this test:\n\ninsert into detoast_c (a)\nselect\n 'abc'||\n repeat(\n (SELECT string_agg(md5(chr(i)), '')\n FROM generate_series(1,127) i)\n , 10000)\n ||'xyz'\nfrom generate_series(1,100);\n\nThe results for the uncompressed case were not much different then\nyour test. However, in the compressed case the iterator doesn't buy us\nmuch with beginning searches since full decompression is already fast:\n\n master patch\ncomp. beg. 869ms 837ms\ncomp. end 14100ms 16100ms\nuncomp. beg. 6360ms 800ms\nuncomp. end 21100ms 21400ms\n\nand with compression it's 14% slower searching to the end. This is\npretty contrived, but I include it for demonstration.\n\nTo test something hopefully a bit more realistic, I loaded 100 records\neach containing the 1995 CIA fact book (~3MB of ascii) with a pattern\nstring put at the beginning and end. 
For the end search, I used a\nlonger needle to speed up the consumption of text, hoping to put more\nstress on the detoasting algorithms, for example:\n\nselect max(position('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'\nin a)) from detoast_*;\n\ncomp. beg. 836ms 22ms\ncomp. end 1510ms 1700ms\nuncomp. beg. 185ms 12ms\nuncomp. end 851ms 903ms\n\nHere, the \"beginning\" case is ~15-35x faster, which is very impressive\nand much faster than with your generated contents. The \"end\" case is\nup to 13% slower. It would be interesting to see where the break-even\npoint is, where the results are the same.\n\nReading the thread where you're working on optimizing partial\ndecompression [1], it seems you have two separate solutions for the\ntwo problems. Maybe this is fine, but I'd like to bring up the\npossibility of using the same approach for both kinds of callers.\n\nI'm not an expert on TOAST, but maybe one way to solve both problems\nis to work at the level of whole TOAST chunks. In that case, the\ncurrent patch would look like this:\n\n1. The caller requests more of the attribute value from the de-TOAST iterator.\n2. The iterator gets the next chunk and either copies or decompresses\nthe whole chunk into the buffer. (If inline, just decompress the whole\nthing)\n\nThis seems simpler and also easy to adapt to callers that do know how\nbig a slice they want. I also suspect this way would be easier to\nadapt to future TOAST formats not tied to heap or to a certain\ncompression algorithm. With less bookkeepping overhead, maybe there'll\nbe less worst-case performance degradation, while not giving up much\nin the best case. (Note also that commit 9556aa01c6 already introduced\nsome performance degradation in near-end searches, when using\nmultibyte strings. This patch would add to that.) 
The regression\ndoesn't seem large, but I see more than your test showed, and it would\nbe nice to avoid it.\n\nThoughts, anyone?\n\n[1] https://www.postgresql.org/message-id/flat/CAL-OGkux7%2BBm_J%3Dt5VpH7fJGGSm%2BPxWJtgs1%2BWU2g6cmLru%3D%3DA%40mail.gmail.com#705d074aa4ae305ed3d992b7e5b7af3c\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 15 Jul 2019 18:20:14 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Hi, John\n\nFirst, I'd like to advocate for caution when using synthetic\n> benchmarks involving compression. Consider this test:\n> insert into detoast_c (a)\n> select\n> 'abc'||\n> repeat(\n> (SELECT string_agg(md5(chr(i)), '')\n> FROM generate_series(1,127) i)\n> , 10000)\n> ||'xyz'\n> from generate_series(1,100);\n> The results for the uncompressed case were not much different then\n> your test. However, in the compressed case the iterator doesn't buy us\n> much with beginning searches since full decompression is already fast:\n> master patch\n> comp. beg. 869ms 837ms\n> comp. end 14100ms 16100ms\n> uncomp. beg. 6360ms 800ms\n> uncomp. end 21100ms 21400ms\n> and with compression it's 14% slower searching to the end. This is\n> pretty contrived, but I include it for demonstration.\n\n\nI've reproduced the test case with test scripts in the attachment on my\nlaptop:\n\n master patch\ncomp. beg. 2686.77 ms 1532.79 ms\ncomp. end 17971.8 ms 21206.3 ms\nuncomp. beg. 8358.79 ms 1556.93 ms\nuncomp. end 23559.7 ms 22547.1 ms\n\nIn the compressed beginning case, the test result is different from yours\nsince the patch is ~1.75x faster\nrather than no improvement. 
The interesting thing is that the patch is 4%\nfaster than master in the uncompressed end case.\nI can't figure out the reason now.\n\nReading the thread where you're working on optimizing partial\n> decompression [1], it seems you have two separate solutions for the\n> two problems. Maybe this is fine, but I'd like to bring up the\n> possibility of using the same approach for both kinds of callers.\n\n\n\n> I'm not an expert on TOAST, but maybe one way to solve both problems\n> is to work at the level of whole TOAST chunks. In that case, the\n> current patch would look like this:\n> 1. The caller requests more of the attribute value from the de-TOAST\n> iterator.\n> 2. The iterator gets the next chunk and either copies or decompresses\n> the whole chunk into the buffer. (If inline, just decompress the whole\n> thing)\n\n\nThanks for your suggestion. It is indeed possible to implement\nPG_DETOAST_DATUM_SLICE using the de-TOAST iterator.\nIMO the iterator is more suitable for situations where the caller doesn't\nknow the slice size. If the caller knows the slice size,\nit is reasonable to fetch enough chunks at once and then decompress them at\nonce.\n --\nBest regards,\nBinguo Bao", "msg_date": "Tue, 16 Jul 2019 22:14:41 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Tue, Jul 16, 2019 at 9:14 PM Binguo Bao <djydewang@gmail.com> wrote:\n> In the compressed beginning case, the test result is different from yours since the patch is ~1.75x faster\n> rather than no improvement. The interesting thing is that the patch is 4% faster than master in the uncompressed end case.\n> I can't figure out the reason now.\n\nProbably some differences in our test environments. 
I wouldn't worry\nabout it too much, since we can show improvement in more realistic\ntests.\n\n>> I'm not an expert on TOAST, but maybe one way to solve both problems\n>> is to work at the level of whole TOAST chunks. In that case, the\n>> current patch would look like this:\n>> 1. The caller requests more of the attribute value from the de-TOAST iterator.\n>> 2. The iterator gets the next chunk and either copies or decompresses\n>> the whole chunk into the buffer. (If inline, just decompress the whole\n>> thing)\n>\n>\n> Thanks for your suggestion. It is indeed possible to implement PG_DETOAST_DATUM_SLICE using the de-TOAST iterator.\n> IMO the iterator is more suitable for situations where the caller doesn't know the slice size. If the caller knows the slice size,\n> it is reasonable to fetch enough chunks at once and then decompress it at once.\n\nThat sounds reasonable for the reason of less overhead.\n\nIn the case where we don't know the slice size, how about the other\naspect of my question above: Might it be simpler and less overhead to\ndecompress entire chunks at a time? If so, I think it would be\nenlightening to compare performance.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 18 Jul 2019 10:39:48 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Hi John!\nSorry for the late reply. It took me some time to fix a random bug.\n\nIn the case where we don't know the slice size, how about the other\n> aspect of my question above: Might it be simpler and less overhead to\n> decompress entire chunks at a time? If so, I think it would be\n> enlightening to compare performance.\n\n\nGood idea. I've tested your propopal with scripts and patch v5 in the\nattachment:\n\n master patch v4 patch v5\ncomp. beg. 4364ms 1505ms 1529ms\ncomp. 
end 28321ms 31202ms 26916ms\nuncomp. beg. 3474ms 1513ms 1523ms\nuncomp. end 27416ms 30260ms 25888ms\n\nThe proposal improves suffix query performance greatly\nwith less calls to the decompression function.\n\nBesides, do you have any other suggestions for the structure of\nDetoastIterator or ToastBuffer?\nMaybe they can be designed to be more reasonable.\n\nThanks again for the proposal.\n-- \nBest regards,\nBinguo Bao", "msg_date": "Thu, 25 Jul 2019 23:20:50 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Thu, Jul 25, 2019 at 10:21 PM Binguo Bao <djydewang@gmail.com> wrote:\n>\n> Hi John!\n> Sorry for the late reply. It took me some time to fix a random bug.\n\nDon't worry, it's not late at all! :-)\n\n>> In the case where we don't know the slice size, how about the other\n>> aspect of my question above: Might it be simpler and less overhead to\n>> decompress entire chunks at a time? If so, I think it would be\n>> enlightening to compare performance.\n>\n>\n> Good idea. I've tested your propopal with scripts and patch v5 in the attachment:\n>\n> master patch v4 patch v5\n> comp. beg. 4364ms 1505ms 1529ms\n> comp. end 28321ms 31202ms 26916ms\n> uncomp. beg. 3474ms 1513ms 1523ms\n> uncomp. end 27416ms 30260ms 25888ms\n>\n> The proposal improves suffix query performance greatly\n> with less calls to the decompression function.\n\nLooks good. I repeated my CIA fact book test and found no difference\nwith compression, but found that suffix search in the uncompressed\ncase had less regression (~5%) than v4 (>8%). Let's pursue this\nfurther.\n\n> Besides, do you have any other suggestions for the structure of DetoastIterator or ToastBuffer?\n\nMy goal for this stage of review was to understand more fully what the\ncode is doing, and make it as simple and clear as possible, starting\nat the top level. 
In doing so, it looks like I found some additional\nperformance gains. I haven't looked much yet at the TOAST fetching\nlogic.\n\n\n1). For every needle comparison, text_position_next_internal()\ncalculates how much of the value is needed and passes that to\ndetoast_iterate(), which then calculates if it has to do something or\nnot. This is a bit hard to follow. There might also be a performance\npenalty -- the following is just a theory, but it sounds plausible:\nThe CPU can probably correctly predict that detoast_iterate() will\nusually return the same value it did last time, but it still has to\ncall the function and make sure, which I imagine is more expensive\nthan advancing the needle. Ideally, we want to call the iterator only\nif we have to.\n\nIn the attached patch (applies on top of your v5),\ntext_position_next_internal() simply compares hptr to the detoast\nbuffer limit, and calls detoast_iterate() until it can proceed. I\nthink this is clearer. (I'm not sure of the error handling, see #2.)\nIn this scheme, the only reason to know length is to pass to\npglz_decompress_iterate() in the case of in-line compression. As I\nalluded to in my first review, I don't think it's worth the complexity\nto handle that iteratively since the value is only a few kB. I made it\nso in-line datums are fully decompressed as in HEAD and removed struct\nmembers to match. I also noticed that no one updates or looks at\n\"toast_iter.done\" so I removed that as well.\n\nNow pglz_decompress_iterate() doesn't need length at all. For testing\nI just set decompress_all = true and let the compiler optimize away\nthe rest. I left finishing it for you if you agree with these changes.\n\nWith this additional patch, the penalty for suffix search in my CIA\nfact book test is only ~2% in the compressed case, and might even be\nslightly faster than HEAD in the uncompressed case.\n\n\n2). 
detoast_iterate() and fetch_datum_iterate() return a value but we\ndon't check it or do anything with it. Should we do something with it?\nIt's also not yet clear if we should check the iterator state instead\nof return values. I've added some XXX comments as a reminder. We\nshould also check the return value of pglz_decompress_iterate().\n\n\n3). Speaking of pglz_decompress_iterate(), I diff'd it with\npglz_decompress(), and I have some questions on it:\n\na).\n+ srcend = (const unsigned char *) (source->limit == source->capacity\n? source->limit : (source->limit - 4));\n\nWhat does the 4 here mean in this expression? Is it possible it's\ncompensating for this bit in init_toast_buffer()?\n\n+ buf->limit = VARDATA(buf->buf);\n\nIt seems the initial limit should also depend on whether the datum is\ncompressed, right? Can we just do this:\n\n+ buf->limit = buf->position;\n\nb).\n- while (sp < srcend && dp < destend)\n...\n+ while (sp + 1 < srcend && dp < destend &&\n...\n\nWhy is it here \"sp + 1\"?\n\n\n4. Note that varlena.c has a static state variable, and a cleanup\nfunction that currently does:\n\nstatic void\ntext_position_cleanup(TextPositionState *state)\n{\n/* no cleanup needed */\n}\n\nIt seems to be the detoast iterator could be embedded in this state\nvariable, and then free-ing can happen here. That has a possible\nadvantage that the iterator struct would be on the same cache line as\nthe state data. That would also remove the need to pass \"iter\" as a\nparameter, since these functions already pass \"state\". I'm not sure if\nthis would be good for other users of the iterator, so maybe we can\nhold off on that for now.\n\n5. Would it be a good idea to add tests (not always practical), or\nmore Assert()'s? You probably already know this, but as a reminder\nit's good to develop with asserts enabled, but never build with them\nfor performance testing.\n\nI think that's enough for now. If you have any questions or\ncounter-arguments, let me know. 
I've set the commitfest entry to\nwaiting on author.\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 29 Jul 2019 10:48:52 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> 于2019年7月29日周一 上午11:49写道:\n\n> On Thu, Jul 25, 2019 at 10:21 PM Binguo Bao <djydewang@gmail.com> wrote:\n> My goal for this stage of review was to understand more fully what the\n> code is doing, and make it as simple and clear as possible, starting\n> at the top level. In doing so, it looks like I found some additional\n> performance gains. I haven't looked much yet at the TOAST fetching\n> logic.\n>\n>\n> 1). For every needle comparison, text_position_next_internal()\n> calculates how much of the value is needed and passes that to\n> detoast_iterate(), which then calculates if it has to do something or\n> not. This is a bit hard to follow. There might also be a performance\n> penalty -- the following is just a theory, but it sounds plausible:\n> The CPU can probably correctly predict that detoast_iterate() will\n> usually return the same value it did last time, but it still has to\n> call the function and make sure, which I imagine is more expensive\n> than advancing the needle. Ideally, we want to call the iterator only\n> if we have to.\n>\n> In the attached patch (applies on top of your v5),\n> text_position_next_internal() simply compares hptr to the detoast\n> buffer limit, and calls detoast_iterate() until it can proceed. I\n> think this is clearer.\n\n\nYes, I think this is a general scenario where the caller continually\ncalls detoast_iterate until it gets enough data, so I think such operations can\nbe extracted as a macro, as I did in patch v6. 
In the macro, the\ndetoast_iterate\nfunction is called only when the data requested by the caller is greater\nthan the\nbuffer limit.\n\n(I'm not sure of the error handling, see #2.)\n> In this scheme, the only reason to know length is to pass to\n> pglz_decompress_iterate() in the case of in-line compression. As I\n> alluded to in my first review, I don't think it's worth the complexity\n> to handle that iteratively since the value is only a few kB. I made it\n> so in-line datums are fully decompressed as in HEAD and removed struct\n> members to match.\n\n\nSounds good. This not only simplifies the structure and logic of Detoast\nIterator\nbut also has no major impact on efficiency.\n\n\n> I also noticed that no one updates or looks at\n> \"toast_iter.done\" so I removed that as well.\n>\n\ntoast_iter.done is updated when the buffer limit reached the buffer\ncapacity now.\nSo, I added it back.\n\n\n> Now pglz_decompress_iterate() doesn't need length at all. For testing\n> I just set decompress_all = true and let the compiler optimize away\n> the rest. I left finishing it for you if you agree with these changes.\n>\n\nDone.\n\n\n> 2). detoast_iterate() and fetch_datum_iterate() return a value but we\n> don't check it or do anything with it. Should we do something with it?\n> It's also not yet clear if we should check the iterator state instead\n> of return values. I've added some XXX comments as a reminder. We\n> should also check the return value of pglz_decompress_iterate().\n>\n\nIMO, we need to provide users with a simple iterative interface.\nUsing the required data pointer to compare with the buffer limit is an easy\nway.\nAnd the application scenarios of the iterator are mostly read operations.\nSo I think there is no need to return a value, and the iterator needs to\nthrow an\nexception for some wrong calls, such as all the data have been iterated,\nbut the user still calls the iterator.\n\n\n>\n> 3). 
Speaking of pglz_decompress_iterate(), I diff'd it with\n> pglz_decompress(), and I have some questions on it:\n>\n> a).\n> + srcend = (const unsigned char *) (source->limit == source->capacity\n> ? source->limit : (source->limit - 4));\n>\n> What does the 4 here mean in this expression?\n\n\nSince we fetch chunks one by one, if we make srcend equal to the source\nbuffer limit, then in the while loop \"while (sp < srcend && dp < destend)\", sp may exceed the\nsource buffer limit and\nread unallocated bytes. Giving a four-byte margin can prevent sp from\nexceeding the source buffer limit.\nIf we have read all the chunks, we don't need to be careful about crossing the\nboundary, and can\njust make srcend equal to the source buffer limit. I've added comments to\nexplain it in patch v6.\n\n\n\n> Is it possible it's\n> compensating for this bit in init_toast_buffer()?\n>\n> + buf->limit = VARDATA(buf->buf);\n>\n> It seems the initial limit should also depend on whether the datum is\n> compressed, right? Can we just do this:\n>\n> + buf->limit = buf->position;\n>\n\nI'm afraid not. buf->position points to the data portion of the buffer, but\nthe beginning of\nthe chunks we read may contain header information. For example, for\ncompressed data chunks,\nthe first four bytes record the size of the raw data, which means that limit is\nfour bytes ahead of position.\nThis initialization doesn't cause errors, although position is less\nthan limit in the other cases,\nbecause we always fetch chunks first and then decompress them.\n\n\n> b).\n> - while (sp < srcend && dp < destend)\n> ...\n> + while (sp + 1 < srcend && dp < destend &&\n> ...\n>\n> Why is \"sp + 1\" used here?\n>\n\nIgnore it; I set the inactive state of detoast_iter->ctrl to 8 in patch v6\nto\nachieve the purpose of parsing ctrl correctly every time.\n\n\n>\n> 4. 
Note that varlena.c has a static state variable, and a cleanup\n> function that currently does:\n>\n> static void\n> text_position_cleanup(TextPositionState *state)\n> {\n> /* no cleanup needed */\n> }\n>\n> It seems to be the detoast iterator could be embedded in this state\n> variable, and then free-ing can happen here. That has a possible\n> advantage that the iterator struct would be on the same cache line as\n> the state data. That would also remove the need to pass \"iter\" as a\n> parameter, since these functions already pass \"state\". I'm not sure if\n> this would be good for other users of the iterator, so maybe we can\n> hold off on that for now.\n>\n\nGood idea. I've implemented it in patch v6.\n\n\n> 5. Would it be a good idea to add tests (not always practical), or\n> more Assert()'s? You probably already know this, but as a reminder\n> it's good to develop with asserts enabled, but never build with them\n> for performance testing.\n>\n\nI've added more Assert()'s to check iterator state.\n\n\n>\n> I think that's enough for now. If you have any questions or\n> counter-arguments, let me know. I've set the commitfest entry to\n> waiting on author.\n>\n>\n> --\n> John Naylor https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nBTW, I found that iterators come in handy for json/jsonb's find field value\nor get array elements operations.\nI will continue to optimize the json/jsonb query based on the detoast\niterator patch.\n\n-- \nBest regards,\nBinguo Bao", "msg_date": "Tue, 30 Jul 2019 21:20:20 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Tue, Jul 30, 2019 at 8:20 PM Binguo Bao <djydewang@gmail.com> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> 于2019年7月29日周一 上午11:49写道:\n>>\n>> 1). 
For every needle comparison, text_position_next_internal()\n>> calculates how much of the value is needed and passes that to\n>> detoast_iterate(), which then calculates if it has to do something or\n>> not. This is a bit hard to follow. There might also be a performance\n>> penalty -- the following is just a theory, but it sounds plausible:\n>> The CPU can probably correctly predict that detoast_iterate() will\n>> usually return the same value it did last time, but it still has to\n>> call the function and make sure, which I imagine is more expensive\n>> than advancing the needle. Ideally, we want to call the iterator only\n>> if we have to.\n>>\n>> In the attached patch (applies on top of your v5),\n>> text_position_next_internal() simply compares hptr to the detoast\n>> buffer limit, and calls detoast_iterate() until it can proceed. I\n>> think this is clearer.\n>\n>\n> Yes, I think this is a general scenario where the caller continually\n> calls detoast_iterate until gets enough data, so I think such operations can\n> be extracted as a macro, as I did in patch v6. In the macro, the detoast_iterate\n> function is called only when the data requested by the caller is greater than the\n> buffer limit.\n\nI like the use of a macro here. However, I think we can find a better\nlocation for the definition. See the header comment of fmgr.h:\n\"Definitions for the Postgres function manager and function-call\ninterface.\" Maybe tuptoaster.h is as good a place as any?\n\n>> I also noticed that no one updates or looks at\n>> \"toast_iter.done\" so I removed that as well.\n>\n>\n> toast_iter.done is updated when the buffer limit reached the buffer capacity now.\n> So, I added it back.\n\nOkay.\n\n>> 2). detoast_iterate() and fetch_datum_iterate() return a value but we\n>> don't check it or do anything with it. Should we do something with it?\n>> It's also not yet clear if we should check the iterator state instead\n>> of return values. 
I've added some XXX comments as a reminder. We\n>> should also check the return value of pglz_decompress_iterate().\n>\n>\n> IMO, we need to provide users with a simple iterative interface.\n> Using the required data pointer to compare with the buffer limit is an easy way.\n> And the application scenarios of the iterator are mostly read operations.\n> So I think there is no need to return a value, and the iterator needs to throw an\n> exception for some wrong calls, such as all the data have been iterated,\n> but the user still calls the iterator.\n\nOkay, and see these functions now return void. The orignal\npglz_decompress() returned a value that was check against corruption.\nIs there a similar check we can do for the iterative version?\n\n>> 3). Speaking of pglz_decompress_iterate(), I diff'd it with\n>> pglz_decompress(), and I have some questions on it:\n>>\n>> a).\n>> + srcend = (const unsigned char *) (source->limit == source->capacity\n>> ? source->limit : (source->limit - 4));\n>>\n>> What does the 4 here mean in this expression?\n>\n>\n> Since we fetch chunks one by one, if we make srcend equals to the source buffer limit,\n> In the while loop \"while (sp < srcend && dp < destend)\", sp may exceed the source buffer limit and read unallocated bytes.\n\nWhy is this? That tells me the limit is incorrect. Can the setter not\ndetermine the right value?\n\n> Giving a four-byte buffer can prevent sp from exceeding the source buffer limit.\n\nWhy 4? That's a magic number. Why not 2, or 27?\n\n> If we have read all the chunks, we don't need to be careful to cross the border,\n> just make srcend equal to source buffer limit. I've added comments to explain it in patch v6.\n\nThat's a good thing to comment on, but it doesn't explain why. 
This\nlogic seems like a band-aid and I think a committer would want this to\nbe handled in a more principled way.\n\n>> Is it possible it's\n>> compensating for this bit in init_toast_buffer()?\n>>\n>> + buf->limit = VARDATA(buf->buf);\n>>\n>> It seems the initial limit should also depend on whether the datum is\n>> compressed, right? Can we just do this:\n>>\n>> + buf->limit = buf->position;\n>\n>\n> I'm afraid not. buf->position points to the data portion of the buffer, but the beginning of\n> the chunks we read may contain header information. For example, for compressed data chunks,\n> the first four bytes record the size of raw data, this means that limit is four bytes ahead of position.\n> This initialization doesn't cause errors, although the position is less than the limit in other cases.\n> Because we always fetch chunks first, then decompress it.\n\nI see what you mean now. This could use a comment or two to explain\nthe stated constraints may not actually be satisfied at\ninitialization.\n\n>> b).\n>> - while (sp < srcend && dp < destend)\n>> ...\n>> + while (sp + 1 < srcend && dp < destend &&\n>> ...\n>>\n>> Why is it here \"sp + 1\"?\n>\n>\n> Ignore it, I set the inactive state of detoast_iter->ctrl to 8 in patch v6 to\n> achieve the purpose of parsing ctrl correctly every time.\n\nPlease explain further. Was the \"sp + 1\" correct behavior (and why),\nor only for debugging setting ctrl/c correctly? Also, I don't think\nthe new logic for the ctrl/c variables is an improvement:\n\n1. iter->ctrlc is intialized with '8' (even in the uncompressed case,\nwhich is confusing). Any time you initialize with something not 0 or\n1, it's a magic number, and here it's far from where the loop variable\nis used. This is harder to read.\n\n2. First time though the loop, iter->ctrlc = 8, which immediately gets\nset back to 0.\n\n3. At the end of the loop, iter->ctrl/c are unconditionally set. 
In\nv5, there was a condition which would usually avoid this copying of\nvalues through pointers.\n\n>> 4. Note that varlena.c has a static state variable, and a cleanup\n>> function that currently does:\n>>\n>> static void\n>> text_position_cleanup(TextPositionState *state)\n>> {\n>> /* no cleanup needed */\n>> }\n>>\n>> It seems to be the detoast iterator could be embedded in this state\n>> variable, and then free-ing can happen here. That has a possible\n>> advantage that the iterator struct would be on the same cache line as\n>> the state data. That would also remove the need to pass \"iter\" as a\n>> parameter, since these functions already pass \"state\". I'm not sure if\n>> this would be good for other users of the iterator, so maybe we can\n>> hold off on that for now.\n>\n>\n> Good idea. I've implemented it in patch v6.\n\nThat's better, and I think we can take it a little bit farther.\n\n1. Notice that TextPositionState is allocated on the stack in\ntext_position(), which passes both the \"state\" pointer and the \"iter\"\npointer to text_position_setup(), and only then sets state->iter =\niter. We can easily set this inside text_position(). That would get\nrid of the need for other callers to pass NULL iter to\ntext_position_setup().\n\n2. DetoastIteratorData is fixed size, so I see no reason to allocate\nit on the heap. We could allocate it on the stack in text_pos(), and\npass the pointer to create_detoast_iterator() (in this case maybe a\nbetter name is init_detoast_iterator), which would return a bool to\ntell text_pos() whether to pass down the pointer or a NULL. The\nallocation of other structs (toast buffer and fetch iterator) probably\ncan't be changed without more work.\n\n>> 5. Would it be a good idea to add tests (not always practical), or\n>> more Assert()'s? 
You probably already know this, but as a reminder\n>> it's good to develop with asserts enabled, but never build with them\n>> for performance testing.\n>\n>\n> I've added more Assert()'s to check iterator state.\n\nOkay.\n\n> BTW, I found that iterators come in handy for json/jsonb's find field value or get array elements operations.\n> I will continue to optimize the json/jsonb query based on the detoast iterator patch.\n\nThat will be an interesting use case.\n\nThere are other aspects of the patch I should investigate, but I'll\nput that off for another time. Commitfest is over, but note that\nreview can happen at any time. I'll continue to do so as time permits.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 2 Aug 2019 14:12:20 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> 于2019年8月2日周五 下午3:12写道:\n\n> On Tue, Jul 30, 2019 at 8:20 PM Binguo Bao <djydewang@gmail.com> wrote:\n> >\n> > John Naylor <john.naylor@2ndquadrant.com> 于2019年7月29日周一 上午11:49写道:\n> >>\n> >> 1). For every needle comparison, text_position_next_internal()\n> >> calculates how much of the value is needed and passes that to\n> >> detoast_iterate(), which then calculates if it has to do something or\n> >> not. This is a bit hard to follow. There might also be a performance\n> >> penalty -- the following is just a theory, but it sounds plausible:\n> >> The CPU can probably correctly predict that detoast_iterate() will\n> >> usually return the same value it did last time, but it still has to\n> >> call the function and make sure, which I imagine is more expensive\n> >> than advancing the needle. 
Ideally, we want to call the iterator only
> >> if we have to.
> >>
> >> In the attached patch (applies on top of your v5),
> >> text_position_next_internal() simply compares hptr to the detoast
> >> buffer limit, and calls detoast_iterate() until it can proceed. I
> >> think this is clearer.
> >
> >
> > Yes, I think this is a general scenario where the caller continually
> > calls detoast_iterate until it gets enough data, so I think such operations
> can
> > be extracted as a macro, as I did in patch v6. In the macro, the
> detoast_iterate
> > function is called only when the data requested by the caller is greater
> than the
> > buffer limit.
>
> I like the use of a macro here. However, I think we can find a better
> location for the definition. See the header comment of fmgr.h:
> "Definitions for the Postgres function manager and function-call
> interface." Maybe tuptoaster.h is as good a place as any?
>

PG_DETOAST_ITERATE isn't a simple function-call interface,
but I notice that PG_FREE_IF_COPY is also defined in fmgr.h, whose logic is
similar to PG_DETOAST_ITERATE: it makes a condition check first and then
decides whether to call the function. Besides, PG_DETOAST_DATUM,
PG_DETOAST_DATUM_COPY, PG_DETOAST_DATUM_SLICE, and
PG_DETOAST_DATUM_PACKED are all defined in fmgr.h, so it is reasonable
to put all the de-TOAST interfaces together.

>> 2). detoast_iterate() and fetch_datum_iterate() return a value but we
> >> don't check it or do anything with it. Should we do something with it?
> >> It's also not yet clear if we should check the iterator state instead
> >> of return values. I've added some XXX comments as a reminder. 
We
>> should also check the return value of pglz_decompress_iterate().
> >
> >
> > IMO, we need to provide users with a simple iterative interface.
> > Using the required data pointer to compare with the buffer limit is an
> easy way.
> > And the application scenarios of the iterator are mostly read operations.
> > So I think there is no need to return a value, and the iterator needs to
> throw an
> > exception for some wrong calls, such as when all the data has been iterated
> > but the user still calls the iterator.
>
> Okay, and see these functions now return void. The original
> pglz_decompress() returned a value that was checked against corruption.
> Is there a similar check we can do for the iterative version?
>

As far as I know, we can only do such a check after all compressed data is
decompressed.
If we are slicing, we can't do the check.


>
> >> 3). Speaking of pglz_decompress_iterate(), I diff'd it with
> >> pglz_decompress(), and I have some questions on it:
> >>
> >> a).
> >> + srcend = (const unsigned char *) (source->limit == source->capacity
> >> ? source->limit : (source->limit - 4));
> >>
> >> What does the 4 here mean in this expression?
> >
> >
> > Since we fetch chunks one by one, if we make srcend equal to the source
> buffer limit,
> > then in the while loop "while (sp < srcend && dp < destend)", sp may exceed
> the source buffer limit and read unallocated bytes.
>
> Why is this? That tells me the limit is incorrect. Can the setter not
> determine the right value?
>

There are three statements that change the value of `sp` in the while loop
`while (sp < srcend && dp < destend)`:
`ctrl = *sp++;`
`off = ((sp[0] & 0xf0) << 4) | sp[1]; sp += 2;`
`len += *sp++;`
Although we make sure `sp` is less than `srcend` when entering the while
loop, `sp` may still go beyond `srcend` inside the loop, so we should
ensure that `sp` is always smaller than `buf->limit` to avoid
reading unallocated data. 
So, `srcend` can't be initialized to
`buf->limit`. Only one case is exceptional: once we've fetched all data
chunks and 'buf->limit' reaches 'buf->capacity', it's impossible to read
unallocated data via `sp`.

> Giving a four-byte buffer can prevent sp from exceeding the source buffer
> limit.
>
> Why 4? That's a magic number. Why not 2, or 27?
>

As I explained above, `sp` may go beyond `srcend` in the loop, up to
`srcend + 2`.
In theory, it's ok to set the buffer size to be greater than or equal to 2.


> > If we have read all the chunks, we don't need to be careful to cross the
> border,
> > just make srcend equal to the source buffer limit. I've added comments to
> explain it in patch v6.
>
> That's a good thing to comment on, but it doesn't explain why.


Yes, the current comment is puzzling. I'll improve it.


> This
> logic seems like a band-aid and I think a committer would want this to
> be handled in a more principled way.
>

I don't want to change the pglz_decompress logic too much; the iterator
should pay more attention to saving and restoring the original
pglz_decompress state.


> >> Is it possible it's
> >> compensating for this bit in init_toast_buffer()?
> >>
> >> + buf->limit = VARDATA(buf->buf);
> >>
> >> It seems the initial limit should also depend on whether the datum is
> >> compressed, right? Can we just do this:
> >>
> >> + buf->limit = buf->position;
> >
> >
> > I'm afraid not. buf->position points to the data portion of the buffer,
> but the beginning of
> > the chunks we read may contain header information. For example, for
> compressed data chunks,
> > the first four bytes record the size of raw data, which means that limit
> is four bytes ahead of position.
> > This initialization doesn't cause errors, although the position is less
> than the limit in other cases.
> > Because we always fetch chunks first, then decompress them.
>
> I see what you mean now. 
This could use a comment or two to explain that
> the stated constraints may not actually be satisfied at
> initialization.
>

Done.


> >> b).
> >> - while (sp < srcend && dp < destend)
> >> ...
> >> + while (sp + 1 < srcend && dp < destend &&
> >> ...
> >>
> >> Why is it here "sp + 1"?
> >
> >
> > Ignore it, I set the inactive state of detoast_iter->ctrl to 8 in patch
> v6 to
> > achieve the purpose of parsing ctrl correctly every time.
>
> Please explain further. Was the "sp + 1" correct behavior (and why),
> or only for debugging setting ctrl/c correctly?


In patch v5, if the condition is `sp < srcend`, suppose `sp = srcend - 1`
before entering the loop `while (sp < srcend && dp < destend)`. After
entering the loop and reading a control byte (`sp` now equals `srcend`),
the program can't enter the loop
`for (; ctrlc < 8 && sp < srcend && dp < destend; ctrlc++)`, so it sets
`iter->ctrlc` to 0, exits the first loop, and the iteration is over. At the
next iteration, the control byte will be reread since `iter->ctrlc` equals
0, even though the previous control byte was never used. Changing the
condition to `sp + 1 < srcend` avoids an iteration in which only a control
byte is read before the iteration ends.


> Also, I don't think
> the new logic for the ctrl/c variables is an improvement:
>
> 1. iter->ctrlc is initialized with '8' (even in the uncompressed case,
> which is confusing). Any time you initialize with something not 0 or
> 1, it's a magic number, and here it's far from where the loop variable
> is used. This is harder to read.
>

`iter->ctrlc` is used to record the value of `ctrlc` in pglz_decompress at
the end of the last iteration (or loop). In pglz_decompress, `ctrlc`'s
valid values are 0~7. When `ctrlc` reaches 8, a control byte is read from
the source buffer into `ctrl` and `ctrlc` is set to 0. And a control byte
must be read from the source buffer into `ctrl` on the first iteration. 
So `iter->ctrlc` should be
initialized with '8'.


> 2. First time through the loop, iter->ctrlc = 8, which immediately gets
> set back to 0.
>

As I explained above, `iter->ctrlc = 8` makes a control byte be read
from the source buffer into `ctrl` on the first iteration. Besides,
`iter->ctrlc = 8` indicates that the valid value of `ctrlc` at the end of
the last iteration was not recorded. Obviously, there are no other
iterations before the first iteration.


> 3. At the end of the loop, iter->ctrl/c are unconditionally set. In
> v5, there was a condition which would usually avoid this copying of
> values through pointers.
>

Patch v6 just records the value of `ctrlc` at the end of each iteration (or
loop), whether it is valid (0~7) or 8, and initializes `ctrlc` on the next
iteration (or loop) correctly. I think it is more concise in patch v6.


>
> >> 4. Note that varlena.c has a static state variable, and a cleanup
> >> function that currently does:
> >>
> >> static void
> >> text_position_cleanup(TextPositionState *state)
> >> {
> >> /* no cleanup needed */
> >> }
> >>
> >> It seems the detoast iterator could be embedded in this state
> >> variable, and then freeing can happen here. That has a possible
> >> advantage that the iterator struct would be on the same cache line as
> >> the state data. That would also remove the need to pass "iter" as a
> >> parameter, since these functions already pass "state". I'm not sure if
> >> this would be good for other users of the iterator, so maybe we can
> >> hold off on that for now.
> >
> >
> > Good idea. I've implemented it in patch v6.
>
> That's better, and I think we can take it a little bit farther.
>
> 1. Notice that TextPositionState is allocated on the stack in
> text_position(), which passes both the "state" pointer and the "iter"
> pointer to text_position_setup(), and only then sets state->iter =
> iter. 
We can easily set this inside text_position(). That would get\n> rid of the need for other callers to pass NULL iter to\n> text_position_setup().\n>\n\nDone.\n\n\n> 2. DetoastIteratorData is fixed size, so I see no reason to allocate\n> it on the heap. We could allocate it on the stack in text_pos(), and\n> pass the pointer to create_detoast_iterator() (in this case maybe a\n> better name is init_detoast_iterator), which would return a bool to\n> tell text_pos() whether to pass down the pointer or a NULL. The\n> allocation of other structs (toast buffer and fetch iterator) probably\n> can't be changed without more work.\n>\n\nDone\n\nIf there is anything else that is not explained clearly, please point it\nout.\n\n-- \nBest regards,\nBinguo Bao", "msg_date": "Sun, 4 Aug 2019 00:11:21 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Sat, Aug 3, 2019 at 11:11 PM Binguo Bao <djydewang@gmail.com> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> 于2019年8月2日周五 下午3:12写道:\n>>\n>> I like the use of a macro here. However, I think we can find a better\n>> location for the definition. See the header comment of fmgr.h:\n>> \"Definitions for the Postgres function manager and function-call\n>> interface.\" Maybe tuptoaster.h is as good a place as any?\n>\n> PG_DETOAST_ITERATE isn't a sample function-call interface,\n> But I notice that PG_FREE_IF_COPY is also defined in fmgr.h, whose logic is\n> similar to PG_DETOAST_ITERATE, make condition check first and then\n> decide whether to call the function. Besides, PG_DETOAST_DATUM,\n> PG_DETOAST_DATUM_COPY, PG_DETOAST_DATUM_SLICE,\n> PG_DETOAST_DATUM_PACKED are all defined in fmgr.h, it is reasonable\n> to put all the de-TOAST interface together.\n\nHmm, it's strange that those macros ended up there, but now I see why\nit makes sense to add new ones there also.\n\n>> Okay, and see these functions now return void. 
The original
>> pglz_decompress() returned a value that was checked against corruption.
>> Is there a similar check we can do for the iterative version?
>
> As far as I know, we can only do such a check after all compressed data is decompressed.
> If we are slicing, we can't do the check.

Okay.

>> >> 3). Speaking of pglz_decompress_iterate(), I diff'd it with
>> >> pglz_decompress(), and I have some questions on it:
>> >>
>> >> a).
>> >> + srcend = (const unsigned char *) (source->limit == source->capacity
>> >> ? source->limit : (source->limit - 4));
>> >>
>> >> What does the 4 here mean in this expression?
>> >
>> > Since we fetch chunks one by one, if we make srcend equal to the source buffer limit,
>> > then in the while loop "while (sp < srcend && dp < destend)", sp may exceed the source buffer limit and read unallocated bytes.
>>
>> Why is this? That tells me the limit is incorrect. Can the setter not
>> determine the right value?
>
> There are three statements that change the value of `sp` in the while loop `while (sp < srcend && dp < destend)`:
> `ctrl = *sp++;`
> `off = ((sp[0] & 0xf0) << 4) | sp[1]; sp += 2;`
> `len += *sp++;`
> Although we make sure `sp` is less than `srcend` when entering the while loop, `sp` may still
> go beyond `srcend` inside the loop, and we should ensure that `sp` is always smaller than `buf->limit` to avoid
> reading unallocated data. So, `srcend` can't be initialized to `buf->limit`. Only one case is exceptional:
> once we've fetched all data chunks and 'buf->limit' reaches 'buf->capacity', it's impossible to read unallocated
> data via `sp`.

Thank you for the detailed explanation and the comment.

>> Please explain further. 
Was the \"sp + 1\" correct behavior (and why),\n>> or only for debugging setting ctrl/c correctly?\n>\n> In patch v5, If the condition is `sp < srcend`, suppose `sp = srcend - 1` before\n> entering the loop `while (sp < srcend && dp < destend)`, when entering the loop\n> and read a control byte(sp equals to `srcend` now), the program can't enter the\n> loop `for (; ctrlc < 8 && sp < srcend && dp < destend; ctrlc++)`, then set `iter->ctrlc` to 0,\n> exit the first loop and then this iteration is over. At the next iteration,\n> the control byte will be reread since `iter->ctrlc` equals to 0, but the previous control byte\n> is not used. Changing the condition to `sp + 1 < srcend` avoid only one control byte is read\n> then the iterator is over.\n\nOkay, that's quite subtle. I agree the v6/7 way is more clear in this regard.\n\n>> Also, I don't think\n>> the new logic for the ctrl/c variables is an improvement:\n>>\n>> 1. iter->ctrlc is intialized with '8' (even in the uncompressed case,\n>> which is confusing). Any time you initialize with something not 0 or\n>> 1, it's a magic number, and here it's far from where the loop variable\n>> is used. This is harder to read.\n>\n> `iter->ctrlc` is used to record the value of `ctrl` in pglz_decompress at the end of\n> the last iteration(or loop). In the pglz_decompress, `ctrlc`’s valid value is 0~7,\n> When `ctrlc` reaches 8, a control byte is read from the source\n> buffer to `ctrl` then set `ctrlc` to 0. And a control bytes should be read from the\n> source buffer to `ctrlc` on the first iteration. So `iter->ctrlc` should be intialized with '8'.\n\nMy point here is it looks strange out of context, but \"0\" looked\nnormal. Maybe a comment in init_detoast_buffer(), something like \"8\nmeans read a control byte from the source buffer on the first\niteration, see pg_lzdecompress_iterate()\".\n\nOr, possibly, we could have a macro like INVALID_CTRLC. That might\neven improve the readability of the original function. 
This is just an
idea, and maybe others would disagree, so you don't need to change it
for now.

>> 3. At the end of the loop, iter->ctrl/c are unconditionally set. In
>> v5, there was a condition which would usually avoid this copying of
>> values through pointers.
>
> Patch v6 just records the value of `ctrlc` at the end of each iteration (or loop),
> whether it is valid (0~7) or 8, and initializes `ctrlc` on the next iteration (or loop) correctly.
> I think it is more concise in patch v6.

And, in the case mentioned above where we enter the while loop with
sp = srcend - 1, we can read the control byte and still store the
correct value for ctrlc.

>>> [varlena.c api]
>> That's better, and I think we can take it a little bit farther.
>>
>> 1. Notice that TextPositionState is allocated on the stack in
>> text_position(), which passes both the "state" pointer and the "iter"
>> pointer to text_position_setup(), and only then sets state->iter =
>> iter. We can easily set this inside text_position(). That would get
>> rid of the need for other callers to pass NULL iter to
>> text_position_setup().
>
>
> Done.

That looks much cleaner, thanks.

I've repeated my performance test to make sure there's no additional
regression in my suffix tests:

 master patch v7
comp. end 1560ms 1600ms
uncomp. end 896ms 890ms

The regression from master in the compressed case is about 2.5%, which
is no different from the last patch I tested, so that's good.

At this point, there are no functional things that I think we need to
change. It's close to ready-for-committer. For the next version, I'd
like you to go through the comments and edit for grammar, spelling, and
clarity as you see fit. I know you're not a native speaker of English,
so I can help you with anything that remains. 
Also note we use braces\non their own lines\n{\n like this\n}\n\nWe do have a source formatting tool (pgindent), but it helps\nreadability for committers to have it mostly standard beforehand.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 14 Aug 2019 12:00:25 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Hi John,\n\n> >> Also, I don't think\n> >> the new logic for the ctrl/c variables is an improvement:\n> >>\n> >> 1. iter->ctrlc is intialized with '8' (even in the uncompressed case,\n> >> which is confusing). Any time you initialize with something not 0 or\n> >> 1, it's a magic number, and here it's far from where the loop variable\n> >> is used. This is harder to read.\n> >\n> > `iter->ctrlc` is used to record the value of `ctrl` in pglz_decompress\n> at the end of\n> > the last iteration(or loop). In the pglz_decompress, `ctrlc`’s valid\n> value is 0~7,\n> > When `ctrlc` reaches 8, a control byte is read from the source\n> > buffer to `ctrl` then set `ctrlc` to 0. And a control bytes should be\n> read from the\n> > source buffer to `ctrlc` on the first iteration. So `iter->ctrlc` should\n> be intialized with '8'.\n>\n> My point here is it looks strange out of context, but \"0\" looked\n> normal. Maybe a comment in init_detoast_buffer(), something like \"8\n> means read a control byte from the source buffer on the first\n> iteration, see pg_lzdecompress_iterate()\".\n>\n> Or, possibly, we could have a macro like INVALID_CTRLC. That might\n> even improve the readability of the original function. This is just an\n> idea, and maybe others would disagree, so you don't need to change it\n> for now.\n>\n\nAll in all, the idea is much better than a magic number 8. 
So, I've\nimplemented it.\n\n\n> At this point, there are no functional things that I think we need to\n> change. It's close to ready-for-committer. For the next version, I'd\n> like you go through the comments and edit for grammar, spelling, and\n> clarity as you see fit. I know you're not a native speaker of English,\n> so I can help you with anything that remains.\n\n\nI've tried my best to improve the comments, but there should be room for\nfurther improvement\nI hope you can help me perfect it.\n\n\n> Also note we use braces\n> on their own lines\n> {\n> like this\n> }\n>\n> Done.\n-- \nBest regards,\nBinguo Bao", "msg_date": "Sat, 17 Aug 2019 15:32:32 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Fri, Aug 16, 2019 at 10:48 PM Binguo Bao <djydewang@gmail.com> wrote:\n> [v8 patch with cosmetic changes]\n\nOkay, looks good. I'll make a few style suggestions and corrections.\nIn the course of looking at this again, I have a few other questions\nbelow as well.\n\nIt looks like you already do this for the most part, but I'll mention\nthat we try to keep lines, including comments, less than 80 characters\nlong. pgindent can try to fix that, but the results don't always look\nnice.\n\nAbout variable names: The iterator pointers are variously called\n\"iter\", \"iterator\", and \"fetch_iter\". I found this confusing the first\ntime I read this code. 
I think we should use "iter" if we have only
one kind in the function, and "detoast_iter" and "fetch_iter" if we
have both kinds.
--

init_detoast_iterator():

+ * The "iterator" variable is normally just a local variable in the caller.

I don't think this comment is helpful to understand this function or its use.

+ * It only make sense to initialize de-TOAST iterator for external
on-disk value.

s/make/makes/
"a" de-TOAST iterator
s/value/values/

The comments in this function that start with "This is a ..." could be
shortened like this:

/* indirect pointer -- dereference it */

While looking at this again, I noticed we no longer need to test for
the in-line compressed case at all. I also tried some other cosmetic
rearrangements. Let me know what you think about the attached patch.
Also, I wonder if the VARATT_IS_EXTERNAL_INDIRECT case should come
first. Then the two normal cases are next to each other.


free_detoast_iterator(), free_fetch_datum_iterator(), and free_toast_buffer():

These functions should return void.

+ * Free the memory space occupied by the de-TOAST iterator include buffers and
+ * fetch datum iterator.

Perhaps "Free memory used by the de-TOAST iterator, including buffers
and fetch datum iterator."

The check

if (iter->buf != iter->fetch_datum_iterator->buf)

is what we need to do for the compressed case. 
Could we use this
directly instead of having a separate state variable iter->compressed,
with a macro like this?

#define TOAST_ITER_COMPRESSED(iter) \
 (iter->buf != iter->fetch_datum_iterator->buf)

Or maybe that's too clever?


detoast_iterate():

+ * As long as there is another data chunk in compression or external storage,

We no longer use the iterator with in-line compressed values.

+ * de-TOAST it into toast buffer in iterator.

Maybe "into the iterator's toast buffer"


fetch_datum_iterate():

My remarks for detoast_iterate() also apply here.


init_toast_buffer():

+ * Note the constrain buf->position <= buf->limit may be broken
+ * at initialization. Make sure that the constrain is satisfied
+ * when consume chars.

s/constrain/constraint/ (2 times)
s/consume/consuming/

Also, this comment might be better at the top of the whole function?


pglz_decompress_iterate():

+ * Decompresses source into dest until the source is exhausted.

This comment is from the original function, but I think it would be
better to highlight the differences from the original, something like:

"This function is based on pglz_decompress(), with these additional
requirements:

1. We need to save the current control byte and byte position for the
caller's next iteration.

2. In pglz_decompress(), we can assume we have all the source bytes
available. This is not the case when we decompress one chunk at a
time, so we have to make sure that we only read bytes available in the
current chunk."

(I'm not sure about the term 'byte position', maybe there's a better one.)

+ * In the while loop, sp may go beyond the srcend, provides a four-byte
+ * buffer to prevent sp from reading unallocated bytes from source buffer.
+ * When source->limit reaches source->capacity, don't worry about reading
+ * unallocated bytes.

Here's my suggestion:

"In the while loop, sp may be incremented such that it points beyond
srcend. 
To guard against reading beyond the end of the current chunk,\nwe set srcend such that we exit the loop when we are within four bytes\nof the end of the current chunk. When source->limit reaches\nsource->capacity, we are decompressing the last chunk, so we can (and\nneed to) read every byte.\"\n\n+ for (; ctrlc < 8 && sp < srcend && dp < destend; ctrlc++)\n\nNote you can also replace 8 with INVALID_CTRLC here.\n\ntuptoaster.h:\n+ * Constrains that need to be satisfied:\n\ns/constrains/constraints/\n\n+ * If \"ctrlc\" field in iterator is equal to INVALID_CTRLC, it means that\n+ * the field is invalid and need to read the control byte from the\n+ * source buffer in the next iteration, see pglz_decompress_iterate().\n+ */\n+#define INVALID_CTRLC 8\n\nI think the macro might be better placed in pg_lzcompress.h, and for\nconsistency used in pglz_decompress(). Then the comment can be shorter\nand more general. With my additional comment in\ninit_detoast_iterator(), hopefully it will be clear to readers.\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 19 Aug 2019 11:55:37 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> 于2019年8月19日周一 下午12:55写道:\n\n> init_toast_buffer():\n>\n> + * Note the constrain buf->position <= buf->limit may be broken\n> + * at initialization. Make sure that the constrain is satisfied\n> + * when consume chars.\n>\n> s/constrain/constraint/ (2 times)\n> s/consume/consuming/\n>\n> Also, this comment might be better at the top the whole function?\n>\n\nThe constraint is broken in the if branch, so I think put this comment in\nthe branch\nis more precise.\n\nThe check\n> if (iter->buf != iter->fetch_datum_iterator->buf)\n> is what we need to do for the compressed case. 
Could we use this\n> directly instead of having a separate state variable iter->compressed,\n> with a macro like this?\n> #define TOAST_ITER_COMPRESSED(iter) \\\n> (iter->buf != iter->fetch_datum_iterator->buf)\n\n\n The logic of the macro may be hard to understand, so I think it's ok to\njust check the compressed state variable.\n\n+ * If \"ctrlc\" field in iterator is equal to INVALID_CTRLC, it means that\n> + * the field is invalid and need to read the control byte from the\n> + * source buffer in the next iteration, see pglz_decompress_iterate().\n> + */\n> +#define INVALID_CTRLC 8\n>\n> I think the macro might be better placed in pg_lzcompress.h, and for\n> consistency used in pglz_decompress(). Then the comment can be shorter\n> and more general. With my additional comment in\n> init_detoast_iterator(), hopefully it will be clear to readers.\n>\n\nThe main role of this macro is to explain the iterator's \"ctrlc\" state, IMO\nit's reasonable to put\nthe macro and definition of de-TOAST iterator together.\n\nThanks for your suggestion, I have updated the patch.\n-- \nBest regards,\nBinguo Bao", "msg_date": "Thu, 22 Aug 2019 01:10:43 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Thu, Aug 22, 2019 at 12:10 AM Binguo Bao <djydewang@gmail.com> wrote:\n> [v9 patch]\n\nThanks, looks good. 
I'm setting it to ready for committer.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 22 Aug 2019 10:02:01 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "> +static void\n> +init_toast_buffer(ToastBuffer *buf, int32 size, bool compressed)\n> +{\n> +\tbuf->buf = (const char *) palloc0(size);\n\nThis API is weird -- you always palloc the ToastBuffer first, then call\ninit_toast_bufer on it. Why not palloc the ToastBuffer struct in\ninit_toast_buffer and return it from there instead? This is\nparticularly strange since the ToastBuffer itself is freed by the \"free\"\nroutine ... so it's not like we're thinking that caller can take\nownership of the struct by embedding it in a larger struct.\n\nAlso, this function needs a comment on top explaining what it does and\nwhat the params are.\n\nWhy do we need ToastBuffer->buf_size? Seems unused.\n\n> +\tif (iter == NULL)\n> +\t{\n> +\t\treturn;\n> +\t}\n\nPlease, no braces around single-statement blocks. (Many places).\n\n> +/*\n> + * If \"ctrlc\" field in iterator is equal to INVALID_CTRLC, it means that\n> + * the field is invalid and need to read the control byte from the\n> + * source buffer in the next iteration, see pglz_decompress_iterate().\n> + */\n> +#define INVALID_CTRLC 8\n\nWhat does CTRLC stand for? Also: this comment should explain why the\nvalue 8 is what it is.\n\n> +\t\t\t\t/*\n> +\t\t\t\t * Now we copy the bytes specified by the tag from OUTPUT to\n> +\t\t\t\t * OUTPUT. 
It is dangerous and platform dependent to use\n> +\t\t\t\t * memcpy() here, because the copied areas could overlap\n> +\t\t\t\t * extremely!\n> +\t\t\t\t */\n> +\t\t\t\tlen = Min(len, destend - dp);\n> +\t\t\t\twhile (len--)\n> +\t\t\t\t{\n> +\t\t\t\t\t*dp = dp[-off];\n> +\t\t\t\t\tdp++;\n> +\t\t\t\t}\n\nSo why not use memmove?\n\n> +\t\t\t\t/*\n> +\t\t\t\t * Otherwise it contains the match length minus 3 and the\n> +\t\t\t\t * upper 4 bits of the offset. The next following byte\n> +\t\t\t\t * contains the lower 8 bits of the offset. If the length is\n> +\t\t\t\t * coded as 18, another extension tag byte tells how much\n> +\t\t\t\t * longer the match really was (0-255).\n> +\t\t\t\t */\n> +\t\t\t\tint32\t\tlen;\n> +\t\t\t\tint32\t\toff;\n> +\n> +\t\t\t\tlen = (sp[0] & 0x0f) + 3;\n> +\t\t\t\toff = ((sp[0] & 0xf0) << 4) | sp[1];\n> +\t\t\t\tsp += 2;\n> +\t\t\t\tif (len == 18)\n> +\t\t\t\t\tlen += *sp++;\n\nStarting this para with \"Otherwise\" makes no sense, since there's no\nprevious opposite case. Please reword. However, I don't recognize this\ncode from anywhere, and it seems to have a lot of magical numbers. Is\nthis code completely new?\n\n\nDidn't much like FetchDatumIteratorData SnapshotToast struct member\nname. 
How about just \"snapshot\"?\n\n> +#define PG_DETOAST_ITERATE(iter, need)\t\t\t\t\t\t\t\t\t\\\n> +\tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t\tAssert(need >= iter->buf->buf && need <= iter->buf->capacity);\t\\\n> +\t\twhile (!iter->done && need >= iter->buf->limit) { \t\t\t\t\\\n> +\t\t\tdetoast_iterate(iter);\t\t\t\t\t\t\t\t\t\t\\\n> +\t\t}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t} while (0)\n\nThis needs parens around each \"iter\" and \"need\" in the macro definition.\nAlso, please add a comment documenting what the arguments are, since\nit's not immediately obvious.\n\n> +void free_detoast_iterator(DetoastIterator iter)\n> +{\n> +\tif (iter == NULL)\n> +\t{\n> +\t\treturn;\n> +\t}\n\nIf this function is going to do this, why do callers need to check for\nNULL also? Seems pointless. I'd rather make callers simpler and keep\nonly the NULL-check inside the function, since this is not perf-critical\nanyway.\n\n> +extern void detoast_iterate(DetoastIterator detoast_iter)\n> +{\n\nPlease, no \"extern\" in function definitions, only in prototypes in the\n.h files. Also, we indent the function name at the start of line, with\nthe return type appearing on its own in the previous line.\n\n> +\tif (!VARATT_IS_EXTERNAL_ONDISK(attr))\n> +\t\telog(ERROR, \"create_fetch_datum_itearator shouldn't be called for non-ondisk datums\");\n\nTypo for \"iterator\".\n\n> +\t\titer->fetch_datum_iterator = create_fetch_datum_iterator(attr);\n> +\t\tVARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);\n> +\t\tif (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer))\n> +\t\t{\n> [...]\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\titer->compressed = false;\n> +\n> +\t\t\t/* point the buffer directly at the raw data */\n> +\t\t\titer->buf = iter->fetch_datum_iterator->buf;\n> +\t\t}\n\nThis arrangement where there are two ToastBuffers and they sometimes are\nthe same is cute, but I think we need a better way to know when each\nneeds to be freed afterwards; the proposed coding is confusing. 
And it\ncertainly needs more than zero comments about what's going on there.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Sep 2019 16:12:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Also: this patch no longer applies. Please rebase.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Sep 2019 10:52:53 -0400", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote on Wed, Sep 4, 2019 at 4:12 AM:\n\n> > +static void\n> > +init_toast_buffer(ToastBuffer *buf, int32 size, bool compressed)\n> > +{\n> > + buf->buf = (const char *) palloc0(size);\n>\n> This API is weird -- you always palloc the ToastBuffer first, then call\n> init_toast_bufer on it. Why not palloc the ToastBuffer struct in\n> init_toast_buffer and return it from there instead? This is\n> particularly strange since the ToastBuffer itself is freed by the \"free\"\n> routine ... so it's not like we're thinking that caller can take\n> ownership of the struct by embedding it in a larger struct.\n\n\nI agree with you. I also changed \"init_detoast_iterator\" to\n\"create_detoast_iterator\"\nso the caller doesn't need to manage the memory allocation of the iterator.\n\n\n> Also, this function needs a comment on top explaining what it does and\n> what the params are.\n>\n\nDone.\n\n\n> Why do we need ToastBuffer->buf_size? Seems unused.\n>\n> > + if (iter == NULL)\n> > + {\n> > + return;\n> > + }\n>\n\nRemoved.\n\n\n> Please, no braces around single-statement blocks. 
(Many places).\n>\n\nDone.\n\n\n> > +/*\n> > + * If \"ctrlc\" field in iterator is equal to INVALID_CTRLC, it means that\n> > + * the field is invalid and need to read the control byte from the\n> > + * source buffer in the next iteration, see pglz_decompress_iterate().\n> > + */\n> > +#define INVALID_CTRLC 8\n>\n> What does CTRLC stand for? Also: this comment should explain why the\n> value 8 is what it is.\n>\n\nI've improved the comment.\n\n\n>\n> > + /*\n> > + * Now we copy the bytes specified by the\n> tag from OUTPUT to\n> > + * OUTPUT. It is dangerous and platform\n> dependent to use\n> > + * memcpy() here, because the copied areas\n> could overlap\n> > + * extremely!\n> > + */\n> > + len = Min(len, destend - dp);\n> > + while (len--)\n> > + {\n> > + *dp = dp[-off];\n> > + dp++;\n> > + }\n>\n> So why not use memmove?\n>\n> > + /*\n> > + * Otherwise it contains the match length\n> minus 3 and the\n> > + * upper 4 bits of the offset. The next\n> following byte\n> > + * contains the lower 8 bits of the\n> offset. If the length is\n> > + * coded as 18, another extension tag byte\n> tells how much\n> > + * longer the match really was (0-255).\n> > + */\n> > + int32 len;\n> > + int32 off;\n> > +\n> > + len = (sp[0] & 0x0f) + 3;\n> > + off = ((sp[0] & 0xf0) << 4) | sp[1];\n> > + sp += 2;\n> > + if (len == 18)\n> > + len += *sp++;\n>\n> Starting this para with \"Otherwise\" makes no sense, since there's no\n> previous opposite case. Please reword. However, I don't recognize this\n> code from anywhere, and it seems to have a lot of magical numbers. Is\n> this code completely new?\n>\n\nThis function is based on pglz_decompress() in src/common/pg_lzcompress.c\nand I've\nmentioned that in the function's comment at the beginning.\n\n\n> Didn't much like FetchDatumIteratorData SnapshotToast struct member\n> name. 
How about just \"snapshot\"?\n>\n\nDone.\n\n> +#define PG_DETOAST_ITERATE(iter, need)\n> \\\n> > + do {\n> \\\n> > + Assert(need >= iter->buf->buf && need <=\n> iter->buf->capacity); \\\n> > + while (!iter->done && need >= iter->buf->limit) {\n> \\\n> > + detoast_iterate(iter);\n> \\\n> > + }\n> \\\n> > + } while (0)\n>\n> This needs parens around each \"iter\" and \"need\" in the macro definition.\n> Also, please add a comment documenting what the arguments are, since\n> it's not immediately obvious.\n>\n\nParens makes the macro more reliable. Done.\n\n> +void free_detoast_iterator(DetoastIterator iter)\n> > +{\n> > + if (iter == NULL)\n> > + {\n> > + return;\n> > + }\n>\n> If this function is going to do this, why do callers need to check for\n> NULL also? Seems pointless. I'd rather make callers simpler and keep\n> only the NULL-check inside the function, since this is not perf-critical\n> anyway.\n>\n\nGood catch. Done.\n\n > + iter->fetch_datum_iterator =\ncreate_fetch_datum_iterator(attr);\n\n> > + VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);\n> > + if (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer))\n> > + {\n> > [...]\n> > + }\n> > + else\n> > + {\n> > + iter->compressed = false;\n> > +\n> > + /* point the buffer directly at the raw data */\n> > + iter->buf = iter->fetch_datum_iterator->buf;\n> > + }\n>\n> This arrangement where there are two ToastBuffers and they sometimes are\n> the same is cute, but I think we need a better way to know when each\n> needs to be freed afterwards;\n>\n\nWe only need to check the \"compressed\" field in the iterator to figure out\nwhich buffer should be freed.\n\n-- \nBest regards,\nBinguo Bao", "msg_date": "Tue, 10 Sep 2019 21:33:51 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On 2019-Sep-10, Binguo Bao wrote:\n\n> +/*\n> + * Support for de-TOASTing toasted value iteratively. 
\"need\" is a pointer\n> + * between the beginning and end of iterator's ToastBuffer. The marco\n> + * de-TOAST all bytes before \"need\" into iterator's ToastBuffer.\n> + */\n> +#define PG_DETOAST_ITERATE(iter, need)\t\t\t\t\t\t\t\t\t\t\t\\\n> +\tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t\tAssert((need) >= (iter)->buf->buf && (need) <= (iter)->buf->capacity);\t\\\n> +\t\twhile (!(iter)->done && (need) >= (iter)->buf->limit) { \t\t\t\t\\\n> +\t\t\tdetoast_iterate(iter);\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t\t}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t} while (0)\n> /* WARNING -- unaligned pointer */\n> #define PG_DETOAST_DATUM_PACKED(datum) \\\n> \tpg_detoast_datum_packed((struct varlena *) DatumGetPointer(datum))\n\nIn broad terms this patch looks pretty good to me. I only have a small\nquibble with this API definition in fmgr.h -- namely that it forces us\nto export the definition of all the structs (that could otherwise be\nprivate to toast_internals.h) in order to satisfy callers of this macro.\nI am wondering if it would be possible to change detoast_iterate and\nPG_DETOAST_ITERATE in a way that those details remain hidden -- I am\nthinking something like \"if this returns NULL, then iteration has\nfinished\"; and relieve the macro from doing the \"->buf->buf\" and\n\"->buf->limit\" checks. 
I think changing that would require a change in\nhow the rest of the code is structured around this (the textpos internal\nfunction), but seems like it would be better overall.\n\n(AFAICS that would enable us to expose much less about the\niterator-related structs to detoast.h -- you should be able to move the\nstruct defs to toast_internals.h)\n\nThen again, it might be just wishful thinking, but it seems worth\nconsidering at least.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Sep 2019 18:22:51 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Mon, Sep 16, 2019 at 06:22:51PM -0300, Alvaro Herrera wrote:\n>On 2019-Sep-10, Binguo Bao wrote:\n>\n>> +/*\n>> + * Support for de-TOASTing toasted value iteratively. \"need\" is a pointer\n>> + * between the beginning and end of iterator's ToastBuffer. The marco\n>> + * de-TOAST all bytes before \"need\" into iterator's ToastBuffer.\n>> + */\n>> +#define PG_DETOAST_ITERATE(iter, need)\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\t\tAssert((need) >= (iter)->buf->buf && (need) <= (iter)->buf->capacity);\t\\\n>> +\t\twhile (!(iter)->done && (need) >= (iter)->buf->limit) { \t\t\t\t\\\n>> +\t\t\tdetoast_iterate(iter);\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\t\t}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\t} while (0)\n>> /* WARNING -- unaligned pointer */\n>> #define PG_DETOAST_DATUM_PACKED(datum) \\\n>> \tpg_detoast_datum_packed((struct varlena *) DatumGetPointer(datum))\n>\n>In broad terms this patch looks pretty good to me. 
I only have a small\n>quibble with this API definition in fmgr.h -- namely that it forces us\n>to export the definition of all the structs (that could otherwise be\n>private to toast_internals.h) in order to satisfy callers of this macro.\n>I am wondering if it would be possible to change detoast_iterate and\n>PG_DETOAST_ITERATE in a way that those details remain hidden -- I am\n>thinking something like \"if this returns NULL, then iteration has\n>finished\"; and relieve the macro from doing the \"->buf->buf\" and\n>\"->buf->limit\" checks. I think changing that would require a change in\n>how the rest of the code is structured around this (the textpos internal\n>function), but seems like it would be better overall.\n>\n>(AFAICS that would enable us to expose much less about the\n>iterator-related structs to detoast.h -- you should be able to move the\n>struct defs to toast_internals.h)\n>\n>Then again, it might be just wishful thinking, but it seems worth\n>considering at least.\n>\n\nI do agree hiding the exact struct definition would be nice. IMHO if the\nonly reason for exposing it is the PG_DETOAST_ITERATE() macro (or rather\nthe references to buf fields in it) then we can simply provide functions\nto return those fields.\n\nGranted, that may have impact on performance, but I'm not sure it'll be\neven measurable. Also, the other detoast macros right before this new\none are also ultimately just function calls.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 17 Sep 2019 15:34:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote on Tue, Sep 17, 2019 at 5:51 AM:\n\n> On 2019-Sep-10, Binguo Bao wrote:\n>\n> > +/*\n> > + * Support for de-TOASTing toasted value iteratively. 
\"need\" is a\n> pointer\n> > + * between the beginning and end of iterator's ToastBuffer. The marco\n> > + * de-TOAST all bytes before \"need\" into iterator's ToastBuffer.\n> > + */\n> > +#define PG_DETOAST_ITERATE(iter, need)\n> \\\n> > + do {\n>\n> \\\n> > + Assert((need) >= (iter)->buf->buf && (need) <=\n> (iter)->buf->capacity); \\\n> > + while (!(iter)->done && (need) >= (iter)->buf->limit) {\n> \\\n> > + detoast_iterate(iter);\n> \\\n> > + }\n>\n> \\\n> > + } while (0)\n> > /* WARNING -- unaligned pointer */\n> > #define PG_DETOAST_DATUM_PACKED(datum) \\\n> > pg_detoast_datum_packed((struct varlena *) DatumGetPointer(datum))\n>\n> In broad terms this patch looks pretty good to me. I only have a small\n> quibble with this API definition in fmgr.h -- namely that it forces us\n> to export the definition of all the structs (that could otherwise be\n> private to toast_internals.h) in order to satisfy callers of this macro.\n> I am wondering if it would be possible to change detoast_iterate and\n> PG_DETOAST_ITERATE in a way that those details remain hidden -- I am\n> thinking something like \"if this returns NULL, then iteration has\n> finished\"; and relieve the macro from doing the \"->buf->buf\" and\n> \"->buf->limit\" checks. I think changing that would require a change in\n> how the rest of the code is structured around this (the textpos internal\n> function), but seems like it would be better overall.\n>\n> (AFAICS that would enable us to expose much less about the\n> iterator-related structs to detoast.h -- you should be able to move the\n> struct defs to toast_internals.h)\n>\n> Then again, it might be just wishful thinking, but it seems worth\n> considering at least.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nI've tied to hide the details of the struct in patch v11 with checking\n\"need\" pointer\ninside detoast_iterate function. 
I also compared the performance of the two\nversions.\n\n patch v10 patch v11\ncomp. beg. 1413ms 1489ms\ncomp. end 24327ms 28011ms\nuncomp. beg. 1439ms 1432ms\nuncomp. end 25019ms 29007ms\n\nWe can see that v11 is about 15% slower than v10 on suffix queries, since\nthese involve complete de-TOASTing and the detoast_iterate() function is\ncalled frequently in v11.\n\nPersonally, I prefer patch v10. Its performance is superior, although it\nexposes some struct details.\n\n-- \nBest regards,\nBinguo Bao", "msg_date": "Mon, 23 Sep 2019 21:55:24 +0800", "msg_from": "Binguo Bao <djydewang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Paul Ramsey, do you have opinions to share about this patch? I think\nPostGIS might benefit from it. Thread starts here:\n\nhttps://postgr.es/m/CAL-OGks_onzpc9M9bXPCztMofWULcFkyeCeKiAgXzwRL8kXiag@mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 23 Sep 2019 13:45:29 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "\n\n> On Sep 23, 2019, at 9:45 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> Paul Ramsey, do you have opinions to share about this patch? I think\n> PostGIS might benefit from it. Thread starts here:\n\nI like the idea a great deal, but actually PostGIS is probably neutral on it: we generally want to retrieve things off the front of our serializations (the metadata header) rather than search through them for things in the middle. So the improvements to Pg12 cover all of our use cases. 
Haven’t had time to do any performance checks on it yet.\n\nATB,\n\nP.\n\n\n\n", "msg_date": "Wed, 25 Sep 2019 13:40:36 -0700", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Mon, Sep 23, 2019 at 09:55:24PM +0800, Binguo Bao wrote:\n> Personally, I prefer patch v10. Its performance is superior, although it\n> exposes some struct details.\n\nPlease be careful. The patch was waiting for author input, but its\nlatest status does not match what the CF app was saying. I have moved\nthis patch to next CF, with \"Needs review\" as status.\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 17:20:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Mon, Sep 23, 2019 at 9:55 PM Binguo Bao <djydewang@gmail.com> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote on Tue, Sep 17, 2019 at 5:51 AM:\n>> In broad terms this patch looks pretty good to me. I only have a small\n>> quibble with this API definition in fmgr.h -- namely that it forces us\n>> to export the definition of all the structs (that could otherwise be\n>> private to toast_internals.h) in order to satisfy callers of this macro.\n>> I am wondering if it would be possible to change detoast_iterate and\n>> PG_DETOAST_ITERATE in a way that those details remain hidden -- I am\n>> thinking something like \"if this returns NULL, then iteration has\n>> finished\"; and relieve the macro from doing the \"->buf->buf\" and\n>> \"->buf->limit\" checks. 
I think changing that would require a change in\n>> how the rest of the code is structured around this (the textpos internal\n>> function), but seems like it would be better overall.\n>>\n>> (AFAICS that would enable us to expose much less about the\n>> iterator-related structs to detoast.h -- you should be able to move the\n>> struct defs to toast_internals.h)\n>>\n>> Then again, it might be just wishful thinking, but it seems worth\n>> considering at least.\n>\n> I've tied to hide the details of the struct in patch v11 with checking \"need\" pointer\n> inside detoast_iterate function.\n\nI took a brief look at v11 to see if there's anything I can do to help\nit move forward. I'm not yet sure how it would look code-wise to\nimplement Alvaro and Tomas's comments upthread, but I'm pretty sure\nthis part means the iterator-related structs are just as exposed as\nbefore, but in a roundabout way that completely defeats the purpose of\nhiding internals:\n\n--- a/src/include/access/detoast.h\n+++ b/src/include/access/detoast.h\n@@ -11,6 +11,7 @@\n */\n #ifndef DETOAST_H\n #define DETOAST_H\n+#include \"toast_internals.h\"\n\nThat said, the idea behind the PG_DETOAST_ITERATE macro was my\nsuggestion so that text_position_next_internal() didn't have to call\nthe iterator function every time the needle advances, which caused a\nnoticeable performance penalty. The toast code has moved around quite\na bit since then, and I'm not sure of the best way forward.\n\nAlso, c60e520f6e0 changed the standard pglz decompression algorithm.\nIt might be worth it to see if those changes are applicable to the\niterator case. 
At least one of the improved comments could be brought\nover.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 12 Jan 2020 10:53:24 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On 2020-Jan-12, John Naylor wrote:\n> \n> I took a brief look at v11 to see if there's anything I can do to help\n> it move forward. I'm not yet sure how it would look code-wise to\n> implement Alvaro and Tomas's comments upthread, but I'm pretty sure\n> this part means the iterator-related structs are just as exposed as\n> before, but in a roundabout way that completely defeats the purpose of\n> hiding internals:\n\nAgreed -- I think this patch still needs more work before being\ncommittable; I agree with John that the changes after v10 made it worse,\nnot better. Rather than cross-including header files, it seems better\nto expose some struct definitions after all and let the main iterator\ninterface (detoast_iterate) be a \"static inline\" function in detoast.h.\n\nSo let's move forward with v10 (submitted on Sept 10th).\n\nLooking at that version, I don't think the function protos that were put\nin toast_internals.h should be there at all; I think they should be in\ndetoast.h so that they can be used. But I don't like the fact that\ndetoast.h now has to include genam.h; that seems pretty bogus. I think\nthis can be fixed by moving the FetchDatumIteratorData struct definition\n(but not its typedef) to toast_internals.h.\n\nOTOH we've recently seen the TOAST interface (and header files) heavily\nreworked because of table-AM considerations, so probably this needs even\nmore changes to avoid parts of it becoming heapam-dependent again.\n\ncreate_toast_buffer() doing two pallocs seems a waste. 
It could be a\nsingle one,\n+ buf = (ToastBuffer *) palloc0(MAXALIGN(sizeof(ToastBuffer)) + size);\n+ buf->buf = buf + MAXALIGN(sizeof(ToastBuffer));\n(I'm not sure that the MAXALIGNs are strictly necessary there; I think\nwe access the buf as four-byte aligned stuff somewhere in the toast\ninnards, but maybe I'm wrong about that.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:21:14 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Fri, Mar 13, 2020 at 10:19 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> So let's move forward with v10 (submitted on Sept 10th).\n\nIn the attached v12, based on v10, I've made some progress to address\nsome of the remaining issues. There's still some work to be done, in\nparticular to think about how to hide the struct details better, as\nmentioned by you and Tomas back in September, but wanted to put this\nmuch out there to keep things moving.\n\n> Rather than cross-including header files, it seems better\n> to expose some struct definitions after all and let the main iterator\n> interface (detoast_iterate) be a \"static inline\" function in detoast.h.\n\nThe cross-include is gone, and detoast_iterate is now static inline.\n\n> Looking at that version, I don't think the function protos that were put\n> in toast_internals.h should be there at all; I think they should be in\n> detoast.h so that they can be used.\n\nDone.\n\n> But I don't like the fact that\n> detoast.h now has to include genam.h; that seems pretty bogus. I think\n> this can be fixed by moving the FetchDatumIteratorData struct definition\n> (but not its typedef) to toast_internals.h.\n\nI took a stab at this, but I ended up playing whack-a-mole with\ncompiler warnings. 
I'll have to step back and try again later.\n\n> OTOH we've recently seen the TOAST interface (and header files) heavily\n> reworked because of table-AM considerations, so probably this needs even\n> more changes to avoid parts of it becoming heapam-dependant again.\n\nHaven't thought about this.\n\n> create_toast_buffer() doing two pallocs seems a waste. It could be a\n> single one,\n> + buf = (ToastBuffer *) palloc0(MAXALIGN(sizeof(ToastBuffer)) + size);\n> + buf->buf = buf + MAXALIGN(sizeof(ToastBuffer));\n> (I'm not sure that the MAXALIGNs are strictly necessary there; I think\n> we access the buf as four-byte aligned stuff somewhere in the toast\n> innards, but maybe I'm wrong about that.)\n\nI tried this briefly and got backend crashes, and didn't try to analyze further.\n\nIn addition, I brought in the memcpy() and comment changes in\nc60e520f6e from common/pg_lzcompress.c to pglz_decompress_iterate(). I\nalso made a typo correction in the former, which could be extracted\ninto a separate patch if this one is not ready in time.\n\nFor this comment back in [1]:\n\n> This arrangement where there are two ToastBuffers and they sometimes are\n> the same is cute, but I think we need a better way to know when each\n> needs to be freed afterwards; the proposed coding is confusing. And it\n> certainly it needs more than zero comments about what's going on there.\n\nOne idea is to test if the pointers are equal via a macro, rather than\nsetting and testing a member bool var. 
And I agree commentary could be\nimproved in this area.\n\n[1] https://www.postgresql.org/message-id/20190903201226.GA16197%40alvherre.pgsql\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 25 Mar 2020 18:04:53 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nThis entry was inactive for a very long time. \r\nJohn, are you going to continue working on this?\r\n\r\nThe last message mentions some open issues, namely backend crashes, so I move it to \"Waiting on author\".\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 02 Nov 2020 17:23:47 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On 2020-Nov-02, Anastasia Lubennikova wrote:\n\n> Status update for a commitfest entry.\n> \n> This entry was inactive for a very long time. \n> John, are you going to continue working on this?\n> \n> The last message mentions some open issues, namely backend crashes, so I move it to \"Waiting on author\".\n\nAs I understand, the patch he posted is fine -- it only crashes when he\ntried a change I suggested. But (as is apparently common) I might be\nsuggesting the wrong thing. 
Since the cfbot says the patch still\napplies and works, I suggest to keep it as needs-review.\n\n\n", "msg_date": "Mon, 2 Nov 2020 14:30:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" }, { "msg_contents": "On Mon, Nov 2, 2020 at 1:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Nov-02, Anastasia Lubennikova wrote:\n>\n> > Status update for a commitfest entry.\n> >\n> > This entry was inactive for a very long time.\n> > John, are you going to continue working on this?\n>\n\nNot in the near future. For background, this was a 2019 GSoC project where\nI was reviewer of record, and the patch is mostly good, but there is some\narchitectural awkwardness. I have tried to address that, but have not had\nsuccess.\n\n\n> > The last message mentions some open issues, namely backend crashes, so I\n> move it to \"Waiting on author\".\n>\n> As I understand, the patch he posted is fine -- it only crashes when he\n> tried a change I suggested.\n\n\nThat's my recollection as well.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 2 Nov 2020 15:08:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" },
{ "msg_contents": "On 02.11.2020 22:08, John Naylor wrote:\n>\n>\n> On Mon, Nov 2, 2020 at 1:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org \n> <mailto:alvherre@alvh.no-ip.org>> wrote:\n>\n> On 2020-Nov-02, Anastasia Lubennikova wrote:\n>\n> > Status update for a commitfest entry.\n> >\n> > This entry was inactive for a very long time.\n> > John, are you going to continue working on this?\n>\n>\n> Not in the near future. For background, this was a 2019 GSoC project \n> where I was reviewer of record, and the patch is mostly good, but \n> there is some architectural awkwardness. I have tried to address that, \n> but have not had success.\n>\nThe commitfest is nearing the end and as this thread has stalled, I've \nmarked it Returned with Feedback. Feel free to open a new entry if you \nreturn to this patch.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 27 Nov 2020 11:31:15 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [proposal] de-TOAST'ing using a iterator" } ]
[ { "msg_contents": "src/include/common/unicode_norm_table.h also should be updated to the\nlatest Unicode tables, as described in src/common/unicode. See attached\npatches. This also passes the tests described in\nsrc/common/unicode/README. (That is, the old code does not pass the\ncurrent Unicode test file, but the updated code does pass it.)\n\nI also checked contrib/unaccent/ but it seems up to date.\n\nIt seems to me that we ought to make this part of the standard major\nrelease preparations. There is a new Unicode standard approximately\nonce a year; see <https://unicode.org/Public/>. (The 13.0.0 listed\nthere is not released yet.)\n\nIt would also be nice to unify and automate all these \"update to latest\nUnicode\" steps.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 19 Jun 2019 22:34:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "more Unicode data updates" }, { "msg_contents": "On Thu, Jun 20, 2019 at 8:35 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> src/include/common/unicode_norm_table.h also should be updated to the\n> latest Unicode tables, as described in src/common/unicode. See attached\n> patches. This also passes the tests described in\n> src/common/unicode/README. (That is, the old code does not pass the\n> current Unicode test file, but the updated code does pass it.)\n>\n> I also checked contrib/unaccent/ but it seems up to date.\n>\n> It seems to me that we ought to make this part of the standard major\n> release preparations. There is a new Unicode standard approximately\n> once a year; see <https://unicode.org/Public/>. (The 13.0.0 listed\n> there is not released yet.)\n>\n> It would also be nice to unify and automate all these \"update to latest\n> Unicode\" steps.\n\n+1, great idea. 
Every piece of the system that derives from Unicode\ndata should derive from the same version, and the version should be\nmentioned in the release notes when it changes, and should be\ndocumented somewhere centrally. I wondered about that when working on\nthe unaccent generator script but didn't wonder hard enough.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2019 09:45:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more Unicode data updates" }, { "msg_contents": "On 2019-06-19 22:34, Peter Eisentraut wrote:\n> src/include/common/unicode_norm_table.h also should be updated to the\n> latest Unicode tables, as described in src/common/unicode. See attached\n> patches. This also passes the tests described in\n> src/common/unicode/README. (That is, the old code does not pass the\n> current Unicode test file, but the updated code does pass it.)\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Jun 2019 22:59:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more Unicode data updates" } ]
[ { "msg_contents": "Hi\n\nHere:\n\n https://www.postgresql.org/docs/devel/catalog-pg-class.html\n\nthe description for \"relam\" has not been updated to take into account\ntable access methods; patch attached.\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 20 Jun 2019 11:17:22 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "doc: update \"relam\" description in pg_class catalog reference" }, { "msg_contents": "On 6/20/19 11:17 AM, Ian Barwick wrote:\n> Hi\n> \n> Here:\n> \n>   https://www.postgresql.org/docs/devel/catalog-pg-class.html\n> \n> the description for \"relam\" has not been updated to take into account\n> table access methods; patch attached.\n\nWhoops, correct version attached. Sorry about the noise.\n\n\nRegards\n\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 20 Jun 2019 11:20:46 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: doc: update \"relam\" description in pg_class catalog reference" }, { "msg_contents": "On Thu, Jun 20, 2019 at 11:20:46AM +0900, Ian Barwick wrote:\n> Whoops, correct version attached. Sorry about the noise.\n\nv2 looks fine to me, committed. Thanks!\n--\nMichael", "msg_date": "Thu, 20 Jun 2019 13:07:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc: update \"relam\" description in pg_class catalog reference" }, { "msg_contents": "On 6/20/19 1:07 PM, Michael Paquier wrote:\n> On Thu, Jun 20, 2019 at 11:20:46AM +0900, Ian Barwick wrote:\n>> Whoops, correct version attached. Sorry about the noise.\n> \n> v2 looks fine to me, committed. 
Thanks!\n\nThanks!\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 20 Jun 2019 13:08:55 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: doc: update \"relam\" description in pg_class catalog reference" } ]
[ { "msg_contents": "Hello,\n\nWhile looking at bug #15857[1], I wondered why the following two\nqueries get different plans, given the schema and data from the bug\nreport:\n\n(1) SELECT COUNT (*)\n FROM a\n JOIN b\n ON a.id=b.base_id\n WHERE EXISTS (\n SELECT 1\n FROM c\n WHERE c.base_id = a.id\n );\n\n(2) SELECT COUNT (*)\n FROM a\n JOIN b\n ON a.id=b.base_id\n WHERE EXISTS (\n SELECT 1\n FROM c\n WHERE c.base_id = b.base_id\n );\n\nThe only difference is a.id vs b.base_id in the WHERE clause, and\nthose are equivalent, and the planner knows it as we can see from the\njoin cond visible in the plan. Query 1 gets a JOIN_UNIQUE_INNER for\nBC (C is sorted, made unique and then hashed for an inner join with B)\nwhile query 2 gets a JOIN_SEMI for BC (C is hashed for a semi join\nwith B). In both cases JOIN_UNIQUE_INNER is considered for BC (though\nit's later blocked for PHJ since commit aca127c1), but the better\nJOIN_SEMI is considered only for query 2.\n\nThe relevant decision logic is in populate_joinrel_with_paths()'s\nJOIN_SEMI case, where it considers which of JOIN_SEMI,\nJOIN_UNIQUE_INNER and JOIN_UNIQUE_OUTER to add.\n\nHere's my question: how could it ever be OK to sort/unique something\nand put it in a hash table, but not OK to put exactly the same thing\nin the hash table directly, with JOIN_SEMI logic to prevent multiple\nmatches? And likewise for the inner side of other join strategies.\nOr to put in in plain C, in what case would the following change be\nwrong?\n\n /*\n * We might have a normal semijoin, or a case\nwhere we don't have\n * enough rels to do the semijoin but can\nunique-ify the RHS and\n * then do an innerjoin (see comments in\njoin_is_legal). 
In the\n * latter case we can't apply JOIN_SEMI joining.\n */\n- if (bms_is_subset(sjinfo->min_lefthand, rel1->relids) &&\n- bms_is_subset(sjinfo->min_righthand,\nrel2->relids))\n+ if ((bms_is_subset(sjinfo->min_lefthand,\nrel1->relids) &&\n+ bms_is_subset(sjinfo->min_righthand,\nrel2->relids)) ||\n+ bms_equal(sjinfo->syn_righthand, rel2->relids))\n {\n if (is_dummy_rel(rel1) || is_dummy_rel(rel2) ||\n\nrestriction_is_constant_false(restrictlist, joinrel, false))\n\nOr to put it in the language of the comment, how could you ever have\nenough rels to do a join between B and unique(C), but not enough rels\nto do a semi-join between B and C?\n\nI admit that I don't have a great grasp of equivalence classes,\n(min|syn)_(left|right)hand or the join planning code in general,\nhaving focused mostly on execution so far, so the above is a\ncargo-cult change and I may be missing something fundamental...\n\nWhich plan wins is of course a costing matter, but having the\nJOIN_SEMI option available has the advantage of being more profitably\nparallelisable, which led me here. With the above hack you get a\n[Parallel] Hash Semi Join for BC with both queries (unless you set\nwork_mem higher and then you get a much slower merge join, but that's\nan entirely separate problem). Only one regression test plan changes\n-- it's structurally similar to the bug report query, but at a glance\nthe new plan is better anyway. 
The test still demonstrates what it\nwants to demonstrate (namely that the absence or presence of a unique\nindex causes the plan to change, and with the above hack that is even\nclearer because the two plans now differ only in \"Nested Loop Semi\nJoin\" vs \"Nested Loop\").\n\n[1] https://www.postgresql.org/message-id/flat/15857-d1ba2a64bce0795e%40postgresql.org\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2019 15:14:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "JOIN_SEMI planning question" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> While looking at bug #15857[1], I wondered why the following two\n> queries get different plans, given the schema and data from the bug\n> report:\n> ...\n> Here's my question: how could it ever be OK to sort/unique something\n> and put it in a hash table, but not OK to put exactly the same thing\n> in the hash table directly, with JOIN_SEMI logic to prevent multiple\n> matches? And likewise for the inner side of other join strategies.\n\nI think you're thinking about it wrong. Or maybe there's some additional\nwork we could put in to extend the logic.\n\nIf we're considering the case where the semijoin is c.base_id = a.id,\nthen we definitely can do \"a SEMIJOIN c WHERE a.id = c.base_id\".\nIf we want to join b to c first, we can do so, but we have to unique-ify c\nbefore that join, and then both the b/c join and the later join with a\ncan become plain inner joins. 
We *don't* get to skip unique-ifying c\nbefore joining to b and then apply a semijoin with a later, because in\ngeneral that's going to result in the wrong number of output rows.\n(Example: if a is unique but b has multiple copies of a particular join\nkey, and the key does appear in c, we should end with multiple output rows\nhaving that key, and we wouldn't.)\n\nIt's possible that in some situations we could prove that semijoining\nlater would work, but it seems like that'd require a great deal more\nanalysis than the code does now. There'd also have to be tracking of\nwhether the final a join still has to be a semijoin or not, which'd\nnow depend on which path for b/c was being considered.\n\n> Or to put it in the language of the comment, how could you ever have\n> enough rels to do a join between B and unique(C), but not enough rels\n> to do a semi-join between B and C?\n\nIf you're not joining C to B at all, but to some other rel A, you can't do\nit as a semijoin because you can't execute the semijoin qual correctly.\n\nIn the particular case we're looking at here, it may be possible to prove\nthat equivalence-class substitution from the original B/C semijoin qual\ngives rise to an A/C join qual that will work as an equivalent semijoin\nqual, and then it's OK to do A/C as a semijoin with that. But I'm not\nvery sure what the proof conditions need to be, and I'm 100% sure that\nthe code isn't making any such proof now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 19:35:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JOIN_SEMI planning question" } ]
[ { "msg_contents": "A patch fixing this bug\nhttps://www.postgresql.org/message-id/flat/15738-21723084f3009ceb%40postgresql.org", "msg_date": "Thu, 20 Jun 2019 08:40:58 +0500", "msg_from": "RekGRpth <rekgrpth@gmail.com>", "msg_from_op": true, "msg_subject": "Disconnect from SPI manager on error" }, { "msg_contents": "RekGRpth <rekgrpth@gmail.com> writes:\n> A patch fixing this bug\n> https://www.postgresql.org/message-id/flat/15738-21723084f3009ceb%40postgresql.org\n\nI do not think this code change is necessary or appropriate.\nIt is not plpgsql's job to clean up after other backend subsystems\nduring a transaction abort. Maybe if plpgsql were the only thing\nthat invokes spi.c, it would be sane to factorize the responsibility\nthis way --- but of course it is not.\n\nThe complaint in bug #15738 is 100% bogus, which is probably why\nit was roundly ignored. The quoted C code is just plain wrong\nabout how to handle errors inside the backend. In particular,\nSPI_rollback is not even approximately the right thing to do to\nclean up after catching a thrown exception.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2019 11:32:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Disconnect from SPI manager on error" }, { "msg_contents": ">It is not plpgsql's job to clean up after other backend subsystems\nduring a transaction abort.\nBut plpgsql do clean up on success! I suggest only do cleanup and on\nexception.\n\n\nчт, 20 июн. 2019 г. в 20:33, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> RekGRpth <rekgrpth@gmail.com> writes:\n> > A patch fixing this bug\n> >\n> https://www.postgresql.org/message-id/flat/15738-21723084f3009ceb%40postgresql.org\n>\n> I do not think this code change is necessary or appropriate.\n> It is not plpgsql's job to clean up after other backend subsystems\n> during a transaction abort. 
Maybe if plpgsql were the only thing\n> that invokes spi.c, it would be sane to factorize the responsibility\n> this way --- but of course it is not.\n>\n> The complaint in bug #15738 is 100% bogus, which is probably why\n> it was roundly ignored. The quoted C code is just plain wrong\n> about how to handle errors inside the backend. In particular,\n> SPI_rollback is not even approximately the right thing to do to\n> clean up after catching a thrown exception.\n>\n> regards, tom lane\n>\n\n>It is not plpgsql's job to clean up after other backend subsystemsduring a transaction abort.But plpgsql do clean up on success! I suggest only do cleanup and on exception.чт, 20 июн. 2019 г. в 20:33, Tom Lane <tgl@sss.pgh.pa.us>:RekGRpth <rekgrpth@gmail.com> writes:\n> A patch fixing this bug\n> https://www.postgresql.org/message-id/flat/15738-21723084f3009ceb%40postgresql.org\n\nI do not think this code change is necessary or appropriate.\nIt is not plpgsql's job to clean up after other backend subsystems\nduring a transaction abort.  Maybe if plpgsql were the only thing\nthat invokes spi.c, it would be sane to factorize the responsibility\nthis way --- but of course it is not.\n\nThe complaint in bug #15738 is 100% bogus, which is probably why\nit was roundly ignored.  The quoted C code is just plain wrong\nabout how to handle errors inside the backend.  In particular,\nSPI_rollback is not even approximately the right thing to do to\nclean up after catching a thrown exception.\n\n                        regards, tom lane", "msg_date": "Fri, 21 Jun 2019 09:49:10 +0500", "msg_from": "RekGRpth <rekgrpth@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Disconnect from SPI manager on error" }, { "msg_contents": "On Fri, Jun 21, 2019 at 3:45 AM RekGRpth <rekgrpth@gmail.com> wrote:\n> >It is not plpgsql's job to clean up after other backend subsystems\n> during a transaction abort.\n> But plpgsql do clean up on success! 
I suggest only do cleanup and on exception.\n\nExcept that's wrong, because when an error happens, cleanup is - in\nmost cases - the job of (sub)transaction abort, not something that\nshould be done by individual bits of code.\n\nPostgreSQL has a centralized system for processing exception cleanup\nfor a very good reason: there are LOTS of places where errors can be\nthrown, and if each of those places has to have its own error cleanup\nlogic, you end up with a real mess. Instead we've gone the other way:\nyou can throw an error from anywhere without doing any cleanup, and\nit's the job of the error-handling machinery to invoke subtransaction\nabort logic, which is responsible for cleaning up whatever mess has\nbeen left behind.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:08:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disconnect from SPI manager on error" } ]
[ { "msg_contents": "The discussion in bug #15631 revealed that serial/identity sequences of\ntemporary tables should really also be temporary (easy), and that\nserial/identity sequences of unlogged tables should also be unlogged.\nBut there is no support for unlogged sequences, so I looked into that.\n\nIf you copy the initial sequence relation file to the init fork, then\nthis all seems to work out just fine. Attached is a patch. The\nlow-level copying seems to be handled quite inconsistently across the\ncode, so I'm not sure what the most appropriate way to do this would be.\n I'm looking for feedback from those who have worked on tableam and\nstorage manager to see what the right interfaces are or whether some new\ninterfaces might perhaps be appropriate.\n\n(What's still missing in this patch is ALTER SEQUENCE SET\n{LOGGED|UNLOGGED} as well as propagating the analogous ALTER TABLE\ncommand to owned sequences.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 20 Jun 2019 09:30:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "unlogged sequences" }, { "msg_contents": "On Thu, Jun 20, 2019 at 09:30:34AM +0200, Peter Eisentraut wrote:\n> The discussion in bug #15631 revealed that serial/identity sequences of\n> temporary tables should really also be temporary (easy), and that\n> serial/identity sequences of unlogged tables should also be unlogged.\n> But there is no support for unlogged sequences, so I looked into that.\n\nThanks for doing so.\n\n> If you copy the initial sequence relation file to the init fork, then\n> this all seems to work out just fine. Attached is a patch. 
The\n> low-level copying seems to be handled quite inconsistently across the\n> code, so I'm not sure what the most appropriate way to do this would be.\n> I'm looking for feedback from those who have worked on tableam and\n> storage manager to see what the right interfaces are or whether some new\n> interfaces might perhaps be appropriate.\n\nBut the actual deal is that the sequence meta-data is now in\npg_sequences and not the init forks, no? I have just done a small\ntest:\n1) Some SQL queries:\ncreate unlogged sequence popo;\nalter sequence popo increment 2;\nselect nextval('popo');\nselect nextval('popo');\n2) Then a hard crash:\npg_ctl stop -m immediate\npg_ctl start\n3) Again, with a crash:\nselect nextval('popo'); \n#2 0x000055ce60f3208d in ExceptionalCondition\n(conditionName=0x55ce610f0570 \"!(((PageHeader) (page))->pd_special >=\n(__builtin_offsetof (PageHeaderData, pd_linp)))\",\nerrorType=0x55ce610f0507 \"FailedAssertion\",\nfileName=0x55ce610f04e0 \"../../../src/include/storage/bufpage.h\",\nlineNumber=317) at assert.c:54\n#3 0x000055ce60b43200 in PageValidateSpecialPointer\n(page=0x7ff7692b3d80 \"\") at\n../../../src/include/storage/bufpage.h:317\n#4 0x000055ce60b459d4 in read_seq_tuple (rel=0x7ff768ad27e0,\nbuf=0x7ffc5707f0bc, seqdatatuple=0x7ffc5707f0a0) at\nsequence.c:1213\n#5 0x000055ce60b447ee in nextval_internal (relid=16385,\ncheck_permissions=true) at sequence.c:678\n#6 0x000055ce60b44533 in nextval_oid (fcinfo=0x55ce62537570) at sequence.c:607\n--\nMichael", "msg_date": "Fri, 21 Jun 2019 14:31:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 2019-06-21 07:31, Michael Paquier wrote:\n> 1) Some SQL queries:\n> create unlogged sequence popo;\n> alter sequence popo increment 2;\n\nThe problem is that the above command does a relation rewrite but the\ncode doesn't know to copy the init fork of the sequence. 
That will need\nto be addressed.\n\n> select nextval('popo');\n> select nextval('popo');\n> 2) Then a hard crash:\n> pg_ctl stop -m immediate\n> pg_ctl start\n> 3) Again, with a crash:\n> select nextval('popo'); \n> #2 0x000055ce60f3208d in ExceptionalCondition\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 23 Jun 2019 22:20:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "Hi,\n\nOn 2019-06-20 09:30:34 +0200, Peter Eisentraut wrote:\n> I'm looking for feedback from those who have worked on tableam and\n> storage manager to see what the right interfaces are or whether some new\n> interfaces might perhaps be appropriate.\n\nHm, it's not clear to me that tableam design matters much around\nsequences? To me it's a historical accident that sequences kinda look\nlike tables, not more.\n\n\n\n> +\t/*\n> +\t * create init fork for unlogged sequences\n> +\t *\n> +\t * The logic follows that of RelationCreateStorage() and\n> +\t * RelationCopyStorage().\n> +\t */\n> +\tif (seq->sequence->relpersistence == RELPERSISTENCE_UNLOGGED)\n> +\t{\n> +\t\tSMgrRelation srel;\n> +\t\tPGAlignedBlock buf;\n> +\t\tPage\t\tpage = (Page) buf.data;\n> +\n> +\t\tFlushRelationBuffers(rel);\n\nThat's pretty darn expensive, especially when we just need to flush out\na *single* page, as it needs to scan all of shared buffers. Seems better\nto just to initialize the page from scratch? 
Any reason not to do that?\n\n\n> +\t\tsrel = smgropen(rel->rd_node, InvalidBackendId);\n> +\t\tsmgrcreate(srel, INIT_FORKNUM, false);\n> +\t\tlog_smgrcreate(&rel->rd_node, INIT_FORKNUM);\n> +\n> +\t\tAssert(smgrnblocks(srel, MAIN_FORKNUM) == 1);\n> +\n> +\t\tsmgrread(srel, MAIN_FORKNUM, 0, buf.data);\n> +\n> +\t\tif (!PageIsVerified(page, 0))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"invalid page in block %u of relation %s\",\n> +\t\t\t\t\t\t\t0,\n> +\t\t\t\t\t\t\trelpathbackend(srel->smgr_rnode.node,\n> +\t\t\t\t\t\t\t\t\t\t srel->smgr_rnode.backend,\n> +\t\t\t\t\t\t\t\t\t\t MAIN_FORKNUM))));\n> +\n> +\t\tlog_newpage(&srel->smgr_rnode.node, INIT_FORKNUM, 0, page, false);\n> +\t\tPageSetChecksumInplace(page, 0);\n> +\t\tsmgrextend(srel, INIT_FORKNUM, 0, buf.data, false);\n> +\t\tsmgrclose(srel);\n> +\t}\n\nI.e. I think it'd be better if we just added a fork argument to\nfill_seq_with_data(), and then do something like\n\nsmgrcreate(srel, INIT_FORKNUM, false);\nlog_smgrcreate(&rel->rd_node, INIT_FORKNUM);\nfill_seq_with_data(rel, tuple, INIT_FORKNUM);\n\nand add a FlushBuffer() to the end of fill_seq_with_data() if writing\nINIT_FORKNUM. The if (RelationNeedsWAL(rel)) would need an || forkNum ==\nINIT_FORKNUM.\n\nAlternatively you could just copy the contents from the buffer currently\nfilled in fill_seq_with_data() to the main fork, and do a memcpy. 
But\nthat seems unnecessarily complicated, because you'd again need to do WAL\nlogging etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2019 14:37:52 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Wed, Jun 26, 2019 at 6:38 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-20 09:30:34 +0200, Peter Eisentraut wrote:\n> > I'm looking for feedback from those who have worked on tableam and\n> > storage manager to see what the right interfaces are or whether some new\n> > interfaces might perhaps be appropriate.\n>\n> [lots of feedback that requires making decisions]\n\nSeems to be actively under development but no new patch yet. Moved to next CF.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:13:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 2019-Aug-01, Thomas Munro wrote:\n\n> On Wed, Jun 26, 2019 at 6:38 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-06-20 09:30:34 +0200, Peter Eisentraut wrote:\n> > > I'm looking for feedback from those who have worked on tableam and\n> > > storage manager to see what the right interfaces are or whether some new\n> > > interfaces might perhaps be appropriate.\n> >\n> > [lots of feedback that requires making decisions]\n> \n> Seems to be actively under development but no new patch yet. Moved to next CF.\n\nMarked Waiting on Author.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Sep 2019 14:42:32 -0300", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 25.06.19 20:37, Andres Freund wrote:\n> I.e. 
I think it'd be better if we just added a fork argument to\n> fill_seq_with_data(), and then do something like\n> \n> smgrcreate(srel, INIT_FORKNUM, false);\n> log_smgrcreate(&rel->rd_node, INIT_FORKNUM);\n> fill_seq_with_data(rel, tuple, INIT_FORKNUM);\n> \n> and add a FlushBuffer() to the end of fill_seq_with_data() if writing\n> INIT_FORKNUM. The if (RelationNeedsWAL(rel)) would need an || forkNum ==\n> INIT_FORKNUM.\n\nNow that logical replication of sequences is nearing completion, I \nfigured it would be suitable to dust off this old discussion on unlogged \nsequences, mainly so that sequences attached to unlogged tables can be \nexcluded from replication.\n\nAttached is a new patch that incorporates the above suggestions, with \nsome slight refactoring. The only thing I didn't/couldn't do was to \ncall FlushBuffers(), since that is not an exported function. So this \nstill calls FlushRelationBuffers(), which was previously not liked. \nIdeas welcome.\n\nI have also re-tested the crash reported by Michael Paquier in the old \ndiscussion and added test cases that catch them.\n\nThe rest of the patch is just documentation, DDL support, client \nsupport, etc.\n\nWhat is not done yet is support for ALTER SEQUENCE ... SET \nLOGGED/UNLOGGED. This is a bit of a problem because:\n\n1. The new behavior is that a serial/identity sequence of a new unlogged \ntable is now also unlogged.\n2. There is also a new restriction that changing a table to logged is \nnot allowed if it is linked to an unlogged sequence. (This is IMO \nsimilar to the existing restriction on linking mixed logged/unlogged \ntables via foreign keys.)\n3. Thus, currently, you can't create an unlogged table with a \nserial/identity column and then change it to logged. This is reflected \nin some of the test changes I had to make in alter_table.sql to work \naround this. These should eventually go away.\n\nInterestingly, there is grammar support for ALTER SEQUENCE ... 
SET \nLOGGED/UNLOGGED because there is this:\n\n | ALTER SEQUENCE qualified_name alter_table_cmds\n {\n AlterTableStmt *n = makeNode(AlterTableStmt);\n n->relation = $3;\n n->cmds = $4;\n n->objtype = OBJECT_SEQUENCE;\n n->missing_ok = false;\n $$ = (Node *)n;\n }\n\nBut it is rejected later in tablecmds.c. In fact, it appears that this \npiece of grammar is currently useless because there are no \nalter_table_cmds that actually work for sequences. (This used to be \ndifferent because things like OWNER TO also went through here.)\n\nI tried to make tablecmds.c handle sequences as well, but that became \nmessy. So I'm thinking about making ALTER SEQUENCE ... SET \nLOGGED/UNLOGGED an entirely separate code path and rip out the above \ngrammar, but that needs some further pondering.\n\nBut all that is a bit of a separate effort, so in the meantime some \nreview of the changes in and around fill_seq_with_data() would be useful.", "msg_date": "Fri, 11 Feb 2022 10:12:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "rebased patch, no functional changes\n\nOn 11.02.22 10:12, Peter Eisentraut wrote:\n> On 25.06.19 20:37, Andres Freund wrote:\n>> I.e. I think it'd be better if we just added a fork argument to\n>> fill_seq_with_data(), and then do something like\n>>\n>> smgrcreate(srel, INIT_FORKNUM, false);\n>> log_smgrcreate(&rel->rd_node, INIT_FORKNUM);\n>> fill_seq_with_data(rel, tuple, INIT_FORKNUM);\n>>\n>> and add a FlushBuffer() to the end of fill_seq_with_data() if writing\n>> INIT_FORKNUM. 
The if (RelationNeedsWAL(rel)) would need an || forkNum ==\n>> INIT_FORKNUM.\n> \n> Now that logical replication of sequences is nearing completion, I \n> figured it would be suitable to dust off this old discussion on unlogged \n> sequences, mainly so that sequences attached to unlogged tables can be \n> excluded from replication.\n> \n> Attached is a new patch that incorporates the above suggestions, with \n> some slight refactoring.  The only thing I didn't/couldn't do was to \n> call FlushBuffers(), since that is not an exported function.  So this \n> still calls FlushRelationBuffers(), which was previously not liked. \n> Ideas welcome.\n> \n> I have also re-tested the crash reported by Michael Paquier in the old \n> discussion and added test cases that catch them.\n> \n> The rest of the patch is just documentation, DDL support, client \n> support, etc.\n> \n> What is not done yet is support for ALTER SEQUENCE ... SET \n> LOGGED/UNLOGGED.  This is a bit of a problem because:\n> \n> 1. The new behavior is that a serial/identity sequence of a new unlogged \n> table is now also unlogged.\n> 2. There is also a new restriction that changing a table to logged is \n> not allowed if it is linked to an unlogged sequence.  (This is IMO \n> similar to the existing restriction on linking mixed logged/unlogged \n> tables via foreign keys.)\n> 3. Thus, currently, you can't create an unlogged table with a \n> serial/identity column and then change it to logged.  This is reflected \n> in some of the test changes I had to make in alter_table.sql to work \n> around this.  These should eventually go away.\n> \n> Interestingly, there is grammar support for ALTER SEQUENCE ... 
SET \n> LOGGED/UNLOGGED because there is this:\n> \n>         |   ALTER SEQUENCE qualified_name alter_table_cmds\n>                 {\n>                     AlterTableStmt *n = makeNode(AlterTableStmt);\n>                     n->relation = $3;\n>                     n->cmds = $4;\n>                     n->objtype = OBJECT_SEQUENCE;\n>                     n->missing_ok = false;\n>                     $$ = (Node *)n;\n>                 }\n> \n> But it is rejected later in tablecmds.c.  In fact, it appears that this \n> piece of grammar is currently useless because there are no \n> alter_table_cmds that actually work for sequences.  (This used to be \n> different because things like OWNER TO also went through here.)\n> \n> I tried to make tablecmds.c handle sequences as well, but that became \n> messy.  So I'm thinking about making ALTER SEQUENCE ... SET \n> LOGGED/UNLOGGED an entirely separate code path and rip out the above \n> grammar, but that needs some further pondering.\n> \n> But all that is a bit of a separate effort, so in the meantime some \n> review of the changes in and around fill_seq_with_data() would be useful.", "msg_date": "Mon, 28 Feb 2022 10:56:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "Here is an updated patch that now also includes SET LOGGED/UNLOGGED \nsupport. So this version addresses all known issues and open problems.\n\n\nOn 28.02.22 10:56, Peter Eisentraut wrote:\n> rebased patch, no functional changes\n> \n> On 11.02.22 10:12, Peter Eisentraut wrote:\n>> On 25.06.19 20:37, Andres Freund wrote:\n>>> I.e. 
I think it'd be better if we just added a fork argument to\n>>> fill_seq_with_data(), and then do something like\n>>>\n>>> smgrcreate(srel, INIT_FORKNUM, false);\n>>> log_smgrcreate(&rel->rd_node, INIT_FORKNUM);\n>>> fill_seq_with_data(rel, tuple, INIT_FORKNUM);\n>>>\n>>> and add a FlushBuffer() to the end of fill_seq_with_data() if writing\n>>> INIT_FORKNUM. The if (RelationNeedsWAL(rel)) would need an || forkNum ==\n>>> INIT_FORKNUM.\n>>\n>> Now that logical replication of sequences is nearing completion, I \n>> figured it would be suitable to dust off this old discussion on \n>> unlogged sequences, mainly so that sequences attached to unlogged \n>> tables can be excluded from replication.\n>>\n>> Attached is a new patch that incorporates the above suggestions, with \n>> some slight refactoring.  The only thing I didn't/couldn't do was to \n>> call FlushBuffers(), since that is not an exported function.  So this \n>> still calls FlushRelationBuffers(), which was previously not liked. \n>> Ideas welcome.\n>>\n>> I have also re-tested the crash reported by Michael Paquier in the old \n>> discussion and added test cases that catch them.\n>>\n>> The rest of the patch is just documentation, DDL support, client \n>> support, etc.\n>>\n>> What is not done yet is support for ALTER SEQUENCE ... SET \n>> LOGGED/UNLOGGED.  This is a bit of a problem because:\n>>\n>> 1. The new behavior is that a serial/identity sequence of a new \n>> unlogged table is now also unlogged.\n>> 2. There is also a new restriction that changing a table to logged is \n>> not allowed if it is linked to an unlogged sequence.  (This is IMO \n>> similar to the existing restriction on linking mixed logged/unlogged \n>> tables via foreign keys.)\n>> 3. Thus, currently, you can't create an unlogged table with a \n>> serial/identity column and then change it to logged.  This is \n>> reflected in some of the test changes I had to make in alter_table.sql \n>> to work around this.  
These should eventually go away.\n>>\n>> Interestingly, there is grammar support for ALTER SEQUENCE ... SET \n>> LOGGED/UNLOGGED because there is this:\n>>\n>>          |   ALTER SEQUENCE qualified_name alter_table_cmds\n>>                  {\n>>                      AlterTableStmt *n = makeNode(AlterTableStmt);\n>>                      n->relation = $3;\n>>                      n->cmds = $4;\n>>                      n->objtype = OBJECT_SEQUENCE;\n>>                      n->missing_ok = false;\n>>                      $$ = (Node *)n;\n>>                  }\n>>\n>> But it is rejected later in tablecmds.c.  In fact, it appears that \n>> this piece of grammar is currently useless because there are no \n>> alter_table_cmds that actually work for sequences.  (This used to be \n>> different because things like OWNER TO also went through here.)\n>>\n>> I tried to make tablecmds.c handle sequences as well, but that became \n>> messy.  So I'm thinking about making ALTER SEQUENCE ... SET \n>> LOGGED/UNLOGGED an entirely separate code path and rip out the above \n>> grammar, but that needs some further pondering.\n>>\n>> But all that is a bit of a separate effort, so in the meantime some \n>> review of the changes in and around fill_seq_with_data() would be useful.", "msg_date": "Thu, 24 Mar 2022 14:10:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "Patch rebased over some conflicts, and some tests simplified.\n\nOn 24.03.22 14:10, Peter Eisentraut wrote:\n> Here is an updated patch that now also includes SET LOGGED/UNLOGGED \n> support.  So this version addresses all known issues and open problems.\n> \n> \n> On 28.02.22 10:56, Peter Eisentraut wrote:\n>> rebased patch, no functional changes\n>>\n>> On 11.02.22 10:12, Peter Eisentraut wrote:\n>>> On 25.06.19 20:37, Andres Freund wrote:\n>>>> I.e. 
I think it'd be better if we just added a fork argument to\n>>>> fill_seq_with_data(), and then do something like\n>>>>\n>>>> smgrcreate(srel, INIT_FORKNUM, false);\n>>>> log_smgrcreate(&rel->rd_node, INIT_FORKNUM);\n>>>> fill_seq_with_data(rel, tuple, INIT_FORKNUM);\n>>>>\n>>>> and add a FlushBuffer() to the end of fill_seq_with_data() if writing\n>>>> INIT_FORKNUM. The if (RelationNeedsWAL(rel)) would need an || \n>>>> forkNum ==\n>>>> INIT_FORKNUM.\n>>>\n>>> Now that logical replication of sequences is nearing completion, I \n>>> figured it would be suitable to dust off this old discussion on \n>>> unlogged sequences, mainly so that sequences attached to unlogged \n>>> tables can be excluded from replication.\n>>>\n>>> Attached is a new patch that incorporates the above suggestions, with \n>>> some slight refactoring.  The only thing I didn't/couldn't do was to \n>>> call FlushBuffers(), since that is not an exported function.  So this \n>>> still calls FlushRelationBuffers(), which was previously not liked. \n>>> Ideas welcome.\n>>>\n>>> I have also re-tested the crash reported by Michael Paquier in the \n>>> old discussion and added test cases that catch them.\n>>>\n>>> The rest of the patch is just documentation, DDL support, client \n>>> support, etc.\n>>>\n>>> What is not done yet is support for ALTER SEQUENCE ... SET \n>>> LOGGED/UNLOGGED.  This is a bit of a problem because:\n>>>\n>>> 1. The new behavior is that a serial/identity sequence of a new \n>>> unlogged table is now also unlogged.\n>>> 2. There is also a new restriction that changing a table to logged is \n>>> not allowed if it is linked to an unlogged sequence.  (This is IMO \n>>> similar to the existing restriction on linking mixed logged/unlogged \n>>> tables via foreign keys.)\n>>> 3. Thus, currently, you can't create an unlogged table with a \n>>> serial/identity column and then change it to logged.  
This is \n>>> reflected in some of the test changes I had to make in \n>>> alter_table.sql to work around this.  These should eventually go away.\n>>>\n>>> Interestingly, there is grammar support for ALTER SEQUENCE ... SET \n>>> LOGGED/UNLOGGED because there is this:\n>>>\n>>>          |   ALTER SEQUENCE qualified_name alter_table_cmds\n>>>                  {\n>>>                      AlterTableStmt *n = makeNode(AlterTableStmt);\n>>>                      n->relation = $3;\n>>>                      n->cmds = $4;\n>>>                      n->objtype = OBJECT_SEQUENCE;\n>>>                      n->missing_ok = false;\n>>>                      $$ = (Node *)n;\n>>>                  }\n>>>\n>>> But it is rejected later in tablecmds.c.  In fact, it appears that \n>>> this piece of grammar is currently useless because there are no \n>>> alter_table_cmds that actually work for sequences.  (This used to be \n>>> different because things like OWNER TO also went through here.)\n>>>\n>>> I tried to make tablecmds.c handle sequences as well, but that became \n>>> messy.  So I'm thinking about making ALTER SEQUENCE ... SET \n>>> LOGGED/UNLOGGED an entirely separate code path and rip out the above \n>>> grammar, but that needs some further pondering.\n>>>\n>>> But all that is a bit of a separate effort, so in the meantime some \n>>> review of the changes in and around fill_seq_with_data() would be \n>>> useful.", "msg_date": "Tue, 29 Mar 2022 14:28:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "Hi,\n\nHere's a slightly improved patch, adding a couple checks and tests for\nowned sequences to ensure both objects have the same persistence. In\nparticular:\n\n* When linking a sequence to a table (ALTER SEQUENCE ... 
OWNED BY),\nthere's an ereport(ERROR) if the relpersistence values do not match.\n\n* Disallow changing persistence for owned sequences directly.\n\n\nBut I wonder about two things:\n\n1) Do we need to do something about pg_upgrade? I mean, we did not have\nunlogged sequences until now, so existing databases may have unlogged\ntables with logged sequences. If people run pg_upgrade, what should be\nthe end result? Should it convert the sequences to unlogged ones, should\nit fail and force the user to fix this manually, or what?\n\n2) Does it actually make sense to force owned sequences to have the same\nrelpersistence as the table? I can imagine use cases where it's OK to\ndiscard and recalculate the data, but I'd still want to ensure unique\nIDs. Like some data loads, for example.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 31 Mar 2022 16:14:25 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 16:14:25 +0200, Tomas Vondra wrote:\n> 1) Do we need to do something about pg_upgrade? I mean, we did not have\n> unlogged sequences until now, so existing databases may have unlogged\n> tables with logged sequences. If people run pg_upgrade, what should be\n> the end result? Should it convert the sequences to unlogged ones, should\n> it fail and force the user to fix this manually, or what?\n\n> 2) Does it actually make sense to force owned sequences to have the same\n> relpersistence as the table? I can imagine use cases where it's OK to\n> discard and recalculate the data, but I'd still want to ensure unique\n> IDs. Like some data loads, for example.\n\n\nI agree it makes sense to have logged sequences with unlogged tables. 
We\nshould call out the behavioural change somewhere prominent in the release\nnotes.\n\nI don't think we should make pg_upgrade change the loggedness of sequences.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:28:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:28 AM Andres Freund <andres@anarazel.de> wrote:\n\n> I agree it makes sense to have logged sequences with unlogged tables. We\n> should call out the behavioural change somewhere prominent in the release\n> notes.\n>\n>\nWe can/do already support that unlikely use case by allowing one to remove\nthe OWNERSHIP dependency between the table and the sequence.\n\nI'm fine with owned sequences tracking the persistence attribute of the\nowning table.\n\nI don't think we should make pg_upgrade change the loggedness of sequences.\n>\n>\nWe are willing to change the default behavior here so it is going to affect\ndump/restore anyway, might as well fully commit and do the same for\npg_upgrade. The vast majority of users will benefit from the new default\nbehavior.\n\nI don't actually get, though, how that would play with pg_dump since it\nalways emits an unowned, and thus restored as logged, sequence first and\nthen alters the sequence to be owned by the table. 
Thus restoring an old\nSQL dump into the v15 is going to fail if we prohibit\nunlogged-table/logged-sequence; unless we actively change the logged-ness\nof the sequence when subordinating it to a table.\n\nThus, the choices seem to be:\n\n1) implement forced persistence agreement for owned sequences, changing the\nsequence to match the table when the alter table happens, and during\npg_upgrade.\n2) do not force persistence agreement for owned sequences\n\nIf choosing option 2, are you on board with changing the behavior of CREATE\nUNLOGGED TABLE with respect to any auto-generated sequences?\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 10:35:55 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 3/31/22 19:35, David G. Johnston wrote:\n> On Thu, Mar 31, 2022 at 9:28 AM Andres Freund <andres@anarazel.de\n> <mailto:andres@anarazel.de>> wrote:\n> \n>     I agree it makes sense to have logged sequences with unlogged tables. We\n>     should call out the behavioural change somewhere prominent in the\n>     release\n>     notes.\n> \n\nI'm not sure I follow. If we allow logged sequences with unlogged\ntables, there'd be no behavioral change, no?\n\n> \n> We can/do already support that unlikely use case by allowing one to\n> remove the OWNERSHIP dependency between the table and the sequence.\n> \n> I'm fine with owned sequences tracking the persistence attribute of the\n> owning table.\n> \n\nSo essentially an independent sequence, used in a default value.\n\n> I don't think we should make pg_upgrade change the loggedness of\n> sequences.\n> \n> \n> We are willing to change the default behavior here so it is going to\n> affect dump/restore anyway, might as well fully commit and do the same\n> for pg_upgrade.  
The vast majority of users will benefit from the new\n> default behavior.\n> \n\nWhatever we do, I think we should keep the pg_dump and pg_upgrade\nbehavior as consistent as possible.\n\n> I don't actually get, though, how that would play with pg_dump since it\n> always emits an unowned, and thus restored as logged, sequence first and\n> then alters the sequence to be owned by the table.  Thus restoring an\n> old SQL dump into the v15 is going to fail if we prohibit\n> unlogged-table/logged-sequence; unless we actively change the\n> logged-ness of the sequence when subordinating it to a table.\n> \n\nYeah. I guess we'd need to either automatically switch the sequence to\nthe right persistence when linking it to the table, or maybe we could\nmodify pg_dump to emit UNLOGGED when the table is unlogged (but that\nwould work only when using the new pg_dump).\n\n> Thus, the choices seem to be:\n> \n> 1) implement forced persistence agreement for owned sequences, changing\n> the sequence to match the table when the alter table happens, and during\n> pg_upgrade.\n> 2) do not force persistence agreement for owned sequences\n> \n> If choosing option 2, are you on board with changing the behavior of\n> CREATE UNLOGGED TABLE with respect to any auto-generated sequences?\n> \n\nWhat behavior change, exactly? To create the sequences as UNLOGGED, but\nwe'd not update the persistence after that?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 31 Mar 2022 21:36:06 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 12:36 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 3/31/22 19:35, David G. 
Johnston wrote:\n> > On Thu, Mar 31, 2022 at 9:28 AM Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> wrote:\n> >\n> > I agree it makes sense to have logged sequences with unlogged\n> tables. We\n> > should call out the behavioural change somewhere prominent in the\n> > release\n> > notes.\n> >\n>\n> I'm not sure I follow. If we allow logged sequences with unlogged\n> tables, there's be no behavioral change, no?\n>\n>\nAs noted below, the behavior change is in how CREATE TABLE behaves. Not\nwhether or not mixed persistence is allowed.\n\n\n> or maybe we could\n> modify pg_dump to emit UNLOGGED when the table is unlogged (but that\n> would work only when using the new pg_dump).\n>\n\nYes, the horse has already left the barn. I don't really have an opinion\non whether to leave the barn door open or closed.\n\n\n>\n> > If choosing option 2, are you on board with changing the behavior of\n> > CREATE UNLOGGED TABLE with respect to any auto-generated sequences?\n> >\n>\n> What behavior change, exactly? To create the sequences as UNLOGGED, but\n> we'd not update the persistence after that?\n>\n>\nToday, a newly created unlogged table with an automatically owned sequence\n(serial, generated identity) has a logged sequence. This patch changes\nthat so the new automatically owned sequence is unlogged. This seems to be\ngenerally agreed upon as being desirable - but given the fact that unlogged\ntables will not have unlogged sequences it seems worth confirming that this\nminor inconsistency is acceptable.\n\nThe first newly added behavior is just allowing sequences to be unlogged.\nThat is the only mandatory feature introduced by this patch and doesn't\nseem contentious.\n\nThe second newly added behavior being proposed is to have the persistence\nof the sequence be forcibly matched to the table. 
Whether this is\ndesirable is the main point under discussion.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 12:55:53 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "\n\nOn 3/31/22 21:55, David G. Johnston wrote:\n> On Thu, Mar 31, 2022 at 12:36 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n>     On 3/31/22 19:35, David G. Johnston wrote:\n>     > On Thu, Mar 31, 2022 at 9:28 AM Andres Freund <andres@anarazel.de\n>     <mailto:andres@anarazel.de>\n>     > <mailto:andres@anarazel.de <mailto:andres@anarazel.de>>> wrote:\n>     >\n>     >     I agree it makes sense to have logged sequences with unlogged\n>     tables. We\n>     >     should call out the behavioural change somewhere prominent in the\n>     >     release\n>     >     notes.\n>     >\n> \n>     I'm not sure I follow. If we allow logged sequences with unlogged\n>     tables, there's be no behavioral change, no?\n> \n> \n> As noted below, the behavior change is in how CREATE TABLE behaves.  Not\n> whether or not mixed persistence is allowed.\n>  \n> \n>     or maybe we could\n>     modify pg_dump to emit UNLOGGED when the table is unlogged (but that\n>     would work only when using the new pg_dump).\n> \n> \n> Yes, the horse has already left the barn.  I don't really have an\n> opinion on whether to leave the barn door open or closed.\n>  \n> \n> \n>     > If choosing option 2, are you on board with changing the behavior of\n>     > CREATE UNLOGGED TABLE with respect to any auto-generated sequences?\n>     >\n> \n>     What behavior change, exactly? To create the sequences as UNLOGGED, but\n>     we'd not update the persistence after that?\n> \n> \n> Today, a newly created unlogged table with an automatically owned\n> sequence (serial, generated identity) has a logged sequence.  This patch\n> changes that so the new automatically owned sequence is unlogged.  
This\n> seems to be generally agreed upon as being desirable - but given the\n> fact that unlogged tables will not have unlogged sequences it seems\n> worth confirming that this minor inconsistency is acceptable.\n> \n> The first newly added behavior is just allowing sequences to be\n> unlogged.  That is the only mandatory feature introduced by this patch\n> and doesn't seem contentious.\n> \n> The second newly added behavior being proposed is to have the\n> persistence of the sequence be forcibly matched to the table.  Whether\n> this is desirable is the main point under discussion.\n> \n\nRight. The latest version of the patch also prohibits changing\npersistence of owned sequences directly. But that's probably considered\n to be part of the second behavior.\n\nI agree the first part is not contentious, so shall we extract this part\nof the patch and get that committed for PG15? Or is that too late to\nmake such changes to the patch?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 31 Mar 2022 22:05:30 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 1:05 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n> I agree the first part is not contentious, so shall we extract this part\n> of the patch and get that committed for PG15? 
Or is that too late to\n> make such changes to the patch?\n>\n>\nThe minimum viable feature for me, given the written goal for the patch and\nthe premise of not changing any existing behavior, is:\n\nDB State: Allow a sequence to be unlogged.\nCommand: ALTER SEQUENCE SET UNLOGGED\nLimitation: The above command fails if the sequence is unowned, or it is\nowned and the table owning it is not UNLOGGED\n\n(optional safety) Limitation: Changing a table from unlogged to logged\nwhile owning unlogged sequences would be prohibited\n(optional safety) Compensatory Behavior: Add the ALTER SEQUENCE SET LOGGED\ncommand for owned sequences to get them logged again in preparation for\nchanging the table to being logged.\n\nIn particular, I don't see CREATE UNLOGGED SEQUENCE to be all that valuable\nsince CREATE UNLOGGED TABLE wouldn't leverage it.\n\nThe above, possibly only half-baked, patch scope does not change any\nexisting behavior but allows for the stated goal: an unlogged table having\nan unlogged sequence. The DBA just has to execute the ALTER SEQUENCE\ncommand on all relevant sequences. They can't even really get it wrong\nsince only relevant sequences can be altered. Not having CREATE TABLE make\nan unlogged sequence by default is annoying though and likely should be\nchanged - though it can leverage ALTER SEQUENCE too.\n\nAnything else they wish to do can be done via a combination of ownership\nmanipulation and, worst case, dropping and recreating the sequence. Though\nallowed for unowned unlogged sequences, while outside the explicit goal of\nthe patch, would be an easy add (just don't error on the ALTER SEQUENCE SET\nUNLOGGED when the sequence is unowned).\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 13:40:11 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 1:40 PM David G. 
Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> The DBA just has to execute the ALTER SEQUENCE command on all relevant\n> sequences.\n>\n\nAdditionally, if we do not implement the forced matching of persistence mode,\nwe should consider adding an \"ALTER TABLE SET ALL SEQUENCES TO UNLOGGED\"\ncommand for convenience. Or maybe make it a function - which would allow\nfor SQL execution against a catalog lookup.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 14:11:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 3/31/22 22:40, David G. Johnston wrote:\n> On Thu, Mar 31, 2022 at 1:05 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> \n>     I agree the first part is not contentious, so shall we extract this part\n>     of the patch and get that committed for PG15? 
Or is that too late to\n> make such changes to the patch?\n> \n> \n> The minimum viable feature for me, given the written goal for the patch\n> and the premise of not changing any existing behavior, is:\n> \n> DB State: Allow a sequence to be unlogged.\n> Command: ALTER SEQUENCE SET UNLOGGED\n> Limitation: The above command fails if the sequence is unowned, or it is\n> owned and the table owning it is not UNLOGGED\n> \n> (optional safety) Limitation: Changing a table from unlogged to logged\n> while owning unlogged sequences would be prohibited\n> (optional safety) Compensatory Behavior: Add the ALTER SEQUENCE SET\n> LOGGED command for owned sequences to get them logged again in\n> preparation for changing the table to being logged.\n> \n> In particular, I don't see CREATE UNLOGGED SEQUENCE to be all that\n> valuable since CREATE UNLOGGED TABLE wouldn't leverage it.\n> \n\nHmm, so what about doing a little bit different thing:\n\n1) owned sequences inherit persistence of the table by default\n\n2) allow ALTER SEQUENCE to change persistence for all sequences (no\nrestriction for owned sequences)\n\n3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\nmatching the initial table persistence\n\nIMHO (1) would address vast majority of cases, which simply want the\nsame persistence for the whole table and all auxiliary objects. (2)\nwould address use cases requiring different persistence for sequences\n(including owned ones).\n\nI'm not sure about (3) though, maybe that's overkill.\n\nOf course, we'll always have problems with older releases, as it's not\nclear whether a logged sequence on unlogged table would be desirable or\nis used just because unlogged sequences were not supported. 
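In SQL terms, the three-point proposal above would look roughly like the following. (All of this is a sketch of the *proposed* behavior under discussion, not committed syntax; the table and sequence names are hypothetical.)

```sql
-- (1) a newly created owned sequence inherits the table's persistence,
--     so the identity sequence behind t.id would start out unlogged:
CREATE UNLOGGED TABLE t (id bigint GENERATED ALWAYS AS IDENTITY);

-- (2) ALTER SEQUENCE could change persistence even for owned sequences:
ALTER SEQUENCE t_id_seq SET LOGGED;

-- (3) switching the table would flip only those sequences still matching
--     the table's initial persistence; t_id_seq, already switched to
--     logged above, would be left alone:
ALTER TABLE t SET LOGGED;
```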
(We do have\nthe same issue for logged tables too, but I doubt anyone really needs\ndefining unlogged sequences on logged tables.)\n\nSo no matter what we do, we'll make the wrong decision in some cases.\n\n> The above, possibly only half-baked, patch scope does not change any\n> existing behavior but allows for the stated goal: an unlogged table\n> having an unlogged sequence.  The DBA just has to execute the ALTER\n> SEQUENCE command on all relevant sequences.  They can't even really get\n> it wrong since only relevant sequences can be altered.  Not having\n> CREATE TABLE make an unlogged sequence by default is annoying though and\n> likely should be changed - though it can leverage ALTER SEQUENCE too.\n> \n> Anything else they wish to do can be done via a combination of ownership\n> manipulation and, worse case, dropping and recreating the sequence. \n> Though allowed for unowned unlogged sequences, while outside the\n> explicit goal of the patch, would be an easy add (just don't error on\n> the ALTER SEQUENCE SET UNLOGGED when the sequence is unowned).\n> \n\nYeah. I think my proposal is pretty close to that, except that the\nsequence would first inherit persistence from the table, and there'd be\nan ALTER SEQUENCE for owned sequences where it differs. (And non-owned\nsequences would be created as logged/unlogged explicitly.)\n\nI don't think we need to worry about old pg_dump versions on new PG\nversions, because that's not really supported.\n\nAnd for old PG versions the behavior would differ a bit depending on the\npg_dump version used. 
With old pg_dump version, the ALTER SEQUENCE would\nnot be emitted, so all owned sequences would inherit table persistence.\nWith new pg_dump we'd get the expected persistence (which might differ).\n\nThat needs to be documented, of course.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Apr 2022 00:43:04 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 3:43 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 3/31/22 22:40, David G. 
table by default\n>\n\nThis is the contentious point. If we are going to do it by default - thus\nchanging existing behavior - I would rather just do it always. This is\nalso underspecified, there are multiple ways for a sequence to become owned.\n\nPersonally I'm for the choice to effectively remove the sequence's own\nconcept of logged/unlogged when it is owned by a table and to always just\nuse the table's value.\n\n\n> 2) allow ALTER SEQUENCE to change persistence for all sequences (no\n> restriction for owned sequences)\n>\n\nA generalization that is largely incontrovertible.\n\n>\n> 3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\n> matching the initial table persistence\n>\n\nI'm leaning against this, leaving users to set each owned sequence to\nlogged/unlogged as they see fit if they want something other than\nall-or-nothing. I would stick to only providing an easy method to get the\nassumed desired all-same behavior.\nALTER TABLE SET [UN]LOGGED, SET ALL SEQUENCES TO [UN]LOGGED;\n\n\n> IMHO (1) would address vast majority of cases, which simply want the\n> same persistence for the whole table and all auxiliary objects. (2)\n> would address use cases requiring different persistence for sequences\n> (including owned ones).\n>\n> I'm not sure about (3) though, maybe that's overkill.\n>\n> Of course, we'll always have problems with older releases, as it's not\n> clear whether a logged sequence on unlogged table would be desirable or\n> is used just because unlogged sequences were not supported. (We do have\n> the same issue for logged tables too, but I doubt anyone really needs\n> defining unlogged sequences on logged tables.)\n>\n> So no matter what we do, we'll make the wrong decision in some cases.\n>\n\nAgain, I don't have too much concern here because you lose very little by\nhaving an unowned sequence. 
Which is why I'm fine with owned sequences\nbecoming even moreso implementation details that adhere to the persistence\nmode of the owning relation. But if the goal here is to defer such a\ndecision then the tradeoff is the DBA is given control and they get to\nenforce consistency even if they are not benefitting from the flexibility.\n\n> > Not having\n> > CREATE TABLE make an unlogged sequence by default is annoying though and\n> > likely should be changed - though it can leverage ALTER SEQUENCE too.\n>\n> Yeah. I think my proposal is pretty close to that, except that the\n> sequence would first inherit persistence from the table, and there'd be\n> an ALTER SEQUENCE for owned sequences where it differs. (And non-owned\n> sequences would be created as logged/unlogged explicitly.)\n>\n\nI don't have any real problem with 1 or 2, they fill out the feature so it\nis generally designed as opposed to solving a very specific problem.\n\nFor 1:\nThe \"ADD COLUMN\" (whether in CREATE TABLE or ALTER TABLE) pathway will\nproduce a new sequence whose persistence matches that of the target table.\nWhile a behavior change it is one aligned with the goal of the patch for\ntypical ongoing behavior and should benefit way more people than it may\ninconvenience. The \"sequence not found\" error that would be generated\nseems minimally impactful.\n\nThe \"ALTER SEQUENCE OWNED BY\" pathway will not change the sequence's\npersistence. This is what pg_dump will use for serial/bigserial\nThe \"ALTER TABLE ALTER COLUMN\" pathway will not change the sequence's\npersistence. 
This is what pg_dump will use for generated always as identity\n\nProvide a general purpose ALTER SEQUENCE SET [UN]LOGGED command\n\nProvide an SQL Command to change all owned sequences of a table to be\nUNLOGGED or LOGGED (I mentioned a function as well if someone thinks it\nworth the time - in lieu of a function a psql script leveraging \\gexec may\nbe nice to reference).\n\n\n> I don't think we need to worry about old pg_dump versions on new PG\n> versions, because that's not really supported.\n>\n\nCorrect.\n\n>\n> And for old PG versions the behavior would differ a bit depending on the\n> pg_dump version used. With old pg_dump version, the ALTER SEQUENCE would\n> not be emitted,\n\n\nCorrect, nothing else is emitted either...\n\n\n> That's need to be documented, of course.\n>\n>\nIt (the general promises for pg_dump) is documented.\n\nhttps://www.postgresql.org/docs/current/app-pgdump.html : Notes\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 16:36:05 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:14 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> * When linking a sequence to a table (ALTER SEQUENCE ... OWNED BY),\n> there's an ereport(ERROR) if the relpersistence values do not match.\n>\n> * Disallow changing persistence for owned sequences directly.\n\nWait, what? I don't understand why we would want to do either of these things.\n\nIt seems to me that it's totally fine to use a logged table with an\nunlogged sequence, or an unlogged table with a logged sequence, or any\nof the other combinations. You get what you ask for, so make sure to\nask for what you want. And that's it.\n\nIf you say something like CREATE [UNLOGGED] TABLE foo (a serial) it's\nfine for serial to attribute the same persistence level to the\nsequence as it does to the table. But when that's dumped, it's going\nto be dumped as a CREATE TABLE command and a CREATE SEQUENCE command,\neach of which has a separate persistence level. So you can recreate\nwhatever state you have.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:25:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 4/1/22 02:25, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 10:14 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> * When linking a sequence to a table (ALTER SEQUENCE ... OWNED BY),\n>> there's an ereport(ERROR) if the relpersistence values do not match.\n>>\n>> * Disallow changing persistence for owned sequences directly.\n> \n> Wait, what? I don't understand why we would want to do either of these things.\n> \n> It seems to me that it's totally fine to use a logged table with an\n> unlogged sequence, or an unlogged table with a logged sequence, or any\n> of the other combinations. 
You get what you ask for, so make sure to\n> ask for what you want. And that's it.\n> \n> If you say something like CREATE [UNLOGGED] TABLE foo (a serial) it's\n> fine for serial to attribute the same persistence level to the\n> sequence as it does to the table. But when that's dumped, it's going\n> to be dumped as a CREATE TABLE command and a CREATE SEQUENCE command,\n> each of which has a separate persistence level. So you can recreate\n> whatever state you have.\n> \n\nWell, yeah. I did this because the patch was somewhat inconsistent when\nhandling owned sequences - it updated persistence for owned sequences\nwhen persistence for the table changed, expecting to keep them in sync,\nbut then it also allowed operations that'd break it.\n\nBut that started a discussion about exactly this, and AFAICS there's\nagreement we want to allow the table and owned sequences to have\ndifferent persistence values.\n\nThe discussion about the details is still ongoing, but I think it's\nclear we'll ditch the restrictions you point out.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Apr 2022 02:42:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 5:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 31, 2022 at 10:14 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > * When linking a sequence to a table (ALTER SEQUENCE ... OWNED BY),\n> > there's an ereport(ERROR) if the relpersistence values do not match.\n> >\n> > * Disallow changing persistence for owned sequences directly.\n>\n> Wait, what? I don't understand why we would want to do either of these\n> things.\n>\n> It seems to me that it's totally fine to use a logged table with an\n> unlogged sequence, or an unlogged table with a logged sequence, or any\n> of the other combinations. 
You get what you ask for, so make sure to\n> ask for what you want. And that's it.\n>\n\nIt seems reasonable to extend the definition of \"ownership of a sequence\"\nin this way. We always let you create unowned sequences with whatever\npersistence you like if you need flexibility.\n\nThe \"give the user power\" argument is also valid. But since they already\nhave power through unowned sequences, having the owned sequences more\nnarrowly defined doesn't detract from usability, and in many ways enhances\nit by further reinforcing the fact that the sequence internally used when\nyou say \"GENERATED ALWAYS AS IDENTITY\" is an implementation detail - one\nthat has the same persistence as the table.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 17:44:08 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 8:42 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Well, yeah. I did this because the patch was somewhat inconsistent when\n> handling owned sequences - it updated persistence for owned sequences\n> when persistence for the table changed, expecting to keep them in sync,\n> but then it also allowed operations that'd break it.\n\nOops.\n\n> But that started a discussion about exactly this, and AFAICS there's\n> agreement we want to allow the table and owned sequences to have\n> different persistence values.\n>\n> The discussion about the details is still ongoing, but I think it's\n> clear we'll ditch the restrictions you point out.\n\nGreat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:54:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 8:44 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> It seems reasonable to extend the definition of \"ownership of a sequence\" in this way. We always let you create unowned sequences with whatever persistence you like if you need flexibility.\n\nI'd say it doesn't seem to have any benefit, and therefore seems\nunreasonable. Right now, OWNED BY is documented as a way of getting\nthe sequence to automatically be dropped if the table column goes\naway. 
If it already did five things, maybe you could argue that this\nthing is just like the other five and therefore changing it is the\nright idea. But going from one thing to two that don't seem to have\nmuch to do with each other seems much less reasonable, especially\nsince it doesn't seem to buy anything.\n\n> The \"give the user power\" argument is also valid. But since they already have power through unowned sequences, having the owned sequences more narrowly defined doesn't detract from usability, and in many ways enhances it by further reinforcing the fact that the sequence internally used when you say \"GENERATED ALWAYS AS IDENTITY\" is an implementation detail - one that has the same persistence as the table.\n\nI think there's a question about what happens in the GENERATED ALWAYS\nAS IDENTITY case. The DDL commands that create such sequences are of\nthe form ALTER TABLE something ALTER COLUMN somethingelse GENERATED\nALWAYS AS (sequence_parameters), and if we need to specify somewhere\nin the command whether the sequence should be logged or unlogged, how do we do\nthat? Consider:\n\nrhaas=# create unlogged table xyz (a int generated always as identity);\nCREATE TABLE\nrhaas=# \\d+ xyz\n Unlogged table \"public.xyz\"\n Column | Type | Collation | Nullable | Default\n | Storage | Compression | Stats target | Description\n--------+---------+-----------+----------+------------------------------+---------+-------------+--------------+-------------\n a | integer | | not null | generated always as\nidentity | plain | | |\nAccess method: heap\n\nrhaas=# \\d+ xyz_a_seq\n Sequence \"public.xyz_a_seq\"\n Type | Start | Minimum | Maximum | Increment | Cycles? | Cache\n---------+-------+---------+------------+-----------+---------+-------\n integer | 1 | 1 | 2147483647 | 1 | no | 1\nSequence for identity column: public.xyz.a\n\nIn this new system, does the user still get a logged sequence? If they\nget an unlogged sequence, how does dump-and-restore work? 
What if they\nwant to still have a logged sequence? But for sequences that are\nsimply owned, there is no problem here, and I think that inventing one\nwould not be a good plan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 21:03:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 6:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> In this new system, does the user still get a logged sequence? If they\n> get an unlogged sequence, how does dump-and-restore work? What if they\n> want to still have a logged sequence? But for sequences that are\n> simply owned, there is no problem here, and I think that inventing one\n> would not be a good plan.\n>\n\nThere is no design problem here, just coding (including special handling\nfor pg_upgrade). When making a sequence owned we can, without requiring\nany syntax, choose to change its persistence to match the table owning it.\nOr not. These are basically options 1 and 2 I laid out earlier:\n\nhttps://www.postgresql.org/message-id/CAKFQuwY6GsC1CvweCkgaYi-%2BHNF2F-fqCp8JpdFK9bk18gqzFA%40mail.gmail.com\n\nI slightly favor 1, making owned sequences implementation details having a\nmatched persistence mode. But we seem to be leaning toward option 2 per\nsubsequent emails. I'm ok with that - just give me an easy way to change\nall my upgraded logged sequences to unlogged. And probably do the same if\nI change my table's mode as well.\n\nThat there is less implementation complexity is nice but the end user won't\nsee that. I think the typical end user would appreciate having the\nsequence stay in sync with the table instead of having to worry about those\nkinds of details. Hence my slight favor given toward 1.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 18:16:11 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Thu, Mar 31, 2022 at 6:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 31, 2022 at 8:44 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> > The \"give the user power\" argument is also valid. 
But since they\n> already have power through unowned sequences, having the owned sequences\n> more narrowly defined doesn't detract from usability, and in many ways\n> enhances it by further reinforcing the fact that the sequence internally\n> used when you say \"GENERATED ALWAYS AS IDENTITY\" is an implementation\n> detail - one that has the same persistence as the table.\n>\n> I think there's a question about what happens in the GENERATED ALWAYS\n> AS IDENTITY case. The DDL commands that create such sequences are of\n> the form ALTER TABLE something ALTER COLUMN somethingelse GENERATED\n> ALWAYS AS (sequence_parameters), and if we need to specify somewhere\n> in the whether the sequence should be logged or unlogged, how do we do\n> that?\n\n\nI give answers for the \"owned sequences match their owning table's\npersistence\" model below:\n\nYou would not need to specify it - the table is specified and that is\nsufficient to know what value to choose.\n\n\n> Consider:\n>\n> rhaas=# create unlogged table xyz (a int generated always as identity);\n> CREATE TABLE\n> rhaas=# \\d+ xyz\n> Unlogged table \"\n> public.xyz\"\n> Column | Type | Collation | Nullable | Default\n> | Storage | Compression | Stats target | Description\n>\n> --------+---------+-----------+----------+------------------------------+---------+-------------+--------------+-------------\n> a | integer | | not null | generated always as\n> identity | plain | | |\n> Access method: heap\n>\n> rhaas=# \\d+ xyz_a_seq\n> Sequence \"public.xyz_a_seq\"\n> Type | Start | Minimum | Maximum | Increment | Cycles? 
| Cache\n> ---------+-------+---------+------------+-----------+---------+-------\n> integer | 1 | 1 | 2147483647 | 1 | no | 1\n> Sequence for identity column: public.xyz.a\n>\n> In this new system, does the user still get a logged sequence?\n\nNo\n\n\n> If they\n> get an unlogged sequence, how does dump-and-restore work?\n\nAs described in the first response, since ALTER COLUMN is used during\ndump-and-restore, the sequence creation occurs in a command where we know\nthe owning table is unlogged so the created sequence is unlogged.\n\n\n> What if they\n> want to still have a logged sequence?\n\nI was expecting the following to work, though it does not presently:\n\nALTER SEQUENCE yetanotherthing OWNED BY NONE;\nERROR: cannot change ownership of identity sequence\n\nALTER SEQUENCE yetanotherthing SET LOGGED;\n\nIMO, the generated case is the stronger one for not allowing them to be\ndifferent. They can fall back onto the DEFAULT\nnextval('sequence_that_is_unowned') option to get the desired behavior.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 19:31:29 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "\nOn 01.04.22 00:43, Tomas Vondra wrote:\n> Hmm, so what about doing a little bit different thing:\n> \n> 1) owned sequences inherit persistence of the table by default\n> \n> 2) allow ALTER SEQUENCE to change persistence for all sequences (no\n> restriction for owned sequences)\n> \n> 3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\n> matching the initial table persistence\n\nConsider that an identity sequence creates an \"internal\" dependency and \na serial sequence creates an \"auto\" dependency.\n\nAn \"internal\" dependency means that the internal object shouldn't really \nbe operated on directly. (In some cases it's allowed for convenience.) \nSo I think in that case the sequence must follow the table's persistence \nin all cases. This is accomplished by setting the initial persistence \nto the table's, making ALTER TABLE propagate persistence changes, and \nprohibiting direct ALTER SEQUENCE SET.\n\nAn \"auto\" dependency is looser, so manipulating both objects \nindependently can be allowed. 
In that case, I would do (1), (2), and (3).\n\n(I think your (3) is already the behavior in the patch, since there are \nonly two persistence levels in play at that point.)\n\nI wanted to check if you can have a persistent sequence owned by a temp \ntable, but that is rejected because both sequence and table must be in \nthe same schema. So the sequence owned-by schema does insist on some \ntight coupling. So for example, once a sequence is owned by a table, \nyou can't move it around or change the ownership.\n\n\n", "msg_date": "Fri, 1 Apr 2022 18:22:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 01.04.22 18:22, Peter Eisentraut wrote:\n> \n> On 01.04.22 00:43, Tomas Vondra wrote:\n>> Hmm, so what about doing a little bit different thing:\n>>\n>> 1) owned sequences inherit persistence of the table by default\n>>\n>> 2) allow ALTER SEQUENCE to change persistence for all sequences (no\n>> restriction for owned sequences)\n>>\n>> 3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\n>> matching the initial table persistence\n> \n> Consider that an identity sequence creates an \"internal\" dependency and \n> a serial sequence creates an \"auto\" dependency.\n> \n> An \"internal\" dependency means that the internal object shouldn't really \n> be operated on directly.  (In some cases it's allowed for convenience.) \n> So I think in that case the sequence must follow the table's persistence \n> in all cases.  This is accomplished by setting the initial persistence \n> to the table's, making ALTER TABLE propagate persistence changes, and \n> prohibiting direct ALTER SEQUENCE SET.\n\nBut to make pg_upgrade work for identity sequences of unlogged tables, \nwe need to allow ALTER SEQUENCE ... SET LOGGED on such sequences. 
Which \nI guess is not a real problem in the end.\n\n\n", "msg_date": "Fri, 1 Apr 2022 18:31:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Fri, Apr 1, 2022 at 9:22 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> On 01.04.22 00:43, Tomas Vondra wrote:\n> > Hmm, so what about doing a little bit different thing:\n> >\n> > 1) owned sequences inherit persistence of the table by default\n> >\n> > 2) allow ALTER SEQUENCE to change persistence for all sequences (no\n> > restriction for owned sequences)\n> >\n> > 3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\n> > matching the initial table persistence\n>\n> Consider that an identity sequence creates an \"internal\" dependency and\n> a serial sequence creates an \"auto\" dependency.\n>\n> An \"internal\" dependency means that the internal object shouldn't really\n> be operated on directly. (In some cases it's allowed for convenience.)\n> So I think in that case the sequence must follow the table's persistence\n> in all cases. This is accomplished by setting the initial persistence\n> to the table's, making ALTER TABLE propagate persistence changes, and\n> prohibiting direct ALTER SEQUENCE SET.\n>\n> An \"auto\" dependency is looser, so manipulating both objects\n> independently can be allowed. In that case, I would do (1), (2), and (3).\n>\n> (I think your (3) is already the behavior in the patch, since there are\n> only two persistence levels in play at that point.)\n>\n\nI would support having a serial sequence be allowed to be changed\nindependently while an identity sequence is made to match the table it is\nowned by. 
Older version restores would produce a logged serial sequence\n(since the sequence is independently created and then attached to the\ntable) on unlogged tables but since identity sequences are only ever\nimplicitly created they would become unlogged as part of the restore.\nThough I suspect that pg_upgrade will need to change them explicitly.\n\nI would support all owned sequences as well, but that seems unreachable at\nthe moment.\n\nDavid J.", "msg_date": "Fri, 1 Apr 2022 09:33:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Fri, Apr 1, 2022 at 9:31 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 01.04.22 18:22, Peter Eisentraut wrote:\n> >\n> > On 01.04.22 00:43, Tomas Vondra wrote:\n> >> Hmm, so what about doing a little bit different thing:\n> >>\n> >> 1) owned sequences inherit persistence of the table by default\n> >>\n> >> 2) allow ALTER SEQUENCE to change persistence for all sequences (no\n> >> restriction for owned sequences)\n> >>\n> >> 3) ALTER TABLE ... SET [UN]LOGGED changes persistence for sequences\n> >> matching the initial table persistence\n> >\n> > Consider that an identity sequence creates an \"internal\" dependency and\n> > a serial sequence creates an \"auto\" dependency.\n> >\n> > An \"internal\" dependency means that the internal object shouldn't really\n> > be operated on directly. (In some cases it's allowed for convenience.)\n> > So I think in that case the sequence must follow the table's persistence\n> > in all cases. This is accomplished by setting the initial persistence\n> > to the table's, making ALTER TABLE propagate persistence changes, and\n> > prohibiting direct ALTER SEQUENCE SET.\n>\n> But to make pg_upgrade work for identity sequences of unlogged tables,\n> we need to allow ALTER SEQUENCE ... SET LOGGED on such sequences. 
Which\n> I guess is not a real problem in the end.\n>\n\nIndeed, we need the syntax anyway. We can constrain it though, and error\nwhen trying to make them different but allow making them the same. To\nchange a table's persistence you have to then change the table first -\nputting them back into different states - then sync up the sequence again.\n\nDavid J.\n", "msg_date": "Fri, 1 Apr 2022 09:36:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Fri, Apr 1, 2022 at 12:31 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> > An \"internal\" dependency means that the internal object shouldn't really\n> > be operated on directly. (In some cases it's allowed for convenience.)\n> > So I think in that case the sequence must follow the table's persistence\n> > in all cases. This is accomplished by setting the initial persistence\n> > to the table's, making ALTER TABLE propagate persistence changes, and\n> > prohibiting direct ALTER SEQUENCE SET.\n>\n> But to make pg_upgrade work for identity sequences of unlogged tables,\n> we need to allow ALTER SEQUENCE ... SET LOGGED on such sequences. Which\n> I guess is not a real problem in the end.\n\nAnd I think also SET UNLOGGED, since it would be weird IMHO to make\nsuch a change irreversible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 13:41:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 01.04.22 18:31, Peter Eisentraut wrote:\n>> Consider that an identity sequence creates an \"internal\" dependency \n>> and a serial sequence creates an \"auto\" dependency.\n>>\n>> An \"internal\" dependency means that the internal object shouldn't \n>> really be operated on directly.  (In some cases it's allowed for \n>> convenience.) So I think in that case the sequence must follow the \n>> table's persistence in all cases.  
This is accomplished by setting the \n>> initial persistence to the table's, making ALTER TABLE propagate \n>> persistence changes, and prohibiting direct ALTER SEQUENCE SET.\n> \n> But to make pg_upgrade work for identity sequences of unlogged tables, \n> we need to allow ALTER SEQUENCE ... SET LOGGED on such sequences.  Which \n> I guess is not a real problem in the end.\n\nHere is an updated patch that fixes this pg_dump/pg_upgrade issue and \nalso adds a few more comments and documentation sentences about what \nhappens and what is allowed. I didn't change any behaviors; it seems we \ndidn't have consensus to do that.\n\nThese details about how tables and sequences are linked or not are \npretty easy to adjust, if people still have some qualms.", "msg_date": "Sun, 3 Apr 2022 19:19:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Sun, Apr 3, 2022 at 10:19 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> Here is an updated patch that fixes this pg_dump/pg_upgrade issue and\n> also adds a few more comments and documentation sentences about what\n> happens and what is allowed. I didn't change any behaviors; it seems we\n> didn't have consensus to do that.\n>\n\nIIUC the patch behavior with respect to migration is to have pg_upgrade\nretain the current logged persistence mode for all owned sequences\nregardless of the owning table's persistence. The same goes for pg_dump\nfor serial sequences since they will never be annotated with UNLOGGED and\nsimply adding an ownership link doesn't cause a table rewrite.\n\nHowever, tables having an identity sequence seem to be unaddressed in this\npatch. 
The existing (and unchanged) pg_dump.c code results in:\n\nCREATE TABLE public.testgenid (\n    getid bigint NOT NULL\n);\n\nALTER TABLE public.testgenid OWNER TO postgres;\n\nALTER TABLE public.testgenid ALTER COLUMN getid ADD GENERATED ALWAYS AS\nIDENTITY (\n    SEQUENCE NAME public.testgenid_getid_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1\n);\n\nISTM that we need to add the ability to specify [UN]LOGGED in those\nsequence_options and have pg_dump.c output the choice explicitly instead of\nrelying upon a default.\n\nWithout that, the post-patch dump/restore cannot retain the existing\npersistence mode value for the sequence. For the default we would want to\nhave ALTER TABLE ALTER COLUMN be LOGGED to match the claim that pg_dump\ndoesn't change the persistence mode. The main decision, then, is whether\nCREATE TABLE and ALTER TABLE ADD COLUMN should default to UNLOGGED (this\ncombination preserves existing values via pg_dump while still letting the\nuser benefit from the new feature without having to specify UNLOGGED in\nmultiple places) or LOGGED (preserving existing values and consistency).\nAll UNLOGGED is an option but I think it would need to be considered along\nwith pg_upgrade changing them all as well. Again, limiting this decision\nto identity sequences only.\n\nDavid J.\n", "msg_date": "Sun, 3 Apr 2022 11:50:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 03.04.22 20:50, David G. Johnston wrote:\n> However, tables having an identity sequence seem to be unaddressed in \n> this patch.  The existing (and unchanged) pg_dump.c code results in:\n\nIt is addressed. 
For example, run this in PG14:\n\ncreate unlogged table t1 (a int generated always as identity, b text);\n\nThen dump it with PG15 with this patch:\n\nCREATE UNLOGGED TABLE public.t1 (\n    a integer NOT NULL,\n    b text\n);\n\n\nALTER TABLE public.t1 OWNER TO peter;\n\n--\n-- Name: t1_a_seq; Type: SEQUENCE; Schema: public; Owner: peter\n--\n\nALTER TABLE public.t1 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY (\n    SEQUENCE NAME public.t1_a_seq\n    START WITH 1\n    INCREMENT BY 1\n    NO MINVALUE\n    NO MAXVALUE\n    CACHE 1\n);\nALTER SEQUENCE public.t1_a_seq SET LOGGED;\n\n\n", "msg_date": "Sun, 3 Apr 2022 21:36:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:36 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.04.22 20:50, David G. Johnston wrote:\n> > However, tables having an identity sequence seem to be unaddressed in\n> > this patch. The existing (and unchanged) pg_dump.c code results in:\n>\n> It is addressed. For example, run this in PG14:\n>\n> create unlogged table t1 (a int generated always as identity, b text);\n>\n> Then dump it with PG15 with this patch:\n>\n\nSorry, I wasn't being specific enough. Per our documentation (and I seem\nto recall many comments from Tom):\n\"Because pg_dump is used to transfer data to newer versions of PostgreSQL,\nthe output of pg_dump can be expected to load into PostgreSQL server\nversions newer than pg_dump's version.\" [1]\n\nThat is what I'm getting on about when talking about migrations. So a v14\nSQL backup produced by a v14 pg_dump restored by a v15 psql. (custom format\nand pg_restore supposedly aren't supposed to be different though, right?)\n\n[1] https://www.postgresql.org/docs/current/app-pgdump.html\n\nDavid J.\n", "msg_date": "Sun, 3 Apr 2022 16:58:13 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:36 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.04.22 20:50, David G. Johnston wrote:\n> > However, tables having an identity sequence seem to be unaddressed in\n> > this patch. The existing (and unchanged) pg_dump.c code results in:\n>\n> It is addressed. For example, run this in PG14:\n>\n> ALTER TABLE public.t1 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY (\n>     SEQUENCE NAME public.t1_a_seq\n>     START WITH 1\n>     INCREMENT BY 1\n>     NO MINVALUE\n>     NO MAXVALUE\n>     CACHE 1\n> );\n> ALTER SEQUENCE public.t1_a_seq SET LOGGED;\n>\n\nOK, I do see the new code for this and see how my prior email was\nconfusing/wrong. 
I do still have the v14 dump file restoration concern but\nthat actually isn't something pg_dump.c has to (or even can) worry about.\nEnsuring that a v15+ dump represents the existing state correctly is\nbasically a given which is why I wasn't seeing how my comments would be\ninterpreted relative to that.\n\nFor the patch I'm still thinking we want to add [UN]LOGGED to\nsequence_options. Even if pg_dump doesn't utilize it, though aside from\npotential code cleanliness I don't see why it wouldn't. If absent, the\ndefault behavior shown here (sequence matches table, as per \"+\nseqstmt->sequence->relpersistence = cxt->relation->relpersistence;\" would\ntake effect) applies, otherwise the newly created sequence is as requested.\n\n From this, in the current patch, a pg_dump v14- produced dump file\nrestoration will change the persistence of owned sequences on an unlogged\ntable to unlogged from logged during restoration into v15+ (since the alter\nsequence will not be present after the alter table). A v15+ pg_dump\nproduced dump file will retain the logged persistence mode for the\nsequence. The only way to avoid this discrepancy is to have\nsequence_options take on a [UN]LOGGED option that defaults to LOGGED.\nThis then correctly reflects historical behavior and will produce a\nconsistently restored dump file.\n\nDavid J.\n", "msg_date": "Sun, 3 Apr 2022 18:16:45 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 04.04.22 01:58, David G. 
Johnston wrote:\n> \"Because pg_dump is used to transfer data to newer versions of \n> PostgreSQL, the output of pg_dump can be expected to load into \n> PostgreSQL server versions newer than pg_dump's version.\" [1]\n> \n> That is what I'm getting on about when talking about migrations.  So a \n> v14 SQL backup produced by a v14 pg_dump restored by a v15 psql.\n\nIt has always been the case that if you want the best upgrade \nexperience, you need to use the pg_dump that is >= server version.\n\nThe above quote is a corollary to that we don't want to gratuitously \nbreak SQL syntax compatibility. But I don't think that implies that the \nbehavior of those commands cannot change at all. Otherwise we could \nnever add new behavior with new defaults.\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:20:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 03.04.22 19:19, Peter Eisentraut wrote:\n> \n> On 01.04.22 18:31, Peter Eisentraut wrote:\n>>> Consider that an identity sequence creates an \"internal\" dependency \n>>> and a serial sequence creates an \"auto\" dependency.\n>>>\n>>> An \"internal\" dependency means that the internal object shouldn't \n>>> really be operated on directly.  (In some cases it's allowed for \n>>> convenience.) So I think in that case the sequence must follow the \n>>> table's persistence in all cases.  This is accomplished by setting \n>>> the initial persistence to the table's, making ALTER TABLE propagate \n>>> persistence changes, and prohibiting direct ALTER SEQUENCE SET.\n>>\n>> But to make pg_upgrade work for identity sequences of unlogged tables, \n>> we need to allow ALTER SEQUENCE ... SET LOGGED on such sequences. 
\n>> Which I guess is not a real problem in the end.\n> \n> Here is an updated patch that fixes this pg_dump/pg_upgrade issue and \n> also adds a few more comments and documentation sentences about what \n> happens and what is allowed.  I didn't change any behaviors; it seems we \n> didn't have consensus to do that.\n> \n> These details about how tables and sequences are linked or not are \n> pretty easy to adjust, if people still have some qualms.\n\nThis patch is now in limbo because it appears that the logical \nreplication of sequences feature might end up being reverted for PG15. \nThis unlogged sequences feature is really a component of that overall \nfeature.\n\nIf we think that logical replication of sequences might stay in, then I \nwould like to commit this patch as well.\n\nIf we think that it will be reverted, then this patch is probably just \ngoing to be in the way of that.\n\nWe could also move forward with this patch independently of the other \none. If we end up reverting the other one, then this one won't be very \nuseful but it won't really hurt anything and it would presumably become \nuseful eventually. What we presumably don't want is that the sequence \nreplication patch gets repaired for PG15 and we didn't end up committing \nthis patch because of uncertainty.\n\nThoughts?\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:12:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" }, { "msg_contents": "On 06.04.22 11:12, Peter Eisentraut wrote:\n> We could also move forward with this patch independently of the other \n> one.  If we end up reverting the other one, then this one won't be very \n> useful but it won't really hurt anything and it would presumably become \n> useful eventually.  
What we presumably don't want is that the sequence \n> replication patch gets repaired for PG15 and we didn't end up committing \n> this patch because of uncertainty.\n\nI have received some encouragement off-list to go ahead with this, so \nit's been committed.\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 17:24:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: unlogged sequences" } ]
[ { "msg_contents": "Hello,\n\nI ran into someone with a system where big queries scanning 8GB+ of\nall-in-cache data took consistently ~2.5x longer on a primary server\nthan on a replica. Both servers had concurrent activity on them but\nplenty of spare capacity and similar specs. After some investigation\nit turned out that on the primary there were (1) some select()\nsyscalls waiting for 1ms, which might indicate contended\nSpinLockAcquire() back-offs, and (2) a huge amount of time spent in:\n\n+ 93,31% 0,00% postgres postgres [.] index_getnext\n+ 93,30% 0,00% postgres postgres [.] index_fetch_heap\n+ 81,66% 0,01% postgres postgres [.] heap_page_prune_opt\n+ 75,85% 0,00% postgres postgres [.] TransactionIdLimitedForOldSnapshots\n+ 75,83% 0,01% postgres postgres [.] RelationHasUnloggedIndex\n+ 75,79% 0,00% postgres postgres [.] RelationGetIndexList\n+ 75,79% 75,78% postgres postgres [.] list_copy\n\nThe large tables in question have around 30 indexes. I see that\nheap_page_prune_opt()'s call to TransactionIdLimitedForOldSnapshots()\nacquires a couple of system-wide spinlocks, and also tests\nRelationAllowsEarlyPruning() which calls RelationHasUnloggedIndex()\nwhich says:\n\n * Tells whether any index for the relation is unlogged.\n *\n * Note: There doesn't seem to be any way to have an unlogged index attached\n * to a permanent table, but it seems best to keep this general so that it\n * returns sensible results even when they seem obvious (like for an unlogged\n * table) and to handle possible future unlogged indexes on permanent tables.\n\nIt calls RelationGetIndexList() which conses up a new copy of the list\nevery time, so that we can spin through it looking for unlogged\nindexes (and in this user's case there are none). 
I didn't try to\npoke at this in lab conditions, but from a glance a the code, I guess\nheap_page_prune_opt() is running for every index tuple except those\nthat reference the same heap page as the one before, so I guess it\nhappens a lot if the heap is not physically correlated with the index\nkeys. Ouch.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Jun 2019 01:21:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "old_snapshot_threshold vs indexes" }, { "msg_contents": "On Fri, Jun 21, 2019 at 1:21 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I ran into someone with a system where big queries scanning 8GB+ of\n> all-in-cache data took consistently ~2.5x longer on a primary server\n> than on a replica. Both servers had concurrent activity on them but\n> plenty of spare capacity and similar specs. After some investigation\n> it turned out that on the primary there were (1) some select()\n> syscalls waiting for 1ms, which might indicate contended\n> SpinLockAcquire() back-offs, and (2) a huge amount of time spent in:\n>\n> + 93,31% 0,00% postgres postgres [.] index_getnext\n> + 93,30% 0,00% postgres postgres [.] index_fetch_heap\n> + 81,66% 0,01% postgres postgres [.] heap_page_prune_opt\n> + 75,85% 0,00% postgres postgres [.] TransactionIdLimitedForOldSnapshots\n> + 75,83% 0,01% postgres postgres [.] RelationHasUnloggedIndex\n> + 75,79% 0,00% postgres postgres [.] RelationGetIndexList\n> + 75,79% 75,78% postgres postgres [.] list_copy\n\nOn my laptop, all prewarmed, no concurrency, the mere existence of 10\nbrin indexes causes a sequential scan to take ~5% longer and an\nuncorrelated index scan to take ~45% longer (correlated index scans\ndon't suffer). Here's a draft patch for v13 that fixes that problem\nby caching the result of RelationHasUnloggedIndex().\n\nReproducer scripts also attached. 
I ran them with shared_buffers=8GB,\nold_snapshot_threshold=10s and pg_prewarm installed.\n\nI didn't try to look into the complaint about suspected spinlock contention.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Tue, 25 Jun 2019 14:21:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On my laptop, all prewarmed, no concurrency, the mere existence of 10\n> brin indexes causes a sequential scan to take ~5% longer and an\n> uncorrelated index scan to take ~45% longer (correlated index scans\n> don't suffer). Here's a draft patch for v13 that fixes that problem\n> by caching the result of RelationHasUnloggedIndex().\n\nI agree that this code is absolutely horrid as it stands. However,\nit doesn't look to me like caching RelationHasUnloggedIndex is quite\nenough to fix it. The other problem is that the calls in question\nseem to be mostly in TestForOldSnapshot, which is called in places\nlike heapgetpage:\n\n\tLockBuffer(buffer, BUFFER_LOCK_SHARE);\n\n\tdp = BufferGetPage(buffer);\n\tTestForOldSnapshot(snapshot, scan->rs_base.rs_rd, dp);\n\tlines = PageGetMaxOffsetNumber(dp);\n\tntup = 0;\n\nIt is hard to express what a bad idea it is to be asking for complex\ncatalog searches while holding a buffer lock. We could easily get\ninto undetectable deadlocks that way, for example. 
We need to refactor\nthese call sites to arrange that the catalog lookup happens outside\nthe low-level page access.\n\nYour 0001 patch looks reasonable for the purpose of caching the\nresult, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Aug 2019 17:28:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "On Tue, Aug 27, 2019 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that this code is absolutely horrid as it stands. However,\n> it doesn't look to me like caching RelationHasUnloggedIndex is quite\n> enough to fix it. The other problem is that the calls in question\n> seem to be mostly in TestForOldSnapshot, which is called in places\n> like heapgetpage:\n>\n> LockBuffer(buffer, BUFFER_LOCK_SHARE);\n>\n> dp = BufferGetPage(buffer);\n> TestForOldSnapshot(snapshot, scan->rs_base.rs_rd, dp);\n> lines = PageGetMaxOffsetNumber(dp);\n> ntup = 0;\n>\n> It is hard to express what a bad idea it is to be asking for complex\n> catalog searches while holding a buffer lock. We could easily get\n> into undetectable deadlocks that way, for example. We need to refactor\n> these call sites to arrange that the catalog lookup happens outside\n> the low-level page access.\n\nHmm. Right. Perhaps the theory was that it was OK because it's\nshared (rather than exclusive), or perhaps the catalog lookup was\nsufficiently well hidden and was forgotten. At first glance it seems\nlike we need to capture PageGetLSN(page) while we have the lock, and\nthen later pass that into TestForOldSnapshot() instead of the page.\nI'll look into that and write a patch, probably in a day or two.\n\n> Your 0001 patch looks reasonable for the purpose of caching the\n> result, though.\n\nThanks for the review. 
I'll wait until we figure out what to do about\nthe other problem and what needs to be back-patched.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Aug 2019 10:53:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Aug 27, 2019 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It is hard to express what a bad idea it is to be asking for complex\n>> catalog searches while holding a buffer lock. We could easily get\n>> into undetectable deadlocks that way, for example. We need to refactor\n>> these call sites to arrange that the catalog lookup happens outside\n>> the low-level page access.\n\n> Hmm. Right. Perhaps the theory was that it was OK because it's\n> shared (rather than exclusive), or perhaps the catalog lookup was\n> sufficiently well hidden and was forgotten.\n\nI strongly suspect the latter. Also, it may well be that the\nunlogged-index check was not in the original design but was\nadded later with insufficient thought about where it'd be called\nfrom.\n\n> At first glance it seems\n> like we need to capture PageGetLSN(page) while we have the lock, and\n> then later pass that into TestForOldSnapshot() instead of the page.\n> I'll look into that and write a patch, probably in a day or two.\n\nHm, but surely we need to do other things to the page besides\nTestForOldSnapshot? 
I was imagining that we'd collect the\nRelationHasUnloggedIndex flag (or perhaps better, the\nRelationAllowsEarlyPruning result) before attempting to lock\nthe page, and then pass it to TestForOldSnapshot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Aug 2019 18:59:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "On Tue, Aug 27, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > At first glance it seems\n> > like we need to capture PageGetLSN(page) while we have the lock, and\n> > then later pass that into TestForOldSnapshot() instead of the page.\n> > I'll look into that and write a patch, probably in a day or two.\n>\n> Hm, but surely we need to do other things to the page besides\n> TestForOldSnapshot? I was imagining that we'd collect the\n> RelationHasUnloggedIndex flag (or perhaps better, the\n> RelationAllowsEarlyPruning result) before attempting to lock\n> the page, and then pass it to TestForOldSnapshot.\n\nOK I started writing a patch and realised there were a few ugly\nproblems that I was about to report here... but now I wonder if, based\non the comment for RelationHasUnloggedIndex(), we shouldn't just nuke\nall this code. We don't actually support unlogged indexes on\npermanent tables (there is no syntax to create them, and\nRelationHasUnloggedIndex() will never return true in practice because\nRelationNeedsWAL() will always return false first). This is a locking\nprotocol violation and a performance pessimisation in support of a\nfeature we don't have. 
If we add support for that in some future\nrelease, we can figure out how to do it properly then, no?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Aug 2019 12:02:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> OK I started writing a patch and realised there were a few ugly\n> problems that I was about to report here... but now I wonder if, based\n> on the comment for RelationHasUnloggedIndex(), we shouldn't just nuke\n> all this code. We don't actually support unlogged indexes on\n> permanent tables (there is no syntax to create them, and\n> RelationHasUnloggedIndex() will never return true in practice because\n> RelationNeedsWAL() will always return false first).\n\nOh! That explains why the code coverage report shows clearly that\nRelationHasUnloggedIndex never returns true ;-)\n\n> This is a locking\n> protocol violation and a performance pessimisation in support of a\n> feature we don't have. If we add support for that in some future\n> release, we can figure out how to do it properly then, no?\n\n+1. That fix is also back-patchable, which adding fields to relcache\nentries would not be.\n\nIt's not really apparent to me that unlogged indexes on logged tables\nwould ever be a useful combination, so I'm certainly willing to nuke\npoorly-thought-out code that putatively supports it. But perhaps\nwe should add some comments to remind us that this area would need\nwork if anyone ever wanted to support that.
Not sure where.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Aug 2019 21:54:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "On Tue, Aug 27, 2019 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > OK I started writing a patch and realised there were a few ugly\n> > problems that I was about to report here... but now I wonder if, based\n> > on the comment for RelationHasUnloggedIndex(), we shouldn't just nuke\n> > all this code. We don't actually support unlogged indexes on\n> > permanent tables (there is no syntax to create them, and\n> > RelationHasUnloggedIndex() will never return true in practice because\n> > RelationNeedsWAL() will always return false first).\n>\n> Oh! That explains why the code coverage report shows clearly that\n> RelationHasUnloggedIndex never returns true ;-)\n>\n> > This is a locking\n> > protocol violation and a performance pessimisation in support of a\n> > feature we don't have. If we add support for that in some future\n> > release, we can figure out how to do it properly then, no?\n>\n> +1. That fix is also back-patchable, which adding fields to relcache\n> entries would not be.\n\nThere is a fly in the ointment: REL9_6_STABLE's copy of\nRelationHasUnloggedIndex() is hardcoded to return true for hash\nindexes (see commit 2cc41acd8).\n\nHowever, I now see that there isn't a buffer content lock deadlock\nrisk here after all, because we don't reach RelationHasUnloggedIndex()\nif IsCatalogRelation(rel). That reminds me of commit 4fd05bb55b4. It\nstill doesn't seem like a great idea to be doing catalog access while\nholding the buffer content lock, though.\n\nSo I think we need to leave 9.6 as is, and discuss how far back to\nback-patch the attached.
It could go back to 10, but perhaps we\nshould be cautious and push it to master only for now, if you agree\nwith my analysis of the deadlock thing.\n\n> It's not really apparent to me that unlogged indexes on logged tables\n> would ever be a useful combination, so I'm certainly willing to nuke\n> poorly-thought-out code that putatively supports it. But perhaps\n> we should add some comments to remind us that this area would need\n> work if anyone ever wanted to support that. Not sure where.\n\nIt might make sense for some kind of in-memory index that is rebuilt\nfrom the heap at startup, but then I doubt such a thing would have an\nindex relation with a relpersistence to check anyway.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Tue, 27 Aug 2019 16:29:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Aug 27, 2019 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1. That fix is also back-patchable, which adding fields to relcache\n>> entries would not be.\n\n> There is a fly in the ointment: REL9_6_STABLE's copy of\n> RelationHasUnloggedIndex() is hardcoded to return true for hash\n> indexes (see commit 2cc41acd8).\n\nTrue, in 9.6 hash indexes *were* effectively unlogged, so that the code\nactually did something in that branch. Given the lack of bug reports\ntraceable to this, I wouldn't feel too bad about leaving it alone in 9.6.\n\n> However, I now see that there isn't a buffer content lock deadlock\n> risk here after all, because we don't reach RelationHasUnloggedIndex()\n> if IsCatalogRelation(rel). That reminds me of commit 4fd05bb55b4. It\n> still doesn't seem like a great idea to be doing catalog access while\n> holding the buffer content lock, though.\n\nYeah, I'm not convinced that that observation means the problem is\nunreachable.
Probably does make it harder to hit a deadlock, but\nif you mix a few VACUUMs and untimely cache flushes into the\nequation, I feel like one could still happen.\n\n> So I think we need to leave 9.6 as is, and discuss how far back to\n> back-patch the attached. It could go back to 10, but perhaps we\n> should be cautious and push it to master only for now, if you agree\n> with my analysis of the deadlock thing.\n\nI'd vote for back-patching to 10. Even if there is in fact no deadlock\nhazard, you've clearly demonstrated a significant performance hit that\nwe're taking for basically no reason.\n\nIn the larger picture, the commit this reminds me of is b04aeb0a0.\nI'm wondering if we could add some assertions to the effect of\n\"don't initiate relcache or syscache lookups while holding a buffer\nlock\". It would be relatively easy to do that if we could make\nthe rule be \"... while holding any LWLock\", but I suspect that\nthat would break some legitimate cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Aug 2019 11:05:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold vs indexes" }, { "msg_contents": "On Wed, Aug 28, 2019 at 3:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'd vote for back-patching to 10. Even if there is in fact no deadlock\n> hazard, you've clearly demonstrated a significant performance hit that\n> we're taking for basically no reason.\n\nDone.\n\nThe second symptom reported in my first email looked like evidence of\nhigh levels of spinlock backoff, which I guess might have been coming\nfrom TransactionIdLimitedForOldSnapshots()'s hammering of\noldSnapshotControl->mutex_current and\noldSnapshotControl->mutex_threshold, when running\nheap_page_prune_opt()-heavy workloads like the one generated by\ntest-indexscan.sql (from my earlier message) from many backends at the\nsame time on a large system.
That's just an observation I'm leaving\nhere, I'm not planning to chase that any further for now.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Aug 2019 21:02:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold vs indexes" } ]
[ { "msg_contents": "I decided to do some experiments with how we use Flex. The main\ntakeaway is that backtracking, which we removed in 2005, doesn't seem\nto matter anymore for the core scanner. Also, state table size is of\nmarginal importance.\n\nUsing the information_schema Flex+Bison microbenchmark from Tom [1], I\ntested removing most of the \"fail\" rules designed to avoid\nbacktracking (\"decimalfail\" is needed by PL/pgSQL). Below are the best\ntimes (most runs within 1%), followed by postgres binary size. The\nnumbers are with Flex 2.5.35 on MacOS, no asserts or debugging\nsymbols.\n\nHEAD:\n1.53s\n7139132 bytes\n\nHEAD minus \"fail\" rules (patch attached):\n1.53s\n6971204 bytes\n\nSurprisingly, it has the same performance and a much smaller binary.\nThe size difference is because the size of the elements of the\nyy_transition array is constrained by the number of elements in the\narray. Since there are now fewer than INT16_MAX state transitions, the\nstruct members go from 32 bit:\n\nstruct yy_trans_info\n{\nflex_int32_t yy_verify;\nflex_int32_t yy_nxt;\n};\nstatic yyconst struct yy_trans_info yy_transition[37045] = ...\n\nto 16 bit:\n\nstruct yy_trans_info\n{\nflex_int16_t yy_verify;\nflex_int16_t yy_nxt;\n};\nstatic yyconst struct yy_trans_info yy_transition[31763] = ...\n\nTo test if array size was the deciding factor, I tried bloating it by\nessentially undoing commit a5ff502fcea. Doing so produced an array\nwith 62583 elements and 32-bit members, so nearly quadruple in size,\nand it was still not much slower than HEAD:\n\nHEAD minus \"fail\" rules, minus %xusend/%xuiend:\n1.56s\n7343932 bytes\n\nWhile at it, I repeated the benchmark with different Flex flags:\n\nHEAD, plus -Cf:\n1.60s\n6995788 bytes\n\nHEAD, minus \"fail\" rules, plus -Cf:\n1.59s\n6979396 bytes\n\nHEAD, plus -Cfe:\n1.65s\n6868804 bytes\n\nSo this recommendation of the Flex manual (-CF) still holds true.
It's\nworth noting that using perfect hashing for keyword lookup (20%\nfaster) had a much bigger effect than switching from -Cfe to -CF (7%\nfaster).\n\nIt would be nice to have confirmation to make sure I didn't err\nsomewhere, and to try a more real-world benchmark. (Also for the\nmoment I only have Linux on a virtual machine.) The regression tests\npass, but some comments are now wrong. If it's confirmed that\nbacktracking doesn't matter for recent Flex/hardware, disregarding it\nwould make maintenance of our scanners a bit easier.\n\n[1] https://www.postgresql.org/message-id/14616.1558560331%40sss.pgh.pa.us\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 20 Jun 2019 22:31:06 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> I decided to do some experiments with how we use Flex. The main\n> takeaway is that backtracking, which we removed in 2005, doesn't seem\n> to matter anymore for the core scanner. Also, state table size is of\n> marginal importance.\n\nHuh. That's really interesting, because removing backtracking was a\ndemonstrable, significant win when we did it [1]. I wonder what has\nchanged? I'd be prepared to believe that today's machines are more\nsensitive to the amount of cache space eaten by the tables --- but that\nidea seems contradicted by your result that the table size isn't\nimportant. (I'm wishing I'd documented the test case I used in 2005...)\n\n> The size difference is because the size of the elements of the\n> yy_transition array is constrained by the number of elements in the\n> array.
Since there are now fewer than INT16_MAX state transitions, the\n> struct members go from 32 bit:\n> static yyconst struct yy_trans_info yy_transition[37045] = ...\n> to 16 bit:\n> static yyconst struct yy_trans_info yy_transition[31763] = ...\n\nHm. Smaller binary is definitely nice, but 31763 is close enough to\n32768 that I'd have little faith in the optimization surviving for long.\nIs there any way we could buy back some more transitions?\n\n> It would be nice to have confirmation to make sure I didn't err\n> somewhere, and to try a more real-world benchmark.\n\nI don't see much wrong with using information_schema.sql as a parser/lexer\nbenchmark case. We should try to confirm the results on other platforms\nthough.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/8652.1116865895@sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 20 Jun 2019 10:52:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "Hi,\n\nOn 2019-06-20 10:52:54 -0400, Tom Lane wrote:\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > It would be nice to have confirmation to make sure I didn't err\n> > somewhere, and to try a more real-world benchmark.\n> \n> I don't see much wrong with using information_schema.sql as a parser/lexer\n> benchmark case. We should try to confirm the results on other platforms\n> though.\n\nMight be worth also testing with a more repetitive testcase to measure\nboth cache locality and branch prediction. I assume that with\ninformation_schema there's enough variability that these effects play a\nsmaller role. And there's plenty real-world cases where there's a *lot*\nof very similar statements being parsed over and over.
I'd probably just\nmeasure the statements pgbench generates or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 20 Jun 2019 09:02:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Fri, Jun 21, 2019 at 12:02 AM Andres Freund <andres@anarazel.de> wrote:\n> Might be worth also testing with a more repetitive testcase to measure\n> both cache locality and branch prediction. I assume that with\n> information_schema there's enough variability that these effects play a\n> smaller role. And there's plenty real-world cases where there's a *lot*\n> of very similar statements being parsed over and over. I'd probably just\n> measure the statements pgbench generates or such.\n\nI tried benchmarking with a query string with just\n\nBEGIN;\nUPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 1;\nSELECT abalance FROM pgbench_accounts WHERE aid = 1;\nUPDATE pgbench_tellers SET tbalance = tbalance + 1 WHERE tid = 1;\nUPDATE pgbench_branches SET bbalance = bbalance + 1 WHERE bid = 1;\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1,\n1, 1, 1, CURRENT_TIMESTAMP);\nEND;\n\nrepeated about 500 times. With this, backtracking is about 3% slower:\n\nHEAD:\n1.15s\n\npatch:\n1.19s\n\npatch + huge array:\n1.19s\n\nThat's possibly significant enough to be evidence for your assumption,\nas well as to persuade us to keep things as they are.\n\nOn Thu, Jun 20, 2019 at 10:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Huh. That's really interesting, because removing backtracking was a\n> demonstrable, significant win when we did it [1]. I wonder what has\n> changed? I'd be prepared to believe that today's machines are more\n> sensitive to the amount of cache space eaten by the tables --- but that\n> idea seems contradicted by your result that the table size isn't\n> important.
(I'm wishing I'd documented the test case I used in 2005...)\n\nIt's possible the code used with backtracking is better predicted than\n15 years ago, but my uneducated hunch is our Bison grammar has gotten\nmuch worse in cache misses and branch prediction than the scanner has\nin 15 years. That, plus the recent keyword lookup optimization might\nhave caused parsing to be completely dominated by Bison. If that's the\ncase, the 3% slowdown above could be a significant portion of scanning\nin isolation.\n\n> Hm. Smaller binary is definitely nice, but 31763 is close enough to\n> 32768 that I'd have little faith in the optimization surviving for long.\n> Is there any way we could buy back some more transitions?\n\nI tried quickly ripping out the unicode escape support entirely. It\nbuilds with warnings, but the point is to just get the size -- that\nproduced an array with only 28428 elements, and that's keeping all the\nno-backup rules intact. This might be unworkable and/or ugly, but I\nwonder if it's possible to pull unicode escape handling into the\nparsing stage, with \"UESCAPE\" being a keyword token that we have to\npeek ahead to check for. I'll look for other rules that could be more\neasily optimized, but I'm not terribly optimistic.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Jun 2019 15:36:48 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "I wrote:\n\n> I'll look for other rules that could be more\n> easily optimized, but I'm not terribly optimistic.\n\nI found a possible other way to bring the size of the transition table\nunder 32k entries while keeping the existing no-backup rules in place:\nReplace the \"quotecontinue\" rule with a new state.
In the attached\ndraft patch, when Flex encounters a quote while inside any kind of\nquoted string, it saves the current state and enters %xqs (think\n'quotestop'). If it then sees {whitespace_with_newline}{quote}, it\nreenters the previous state and continues to slurp the string,\notherwise, it throws back everything and returns the string it just\nexited. Doing it this way is a bit uglier, but with some extra\ncommentary it might not be too bad.\n\nThe array is now 30883 entries. That's still a bit close for comfort,\nbut shrinks the binary by 171kB on Linux x86-64 with Flex 2.6.4. The\nbad news is I have these baffling backup states in my new rules:\n\nState #133 is non-accepting -\n associated rule line numbers:\n551 554 564\n out-transitions: [ \\000-\\377 ]\n jam-transitions: EOF []\n\nState #162 is non-accepting -\n associated rule line numbers:\n551 554 564\n out-transitions: [ \\000-\\377 ]\n jam-transitions: EOF []\n\n2 backing up (non-accepting) states.\n\nI already explicitly handle EOF, so I don't know what it's trying to\ntell me. If it can be fixed while keeping the array size, I'll do\nperformance tests.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 24 Jun 2019 17:21:33 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "I wrote:\n\n> > I'll look for other rules that could be more\n> > easily optimized, but I'm not terribly optimistic.\n>\n> I found a possible other way to bring the size of the transition table\n> under 32k entries while keeping the existing no-backup rules in place:\n> Replace the \"quotecontinue\" rule with a new state. In the attached\n> draft patch, when Flex encounters a quote while inside any kind of\n> quoted string, it saves the current state and enters %xqs (think\n> 'quotestop').
If it then sees {whitespace_with_newline}{quote}, it\n> reenters the previous state and continues to slurp the string,\n> otherwise, it throws back everything and returns the string it just\n> exited. Doing it this way is a bit uglier, but with some extra\n> commentary it might not be too bad.\n\nI had an epiphany and managed to get rid of the backup states.\nRegression tests pass. The array is down to 30367 entries and the\nbinary is smaller by 172kB on Linux x86-64. Performance is identical\nto master on both tests mentioned upthread.
I'll clean this up and add\nit to the commitfest.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 25 Jun 2019 00:01:16 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "I wrote:\n\n> > I found a possible other way to bring the size of the transition table\n> > under 32k entries while keeping the existing no-backup rules in place:\n> > Replace the \"quotecontinue\" rule with a new state. In the attached\n> > draft patch, when Flex encounters a quote while inside any kind of\n> > quoted string, it saves the current state and enters %xqs (think\n> > 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it\n> > reenters the previous state and continues to slurp the string,\n> > otherwise, it throws back everything and returns the string it just\n> > exited. Doing it this way is a bit uglier, but with some extra\n> > commentary it might not be too bad.\n>\n> I had an epiphany and managed to get rid of the backup states.\n> Regression tests pass. The array is down to 30367 entries and the\n> binary is smaller by 172kB on Linux x86-64. Performance is identical\n> to master on both tests mentioned upthread. I'll clean this up and add\n> it to the commitfest.\n\nFor the commitfest:\n\n0001 is a small patch to remove some unneeded generality from the\ncurrent rules. This lowers the number of elements in the yy_transition\narray from 37045 to 36201.\n\n0002 is a cleaned up version of the above, bring the size down to 29521.\n\nI haven't changed psqlscan.l or pgc.l, in case this approach is\nchanged or rejected\n\nWith the two together, the binary is about 175kB smaller than on HEAD.\n\nI also couldn't resist playing around with the idea upthread to handle\nunicode escapes in parser.c, which further reduces the number of\nstates down to 21068, which allows some headroom for future additions\nwithout going back to 32-bit types in the transition array. It mostly\nworks, but it's quite ugly and breaks the token position handling for\nunicode escape syntax errors, so it's not in a state to share.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 27 Jun 2019 14:25:26 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> 0001 is a small patch to remove some unneeded generality from the\n> current rules. This lowers the number of elements in the yy_transition\n> array from 37045 to 36201.\n\nI don't particularly like 0001. The two bits like this\n\n-whitespace\t\t({space}+|{comment})\n+whitespace\t\t({space}|{comment})\n\nseem likely to create performance problems for runs of whitespace, in that\nthe lexer will now have to execute the associated action once per space\ncharacter not just once for the whole run. Those actions are empty, but\nI don't think flex optimizes for that, and it's really flex's per-action\noverhead that I'm worried about.
Note the comment in the \"Performance\"\nsection of the flex manual:\n\n Another area where the user can increase a scanner's performance (and\n one that's easier to implement) arises from the fact that the longer\n the tokens matched, the faster the scanner will run. This is because\n with long tokens the processing of most input characters takes place\n in the (short) inner scanning loop, and does not often have to go\n through the additional work of setting up the scanning environment\n (e.g., `yytext') for the action.\n\nThere are a bunch of higher-order productions that use \"{whitespace}*\",\nwhich is surely a bit redundant given the contents of {whitespace}.\nBut maybe we could address that by replacing \"{whitespace}*\" with\n\"{opt_whitespace}\" defined as\n\nopt_whitespace\t\t({space}*|{comment})\n\nNot sure what impact if any that'd have on table size, but I'm quite sure\nthat {whitespace} was defined with an eye to avoiding unnecessary\nlexer action cycles.\n\nAs for the other two bits that are like\n\n-<xe>.\t\t\t{\n-\t\t\t\t\t/* This is only needed for \\ just before EOF */\n+<xe>\\\\\t\t\t{\n\nmy recollection is that those productions are defined that way to avoid a\nflex warning about not all possible input characters being accounted for\nin the <xe> (resp. <xdolq>) state. Maybe that warning is\nflex-version-dependent, or maybe this was just a worry and not something\nthat actually produced a warning ... but I'm hesitant to change it.\nIf we ever did get to flex's default action, that action is to echo the\ncurrent input character to stdout, which would be Very Bad.\n\nAs far as I can see, the point of 0002 is to have just one set of\nflex rules for the various variants of quotecontinue processing.\nThat sounds OK, though I'm a bit surprised it makes this much difference\nin the table size. I would suggest that \"state_before\" needs a less\ngeneric name (maybe \"state_before_xqs\"?) 
and more than no comment.\nPossibly more to the point, it's not okay to have static state variables\nin the core scanner, so that variable needs to be kept in yyextra.\n(Don't remember offhand whether it's any more acceptable in the other\nscanners.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2019 18:35:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Wed, Jul 3, 2019 at 5:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > 0001 is a small patch to remove some unneeded generality from the\n> > current rules. This lowers the number of elements in the yy_transition\n> > array from 37045 to 36201.\n>\n> I don't particularly like 0001. The two bits like this\n>\n> -whitespace ({space}+|{comment})\n> +whitespace ({space}|{comment})\n>\n> seem likely to create performance problems for runs of whitespace, in that\n> the lexer will now have to execute the associated action once per space\n> character not just once for the whole run.\n\nOkay.\n\n> There are a bunch of higher-order productions that use \"{whitespace}*\",\n> which is surely a bit redundant given the contents of {whitespace}.\n> But maybe we could address that by replacing \"{whitespace}*\" with\n> \"{opt_whitespace}\" defined as\n>\n> opt_whitespace ({space}*|{comment})\n>\n> Not sure what impact if any that'd have on table size, but I'm quite sure\n> that {whitespace} was defined with an eye to avoiding unnecessary\n> lexer action cycles.\n\nIt turns out that {opt_whitespace} as defined above is not equivalent\nto {whitespace}* , since the former is either a single comment or a\nsingle run of 0 or more whitespace chars (if I understand correctly).\nUsing {opt_whitespace} for the UESCAPE rules on top of v3-0002, the\nregression tests pass, but queries like this fail with a syntax error:\n\n# select U&'d!0061t!+000061' uescape
--comment\n'!';\n\nThere was in fact a substantial size reduction, though, so for\ncuriosity's sake I tried just replacing {whitespace}* with {space}* in\nthe UESCAPE rules, and the table shrank from 30367 (that's with 0002\nonly) to 24661.\n\n> As for the other two bits that are like\n>\n> -<xe>. {\n> - /* This is only needed for \\ just before EOF */\n> +<xe>\\\\ {\n>\n> my recollection is that those productions are defined that way to avoid a\n> flex warning about not all possible input characters being accounted for\n> in the <xe> (resp. <xdolq>) state. Maybe that warning is\n> flex-version-dependent, or maybe this was just a worry and not something\n> that actually produced a warning ... but I'm hesitant to change it.\n> If we ever did get to flex's default action, that action is to echo the\n> current input character to stdout, which would be Very Bad.\n\nFWIW, I tried Flex 2.5.35 and 2.6.4 with no warnings, and I did get a\nwarning when I deleted any of those two rules. I'll leave them out for\nnow, since this change was only good for ~500 fewer elements in the\ntransition array.\n\n> As far as I can see, the point of 0002 is to have just one set of\n> flex rules for the various variants of quotecontinue processing.\n> That sounds OK, though I'm a bit surprised it makes this much difference\n> in the table size. I would suggest that \"state_before\" needs a less\n> generic name (maybe \"state_before_xqs\"?) and more than no comment.\n> Possibly more to the point, it's not okay to have static state variables\n> in the core scanner, so that variable needs to be kept in yyextra.\n> (Don't remember offhand whether it's any more acceptable in the other\n> scanners.)\n\nAh yes, I got this idea from the ECPG scanner, which is not reentrant.
Will fix.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jul 2019 19:14:25 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Wed, Jul 3, 2019 at 5:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> As far as I can see, the point of 0002 is to have just one set of\n> flex rules for the various variants of quotecontinue processing.\n> That sounds OK, though I'm a bit surprised it makes this much difference\n> in the table size. I would suggest that \"state_before\" needs a less\n> generic name (maybe \"state_before_xqs\"?) and more than no comment.\n> Possibly more to the point, it's not okay to have static state variables\n> in the core scanner, so that variable needs to be kept in yyextra.\n\nv4-0001 is basically the same as v3-0002, with the state variable in\nyyextra. Since follow-on patches use it as well, I've named it\nstate_before_quote_stop. I failed to come up with a nicer short name.\nWith this applied, the transition table is reduced from 37045 to\n30367. Since that's uncomfortably close to the 32k limit for 16 bit\nmembers, I hacked away further at UESCAPE bloat.\n\n0002 unifies xusend and xuiend by saving the state of xui as well.\nThis actually causes a performance regression, but it's more of a\nrefactoring patch to prevent from having to create two additional\nstart conditions in 0003 (of course it could be done that way if\ndesired, but the savings won't be as great). In any case, the table is\nnow down to 26074.\n\n0003 creates a separate start condition so that UESCAPE and the\nexpected quoted character after it are detected in separate states.\nThis allows us to use standard whitespace skipping techniques and also\nto greatly simplify the uescapefail rule. The final size of the table\nis 23696.
Removing UESCAPE entirely results in 21860, so this is likely\nthe most compact size of this feature.\n\nPerformance is very similar to HEAD. Parsing the information schema\nmight be a hair faster and pgbench-like queries with simple strings a\nhair slower, but the difference seems within the noise of variation.\nParsing strings with UESCAPE likewise seems about the same.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 5 Jul 2019 17:54:16 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> [ v4 patches for trimming lexer table size ]\n\nI reviewed this and it looks pretty solid. One gripe I have is\nthat I think it's best to limit backup-prevention tokens such as\nquotecontinuefail so that they match only exact prefixes of their\n\"success\" tokens. This seems clearer to me, and in at least some cases\nit can save a few flex states. The attached v5 patch does it like that\nand gets us down to 22331 states (from 23696). In some places it looks\nlike you did that to avoid writing an explicit \"{other}\" match rule for\nan exclusive state, but I think it's better for readability and\nseparation of concerns to go ahead and have those explicit rules\n(and it seems to make no difference table-size-wise).\n\nI also made some cosmetic changes (mostly improving comments) and\nsmashed the patch series down to 1 patch, because I preferred to\nreview it that way and we're not really going to commit these\nseparately.\n\nI did a little bit of portability testing, to the extent of verifying\nthat the oldest and newest Flex versions I have handy (2.5.33 and 2.6.4)\nagree on the table size change and get through regression tests.
So\nI think we should be good from that end.\n\nWe still need to propagate these changes into the psql and ecpg lexers,\nbut I assume you were waiting to agree on the core patch before touching\nthose. If you're good with the changes I made here, have at it.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 09 Jul 2019 16:15:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Wed, Jul 10, 2019 at 3:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > [ v4 patches for trimming lexer table size ]\n>\n> I reviewed this and it looks pretty solid. One gripe I have is\n> that I think it's best to limit backup-prevention tokens such as\n> quotecontinuefail so that they match only exact prefixes of their\n> \"success\" tokens. This seems clearer to me, and in at least some cases\n> it can save a few flex states. The attached v5 patch does it like that\n> and gets us down to 22331 states (from 23696). In some places it looks\n> like you did that to avoid writing an explicit \"{other}\" match rule for\n> an exclusive state, but I think it's better for readability and\n> separation of concerns to go ahead and have those explicit rules\n> (and it seems to make no difference table-size-wise).\n\nLooks good to me.\n\n> We still need to propagate these changes into the psql and ecpg lexers,\n> but I assume you were waiting to agree on the core patch before touching\n> those. If you're good with the changes I made here, have at it.\n\nI just made a couple additional cosmetic adjustments that made sense\nwhen diff'ing with the other scanners. Make check-world passes. Some\nnotes:\n\nThe pre-existing ecpg var \"state_before\" was a bit confusing when\ncombined with the new var \"state_before_quote_stop\", and the former is\nalso used with C-comments, so I decided to go with\n\"state_before_lit_start\" and \"state_before_lit_stop\". 
Even though\ncomments aren't literals, it's less of a stretch than referring to\nquotes. To keep things consistent, I went with the latter var in psql\nand core.\n\nTo get the regression tests to pass, I had to add this:\n\n psql_scan_in_quote(PsqlScanState state)\n {\n- return state->start_state != INITIAL;\n+ return state->start_state != INITIAL &&\n+ state->start_state != xqs;\n }\n\n...otherwise with parens we sometimes don't get the right prompt and\nwe get empty lines echoed. Adding xuend and xuchar here didn't seem to\nmake a difference. There might be something subtle I'm missing, so I\nthought I'd mention it.\n\nWith the unicode escape rules brought over, the diff to the ecpg\nscanner is much cleaner now. The diff for C-comment rules were still\npretty messy in comparison, so I made an attempt to clean that up in\n0002. A bit off-topic, but I thought I should offer that while it was\nfresh in my head.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 12 Jul 2019 14:35:57 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> The pre-existing ecpg var \"state_before\" was a bit confusing when\n> combined with the new var \"state_before_quote_stop\", and the former is\n> also used with C-comments, so I decided to go with\n> \"state_before_lit_start\" and \"state_before_lit_stop\". Even though\n> comments aren't literals, it's less of a stretch than referring to\n> quotes. To keep things consistent, I went with the latter var in psql\n> and core.\n\nHm, what do you think of \"state_before_str_stop\" instead? 
It seems\nto me that both \"quote\" and \"lit\" are pretty specific terms, so\nmaybe we need something a bit vaguer.\n\n> To get the regression tests to pass, I had to add this:\n> psql_scan_in_quote(PsqlScanState state)\n> {\n> - return state->start_state != INITIAL;\n> + return state->start_state != INITIAL &&\n> + state->start_state != xqs;\n> }\n> ...otherwise with parens we sometimes don't get the right prompt and\n> we get empty lines echoed. Adding xuend and xuchar here didn't seem to\n> make a difference. There might be something subtle I'm missing, so I\n> thought I'd mention it.\n\nI think you would see a difference if the regression tests had any cases\nwith blank lines between a Unicode string/ident and the associated\nUESCAPE and escape-character literal.\n\nWhile poking at that, I also came across this unhappiness:\n\nregression=# select u&'foo' uescape 'bogus';\nregression'# \n\nthat is, psql thinks we're still in a literal at this point. That's\nbecause the uesccharfail rule eats \"'b\" and then we go to INITIAL\nstate, so that consuming the last \"'\" puts us back in a string state.\nThe backend would have thrown an error before parsing as far as the\nincomplete literal, so it doesn't care (or probably not, anyway),\nbut that's not an option for psql.\n\nMy first reaction as to how to fix this was to rip the xuend and\nxuchar states out of psql, and let it just lex UESCAPE as an\nidentifier and the escape-character literal like any other literal.\npsql doesn't need to account for the escape character's effect on\nthe meaning of the Unicode literal, so it doesn't have any need to\nlex the sequence as one big token. I think the same is true of ecpg\nthough I've not looked really closely.\n\nHowever, my second reaction was that maybe you were on to something\nupthread when you speculated about postponing de-escaping of\nUnicode literals into the grammar. 
If we did it like that then\nwe would not need to have this difference between the backend and\nfrontend lexers, and we'd not have to worry about what\npsql_scan_in_quote should do about the whitespace before and after\nUESCAPE, either.\n\nSo I'm feeling like maybe we should experiment to see what that\nsolution looks like, before we commit to going in this direction.\nWhat do you think?\n\n\n> With the unicode escape rules brought over, the diff to the ecpg\n> scanner is much cleaner now. The diff for C-comment rules were still\n> pretty messy in comparison, so I made an attempt to clean that up in\n> 0002. A bit off-topic, but I thought I should offer that while it was\n> fresh in my head.\n\nI didn't really review this, but it looked like a fairly plausible\nchange of the same ilk, ie combine rules by adding memory of the\nprevious start state.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jul 2019 16:14:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > The pre-existing ecpg var \"state_before\" was a bit confusing when\n> > combined with the new var \"state_before_quote_stop\", and the former is\n> > also used with C-comments, so I decided to go with\n> > \"state_before_lit_start\" and \"state_before_lit_stop\". Even though\n> > comments aren't literals, it's less of a stretch than referring to\n> > quotes. To keep things consistent, I went with the latter var in psql\n> > and core.\n>\n> Hm, what do you think of \"state_before_str_stop\" instead? 
It seems\n> to me that both \"quote\" and \"lit\" are pretty specific terms, so\n> maybe we need something a bit vaguer.\n\nSounds fine to me.\n\n> While poking at that, I also came across this unhappiness:\n>\n> regression=# select u&'foo' uescape 'bogus';\n> regression'#\n>\n> that is, psql thinks we're still in a literal at this point. That's\n> because the uesccharfail rule eats \"'b\" and then we go to INITIAL\n> state, so that consuming the last \"'\" puts us back in a string state.\n> The backend would have thrown an error before parsing as far as the\n> incomplete literal, so it doesn't care (or probably not, anyway),\n> but that's not an option for psql.\n>\n> My first reaction as to how to fix this was to rip the xuend and\n> xuchar states out of psql, and let it just lex UESCAPE as an\n> identifier and the escape-character literal like any other literal.\n> psql doesn't need to account for the escape character's effect on\n> the meaning of the Unicode literal, so it doesn't have any need to\n> lex the sequence as one big token. I think the same is true of ecpg\n> though I've not looked really closely.\n>\n> However, my second reaction was that maybe you were on to something\n> upthread when you speculated about postponing de-escaping of\n> Unicode literals into the grammar. If we did it like that then\n> we would not need to have this difference between the backend and\n> frontend lexers, and we'd not have to worry about what\n> psql_scan_in_quote should do about the whitespace before and after\n> UESCAPE, either.\n>\n> So I'm feeling like maybe we should experiment to see what that\n> solution looks like, before we commit to going in this direction.\n> What do you think?\n\nGiven the above wrinkles, I thought it was worth trying. Attached is a\nrough patch (don't mind the #include mess yet :-) ) that works like\nthis:\n\nThe lexer returns UCONST from xus and UIDENT from xui. 
The grammar has\nrules that are effectively:\n\nSCONST { do nothing}\n| UCONST { esc char is backslash }\n| UCONST UESCAPE SCONST { esc char is from $3 }\n\n...where UESCAPE is now an unreserved keyword. To prevent shift-reduce\nconflicts, I added UIDENT to the %nonassoc precedence list to match\nIDENT, and for UESCAPE I added a %left precedence declaration. Maybe\nthere's a more principled way. I also added an unsigned char type to\nthe %union, but it worked fine on my compiler without it.\n\nlitbuf_udeescape() and check_uescapechar() were moved to gram.y. The\nformer had to be massaged to give error messages similar to HEAD. They're\nnot quite identical, but the position info is preserved. Some of the\nfunctions I moved around don't seem to have any test coverage, so I\nshould eventually do some work in that regard.\n\nNotes:\n\n-Binary size is very close to v6. That is to say the grammar tables\ngrew by about the same amount the scanner table shrank, so the binary\nis still about 200kB smaller than HEAD.\n-Performance is very close to v6 with the information_schema and\npgbench-like queries with standard strings, which is to say also very\nclose to HEAD. When the latter was changed to use Unicode escapes,\nhowever, it was about 15% slower than HEAD. That's a big regression\nand I haven't tried to pinpoint why.\n-psql was changed to follow suit. It doesn't think it's inside a\nstring with your too-long escape char above, and it removes all blank\nlines from this query output:\n\n$ cat >> test-uesc-lit.sql\nSELECT\n\nu&'!0041'\n\nuescape\n\n'!'\n\nas col\n;\n\n\nOn HEAD and v6 I get this:\n\n$ ./inst/bin/psql -a -f test-uesc-lit.sql\n\nSELECT\nu&'!0041'\n\nuescape\n'!'\nas col\n;\n col\n-----\n A\n(1 row)\n\n\n-The ecpg changes here are only the bare minimum from HEAD to get it\nto compile, since I'm borrowing its additional token names (although\nthey mean slightly different things).
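For context, the escape-character check being shuffled between scan.l and gram.y is tiny. A hedged sketch of the rule check_uescapechar() enforces — the UESCAPE operand must be a single character that is not a hexadecimal digit, a plus sign, a quote, or whitespace — follows; the in-tree function may differ in detail (e.g. it uses scanner_isspace() rather than isspace()):

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>

/*
 * Hedged sketch: return true if 'escape' is a legal UESCAPE escape
 * character.  Hex digits, '+', quotes, and whitespace are excluded
 * because they would be ambiguous inside a Unicode escape sequence.
 */
static bool
check_uescapechar(unsigned char escape)
{
	if (isxdigit(escape)
		|| escape == '+'
		|| escape == '\''
		|| escape == '"'
		|| isspace(escape))
		return false;
	return true;
}
```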
After a bit of experimentation,\nit's clear there's a bit more work needed to get it functional, and\nit's not easy to debug, so I'm putting that off until we decide\nwhether this is the way forward.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 24 Jul 2019 14:45:45 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On 07/24/19 03:45, John Naylor wrote:\n> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, my second reaction was that maybe you were on to something\n>> upthread when you speculated about postponing de-escaping of\n>> Unicode literals into the grammar. If we did it like that then\n\nWow, yay. I hadn't been following this thread, but I had just recently\nlooked over my own earlier musings [1] and started thinking \"no, it would\nbe outlandish to ask the lexer to return utf-8 always ... 
but what about\npostponing the de-escaping of Unicode literals into the grammar?\" and\nhad started to think about when I might have a chance to try making a\npatch.\n\nWith the de-escaping postponed, I think we'd be able to move beyond the\ncurrent odd situation where Unicode escapes can't describe non-ascii\ncharacters, in exactly and only the cases where you need them to.\n\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/6688474e-7c28-b352-bcec-ea0ef59d7a1a%40anastigmatix.net\n\n\n", "msg_date": "Wed, 24 Jul 2019 08:14:40 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 07/24/19 03:45, John Naylor wrote:\n>> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> However, my second reaction was that maybe you were on to something\n>>> upthread when you speculated about postponing de-escaping of\n>>> Unicode literals into the grammar. If we did it like that then\n\n> With the de-escaping postponed, I think we'd be able to move beyond the\n> current odd situation where Unicode escapes can't describe non-ascii\n> characters, in exactly and only the cases where you need them to.\n\nHow so? The grammar doesn't really have any more context information\nthan the lexer does. 
(In both cases, it would be ugly but not really\ninvalid for the transformation to depend on the database encoding,\nI think.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jul 2019 22:00:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I'm feeling like maybe we should experiment to see what that\n>> solution looks like, before we commit to going in this direction.\n>> What do you think?\n\n> Given the above wrinkles, I thought it was worth trying. Attached is a\n> rough patch (don't mind the #include mess yet :-) ) that works like\n> this:\n\n> The lexer returns UCONST from xus and UIDENT from xui. The grammar has\n> rules that are effectively:\n\n> SCONST { do nothing}\n> | UCONST { esc char is backslash }\n> | UCONST UESCAPE SCONST { esc char is from $3 }\n\n> ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce\n> conflicts, I added UIDENT to the %nonassoc precedence list to match\n> IDENT, and for UESCAPE I added a %left precedence declaration. Maybe\n> there's a more principled way. I also added an unsigned char type to\n> the %union, but it worked fine on my compiler without it.\n\nI think it might be better to drop the separate \"Uescape\" production and\njust inline that into the calling rules, exactly per your sketch above.\nYou could avoid duplicating the escape-checking logic by moving that into\nthe str_udeescape support function. This would avoid the need for the\n\"uchr\" union variant, but more importantly it seems likely to be more\nfuture-proof: IME, any time you can avoid or postpone shift/reduce\ndecisions, it's better to do so.\n\nI didn't try, but I think this might allow dropping the %left for\nUESCAPE. 
That bothers me because I don't understand why it's\nneeded or what precedence level it ought to have.\n\n> litbuf_udeescape() and check_uescapechar() were moved to gram.y. The\n> former had be massaged to give error messages similar to HEAD. They're\n> not quite identical, but the position info is preserved. Some of the\n> functions I moved around don't seem to have any test coverage, so I\n> should eventually do some work in that regard.\n\nI don't terribly like the cross-calls you have between gram.y and scan.l\nin this formulation. If we have to make these functions (hexval() etc)\nnon-static anyway, maybe we should shove them all into scansup.c?\n\n> -Binary size is very close to v6. That is to say the grammar tables\n> grew by about the same amount the scanner table shrank, so the binary\n> is still about 200kB smaller than HEAD.\n\nOK.\n\n> -Performance is very close to v6 with the information_schema and\n> pgbench-like queries with standard strings, which is to say also very\n> close to HEAD. When the latter was changed to use Unicode escapes,\n> however, it was about 15% slower than HEAD. That's a big regression\n> and I haven't tried to pinpoint why.\n\nI don't quite follow what you changed to produce the slower test case?\nBut that seems to be something we'd better run to ground before\ndeciding whether to go this way.\n\n> -The ecpg changes here are only the bare minimum from HEAD to get it\n> to compile, since I'm borrowing its additional token names (although\n> they mean slightly different things). 
After a bit of experimentation,\n> it's clear there's a bit more work needed to get it functional, and\n> it's not easy to debug, so I'm putting that off until we decide\n> whether this is the way forward.\n\nOn the whole I like this approach, modulo the performance question.\nLet's try to work that out before worrying about ecpg.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2019 11:40:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Mon, Jul 29, 2019 at 10:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n>\n> > The lexer returns UCONST from xus and UIDENT from xui. The grammar has\n> > rules that are effectively:\n>\n> > SCONST { do nothing}\n> > | UCONST { esc char is backslash }\n> > | UCONST UESCAPE SCONST { esc char is from $3 }\n>\n> > ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce\n> > conflicts, I added UIDENT to the %nonassoc precedence list to match\n> > IDENT, and for UESCAPE I added a %left precedence declaration. Maybe\n> > there's a more principled way. I also added an unsigned char type to\n> > the %union, but it worked fine on my compiler without it.\n>\n> I think it might be better to drop the separate \"Uescape\" production and\n> just inline that into the calling rules, exactly per your sketch above.\n> You could avoid duplicating the escape-checking logic by moving that into\n> the str_udeescape support function. This would avoid the need for the\n> \"uchr\" union variant, but more importantly it seems likely to be more\n> future-proof: IME, any time you can avoid or postpone shift/reduce\n> decisions, it's better to do so.\n>\n> I didn't try, but I think this might allow dropping the %left for\n> UESCAPE. 
That bothers me because I don't understand why it's\n> needed or what precedence level it ought to have.\n\nI tried this, and removing the %left still gives me a shift/reduce\nconflict, so I put some effort in narrowing down what's happening. If\nI remove the rules with UESCAPE individually, I find that precedence\nis not needed for Sconst -- only for Ident. I tried reverting all the\nrules to use the original \"IDENT\" token and one by one changed them to\n\"Ident\", and found 6 places where doing so caused a shift-reduce\nconflict:\n\ncreatedb_opt_name\nxmltable_column_option_el\nColId\ntype_function_name\nNonReservedWord\nColLabel\n\nDue to the number of affected places, that didn't seem like a useful\navenue to pursue, so I tried the following:\n\n-Making UESCAPE a reserved keyword or separate token type works, but\nother keyword types don't work. Not acceptable, but maybe useful info.\n-Giving UESCAPE an %nonassoc precedence above UIDENT works, even if\nUIDENT is the lowest in the list. This seems the least intrusive, so I\nwent with that for v8. One possible downside is that UIDENT now no\nlonger has the same precedence as IDENT. Not sure if it matters, but\ncould we fix that contextually with \"%prec IDENT\"?\n\n> > litbuf_udeescape() and check_uescapechar() were moved to gram.y. The\n> > former had be massaged to give error messages similar to HEAD. They're\n> > not quite identical, but the position info is preserved. Some of the\n> > functions I moved around don't seem to have any test coverage, so I\n> > should eventually do some work in that regard.\n>\n> I don't terribly like the cross-calls you have between gram.y and scan.l\n> in this formulation. If we have to make these functions (hexval() etc)\n> non-static anyway, maybe we should shove them all into scansup.c?\n\nI ended up making them static inline in scansup.h since that seemed to\nreduce the performance impact (results below). 
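For reference, the helpers in question are only a few lines each. Here is a hedged standalone sketch of hexval() and the UTF-16 surrogate-pair math it feeds into; the in-tree versions differ in detail (the real hexval() raises an error on a bad digit, and the surrogate macros live in the backend headers):

```c
#include <assert.h>

/* hedged sketch: convert one hex digit to its value */
static unsigned int
hexval(unsigned char c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 0xA;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 0xA;
	return 0;					/* the real code reports an error here */
}

/* UTF-16 surrogate ranges: D800-DBFF (first) and DC00-DFFF (second) */
static int
is_utf16_surrogate_first(unsigned int c)
{
	return c >= 0xD800 && c <= 0xDBFF;
}

static int
is_utf16_surrogate_second(unsigned int c)
{
	return c >= 0xDC00 && c <= 0xDFFF;
}

/* combine a valid surrogate pair into one code point above U+FFFF */
static unsigned int
surrogate_pair_to_codepoint(unsigned int first, unsigned int second)
{
	return ((first & 0x3FF) << 10) + 0x10000 + (second & 0x3FF);
}
```

For example, the pair \d83d \de04 combines to U+1F604, while a pair in the wrong order fails the range checks — which is exactly what the surrogate tests below exercise.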
I cribbed some of the\nsurrogate pair queries from the jsonpath regression tests so we have\nsome coverage here. Diff'ing from HEAD to patch, the locations are\ndifferent for a couple cases (a side effect of the different error\nhandling style from scan.l). The patch seems to consistently point at\nan escape sequence, so I think it's okay to use that. HEAD, on the\nother hand, sometimes points at the start of the whole string:\n\n select U&'\\de04\\d83d'; -- surrogates in wrong order\n-psql:test_unicode.sql:10: ERROR: invalid Unicode surrogate pair at\nor near \"U&'\\de04\\d83d'\"\n+psql:test_unicode.sql:10: ERROR: invalid Unicode surrogate pair\n LINE 1: select U&'\\de04\\d83d';\n-               ^\n+                  ^\n select U&'\\de04X'; -- orphan low surrogate\n-psql:test_unicode.sql:12: ERROR: invalid Unicode surrogate pair at\nor near \"U&'\\de04X'\"\n+psql:test_unicode.sql:12: ERROR: invalid Unicode surrogate pair\n LINE 1: select U&'\\de04X';\n-               ^\n+                  ^\n\n> > -Performance is very close to v6 with the information_schema and\n> > pgbench-like queries with standard strings, which is to say also very\n> > close to HEAD. When the latter was changed to use Unicode escapes,\n> > however, it was about 15% slower than HEAD.
That's a big regression\n> > and I haven't tried to pinpoint why.\n>\n> I don't quite follow what you changed to produce the slower test case?\n> But that seems to be something we'd better run to ground before\n> deciding whether to go this way.\n\nSo \"pgbench str\" below refers to driving the parser with this set of\nqueries repeated a couple hundred times in a string:\n\nBEGIN;\nUPDATE pgbench_accounts SET abalance = abalance + 'foobarbaz' WHERE\naid = 'foobarbaz';\nSELECT abalance FROM pgbench_accounts WHERE aid = 'foobarbaz';\nUPDATE pgbench_tellers SET tbalance = tbalance + 'foobarbaz' WHERE tid\n= 'foobarbaz';\nUPDATE pgbench_branches SET bbalance = bbalance + 'foobarbaz' WHERE\nbid = 'foobarbaz';\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES\n('foobarbaz', 'foobarbaz', 'foobarbaz', 'foobarbaz',\nCURRENT_TIMESTAMP);\nEND;\n\nand \"pgbench uesc\" is the same, but the string is\n\nU&'d!0061t!+000061'\nuescape\n'!'\n\nNow that I think of it, the regression in v7 was largely due to the\nfact that the parser has to call the lexer 3 times per string in this\ncase, and that's going to be slower no matter what we do. I added a\nseparate test with ordinary backslash escapes (\"pgbench unicode\"),\nrebased v6-8 onto the same commit on master, and reran the performance\ntests. The runs are generally +/- 1%:\n\n master v6 v7 v8\ninfo-schema 1.49s 1.48s 1.50s 1.53s\npgbench str 1.12s 1.13s 1.15s 1.17s\npgbench unicode 1.29s 1.29s 1.40s 1.36s\npgbench uesc 1.42s 1.44s 1.64s 1.58s\n\nInlining hexval() and friends seems to have helped somewhat for\nunicode escapes, but I'd have to profile to improve that further.\nHowever, v8 has regressed from v7 enough with both simple strings and\nthe information schema that it's a noticeable regression from HEAD.\nI'm guessing getting rid of the \"Uescape\" production is to blame, but\nI haven't tried reverting just that one piece. 
Since inlining the\nrules didn't seem to help with the precedence hacks, it seems like the\nseparate production was a better way. Thoughts?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 1 Aug 2019 15:50:56 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Thu, Aug 1, 2019 at 8:51 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> select U&'\\de04\\d83d'; -- surrogates in wrong order\n> -psql:test_unicode.sql:10: ERROR: invalid Unicode surrogate pair at\n> or near \"U&'\\de04\\d83d'\"\n> +psql:test_unicode.sql:10: ERROR: invalid Unicode surrogate pair\n> LINE 1: select U&'\\de04\\d83d';\n> - ^\n> + ^\n> select U&'\\de04X'; -- orphan low surrogate\n> -psql:test_unicode.sql:12: ERROR: invalid Unicode surrogate pair at\n> or near \"U&'\\de04X'\"\n> +psql:test_unicode.sql:12: ERROR: invalid Unicode surrogate pair\n> LINE 1: select U&'\\de04X';\n> - ^\n> + ^\n\nWhile moving this to the September CF, I noticed this failure on Windows:\n\n+ERROR: Unicode escape values cannot be used for code point values\nabove 007F when the server encoding is not UTF8\nLINE 1: SELECT U&'\\d83d\\d83d';\n^\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.50382\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2019 11:51:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "... it seems this patch needs attention, but I'm not sure from whom.\nThe tests don't pass whenever the server encoding is not UTF8, so I\nsuppose we should either have an alternate expected output file to\naccount for that, or the tests should be removed. 
But anyway the code\nneeds to be reviewed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Sep 2019 17:45:46 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> ... it seems this patch needs attention, but I'm not sure from whom.\n> The tests don't pass whenever the server encoding is not UTF8, so I\n> suppose we should either have an alternate expected output file to\n> account for that, or the tests should be removed. But anyway the code\n> needs to be reviewed.\n\nYeah, I'm overdue to review it, but other things have taken precedence.\n\nThe unportable test is not a problem at this point, since the patch\nisn't finished anyway. I'm not sure yet whether it'd be worth\npreserving that test case in the final version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Sep 2019 17:02:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "[ My apologies for being so slow to get back to this ]\n\nJohn Naylor <john.naylor@2ndquadrant.com> writes:\n> Now that I think of it, the regression in v7 was largely due to the\n> fact that the parser has to call the lexer 3 times per string in this\n> case, and that's going to be slower no matter what we do.\n\nAh, of course. 
I'm not too fussed about the performance of queries with\nan explicit UESCAPE clause, as that seems like a very minority use-case.\nWhat we do want to pay attention to is not regressing for plain\nidentifiers/strings, and to a lesser extent the U& cases without UESCAPE.\n\n> Inlining hexval() and friends seems to have helped somewhat for\n> unicode escapes, but I'd have to profile to improve that further.\n> However, v8 has regressed from v7 enough with both simple strings and\n> the information schema that it's a noticeable regression from HEAD.\n> I'm guessing getting rid of the \"Uescape\" production is to blame, but\n> I haven't tried reverting just that one piece. Since inlining the\n> rules didn't seem to help with the precedence hacks, it seems like the\n> separate production was a better way. Thoughts?\n\nI have duplicated your performance tests here, and get more or less\nthe same results (see below). I agree that the performance of the\nv8 patch isn't really where we want to be --- and it also seems\nrather invasive to gram.y, and hence error-prone. (If we do it\nlike that, I bet my bottom dollar that somebody would soon commit\na patch that adds a production using IDENT not Ident, and it'd take\na long time to notice.)\n\nIt struck me though that there's another solution we haven't discussed,\nand that's to make the token lookahead filter in parser.c do the work\nof converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the\nstring case. I pursued that to the extent of developing the attached\nincomplete patch (\"v9\"), which looks reasonable from a performance\nstandpoint. 
I get these results with tests using the drive_parser\nfunction:\n\ninformation_schema\n\nHEAD\t3447.674 ms, 3433.498 ms, 3422.407 ms\nv6\t3381.851 ms, 3442.478 ms, 3402.629 ms\nv7\t3525.865 ms, 3441.038 ms, 3473.488 ms\nv8\t3567.640 ms, 3488.417 ms, 3556.544 ms\nv9\t3456.360 ms, 3403.635 ms, 3418.787 ms\n\npgbench str\n\nHEAD\t4414.046 ms, 4376.222 ms, 4356.468 ms\nv6\t4304.582 ms, 4245.534 ms, 4263.562 ms\nv7\t4395.815 ms, 4398.381 ms, 4460.304 ms\nv8\t4475.706 ms, 4466.665 ms, 4471.048 ms\nv9\t4392.473 ms, 4316.549 ms, 4318.472 ms\n\npgbench unicode\n\nHEAD\t4959.000 ms, 4921.751 ms, 4945.069 ms\nv6\t4856.998 ms, 4802.996 ms, 4855.486 ms\nv7\t5057.199 ms, 4948.342 ms, 4956.614 ms\nv8\t5008.090 ms, 4963.641 ms, 4983.576 ms\nv9\t4809.227 ms, 4767.355 ms, 4741.641 ms\n\npgbench uesc\n\nHEAD\t5114.401 ms, 5235.764 ms, 5200.567 ms\nv6\t5030.156 ms, 5083.398 ms, 4986.974 ms\nv7\t5915.508 ms, 5953.135 ms, 5929.775 ms\nv8\t5678.810 ms, 5665.239 ms, 5645.696 ms\nv9\t5648.965 ms, 5601.592 ms, 5600.480 ms\n\n(A note about what we're looking at: on my machine, after using cpupower\nto lock down the CPU frequency, and taskset to bind everything to one\nCPU socket, I can get numbers that are very repeatable, to 0.1% or so\n... until I restart the postmaster, and then I get different but equally\nrepeatable numbers. The difference can be several percent, which is a lot\nof noise compared to what we're looking for. I believe the explanation is\nthat kernel ASLR has loaded the backend executable at some different\naddresses and so there are different cache-line-boundary effects. While\nI could lock that down too by disabling ASLR, the result would be to\noveremphasize chance effects of a particular set of cache line boundaries.\nSo I prefer to run all the tests over again after restarting the\npostmaster, a few times, and then look at the overall set of results to\nsee what things look like. 
Each number quoted above is median-of-three\ntests within a single postmaster run.)\n\nAnyway, my conclusion is that the attached patch is at least as fast\nas today's HEAD; it's not as fast as v6, but on the other hand it's\nan even smaller postmaster executable, so there's something to be said\nfor that:\n\n$ size postg*\n text data bss dec hex filename\n7478138 57928 203360 7739426 761822 postgres.head\n7271218 57928 203360 7532506 72efda postgres.v6\n7275810 57928 203360 7537098 7301ca postgres.v7\n7276978 57928 203360 7538266 73065a postgres.v8\n7266274 57928 203360 7527562 72dc8a postgres.v9\n\nI based this on your v7 not v8; not sure if there's anything you\nwant to salvage from v8.\n\nGenerally, I'm pretty happy with this approach: it touches gram.y\nhardly at all, and it removes just about all of the complexity from\nscan.l. I'm happier about dropping the support code into parser.c\nthan the other choices we've discussed.\n\nThere's still undone work here, though:\n\n* I did not touch psql. Probably your patch is fine for that.\n\n* I did not do more with ecpg than get it to compile, using the\nsame hacks as in your v7. It still fails its regression tests,\nbut now the reason is that what we've done in parser/parser.c\nneeds to be transposed into the identical functionality in\necpg/preproc/parser.c. Or at least some kind of functionality\nthere. A problem with this approach is that it presumes we can\nreduce a UIDENT sequence to a plain IDENT, but to do so we need\nassumptions about the target encoding, and I'm not sure that\necpg should make any such assumptions. Maybe ecpg should just\nreject all cases that produce non-ASCII identifiers? 
(Probably\nit could be made to do something smarter with more work, but\nit's not clear to me that it's worth the trouble.)\n\n* I haven't convinced myself either way as to whether it'd be\nbetter to factor out the code duplicated between the UIDENT\nand UCONST cases in base_yylex.\n\nIf this seems like a reasonable approach to you, please fill in\nthe missing psql and ecpg bits.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 25 Nov 2019 17:51:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Tue, Nov 26, 2019 at 5:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> [ My apologies for being so slow to get back to this ]\n\nNo worries -- it's a nice-to-have, not something our users are excited about.\n\n> It struck me though that there's another solution we haven't discussed,\n> and that's to make the token lookahead filter in parser.c do the work\n> of converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the\n> string case.\n\nI recently tried again to get gram.y to handle it without precedence\nhacks (or at least hacks with less mystery) and came to the conclusion\nthat maybe it just doesn't belong in the grammar after all. I hadn't\nthought of any alternatives, so thanks for working on that!\n\nIt seems something is not quite right in v9 with the error position reporting:\n\n SELECT U&'wrong: +0061' UESCAPE '+';\n ERROR: invalid Unicode escape character at or near \"'+'\"\n LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';\n- ^\n+ ^\n\nThe caret is not pointing to the third token, or the second for that\nmatter. What worked for me was un-truncating the current token before\ncalling yylex again. To see if I'm on the right track, I've included\nthis in the attached, which applies on top of your v9.\n\n> Generally, I'm pretty happy with this approach: it touches gram.y\n> hardly at all, and it removes just about all of the complexity from\n> scan.l. 
I'm happier about dropping the support code into parser.c\n> than the other choices we've discussed.\n\nSeems like the best of both worlds. If we ever wanted to ditch the\nwhole token filter and use Bison's %glr mode, we'd have extra work to\ndo, but there doesn't seem to be a rush to do so anyway.\n\n> There's still undone work here, though:\n>\n> * I did not touch psql. Probably your patch is fine for that.\n>\n> * I did not do more with ecpg than get it to compile, using the\n> same hacks as in your v7. It still fails its regression tests,\n> but now the reason is that what we've done in parser/parser.c\n> needs to be transposed into the identical functionality in\n> ecpg/preproc/parser.c. Or at least some kind of functionality\n> there. A problem with this approach is that it presumes we can\n> reduce a UIDENT sequence to a plain IDENT, but to do so we need\n> assumptions about the target encoding, and I'm not sure that\n> ecpg should make any such assumptions. Maybe ecpg should just\n> reject all cases that produce non-ASCII identifiers? (Probably\n> it could be made to do something smarter with more work, but\n> it's not clear to me that it's worth the trouble.)\n\nHmm, I thought we only allowed Unicode escapes in the first place if\nthe server encoding was UTF-8. 
Or did you mean something else?\n\n> If this seems like a reasonable approach to you, please fill in\n> the missing psql and ecpg bits.\n\nWill do.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 Nov 2019 18:38:00 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> It seems something is not quite right in v9 with the error position reporting:\n\n> SELECT U&'wrong: +0061' UESCAPE '+';\n> ERROR: invalid Unicode escape character at or near \"'+'\"\n> LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';\n> - ^\n> + ^\n\n> The caret is not pointing to the third token, or the second for that\n> matter.\n\nInteresting. For me it points at the third token with or without\nyour fix ... some flex version discrepancy maybe? Anyway, I have\nno objection to your fix; it's probably cleaner than what I had.\n\n>> * I did not do more with ecpg than get it to compile, using the\n>> same hacks as in your v7. It still fails its regression tests,\n>> but now the reason is that what we've done in parser/parser.c\n>> needs to be transposed into the identical functionality in\n>> ecpg/preproc/parser.c. Or at least some kind of functionality\n>> there. A problem with this approach is that it presumes we can\n>> reduce a UIDENT sequence to a plain IDENT, but to do so we need\n>> assumptions about the target encoding, and I'm not sure that\n>> ecpg should make any such assumptions. Maybe ecpg should just\n>> reject all cases that produce non-ASCII identifiers? (Probably\n>> it could be made to do something smarter with more work, but\n>> it's not clear to me that it's worth the trouble.)\n\n> Hmm, I thought we only allowed Unicode escapes in the first place if\n> the server encoding was UTF-8. 
Or did you mean something else?\n\nWell, yeah, but the problem here is that ecpg would have to assume\nthat the client encoding that its output program will be executed\nwith is UTF-8. That seems pretty action-at-a-distance-y.\n\nI haven't looked closely at what ecpg does with the processed\nidentifiers. If it just spits them out as-is, a possible solution\nis to not do anything about de-escaping, but pass the sequence\nU&\"...\" (plus UESCAPE ... if any), just like that, on to the grammar\nas the value of the IDENT token.\n\nBTW, in the back of my mind here is Chapman's point that it'd be\na large step forward in usability if we allowed Unicode escapes\nwhen the backend encoding is *not* UTF-8. I think I see how to\nget there once this patch is done, so I definitely would not like\nto introduce some comparable restriction in ecpg.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Nov 2019 10:32:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Tue, Nov 26, 2019 at 10:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I haven't looked closely at what ecpg does with the processed\n> identifiers. If it just spits them out as-is, a possible solution\n> is to not do anything about de-escaping, but pass the sequence\n> U&\"...\" (plus UESCAPE ... if any), just like that, on to the grammar\n> as the value of the IDENT token.\n\nIt does pass them along as-is, so I did it that way.\n\nIn the attached v10, I've synced both ECPG and psql.\n\n> * I haven't convinced myself either way as to whether it'd be\n> better to factor out the code duplicated between the UIDENT\n> and UCONST cases in base_yylex.\n\nI chose to factor it out, since we have 2 versions of parser.c, and\nthis way was much easier to work with.\n\nSome notes:\n\nI arranged for the ECPG grammar to only see SCONST and IDENT. 
With\nUCONST and UIDENT out of the way, it was a small additional step to\nput all string reconstruction into the lexer, which has the advantage\nof allowing removal of the other special-case ECPG string tokens as\nwell. The fewer special cases involved in pasting the grammar\ntogether, the better. In doing so, I've probably introduced memory\nleaks, but I wanted to get your opinion on the overall approach before\ninvestigating.\n\nIn ECPG's parser.c, I simply copied check_uescapechar() and\necpg_isspace(), but we could find a common place if desired. During\ndevelopment, I found that this file replicates the location-tracking\nlogic in the backend, but doesn't seem to make use of it. I also would\nhave had to replicate the backend's datatype for YYLTYPE. Fixing that\nmight be worthwhile some day, but to get this working, I just ripped\nout the extra location tracking.\n\nI no longer use state variables to track scanner state, and in fact I\nremoved the existing \"state_before\" variable in ECPG. Instead, I used\nthe Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().\nThese have been a feature for a long time, it seems, so I think we're\nokay as far as portability. I think it's cleaner this way, and\npossibly faster. I also used this to reunite the xcc and xcsql states.\nThis whole part could be split out into a separate refactoring patch\nto be applied first, if desired.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 3 Dec 2019 18:02:17 +0700", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "I wrote:\n\n> I no longer use state variables to track scanner state, and in fact I\n> removed the existing \"state_before\" variable in ECPG. 
Instead, I used\n> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().\n> These have been a feature for a long time, it seems, so I think we're\n> okay as far as portability. I think it's cleaner this way, and\n> possibly faster.\n\nI thought I should get some actual numbers to test, and the results\nare encouraging:\n\n master v10\ninfo 1.56s 1.51s\nstr 1.18s 1.14s\nunicode 1.33s 1.34s\nuescape 1.44s 1.58s\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Jan 2020 17:56:28 -0600", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n>> I no longer use state variables to track scanner state, and in fact I\n>> removed the existing \"state_before\" variable in ECPG. Instead, I used\n>> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().\n>> These have been a feature for a long time, it seems, so I think we're\n>> okay as far as portability. I think it's cleaner this way, and\n>> possibly faster.\n\nHmm ... after a bit of research I agree that these functions are not\na portability hazard. They are present at least as far back as flex\n2.5.33 which is as old as we've got in the buildfarm.\n\nHowever, I'm less excited about them from a performance standpoint.\nThe BEGIN() macro expands to (ordinarily)\n\n\tyyg->yy_start = integer-constant\n\nwhich is surely pretty cheap. However, yy_push_state is substantially\nmore expensive than that, not least because the first invocation in\na parse cycle will involve a malloc() or palloc(). Likewise yy_pop_state\nis multiple times more expensive than plain BEGIN().\n\nNow, I agree that this is negligible for ECPG's usage, so if\npushing/popping state is helpful there, let's go for it. 
But I am\nnot convinced it's negligible for the backend, and I also don't\nsee that we actually need to track any nested scanner states there.\nSo I'd rather stick to using BEGIN in the backend. Not sure about\npsql.\n\nBTW, while looking through the latest patch it struck me that\n\"UCONST\" is an underspecified and potentially confusing name.\nIt doesn't indicate what kind of constant we're talking about,\nfor instance a C programmer could be forgiven for thinking\nit means something like \"123U\". What do you think of \"USCONST\",\nfollowing UIDENT's lead of prefixing U onto whatever the\nunderlying token type is?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Jan 2020 18:57:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Mon, Jan 13, 2020 at 7:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Hmm ... after a bit of research I agree that these functions are not\n> a portability hazard. They are present at least as far back as flex\n> 2.5.33 which is as old as we've got in the buildfarm.\n>\n> However, I'm less excited about them from a performance standpoint.\n> The BEGIN() macro expands to (ordinarily)\n>\n> yyg->yy_start = integer-constant\n>\n> which is surely pretty cheap. However, yy_push_state is substantially\n> more expensive than that, not least because the first invocation in\n> a parse cycle will involve a malloc() or palloc(). Likewise yy_pop_state\n> is multiple times more expensive than plain BEGIN().\n>\n> Now, I agree that this is negligible for ECPG's usage, so if\n> pushing/popping state is helpful there, let's go for it. But I am\n> not convinced it's negligible for the backend, and I also don't\n> see that we actually need to track any nested scanner states there.\n> So I'd rather stick to using BEGIN in the backend. Not sure about\n> psql.\n\nOkay, removed in v11. 
The advantage of stack functions in ECPG was to\navoid having the two variables state_before_str_start and\nstate_before_str_stop. But if we don't use stack functions in the\nbackend, then consistency wins in my mind. Plus, it was easier for me\nto revert the stack functions for all 3 scanners.\n\n> BTW, while looking through the latest patch it struck me that\n> \"UCONST\" is an underspecified and potentially confusing name.\n> It doesn't indicate what kind of constant we're talking about,\n> for instance a C programmer could be forgiven for thinking\n> it means something like \"123U\". What do you think of \"USCONST\",\n> following UIDENT's lead of prefixing U onto whatever the\n> underlying token type is?\n\nMakes perfect sense. Grepping through the source tree, indeed it seems\nthe replication command scanner is using UCONST for digits.\n\nSome other cosmetic adjustments in ECPG parser.c:\n-Previously I had a WIP comment in about 2 functions that are copies\nfrom elsewhere. In v11 I just noted that they are copied.\n-I thought it'd be nicer if ECPG spelled UESCAPE in caps when\nreconstructing the string.\n-Corrected copy-paste-o in comment\n\nAlso:\n-reverted some spurious whitespace changes\n-revised scan.l comment about the performance benefits of no backtracking\n-split the ECPG C-comment scanning cleanup into a separate patch, as I\ndid for v6. 
I include it here since it's related (merging scanner\nstates), but not relevant to making the core scanner smaller.\n-wrote draft commit messages\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 13 Jan 2020 18:46:01 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> [ v11 patch ]\n\nI pushed this with some small cosmetic adjustments.\n\nOne non-cosmetic adjustment I experimented with was to change\nstr_udeescape() to overwrite the source string in-place, since\nwe know that's modifiable storage and de-escaping can't make\nthe string longer.  I reasoned that saving a palloc() might help\nreduce the extra cost of UESCAPE processing.  It didn't seem to\nmove the needle much though, so I didn't commit it that way.\nA positive reason to keep the API as it stands is that if we\ndo something about the idea of allowing Unicode strings in\nnon-UTF8 backend encodings, that'd likely break the assumption\nabout how the string can't get longer.\n\nI'm about to go off and look at the non-UTF8 idea, btw.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Jan 2020 15:12:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking Flex practices" }, { "msg_contents": "On Tue, Jan 14, 2020 at 4:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > [ v11 patch ]\n>\n> I pushed this with some small cosmetic adjustments.\n\nThanks for your help hacking on the token filter.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jan 2020 08:59:28 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking Flex practices" } ]
[ { "msg_contents": "Hi,\n\nRight now using -w for shutting down clusters with a bit bigger shared\nbuffers will very frequently fail, because the shutdown checkpoint takes\nmuch longer than 60s. Obviously that can be addressed by manually\nsetting PGCTLTIMEOUT to something higher, but forcing many users to do\nthat doesn't seem right. And while many users probably don't want to\naggressively time-out on the shutdown checkpoint, I'd assume most do\nwant to time out aggressively if the server doesn't actually start the\ncheckpoint.\n\nI wonder if we need to split the timeout into two: One value for\npostmaster to acknowledge the action, one for that action to\ncomplete. It seems to me that that'd be useful for all of starting,\nrestarting and stopping.\n\nI think we have all the necessary information in the pid file, we would\njust need to check for PM_STATUS_STARTING for start, PM_STATUS_STOPPING\nfor restart/stop.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 20 Jun 2019 09:33:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Do we need to do better for pg_ctl timeouts?" }, { "msg_contents": "On 2019-06-20 18:33, Andres Freund wrote:\n> I wonder if we need to split the timeout into two: One value for\n> postmaster to acknowledge the action, one for that action to\n> complete. It seems to me that that'd be useful for all of starting,\n> restarting and stopping.\n> \n> I think we have all the necessary information in the pid file, we would\n> just need to check for PM_STATUS_STARTING for start, PM_STATUS_STOPPING\n> for restart/stop.\n\nA related thing I came across the other day: systemd has a new\nsd_notify() functionality EXTEND_TIMEOUT_USEC where the service can\nnotify systemd to extend the timeout. 
I think that's the same idea:\nYou want to timeout if you're stuck, but you want to keep going as long\nas you're doing useful work.\n\nSo yes, improving that would be welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Jun 2019 17:53:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Do we need to do better for pg_ctl timeouts?" } ]
[ { "msg_contents": "Hi\n\nSearching subject for \"Specify thread msgid\" field doesn't work. It returns\nempty result set every time.\n\nRegards\n\nPavel\n\nHiSearching subject for \"Specify thread msgid\" field doesn't work. It returns empty result set every time.RegardsPavel", "msg_date": "Thu, 20 Jun 2019 18:49:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "commitfest application - create patch doesn't work" }, { "msg_contents": "Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> Searching subject for \"Specify thread msgid\" field doesn't work. It returns\n> empty result set every time.\n\nIs this still not working? I was chatting with Magnus and it seems\npossible this was broken and then fixed already.\n\nThanks,\n\nStephen", "msg_date": "Thu, 20 Jun 2019 14:27:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: commitfest application - create patch doesn't work" }, { "msg_contents": "čt 20. 6. 2019 v 20:27 odesílatel Stephen Frost <sfrost@snowman.net> napsal:\n\n> Greetings,\n>\n> * Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> > Searching subject for \"Specify thread msgid\" field doesn't work. It\n> returns\n> > empty result set every time.\n>\n> Is this still not working? I was chatting with Magnus and it seems\n> possible this was broken and then fixed already.\n>\n\nIt is working now.\n\nThank you\n\nPavel\n\n\n> Thanks,\n>\n> Stephen\n>\n\nčt 20. 6. 2019 v 20:27 odesílatel Stephen Frost <sfrost@snowman.net> napsal:Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> Searching subject for \"Specify thread msgid\" field doesn't work. It returns\n> empty result set every time.\n\nIs this still not working?  
I was chatting with Magnus and it seems\npossible this was broken and then fixed already.It is working now.Thank youPavel\n\nThanks,\n\nStephen", "msg_date": "Thu, 20 Jun 2019 21:05:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: commitfest application - create patch doesn't work" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15865\nLogged by: Keith Fiske\nEmail address: keith.fiske@crunchydata.com\nPostgreSQL version: 11.4\nOperating system: CentOS7\nDescription: \n\nWhen testing the setup of our monitoring platform, we started running into\nan error when using PostgreSQL as a backend for Grafana. We narrowed down\nthe issue to only occurring with the latest point release of PG,\nspecifically 11.4, 10.9 and 9.6.14 (previous major versions were not tested\nat this time). The issue can be recreated by following the setup steps for\nGrafana with a PG backend and it will occur when the Grafana service is\nstarted for the first time and it tries to set up its schema in PG. We do\nnot see the error occurring with the previous minor versions (11.3, 10.8,\n9.6.13).\r\n\r\nA standalone, reproducible use-case is as follows. The final, ALTER TABLE\nstatement (which is generated by Grafana) will cause the error:\r\n\r\n-----------------------------\r\nERROR: relation \"UQE_user_login\" already exists\r\n-----------------------------\r\n\r\nHowever if each ALTER COLUMN statement is run independently, it seems to\nwork fine.\r\n\r\n-----------------------------\r\nCREATE TABLE public.\"user\" (\r\n id integer NOT NULL,\r\n version integer NOT NULL,\r\n login character varying(190) NOT NULL,\r\n email character varying(190) NOT NULL,\r\n name character varying(255),\r\n password character varying(255),\r\n salt character varying(50),\r\n rands character varying(50),\r\n company character varying(255),\r\n org_id bigint NOT NULL,\r\n is_admin boolean NOT NULL,\r\n email_verified boolean,\r\n theme character varying(255),\r\n created timestamp without time zone NOT NULL,\r\n updated timestamp without time zone NOT NULL,\r\n help_flags1 bigint DEFAULT 0 NOT NULL\r\n);\r\n\r\nCREATE SEQUENCE public.user_id_seq1\r\n AS integer\r\n START WITH 1\r\n INCREMENT BY 1\r\n NO MINVALUE\r\n NO MAXVALUE\r\n CACHE 
1;\r\n\r\nALTER TABLE ONLY public.\"user\" ALTER COLUMN id SET DEFAULT\nnextval('public.user_id_seq1'::regclass);\r\n\r\nSELECT pg_catalog.setval('public.user_id_seq1', 1, false);\r\n\r\nALTER TABLE ONLY public.\"user\" ADD CONSTRAINT user_pkey1 PRIMARY KEY (id);\r\n\r\nCREATE UNIQUE INDEX \"UQE_user_email\" ON public.\"user\" USING btree (email);\r\n\r\nCREATE UNIQUE INDEX \"UQE_user_login\" ON public.\"user\" USING btree (login);\r\n\r\nALTER TABLE \"user\" ALTER \"login\" TYPE VARCHAR(190), ALTER \"email\" TYPE\nVARCHAR(190), ALTER \"name\" TYPE VARCHAR(255), ALTER \"password\" TYPE\nVARCHAR(255), ALTER \"salt\" TYPE VARCHAR(50), ALTER \"rands\" TYPE VARCHAR(50),\nALTER \"company\" TYPE VARCHAR(255), ALTER \"theme\" TYPE VARCHAR(255);\r\n-----------------------------", "msg_date": "Thu, 20 Jun 2019 20:14:29 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #15865: ALTER TABLE statements causing \"relation already exists\"\n errors when some indexes exist" }, { "msg_contents": "On 2019-Jun-20, PG Bug reporting form wrote:\n\n> When testing the setup of our monitoring platform, we started running into\n> an error when using PostgreSQL as a backend for Grafana. We narrowed down\n> the issue to only occurring with the latest point release of PG,\n> specifically 11.4, 10.9 and 9.6.14 (previous major versions were not tested\n> at this time). The issue can be recreated by following the setup steps for\n> Grafana with a PG backend and it will occur when the Grafana service is\n> started for the first time and it tries to set up its schema in PG. We do\n> not see the error occurring with the previous minor versions (11.3, 10.8,\n> 9.6.13).\n\nConfirmed. 
Bisection says that\n\ncommit e76de886157b7f974d4d247908b242607cfbf043\nAuthor:     Tom Lane <tgl@sss.pgh.pa.us>\nAuthorDate: Wed Jun 12 12:29:24 2019 -0400\nCommitDate: Wed Jun 12 12:29:39 2019 -0400\n\n    Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.\n\nis the culprit.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Jun 2019 16:45:05 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-20, PG Bug reporting form wrote:\n>> When testing the setup of our monitoring platform, we started running into\n>> an error when using PostgreSQL as a backend for Grafana.\n\n> commit e76de886157b7f974d4d247908b242607cfbf043\n> Author:     Tom Lane <tgl@sss.pgh.pa.us>\n> AuthorDate: Wed Jun 12 12:29:24 2019 -0400\n> CommitDate: Wed Jun 12 12:29:39 2019 -0400\n> Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.\n\nYeah, obviously I fat-fingered something there.  Looking ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2019 17:08:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> commit e76de886157b7f974d4d247908b242607cfbf043\n>> Author:     Tom Lane <tgl@sss.pgh.pa.us>\n>> AuthorDate: Wed Jun 12 12:29:24 2019 -0400\n>> CommitDate: Wed Jun 12 12:29:39 2019 -0400\n>> Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.\n\n> Yeah, obviously I fat-fingered something there.  Looking ...\n\nSigh ...
so the answer is that I added the cleanup code (lines\n10831..10864 in HEAD) in the wrong place. Putting it in\nATExecAlterColumnType is wrong because that gets executed potentially\nmultiple times per ALTER command, but I'd coded the cleanup assuming\nthat it would run only once. So we can end up with duplicate entries\nin the changedIndexDefs list.\n\nThe right place to put it is in ATPostAlterTypeCleanup, of course.\n(I think we could eliminate the changedIndexDefs list altogether and\njust build the index-defining commands in the loop that uses them.)\n\nThis is a pretty embarrassing bug, reinforcing my hindsight view\nthat I was firing on very few cylinders last week. It basically\nmeans that any ALTER TABLE that tries to alter the type of more than\none column is going to fail, if any but the last such column has a\ndependent plain (non-constraint) index. The test cases added by\ne76de8861 were oh so close to noticing that, but not close enough.\n\nI'll go fix it, but do we need to consider a near-term re-release?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2019 20:20:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "On Thu, Jun 20, 2019 at 08:20:55PM -0400, Tom Lane wrote:\n> This is a pretty embarrassing bug, reinforcing my hindsight view\n> that I was firing on very few cylinders last week. It basically\n> means that any ALTER TABLE that tries to alter the type of more than\n> one column is going to fail, if any but the last such column has a\n> dependent plain (non-constraint) index. The test cases added by\n> e76de8861 were oh so close to noticing that, but not close enough.\n> \n> I'll go fix it, but do we need to consider a near-term re-release?\n\nUgh. That's a possibility. 
Changing each ALTER TABLE to be run\nindividually can be a pain, and we really ought to push for the fix of\nthe most recent CVE as soon as possible :(\n--\nMichael", "msg_date": "Fri, 21 Jun 2019 09:45:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "I wrote:\n>> Yeah, obviously I fat-fingered something there. Looking ...\n\nAfter further review it seems like I was led into this error by a siren\nsinging something about how we could skip collecting the index definition\nstring for an index we were going to ignore later. (Cue standard lecture\nabout premature optimization...) That absolutely *does not* work, because\nwe might not find out till we're considering some later ALTER TYPE\nsubcommand that the index depends on a relevant constraint. And we have\nto capture the index definition before we alter the type of any column it\ndepends on, or pg_get_indexdef_string will get very confused. That little\ndependency wasn't documented anywhere. 
I also found a pre-existing\ncomment that contradicted the new reality but I'd missed removing in\ne76de8861.\n\nHere's a patch against HEAD --- since I'm feeling more mortal than usual\nright now, I'll put this out for review rather than just pushing it.\nIt might be easier to review the code changes by just ignoring e76de8861\nand diffing against tablecmds.c from before that, as I've done in the\nsecond attachment.\n\nBTW, has anyone got an explanation for the order in which psql is\nlisting the indexes of \"anothertab\" in this test case?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 20 Jun 2019 21:54:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "On Thu, Jun 20, 2019 at 9:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> >> Yeah, obviously I fat-fingered something there. Looking ...\n>\n> After further review it seems like I was led into this error by a siren\n> singing something about how we could skip collecting the index definition\n> string for an index we were going to ignore later. (Cue standard lecture\n> about premature optimization...) That absolutely *does not* work, because\n> we might not find out till we're considering some later ALTER TYPE\n> subcommand that the index depends on a relevant constraint. And we have\n> to capture the index definition before we alter the type of any column it\n> depends on, or pg_get_indexdef_string will get very confused. That little\n> dependency wasn't documented anywhere. 
I also found a pre-existing\n> comment that contradicted the new reality but I'd missed removing in\n> e76de8861.\n>\n> Here's a patch against HEAD --- since I'm feeling more mortal than usual\n> right now, I'll put this out for review rather than just pushing it.\n> It might be easier to review the code changes by just ignoring e76de8861\n> and diffing against tablecmds.c from before that, as I've done in the\n> second attachment.\n>\n> BTW, has anyone got an explanation for the order in which psql is\n> listing the indexes of \"anothertab\" in this test case?\n>\n> regards, tom lane\n>\n>\n\nCan't really provide a thorough code review, but I did apply the patch to\nthe base 11.4 code (not HEAD from github) and the compound ALTER table\nstatement that was failing before now works without error. Thank you for\nthe quick fix!\n\n-- \nKeith Fiske\nSenior Database Engineer\nCrunchy Data - http://crunchydata.com", "msg_date": "Fri, 21 Jun 2019 10:10:27 -0400", "msg_from": "Keith Fiske <keith.fiske@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "Keith Fiske <keith.fiske@crunchydata.com> writes:\n> On Thu, Jun 20, 2019 at 9:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a patch against HEAD --- since I'm feeling more mortal than usual\n>> right now, I'll put this out for review rather than just pushing it.\n\n> Can't really provide a thorough code review, but I did apply the patch to\n> the base 11.4 code (not HEAD from github) and the compound ALTER table\n> statement that was failing before now works without error. Thank you for\n> the quick fix!\n\nThanks for testing!  However, I had a nagging feeling that I was still\nmissing something, and this morning I realized what.
The proposed\npatch basically changes ATExecAlterColumnType's assumptions from\n\"no constraint index will have any direct dependencies on table columns\"\nto \"if a constraint index has a direct dependency on a table column,\nso will its constraint\". This is easily shown to not be the case:\n\nregression=# create table foo (f1 int, f2 int);\nCREATE TABLE\nregression=# alter table foo add exclude using btree (f1 with =) where (f2 > 0);\nALTER TABLE\nregression=# select pg_describe_object(classid,objid,objsubid) as obj, pg_describe_object(refclassid,refobjid,refobjsubid) as ref, deptype from pg_depend where objid >= 'foo'::regclass or refobjid >= 'foo'::regclass;\n obj | ref | deptype \n-------------------------------------+-------------------------------------+---------\n type foo | table foo | i\n type foo[] | type foo | i\n table foo | schema public | n\n constraint foo_f1_excl on table foo | column f1 of table foo | a\n index foo_f1_excl | constraint foo_f1_excl on table foo | i\n index foo_f1_excl | column f2 of table foo | a\n(6 rows)\n\nNotice that the index has a dependency on column f2 but the constraint\ndoesn't. So if we change (just) f2, ATExecAlterColumnType never notices\nthe constraint at all, and kaboom:\n\nregression=# alter table foo alter column f2 type bigint;\nERROR: cannot drop index foo_f1_excl because constraint foo_f1_excl on table foo requires it\nHINT: You can drop constraint foo_f1_excl on table foo instead.\n\nThis is the same with or without yesterday's patch, and while I didn't\ntrouble to verify it, I'm quite sure pre-e76de8861 would fail the same.\n\nI'm not exactly convinced that this dependency structure is a Good Thing,\nbut in any case we don't get to rethink it in released branches. 
So\nwe need to make ATExecAlterColumnType cope, and the way to do that seems\nto be to do the get_index_constraint check in that function not later on.\n\nIn principle this might lead to a few more duplicative\nget_index_constraint calls than before, because if a constraint index has\nmultiple column dependencies we'll have to repeat get_index_constraint for\neach one. But I hardly think that case is worth stressing about the\nperformance of, given it never worked at all before this month.\n\nAs before, I attach a patch against HEAD, plus one that assumes e76de8861\nhas been reverted first, which is likely easier to review.\n\nUnlike yesterday, I'm feeling pretty good about this patch now, but it\nstill wouldn't hurt for somebody else to go over it.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 21 Jun 2019 12:12:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "I wrote:\n> BTW, has anyone got an explanation for the order in which psql is\n> listing the indexes of \"anothertab\" in this test case?\n\nAh, here's the explanation: describe.c is sorting the indexes\nwith this:\n\n\t\"ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;\"\n\nI can see the point of putting the pkey first, I guess, but the preference\nfor uniques second seems pretty bizarre, especially since\n(a) it doesn't distinguish unique constraints from plain unique indexes and\n(b) there's no similar preference for exclusion constraints, even though\nthose might be morally equivalent to a unique constraint.\n\nWhat do people think of dropping the indisunique sort column here?\nObviously not back-patch material, but it might be more sensible\nbehavior going forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 12:47:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, 
"msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "On Fri, Jun 21, 2019 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Keith Fiske <keith.fiske@crunchydata.com> writes:\n> > On Thu, Jun 20, 2019 at 9:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Here's a patch against HEAD --- since I'm feeling more mortal than usual\n> >> right now, I'll put this out for review rather than just pushing it.\n>\n> > Can't really provide a thorough code review, but I did apply the patch to\n> > the base 11.4 code (not HEAD from github) and the compound ALTER table\n> > statement that was failing before now works without error. Thank you for\n> > the quick fix!\n>\n> Thanks for testing! However, I had a nagging feeling that I was still\n> missing something, and this morning I realized what. The proposed\n> patch basically changes ATExecAlterColumnType's assumptions from\n> \"no constraint index will have any direct dependencies on table columns\"\n> to \"if a constraint index has a direct dependency on a table column,\n> so will its constraint\". 
This is easily shown to not be the case:\n>\n> regression=# create table foo (f1 int, f2 int);\n> CREATE TABLE\n> regression=# alter table foo add exclude using btree (f1 with =) where (f2\n> > 0);\n> ALTER TABLE\n> regression=# select pg_describe_object(classid,objid,objsubid) as obj,\n> pg_describe_object(refclassid,refobjid,refobjsubid) as ref, deptype from\n> pg_depend where objid >= 'foo'::regclass or refobjid >= 'foo'::regclass;\n> obj | ref\n> | deptype\n>\n> -------------------------------------+-------------------------------------+---------\n> type foo | table foo\n> | i\n> type foo[] | type foo\n> | i\n> table foo | schema public\n> | n\n> constraint foo_f1_excl on table foo | column f1 of table foo\n> | a\n> index foo_f1_excl | constraint foo_f1_excl on table foo\n> | i\n> index foo_f1_excl | column f2 of table foo\n> | a\n> (6 rows)\n>\n> Notice that the index has a dependency on column f2 but the constraint\n> doesn't. So if we change (just) f2, ATExecAlterColumnType never notices\n> the constraint at all, and kaboom:\n>\n> regression=# alter table foo alter column f2 type bigint;\n> ERROR: cannot drop index foo_f1_excl because constraint foo_f1_excl on\n> table foo requires it\n> HINT: You can drop constraint foo_f1_excl on table foo instead.\n>\n> This is the same with or without yesterday's patch, and while I didn't\n> trouble to verify it, I'm quite sure pre-e76de8861 would fail the same.\n>\n> I'm not exactly convinced that this dependency structure is a Good Thing,\n> but in any case we don't get to rethink it in released branches. So\n> we need to make ATExecAlterColumnType cope, and the way to do that seems\n> to be to do the get_index_constraint check in that function not later on.\n>\n> In principle this might lead to a few more duplicative\n> get_index_constraint calls than before, because if a constraint index has\n> multiple column dependencies we'll have to repeat get_index_constraint for\n> each one. 
But I hardly think that case is worth stressing about the\n> performance of, given it never worked at all before this month.\n>\n> As before, I attach a patch against HEAD, plus one that assumes e76de8861\n> has been reverted first, which is likely easier to review.\n>\n> Unlike yesterday, I'm feeling pretty good about this patch now, but it\n> still wouldn't hurt for somebody else to go over it.\n>\n> regards, tom lane\n>\n>\n\nTested applying the patch against HEAD this time. Combined ALTER TABLE\nagain works without issue.\n\n-- \nKeith Fiske\nSenior Database Engineer\nCrunchy Data - http://crunchydata.com", "msg_date": "Fri, 21 Jun 2019 13:06:19 -0400", "msg_from": "Keith Fiske <keith.fiske@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "Keith Fiske <keith.fiske@crunchydata.com> writes:\n> On Fri, Jun 21, 2019 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As before, I attach a patch against HEAD, plus one that assumes e76de8861\n>> has been reverted first, which is likely easier to review.\n\n> Tested applying the patch against HEAD this time. 
Combined ALTER TABLE\n> again works without issue.\n\nAgain, thanks for testing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 13:27:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "I wrote:\n> As before, I attach a patch against HEAD, plus one that assumes e76de8861\n> has been reverted first, which is likely easier to review.\n> Unlike yesterday, I'm feeling pretty good about this patch now, but it\n> still wouldn't hurt for somebody else to go over it.\n\nI started to back-patch this, and soon noticed that the content of the\nOCLASS_CONSTRAINT case branch in ATExecAlterColumnType has varied across\nversions, which makes copy-and-pasting it seem pretty hazardous. Hence\nit seems prudent to do slightly more work and split that code out into\na subroutine rather than having two copies. As attached, which is a\nhopefully-final patch for HEAD. As before, it presumes reversion of\ne76de8861, because it's a lot easier to see what's going on that way.\n\nBTW ... while working on this, I got annoyed by the fact that\nATExecAlterColumnGenericOptions was inserted, no doubt with the aid of a\ndartboard, into the middle of a large group of AlterColumnType-related\nfunctions. 
Would anyone mind a separate patch to relocate it down\npast those, probably just before ATExecChangeOwner?\n\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 24 Jun 2019 15:05:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15865: ALTER TABLE statements causing \"relation already\n exists\" errors when some indexes exist" }, { "msg_contents": "I wrote:\n>> BTW, has anyone got an explanation for the order in which psql is\n>> listing the indexes of \"anothertab\" in this test case?\n\n> Ah, here's the explanation: describe.c is sorting the indexes\n> with this:\n> \t\"ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;\"\n> I can see the point of putting the pkey first, I guess, but the preference\n> for uniques second seems pretty bizarre, especially since\n> (a) it doesn't distinguish unique constraints from plain unique indexes and\n> (b) there's no similar preference for exclusion constraints, even though\n> those might be morally equivalent to a unique constraint.\n> What do people think of dropping the indisunique sort column here?\n> Obviously not back-patch material, but it might be more sensible\n> behavior going forward.\n\nHere's a proposed patch that does this. The changes it causes in the\nexisting regression test results seem to be sufficient illustration,\nso I didn't add new tests. There is of course no documentation\ntouching on this point ...\n\nWith the patch, psql's rule for listing indexes is \"pkey first, then\neverything else in name order\". 
The traditional rule is basically\ncrazytown IMO when you consider mixes of unique constraints and plain\n(non-constraint-syntax) indexes and exclusion constraints.\n\nA different idea that might make it slightly less crazytown is to\ninclude exclusion constraints in the secondary preference group, along\nthe lines of\n\"ORDER BY i.indisprimary DESC, i.indisunique|i.indisexclusion DESC, c2.relname;\"\nThis'd restore what I think was the original design intention, that\nthe secondary preference group includes all indexes that impose\nconstraints on what the table can hold. But this'd be doubling down\non what I think is fundamentally not a very good idea, so I didn't\npursue it.\n\nAlternatively we could go further and drop the pkey preference too,\nmaking it pure index name order, but I don't feel a need to do that.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 25 Jun 2019 11:02:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Weird index ordering in psql's \\d (was Re: BUG #15865: ALTER TABLE\n statements causing \"relation already exists\" errors when some indexes exist)" } ]
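To make the sort-order discussion concrete, here is a hedged sketch of the catalog query shape involved — an abbreviation for illustration only, not the literal describe.c text (the real query joins more catalogs and selects more columns):

```sql
-- Traditional ordering: pkey first, then unique indexes (whether or not
-- they back a constraint), then everything else by index name.
SELECT c2.relname, i.indisprimary, i.indisunique
FROM pg_index i
     JOIN pg_class c2 ON c2.oid = i.indexrelid
WHERE i.indrelid = 'anothertab'::regclass
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;

-- Patched ordering: pkey first, then everything else in name order.
-- ORDER BY i.indisprimary DESC, c2.relname;
```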
[ { "msg_contents": "In a recent thread[0], the existence of explicit_bzero() was mentioned.\nI went to look where we could use that to clear sensitive information\nfrom memory and found a few candidates:\n\n- In be-secure-common.c, clear the entered SSL passphrase in the error\npath. (In the non-error path, the buffer belongs to OpenSSL.)\n\n- In libpq, clean up after reading .pgpass. Otherwise, the entire file\nincluding all passwords potentially remains in memory.\n\n- In libpq, clear the password after a connection is closed\n(freePGconn/part of PQfinish).\n\n- pg_hba.conf could potentially contain passwords for LDAP, so that\nshould maybe also be cleared, but the structure of that code would make\nthat more involved, so I skipped that for now. Efforts are probably\nbetter directed at providing facilities to avoid having to do that.[1]\n\nAny other ones?\n\nA patch that implements the first three is attached.\n\n\n[0]:\nhttps://www.postgresql.org/message-id/043403c2-f04d-3a69-aa8a-9bb7b9ce8e5b@iki.fi\n[1]:\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJ44ssWhcKP1KYK2Dm9_XXk1_b629_qSDUhH1fWfuAvXg%40mail.gmail.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 21 Jun 2019 09:25:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "using explicit_bzero" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> +#ifndef HAVE_EXPLICIT_BZERO\n> +#define explicit_bzero(b, len) bzero(b, len)\n> +#endif\n\nThis presumes that every platform has bzero, which is unsafe (POSIX\ndoesn't specify it) and is an assumption we kicked to the curb a dozen\nyears ago (067a5cdb3). Please use memset() for the substitute instead.\n\nAlso, I'm a bit suspicious of using AC_CHECK_FUNCS for this; that\ngenerally Doesn't Work for anything that's not a vanilla out-of-line\nfunction. 
Are we worried about people implementing this as a macro,\ncompiler built-in, etc?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 09:25:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> +#ifndef HAVE_EXPLICIT_BZERO\n>> +#define explicit_bzero(b, len) bzero(b, len)\n>> +#endif\n>\n> This presumes that every platform has bzero, which is unsafe (POSIX\n> doesn't specify it) and is an assumption we kicked to the curb a dozen\n> years ago (067a5cdb3). Please use memset() for the substitute instead.\n>\n> Also, I'm a bit suspicious of using AC_CHECK_FUNCS for this; that\n> generally Doesn't Work for anything that's not a vanilla out-of-line\n> function. Are we worried about people implementing this as a macro,\n> compiler built-in, etc?\n\nAlso, on Linux it requires libbsd: https://libbsd.freedesktop.org/\n(which seems to be down, but\nhttps://packages.debian.org/buster/libbsd-dev has a list of the\nfunctions it provides).\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Fri, 21 Jun 2019 14:45:47 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Also, on Linux it requires libbsd: https://libbsd.freedesktop.org/\n> (which seems to be down, but\n> https://packages.debian.org/buster/libbsd-dev has a list of the\n> functions it provides).\n\nUgh, that could be a bit nasty. I might be misremembering, but\nmy hindbrain is running for cover and yelling something about how\nimporting libbsd changes signal semantics. 
Our git log has a few\nscary references to other bad side-effects of -lbsd (cf 55c235b26,\n1337751e5, a27fafecc). On the whole, I'm not excited about pulling\nin a library whose entire purpose is to mess with POSIX semantics.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:01:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-06-21 15:25, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> +#ifndef HAVE_EXPLICIT_BZERO\n>> +#define explicit_bzero(b, len) bzero(b, len)\n>> +#endif\n> \n> This presumes that every platform has bzero, which is unsafe (POSIX\n> doesn't specify it) and is an assumption we kicked to the curb a dozen\n> years ago (067a5cdb3). Please use memset() for the substitute instead.\n\nOK, done.\n\n> Also, I'm a bit suspicious of using AC_CHECK_FUNCS for this; that\n> generally Doesn't Work for anything that's not a vanilla out-of-line\n> function. 
Are we worried about people implementing this as a macro,\n> compiler built-in, etc?\n\nI think we should address that if we actually find such a case.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 23 Jun 2019 21:55:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-06-21 15:45, Dagfinn Ilmari Mannsåker wrote:\n> Also, on Linux it requires libbsd: https://libbsd.freedesktop.org/\n\nNo, it's in glibc.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 23 Jun 2019 21:56:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-06-23 21:55, Peter Eisentraut wrote:\n> On 2019-06-21 15:25, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> +#ifndef HAVE_EXPLICIT_BZERO\n>>> +#define explicit_bzero(b, len) bzero(b, len)\n>>> +#endif\n>>\n>> This presumes that every platform has bzero, which is unsafe (POSIX\n>> doesn't specify it) and is an assumption we kicked to the curb a dozen\n>> years ago (067a5cdb3). 
Please use memset() for the substitute instead.\n> \n> OK, done.\n\nand with patch attached\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 23 Jun 2019 21:57:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Sun, Jun 23, 2019 at 09:57:18PM +0200, Peter Eisentraut wrote:\n> On 2019-06-23 21:55, Peter Eisentraut wrote:\n>> On 2019-06-21 15:25, Tom Lane wrote:\n>>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>>> +#ifndef HAVE_EXPLICIT_BZERO\n>>>> +#define explicit_bzero(b, len) bzero(b, len)\n>>>> +#endif\n>>>\n>>> This presumes that every platform has bzero, which is unsafe (POSIX\n>>> doesn't specify it) and is an assumption we kicked to the curb a dozen\n>>> years ago (067a5cdb3). Please use memset() for the substitute instead.\n\n+1.\n\n>> OK, done.\n> \n> and with patch attached\n\nCreateRole() and AlterRole() can manipulate a password in plain format\nin memory. 
The cleanup could be done just after calling\nencrypt_password() in user.c.\n\nCould it be possible to add the new flag in pg_config.h.win32?\n--\nMichael", "msg_date": "Mon, 24 Jun 2019 14:08:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Sun, Jun 23, 2019 at 09:56:40PM +0200, Peter Eisentraut wrote:\n> On 2019-06-21 15:45, Dagfinn Ilmari Mannsåker wrote:\n>> Also, on Linux it requires libbsd: https://libbsd.freedesktop.org/\n> \n> No, it's in glibc.\n\nFrom man:\nexplicit_bzero() first appeared in glibc 2.25.\n--\nMichael", "msg_date": "Mon, 24 Jun 2019 14:10:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Sun, Jun 23, 2019 at 09:56:40PM +0200, Peter Eisentraut wrote:\n>> On 2019-06-21 15:45, Dagfinn Ilmari Mannsåker wrote:\n>>> Also, on Linux it requires libbsd: https://libbsd.freedesktop.org/\n>> \n>> No, it's in glibc.\n>\n> From man:\n> explicit_bzero() first appeared in glibc 2.25.\n\nAh, I was looking on my Debian Stretch (stable) box, which only has\nglibc 2.24. Buster (testing, due out next week) has 2.28 which indeed\nhas it.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. 
- Calle Dybedahl\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:03:20 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Mon, Jun 24, 2019 at 7:57 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-06-23 21:55, Peter Eisentraut wrote:\n> > On 2019-06-21 15:25, Tom Lane wrote:\n> >> years ago (067a5cdb3). Please use memset() for the substitute instead.\n> >\n> > OK, done.\n\n+#ifndef HAVE_EXPLICIT_BZERO\n+#define explicit_bzero(b, len) memset(b, 0, len)\n+#endif\n\nI noticed some other libraries use memset through a function pointer\nor at least define a function the compiler can't see.\n\n> and with patch attached\n\nThe ssl tests fail:\n\nFATAL: could not load private key file \"server-password.key\": bad decrypt\n\nThat's apparently due to the passphrase being clobbered in the output\nbuffer before we've managed to use it:\n\n@@ -118,6 +118,7 @@ run_ssl_passphrase_command(const char *prompt,\nbool is_server_start, char *buf,\n buf[--len] = '\\0';\n\n error:\n+ explicit_bzero(buf, size);\n pfree(command.data);\n return len;\n }\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sat, 6 Jul 2019 00:06:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-05 14:06, Thomas Munro wrote:\n> +#ifndef HAVE_EXPLICIT_BZERO\n> +#define explicit_bzero(b, len) memset(b, 0, len)\n> +#endif\n> \n> I noticed some other libraries use memset through a function pointer\n> or at least define a function the compiler can't see.\n\nI don't understand what you are getting at here.\n\n> The ssl tests fail:\n> \n> FATAL: could not load private key file \"server-password.key\": bad decrypt\n> \n> That's apparently due to the passphrase being clobbered in the output\n> buffer before we've managed to use it:\n> \n> @@ -118,6 +118,7 
@@ run_ssl_passphrase_command(const char *prompt,\n> bool is_server_start, char *buf,\n> buf[--len] = '\\0';\n> \n> error:\n> + explicit_bzero(buf, size);\n> pfree(command.data);\n> return len;\n> }\n\nYeah, that's a silly mistake. New patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 5 Jul 2019 15:07:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Sat, Jul 6, 2019 at 1:07 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-07-05 14:06, Thomas Munro wrote:\n> > +#ifndef HAVE_EXPLICIT_BZERO\n> > +#define explicit_bzero(b, len) memset(b, 0, len)\n> > +#endif\n> >\n> > I noticed some other libraries use memset through a function pointer\n> > or at least define a function the compiler can't see.\n>\n> I don't understand what you are getting at here.\n\nDo we want to provide a replacement implementation that actually\nprevents the compiler from generating no code in some circumstances?\nThen I think we need at least a function defined in another\ntranslation unit so the compiler can't see what it does, no?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sat, 6 Jul 2019 09:02:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-05 23:02, Thomas Munro wrote:\n> On Sat, Jul 6, 2019 at 1:07 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-07-05 14:06, Thomas Munro wrote:\n>>> +#ifndef HAVE_EXPLICIT_BZERO\n>>> +#define explicit_bzero(b, len) memset(b, 0, len)\n>>> +#endif\n>>>\n>>> I noticed some other libraries use memset through a function pointer\n>>> or at least define a function the compiler can't see.\n>>\n>> I don't understand what you are getting at here.\n> \n> Do we 
want to provide a replacement implementation that actually\n> prevents the compiler from generating no code in some circumstances?\n> Then I think we need at least a function defined in another\n> translation unit so the compiler can't see what it does, no?\n\nI see. My premise, which should perhaps be explained in a comment at\nleast, is that on an operating system that does not provide\nexplicit_bzero() (or an obvious alternative), we don't care about\naddressing this particular security concern, since the rest of the\noperating system won't be secure in this way either. It shouldn't be\nour job to fight this battle if the rest of the OS doesn't care.\n\nAn alternative patch would define explicit_bzero() to nothing if not\navailable. But that might create bugs if subsequent code relies on the\nmemory being zeroed, independent of security concerns, so I changed it\nto use memset() so that at least logically both code paths are the same.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 6 Jul 2019 15:11:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Sun, Jul 7, 2019 at 1:11 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I see. My premise, which should perhaps be explained in a comment at\n> least, is that on an operating system that does not provide\n> explicit_bzero() (or an obvious alternative), we don't care about\n> addressing this particular security concern, since the rest of the\n> operating system won't be secure in this way either. It shouldn't be\n> our job to fight this battle if the rest of the OS doesn't care.\n>\n> An alternative patch would define explicit_bzero() to nothing if not\n> available. 
But that might create bugs if subsequent code relies on the\n> memory being zeroed, independent of security concerns, so I changed it\n> to use memset() so that at least logically both code paths are the same.\n\nFollowing a trail of crumbs beginning at OpenSSH's fallback\nimplementation of this[1], I learned that C11 has standardised\nmemset_s[2] for this purpose. Macs have memset_s but no\nexplicit_bzero. FreeBSD has both. I wonder if it'd be better to make\nmemset_s the function we use in our code, considering its standard\nblessing and therefore likelihood of being available on every system\neventually.\n\nOh, I see the problem: glibc 2.25 introduced explicit_bzero, but no\nversion of glibc has memset_s yet. So that's why you did it that\nway... RHEL 8 and Debian 10 ship with explicit_bzero. Bleugh.\n\n[1] https://github.com/openssh/openssh-portable/blob/master/openbsd-compat/explicit_bzero.c\n[2] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf#%5B%7B%22num%22%3A1353%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C0%2C792%2C0%5D\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jul 2019 12:39:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Mon, Jun 24, 2019 at 02:08:50PM +0900, Michael Paquier wrote:\n> CreateRole() and AlterRole() can manipulate a password in plain format\n> in memory. The cleanup could be done just after calling\n> encrypt_password() in user.c.\n> \n> Could it be possible to add the new flag in pg_config.h.win32?\n\nWhile remembering about it... 
Shouldn't the memset(0) now happening in\nbase64.c for the encoding and decoding routines when facing a failure\nuse explicit_bzero()?\n--\nMichael", "msg_date": "Thu, 11 Jul 2019 10:11:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-Jul-11, Thomas Munro wrote:\n\n> Following a trail of crumbs beginning at OpenSSH's fallback\n> implementation of this[1], I learned that C11 has standardised\n> memset_s[2] for this purpose. Macs have memset_s but no\n> explicit_bzero. FreeBSD has both. I wonder if it'd be better to make\n> memset_s the function we use in our code, considering its standard\n> blessing and therefore likelihood of being available on every system\n> eventually.\n\nSounds like a future-proof way would be to implement memset_s in\nsrc/port if absent from the OS (using explicit_bzero and other tricks),\nand use that.\n\nHere's a portable implementation (includes _WIN32 and NetBSD's\nexplicit_memset) under ISC license:\nhttps://github.com/jedisct1/libsodium/blob/master/src/libsodium/sodium/utils.c#L112\n(from https://www.cryptologie.net/article/419/zeroing-memory-compiler-optimizations-and-memset_s/ )\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 17 Jul 2019 17:19:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-11, Thomas Munro wrote:\n>> Following a trail of crumbs beginning at OpenSSH's fallback\n>> implementation of this[1], I learned that C11 has standardised\n>> memset_s[2] for this purpose. Macs have memset_s but no\n>> explicit_bzero. FreeBSD has both. 
I wonder if it'd be better to make\n>> memset_s the function we use in our code, considering its standard\n>> blessing and therefore likelihood of being available on every system\n>> eventually.\n\n> Sounds like a future-proof way would be to implement memset_s in\n> src/port if absent from the OS (using explicit_bzero and other tricks),\n> and use that.\n\n+1 for using the C11-standard name, even if that's not anywhere\nin the real world yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2019 18:45:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-11 03:11, Michael Paquier wrote:\n> On Mon, Jun 24, 2019 at 02:08:50PM +0900, Michael Paquier wrote:\n>> CreateRole() and AlterRole() can manipulate a password in plain format\n>> in memory. The cleanup could be done just after calling\n>> encrypt_password() in user.c.\n>>\n>> Could it be possible to add the new flag in pg_config.h.win32?\n> \n> While remembering about it... Shouldn't the memset(0) now happening in\n> base64.c for the encoding and decoding routines when facing a failure\n> use explicit_bzero()?\n\nbase64.c doesn't know what the data it is dealing with is used for.\nThat should be the responsibility of the caller, no?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 20:11:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-18 00:45, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2019-Jul-11, Thomas Munro wrote:\n>>> Following a trail of crumbs beginning at OpenSSH's fallback\n>>> implementation of this[1], I learned that C11 has standardised\n>>> memset_s[2] for this purpose. Macs have memset_s but no\n>>> explicit_bzero. 
FreeBSD has both. I wonder if it'd be better to make\n>>> memset_s the function we use in our code, considering its standard\n>>> blessing and therefore likelihood of being available on every system\n>>> eventually.\n> \n>> Sounds like a future-proof way would be to implement memset_s in\n>> src/port if absent from the OS (using explicit_bzero and other tricks),\n>> and use that.\n> \n> +1 for using the C11-standard name, even if that's not anywhere\n> in the real world yet.\n\nISTM that a problem is that you cannot implement a replacement\nmemset_s() as a wrapper around explicit_bzero(), unless you also want to\nimplement the bound checking stuff. (The \"s\"/safe in this family of\nfunctions refers to the bound checking, not the cannot-be-optimized-away\nproperty.) The other way around it is easier.\n\nAlso, the \"s\" family of functions appears to be a quagmire of\ncontroversy and incompatibility, so it's perhaps better to stay away\nfrom it for the time being.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 20:17:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-07-18 00:45, Tom Lane wrote:\n>> +1 for using the C11-standard name, even if that's not anywhere\n>> in the real world yet.\n\n> ISTM that a problem is that you cannot implement a replacement\n> memset_s() as a wrapper around explicit_bzero(), unless you also want to\n> implement the bound checking stuff. (The \"s\"/safe in this family of\n> functions refers to the bound checking, not the cannot-be-optimized-away\n> property.) 
The other way around it is easier.\n\nOh, hm.\n\n> Also, the \"s\" family of functions appears to be a quagmire of\n> controversy and incompatibility, so it's perhaps better to stay away\n> from it for the time being.\n\nFair enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2019 14:35:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Another patch, with various fallback implementations.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 29 Jul 2019 11:30:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Mon, Jul 29, 2019 at 11:30:53AM +0200, Peter Eisentraut wrote:\n> Another patch, with various fallback implementations.\n\nI have spotted some issues with this patch:\n1) The list of port files @pgportfiles in Mkvcbuild.pm has not been\nupdated with the new file explicit_bzero.c, so the compilation would\nfail with MSVC.\n2) pg_config.h.win32 does not include the two new flags (same as\nhttps://www.postgresql.org/message-id/20190624050850.GE1637@paquier.xyz)\n3) What about CreateRole() and AlterRole() which can manipulate a\npassword in plain format before hashing? (same message as previous\npoint).\n\nNit: src/port/explicit_bzero.c misses its IDENTIFICATION tag.\n--\nMichael", "msg_date": "Tue, 30 Jul 2019 14:08:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Hi,\n\nOn 2019-07-29 11:30:53 +0200, Peter Eisentraut wrote:\n> For platforms that don't have explicit_bzero(), provide various\n> fallback implementations. 
(explicit_bzero() itself isn't standard,\n> but as Linux/glibc and OpenBSD have it, it's the most common spelling,\n> so it makes sense to make that the invocation point.)\n\nI think it's better to have a pg_explicit_bzero or such, and implement\nthat via the various platform dependant mechanisms. It's considerably\nharder to understand code when one is surprised that a function normally\nnot available is called, the buildsystem part is really hard to\nunderstand (with runtime and code filenames differing etc), and invites\nAPI breakages. And it's not really more work to have our own name.\n\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * explicit_bzero.c\n> + *\n> + * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +\n> +#include \"c.h\"\n> +\n> +#if defined(HAVE_MEMSET_S)\n> +\n> +void\n> +explicit_bzero(void *buf, size_t len)\n> +{\n> +\t(void) memset_s(buf, len, 0, len);\n> +}\n> +\n> +#elif defined(WIN32)\n> +\n> +#include \"c.h\"\n\nHm?\n\n\n> +/*\n> + * Indirect call through a volatile pointer to hopefully avoid dead-store\n> + * optimisation eliminating the call. (Idea taken from OpenSSH.) We can't\n> + * assume bzero() is present either, so for simplicity we define our own.\n> + */\n> +\n> +static void\n> +bzero2(void *buf, size_t len)\n> +{\n> +\tmemset(buf, 0, len);\n> +}\n> +\n> +static void (* volatile bzero_p)(void *, size_t) = bzero2;\n\nHm, I'm not really sure that this does that much. Especially when the\ncall is via a function in the same translation unit.\n\n\n> +void\n> +explicit_bzero(void *buf, size_t len)\n> +{\n> +\tbzero_p(buf, len);\n\nI've not followed this discussion. 
But why isn't the obvious\nimplementation here memset(...); pg_compiler_barrier()?\n\nA quick web search indicates that that's what a bunch of projects in the\ncryptography space also ended up with (well, __asm__ __volatile__(\"\" :::\n\"memory\"), which is what pg_compiler_barrier boils down to for\ngcc/clang/compatibles).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jul 2019 22:58:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Tue, Jul 30, 2019 at 5:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > +#include \"c.h\"\n>\n> Hm?\n\nHeh.\n\n> > +static void (* volatile bzero_p)(void *, size_t) = bzero2;\n>\n> Hm, I'm not really sure that this does that much. Especially when the\n> call is via a function in the same translation unit.\n\nYeah, I wondered the same (when reading the OpenSSH version). You'd\nthink you'd need a non-static global so it has to assume that it could\nchange.\n\n> > +void\n> > +explicit_bzero(void *buf, size_t len)\n> > +{\n> > + bzero_p(buf, len);\n>\n> I've not followed this discussion. 
But why isn't the obvious\n> implementation here memset(...); pg_compiler_barrier()?\n>\n> A quick web search indicates that that's what a bunch of projects in the\n> cryptography space also ended up with (well, __asm__ __volatile__(\"\" :::\n> \"memory\"), which is what pg_compiler_barrier boils down to for\n> gcc/clang/compatibles).\n\nAt a glance, I think 3.4.3 of this 2017 paper says that might not work\non Clang and those other people might have a bug:\n\nhttps://www.usenix.org/system/files/conference/usenixsecurity17/sec17-yang.pdf\n\ncfbot says:\n\nfe-connect.obj : error LNK2019: unresolved external symbol\nexplicit_bzero referenced in function freePGconn\n[C:\\projects\\postgresql\\libpq.vcxproj]\n\nMoved to next CF.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:08:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Hi,\n\nOn 2019-08-01 20:08:15 +1200, Thomas Munro wrote:\n> On Tue, Jul 30, 2019 at 5:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > > +#include \"c.h\"\n> > > +static void (* volatile bzero_p)(void *, size_t) = bzero2;\n> >\n> > Hm, I'm not really sure that this does that much. Especially when the\n> > call is via a function in the same translation unit.\n>\n> Yeah, I wondered the same (when reading the OpenSSH version). You'd\n> think you'd need a non-static global so it has to assume that it could\n> change.\n\nThe implementations in other projects I saw did the above trick, but\nalso marked the symbol as weak. Telling the compiler it can't know what\nversion will be used at runtime. But that adds a bunch of compiler\ndependencies too.\n\n\n> > > +void\n> > > +explicit_bzero(void *buf, size_t len)\n> > > +{\n> > > + bzero_p(buf, len);\n> >\n> > I've not followed this discussion. 
But why isn't the obvious\n> > implementation here memset(...); pg_compiler_barrier()?\n> >\n> > A quick web search indicates that that's what a bunch of projects in the\n> > cryptography space also ended up with (well, __asm__ __volatile__(\"\" :::\n> > \"memory\"), which is what pg_compiler_barrier boils down to for\n> > gcc/clang/compatibles).\n>\n> At a glance, I think 3.4.3 of this 2017 paper says that might not work\n> on Clang and those other people might have a bug:\n>\n> https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-yang.pdf\n\nhttps://bugs.llvm.org/show_bug.cgi?id=15495\n\nWe could just combine it with volatile out of paranoia anyway. But I'm\nalso more than a bit doubtful about this bug report. There's simply no\nmemory here. It's not that the memset is optimized away, it's that\nthere's no memory at all. It results in:\n\n .file \"test.c\"\n .globl foo # -- Begin function foo\n .p2align 4, 0x90\n .type foo,@function\nfoo: # @foo\n .cfi_startproc\n# %bb.0:\n #APP\n #NO_APP\n retq\n\nthere are no secrets left over here. If you actually force the memory to be\nfilled, even if it's afterwards dead, you do get the memory\ncleaned. 
E.g.\n\n#include <string.h>\n\nstatic void mybzero(char *buf, int len) {\n memset(buf,0,len);\n asm(\"\" : : : \"memory\");\n}\n\nextern void grab_password(char *buf, int len);\n\nint main(int argc, char **argv)\n{\n char buf[512];\n\n grab_password(buf, sizeof(buf));\n\n mybzero(buf, sizeof(buf));\n\n return 0;\n}\n\nresults in\n\nmain: # @main\n\t.cfi_startproc\n# %bb.0:\n\tpushq\t%rbx\n\t.cfi_def_cfa_offset 16\n\tsubq\t$512, %rsp # imm = 0x200\n\t.cfi_def_cfa_offset 528\n\t.cfi_offset %rbx, -16\n\tmovq\t%rsp, %rbx\n\tmovq\t%rbx, %rdi\n\tmovl\t$512, %esi # imm = 0x200\n\tcallq\tgrab_password\n\tmovl\t$512, %edx # imm = 0x200\n\tmovq\t%rbx, %rdi\n\txorl\t%esi, %esi\n\tcallq\tmemset\n\t#APP\n\t#NO_APP\n\txorl\t%eax, %eax\n\taddq\t$512, %rsp # imm = 0x200\n\t.cfi_def_cfa_offset 16\n\tpopq\t%rbx\n\t.cfi_def_cfa_offset 8\n\tretq\n.Lfunc_end0:\n\t.size\tmain, .Lfunc_end0-main\n\t.cfi_endproc\n # -- End function\n\nAlthough - and that is not surprising - if you lie and mark\ngrab_password as being pure (__attribute__((pure)), which signals the\nfunction has no side effects except its return value), it'll optimize the\nwhole memory away again. But no secrets leaked again:\n\nmain: # @main\n\t.cfi_startproc\n# %bb.0:\n\t#APP\n\t#NO_APP\n\txorl\t%eax, %eax\n\tretq\n.Lfunc_end0:\n\t.size\tmain, .Lfunc_end0-main\n\t.cfi_endproc\n\n\nOut of paranoia we could go add the additional step and have a\nbarrier variant that's variable specific, and make the __asm__\n__volatile__ also take the input as \"r\"(buf), which'd prevent even this\nissue (because now the memory is actually understood as being used).\n\nWhich turns out to be e.g. 
what google did for boringssl...\n\nhttps://boringssl.googlesource.com/boringssl/+/ad1907fe73334d6c696c8539646c21b11178f20f%5E!/#F0\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 1 Aug 2019 10:06:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-30 07:08, Michael Paquier wrote:\n> On Mon, Jul 29, 2019 at 11:30:53AM +0200, Peter Eisentraut wrote:\n>> Another patch, with various fallback implementations.\n> \n> I have spotted some issues with this patch:\n> 1) The list of port files @pgportfiles in Mkvcbuild.pm has not been\n> updated with the new file explicit_bzero.c, so the compilation would\n> fail with MSVC.\n> 2) pg_config.h.win32 does not include the two new flags (same as\n> https://www.postgresql.org/message-id/20190624050850.GE1637@paquier.xyz)\n\nAnother patch, to attempt to fix the Windows build.\n\n> 3) What about CreateRole() and AlterRole() which can manipulate a\n> password in plain format before hashing? (same message as previous\n> point).\n\nIf you want to secure CREATE ROLE foo PASSWORD 'plaintext' then you need\nto also analyze memory usage in protocol processing and parsing and the\nlike. This would be a laborious and difficult to verify undertaking.\nIt's better to say, if you want to be secure, don't do that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 13 Aug 2019 10:30:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-07-30 07:58, Andres Freund wrote:\n> I think it's better to have a pg_explicit_bzero or such, and implement\n> that via the various platform dependant mechanisms. 
It's considerably\n> harder to understand code when one is surprised that a function normally\n> not available is called, the buildsystem part is really hard to\n> understand (with runtime and code filenames differing etc), and invites\n> API breakages. And it's not really more work to have our own name.\n\nexplicit_bzero() is a pretty established and quasi-standard name by now,\nnot too different from other things in src/port/.\n\n>> +/*\n>> + * Indirect call through a volatile pointer to hopefully avoid dead-store\n>> + * optimisation eliminating the call. (Idea taken from OpenSSH.) We can't\n>> + * assume bzero() is present either, so for simplicity we define our own.\n>> + */\n>> +\n>> +static void\n>> +bzero2(void *buf, size_t len)\n>> +{\n>> +\tmemset(buf, 0, len);\n>> +}\n>> +\n>> +static void (* volatile bzero_p)(void *, size_t) = bzero2;\n> \n> Hm, I'm not really sure that this does that much. Especially when the\n> call is via a function in the same translation unit.\n\nThis is the fallback implementation from OpenSSH, so it's plausible that\nit does something. 
It's worth verifying, of course.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Aug 2019 10:33:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Tue, Aug 13, 2019 at 10:30:39AM +0200, Peter Eisentraut wrote:\n> Another patch, to attempt to fix the Windows build.\n\nI have not been able to test the compilation, but the changes look\ngood on this side.\n--\nMichael", "msg_date": "Wed, 14 Aug 2019 12:00:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-08-14 05:00, Michael Paquier wrote:\n> On Tue, Aug 13, 2019 at 10:30:39AM +0200, Peter Eisentraut wrote:\n>> Another patch, to attempt to fix the Windows build.\n> \n> I have not been able to test the compilation, but the changes look\n> good on this side.\n\nRebased patch, no functionality changes.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 24 Aug 2019 12:22:06 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-Aug-24, Peter Eisentraut wrote:\n\n> On 2019-08-14 05:00, Michael Paquier wrote:\n> > On Tue, Aug 13, 2019 at 10:30:39AM +0200, Peter Eisentraut wrote:\n> >> Another patch, to attempt to fix the Windows build.\n> > \n> > I have not been able to test the compilation, but the changes look\n> > good on this side.\n> \n> Rebased patch, no functionality changes.\n\nMarked RfC. 
Can we get on with this?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Sep 2019 16:38:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On Wed, Sep 04, 2019 at 04:38:21PM -0400, Alvaro Herrera wrote:\n> Marked RfC. Can we get on with this?\n\nFWIW, I have been able to test this one on Windows with MSVC and\nthings are handled correctly.\n--\nMichael", "msg_date": "Thu, 5 Sep 2019 11:12:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-09-05 04:12, Michael Paquier wrote:\n> On Wed, Sep 04, 2019 at 04:38:21PM -0400, Alvaro Herrera wrote:\n>> Marked RfC. Can we get on with this?\n> \n> FWIW, I have been able to test this one on Windows with MSVC and\n> things are handled correctly.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Sep 2019 08:38:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "Hi,\n\nOn 2019-09-05 08:38:36 +0200, Peter Eisentraut wrote:\n> On 2019-09-05 04:12, Michael Paquier wrote:\n> > On Wed, Sep 04, 2019 at 04:38:21PM -0400, Alvaro Herrera wrote:\n> >> Marked RfC. Can we get on with this?\n> > \n> > FWIW, I have been able to test this one on Windows with MSVC and\n> > things are handled correctly.\n> \n> committed\n\nI still think this change is done wrongly, by providing an\nimplementation for a library function implemented in various\nprojects. If you e.g. 
dynamically load a library that implements its own\nversion of bzero, ours will replace it in many cases.\n\nI think all this implementation actually guarantees is that bzero2 is\nread, but not that the copy is not elided. In practice that's *probably*\ngood enough, but a compiler could just check whether bzero_p points to\nmemset.\n\nhttp://cseweb.ucsd.edu/~lerner/papers/dse-usenix2017.pdf\nhttps://boringssl-review.googlesource.com/c/boringssl/+/1339/\n\nI think we have absolutely no business possibly intercepting / replacing\nactually securely coded implementations of sensitive functions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Sep 2019 08:18:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: using explicit_bzero" }, { "msg_contents": "On 2019-09-09 17:18, Andres Freund wrote:\n> I think all this implementation actually guarantees is that bzero2 is\n> read, but not that the copy is not elided. In practice that's *probably*\n> good enough, but a compiler could just check whether bzero_p points to\n> memset.\n\nAre you saying that the replacement implementation we provide is not\ngood enough? If so, I'm happy to look at alternatives. But that's the\ndesign from OpenSSH, so if that is wrong, then there are bigger\nproblems. We could also take the OpenBSD implementation, but that has a\nGCC-ish dependency, so we would probably want the OpenSSH implementation\nas a fallback anyway.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Sep 2019 11:10:16 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: using explicit_bzero" } ]
[ { "msg_contents": "After the earlier thread [0] that dealt with ALTER TABLE on system\ncatalogs, I took a closer look at the allow_system_table_mods setting.\nI found a few oddities, and it seems there is some room for improvement.\n\nAttached are some patches to get the discussion rolling: One patch makes\nallow_system_table_mods settable at run time by superuser, the second\none is a test suite that documents the current behavior that I gathered\nafter analyzing the source code, the third one removes some code that\nwas found useless by the tests. (The first patch might be useful on its\nown, but right now it's just to facilitate the test suite.)\n\nSome observations:\n\n- For the most part, a_s_t_m establishes an additional level of access\ncontrol on top of superuserdom for doing DDL on system catalogs. That\nseems like a useful definition.\n\n- But enabling a_s_t_m also allows a non-superuser to do DML on system\ncatalogs. That seems like an entirely unrelated and surprising behavior.\n\n- Some checks are redundant with the pinning concept of the dependency\nsystem. For example, you can't drop a system catalog even with a_s_t_m\non. That seems useful, of course, but as a result there is a bit of\ndead or useless code around. (The dependency system is newer than a_s_t_m.)\n\n- The source code comments indicate that SET STATISTICS on system\ncatalogs is supposed to be allowed without a_s_t_m, but it actually\ndoesn't work.\n\nProposals and discussion points:\n\n- Having a test suite like this seems useful.\n\n- The behavior that a_s_t_m allows non-superusers to do DML on system\ncatalogs should be removed. (Regular permissions can be used for that.)\n\n- Things that are useful in normal use, for example SET STATISTICS, some\nor all reloptions, should always be allowed (subject to other access\ncontrol).\n\n- There is currently no support in pg_dump to preserve any of those\nchanges. 
Maybe that's not a big problem.\n\n- Dead code or code that is redundant with pinning should be removed.\n\nAny other thoughts?\n\n\n[0]:\nhttps://www.postgresql.org/message-id/flat/e49f825b-fb25-0bc8-8afc-d5ad895c7975%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 21 Jun 2019 11:12:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "allow_system_table_mods stuff" }, { "msg_contents": "On Fri, Jun 21, 2019 at 5:12 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Attached are some patches to get the discussion rolling: One patch makes\n> allow_system_table_mods settable at run time by superuser, the second\n> one is a test suite that documents the current behavior that I gathered\n> after analyzing the source code, the third one removes some code that\n> was found useless by the tests. (The first patch might be useful on its\n> own, but right now it's just to facilitate the test suite.)\n\nSounds generally sensible (but I didn't read the code). I\nparticularly like the first idea.\n\n> Any other thoughts?\n\nI kinda feel like we should prohibit DML on system catalogs, even by\nsuperusers, unless you press the big red button that says \"I am\ndefinitely sure that I know what I'm doing.\" Linking that with\nallow_system_table_mods in some way seems natural, but I'm not totally\nsure it's the right thing to do. I guess we could have\nallow_system_table_mods={no,yes,yesyesyes}, the former allowing DML\nand not-too-scary things and the latter allowing anything at all.\n\nA related issue is that allow_system_table_mods prohibits both stuff\nthat's probably not going to cause any big problem and stuff that is\nalmost guaranteed to make the system permanently unusable - e.g. 
you\ncould 'SET STORAGE' on a system catalog column, which is really pretty\ninnocuous, or you could change the oid column of pg_database to a\nvarlena type, which is guaranteed to destroy the universe. Here\nagain, maybe some operations should be more protected than others, or\nmaybe the relatively safe things just shouldn't be subject to\nallow_system_table_mods at all.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:37:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jun 21, 2019 at 5:12 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > Any other thoughts?\n> \n> I kinda feel like we should prohibit DML on system catalogs, even by\n> superusers, unless you press the big red button that says \"I am\n> definitely sure that I know what I'm doing.\" Linking that with\n> allow_system_table_mods in some way seems natural, but I'm not totally\n> sure it's the right thing to do. I guess we could have\n> allow_system_table_mods={no,yes,yesyesyes}, the former allowing DML\n> and not-too-scary things and the latter allowing anything at all.\n\nI agree that we should be strongly discouraging even superusers from\ndoing DML or DDL on system catalogs, and making them jump through hoops\nto make it happen at all.\n\n> A related issue is that allow_system_table_mods prohibits both stuff\n> that's probably not going to cause any big problem and stuff that is\n> almost guaranteed to make the system permanently unusable - e.g. you\n> could 'SET STORAGE' on a system catalog column, which is really pretty\n> innocuous, or you could change the oid column of pg_database to a\n> varlena type, which is guaranteed to destroy the universe. 
Here\n> again, maybe some operations should be more protected than others, or\n> maybe the relatively safe things just shouldn't be subject to\n> allow_system_table_mods at all.\n\nIf there are things which are done through proper grammar (ALTER TABLE or\nsuch) and which will actually usefully work when done against a system\ncatalog table (eg: GRANT), then I'm all for just allowing that, provided\nthe regular security checks are done. I don't think we should ever\nallow DML though, or any DDL which we know will break the system,\nwithout making them go through hoops. Personally, I'd rather disallow\nall DDL on system catalogs and then explicitly add support for specific\nDDL when someone complains and has done a sufficient review to show that\nallowing that DDL is a good thing and will actually work as intended.\n\nThanks,\n\nStephen", "msg_date": "Fri, 21 Jun 2019 11:14:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I kinda feel like we should prohibit DML on system catalogs, even by\n> superusers, unless you press the big red button that says \"I am\n> definitely sure that I know what I'm doing.\"\n\nKeep in mind that DML-on-system-catalogs is unfortunately a really\nstandard hack in extension upgrade scripts. (If memory serves,\nsome of our contrib scripts do that, and we've certainly told third\nparties that it's the only way out of some box or other.) I don't\nthink we can just shut it off. 
I think we\nneed to be friendlier than that to extension authors who are, for the\nmost part, trying to work around some deficiency of ours not theirs.\n\nI'm not saying that DML-off-by-default is a bad goal to work toward;\nI'm just saying \"mind the collateral damage\".\n\n> A related issue is that alter_system_table_mods prohibits both stuff\n> that's probably not going to cause any big problem and stuff that is\n> almost guaranteed to make the system permanently unusable - e.g. you\n> could 'SET STORAGE' on a system catalog column, which is really pretty\n> innocuous, or you could change the oid column of pg_database to a\n> varlena type, which is guaranteed to destroy the universe. Here\n> again, maybe some operations should be more protected than others, or\n> maybe the relatively safe things just shouldn't be subject to\n> allow_system_table_mods at all.\n\nMeh. It doesn't really seem to me that distinguishing these cases\nis a productive use of code space or maintenance effort. Superusers\nare assumed to know what they're doing, and most especially so if\nthey've hit the big red button.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 12:28:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I kinda feel like we should prohibit DML on system catalogs, even by\n> > superusers, unless you press the big red button that says \"I am\n> > definitely sure that I know what I'm doing.\"\n> \n> Keep in mind that DML-on-system-catalogs is unfortunately a really\n> standard hack in extension upgrade scripts. (If memory serves,\n> some of our contrib scripts do that, and we've certainly told third\n> parties that it's the only way out of some box or other.) I don't\n> think we can just shut it off. 
What you seem to be proposing is to\n> allow it only after\n> \n> SET allow_system_table_mods = on;\n\nThat's basically what my feeling is, yes.\n\n> which would be all right except that an extension script containing\n> such a command will fail outright in existing releases. I think we\n> need to be friendlier than that to extension authors who are, for the\n> most part, trying to work around some deficiency of ours not theirs.\n\nAs with other cases where someone needs to do DML against the catalog\nfor some reason or another- we should fix that. If there's example\ncases, great! Let's look at those and come up with a proper solution.\n\nOther options include- letting an extension set that GUC (seems likely\nthat any case where this is needed is a case where the extension is\ninstalling C functions and therefore is being run by a superuser\nanyway...), or implicitly setting that GUC when we're running an\nextension's script (urrggghhhh... I don't care for that one bit, but I\nlike it better than letting any superuser who wishes UPDATE random bits\nin the catalog).\n\n> I'm not saying that DML-off-by-default is a bad goal to work toward;\n> I'm just saying \"mind the collateral damage\".\n\nSure, makes sense.\n\n> > A related issue is that alter_system_table_mods prohibits both stuff\n> > that's probably not going to cause any big problem and stuff that is\n> > almost guaranteed to make the system permanently unusable - e.g. you\n> > could 'SET STORAGE' on a system catalog column, which is really pretty\n> > innocuous, or you could change the oid column of pg_database to a\n> > varlena type, which is guaranteed to destroy the universe. Here\n> > again, maybe some operations should be more protected than others, or\n> > maybe the relatively safe things just shouldn't be subject to\n> > allow_system_table_mods at all.\n> \n> Meh. It doesn't really seem to me that distinguishing these cases\n> is a productive use of code space or maintenance effort. 
Superusers\n> are assumed to know what they're doing, and most especially so if\n> they've hit the big red button.\n\nThe direction I took the above was that we should actually be thinking\nabout if there are acceptable cases to be running DDL against the\ncatalog and, if so, specifically allow those. I'm not convinced at the\nmoment that any such exist and therefore I'd rather have it denied\n(unless you push the big red button) and then tell people to show us\ntheir use case and then we can decide if it's an 'ok' thing to allow, or\nwhat.\n\nI'd really like to stop the cases like stackoverflow articles that\ndescribe how to \"remove\" an enum value by simply modifying the catalog,\nor at least make them have to add a \"well, push this big red button\nfirst that the PG people tell you never to push, and then..\" to the\nstart.\n\nThanks,\n\nStephen", "msg_date": "Fri, 21 Jun 2019 12:58:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 2019-06-21 11:12:38 +0200, Peter Eisentraut wrote:\n> After the earlier thread [0] that dealt with ALTER TABLE on system\n> catalogs, I took a closer look at the allow_system_table_mods setting.\n> I found a few oddities, and it seems there is some room for improvement.\n\nI complained about this recently again, and unfortunately the reaction\nwasn't that welcoming:\nhttps://postgr.es/m/20190509145054.byiwa255xvdbfh3a%40alap3.anarazel.de\n\n> Attached are some patches to get the discussion rolling: One patch makes\n> allow_system_table_mods settable at run time by superuser\n\n+1 - this seems to have agreement.\n\n\n> - For the most part, a_s_t_m establishes an additional level of access\n> control on top of superuserdom for doing DDL on system catalogs. That\n> seems like a useful definition.\n>\n> - But enabling a_s_t_m also allows a non-superuser to do DML on system\n> catalogs. 
That seems like an entirely unrelated and surprising behavior.\n\nIndeed.\n\n\n> - Some checks are redundant with the pinning concept of the dependency\n> system. For example, you can't drop a system catalog even with a_s_t_m\n> on. That seems useful, of course, but as a result there is a bit of\n> dead or useless code around. (The dependency system is newer than a_s_t_m.)\n\nI'm not fond of deduplicating things around this. This seems like a\nseparate layers of defense to me.\n\n\n> - Having a test suite like this seems useful.\n\n+1\n\n\n> - The behavior that a_s_t_m allows non-superusers to do DML on system\n> catalogs should be removed. (Regular permissions can be used for that.)\n\n+1\n\n\n> - Dead code or code that is redundant with pinning should be removed.\n\n-1\n\n\n> Any other thoughts?\n\n* a_s_t_m=off should forbid modifying catalog tables, even for\n superusers.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:30:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Hi,\n\nOn 2019-06-21 12:28:43 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I kinda feel like we should prohibit DML on system catalogs, even by\n> > superusers, unless you press the big red button that says \"I am\n> > definitely sure that I know what I'm doing.\"\n> \n> Keep in mind that DML-on-system-catalogs is unfortunately a really\n> standard hack in extension upgrade scripts. (If memory serves,\n> some of our contrib scripts do that, and we've certainly told third\n> parties that it's the only way out of some box or other.) I don't\n> think we can just shut it off. What you seem to be proposing is to\n> allow it only after\n> \n> SET allow_system_table_mods = on;\n> \n> which would be all right except that an extension script containing\n> such a command will fail outright in existing releases. 
I think we\n> need to be friendlier than that to extension authors who are, for the\n> most part, trying to work around some deficiency of ours not theirs.\n\nI'm not quite convinced we need to go very far with compatibility here -\npretty much by definition scripts that do this are tied a lot more to\nthe internals than ones using DDL. But if we want to, we could just -\nfor now at least - set allow_system_table_mods to a new 'warn' - when\nprocessing extension scripts as superusers.\n\n\n> > A related issue is that alter_system_table_mods prohibits both stuff\n> > that's probably not going to cause any big problem and stuff that is\n> > almost guaranteed to make the system permanently unusable - e.g. you\n> > could 'SET STORAGE' on a system catalog column, which is really pretty\n> > innocuous, or you could change the oid column of pg_database to a\n> > varlena type, which is guaranteed to destroy the universe. Here\n> > again, maybe some operations should be more protected than others, or\n> > maybe the relatively safe things just shouldn't be subject to\n> > allow_system_table_mods at all.\n> \n> Meh. It doesn't really seem to me that distinguishing these cases\n> is a productive use of code space or maintenance effort. Superusers\n> are assumed to know what they're doing, and most especially so if\n> they've hit the big red button.\n\nI really don't buy this. You need superuser for nearly all CREATE\nEXTENSION invocations, and for a lot of other routine tasks. Making the\nnon-routine crazy stuff slightly harder is worthwhile. 
I don't think we\ncan really separate those two into fully separate roles unfortunately,\nbecause the routine CREATE EXTENSION stuff obviously can be used to\nelevate privs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:34:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-06-21 12:28:43 -0400, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > A related issue is that alter_system_table_mods prohibits both stuff\n> > > that's probably not going to cause any big problem and stuff that is\n> > > almost guaranteed to make the system permanently unusable - e.g. you\n> > > could 'SET STORAGE' on a system catalog column, which is really pretty\n> > > innocuous, or you could change the oid column of pg_database to a\n> > > varlena type, which is guaranteed to destroy the universe. Here\n> > > again, maybe some operations should be more protected than others, or\n> > > maybe the relatively safe things just shouldn't be subject to\n> > > allow_system_table_mods at all.\n> > \n> > Meh. It doesn't really seem to me that distinguishing these cases\n> > is a productive use of code space or maintenance effort. Superusers\n> > are assumed to know what they're doing, and most especially so if\n> > they've hit the big red button.\n> \n> I really don't buy this. You need superuser for nearly all CREATE\n> EXTENSION invocations, and for a lot of other routine tasks. Making the\n> non-routine crazy stuff slightly harder is worthwhile. 
I don't think we\n> can really separate those two into fully separate roles unfortunately,\n> because the routine CREATE EXTENSION stuff obviously can be used to\n> elevate privs.\n\nI'm not sure what you're intending to respond to here, but I don't think\nit's what was being discussed.\n\nThe question went something like this- if we decide that setting the\ndefault for relpages made sense in some use-case, should a superuser\nhave to hit the big red button to do:\n\nALTER TABLE pg_class SET DEFAULT relpages = 100; \n\nor should we just allow it?\n\nTom's opinion, if I followed it correctly, was 'no, that is rare enough\nthat it just is not worth the extra code to allow that without the big\nred button, but deny everything else.' My opinion was 'if they bring us\na legitimate use-case for such, then, sure, maybe we allow specifically\nthat without hitting the big red button.' Robert was suggesting that we\ncould have a tri-state for a_s_t_m, where you could hit the big red\nbutton only half-way, and certain things would then be allowed.\n\nAt least, I think that's about how it went. None of it was about doing\ntypical CREATE EXTENSION and similar routine tasks that don't involve\nrunning ALTER TABLE or DML against catalog tables.\n\nThanks,\n\nStephen", "msg_date": "Fri, 21 Jun 2019 13:54:45 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Keep in mind that DML-on-system-catalogs is unfortunately a really\n>> standard hack in extension upgrade scripts. (If memory serves,\n>> some of our contrib scripts do that, and we've certainly told third\n>> parties that it's the only way out of some box or other.)\n\n> As with other cases where someone needs to do DML against the catalog\n> for some reason or another- we should fix that. If there's example\n> cases, great!
Let's look at those and come up with a proper solution.\n\nAs I said, we've got examples. I traced the existing UPDATEs-on-catalogs\nin contrib scripts back to their origin commits, and found these:\n\n\ncommit a89b4b1be0d3550a7860250ff74dc69730555a1f\n Update citext extension for parallel query.\n\n This could have been done cleaner if we had ALTER AGGREGATE variants\n that would let us add an aggcombine function after the fact and mark\n the aggregate PARALLEL SAFE. Seems like a reasonable feature, but\n it still doesn't exist today, three years later.\n\ncommit 94be9e3f0ca9e7ced66168397eb586565bced9ca\n Fix citext's upgrade-from-unpackaged script to set its collation correctly.\ncommit 9b97b7f8356c63ea0b6704718d75ea01ec3035bf\n Fix citext upgrade script to update derived copies of pg_type.typcollation.\n\n The difficulty here was lack of ALTER TYPE to change a type's\n collation. We'd have to rethink the denormalized storage of\n typcollation in a lot of other places if we wanted to support that,\n but possibly it'd be worth it.\n\ncommit 749a787c5b25ae33b3d4da0ef12aa05214aa73c7\n Handle contrib's GIN/GIST support function signature changes honestly.\n\n This needed to change the declared argument types of some support\n functions, without having their OIDs change. No, I *don't* think\n it'd be a good idea to provide a DDL command to do that.\n\ncommit de1d042f5979bc1388e9a6d52a4d445342b04932\n Support index-only scans in contrib/cube and contrib/seg GiST indexes.\n\n \"The only exciting part of this is that ALTER OPERATOR FAMILY lacks\n a way to drop a support function that was declared as being part of\n an opclass rather than being loose in the family. For the moment,\n we'll hack our way to a solution with a manual update of the pg_depend\n entry type, which is what distinguishes the two cases. 
Perhaps\n someday it'll be worth providing a cleaner way to do that, but for\n now it seems like a very niche problem.\"\n\ncommit 0024e348989254d48dc4afe9beab98a6994a791e\n Fix upgrade of contrib/intarray and contrib/unaccent from 9.0.\ncommit 4eb49db7ae634fab9af7437b2e7b6388dfd83bd3\n Fix contrib/pg_trgm to have smoother updates from 9.0.\n\n More cases where we had to change the proargtypes of a pg_proc\n entry without letting its OID change.\n\ncommit 472f608e436a41865b795c999bda3369725fa097\n One more hack to make contrib upgrades from 9.0 match fresh 9.1 installs.\n\n Lack of a way to replace a support function in an existing opclass.\n It's possible this could be done better, but on the other hand, it'd\n be *really* hard to have an ALTER OPCLASS feature for that that would\n be even a little bit concurrency-safe.\n\n\nSo there's certainly some fraction of these cases where we could have\navoided doing manual catalog updates by expending work on some ALTER\ncommand instead. But I don't see much reason to think that we could,\nor should try to, insist that every such case be done that way. The\ncost/benefit ratio is not there in some cases, and in others, exposing\na DDL command to do it would just be providing easier access to\nsomething that's fundamentally unsafe anyway.\n\nThe change-proargtypes example actually brings up a larger point:\nexactly how is, say, screwing with the contents of the pg_class\nrow for a system catalog any safer than doing \"DDL\" on the catalog?\nI don't think we should fool ourselves that the one thing is\ninherently safer than the other.\n\nIn none of these cases are we ever going to be able to say \"that's\ngenerically safe\", or at least if we try, we're going to find that\ndistinguishing safe cases from unsafe requires unreasonable amounts\nof effort. 
I don't think it's a productive thing to spend time on.\nI don't mind having two separate \"allow_system_table_ddl\" and\n\"allow_system_table_dml\" flags, because it's easy to tell what each\nof those is supposed to enforce. But I'm going to run away screaming\nfrom any proposal to invent \"allow_safe_system_table_dml\". It's a\nrecipe for infinite security bugs and it's just not worth it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 14:52:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> So there's certainly some fraction of these cases where we could have\n> avoided doing manual catalog updates by expending work on some ALTER\n> command instead. But I don't see much reason to think that we could,\n> or should try to, insist that every such case be done that way. The\n> cost/benefit ratio is not there in some cases, and in others, exposing\n> a DDL command to do it would just be providing easier access to\n> something that's fundamentally unsafe anyway.\n\nIn the cases where we can do better by providing a DDL command, it's\ncertainly my opinion that we should go that route. I don't think we\nshould allow something that's fundamentally unsafe in that way- for\nthose cases though, how is the extension script making it 'safe'? If it\nsimply is hoping, well, that smells like a bug, and we probably should\ntry to avoid having that in our extensions as folks do like to copy\nthem.\n\nWhen it comes to cases that fundamentally are one-off's and that we\ndon't think really deserve a proper DDL command, then I'd say we make\nthe extensions set the flag. 
At least then it's clear \"hey, we had to\ndo something really grotty here, maybe don't copy this into your new\nextension, or don't use this method.\" We should also un-set the flag\nafter.\n\n> The change-proargtypes example actually brings up a larger point:\n> exactly how is, say, screwing with the contents of the pg_class\n> row for a system catalog any safer than doing \"DDL\" on the catalog?\n> I don't think we should fool ourselves that the one thing is\n> inherently safer than the other.\n\nI don't believe one to be safer than the other...\n\n> In none of these cases are we ever going to be able to say \"that's\n> generically safe\", or at least if we try, we're going to find that\n> distinguishing safe cases from unsafe requires unreasonable amounts\n> of effort. I don't think it's a productive thing to spend time on.\n> I don't mind having two separate \"allow_system_table_ddl\" and\n> \"allow_system_table_dml\" flags, because it's easy to tell what each\n> of those is supposed to enforce.\n\nWhich implies that it doesn't make sense to have two different flags\nfor it.\n\n> But I'm going to run away screaming\n> from any proposal to invent \"allow_safe_system_table_dml\". It's a\n> recipe for infinite security bugs and it's just not worth it.\n\nYeah, I'm not really a fan of that either.\n\nThanks,\n\nStephen", "msg_date": "Fri, 21 Jun 2019 15:07:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 6/21/19 3:07 PM, Stephen Frost wrote:\n> When it comes to cases that fundamentally are one-off's and that we\n> don't think really deserve a proper DDL command, then I'd say we make\n> the extensions set the flag. 
At least then it's clear \"hey, we had to\n> do something really grotty here, maybe don't copy this into your new\n> extension, or don't use this method.\" We should also un-set the flag\n> after.\n\nI'd be leery of collateral damage from that to extension update scripts\nin extension releases currently in the wild.\n\nMaybe there should be a new extension control file setting\n\nneeds_system_table_mods = (boolean)\n\nwhich means what it says if it's there, but if an ALTER EXTENSION\nUPDATE sees a control file that lacks the setting, assume true\n(with a warning?).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 21 Jun 2019 16:16:19 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I'd be leery of collateral damage from that to extension update scripts\n> in extension releases currently in the wild.\n\nYeah, that's my primary concern here.\n\n> Maybe there should be a new extension control file setting\n> needs_system_table_mods = (boolean)\n> which means what it says if it's there, but if an ALTER EXTENSION\n> UPDATE sees a control file that lacks the setting, assume true\n> (with a warning?).\n\nI think that having a SET command in the update script is the way to go;\nfor one thing it simplifies testing the script by just sourcing it,\nand for another it defaults to no-special-privileges which is the\nright default. Also, non-backward-compatible control files aren't\nany nicer than non-backward-compatible script files.\n\nWe do have to get past the compatibility issue though. My thought was\nthat for a period of N years we could force allow_system_table_dml = on\nwhile running extension scripts, and then cease doing so. This would\ngive extension authors a reasonable window in which their scripts would\nwork in either old or new backends. 
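For concreteness, a script written after that window closes might look something like this — a sketch only: "allow_system_table_dml" is merely the name proposed in this thread (today's released servers only have the boolean allow_system_table_mods), the extension and type names are invented, and the UPDATE imitates the citext typcollation repair from the commit list upthread:

```sql
-- myext--1.1--1.2.sql  (hypothetical extension update script)

SET allow_system_table_dml = on;      -- proposed GUC, not in any release

-- Catalog surgery that has no DDL equivalent, e.g. fixing up a type's
-- collation (OID 100 is the built-in "default" collation):
UPDATE pg_catalog.pg_type
   SET typcollation = 100
 WHERE typname = 'myext_texty_type';

RESET allow_system_table_dml;
```

During the N-year window the script would simply omit the SET/RESET pair and rely on the server forcing the flag on; only once servers lacking the GUC are out of support could the explicit SET be added without breaking them.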
At some point in that time they'd\nprobably have occasion to make other changes that render their scripts\nnot backwards compatible, at which point they can insert \"SET\nallow_system_table_dml = on\" so that the script keeps working when we\nremove the compatibility hack.\n\n(Of course, we have an awful track record about actually doing things\nN years after we said we would, but doesn't anybody around here have\na calendar app?)\n\nThis line of thought leads to the conclusion that we do want\nseparate \"allow_system_table_dml\" and \"allow_system_table_ddl\"\nbools. Otherwise, the backwards-compatibility hack would need\nto turn on a level of unsafety that extension scripts have *not*\nhad before and surely shouldn't have by default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 16:37:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 6/21/19 4:37 PM, Tom Lane wrote:\n> We do have to get past the compatibility issue though. My thought was\n> that for a period of N years we could force allow_system_table_dml = on\n> while running extension scripts, and then cease doing so. This would\n> give extension authors a reasonable window in which their scripts would\n> work in either old or new backends. 
At some point in that time they'd\n> probably have occasion to make other changes that render their scripts\n> not backwards compatible, at which point they can insert \"SET\n> allow_system_table_dml = on\" so that the script keeps working when we\n> remove the compatibility hack.\n\nI was having second thoughts too, like maybe to tweak ALTER EXTENSION\nUPDATE to unconditionally force the flag on before the update script and\nreset it after, but warn if it is actually still set at the reset-after\npoint.\n\nExtension maintainers could then make the warning go away by releasing\nversions where the update scripts contain an explicit RESET (at the very\ntop, if they do nothing fancy), or a(n initially redundant) SET at the\ntop and RESET at the bottom. No new control file syntax.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 21 Jun 2019 16:43:33 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Hi,\n\nOn 2019-06-21 16:37:16 -0400, Tom Lane wrote:\n> We do have to get past the compatibility issue though. My thought was\n> that for a period of N years we could force allow_system_table_dml = on\n> while running extension scripts, and then cease doing so. This would\n> give extension authors a reasonable window in which their scripts would\n> work in either old or new backends. At some point in that time they'd\n> probably have occasion to make other changes that render their scripts\n> not backwards compatible, at which point they can insert \"SET\n> allow_system_table_dml = on\" so that the script keeps working when we\n> remove the compatibility hack.\n\nI'd modify this approach by having an allow_system_table_dml level that\nwarns when DML to system tables is performed, and then set\nallow_system_table_dml to that when processing extension scripts (and\nperhaps modify the warning message when creating_extension ==\ntrue).
That way it'd be easier to spot such extension scripts.\n\nAnd I'd personally probably just set allow_system_table_dml to warn when\nworking interactively, just to improve logging etc.\n\n\n> This line of thought leads to the conclusion that we do want\n> separate \"allow_system_table_dml\" and \"allow_system_table_ddl\"\n> bools. Otherwise, the backwards-compatibility hack would need\n> to turn on a level of unsafety that extension scripts have *not*\n> had before and surely shouldn't have by default.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 Jun 2019 14:02:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On Fri, Jun 21, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This line of thought leads to the conclusion that we do want\n> separate \"allow_system_table_dml\" and \"allow_system_table_ddl\"\n> bools. Otherwise, the backwards-compatibility hack would need\n> to turn on a level of unsafety that extension scripts have *not*\n> had before and surely shouldn't have by default.\n\nRight, exactly.\n\nI'm repeating myself, but I still think it's super-useful to\ndistinguish things which are \"for expert use only\" from things which\nare \"totally bonkers.\" You can argue that if you're an expert, you\nshould know enough to avoid the totally bonkers things, but PostgreSQL\nis pretty popular these days [citation needed] and there are a lot of\npeople administering databases who know what they are doing to a\npretty reasonable degree but don't have anywhere near the level of\nunderstanding of someone who spends their days hacking core. 
Putting\nup some kind of a stop sign that lets you know when you're about to go\nfrom adventurous to lethal will help those people.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2019 10:07:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 21, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This line of thought leads to the conclusion that we do want\n>> separate \"allow_system_table_dml\" and \"allow_system_table_ddl\"\n>> bools. Otherwise, the backwards-compatibility hack would need\n>> to turn on a level of unsafety that extension scripts have *not*\n>> had before and surely shouldn't have by default.\n\n> Right, exactly.\n\n> I'm repeating myself, but I still think it's super-useful to\n> distinguish things which are \"for expert use only\" from things which\n> are \"totally bonkers.\"\n\nAgreed, although \"DML vs DDL\" is a pretty poor approximation of that\nboundary. As shown in examples upthread, you can find reasonable things\nto do and totally-catastrophic things to do in both categories.\n\nThe position I'm maintaining is that it's not worth our trouble to try to\nmechanically distinguish which things are which. Once you've broken the\nglass and flipped either the big red switch or the slightly smaller orange\nswitch, it's entirely on you to not screw up your database beyond\nrecovery.\n\nI do see value in two switches not one, but it's what I said above,\nto not need to give people *more* chance-to-break-things than they\nhad before when doing manual catalog fixes. 
That is, we need a\nsetting that corresponds more or less to current default behavior.\n\nThere's an aesthetic argument to be had about whether to have two\nbools or one three-way switch, but I prefer the former; there's\nno backward-compatibility issue here since allow_system_table_mods\ncouldn't be set by applications anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:20:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On Mon, Jun 24, 2019 at 11:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm repeating myself, but I still think it's super-useful to\n> > distinguish things which are \"for expert use only\" from things which\n> > are \"totally bonkers.\"\n>\n> Agreed, although \"DML vs DDL\" is a pretty poor approximation of that\n> boundary. As shown in examples upthread, you can find reasonable things\n> to do and totally-catastrophic things to do in both categories.\n\nI agree. 
I would like it if there were a way to do better, but I'm\nnot sure that there is, at least for a reasonable level of effort.\n\n> There's an aesthetic argument to be had about whether to have two\n> bools or one three-way switch, but I prefer the former; there's\n> no backward-compatibility issue here since allow_system_table_mods\n> couldn't be set by applications anyway.\n\nI'm happy to defer on that point.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:24:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Here is a new patch after the discussion.\n\n- Rename allow_system_table_mods to allow_system_table_ddl.\n\n(This makes room for a new allow_system_table_dml, but it's not\nimplemented here.)\n\n- Make allow_system_table_ddl SUSET.\n\n- Add regression test.\n\n- Remove the behavior that allow_system_table_mods allowed\nnon-superusers to do DML on catalog tables without further access checking.\n\nI think there was general agreement on all these points.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Jun 2019 11:51:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On Mon, Jun 24, 2019 at 11:20:51AM -0400, Tom Lane wrote:\n> I do see value in two switches not one, but it's what I said above,\n> to not need to give people *more* chance-to-break-things than they\n> had before when doing manual catalog fixes. That is, we need a\n> setting that corresponds more or less to current default behavior.\n> \n> There's an aesthetic argument to be had about whether to have two\n> bools or one three-way switch, but I prefer the former; there's\n> no backward-compatibility issue here since allow_system_table_mods\n> couldn't be set by applications anyway.\n\nI like a single three-way switch since if you are allowing DDL, you\nprobably don't care if you restrict DML. log_statement already has a\nsimilar distinction with values of none, ddl, mod, all. I assume\nallow_system_table_mods could have value of false, dml, true.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sun, 7 Jul 2019 23:45:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On Sun, Jul 7, 2019 at 11:45:49PM -0400, Bruce Momjian wrote:\n> On Mon, Jun 24, 2019 at 11:20:51AM -0400, Tom Lane wrote:\n> > I do see value in two switches not one, but it's what I said above,\n> > to not need to give people *more* chance-to-break-things than they\n> > had before when doing manual catalog fixes. That is, we need a\n> > setting that corresponds more or less to current default behavior.\n> > \n> > There's an aesthetic argument to be had about whether to have two\n> > bools or one three-way switch, but I prefer the former; there's\n> > no backward-compatibility issue here since allow_system_table_mods\n> > couldn't be set by applications anyway.\n> \n> I like a single three-way switch since if you are allowing DDL, you\n> probably don't care if you restrict DML. log_statement already has a\n> similar distinction with values of none, ddl, mod, all. I assume\n> allow_system_table_mods could have value of false, dml, true.\n\nOr, to match log_statement, use: none, dml, all.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Jul 2019 10:21:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 2019-Jun-28, Peter Eisentraut wrote:\n\n> Here is a new patch after the discussion.\n> \n> - Rename allow_system_table_mods to allow_system_table_ddl.\n> \n> (This makes room for a new allow_system_table_dml, but it's not\n> implemented here.)\n> \n> - Make allow_system_table_ddl SUSET.\n> \n> - Add regression test.\n> \n> - Remove the behavior that allow_system_table_mods allowed\n> non-superusers to do DML on catalog tables without further access checking.\n> \n> I think there was general agreement on all these points.\n\nI think this patch is at a point where it merits closer review from\nfellow committers, so I marked it RfC for now. I hope non-committers\nwould also look at it some more, though.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Sep 2019 18:39:40 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On Fri, Sep 13, 2019 at 06:39:40PM -0300, Alvaro Herrera wrote:\n> I think this patch is at a point where it merits closer review from\n> fellow committers, so I marked it RfC for now. I hope non-committers\n> would also look at it some more, though.\n\nI guess so. The patch has conflicts in the serial and parallel\nschedules, so I have moved it to next CF, waiting on author for a\nrebase.\n\nPeter, are you planning to look at that again? Note: the patch has no\nreviewers registered.\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 17:26:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 2019-11-27 09:26, Michael Paquier wrote:\n> On Fri, Sep 13, 2019 at 06:39:40PM -0300, Alvaro Herrera wrote:\n>> I think this patch is at a point where it merits closer review from\n>> fellow committers, so I marked it RfC for now. I hope non-committers\n>> would also look at it some more, though.\n> \n> I guess so. The patch has conflicts in the serial and parallel\n> schedules, so I have moved it to next CF, waiting on author for a\n> rebase.\n> \n> Peter, are you planning to look at that again? Note: the patch has no\n> reviewers registered.\n\nHere is an updated patch series.\n\nAfter re-reading the discussion again, I have kept the existing name of \nthe option. I have also moved the tests to the \"unsafe_tests\" suite, \nwhich seems like a better place. And I have split the patch into three.\n\nOther than those cosmetic changes, I think everything here has been \ndiscussed and agreed to, so unless anyone expresses any concern or a \nwish to do more review, I think this is ready to commit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 27 Nov 2019 17:01:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-27 09:26, Michael Paquier wrote:\n>> Peter, are you planning to look at that again? Note: the patch has no\n>> reviewers registered.\n\n> Here is an updated patch series.\n\n> After re-reading the discussion again, I have kept the existing name of \n> the option. I have also moved the tests to the \"unsafe_tests\" suite, \n> which seems like a better place. And I have split the patch into three.\n\nPersonally I'd have gone with the renaming to allow_system_table_ddl,\nbut it's not a huge point. Updating the code to agree with that\nnaming would make the patch much more invasive, so maybe it's not\nworth it.\n\n> Other than those cosmetic changes, I think everything here has been \n> discussed and agreed to, so unless anyone expresses any concern or a \n> wish to do more review, I think this is ready to commit.\n\nI read through the patch set and have just one quibble: in the\nproposed new docs,\n\n+ Allows modification of the structure of system tables as well as\n+ certain other risky actions on system tables. This is otherwise not\n+ allowed even for superusers. This is used by\n+ <command>initdb</command>. Inconsiderate use of this setting can\n+ cause irretrievable data loss or seriously corrupt the database\n+ system. Only superusers can change this setting.\n\n\"Inconsiderate\" doesn't seem like le mot juste. Maybe \"Ill-advised\"?\n\n(I'm also wondering whether the sentence about initdb is worth keeping.)\n\nI marked the CF entry RFC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Nov 2019 11:11:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow_system_table_mods stuff" }, { "msg_contents": "On 2019-11-28 17:11, Tom Lane wrote:\n> I read through the patch set and have just one quibble: in the\n> proposed new docs,\n> \n> + Allows modification of the structure of system tables as well as\n> + certain other risky actions on system tables. This is otherwise not\n> + allowed even for superusers. This is used by\n> + <command>initdb</command>. Inconsiderate use of this setting can\n> + cause irretrievable data loss or seriously corrupt the database\n> + system. Only superusers can change this setting.\n> \n> \"Inconsiderate\" doesn't seem like le mot juste. Maybe \"Ill-advised\"?\n> \n> (I'm also wondering whether the sentence about initdb is worth keeping.)\n\ncommitted with those adjustments\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 Nov 2019 10:30:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow_system_table_mods stuff" } ]
[ { "msg_contents": "Hello,\nI am Mahesh S Nair from India. I am a GSoC 2018 student at KDE. I am very\nmuch interested in working with the organization for Google Season of Docs\n2019. I think I am capable for, an open-source technical writing work as I\nhave done GSoC and was a GCI mentor also.\nI am actually new to this community, but I have used PostgreSQL quite often\nin for my academics.\n\n I have started to read more on the topics too. I wish to draft a\nproposal(1st draft) for GSoD as soon as possible.\n\nPlease let me know about the opportunities.\n\nThank You\n\n-- \nThank you from\nMahesh S Nair\nAmrita University <http://amrita.edu>", "msg_date": "Fri, 21 Jun 2019 18:58:19 +0530", "msg_from": "Mahesh S <mahesh6947foss@gmail.com>", "msg_from_op": true, "msg_subject": "Google Season of Docs" } ]
[ { "msg_contents": "Hackers,\n\nWhile investigating \"Too many open files\" errors reported in our\nparallel restore_command I noticed that the restore_command can inherit\nquite a lot of fds from the recovery process. This limits the number of\nfds available in the restore_command depending on the setting of system\nnofile and Postgres max_files_per_process.\n\nI was wondering if we should consider closing these fds before calling\nrestore_command? It seems like we could do this by forking first or by\nsetting FD_CLOEXEC using fcntl() or O_CLOEXEC on open() where available.\n\nThoughts on this? Is this something we want to change or should I just\nrecommend that users set nofile and max_files_per_process appropriately?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 21 Jun 2019 09:37:59 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "File descriptors inherited by restore_command" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> While investigating \"Too many open files\" errors reported in our\n> parallel restore_command I noticed that the restore_command can inherit\n> quite a lot of fds from the recovery process. This limits the number of\n> fds available in the restore_command depending on the setting of system\n> nofile and Postgres max_files_per_process.\n\nHm. Presumably you could hit the same issue with things like COPY FROM\nPROGRAM. And the only reason the archiver doesn't hit it is it never\nopens many files to begin with.\n\n> I was wondering if we should consider closing these fds before calling\n> restore_command? It seems like we could do this by forking first or by\n> setting FD_CLOEXEC using fcntl() or O_CLOEXEC on open() where available.\n\n+1 for using O_CLOEXEC on machines that have it. I don't think I want to\njump through hoops for machines that don't have it --- POSIX has required\nit for some time, so there should be few machines in that category.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 09:45:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File descriptors inherited by restore_command" }, { "msg_contents": "On 6/21/19 9:45 AM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> While investigating \"Too many open files\" errors reported in our\n>> parallel restore_command I noticed that the restore_command can inherit\n>> quite a lot of fds from the recovery process. This limits the number of\n>> fds available in the restore_command depending on the setting of system\n>> nofile and Postgres max_files_per_process.\n> \n> Hm. Presumably you could hit the same issue with things like COPY FROM\n> PROGRAM. And the only reason the archiver doesn't hit it is it never\n> opens many files to begin with.\n\nYes. The archiver process is fine because it has ~8 fds open.\n\n>> I was wondering if we should consider closing these fds before calling\n>> restore_command? It seems like we could do this by forking first or by\n>> setting FD_CLOEXEC using fcntl() or O_CLOEXEC on open() where available.\n> \n> +1 for using O_CLOEXEC on machines that have it. I don't think I want to\n> jump through hoops for machines that don't have it --- POSIX has required\n> it for some time, so there should be few machines in that category.\n\nAnother possible issue is that if we allow a child process to inherit\nall these fds it might accidentally write to them, which would be bad.\nI know the child process can go and maliciously open and trash files if\nit wants, but it doesn't seem like we should allow it to happen\nunintentionally.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:09:19 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: File descriptors inherited by restore_command" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 6/21/19 9:45 AM, Tom Lane wrote:\n>> +1 for using O_CLOEXEC on machines that have it. I don't think I want to\n>> jump through hoops for machines that don't have it --- POSIX has required\n>> it for some time, so there should be few machines in that category.\n\n> Another possible issue is that if we allow a child process to inherit\n> all these fds it might accidentally write to them, which would be bad.\n> I know the child process can go and maliciously open and trash files if\n> it wants, but it doesn't seem like we should allow it to happen\n> unintentionally.\n\nTrue. But I don't want to think of this as a security issue, because\nthen it becomes a security bug to forget O_CLOEXEC anywhere in the\nbackend, and that is a standard we cannot meet. (Even if we could\nhold to it for the core code, stuff like libperl and libpython can't\nbe relied on to play ball.) In practice, as long as we use O_CLOEXEC\nfor files opened by fd.c, that would eliminate the actual too-many-fds\nhazard. I don't object to desultorily looking around for other places\nwhere we might want to add it, but personally I'd be satisfied with a\npatch that CLOEXEC-ifies fd.c.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:23:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File descriptors inherited by restore_command" }, { "msg_contents": "I wrote:\n> In practice, as long as we use O_CLOEXEC\n> for files opened by fd.c, that would eliminate the actual too-many-fds\n> hazard. I don't object to desultorily looking around for other places\n> where we might want to add it, but personally I'd be satisfied with a\n> patch that CLOEXEC-ifies fd.c.\n\nActually, even that much coverage might be exciting. Be sure to test\npatch with EXEC_BACKEND to see if it causes zapping of any files the\npostmaster needs to pass down to backends.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2019 10:24:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File descriptors inherited by restore_command" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> David Steele <david@pgmasters.net> writes:\n> > On 6/21/19 9:45 AM, Tom Lane wrote:\n> >> +1 for using O_CLOEXEC on machines that have it. I don't think I want to\n> >> jump through hoops for machines that don't have it --- POSIX has required\n> >> it for some time, so there should be few machines in that category.\n> \n> > Another possible issue is that if we allow a child process to inherit\n> > all these fds it might accidentally write to them, which would be bad.\n> > I know the child process can go and maliciously open and trash files if\n> > it wants, but it doesn't seem like we should allow it to happen\n> > unintentionally.\n> \n> True. But I don't want to think of this as a security issue, because\n> then it becomes a security bug to forget O_CLOEXEC anywhere in the\n> backend, and that is a standard we cannot meet. (Even if we could\n> hold to it for the core code, stuff like libperl and libpython can't\n> be relied on to play ball.) In practice, as long as we use O_CLOEXEC\n> for files opened by fd.c, that would eliminate the actual too-many-fds\n> hazard. I don't object to desultorily looking around for other places\n> where we might want to add it, but personally I'd be satisfied with a\n> patch that CLOEXEC-ifies fd.c.\n\nAgreed, it's not a security issue, and also agreed that we should\nprobably get it done with fd.c right off, and then if someone wants to\nthink about other places where it might be good to do then more power to\nthem and it seems like we'd be happy to accept such patches.\n\nThanks,\n\nStephen", "msg_date": "Fri, 21 Jun 2019 10:26:50 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: File descriptors inherited by restore_command" }, { "msg_contents": "On 6/21/19 10:26 AM, Stephen Frost wrote:\n>>\n>>> Another possible issue is that if we allow a child process to inherit\n>>> all these fds it might accidentally write to them, which would be bad.\n>>> I know the child process can go and maliciously open and trash files if\n>>> it wants, but it doesn't seem like we should allow it to happen\n>>> unintentionally.\n>>\n>> True. But I don't want to think of this as a security issue, because\n>> then it becomes a security bug to forget O_CLOEXEC anywhere in the\n>> backend, and that is a standard we cannot meet. (Even if we could\n>> hold to it for the core code, stuff like libperl and libpython can't\n>> be relied on to play ball.) In practice, as long as we use O_CLOEXEC\n>> for files opened by fd.c, that would eliminate the actual too-many-fds\n>> hazard. I don't object to desultorily looking around for other places\n>> where we might want to add it, but personally I'd be satisfied with a\n>> patch that CLOEXEC-ifies fd.c.\n> \n> Agreed, it's not a security issue, and also agreed that we should\n> probably get it done with fd.c right off, and then if someone wants to\n> think about other places where it might be good to do then more power to\n> them and it seems like we'd be happy to accept such patches.\n\nI agree this is not a security issue and I wasn't intending to present\nit that way, but in general the more fds closed the better.\n\nI'll work up a patch for fd.c which is the obvious win and we can work\nfrom there if it makes sense. I'll be sure to test EXEC_BACKEND on\nLinux but I don't think it will matter on Windows. cfbot may feel\ndifferently, though.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Fri, 21 Jun 2019 16:03:41 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: File descriptors inherited by restore_command" } ]
[ { "msg_contents": "Hello\n\nHere's a patch that implements progress reporting for ANALYZE.\n\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 21 Jun 2019 14:52:07 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "progress report for ANALYZE" }, { "msg_contents": "On Fri, Jun 21, 2019 at 8:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Here's a patch that implements progress reporting for ANALYZE.\n\nPatch applies, code and doc and compiles cleanly. I have few comments:\n\n@@ -512,7 +529,18 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n if (numrows > 0)\n {\n MemoryContext col_context,\n- old_context;\n+ old_context;\n+ const int index[] = {\n+ PROGRESS_ANALYZE_PHASE,\n+ PROGRESS_ANALYZE_TOTAL_BLOCKS,\n+ PROGRESS_ANALYZE_BLOCKS_DONE\n+ };\n+ const int64 val[] = {\n+ PROGRESS_ANALYZE_PHASE_ANALYSIS,\n+ 0, 0\n+ };\n+\n+ pgstat_progress_update_multi_param(3, index, val);\n[...]\n }\n+ pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,\n+ PROGRESS_ANALYZE_PHASE_COMPLETE);\n+\nIf there wasn't any row but multiple blocks were scanned, the\nPROGRESS_ANALYZE_PHASE_COMPLETE will still show the informations about\nthe blocks that were scanned. I'm not sure if we should stay\nconsistent here.\n\ndiff --git a/src/backend/utils/adt/pgstatfuncs.c\nb/src/backend/utils/adt/pgstatfuncs.c\nindex 05240bfd14..98b01e54fa 100644\n--- a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -469,6 +469,8 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n /* Translate command name into command type code. */\n if (pg_strcasecmp(cmd, \"VACUUM\") == 0)\n cmdtype = PROGRESS_COMMAND_VACUUM;\n+ if (pg_strcasecmp(cmd, \"ANALYZE\") == 0)\n+ cmdtype = PROGRESS_COMMAND_ANALYZE;\n else if (pg_strcasecmp(cmd, \"CLUSTER\") == 0)\n cmdtype = PROGRESS_COMMAND_CLUSTER;\n else if (pg_strcasecmp(cmd, \"CREATE INDEX\") == 0)\n\nit should be an \"else if\" here.\n\nEverything else LGTM.\n", "msg_date": "Tue, 2 Jul 2019 15:22:42 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi,\nIn monitoring.sgml, \"a\" is missing in \"row for ech backend that is\ncurrently running that command[...]\".\nAnthony\n\n\nOn Tuesday, July 2, 2019, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Jun 21, 2019 at 8:52 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n>>\n>> Here's a patch that implements progress reporting for ANALYZE.\n>\n> Patch applies, code and doc and compiles cleanly. I have few comments:\n>\n> @@ -512,7 +529,18 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n> if (numrows > 0)\n> {\n> MemoryContext col_context,\n> - old_context;\n> + old_context;\n> + const int index[] = {\n> + PROGRESS_ANALYZE_PHASE,\n> + PROGRESS_ANALYZE_TOTAL_BLOCKS,\n> + PROGRESS_ANALYZE_BLOCKS_DONE\n> + };\n> + const int64 val[] = {\n> + PROGRESS_ANALYZE_PHASE_ANALYSIS,\n> + 0, 0\n> + };\n> +\n> + pgstat_progress_update_multi_param(3, index, val);\n> [...]\n> }\n> + pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,\n> + PROGRESS_ANALYZE_PHASE_COMPLETE);\n> +\n> If there wasn't any row but multiple blocks were scanned, the\n> PROGRESS_ANALYZE_PHASE_COMPLETE will still show the informations about\n> the blocks that were scanned. I'm not sure if we should stay\n> consistent here.\n>\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c\n> b/src/backend/utils/adt/pgstatfuncs.c\n> index 05240bfd14..98b01e54fa 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -469,6 +469,8 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n> /* Translate command name into command type code. */\n> if (pg_strcasecmp(cmd, \"VACUUM\") == 0)\n> cmdtype = PROGRESS_COMMAND_VACUUM;\n> + if (pg_strcasecmp(cmd, \"ANALYZE\") == 0)\n> + cmdtype = PROGRESS_COMMAND_ANALYZE;\n> else if (pg_strcasecmp(cmd, \"CLUSTER\") == 0)\n> cmdtype = PROGRESS_COMMAND_CLUSTER;\n> else if (pg_strcasecmp(cmd, \"CREATE INDEX\") == 0)\n>\n> it should be an \"else if\" here.\n>\n> Everything else LGTM.\n>\n>\n>\n", "msg_date": "Tue, 2 Jul 2019 16:43:20 +0200", "msg_from": "Anthony Nowocien <anowocien@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro!\n\nOn 2019/06/22 3:52, Alvaro Herrera wrote:\n> Hello\n> \n> Here's a patch that implements progress reporting for ANALYZE.\n\nSorry for the late reply.\nMy email address was changed to tatsuro.yamada.tf@nttcom.co.jp.\n\nI have a question about your patch.\nMy ex-colleague Vinayak created same patch in 2017 [1], and he\ncouldn't get commit because there are some reasons such as the\npatch couldn't handle analyzing Foreign table. Therefore, I wonder\nwhether your patch is able to do that or not.\n\nHowever, actually, I think it's okay because the feature is useful\nfor DBAs, even if your patch can't handle Foreign table.\n\nI'll review your patch in this week. :)\n \n[1] ANALYZE command progress checker\nhttps://www.postgresql.org/message-id/flat/968b4eda-2417-8b7b-d468-71643cf088b6%40openscg.com#574488592fcc9708c38fa44b0dae9006\n\nRegards,\nTatsuro Yamada\n\n\n", "msg_date": "Wed, 3 Jul 2019 13:05:24 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro,\n\n> I'll review your patch in this week. :)\n\nI tested your patch on 6b854896.\nHere is a result.
See below:\n\n---------------------------------------------------------\n[Session #1]\ncreate table hoge as select * from generate_series(1, 1000000) a;\nanalyze verbose hoge;\n\n[Session #2]\n\\a\n\\t\nselect * from pg_stat_progress_analyze; \\watch 0.001\n\n17520|13599|postgres|16387|f|16387|scanning table|4425|14\n17520|13599|postgres|16387|f|16387|scanning table|4425|64\n17520|13599|postgres|16387|f|16387|scanning table|4425|111\n...\n17520|13599|postgres|16387|f|16387|scanning table|4425|4425\n17520|13599|postgres|16387|f|16387|scanning table|4425|4425\n17520|13599|postgres|16387|f|16387|scanning table|4425|4425\n17520|13599|postgres|16387|f|16387|analyzing sample|0|0\n17520|13599|postgres|16387|f|16387||0|0 <-- Is it Okay??\n---------------------------------------------------------\n\nI have a question of the last line of the result.\nI'm not sure it is able or not, but I guess it would be better\nto keep the phase name (analyzing sample) on the view until the\nend of this command. :)\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2019 18:28:52 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Mon, Jul 8, 2019 at 5:29 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> 17520|13599|postgres|16387|f|16387|scanning table|4425|4425\n> 17520|13599|postgres|16387|f|16387|analyzing sample|0|0\n> 17520|13599|postgres|16387|f|16387||0|0 <-- Is it Okay??\n\nWhy do we zero out the block numbers when we switch phases? 
The\nCREATE INDEX progress reporting patch does that kind of thing too, and\nit seems like poor behavior to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2019 14:10:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Jul-08, Robert Haas wrote:\n\n> On Mon, Jul 8, 2019 at 5:29 AM Tatsuro Yamada\n> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> > 17520|13599|postgres|16387|f|16387|scanning table|4425|4425\n> > 17520|13599|postgres|16387|f|16387|analyzing sample|0|0\n> > 17520|13599|postgres|16387|f|16387||0|0 <-- Is it Okay??\n> \n> Why do we zero out the block numbers when we switch phases? The\n> CREATE INDEX progress reporting patch does that kind of thing too, and\n> it seems like poor behavior to me.\n\nYeah, I got the impression that that was determined to be the desirable\nbehavior, so I made it do that, but I'm not really happy about it\neither. We're not too late to change the CREATE INDEX behavior, but\nlet's discuss what is it that we want.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jul 2019 14:18:45 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Yeah, I got the impression that that was determined to be the desirable\n> behavior, so I made it do that, but I'm not really happy about it\n> either. 
We're not too late to change the CREATE INDEX behavior, but\n> let's discuss what is it that we want.\n\nI don't think I intended to make any such determination -- which\ncommit do you think established this as the canonical behavior?\n\nI propose that once a field is set, we should leave it set until the end.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2019 14:44:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Mon, Jul 8, 2019 at 8:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Yeah, I got the impression that that was determined to be the desirable\n> > behavior, so I made it do that, but I'm not really happy about it\n> > either. We're not too late to change the CREATE INDEX behavior, but\n> > let's discuss what is it that we want.\n>\n> I don't think I intended to make any such determination -- which\n> commit do you think established this as the canonical behavior?\n>\n> I propose that once a field is set, we should leave it set until the end.\n\n+1\n\nNote that this patch is already behaving like that if the table only\ncontains dead rows.\n\n\n", "msg_date": "Mon, 8 Jul 2019 20:47:01 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro, Anthony, Julien and Robert,\n\nOn 2019/07/09 3:47, Julien Rouhaud wrote:\n> On Mon, Jul 8, 2019 at 8:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> Yeah, I got the impression that that was determined to be the desirable\n>>> behavior, so I made it do that, but I'm not really happy about it\n>>> either. 
We're not too late to change the CREATE INDEX behavior, but\n>>> let's discuss what is it that we want.\n>>\n>> I don't think I intended to make any such determination -- which\n>> commit do you think established this as the canonical behavior?\n>>\n>> I propose that once a field is set, we should leave it set until the end.\n> \n> +1\n> \n> Note that this patch is already behaving like that if the table only\n> contains dead rows.\n\n\nI fixed the patch including:\n\n - Replace \"if\" to \"else if\". (Suggested by Julien)\n - Fix typo s/ech/each/. (Suggested by Anthony)\n - Add Phase \"analyzing complete\" in the pgstat view. (Suggested by Julien, Robert and me)\n It was overlooked to add it in system_views.sql.\n\nI share my re-test result, see below:\n\n---------------------------------------------------------\n[Session #1]\ncreate table hoge as select * from generate_series(1, 1000000) a;\nanalyze verbose hoge;\n\n[Session #2]\n\\a \\t\nselect * from pg_stat_progress_analyze; \\watch 0.001\n\n3785|13599|postgres|16384|f|16384|scanning table|4425|6\n3785|13599|postgres|16384|f|16384|scanning table|4425|31\n3785|13599|postgres|16384|f|16384|scanning table|4425|70\n3785|13599|postgres|16384|f|16384|scanning table|4425|109\n...\n3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n3785|13599|postgres|16384|f|16384|analyzing sample|0|0\n3785|13599|postgres|16384|f|16384|analyzing complete|0|0 <-- Added and fixed. 
:)\n---------------------------------------------------------\n\nThanks,\nTatsuro Yamada", "msg_date": "Tue, 9 Jul 2019 17:38:44 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Jul-08, Robert Haas wrote:\n\n> On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Yeah, I got the impression that that was determined to be the desirable\n> > behavior, so I made it do that, but I'm not really happy about it\n> > either. We're not too late to change the CREATE INDEX behavior, but\n> > let's discuss what is it that we want.\n> \n> I don't think I intended to make any such determination -- which\n> commit do you think established this as the canonical behavior?\n\nNo commit, just discussion in the CREATE INDEX thread.\n\n> I propose that once a field is set, we should leave it set until the end.\n\nHmm, ok. In CREATE INDEX, we use the block counters multiple times. We\ncan leave them set until the next time we need them, I suppose.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jul 2019 18:12:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Tue, Jul 9, 2019 at 6:12 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Hmm, ok.
> In CREATE INDEX, we use the block counters multiple times.\n\nWhy do we do that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Jul 2019 09:23:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Jul-10, Robert Haas wrote:\n\n> On Tue, Jul 9, 2019 at 6:12 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Hmm, ok. In CREATE INDEX, we use the block counters multiple times.\n> \n> Why do we do that?\n\nBecause we scan the table first, then the index, then the table again\n(last two for the validation phase of CIC). We count \"block numbers\"\nseparately for each of those, and keep those counters in the same pair\nof columns. I think we also do that for tuple counters in one case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Jul 2019 09:26:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Wed, Jul 10, 2019 at 9:26 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jul-10, Robert Haas wrote:\n> > On Tue, Jul 9, 2019 at 6:12 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > Hmm, ok. In CREATE INDEX, we use the block counters multiple times.\n> >\n> > Why do we do that?\n>\n> Because we scan the table first, then the index, then the table again\n> (last two for the validation phase of CIC). We count \"block numbers\"\n> separately for each of those, and keep those counters in the same pair\n> of columns. I think we also do that for tuple counters in one case.\n\nHmm. I think I would have been inclined to use different counter\nnumbers for table blocks and index blocks, but perhaps we don't have\nroom.
Anyway, leaving them set until we need them again seems like the\nbest we can do as things stand.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Jul 2019 22:23:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hello.\n\nAt Tue, 9 Jul 2019 17:38:44 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in <244cb241-168b-d6a9-c45f-a80c34cdc6ad@nttcom.co.jp>\n> Hi Alvaro, Anthony, Julien and Robert,\n> \n> On 2019/07/09 3:47, Julien Rouhaud wrote:\n> > On Mon, Jul 8, 2019 at 8:44 PM Robert Haas <robertmhaas@gmail.com>\n> > wrote:\n> >>\n> >> On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera\n> >> <alvherre@2ndquadrant.com> wrote:\n> >>> Yeah, I got the impression that that was determined to be the\n> >>> desirable\n> >>> behavior, so I made it do that, but I'm not really happy about it\n> >>> either. We're not too late to change the CREATE INDEX behavior, but\n> >>> let's discuss what is it that we want.\n> >>\n> >> I don't think I intended to make any such determination -- which\n> >> commit do you think established this as the canonical behavior?\n> >>\n> >> I propose that once a field is set, we should leave it set until the\n> >> end.\n> > +1\n> > Note that this patch is already behaving like that if the table only\n> > contains dead rows.\n\n+1 from me.\n\n> I fixed the patch including:\n> \n> - Replace \"if\" to \"else if\". (Suggested by Julien)\n> - Fix typo s/ech/each/. (Suggested by Anthony)\n> - Add Phase \"analyzing complete\" in the pgstat view. 
(Suggested by\n> - Julien, Robert and me)\n> It was overlooked to add it in system_views.sql.\n> \n> I share my re-test result, see below:\n> \n> ---------------------------------------------------------\n> [Session #1]\n> create table hoge as select * from generate_series(1, 1000000) a;\n> analyze verbose hoge;\n> \n> [Session #2]\n> \\a \\t\n> select * from pg_stat_progress_analyze; \\watch 0.001\n> \n> 3785|13599|postgres|16384|f|16384|scanning table|4425|6\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|31\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|70\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|109\n> ...\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n> 3785|13599|postgres|16384|f|16384|scanning table|4425|4425\n> 3785|13599|postgres|16384|f|16384|analyzing sample|0|0\n> 3785|13599|postgres|16384|f|16384|analyzing complete|0|0 <-- Added and\n> fixed. :)\n> ---------------------------------------------------------\n\nI have some comments.\n\n+\t\tconst int index[] = {\n+\t\t\tPROGRESS_ANALYZE_PHASE,\n+\t\t\tPROGRESS_ANALYZE_TOTAL_BLOCKS,\n+\t\t\tPROGRESS_ANALYZE_BLOCKS_DONE\n+\t\t};\n+\t\tconst int64 val[] = {\n+\t\t\tPROGRESS_ANALYZE_PHASE_ANALYSIS,\n+\t\t\t0, 0\n\nDo you oppose to leaving the total/done blocks alone here:p?\n\n\n\n+ BlockNumber nblocks;\n+ double blksdone = 0;\n\nWhy is it a double even though blksdone is of the same domain\nwith nblocks? And finally:\n\n+ pgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n+ ++blksdone);\n\nIt is converted into int64.\n\n\n\n+ WHEN 2 THEN 'analyzing sample'\n+ WHEN 3 THEN 'analyzing sample (extended stats)'\n\nI think we should avoid parenthesized phrases as far as\nnot-necessary. That makes the column unnecessarily long. 
The\nphase is internally called as \"compute stats\" so *I* would prefer\nsomething like the followings:\n\n+ WHEN 2 THEN 'computing stats'\n+ WHEN 3 THEN 'computing extended stats'\n\n\n\n+ WHEN 4 THEN 'analyzing complete'\n\nAnd if you are intending by this that (as mentioned in the\ndocumentation) \"working to complete this analyze\", this would be:\n\n+ WHEN 4 THEN 'completing analyze'\n+ WHEN 4 THEN 'finalizing analyze'\n\n\n+ <entry>Process ID of backend.</entry>\n\nof \"the\" backend. ? And period is not attached on all\ndescriptions consisting of a single sentence.\n\n+ <entry>OID of the database to which this backend is connected.</entry>\n+ <entry>Name of the database to which this backend is connected.</entry>\n\n\"database to which .. is connected\" is phrased as \"database this\nbackend is connected to\" in pg_stat_activity. (Just for consistency)\n\n\n+ <entry>Whether the current scan includes legacy inheritance children.</entry>\n\nThis apparently excludes partition tables but actually it is\nincluded.\n\n \"Whether scanning through child tables\" ?\n\nI'm not sure \"child tables\" is established as the word to mean\nboth \"partition tables and inheritance children\"..\n\n\n+ The table being scanned (differs from <literal>relid</literal>\n+ only when processing partitions or inheritance children).\n\nIs <literal> needed? And I think the parentheses are not needed.\n\n OID of the table currently being scanned. Can differ from relid\n when analyzing tables that have child tables.\n\n\n+ Total number of heap blocks to scan in the current table.\n\n Number of heap blocks on scanning_table to scan?\n\nIt might be better that this description describes that this\nand the next column is meaningful only while the phase\n\"scanning table\".\n\n\n+ The command is currently scanning the table.\n\n\"sample(s)\" comes somewhat abruptly in the next item.
Something\nlike \"The command is currently scanning the table\n<structfield>scanning_table</structfield> to obtain samples\"\nmight be better.\n\n\n+ <command>ANALYZE</command> is currently extracting statistical data\n+ from the sample obtained.\n\nSomething like \"The command is computing stats from the samples\nobtained in the previous phase\" might be better.\n\n\nregards.\n\n- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 11 Jul 2019 19:56:10 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Horiguchi-san!\n\n\nOn 2019/07/11 19:56, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Tue, 9 Jul 2019 17:38:44 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in <244cb241-168b-d6a9-c45f-a80c34cdc6ad@nttcom.co.jp>\n>> Hi Alvaro, Anthony, Julien and Robert,\n>>\n>> On 2019/07/09 3:47, Julien Rouhaud wrote:\n>>> On Mon, Jul 8, 2019 at 8:44 PM Robert Haas <robertmhaas@gmail.com>\n>>> wrote:\n>>>>\n>>>> On Mon, Jul 8, 2019 at 2:18 PM Alvaro Herrera\n>>>> <alvherre@2ndquadrant.com> wrote:\n>>>>> Yeah, I got the impression that that was determined to be the\n>>>>> desirable\n>>>>> behavior, so I made it do that, but I'm not really happy about it\n>>>>> either. We're not too late to change the CREATE INDEX behavior, but\n>>>>> let's discuss what is it that we want.\n>>>>\n>>>> I don't think I intended to make any such determination -- which\n>>>> commit do you think established this as the canonical behavior?\n>>>>\n>>>> I propose that once a field is set, we should leave it set until the\n>>>> end.\n>>> +1\n>>> Note that this patch is already behaving like that if the table only\n>>> contains dead rows.\n> \n> +1 from me.\n> \n>> 3785|13599|postgres|16384|f|16384|analyzing complete|0|0 <-- Added and\n>> fixed. 
:)\n>> ---------------------------------------------------------\n>\n> I have some comments.\n> \n> +\t\tconst int index[] = {\n> +\t\t\tPROGRESS_ANALYZE_PHASE,\n> +\t\t\tPROGRESS_ANALYZE_TOTAL_BLOCKS,\n> +\t\t\tPROGRESS_ANALYZE_BLOCKS_DONE\n> +\t\t};\n> +\t\tconst int64 val[] = {\n> +\t\t\tPROGRESS_ANALYZE_PHASE_ANALYSIS,\n> +\t\t\t0, 0\n> \n> Do you oppose to leaving the total/done blocks alone here:p?\n\n\nThanks for your comments!\nI created a new patch based on your comments because Alvaro allowed me\nto work on revising the patch. :)\n\n\nAh, I revised it to remove \"0, 0\".\n\n\n\n> + BlockNumber nblocks;\n> + double blksdone = 0;\n> \n> Why is it a double even though blksdone is of the same domain\n> with nblocks? And finally:\n> \n> + pgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n> + ++blksdone);\n> \n> It is converted into int64.\n\n\nFixed.\nBut is it suitable to use BlockNumber instead int64?\n\n\n \n> + WHEN 2 THEN 'analyzing sample'\n> + WHEN 3 THEN 'analyzing sample (extended stats)'\n> \n> I think we should avoid parenthesized phrases as far as\n> not-necessary. That makes the column unnecessarily long. The\n> phase is internally called as \"compute stats\" so *I* would prefer\n> something like the followings:\n> \n> + WHEN 2 THEN 'computing stats'\n> + WHEN 3 THEN 'computing extended stats'\n> \n> \n> \n> + WHEN 4 THEN 'analyzing complete'\n> \n> And if you are intending by this that (as mentioned in the\n> documentation) \"working to complete this analyze\", this would be:\n> \n> + WHEN 4 THEN 'completing analyze'\n> + WHEN 4 THEN 'finalizing analyze'\n\n\nI have no strong opinion, so I changed the phase-names based on\nyour suggestions like following:\n\n WHEN 2 THEN 'computing stats'\n WHEN 3 THEN 'computing extended stats'\n WHEN 4 THEN 'finalizing analyze'\n\nHowever, I'd like to get any comments from hackers to get a consensus\nabout the names.\n\n\n\n> + <entry>Process ID of backend.</entry>\n> \n> of \"the\" backend. ? 
And period is not attached on all\n> descriptions consisting of a single sentence.\n>\n> + <entry>OID of the database to which this backend is connected.</entry>\n> + <entry>Name of the database to which this backend is connected.</entry>\n> \n> \"database to which .. is connected\" is phrased as \"database this\n> backend is connected to\" in pg_stat_acttivity. (Just for consistency)\n\n\nI checked the sentences on other views of progress monitor (VACUUM,\nCREATE INDEX and CLUSTER), and they are same sentence. Therefore,\nI'd like to keep it as is. :)\n\n\n\n> + <entry>Whether the current scan includes legacy inheritance children.</entry>\n> \n> This apparently excludes partition tables but actually it is\n> included.\n>\n> \"Whether scanning through child tables\" ?\n> \n> I'm not sure \"child tables\" is established as the word to mean\n> both \"partition tables and inheritance children\"..\n\n\nHmm... I'm also not sure but I fixed as you suggested.\n\n\n\n> + The table being scanned (differs from <literal>relid</literal>\n> + only when processing partitions or inheritance children).\n> \n> Is <literal> needed? And I think the parentheses are not needed.\n> \n> OID of the table currently being scanned. Can differ from relid\n> when analyzing tables that have child tables.\n\n\nHow about:\n OID of the table currently being scanned.\n It might be different from relid when analyzing tables that have child tables.\n\n\n\n> + Total number of heap blocks to scan in the current table.\n> \n> Number of heap blocks on scanning_table to scan?\n> \n> It might be better that this description describes that this\n> and the next column is meaningful only while the phase\n> \"scanning table\".\n\n\nHow about:\n Total number of heap blocks in the scanning_table.\n\n\n\n\n> + The command is currently scanning the table.\n> \n> \"sample(s)\" comes somewhat abruptly in the next item. 
> Something\n> like \"The command is currently scanning the table\n> <structfield>scanning_table</structfield> to obtain samples\"\n> might be better.\n\n\nFixed.\n\n \n> + <command>ANALYZE</command> is currently extracting statistical data\n> + from the sample obtained.\n> \n> Something like \"The command is computing stats from the samples\n> obtained in the previous phase\" might be better.\n\n\nFixed.\n\n\nPlease find attached patch. :)\n\nHere is a test result of the patch.\n==========================================================\n# select * from pg_stat_progress_analyze ; \\watch 0.0001;\n\n9067|13599|postgres|16387|f|16387|scanning table|443|14\n9067|13599|postgres|16387|f|16387|scanning table|443|44\n9067|13599|postgres|16387|f|16387|scanning table|443|76\n9067|13599|postgres|16387|f|16387|scanning table|443|100\n...\n9067|13599|postgres|16387|f|16387|scanning table|443|443\n9067|13599|postgres|16387|f|16387|scanning table|443|443\n9067|13599|postgres|16387|f|16387|scanning table|443|443\n9067|13599|postgres|16387|f|16387|computing stats|443|443\n9067|13599|postgres|16387|f|16387|computing stats|443|443\n9067|13599|postgres|16387|f|16387|computing stats|443|443\n9067|13599|postgres|16387|f|16387|finalizing analyze|443|443\n==========================================================\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Mon, 22 Jul 2019 15:02:16 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hello.\n\n# It's very good timing, as you came in while I have a time after\n# finishing a quite nerve-wracking task..\n\nAt Mon, 22 Jul 2019 15:02:16 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in <0876b4fe-26fb-ca32-f179-c696fa3ddfec@nttcom.co.jp>\n> >> 3785|13599|postgres|16384|f|16384|analyzing complete|0|0 <-- Added and\n> >> fixed.
:)\n> >> ---------------------------------------------------------\n> >\n> > I have some comments.\n> > +\t\tconst int index[] = {\n> > +\t\t\tPROGRESS_ANALYZE_PHASE,\n> > +\t\t\tPROGRESS_ANALYZE_TOTAL_BLOCKS,\n> > +\t\t\tPROGRESS_ANALYZE_BLOCKS_DONE\n> > +\t\t};\n> > +\t\tconst int64 val[] = {\n> > +\t\t\tPROGRESS_ANALYZE_PHASE_ANALYSIS,\n> > +\t\t\t0, 0\n> > Do you oppose to leaving the total/done blocks alone here:p?\n> \n> \n> Thanks for your comments!\n> I created a new patch based on your comments because Alvaro allowed me\n> to work on revising the patch. :)\n> \n> \n> Ah, I revised it to remove \"0, 0\".\n\nThanks. For a second I thought that\nPROGRESS_ANALYZE_PHASE_ANALYSIS was lost but it is living being\nrenamed to PROGRESS_ANALYZE_PHASE_ANALYSIS.\n\n> > + BlockNumber nblocks;\n> > + double blksdone = 0;\n> > Why is it a double even though blksdone is of the same domain\n> > with nblocks? And finally:\n> > + pgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n> > + ++blksdone);\n> > It is converted into int64.\n> \n> \n> Fixed.\n> But is it suitable to use BlockNumber instead int64?\n\nYeah, I didn't meant that we should use int64 there. Sorry for\nthe confusing comment. I meant that blksdone should be of\nBlockNumber.\n\n> > + WHEN 2 THEN 'analyzing sample'\n> > + WHEN 3 THEN 'analyzing sample (extended stats)'\n> > I think we should avoid parenthesized phrases as far as\n> > not-necessary. That makes the column unnecessarily long. 
> > The\n> > phase is internally called as \"compute stats\" so *I* would prefer\n> > something like the followings:\n> > + WHEN 2 THEN 'computing stats'\n> > + WHEN 3 THEN 'computing extended stats'\n> > + WHEN 4 THEN 'analyzing complete'\n> > And if you are intending by this that (as mentioned in the\n> > documentation) \"working to complete this analyze\", this would be:\n> > + WHEN 4 THEN 'completing analyze'\n> > + WHEN 4 THEN 'finalizing analyze'\n> \n> \n> I have no strong opinion, so I changed the phase-names based on\n> your suggestions like following:\n> \n> WHEN 2 THEN 'computing stats'\n> WHEN 3 THEN 'computing extended stats'\n> WHEN 4 THEN 'finalizing analyze'\n> \n> However, I'd like to get any comments from hackers to get a consensus\n> about the names.\n\nAgreed. Especially such word choosing is not suitable for me..\n\n> > + <entry>Process ID of backend.</entry>\n> > of \"the\" backend. ? And period is not attached on all\n> > descriptions consisting of a single sentence.\n> >\n> > + <entry>OID of the database to which this backend is\n> > connected.</entry>\n> > + <entry>Name of the database to which this backend is\n> > connected.</entry>\n> > \"database to which .. is connected\" is phrased as \"database this\n> > backend is connected to\" in pg_stat_activity. (Just for consistency)\n> \n> \n> I checked the sentences on other views of progress monitor (VACUUM,\n> CREATE INDEX and CLUSTER), and they are same sentence. Therefore,\n> I'd like to keep it as is. :)\n\nOh, I see from where the wordings came. But no periods seen after\nsentence when it is the only one in a description in other system\nviews tables.
I think the progress views tables should be\ncorrected following convention.\n\n> > + <entry>Whether the current scan includes legacy inheritance\n> > children.</entry>\n> > This apparently excludes partition tables but actually it is\n> > included.\n> >\n> > \"Whether scanning through child tables\" ?\n> > I'm not sure \"child tables\" is established as the word to mean\n> > both \"partition tables and inheritance children\"..\n> \n> \n> Hmm... I'm also not sure but I fixed as you suggested.\n\nYeah, I also am not sure the suggestion is good enough as is..\n\n\n> > + The table being scanned (differs from <literal>relid</literal>\n> > + only when processing partitions or inheritance children).\n> > Is <literal> needed? And I think the parentheses are not needed.\n> > OID of the table currently being scanned. Can differ from relid\n> > when analyzing tables that have child tables.\n> \n> \n> How about:\n> OID of the table currently being scanned.\n> It might be different from relid when analyzing tables that have child\n> tables.\n> \n> \n> \n> > + Total number of heap blocks to scan in the current table.\n> > Number of heap blocks on scanning_table to scan?\n> > It might be better that this description describes that this\n> > and the next column is meaningful only while the phase\n> > \"scanning table\".\n> \n> \n> How about:\n> Total number of heap blocks in the scanning_table.\n\n(For me, ) that seems like it shows blocks including all\ndescendents for inheritance parent. But I'm not sure..a\n\n> > + The command is currently scanning the table.\n> > \"sample(s)\" comes somewhat abruptly in the next item. 
Something\n> > like \"The command is currently scanning the table\n> > <structfield>scanning_table</structfield> to obtain samples\"\n> > might be better.\n> \n> \n> Fixed.\n> \n> \n> > + <command>ANALYZE</command> is currently extracting statistical data\n> > + from the sample obtained.\n> > Something like \"The command is computing stats from the samples\n> > obtained in the previous phase\" might be better.\n> \n> \n> Fixed.\n> \n> \n> Please find attached patch. :)\n> \n> Here is a test result of the patch.\n> ==========================================================\n> # select * from pg_stat_progress_analyze ; \\watch 0.0001;\n> \n> 9067|13599|postgres|16387|f|16387|scanning table|443|14\n> 9067|13599|postgres|16387|f|16387|scanning table|443|44\n> 9067|13599|postgres|16387|f|16387|scanning table|443|76\n> 9067|13599|postgres|16387|f|16387|scanning table|443|100\n> ...\n> 9067|13599|postgres|16387|f|16387|scanning table|443|443\n> 9067|13599|postgres|16387|f|16387|scanning table|443|443\n> 9067|13599|postgres|16387|f|16387|scanning table|443|443\n> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n> 9067|13599|postgres|16387|f|16387|finalizing analyze|443|443\n> ==========================================================\n\nLooks fine!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 22 Jul 2019 17:30:39 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Horiguchi-san, Alvaro, Anthony, Julien and Robert,\n\n\nOn 2019/07/22 17:30, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> # It's very good timing, as you came in while I have a time after\n> # finishing a quite nerve-wrackig task..\n> \n> At Mon, 22 Jul 2019 15:02:16 +0900, Tatsuro Yamada 
<tatsuro.yamada.tf@nttcom.co.jp> wrote in <0876b4fe-26fb-ca32-f179-c696fa3ddfec@nttcom.co.jp>\n>>>> 3785|13599|postgres|16384|f|16384|analyzing complete|0|0 <-- Added and\n>>>> fixed. :)\n>>>> ---------------------------------------------------------\n>>>\n>>> I have some comments.\n>>> +\t\tconst int index[] = {\n>>> +\t\t\tPROGRESS_ANALYZE_PHASE,\n>>> +\t\t\tPROGRESS_ANALYZE_TOTAL_BLOCKS,\n>>> +\t\t\tPROGRESS_ANALYZE_BLOCKS_DONE\n>>> +\t\t};\n>>> +\t\tconst int64 val[] = {\n>>> +\t\t\tPROGRESS_ANALYZE_PHASE_ANALYSIS,\n>>> +\t\t\t0, 0\n>>> Do you oppose to leaving the total/done blocks alone here:p?\n>>\n>>\n>> Thanks for your comments!\n>> I created a new patch based on your comments because Alvaro allowed me\n>> to work on revising the patch. :)\n>>\n>>\n>> Ah, I revised it to remove \"0, 0\".\n> \n> Thanks. For a second I thought that\n> PROGRESS_ANALYZE_PHASE_ANALYSIS was lost but it is living being\n> renamed to PROGRESS_ANALYZE_PHASE_ANALYSIS.\n\n\n\"PROGRESS_ANALYZE_PHASE_ANALYSIS\" was replaced with\n\"PROGRESS_ANALYZE_PHASE_COMPUTING\" because I changed\nthe phase-name on v3.patch like this:\n\n./src/include/commands/progress.h\n\n+/* Phases of analyze (as advertised via PROGRESS_ANALYZE_PHASE) */\n+#define PROGRESS_ANALYZE_PHASE_SCAN_TABLE 1\n+#define PROGRESS_ANALYZE_PHASE_COMPUTING 2\n+#define PROGRESS_ANALYZE_PHASE_COMPUTING_EXTENDED 3\n+#define PROGRESS_ANALYZE_PHASE_FINALIZE 4\n\nIs it Okay?\n\n \n>>> + BlockNumber nblocks;\n>>> + double blksdone = 0;\n>>> Why is it a double even though blksdone is of the same domain\n>>> with nblocks? And finally:\n>>> + pgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n>>> + ++blksdone);\n>>> It is converted into int64.\n>>\n>>\n>> Fixed.\n>> But is it suitable to use BlockNumber instead int64?\n> \n> Yeah, I didn't meant that we should use int64 there. Sorry for\n> the confusing comment. I meant that blksdone should be of\n> BlockNumber.\n\n\nFixed. Thanks for the clarification. 
:)\nAttached v4 patch file only includes this fix.\n \n\n>>> + WHEN 2 THEN 'analyzing sample'\n>>> + WHEN 3 THEN 'analyzing sample (extended stats)'\n>>> I think we should avoid parenthesized phrases as far as\n>>> not-necessary. That makes the column unnecessarily long. The\n>>> phase is internally called as \"compute stats\" so *I* would prefer\n>>> something like the followings:\n>>> + WHEN 2 THEN 'computing stats'\n>>> + WHEN 3 THEN 'computing extended stats'\n>>> + WHEN 4 THEN 'analyzing complete'\n>>> And if you are intending by this that (as mentioned in the\n>>> documentation) \"working to complete this analyze\", this would be:\n>>> + WHEN 4 THEN 'completing analyze'\n>>> + WHEN 4 THEN 'finalizing analyze'\n>>\n>>\n>> I have no strong opinion, so I changed the phase-names based on\n>> your suggestions like following:\n>>\n>> WHEN 2 THEN 'computing stats'\n>> WHEN 3 THEN 'computing extended stats'\n>> WHEN 4 THEN 'finalizing analyze'\n>>\n>> However, I'd like to get any comments from hackers to get a consensus\n>> about the names.\n> \n> Agreed. Especially such word choosing is not suitable for me..\n\n\nTo Alvaro, Julien, Anthony and Robert,\nDo you have any ideas? :)\n\n\n \n>>> + <entry>Process ID of backend.</entry>\n>>> of \"the\" backend. ? And period is not attached on all\n>>> descriptions consisting of a single sentence.\n>>>\n>>> + <entry>OID of the database to which this backend is\n>>> connected.</entry>\n>>> + <entry>Name of the database to which this backend is\n>>> connected.</entry>\n>>> \"database to which .. is connected\" is phrased as \"database this\n>>> backend is connected to\" in pg_stat_acttivity. (Just for consistency)\n>>\n>>\n>> I checked the sentences on other views of progress monitor (VACUUM,\n>> CREATE INDEX and CLUSTER), and they are same sentence. Therefore,\n>> I'd like to keep it as is. :)\n> \n> Oh, I see from where the wordings came. 
But no periods seen after\n> sentense when it is the only one in a description in other system\n> views tables. I think the progress views tables should be\n> corrected following convention.\n\n\nSounds reasonable. However, I'd like to create another patch after\nthis feature was committed since that document fix influence other\nprogress monitor's description on the document.\n\n \n>>> + <entry>Whether the current scan includes legacy inheritance\n>>> children.</entry>\n>>> This apparently excludes partition tables but actually it is\n>>> included.\n>>>\n>>> \"Whether scanning through child tables\" ?\n>>> I'm not sure \"child tables\" is established as the word to mean\n>>> both \"partition tables and inheritance children\"..\n>>\n>>\n>> Hmm... I'm also not sure but I fixed as you suggested.\n> \n> Yeah, I also am not sure the suggestion is good enough as is..\n>\n>>> + Total number of heap blocks to scan in the current table.\n>>> Number of heap blocks on scanning_table to scan?\n>>> It might be better that this description describes that this\n>>> and the next column is meaningful only while the phase\n>>> \"scanning table\".\n>>\n>>\n>> How about:\n>> Total number of heap blocks in the scanning_table.\n> \n> (For me, ) that seems like it shows blocks including all\n> descendents for inheritance parent. But I'm not sure..a\n\n\nIn the case of scanning_table is parent table, it doesn't\nshow the number. However, child tables shows the number.\nI tested it using Declarative partitioning table, See the bottom\nof this email. :)\n\n\n>> Please find attached patch. 
>> :)\n>>\n>> Here is a test result of the patch.\n>> ==========================================================\n>> # select * from pg_stat_progress_analyze ; \\watch 0.0001;\n>>\n>> 9067|13599|postgres|16387|f|16387|scanning table|443|14\n>> ...\n>> 9067|13599|postgres|16387|f|16387|scanning table|443|443\n>> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n>> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n>> 9067|13599|postgres|16387|f|16387|computing stats|443|443\n>> 9067|13599|postgres|16387|f|16387|finalizing analyze|443|443\n>> ==========================================================\n> \n> Looks fine!\n\n\nI shared a test result using Declarative partitioning table.\n\n==========================================================\n## Create partitioning table\ncreate table hoge as select a from generate_series(0, 40000) a;\n\ncreate table hoge2(\n a integer\n) partition by range(a);\n\ncreate table hoge2_10000 partition of hoge2\nfor values from (0) to (10000);\n\ncreate table hoge2_20000 partition of hoge2\nfor values from (10000) to (20000);\n\ncreate table hoge2_30000 partition of hoge2\nfor values from (20000) to (30000);\n\ncreate table hoge2_default partition of hoge2 default;\n\n\n## Test\nselect oid,relname,relpages,reltuples from pg_class where relname like 'hoge%';\n\n oid | relname | relpages | reltuples\n-------+---------------+----------+-----------\n 16538 | hoge | 177 | 40001\n 16541 | hoge2 | 0 | 0\n 16544 | hoge2_10000 | 45 | 10000\n 16547 | hoge2_20000 | 45 | 10000\n 16550 | hoge2_30000 | 45 | 10000\n 16553 | hoge2_default | 45 | 10001\n(6 rows)\n\nselect * from pg_stat_progress_analyze ; \\watch 0.00001;\n\n27579|13599|postgres|16541|t|16544|scanning table|45|17\n27579|13599|postgres|16541|t|16544|scanning table|45|38\n27579|13599|postgres|16541|t|16544|scanning table|45|45\n27579|13599|postgres|16541|t|16544|scanning table|45|45\n27579|13599|postgres|16541|t|16544|scanning table|45|45\n\n27579|13599|postgres|16541|t|16547|scanning table|45|17\n27579|13599|postgres|16541|t|16547|scanning table|45|37\n27579|13599|postgres|16541|t|16547|scanning table|45|45\n27579|13599|postgres|16541|t|16547|scanning table|45|45\n27579|13599|postgres|16541|t|16547|scanning table|45|45\n\n27579|13599|postgres|16541|t|16550|scanning table|45|9\n27579|13599|postgres|16541|t|16550|scanning table|45|30\n27579|13599|postgres|16541|t|16550|scanning table|45|45\n27579|13599|postgres|16541|t|16550|scanning table|45|45\n27579|13599|postgres|16541|t|16550|scanning table|45|45\n\n27579|13599|postgres|16541|t|16553|scanning table|45|5\n27579|13599|postgres|16541|t|16553|scanning table|45|26\n27579|13599|postgres|16541|t|16553|scanning table|45|42\n27579|13599|postgres|16541|t|16553|scanning table|45|45\n27579|13599|postgres|16541|t|16553|scanning table|45|45\n27579|13599|postgres|16541|t|16553|computing stats|45|45\n27579|13599|postgres|16541|t|16553|computing stats|45|45\n27579|13599|postgres|16541|t|16553|computing stats|45|45\n27579|13599|postgres|16541|t|16553|finalizing analyze|45|45\n\n27579|13599|postgres|16544|f|16544|scanning table|45|1\n27579|13599|postgres|16544|f|16544|scanning table|45|30\n27579|13599|postgres|16544|f|16544|computing stats|45|45\n\n27579|13599|postgres|16547|f|16547|scanning table|45|25\n27579|13599|postgres|16547|f|16547|computing stats|45|45\n\n27579|13599|postgres|16550|f|16550|scanning table|45|11\n27579|13599|postgres|16550|f|16550|scanning table|45|38\n27579|13599|postgres|16550|f|16550|finalizing analyze|45|45\n\n27579|13599|postgres|16553|f|16553|scanning table|45|25\n27579|13599|postgres|16553|f|16553|computing stats|45|45\n==========================================================\n\nI'll share test result using partitioning table via\nInheritance tables on next email.
:)\n\nThanks,\nTatsuro Yamada", "msg_date": "Tue, 23 Jul 2019 13:51:04 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Tue, Jul 23, 2019 at 4:51 PM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> Attached v4 patch file only includes this fix.\n\nHello all,\n\nI've moved this to the September CF, where it is in \"Needs review\" state.\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:44:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Thu, Aug 1, 2019 at 4:45 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 23, 2019 at 4:51 PM Tatsuro Yamada\n> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> > Attached v4 patch file only includes this fix.\n>\n> I've moved this to the September CF, where it is in \"Needs review\" state.\n\n/me reviews.\n\n+ <entry><structfield>scanning_table</structfield></entry>\n\nI think this should be retitled to something that ends in 'relid',\nlike all of the corresponding cases in existing progress views.\nPerhaps 'active_relid' or 'current_relid'.\n\n+ The command is computing extended stats from the samples\nobtained in the previous phase.\n\nI think you should change this (and the previous one) to say \"from the\nsamples obtained during the table scan.\"\n\n+ Total number of heap blocks in the scanning_table.\n\nPerhaps I'm confused, but it looks to me like what you are advertising\nis the number of blocks that will be sampled, not the total number of\nblocks in the table. 
I think that's the right thing to advertise, but\nthen the column should be named and documented that way.\n\n+ {\n+ const int index[] = {\n+ PROGRESS_ANALYZE_TOTAL_BLOCKS,\n+ PROGRESS_ANALYZE_SCANREL\n+ };\n+ const int64 val[] = {\n+ nblocks,\n+ RelationGetRelid(onerel)\n+ };\n+\n+ pgstat_progress_update_multi_param(2, index, val);\n+ }\n\nThis block seems to be introduced just so you can declare variables; I\ndon't think that's good style. It's arguably unnecessary because we\nnow are selectively allowing variable declarations within functions,\nbut I think you should just move the first array to the top of the\nfunction and the second declaration to the top of the function\ndropping const, and then just do val[0] = nblocks and val[1] =\nRelationGetRelid(onerel). Maybe you can also come up with better\nnames than 'index' and 'val'. Same comment applies to another place\nwhere you have something similar.\n\nPatch seems to need minor rebasing.\n\nMaybe \"scanning table\" should be renamed \"acquiring sample rows,\" to\nmatch the names used in the code?\n\nI'm not a fan of the way you set the scan-table phase and inh flag in\none place, and then slightly later set the relation OID and block\ncount. That creates a race during which users could see the first bit\nof data set and the second not set. 
I don't see any reason not to set\nall four fields together.\n\nPlease be sure to make the names of the constants you use match up\nwith the external names as far as it reasonably makes sense, e.g.\n\n+#define PROGRESS_ANALYZE_PHASE_SCAN_TABLE 1\n+#define PROGRESS_ANALYZE_PHASE_COMPUTING 2\n+#define PROGRESS_ANALYZE_PHASE_COMPUTING_EXTENDED 3\n+#define PROGRESS_ANALYZE_PHASE_FINALIZE 4\n\nvs.\n\n+ WHEN 0 THEN 'initializing'::text\n+ WHEN 1 THEN 'scanning table'::text\n+ WHEN 2 THEN 'computing stats'::text\n+ WHEN 3 THEN 'computing extended stats'::text\n+ WHEN 4 THEN 'finalizing analyze'::text\n\nNot terrible, but it could be closer.\n\nSimilarly with the column names (include_children vs. INH).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 1 Aug 2019 13:48:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Robert and All!\n\n\nOn 2019/08/02 2:48, Robert Haas wrote:\n> On Thu, Aug 1, 2019 at 4:45 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Tue, Jul 23, 2019 at 4:51 PM Tatsuro Yamada\n>> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>>> Attached v4 patch file only includes this fix.\n>>\n>> I've moved this to the September CF, where it is in \"Needs review\" state.\n> \n> /me reviews.\n\n\nThanks for your comments! :)\n\n\n> + <entry><structfield>scanning_table</structfield></entry>\n> \n> I think this should be retitled to something that ends in 'relid',\n> like all of the corresponding cases in existing progress views.\n> Perhaps 'active_relid' or 'current_relid'.\n\n\nFixed.\nI changed \"scanning_table\" to \"current_relid\" for analyze in monitoring.sgml.\nHowever, I didn't change \"relid\" in other places for other commands because\nI'd like to create another patch for that later. 
:)\n\n\n > + The command is computing extended stats from the samples\n> obtained in the previous phase.\n> \n> I think you should change this (and the previous one) to say \"from the\n> samples obtained during the table scan.\"\n\n\nFixed.\n\n\n> + Total number of heap blocks in the scanning_table.\n> \n> Perhaps I'm confused, but it looks to me like what you are advertising\n> is the number of blocks that will be sampled, not the total number of\n> blocks in the table. I think that's the right thing to advertise, but\n> then the column should be named and documented that way.\n\n\nAh, you are right. Fixed.\nI used the following sentence based on Vinayak's patch created two years ago.\n\n- <entry><structfield>heap_blks_total</structfield></entry>\n- <entry><type>bigint</type></entry>\n- <entry>\n- Total number of heap blocks in the current_relid.\n- </entry>\n\n+ <entry><structfield>sample_blks_total</structfield></entry>\n+ <entry><type>bigint</></entry>\n+ <entry>\n+ Total number of heap blocks that will be sampled.\n+</entry>\n\n\n\n> + {\n> + const int index[] = {\n> + PROGRESS_ANALYZE_TOTAL_BLOCKS,\n> + PROGRESS_ANALYZE_SCANREL\n> + };\n> + const int64 val[] = {\n> + nblocks,\n> + RelationGetRelid(onerel)\n> + };\n> +\n> + pgstat_progress_update_multi_param(2, index, val);\n> + }\n> \n> This block seems to be introduced just so you can declare variables; I\n> don't think that's good style. It's arguably unnecessary because we\n> now are selectively allowing variable declarations within functions,\n> but I think you should just move the first array to the top of the\n> function and the second declaration to the top of the function\n> dropping const, and then just do val[0] = nblocks and val[1] =\n> RelationGetRelid(onerel). Maybe you can also come up with better\n> names than 'index' and 'val'. 
Same comment applies to another place\n> where you have something similar.\n\n\nI agreed and fixed.\n\n\n> Patch seems to need minor rebasing.\n> \n> Maybe \"scanning table\" should be renamed \"acquiring sample rows,\" to\n> match the names used in the code?\n\n\nI fixed as follows:\n\ns/PROGRESS_ANALYZE_PHASE_SCAN_TABLE/\n PROGRESS_ANALYZE_PHASE_ACQUIRING_SAMPLE_ROWS/\n\ns/WHEN 1 THEN 'scanning table'::text/\n WHEN 1 THEN 'acquiring sample rows'::text/\n\n\n> I'm not a fan of the way you set the scan-table phase and inh flag in\n> one place, and then slightly later set the relation OID and block\n> count. That creates a race during which users could see the first bit\n> of data set and the second not set. I don't see any reason not to set\n> all four fields together.\n\n\nHmm... I understand but it's a little difficult because if there are\nchild rels, acquire_inherited_sample_rows() calls acquire_sample_rows()\n(See below). So, it would be possible to set all four fields together if the inh flag\nis given as a parameter of those functions, I suppose. But I'm not sure\nwhether it's okay to add the parameter to both functions or not.\nDo you have any ideas? 
:)\n\n\n# do_analyze_rel()\n ...\n if (inh)\n numrows = acquire_inherited_sample_rows(onerel, elevel,\n rows, targrows,\n &totalrows, &totaldeadrows);\n else\n numrows = (*acquirefunc) (onerel, elevel,\n rows, targrows,\n &totalrows, &totaldeadrows);\n\n\n# acquire_inherited_sample_rows()\n ...\n foreach(lc, tableOIDs)\n {\n ...\n /* Check table type (MATVIEW can't happen, but might as well allow) */\n if (childrel->rd_rel->relkind == RELKIND_RELATION ||\n childrel->rd_rel->relkind == RELKIND_MATVIEW)\n {\n /* Regular table, so use the regular row acquisition function */\n acquirefunc = acquire_sample_rows;\n ...\n /* OK, we'll process this child */\n has_child = true;\n rels[nrels] = childrel;\n acquirefuncs[nrels] = acquirefunc;\n ...\n }\n ...\n for (i = 0; i < nrels; i++)\n {\n ...\n AcquireSampleRowsFunc acquirefunc = acquirefuncs[i];\n ...\n if (childtargrows > 0)\n {\n ...\n /* Fetch a random sample of the child's rows */\n childrows = (*acquirefunc) (childrel, elevel,\n rows + numrows, childtargrows,\n &trows, &tdrows)\n\n\n> Please be sure to make the names of the constants you use match up\n> with the external names as far as it reasonably makes sense, e.g.\n> \n> +#define PROGRESS_ANALYZE_PHASE_SCAN_TABLE 1\n> +#define PROGRESS_ANALYZE_PHASE_COMPUTING 2\n> +#define PROGRESS_ANALYZE_PHASE_COMPUTING_EXTENDED 3\n> +#define PROGRESS_ANALYZE_PHASE_FINALIZE 4\n> \n> vs.\n> \n> + WHEN 0 THEN 'initializing'::text\n> + WHEN 1 THEN 'scanning table'::text\n> + WHEN 2 THEN 'computing stats'::text\n> + WHEN 3 THEN 'computing extended stats'::text\n> + WHEN 4 THEN 'finalizing analyze'::text\n> \n> Not terrible, but it could be closer.\n\n\nAgreed.\nHow about these?:\n\n#define PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS 1 <- fixed\n#define PROGRESS_ANALYZE_PHASE_COMPUTE_STATS 2 <- fixed\n#define PROGRESS_ANALYZE_PHASE_COMPUTE_EXT_STATS 3 <- fixed\n#define PROGRESS_ANALYZE_PHASE_FINALIZE_ANALYZE 4 <- fixed\n\nvs.\n\nWHEN 1 THEN 'acquiring sample rows'::text\nWHEN 2 THEN 
'computing stats'::text\nWHEN 3 THEN 'computing extended stats'::text\nWHEN 4 THEN 'finalizing analyze'::text\n\nI revised the name of the constants, so the constants and the phase\nnames are closer than before. Also, I used a verb instead of a gerund\nbecause other phase names used verbs, such as VACUUM's. :)\n\n\n> Similarly with the column names (include_children vs. INH).\n\n\nFixed.\nI selected \"include_children\" as the column name because it's\neasier to understand than \"INH\".\n\ns/PROGRESS_ANALYZE_INH/\n PROGRESS_ANALYZE_INCLUDE_CHILDREN/\n\n\nPlease find attached file. :)\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Tue, 13 Aug 2019 14:33:34 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hello,\n\nOn 2019-Jul-03, Tatsuro Yamada wrote:\n\n> My ex-colleague Vinayak created same patch in 2017 [1], and he\n> couldn't get commit because there are some reasons such as the\n> patch couldn't handle analyzing Foreign table. Therefore, I wonder\n> whether your patch is able to do that or not.\n\n> [1] ANALYZE command progress checker\n> https://www.postgresql.org/message-id/flat/968b4eda-2417-8b7b-d468-71643cf088b6%40openscg.com#574488592fcc9708c38fa44b0dae9006\n\nSo just now I went to check the old thread (which I should have\nsearched for before writing my own implementation!). It seems clear\nthat many things are pretty similar in both patches, but I think for the\nmost part they are similar just because the underlying infrastructure\nimposes a certain design already, rather than there being any actual\ncopying. (To be perfectly clear: I didn't even know that this patch\nexisted and I didn't grab any code from there to produce my v1.)\n\nHowever, you've now modified the patch from what I submitted and I'm\nwondering if you've taken any inspiration from Vinayak's old patch. If\nso, it seems fair to credit him as co-author in the commit message. 
It\nwould be good to get his input on the current patch, though.\n\nI have not looked at the current version of the patch yet, but I intend\nto do so during the upcoming commitfest.\n\nThanks for moving this forward!\n\n\nOn the subject of FDW support: I did look into supporting that before\nsubmitting this. I think it's not academically difficult: just have the\nFDW's acquire_sample_rows callback invoke the update_param functions\nonce in a while. Sadly, in practical terms it looks like postgres_fdw\nis quite stupid about ANALYZE (it scans the whole table??) so doing\nsomething that's actually useful may not be so easy. At least, we know\nthe total relation size and maybe we can add the ctid column to the\ncursor in postgresAcquireSampleRowsFunc so that we have a current block\nnumber to report (being careful about synchronized seqscans). I think\nthis should *not* be part of the main ANALYZE-progress commit, though,\nbecause getting that properly sorted out is going to take some more\ntime.\n\nI do wonder why postgres_fdw doesn't use TABLESAMPLE.\n\nI did not look at other FDWs at all, mind.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Aug 2019 10:01:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 13, 2019 at 11:01 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On the subject of FDW support: I did look into supporting that before\n> submitting this. I think it's not academically difficult: just have the\n> FDW's acquire_sample_rows callback invoke the update_param functions\n> once in a while. Sadly, in practical terms it looks like postgres_fdw\n> is quite stupid about ANALYZE (it scans the whole table??) so doing\n> something that's actually useful may not be so easy. 
At least, we know\n> the total relation size and maybe we can add the ctid column to the\n> cursor in postgresAcquireSampleRowsFunc so that we have a current block\n> number to report (being careful about synchronized seqscans).\n\nI don't follow this thread fully, so I might miss something, but I\ndon't think that's fully applicable, because foreign tables managed by\npostgres_fdw can be, e.g., views on the remote side.\n\n> I do wonder why postgres_fdw doesn't use TABLESAMPLE.\n\nYeah, that's really what I'm thinking for PG13; but I think we would\nstill need to scan the whole table in some cases (e.g., when the foreign\ntable is a view on the remote side), because the TABLESAMPLE clause can\nonly be applied to regular tables and materialized views.\n\n> I did not look at other FDWs at all, mind.\n\nIIUC, oracle_fdw already uses the SAMPLE BLOCK clause for that. Right?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 14 Aug 2019 16:28:32 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Aug-14, Etsuro Fujita wrote:\n\n> On Tue, Aug 13, 2019 at 11:01 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On the subject of FDW support: I did look into supporting that before\n> > submitting this. I think it's not academically difficult: just have the\n> > FDW's acquire_sample_rows callback invoke the update_param functions\n> > once in a while. Sadly, in practical terms it looks like postgres_fdw\n> > is quite stupid about ANALYZE (it scans the whole table??) so doing\n> > something that's actually useful may not be so easy. 
At least, we know\n> > the total relation size and maybe we can add the ctid column to the\n> > cursor in postgresAcquireSampleRowsFunc so that we have a current block\n> > number to report (being careful about synchronized seqscans).\n> \n> I don't follow this thread fully, so I might miss something, but I\n> don't think that's fully applicable, because foreign tables managed by\n> postgres_fdw can be, e.g., views on the remote side.\n\nOh, hmm, well I guess that covers the tables and matviews then ... I'm\nnot sure there's a good way to cover foreign tables that are views or\nother stuff. Maybe that'll have to do. But at least it covers more\ncases than none.\n\n> > I do wonder why postgres_fdw doesn't use TABLESAMPLE.\n> \n> Yeah, that's really what I'm thinking for PG13; but I think we would\n> still need to scan the whole table in some cases (e.g., when the foreign\n> table is a view on the remote side), because the TABLESAMPLE clause can\n> only be applied to regular tables and materialized views.\n\nSure.\n\n> > I did not look at other FDWs at all, mind.\n> \n> IIUC, oracle_fdw already uses the SAMPLE BLOCK clause for that. Right?\n\nYeah, it does that, I checked precisely oracle_fdw this morning.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 14 Aug 2019 17:42:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro,\n\nOn 2019/08/13 23:01, Alvaro Herrera wrote:\n> Hello,\n> \n> On 2019-Jul-03, Tatsuro Yamada wrote:\n> \n>> My ex-colleague Vinayak created same patch in 2017 [1], and he\n>> couldn't get commit because there are some reasons such as the\n>> patch couldn't handle analyzing Foreign table. 
Therefore, I wonder\n>> whether your patch is able to do that or not.\n> \n>> [1] ANALYZE command progress checker\n>> https://www.postgresql.org/message-id/flat/968b4eda-2417-8b7b-d468-71643cf088b6%40openscg.com#574488592fcc9708c38fa44b0dae9006\n> \n> So just now I went to check the old thread (which I should have\n> searched for before writing my own implementation!). It seems clear\n> that many things are pretty similar in both patches, but I think for the\n> most part they are similar just because the underlying infrastructure\n> imposes a certain design already, rather than there being any actual\n> copying. (To be perfectly clear: I didn't even know that this patch\n> existed and I didn't grab any code from there to produce my v1.)\n\n\nI know your patch was not based on Vinayak's old patch because the\ncoding style is different between him and you.\n\n \n> However, you've now modified the patch from what I submitted and I'm\n> wondering if you've taken any inspiration from Vinayak's old patch. If\n> so, it seems fair to credit him as co-author in the commit message. It\n> would be good to get his input on the current patch, though.\n\n\nYeah, I'm happy if you added his name as a co-author because I checked\nthe document including his old patch as a reference. :)\n\n \n> I have not looked at the current version of the patch yet, but I intend\n> to do so during the upcoming commitfest.\n> \n> Thanks for moving this forward!\n\n\nThanks!\nCommitting the patch on PG13 makes me happy because Progress reporting\nfeatures are important for DBAs. :)\n\n\n> On the subject of FDW support: I did look into supporting that before\n> submitting this. I think it's not academically difficult: just have the\n> FDW's acquire_sample_rows callback invoke the update_param functions\n> once in a while. Sadly, in practical terms it looks like postgres_fdw\n\n\nActually, I've changed my mind.\nEven if there is no FDW support, Progress report for ANALYZE is still useful. 
Therefore, FDW support would be preferable but not required for committing\nthe patch, I believe. :)\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n", "msg_date": "Thu, 15 Aug 2019 10:45:15 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "There were some minor problems in v5 -- bogus Docbook as well as\noutdated rules.out, small \"git diff --check\" complaint about whitespace.\nThis v6 (on today's master) fixes those, no other changes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 4 Sep 2019 17:01:03 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro,\n\n> There were some minor problems in v5 -- bogus Docbook as well as\n> outdated rules.out, small \"git diff --check\" complaint about whitespace.\n> This v6 (on today's master) fixes those, no other changes.\n\n \nThanks for fixing that. :)\nI'll test it later.\n\n\nI think we have to address the following comment from Robert. Right?\nDo you have any ideas?\n\n\n>> I'm not a fan of the way you set the scan-table phase and inh flag in\n>> one place, and then slightly later set the relation OID and block\n>> count. That creates a race during which users could see the first bit\n>> of data set and the second not set. I don't see any reason not to set\n>> all four fields together.\n> \n> \n> Hmm... I understand but it's a little difficult because if there are\n> child rels, acquire_inherited_sample_rows() calls acquire_sample_rows()\n> (See below). So, it would be possible to set all four fields together if the inh flag\n> is given as a parameter of those functions, I suppose. 
But I'm not sure\n> whether it's okay to add the parameter to both functions or not.\n> Do you have any ideas?\n> \n> \n> # do_analyze_rel()\n> ...\n> if (inh)\n> numrows = acquire_inherited_sample_rows(onerel, elevel,\n> rows, targrows,\n> &totalrows, &totaldeadrows);\n> else\n> numrows = (*acquirefunc) (onerel, elevel,\n> rows, targrows,\n> &totalrows, &totaldeadrows);\n> \n> \n> # acquire_inherited_sample_rows()\n> ...\n> foreach(lc, tableOIDs)\n> {\n> ...\n> /* Check table type (MATVIEW can't happen, but might as well allow) */\n> if (childrel->rd_rel->relkind == RELKIND_RELATION ||\n> childrel->rd_rel->relkind == RELKIND_MATVIEW)\n> {\n> /* Regular table, so use the regular row acquisition function */\n> acquirefunc = acquire_sample_rows;\n> ...\n> /* OK, we'll process this child */\n> has_child = true;\n> rels[nrels] = childrel;\n> acquirefuncs[nrels] = acquirefunc;\n> ...\n> }\n> ...\n> for (i = 0; i < nrels; i++)\n> {\n> ...\n> AcquireSampleRowsFunc acquirefunc = acquirefuncs[i];\n> ...\n> if (childtargrows > 0)\n> {\n> ...\n> /* Fetch a random sample of the child's rows */\n> childrows = (*acquirefunc) (childrel, elevel,\n> rows + numrows, childtargrows,\n> &trows, &tdrows)\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 06 Sep 2019 11:21:33 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Thu, Sep 5, 2019 at 2:31 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> There were some minor problems in v5 -- bogus Docbook as well as\n> outdated rules.out, small \"git diff --check\" complaint about whitespace.\n> This v6 (on today's master) fixes those, no other changes.\n>\n+ <entry>\n+ The command is preparing to begin scanning the heap. This phase is\n+ expected to be very brief.\n+ </entry>\nIn the above after \".\" there is an extra space, is this intentional. 
I\nhad noticed that in a lot of places there are a couple of spaces and\nsometimes a single space across this file.\n\nLike in the example below, there is a single space after \".\":\n <entry>Time when this process' current transaction was started, or null\n if no transaction is active. If the current\n query is the first of its transaction, this column is equal to the\n <structfield>query_start</structfield> column.\n </entry>\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Sep 2019 17:21:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi vignesh!\n\n\nOn 2019/09/17 20:51, vignesh C wrote:\n> On Thu, Sep 5, 2019 at 2:31 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>\n>> There were some minor problems in v5 -- bogus Docbook as well as\n>> outdated rules.out, small \"git diff --check\" complaint about whitespace.\n>> This v6 (on today's master) fixes those, no other changes.\n>>\n> + <entry>\n> + The command is preparing to begin scanning the heap. This phase is\n> + expected to be very brief.\n> + </entry>\n> In the above after \".\" there is an extra space, is this intentional. I\n> had noticed that in a lot of places there are a couple of spaces and\n> sometimes a single space across this file.\n> \n> Like in the example below, there is a single space after \".\":\n> <entry>Time when this process' current transaction was started, or null\n> if no transaction is active. If the current\n> query is the first of its transaction, this column is equal to the\n> <structfield>query_start</structfield> column.\n> </entry>\n\n\nSorry for the late reply.\n\nProbably, it is intentional because there are many extra spaces\nin other documents. See below:\n\n# Results of grep\n=============\n$ grep '\\. ' doc/src/sgml/monitoring.sgml | wc -l\n114\n$ grep '\\. ' doc/src/sgml/information_schema.sgml | wc -l\n184\n$ grep '\\. 
' doc/src/sgml/func.sgml | wc -l\n577\n=============\n\nTherefore, I'm going to leave it as is. :)\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Fri, 01 Nov 2019 15:22:17 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro, vignesh,\n\nI rebased the patch on 2a4d96eb, and added a new column,\n\"ext_compute_count\", in the pg_stat_progress_analyze view to\nreport the number of computed extended stats.\nIt is like \"index_vacuum_count\" in the vacuum progress reporter or\n\"index_rebuild_count\" in the cluster progress reporter. :)\n\nPlease find attached file: v7.\n\nAnd the following is a test result:\n==============\n[Session1]\n\! pgbench -i\ncreate statistics pg_ext1 (dependencies) ON aid, bid from pgbench_accounts;\ncreate statistics pg_ext2 (mcv) ON aid, bid from pgbench_accounts;\ncreate statistics pg_ext3 (ndistinct) ON aid, bid from pgbench_accounts;\n\n[Session2]\n# \a \t\n# select * from pg_stat_progress_analyze ; \watch 0.0001\n\n27064|13583|postgres|16405|initializing|f|0|0|0|0\n27064|13583|postgres|16405|acquiring sample rows|f|16405|1640|0|0\n27064|13583|postgres|16405|acquiring sample rows|f|16405|1640|23|0\n27064|13583|postgres|16405|acquiring sample rows|f|16405|1640|64|0\n27064|13583|postgres|16405|acquiring sample rows|f|16405|1640|1640|0\n27064|13583|postgres|16405|computing stats|f|16405|1640|1640|0\n27064|13583|postgres|16405|computing stats|f|16405|1640|1640|0\n27064|13583|postgres|16405|computing extended stats|f|16405|1640|1640|0\n27064|13583|postgres|16405|computing extended stats|f|16405|1640|1640|1\n27064|13583|postgres|16405|computing extended stats|f|16405|1640|1640|2\n27064|13583|postgres|16405|computing extended stats|f|16405|1640|1640|3\n27064|13583|postgres|16405|finalizing analyze|f|16405|1640|1640|3\n\nNote:\n The result on Session2 was shortened for readability.\n If you'd like to check the whole result, you can 
see attached file: \"hoge.txt\".\n==============\n\nThanks,\nTatsuro Yamada", "msg_date": "Tue, 05 Nov 2019 21:07:07 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Nov-05, Tatsuro Yamada wrote:\n\n> ==============\n> [Session1]\n> \! pgbench -i\n> create statistics pg_ext1 (dependencies) ON aid, bid from pgbench_accounts;\n> create statistics pg_ext2 (mcv) ON aid, bid from pgbench_accounts;\n> create statistics pg_ext3 (ndistinct) ON aid, bid from pgbench_accounts;\n\nWow, it takes a long time to compute these ...\n\nHmm, you normally wouldn't define stats that way; you'd do this instead:\n\ncreate statistics pg_ext1 (dependencies, mcv,ndistinct) ON aid, bid from pgbench_accounts;\n\nI'm not sure if this has an important impact in practice. What I'm\nsaying is that I'm not sure that \"number of ext stats\" is necessarily a\nuseful number as shown. I wonder if it's possible to count the number\nof items that have been computed for each stats object. So if you do\nthis\n\ncreate statistics pg_ext1 (dependencies, mcv) ON aid, bid from pgbench_accounts;\ncreate statistics pg_ext2 (ndistinct,histogram) ON aid, bid from pgbench_accounts;\n\nthen the counter goes to 4. But I also wonder if we need to publish\n_which_ type of ext stats is currently being built, in a separate\ncolumn.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 5 Nov 2019 10:38:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro!\n\nOn 2019/11/05 22:38, Alvaro Herrera wrote:\n> On 2019-Nov-05, Tatsuro Yamada wrote:\n> \n>> ==============\n>> [Session1]\n>> \! 
pgbench -i\n>> create statistics pg_ext1 (dependencies) ON aid, bid from pgbench_accounts;\n>> create statistics pg_ext2 (mcv) ON aid, bid from pgbench_accounts;\n>> create statistics pg_ext3 (ndistinct) ON aid, bid from pgbench_accounts;\n> \n> Wow, it takes a long time to compute these ...\n> \n> Hmm, you normally wouldn't define stats that way; you'd do this instead:\n> \n> create statistics pg_ext1 (dependencies, mcv,ndistinct) ON aid, bid from pgbench_accounts;\n\nI'd like to say it's just an example test case. But I understand\nyour advice. Thanks! :)\n\n \n> I'm not sure if this has an important impact in practice. What I'm\n> saying is that I'm not sure that \"number of ext stats\" is necessarily a\n> useful number as shown. I wonder if it's possible to count the number\n> of items that have been computed for each stats object. So if you do\n> this\n> \n> create statistics pg_ext1 (dependencies, mcv) ON aid, bid from pgbench_accounts;\n> create statistics pg_ext2 (ndistinct,histogram) ON aid, bid from pgbench_accounts;\n> \n> then the counter goes to 4. But I also wonder if we need to publish\n> _which_ type of ext stats is currently being built, in a separate\n> column.\n\n\nHmm... I have never seen a lot of extended stats on a table (with many columns),\nbut I suppose that will exist in the near future because extended stats are the only\nsolution for correcting row estimation errors in vanilla PostgreSQL. Therefore, it\nwould be better to add the counter to the view, I think.\n\nI revised the patch as follows because I realized counting the types of ext\nstats is not useful for users.\n\n - The attached new patch counts the number of ext stats instead of the types of ext stats.\n\nSo we can see the counter go to \"2\" if we create the above ext stats (pg_ext1 and\npg_ext2) and analyze as you wrote. 
:)\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Wed, 06 Nov 2019 14:49:49 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Yamada-san,\n\nThanks for working on this.\n\nOn Wed, Nov 6, 2019 at 2:50 PM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> I revised the patch as following because I realized counting the types of ext\n> stats is not useful for users.\n>\n> - Attached new patch counts a number of ext stats instead the types of ext stats.\n>\n> So we can see the counter goes to \"2\", if we created above ext stats (pg_ext1 and\n> pg_ext2) and analyzed as you wrote. :)\n\nI have looked at the patch and here are some comments.\n\nI think include_children and current_relid are not enough to\nunderstand the progress of analyzing inheritance trees, because even\nwith current_relid being updated, I can't tell how many more there\nwill be. I think it'd be better to show the total number of children\nand the number of children processed, just like\npg_stat_progress_create_index does for partitions. So, instead of\ninclude_children and current_relid, I think it's better to have\nchild_tables_total, child_tables_done, and current_child_relid, placed\nlast in the set of columns.\n\nAlso, inheritance tree stats are created *after* creating single table\nstats, so I think that it would be better to have a distinct phase\nname for that, say \"acquiring inherited sample rows\". In\ndo_analyze_rel(), you can select which of two phases to set based on\nwhether inh is true or not. 
For partitioned tables, the progress\noutput will immediately switch to this phase, because the partitioned\ntable itself is empty so there's nothing to do in the \"acquiring\nsample rows\" phase.\n\nThat's all for now.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 19 Nov 2019 10:57:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san!\n\nThanks for your comments!\n\n\n> I have looked at the patch and here are some comments.\n> \n> I think include_children and current_relid are not enough to\n> understand the progress of analyzing inheritance trees, because even\n> with current_relid being updated, I can't tell how many more there\n> will be. I think it'd be better to show the total number of children\n> and the number of children processed, just like\n> pg_stat_progress_create_index does for partitions. So, instead of\n> include_children and current_relid, I think it's better to have\n> child_tables_total, child_tables_done, and current_child_relid, placed\n> last in the set of columns.\n\nAh, I understood.\nI'll check what pg_stat_progress_create_index does for partitions,\nand will create a new patch. :)\n\nRelated to the above,\nI wonder whether we need the total number of ext stats on\npg_stat_progress_analyze or not. As you might know, there is the same\ncounter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\nFor example, index_vacuum_count and index_rebuild_count.\nWould it be added to the total number column to these views? :)\n\n\n> Also, inheritance tree stats are created *after* creating single table\n> stats, so I think that it would be better to have a distinct phase\n> name for that, say \"acquiring inherited sample rows\". In\n> do_analyze_rel(), you can select which of two phases to set based on\n> whether inh is true or not.
For partitioned tables, the progress\n> output will immediately switch to this phase, because partitioned\n> table itself is empty so there's nothing to do in the \"acquiring\n> sample rows\" phase.\n> \n> That's all for now.\n\n\nOkay! I'll also add the new phase \"acquiring inherited sample rows\" on\nthe next patch. :)\n\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Tue, 26 Nov 2019 11:32:01 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\n\n> Related to the above,\n> I wonder whether we need the total number of ext stats on\n> pg_stat_progress_analyze or not. As you might know, there is the same\n> counter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\n> For example, index_vacuum_count and index_rebuild_count.\n> Would it be added to the total number column to these views? :)\n\n\nOops, I made a mistake. :(\n\nWhat I'd like to say was:\nWould it be better to add the total number column to these views? :)\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Tue, 26 Nov 2019 12:56:48 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Nov-26, Tatsuro Yamada wrote:\n\n> > I wonder whether we need the total number of ext stats on\n> > pg_stat_progress_analyze or not. As you might know, there is the same\n> > counter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\n> > For example, index_vacuum_count and index_rebuild_count.\n> \n> Would it be better to add the total number column to these views? 
:)\n\nYeah, I think it would be good to add that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 Nov 2019 09:22:51 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro!\n\nOn 2019/11/26 21:22, Alvaro Herrera wrote:\n> On 2019-Nov-26, Tatsuro Yamada wrote:\n> \n>>> I wonder whether we need the total number of ext stats on\n>>> pg_stat_progress_analyze or not. As you might know, there is the same\n>>> counter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\n>>> For example, index_vacuum_count and index_rebuild_count.\n>>\n>> Would it be better to add the total number column to these views? :)\n> \n> Yeah, I think it would be good to add that.\n\n\nThanks for your comment!\nOkay, I'll add the column \"ext_stats_total\" to\npg_stat_progress_analyze view on the next patch. :)\n\nRegarding the other total number columns,\nI'll create another patch to add these columns \"index_vacuum_total\" and\n\"index_rebuild_count\" on the other views. :)\n\nThanks,\nTatsuro Yamada\n\n\n\n \n \n\n\n\n", "msg_date": "Wed, 27 Nov 2019 11:01:37 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Tue, Nov 26, 2019 at 9:22 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Nov-26, Tatsuro Yamada wrote:\n>\n> > > I wonder whether we need the total number of ext stats on\n> > > pg_stat_progress_analyze or not. As you might know, there is the same\n> > > counter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\n> > > For example, index_vacuum_count and index_rebuild_count.\n> >\n> > Would it be better to add the total number column to these views?
:)\n>\n> Yeah, I think it would be good to add that.\n\nHmm, does it take that long to calculate ext stats on one column? The\nreason it's worthwhile to have index_vacuum_count,\nindex_rebuild_count, etc. is because it can take very long for one\nindex to get vacuumed/rebuilt.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 27 Nov 2019 11:34:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Yamada-san,\n\nOn Wed, Nov 27, 2019 at 11:04 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> Regarding to other total number columns,\n> I'll create another patch to add these columns \"index_vacuum_total\" and\n> \"index_rebuild_count\" on the other views. :)\n\nMaybe you meant \"index_rebuild_total\"?\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 27 Nov 2019 11:38:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Nov-27, Amit Langote wrote:\n\n> On Tue, Nov 26, 2019 at 9:22 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Nov-26, Tatsuro Yamada wrote:\n> >\n> > > > I wonder whether we need the total number of ext stats on\n> > > > pg_stat_progress_analyze or not. As you might know, there is the same\n> > > > counter on pg_stat_progress_vacuum and pg_stat_progress_cluster.\n> > > > For example, index_vacuum_count and index_rebuild_count.\n> > >\n> > > Would it be better to add the total number column to these views? :)\n> >\n> > Yeah, I think it would be good to add that.\n> \n> Hmm, does it take that long to calculate ext stats on one column? The\n> reason it's worthwhile to have index_vacuum_count,\n> index_rebuild_count, etc. is because it can take very long for one\n> index to get vacuumed/rebuilt.\n\nYes, it's noticeable. 
It's not as long as building an index, of course,\nbut it's a long enough fraction of the total analyze time that it should\nbe reported.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 Nov 2019 00:00:14 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\n> On Wed, Nov 27, 2019 at 11:04 AM Tatsuro Yamada\n> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>> Regarding to other total number columns,\n>> I'll create another patch to add these columns \"index_vacuum_total\" and\n>> \"index_rebuild_count\" on the other views. :)\n> \n> Maybe you meant \"index_rebuild_total\"?\n\nYeah, you are right! :)\n\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 27 Nov 2019 12:14:47 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san!\n\n>> I think include_children and current_relid are not enough to\n>> understand the progress of analyzing inheritance trees, because even\n>> with current_relid being updated, I can't tell how many more there\n>> will be. I think it'd be better to show the total number of children\n>> and the number of children processed, just like\n>> pg_stat_progress_create_index does for partitions. So, instead of\n>> include_children and current_relid, I think it's better to have\n>> child_tables_total, child_tables_done, and current_child_relid, placed\n>> last in the set of columns.\n> \n> Ah, I understood.\n> I'll check pg_stat_progress_create_index does for partitions,\n> and will create a new patch.
\n\nFixed.\n\nBut I just remembered I replaced column name \"*_table\" with \"*_relid\"\nbased on Robert's comment three months ago, see below:\n\n> /me reviews.\n> \n> + <entry><structfield>scanning_table</structfield></entry>\n> \n> I think this should be retitled to something that ends in 'relid',\n> like all of the corresponding cases in existing progress views.\n> Perhaps 'active_relid' or 'current_relid'.\n\nSo, it would be better to use the same rule for child_tables_total and\nchild_tables_done. Thus I changed these column names and added them\nto the view. I also removed include_children and current_relid.\nThe following columns are the new version.\n\n<New columns of the view>\n pid\n datid\n datname\n relid\n phase\n sample_blks_total\n heap_blks_scanned\n ext_stats_total <= Added (based on Alvaro's comment)\n ext_stats_computed <= Renamed\n child_relids_total <= Added\n child_relids_done <= Added\n current_child_relid <= Added\n\n\n>> Also, inheritance tree stats are created *after* creating single table\n>> stats, so I think that it would be better to have a distinct phase\n>> name for that, say \"acquiring inherited sample rows\". In\n>> do_analyze_rel(), you can select which of two phases to set based on\n>> whether inh is true or not. For partitioned tables, the progress\n>> output will immediately switch to this phase, because partitioned\n>> table itself is empty so there's nothing to do in the \"acquiring\n>> sample rows\" phase.\n>>\n>> That's all for now.\n> \n> \n> Okay! I'll also add the new phase \"acquiring inherited sample rows\" on\n> the next patch.
\n\n\nFixed.\n\nI tried to abbreviate it to \"acquiring inh sample rows\" because I thought\n\"acquiring inherited sample rows\" is a little long for the phase name.\n\nThe attached WIP patch includes these fixes:\n - Remove columns: include_children and current_relid\n - Add new columns: child_relids_total, child_relids_done and current_child_relid\n - Add new phase \"acquiring inh sample rows\"\n\n Note: the document is not updated, I'll fix it later. :)\n\nThe attached testcase.sql creates a base table and a partitioned table.\nYou can easily check the patch using the following procedure.\n\nTerminal #1\n--------------\n\\a \\t\nselect * from pg_stat_progress_analyze; \\watch 0.0001\n--------------\n\nTerminal #2\n--------------\n\\i testcase.sql\nanalyze hoge;\nanalyze hoge2;\n--------------\n\nThanks,\nTatsuro Yamada", "msg_date": "Wed, 27 Nov 2019 12:45:41 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Wed, Nov 27, 2019 at 12:45:41PM +0900, Tatsuro Yamada wrote:\n> Fixed.\n\nPatch was waiting on input from author, so I have switched it back to\n\"Needs review\", and moved it to next CF while on it as you are working\non it.\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 13:25:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Yamada-san,\n\nThank you for updating the patch.\n\nOn Wed, Nov 27, 2019 at 12:46 PM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> But I just remembered I replaced column name \"*_table\" with \"*_relid\"\n> based on Robert's comment three months ago, see below:\n>\n> > /me reviews.\n> >\n> > + <entry><structfield>scanning_table</structfield></entry>\n> >\n> > I think this should be retitled to something that ends in 'relid',\n> > like all of the corresponding cases in existing progress views.\n> >
Perhaps 'active_relid' or 'current_relid'.\n>\n> So, it would be better to use same rule against child_tables_total and\n> child_tables_done. Thus I changed these column names to others and added\n> to the view.\n\nRobert's comment seems OK for a column that actually reports an OID,\nbut for columns that report counts, names containing \"relids\" sound a\nbit strange to me. So, I prefer child_tables_total /\nchild_tables_done over child_relids_total / child_relids_done. Would\nlike to hear more opinions.\n\n> >> Also, inheritance tree stats are created *after* creating single table\n> >> stats, so I think that it would be better to have a distinct phase\n> >> name for that, say \"acquiring inherited sample rows\". In\n> >> do_analyze_rel(), you can select which of two phases to set based on\n> >> whether inh is true or not.\n> >\n> > Okay! I'll also add the new phase \"acquiring inherited sample rows\" on\n> > the next patch.\n>\n>\n> Fixed.\n>\n> I tried to abbreviate it to \"acquiring inh sample rows\" because I thought\n> \"acquiring inherited sample rows\" is a little long for the phase name.\n\nI think phase names should contain full English words, because they\nare supposed to be descriptive. 
Users are very likely to not\nunderstand what \"inh\" means without looking up the docs.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 28 Nov 2019 10:59:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\nOn 2019/11/28 10:59, Amit Langote wrote:\n> Yamada-san,\n> \n> Thank you for updating the patch.\n> \n> On Wed, Nov 27, 2019 at 12:46 PM Tatsuro Yamada\n> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>> But I just remembered I replaced column name \"*_table\" with \"*_relid\"\n>> based on Robert's comment three months ago, see below:\n>>\n>>> /me reviews.\n>>>\n>>> + <entry><structfield>scanning_table</structfield></entry>\n>>>\n>>> I think this should be retitled to something that ends in 'relid',\n>>> like all of the corresponding cases in existing progress views.\n>>> Perhaps 'active_relid' or 'current_relid'.\n>>\n>> So, it would be better to use same rule against child_tables_total and\n>> child_tables_done. Thus I changed these column names to others and added\n>> to the view.\n> \n> Robert's comment seems OK for a column that actually reports an OID,\n> but for columns that report counts, names containing \"relids\" sound a\n> bit strange to me. So, I prefer child_tables_total /\n> child_tables_done over child_relids_total / child_relids_done. Would\n> like to hear more opinions.\n\nHmmm... I understand your opinion but I'd like to get more opinions too. :)\nDo you prefer these column names? 
See below:\n\n<Columns of the view>\n pid\n datid\n datname\n relid\n phase\n sample_blks_total\n heap_blks_scanned\n ext_stats_total\n ext_stats_computed\n child_tables_total <= Renamed: s/relid/table/\n child_tables_done <= Renamed: s/relid/table/\n current_child_table <= Renamed: s/relid/table/\n\n\n\n>>>> Also, inheritance tree stats are created *after* creating single table\n>>>> stats, so I think that it would be better to have a distinct phase\n>>>> name for that, say \"acquiring inherited sample rows\". In\n>>>> do_analyze_rel(), you can select which of two phases to set based on\n>>>> whether inh is true or not.\n>>>\n>>> Okay! I'll also add the new phase \"acquiring inherited sample rows\" on\n>>> the next patch.\n>>\n>>\n>> Fixed.\n>>\n>> I tried to abbreviate it to \"acquiring inh sample rows\" because I thought\n>> \"acquiring inherited sample rows\" is a little long for the phase name.\n> \n> I think phase names should contain full English words, because they\n> are supposed to be descriptive. Users are very likely to not\n> understand what \"inh\" means without looking up the docs.\n\n\nOkay, I will fix it.\n s/acquiring inh sample rows/acquiring inherited sample rows/\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Thu, 28 Nov 2019 19:55:22 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2019-Nov-28, Tatsuro Yamada wrote:\n\n> Hmmm... I understand your opinion but I'd like to get more opinions too. :)\n> Do you prefer these column names? 
See below:\n\nHere's my take on it:\n\n <Columns of the view>\n pid\n datid\n datname\n relid\n phase\n sample_blks_total\n sample_blks_scanned\n ext_stats_total\n ext_stats_computed\n child_tables_total\n child_tables_done\n current_child_table_relid\n\nIt seems to make sense to keep using the \"child table\" terminology in\nthat last column; but since the column carries an OID then as Robert\nsaid it should have \"relid\" in the name. For the other two \"child\ntables\" columns, not using \"relid\" is appropriate because what they have\nis not relids.\n\n\nI think there should be an obvious correspondence in columns that are\nclosely related, which there isn't if you use \"sample\" in one and \"heap\"\nin the other, so I'd go for \"sample\" in both.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 Nov 2019 10:37:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro!\n\n>> Hmmm... I understand your opinion but I'd like to get more opinions too. :)\n>> Do you prefer these column names? See below:\n> \n> Here's my take on it:\n> \n> <Columns of the view>\n> pid\n> datid\n> datname\n> relid\n> phase\n> sample_blks_total\n> sample_blks_scanned\n> ext_stats_total\n> ext_stats_computed\n> child_tables_total\n> child_tables_done\n> current_child_table_relid\n> \n> It seems to make sense to keep using the \"child table\" terminology in\n> that last column; but since the column carries an OID then as Robert\n> said it should have \"relid\" in the name. 
For the other two \"child\n> tables\" columns, not using \"relid\" is appropriate because what they have\n> is not relids.\n>\n> I think there should be an obvious correspondence in columns that are\n> closely related, which there isn't if you use \"sample\" in one and \"heap\"\n> in the other, so I'd go for \"sample\" in both.\n\n\nThanks for the comment.\nOops, you are right, I overlooked that they are not relids...\nI agree with your and Amit's opinions, so I'll send a revised patch in the next mail. :-)\n\nThe next patch will include:\n - New column definitions of the view (as above)\n - Renamed phase name: s/acquiring inh sample rows/acquiring inherited sample rows/\n - Updated document\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Fri, 29 Nov 2019 09:54:48 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Michael,\n\nOn 2019/11/27 13:25, Michael Paquier wrote:\n> On Wed, Nov 27, 2019 at 12:45:41PM +0900, Tatsuro Yamada wrote:\n>> Fixed.\n> \n> Patch was waiting on input from author, so I have switched it back to\n> \"Needs review\", and moved it to next CF while on it as you are working\n> on it.\n\nThanks for your CF manager work.\nI will do my best to get it committed in the next CF because\nthe progress reporting feature is useful for DBAs, as you know. :)\n\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Fri, 29 Nov 2019 10:27:20 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro and Amit!\n\nOn 2019/11/29 9:54, Tatsuro Yamada wrote:\n> Hi Alvaro!\n> \n>>> Hmmm... I understand your opinion but I'd like to get more opinions too. :)\n>>> Do you prefer these column names?
See below:\n>>\n>> Here's my take on it:\n>>\n>>   <Columns of the view>\n>>    pid\n>>    datid\n>>    datname\n>>    relid\n>>    phase\n>>    sample_blks_total\n>>    sample_blks_scanned\n>>    ext_stats_total\n>>    ext_stats_computed\n>>    child_tables_total\n>>    child_tables_done\n>>    current_child_table_relid\n>>\n>> It seems to make sense to keep using the \"child table\" terminology in\n>> that last column; but since the column carries an OID then as Robert\n>> said it should have \"relid\" in the name.  For the other two \"child\n>> tables\" columns, not using \"relid\" is appropriate because what they have\n>> is not relids.\n>>\n>> I think there should be an obvious correspondence in columns that are\n>> closely related, which there isn't if you use \"sample\" in one and \"heap\"\n>> in the other, so I'd go for \"sample\" in both.\n> \n> \n> Thanks for the comment.\n> Oops, You are right, I overlooked they are not relids..\n> I agreed with you and Amit's opinion so I'll send a revised patch on the next mail. :-)\n> \n> Next patch will be included:\n>  - New columns definition of the view (as above)\n>  - Renamed the phase name: s/acquiring inh sample rows/acquiring inherited sample rows/\n>  - Update document\n\nAttached patch is the revised patch. :)\n\nI wonder two things below. What do you think?\n\n1)\nFor now, I'm not sure it should be set current_child_table_relid to zero\nwhen the current phase is changed from \"acquiring inherited sample rows\" to\n\"computing stats\". See <Test result> bellow.\n\n2)\nThere are many \"finalizing analyze\" phases based on relids in the case\nof partitioning tables. Would it better to fix the document? or it\nwould be better to reduce it to one?\n\n<Document>\n---------------------------------------------------------\n <entry><literal>finalizing analyze</literal></entry>\n <entry>\n The command is updating pg_class. 
When this phase is completed,\n <command>ANALYZE</command> will end.\n---------------------------------------------------------\n\n\n<New columns of the view>\n---------------------------------------------------------\n# \\d pg_stat_progress_analyze\n View \"pg_catalog.pg_stat_progress_analyze\"\n Column | Type | Collation | Nullable | Default\n---------------------------+---------+-----------+----------+---------\n pid | integer | | |\n datid | oid | | |\n datname | name | | |\n relid | oid | | |\n phase | text | | |\n sample_blks_total | bigint | | |\n sample_blks_scanned | bigint | | |\n ext_stats_total | bigint | | |\n ext_stats_computed | bigint | | |\n child_tables_total | bigint | | |\n child_tables_done | bigint | | |\n current_child_table_relid | oid | | |\n---------------------------------------------------------\n\n\n\n<Test result using partitioning tables>\n---------------------------------------------------------\n# select * from pg_stat_progress_analyze ; \\watch 0.0001\n\n19309|13583|postgres|36081|acquiring inherited sample rows|0|0|0|0|0|0|0\n19309|13583|postgres|36081|acquiring inherited sample rows|45|17|0|0|4|0|36084\n19309|13583|postgres|36081|acquiring inherited sample rows|45|35|0|0|4|0|36084\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|0|36084\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|0|36084\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|0|36084\n19309|13583|postgres|36081|acquiring inherited sample rows|45|3|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring inherited sample rows|45|22|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring inherited sample rows|45|38|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|1|36087\n19309|13583|postgres|36081|acquiring 
inherited sample rows|45|16|0|0|4|2|36090\n19309|13583|postgres|36081|acquiring inherited sample rows|45|34|0|0|4|2|36090\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|2|36090\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|2|36090\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|2|36090\n19309|13583|postgres|36081|acquiring inherited sample rows|45|10|0|0|4|3|36093\n19309|13583|postgres|36081|acquiring inherited sample rows|45|29|0|0|4|3|36093\n19309|13583|postgres|36081|acquiring inherited sample rows|45|43|0|0|4|3|36093\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|3|36093\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|3|36093\n19309|13583|postgres|36081|acquiring inherited sample rows|45|45|0|0|4|3|36093\n19309|13583|postgres|36081|computing stats|45|45|0|0|4|4|36093 <== current_*_reid should be zero?\n19309|13583|postgres|36081|computing stats|45|45|0|0|4|4|36093\n19309|13583|postgres|36081|finalizing analyze|45|45|0|0|4|4|36093 <== there are many finalizing phases\n19309|13583|postgres|36081|finalizing analyze|45|45|0|0|4|4|36093\n19309|13583|postgres|36084|acquiring sample rows|45|3|0|0|0|0|0\n19309|13583|postgres|36084|acquiring sample rows|45|33|0|0|0|0|0\n19309|13583|postgres|36084|computing stats|45|45|0|0|0|0|0\n19309|13583|postgres|36087|acquiring sample rows|45|15|0|0|0|0|0\n19309|13583|postgres|36087|computing stats|45|45|0|0|0|0|0\n19309|13583|postgres|36087|finalizing analyze|45|45|0|0|0|0|0 <== same as above\n19309|13583|postgres|36090|acquiring sample rows|45|11|0|0|0|0|0\n19309|13583|postgres|36090|acquiring sample rows|45|41|0|0|0|0|0\n19309|13583|postgres|36090|finalizing analyze|45|45|0|0|0|0|0 <== same as above\n19309|13583|postgres|36093|acquiring sample rows|45|7|0|0|0|0|0\n19309|13583|postgres|36093|acquiring sample rows|45|37|0|0|0|0|0\n19309|13583|postgres|36093|finalizing analyze|45|45|0|0|0|0|0 <== same as 
above\n---------------------------------------------------------\n\nThanks,\nTatsuro Yamada", "msg_date": "Fri, 29 Nov 2019 17:45:14 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Yamada-san,\n\nOn Fri, Nov 29, 2019 at 5:45 PM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> Attached patch is the revised patch. :)\n>\n> I wonder two things below. What do you think?\n>\n> 1)\n> For now, I'm not sure it should be set current_child_table_relid to zero\n> when the current phase is changed from \"acquiring inherited sample rows\" to\n> \"computing stats\". See <Test result> bellow.\n\nIn the upthread discussion [1], Robert asked to *not* do such things,\nthat is, resetting some values due to phase change. I'm not sure his\npoint applies to this case too though.\n\n> 2)\n> There are many \"finalizing analyze\" phases based on relids in the case\n> of partitioning tables. Would it better to fix the document? or it\n> would be better to reduce it to one?\n>\n> <Document>\n> ---------------------------------------------------------\n> <entry><literal>finalizing analyze</literal></entry>\n> <entry>\n> The command is updating pg_class. When this phase is completed,\n> <command>ANALYZE</command> will end.\n> ---------------------------------------------------------\n\nWhen a partitioned table is analyzed, its partitions are analyzed too.\nSo, the ANALYZE command effectively runs N + 1 times if there are N\npartitions -- first analyze partitioned table to collect \"inherited\"\nstatistics by collecting row samples using\nacquire_inherited_sample_rows(), then each partition to collect its\nown statistics. 
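As a sketch, with a hypothetical two-partition table (table names are made up for illustration):

```sql
-- Hypothetical example: ANALYZE of a partitioned table with N = 2
-- partitions effectively runs N + 1 = 3 times.
CREATE TABLE p (i int) PARTITION BY RANGE (i);
CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (0) TO (100);
CREATE TABLE p2 PARTITION OF p FOR VALUES FROM (100) TO (200);

ANALYZE p;  -- inherited stats for p first (sampling rows from p1 and p2),
            -- then p1's own stats, then p2's own stats
```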
Note that this recursive application of ANALYZE to\npartitions (child tables) only occurs for partitioned tables, not for\nlegacy inheritance.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 29 Nov 2019 18:15:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\nThanks for your comments!\n\n>> Attached patch is the revised patch. :)\n>>\n>> I wonder two things below. What do you think?\n>>\n>> 1)\n>> For now, I'm not sure it should be set current_child_table_relid to zero\n>> when the current phase is changed from \"acquiring inherited sample rows\" to\n>> \"computing stats\". See <Test result> bellow.\n>\n> In the upthread discussion [1], Robert asked to *not* do such things,\n> that is, resetting some values due to phase change. I'm not sure his\n> point applies to this case too though.\n\nYeah, I understood.\nI'll re-read the code of the analyze command later to check the target relid of\n\"computing stats\". :)\n\n \n>> 2)\n>> There are many \"finalizing analyze\" phases based on relids in the case\n>> of partitioning tables. Would it better to fix the document? or it\n>> would be better to reduce it to one?\n>>\n>> <Document>\n>> ---------------------------------------------------------\n>> <entry><literal>finalizing analyze</literal></entry>\n>> <entry>\n>> The command is updating pg_class. When this phase is completed,\n>> <command>ANALYZE</command> will end.\n>> ---------------------------------------------------------\n>\n> When a partitioned table is analyzed, its partitions are analyzed too.\n> So, the ANALYZE command effectively runs N + 1 times if there are N\n> partitions -- first analyze partitioned table to collect \"inherited\"\n> statistics by collecting row samples using\n> acquire_inherited_sample_rows(), then each partition to collect its\n> own statistics.
Note that this recursive application to ANALYZE to\n> partitions (child tables) only occurs for partitioned tables, not for\n> legacy inheritance.\n\nThanks for your explanation.\nNow I understand analyzing partitioned tables a little better.\nBelow is my understanding. Is it right?\n\n==================================================\nIn the case of a partitioned table (N = 3)\n\n - Partitioned table name: p (relid is 100)\n - Partition table names: p1, p2, p3 (relids are 201, 202 and 203)\n\nFor now, we can get the following results by executing \"analyze p;\".\n\nNum  Phase                          relid  current_child_table_relid\n  1  acquire inherited sample rows   100    201\n  2  acquire inherited sample rows   100    202\n  3  acquire inherited sample rows   100    203\n  4  computing stats                 100      0\n  5  finalizing analyze              100      0\n\n  6  acquiring sample rows           201      0\n  7  computing stats                 201      0\n  8  finalizing analyze              201      0\n\n  9  acquiring sample rows           202      0\n 10  computing stats                 202      0\n 11  finalizing analyze              202      0\n\n 12  acquiring sample rows           203      0\n 13  computing stats                 203      0\n 14  finalizing analyze              203      0\n==================================================\n\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 04 Dec 2019 17:53:42 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\n\n>>> I wonder two things below. What do you think?\n>>>\n>>> 1)\n>>> For now, I'm not sure it should be set current_child_table_relid to zero\n>>> when the current phase is changed from \"acquiring inherited sample rows\" to\n>>> \"computing stats\". See <Test result> bellow.\n>>\n>> In the upthread discussion [1], Robert asked to *not* do such things,\n>> that is, resetting some values due to phase change. I'm not sure his\n>> point applies to this case too though.\n> \n> Yeah, I understood.\n> I'll check target relid of \"computing stats\" to re-read a code of\n> analyze command later.
:)\n\n\nFinally, I understood after investigation of the code. :)\nCall stack is the following, and analyze_rel() calls \"N + 1\" times\nfor partitioned table and each partitions.\n\nanalyze_rel start\n do_analyze_rel inh==true start\n onerel: hoge2\n acq_inh_sample_rows start\n childrel: hoge2_10000\n childrel: hoge2_20000\n childrel: hoge2_30000\n childrel: hoge2_default\n acq_inh_sample_rows end\n compute_stats start\n compute_stats end\n compute_index_stats start\n compute_index_stats end\n finalizing start\n finalizing end\n do_analyze_rel inh==true end\nanalyze_rel end\n...\n\n\nAlso, I checked my test result. (\"//\" is my comments)\n\n\n# select oid,relname,relkind from pg_class where relname like 'hoge2%';\n oid | relname | relkind\n-------+---------------+---------\n 36081 | hoge2 | p\n 36084 | hoge2_10000 | r\n 36087 | hoge2_20000 | r\n 36090 | hoge2_30000 | r\n 36093 | hoge2_default | r\n(6 rows)\n\n# select relid,\n current_child_table_relid,\n phase,\n sample_blks_total,\n sample_blks_scanned,\n ext_stats_total,\n ext_stats_computed,\n child_tables_total,\n child_tables_done\n from pg_stat_progress_analyze; \\watch 0.00001\n\n== for partitioned table hoge2 ==\n//hoge2_10000\n36081|36084|acquiring inherited sample rows|45|20|0|0|4|0\n36081|36084|acquiring inherited sample rows|45|42|0|0|4|0\n36081|36084|acquiring inherited sample rows|45|45|0|0|4|0\n36081|36084|acquiring inherited sample rows|45|45|0|0|4|0\n\n//hoge2_20000\n36081|36087|acquiring inherited sample rows|45|3|0|0|4|1\n36081|36087|acquiring inherited sample rows|45|31|0|0|4|1\n36081|36087|acquiring inherited sample rows|45|45|0|0|4|1\n36081|36087|acquiring inherited sample rows|45|45|0|0|4|1\n\n//hoge2_30000\n36081|36090|acquiring inherited sample rows|45|12|0|0|4|2\n36081|36090|acquiring inherited sample rows|45|35|0|0|4|2\n36081|36090|acquiring inherited sample rows|45|45|0|0|4|2\n36081|36090|acquiring inherited sample rows|45|45|0|0|4|2\n\n//hoge2_default\n36081|36093|acquiring inherited 
sample rows|45|18|0|0|4|3\n36081|36093|acquiring inherited sample rows|45|38|0|0|4|3\n36081|36093|acquiring inherited sample rows|45|45|0|0|4|3\n36081|36093|acquiring inherited sample rows|45|45|0|0|4|3\n\n//Below \"computing stats\" is for the partitioned table hoge,\n//therefore the second column from the left side would be\n//better to set Zero to easy to understand.\n//I guessd that user think which relid is the target of\n//\"computing stats\"?!\n//Of course, other option is to write it on document.\n\n36081|36093|computing stats |45|45|0|0|4|4\n36081|36093|computing stats |45|45|0|0|4|4\n36081|36093|computing stats |45|45|0|0|4|4\n36081|36093|computing stats |45|45|0|0|4|4\n36081|36093|finalizing analyze |45|45|0|0|4|4\n\n== for each partitions such as hoge2_10000 ... hoge2_default ==\n\n//hoge2_10000\n36084|0|acquiring sample rows |45|25|0|0|0|0\n36084|0|computing stats |45|45|0|0|0|0\n36084|0|computing extended stats|45|45|0|0|0|0\n36084|0|finalizing analyze |45|45|0|0|0|0\n\n//hoge2_20000\n36087|0|acquiring sample rows |45|14|0|0|0|0\n36087|0|computing stats |45|45|0|0|0|0\n36087|0|computing extended stats|45|45|0|0|0|0\n36087|0|finalizing analyze |45|45|0|0|0|0\n\n//hoge2_30000\n36090|0|acquiring sample rows |45|12|0|0|0|0\n36090|0|acquiring sample rows |45|44|0|0|0|0\n36090|0|computing extended stats|45|45|0|0|0|0\n36090|0|finalizing analyze |45|45|0|0|0|0\n\n//hoge2_default\n36093|0|acquiring sample rows |45|10|0|0|0|0\n36093|0|acquiring sample rows |45|43|0|0|0|0\n36093|0|computing extended stats|45|45|0|0|0|0\n36093|0|finalizing analyze |45|45|0|0|0|0\n\n\n\n>>> 2)\n>>> There are many \"finalizing analyze\" phases based on relids in the case\n>>> of partitioning tables. Would it better to fix the document? 
or it\n>>> would be better to reduce it to one?\n>>>\n>>> <Document>\n>>> ---------------------------------------------------------\n>>>        <entry><literal>finalizing analyze</literal></entry>\n>>>        <entry>\n>>>          The command is updating pg_class. When this phase is completed,\n>>>          <command>ANALYZE</command> will end.\n>>> ---------------------------------------------------------\n>>\n>> When a partitioned table is analyzed, its partitions are analyzed too.\n>> So, the ANALYZE command effectively runs N + 1 times if there are N\n>> partitions -- first analyze partitioned table to collect \"inherited\"\n>> statistics by collecting row samples using\n>> acquire_inherited_sample_rows(), then each partition to collect its\n>> own statistics.  Note that this recursive application to ANALYZE to\n>> partitions (child tables) only occurs for partitioned tables, not for\n>> legacy inheritance.\n> \n> Thanks for your explanation.\n> I understand Analyzing Partitioned table a little.\n\n\nIt would be better to modify the document of \"finalizing analyze\" phase.\n\n # Before modify\n The command is updating pg_class. When this phase is completed,\n <command>ANALYZE</command> will end.\n\n # Modified\n The command is updating pg_class. When this phase is completed,\n <command>ANALYZE</command> will end. In the case of partitioned table,\n it might be shown on each partitions.\n\nWhat do you think that? I'm going to fix it, if you agreed. 
:)\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n", "msg_date": "Fri, 06 Dec 2019 15:23:58 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Yamada-san,\n\nOn Fri, Dec 6, 2019 at 3:24 PM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n >>> 1)\n> >>> For now, I'm not sure it should be set current_child_table_relid to zero\n> >>> when the current phase is changed from \"acquiring inherited sample rows\" to\n> >>> \"computing stats\". See <Test result> bellow.\n> >>\n> >> In the upthread discussion [1], Robert asked to *not* do such things,\n> >> that is, resetting some values due to phase change. I'm not sure his\n> >> point applies to this case too though.\n>\n> //Below \"computing stats\" is for the partitioned table hoge,\n> //therefore the second column from the left side would be\n> //better to set Zero to easy to understand.\n\nOn second thought, maybe we should not reset, because that might be\nconsidered loss of information. To avoid confusion, we should simply\ndocument that current_child_table_relid is only valid during the\n\"acquiring inherited sample rows\" phase.\n\n> >>> 2)\n> >>> There are many \"finalizing analyze\" phases based on relids in the case\n> >>> of partitioning tables. Would it better to fix the document? or it\n> >>> would be better to reduce it to one?\n> >>>\n> It would be better to modify the document of \"finalizing analyze\" phase.\n>\n> # Before modify\n> The command is updating pg_class. When this phase is completed,\n> <command>ANALYZE</command> will end.\n>\n> # Modified\n> The command is updating pg_class. When this phase is completed,\n> <command>ANALYZE</command> will end. In the case of partitioned table,\n> it might be shown on each partitions.\n>\n> What do you think that? I'm going to fix it, if you agreed. 
:)\n\n*All* phases are repeated in this case, not just \"finalizing\nanalyze\", because ANALYZE repeatedly runs for each partition after the\nparent partitioned table's ANALYZE finishes. ANALYZE's documentation\nmentions that analyzing a partitioned table also analyzes all of its\npartitions, so users should expect to see the progress information for\neach partition. So, I don't think we need to clarify that if only in\none phase's description. Maybe we can add a note after the phase\ndescription table which mentions this implementation detail about\npartitioned tables. Like this:\n\n <note>\n <para>\n Note that when <command>ANALYZE</command> is run on a partitioned table,\n all of its partitions are also recursively analyzed as also mentioned on\n <xref linkend=\"sql-analyze\"/>. In that case, <command>ANALYZE</command>\n progress is reported first for the parent table, whereby its inheritance\n statistics are collected, followed by that for each partition.\n </para>\n </note>\n\nSome more comments on the documentation:\n\n+ Number of computed extended stats. This counter only advances\nwhen the phase\n+ is <literal>computing extended stats</literal>.\n\nNumber of computed extended stats -> Number of extended stats computed\n\n+ Number of analyzed child tables. This counter only advances\nwhen the phase\n+ is <literal>computing extended stats</literal>.\n\nRegarding \"Number of analyzed child tables\", note that we don't\n\"analyze\" child tables in this phase, only scan its blocks to collect\nsamples for parent's ANALYZE. Also, the 2nd sentence is wrong -- you\nmeant \"when the phase is <literal>acquiring inherited sample\nrows</literal>. I suggest to write this as follows:\n\nNumber of child tables scanned.
This counter only advances when the phase\nis <literal>acquiring inherited sample rows</literal>.\n\n+ <entry>OID of the child table currently being scanned.\n+ It might be different from relid when analyzing tables that\nhave child tables.\n\nI suggest:\n\nOID of the child table currently being scanned. This field is only valid when\nthe phase is <literal>computing extended stats</literal>.\n\n+ The command is currently scanning the\n<structfield>current_relid</structfield>\n+ to obtain samples.\n\nI suggest:\n\nThe command is currently scanning the table given by\n<structfield>current_relid</structfield> to obtain sample rows.\n\n+ The command is currently scanning the\n<structfield>current_child_table_relid</structfield>\n+ to obtain samples.\n\nI suggest (based on phase description pg_stat_progress_create_index\nphase descriptions):\n\nThe command is currently scanning child tables to obtain sample rows. Columns\n<structfield>child_tables_total</structfield>,\n<structfield>child_tables_done</structfield>, and\n<structfield>current_child_table_relid</structfield> contain the progress\ninformation for this phase.\n\n+ <row>\n+ <entry><literal>computing stats</literal></entry>\n\nI think the phase name should really be \"computing statistics\", that\nis, use the full word.\n\n+ <entry>\n+ The command is computing stats from the samples obtained\nduring the table scan.\n+ </entry>\n+ </row>\n\nSo I suggest:\n\nThe command is computing statistics from the sample rows obtained during\nthe table scan\n\n+ <entry><literal>computing extended stats</literal></entry>\n+ <entry>\n+ The command is computing extended stats from the samples\nobtained in the previous phase.\n+ </entry>\n\nI suggest:\n\nThe command is computing extended statistics from the sample rows obtained\nduring the table scan.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 9 Dec 2019 17:51:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress
report for ANALYZE" }, { "msg_contents": "Hi Amit-san,\n\n\n> >>> 1)\n>>>>> For now, I'm not sure it should be set current_child_table_relid to zero\n>>>>> when the current phase is changed from \"acquiring inherited sample rows\" to\n>>>>> \"computing stats\". See <Test result> bellow.\n>>>>\n>>>> In the upthread discussion [1], Robert asked to *not* do such things,\n>>>> that is, resetting some values due to phase change. I'm not sure his\n>>>> point applies to this case too though.\n>>\n>> //Below \"computing stats\" is for the partitioned table hoge,\n>> //therefore the second column from the left side would be\n>> //better to set Zero to easy to understand.\n> \n> On second thought, maybe we should not reset, because that might be\n> considered loss of information. To avoid confusion, we should simply\n> document that current_child_table_relid is only valid during the\n> \"acquiring inherited sample rows\" phase.\n\n\nOkay, agreed. :)\n\n \n>>>>> 2)\n>>>>> There are many \"finalizing analyze\" phases based on relids in the case\n>>>>> of partitioning tables. Would it better to fix the document? or it\n>>>>> would be better to reduce it to one?\n>>>>>\n>> It would be better to modify the document of \"finalizing analyze\" phase.\n>>\n>> # Before modify\n>> The command is updating pg_class. When this phase is completed,\n>> <command>ANALYZE</command> will end.\n>>\n>> # Modified\n>> The command is updating pg_class. When this phase is completed,\n>> <command>ANALYZE</command> will end. In the case of partitioned table,\n>> it might be shown on each partitions.\n>>\n>> What do you think that? I'm going to fix it, if you agreed. :)\n> \n> *All* phases are repeated in this case, not not just \"finalizing\n> analyze\", because ANALYZE repeatedly runs for each partition after the\n> parent partitioned table's ANALYZE finishes. 
ANALYZE's documentation\n> mentions that analyzing a partitioned table also analyzes all of its\n> partitions, so users should expect to see the progress information for\n> each partition. So, I don't think we need to clarify that if only in\n> one phase's description. Maybe we can add a note after the phase\n> description table which mentions this implementation detail about\n> partitioned tables. Like this:\n> \n> <note>\n> <para>\n> Note that when <command>ANALYZE</command> is run on a partitioned table,\n> all of its partitions are also recursively analyzed as also mentioned on\n> <xref linkend=\"sql-analyze\"/>. In that case, <command>ANALYZE</command>\n> progress is reported first for the parent table, whereby its inheritance\n> statistics are collected, followed by that for each partition.\n> </para>\n> </note>\n\n\nAh.. you are right: All phases are repeated, it shouldn't be fixed\nthe only one phase's description.\n\n\n> Some more comments on the documentation:\n> \n> + Number of computed extended stats. This counter only advances\n> when the phase\n> + is <literal>computing extended stats</literal>.\n> \n> Number of computed extended stats -> Number of extended stats computed\n\n\nWill fix.\n\n \n> + Number of analyzed child tables. This counter only advances\n> when the phase\n> + is <literal>computing extended stats</literal>.\n> \n> Regarding, \"Number of analyzed child table\", note that we don't\n> \"analyze\" child tables in this phase, only scan its blocks to collect\n> samples for parent's ANALYZE. Also, the 2nd sentence is wrong -- you\n> meant \"when the phase is <literal>acquiring inherited sample\n> rows</literal>. I suggest to write this as follows:\n> \n> Number of child tables scanned. This counter only advances when the phase\n> is <literal>acquiring inherited sample rows</literal>.\n\n\nOops, I will fix it. 
:)\n\n\n \n> + <entry>OID of the child table currently being scanned.\n> + It might be different from relid when analyzing tables that\n> have child tables.\n> \n> I suggest:\n> \n> OID of the child table currently being scanned. This field is only valid when\n> the phase is <literal>computing extended stats</literal>.\n\n\nWill fix.\n\n\n> + The command is currently scanning the\n> <structfield>current_relid</structfield>\n> + to obtain samples.\n> \n> I suggest:\n> \n> The command is currently scanning the the table given by\n> <structfield>current_relid</structfield> to obtain sample rows.\n\n\nWill fix.\n\n \n> + The command is currently scanning the\n> <structfield>current_child_table_relid</structfield>\n> + to obtain samples.\n> \n> I suggest (based on phase description pg_stat_progress_create_index\n> phase descriptions):\n> \n> The command is currently scanning child tables to obtain sample rows. Columns\n> <structfield>child_tables_total</structfield>,\n> <structfield>child_tables_done</structfield>, and\n> <structfield>current_child_table_relid</structfield> contain the progress\n> information for this phase.\n\n\nWill fix.\n\n\n> + <row>\n> + <entry><literal>computing stats</literal></entry>\n> \n> I think the phase name should really be \"computing statistics\", that\n> is, use the full word.\n\n\nWill fix.\n\n \n> + <entry>\n> + The command is computing stats from the samples obtained\n> during the table scan.\n> + </entry>\n> + </row>\n> \n> So I suggest:\n> \n> The command is computing statistics from the sample rows obtained during\n> the table scan\n\n\nWill fix.\n\n \n> + <entry><literal>computing extended stats</literal></entry>\n> + <entry>\n> + The command is computing extended stats from the samples\n> obtained in the previous phase.\n> + </entry>\n> \n> I suggest:\n> \n> The command is computing extended statistics from the sample rows obtained\n> during the table scan.\n\n\nWill fix.\n\n\nThanks for your above useful suggestions. 
It helps me a lot. :)\n\n\nCheers!\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 18 Dec 2019 14:44:04 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi All,\n\n>> *All* phases are repeated in this case, not not just \"finalizing\n>> analyze\", because ANALYZE repeatedly runs for each partition after the\n>> parent partitioned table's ANALYZE finishes.  ANALYZE's documentation\n>> mentions that analyzing a partitioned table also analyzes all of its\n>> partitions, so users should expect to see the progress information for\n>> each partition.  So, I don't think we need to clarify that if only in\n>> one phase's description.  Maybe we can add a note after the phase\n>> description table which mentions this implementation detail about\n>> partitioned tables.  Like this:\n>>\n>>    <note>\n>>     <para>\n>>      Note that when <command>ANALYZE</command> is run on a partitioned table,\n>>      all of its partitions are also recursively analyzed as also mentioned on\n>>      <xref linkend=\"sql-analyze\"/>.  In that case, <command>ANALYZE</command>\n>>      progress is reported first for the parent table, whereby its inheritance\n>>      statistics are collected, followed by that for each partition.\n>>     </para>\n>>    </note>\n> \n> \n> Ah.. you are right: All phases are repeated, it shouldn't be fixed\n> the only one phase's description.\n> \n> \n>> Some more comments on the documentation:\n>>\n>> +       Number of computed extended stats.  This counter only advances\n>> when the phase\n>> +       is <literal>computing extended stats</literal>.\n>>\n>> Number of computed extended stats -> Number of extended stats computed\n> \n> \n> Will fix.\n> \n> \n>> +       Number of analyzed child tables.  
This counter only advances\n>> when the phase\n>> +       is <literal>computing extended stats</literal>.\n>>\n>> Regarding, \"Number of analyzed child table\", note that we don't\n>> \"analyze\" child tables in this phase, only scan its blocks to collect\n>> samples for parent's ANALYZE.  Also, the 2nd sentence is wrong -- you\n>> meant \"when the phase is <literal>acquiring inherited sample\n>> rows</literal>.  I suggest to write this as follows:\n>>\n>> Number of child tables scanned.  This counter only advances when the phase\n>> is <literal>acquiring inherited sample rows</literal>.\n> \n> \n> Oops, I will fix it. :)\n> \n> \n> \n>> +     <entry>OID of the child table currently being scanned.\n>> +       It might be different from relid when analyzing tables that\n>> have child tables.\n>>\n>> I suggest:\n>>\n>> OID of the child table currently being scanned.  This field is only valid when\n>> the phase is <literal>computing extended stats</literal>.\n> \n> \n> Will fix.\n> \n> \n>> +       The command is currently scanning the\n>> <structfield>current_relid</structfield>\n>> +       to obtain samples.\n>>\n>> I suggest:\n>>\n>> The command is currently scanning the the table given by\n>> <structfield>current_relid</structfield> to obtain sample rows.\n> \n> \n> Will fix.\n> \n> \n>> +       The command is currently scanning the\n>> <structfield>current_child_table_relid</structfield>\n>> +       to obtain samples.\n>>\n>> I suggest (based on phase description pg_stat_progress_create_index\n>> phase descriptions):\n>>\n>> The command is currently scanning child tables to obtain sample rows.  
Columns\n>> <structfield>child_tables_total</structfield>,\n>> <structfield>child_tables_done</structfield>, and\n>> <structfield>current_child_table_relid</structfield> contain the progress\n>> information for this phase.\n> \n> \n> Will fix.\n> \n> \n>> +    <row>\n>> +     <entry><literal>computing stats</literal></entry>\n>>\n>> I think the phase name should really be \"computing statistics\", that\n>> is, use the full word.\n> \n> \n> Will fix.\n> \n> \n>> +     <entry>\n>> +       The command is computing stats from the samples obtained\n>> during the table scan.\n>> +     </entry>\n>> +    </row>\n>>\n>> So I suggest:\n>>\n>> The command is computing statistics from the sample rows obtained during\n>> the table scan\n> \n> \n> Will fix.\n> \n> \n>> +     <entry><literal>computing extended stats</literal></entry>\n>> +     <entry>\n>> +       The command is computing extended stats from the samples\n>> obtained in the previous phase.\n>> +     </entry>\n>>\n>> I suggest:\n>>\n>> The command is computing extended statistics from the sample rows obtained\n>> during the table scan.\n> \n> \n> Will fix.\n\n\nI fixed the document based on Amit's comments. 
:)\nPlease find attached file.\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Thu, 19 Dec 2019 21:06:50 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "I just pushed this after some small extra tweaks.\n\nThanks, Yamada-san, for seeing this to completion!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jan 2020 14:11:10 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Wed, Jan 15, 2020 at 02:11:10PM -0300, Alvaro Herrera wrote:\n> I just pushed this after some small extra tweaks.\n> \n> Thanks, Yamada-san, for seeing this to completion!\n\nFind attached minor fixes to docs - sorry I didn't look earlier.\n\nPossibly you'd also want to change the other existing instances of \"preparing\nto begin\".", "msg_date": "Thu, 16 Jan 2020 09:19:31 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "Hi Alvaro and All reviewers,\n\nOn 2020/01/16 2:11, Alvaro Herrera wrote:\n> I just pushed this after some small extra tweaks.\n> \n> Thanks, Yamada-san, for seeing this to completion!\n\nThanks for reviewing and committing the patch!\nHope this helps DBA.
:-D\n\nP.S.\nNext up is progress reporting for query execution?!\nTo create it, I guess that it needs to improve\nprogress reporting infrastructure.\n\nThanks,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Wed, 22 Jan 2020 10:46:11 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Fri, Jan 17, 2020 at 12:19 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Jan 15, 2020 at 02:11:10PM -0300, Alvaro Herrera wrote:\n> > I just pushed this after some small extra tweaks.\n> >\n> > Thanks, Yamada-san, for seeing this to completion!\n>\n> Find attached minor fixes to docs - sorry I didn't look earlier.\n>\n> Possibly you'd also want to change the other existing instances of \"preparing\n> to begin\".\n\nSpotted a few other issues with the docs:\n\n+ Number of computed extended statistics computed.\n\nShould be: \"Number of extended statistics computed.\"\n\n+ <entry>OID of the child table currently being scanned. 
This\nfield is only valid when\n+ the phase is <literal>computing extended statistics</literal>.\n\nShould be: \"This field is only valid when the phase is\n<literal>acquiring inherited sample rows</literal>.\"\n\n+ durring the table scan.\n\nduring\n\nAttached a patch.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 22 Jan 2020 14:52:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Wed, Jan 22, 2020 at 2:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Jan 17, 2020 at 12:19 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Jan 15, 2020 at 02:11:10PM -0300, Alvaro Herrera wrote:\n> > > I just pushed this after some small extra tweaks.\n> > >\n> > > Thanks, Yamada-san, for seeing this to completion!\n> >\n> > Find attached minor fixes to docs - sorry I didn't look earlier.\n> >\n> > Possibly you'd also want to change the other existing instances of \"preparing\n> > to begin\".\n>\n> Spotted a few other issues with the docs:\n>\n> + Number of computed extended statistics computed.\n>\n> Should be: \"Number of extended statistics computed.\"\n>\n> + <entry>OID of the child table currently being scanned. This\n> field is only valid when\n> + the phase is <literal>computing extended statistics</literal>.\n>\n> Should be: \"This field is only valid when the phase is\n> <literal>acquiring inherited sample rows</literal>.\"\n>\n> + durring the table scan.\n>\n> during\n>\n> Attached a patch.\n\nOops, really attached this time.\n\nThanks,\nAmit", "msg_date": "Wed, 22 Jan 2020 15:06:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Wed, Jan 22, 2020 at 03:06:52PM +0900, Amit Langote wrote:\n> Oops, really attached this time.\n\nThanks, applied. There were clearly two grammar mistakes in the first\npatch sent by Justin. 
And your suggestions look fine to me. On top\nof that, I have noticed that the indentation of the two tables in the\ndocs was rather inconsistent.\n--\nMichael", "msg_date": "Thu, 23 Jan 2020 17:20:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On 2020-Jan-22, Tatsuro Yamada wrote:\n\n> Thanks for reviewing and committing the patch!\n> Hope this helps DBA. :-D\n\nI'm sure it does!\n\n> P.S.\n> Next up is progress reporting for query execution?!\n\nActually, I think it's ALTER TABLE.\n\nAlso, things like VACUUM could report the progress of the index being\nprocessed ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Jan 2020 18:47:43 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: progress report for ANALYZE" }, { "msg_contents": "On Fri, Jan 24, 2020 at 6:47 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Jan-22, Tatsuro Yamada wrote:\n> > P.S.\n> > Next up is progress reporting for query execution?!\n>\n> Actually, I think it's ALTER TABLE.\n\n+1.
Existing infrastructure might be enough to cover ALTER TABLE's\n> needs, whereas we will very likely need to build entirely different\n> infrastructure for tracking the progress for SQL query execution.\n\nYeah, I agree.\nI will create a little POC patch after reading tablecmds.c and alter.c to\ninvestigate how to report progress. :)\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n", "msg_date": "Mon, 27 Jan 2020 19:16:25 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: progress report for ANALYZE" } ]
[ { "msg_contents": "Does Microsoft or any other DB manufacturer have an ear on this mailing\nlist?\n\nSascha Kuhl", "msg_date": "Sun, 23 Jun 2019 07:18:31 +0200", "msg_from": "Sascha Kuhl <yogidabanli@gmail.com>", "msg_from_op": true, "msg_subject": "Ear on mailing" }, { "msg_contents": "On 2019-Jun-23, Sascha Kuhl wrote:\n\n> Does Microsoft or any other DB manufacturer have an ear on this mailing\n> list?\n\nThis is a public mailing list, so anybody with an interest can subscribe\nto it. If you search the archives, you might find a more explicit\nanswer to your question.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Jun 2019 15:03:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ear on mailing" }, { "msg_contents": "Greetings,\n\n* Sascha Kuhl (yogidabanli@gmail.com) wrote:\n> Does Microsoft or any other DB manufacturer have an ear on this mailing\n> list?\n\nMany, many, many of the people who are on this mailing list work for DB\nmanufacturers.\n\nI suspect that most of them are manufacturing PostgreSQL.\n\nThanks,\n\nStephen", "msg_date": "Mon, 24 Jun 2019 15:21:53 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ear on mailing" } ]
[ { "msg_contents": "There is some language in a code comment that has been bothering me for\nseveral years now. After pointing it out in a conversation off-list\nrecently, I figured it was past time to do something about it.\n\nPatch attached.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support", "msg_date": "Sun, 23 Jun 2019 11:21:13 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Code comment change" }, { "msg_contents": "On Sun, Jun 23, 2019 at 9:21 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> There is some language in a code comment that has been bothering me for\n> several years now. After pointing it out in a conversation off-list\n> recently, I figured it was past time to do something about it.\n>\n> Patch attached.\n\nPushed. Thanks!\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sun, 23 Jun 2019 22:35:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Code comment change" }, { "msg_contents": "On 23/06/2019 12:35, Thomas Munro wrote:\n> On Sun, Jun 23, 2019 at 9:21 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>> There is some language in a code comment that has been bothering me for\n>> several years now. After pointing it out in a conversation off-list\n>> recently, I figured it was past time to do something about it.\n>>\n>> Patch attached.\n> \n> Pushed. Thanks!\n\nThank you!\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n", "msg_date": "Sun, 23 Jun 2019 13:43:20 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Code comment change" }, { "msg_contents": "On Sun, Jun 23, 2019 at 3:36 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Pushed. Thanks!\n\nI wonder what the comment is supposed to mean.\n\nI think that it's addressing the situation prior to commit 70508ba7aed\nin 2003, which was the point when the \"fast\" root concept was\nintroduced. Prior to that commit, there was only what we would now\ncall a true root, and _bt_getroot() had to loop to make sure that it\nreliably found it without deadlocking, while dealing with concurrent\nsplits. This was necessary because the old design also involved\nmaintaining a pointer to each page's parent in each page, which sounds\nlike a seriously bad approach to me.\n\nI think that the whole sentence about \"the standard class of race\nconditions\" should go. There is no more dance. Nothing in\n_bt_getroot() is surprising to me. The other comments explain things\ncomprehensively.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Jul 2019 19:03:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Code comment change" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Jun 23, 2019 at 3:36 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Pushed. Thanks!\n\n> I wonder what the comment is supposed to mean.\n> I think that it's addressing the situation prior to commit 70508ba7aed\n> in 2003, which was the point when the \"fast\" root concept was\n> introduced.\n\nYeah. I did some research into the provenance of that comment when\nThomas pushed the change. It's *old*. The whole para exists verbatim\nin Postgres v4r2, src/backend/access/nbtree/nbtpage.c dated 1993-12-10\n(in my copy of that tarball). The only change since then has been to\nchange the whitespace for 4-space tabs.\n\nEven more interesting, the same para also exists verbatim in\nv4r2's src/backend/access/nobtree/nobtpage.c, which is dated 1991-10-29\nin the same tarball. (If you're wondering, \"nobtree\" seems to stand\nfor \"no-overwrite btree\"; so I suppose it went the way of all flesh\nwhen Stonebraker lost interest in write-once mass storage.) So presumably\nthis comment dates back to some common ancestor of the mainline btree code\nand the no-overwrite code, which must have been even older than the 1991\ndate.\n\nThis is only marginally relevant to what we should do about it today,\nbut I think it's reasonable to conclude that the current locking\nconsiderations are nearly unrelated to what they were when the comment\nwas written.\n\n> I think that the whole sentence about \"the standard class of race\n> conditions\" should go. There is no more dance. Nothing in\n> _bt_getroot() is surprising to me. The other comments explain things\n> comprehensively.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2019 22:28:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Code comment change" }, { "msg_contents": "On Mon, Jul 1, 2019 at 7:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even more interesting, the same para also exists verbatim in\n> v4r2's src/backend/access/nobtree/nobtpage.c, which is dated 1991-10-29\n> in the same tarball. (If you're wondering, \"nobtree\" seems to stand\n> for \"no-overwrite btree\"; so I suppose it went the way of all flesh\n> when Stonebraker lost interest in write-once mass storage.) So presumably\n> this comment dates back to some common ancestor of the mainline btree code\n> and the no-overwrite code, which must have been even older than the 1991\n> date.\n\n\"no-overwrite btree\" is described here, if you're interested:\n\nhttps://pdfs.semanticscholar.org/a0de/438d5efd96e8af51bc7595aa1c30d0497a57.pdf\n\nThis is a link to the B-Tree focused paper \"An Index Implementation\nSupporting Fast Recovery for the POSTGRES Storage System\". I found\nthat the paper provided me with some interesting historic context. I\nam pretty sure that the authors were involved in early work on the\nPostgres B-Tree code. It references Lanin and Shasha, even though the\nnbtree code that is influenced by L&S first appears in the same 2003\ncommit of yours that I mentioned.\n\n> > I think that the whole sentence about \"the standard class of race\n> > conditions\" should go. There is no more dance. Nothing in\n> > _bt_getroot() is surprising to me. The other comments explain things\n> > comprehensively.\n>\n> +1\n\nI'll take care of it soon.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Jul 2019 20:09:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Code comment change" } ]
[ { "msg_contents": "Hi all,\n\nAfter the issues behind CVE-2019-10164, it seems that we can do much\nbetter with the current interface of decoding and encoding functions\nfor base64 in src/common/.\n\nThe root issue is that the callers of pg_b64_decode() and\npg_b64_encode() provide a buffer where the result gets stored which is\nallocated using respectively pg_b64_dec_len() and pg_b64_dec_enc()\n(those routines overestimate the allocation on purpose) but we don't\nallow callers to provide the length of the buffer allocated and hence\nthose routines lack sanity checks to make sure that what is in input\ndoes not cause an overflow within the result buffer.\n\nOne thing I have noticed is that many projects on the net include this\ncode for their own purpose, and I have suspicions that many other\nprojects link to the code from Postgres and make use of it. So that's\nrather scary.\n\nAttached is a refactoring patch for those interfaces, which introduces\na set of overflow checks so as we cannot repeat errors of the past.\nThis adds one argument to pg_b64_decode() and pg_b64_encode() as the\nsize of the result buffer, and we make use of it in the code to make\nsure that an error is reported in case of an overflow. That's the\nstatus code -1 which is used for other errors for simplicity. One\nthing to note is that the decoding path can already complain on some\nerrors, basically an incorrectly shaped encoded string, but the\nencoding path does not have any errors yet, so we need to make sure\nthat all the existing callers of pg_b64_encode() complain correctly\nwith the new interface.\n\nI am adding that to the next CF for v13.\n\nAny thoughts?\n--\nMichael", "msg_date": "Sun, 23 Jun 2019 22:25:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Refactoring base64 encoding and decoding into a safer interface" }, { "msg_contents": "> On 23 Jun 2019, at 15:25, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Attached is a refactoring patch for those interfaces, which introduces\n> a set of overflow checks so as we cannot repeat errors of the past.\n\nI’ve done a review of this submission. The patch applies cleanly, and passes\nmake check, ssl/scram tests etc. There is enough documentation \n\nI very much agree that functions operating on a buffer like this should have\nthe size of the buffer in order to safeguard against overflow, so +1 on the\ngeneral concept.\n\n> Any thoughts?\n\nA few small comments:\n\nIn src/common/scram-common.c there are a few instances like this. Shouldn’t we\nalso free the result buffer in these cases?\n\n+#ifdef FRONTEND\n+ return NULL;\n+#else\n\nIn the below passage, we leave the input buffer with a non-complete encoded\nstring. Should we memset the buffer to zero to avoid the risk that code which\nfails to check the returnvalue believes it has an encoded string?\n\n+ /*\n+ * Leave if there is an overflow in the area allocated for\n+ * the encoded string.\n+ */\n+ if ((p - dst + 4) > dstlen)\n+ return -1;\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jul 2019 23:11:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Refactoring base64 encoding and decoding into a safer interface" }, { "msg_contents": "On Mon, Jul 01, 2019 at 11:11:43PM +0200, Daniel Gustafsson wrote:\n> I very much agree that functions operating on a buffer like this should have\n> the size of the buffer in order to safeguard against overflow, so +1 on the\n> general concept.\n\nThanks for the review!\n\n> A few small comments:\n> \n> In src/common/scram-common.c there are a few instances like this. Shouldn’t we\n> also free the result buffer in these cases?\n> \n> +#ifdef FRONTEND\n> + return NULL;\n> +#else\n\nFixed.\n\n> In the below passage, we leave the input buffer with a non-complete\n> encoded string. Should we memset the buffer to zero to avoid the\n> risk that code which fails to check the return value believes it has\n> an encoded string?\n\nHmm. Good point. I have not thought of that, and your suggestion\nmakes sense.\n\nAnother question is if we'd want to actually use explicit_bzero()\nhere, but that could be a discussion on this other thread, except if\nthe patch discussed there is merged first:\nhttps://www.postgresql.org/message-id/42d26bde-5d5b-c90d-87ae-6cab875f73be@2ndquadrant.com\n\nAttached is an updated patch.\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 14:41:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring base64 encoding and decoding into a safer interface" }, { "msg_contents": "> On 2 Jul 2019, at 07:41, Michael Paquier <michael@paquier.xyz> wrote:\n\n>> In the below passage, we leave the input buffer with a non-complete\n>> encoded string. Should we memset the buffer to zero to avoid the\n>> risk that code which fails to check the return value believes it has\n>> an encoded string?\n> \n> Hmm. Good point. I have not thought of that, and your suggestion\n> makes sense.\n> \n> Another question is if we'd want to actually use explicit_bzero()\n> here, but that could be a discussion on this other thread, except if\n> the patch discussed there is merged first:\n> https://www.postgresql.org/message-id/42d26bde-5d5b-c90d-87ae-6cab875f73be@2ndquadrant.com\n\nI’m not sure we need to go to that length, but I don’t have overly strong\nopinions. I think of this more like a case of “we’ve changed the API with new\nerrorcases that we didn’t handle before, so we’re being a little defensive to\nhelp you avoid subtle bugs”.\n\n> Attached is an updated patch.\n\nLooks good, passes tests, provides value to the code. Bumping this to ready\nfor committer as I no more comments to add.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 2 Jul 2019 09:56:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Refactoring base64 encoding and decoding into a safer interface" }, { "msg_contents": "On Tue, Jul 02, 2019 at 09:56:03AM +0200, Daniel Gustafsson wrote:\n> I’m not sure we need to go to that length, but I don’t have overly strong\n> opinions. I think of this more like a case of “we’ve changed the API with new\n> errorcases that we didn’t handle before, so we’re being a little defensive to\n> help you avoid subtle bugs”.\n\nI quite like this suggestion.\n\n> Looks good, passes tests, provides value to the code. Bumping this to ready\n> for committer as I no more comments to add.\n\nThanks. I'll look at that again in a couple of days, let's see if\nothers have any input to offer.\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 17:22:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring base64 encoding and decoding into a safer interface" }, { "msg_contents": "On Tue, Jul 02, 2019 at 09:56:03AM +0200, Daniel Gustafsson wrote:\n> Looks good, passes tests, provides value to the code. Bumping this to ready\n> for committer as I no more comments to add.\n\nThanks. I have spent more time testing the different error paths and\nthe new checks in base64.c, and committed the thing.\n--\nMichael", "msg_date": "Thu, 4 Jul 2019 16:14:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactoring base64 encoding and decoding into a safer interface" } ]
[ { "msg_contents": "While digging into the incremental sort patch, I noticed in\ntuplesort.c at the beginning of the function in $SUBJECT we have this\ncomment and assertion:\n\ntuplesort_set_bound(Tuplesortstate *state, int64 bound)\n{\n /* Assert we're called before loading any tuples */\n Assert(state->status == TSS_INITIAL);\n\nBut AFAICT from reading the code in puttuple_common the state remains\nTSS_INITIAL while tuples are inserted (unless we reach a point where\nwe decide to transition it to TSS_BOUNDED or TSS_BUILDRUNS).\n\nTherefore it's not true that the assertion guards against having\nloaded any tuples; rather it guarantees that we remain in standard\nmemory quicksort mode.\n\nAssuming my understanding is correct, I've attached a small patch to\nupdate the comment to \"Assert we're still in memory quicksort mode and\nhaven't transitioned to heap or tape mode\".\n\nNote: this also means the function header comment \"Must be called\nbefore inserting any tuples\" is a condition that isn't actually\nvalidated, but I think that's fine given it's not a new problem and\neven more so since the same comment goes on to say that that's\nprobably not a strict requirement.\n\nJames Coleman", "msg_date": "Sun, 23 Jun 2019 15:22:04 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Misleading comment in tuplesort_set_bound" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> While digging into the incremental sort patch, I noticed in\n> tuplesort.c at the beginning of the function in $SUBJECT we have this\n> comment and assertion:\n\n> tuplesort_set_bound(Tuplesortstate *state, int64 bound)\n> {\n> /* Assert we're called before loading any tuples */\n> Assert(state->status == TSS_INITIAL);\n\n> But AFAICT from reading the code in puttuple_common the state remains\n> TSS_INITIAL while tuples are inserted (unless we reach a point where\n> we decide to transition it to TSS_BOUNDED or TSS_BUILDRUNS).\n\nYou missed the relevance of the next line:\n\n\tAssert(state->memtupcount == 0);\n\nI think the comment is fine as-is. Perhaps the code would be clearer\nthough, if we merged those two asserts into one?\n\n\t/* Assert we're called before loading any tuples */\n\tAssert(state->status == TSS_INITIAL &&\n\t state->memtupcount == 0);\n\nI'm not totally sure about the usefulness/relevance of the two\nassertions following these, but they could likely do with their\nown comment(s), because this one surely isn't covering them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Aug 2019 17:51:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Misleading comment in tuplesort_set_bound" }, { "msg_contents": "On 2019-Aug-26, Tom Lane wrote:\n\n> James Coleman <jtc331@gmail.com> writes:\n\n> I think the comment is fine as-is. Perhaps the code would be clearer\n> though, if we merged those two asserts into one?\n> \n> \t/* Assert we're called before loading any tuples */\n> \tAssert(state->status == TSS_INITIAL &&\n> \t state->memtupcount == 0);\n\nMakes sense to me. James, do you want to submit a new patch?\n\n> I'm not totally sure about the usefulness/relevance of the two\n> assertions following these, but they could likely do with their\n> own comment(s), because this one surely isn't covering them.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Sep 2019 16:56:57 -0400", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Misleading comment in tuplesort_set_bound" }, { "msg_contents": "Yes, planning on it, just a bit behind right now so will likely be a\nfew more days at least.\n\nOn Thu, Sep 5, 2019 at 4:57 PM Alvaro Herrera from 2ndQuadrant\n<alvherre@alvh.no-ip.org> wrote:\n>\n> On 2019-Aug-26, Tom Lane wrote:\n>\n> > James Coleman <jtc331@gmail.com> writes:\n>\n> > I think the comment is fine as-is. Perhaps the code would be clearer\n> > though, if we merged those two asserts into one?\n> >\n> > /* Assert we're called before loading any tuples */\n> > Assert(state->status == TSS_INITIAL &&\n> > state->memtupcount == 0);\n>\n> Makes sense to me. James, do you want to submit a new patch?\n>\n> > I'm not totally sure about the usefulness/relevance of the two\n> > assertions following these, but they could likely do with their\n> > own comment(s), because this one surely isn't covering them.\n>\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Sep 2019 17:10:19 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Misleading comment in tuplesort_set_bound" }, { "msg_contents": "On 2019-Sep-05, James Coleman wrote:\n\n> Yes, planning on it, just a bit behind right now so will likely be a\n> few more days at least.\n\n[ shrug ] It seemed to require no further work, so I just pushed Tom's\nproposed change.\n\nI added an empty line after the new combined assertion, which makes\nclearer (to me anyway) that the other assertions are unrelated.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 12 Sep 2019 10:39:34 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Misleading comment in tuplesort_set_bound" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> [ shrug ] It seemed to require no further work, so I just pushed Tom's\n> proposed change.\n> I added an empty line after the new combined assertion, which makes\n> clearer (to me anyway) that the other assertions are unrelated.\n\nActually, the thing I wanted to add was some actual comments for\nthose other assertions. But that requires a bit of research that\nI hadn't made time for...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Sep 2019 10:00:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Misleading comment in tuplesort_set_bound" } ]
[ { "msg_contents": "\nAlvaro pointed out to me recently that the buildfarm client doesn't have\nany provision for running module tests like commit_ts and\nsnapshot_too_old that use NO_INSTALLCHECK. On looking into this a bit\nmore, I noticed that we also don't run any TAP tests in\nsrc/test/modules. I'm adding some code to the client to remedy both of\nthese, and crake has been running it quite happily for a week or so. Are\nthere any other holes of this nature that should be plugged? We'll need\nsome MSVC build tools support for some of it.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 23 Jun 2019 18:15:06 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Plugging some testing holes" }, { "msg_contents": "On Sun, Jun 23, 2019 at 06:15:06PM -0400, Andrew Dunstan wrote:\n> Alvaro pointed out to me recently that the buildfarm client doesn't have\n> any provision for running module tests like commit_ts and\n> snapshot_too_old that use NO_INSTALLCHECK. On looking into this a bit\n> more, I noticed that we also don't run any TAP tests in\n> src/test/modules. I'm adding some code to the client to remedy both of\n> these, and crake has been running it quite happily for a week or so. Are\n> there any other holes of this nature that should be plugged?\n\nsrc/test/kerberos/ and src/test/ldap/.\n\ncontrib modules having TAP tests are actually able to run the tests.\nOnly an installcheck triggered from contrib/ happens at step\ncontrib-install-check-C, right?\n\n> We'll need some MSVC build tools support for some of it.\n\nThis one is more complex. We don't actually track TAP_TESTS in\nsrc/tools/msvc/ yet. Cough.\n--\nMichael", "msg_date": "Mon, 24 Jun 2019 11:27:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Plugging some testing holes" }, { "msg_contents": "\nOn 6/23/19 10:27 PM, Michael Paquier wrote:\n> On Sun, Jun 23, 2019 at 06:15:06PM -0400, Andrew Dunstan wrote:\n>> Alvaro pointed out to me recently that the buildfarm client doesn't have\n>> any provision for running module tests like commit_ts and\n>> snapshot_too_old that use NO_INSTALLCHECK. On looking into this a bit\n>> more, I noticed that we also don't run any TAP tests in\n>> src/test/modules. I'm adding some code to the client to remedy both of\n>> these, and crake has been running it quite happily for a week or so. Are\n>> there any other holes of this nature that should be plugged?\n> src/test/kerberos/ and src/test/ldap/.\n\n\n\nWe already make provision for those. See PG_TEST_EXTRA in the config file\n\n>\n> contrib modules having TAP tests are actually able to run the tests.\n> Only an installcheck triggered from contrib/ happens at step\n> contrib-install-check-C, right?\n\n\nYes, but I will add in support for the contrib TAP tests, thanks.\n\n\n>\n>> We'll need some MSVC build tools support for some of it.\n> This one is more complex. We don't actually track TAP_TESTS in\n> src/tools/msvc/ yet. Cough.\n\n\nWe do have support for some TAP tests, I will make sure we can run all\nof them\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 24 Jun 2019 08:55:38 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Plugging some testing holes" } ]
[ { "msg_contents": "Hi,\n\n/*\n * GatherMergePath runs several copies of a plan in parallel and collects\n * the results, preserving their common sort order. For gather merge, the\n * parallel leader always executes the plan too, so we don't need single_copy.\n */\ntypedef struct GatherMergePath\n\nThe second sentence is not true as of commit e5253fdc, and the\nattached patch removes it.\n\nEven before that commit, the comment was a bit circular: the reason\nGatherMergePath doesn't need a single_copy field is because\nforce_parallel_mode specifically means \"try to stick a Gather node on\ntop in a test mode with one worker and no leader participation\", and\nthis isn't a Gather node.\n\nHmm. I wonder if we should rename force_parallel_mode to\nforce_gather_node in v13. The current name has always seemed slightly\nmisleading to me; it sounds like some kind of turbo boost button but\nreally it's a developer-only test mode. Also, does it belong under\nDEVELOPER_OPTIONS instead of QUERY_TUNING_OTHER? I'm also wondering\nif the variable single_copy would be better named\nno_leader_participation or single_participant. I find \"copy\" a\nslightly strange way to refer to the number of copies *allowed to\nrun*, but maybe that's just me.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Mon, 24 Jun 2019 17:20:00 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Misleading comment about single_copy, and some bikeshedding" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm. I wonder if we should rename force_parallel_mode to\n> force_gather_node in v13. The current name has always seemed slightly\n> misleading to me; it sounds like some kind of turbo boost button but\n> really it's a developer-only test mode. Also, does it belong under\n> DEVELOPER_OPTIONS instead of QUERY_TUNING_OTHER? I'm also wondering\n> if the variable single_copy would be better named\n> no_leader_participation or single_participant. I find \"copy\" a\n> slightly strange way to refer to the number of copies *allowed to\n> run*, but maybe that's just me.\n\nFWIW, I agree 100% that these names are opaque. I don't know if your\nsuggestions are the best we can do, but they each seem like improvements.\nAnd yes, force_parallel_mode should be under DEVELOPER_OPTIONS; it's a\nperformance-losing test option.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:34:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Misleading comment about single_copy, and some bikeshedding" } ]
[ { "msg_contents": "Hi,\n\nIn commit a76200de we added a line to unset MAKELEVEL to fix a problem\nwith our temp-install logic. I don't think it was a great idea to\nclear MAKEFLAGS at the same time, because now when you type \"make -s\n-j8\" on a non-GNU system it ignores you and is loud and slow.\nAdmittedly there is something slightly weird about passing flags to\nboth makes, but in the case of widely understood flags like those\nones, it works fine.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2019 20:21:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "MAKEFLAGS in non-GNU Makefile" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> In commit a76200de we added a line to unset MAKELEVEL to fix a problem\n> with our temp-install logic. I don't think it was a great idea to\n> clear MAKEFLAGS at the same time, because now when you type \"make -s\n> -j8\" on a non-GNU system it ignores you and is loud and slow.\n\nFeel free to undo that. I was concerned about possible incompatibilities\nin the makeflags, but if typical cases like this one seem to work, let's\nallow it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2019 10:29:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MAKEFLAGS in non-GNU Makefile" } ]
[ { "msg_contents": "Hi,\n\nI'm looking at PostGIS geometry GiST index build times and try to optimize\nwithing the current GiST framework. The function that shows a lot on my\nflame graphs is penalty.\n\nI spent weekend rewriting PostGIS penalty to be as fast as possible.\n(FYI https://github.com/postgis/postgis/pull/425/files)\n\nHowever I cannot get any meaningfully faster build time. Even when I strip\nit to \"just return edge extension\" index build time is the same.\n\nIs there a way to inline the penalty into above \"choose subtree\" loop\nsomehow? That would also let us stop bit-fiddling floats to simulate a more\ncomplex choosing scheme.\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi,I'm looking at PostGIS geometry GiST index build times and try to optimize withing the current GiST framework. The function that shows a lot on my flame graphs is penalty. I spent weekend rewriting PostGIS penalty to be as fast as possible. (FYI https://github.com/postgis/postgis/pull/425/files) However I cannot get any meaningfully faster build time. Even when I strip it to \"just return edge extension\" index build time is the same.Is there a way to inline the penalty into above \"choose subtree\" loop somehow? That would also let us stop bit-fiddling floats to simulate a more complex choosing scheme.-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa", "msg_date": "Mon, 24 Jun 2019 13:08:53 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": true, "msg_subject": "GiST \"choose subtree\" support function to inline penalty" }, { "msg_contents": "Hi!\n\n> 24 июня 2019 г., в 15:08, Darafei Komяpa Praliaskouski <me@komzpa.net> написал(а):\n> \n> I'm looking at PostGIS geometry GiST index build times and try to optimize withing the current GiST framework. The function that shows a lot on my flame graphs is penalty. 
\n> \n> I spent weekend rewriting PostGIS penalty to be as fast as possible. \n> (FYI https://github.com/postgis/postgis/pull/425/files) \n> \n> However I cannot get any meaningfully faster build time. Even when I strip it to \"just return edge extension\" index build time is the same.\n> \n> Is there a way to inline the penalty into above \"choose subtree\" loop somehow? That would also let us stop bit-fiddling floats to simulate a more complex choosing scheme.\n\nMaybe we could just add new opclass function for choosing subtree?\nI've created GSoC item for this[0].\n\n\nBest regards, Andrey Borodin.\n\n[0] https://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29\n\n", "msg_date": "Mon, 24 Jun 2019 16:31:07 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GiST \"choose subtree\" support function to inline penalty" }, { "msg_contents": "On Mon, Jun 24, 2019 at 2:31 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> > 24 июня 2019 г., в 15:08, Darafei Komяpa Praliaskouski <me@komzpa.net>\n> написал(а):\n> >\n> > I'm looking at PostGIS geometry GiST index build times and try to\n> optimize withing the current GiST framework. The function that shows a lot\n> on my flame graphs is penalty.\n> >\n> > I spent weekend rewriting PostGIS penalty to be as fast as possible.\n> > (FYI https://github.com/postgis/postgis/pull/425/files)\n> >\n> > However I cannot get any meaningfully faster build time. Even when I\n> strip it to \"just return edge extension\" index build time is the same.\n> >\n> > Is there a way to inline the penalty into above \"choose subtree\" loop\n> somehow? That would also let us stop bit-fiddling floats to simulate a more\n> complex choosing scheme.\n>\n> Maybe we could just add new opclass function for choosing subtree?\n>\n\n+1,\nThis sounds reasonable. 
Authors of existing GiST opclasses wouldn't have\ntrouble to keep compatible with new PostgreSQL versions.\n\nI see one more use case for \"choose subtree\" instead \"penalty\". When\nR*-tree chooses subtree, it considers to only extension of selected\nbounding box, but also overlap increase of bounding boxes. This strategy\nshould have a positive effect of tree quality, besides I never seen it has\nbeen measured separately. It probably kind of possible to implement using\n\"penalty\" method assuming you have reference to the page in GISTENTRY. But\nthat doesn't seems a correct way to use the GiST interface. Additionally,\nyou don't know the attribute number to get the correct key in multicolumn\nindexes. Having \"choose subtree\" method will make it possible to implement\nthis strategy in correct way. However, this use case is kind of opposite\nto Darafei's one, because it should make choosing subtree slower (but\nbetter).\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nOn Mon, Jun 24, 2019 at 2:31 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:> 24 июня 2019 г., в 15:08, Darafei Komяpa Praliaskouski <me@komzpa.net> написал(а):\n> \n> I'm looking at PostGIS geometry GiST index build times and try to optimize withing the current GiST framework. The function that shows a lot on my flame graphs is penalty. \n> \n> I spent weekend rewriting PostGIS penalty to be as fast as possible. \n> (FYI https://github.com/postgis/postgis/pull/425/files) \n> \n> However I cannot get any meaningfully faster build time. Even when I strip it to \"just return edge extension\" index build time is the same.\n> \n> Is there a way to inline the penalty into above \"choose subtree\" loop somehow? That would also let us stop bit-fiddling floats to simulate a more complex choosing scheme.\n\nMaybe we could just add new opclass function for choosing subtree?+1,This sounds reasonable.  
Authors of existing GiST opclasses wouldn't have trouble to keep compatible with new PostgreSQL versions.I see one more use case for \"choose subtree\" instead \"penalty\".  When R*-tree chooses subtree, it considers to only extension of selected bounding box, but also overlap increase of bounding boxes.  This strategy should have a positive effect of tree quality, besides I never seen it has been measured separately.  It probably kind of possible to implement using \"penalty\" method assuming you have reference to the page in GISTENTRY.  But that doesn't seems a correct way to use the GiST interface.  Additionally, you don't know the attribute number to get the correct key in multicolumn indexes.  Having \"choose subtree\" method will make it possible to implement this strategy in correct way.  However, this use case is kind of opposite to Darafei's one, because it should make choosing subtree slower (but better).------Alexander KorotkovPostgres Professional: http://www.postgrespro.comThe Russian Postgres Company", "msg_date": "Wed, 26 Jun 2019 04:20:06 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: GiST \"choose subtree\" support function to inline penalty" }, { "msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n> I'm looking at PostGIS geometry GiST index build times and try to optimize\n> withing the current GiST framework. The function that shows a lot on my\n> flame graphs is penalty.\n\n> I spent weekend rewriting PostGIS penalty to be as fast as possible.\n> (FYI https://github.com/postgis/postgis/pull/425/files)\n\n> However I cannot get any meaningfully faster build time. 
Even when I strip\n> it to \"just return edge extension\" index build time is the same.\n\nTBH this makes me wonder whether the real problem isn't so much \"penalty\nfunction is too slow\" as \"penalty function is resulting in really bad\nindex splits\".\n\nIt might be that giving the opclass higher-level control over the split\ndecision can help with both aspects. But never start micro-optimizing\nan algorithm until you're sure it's the right algorithm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 23:00:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST \"choose subtree\" support function to inline penalty" }, { "msg_contents": "On Thu, Jun 27, 2019 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>\n> writes:\n> > I'm looking at PostGIS geometry GiST index build times and try to\n> optimize\n> > withing the current GiST framework. The function that shows a lot on my\n> > flame graphs is penalty.\n>\n> > I spent weekend rewriting PostGIS penalty to be as fast as possible.\n> > (FYI https://github.com/postgis/postgis/pull/425/files)\n>\n> > However I cannot get any meaningfully faster build time. Even when I\n> strip\n> > it to \"just return edge extension\" index build time is the same.\n>\n> TBH this makes me wonder whether the real problem isn't so much \"penalty\n> function is too slow\" as \"penalty function is resulting in really bad\n> index splits\".\n>\n\nAs an extension writer I don't have much control on how Postgres calls\npenalty function. 
PostGIS box is using floats instead of doubles, so it is\nhalf the size of the Postgres built-in box, meaning penalty is called\neven more often on better-packed pages.\n\nI can get index construction speed to be much faster if I break penalty so it\nactually results in horrible splits: index size grows 50%, construction is\n30% faster.\n\n\n>\n> It might be that giving the opclass higher-level control over the split\n> decision can help with both aspects.\n\n\nPlease note the question is not about the split. Korotkov's split is working\nfine. The issue is with penalty and the computations required for choosing the\nsubtree before the split happens.\n\nAndrey Borodin proposed off-list that we can provide our own index type\nthat is a copy of GiST but with penalty inlined into the \"choose subtree\" code\npath, as that seems to be the only way to do it in PG12. Is there a more\nhumane option than forking GiST?\n\n\n\n> But never start micro-optimizing\n> an algorithm until you're sure it's the right algorithm.\n>\n\nThat's exactly the reason I wrote the original letter. I don't see any option\nfor further optimization in the existing GiST framework, but this optimization\nis needed: waiting 10 hours for GiST to build after an hour of ingesting\nthe dataset is frustrating, especially when you see a nearby b-tree done in\nan hour.\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa", "msg_date": "Thu, 27 Jun 2019 13:50:04 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": true, "msg_subject": "Re: GiST \"choose subtree\" support function to inline penalty" } ]
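The overlap-aware subtree choice discussed in the thread above can be illustrated outside PostgreSQL. The following standalone C sketch uses hypothetical names and a float-based box like the one PostGIS describes; it is not the actual PostGIS or PostgreSQL code. It computes an R*-tree-style routing cost for one candidate subtree: the area enlargement of its key plus the increase in overlap with the sibling keys on the same internal page. The sibling keys are precisely the information a per-tuple penalty function never sees.

```c
/* Hypothetical float-based bounding box (illustrative, not the real PostGIS type). */
typedef struct BoxF
{
    float xlo, ylo, xhi, yhi;
} BoxF;

static double
boxf_area(BoxF b)
{
    return ((double) b.xhi - b.xlo) * ((double) b.yhi - b.ylo);
}

/* Smallest box covering both a and b. */
static BoxF
boxf_union(BoxF a, BoxF b)
{
    BoxF r;

    r.xlo = a.xlo < b.xlo ? a.xlo : b.xlo;
    r.ylo = a.ylo < b.ylo ? a.ylo : b.ylo;
    r.xhi = a.xhi > b.xhi ? a.xhi : b.xhi;
    r.yhi = a.yhi > b.yhi ? a.yhi : b.yhi;
    return r;
}

/* Area of the intersection of a and b, 0 if they don't intersect. */
static double
boxf_overlap(BoxF a, BoxF b)
{
    double w = (double) (a.xhi < b.xhi ? a.xhi : b.xhi) - (a.xlo > b.xlo ? a.xlo : b.xlo);
    double h = (double) (a.yhi < b.yhi ? a.yhi : b.yhi) - (a.ylo > b.ylo ? a.ylo : b.ylo);

    return (w > 0.0 && h > 0.0) ? w * h : 0.0;
}

/*
 * R*-style cost of routing newbox into the subtree whose downlink key is
 * 'key': the key's area enlargement, plus how much the enlarged key would
 * increase its overlap with the sibling keys on the same internal page.
 * A plain penalty callback sees only (key, newbox); the siblings are what
 * a "choose subtree" support function would additionally expose.
 */
static double
choose_subtree_cost(BoxF key, BoxF newbox, const BoxF *siblings, int nsiblings)
{
    BoxF    grown = boxf_union(key, newbox);
    double  cost = boxf_area(grown) - boxf_area(key);

    for (int i = 0; i < nsiblings; i++)
        cost += boxf_overlap(grown, siblings[i]) - boxf_overlap(key, siblings[i]);
    return cost;
}
```

A "choose subtree" support function would evaluate such a cost for every downlink on the internal page and descend into the minimum; a penalty callback, by contrast, is invoked once per key with no view of the page, which is why this strategy does not fit the current interface cleanly.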
[ { "msg_contents": "Proposing the following changes to make the predicate locking and checking\nfunctions generic and remove their dependency on HeapTuple and the heap AM. We\nmade these changes to help with Zedstore. I think the changes should\nhelp Zheap and other AMs in general.\n\n- Change PredicateLockTuple() to PredicateLockTID(). So, instead of\n passing a HeapTuple to it, just pass the ItemPointer and the tuple insert\n transaction id if known. This was also discussed earlier in [1]; I\n don't think it was done in that context, but it would be helpful in future\n if such a requirement comes up as well.\n\n- Make CheckForSerializableConflictIn() take a blocknum instead of a\n buffer. Currently, the function does nothing with the buffer anyway and\n just needs the blocknum. This also helps to decouple the dependency on the buffer, as\n not all AMs may have a one-to-one notion between blocknum and a single\n buffer. For zedstore, for example, a tuple is stored across individual column\n buffers. So, we wish to have a way to lock not a physical buffer but a\n logical blocknum.\n\n- CheckForSerializableConflictOut() no longer takes a HeapTuple nor a\n buffer; instead it just takes an xid. Push the heap-specific parts from\n CheckForSerializableConflictOut() into its own function\n HeapCheckForSerializableConflictOut() which calls\n CheckForSerializableConflictOut(). The alternative option could be to have\n CheckForSerializableConflictOut() take a callback function and\n callback arguments, which get called if required after performing\n prechecks.
Though currently I feel that an AM having its own wrapper to\n perform the AM-specific task and then calling\n CheckForSerializableConflictOut() is fine.\n\nAttaching a patch which makes these changes.\n\nThis way the PredicateLockTID(), CheckForSerializableConflictIn() and\nCheckForSerializableConflictOut() functions become usable by any AM.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAEepm%3D2QbqQ_%2BKQQCnhKukF6NEAeq4SqiO3Qxe%2BfHza5-H-jKA%40mail.gmail.com", "msg_date": "Mon, 24 Jun 2019 10:41:06 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hi,\n\nOn 2019-06-24 10:41:06 -0700, Ashwin Agrawal wrote:\n> Proposing following changes to make predicate locking and checking\n> functions generic and remove dependency on HeapTuple and Heap AM. We\n> made these changes to help with Zedstore. I think the changes should\n> help Zheap and other AMs in general.\n\nIndeed.\n\n\n> - Change PredicateLockTuple() to PredicateLockTID(). So, instead of\n> passing HeapTuple to it, just pass ItemPointer and tuple insert\n> transaction id if known. This was also discussed earlier in [1],\n> don't think was done in that context but would be helpful in future\n> if such requirement comes up as well.\n\nRight.\n\n\n> - CheckForSerializableConflictIn() take blocknum instead of\n> buffer. Currently, the function anyways does nothing with the buffer\n> just needs blocknum. Also, helps to decouple dependency on buffer as\n> not all AMs may have one to one notion between blocknum and single\n> buffer. Like for zedstore, tuple is stored across individual column\n> buffers. So, wish to have way to lock not physical buffer but\n> logical blocknum.\n\nHm. I wonder if we somehow ought to generalize the granularity scheme\nfor predicate locks to not be tuple/page/relation.
But even if, that's\nprobably a separate patch.\n\n\n> - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> buffer, instead just takes xid. Push heap specific parts from\n> CheckForSerializableConflictOut() into its own function\n> HeapCheckForSerializableConflictOut() which calls\n> CheckForSerializableConflictOut(). The alternative option could be\n> CheckForSerializableConflictOut() take callback function and\n> callback arguments, which gets called if required after performing\n> prechecks. Though currently I fell AM having its own wrapper to\n> perform AM specific task and then calling\n> CheckForSerializableConflictOut() is fine.\n\nI think it's right to move the xid handling out of\nCheckForSerializableConflictOut(). But I think we also ought to move the\nsubtransaction handling out of the function - e.g. zheap doesn't\nwant/need that.\n\n\n> Attaching patch which makes these changes.\n\nPlease make sure that there's a CF entry for this (I'm in a plane with a\nsuper slow connection, otherwise I'd check).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jun 2019 11:02:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-24 10:41:06 -0700, Ashwin Agrawal wrote:\n> > Proposing following changes to make predicate locking and checking\n> > functions generic and remove dependency on HeapTuple and Heap AM. We\n> > made these changes to help with Zedstore. I think the changes should\n> > help Zheap and other AMs in general.\n>\n> Indeed.\n\n+1\n\n> > - Change PredicateLockTuple() to PredicateLockTID(). So, instead of\n> > passing HeapTuple to it, just pass ItemPointer and tuple insert\n> > transaction id if known. 
This was also discussed earlier in [1],\n> > don't think was done in that context but would be helpful in future\n> > if such requirement comes up as well.\n>\n> Right.\n\n+1\n\n> > - CheckForSerializableConflictIn() take blocknum instead of\n> > buffer. Currently, the function anyways does nothing with the buffer\n> > just needs blocknum. Also, helps to decouple dependency on buffer as\n> > not all AMs may have one to one notion between blocknum and single\n> > buffer. Like for zedstore, tuple is stored across individual column\n> > buffers. So, wish to have way to lock not physical buffer but\n> > logical blocknum.\n>\n> Hm. I wonder if we somehow ought to generalize the granularity scheme\n> for predicate locks to not be tuple/page/relation. But even if, that's\n> probably a separate patch.\n\nWhat do you have in mind? This is certainly the traditional way to do\nlock hierarchies (archeological fun: we used to have\nsrc/backend/storage/lock/multi.c, a \"standard multi-level lock manager\nas per the Gray paper\", before commits 3f7fbf85 and e6e9e18e\ndecommissioned it; SSI brought the concept back again in a parallel\nlock manager), but admittedly it has assumptions about physical\nstorage baked into the naming. Perhaps you just want to give those\nthings different labels, \"TID range\" instead of page, for the benefit\nof \"logical\" TID users? Perhaps you want to permit more levels? That\nseems premature as long as TIDs are defined in terms of blocks and\noffsets, so this stuff is reflected all over the source tree...\n\n> > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> > buffer, instead just takes xid. Push heap specific parts from\n> > CheckForSerializableConflictOut() into its own function\n> > HeapCheckForSerializableConflictOut() which calls\n> > CheckForSerializableConflictOut(). 
The alternative option could be\n> > CheckForSerializableConflictOut() take callback function and\n> > callback arguments, which gets called if required after performing\n> > prechecks. Though currently I fell AM having its own wrapper to\n> > perform AM specific task and then calling\n> > CheckForSerializableConflictOut() is fine.\n> >\n> > I think it's right to move the xid handling out of\n> > CheckForSerializableConflictOut(). But I think we also ought to move the\n> > subtransaction handling out of the function - e.g. zheap doesn't\n> > want/need that.\n>\n> Thoughts on this Ashwin?\n\nThis all makes sense, and I'd like to be able to commit this soon. We\ncame up with something nearly identical for zheap. I'm subscribing\nKuntal Ghosh to see if he has comments, as he worked on that. The\nmain point of difference seems to be the assumption about how\nsubtransactions work.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jul 2019 09:57:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Tue, Jul 30, 2019 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> > > buffer, instead just takes xid. Push heap specific parts from\n> > > CheckForSerializableConflictOut() into its own function\n> > > HeapCheckForSerializableConflictOut() which calls\n> > > CheckForSerializableConflictOut(). The alternative option could be\n> > > CheckForSerializableConflictOut() take callback function and\n> > > callback arguments, which gets called if required after performing\n> > > prechecks.
Though currently I fell AM having its own wrapper to\n> > > perform AM specific task and then calling\n> > > CheckForSerializableConflictOut() is fine.\n> >\n> > I think it's right to move the xid handling out of\n> > CheckForSerializableConflictOut(). But I think we also ought to move the\n> > subtransaction handling out of the function - e.g. zheap doesn't\n> > want/need that.\n> >\n> > Thoughts on this Ashwin?\n> >\n\nI think the only part it's doing for sub-transactions is\nSubTransGetTopmostTransaction(xid). If the xid passed to this function is\nalready the topmost transaction, which is the case for zheap and zedstore, then\nthere is no downside to keeping that code here in a common place.", "msg_date": "Wed, 31 Jul 2019 10:42:50 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hi,\n\nOn 2019-07-31 09:57:58 +1200, Thomas Munro wrote:\n> On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. I wonder if we somehow ought to generalize the granularity scheme\n> > for predicate locks to not be tuple/page/relation. But even if, that's\n> > probably a separate patch.\n> \n> What do you have in mind?\n\nMy concern is that continuing to infer the granularity levels from\nthe tid doesn't seem like a great path forward. An AM's use of tids might\nnot necessarily be very amenable to that, if the mapping isn't actually\nblock based.\n\n\n> Perhaps you just want to give those things different labels, \"TID\n> range\" instead of page, for the benefit of \"logical\" TID users?\n> Perhaps you want to permit more levels?
That seems premature as long\n> as TIDs are defined in terms of blocks and offsets, so this stuff is\n> reflected all over the source tree...\n\nI'm mostly wondering if the different levels shouldn't be computed\noutside of predicate.c.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2019 10:50:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hi,\n\nOn 2019-07-31 10:42:50 -0700, Ashwin Agrawal wrote:\n> On Tue, Jul 30, 2019 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> > On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> > > > buffer, instead just takes xid. Push heap specific parts from\n> > > > CheckForSerializableConflictOut() into its own function\n> > > > HeapCheckForSerializableConflictOut() which calls\n> > > > CheckForSerializableConflictOut(). The alternative option could be\n> > > > CheckForSerializableConflictOut() take callback function and\n> > > > callback arguments, which gets called if required after performing\n> > > > prechecks. Though currently I fell AM having its own wrapper to\n> > > > perform AM specific task and then calling\n> > > > CheckForSerializableConflictOut() is fine.\n> > >\n> > > I think it's right to move the xid handling out of\n> > > CheckForSerializableConflictOut(). But I think we also ought to move the\n> > > subtransaction handling out of the function - e.g. zheap doesn't\n> > > want/need that.\n> >\n> > Thoughts on this Ashwin?\n> >\n> \n> I think the only part its doing for sub-transaction is\n> SubTransGetTopmostTransaction(xid). If xid passed to this function is\n> already top most transaction which is case for zheap and zedstore, then\n> there is no downside to keeping that code here in common place.\n\nWell, it's far from a cheap function. 
It'll do unnecessary on-disk\nlookups in many cases. I'd call that quite a downside.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2019 10:55:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Wed, Jul 31, 2019 at 10:55 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-07-31 10:42:50 -0700, Ashwin Agrawal wrote:\n> > On Tue, Jul 30, 2019 at 2:58 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >\n> > > On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> > > > > buffer, instead just takes xid. Push heap specific parts from\n> > > > > CheckForSerializableConflictOut() into its own function\n> > > > > HeapCheckForSerializableConflictOut() which calls\n> > > > > CheckForSerializableConflictOut(). The alternative option could\n> be\n> > > > > CheckForSerializableConflictOut() take callback function and\n> > > > > callback arguments, which gets called if required after\n> performing\n> > > > > prechecks. Though currently I fell AM having its own wrapper to\n> > > > > perform AM specific task and then calling\n> > > > > CheckForSerializableConflictOut() is fine.\n> > > >\n> > > > I think it's right to move the xid handling out of\n> > > > CheckForSerializableConflictOut(). But I think we also ought to move\n> the\n> > > > subtransaction handling out of the function - e.g. zheap doesn't\n> > > > want/need that.\n> > >\n> > > Thoughts on this Ashwin?\n> > >\n> >\n> > I think the only part its doing for sub-transaction is\n> > SubTransGetTopmostTransaction(xid). If xid passed to this function is\n> > already top most transaction which is case for zheap and zedstore, then\n> > there is no downside to keeping that code here in common place.\n>\n> Well, it's far from a cheap function. 
It'll do unnecessary on-disk\n> lookups in many cases. I'd call that quite a downside.\n>\n\nOkay, I agree, it's a costly function and better to avoid the call if possible.\n\nInstead of moving the handling out of the function, how do you feel about\nadding a boolean isTopTransactionId argument to\nCheckForSerializableConflictOut()? The AMs which implicitly know that they only\npass a top transaction id to this function can pass true and avoid the\nfunction call to SubTransGetTopmostTransaction(xid). With this, the\nsubtransaction code remains in a generic place and AMs intending to use it\ncontinue to leverage the common code, plus it explicitly clarifies the\nbehavior as well.", "msg_date": "Wed, 31 Jul 2019 12:37:58 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Wed, Jul 31, 2019 at 12:37 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> On Wed, Jul 31, 2019 at 10:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2019-07-31 10:42:50 -0700, Ashwin Agrawal wrote:\n>> > On Tue, Jul 30, 2019 at 2:58 PM Thomas Munro <thomas.munro@gmail.com>\n>> wrote:\n>> >\n>> > > On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de>\n>> wrote:\n>> > > > > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n>> > > > > buffer, instead just takes xid.
Push heap specific parts from\n>> > > > > CheckForSerializableConflictOut() into its own function\n>> > > > > HeapCheckForSerializableConflictOut() which calls\n>> > > > > CheckForSerializableConflictOut(). The alternative option could\n>> be\n>> > > > > CheckForSerializableConflictOut() take callback function and\n>> > > > > callback arguments, which gets called if required after\n>> performing\n>> > > > > prechecks. Though currently I fell AM having its own wrapper to\n>> > > > > perform AM specific task and then calling\n>> > > > > CheckForSerializableConflictOut() is fine.\n>> > > >\n>> > > > I think it's right to move the xid handling out of\n>> > > > CheckForSerializableConflictOut(). But I think we also ought to\n>> move the\n>> > > > subtransaction handling out of the function - e.g. zheap doesn't\n>> > > > want/need that.\n>> > >\n>> > > Thoughts on this Ashwin?\n>> > >\n>> >\n>> > I think the only part its doing for sub-transaction is\n>> > SubTransGetTopmostTransaction(xid). If xid passed to this function is\n>> > already top most transaction which is case for zheap and zedstore, then\n>> > there is no downside to keeping that code here in common place.\n>>\n>> Well, it's far from a cheap function. It'll do unnecessary on-disk\n>> lookups in many cases. I'd call that quite a downside.\n>>\n>\n> Okay, agree, its costly function and better to avoid the call if possible.\n>\n> Instead of moving the handling out of the function, how do feel about\n> adding boolean isTopTransactionId argument to function\n> CheckForSerializableConflictOut(). The AMs, which implicitly know, only\n> pass top transaction Id to this function, can pass true and avoid the\n> function call to SubTransGetTopmostTransaction(xid). 
With this\n> subtransaction code remains in generic place and AMs intending to use it\n> continue to leverage the common code, plus explicitly clarifies the\n> behavior as well.\n>\n\nAdded argument to function to make the subtransaction handling optional in\nattached v2 of patch.", "msg_date": "Wed, 31 Jul 2019 13:59:24 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hi,\n\nOn 2019-07-31 12:37:58 -0700, Ashwin Agrawal wrote:\n> On Wed, Jul 31, 2019 at 10:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > Hi,\n> >\n> > On 2019-07-31 10:42:50 -0700, Ashwin Agrawal wrote:\n> > > On Tue, Jul 30, 2019 at 2:58 PM Thomas Munro <thomas.munro@gmail.com>\n> > wrote:\n> > >\n> > > > On Tue, Jun 25, 2019 at 6:02 AM Andres Freund <andres@anarazel.de>\n> > wrote:\n> > > > > > - CheckForSerializableConflictOut() no more takes HeapTuple nor\n> > > > > > buffer, instead just takes xid. Push heap specific parts from\n> > > > > > CheckForSerializableConflictOut() into its own function\n> > > > > > HeapCheckForSerializableConflictOut() which calls\n> > > > > > CheckForSerializableConflictOut(). The alternative option could\n> > be\n> > > > > > CheckForSerializableConflictOut() take callback function and\n> > > > > > callback arguments, which gets called if required after\n> > performing\n> > > > > > prechecks. Though currently I fell AM having its own wrapper to\n> > > > > > perform AM specific task and then calling\n> > > > > > CheckForSerializableConflictOut() is fine.\n> > > > >\n> > > > > I think it's right to move the xid handling out of\n> > > > > CheckForSerializableConflictOut(). But I think we also ought to move\n> > the\n> > > > > subtransaction handling out of the function - e.g. 
zheap doesn't\n> > > > > want/need that.\n> > > >\n> > > > Thoughts on this Ashwin?\n> > > >\n> > >\n> > > I think the only part its doing for sub-transaction is\n> > > SubTransGetTopmostTransaction(xid). If xid passed to this function is\n> > > already top most transaction which is case for zheap and zedstore, then\n> > > there is no downside to keeping that code here in common place.\n> >\n> > Well, it's far from a cheap function. It'll do unnecessary on-disk\n> > lookups in many cases. I'd call that quite a downside.\n> >\n>\n> Okay, agree, its costly function and better to avoid the call if possible.\n>\n> Instead of moving the handling out of the function, how do feel about\n> adding boolean isTopTransactionId argument to function\n> CheckForSerializableConflictOut(). The AMs, which implicitly know, only\n> pass top transaction Id to this function, can pass true and avoid the\n> function call to SubTransGetTopmostTransaction(xid). With this\n> subtransaction code remains in generic place and AMs intending to use it\n> continue to leverage the common code, plus explicitly clarifies the\n> behavior as well.\n\nLooking at the code as of master, we currently have:\n\n- PredicateLockTuple() calls SubTransGetTopmostTransaction() to figure\n out whether the tuple has been locked by the current\n transaction. That check afaict just should be\n TransactionIdIsCurrentTransactionId(), without all the other\n stuff that's done today.\n\n TransactionIdIsCurrentTransactionId() imo ought to be optimized to\n always check for the top level transactionid first - that's a good bet\n today, but even more so for the upcoming AMs that won't have separate\n xids for subtransactions.
Alternatively we shouldn't make that a\n binary search for each subtrans level, but just have a small\n simplehash hashtable for xids.\n\n- CheckForSerializableConflictOut() wants to get the toplevel xid for\n the tuple, because that's the one the predicate hashtable stores.\n\n In your patch you've already moved the HTSV() call etc out of\n CheckForSerializableConflictOut(). I'm somewhat inclined to think that\n the SubTransGetTopmostTransaction() call ought to go along with that.\n I don't really think that belongs in predicate.c, especially if\n most/all new AMs don't use subtransaction ids.\n\n The only downside is that currently the\n TransactionIdEquals(xid, GetTopTransactionIdIfAny()) check\n avoids the SubTransGetTopmostTransaction() check.\n\n But again, the better fix for that seems to be to improve the generic\n code. As written the check won't prevent a subtrans lookup for heap\n when subtransactions are in use, and it's IME pretty common for tuples\n to get looked at again in the transaction that has created them. So\n I'm somewhat inclined to think that SubTransGetTopmostTransaction()\n should have a fast-path for the current transaction - probably just\n employing TransactionIdIsCurrentTransactionId().\n\nI don't really see what we gain by having the subtrans handling in the\npredicate code. Especially given that we've already moved the HTSV()\nhandling out, it seems architecturally the wrong place to me - but I\nadmit that that's a fuzzy argument. The relevant mapping should be one\nline in the caller.\n\nI wonder if it'd be worth to combine the\nTransactionIdIsCurrentTransactionId() calls in the heap cases that\ncurrently do both, PredicateLockTuple() and\nHeapCheckForSerializableConflictOut(). The heap_fetch() case probably\nisn't commonly that hot a path, but heap_hot_search_buffer() is.\n\n\nMinor notes:\n- I don't think 'insert_xid' is necessarily great - it could also be the\n updating xid etc.
And while you can argue that an update is an insert\n in the current heap, that's not the case for future AMs.\n- to me\n@@ -1621,7 +1622,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,\n \t\t\tif (valid)\n \t\t\t{\n \t\t\t\tItemPointerSetOffsetNumber(tid, offnum);\n-\t\t\t\tPredicateLockTuple(relation, heapTuple, snapshot);\n+\t\t\t\tPredicateLockTID(relation, &(heapTuple)->t_self, snapshot,\n+\t\t\t\t\t\t\t\t HeapTupleHeaderGetXmin(heapTuple->t_data));\n \t\t\t\tif (all_dead)\n \t\t\t\t\t*all_dead = false;\n \t\t\t\treturn true;\n\n What are those parens - as placed they can't do anything. Did you\n intend to write &(heapTuple->t_self)? Even that is pretty superfluous,\n but it at least clarifies the precedence.\n\n I'm also a bit confused why we don't need to pass in the offset of the\n current tuple, rather than the HOT root tuple here. That's not related\n to this patch. But aren't we locking the wrong tuple here, in case of\n HOT?\n\n- I wonder if CheckForSerializableConflictOutNeeded() shouldn't have a\n portion of it's code as a static inline. In particular, it's a shame\n that we currently perform external function calls at quite the\n frequency when serializable isn't even in use.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2019 14:06:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Thu, Aug 1, 2019 at 2:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > I think the only part its doing for sub-transaction is\n> > > > SubTransGetTopmostTransaction(xid). If xid passed to this function is\n> > > > already top most transaction which is case for zheap and zedstore, then\n> > > > there is no downside to keeping that code here in common place.\n> > >\n> > > Well, it's far from a cheap function. 
It'll do unnecessary on-disk\n> > > lookups in many cases. I'd call that quite a downside.\n> > >\n> >\n> > Okay, agree, it's a costly function and better to avoid the call if possible.\n> >\n> > Instead of moving the handling out of the function, how do you feel about\n> > adding boolean isTopTransactionId argument to function\n> > CheckForSerializableConflictOut(). The AMs, which implicitly know, only\n> > pass top transaction Id to this function, can pass true and avoid the\n> > function call to SubTransGetTopmostTransaction(xid). With this\n> > subtransaction code remains in generic place and AMs intending to use it\n> > continue to leverage the common code, plus explicitly clarifies the\n> > behavior as well.\n>\n> Looking at the code as of master, we currently have:\n>\n> - PredicateLockTuple() calls SubTransGetTopmostTransaction() to figure\n>   out whether the tuple has been locked by the current\n>   transaction. That check afaict just should be\n>   TransactionIdIsCurrentTransactionId(), without all the other\n>   stuff that's done today.\n>\nYeah, this is the only part where predicate locking uses the subxids.\nSince predicate locking always uses the top xid, IMHO, it'll be good\nto make this API independent of subxids.\n\n>   TransactionIdIsCurrentTransactionId() imo ought to be optimized to\n>   always check for the top level transactionid first - that's a good bet\n>   today, but even moreso for the upcoming AMs that won't have separate\n>   xids for subtransactions.  Alternatively we shouldn't make that a\n>   binary search for each subtrans level, but just have a small\n>   simplehash hashtable for xids.\nA check for the top transaction id first and usage of a small simplehash\ntable sound like good optimizations.
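For illustration, the check-the-top-xid-first idea might look roughly like
this standalone sketch. Note that TxnState and xid_is_current() are invented
stand-ins for this sketch, not PostgreSQL's actual TransactionState
machinery, which keeps child xids per nesting level; the point is only the
ordering of the cheap equality test before the binary search:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Invented stand-in for the current transaction's xid bookkeeping:
 * the top-level xid plus a sorted array of subtransaction xids. */
typedef struct TxnState
{
    TransactionId   top_xid;
    const TransactionId *sub_xids;  /* sorted ascending */
    int             nsub;
} TxnState;

/*
 * Is 'xid' one of the current transaction's xids?  Test the top-level
 * xid first - a single comparison, and a good bet in practice - and only
 * fall back to the binary search over subtransaction xids when that
 * misses.  For AMs that only ever hand out top-level xids, the search
 * is never reached at all.
 */
static bool
xid_is_current(const TxnState *txn, TransactionId xid)
{
    if (xid == txn->top_xid)
        return true;            /* fast path: no search needed */

    int lo = 0, hi = txn->nsub - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (txn->sub_xids[mid] == xid)
            return true;
        if (txn->sub_xids[mid] < xid)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return false;
}
```

The real function additionally has to walk nested transaction levels, but
the order of the checks is what the optimization is about.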
But, I'm not sure whether these changes should be\npart of this patch or a separate one.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 11:31:32 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Wed, Jul 31, 2019 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Looking at the code as of master, we currently have:\n>\n\nSuper awesome feedback and insights, thank you!\n\n- PredicateLockTuple() calls SubTransGetTopmostTransaction() to figure\n>   out whether the tuple has been locked by the current\n>   transaction. That check afaict just should be\n>   TransactionIdIsCurrentTransactionId(), without all the other\n>   stuff that's done today.\n>\n\nAgree. v1-0002 patch attached does that now. Please let me know if that's\nwhat you meant.\n\n  TransactionIdIsCurrentTransactionId() imo ought to be optimized to\n>   always check for the top level transactionid first - that's a good bet\n>   today, but even moreso for the upcoming AMs that won't have separate\n>   xids for subtransactions.  Alternatively we shouldn't make that a\n>   binary search for each subtrans level, but just have a small\n>   simplehash hashtable for xids.\n>\n\nv1-0001 patch checks for GetTopTransactionIdIfAny() first in\nTransactionIdIsCurrentTransactionId(), which seems better in general and\neven more so for the future. That mostly meets the needs of the current\ndiscussion.\n\nThe alternative of not using binary search seems like bigger refactoring\nand should be handled as a separate optimization exercise outside of this\nthread.\n\n\n> - CheckForSerializableConflictOut() wants to get the toplevel xid for\n>   the tuple, because that's the one the predicate hashtable stores.\n>\n>   In your patch you've already moved the HTSV() call etc out of\n>   CheckForSerializableConflictOut().
I'm somewhat inclined to think that\n>   the SubTransGetTopmostTransaction() call ought to go along with that.\n>   I don't really think that belongs in predicate.c, especially if\n>   most/all new AMs don't use subtransaction ids.\n>\n>   The only downside is that currently the\n>   TransactionIdEquals(xid, GetTopTransactionIdIfAny()) check\n>   avoids the SubTransGetTopmostTransaction() check.\n>\n>   But again, the better fix for that seems to be to improve the generic\n>   code. As written the check won't prevent a subtrans lookup for heap\n>   when subtransactions are in use, and it's IME pretty common for tuples\n>   to get looked at again in the transaction that has created them. So\n>   I'm somewhat inclined to think that SubTransGetTopmostTransaction()\n>   should have a fast-path for the current transaction - probably just\n>   employing TransactionIdIsCurrentTransactionId().\n>\n\nThat optimization, as Kuntal also mentioned, seems like something which can\nbe done on top of the current patch afterwards.\n\n\n> I don't really see what we gain by having the subtrans handling in the\n> predicate code. Especially given that we've already moved the HTSV()\n> handling out, it seems architecturally the wrong place to me - but I\n> admit that that's a fuzzy argument. The relevant mapping should be one\n> line in the caller.\n>\n\nOkay, I moved the subtransaction handling out of\nCheckForSerializableConflictOut() and have it alongside HTSV() now.\n\nThe reason I felt like leaving the subtransaction handling in a generic\nplace was that it might be premature to think no future AM will need it.\nPlus, all serializable function APIs having the same expectations is\neasier. For instance, PredicateLockTuple() can be passed a top or\nsubtransaction id and it can handle it, but with the change\nCheckForSerializableConflictOut() will only be fed the top transaction ID.
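As a side note on the SubTransGetTopmostTransaction() fast path suggested
in the quote above, a standalone toy model of that idea could look like the
following. The parent_of[] array, current_xids[] and the function names are
all invented for illustration; the real code probes the pg_subtrans SLRU,
which is exactly the possibly-on-disk lookup the fast path wants to skip:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId 0
#define MAX_XID 128

/* Toy stand-in for pg_subtrans: maps each subxid to its parent xid.
 * In PostgreSQL each probe may have to read an SLRU page from disk. */
static TransactionId parent_of[MAX_XID];
static int slow_lookups;            /* counts simulated SLRU probes */

/* Stand-in for the current transaction's xids, top-level xid first. */
static TransactionId current_xids[] = {10, 11, 12};
static const int ncurrent = 3;

static bool
xid_is_current(TransactionId xid)
{
    for (int i = 0; i < ncurrent; i++)
        if (current_xids[i] == xid)
            return true;
    return false;
}

static TransactionId
subtrans_get_topmost(TransactionId xid)
{
    /* Fast path: anything belonging to the current transaction resolves
     * to our own top-level xid with no pg_subtrans probes at all. */
    if (xid_is_current(xid))
        return current_xids[0];

    /* Slow path: walk the parent chain, one probe per level. */
    for (;;)
    {
        TransactionId parent;

        slow_lookups++;             /* models one SLRU page access */
        parent = parent_of[xid];
        if (parent == InvalidTransactionId)
            return xid;
        xid = parent;
    }
}
```

The win is that tuples created by the current transaction - a common case,
as noted above - never pay for the chain walk.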
But it's fine, and I can see the point: an AM needing\nit can easily get the top transaction ID and feed that in, as heap does.\n\n\n> I wonder if it'd be worth combining the\n> TransactionIdIsCurrentTransactionId() calls in the heap cases that\n> currently do both, PredicateLockTuple() and\n> HeapCheckForSerializableConflictOut(). The heap_fetch() case probably\n> isn't commonly that hot a path, but heap_hot_search_buffer() is.\n>\n\nMaybe; will give it thought separately from the current patch.\n\n\n> Minor notes:\n> - I don't think 'insert_xid' is necessarily great - it could also be the\n>   updating xid etc. And while you can argue that an update is an insert\n>   in the current heap, that's not the case for future AMs.\n> - to me\n> @@ -1621,7 +1622,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation\n> relation, Buffer buffer,\n>                         if (valid)\n>                         {\n>                                 ItemPointerSetOffsetNumber(tid, offnum);\n>  -                              PredicateLockTuple(relation, heapTuple,\n> snapshot);\n> +                               PredicateLockTID(relation,\n> &(heapTuple)->t_self, snapshot,\n> +\n> HeapTupleHeaderGetXmin(heapTuple->t_data));\n>                                 if (all_dead)\n>                                         *all_dead = false;\n>                                 return true;\n>\n>   What are those parens - as placed they can't do anything. Did you\n>   intend to write &(heapTuple->t_self)? Even that is pretty superfluous,\n>   but it at least clarifies the precedence.\n>\n\nFixed. No idea what I was thinking there; most likely I intended to have it\nas &(heapTuple->t_self).\n\n  I'm also a bit confused why we don't need to pass in the offset of the\n>   current tuple, rather than the HOT root tuple here. That's not related\n>   to this patch. But aren't we locking the wrong tuple here, in case of\n>   HOT?\n>\n\nYes, the root is being locked here instead of the HOT tuple. But I don't\nhave full context on the same. If we wish to fix it though, it can be\neasily done now with the patch by passing \"tid\" instead of\n&(heapTuple->t_self).\n\n- I wonder if CheckForSerializableConflictOutNeeded() shouldn't have a\n>   portion of its code as a static inline.
In particular, it's a shame\n>   that we currently perform external function calls at quite the\n>   frequency when serializable isn't even in use.\n>\n\nI am not sure about the \"portion of its code\" part.\nSerializationNeededForRead() is a static inline function in the C file. We\ncan't inline CheckForSerializableConflictOutNeeded() without moving\nSerializationNeededForRead() and some other variables to a header file.\nCheckForSerializableConflictOut() wasn't inline either, so a function call\nwas performed earlier as well when serializable isn't even in use.\n\nI understand that with the refactor, HeapCheckForSerializableConflictOut()\nis called, which calls CheckForSerializableConflictOutNeeded(). If that's\nthe problem, for addressing the same, I had proposed an alternative way to\nrefactor: CheckForSerializableConflictOut() can take a callback function\nand a void* callback argument for the AM-specific check instead. So, the\nflow would be the AM calling CheckForSerializableConflictOut() as today,\nwhich only if serializable is in use will invoke the callback to check with\nthe AM whether more work should be performed or not. Essentially\nHeapCheckForSerializableConflictOut() would become the callback function\ninstead. Due to the void* callback argument aspect I didn't like that\nsolution and felt the AM performing the checks and calling\nCheckForSerializableConflictOut() seems more straightforward.", "msg_date": "Fri, 2 Aug 2019 16:56:22 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Sat, Aug 3, 2019 at 11:56 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> On Wed, Jul 31, 2019 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n>> I'm also a bit confused why we don't need to pass in the offset of the\n>> current tuple, rather than the HOT root tuple here. That's not related\n>> to this patch.
But aren't we locking the wrong tuple here, in case of\n>> HOT?\n>\n> Yes, the root is being locked here instead of the HOT tuple. But I don't have full context on the same. If we wish to fix it though, it can be easily done now with the patch by passing \"tid\" instead of &(heapTuple->t_self).\n\nHere are three relevant commits:\n\n1.  Commit dafaa3efb75 \"Implement genuine serializable isolation\nlevel.\" (2011) locked the root tuple, because it set t_self to *tid.\nPossibly due to confusion about the effect of the preceding line\nItemPointerSetOffsetNumber(tid, offnum).\n\n2.  Commit 81fbbfe3352 \"Fix bugs in SSI tuple locking.\" (2013)\nfixed that by adding ItemPointerSetOffsetNumber(&heapTuple->t_self,\noffnum).\n\n3.
This must be in want of an isolation test, but I haven't yet\ntried to get my head around how to write a test that would show the\ndifference.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Mon, 5 Aug 2019 20:58:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hi,\n\nOn 2019-08-05 20:58:05 +1200, Thomas Munro wrote:\n> On Sat, Aug 3, 2019 at 11:56 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > On Wed, Jul 31, 2019 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n> >> I'm also a bit confused why we don't need to pass in the offset of the\n> >> current tuple, rather than the HOT root tuple here. That's not related\n> >> to this patch. But aren't we locking the wrong tuple here, in case of\n> >> HOT?\n> >\n> > Yes, root is being locked here instead of the HOT. But I don't have full context on the same. If we wish to fix it though, can be easily done now with the patch by passing \"tid\" instead of &(heapTuple->t_self).\n> \n> Here are three relevant commits:\n\nThanks for digging!\n\n\n> 1. Commit dafaa3efb75 \"Implement genuine serializable isolation\n> level.\" (2011) locked the root tuple, because it set t_self to *tid.\n> Possibly due to confusion about the effect of the preceding line\n> ItemPointerSetOffsetNumber(tid, offnum).\n> \n> 2. Commit commit 81fbbfe3352 \"Fix bugs in SSI tuple locking.\" (2013)\n> fixed that by adding ItemPointerSetOffsetNumber(&heapTuple->t_self,\n> offnum).\n\nHm. It's not at all sure that it's ok to report the non-root tuple tid\nhere. I.e. I'm fairly sure there was a reason to not just set it to the\nactual tid. I think I might have written that up on the list at some\npoint. Let me dig in memory and list. Obviously possible that that was\nalso obsoleted by parallel changes.\n\n\n> 3. 
Commit b89e151054a \"Introduce logical decoding.\" (2014) also did\n> ItemPointerSet(&(heapTuple->t_self), BufferGetBlockNumber(buffer),\n> offnum), for the benefit of historical MVCC snapshots (unnecessarily,\n> considering the change in the commit #2), but then, intending to\n> \"reset to original, non-redirected, tid\", clobbered it, reintroducing\n> the bug fixed by #2.\n\n> My first guess is that commit #3 above was developed before commit #2,\n> and finished up clobbering it.\n\nYea, that sounds likely.\n\n\n> This must be in want of an isolation test, but I haven't yet tried to\n> get my head around how to write a test that would show the difference.\n\nIndeed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Aug 2019 09:35:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Tue, Aug 6, 2019 at 4:35 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-08-05 20:58:05 +1200, Thomas Munro wrote:\n> > 1. Commit dafaa3efb75 \"Implement genuine serializable isolation\n> > level.\" (2011) locked the root tuple, because it set t_self to *tid.\n> > Possibly due to confusion about the effect of the preceding line\n> > ItemPointerSetOffsetNumber(tid, offnum).\n> >\n> > 2. Commit commit 81fbbfe3352 \"Fix bugs in SSI tuple locking.\" (2013)\n> > fixed that by adding ItemPointerSetOffsetNumber(&heapTuple->t_self,\n> > offnum).\n>\n> Hm. It's not at all sure that it's ok to report the non-root tuple tid\n> here. I.e. I'm fairly sure there was a reason to not just set it to the\n> actual tid. I think I might have written that up on the list at some\n> point. Let me dig in memory and list. Obviously possible that that was\n> also obsoleted by parallel changes.\n\nAdding Heikki and Kevin.\n\nI haven't found your earlier discussion about that yet, but would be\nkeen to read it if you can find it. 
I wondered if your argument\nmight have had something to do with heap pruning, but I can't find a\nproblem there. It's not as though the TID of any visible tuples\nchange, it's just that some dead stuff goes away and the root line\npointer is changed to LP_REDIRECT if it wasn't already.\n\nAs for the argument for locking the tuple we emit rather than the HOT\nroot, I think the questions are: (1) How exactly do we get away with\nlocking only one version in a regular non-HOT update chain? (2) Is it\nOK to do that with a HOT root?\n\nThe answer to the first question is given in README-SSI[1].\nUnfortunately it doesn't discuss HOT directly, but I suspect the\nanswer is no, HOT is not special here. By my reading, it relies on\nthe version you lock being the version visible to your snapshot, which\nis important because later updates have to touch that tuple to write\nthe next version. That doesn't apply to some arbitrarily older tuple\nthat happens to be a HOT root. Concretely, heap_update() does\nCheckForSerializableConflictIn(relation, &oldtup, buffer), which is\nonly going to produce a rw conflict if T1 took an SIREAD on precisely\nthe version T2 locks in that path, not some arbitrarily older version\nthat happens to be a HOT root. 
A HOT root might never be considered\nagain by concurrent writers, no?\n\nAs a minor consequence, the optimisation in\nCheckTargetForConflictsIn() assumes that a tuple being updated has the\nsame tag as we locked when reading the tuple, which isn't the case if\nwe locked the root while reading but now have the TID for the version\nwe actually read, so in master we leak a tuple lock unnecessarily\nuntil end-of-transaction when we update a HOT tuple.\n\n> > This must be in want of an isolation test, but I haven't yet tried to\n> > get my head around how to write a test that would show the difference.\n>\n> Indeed.\n\nOne practical problem is that the only way to reach\nPredicateLockTuple() is from an index scan, and the index scan locks\nthe index page (or the whole index, depending on\nrd_indam->ampredlocks). So I think if you want to see a serialization\nanomaly you'll need multiple indexes (so that index page locks don't\nhide the problem), a long enough HOT chain and then probably several\ntransactions to be able to miss a cycle that should be picked up by\nthe logic in [1]. I'm out of steam for this problem today though.\n\nThe simple test from the report[3] that resulted in commit 81fbbfe3352\ndoesn't work for me (ie with permutation \"r1\" \"r2\" \"w1\" \"w2\" \"c1\" \"c2\"\ntwice in a row). The best I've come up with so far is an assertion\nthat we predicate-lock the same row version that we emitted to the\nuser, when reached via an index lookup that visits a HOT row. The\ntest outputs 'f' for master, but 't' with the change to heapam.c.\n\n[1] Excerpt from README-SSI:\n\n===\n * PostgreSQL does not use \"update in place\" with a rollback log\nfor its MVCC implementation. Where possible it uses \"HOT\" updates on\nthe same page (if there is room and no indexed value is changed).\nFor non-HOT updates the old tuple is expired in place and a new tuple\nis inserted at a new location. 
Because of this difference, a tuple\nlock in PostgreSQL doesn't automatically lock any other versions of a\nrow. We don't try to copy or expand a tuple lock to any other\nversions of the row, based on the following proof that any additional\nserialization failures we would get from that would be false\npositives:\n\n o If transaction T1 reads a row version (thus acquiring a\npredicate lock on it) and a second transaction T2 updates that row\nversion (thus creating a rw-conflict graph edge from T1 to T2), must a\nthird transaction T3 which re-updates the new version of the row also\nhave a rw-conflict in from T1 to prevent anomalies? In other words,\ndoes it matter whether we recognize the edge T1 -> T3?\n\n o If T1 has a conflict in, it certainly doesn't. Adding the\nedge T1 -> T3 would create a dangerous structure, but we already had\none from the edge T1 -> T2, so we would have aborted something anyway.\n(T2 has already committed, else T3 could not have updated its output;\nbut we would have aborted either T1 or T1's predecessor(s). Hence\nno cycle involving T1 and T3 can survive.)\n\n o Now let's consider the case where T1 doesn't have a\nrw-conflict in. If that's the case, for this edge T1 -> T3 to make a\ndifference, T3 must have a rw-conflict out that induces a cycle in the\ndependency graph, i.e. a conflict out to some transaction preceding T1\nin the graph. (A conflict out to T1 itself would be problematic too,\nbut that would mean T1 has a conflict in, the case we already\neliminated.)\n\n o So now we're trying to figure out if there can be an\nrw-conflict edge T3 -> T0, where T0 is some transaction that precedes\nT1. For T0 to precede T1, there has to be some edge, or sequence of\nedges, from T0 to T1. At least the last edge has to be a wr-dependency\nor ww-dependency rather than a rw-conflict, because T1 doesn't have a\nrw-conflict in. 
And that gives us enough information about the order\nof transactions to see that T3 can't have a rw-conflict to T0:\n - T0 committed before T1 started (the wr/ww-dependency implies this)\n - T1 started before T2 committed (the T1->T2 rw-conflict implies this)\n - T2 committed before T3 started (otherwise, T3 would get aborted\n because of an update conflict)\n\n o That means T0 committed before T3 started, and therefore\nthere can't be a rw-conflict from T3 to T0.\n\n o So in all cases, we don't need the T1 -> T3 edge to\nrecognize cycles. Therefore it's not necessary for T1's SIREAD lock\non the original tuple version to cover later versions as well.\n===\n\n[2] https://www.postgresql.org/message-id/52527E4D.4060302%40vmware.com\n[3] https://www.postgresql.org/message-id/flat/523C29A8.20904%40vmware.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Tue, 6 Aug 2019 16:20:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "Hello Thomas,\n\nOn Tue, Aug 6, 2019 at 9:50 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Aug 6, 2019 at 4:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-08-05 20:58:05 +1200, Thomas Munro wrote:\n> > > 1. Commit dafaa3efb75 \"Implement genuine serializable isolation\n> > > level.\" (2011) locked the root tuple, because it set t_self to *tid.\n> > > Possibly due to confusion about the effect of the preceding line\n> > > ItemPointerSetOffsetNumber(tid, offnum).\n> > >\n> > > 2. Commit commit 81fbbfe3352 \"Fix bugs in SSI tuple locking.\" (2013)\n> > > fixed that by adding ItemPointerSetOffsetNumber(&heapTuple->t_self,\n> > > offnum).\n> >\n> > Hm. It's not at all sure that it's ok to report the non-root tuple tid\n> > here. I.e. I'm fairly sure there was a reason to not just set it to the\n> > actual tid. 
I think I might have written that up on the list at some\n> > point. Let me dig in memory and list. Obviously possible that that was\n> > also obsoleted by parallel changes.\n>\n> Adding Heikki and Kevin.\n>\n> I haven't found your earlier discussion about that yet, but would be\n> keen to read it if you can find it. I wondered if your argument\n> might have had something to do with heap pruning, but I can't find a\n> problem there. It's not as though the TID of any visible tuples\n> change, it's just that some dead stuff goes away and the root line\n> pointer is changed to LP_REDIRECT if it wasn't already.\n>\n> As for the argument for locking the tuple we emit rather than the HOT\n> root, I think the questions are: (1) How exactly do we get away with\n> locking only one version in a regular non-HOT update chain? (2) Is it\n> OK to do that with a HOT root?\n>\n> The answer to the first question is given in README-SSI[1].\n> Unfortunately it doesn't discuss HOT directly, but I suspect the\n> answer is no, HOT is not special here. By my reading, it relies on\n> the version you lock being the version visible to your snapshot, which\n> is important because later updates have to touch that tuple to write\n> the next version. That doesn't apply to some arbitrarily older tuple\n> that happens to be a HOT root. Concretely, heap_update() does\n> CheckForSerializableConflictIn(relation, &oldtup, buffer), which is\n> only going to produce a rw conflict if T1 took an SIREAD on precisely\n> the version T2 locks in that path, not some arbitrarily older version\n> that happens to be a HOT root. A HOT root might never be considered\n> again by concurrent writers, no?\n>\nIf I understand the problem, this is the same serialization issue as\nwith in-place updates for zheap. I had a discussion with Kevin\nregarding the same in this thread [1]. 
It seems if we're locking the\nhot root id, we may report some false positive serializable errors.\n\n\n> > > This must be in want of an isolation test, but I haven't yet tried to\n> > > get my head around how to write a test that would show the difference.\n> >\n> > Indeed.\n>\n> One practical problem is that the only way to reach\n> PredicateLockTuple() is from an index scan, and the index scan locks\n> the index page (or the whole index, depending on\n> rd_indam->ampredlocks). So I think if you want to see a serialization\n> anomaly you'll need multiple indexes (so that index page locks don't\n> hide the problem), a long enough HOT chain and then probably several\n> transactions to be able to miss a cycle that should be picked up by\n> the logic in [1]. I'm out of steam for this problem today though.\n>\n> The simple test from the report[3] that resulted in commit 81fbbfe3352\n> doesn't work for me (ie with permutation \"r1\" \"r2\" \"w1\" \"w2\" \"c1\" \"c2\"\n> twice in a row). The best I've come up with so far is an assertion\n> that we predicate-lock the same row version that we emitted to the\n> user, when reached via an index lookup that visits a HOT row. The\n> test outputs 'f' for master, but 't' with the change to heapam.c.\n>\nHere is an example from the multiple-row-versions isolation test which\nfails if we perform in-place updates for zheap. 
I think the same will\nbe relevant if we lock root tuple id instead of the tuple itself.\nStep 1: T1-> BEGIN; Read FROM t where id=1000000;\nStep 2: T2-> BEGIN; UPDATE t where id=1000000; COMMIT; (creates T1->T2)\nStep 3: T3-> BEGIN; UPDATE t where id=1000000; Read FROM t where id=500000;\nStep 4: T4-> BEGIN; UPDATE t where id= 500000; Read FROM t where id=1;\nCOMMIT; (creates T3->T4)\nStep 5: T3-> COMMIT;\nStep 6: T1-> UPDATE t where id=1; COMMIT; (creates T4->T1,)\n\nAt step 6, when the update statement is executed, T1 is rolled back\nbecause of T3->T4->T1.\n\nBut for zheap, step 3 also creates a dependency T1->T3 because of\nin-place update. When T4 commits in step 4, it marks T3 as doomed\nbecause of T1 --> T3 --> T4. Hence, in step 5, T3 is rolled back.\n\n[1] Re: In-place updates and serializable transactions:\nhttps://www.postgresql.org/message-id/CAGz5QCJzreUqJqHeXrbEs6xb0zCNKBHhOj6D9Tjd3btJTzydxg%40mail.gmail.com\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Aug 2019 13:51:10 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On 06/08/2019 07:20, Thomas Munro wrote:\n> On Tue, Aug 6, 2019 at 4:35 AM Andres Freund <andres@anarazel.de> wrote:\n>> On 2019-08-05 20:58:05 +1200, Thomas Munro wrote:\n>>> 1. Commit dafaa3efb75 \"Implement genuine serializable isolation\n>>> level.\" (2011) locked the root tuple, because it set t_self to *tid.\n>>> Possibly due to confusion about the effect of the preceding line\n>>> ItemPointerSetOffsetNumber(tid, offnum).\n>>>\n>>> 2. Commit commit 81fbbfe3352 \"Fix bugs in SSI tuple locking.\" (2013)\n>>> fixed that by adding ItemPointerSetOffsetNumber(&heapTuple->t_self,\n>>> offnum).\n>>\n>> Hm. It's not at all sure that it's ok to report the non-root tuple tid\n>> here. I.e. 
I'm fairly sure there was a reason to not just set it to the\n>> actual tid. I think I might have written that up on the list at some\n>> point. Let me dig in memory and list. Obviously possible that that was\n>> also obsoleted by parallel changes.\n> \n> Adding Heikki and Kevin.\n> \n> I haven't found your earlier discussion about that yet, but would be\n> keen to read it if you can find it. I wondered if your argument\n> might have had something to do with heap pruning, but I can't find a\n> problem there. It's not as though the TID of any visible tuples\n> change, it's just that some dead stuff goes away and the root line\n> pointer is changed to LP_REDIRECT if it wasn't already.\n> \n> As for the argument for locking the tuple we emit rather than the HOT\n> root, I think the questions are: (1) How exactly do we get away with\n> locking only one version in a regular non-HOT update chain? (2) Is it\n> OK to do that with a HOT root?\n> \n> The answer to the first question is given in README-SSI[1].\n> Unfortunately it doesn't discuss HOT directly, but I suspect the\n> answer is no, HOT is not special here. By my reading, it relies on\n> the version you lock being the version visible to your snapshot, which\n> is important because later updates have to touch that tuple to write\n> the next version. That doesn't apply to some arbitrarily older tuple\n> that happens to be a HOT root. Concretely, heap_update() does\n> CheckForSerializableConflictIn(relation, &oldtup, buffer), which is\n> only going to produce a rw conflict if T1 took an SIREAD on precisely\n> the version T2 locks in that path, not some arbitrarily older version\n> that happens to be a HOT root. A HOT root might never be considered\n> again by concurrent writers, no?\n\nYour analysis is spot on. 
Thanks for the clear write-up!\n\n>>> This must be in want of an isolation test, but I haven't yet tried to\n>>> get my head around how to write a test that would show the difference.\n>>\n>> Indeed.\n> \n> One practical problem is that the only way to reach\n> PredicateLockTuple() is from an index scan, and the index scan locks\n> the index page (or the whole index, depending on\n> rd_indam->ampredlocks). So I think if you want to see a serialization\n> anomaly you'll need multiple indexes (so that index page locks don't\n> hide the problem), a long enough HOT chain and then probably several\n> transactions to be able to miss a cycle that should be picked up by\n> the logic in [1]. I'm out of steam for this problem today though.\n\nI had some steam, and wrote a spec that reproduces this bug. It wasn't \nactually that hard to reproduce, fortunately. Or unfortunately; people \nmight well be hitting it in production. I used the \"freezetest.spec\" \nfrom the 2013 thread as the starting point, and added one UPDATE to the \ninitialization, so that the test starts with an already HOT-updated \ntuple. It should throw a serialization error, but on current master, it \ndoes not. After applying your fix.txt, it does.\n\nYour fix.txt seems correct. For clarity, I'd prefer moving things around \na bit, though, so that the t_self is set earlier in the function, at the \nsame place where the other HeapTuple fields are set. And set blkno and \noffnum together, in one ItemPointerSet call. With that, I'm not sure we \nneed such a verbose comment explaining why t_self needs to be updated \nbut I kept it for now.\n\nAttached is a patch that contains your fix.txt with the changes for \nclarity mentioned above, and an isolationtester test case.\n\nPS. Because heap_hot_search_buffer() now always sets heapTuple->t_self \nto the returned tuple version, updating *tid is redundant. 
And the call \nin heapam_index_fetch_tuple() wouldn't need to do \n\"bslot->base.tupdata.t_self = *tid\".\n\n- Heikki", "msg_date": "Tue, 6 Aug 2019 12:26:56 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Tue, Aug 6, 2019 at 9:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I had some steam, and wrote a spec that reproduces this bug. It wasn't\n> actually that hard to reproduce, fortunately. Or unfortunately; people\n> might well be hitting it in production. I used the \"freezetest.spec\"\n> from the 2013 thread as the starting point, and added one UPDATE to the\n> initialization, so that the test starts with an already HOT-updated\n> tuple. It should throw a serialization error, but on current master, it\n> does not. After applying your fix.txt, it does.\n\nThanks! Ahh, right, I was expecting it to be harder to see an\nundetected anomaly, because of the index page lock, but of course we\nnever actually write to that page so it's just the heap tuple lock\nholding everything together.\n\n> Your fix.txt seems correct. For clarity, I'd prefer moving things around\n> a bit, though, so that the t_self is set earlier in the function, at the\n> same place where the other HeapTuple fields are set. And set blkno and\n> offnum together, in one ItemPointerSet call. With that, I'm not sure we\n> need such a verbose comment explaining why t_self needs to be updated\n> but I kept it for now.\n\n+1\n\n> Attached is a patch that contains your fix.txt with the changes for\n> clarity mentioned above, and an isolationtester test case.\n\nLGTM.\n\n> PS. Because heap_hot_search_buffer() now always sets heapTuple->t_self\n> to the returned tuple version, updating *tid is redundant. 
And the call\n> in heapam_index_fetch_tuple() wouldn't need to do\n> \"bslot->base.tupdata.t_self = *tid\".\n\nRight, that sounds like a separate improvement for master only.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Aug 2019 22:35:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On 06/08/2019 13:35, Thomas Munro wrote:\n> On Tue, Aug 6, 2019 at 9:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> Attached is a patch that contains your fix.txt with the changes for\n>> clarity mentioned above, and an isolationtester test case.\n> \n> LGTM.\n\nPushed, thanks!\n\n- Heikki\n\n\n", "msg_date": "Wed, 7 Aug 2019 13:01:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Fri, Aug 2, 2019 at 4:56 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> On Wed, Jul 31, 2019 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Looking at the code as of master, we currently have:\n>>\n>\n> Super awesome feedback and insights, thank you!\n>\n> - PredicateLockTuple() calls SubTransGetTopmostTransaction() to figure\n>> out a whether the tuple has been locked by the current\n>> transaction. That check afaict just should be\n>> TransactionIdIsCurrentTransactionId(), without all the other\n>> stuff that's done today.\n>>\n>\n> Agree. v1-0002 patch attached does that now. Please let me know if that's\n> what you meant.\n>\n> TransactionIdIsCurrentTransactionId() imo ought to be optimized to\n>> always check for the top level transactionid first - that's a good bet\n>> today, but even moreso for the upcoming AMs that won't have separate\n>> xids for subtransactions. 
Alternatively we shouldn't make that a\n>> binary search for each subtrans level, but just have a small\n>> simplehash hashtable for xids.\n>>\n>\n> v1-0001 patch checks for GetTopTransactionIdIfAny() first in\n> TransactionIdIsCurrentTransactionId() which seems yes better in general and\n> more for future. That mostly meets the needs for current discussion.\n>\n> The alternative of not using binary search seems bigger refactoring and\n> should be handled as separate optimization exercise outside of this thread.\n>\n>\n>> - CheckForSerializableConflictOut() wants to get the toplevel xid for\n>> the tuple, because that's the one the predicate hashtable stores.\n>>\n>> In your patch you've already moved the HTSV() call etc out of\n>> CheckForSerializableConflictOut(). I'm somewhat inclined to think that\n>> the SubTransGetTopmostTransaction() call ought to go along with that.\n>> I don't really think that belongs in predicate.c, especially if\n>> most/all new AMs don't use subtransaction ids.\n>>\n>> The only downside is that currently the\n>> TransactionIdEquals(xid, GetTopTransactionIdIfAny()) check\n>> avoids the SubTransGetTopmostTransaction() check.\n>>\n>> But again, the better fix for that seems to be to improve the generic\n>> code. As written the check won't prevent a subtrans lookup for heap\n>> when subtransactions are in use, and it's IME pretty common for tuples\n>> to get looked at again in the transaction that has created them. So\n>> I'm somewhat inclined to think that SubTransGetTopmostTransaction()\n>> should have a fast-path for the current transaction - probably just\n>> employing TransactionIdIsCurrentTransactionId().\n>>\n>\n> That optimization, as Kuntal also mentioned, seems something which can be\n> done on-top afterwards on current patch.\n>\n>\n>> I don't really see what we gain by having the subtrans handling in the\n>> predicate code. 
Especially given that we've already moved the HTSV()\n>> handling out, it seems architecturally the wrong place to me - but I\n>> admit that that's a fuzzy argument. The relevant mapping should be one\n>> line in the caller.\n>>\n>\n> Okay, I moved the sub transaction handling out of\n> CheckForSerializableConflictOut() and have it along side HTSV() now.\n>\n> The reason I felt leaving subtransaction handling in generic place, was it\n> might be premature to thing no future AM will need it. Plus, all\n> serializable function api's having same expectations is easier. Like\n> PredicateLockTuple() can be passed top or subtransaction id and it can\n> handle it but with the change CheckForSerializableConflictOut() only be\n> feed top transaction ID. But its fine and can see the point of AM needing\n> it can easily get top transaction ID and feed it as heap.\n>\n>\n>> I wonder if it'd be wroth to combine the\n>> TransactionIdIsCurrentTransactionId() calls in the heap cases that\n>> currently do both, PredicateLockTuple() and\n>> HeapCheckForSerializableConflictOut(). The heap_fetch() case probably\n>> isn't commonly that hot a pathq, but heap_hot_search_buffer() is.\n>>\n>\n> Maybe, will give thought to it separate from the current patch.\n>\n>\n>> Minor notes:\n>> - I don't think 'insert_xid' is necessarily great - it could also be the\n>> updating xid etc. And while you can argue that an update is an insert\n>> in the current heap, that's not the case for future AMs.\n>> - to me\n>> @@ -1621,7 +1622,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation\n>> relation, Buffer buffer,\n>> if (valid)\n>> {\n>> ItemPointerSetOffsetNumber(tid, offnum);\n>> - PredicateLockTuple(relation, heapTuple,\n>> snapshot);\n>> + PredicateLockTID(relation,\n>> &(heapTuple)->t_self, snapshot,\n>> +\n>> HeapTupleHeaderGetXmin(heapTuple->t_data));\n>> if (all_dead)\n>> *all_dead = false;\n>> return true;\n>>\n>> What are those parens - as placed they can't do anything. 
Did you\n>> intend to write &(heapTuple->t_self)? Even that is pretty superfluous,\n>> but it at least clarifies the precedence.\n>>\n>\n> Fixed. No idea what I was thinking there, mostly feel I intended to have\n> it as like &(heapTuple->t_self).\n>\n> I'm also a bit confused why we don't need to pass in the offset of the\n>> current tuple, rather than the HOT root tuple here. That's not related\n>> to this patch. But aren't we locking the wrong tuple here, in case of\n>> HOT?\n>>\n>\n> Yes, root is being locked here instead of the HOT. But I don't have full\n> context on the same. If we wish to fix it though, can be easily done now\n> with the patch by passing \"tid\" instead of &(heapTuple->t_self).\n>\n> - I wonder if CheckForSerializableConflictOutNeeded() shouldn't have a\n>> portion of it's code as a static inline. In particular, it's a shame\n>> that we currently perform external function calls at quite the\n>> frequency when serializable isn't even in use.\n>>\n>\n> I am not sure on portion of the code part? SerializationNeededForRead() is\n> static inline function in C file. Can't inline\n> CheckForSerializableConflictOutNeeded() without moving\n> SerializationNeededForRead() and some other variables to header file.\n> CheckForSerializableConflictOut() wasn't inline either, so a function call\n> was performed earlier as well when serializable isn't even in use.\n>\n> I understand that with refactor, HeapCheckForSerializableConflictOut() is\n> called which calls CheckForSerializableConflictOutNeeded(). If that's the\n> problem, for addressing the same, I had proposed alternative way to\n> refactor. CheckForSerializableConflictOut() can take callback function and\n> void* callback argument for AM specific check instead. So, the flow would\n> be AM calling CheckForSerializableConflictOut() as today and only if\n> serializable in use will invoke the callback to check with AM if more work\n> should be performed or not. 
Essentially\n> HeapCheckForSerializableConflictOut() will become callback function\n> instead. Due to void* callback argument aspect I didn't like that solution\n> and felt AM performing checks and calling CheckForSerializableConflictOut()\n> seems more straight forward.\n>\n\nAttaching re-based version of the patches on top of current master, which\nhas the fix for HOT serializable predicate locking bug spotted by Andres\ncommitted now.", "msg_date": "Wed, 7 Aug 2019 11:53:39 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Thu, Aug 8, 2019 at 6:53 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>>> - I wonder if CheckForSerializableConflictOutNeeded() shouldn't have a\n>>> portion of it's code as a static inline. In particular, it's a shame\n>>> that we currently perform external function calls at quite the\n>>> frequency when serializable isn't even in use.\n>>\n>> I am not sure on portion of the code part? SerializationNeededForRead() is static inline function in C file. Can't inline CheckForSerializableConflictOutNeeded() without moving SerializationNeededForRead() and some other variables to header file. CheckForSerializableConflictOut() wasn't inline either, so a function call was performed earlier as well when serializable isn't even in use.\n\nI agree that it's strange that we do these high frequency function\ncalls just to figure out that we're not even using this stuff, which\nultimately comes down to the static global variable MySerializableXact\nbeing not reachable from (say) an inline function defined in a header.\nThat's something to look into in another thread.\n\n> Attaching re-based version of the patches on top of current master, which has the fix for HOT serializable predicate locking bug spotted by Andres committed now.\n\nI'm planning to commit these three patches on Monday. 
I've attached\nversions with whitespace-only changes from pgindent, and commit\nmessages lightly massaged and updated to point to this discussion and\nreviewers.", "msg_date": "Fri, 8 Nov 2019 17:43:46 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Thu, Nov 7, 2019 at 8:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Aug 8, 2019 at 6:53 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> >>> - I wonder if CheckForSerializableConflictOutNeeded() shouldn't have a\n> >>> portion of it's code as a static inline. In particular, it's a shame\n> >>> that we currently perform external function calls at quite the\n> >>> frequency when serializable isn't even in use.\n> >>\n> >> I am not sure on portion of the code part? SerializationNeededForRead()\n> is static inline function in C file. Can't inline\n> CheckForSerializableConflictOutNeeded() without moving\n> SerializationNeededForRead() and some other variables to header file.\n> CheckForSerializableConflictOut() wasn't inline either, so a function call\n> was performed earlier as well when serializable isn't even in use.\n>\n> I agree that it's strange that we do these high frequency function\n> calls just to figure out that we're not even using this stuff, which\n> ultimately comes down to the static global variable MySerializableXact\n> being not reachable from (say) an inline function defined in a header.\n> That's something to look into in another thread.\n>\n> > Attaching re-based version of the patches on top of current master,\n> which has the fix for HOT serializable predicate locking bug spotted by\n> Andres committed now.\n>\n> I'm planning to commit these three patches on Monday.
I've attached\n> versions with whitespace-only changes from pgindent, and commit\n> messages lightly massaged and updated to point to this discussion and\n> reviewers.\n>\n\nThanks a lot, sounds good.", "msg_date": "Fri, 8 Nov 2019 11:40:59 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Sat, Nov 9, 2019 at 8:41 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> On Thu, Nov 7, 2019 at 8:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> I'm planning to commit these three patches on Monday. I've attached\n>> versions with whitespace-only changes from pgindent, and commit\n>> messages lightly massaged and updated to point to this discussion and\n>> reviewers.\n>\n> Thanks a lot, sounds good.\n\nHi Ashwin,\n\nI pushed the first two, but on another read-through of the main patch\nI didn't like the comments for CheckForSerializableConflictOut() or\nthe fact that it checks SerializationNeededForRead() again, after I\nthought a bit about what the contract for this API is now. Here's a\nversion with small fixup that I'd like to squash into the patch.\nPlease let me know what you think, or if you see how to improve it\nfurther.", "msg_date": "Mon, 11 Nov 2019 17:20:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Sun, Nov 10, 2019 at 8:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> I pushed the first two,\n\n\nThank You!\n\nbut on another read-through of the main patch\n> I didn't like the comments for CheckForSerializableConflictOut() or\n> the fact that it checks SerializationNeededForRead() again, after I\n> thought a bit about what the contract for this API is now.
Here's a\n> version with small fixup that I'd like to squash into the patch.\n> Please let me know what you think,\n\n\nThe thought or reasoning behind having SerializationNeededForRead()\ninside CheckForSerializableConflictOut() is to keep that API clean and\ncomplete by itself. Only if AM like heap needs to perform some AM\nspecific checking only for serialization needed case can do so but is\nnot forced. So, if AM for example Zedstore doesn't need to do any\nspecific checking, then it can directly call\nCheckForSerializableConflictOut(). With the modified fixup patch, the\nresponsibility is unnecessarily forced to caller even if\nCheckForSerializableConflictOut() can do it. I understand the intent\nis to avoid duplicate check for heap.\n\n>\n> or if you see how to improve it\n> further.\n>\n\nI had proposed as alternative way in initial email and also later,\ndidn't receive comment on that, so re-posting.\n\nAlternative way to refactor. CheckForSerializableConflictOut() can\ntake callback function and (void *) callback argument for AM specific\ncheck. So, the flow would be AM calling\nCheckForSerializableConflictOut() as today and only if\nSerializationNeededForRead() will invoke the callback to check with AM\nif more work should be performed or not. Essentially\nHeapCheckForSerializableConflictOut() will become callback function\ninstead. So, roughly would look like....\n\ntypedef bool (*AMCheckForSerializableConflictOutCallback) (void *arg);\n\nvoid CheckForSerializableConflictOut(Relation relation, TransactionId xid,\nSnapshot snapshot, AMCheckForSerializableConflictOutCallback callback, void\n*callback_arg)\n{\n if (!SerializationNeededForRead(relation, snapshot))\n return;\n if (callback != NULL && !callback(callback_args))\n return;\n........\n.....\n}\n\nWith this AMs which don't have any specific checks to perform can pass\ncallback as NULL. 
So, function call is involved only if\nSerializationNeededForRead() and only for AMs which need it.\n\nJust due to void* callback argument aspect I didn't prefer that\nsolution and felt AM performing checks and calling\nCheckForSerializableConflictOut() seems better. Please let me know\nhow you feel about this.", "msg_date": "Tue, 12 Nov 2019 22:26:46 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Wed, Nov 13, 2019 at 7:27 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> On Sun, Nov 10, 2019 at 8:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> but on another read-through of the main patch\n>> I didn't like the comments for CheckForSerializableConflictOut() or\n>> the fact that it checks SerializationNeededForRead() again, after I\n>> thought a bit about what the contract for this API is now. Here's a\n>> version with small fixup that I'd like to squash into the patch.\n>> Please let me know what you think,\n>\n> The thought or reasoning behind having SerializationNeededForRead()\n> inside CheckForSerializableConflictOut() is to keep that API clean and\n> complete by itself. Only if AM like heap needs to perform some AM\n> specific checking only for serialization needed case can do so but is\n> not forced.
So, if AM for example Zedstore doesn't need to do any\n> specific checking, then it can directly call\n> CheckForSerializableConflictOut(). With the modified fixup patch, the\n> responsibility is unnecessarily forced to caller even if\n> CheckForSerializableConflictOut() can do it. I understand the intent\n> is to avoid duplicate check for heap.\n\nOK, I kept only the small comment change from that little fixup patch,\nand pushed this.\n\n> I had proposed as alternative way in initial email and also later,\n> didn't receive comment on that, so re-posting.\n\n> typedef bool (*AMCheckForSerializableConflictOutCallback) (void *arg);\n...\n> Just due to void* callback argument aspect I didn't prefer that\n> solution and felt AM performing checks and calling\n> CheckForSerializableConflictOut() seems better. Please let me know\n> how you feel about this.\n\nYeah. We could always come back to this idea if it looks better once\nwe have more experience with new table AMs.", "msg_date": "Tue, 28 Jan 2020 13:46:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" }, { "msg_contents": "On Mon, Jan 27, 2020 at 4:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> OK, I kept only the small comment change from that little fixup patch,\n> and pushed this.\n>\n> > I had proposed as alternative way in initial email and also later,\n> > didn't receive comment on that, so re-posting.\n>\n> > typedef bool (*AMCheckForSerializableConflictOutCallback) (void *arg);\n> ...\n> > Just due to void* callback argument aspect I didn't prefer that\n> > solution and felt AM performing checks and calling\n> > CheckForSerializableConflictOut() seems better. Please let me know\n> > how you feel about this.\n>\n> Yeah. We could always come back to this idea if it looks better once\n> we have more experience with new table AMs.\n>\n\nSounds good.
Thank You!", "msg_date": "Mon, 27 Jan 2020 17:00:53 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Remove HeapTuple and Buffer dependency for predicate locking\n functions" } ]
[ { "msg_contents": "Hello hackers,\n\nWhen running on REL_11_STABLE the following query:\nCREATE PROCEDURE test_ambiguous_procname(int) as $$ begin end; $$\nlanguage plpgsql;\nCREATE PROCEDURE test_ambiguous_procname(text) as $$ begin end; $$\nlanguage plpgsql;\nDROP PROCEDURE test_ambiguous_procname;\nunder valgrind I get the memory access errors:\n\n2019-06-24 22:21:39.925 MSK|law|regression|5d1122c2.2921|LOG: \nstatement: DROP PROCEDURE test_ambiguous_procname;\n==00:00:00:07.756 10529== Conditional jump or move depends on\nuninitialised value(s)\n==00:00:00:07.756 10529==    at 0x4C35E60: __memcmp_sse4_1 (in\n/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)\n==00:00:00:07.756 10529==    by 0x2E9A7B: LookupFuncName (parse_func.c:2078)\n==00:00:00:07.756 10529==    by 0x2E9D96: LookupFuncWithArgs\n(parse_func.c:2141)\n==00:00:00:07.756 10529==    by 0x2A2F7E: get_object_address\n(objectaddress.c:893)\n==00:00:00:07.756 10529==    by 0x31C9C0: RemoveObjects (dropcmds.c:71)\n==00:00:00:07.756 10529==    by 0x50D4FF: ExecDropStmt (utility.c:1738)\n==00:00:00:07.756 10529==    by 0x5100BC: ProcessUtilitySlow\n(utility.c:1580)\n==00:00:00:07.756 10529==    by 0x50EDFE: standard_ProcessUtility\n(utility.c:835)\n==00:00:00:07.756 10529==    by 0x50F07D: ProcessUtility (utility.c:360)\n==00:00:00:07.756 10529==    by 0x50B4D2: PortalRunUtility (pquery.c:1178)\n==00:00:00:07.756 10529==    by 0x50C169: PortalRunMulti (pquery.c:1324)\n==00:00:00:07.756 10529==    by 0x50CEFF: PortalRun (pquery.c:799)\n==00:00:00:07.756 10529==  Uninitialised value was created by a stack\nallocation\n==00:00:00:07.756 10529==    at 0x2E9C31: LookupFuncWithArgs\n(parse_func.c:2106)\n==00:00:00:07.756 10529==\n...\n==00:00:00:07.756 10529== Conditional jump or move depends on\nuninitialised value(s)\n==00:00:00:07.756 10529==    at 0x2E9A7E: LookupFuncName (parse_func.c:2078)\n==00:00:00:07.756 10529==    by 0x2E9D96: LookupFuncWithArgs\n(parse_func.c:2141)\n==00:00:00:07.757 10529==    
by 0x2A2F7E: get_object_address\n(objectaddress.c:893)\n==00:00:00:07.757 10529==    by 0x31C9C0: RemoveObjects (dropcmds.c:71)\n==00:00:00:07.757 10529==    by 0x50D4FF: ExecDropStmt (utility.c:1738)\n==00:00:00:07.757 10529==    by 0x5100BC: ProcessUtilitySlow\n(utility.c:1580)\n==00:00:00:07.757 10529==    by 0x50EDFE: standard_ProcessUtility\n(utility.c:835)\n==00:00:00:07.757 10529==    by 0x50F07D: ProcessUtility (utility.c:360)\n==00:00:00:07.757 10529==    by 0x50B4D2: PortalRunUtility (pquery.c:1178)\n==00:00:00:07.757 10529==    by 0x50C169: PortalRunMulti (pquery.c:1324)\n==00:00:00:07.757 10529==    by 0x50CEFF: PortalRun (pquery.c:799)\n==00:00:00:07.757 10529==    by 0x5090FF: exec_simple_query\n(postgres.c:1145)\n==00:00:00:07.757 10529==  Uninitialised value was created by a stack\nallocation\n==00:00:00:07.757 10529==    at 0x2E9C31: LookupFuncWithArgs\n(parse_func.c:2106)\n\nAs I see, the code in LookupFuncName can fall through the \"if (nargs ==\n-1)\" condition and execute memcmp with nargs==-1.\nThe proposed patch is attached.\n\nBest regards,\nAlexander", "msg_date": "Mon, 24 Jun 2019 23:10:54 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Prevent invalid memory access in LookupFuncName" }, { "msg_contents": "On Mon, Jun 24, 2019 at 11:10:54PM +0300, Alexander Lakhin wrote:\n> When running on REL_11_STABLE the following query:\n> CREATE PROCEDURE test_ambiguous_procname(int) as $$ begin end; $$\n> language plpgsql;\n> CREATE PROCEDURE test_ambiguous_procname(text) as $$ begin end; $$\n> language plpgsql;\n> DROP PROCEDURE test_ambiguous_procname;\n> under valgrind I get the memory access errors:\n\nThanks! I have been able to reproduce the problem, and the error is\nobvious looking at the code. I have changed the patch to be more\nconsistent with HEAD though, returning InvalidOid in the code paths\ngenerating the error. 
The logic is the same, but that looked cleaner\nto me, and I have added some comments on the way, similarly to what\nbfb456c1 has done for HEAD (where LookupFuncNameInternal is doing the\nright thing already). This has been incorrect since aefeb68, so\nback-patched down to 10.\n--\nMichael", "msg_date": "Tue, 25 Jun 2019 11:19:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Prevent invalid memory access in LookupFuncName" } ]
[ { "msg_contents": "Short benchmark runs are bad if the runs aren't long enough to produce consistent results.\n\nHaving to do long runs because a benchmarking tool 'converges to reality' over time in reporting a tps number, due to miscalculation, is also bad.\n\n\nI want to measure TPS at a particular connection count. A fully cached Select Only pgbench produces fairly consistent numbers over short runs of a few minutes.\n\npgbench's \"including connections establishing\" number is polluted by fact that for many seconds the benchmark is running with less than the expected number of connections. I thought that was why the 'excluding' number was also printed and I had been relying on that number.\n\npgbench's \"excluding connections establishing\" number seems to be a total garbage number which can be way way over the actual tps. During a period when I had a bug causing slow connections I noticed a consistent value of about 100K tps over the measurement intervals. At the end of a 5 minute run it reported 450K tps! There was no point anywhere during the benchmark that it ran anywhere near that number.\n\n\nI had been using 'excluding' because it 'seemed' perhaps right in the past. It was only when I got crazy numbers I looked at the calculation to find:\n\n\n tps_exclude = total->cnt / (time_include - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients));\n\n\nThe 'cnt' is the total across the entire run including the period when connections are ramping up. I don't see how dividing by the total time minus the average connection time produces the correct result.\n\n\nEven without buggy slow connections, when connecting 1000 clients, I've wondered why the 'excluding' number seemed a bit higher than any given reporting interval numbers, over a 5 minute run. I now understand why.
NOTE: When the system hits 100% cpu utilization(after about the first 100 connections), on a fully cached Select only pgbench, further connections can struggle to get connected which really skews the results.\n\n\nHow about a patch which offered the option to wait on an advisory lock as a mechanism to let the main thread delay the start of the workload after all clients have connected and entered a READY state? This would produce a much cleaner number.", "msg_date": "Mon, 24 Jun 2019 19:11:17 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": true, "msg_subject": "pgbench prints suspect tps numbers" }, { "msg_contents": "\nHello Daniel,\n\n> I want to measure TPS at a particular connection count. [...]\n>\n> pgbench's \"including connections establishing\" number is polluted by \n> fact that for many seconds the benchmark is running with less than the \n> expected number of connections. I thought that was why the 'excluding' \n> number was also printed and I had been relying on that number.\n>\n> pgbench's \"excluding connections establishing\" number seems to be a \n> total garbage number which can be way way over the actual tps. During a \n> period when I had a bug causing slow connections I noticed a consistent \n> value of about 100K tps over the measurement intervals. At the end of a \n> 5 minute run it reported 450K tps!
There was no point anywhere during \n> the benchmark that it ran anywhere near that number.\n\nCould you report the precise version, settings and hardware?\n\nIn particular, how many threads, clients and what is the underlying \nhardware?\n\nAre you reconnecting on each transaction?\n\n> I had been using 'excluding' because it 'seemed' perhaps right in the \n> past. It was only when I got crazy numbers I looked at the calculation \n> to find:\n>\n> tps_exclude = total->cnt / (time_include - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients));\n>\n>\n> The 'cnt' is the total across the entire run including the period when \n> connections are ramping up.\n\nYep. The threads are running independently, so there is no p\n\n> I don't see how dividing by the total time minus the average connection \n> time produces the correct result.\n\nThe above formula looks okay to me, at least at 7AM:-) Maybe the variable \ncould be given better names.\n\n> Even without buggy slow connections, when connecting 1000 clients,\n\nThat is a lot. Really.\n\n> I've wondered why the 'excluding' number seemed a bit higher than any \n> given reporting interval numbers, over a 5 minute run. I now understand \n> why. NOTE: When the system hits 100% cpu utilization(after about the \n> first 100 connections),\n\nObviously.\n\n> on a fully cached Select only pgbench, further connections can struggle \n> to get connected which really skews the results.\n\nSure, with 1000 clients the system can only by highly overloaded.\n\n> How about a patch which offered the option to wait on an advisory lock \n> as a mechanism to let the main thread delay the start of the workload \n> after all clients have connected and entered a READY state? 
This would \n> produce a much cleaner number.\n\nA barrier could be implemented, but it should be pretty useless because \nwithout reconnections the connection time is expected to be negligible.\n\n-- \nFabien\n\n\n", "msg_date": "Tue, 25 Jun 2019 07:05:03 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench prints suspect tps numbers" } ]
[ { "msg_contents": "New thread continuing from\n<https://www.postgresql.org/message-id/d4903af2-e7b7-b551-71f8-3e4a6bdc2e73@2ndquadrant.com>.\n\nHere is an extended version of Álvaro's patch that adds an errbacktrace()\nfunction. You can do two things with this:\n\n- Manually attach it to an ereport() call site that you want to debug.\n\n- Set a configuration parameter like backtrace_function = 'int8in' to\ndebug ereport()/elog() calls in a specific function.\n\nThere was also mention of settings that would automatically produce\nbacktraces for PANICs etc. Those could surely be added if there is\nenough interest.\n\nFor the implementation, I support both backtrace() provided by the OS as\nwell as using libunwind. The former seems to be supported by a number\nof platforms, including glibc, macOS, and FreeBSD, so maybe we don't\nneed the libunwind support. I haven't found any difference in quality in\nthe backtraces between the two approaches, but surely that is highly\ndependent on the exact configuration.\n\nI would welcome testing in all directions with this, to see how well it\nworks in different circumstances.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 25 Jun 2019 13:08:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "errbacktrace" }, { "msg_contents": "On 2019-Jun-25, Peter Eisentraut wrote:\n\n> Here is an extended version of Álvaro's patch that adds an errbacktrace()\n> function.\n\nGreat stuff, thanks for working on it.\n\n> You can do two things with this:\n> \n> - Manually attach it to an ereport() call site that you want to debug.\n> \n> - Set a configuration parameter like backtrace_function = 'int8in' to\n> debug ereport()/elog() calls in a specific function.\n\nWFM. I tried specifying int4in -- didn't work. 
Turns out the errors\nare inside another function which is what I have to put in\nbacktrace_function:\n\n$ PGOPTIONS=\"-c backtrace_function=pg_strtoint32\" psql\n\nalvherre=# select int 'foobar';\n\n2019-06-25 10:03:51.034 -04 [11711] ERROR: invalid input syntax for type integer: \"foobar\" at character 12\n2019-06-25 10:03:51.034 -04 [11711] BACKTRACE: postgres: alvherre alvherre [local] SELECT(pg_strtoint32+0xef) [0x55862737bdaf]\n\tpostgres: alvherre alvherre [local] SELECT(int4in+0xd) [0x558627336d7d]\n\tpostgres: alvherre alvherre [local] SELECT(InputFunctionCall+0x7b) [0x55862740b10b]\n\tpostgres: alvherre alvherre [local] SELECT(OidInputFunctionCall+0x48) [0x55862740b378]\n\tpostgres: alvherre alvherre [local] SELECT(coerce_type+0x297) [0x5586270b2f67]\n\tpostgres: alvherre alvherre [local] SELECT(coerce_to_target_type+0x9d) [0x5586270b37ad]\n\tpostgres: alvherre alvherre [local] SELECT(+0x1ed3d8) [0x5586270b83d8]\n\tpostgres: alvherre alvherre [local] SELECT(transformExpr+0x18) [0x5586270bbcc8]\n\tpostgres: alvherre alvherre [local] SELECT(transformTargetEntry+0xb2) [0x5586270c81c2]\n\tpostgres: alvherre alvherre [local] SELECT(transformTargetList+0x58) [0x5586270c9808]\n\tpostgres: alvherre alvherre [local] SELECT(transformStmt+0x9d1) [0x55862708caf1]\n\tpostgres: alvherre alvherre [local] SELECT(parse_analyze+0x57) [0x55862708f177]\n\tpostgres: alvherre alvherre [local] SELECT(pg_analyze_and_rewrite+0x12) [0x5586272d2f02]\n\tpostgres: alvherre alvherre [local] SELECT(+0x4085ca) [0x5586272d35ca]\n\tpostgres: alvherre alvherre [local] SELECT(PostgresMain+0x1a37) [0x5586272d52b7]\n\tpostgres: alvherre alvherre [local] SELECT(+0xbf635) [0x558626f8a635]\n\tpostgres: alvherre alvherre [local] SELECT(PostmasterMain+0xf3e) [0x55862724e27e]\n\tpostgres: alvherre alvherre [local] SELECT(main+0x723) [0x558626f8c603]\n\t/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f99d1931b97]\n\tpostgres: alvherre alvherre [local] SELECT(_start+0x2a) 
[0x558626f8c6ca]\n\nDidn't think too much about the libunwind format string (or even try to\ncompile it.)\n\nDespite possible shortcomings in the produced backtraces, this is a\n*much* more convenient interface than requesting users to attach gdb,\nset breakpoint on errfinish, hey why does my SQL not run, \"oh you forgot\n'cont' in gdb\", etc.\n\n> There was also mention of settings that would automatically produce\n> backtraces for PANICs etc. Those could surely be added if there is\n> enough interest.\n\nLet's have the basics first, we can add niceties afterwards. (IMO yes,\nwe should have backtraces in PANICs and assertion failures).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Jun 2019 10:13:24 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On Tue, Jun 25, 2019 at 4:08 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> New thread continuing from\n> <\n> https://www.postgresql.org/message-id/d4903af2-e7b7-b551-71f8-3e4a6bdc2e73@2ndquadrant.com\n> >.\n>\n> Here is a extended version of Álvaro's patch that adds an errbacktrace()\n> function. You can do two things with this:\n>\n> - Manually attach it to an ereport() call site that you want to debug.\n>\n> - Set a configuration parameter like backtrace_function = 'int8in' to\n> debug ereport()/elog() calls in a specific function.\n>\n\nThank you. This is very helpful. Surprised it has been missing for so long. We\nhave backtrace printing in Greenplum and it's extremely helpful during\ndevelopment and production.\n\nThere was also mention of settings that would automatically produce\n> backtraces for PANICs etc. 
Those could surely be added if there is\n> enough interest.\n>\n\nIn Greenplum, we have backtrace enabled for PANICs, SEGV/BUS/ILL and\ninternal ERRORs, proves very helpful.\n\nFor the implementation, I support both backtrace() provided by the OS as\n> well as using libunwind. The former seems to be supported by a number\n> of platforms, including glibc, macOS, and FreeBSD, so maybe we don't\n> need the libunwind suport. I haven't found any difference in quality in\n> the backtraces between the two approaches, but surely that is highly\n> dependent on the exact configuration.\n>\n\nWe have implemented it using backtrace(). Also, using addr2line() (or atos\nfor mac) can convert addresses to file and line numbers before printing if\navailable, to take it a step further.", "msg_date": "Tue, 25 Jun 2019 11:45:23 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On Tue, 25 Jun 2019 at 06:08, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> New thread continuing from\n> <\n> https://www.postgresql.org/message-id/d4903af2-e7b7-b551-71f8-3e4a6bdc2e73@2ndquadrant.com\n> >.\n>\n> Here is a extended version of Álvaro's patch that adds an errbacktrace()\n> function. You can do two things with this:\n>\n> - Manually attach it to an ereport() call site that you want to debug.\n>\n> - Set a configuration parameter like backtrace_function = 'int8in' to\n> debug ereport()/elog() calls in a specific function.\n>\n> There was also mention of settings that would automatically produce\n> backtraces for PANICs etc. Those could surely be added if there is\n> enough interest.\n>\n> For the implementation, I support both backtrace() provided by the OS as\n> well as using libunwind. The former seems to be supported by a number\n> of platforms, including glibc, macOS, and FreeBSD, so maybe we don't\n> need the libunwind suport. I haven't found any difference in quality in\n> the backtraces between the two approaches, but surely that is highly\n> dependent on the exact configuration.\n>\n> I would welcome testing in all direction with this, to see how well it\n> works in different circumstances.\n>\n>\nHi Peter,\n\nThis is certainly a very useful thing. 
Sadly, it doesn't seem to compile\nwhen trying to use libunwind.\nI tried it in a Debian 9 machine with gcc 6.3.0 and debian says i installed\nlibunwind8 (1.1)\n\n./configure --prefix=/home/jcasanov/Documentos/pgdg/pgbuild/pg13\n--enable-debug --enable-profiling --enable-cassert --enable-depend\n--with-libunwind\n\nat make i get these errors:\n\"\"\"\nutils/error/elog.o: En la función `set_backtrace':\n/home/jcasanov/Documentos/pgdg/projects/postgresql/src/backend/utils/error/elog.c:847:\nreferencia a `_Ux86_64_getcontext' sin definir\n/home/jcasanov/Documentos/pgdg/projects/postgresql/src/backend/utils/error/elog.c:848:\nreferencia a `_Ux86_64_init_local' sin definir\n/home/jcasanov/Documentos/pgdg/projects/postgresql/src/backend/utils/error/elog.c:850:\nreferencia a `_Ux86_64_step' sin definir\n/home/jcasanov/Documentos/pgdg/projects/postgresql/src/backend/utils/error/elog.c:861:\nreferencia a `_Ux86_64_get_reg' sin definir\n/home/jcasanov/Documentos/pgdg/projects/postgresql/src/backend/utils/error/elog.c:862:\nreferencia a `_Ux86_64_get_proc_name' sin definir\ncollect2: error: ld returned 1 exit status\nmake[2]: *** [postgres] Error 1\nmake[1]: *** [all-backend-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n\"\"\"\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 00:40:38 -0500", 
"msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On Tue, Jun 25, 2019 at 11:08 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> For the implementation, I support both backtrace() provided by the OS as\n> well as using libunwind. The former seems to be supported by a number\n> of platforms, including glibc, macOS, and FreeBSD, so maybe we don't\n> need the libunwind suport. I haven't found any difference in quality in\n> the backtraces between the two approaches, but surely that is highly\n> dependent on the exact configuration.\n>\n> I would welcome testing in all direction with this, to see how well it\n> works in different circumstances.\n\nI like it.\n\nWorks out of the box on my macOS machine, but for FreeBSD I had to add\n-lexecinfo to LIBS.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jul 2019 15:24:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "> On Sat, Jun 29, 2019 at 7:41 AM Jaime Casanova <jaime.casanova@2ndquadrant.com> wrote:\n>\n> This is certainly a very useful thing. Sadly, it doesn't seem to compile when\n> trying to use libunwind.\n\nYeah, the same for me. 
To make it work I've restricted libunwind to local\nunwinding only:\n\n #ifdef USE_LIBUNWIND\n #define UNW_LOCAL_ONLY\n #include <libunwind.h>\n #endif\n\nAnd the result looks pretty nice:\n\n2019-07-08 17:24:08.406 CEST [31828] ERROR: invalid input syntax for\ntype integer: \"foobar\" at character 12\n2019-07-08 17:24:08.406 CEST [31828] BACKTRACE: #0\npg_strtoint32+0x1d1 [0x000055fa389bcbbe]\n #1 int4in+0xd [0x000055fa38976d7b]\n #2 InputFunctionCall+0x6f [0x000055fa38a488e9]\n #3 OidInputFunctionCall+0x44 [0x000055fa38a48b0d]\n #4 stringTypeDatum+0x33 [0x000055fa386e222e]\n #5 coerce_type+0x26d [0x000055fa386ca14d]\n #6 coerce_to_target_type+0x79 [0x000055fa386c9494]\n #7 transformTypeCast+0xaa [0x000055fa386d0042]\n #8 transformExprRecurse+0x22f [0x000055fa386cf650]\n #9 transformExpr+0x1a [0x000055fa386cf30a]\n #10 transformTargetEntry+0x79 [0x000055fa386e1131]\n #11 transformTargetList+0x86 [0x000055fa386e11ce]\n #12 transformSelectStmt+0xa1 [0x000055fa386a29c9]\n #13 transformStmt+0x9d [0x000055fa386a345a]\n #14 transformOptionalSelectInto+0x94 [0x000055fa386a3f49]\n #15 transformTopLevelStmt+0x15 [0x000055fa386a3f88]\n #16 parse_analyze+0x4e [0x000055fa386a3fef]\n #17 pg_analyze_and_rewrite+0x3e [0x000055fa3890cfa5]\n #18 exec_simple_query+0x35b [0x000055fa3890d5b5]\n #19 PostgresMain+0x91f [0x000055fa3890f7a8]\n #20 BackendRun+0x1ac [0x000055fa3887ed17]\n #21 BackendStartup+0x15c [0x000055fa38881ea1]\n #22 ServerLoop+0x1e6 [0x000055fa388821bb]\n #23 PostmasterMain+0x1101 [0x000055fa388835a1]\n #24 main+0x21a [0x000055fa387db1a9]\n #25 __libc_start_main+0xe7 [0x00007f3d1a607fa7]\n #26 _start+0x2a [0x000055fa3858e4ea]\n\n\n", "msg_date": "Mon, 8 Jul 2019 17:28:01 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Jul-08, Dmitry Dolgov wrote:\n\n> > On Sat, Jun 29, 2019 at 7:41 AM Jaime Casanova <jaime.casanova@2ndquadrant.com> wrote:\n> >\n> > This is certainly a 
very useful thing. Sadly, it doesn't seem to compile when\n> > trying to use libunwind.\n> \n> Yeah, the same for me. To make it works I've restricted libunwind to local\n> unwinding only:\n> \n> #ifdef USE_LIBUNWIND\n> #define UNW_LOCAL_ONLY\n> #include <libunwind.h>\n> #endif\n\nAh, yes. unwind's manpage says:\n\n Normally, libunwind supports both local and remote unwinding (the latter will\n be explained in the next section). However, if you tell libunwind that your\n program only needs local unwinding, then a special implementation can be\n selected which may run much faster than the generic implementation which\n supports both kinds of unwinding. To select this optimized version, simply\n define the macro UNW_LOCAL_ONLY before including the headerfile <libunwind.h>.\n\nso I agree with unconditionally defining that symbol.\n\nNitpicking dept: I think in these tests:\n\n+ if (!edata->backtrace &&\n+ edata->funcname &&\n+ backtrace_function[0] &&\n+ strcmp(backtrace_function, edata->funcname) == 0)\n+ set_backtrace(edata, 2);\n\nwe should test for backtrace_function[0] before edata->funcname, since\nit seems more likely to be unset.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jul 2019 12:28:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "After further research I'm thinking about dropping the libunwind\nsupport. 
The backtrace()/backtrace_symbols() API is more widely\navailable: darwin, freebsd, linux, netbsd, openbsd (via port), solaris,\nand of course it's built-in, whereas libunwind is only available for\nlinux, freebsd, hpux, solaris, and requires an external dependency.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jul 2019 11:43:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-07-09 11:43, Peter Eisentraut wrote:\n> After further research I'm thinking about dropping the libunwind\n> support. The backtrace()/backtrace_symbols() API is more widely\n> available: darwin, freebsd, linux, netbsd, openbsd (via port), solaris,\n> and of course it's built-in, whereas libunwind is only available for\n> linux, freebsd, hpux, solaris, and requires an external dependency.\n\nHere is an updated patch without the libunwind support, some minor\ncleanups, documentation, and automatic back traces from assertion failures.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 22 Jul 2019 20:19:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Jul-22, Peter Eisentraut wrote:\n\n> On 2019-07-09 11:43, Peter Eisentraut wrote:\n> > After further research I'm thinking about dropping the libunwind\n> > support. 
The backtrace()/backtrace_symbols() API is more widely\n> > available: darwin, freebsd, linux, netbsd, openbsd (via port), solaris,\n> > and of course it's built-in, whereas libunwind is only available for\n> > linux, freebsd, hpux, solaris, and requires an external dependency.\n> \n> Here is an updated patch without the libunwind support, some minor\n> cleanups, documentation, and automatic back traces from assertion failures.\n\nThe only possible complaint I see is that the backtrace support in\nExceptionalCondition does not work for Windows eventlog/console ... but\nthat seems moot since Windows does not have backtrace support anyway.\n\n+1 to get this patch in at this stage; we can further refine (esp. add\nWindows support) later, if need be.\n\nhttps://stackoverflow.com/questions/26398064/counterpart-to-glibcs-backtrace-and-backtrace-symbols-on-windows\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 15:37:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Here is an updated patch without the libunwind support, some minor\n> cleanups, documentation, and automatic back traces from assertion failures.\n\nJust noticing that ExceptionalCondition has an \"fflush(stderr);\"\nin front of what you added --- perhaps you should also add one\nafter the backtrace_symbols_fd call? 
It's not clear to me that\nthat function guarantees to fflush, nor do I want to assume that\nabort() does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2019 16:05:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "I wrote:\n> Just noticing that ExceptionalCondition has an \"fflush(stderr);\"\n> in front of what you added --- perhaps you should also add one\n> after the backtrace_symbols_fd call? It's not clear to me that\n> that function guarantees to fflush, nor do I want to assume that\n> abort() does.\n\nOh, wait, it's writing to fileno(stderr) so it couldn't be\nbuffering anything. Disregard ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2019 16:10:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On Tue, Jul 23, 2019 at 6:19 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-07-09 11:43, Peter Eisentraut wrote:\n> > After further research I'm thinking about dropping the libunwind\n> > support. The backtrace()/backtrace_symbols() API is more widely\n> > available: darwin, freebsd, linux, netbsd, openbsd (via port), solaris,\n> > and of course it's built-in, whereas libunwind is only available for\n> > linux, freebsd, hpux, solaris, and requires an external dependency.\n>\n> Here is an updated patch without the libunwind support, some minor\n> cleanups, documentation, and automatic back traces from assertion failures.\n\nNow works out of the box on FreeBSD. 
The assertion thing is a nice touch.\n\nI wonder if it'd make sense to have a log_min_backtrace GUC that you\ncould set to error/fatal/panic/whatever (perhaps in a later patch).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jul 2019 11:25:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "Hi\n\nso I agree with unconditionally defining that symbol.\n>\n> Nitpicking dept: I think in these tests:\n>\n> + if (!edata->backtrace &&\n> + edata->funcname &&\n> + backtrace_function[0] &&\n> + strcmp(backtrace_function, edata->funcname) == 0)\n> + set_backtrace(edata, 2);\n>\n>\nIf I understand well, backtrace is displayed only when edata->funcname is\nsame like backtrace_function GUC. 
Isn't it too strong a limit?\n\nFor example, I want to see a backtrace for all PANIC level errors on\nproduction, and I would not want to limit the source function.\n\nRegards\n\nPavel\n\n\n\n\n\n> we should test for backtrace_function[0] before edata->funcname, since\n> it seems more likely to be unset.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Mon, 12 Aug 2019 13:19:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-08-12 13:19, Pavel Stehule wrote:\n> If I understand well, backtrace is displayed only when edata->funcname\n> is same like backtrace_function GUC. Isn't it too strong limit?\n> \n> For example, I want to see backtrace for all PANIC level errors on\n> production, and I would not to limit the source function?\n\nWe can add additional ways to invoke this once we have the basic\nfunctionality in.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Aug 2019 19:06:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On Mon, Aug 12, 2019 at 19:06 Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-08-12 13:19, Pavel Stehule wrote:\n> > If I understand well, backtrace is displayed only when edata->funcname\n> > is same like backtrace_function GUC. 
The backtrace()/backtrace_symbols() API is more widely\n>> available: darwin, freebsd, linux, netbsd, openbsd (via port), solaris,\n>> and of course it's built-in, whereas libunwind is only available for\n>> linux, freebsd, hpux, solaris, and requires an external dependency.\n> \n> Here is an updated patch without the libunwind support, some minor\n> cleanups, documentation, and automatic back traces from assertion failures.\n\nAnother updated version.\n\nI have changed the configuration setting to backtrace_functions plural,\nso that you can debug more than one location at once. I had originally\nwanted to do that but using existing functions like\nSplitIdentifierString() resulted in lots of complications with error\nhandling (inside error handling!). So here I just hand-coded the list\nsplitting. Seems simple enough.\n\nI think this patch is now good to go from my perspective.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 13 Aug 2019 10:12:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Aug-13, Peter Eisentraut wrote:\n\n> I have changed the configuration setting to backtrace_functions plural,\n> so that you can debug more than one location at once. I had originally\n> wanted to do that but using existing functions like\n> SplitIdentifierString() resulted in lots of complications with error\n> handling (inside error handling!). So here I just hand-coded the list\n> splitting. Seems simple enough.\n\nHmm ... but is that the natural way to write this? I would have thought\nyou'd split the list at config-read time (the assign hook for the GUC)\nand turn it into a List of simple strings. 
Then you don't have to\nloop strtok() on each errfinish().\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Aug 2019 09:24:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I have changed the configuration setting to backtrace_functions plural,\n> so that you can debug more than one location at once. I had originally\n> wanted to do that but using existing functions like\n> SplitIdentifierString() resulted in lots of complications with error\n> handling (inside error handling!). So here I just hand-coded the list\n> splitting. Seems simple enough.\n\nI think it's a pretty bad idea for anything invocable from elog to\ntrample on the process-wide strtok() state. Even if there's no\nconflict today, there will be one eventually, unless you are going\nto adopt the position that nobody else is allowed to use strtok().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Aug 2019 10:14:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-08-13 15:24, Alvaro Herrera wrote:\n> On 2019-Aug-13, Peter Eisentraut wrote:\n> \n>> I have changed the configuration setting to backtrace_functions plural,\n>> so that you can debug more than one location at once. I had originally\n>> wanted to do that but using existing functions like\n>> SplitIdentifierString() resulted in lots of complications with error\n>> handling (inside error handling!). So here I just hand-coded the list\n>> splitting. Seems simple enough.\n> \n> Hmm ... but is that the natural way to write this? I would have thought\n> you'd split the list at config-read time (the assign hook for the GUC)\n> and turn it into a List of simple strings. 
Then you don't have to\n> loop strtok() on each errfinish().\n\nThe memory management of that seems too complicated. The \"extra\"\nmechanism of the check/assign hooks only supports one level of malloc.\nUsing a List seems impossible. I don't know if you can safely do a\nmalloc-ed array of malloc-ed strings either.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 20 Aug 2019 21:06:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Aug-20, Peter Eisentraut wrote:\n\n> The memory management of that seems too complicated. The \"extra\"\n> mechanism of the check/assign hooks only supports one level of malloc.\n> Using a List seems impossible. I don't know if you can safely do a\n> malloc-ed array of malloc-ed strings either.\n\nHere's an idea -- have the check/assign hooks create a different\nrepresentation, which is a single guc_malloc'ed chunk that is made up of\nevery function name listed in the GUC, separated by \\0. That can be\nscanned at error time comparing the function name with each piece.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 13 Sep 2019 12:54:32 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Sep-13, Alvaro Herrera wrote:\n\n> On 2019-Aug-20, Peter Eisentraut wrote:\n> \n> > The memory management of that seems too complicated. The \"extra\"\n> > mechanism of the check/assign hooks only supports one level of malloc.\n> > Using a List seems impossible. 
I don't know if you can safely do a\n> > malloc-ed array of malloc-ed strings either.\n> \n> Here's an idea -- have the check/assign hooks create a different\n> representation, which is a single guc_malloc'ed chunk that is made up of\n> every function name listed in the GUC, separated by \\0. That can be\n> scanned at error time comparing the function name with each piece.\n\nPeter, would you like me to clean this up for commit, or do you prefer\nto keep authorship and get it done yourself?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Sep 2019 12:50:01 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-09-27 17:50, Alvaro Herrera wrote:\n> On 2019-Sep-13, Alvaro Herrera wrote:\n> \n>> On 2019-Aug-20, Peter Eisentraut wrote:\n>>\n>>> The memory management of that seems too complicated. The \"extra\"\n>>> mechanism of the check/assign hooks only supports one level of malloc.\n>>> Using a List seems impossible. I don't know if you can safely do a\n>>> malloc-ed array of malloc-ed strings either.\n>>\n>> Here's an idea -- have the check/assign hooks create a different\n>> representation, which is a single guc_malloc'ed chunk that is made up of\n>> every function name listed in the GUC, separated by \\0. That can be\n>> scanned at error time comparing the function name with each piece.\n>\n> Peter, would you like me to clean this up for commit, or do you prefer\n> to keep authorship and get it done yourself?\n\nIf you want to finish it using the idea from your previous message,\nplease feel free. 
I won't get to it this week.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 30 Sep 2019 20:16:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-09-30 20:16, Peter Eisentraut wrote:\n> On 2019-09-27 17:50, Alvaro Herrera wrote:\n>> On 2019-Sep-13, Alvaro Herrera wrote:\n>>\n>>> On 2019-Aug-20, Peter Eisentraut wrote:\n>>>\n>>>> The memory management of that seems too complicated. The \"extra\"\n>>>> mechanism of the check/assign hooks only supports one level of malloc.\n>>>> Using a List seems impossible. I don't know if you can safely do a\n>>>> malloc-ed array of malloc-ed strings either.\n>>>\n>>> Here's an idea -- have the check/assign hooks create a different\n>>> representation, which is a single guc_malloc'ed chunk that is made up of\n>>> every function name listed in the GUC, separated by \\0. That can be\n>>> scanned at error time comparing the function name with each piece.\n>>\n>> Peter, would you like me to clean this up for commit, or do you prefer\n>> to keep authorship and get it done yourself?\n> \n> If you want to finish it using the idea from your previous message,\n> please feel free. I won't get to it this week.\n\nI hadn't realized that you had already attached a patch that implements\nyour idea. It looks good to me. Maybe a small comment near\ncheck_backtrace_functions() why we're not using a regular list. 
Other\nthan that, please go ahead with this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 26 Oct 2019 14:37:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Oct-26, Peter Eisentraut wrote:\n\n> I hadn't realized that you had already attached a patch that implements\n> your idea. It looks good to me. Maybe a small comment near\n> check_backtrace_functions() why we're not using a regular list. Other\n> than that, please go ahead with this.\n\nThanks, I added that comment and others, and pushed. Let's see what\nhappens now ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 Nov 2019 15:52:46 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-26, Peter Eisentraut wrote:\n>> I hadn't realized that you had already attached a patch that implements\n>> your idea. It looks good to me. Maybe a small comment near\n>> check_backtrace_functions() why we're not using a regular list. Other\n>> than that, please go ahead with this.\n\n> Thanks, I added that comment and others, and pushed. Let's see what\n> happens now ...\n\nI had occasion to try to use errbacktrace() just now, and it blew up\non me. 
Investigation finds this:\n\nint\nerrbacktrace(void)\n{\n\tErrorData *edata = &errordata[errordata_stack_depth];\n\tMemoryContext oldcontext;\n\n\tAssert(false);\n\n\nI suppose that's a debugging leftover that shouldn't have been committed?\nIt did what I wanted after I took out the Assert.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 23 Nov 2019 11:11:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" }, { "msg_contents": "On 2019-Nov-23, Tom Lane wrote:\n\n> I had occasion to try to use errbacktrace() just now, and it blew up\n> on me. Investigation finds this:\n> \n> int\n> errbacktrace(void)\n> {\n> \tErrorData *edata = &errordata[errordata_stack_depth];\n> \tMemoryContext oldcontext;\n> \n> \tAssert(false);\n> \n> \n> I suppose that's a debugging leftover that shouldn't have been committed?\n> It did what I wanted after I took out the Assert.\n\nUhh ... facepalm. Yes, that's not intended. I don't remember why would\nI want to put that there.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 23 Nov 2019 13:14:06 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: errbacktrace" } ]
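As a side note on the thread above: the representation Álvaro proposes (and that Peter later refers to as check_backtrace_functions()) is a single allocated chunk holding every function name from the comma-separated GUC value, separated by '\0' and terminated by an empty string, scanned piecewise at error time. The sketch below is a stand-alone illustration of that idea, not the committed PostgreSQL code: it uses plain malloc() instead of guc_malloc(), and the function names are hypothetical.

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/*
 * Turn a comma-separated list like "errfinish, CopyFrom" into one
 * malloc'd chunk: "errfinish\0CopyFrom\0\0".  Whitespace is dropped and
 * empty segments are skipped; an empty string marks the end of the list.
 */
static char *
split_backtrace_functions(const char *input)
{
	size_t		len = strlen(input);
	char	   *buf = (char *) malloc(len + 2);	/* worst case + 2 terminators */
	size_t		j = 0;

	if (buf == NULL)
		return NULL;
	for (size_t i = 0; i < len; i++)
	{
		if (input[i] == ',')
		{
			/* close the current name, but never emit an empty one */
			if (j > 0 && buf[j - 1] != '\0')
				buf[j++] = '\0';
		}
		else if (!isspace((unsigned char) input[i]))
			buf[j++] = input[i];
	}
	if (j > 0 && buf[j - 1] != '\0')
		buf[j++] = '\0';		/* terminate the last name */
	buf[j] = '\0';				/* empty string = end of list */
	return buf;
}

/* Scan the chunk at "error time", comparing against one function name. */
static int
matches_backtrace_functions(const char *list, const char *funcname)
{
	const char *p = list;

	while (*p != '\0')
	{
		if (strcmp(p, funcname) == 0)
			return 1;
		p += strlen(p) + 1;		/* step over the name and its '\0' */
	}
	return 0;
}
```

Doing the split once at assign time, as sketched here, also avoids Tom's objection: the error path only walks an already-built chunk and never touches the process-wide strtok() state.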
[ { "msg_contents": "Hello everyone,\n\nI am participating in gsoc and would like to share the current working status of my project,\ndevoted to an alternative data structure for buffer management.\n\nTo get things going, I started with the basic implementation of the art tree for the single\nbackend process, using the open-source realization of the algorithm. That was\na pretty straightforward task that helped me to better understand\ndata structure and convenient mechanism of PostgreSQL for local memory allocation.\nFor the validation purposes, I also ported a couple of tests as a simple extension with a function,\nthat can read a file with test data and run basic operations on the tree: insert/search/delete.\nDescribed changes are present in the first two commits [0].\n\nIt is worthwhile to note that this \"src/backend/lib/artree.c\" implementation will not go any further\nand will be thrown away for the final patch. The reason for that is an example with the current dynahash\nimplementation(that I have examined later, while was trying to move the tree into shared memory),\nthat supports both access patterns with shared & local memory.\nThus, I guess that the proper place for the tree data structure is nearby dynahash, i.e. in \"src/backend/utils/tree/...\"\nand eventually, with some restrictions for the shared variant (fixed key size length), it can be used in some other places\nof the system.\n\nThe last two commits implement the simplest form of the buf-tree in the shared memory, alongside the existing hashtable.\nSharedBufTree just repeats all the actions that are performed on the hashtable and compares results.\nFor now, it completely relies on the synchronization, that is performed at the layer above for the hashtable,\ni.e. BufferAlloc(...). 
Also, the size of memory that is used for tree's nodes/leaves depends on just some random numbers.\n(working with default 128mb shared memory size)\nOperations that work in a bulky way with buffers (drop/truncate/checkpoint/...) are not touched.\n\nSo, this is a plan, that I would like to stick with subsequently:\n\n1) Work on synchronization.\n2) Examine bulk operations on buffers, replace them with tree (prefix) iterator.\n3) A reasonable way to size memory for tree maintenance.\n\nIdeas & suggestions & comments are welcomed.\n\nDynamic data structure like tree brings problems that are not relevant for the current hashtable implementation.\nThe number of LWLocks in the hashtable is fixed and equal to the number of partitions(128),\nthe memory footprint for buckets and hash-entries can be easily precomputed.\n\nCurrently, I am thinking about the idea of tree decomposition, so that it is really a\ntree of trees, where RelFileNode[12bytes] (maybe + ForkNum) can be used for the top tree and\nthe rest for the trees of blocks. Hence, each bottom tree can contain a single private LWLock, that will be\nused to protect its content. Won't be that an overkill? if we take a hypothetical example with\n2 tablespaces(hdd + sdd), each with 500+ objects, so the number of LWLocks should scale accordingly, or\nwe will be forced to reuse them, by unloading block trees, that should be done in some intelligent way.\n\nAnother approach is to use lock coupling... 
with the penalty of cache invalidation.\n\n-- \n[0] https://github.com/mvpant/postgres/commits/artbufmgr\n\n\n\n", "msg_date": "Tue, 25 Jun 2019 15:30:11 +0300", "msg_from": "pantilimonov misha <pantlimon@yandex.ru>", "msg_from_op": true, "msg_subject": "[GSoC] artbufmgr" }, { "msg_contents": "Greetings all,\n\n> So, this is a plan, that I would like to stick with subsequently:\n>\n> 1) Work on synchronization.\n> 2) Examine bulk operations on buffers, replace them with tree (prefix) iterator.\n> 3) A reasonable way to size memory for tree maintenance.\n>\n\npreviously i described the plan, that i wanted to follow; achieved results are summarized below.\nShould note beforehand, that third item wasn't touched.\n\n>\n> Dynamic data structure like tree brings problems that are not relevant for the current hashtable implementation.\n> The number of LWLocks in the hashtable is fixed and equal to the number of partitions(128),\n> the memory footprint for buckets and hash-entries can be easily precomputed.\n>\n> Currently, I am thinking about the idea of tree decomposition, so that it is really a\n> tree of trees, where RelFileNode[12bytes] (maybe + ForkNum) can be used for the top tree and\n> the rest for the trees of blocks. Hence, each bottom tree can contain a single private LWLock, that will be\n> used to protect its content. Won't be that an overkill? if we take a hypothetical example with\n> 2 tablespaces(hdd + sdd), each with 500+ objects, so the number of LWLocks should scale accordingly, or\n> we will be forced to reuse them, by unloading block trees, that should be done in some intelligent way.\n>\n\nPrimarily i have started with a single-lock art tree, using the same locking strategy as an existing hashtable.\nBoth structs were surrounded by \"defines\", so i could run them independently or simultaneously.\nThe last option was mainly used as some kind of validation check that tree works in the same way\nas hashtable. 
I have tested a single-lock tree version using pgbench and 'installcheck',\nin order to test it on some kind of activity where multiple parallel processes are involved and\nfresh relations tags arrive/leave due to the create/drop of tables.\n\nIt was obvious that single-lock tree, by definition, can't stand\n128-locks (each lock is used to protect specific region(partition)) hashtable in all concurrent cases, so\nthe idea was to split the current 'MainTree' into the tree of trees. Such separation has additional benefits,\nbesides throughput improvement:\n\na) Most of the time (relatively) 'MainTree' is not modified, as the time goes it gets saturated by\n 'most used' relation's keys. After that point, it is mostly used as a read-only structure.\nb) 'Path' to specific subtree of 'MainTree' can be cached in SMgrRelation by saving pointers to the\n corresponding ForkNums. (just an array of pointers[max_forknum + 1])\n In the current implementation, this optimization can reduce key length from 20 to 4 bytes.\n\ntypedef struct buftag\n{\n\tOid\t\t\tspcNode;\t\t/* tablespace */\n\tOid\t\t\tdbNode;\t\t\t/* database */\n\tOid\t\t\trelNode;\t\t/* relation */\n\tForkNumber\tforkNum;\n\tBlockNumber blockNum;\t\t/* blknum relative to begin of reln */\n} BufferTag;\n\nTo split 'MainTree' i have injected LWLock to the art_tree structure and created\nan additional separate freelist of these subtree's nodes, that is used then for dynamic allocation\nand deallocation.\nThe key length in 'MainTree' is 16 bytes (spcNode, dbnode, relNode, forkNum) and likely\ncan be reduced to 13 bytes by shifting forkNum value that ranges only from -1 to 3, but occupies\n4 bytes.\nThe key length of each subtree is 4 bytes only - BlockNumber.\n\nBelow are results of tests, performed on database initialized with\npgbench -i -s 50 with shared_buffers=128MB\nand pgbench -i -s 100 with shared buffers=1GB.\nIt should be noted that such workload does not really represents 'real life' results,\nas all 
contention goes into certain partitions (in case of hashtable) and subtrees (in case of tree).\nNext goal is to compare data structures on some kind of realistic benchmark or just create\nmultiple databases inside cluster and run corresponding number of pgbench instances.\n\ntested on pc with i7, ssd\neach test performed 5 times, readonly ran subsequently,\nfull ran with fresh start(otherwise some problems to be fixed in tree..), best result is taken.\n\nreadonly test: pgbench --random-seed=2 -t 100000 -S -c 6 -j 6 -n\nfull test: pgbench --random-seed=2 -t 10000 -c 6 -j 6\ndrop test:\n create table test_drop2 as select a, md5(a::text) from generate_series(1, 20000) as a; ~ 167 blocks\n drop test_drop2;\n\n128MB:\n readonly HASH: latency average = 0.102 ms tps = 58934.229170 (excluding connections establishing)\n full HASH: latency average = 1.233 ms tps = 4868.332127 (excluding connections establishing)\n\n readonly TREE: latency average = 0.106 ms tps = 56818.893538 (excluding connections establishing)\n full TREE: latency average = 1.227 ms tps = 4890.546989 (excluding connections establishing)\n\n1GB:\n readonly HASH: latency average = 0.100 ms tps = 60120.841307 (excluding connections establishing)\n full HASH: latency average = 1.325 ms tps = 4529.205977 (excluding connections establishing)\n drop HASH: min ~4.9ms, max ~54ms\n\n readonly TREE: latency average = 0.100 ms tps = 60247.345044 (excluding connections establishing)\n full TREE: latency average = 1.286 ms tps = 4665.538565 (excluding connections establishing)\n drop TREE: min ~4.3ms, max ~52ms \n\nThese tests do not show significant superiority of any of the data structures, but it should be noted\nthat maintenance complexity of tree (its dynamic nature) is much higher than an existing hash-table implementation for\nbuffer management.\n\nBefore the start of this project, we assumed that the tree might not be faster than the hash table, but at least be on a par.\nTree's additional properties may allow 
other operations to be performed more efficiently,\ntherefore making it more attractive as a future structure for buffer management.\n\nSo, i feel like it is better for now _not_ to concentrate on those 'additional properties' and try to pursue\nthe third item stated in the beginning -- maintenance complexity. So at the end of the project, the tree can be\nfully used as an alternative structure instead of the existing hashtable and be\nthe base for 'additional properties' future implementation, like drop/truncate/checkpoint/relation extending/etc.\nMy attempt to implement a tree version of DropRelFileNodesAllBuffers showed that\nthere are many places in the system, that should cope with these changes.\n\n\n\n", "msg_date": "Thu, 25 Jul 2019 15:34:08 +0300", "msg_from": "pantilimonov michael <pantlimon@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: [GSoC] artbufmgr" }, { "msg_contents": "Hello everyone,\n\n> It should be noted that such workload does not really represents 'real life' results,\n> as all contention goes into certain partitions (in case of hashtable) and subtrees (in case of tree).\n> Next goal is to compare data structures on some kind of realistic benchmark or just create\n> multiple databases inside cluster and run corresponding number of pgbench instances.\n>\n\ni am still working on the better way to allocate and recycle different parts of the tree (nodes, subtrees, etc),\nbut would like to share latest results of the benchmarks.\n\nHere is a link to the google sheet:\nhttps://docs.google.com/spreadsheets/d/1VfVY0NUnPQYqgxMEXkpxhHvspbT9uZPRV9mflu8UhLQ/edit?usp=sharing\n\n(Excuse me for the link, it is convenient to accumulate and check results in the sheets.)\n\nComparison is done with pg 11.3 (0616aed243).\nEach tpc-h query ran 12 times. 
Server restarts weren't performed between queries.\nAverage is calculated on base of 10 launches, first 2 are skipped.\n\nGenerally speaking, current shared tree performs worse than the hashtable in the majority of the TPC-H test queries,\nespecially in the case of 4GB shared buffers. With a greater size of the buffer cache - 16GB the situation looks better,\nbut there is still a 1-6% performance drop in most of the queries. I haven't yet profile any query, but suppose\nthat there are a couple of places worth optimizing.\n\nIt is interesting to note that results of pgbench tests have the same pattern as in 128MB and 1GB buffer cache size:\nhashtable performs slightly better on select-only workload, while tree has better tps throughput in tpcb-like.\n\n\n\n", "msg_date": "Mon, 12 Aug 2019 00:51:42 +0300", "msg_from": "pantilimonov michael <pantlimon@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: [GSoC] artbufmgr" } ]
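A small illustration of the key split described in the second message of the thread above: the first 16 bytes of a BufferTag (spcNode, dbNode, relNode, forkNum) select a per-relation subtree in the top tree, and the remaining 4-byte BlockNumber is the key within that subtree. Writing each field big-endian makes byte-wise radix (ART) comparison agree with numeric order, and biasing forkNum by 1 is one way to realize the "shifting forkNum" idea from the message. The struct mirrors the BufferTag quoted in the thread, but the encoding helpers are illustrative, not the patch's actual code.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors the BufferTag layout quoted in the thread. */
typedef struct BufferTagSketch
{
	uint32_t	spcNode;		/* tablespace */
	uint32_t	dbNode;			/* database */
	uint32_t	relNode;		/* relation */
	int32_t		forkNum;		/* -1 .. 3 in PostgreSQL */
	uint32_t	blockNum;		/* block within the fork */
} BufferTagSketch;

/* Write v big-endian so memcmp() orders keys like integer comparison. */
static void
put_be32(unsigned char *p, uint32_t v)
{
	p[0] = (unsigned char) (v >> 24);
	p[1] = (unsigned char) (v >> 16);
	p[2] = (unsigned char) (v >> 8);
	p[3] = (unsigned char) v;
}

/* 16-byte key for the top tree: one subtree per relation fork. */
static void
encode_top_key(const BufferTagSketch *tag, unsigned char key[16])
{
	put_be32(key, tag->spcNode);
	put_be32(key + 4, tag->dbNode);
	put_be32(key + 8, tag->relNode);
	/* bias forkNum by 1 so -1..3 maps to unsigned 0..4 */
	put_be32(key + 12, (uint32_t) (tag->forkNum + 1));
}

/* 4-byte key for the per-relation block subtree. */
static void
encode_block_key(const BufferTagSketch *tag, unsigned char key[4])
{
	put_be32(key, tag->blockNum);
}
```

With this split, the pointer-caching idea from the thread (stashing subtree pointers per ForkNumber in SMgrRelation) reduces the lookup key to just the 4-byte block part.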
[ { "msg_contents": "Hi hackers,\n\nI believe we found a bug in logical decoding. It only occures with\ncasserts enabled. It was originally discovered and reproduced by Murat\nKabilov and Ildus Kurbangaliev. Here is the stacktrace we've got:\n\n#0 0x00007facc66ef82f in raise () from /usr/lib/libc.so.6\n#1 0x00007facc66da672 in abort () from /usr/lib/libc.so.6\n#2 0x0000000000ac4ebf in ExceptionalCondition (\n conditionName=0xccdea8 \"!(prev_first_lsn < cur_txn->first_lsn)\",\n errorType=0xccdce4 \"FailedAssertion\", fileName=0xccdd38\n\"reorderbuffer.c\",\n lineNumber=680) at assert.c:54\n#3 0x00000000008a9515 in AssertTXNLsnOrder (rb=0x25ca128) at\nreorderbuffer.c:680\n#4 0x00000000008a900f in ReorderBufferTXNByXid (rb=0x25ca128, xid=65609,\ncreate=true,\n is_new=0x0, lsn=211590864, create_as_top=true) at reorderbuffer.c:559\n#5 0x00000000008abf0d in ReorderBufferAddNewTupleCids (rb=0x25ca128,\nxid=65609,\n lsn=211590864, node=..., tid=..., cmin=0, cmax=4294967295,\ncombocid=4294967295)\n at reorderbuffer.c:2098\n#6 0x00000000008b096b in SnapBuildProcessNewCid (builder=0x25d0158,\nxid=65610,\n lsn=211590864, xlrec=0x25d60b8) at snapbuild.c:781\n#7 0x000000000089d01c in DecodeHeap2Op (ctx=0x25ba0b8, buf=0x7ffd0e294da0)\nat decode.c:382\n#8 0x000000000089c8ca in LogicalDecodingProcessRecord (ctx=0x25ba0b8,\nrecord=0x25ba378)\n at decode.c:125\n#9 0x00000000008a124c in DecodingContextFindStartpoint (ctx=0x25ba0b8) at\nlogical.c:492\n#10 0x00000000008b9c3d in CreateReplicationSlot (cmd=0x257be20) at\nwalsender.c:957\n#11 0x00000000008baa60 in exec_replication_command (\n cmd_string=0x24f5b08 \"CREATE_REPLICATION_SLOT temp_slot_name TEMPORARY\nLOGICAL pgoutput USE_SNAPSHOT\") at walsender.c:1531\n#12 0x0000000000937230 in PostgresMain (argc=1, argv=0x25233a8,\ndbname=0x2523380 \"postgres\",\n username=0x24f23c8 \"zilder\") at postgres.c:4245\n#13 0x0000000000881453 in BackendRun (port=0x251a900) at postmaster.c:4431\n#14 0x0000000000880b4f in BackendStartup 
(port=0x251a900) at\npostmaster.c:4122\n#15 0x000000000087cbbe in ServerLoop () at postmaster.c:1704\n#16 0x000000000087c34a in PostmasterMain (argc=3, argv=0x24f0330) at\npostmaster.c:1377\n#17 0x00000000007926b6 in main (argc=3, argv=0x24f0330) at main.c:228\n\nAfter viewing coredump we see that\n`prev_first_lsn == cur_txn->first_lsn`\n\nThe problem seems to be that ReorderBuffer adds two ReorderBufferTXNs\nwith the same LSN, but different transaction ids: subxid and top-level\nxid. See FIX part below.\n\n\nSTEPS TO REPRODUCE\n------------------\n\nWe were able reproduce it on 10, 11 and on master branch. Postgres was\nconfigured as:\n\n./configure --enable-cassert CFLAGS='-ggdb3 -O0' --prefix=$HOME/pg12\n\nAdditional options in postgresql.conf:\n\nwal_level='logical'\nmax_connections=1000\nmax_replication_slots=100\nmax_wal_senders=100\nmax_logical_replication_workers=100\n\npgbench scripts:\n\n$ cat create_table.sql\nBEGIN;\nSAVEPOINT p1;\nCREATE temp TABLE t_t (id INT) ON COMMIT DROP;\nROLLBACK TO SAVEPOINT p1;\nROLLBACK;\n\n$ cat create_slot.sql\nBEGIN ISOLATION LEVEL REPEATABLE READ READ ONLY;\nSELECT pg_create_logical_replication_slot('test' || pg_backend_pid(),\n'pgoutput', true);\nSELECT pg_drop_replication_slot('test' || pg_backend_pid());\nROLLBACK;\n\nRun in parallel terminals:\n\n$HOME/pg12/bin/pgbench postgres -f create_table.sql -T1000 -c50 -j50\n$HOME/pg12/bin/pgbench postgres -f create_slot.sql -T1000 -c50 -j50\n\nIt may take some time. On my local machine it breaks in few seconds.\n\n\nFIX?\n----\n\nCan't say that i have enough understanding of what's going on in the\nlogical decoding code. But the one thing i've noticed is inconsistency\nof xids used to make ReorderBufferTXNByXid() call:\n\n1. first, in DecodeHeap2Op() function ReorderBufferProcessXid() is\ncalled with subtransaction id; it actually creates ReorderBufferTXN\nand adds it to reorder buffer's hash table and toplevel_by_lsn list;\n2. 
second, within ReorderBufferXidSetCatalogChanges() it uses same\nsubxid to lookup the ReorderBufferTXN that was created before,\nsuccessfully;\n3. now in ReorderBufferAddNewTupleCids() it uses top-level transaction\nid instead for lookup; it cannot find xid in hash table and tries to\nadd a new record with the same LSN. And it fails since this LSN is\nalready in toplevel_by_lsn list.\n\nAttached is a simple patch that uses subxid instead of top-level xid\nin ReorderBufferAddNewTupleCids() call. It seems to fix the bug, but\ni'm not sure that this is a valid change. Can someone please verify it\nor maybe suggest a better solution for the issue?\n\nBest regards,\nIldar", "msg_date": "Tue, 25 Jun 2019 16:45:57 +0200", "msg_from": "Ildar Musin <ildar@adjust.com>", "msg_from_op": true, "msg_subject": "Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Wed, Jun 26, 2019 at 2:46 AM Ildar Musin <ildar@adjust.com> wrote:\n> Attached is a simple patch that uses subxid instead of top-level xid\n> in ReorderBufferAddNewTupleCids() call. It seems to fix the bug, but\n> i'm not sure that this is a valid change. 
Can someone please verify it\n> or maybe suggest a better solution for the issue?\n\nHello Ildar,\n\nI hope someone more familiar with this code than me can comment, but\nwhile going through the Commitfest CI results I saw this segfault with\nyour patch:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555184304\n\nAt a glance, HistoricSnapshotGetTupleCids() returned NULL in\nHeapTupleSatisfiesHistoricMVCC(), so ResolveCminCmaxDuringDecoding()\nblew up.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2019 12:59:26 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Mon, Jul 8, 2019 at 9:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jun 26, 2019 at 2:46 AM Ildar Musin <ildar@adjust.com> wrote:\n> > Attached is a simple patch that uses subxid instead of top-level xid\n> > in ReorderBufferAddNewTupleCids() call. It seems to fix the bug, but\n> > i'm not sure that this is a valid change. Can someone please verify it\n> > or maybe suggest a better solution for the issue?\n>\n\nI've reproduced this issue with script Ildar provided. 
I don't find\nout the root cause yet and I'm not sure the patch takes a correct way\nto fix this.\n\nIn my environment, I got the following pg_waldump output and the\nlogical decoding failed at 0/019FA058 when processing NEW_CID.\n\n 90489 rmgr: Transaction len (rec/tot): 38/ 38, tx:\n1999, lsn: 0/019F9E80, prev 0/019F9E38, desc: ASSIGNMENT xtop 1998:\nsubxacts: 1999\n 90490 rmgr: Standby len (rec/tot): 405/ 405, tx:\n0, lsn: 0/019F9EA8, prev 0/019F9E80, desc: RUNNING_XACTS nextXid 2000\nlatestCompletedXid 1949 oldestRunningXid 1836; 48 xacts: 1990 1954\n1978 1850 1944 1972 1940 1924 1906 1970 1985 1998 1966 1987 1975 1858\n1914 1982 1958 1840 1920 1926 1992 1962 1\n 90490 910 1950 1874 1928 1974 1968 1946 1912 1918 1996 1922 1930\n1964 1952 1994 1934 1980 1836 1984 1960 1956 1916 1908 1938\n 90491 rmgr: Heap2 len (rec/tot): 60/ 60, tx:\n1999, lsn: 0/019FA058, prev 0/019F9EA8, desc: NEW_CID rel\n1663/12678/2615; tid 11/59; cmin: 0, cmax: 4294967295, combo:\n4294967295\n 90492 rmgr: Heap len (rec/tot): 127/ 127, tx:\n1999, lsn: 0/019FA098, prev 0/019FA058, desc: INSERT off 59 flags\n0x00, blkref #0: rel 1663/12678/2615 blk 11\n\nI thought that the logical decoding doesn't create ReorderBufferTXN of\nxid=1999 when processing NEW_CID since it decodes ASSIGNMENT of\nxid=1999 beforehand. But what actually happen is that it skips NEW_CID\nsince the state of snapshot builder is SNAPBUILD_BUILDING_SNAPSHOT yet\nand then the state becomes SNAPBUILD_FULL_SNAPSHOT when processing\nRUNNING_XACTS , and therefore it creates two ReorderBufferTXN entries\nfor xid = 1999 and xid = 1998 as top-level transactions when\nprocessing NEW_CID (ReorderBufferXidSetCatalogChanges creates xid=1999\nand ReorderBufferAddNewTupleCids creates xid = 1998). 
And therefore it\ngot the assertion failure when adding ReorderBufferTXN of xid = 1998.\n\nI'll look into this more deeply tomorrow.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 8 Jul 2019 22:46:41 +0800", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Mon, Jul 8, 2019 at 11:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 8, 2019 at 9:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Wed, Jun 26, 2019 at 2:46 AM Ildar Musin <ildar@adjust.com> wrote:\n> > > Attached is a simple patch that uses subxid instead of top-level xid\n> > > in ReorderBufferAddNewTupleCids() call. It seems to fix the bug, but\n> > > i'm not sure that this is a valid change. Can someone please verify it\n> > > or maybe suggest a better solution for the issue?\n> >\n>\n> I've reproduced this issue with script Ildar provided. 
I don't find\n> out the root cause yet and I'm not sure the patch takes a correct way\n> to fix this.\n>\n> In my environment, I got the following pg_waldump output and the\n> logical decoding failed at 0/019FA058 when processing NEW_CID.\n>\n> 90489 rmgr: Transaction len (rec/tot): 38/ 38, tx:\n> 1999, lsn: 0/019F9E80, prev 0/019F9E38, desc: ASSIGNMENT xtop 1998:\n> subxacts: 1999\n> 90490 rmgr: Standby len (rec/tot): 405/ 405, tx:\n> 0, lsn: 0/019F9EA8, prev 0/019F9E80, desc: RUNNING_XACTS nextXid 2000\n> latestCompletedXid 1949 oldestRunningXid 1836; 48 xacts: 1990 1954\n> 1978 1850 1944 1972 1940 1924 1906 1970 1985 1998 1966 1987 1975 1858\n> 1914 1982 1958 1840 1920 1926 1992 1962 1\n> 90490 910 1950 1874 1928 1974 1968 1946 1912 1918 1996 1922 1930\n> 1964 1952 1994 1934 1980 1836 1984 1960 1956 1916 1908 1938\n> 90491 rmgr: Heap2 len (rec/tot): 60/ 60, tx:\n> 1999, lsn: 0/019FA058, prev 0/019F9EA8, desc: NEW_CID rel\n> 1663/12678/2615; tid 11/59; cmin: 0, cmax: 4294967295, combo:\n> 4294967295\n> 90492 rmgr: Heap len (rec/tot): 127/ 127, tx:\n> 1999, lsn: 0/019FA098, prev 0/019FA058, desc: INSERT off 59 flags\n> 0x00, blkref #0: rel 1663/12678/2615 blk 11\n>\n> I thought that the logical decoding doesn't create ReorderBufferTXN of\n> xid=1999 when processing NEW_CID since it decodes ASSIGNMENT of\n> xid=1999 beforehand. But what actually happen is that it skips NEW_CID\n> since the state of snapshot builder is SNAPBUILD_BUILDING_SNAPSHOT yet\n> and then the state becomes SNAPBUILD_FULL_SNAPSHOT when processing\n> RUNNING_XACTS , and therefore it creates two ReorderBufferTXN entries\n> for xid = 1999 and xid = 1998 as top-level transactions when\n> processing NEW_CID (ReorderBufferXidSetCatalogChanges creates xid=1999\n> and ReorderBufferAddNewTupleCids creates xid = 1998).\n\nI think the cause of this bug would be that a ReorderBufferTXN entry\nof sub transaction is created as top-level transaction. 
And this\nhappens because we skip to decode ASSIGNMENT during the state of\nsnapshot builder < SNAPBUILD_FULL.\n\n@@ -778,7 +778,7 @@ SnapBuildProcessNewCid(SnapBuild *builder,\nTransactionId xid,\n */\n ReorderBufferXidSetCatalogChanges(builder->reorder, xid, lsn);\n\n- ReorderBufferAddNewTupleCids(builder->reorder, xlrec->top_xid, lsn,\n+ ReorderBufferAddNewTupleCids(builder->reorder, xid, lsn,\n xlrec->target_node, xlrec->target_tid,\n xlrec->cmin, xlrec->cmax,\n xlrec->combocid);\n\nThe above change in the proposed patch changes SnapBuildProcessNewCid\nso that it passes sub transaction id instead of top transaction id to\nReorderBufferAddNewTupleCids that adds a (relfilenode, tid) -> (cmin,\ncmax) mapping to the transaction. But I think the fix is not correct\nsince as the comment of ReorderBufferTXN describes, the mappings are\nalways assigned to the top-level transaction.\n\nin reorderbuffer.h,\n /*\n * List of (relation, ctid) => (cmin, cmax) mappings for catalog tuples.\n * Those are always assigned to the toplevel transaction. (Keep track of\n * #entries to create a hash of the right size)\n */\n dlist_head tuplecids;\n uint64 ntuplecids;\n\nInstead, I wonder if we can decode ASSIGNMENT even when the state of\nsnapshot builder < SNAPBUILD_FULL_SNAPSHOT. That way, the\nReorderBufferTXN entries of both top transaction and sub transaction\nare created properly before we decode NEW_CID.\n\nAttached patch do that. In my environment the issue seems to be fixed\nbut I'm still not confident that this is the right fix. 
Please review\nit.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center", "msg_date": "Tue, 9 Jul 2019 19:04:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Jul-09, Masahiko Sawada wrote:\n\n> I think the cause of this bug would be that a ReorderBufferTXN entry\n> of sub transaction is created as top-level transaction. And this\n> happens because we skip to decode ASSIGNMENT during the state of\n> snapshot builder < SNAPBUILD_FULL.\n\nThat explanation seems to make sense.\n\n> Instead, I wonder if we can decode ASSIGNMENT even when the state of\n> snapshot builder < SNAPBUILD_FULL_SNAPSHOT. That way, the\n> ReorderBufferTXN entries of both top transaction and sub transaction\n> are created properly before we decode NEW_CID.\n\nYeah, that seems a sensible remediation to me.\n\nI would reduce the scope a little bit -- only create the assignment in\nthe BUILDING state, and skip it in the START state. I'm not sure that\nit's possible to get assignments while in START state that are\nsignificant (I'm still trying to digest SnapBuildFindSnapshot).\n\nI would propose the attached. Andres, do you have an opinion on this?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 26 Jul 2019 18:46:35 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nPetr, Simon, see the potential issue related to fast forward at the\nbottom.\n\n\nOn 2019-07-26 18:46:35 -0400, Alvaro Herrera wrote:\n> On 2019-Jul-09, Masahiko Sawada wrote:\n>\n> > I think the cause of this bug would be that a ReorderBufferTXN entry\n> > of sub transaction is created as top-level transaction. 
And this\n> > happens because we skip to decode ASSIGNMENT during the state of\n> > snapshot builder < SNAPBUILD_FULL.\n>\n> That explanation seems to make sense.\n\nYea. The comment that \"in the assignment case we'll not decode those\nxacts\" is true, but it misses that we *do* currently process\nXLOG_HEAP2_NEW_CID records for transactions that started before reaching\nFULL_SNAPSHOT.\n\nThinking about it, it was not immediately clear to me that it is\nnecessary to process XLOG_HEAP2_NEW_CID at that stage. We only need the\ncid mapping when decoding content of the transaction that the\nXLOG_HEAP2_NEW_CID record was about - which will not happen if it\nstarted before SNAPBUILD_FULL.\n\nExcept that they also are responsible for signalling that a transaction\nperformed catalog modifications (cf ReorderBufferXidSetCatalogChanges()\ncall), which in turn is important for SnapBuildCommitTxn() to know\nwhether to include that transaction needs to be included in historic\nsnapshots.\n\nSo unless I am missing something - which is entirely possible, I've had\nthis code thoroughly swapped out - that means that we only need to\nprocess XLOG_HEAP2_NEW_CID < SNAPBUILD_FULL if there can be transactions\nwith relevant catalog changes, that don't have any invalidations\nmessages.\n\nAfter thinking about it for a bit, that's not guaranteed however. For\none, even for system catalog tables, looking at\nCacheInvalidateHeapTuple() et al there can be catalog modifications that\ncreate neither a snapshot invalidation message, nor a catcache\none. There's also the remote scenario that we possibly might be only\nmodifying a toast relation.\n\nBut more importantly, the only modified table could be a user defined\ncatalog table (i.e. WITH (user_catalog_table = true)). Which in all\nlikelihood won't cause invalidation messages. 
So I think currently it is\nrequired to process NEW_CID records - although we don't need to actually\nexecute the ReorderBufferAddNewTupleCids() etc calls.\n\nPerhaps the right fix for the future would actually be to not rely on\nNEW_CID for recognizing transactions as such, but instead have an xact.c\nmarker that signals whether a transaction performed catalog\nmodifications.\n\nHm, need to think more about this.\n\n\n> > Instead, I wonder if we can decode ASSIGNMENT even when the state of\n> > snapshot builder < SNAPBUILD_FULL_SNAPSHOT. That way, the\n> > ReorderBufferTXN entries of both top transaction and sub transaction\n> > are created properly before we decode NEW_CID.\n>\n> Yeah, that seems a sensible remediation to me.\n\nThat does seem like a reasonable approach. I can see two alternatives:\n\n1) Change SnapBuildProcessNewCid()'s ReorderBufferXidSetCatalogChanges()\n   call to reference the toplevel xid. That has the disadvantage that if\n   the subtransaction that performed DDL rolls back, the main\n   transaction will still be treated as a catalog transaction - I have a\n   hard time seeing that being common, however.\n\n   That'd then also require SnapBuildProcessNewCid() in\n   SNAPBUILD_FULL_SNAPSHOT to return before processing any data assigned\n   to subtransactions. Which would be good, because that's currently\n   unnecessarily stored in memory.\n\n2) We could simply assign the subtransaction to the parent using\n   ReorderBufferAssignChild() in SnapBuildProcessNewCid() or its\n   caller. That ought to also fix the bug.\n\n   It also has the advantage that we can save some memory in transactions\n   that have some, but fewer than the ASSIGNMENT limit subtransactions,\n   because it allows us to avoid having a separate base snapshot for\n   them (c.f.
ReorderBufferTransferSnapToParent()).\n\n Like 1) that could be combined with adding an early return when <\n SNAPBUILD_FULL_SNAPSHOT, after ReorderBufferXidSetCatalogChanges(),\n but I don't think it'd be required for correctness in contrast to 1).\n\nBoth of these would have the advantage that we only would track\nadditional information for transactions that have modified the catalog,\nwhereas the proposal to process ASSIGNMENT earlier, would mean that we\nadditionally track all transactions with more than 64 children. So\nprovided that I didn't mis-analyze here, I think both of my alternatives\nare preferrable? I think 2) is simpler?\n\n\n> \t/*\n> -\t * No point in doing anything yet, data could not be decoded anyway. It's\n> -\t * ok not to call ReorderBufferProcessXid() in that case, except in the\n> -\t * assignment case there'll not be any later records with the same xid;\n> -\t * and in the assignment case we'll not decode those xacts.\n> +\t * If the snapshot isn't yet fully built, we cannot decode anything, so\n> +\t * bail out.\n> +\t *\n> +\t * However, it's critical to process XLOG_XACT_ASSIGNMENT records even\n> +\t * when the snapshot is being built: it is possible to get later records\n> +\t * that require subxids to be properly assigned.\n> \t */\n\nI think I would want this comment to be slightly more expansive. It's\nnot exactly obvious why such records would exist, at least to me. I\ncan't quite come up with something much shorter than the above braindump\nright now however. I'll try to come up with something more concise.\nProbably worthwhile to add somewhere, even if we go for one of my\nalternative proposals.\n\n\nThis actually made me look at the nearby changes due to\n\ncommit 9c7d06d60680c7f00d931233873dee81fdb311c6\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: 2018-01-17 11:38:34 +0000\n\n Ability to advance replication slots\n\nand uhm, I'm not sure they're fully baked. 
Something like:\n\n\t/*\n\t * If we don't have snapshot or we are just fast-forwarding, there is no\n\t * point in decoding changes.\n\t */\n\tif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n\t\tctx->fast_forward)\n\t\treturn;\n\n\t\tcase XLOG_HEAP2_MULTI_INSERT:\n\t\t\tif (!ctx->fast_forward &&\n\t\t\t\tSnapBuildProcessChange(builder, xid, buf->origptr))\n\t\t\t\tDecodeMultiInsert(ctx, buf);\n\t\t\tbreak;\n\nis not very suggestive of that (note the second check).\n\n\nAnd related to the point of the theorizing above, I don't think skipping\nXLOG_HEAP2_NEW_CID entirely when forwarding is correct. As a NEW_CID\nrecord does not imply an invalidation message as discussed above, we'll\nafaict compute wrong snapshots when such transactions are encountered\nduring forwarding. And we'll then log those snapshots to disk. Which\nthen means the slot cannot safely be used for actual decoding anymore -\nas we'll use that snapshot when starting up decoding without fast\nforwarding.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Jul 2019 18:15:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nOn 27. 07. 19 3:15, Andres Freund wrote:\n> Hi,\n> \n> Petr, Simon, see the potential issue related to fast forward at the\n> bottom.\n> \n> [..snip..]\n> \n> This actually made me look at the nearby changes due to\n> \n> commit 9c7d06d60680c7f00d931233873dee81fdb311c6\n> Author: Simon Riggs <simon@2ndQuadrant.com>\n> Date: 2018-01-17 11:38:34 +0000\n> \n> Ability to advance replication slots\n> \n> and uhm, I'm not sure they're fully baked. 
Something like:\n> \n> \t/*\n> \t * If we don't have snapshot or we are just fast-forwarding, there is no\n> \t * point in decoding changes.\n> \t */\n> \tif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n> \t\tctx->fast_forward)\n> \t\treturn;\n> \n> \t\tcase XLOG_HEAP2_MULTI_INSERT:\n> \t\t\tif (!ctx->fast_forward &&\n> \t\t\t\tSnapBuildProcessChange(builder, xid, buf->origptr))\n> \t\t\t\tDecodeMultiInsert(ctx, buf);\n> \t\t\tbreak;\n> \n> is not very suggestive of that (note the second check).\n> \n\nYou mean that it's redundant, yeah; although given your next point,\nsee below.\n\n> \n> And related to the point of the theorizing above, I don't think skipping\n> XLOG_HEAP2_NEW_CID entirely when forwarding is correct. As a NEW_CID\n> record does not imply an invalidation message as discussed above, we'll\n> afaict compute wrong snapshots when such transactions are encountered\n> during forwarding. And we'll then log those snapshots to disk. Which\n> then means the slot cannot safely be used for actual decoding anymore -\n> as we'll use that snapshot when starting up decoding without fast\n> forwarding.\n> \n\nHmm, I guess that's true.
I think I have convinced myself that CID does \nnot matter outside of this transaction, but since we might actually \nshare the computed snapshot via file save/restore with other slots, any \nnon-fast-forwarding decoding that reads the same transaction could miss \nthe CID thanks to the shared snapshot which does not include it.\n\nGiven that we don't process any other records in this function besides \nXLOG_HEAP2_MULTI_INSERT and XLOG_HEAP2_NEW_CID, it seems like simplest \nfix is to just remove the first check for fast forward and keep the one \nin XLOG_HEAP2_MULTI_INSERT.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Mon, 29 Jul 2019 01:09:52 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Mon, 29 Jul 2019 at 00:09, Petr Jelinek <petr@2ndquadrant.com> wrote:\n\n\n> Given that we don't process any other records in this function besides\n> XLOG_HEAP2_MULTI_INSERT and XLOG_HEAP2_NEW_CID, it seems like simplest\n> fix is to just remove the first check for fast forward and keep the one\n> in XLOG_HEAP2_MULTI_INSERT.\n>\n\nFix proposed by Petr, with comments as explained by Andres.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise", "msg_date": "Mon, 29 Jul 2019 06:52:48 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Jul-26, Andres Freund wrote:\n\n> 2) We could simply assign the subtransaction to the parent using\n> ReorderBufferAssignChild() in SnapBuildProcessNewCid() or it's\n> caller. 
That ought to also fix the bug\n> \n> I also has the advantage that we can save some memory in transactions\n> that have some, but fewer than the ASSIGNMENT limit subtransactions,\n> because it allows us to avoid having a separate base snapshot for\n> them (c.f. ReorderBufferTransferSnapToParent()).\n\nI'm not sure I understood this suggestion correctly. I first tried with\nthis, which seems the simplest rendition:\n\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -772,6 +772,12 @@ SnapBuildProcessNewCid(SnapBuild *builder, TransactionId xid,\n {\n \tCommandId\tcid;\n \n+\tif ((SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT) &&\n+\t\t(xlrec->top_xid != xid))\n+\t{\n+\t\tReorderBufferAssignChild(builder->reorder, xlrec->top_xid, xid, lsn);\n+\t}\n+\n \t/*\n \t * we only log new_cid's if a catalog tuple was modified, so mark the\n \t * transaction as containing catalog modifications\n\ntest_decoding's tests pass with that, but if I try the example script\nprovided by Ildar, all pgbench clients die with this:\n\nclient 19 script 1 aborted in command 1 query 0: ERROR: subtransaction logged without previous top-level txn record\n\nI thought I would create the main txn before calling AssignChild in\nsnapbuild; however, ReorderBufferTXNByXid is static in reorderbuffer.c.\nSo that seems out. My next try was to remove the elog() that was\ncausing the failure ... 
but that leads pretty quickly to a crash with\nthis backtrace:\n\n#2 0x00005653241fb823 in ExceptionalCondition (conditionName=conditionName@entry=0x5653243c1960 \"!(prev_first_lsn < cur_txn->first_lsn)\", \n errorType=errorType@entry=0x565324250596 \"FailedAssertion\", \n fileName=fileName@entry=0x5653243c18e8 \"/pgsql/source/master/src/backend/replication/logical/reorderbuffer.c\", \n lineNumber=lineNumber@entry=680) at /pgsql/source/master/src/backend/utils/error/assert.c:54\n#3 0x0000565324062a84 in AssertTXNLsnOrder (rb=rb@entry=0x565326304fa8)\n at /pgsql/source/master/src/backend/replication/logical/reorderbuffer.c:680\n#4 0x0000565324062e39 in ReorderBufferTXNByXid (rb=rb@entry=0x565326304fa8, xid=<optimized out>, xid@entry=185613, create=create@entry=true, \n is_new=is_new@entry=0x0, lsn=lsn@entry=2645271944, create_as_top=create_as_top@entry=true)\n at /pgsql/source/master/src/backend/replication/logical/reorderbuffer.c:559\n#5 0x0000565324067365 in ReorderBufferAddNewTupleCids (rb=0x565326304fa8, xid=185613, lsn=lsn@entry=2645271944, node=..., tid=..., cmin=0, \n cmax=4294967295, combocid=4294967295) at /pgsql/source/master/src/backend/replication/logical/reorderbuffer.c:2100\n#6 0x0000565324069451 in SnapBuildProcessNewCid (builder=0x56532630afd8, xid=185614, lsn=2645271944, xlrec=0x5653262efc78)\n at /pgsql/source/master/src/backend/replication/logical/snapbuild.c:787\n\nNow this failure goes away if I relax the < to <= in the\ncomplained-about line ... but at this point it's two sanity checks that\nI've lobotomized in order to get this to run at all. 
Not really\ncomfortable with that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 7 Aug 2019 16:19:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nOn 2019-08-07 16:19:13 -0400, Alvaro Herrera wrote:\n> On 2019-Jul-26, Andres Freund wrote:\n> \n> > 2) We could simply assign the subtransaction to the parent using\n> > ReorderBufferAssignChild() in SnapBuildProcessNewCid() or it's\n> > caller. That ought to also fix the bug\n> > \n> > I also has the advantage that we can save some memory in transactions\n> > that have some, but fewer than the ASSIGNMENT limit subtransactions,\n> > because it allows us to avoid having a separate base snapshot for\n> > them (c.f. ReorderBufferTransferSnapToParent()).\n> \n> I'm not sure I understood this suggestion correctly. I first tried with\n> this, which seems the simplest rendition:\n> \n> --- a/src/backend/replication/logical/snapbuild.c\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -772,6 +772,12 @@ SnapBuildProcessNewCid(SnapBuild *builder, TransactionId xid,\n> {\n> \tCommandId\tcid;\n> \n> +\tif ((SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT) &&\n> +\t\t(xlrec->top_xid != xid))\n> +\t{\n> +\t\tReorderBufferAssignChild(builder->reorder, xlrec->top_xid, xid, lsn);\n> +\t}\n> +\n\nI think we would need to do this for all values of\nSnapBuildCurrentState() - after all the problem occurs because we\n*previously* didn't assign subxids to the toplevel xid. Compared to the\ncost of catalog changes, ReorderBufferAssignChild() is really cheap. 
So\nI don't think there's any problem just calling it unconditionally (when\ntop_xid <> xid, of course).\n\nIf the above is the only change, I think the body of the if should be\nunreachable, DecodeHeap2Op guards against that:\n\n\t/*\n\t * If we don't have snapshot or we are just fast-forwarding, there is no\n\t * point in decoding changes.\n\t */\n\tif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n\t\tctx->fast_forward)\n\t\treturn;\n\n\n> I thought I would create the main txn before calling AssignChild in\n> snapbuild; however, ReorderBufferTXNByXid is static in reorderbuffer.c.\n> So that seems out.\n\nThere shouldn't be any need for doing that, ReorderBufferAssignChild\ndoes that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Aug 2019 13:59:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Aug-07, Andres Freund wrote:\n\n> I think we would need to do this for all values of\n> SnapBuildCurrentState() - after all the problem occurs because we\n> *previously* didn't assign subxids to the toplevel xid. Compared to the\n> cost of catalog changes, ReorderBufferAssignChild() is really cheap. So\n> I don't think there's any problem just calling it unconditionally (when\n> top_xid <> xid, of course).\n\nBTW I wrote the code as suggested and it passes all the tests ... but I\nthen noticed that the unpatched code doesn't fail Ildar's original\npgbench-based test for me, either. 
So maybe my laptop is not powerful\nenough to reproduce it, or maybe I'm doing something wrong.\n\nI'm tempted to just push it, since it seems \"obviously\" more correct\nthan the original.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Aug 2019 17:35:58 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Tue, Aug 13, 2019 at 6:36 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Aug-07, Andres Freund wrote:\n>\n> > I think we would need to do this for all values of\n> > SnapBuildCurrentState() - after all the problem occurs because we\n> > *previously* didn't assign subxids to the toplevel xid. Compared to the\n> > cost of catalog changes, ReorderBufferAssignChild() is really cheap. So\n> > I don't think there's any problem just calling it unconditionally (when\n> > top_xid <> xid, of course).\n>\n> BTW I wrote the code as suggested and it passes all the tests ... but I\n> then noticed that the unpatched code doesn't fail Ildar's original\n> pgbench-based test for me, either. So maybe my laptop is not powerful\n> enough to reproduce it, or maybe I'm doing something wrong.\n\nIf you share the patch fixing this issue I'll test it on my\nenvironment where I could reproduce the original problem.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 19 Aug 2019 17:42:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Aug-19, Masahiko Sawada wrote:\n\n> On Tue, Aug 13, 2019 at 6:36 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > BTW I wrote the code as suggested and it passes all the tests ... 
but I\n> > then noticed that the unpatched code doesn't fail Ildar's original\n> > pgbench-based test for me, either. So maybe my laptop is not powerful\n> > enough to reproduce it, or maybe I'm doing something wrong.\n> \n> If you share the patch fixing this issue I'll test it on my\n> environment where I could reproduce the original problem.\n\nNever mind. I was able to reproduce it later, and verify that Andres'\nproposed strategy doesn't seem to fix the problem. I'm going to study\nthe problem again today.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 19 Aug 2019 10:43:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nOn August 19, 2019 7:43:28 AM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>On 2019-Aug-19, Masahiko Sawada wrote:\n>\n>> On Tue, Aug 13, 2019 at 6:36 AM Alvaro Herrera\n><alvherre@2ndquadrant.com> wrote:\n>\n>> > BTW I wrote the code as suggested and it passes all the tests ...\n>but I\n>> > then noticed that the unpatched code doesn't fail Ildar's original\n>> > pgbench-based test for me, either. So maybe my laptop is not\n>powerful\n>> > enough to reproduce it, or maybe I'm doing something wrong.\n>> \n>> If you share the patch fixing this issue I'll test it on my\n>> environment where I could reproduce the original problem.\n>\n>Never mind. I was able to reproduce it later, and verify that Andres'\n>proposed strategy doesn't seem to fix the problem. I'm going to study\n>the problem again today.\n\nCould you post the patch?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Mon, 19 Aug 2019 08:51:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nOn 2019-08-19 08:51:43 -0700, Andres Freund wrote:\n> On August 19, 2019 7:43:28 AM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >On 2019-Aug-19, Masahiko Sawada wrote:\n> >\n> >> On Tue, Aug 13, 2019 at 6:36 AM Alvaro Herrera\n> ><alvherre@2ndquadrant.com> wrote:\n> >\n> >> > BTW I wrote the code as suggested and it passes all the tests ...\n> >but I\n> >> > then noticed that the unpatched code doesn't fail Ildar's original\n> >> > pgbench-based test for me, either. So maybe my laptop is not\n> >powerful\n> >> > enough to reproduce it, or maybe I'm doing something wrong.\n> >> \n> >> If you share the patch fixing this issue I'll test it on my\n> >> environment where I could reproduce the original problem.\n> >\n> >Never mind. I was able to reproduce it later, and verify that Andres'\n> >proposed strategy doesn't seem to fix the problem. I'm going to study\n> >the problem again today.\n> \n> Could you post the patch?\n\nPing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Sep 2019 14:11:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hello,\n\nOn 2019-Sep-06, Andres Freund wrote:\n\n> On 2019-08-19 08:51:43 -0700, Andres Freund wrote:\n> > On August 19, 2019 7:43:28 AM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > >Never mind. I was able to reproduce it later, and verify that Andres'\n> > >proposed strategy doesn't seem to fix the problem. 
I'm going to study\n> > >the problem again today.\n> > \n> > Could you post the patch?\n\nHere's a couple of patches.\n\nalways_decode_assignment.patch is Masahiko Sawada's patch, which has\nbeen confirmed to fix the assertion failure.\n\nassign-child.patch is what I understood you were proposing -- namely to\nassign the subxid to the top-level xid on NEW_CID. In order for it to\nwork at all, I had to remove a different safety check; but the assertion\nstill hits when running Ildar's test case. So the patch doesn't\nactually fix anything. And I think it makes sense that it fails, since\nthe first thing that's happening in this patch is that we create both\nthe top-level xact and the subxact with the same LSN value, which is\nwhat triggers the assertion in the first place. It's possible that I\nmisunderstood what you were suggesting.\n\nIf you want to propose a different fix, be my guest, but failing that\nI'm inclined to push always_decode_assignment.patch sometime before the\nend of the week.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 10 Sep 2019 17:11:05 -0300", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Sep-10, Alvaro Herrera from 2ndQuadrant wrote:\n\n> Here's a couple of patches.\n> \n> always_decode_assignment.patch is Masahiko Sawada's patch, which has\n> been confirmed to fix the assertion failure.\n\nI pushed this one to all branches. 
Thanks Ildar for reporting and\nSawada-san for fixing, and reviewers.\n\nIf you (Andres) want to propose a different fix for this, be my guest.\nWe can always revert this one if you have a different better fix.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Sep 2019 16:42:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "Hi,\n\nOn 2019-09-13 16:42:47 -0300, Alvaro Herrera wrote:\n> On 2019-Sep-10, Alvaro Herrera from 2ndQuadrant wrote:\n> \n> > Here's a couple of patches.\n> > \n> > always_decode_assignment.patch is Masahiko Sawada's patch, which has\n> > been confirmed to fix the assertion failure.\n> \n> I pushed this one to all branches. Thanks Ildar for reporting and\n> Sawada-san for fixing, and reviewers.\n> \n> If you (Andres) want to propose a different fix for this, be my guest.\n> We can always revert this one if you have a different better fix.\n\nI'm a bit surprised to see this go in just now, after I asked for the\nchanges you were reporting as not working for three weeks, and you sent\nthem out three days ago (during which I was at the linux plumbers\nconference)...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Sep 2019 15:00:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On 2019-Jul-26, Andres Freund wrote:\n\n> Petr, Simon, see the potential issue related to fast forward at the\n> bottom.\n\nI think we neglected this bit. I looked at the patch Simon submitted\ndownthread, and while I vaguely understand that we need to process\nNEW_CID records during fast-forwarding, I don't quite understand why we\nstill can skip XLOG_INVALIDATION messages. I *think* we should process\nthose too.
Here's a patch that also contains that change; I also\nreworded Simon's proposed comment. I appreciate reviews.\n\nThoughts? Relevant extracts from Andres' message below.\n\n> Thinking about it, it was not immediately clear to me that it is\n> necessary to process XLOG_HEAP2_NEW_CID at that stage. We only need the\n> cid mapping when decoding content of the transaction that the\n> XLOG_HEAP2_NEW_CID record was about - which will not happen if it\n> started before SNAPBUILD_FULL.\n> \n> Except that they also are responsible for signalling that a transaction\n> performed catalog modifications (cf ReorderBufferXidSetCatalogChanges()\n> call), which in turn is important for SnapBuildCommitTxn() to know\n> whether to include that transaction needs to be included in historic\n> snapshots.\n> \n> So unless I am missing something - which is entirely possible, I've had\n> this code thoroughly swapped out - that means that we only need to\n> process XLOG_HEAP2_NEW_CID < SNAPBUILD_FULL if there can be transactions\n> with relevant catalog changes, that don't have any invalidations\n> messages.\n> \n> After thinking about it for a bit, that's not guaranteed however. For\n> one, even for system catalog tables, looking at\n> CacheInvalidateHeapTuple() et al there can be catalog modifications that\n> create neither a snapshot invalidation message, nor a catcache\n> one. There's also the remote scenario that we possibly might be only\n> modifying a toast relation.\n> \n> But more importantly, the only modified table could be a user defined\n> catalog table (i.e. WITH (user_catalog_table = true)). Which in all\n> likelihood won't cause invalidation messages. So I think currently it is\n> required to process NEW_ID records - although we don't need to actually\n> execute the ReorderBufferAddNewTupleCids() etc calls.\n\n[...]\n\n> And related to the point of the theorizing above, I don't think skipping\n> XLOG_HEAP2_NEW_CID entirely when forwarding is correct. 
As a NEW_CID\n> record does not imply an invalidation message as discussed above, we'll\n> afaict compute wrong snapshots when such transactions are encountered\n> during forwarding. And we'll then log those snapshots to disk. Which\n> then means the slot cannot safely be used for actual decoding anymore -\n> as we'll use that snapshot when starting up decoding without fast\n> forwarding.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 30 Jan 2020 16:05:14 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" }, { "msg_contents": "On Fri, Jan 31, 2020 at 12:35 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-26, Andres Freund wrote:\n>\n> > Petr, Simon, see the potential issue related to fast forward at the\n> > bottom.\n>\n> I think we neglected this bit. I looked at the patch Simon submitted\n> downthread, and while I vaguely understand that we need to process\n> NEW_CID records during fast-forwarding,\n>\n\nRight, IIUC, that is mainly to mark txn has catalog changes\n(ReorderBufferXidSetCatalogChanges) so that later such a transaction\ncan be used to build historic snapshots (during SnapBuildCommitTxn).\nNow, if that is true, then won't we need a similar change in\nDecodeHeapOp for XLOG_HEAP_INPLACE case as well? Also, I am not sure\nif SnapBuildProcessNewCid needs to create Cid map in such a case.\n\n\n> I don't quite understand why we\n> still can skip XLOG_INVALIDATION messages. I *think* we should process\n> those too.\n>\n\nI also think so. If you are allowing to execute invalidation\nirrespective of fast_forward in DecodeStandbyOp, then why not do the\nsame in DecodeCommit where we add\nReorderBufferAddInvalidations?\n\n> Here's a patch that also contains that change; I also\n> reworded Simon's proposed comment. 
I appreciate reviews.\n>\n\nEven though, in theory, the changes look to be in the right direction,\nbut it is better if we can create a test case to reproduce the\nproblem. I am not sure, but I think we need to generate a few DDLs\nfor the transaction for which we want to fast forward and then after\nmoving forward the DMLs dependent on those WAL should create some\nproblem as we have skipped executing invalidations and addition of\nsuch a transaction in the historic snapshot.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 31 Jan 2020 16:47:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicated LSN in ReorderBuffer" } ]
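The bug fixed in the thread above can be condensed into a small executable model. The sketch below is a toy Python simulation of the reorder buffer's bookkeeping, not PostgreSQL's actual API: all class and function names are invented for illustration. It shows why skipping ASSIGNMENT records before SNAPBUILD_FULL_SNAPSHOT lets NEW_CID create the subtransaction (xid 1999) as a second "top-level" transaction alongside its real parent (xid 1998), which is what tripped the `prev_first_lsn < cur_txn->first_lsn` assertion, and why always decoding ASSIGNMENT avoids it:

```python
# Toy model of the logical-decoding ReorderBuffer (illustrative names only,
# not PostgreSQL's real data structures).
class ReorderBuffer:
    def __init__(self):
        self.txns = {}        # xid -> {"toplevel": bool, "lsn": int}
        self.subxact_of = {}  # subxact xid -> top-level xid

    def assign_child(self, top_xid, sub_xid, lsn):
        # Decoding an ASSIGNMENT record: remember that sub_xid belongs to
        # top_xid, so it can never later be created as a top-level txn.
        self.txns.setdefault(top_xid, {"toplevel": True, "lsn": lsn})
        self.txns[sub_xid] = {"toplevel": False, "lsn": lsn}
        self.subxact_of[sub_xid] = top_xid

    def txn_by_xid(self, xid, lsn):
        # Create the txn entry on demand, as top-level unless the xid is a
        # known subtransaction (loosely mirroring ReorderBufferTXNByXid()).
        if xid not in self.txns:
            self.txns[xid] = {"toplevel": xid not in self.subxact_of,
                              "lsn": lsn}
        return self.txns[xid]

    def process_new_cid(self, xid, top_xid, lsn):
        # A NEW_CID record touches both xids: the record's xid (to mark
        # catalog changes) and its top_xid (to store the cid mapping).
        self.txn_by_xid(xid, lsn)
        self.txn_by_xid(top_xid, lsn)

def decode(skip_assignment):
    # WAL stream mimicking the pg_waldump output quoted in the thread:
    # ASSIGNMENT (xtop 1998, subxact 1999), then NEW_CID for xid 1999.
    rb = ReorderBuffer()
    if not skip_assignment:
        rb.assign_child(1998, 1999, lsn=100)
    rb.process_new_cid(xid=1999, top_xid=1998, lsn=200)
    return sorted(x for x, t in rb.txns.items() if t["toplevel"])

# Pre-fix behaviour (ASSIGNMENT skipped below SNAPBUILD_FULL_SNAPSHOT):
# both 1998 and 1999 end up as top-level transactions at the same LSN.
assert decode(skip_assignment=True) == [1998, 1999]
# With ASSIGNMENT always decoded, only the real parent is top-level.
assert decode(skip_assignment=False) == [1998]
```

The committed fix corresponds to the second call: once the parent/child relationship is recorded up front, a later NEW_CID cannot promote the subxact to top level, so no duplicate top-level entry (and no duplicate first LSN) exists.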
[ { "msg_contents": "Hi,\n\nI think it might be worthwhile require that IndexAmRoutine returned by\namhandler are allocated statically. Right now we copy them into\nlocal/cache memory contexts. That's not free and reduces branch/jump\ntarget prediction rates. For tableam we did the same, and that was\nactually measurable.\n\nIt seems to me like there's not that many index AMs out there, so\nchanging the signature of amhandler() to require returning a const\npointer to a const object ought to both be enough of a warning, and not\ntoo big a burden.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2019 14:50:11 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Don't allocate IndexAmRoutine dynamically?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think it might be worthwhile require that IndexAmRoutine returned by\n> amhandler are allocated statically.\n\n+1. Could only be an issue if somebody were tempted to have time-varying\nentries in them, but it's hard to see why that could be a good idea.\n\nShould we enforce this for *all* handler objects? If only index AMs,\nwhy only them?\n\n> It seems to me like there's not that many index AMs out there, so\n> changing the signature of amhandler() to require returning a const\n> pointer to a const object ought to both be enough of a warning, and not\n> too big a burden.\n\nOne too many \"consts\" there. Pointer to const object seems fine.\nThe other part is either meaningless or will cause problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2019 16:15:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" 
}, { "msg_contents": "Hi,\n\nOn 2019-06-25 16:15:17 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think it might be worthwhile require that IndexAmRoutine returned by\n> > amhandler are allocated statically.\n> \n> +1. Could only be an issue if somebody were tempted to have time-varying\n> entries in them, but it's hard to see why that could be a good idea.\n\nYea, that seems like a use case we wouldn't want to support. If\nsomething like that is needed, they ought to store it in the relcache.\n\n\n> Should we enforce this for *all* handler objects? If only index AMs,\n> why only them?\n\nWell, tableams do that already. Other than indexam and tableam I think\nthere's also FDW and TSM routines - are there any others? Changing the\nFDW API seems like it'd incur some work to a lot more people than any of\nthe others - I'm not sure it's worth it.\n\n\n> > It seems to me like there's not that many index AMs out there, so\n> > changing the signature of amhandler() to require returning a const\n> > pointer to a const object ought to both be enough of a warning, and not\n> > too big a burden.\n> \n> One too many \"consts\" there. Pointer to const object seems fine.\n> The other part is either meaningless or will cause problems.\n\nYea - I was thinking of the pointer in RelationData, where having it as\nconst *Routine const; would make sense (but it's annoying to do without\ninvoking technically undefined behaviour, doing ugly things with memcpy\nor duplicating struct definitions).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2019 17:06:09 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-06-25 16:15:17 -0400, Tom Lane wrote:\n>> One too many \"consts\" there. 
Pointer to const object seems fine.\n>> The other part is either meaningless or will cause problems.\n\n> Yea - I was thinking of the pointer in RelationData, where having it as\n> const *Routine const; would make sense (but it's annoying to do without\n> invoking technically undefined behaviour, doing ugly things with memcpy\n> or duplicating struct definitions).\n\nYeah, I think trying to make such pointer fields \"const\", within\nstructures that are otherwise not const, is just more trouble than it's\nworth. To start with, how will you assign the handler's output pointer\nto such a field?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2019 17:25:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" }, { "msg_contents": "Hi,\n\nOn 2019-06-25 17:25:12 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-06-25 16:15:17 -0400, Tom Lane wrote:\n> >> One too many \"consts\" there. Pointer to const object seems fine.\n> >> The other part is either meaningless or will cause problems.\n>\n> > Yea - I was thinking of the pointer in RelationData, where having it as\n> > const *Routine const; would make sense (but it's annoying to do without\n> > invoking technically undefined behaviour, doing ugly things with memcpy\n> > or duplicating struct definitions).\n>\n> Yeah, I think trying to make such pointer fields \"const\", within\n> structures that are otherwise not const, is just more trouble than it's\n> worth. To start with, how will you assign the handler's output pointer\n> to such a field?\n\nYea, it's annoying. C++ is slightly cleaner in this case, but it's still not\ngreat. In most cases it's perfectly legal to cast the const away (that's\nalways legal) *and* write through that. 
The standard's requirement is\nquite minimal - C99's 6.7.3 5) says:\n\n If an attempt is made to modify an object defined with a\n const-qualified type through use of an lvalue with non-\n const-qualified type, the behavior is undefined. ...\n\nWhich, in my reading, appears to mean that in the case of dynamically\nallocated memory, the underlying memory can just be initialized ignoring\nthe constness. At least before the object is used via the struct, but\nI'm not sure that strictly speaking matters.\n\nIn the case of relcache it's a bit more complicated, because we copy\nover existing entries - but we don't ever actually change the constant\nfields, even though they're part of a memcpy....\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2019 17:42:52 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-06-25 17:25:12 -0400, Tom Lane wrote:\n>> Yeah, I think trying to make such pointer fields \"const\", within\n>> structures that are otherwise not const, is just more trouble than it's\n>> worth. To start with, how will you assign the handler's output pointer\n>> to such a field?\n\n> Yea, it's annoying. C++ is slightly cleaner in this case, but it's still not\n> great. In most cases it's perfectly legal to cast the const away (that's\n> always legal) *and* write through that. The standard's requirement is\n> quite minimal - C99's 6.7.3 5) says:\n\n> If an attempt is made to modify an object defined with a\n> const-qualified type through use of an lvalue with non-\n> const-qualified type, the behavior is undefined. ...\n\nI'm not sure how you are parsing \"the behavior is undefined\" as \"it's\nlegal\". But in any case, I'm not on board with const-qualifying stuff\nif we just have to cast the const away in common situations. 
I think\nit'd be far more valuable to get to a state where cast-away-const can\nbe made an error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2019 17:53:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" }, { "msg_contents": "Hi,\n\nOn June 25, 2019 5:53:47 PM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2019-06-25 17:25:12 -0400, Tom Lane wrote:\n>>> Yeah, I think trying to make such pointer fields \"const\", within\n>>> structures that are otherwise not const, is just more trouble than\n>it's\n>>> worth. To start with, how will you assign the handler's output\n>pointer\n>>> to such a field?\n>\n>> Yea, it's annoying. C++ is slightly cleaner in this case, but it's\n>still not\n>> great. In most cases it's perfectly legal to cast the const away\n>(that's\n>> always legal) *and* write through that. The standard's requirement is\n>> quite minimal - C99's 6.7.3 5) says:\n>\n>> If an attempt is made to modify an object defined with a\n>> const-qualified type through use of an lvalue with non-\n>> const-qualified type, the behavior is undefined. ...\n>\n>I'm not sure how you are parsing \"the behavior is undefined\" as \"it's\n>legal\". \n\nBecause of \"defined\". There's no object defined that way for dynamic memory allocations, at the very least at the time malloc has been called, before the return value is casted to the target type. So I don't see how something like *(TableamRoutine**)((char*) malloced + offsetof(RelationData, tableamroutine)) = whatever; after the memory allocations could be undefined.\n\nBut that's obviously somewhat ugly. And it's not that clear whether it ever could be problematic for cache entry rebuild cases, at least theoretically (would a memcpy without changing values be invalid once used via RelationData be invalid? 
What if we ever wanted to allow changing the AM of a relation?).\n\n\n> But in any case, I'm not on board with const-qualifying stuff\n>if we just have to cast the const away in common situations. I think\n>it'd be far more valuable to get to a state where cast-away-const can\n>be made an error.\n\nI'm not sure I agree that low level details inside relcache.c, while initially building an entry can really be considered \"common\". But I agree it's probably not worth const'ing the routines.\n\nDon't think the compiler could actually use it for optimizations in this case. If it could, it might be worthwhile. E.g. not having to repeatedly read/dereference the routine pointer when repeatedly calling routine callbacks would sure be nice.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 25 Jun 2019 18:55:43 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Don't allocate IndexAmRoutine dynamically?" } ]
[ { "msg_contents": "Hi\n\nif somebody has a interest about topic, then can look to article\n\nhttps://sigmodrecord.org/publications/sigmodRecord/1806/pdfs/full-issue.pdf\n\nThe New and Improved SQL:2016 Standard\n\nRegards\n\nPavel\n\nHiif somebody has a interest about topic, then can look to articlehttps://sigmodrecord.org/publications/sigmodRecord/1806/pdfs/full-issue.pdfThe New and Improved SQL:2016 StandardRegardsPavel", "msg_date": "Tue, 25 Jun 2019 22:05:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "sigmod article about ANSI SQL 2016 features" } ]
[ { "msg_contents": "I'm seeing a reproducible bus error here:\n\n#0 0x00417420 in statext_mcv_serialize (mcvlist=0x62223450, stats=Variable \"stats\" is not available.\n)\n at mcv.c:785\n785 memcpy(ITEM_BASE_FREQUENCY(item, ndims), &mcvitem->base_frequency, sizeof(double));\n\nWhat appears to be happening is that since ITEM_BASE_FREQUENCY is defined as\n\n#define ITEM_BASE_FREQUENCY(item,ndims)\t((double *) (ITEM_FREQUENCY(item, ndims) + 1))\n\nthe compiler is assuming that the first argument to memcpy is\ndouble-aligned, and it is generating code that depends on that being\ntrue, and of course it isn't true and kaboom.\n\nYou can *not* cast something to an aligned pointer type if it's not\nactually certain to be aligned suitably for that type. In this example,\neven if you wrote \"(char *)\" in front of this, it wouldn't save you;\nthe compiler would still be entitled to believe that the intermediate\ncast value meant something. The casts in the underlying macros\nITEM_FREQUENCY and so on are equally unsafe.\n\n(For the record, this is with gcc 4.2.1 on OpenBSD/hppa 6.4.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2019 23:52:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Tue, Jun 25, 2019 at 11:52:28PM -0400, Tom Lane wrote:\n>I'm seeing a reproducible bus error here:\n>\n>#0 0x00417420 in statext_mcv_serialize (mcvlist=0x62223450, stats=Variable \"stats\" is not available.\n>)\n> at mcv.c:785\n>785 memcpy(ITEM_BASE_FREQUENCY(item, ndims), &mcvitem->base_frequency, sizeof(double));\n>\n>What appears to be happening is that since ITEM_BASE_FREQUENCY is defined as\n>\n>#define ITEM_BASE_FREQUENCY(item,ndims)\t((double *) (ITEM_FREQUENCY(item, ndims) + 1))\n>\n>the compiler is assuming that the first argument to memcpy is\n>double-aligned, and it is generating code that depends on that being\n>true, and of course it isn't true 
and kaboom.\n>\n>You can *not* cast something to an aligned pointer type if it's not\n>actually certain to be aligned suitably for that type. In this example,\n>even if you wrote \"(char *)\" in front of this, it wouldn't save you;\n>the compiler would still be entitled to believe that the intermediate\n>cast value meant something. The casts in the underlying macros\n>ITEM_FREQUENCY and so on are equally unsafe.\n>\n\nOK. So the solution is to ditch the casts altogether, and then do plain\npointer arithmetics like this:\n\n#define ITEM_INDEXES(item)\t\t\t(item)\n#define ITEM_NULLS(item,ndims)\t\t(ITEM_INDEXES(item) + (ndims))\n#define ITEM_FREQUENCY(item,ndims)\t(ITEM_NULLS(item, ndims) + (ndims))\n#define ITEM_BASE_FREQUENCY(item,ndims)\t(ITEM_FREQUENCY(item, ndims) + sizeof(double))\n\nOr is that still relying on alignment, somehow?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 26 Jun 2019 09:49:46 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jun 25, 2019 at 11:52:28PM -0400, Tom Lane wrote:\n>> You can *not* cast something to an aligned pointer type if it's not\n>> actually certain to be aligned suitably for that type.\n\n> OK. 
So the solution is to ditch the casts altogether, and then do plain\n> pointer arithmetics like this:\n\n> #define ITEM_INDEXES(item)\t\t\t(item)\n> #define ITEM_NULLS(item,ndims)\t\t(ITEM_INDEXES(item) + (ndims))\n> #define ITEM_FREQUENCY(item,ndims)\t(ITEM_NULLS(item, ndims) + (ndims))\n> #define ITEM_BASE_FREQUENCY(item,ndims)\t(ITEM_FREQUENCY(item, ndims) + sizeof(double))\n\n> Or is that still relying on alignment, somehow?\n\nNo, constructs like a char* pointer plus n times sizeof(something) should\nbe safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 09:40:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Wed, Jun 26, 2019 at 09:49:46AM +0200, Tomas Vondra wrote:\n>On Tue, Jun 25, 2019 at 11:52:28PM -0400, Tom Lane wrote:\n>>I'm seeing a reproducible bus error here:\n>>\n>>#0 0x00417420 in statext_mcv_serialize (mcvlist=0x62223450, stats=Variable \"stats\" is not available.\n>>)\n>> at mcv.c:785\n>>785 memcpy(ITEM_BASE_FREQUENCY(item, ndims), &mcvitem->base_frequency, sizeof(double));\n>>\n>>What appears to be happening is that since ITEM_BASE_FREQUENCY is defined as\n>>\n>>#define ITEM_BASE_FREQUENCY(item,ndims)\t((double *) (ITEM_FREQUENCY(item, ndims) + 1))\n>>\n>>the compiler is assuming that the first argument to memcpy is\n>>double-aligned, and it is generating code that depends on that being\n>>true, and of course it isn't true and kaboom.\n>>\n>>You can *not* cast something to an aligned pointer type if it's not\n>>actually certain to be aligned suitably for that type. In this example,\n>>even if you wrote \"(char *)\" in front of this, it wouldn't save you;\n>>the compiler would still be entitled to believe that the intermediate\n>>cast value meant something. The casts in the underlying macros\n>>ITEM_FREQUENCY and so on are equally unsafe.\n>>\n>\n>OK. 
So the solution is to ditch the casts altogether, and then do plain\n>pointer arithmetics like this:\n>\n>#define ITEM_INDEXES(item)\t\t\t(item)\n>#define ITEM_NULLS(item,ndims)\t\t(ITEM_INDEXES(item) + (ndims))\n>#define ITEM_FREQUENCY(item,ndims)\t(ITEM_NULLS(item, ndims) + (ndims))\n>#define ITEM_BASE_FREQUENCY(item,ndims)\t(ITEM_FREQUENCY(item, ndims) + sizeof(double))\n>\n>Or is that still relying on alignment, somehow?\n>\n\nAttached is a patch that should (hopefully) fix this. It essentially\ntreats the item as (char *) and does all pointer arithmetics without any\nadditional casts. So there are no intermediate casts.\n\nI have no way to test this, so I may either wait for you to test this\nfirst, or push and wait. It seems to fail only on a very small number of\nbuildfarm animals, so having a confirmation would be nice.\n\nThe fix keeps the binary format as is, so the serialized MCV items are\nmax-aligned. That means we can access the uint16 indexes directly, but we\nneed to copy the rest of the fields (because those may not be aligned). In\nhindsight that seems a bit silly, we might as well copy everything, not\ncare about the alignment and maybe save a few more bytes. But that would\nrequire catversion bump. OTOH we may beed to do that anyway, to fix the\npg_mcv_list_items() signature (as discussed in the other MCV thread).\n\nregards \n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Jun 2019 15:43:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Attached is a patch that should (hopefully) fix this. It essentially\n> treats the item as (char *) and does all pointer arithmetics without any\n> additional casts. 
So there are no intermediate casts.\n\nThis passes the eyeball test, and it also allows my OpenBSD/hppa\ninstallation to get through the core regression tests, so I think\nit's good as far as it goes. Please push.\n\nHowever ... nosing around in mcv.c, I noticed that the next macro:\n\n/*\n * Used to compute size of serialized MCV list representation.\n */\n#define MinSizeOfMCVList\t\t\\\n\t(VARHDRSZ + sizeof(uint32) * 3 + sizeof(AttrNumber))\n\n#define SizeOfMCVList(ndims,nitems)\t\\\n\t(MAXALIGN(MinSizeOfMCVList + sizeof(Oid) * (ndims)) + \\\n\t MAXALIGN((ndims) * sizeof(DimensionInfo)) + \\\n\t MAXALIGN((nitems) * ITEM_SIZE(ndims)))\n\nis both woefully underdocumented and completely at variance with\nreality. It doesn't seem to be accounting for the actual data values.\nNo doubt this is why it's not used in the places where it'd matter;\nthe tests that do use it are testing much weaker conditions than they\nshould.\n\n> The fix keeps the binary format as is, so the serialized MCV items are\n> max-aligned. That means we can access the uint16 indexes directly, but we\n> need to copy the rest of the fields (because those may not be aligned). In\n> hindsight that seems a bit silly, we might as well copy everything, not\n> care about the alignment and maybe save a few more bytes.\n\nI think that part of the problem here is that the way this code is\nwritten, \"maxaligned\" is no such thing. 
What you're actually maxaligning\nseems to be the offset from the start of the data area of a varlena value,\nwhich is generally going to be a maxaligned palloc result plus 4 bytes.\nSo \"aligned\" double values are actually guaranteed to be on odd word\nboundaries not even ones.\n\nWhat's more, it's difficult to convince oneself that the maxaligns done\nin different parts of the code are all enforcing the same choices about\nwhich substructures get pseudo-maxaligned and which don't, because the\nlogic doesn't line up very well.\n\nIf we do need another catversion bump before v12, I'd vote for ripping\nout Every Single One of the \"maxalign\" operations in this code, just\non the grounds of code simplicity and bug reduction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 11:26:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Wed, Jun 26, 2019 at 11:26:21AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Attached is a patch that should (hopefully) fix this. It essentially\n>> treats the item as (char *) and does all pointer arithmetics without any\n>> additional casts. So there are no intermediate casts.\n>\n>This passes the eyeball test, and it also allows my OpenBSD/hppa\n>installation to get through the core regression tests, so I think\n>it's good as far as it goes. Please push.\n>\n>However ... 
nosing around in mcv.c, I noticed that the next macro:\n>\n>/*\n> * Used to compute size of serialized MCV list representation.\n> */\n>#define MinSizeOfMCVList\t\t\\\n>\t(VARHDRSZ + sizeof(uint32) * 3 + sizeof(AttrNumber))\n>\n>#define SizeOfMCVList(ndims,nitems)\t\\\n>\t(MAXALIGN(MinSizeOfMCVList + sizeof(Oid) * (ndims)) + \\\n>\t MAXALIGN((ndims) * sizeof(DimensionInfo)) + \\\n>\t MAXALIGN((nitems) * ITEM_SIZE(ndims)))\n>\n>is both woefully underdocumented and completely at variance with\n>reality. It doesn't seem to be accounting for the actual data values.\n>No doubt this is why it's not used in the places where it'd matter;\n>the tests that do use it are testing much weaker conditions than they\n>should.\n>\n\nI agree about the macro being underdocumented, but AFAICS it's used\ncorrectly to check the expected length. It can't include the data values\ndirectly, because that's variable amount of data - and it's encoded in not\nyet verified part of the data.\n\nSo this only includes parts with known lengths, and then the code does\nthis:\n\n for (dim = 0; dim < ndims; dim++)\n {\n ...\n expected_size += MAXALIGN(info[dim].nbytes);\n }\n\nand uses that to check the actual length.\n\n if (VARSIZE_ANY(data) != expected_size)\n elog(ERROR, ...);\n\nThat being said, maybe this is unnecessarily defensive and we should just\ntrust the values not being corrupted. So if we get pg_mcv_list value, we'd\nsimply assume it's OK.\n\n>> The fix keeps the binary format as is, so the serialized MCV items are\n>> max-aligned. That means we can access the uint16 indexes directly, but we\n>> need to copy the rest of the fields (because those may not be aligned). In\n>> hindsight that seems a bit silly, we might as well copy everything, not\n>> care about the alignment and maybe save a few more bytes.\n>\n>I think that part of the problem here is that the way this code is\n>written, \"maxaligned\" is no such thing. 
What you're actually maxaligning\n>seems to be the offset from the start of the data area of a varlena value,\n>which is generally going to be a maxaligned palloc result plus 4 bytes.\n>So \"aligned\" double values are actually guaranteed to be on odd word\n>boundaries not even ones.\n>\n\nI don't think so. The pointers should be maxaligned with respect to the\nwhole varlena value, which is what 'raw' points to. At least that was the\nintent of code like this:\n\n raw = palloc0(total_length);\n\n ...\n\n /* the header may not be exactly aligned, so make sure it is */\n ptr = raw + MAXALIGN(ptr - raw);\n\nIf it's not like that in some place, it's a bug.\n\n>What's more, it's difficult to convince oneself that the maxaligns done\n>in different parts of the code are all enforcing the same choices about\n>which substructures get pseudo-maxaligned and which don't, because the\n>logic doesn't line up very well.\n>\n\nNot sure. If there's a way to make it clearer, I'm ready to do the work.\nUnfortunately it's hard for me to judge that, because I've spent so much\ntime on that code that it seems fairly clear to me.\n\n>If we do need another catversion bump before v12, I'd vote for ripping\n>out Every Single One of the \"maxalign\" operations in this code, just\n>on the grounds of code simplicity and bug reduction.\n>\n\nHmmm, OK. The original reason to keep the parts aligned was to be able to\nreference the parts directly during processing. If we get rid of the\nalignment, we'll have to memcpy everything during deserialization. 
But\nif it makes the code simpler, it might be worth it - this part of the\ncode was clearly the weakest part of the patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 26 Jun 2019 18:08:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Jun 26, 2019 at 11:26:21AM -0400, Tom Lane wrote:\n>> #define SizeOfMCVList(ndims,nitems)\t\\\n>> is both woefully underdocumented and completely at variance with\n>> reality. It doesn't seem to be accounting for the actual data values.\n\n> I agree about the macro being underdocumented, but AFAICS it's used\n> correctly to check the expected length. It can't include the data values\n> directly, because that's variable amount of data - and it's encoded in not\n> yet verified part of the data.\n\nWell, it should have some other name then. Or *at least* a comment.\nIt's unbelievably misleading as it stands.\n\n> That being said, maybe this is unnecessarily defensive and we should just\n> trust the values not being corrupted.\n\nNo, I'm on board with checking the lengths. I just don't like how\nhard it is to discern what's being checked.\n\n>> I think that part of the problem here is that the way this code is\n>> written, \"maxaligned\" is no such thing. What you're actually maxaligning\n>> seems to be the offset from the start of the data area of a varlena value,\n\n> I don't think so. The pointers should be maxaligned with respect to the\n> whole varlena value, which is what 'raw' points to.\n\n[ squint ... 
] OK, I think I misread this:\n\nstatext_mcv_deserialize(bytea *data)\n{\n...\n\t/* pointer to the data part (skip the varlena header) */\n\tptr = VARDATA_ANY(data);\n\traw = (char *) data;\n\nI think this is confusing in itself --- I read it as \"raw = (char *) ptr\"\nand I think most other people would assume that too based on the order\nof operations. It'd read better as\n\n\t/* remember start of datum for maxalign reference */\n\traw = (char *) data;\n\n\t/* pointer to the data part (skip the varlena header) */\n\tptr = VARDATA_ANY(data);\n\nAnother problem with this code is that it flat doesn't work for\nnon-4-byte-header varlenas: it'd do the alignment differently than the\nserialization side did. That's okay given that the two extant call sites\nare guaranteed to pass detoasted datums. But using VARDATA_ANY gives a\ncompletely misleading appearance of being ready to deal with short-header\nvarlenas, and heaven forbid there should be any comment to discourage\nfuture coders from trying. So really what I'd like to see here is\n\n\t/* remember start of datum for maxalign reference */\n\traw = (char *) data;\n\n\t/* alignment logic assumes full-size datum header */\n\tAssert(VARATT_IS_4B(data));\n\n\t/* pointer to the data part (skip the varlena header) */\n\tptr = VARDATA_ANY(data);\n\nOr, of course, this could all go away if we got rid of the\nbogus maxaligning...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 12:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Wed, Jun 26, 2019 at 12:31:13PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Wed, Jun 26, 2019 at 11:26:21AM -0400, Tom Lane wrote:\n>>> #define SizeOfMCVList(ndims,nitems)\t\\\n>>> is both woefully underdocumented and completely at variance with\n>>> reality. 
It doesn't seem to be accounting for the actual data values.\n>\n>> I agree about the macro being underdocumented, but AFAICS it's used\n>> correctly to check the expected length. It can't include the data values\n>> directly, because that's variable amount of data - and it's encoded in not\n>> yet verified part of the data.\n>\n>Well, it should have some other name then. Or *at least* a comment.\n>It's unbelievably misleading as it stands.\n>\n\nTrue.\n\n>> That being said, maybe this is unnecessarily defensive and we should just\n>> trust the values not being corrupted.\n>\n>No, I'm on board with checking the lengths. I just don't like how\n>hard it is to discern what's being checked.\n>\n\nUnderstood.\n\n>>> I think that part of the problem here is that the way this code is\n>>> written, \"maxaligned\" is no such thing. What you're actually maxaligning\n>>> seems to be the offset from the start of the data area of a varlena value,\n>\n>> I don't think so. The pointers should be maxaligned with respect to the\n>> whole varlena value, which is what 'raw' points to.\n>\n>[ squint ... ] OK, I think I misread this:\n>\n>statext_mcv_deserialize(bytea *data)\n>{\n>...\n>\t/* pointer to the data part (skip the varlena header) */\n>\tptr = VARDATA_ANY(data);\n>\traw = (char *) data;\n>\n>I think this is confusing in itself --- I read it as \"raw = (char *) ptr\"\n>and I think most other people would assume that too based on the order\n>of operations. It'd read better as\n>\n>\t/* remember start of datum for maxalign reference */\n>\traw = (char *) data;\n>\n>\t/* pointer to the data part (skip the varlena header) */\n>\tptr = VARDATA_ANY(data);\n>\n\nYeah, that'd have been better.\n\n>Another problem with this code is that it flat doesn't work for\n>non-4-byte-header varlenas: it'd do the alignment differently than the\n>serialization side did. That's okay given that the two extant call sites\n>are guaranteed to pass detoasted datums. 
But using VARDATA_ANY gives a\n>completely misleading appearance of being ready to deal with short-header\n>varlenas, and heaven forbid there should be any comment to discourage\n>future coders from trying. So really what I'd like to see here is\n>\n>\t/* remember start of datum for maxalign reference */\n>\traw = (char *) data;\n>\n>\t/* alignment logic assumes full-size datum header */\n>\tAssert(VARATT_IS_4B(data));\n>\n>\t/* pointer to the data part (skip the varlena header) */\n>\tptr = VARDATA_ANY(data);\n>\n>Or, of course, this could all go away if we got rid of the\n>bogus maxaligning...\n>\n\nOK. Attached is a patch ditching the alignment in serialized data. I've\nditched the macros to access parts of serialized data, and everything\ngets copied.\n\nThe main complication is with varlena values, which may or may not have\n4B headers (for now there's the PG_DETOAST_DATUM call, but as you\nmentioned we may want to remove it in the future). So I've stored the\nlength as uint32 separately, followed by the full varlena value (thanks\nto that the deserialization is simpler). 
Not sure if that's the best\nsolution, though, because this way we store the length twice.\n\nI've kept the alignment in the deserialization code, because there it\nallows us to allocate the whole value as a single chunk, which I think\nis useful (I admit I don't have any measurements to demonstrate that).\nBut if we decide to rework this later, we can - it's just in-memory\nrepresentation, not on-disk.\n\nIs this roughly what you had in mind?\n\nFWIW I'm sure some of the comments are stale and/or need clarification,\nbut it's a bit too late over here, so I'll look into that tomorrow.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 27 Jun 2019 00:29:18 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> OK. Attached is a patch ditching the alignment in serialized data. I've\n> ditched the macros to access parts of serialized data, and everything\n> gets copied.\n\nI lack energy to actually read this patch right now, and I don't currently\nhave an opinion about whether it's worth another catversion bump to fix\nthis stuff in v12. But I did test the patch, and I can confirm it gets\nthrough the core regression tests on hppa (both gaur's host environment\nwith gcc 3.4.6, and the OpenBSD installation with gcc 4.2.1).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 00:04:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Thu, Jun 27, 2019 at 12:04:30AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> OK. Attached is a patch ditching the alignment in serialized data. 
I've\n>> ditched the macros to access parts of serialized data, and everything\n>> gets copied.\n>\n>I lack energy to actually read this patch right now, and I don't currently\n>have an opinion about whether it's worth another catversion bump to fix\n>this stuff in v12. But I did test the patch, and I can confirm it gets\n>through the core regression tests on hppa (both gaur's host environment\n>with gcc 3.4.6, and the OpenBSD installation with gcc 4.2.1).\n>\n\nThanks for running it through regression tests, that alone is a very\nuseful piece of information for me.\n\nAs for the catversion bump - I'd probably vote to do it. Not just because\nof this serialization stuff, but to fix the pg_mcv_list_items function.\nIt's not something I'm very enthusiastic about (kinda embarassed about it,\nreally), but it seems better than shipping something that we'll need to\nrework in PG13.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 27 Jun 2019 13:26:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Thu, Jun 27, 2019 at 01:26:32PM +0200, Tomas Vondra wrote:\n>On Thu, Jun 27, 2019 at 12:04:30AM -0400, Tom Lane wrote:\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>OK. Attached is a patch ditching the alignment in serialized data. I've\n>>>ditched the macros to access parts of serialized data, and everything\n>>>gets copied.\n>>\n>>I lack energy to actually read this patch right now, and I don't currently\n>>have an opinion about whether it's worth another catversion bump to fix\n>>this stuff in v12. 
But I did test the patch, and I can confirm it gets\n>>through the core regression tests on hppa (both gaur's host environment\n>>with gcc 3.4.6, and the OpenBSD installation with gcc 4.2.1).\n>>\n>\n>Thanks for running it through regression tests, that alone is a very\n>useful piece of information for me.\n>\n>As for the catversion bump - I'd probably vote to do it. Not just because\n>of this serialization stuff, but to fix the pg_mcv_list_items function.\n>It's not something I'm very enthusiastic about (kinda embarassed about it,\n>really), but it seems better than shipping something that we'll need to\n>rework in PG13.\n>\n\nAttached is a slightly improved version of the serialization patch. The\nmain difference is that when serializing varlena values, the previous\npatch version stored\n\n length (uint32) + full varlena (incl. the header)\n\nwhich is kinda redundant, because the varlena stores the length too. So\nnow it only stores the length + data, without the varlena header. I\ndon't think there's a better way to store varlena values without\nenforcing alignment (which is what happens in current master).\n\nThere's one additional change I failed to mention before - I had to add\nanother field to DimensionInfo, tracking how much space will be needed\nfor deserialized data. This is needed because the deserialization\nallocates the whole MCV as a single chunk of memory, to reduce palloc\noverhead. It could parse the data twice (first to determine the space,\nthen to actually parse it), this allows doing just a single pass. Which\nseems useful for large MCV lists, but maybe it's not worth it?\n\nBarring objections I'll commit this together with the pg_mcv_list_items\nfix, posted in a separate thread. 
Of course, this requires catversion\nbump - an alternative would be to keep enforcing the alignment, but\ntweak the macros to work on all platforms without SIGBUS.\n\nConsidering how troublesome this serialization part of the patch turned\nout to be, I'm not really sure about anything at this point. So I'd welcome\nthoughts about the proposed changes.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 16:13:12 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Attached is a slightly improved version of the serialization patch.\n\nI reviewed this patch, and tested it on hppa and ppc. I found one\nserious bug: in the deserialization varlena case, you need\n\n-\t\t\t\t\tdataptr += MAXALIGN(len);\n+\t\t\t\t\tdataptr += MAXALIGN(len + VARHDRSZ);\n\n(approx. line 1100 in mcv.c). Without this, the output data is corrupt,\nplus the Assert a few lines further down about dataptr having been\nadvanced by the correct amount fires. (On one machine I tested on,\nthat happened during the core regression tests. The other machine\ngot through regression, but trying to do \"select * from pg_stats_ext;\"\nafterwards exhibited the crash. I didn't investigate closely, but\nI suspect the difference has to do with different MAXALIGN values,\n4 and 8 respectively.)\n\nThe attached patch (a delta atop your v2) corrects that plus some\ncosmetic issues.\n\nIf we're going to push this, it would be considerably less complicated\nto do so before v12 gets branched --- not long after that, there will be\ncatversion differences to cope with. I'm planning to make the branch\ntomorrow (Monday), probably ~1500 UTC.
Just sayin'.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 30 Jun 2019 20:30:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Sun, Jun 30, 2019 at 08:30:33PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Attached is a slightly improved version of the serialization patch.\n>\n>I reviewed this patch, and tested it on hppa and ppc. I found one\n>serious bug: in the deserialization varlena case, you need\n>\n>-\t\t\t\t\tdataptr += MAXALIGN(len);\n>+\t\t\t\t\tdataptr += MAXALIGN(len + VARHDRSZ);\n>\n>(approx. line 1100 in mcv.c). Without this, the output data is corrupt,\n>plus the Assert a few lines further down about dataptr having been\n>advanced by the correct amount fires. (On one machine I tested on,\n>that happened during the core regression tests. The other machine\n>got through regression, but trying to do \"select * from pg_stats_ext;\"\n>afterwards exhibited the crash. I didn't investigate closely, but\n>I suspect the difference has to do with different MAXALIGN values,\n>4 and 8 respectively.)\n>\n>The attached patch (a delta atop your v2) corrects that plus some\n>cosmetic issues.\n>\n\nThanks.\n\n>If we're going to push this, it would be considerably less complicated\n>to do so before v12 gets branched --- not long after that, there will be\n>catversion differences to cope with. I'm planning to make the branch\n>tomorrow (Monday), probably ~1500 UTC. 
Just sayin'.\n>\n\nUnfortunately, I was travelling on Sunday and was quite busy on Monday, so\nI've been unable to push this before the branching :-(\n\nI'll push by the end of this week, once I get home.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:38:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Tue, Jul 02, 2019 at 10:38:29AM +0200, Tomas Vondra wrote:\n>On Sun, Jun 30, 2019 at 08:30:33PM -0400, Tom Lane wrote:\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>Attached is a slightly improved version of the serialization patch.\n>>\n>>I reviewed this patch, and tested it on hppa and ppc. I found one\n>>serious bug: in the deserialization varlena case, you need\n>>\n>>-\t\t\t\t\tdataptr += MAXALIGN(len);\n>>+\t\t\t\t\tdataptr += MAXALIGN(len + VARHDRSZ);\n>>\n>>(approx. line 1100 in mcv.c). Without this, the output data is corrupt,\n>>plus the Assert a few lines further down about dataptr having been\n>>advanced by the correct amount fires. (On one machine I tested on,\n>>that happened during the core regression tests. The other machine\n>>got through regression, but trying to do \"select * from pg_stats_ext;\"\n>>afterwards exhibited the crash. I didn't investigate closely, but\n>>I suspect the difference has to do with different MAXALIGN values,\n>>4 and 8 respectively.)\n>>\n>>The attached patch (a delta atop your v2) corrects that plus some\n>>cosmetic issues.\n>>\n>\n>Thanks.\n>\n>>If we're going to push this, it would be considerably less complicated\n>>to do so before v12 gets branched --- not long after that, there will be\n>>catversion differences to cope with. I'm planning to make the branch\n>>tomorrow (Monday), probably ~1500 UTC. 
Just sayin'.\n>>\n>\n>Unfortunately, I was travelling on Sunday and was quite busy on Monday, so\n>I've been unable to push this before the branching :-(\n>\n>I'll push by the end of this week, once I get home.\n>\n\nI've pushed the fix (along with the pg_mcv_list_item fix) into master,\nhopefully the buildfarm won't be upset about it.\n\nI was about to push into REL_12_STABLE, when I realized that maybe we\nneed to do something about the catversion first. REL_12_STABLE is still\non 201906161, while master got to 201907041 thanks to commit\n7b925e12703. Simply cherry-picking the commits would get us to\n201907052 in both branches, but that'd be wrong as the catalogs do\ndiffer. I suppose this is what you meant by \"catversion differences to\ncope with\".\n\nI suppose this is not the first time this happened - how did we deal\nwith it in the past? I guess we could use some \"past\" non-conflicting\ncatversion number in the REL_12_STABLE branch (say, 201907030x) but\nmaybe that'd be wrong?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 5 Jul 2019 03:23:42 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I was about to push into REL_12_STABLE, when I realized that maybe we\n> need to do something about the catversion first. REL_12_STABLE is still\n> on 201906161, while master got to 201907041 thanks to commit\n> 7b925e12703. Simply cherry-picking the commits would get us to\n> 201907052 in both branches, but that'd be wrong as the catalogs do\n> differ. I suppose this is what you meant by \"catversion differences to\n> cope with\".\n\nYeah, exactly.\n\nMy recommendation is to use 201907051 on v12 and 201907052\non master (or whatever is $today for you). 
They need to be\ndifferent now that the branches' catalog histories have diverged,\nand it seems to me that the back branch should be \"older\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2019 21:28:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Fri, Jul 5, 2019, 03:28 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > I was about to push into REL_12_STABLE, when I realized that maybe we\n> > need to do something about the catversion first. REL_12_STABLE is still\n> > on 201906161, while master got to 201907041 thanks to commit\n> > 7b925e12703. Simply cherry-picking the commits would get us to\n> > 201907052 in both branches, but that'd be wrong as the catalogs do\n> > differ. I suppose this is what you meant by \"catversion differences to\n> > cope with\".\n>\n> Yeah, exactly.\n>\n> My recommendation is to use 201907051 on v12 and 201907052\n> on master (or whatever is $today for you). They need to be\n> different now that the branches' catalog histories have diverged,\n> and it seems to me that the back branch should be \"older\".\n>\n> regards, tom lane\n>\n\nUnfortunately, master is already using both 201907051 and 201907052 (two of\nthe patches I pushed touched the catalog), so we can't quite do exactly\nthat. 
We need to use 201907042 and 201907043 or something preceding 201907041\n(which is the extra catversion on master).\n\nAt this point there's no perfect sequence, thanks to the extra commit on\nmaster, so REL_12_STABLE can't be exactly \"older\" :-(\n\nBarring objections, I'll go ahead with 201907042+201907043 later today,\nbefore someone pushes another catversion-bumping patch.\n\nregards\n\n", "msg_date": "Fri, 5 Jul 2019 10:36:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On Fri, Jul 05, 2019 at 10:36:59AM +0200, Tomas Vondra wrote:\n>On Fri, Jul 5, 2019, 03:28 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> > I was about to push into REL_12_STABLE, when I realized that maybe we\n>> > need to do something about the catversion first. REL_12_STABLE is still\n>> > on 201906161, while master got to 201907041 thanks to commit\n>> > 7b925e12703. Simply cherry-picking the commits would get us to\n>> > 201907052 in both branches, but that'd be wrong as the catalogs do\n>> > differ. I suppose this is what you meant by \"catversion differences to\n>> > cope with\".\n>>\n>> Yeah, exactly.\n>>\n>> My recommendation is to use 201907051 on v12 and 201907052\n>> on master (or whatever is $today for you). They need to be\n>> different now that the branches' catalog histories have diverged,\n>> and it seems to me that the back branch should be \"older\".\n>>\n>> regards, tom lane\n>>\n>\n>Unfortunately, master is already using both 201907051 and 201907052 (two of\n>the patches I pushed touched the catalog), so we can't quite do exactly\n>that.
We need to use 201907042 and 201907043 or something preceding 201907041\n>(which is the extra catversion on master).\n>\n>At this point there's no perfect sequence, thanks to the extra commit on\n>master, so REL_12_STABLE can't be exactly \"older\" :-(\n>\n>Barring objections, I'll go ahead with 201907042+201907043 later today,\n>before someone pushes another catversion-bumping patch.\n>\n\nI've pushed the REL_12_STABLE backpatches too, now. I've ended up using\n201907031 and 201907032 - those values precede the first catversion bump\nin master (201907041), so the back branch looks \"older\". And there's a\nbit of slack for additional bumps (if the unlikely case we need them).\n\nWe might have \"fixed\" this by backpatching the commit with the extra\ncatversion bump (7b925e12) but the commit seems a bit too large for\nthat. It's fairly isolated though. But it seems like a bad practice.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 5 Jul 2019 18:04:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I've pushed the REL_12_STABLE backpatches too, now. I've ended up using\n> 201907031 and 201907032 - those values precede the first catversion bump\n> in master (201907041), so the back branch looks \"older\". And there's a\n> bit of slack for additional bumps (if the unlikely case we need them).\n\nFWIW, I don't think there's a need for every catversion on the back branch\nto look older than any catversion on HEAD. The requirement so far as the\ncore code is concerned is only for non-equality. Now, extension code does\noften do something like \"if catversion >= xxx\", but in practice they're\nonly concerned about numbers used by released versions. 
HEAD's catversion\nwill be strictly greater than v12's soon enough, even if you had made it\nnot so today. So I think sticking to today's-date-with-some-N is better\nthan artificially assigning other dates.\n\nWhat's done is done, and there's no need to change it, but now you\nknow what to do next time.\n\n> We might have \"fixed\" this by backpatching the commit with the extra\n> catversion bump (7b925e12) but the commit seems a bit too large for\n> that. It's fairly isolated though. But it seems like a bad practice.\n\nYeah, that approach flies in the face of the notion of feature freeze.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2019 13:06:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: mcvstats serialization code is still shy of a load" }, { "msg_contents": "On 2019-Jul-05, Tom Lane wrote:\n\n> FWIW, I don't think there's a need for every catversion on the back branch\n> to look older than any catversion on HEAD. The requirement so far as the\n> core code is concerned is only for non-equality. Now, extension code does\n> often do something like \"if catversion >= xxx\", but in practice they're\n> only concerned about numbers used by released versions.\n\npg_upgrade also uses >= catversion comparison for a couple of things. I\ndon't think it affects this case, but it's worth keeping in mind.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jul 2019 13:36:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: mcvstats serialization code is still shy of a load" } ]
[ { "msg_contents": "Hello all.\n\nFirst, I'd like to appreciate with all your reviewing and discussion in the last CommitFest[1].\n\nI don't think that the rest one of my proposals has been rejected completely, so I want to restart discussion.\n\nIt is a timeout parameter in interfaces/libpq.\n\nConsider some situations where some happening occurred a server and it became significant busy. e.g., what I and Tsunakawa-san have illustrated[2][3].\nThese server's bad condition(i.e., non-functional server) could cause clients' infinite waiting because it is not always possible for current timeout parameters in backend side to fire.\nUnder such server's bad condition, control should be passed to the client after a certain period of time, and just a timeout disconnection corresponds to it.\nAlso, in such situations the backend parameters may not work, so we need to implement the timeout parameters on the client side.\n\nIt is preferable to implement this parameter in PQwait() where clients can wait endlessly.\nHowever this can do unintended timeout when setting socket_timeout < statement_timeout(etc. 
some other timeout parameters).\nThus the documentation warns about this.\n\nFYI, a similar parameter socketTimeout is in pgJDBC[4].\nDo you have any thoughts?\n\nP.S.\nFabien-san, \nI'll build another thread and let's discuss there about \\c's taking care of connection parameters you have pointed out!\n\n[1] https://www.postgresql.org/message-id/flat/EDA4195584F5064680D8130B1CA91C45367328@G01JPEXMBYT04\n[2] https://www.postgresql.org/message-id/EDA4195584F5064680D8130B1CA91C45367328%40G01JPEXMBYT04\n[3] https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1FBC7561%40G01JPEXMBYT05\n[4] https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters\n\nBest regards,\n---------------------\nRyohei Nagaura", "msg_date": "Wed, 26 Jun 2019 04:13:36 +0000", "msg_from": "\"nagaura.ryohei@fujitsu.com\" <nagaura.ryohei@fujitsu.com>", "msg_from_op": true, "msg_subject": "[patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On Wed, Jun 26, 2019 at 04:13:36AM +0000, nagaura.ryohei@fujitsu.com wrote:\n> I don't think that the rest one of my proposals has been rejected\n> completely, so I want to restart discussion.\n\nI recall on the matter that there was consensus that nobody really\nliked this option because it enforced a cancellation on the\nconnection.\n--\nMichael", "msg_date": "Wed, 26 Jun 2019 13:23:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "Hi, Michael-san.\n\n> From: Michael Paquier <michael@paquier.xyz>\n> On Wed, Jun 26, 2019 at 04:13:36AM +0000, nagaura.ryohei@fujitsu.com wrote:\n> > I don't think that the rest one of my proposals has been rejected\n> > completely, so I want to restart discussion.\n> I recall on the matter that there was consensus that nobody really liked this option\n> because it enforced a cancellation on the connection.\nIt seems that you did not think so at that time.\n# Please
refer to [1]\n\nI don't think all the reviewers are completely negative.\nI think some couldn't judge because lack of what kind of problem I was going to solve and the way to solve it, so I restarted to describe them in this time.\n\n[1] https://www.postgresql.org/message-id/20190406065428.GA2145%40paquier.xyz\nBest regards,\n---------------------\nRyohei Nagaura\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2019 11:56:28 +0000", "msg_from": "\"nagaura.ryohei@fujitsu.com\" <nagaura.ryohei@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On Wed, Jun 26, 2019 at 11:56:28AM +0000, nagaura.ryohei@fujitsu.com wrote:\n> It seems that you did not think so at that time.\n> # Please refer to [1]\n> \n> I don't think all the reviewers are completely negative.\n\nI recall having a negative impression on the patch when first looking\nat it, and still have the same impression when looking at the last\nversion. Just with a quick look, assuming that you can bypass all\ncleanup operations normally taken by pqDropConnection() through a\nhijacking of pqWait() is not fine as this routine explicitely assumes\nto *never* have a timeout for its wait.\n--\nMichael", "msg_date": "Tue, 10 Sep 2019 15:38:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On Tue, Sep 10, 2019 at 03:38:21PM +0900, Michael Paquier wrote:\n> I recall having a negative impression on the patch when first looking\n> at it, and still have the same impression when looking at the last\n> version. Just with a quick look, assuming that you can bypass all\n> cleanup operations normally taken by pqDropConnection() through a\n> hijacking of pqWait() is not fine as this routine explicitely assumes\n> to *never* have a timeout for its wait.\n\nBy the way, Fabien, you are marked as a reviewer of this patch since\nthe end of June. 
Are you planning to review it?\n--\nMichael", "msg_date": "Wed, 11 Sep 2019 13:58:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "\n> By the way, Fabien, you are marked as a reviewer of this patch since the \n> end of June. Are you planning to review it?\n\nNot this round.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 11 Sep 2019 16:25:07 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On Wed, Sep 11, 2019 at 04:25:07PM +0200, Fabien COELHO wrote:\n> Not this round.\n\nYou have registered yourself as a reviewer of this patch since the end\nof June. Could you please avoid that? Sometimes people skip patches\nwhen they see someone already registered to review it.\n\nThe patch applies cleanly so I am moving it to next CF.\n\n(FWIW, I still have the same impression as upthread, looking again at\nthe patch, but let's see if there are other opinions.)\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 17:31:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "Michaël,\n\n>> Not this round.\n>\n> You have registered yourself as a reviewer of this patch since the end\n> of June. Could you please avoid that? Sometimes people skip patches\n> when they see someone already registered to review it.\n\nYep. ISTM that I did a few reviews on early versions of the patch, which \nwas really a set of 3 patches.\n\n> The patch applies cleanly so I am moving it to next CF.\n>\n> (FWIW, I still have the same impression as upthread, looking again at\n> the patch, but let's see if there are other opinions.)\n\nAFAICR, I was partly dissuaded to pursue reviews by your comment that \nsomehow the feature had no clear consensus, so I thought that the patch \nwas implicitly rejected.\n\nAlthough I work for free, I try to avoid working for nothing:-)\n\nIt is still unclear from your above comment whether the patch would ever \nget committed, so this does not motivate spending time on it.\n\n-- \nFabien.", "msg_date": "Wed, 27 Nov 2019 10:29:33 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "Hi, Michael-san.\n\nSorry, I have missed your e-mail...\n\n> From: Michael Paquier <michael@paquier.xyz>\n> On Wed, Jun 26, 2019 at 11:56:28AM +0000, nagaura.ryohei@fujitsu.com wrote:\n> > It seems that you did not think so at that time.\n> > # Please refer to [1]\n> >\n> > I don't think all the reviewers are completely negative.\n> \n> I recall having a negative impression on the patch when first looking at it, and still\n> have the same impression when looking at the last version.
Just with a quick\n> look, assuming that you can bypass all cleanup operations normally taken by\n> pqDropConnection() through a hijacking of pqWait() is not fine as this routine\n> explicitely assumes to *never* have a timeout for its wait.\nI couldn't understand what you meant.\nDo you say that we shouldn't change pqWait() behavior?\nOr should I modify my patch to use pqDropConnection()?\n\nBest regards,\n---------------------\nRyohei Nagaura\n\n\n\n\n", "msg_date": "Fri, 29 Nov 2019 05:22:01 +0000", "msg_from": "\"nagaura.ryohei@fujitsu.com\" <nagaura.ryohei@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On 11/29/19 12:22 AM, nagaura.ryohei@fujitsu.com wrote:\n> \n>> From: Michael Paquier <michael@paquier.xyz>\n>> On Wed, Jun 26, 2019 at 11:56:28AM +0000, nagaura.ryohei@fujitsu.com wrote:\n>>> It seems that you did not think so at that time.\n>>> # Please refer to [1]\n>>>\n>>> I don't think all the reviewers are completely negative.\n>>\n>> I recall having a negative impression on the patch when first looking at it, and still\n>> have the same impression when looking at the last version. 
Just with a quick\n>> look, assuming that you can bypass all cleanup operations normally taken by\n>> pqDropConnection() through a hijacking of pqWait() is not fine as this routine\n>> explicitely assumes to *never* have a timeout for its wait.\n >\n> I couldn't understand what you meant.\n> Do you say that we shouldn't change pqWait() behavior?\n> Or should I modify my patch to use pqDropConnection()?\n\nThis patch no longer applies: http://cfbot.cputube.org/patch_27_2175.log\n\nCF entry has been updated to Waiting on Author.\n\nMore importantly it looks like there is still no consensus on this \npatch, which is an uncommitted part of a previous patch [1].\n\nUnless somebody chimes in I'll mark this Returned with Feedback at the \nend of the CF.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/raw/20190406065428.GA2145%40paquier.xyz\n\n\n", "msg_date": "Tue, 24 Mar 2020 10:58:21 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in interfaces/libpq" }, { "msg_contents": "On 3/24/20 10:58 AM, David Steele wrote:\n> On 11/29/19 12:22 AM, nagaura.ryohei@fujitsu.com wrote:\n >\n>> I couldn't understand what you meant.\n>> Do you say that we shouldn't change pqWait() behavior?\n>> Or should I modify my patch to use pqDropConnection()?\n> \n> This patch no longer applies: http://cfbot.cputube.org/patch_27_2175.log\n> \n> CF entry has been updated to Waiting on Author.\n> \n> More importantly it looks like there is still no consensus on this \n> patch, which is an uncommitted part of a previous patch [1].\n> \n> Unless somebody chimes in I'll mark this Returned with Feedback at the \n> end of the CF.\n\nMarked Returned with Feedback.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 8 Apr 2020 08:39:05 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [patch]socket_timeout in 
interfaces/libpq" } ]
[ { "msg_contents": "Hello!\n\n\\c is followed by [-reuse-previous=on/off] positional syntax | conninfo\nUsing \\c -reuse-previous=on positional syntax or \\c positional syntax without -reuse-previous option, some parameters which were omitted or notated as \"-\" will be reused in the new connection.\nThe target is only \"dbname\", \"user\", \"host\", \"port\" in the current implementation.\n# details in [1]\nWhen we discussed in [2], this topic came out.\nAlthough I'm not heavy psql user, I want it to inherit connection parameters and Fabien-san does too.\n\nMy new specification:\n\\c inherits all the unspecified connection parameters in PQconninfoOptions in cases below.\na) \\c -reuse-previous=on positional syntax\nb) \\c positional syntax\nThis is just an expansion of the target of inheritance of parameters from the current specification.\n\nI have an item to talk about.\nIt is whether the message which indicates connection information to users has to have more information such as\nYou are now connected to database \"TomANDJelly\" as user \"nyannyan\" with some parameters [reused | defaults].\n# after \"nyannyan\"\n\nDo you have any thoughts?\n\n[1] https://www.postgresql.org/docs/12/app-psql.html\n[2] https://www.postgresql.org/message-id/flat/EDA4195584F5064680D8130B1CA91C45367328@G01JPEXMBYT04\nBest regards,\n---------------------\nRyohei Nagaura\n\n\n\n", "msg_date": "Wed, 26 Jun 2019 04:14:45 +0000", "msg_from": "\"nagaura.ryohei@fujitsu.com\" <nagaura.ryohei@fujitsu.com>", "msg_from_op": true, "msg_subject": "inheritance of connection paramters when using \\c" } ]
[ { "msg_contents": "Dear GSoD PostgreSQL Organization Administrators,\n\n \n\nI am a PhD Student in Istanbul Technical University, Computer Engineering.\nBefore the being full-time researcher in the laboratory, I was full-stack\nsoftware developer. Therefore, I am strongly familiar the SQL queries. Also,\nspecially, I used PostgreSQL in my school project naming \"Database Systems\"\nin the bachelor degree (in 2017). I am willing to contribute Introductory\nTutorial page in whole summer (if it is necessary, I can keep to enhance\ntutorial after summer). I am proposing tutorial like in Kubernetes web page\n(https://kubernetes.io/docs/home/) which is compact and in some ways\ninteractive. So I want to discuss to make interactive tutorial. Also I am\nstrong in HTML and CSS issues therefore changing web style without\ncorrupting existing architecture will not problem for us. \n\n \n\nIf you still need to this enhancement, we can keep in touch according to\nyour instructions. Otherwise, we can change the project with respect to your\nneeds. I am free in whole summer and I want to participate to the PostgreSQL\ndocumentation.\n\n \n\nThe project idea was suggested by Pavan Agrawal, James Chanco as I have seen\nin the Wiki page of PostgreSQL. However I could not sure about the correct\nemails. 
Therefore I am writing this email according to given mail list in\nthe GSoD web page and wiki page of the PostgreSQL.\n\n \n\nYou can find my CV:\nhttps://drive.google.com/file/d/1WJcW7E7gK-FWEXSlFF56Rd4KcogWrre_/view?usp=s\nharing\n\nWeb page: http://bcrg.itu.edu.tr/MemberDetail.aspx?id=9\n\n \n\nBest regards,\n\n \n\n \n\n\n\n\n\t\t\t\nElif Ak, PhD Student\n\n\n <mailto:akeli@itu.edu.tr> akeli@itu.edu.tr\n\n\nISTANBUL TECHNICAL UNIVERSITY\n\n\nCOMPUTER ENGINEERING\n\n\nCampus of ITU Ayazaga, Department of Computer Engineering 34469, Maslak\nSarıyer / İstanbul\n\n\n\nT.\n\n+905387062245", "msg_date": "Wed, 26 Jun 2019 12:06:14 +0300", "msg_from": "\"Elif Ak\" <akeli@itu.edu.tr>", "msg_from_op": true, "msg_subject": "GSoD Introductory Tutorial" } ]
[ { "msg_contents": "Hello\n\nThis is my first posting to hackers so sorry if I'm taking up valuable time.\n\nI'm currently migrating a packaged application which supported oracle and sql server to PostgreSQL.\n\nSomething that I've identified as hurting the performance a lot is loose index scanning. I don't have access to the application SQL , so all I can try and do is mitigate through indexes. There are ~4000 tables in the application schema, and ~6000 indices.\n\nSome plans are clearly worse than I would expect - because there are lots of index(a,b,c) and select where a= and c=.\n\nIn an attempt to see if the putative skip scan changes will be beneficial on our real world data sets, I've been attempting to build and run pgsql from github with the v20- patch applied.\n\nIf I build without the patch, I get a running server, and can execute whatever queries I want.\n\nIf I apply the latest patch (which says 1 of 2? - maybe I'm missing a part of the patch?), I apply with\n\n$ patch -p1 <../v20-0001-Index-skip-scan.patch\npatching file contrib/bloom/blutils.c\npatching file doc/src/sgml/config.sgml\npatching file doc/src/sgml/indexam.sgml\npatching file doc/src/sgml/indices.sgml\npatching file src/backend/access/brin/brin.c\npatching file src/backend/access/gin/ginutil.c\npatching file src/backend/access/gist/gist.c\npatching file src/backend/access/hash/hash.c\npatching file src/backend/access/index/indexam.c\npatching file src/backend/access/nbtree/nbtree.c\npatching file src/backend/access/nbtree/nbtsearch.c\npatching file src/backend/access/spgist/spgutils.c\npatching file src/backend/commands/explain.c\npatching file src/backend/executor/nodeIndexonlyscan.c\npatching file src/backend/executor/nodeIndexscan.c\npatching file src/backend/nodes/copyfuncs.c\npatching file src/backend/nodes/outfuncs.c\npatching file src/backend/nodes/readfuncs.c\npatching file src/backend/optimizer/path/costsize.c\npatching file src/backend/optimizer/path/pathkeys.c\npatching file 
src/backend/optimizer/plan/createplan.c\npatching file src/backend/optimizer/plan/planagg.c\npatching file src/backend/optimizer/plan/planner.c\npatching file src/backend/optimizer/util/pathnode.c\npatching file src/backend/optimizer/util/plancat.c\npatching file src/backend/utils/misc/guc.c\npatching file src/backend/utils/misc/postgresql.conf.sample\npatching file src/include/access/amapi.h\npatching file src/include/access/genam.h\npatching file src/include/access/nbtree.h\npatching file src/include/nodes/execnodes.h\npatching file src/include/nodes/pathnodes.h\npatching file src/include/nodes/plannodes.h\npatching file src/include/optimizer/cost.h\npatching file src/include/optimizer/pathnode.h\npatching file src/include/optimizer/paths.h\npatching file src/test/regress/expected/create_index.out\npatching file src/test/regress/expected/select_distinct.out\npatching file src/test/regress/expected/sysviews.out\npatching file src/test/regress/sql/create_index.sql\npatching file src/test/regress/sql/select_distinct.sql\n\nThis will 'make' and 'make install' cleanly.\n\nWhen I run the server, I can log in but the postgres processes associated with my psql session crashes SIGSEGV in many cases, for example when using \\d:\n\npsql (12beta2)\nType \"help\" for help.\n\ndb1=> show enable_indexskipscan;\nenable_indexskipscan\n----------------------\non\n(1 row)\n\ndb1=> \\d\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!> \\q\n\nI got a backtrace out of the process:\n\n(gdb) backtrace\n#0 MemoryContextAllocZeroAligned (context=0x0, size=size@entry=80) at mcxt.c:864\n#1 0x000000000067d2d4 in get_eclass_for_sort_expr (root=root@entry=0x22ecb10, expr=expr@entry=0x22ee280, nullable_relids=nullable_relids@entry=0x0, opfamilies=0x22ff530,\n opcintype=opcintype@entry=19, collation=collation@entry=950, sortref=<optimized out>, rel=0x0, create_it=true) at equivclass.c:704\n#2 0x0000000000686d9e in make_pathkey_from_sortinfo (root=root@entry=0x22ecb10, expr=expr@entry=0x22ee280, nullable_relids=nullable_relids@entry=0x0, opfamily=1994, opcintype=19,\n collation=950, reverse_sort=false, nulls_first=false, sortref=1, rel=0x0, create_it=true) at pathkeys.c:228\n#3 0x0000000000686eb7 in make_pathkey_from_sortop (root=root@entry=0x22ecb10, expr=0x22ee280, nullable_relids=0x0, ordering_op=660, nulls_first=<optimized out>, sortref=1,\n create_it=true) at pathkeys.c:271\n#4 0x0000000000687a4a in make_pathkeys_for_sortclauses (root=root@entry=0x22ecb10, sortclauses=<optimized out>, tlist=tlist@entry=0x22ee2f0) at pathkeys.c:1099\n#5 0x0000000000694588 in standard_qp_callback (root=0x22ecb10, extra=<optimized out>) at planner.c:3635\n#6 0x0000000000693024 in query_planner (root=root@entry=0x22ecb10, qp_callback=qp_callback@entry=0x6944e0 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffe6fe2b8e0)\n at planmain.c:207\n#7 0x00000000006970e0 in grouping_planner (root=root@entry=0x22ecb10, inheritance_update=inheritance_update@entry=false, tuple_fraction=<optimized out>, tuple_fraction@entry=0)\n at planner.c:2048\n#8 0x000000000069978d in subquery_planner (glob=glob@entry=0x22e43c0, parse=parse@entry=0x22e3f30, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false,\n tuple_fraction=tuple_fraction@entry=0) at planner.c:1012\n#9 0x000000000069a7b6 in standard_planner (parse=0x22e3f30, cursorOptions=256, boundParams=<optimized out>) at planner.c:406\n#10 
0x000000000073ceac in pg_plan_query (querytree=querytree@entry=0x22e3f30, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:878\n#11 0x000000000073cf86 in pg_plan_queries (querytrees=<optimized out>, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:968\n#12 0x000000000073d399 in exec_simple_query (\n    query_string=0x222a9a0 \"SELECT n.nspname as \\\"Schema\\\",\\n  c.relname as \\\"Name\\\",\\n  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN '\"...) at postgres.c:1143\n#13 0x000000000073ef5a in PostgresMain (argc=<optimized out>, argv=argv@entry=0x2255440, dbname=<optimized out>, username=<optimized out>) at postgres.c:4249\n#14 0x00000000006cfaf6 in BackendRun (port=0x224e220, port=0x224e220) at postmaster.c:4431\n#15 BackendStartup (port=0x224e220) at postmaster.c:4122\n#16 ServerLoop () at postmaster.c:1704\n#17 0x00000000006d09d0 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x2224c50) at postmaster.c:1377\n#18 0x00000000004820c4 in main (argc=3, argv=0x2224c50) at main.c:228\n\nWith the skip scan v20 patch installed, I tried to set enable_indexskipscan=off, but this did not resolve the crash.\n\nAlso, if I try to explain a select:\n\ndb1=> explain analyze select cmpy,bal,fncl_val from t1 where cmpy='04' and bal='CO';\npsql: ERROR: variable not found in subplan target list\n\nI'm quite prepared to be told I've miscompiled or missed something really obvious and for that I apologize in advance. I'm really keen to get involved with testing this patch/feature ahead of release.\n", "msg_date": "Wed, 26 Jun 2019 11:52:47 +0000", "msg_from": "pguser <pguser@diorite.uk>", "msg_from_op": true, "msg_subject": "Index Skip Scan - attempting to evalutate patch" }, { "msg_contents": "> On Wed, Jun 26, 2019 at 1:53 PM pguser <pguser@diorite.uk> wrote:\n>\n> If I apply the latest patch (which says 1 of 2?
- maybe I'm missing a part of the patch?), I apply with\n\nHi,\n\nFirst of all, thanks for evaluation!\n\n> psql (12beta2)\n> Type \"help\" for help.\n>\n> db1=> show enable_indexskipscan;\n> enable_indexskipscan\n> ----------------------\n> on\n> (1 row)\n>\n> db1=> \\d\n> psql: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !> \\q\n>\n>\n> I got a backtrace out of the process:\n>\n> (gdb) backtrace\n> #0 MemoryContextAllocZeroAligned (context=0x0, size=size@entry=80) at mcxt.c:864\n> #1 0x000000000067d2d4 in get_eclass_for_sort_expr (root=root@entry=0x22ecb10, expr=expr@entry=0x22ee280, nullable_relids=nullable_relids@entry=0x0, opfamilies=0x22ff530,\n> opcintype=opcintype@entry=19, collation=collation@entry=950, sortref=<optimized out>, rel=0x0, create_it=true) at equivclass.c:704\n> #2 0x0000000000686d9e in make_pathkey_from_sortinfo (root=root@entry=0x22ecb10, expr=expr@entry=0x22ee280, nullable_relids=nullable_relids@entry=0x0, opfamily=1994, opcintype=19,\n> collation=950, reverse_sort=false, nulls_first=false, sortref=1, rel=0x0, create_it=true) at pathkeys.c:228\n> #3 0x0000000000686eb7 in make_pathkey_from_sortop (root=root@entry=0x22ecb10, expr=0x22ee280, nullable_relids=0x0, ordering_op=660, nulls_first=<optimized out>, sortref=1,\n> create_it=true) at pathkeys.c:271\n> #4 0x0000000000687a4a in make_pathkeys_for_sortclauses (root=root@entry=0x22ecb10, sortclauses=<optimized out>, tlist=tlist@entry=0x22ee2f0) at pathkeys.c:1099\n> #5 0x0000000000694588 in standard_qp_callback (root=0x22ecb10, extra=<optimized out>) at planner.c:3635\n> #6 0x0000000000693024 in query_planner (root=root@entry=0x22ecb10, qp_callback=qp_callback@entry=0x6944e0 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffe6fe2b8e0)\n> at planmain.c:207\n> #7 0x00000000006970e0 in grouping_planner 
(root=root@entry=0x22ecb10, inheritance_update=inheritance_update@entry=false, tuple_fraction=<optimized out>, tuple_fraction@entry=0)\n> at planner.c:2048\n> #8 0x000000000069978d in subquery_planner (glob=glob@entry=0x22e43c0, parse=parse@entry=0x22e3f30, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false,\n> tuple_fraction=tuple_fraction@entry=0) at planner.c:1012\n> #9 0x000000000069a7b6 in standard_planner (parse=0x22e3f30, cursorOptions=256, boundParams=<optimized out>) at planner.c:406\n> #10 0x000000000073ceac in pg_plan_query (querytree=querytree@entry=0x22e3f30, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:878\n> #11 0x000000000073cf86 in pg_plan_queries (querytrees=<optimized out>, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:968\n> #12 0x000000000073d399 in exec_simple_query (\n> query_string=0x222a9a0 \"SELECT n.nspname as \\\"Schema\\\",\\n c.relname as \\\"Name\\\",\\n CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN '\"...) 
at postgres.c:1143\n> #13 0x000000000073ef5a in PostgresMain (argc=<optimized out>, argv=argv@entry=0x2255440, dbname=<optimized out>, username=<optimized out>) at postgres.c:4249\n> #14 0x00000000006cfaf6 in BackendRun (port=0x224e220, port=0x224e220) at postmaster.c:4431\n> #15 BackendStartup (port=0x224e220) at postmaster.c:4122\n> #16 ServerLoop () at postmaster.c:1704\n> #17 0x00000000006d09d0 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x2224c50) at postmaster.c:1377\n> #18 0x00000000004820c4 in main (argc=3, argv=0x2224c50) at main.c:228\n\nCould you by any change provide also relations schema that were supposed to be\ndescribed by this command?\n\n\n", "msg_date": "Wed, 26 Jun 2019 14:07:37 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Index Skip Scan - attempting to evalutate patch" }, { "msg_contents": "\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, June 26, 2019 1:07 PM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Wed, Jun 26, 2019 at 1:53 PM pguser pguser@diorite.uk wrote:\n> > If I apply the latest patch (which says 1 of 2? - maybe I'm missing a part of the patch?), I apply with\n>\n> Hi,\n>\n> First of all, thanks for evaluation!\n>\n\nNo problem. I'd like to get involved in helping this patch mature as I think that we're suffering in a few areas of performance due to this.\n\n> Could you by any change provide also relations schema that were supposed to be\n> described by this command?\n\nOkay for now, it's not much. 
I get the issue of the SIGSEGV on a brand new database with only one relation:\n\nThis is with the 12beta2 as compiled from git sources by me:\n\npsql (12beta2)\nType \"help\" for help.\n\n\ndb2=> \\d\n List of relations\n Schema | Name | Type | Owner\n--------+------+-------+-------\n e5 | t1 | table | e5\n(1 row)\n\ndb2=> \\d t1\n Table \"e5.t1\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n n1 | smallint | | |\n n2 | smallint | | |\n c1 | character varying | | |\n c2 | character varying | | |\nIndexes:\n \"i1\" btree (n1, n2, c1)\n\n\nAnd with patch 20 applied:\n\npsql (12beta2)\nType \"help\" for help.\n\ndb2=> \\d\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!> \\q\n[postgres@ip-172-31-33-89 ~]$ . sql2\npsql (12beta2)\nType \"help\" for help.\n\ndb2=> \\d t1\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!> \\q\n\n\nIn fact, if I do:\n\ncreatedb db3\npsql -d db3\ndb3=# \\d\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\nI get this on empty database with no relations yet defined.\n\nI feel I have done something silly or missed something when applying patch\n\n\n", "msg_date": "Wed, 26 Jun 2019 13:55:22 +0000", "msg_from": "pguser <pguser@diorite.uk>", "msg_from_op": true, "msg_subject": "Re: Index Skip Scan - attempting to evalutate patch" }, { "msg_contents": "\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, June 26, 2019 2:55 PM, pguser <pguser@diorite.uk> wrote:\n\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Wednesday, June 26, 2019 1:07 PM, Dmitry Dolgov 9erthalion6@gmail.com wrote:\n>\n> > > On Wed, Jun 26, 2019 at 1:53 PM pguser pguser@diorite.uk wrote:\n> > > If I apply the latest patch (which says 1 of 2? - maybe I'm missing a part of the patch?), I apply with\n> >\n> > Hi,\n> > First of all, thanks for evaluation!\n>\n> No problem. I'd like to get involved in helping this patch mature as I think that we're suffering in a few areas of performance due to this.\n>\n> > Could you by any change provide also relations schema that were supposed to be\n> > described by this command?\n>\n> Okay for now, it's not much. 
I get the issue of the SIGSEGV on a brand new database with only one relation:\n>\n> This is with the 12beta2 as compiled from git sources by me:\n>\n> psql (12beta2)\n> Type \"help\" for help.\n>\n> db2=> \\d\n>\n> List of relations\n>\n>\n> Schema | Name | Type | Owner\n> --------+------+-------+-------\n> e5 | t1 | table | e5\n> (1 row)\n>\n> db2=> \\d t1\n>\n> Table \"e5.t1\"\n>\n>\n> Column | Type | Collation | Nullable | Default\n> --------+-------------------+-----------+----------+---------\n> n1 | smallint | | |\n> n2 | smallint | | |\n> c1 | character varying | | |\n> c2 | character varying | | |\n> Indexes:\n> \"i1\" btree (n1, n2, c1)\n>\n> And with patch 20 applied:\n>\n> psql (12beta2)\n> Type \"help\" for help.\n>\n> db2=> \\d\n> psql: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !> \\q\n> [postgres@ip-172-31-33-89 ~]$ . sql2\n> psql (12beta2)\n> Type \"help\" for help.\n>\n> db2=> \\d t1\n> psql: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !> \\q\n>\n> In fact, if I do:\n>\n> createdb db3\n> psql -d db3\n> db3=# \\d\n> psql: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n>\n> I get this on empty database with no relations yet defined.\n>\n> I feel I have done something silly or missed something when applying patch\n\n\nI find that my patched installation can't create its own initdb either:\n\ninitdb -D /pgd2\nThe files belonging to this database system will be owned by user \"postgres\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"en_US.UTF-8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\n\nfixing permissions on existing directory /pgd2 ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... posix\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default timezone ... UTC\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... 
2019-06-26 14:05:47.807 UTC [8120] FATAL: could not open file \"base/1/2663.1\" (target block 17353008): previous segment is only 4 blocks at character 65\n2019-06-26 14:05:47.807 UTC [8120] STATEMENT: INSERT INTO pg_shdepend SELECT 0,0,0,0, tableoid,oid, 'p' FROM pg_authid;\n\nchild process exited with exit code 1\ninitdb: removing contents of data directory \"/pgd2\"\n\n\nI was hoping to share the pgdata between 12beta2 without patch, and 12beta2 with patch, for ease of side by side comparison.\n\nEven more I feel that I'm missing something more than just this 20 patch from the Index Skip Scan thread.\n\n\n", "msg_date": "Wed, 26 Jun 2019 14:12:55 +0000", "msg_from": "pguser <pguser@diorite.uk>", "msg_from_op": true, "msg_subject": "Re: Index Skip Scan - attempting to evalutate patch" }, { "msg_contents": "On Wed, Jun 26, 2019 at 02:12:55PM +0000, pguser wrote:\n>\n>‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>On Wednesday, June 26, 2019 2:55 PM, pguser <pguser@diorite.uk> wrote:\n>\n>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>> On Wednesday, June 26, 2019 1:07 PM, Dmitry Dolgov 9erthalion6@gmail.com wrote:\n>>\n>> > > On Wed, Jun 26, 2019 at 1:53 PM pguser pguser@diorite.uk wrote:\n>> > > If I apply the latest patch (which says 1 of 2? - maybe I'm missing a part of the patch?), I apply with\n>> >\n>> > Hi,\n>> > First of all, thanks for evaluation!\n>>\n>> No problem. I'd like to get involved in helping this patch mature as I think that we're suffering in a few areas of performance due to this.\n>>\n>> > Could you by any change provide also relations schema that were supposed to be\n>> > described by this command?\n>>\n>> Okay for now, it's not much. 
I get the issue of the SIGSEGV on a brand new database with only one relation:\n>>\n>> This is with the 12beta2 as compiled from git sources by me:\n>>\n>> psql (12beta2)\n>> Type \"help\" for help.\n>>\n>> db2=> \\d\n>>\n>> List of relations\n>>\n>>\n>> Schema | Name | Type | Owner\n>> --------+------+-------+-------\n>> e5 | t1 | table | e5\n>> (1 row)\n>>\n>> db2=> \\d t1\n>>\n>> Table \"e5.t1\"\n>>\n>>\n>> Column | Type | Collation | Nullable | Default\n>> --------+-------------------+-----------+----------+---------\n>> n1 | smallint | | |\n>> n2 | smallint | | |\n>> c1 | character varying | | |\n>> c2 | character varying | | |\n>> Indexes:\n>> \"i1\" btree (n1, n2, c1)\n>>\n>> And with patch 20 applied:\n>>\n>> psql (12beta2)\n>> Type \"help\" for help.\n>>\n>> db2=> \\d\n>> psql: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Failed.\n>> !> \\q\n>> [postgres@ip-172-31-33-89 ~]$ . sql2\n>> psql (12beta2)\n>> Type \"help\" for help.\n>>\n>> db2=> \\d t1\n>> psql: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Failed.\n>> !> \\q\n>>\n>> In fact, if I do:\n>>\n>> createdb db3\n>> psql -d db3\n>> db3=# \\d\n>> psql: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. 
Attempting reset: Failed.\n>>\n>> I get this on empty database with no relations yet defined.\n>>\n>> I feel I have done something silly or missed something when applying patch\n>\n>\n>I find that my patched installation can't create its own initdb either:\n>\n>initdb -D /pgd2\n>The files belonging to this database system will be owned by user \"postgres\".\n>This user must also own the server process.\n>\n>The database cluster will be initialized with locale \"en_US.UTF-8\".\n>The default database encoding has accordingly been set to \"UTF8\".\n>The default text search configuration will be set to \"english\".\n>\n>Data page checksums are disabled.\n>\n>fixing permissions on existing directory /pgd2 ... ok\n>creating subdirectories ... ok\n>selecting dynamic shared memory implementation ... posix\n>selecting default max_connections ... 100\n>selecting default shared_buffers ... 128MB\n>selecting default timezone ... UTC\n>creating configuration files ... ok\n>running bootstrap script ... ok\n>performing post-bootstrap initialization ... 2019-06-26 14:05:47.807 UTC [8120] FATAL: could not open file \"base/1/2663.1\" (target block 17353008): previous segment is only 4 blocks at character 65\n>2019-06-26 14:05:47.807 UTC [8120] STATEMENT: INSERT INTO pg_shdepend SELECT 0,0,0,0, tableoid,oid, 'p' FROM pg_authid;\n>\n>child process exited with exit code 1\n>initdb: removing contents of data directory \"/pgd2\"\n>\n\nWell, there's something seriously wrong with your build or environment,\nthen. I've tried reproducing the issue, but it works just fine for me\n(initdb, psql, ...).\n\n>\n>I was hoping to share the pgdata between 12beta2 without patch, and\n>12beta2 with patch, for ease of side by side comparison.\n>\n\nThat might be dangerous, if there may be differences in contents of\ncatalogs. I don't think the patch does that though, and for me it works\njust fine. I can initdb database using current master, create table +\nindexes, do \\d. 
And I can do that with the patch applied too.\n\n>Even more I feel that I'm missing something more than just this 20 patch\n>from the Index Skip Scan thread.\n>\n\nAre you sure this is not some sort of OOM issue? That might also\ndemonstrate as a segfault, in various cases.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 26 Jun 2019 17:07:14 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Index Skip Scan - attempting to evalutate patch" }, { "msg_contents": "\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, June 26, 2019 4:07 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> That might be dangerous, if there may be differences in contents of\n> catalogs. I don't think the patch does that though, and for me it works\n> just fine. I can initdb database using current master, create table +\n> indexes, do \\d. And I can do that with the patch applied too.\n>\n\n\nWell, this is embarrassing.\n\nI repeated all my steps again on my development laptop (Fedora 30, GCC 9.1.1, glibc 2.29.15) and it all works (doesn't segfault, can initdb).\n\nOn my Amazon Linux EC2 , (gcc 7.3.1, glibc 2.6.32) it exhibits fault on patched version.\n\nSame steps, same sources.\n\nGot to be build tools/version related on my EC2 instance.\n\nDarn it. Sorry for wasting your time, I will continue to evaluate patch, and be mindful that something, somewhere is sensitive to build tools versions or lib versions.\n\nMany regards\n\n\n\n\n", "msg_date": "Wed, 26 Jun 2019 15:47:48 +0000", "msg_from": "pguser <pguser@diorite.uk>", "msg_from_op": true, "msg_subject": "Re: Index Skip Scan - attempting to evalutate patch" } ]
[ { "msg_contents": "Hi all,\n\nWe have been using RAND_OpenSSL(), a function new as of OpenSSL 1.1.0\nin pgcrypto until fe0a0b5 which has removed the last traces of the\nfunction in the tree. We still have a configure check for it and the\nrelated compilation flag in pg_config.h.in, and both are now useless.\n\nAny objections to the cleanup done in the attached patch?\n\nThanks,\n--\nMichael", "msg_date": "Wed, 26 Jun 2019 23:25:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Useless configure checks for RAND_OpenSSL (HAVE_RAND_OPENSSL) " }, { "msg_contents": "> On 26 Jun 2019, at 16:25, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Any objections to the cleanup done in the attached patch?\n\nNone, LGTM.\n\ncheers ./daniel\n\n\n", "msg_date": "Wed, 26 Jun 2019 16:35:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Useless configure checks for RAND_OpenSSL (HAVE_RAND_OPENSSL) " }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> We have been using RAND_OpenSSL(), a function new as of OpenSSL 1.1.0\n> in pgcrypto until fe0a0b5 which has removed the last traces of the\n> function in the tree. We still have a configure check for it and the\n> related compilation flag in pg_config.h.in, and both are now useless.\n\n> Any objections to the cleanup done in the attached patch?\n\n+1, fewer configure checks always better. 
I don't see any other\nreferences to RAND_OpenSSL either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2019 10:40:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Useless configure checks for RAND_OpenSSL (HAVE_RAND_OPENSSL)" }, { "msg_contents": "On Wed, Jun 26, 2019 at 04:35:43PM +0200, Daniel Gustafsson wrote:\n> None, LGTM.\n\nThanks, committed.\n--\nMichael", "msg_date": "Thu, 27 Jun 2019 08:36:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Useless configure checks for RAND_OpenSSL (HAVE_RAND_OPENSSL)" } ]
[ { "msg_contents": "Hi\n\nWe are facing an issue where the cost shows up \"cost=10000000000.00\" so\ntrying to find where that is set.\n\nCould anyone point me to the code where the cost \"cost=10000000000.00\" is\nset?\n\nIt doesn't show up when searching through find.\n\nfind . -type f -print | xargs grep 10000000000 | grep -v \"/test/\" | grep -v\n\"/btree_gist/\"\n./postgresql-9.6.12/contrib/pgcrypto/sql/blowfish.sql:decode('1000000000000001',\n'hex'),\n./postgresql-9.6.12/contrib/pgcrypto/expected/blowfish.out:decode('1000000000000001',\n'hex'),\n./postgresql-9.6.12/doc/src/sgml/html/functions-admin.html:\n00000001000000000000000D | 4039624\n./postgresql-9.6.12/doc/src/sgml/html/wal-internals.html:>000000010000000000000000</TT\n./postgresql-9.6.12/doc/src/sgml/func.sgml: 00000001000000000000000D |\n4039624\n./postgresql-9.6.12/doc/src/sgml/wal.sgml:\n<filename>000000010000000000000000</filename>. The numbers do not wrap,\n./postgresql-9.6.12/src/bin/pg_archivecleanup/pg_archivecleanup.c: *\n000000010000000000000010.partial and\n./postgresql-9.6.12/src/bin/pg_archivecleanup/pg_archivecleanup.c: *\n000000010000000000000010.00000020.backup are after\n./postgresql-9.6.12/src/bin/pg_archivecleanup/pg_archivecleanup.c: *\n000000010000000000000010.\n./postgresql-9.6.12/src/bin/pg_archivecleanup/pg_archivecleanup.c:\n \" pg_archivecleanup /mnt/server/archiverdir\n000000010000000000000010.00000020.backup\\n\");\n./postgresql-9.6.12/src/backend/utils/adt/numeric.c: * For input like\n10000000000, we must treat stripped digits as real. So\n./postgresql-9.6.12/src/backend/utils/adt/numeric.c: * For input like\n10000000000, we must treat stripped digits as real. 
So\n./postgresql-9.6.12/src/backend/utils/adt/cash.c: m4 = (val /\nINT64CONST(100000000000)) % 1000; /* billions */\n./postgresql-9.6.12/src/backend/utils/adt/cash.c: m5 = (val /\nINT64CONST(100000000000000)) % 1000; /* trillions */\n./postgresql-9.6.12/src/backend/utils/adt/cash.c: m6 = (val /\nINT64CONST(100000000000000000)) % 1000; /* quadrillions */\n./postgresql-9.6.12/src/backend/utils/adt/date.c:\n10000000000.0\n./postgresql-9.6.12/src/include/utils/date.h:#define TIME_PREC_INV\n10000000000.0\n\n\nThanks\n", "msg_date": "Wed, 26 Jun 2019 20:49:08 -0700", "msg_from": "AminPG Jaffer <aminjaffer.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Cost and execution plan" }, { "msg_contents": "AminPG Jaffer <aminjaffer.pg@gmail.com> writes:\n> We are facing an issue where the cost shows up \"cost=10000000000.00\" so\n> trying to find where that is set.\n\nThat means you've turned off enable_seqscan (or one of its siblings)\nbut the planner is choosing a seqscan plan (or other plan type you\ntried to disable) anyway because it has no other alternative.\n\n> Could anyone point me to the code where the cost \"cost=10000000000.00\" is\n> set?\n\nLook for 
disable_cost.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 00:03:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cost and execution plan" } ]
[ { "msg_contents": "Deja vu from this time last year when despite everyone's best efforts\n(mostly Alvaro) we missed getting run-time pruning in for MergeAppend\ninto the PG11 release. This year it was ModifyTable, which is now\npossible thanks to Amit and Tom's modifications to the inheritance\nplanner.\n\nI've attached what I have so far for this. I think it's mostly okay,\nbut my brain was overheating a bit at the inheritance_planner changes.\nI'm not entirely certain that what I've got is correct there. My brain\nstruggled a bit with the code that Tom wrote to share the data\nstructures from the SELECT invocation of the grouping_planner() in\ninheritance_planner() regarding subquery RTEs. I had to pull out some\nmore structures from the other PlannerInfo structure in order to get\nthe base quals from the target rel. I don't quite see a reason why\nit's particularly wrong to tag those onto the final_rel, but I'll\nprepare myself to be told that I'm wrong about that.\n\nI'm not particularly happy about having to have written the\nIS_DUMMY_MODIFYTABLE macro. I just didn't see a more simple way to\ndetermine if the ModifyTable just contains a single dummy Append path.\n\nI also had to change the ModifyTable resultRelInfo pointer to an array\nof pointers. This seems to be required since we need to somehow ignore\nResultRelInfos which were pruned. 
I didn't do any performance testing\nfor the added level of indirection, I just imagined that it's\nunmeasurable.\n\nI'll include this in for July 'fest.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 27 Jun 2019 17:28:08 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Run-time pruning for ModifyTable" }, { "msg_contents": "Hi David,\n\nOn Thu, Jun 27, 2019 at 2:28 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> Deja vu from this time last year when despite everyone's best efforts\n> (mostly Alvaro) we missed getting run-time pruning in for MergeAppend\n> into the PG11 release. This year it was ModifyTable, which is now\n> possible thanks to Amit and Tom's modifications to the inheritance\n> planner.\n>\n> I've attached what I have so far for this.\n\nThanks for working on this. IIUC, the feature is to skip modifying a\ngiven result relation if run-time pruning dictates that none of its\nexisting rows will match some dynamically computable quals.\n\n> I think it's mostly okay,\n> but my brain was overheating a bit at the inheritance_planner changes.\n\nI think we need to consider the fact that there is a proposal [1] to\nget rid of inheritance_planner() as the way of planning UPDATE/DELETEs\non inheritance trees. If we go that route, then a given partitioned\ntarget table will be expanded at the bottom and so, there's no need\nfor ModifyTable to have its own run-time pruning info, because\nAppend/MergeAppend will have it. Maybe, we will need some code in\nExecInitModifyTable() and ExecModifyTable() to handle the case where\nrun-time pruning, during plan tree initialization and plan tree\nexecution respectively, may have rendered modifying a given result\nrelation unnecessary.\n\nA cursory look at the patch suggests that most of its changes will be\nfor nothing if [1] materializes. 
What do you think about that?\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/357.1550612935%40sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 3 Jul 2019 14:27:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Wed, 3 Jul 2019 at 17:27, Amit Langote <amitlangote09@gmail.com> wrote:\n> A cursory look at the patch suggests that most of its changes will be\n> for nothing if [1] materializes. What do you think about that?\n\nYeah, I had this in mind when writing the patch, but kept going\nanyway. I think it's only really a small patch of this patch that\nwould get wiped out with that change. Just the planner.c stuff.\nEverything else is still required, as far as I understand.\n\n> [1] https://www.postgresql.org/message-id/357.1550612935%40sss.pgh.pa.us\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 3 Jul 2019 19:34:03 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Wed, Jul 3, 2019 at 4:34 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> On Wed, 3 Jul 2019 at 17:27, Amit Langote <amitlangote09@gmail.com> wrote:\n> > A cursory look at the patch suggests that most of its changes will be\n> > for nothing if [1] materializes. What do you think about that?\n>\n> Yeah, I had this in mind when writing the patch, but kept going\n> anyway. I think it's only really a small patch of this patch that\n> would get wiped out with that change. Just the planner.c stuff.\n> Everything else is still required, as far as I understand.\n\nIf I understand the details of [1] correctly, ModifyTable will no\nlonger have N subplans for N result relations as there are today. 
So,\nit doesn't make sense for ModifyTable to contain\nPartitionedRelPruneInfos and for ExecInitModifyTable/ExecModifyTable\nto have to perform initial and execution-time pruning, respectively.\nAs I said, bottom expansion of target inheritance will mean pruning\n(both plan-time and run-time) will occur at the bottom too, so the\nrun-time pruning capabilities of nodes that already have it will be\nused for UPDATE and DELETE too.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 3 Jul 2019 17:40:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "Hi, Amit\r\n\r\n> If I understand the details of [1] correctly, ModifyTable will no longer\r\n> have N subplans for N result relations as there are today. So, it doesn't\r\n> make sense for ModifyTable to contain PartitionedRelPruneInfos and for\r\n> ExecInitModifyTable/ExecModifyTable\r\n> to have to perform initial and execution-time pruning, respectively.\r\n\r\nDoes this mean that the generic plan will not have N subplans for N result relations?\r\nI thought [1] would make creating generic plans faster, but is this correct?\r\n\r\nregards,\r\n\r\nkato sho\r\n> -----Original Message-----\r\n> From: Amit Langote [mailto:amitlangote09@gmail.com]\r\n> Sent: Wednesday, July 3, 2019 5:41 PM\r\n> To: David Rowley <david.rowley@2ndquadrant.com>\r\n> Cc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\n> Subject: Re: Run-time pruning for ModifyTable\r\n> \r\n> On Wed, Jul 3, 2019 at 4:34 PM David Rowley <david.rowley@2ndquadrant.com>\r\n> wrote:\r\n> > On Wed, 3 Jul 2019 at 17:27, Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > > A cursory look at the patch suggests that most of its changes will\r\n> > > be for nothing if [1] materializes. What do you think about that?\r\n> >\r\n> > Yeah, I had this in mind when writing the patch, but kept going\r\n> > anyway. 
I think it's only really a small patch of this patch that\r\n> > would get wiped out with that change. Just the planner.c stuff.\r\n> > Everything else is still required, as far as I understand.\r\n> \r\n> If I understand the details of [1] correctly, ModifyTable will no longer\r\n> have N subplans for N result relations as there are today. So, it doesn't\r\n> make sense for ModifyTable to contain PartitionedRelPruneInfos and for\r\n> ExecInitModifyTable/ExecModifyTable\r\n> to have to perform initial and execution-time pruning, respectively.\r\n> As I said, bottom expansion of target inheritance will mean pruning (both\r\n> plan-time and run-time) will occur at the bottom too, so the run-time\r\n> pruning capabilities of nodes that already have it will be used for UPDATE\r\n> and DELETE too.\r\n> \r\n> Thanks,\r\n> Amit\r\n> \r\n\r\n", "msg_date": "Thu, 4 Jul 2019 04:40:44 +0000", "msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Run-time pruning for ModifyTable" }, { "msg_contents": "Kato-san,\n\nOn Thu, Jul 4, 2019 at 1:40 PM Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\n> > If I understand the details of [1] correctly, ModifyTable will no longer\n> > have N subplans for N result relations as there are today. So, it doesn't\n> > make sense for ModifyTable to contain PartitionedRelPruneInfos and for\n> > ExecInitModifyTable/ExecModifyTable\n> > to have to perform initial and execution-time pruning, respectively.\n>\n> Does this mean that the generic plan will not have N subplans for N result relations?\n> I thought [1] would make creating generic plans faster, but is this correct?\n\nYeah, making a generic plan for UPDATE of inheritance tables will\ncertainly become faster, because we will no longer plan the same query\nN times for N child tables. There will still be N result relations\nbut only one sub-plan to fetch the rows from. 
Also, planning will\nstill cost O(N), but with a much smaller constant factor.\n\nBy the way, let's keep any further discussion on this particular topic\nin the other thread.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 8 Jul 2019 11:33:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Monday, July 8, 2019 11:34 AM, Amit Langote wrote:\r\n> By the way, let's keep any further discussion on this particular topic\r\n> in the other thread.\r\n\r\nThanks for the details. I got it.\r\n\r\nRegards,\r\nKato sho\r\n> -----Original Message-----\r\n> From: Amit Langote [mailto:amitlangote09@gmail.com]\r\n> Sent: Monday, July 8, 2019 11:34 AM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\r\n> Cc: David Rowley <david.rowley@2ndquadrant.com>; PostgreSQL Hackers\r\n> <pgsql-hackers@lists.postgresql.org>\r\n> Subject: Re: Run-time pruning for ModifyTable\r\n> \r\n> Kato-san,\r\n> \r\n> On Thu, Jul 4, 2019 at 1:40 PM Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\r\n> > > If I understand the details of [1] correctly, ModifyTable will no\r\n> > > longer have N subplans for N result relations as there are today.\r\n> > > So, it doesn't make sense for ModifyTable to contain\r\n> > > PartitionedRelPruneInfos and for\r\n> ExecInitModifyTable/ExecModifyTable\r\n> > > to have to perform initial and execution-time pruning, respectively.\r\n> >\r\n> > Does this mean that the generic plan will not have N subplans for N\r\n> result relations?\r\n> > I thought [1] would make creating generic plans faster, but is this\r\n> correct?\r\n> \r\n> Yeah, making a generic plan for UPDATE of inheritance tables will\r\n> certainly become faster, because we will no longer plan the same query\r\n> N times for N child tables. There will still be N result relations but\r\n> only one sub-plan to fetch the rows from. 
Also, planning will still cost\r\n> O(N), but with a much smaller constant factor.\r\n> \r\n> By the way, let's keep any further discussion on this particular topic\r\n> in the other thread.\r\n> \r\n> Thanks,\r\n> Amit\r\n", "msg_date": "Mon, 8 Jul 2019 03:56:03 +0000", "msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Run-time pruning for ModifyTable" }, { "msg_contents": "Here's a rebased version of this patch (it had a trivial conflict).\nNo further changes.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 11 Sep 2019 19:10:28 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Thu, Sep 12, 2019 at 10:10 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Here's a rebased version of this patch (it had a trivial conflict).\n\nHi, FYI partition_prune.sql currently fails (maybe something to do\nwith commit d52eaa09?):\n\n where s.a = $1 and s.b = $2 and s.c = (select 1);\n explain (costs off) execute q (1, 1);\n QUERY PLAN\n----------------------------------------------------------------\n+----------------------------------------------------\n Append\n InitPlan 1 (returns $0)\n -> Result\n- Subplans Removed: 1\n -> Seq Scan on p1\n- Filter: ((a = $1) AND (b = $2) AND (c = $0))\n+ Filter: ((a = 1) AND (b = 1) AND (c = $0))\n -> Seq Scan on q111\n- Filter: ((a = $1) AND (b = $2) AND (c = $0))\n+ Filter: ((a = 1) AND (b = 1) AND (c = $0))\n -> Result\n- One-Time Filter: ((1 = $1) AND (1 = $2) AND (1 = $0))\n-(10 rows)\n+ One-Time Filter: (1 = $0)\n+(9 rows)\n\n execute q (1, 1);\n a | b | c\n\n\n", "msg_date": "Tue, 5 Nov 2019 16:04:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Tue, Nov 05, 
2019 at 04:04:25PM +1300, Thomas Munro wrote:\n> On Thu, Sep 12, 2019 at 10:10 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > Here's a rebased version of this patch (it had a trivial conflict).\n> \n> Hi, FYI partition_prune.sql currently fails (maybe something to do\n> with commit d52eaa09?):\n\nDavid, perhaps you did not notice that? For now I have moved this\npatch to next CF waiting on author to look after the failure.\n\nAmit, Kato-san, both of you are marked as reviewers of this patch.\nAre you planning to look at it?\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 17:17:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Wed, Nov 27, 2019 at 05:17:06PM +0900, Michael Paquier wrote:\n>On Tue, Nov 05, 2019 at 04:04:25PM +1300, Thomas Munro wrote:\n>> On Thu, Sep 12, 2019 at 10:10 AM Alvaro Herrera\n>> <alvherre@2ndquadrant.com> wrote:\n>> > Here's a rebased version of this patch (it had a trivial conflict).\n>>\n>> Hi, FYI partition_prune.sql currently fails (maybe something to do\n>> with commit d52eaa09?):\n>\n>David, perhaps you did not notice that? For now I have moved this\n>patch to next CF waiting on author to look after the failure.\n>\n>Amit, Kato-san, both of you are marked as reviewers of this patch.\n>Are you planning to look at it?\n\nDavid, this patch is marked as \"waiting on author\" since 11/27, and\nthere have been no updates or responses since then. Do you plan to\nsubmit a new patch version in this CF? 
We're already half-way through,\nso there's not much time ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 16 Jan 2020 22:45:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Thu, Jan 16, 2020 at 10:45:25PM +0100, Tomas Vondra wrote:\n> David, this patch is marked as \"waiting on author\" since 11/27, and\n> there have been no updates or responses since then. Do you plan to\n> submit a new patch version in this CF? We're already half-way through,\n> so there's not much time ...\n\nThe reason why I moved it to 2020-01 is that there was not enough time\nfor David to reply back. At this stage, it seems more appropriate to\nme to mark it as returned with feedback and move on.\n--\nMichael", "msg_date": "Fri, 17 Jan 2020 11:40:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "Sorry, I didn't notice this email until now.\n\nOn Wed, Nov 27, 2019 at 5:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Nov 05, 2019 at 04:04:25PM +1300, Thomas Munro wrote:\n> > On Thu, Sep 12, 2019 at 10:10 AM Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n> > > Here's a rebased version of this patch (it had a trivial conflict).\n> >\n> > Hi, FYI partition_prune.sql currently fails (maybe something to do\n> > with commit d52eaa09?):\n>\n> David, perhaps you did not notice that? For now I have moved this\n> patch to next CF waiting on author to look after the failure.\n>\n> Amit, Kato-san, both of you are marked as reviewers of this patch.\n> Are you planning to look at it?\n\nSorry, I never managed to look at the patch closely. 
As I commented\nup-thread, the functionality added by this patch would be unnecessary\nif we were to move forward with the other project related to UPDATE\nand DELETE over inheritance trees:\n\nhttps://www.postgresql.org/message-id/357.1550612935%40sss.pgh.pa.us\n\nI had volunteered to submit a patch in that thread and even managed to\nwrite one but didn't get time to get it in good enough shape to post\nit to the list, like I couldn't make it handle foreign child tables.\nThe gist of the new approach is that ModifyTable will always have\n*one* subplan under ModifyTable, not N for N target partitions as\ncurrently. That one subplan being the same plan as one would get if\nthe query were SELECT instead of UPDATE/DELETE, it would automatically\ntake care of run-time pruning if needed, freeing ModifyTable itself\nfrom having to do it.\n\nNow, the chances of such a big overhaul of how UPDATEs of inheritance\ntrees are handled getting into PG 13 seem pretty thin even if I post\nthe patch in few days, so perhaps it would make sense to get this\npatch in so that we can give users run-time pruning for UPDATE/DELETE\nin PG 13, provided the code is not very invasive. If and when the\naforesaid overhaul takes place, that code would go away along with a\nlot of other code.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 23 Jan 2020 16:31:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Thu, Jan 23, 2020 at 4:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Now, the chances of such a big overhaul of how UPDATEs of inheritance\n> trees are handled getting into PG 13 seem pretty thin even if I post\n> the patch in few days, so perhaps it would make sense to get this\n> patch in so that we can give users run-time pruning for UPDATE/DELETE\n> in PG 13, provided the code is not very invasive. 
If and when the\n> aforesaid overhaul takes place, that code would go away along with a\n> lot of other code.\n\nFwiw, I updated the patch, mainly expected/partition_prune.out. Some\ntests in it were failing as a fallout of commits d52eaa09 (pointed out\nby Thomas upthread) and 6ef77cf46e8, which are not really related to\nthe code being changed by the patch.\n\nOn the patch itself, it seems straightforward enough. It simply takes\nthe feature we have for Append and MergeAppend nodes and adopts it for\nModifyTable which for the purposes of run-time pruning looks very much\nlike the aforementioned nodes.\n\nPart of the optimizer patch that looks a bit complex is the changes to\ninheritance_planner() which is to be expected, because that function\nis a complex beast itself. I have suggestions to modify some comments\naround the code added/modified by the patch for clarity; attaching a\ndelta patch for that.\n\nThe executor patch looks pretty benign too. Diffs that looked a bit\nsuspicious at first are due to replacing\nModifyTableState.resultRelInfo that is a pointer into\nEState.es_result_relations array by an array of ResultRelInfo\npointers, but doing that seems to make the relevant code easier to\nfollow, especially if you consider the changes that the patch makes to\nthat code.\n\nI'll set the CF entry to Needs Review, because AFAICS there are no\nunaddressed comments.\n\nThanks,\nAmit", "msg_date": "Fri, 24 Jan 2020 17:56:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "Thanks for having a look at this, Amit.\n\nOn Fri, 24 Jan 2020 at 21:57, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jan 23, 2020 at 4:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Part of the optimizer patch that looks a bit complex is the changes to\n> inheritance_planner() which is to be expected, because that function\n> is a complex beast itself. 
I have suggestions to modify some comments\n> around the code added/modified by the patch for clarity; attaching a\n> delta patch for that.\n\nI've made another pass over the patch and made various changes. The\nbiggest of which was the required modifications to nodeModifyTable.c\nso that it can now prune all partitions. Append and MergeAppend were\nmodified to allow this in 5935917ce59 (Thanks for pushing that Tom).\nI've also slightly simplified the code in ExecModifyTable() and added\nslightly more code to ExecInitModifyTable(). We now only set\nmt_whichplan to WHICHPLAN_CHOOSE_SUBPLAN when run-time pruning is\nenabled and do_exec_prune is true. I also made it so when all\npartitions are pruned that we set mt_whichplan to\nWHICHPLAN_CHOOSE_SUBPLAN as this saves an additional run-time check\nduring execution.\n\nOver in inheritance_planner(), I noticed that the RT index of the\nSELECT query and the UPDATE/DELETE query can differ. There was some\ncode that performed translations. I changed that code slightly so that\nit's a bit more optimal. It was building two lists, one for the old\nRT index and one for the new. It added elements to this list\nregardless of if the RT indexes were the same or not. I've now changed\nthat to only add to the list if they differ, which I feel should never\nbe slower and most likely always faster. I'm also now building a\ntranslation map between the old and new RT indexes, however, I only\nfound one test in the regression tests which require any sort of\ntranslation of these RT indexes. This was with an inheritance table,\nso I need to do a bit more work to find a case where this happens with\na partitioned table to ensure all this works.\n\n> The executor patch looks pretty benign too. 
Diffs that looked a bit\n> suspicious at first are due to replacing\n> ModifyTableState.resultRelInfo that is a pointer into\n> EState.es_result_relations array by an array of ResultRelInfo\n> pointers, but doing that seems to make the relevant code easier to\n> follow, especially if you consider the changes that the patch makes to\n> that code.\n\nYeah, that's because the ModifyTableState's resultRelInfo field was\njust a pointer to the estate->es_result_relations array offset by the\nModifyTable's resultRelIndex. This was fine previously because we\nalways initialised the plans for each ResultRelInfo. However, now\nthat we might be pruning some of those that array can't be used as\nit'll still contain ResultRelInfos for relations we're not going to\ntouch. Changing this to an array of pointers allows us to point to the\nelements in estate->es_result_relations that we're going to use. I\nalso renamed the field just to ensure nothing can compile (thinking of\nextensions here) that's not got updated code.\n\nTom, I'm wondering if you wouldn't mind looking over my changes to\ninheritance_planner()?\n\nDavid", "msg_date": "Tue, 10 Mar 2020 00:13:47 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Tue, 10 Mar 2020 at 00:13, David Rowley <dgrowleyml@gmail.com> wrote:\n> Over in inheritance_planner(), I noticed that the RT index of the\n> SELECT query and the UPDATE/DELETE query can differ. There was some\n> code that performed translations. I changed that code slightly so that\n> it's a bit more optimal. It was building two lists, one for the old\n> RT index and one for the new. It added elements to this list\n> regardless of if the RT indexes were the same or not. I've now changed\n> that to only add to the list if they differ, which I feel should never\n> be slower and most likely always faster. 
I'm also now building a\n> translation map between the old and new RT indexes, however, I only\n> found one test in the regression tests which require any sort of\n> translation of these RT indexes. This was with an inheritance table,\n> so I need to do a bit more work to find a case where this happens with\n> a partitioned table to ensure all this works.\n\nI had a closer look at this today and the code I have in\ninheritance_planner() is certainly not right.\n\nIt's pretty easy to made the SELECT and UPDATE/DELETE's RT indexes\ndiffer with something like:\n\ndrop table part_t cascade;\ncreate table part_t (a int, b int, c int) partition by list (a);\ncreate table part_t12 partition of part_t for values in(1,2) partition\nby list (a);\ncreate table part_t12_1 partition of part_t12 for values in(1);\ncreate table part_t12_2 partition of part_t12 for values in(2);\ncreate table part_t3 partition of part_t for values in(3);\ncreate view vw_part_t as select * from part_t;\n\nexplain analyze update vw_part_t set a = t2.a +0 from part_t t2 where\nt2.a = vw_part_t.a and vw_part_t.a = (select 1);\n\nIn this case, the sub-partitioned table changes RT index. 
I can't\njust take the RelOptInfo's from the partition_root's simple_rel_array\nand put them in the correct element in the root's simple_rel_array as\nthey RT indexes stored within also need to be translated.\n\nI'll be having another look at this to see what the best fix is going to be.\n\nDavid\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:51:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I had a closer look at this today and the code I have in\n> inheritance_planner() is certainly not right.\n\nAlthough I didn't get around to it for v13, there's still a plan on the\ntable for inheritance_planner() to get nuked from orbit [1].\n\nMaybe this improvement should be put on hold till that's done?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/357.1550612935%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 24 Mar 2020 20:00:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Wed, 25 Mar 2020 at 13:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I had a closer look at this today and the code I have in\n> > inheritance_planner() is certainly not right.\n>\n> Although I didn't get around to it for v13, there's still a plan on the\n> table for inheritance_planner() to get nuked from orbit [1].\n>\n> Maybe this improvement should be put on hold till that's done?\n\nPossibly. I'm not really wedded to the idea of getting it in. However,\nit would really only be the inheritance planner part that would need\nto be changed later. I don't think any of the other code would need to\nbe adjusted.\n\nAmit shared his thoughts in [1]. 
If you'd rather I held off, then I will.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqGhD7ieKsv5%2BGkmHgs-XhP2DoUhtESVb3MU-4j14%3DG6LA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:48:34 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "Hi David,\n\nSorry I couldn't get to this sooner.\n\nOn Wed, Mar 25, 2020 at 9:49 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 25 Mar 2020 at 13:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > I had a closer look at this today and the code I have in\n> > > inheritance_planner() is certainly not right.\n> >\n> > Although I didn't get around to it for v13, there's still a plan on the\n> > table for inheritance_planner() to get nuked from orbit [1].\n> >\n> > Maybe this improvement should be put on hold till that's done?\n>\n> Possibly. I'm not really wedded to the idea of getting it in. However,\n> it would really only be the inheritance planner part that would need\n> to be changed later. I don't think any of the other code would need to\n> be adjusted.\n>\n> Amit shared his thoughts in [1]. If you'd rather I held off, then I will.\n>\n> David\n>\n> [1] https://www.postgresql.org/message-id/CA%2BHiwqGhD7ieKsv5%2BGkmHgs-XhP2DoUhtESVb3MU-4j14%3DG6LA%40mail.gmail.com\n\nActually, I was saying in that email that the update/delete planning\noverhaul being talked about will make the entirety of the\nfunctionality this patch is adding, which is ModifyTable node being\nable to prune its subplans based on run-time parameter values,\nredundant. 
That's because, with the overhaul, there won't be multiple\nsubplans under ModifyTable, only one which would take care of any\npruning that's necessary.\n\nWhat I did say in favor of this patch though is that it does not seem\nthat invasive, so maybe okay to get in for v13.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Mar 2020 11:36:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Wed, 25 Mar 2020 at 15:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> Actually, I was saying in that email that the update/delete planning\n> overhaul being talked about will make the entirety of the\n> functionality this patch is adding, which is ModifyTable node being\n> able to prune its subplans based on run-time parameter values,\n> redundant. That's because, with the overhaul, there won't be multiple\n> subplans under ModifyTable, only one which would take care of any\n> pruning that's necessary.\n\nThanks for explaining. I've not read over any patch for that yet, so\nwasn't aware of exactly what was planned.\n\nWith your explanation, I imagine some sort of Append / MergeAppend\nthat runs the query as if it were a SELECT, but each\nAppend/MergeAppend subnode is tagged somehow with an index of which\nModifyTable subnode that it belongs to. 
Basically, just one complete\nplan, rather than a plan per ModifyTable subnode.\n\n> What I did say in favor of this patch though is that it doesn not seem\n> that invasive, so maybe okay to get in for v13.\n\nSince it seems there's much less code that will be useful after the\nrewrite than I thought, combined with the fact that I'm not entirely\nsure the best way to reuse the partitioned table's RelOptInfo from the\nSELECT's PlannerInfo, then I'm going to return this one with feedback.\n\nDavid\n\n\n", "msg_date": "Tue, 7 Apr 2020 23:52:29 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" }, { "msg_contents": "On Tue, Apr 7, 2020 at 8:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 25 Mar 2020 at 15:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Actually, I was saying in that email that the update/delete planning\n> > overhaul being talked about will make the entirety of the\n> > functionality this patch is adding, which is ModifyTable node being\n> > able to prune its subplans based on run-time parameter values,\n> > redundant. That's because, with the overhaul, there won't be multiple\n> > subplans under ModifyTable, only one which would take care of any\n> > pruning that's necessary.\n>\n> Thanks for explaining. I've not read over any patch for that yet, so\n> wasn't aware of exactly what was planned.\n>\n> With your explanation, I imagine some sort of Append / MergeAppend\n> that runs the query as if it were a SELECT, but each\n> Append/MergeAppend subnode is tagged somehow with an index of which\n> ModifyTable subnode that it belongs to. Basically, just one complete\n> plan, rather than a plan per ModifyTable subnode.\n\nThat's correct, although I don't think Append/MergeAppend will need to\nlook any different structurally, except its subnodes will need to\nproduce a targetlist member to identify partition/child for a given\noutput row. 
There will still be N result relations, but not the N\nplans created separately for each, as inheritance_planner() currently\ndoes.\n\n> > What I did say in favor of this patch though is that it doesn not seem\n> > that invasive, so maybe okay to get in for v13.\n>\n> Since it seems there's much less code that will be useful after the\n> rewrite than I thought, combined with the fact that I'm not entirely\n> sure the best way to reuse the partitioned table's RelOptInfo from the\n> SELECT's PlannerInfo, then I'm going to return this one with feedback.\n\nThat makes sense. I am thinking to spend some time working on this\nearly in PG 14 cycle.\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 14:54:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run-time pruning for ModifyTable" } ]
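The data-structure point David makes in this thread, that ModifyTableState kept a raw pointer into the EState.es_result_relations array offset by resultRelIndex, which stops being valid once run-time pruning can skip some result relations, can be sketched as follows. This is an illustrative Python model only, with invented variable names; the actual implementation is C in the executor:

```python
# Illustrative model: why an offset into the full result-relation array
# breaks once run-time pruning can remove some relations.

all_result_rels = ["part_t12_1", "part_t12_2", "part_t3"]  # es_result_relations

# Before: ModifyTable effectively kept a view starting at resultRelIndex,
# implicitly assuming every element from there on would be initialized.
result_rel_index = 0
old_view = all_result_rels[result_rel_index:]   # still contains pruned rels

# After: keep explicit references to only the relations that survived
# pruning (the "array of ResultRelInfo pointers" in the patch).
surviving = [0, 2]                              # indexes left after pruning
new_view = [all_result_rels[i] for i in surviving]

assert old_view == ["part_t12_1", "part_t12_2", "part_t3"]
assert new_view == ["part_t12_1", "part_t3"]    # part_t12_2 pruned away
```

The second list contains references to only the entries that will actually be touched, which is why renaming the field (so stale extension code fails to compile) was a reasonable extra precaution.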
[ { "msg_contents": "Hello hackers,\n\nWhile working on pg_dump I noticed the extra quote_all_identifiers in \n_dumpOptions struct. I attached the patch.\n\nIt came from refactoring by 0eea8047bf. 
There is also a discussion:\n> https://www.postgresql.org/message-id/CACw0%2B13ZUcXbj9GKJNGZTkym1SXhwRu28nxHoJMoX5Qwmbg_%2Bw%40mail.gmail.com\n> \n> Initially the patch proposed to use quote_all_identifiers of _dumpOptions.\n> But then everyone came to a decision to use global quote_all_identifiers\n> from string_utils.c, because fmtId() uses it.\n> \n> -- \n> Arthur Zakirov\n> Postgres Professional: http://www.postgrespro.com\n> Russian Postgres Company\n\n> diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h\n> index db30b54a92..8c0cedcd98 100644\n> --- a/src/bin/pg_dump/pg_backup.h\n> +++ b/src/bin/pg_dump/pg_backup.h\n> @@ -153,7 +153,6 @@ typedef struct _dumpOptions\n> \tint\t\t\tno_synchronized_snapshots;\n> \tint\t\t\tno_unlogged_table_data;\n> \tint\t\t\tserializable_deferrable;\n> -\tint\t\t\tquote_all_identifiers;\n> \tint\t\t\tdisable_triggers;\n> \tint\t\t\toutputNoTablespaces;\n> \tint\t\t\tuse_setsessauth;\n\nWow, good catch. I thought C compilers would have reported this issue,\nbut obviously not. Patch applied to head. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Jul 2019 19:32:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Extra quote_all_identifiers in _dumpOptions" }, { "msg_contents": "On Mon, Jul 08, 2019 at 07:32:07PM -0400, Bruce Momjian wrote:\n> Wow, good catch. I thought C compilers would have reported this issue,\n> but obviously not. Patch applied to head. 
Thanks.\n\nYes, I don't recall that gcc nor clang have a magic recipy for that.\nWe have a couple of other orphaned ones in the backend actually.\n--\nMichael", "msg_date": "Tue, 9 Jul 2019 16:41:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extra quote_all_identifiers in _dumpOptions" }, { "msg_contents": "On 09.07.2019 02:32, Bruce Momjian wrote:\n>> diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h\n>> index db30b54a92..8c0cedcd98 100644\n>> --- a/src/bin/pg_dump/pg_backup.h\n>> +++ b/src/bin/pg_dump/pg_backup.h\n>> @@ -153,7 +153,6 @@ typedef struct _dumpOptions\n>> \tint\t\t\tno_synchronized_snapshots;\n>> \tint\t\t\tno_unlogged_table_data;\n>> \tint\t\t\tserializable_deferrable;\n>> -\tint\t\t\tquote_all_identifiers;\n>> \tint\t\t\tdisable_triggers;\n>> \tint\t\t\toutputNoTablespaces;\n>> \tint\t\t\tuse_setsessauth;\n> \n> Wow, good catch. I thought C compilers would have reported this issue,\n> but obviously not. Patch applied to head. Thanks.\n\nThank you, Bruce!\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 9 Jul 2019 11:10:49 +0300", "msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Extra quote_all_identifiers in _dumpOptions" } ]
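The bug class fixed in this thread, a per-options copy of a flag that nothing reads, shadowing the global that fmtId() actually consults, can be modeled in a few lines. This is a hedged Python sketch with a deliberately simplified quoting rule, not pg_dump's actual C code:

```python
# Sketch: an orphaned struct field next to the global it shadows.
# Compilers do not warn about unused struct members, which is why this
# survived the refactoring unnoticed.

quote_all_identifiers = False          # the global, as in string_utils.c

class DumpOptions:
    def __init__(self):
        self.quote_all_identifiers = False   # orphaned copy (now removed)

def fmt_id(name):
    # Like fmtId(), this consults the global, never the options field.
    # (Real fmtId() also checks keywords etc.; this rule is simplified.)
    needs_quote = (quote_all_identifiers
                   or not name.isidentifier()
                   or name != name.lower())
    return f'"{name}"' if needs_quote else name

opts = DumpOptions()
opts.quote_all_identifiers = True      # silently ignored by fmt_id()
assert fmt_id("mytable") == "mytable"

quote_all_identifiers = True           # only the global changes behavior
assert fmt_id("mytable") == '"mytable"'
```

Setting the struct copy has no effect, which is exactly why the earlier discussion settled on the global as the single source of truth and why the duplicate field could simply be deleted.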
[ { "msg_contents": "Hi,\n\nI think commit 83e176ec1, which moved block sampling functions to\nutils/misc/sampling.c, forgot to update this comment in\ncommands/analyze.c: \"This algorithm is from Jeff Vitter's paper (see\nfull citation below)\"; since the citation was also moved to\nutils/misc/sampling.c, I think the \"see full citation below\" part\nshould be updated accordingly. Attached is a patch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 27 Jun 2019 20:05:36 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Obsolete comment in commands/analyze.c" }, { "msg_contents": "> On 27 Jun 2019, at 13:05, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n\n> since the citation was also moved to\n> utils/misc/sampling.c, I think the \"see full citation below\" part\n> should be updated accordingly. Attached is a patch for that.\n\nAgreed, nice catch!\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 27 Jun 2019 13:53:52 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in commands/analyze.c" }, { "msg_contents": "On Thu, Jun 27, 2019 at 01:53:52PM +0200, Daniel Gustafsson wrote:\n>> On 27 Jun 2019, at 13:05, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n>> since the citation was also moved to\n>> utils/misc/sampling.c, I think the \"see full citation below\" part\n>> should be updated accordingly. 
Attached is a patch for that.\n>\n>Agreed, nice catch!\n>\n\nThanks, committed.\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 27 Jun 2019 18:02:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in commands/analyze.c" }, { "msg_contents": "On Fri, Jun 28, 2019 at 1:02 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, Jun 27, 2019 at 01:53:52PM +0200, Daniel Gustafsson wrote:\n> >> On 27 Jun 2019, at 13:05, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> >\n> >> since the citation was also moved to\n> >> utils/misc/sampling.c, I think the \"see full citation below\" part\n> >> should be updated accordingly. Attached is a patch for that.\n> >\n> >Agreed, nice catch!\n>\n> Thanks, committed.\n\nThanks for committing, Tomas! Thanks for reviewing, Daniel!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 28 Jun 2019 13:00:12 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in commands/analyze.c" } ]
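For context on the relocated citation: the sampling code in utils/misc/sampling.c implements Vitter's reservoir-sampling work. ANALYZE uses the faster Algorithm Z from that paper, which skips over rows in batches, but the underlying invariant is easiest to see in the plain Algorithm R variant. The sketch below is illustrative Python, not the C in sampling.c:

```python
import random

def reservoir_sample(rows, k, rng=random.Random(42)):
    """Plain reservoir sampling (Algorithm R from Vitter's paper).
    Invariant: after processing n rows, every row seen so far is in the
    k-element sample with probability k/n."""
    sample = []
    for n, row in enumerate(rows, start=1):
        if n <= k:
            sample.append(row)           # fill the reservoir first
        else:
            j = rng.randrange(n)         # uniform in [0, n)
            if j < k:
                sample[j] = row          # replace with probability k/n
    return sample

s = reservoir_sample(range(10_000), 100)
assert len(s) == 100
assert all(0 <= x < 10_000 for x in s)
```

Algorithm Z produces the same distribution while drawing far fewer random numbers, which matters when sampling rows from a large table.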
[ { "msg_contents": "On several popular operating systems, we can use relative rpaths, using\nthe $ORIGIN placeholder, so that the resulting installation is\nrelocatable. Then we also don't need to set LD_LIBRARY_PATH during make\ncheck.\n\nThis implementation will use a relative rpath if bindir and libdir are\nunder the same common parent directory.\n\nSupported platforms are: freebsd, linux, netbsd, openbsd, solaris\n\nInformation from https://lekensteyn.nl/rpath.html\n\n(Yes, something for macOS would be nice, to work around SIP issues, but\nI'll leave that as a separate future item.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 27 Jun 2019 13:45:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Use relative rpath if possible" }, { "msg_contents": "rebased patch attached, no functionality changes\n\nOn 2019-06-27 13:45, Peter Eisentraut wrote:\n> On several popular operating systems, we can use relative rpaths, using\n> the $ORIGIN placeholder, so that the resulting installation is\n> relocatable. 
Then we also don't need to set LD_LIBRARY_PATH during make\n> check.\n> \n> This implementation will use a relative rpath if bindir and libdir are\n> under the same common parent directory.\n> \n> Supported platforms are: freebsd, linux, netbsd, openbsd, solaris\n> \n> Information from https://lekensteyn.nl/rpath.html\n> \n> (Yes, something for macOS would be nice, to work around SIP issues, but\n> I'll leave that as a separate future item.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 5 Jul 2019 11:37:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Use relative rpath if possible" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> rebased patch attached, no functionality changes\n\nI poked at this a bit, and soon found that it fails check-world,\nbecause the isolationtester binary is built with an rpath that\nonly works if it's part of the temp install tree, which it ain't.\n\n/home/postgres/pgsql/src/test/isolation/isolationtester: error while loading shared libraries: libpq.so.5: cannot open shared object file: No such file or directory\n\n(The cfbot seems to get past that, for reasons that are entirely unclear\nto me; but it falls over later in the ecpg tests with what's presumably\nthe same problem.)\n\nWhile there might be some argument for making isolationtester part\nof the installed set of executables, that approach certainly doesn't\nscale; we can't insist that every test tool should be part of the\ninstallation.\n\nSo I think we need some more-intelligent rule about when to apply\nthe relative rpath. 
Which in turn seems to mean we still need to\nset up LD_LIBRARY_PATH in some cases.\n\nAnother thing I noticed is that if, say, you build a new version\nof psql and try to test it out with \"./psql ...\", that doesn't work\nanymore (whereas today, it does work as long as you installed libpq\nearlier). That might be acceptable collateral damage, but it's not\nvery desirable IMO.\n\nI'm also slightly concerned that this effectively mandates that every\nlibrary we install be immediately in $(libdir), never subdirectories\nthereof; else it'd need more than one \"..\" in its rpath and there's no\nway to adjust that. That's not a showstopper problem probably, because\nwe have no such libraries today, but I wonder if somebody would want\nsome in the future.\n\nA possible partial solution to these issues is to make the rpath look\nlike $ORIGIN/../lib and then the normal absolute rpath. But that\ndoesn't fix the problem for non-installed binaries used in check-world\nwith no pre-existing installation.\n\n>> (Yes, something for macOS would be nice, to work around SIP issues, but\n>> I'll leave that as a separate future item.)\n\nTBH, I think that supporting macOS with SIP enabled is really the\nonly interesting case here. On these other platforms, changing this\nwon't fix anything very critical, and it seems like it will make\nsome cases worse.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2019 15:30:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use relative rpath if possible" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> rebased patch attached, no functionality changes\n\n> I poked at this a bit, and soon found that it fails check-world,\n> because the isolationtester binary is built with an rpath that\n> only works if it's part of the temp install tree, which it ain't.\n\nOh ... 
just thought of another issue in the same vein: what about\nmodules being built out-of-tree with pgxs? (I'm imagining something\nwith a libpq.so dependency, like postgres_fdw.) We probably really\nhave to keep using the absolute rpath for that, because not only\nwould such modules certainly fail \"make check\" with a relative\nrpath, but it's not really certain that they're intended to get\ninstalled into the same installdir as the core libraries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2019 10:11:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use relative rpath if possible" }, { "msg_contents": "On Mon, Jul 8, 2019 at 2:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> rebased patch attached, no functionality changes\n>\n> > I poked at this a bit, and soon found that it fails check-world,\n> > because the isolationtester binary is built with an rpath that\n> > only works if it's part of the temp install tree, which it ain't.\n>\n> Oh ... just thought of another issue in the same vein: what about\n> modules being built out-of-tree with pgxs? (I'm imagining something\n> with a libpq.so dependency, like postgres_fdw.) We probably really\n> have to keep using the absolute rpath for that, because not only\n> would such modules certainly fail \"make check\" with a relative\n> rpath, but it's not really certain that they're intended to get\n> installed into the same installdir as the core libraries.\n\nThere were a number of problems flagged up in Tom's feedback and then\nsilence. 
I think this belongs in the 'Returned with feedback' box, so\nI've set it to that, but of course feel free to set it to 'Needs\nreview' and thence 'Move to next CF'.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:28:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use relative rpath if possible" } ]
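Peter's proposed rule, use a relative rpath only when bindir and libdir sit under a common parent, otherwise fall back to the absolute path, can be sketched as a small helper. This is a hypothetical Python function assuming POSIX paths, not the actual Makefile.shlib logic:

```python
import os.path

def relative_rpath(bindir, libdir):
    """Return an $ORIGIN-relative rpath when bindir and libdir share a
    common parent directory, else the absolute libdir (hypothetical
    helper; the real decision is made in the build system)."""
    rel = os.path.relpath(libdir, start=bindir)
    # A relative rpath only makes the tree relocatable if the two
    # directories actually share a parent other than the filesystem root.
    if os.path.commonpath([bindir, libdir]) not in ("/", ""):
        return os.path.join("$ORIGIN", rel)
    return libdir

assert relative_rpath("/usr/local/pgsql/bin",
                      "/usr/local/pgsql/lib") == "$ORIGIN/../lib"
assert relative_rpath("/opt/pg/bin", "/usr/lib") == "/usr/lib"
```

Note how Tom's objections map onto this sketch: a binary like isolationtester that is never installed has no stable position relative to libdir, and a library installed in a subdirectory of libdir would need additional ".." components that a single fixed rpath string cannot express.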
[ { "msg_contents": "The ssl_ciphers GUC for configuring cipher suites sets the default value based\non USE_SSL but it’s actually an OpenSSL specific value. As with other such\nchanges, it works fine now but will become an issue should we support other TLS\nbackends. Attached patch fixes this.\n\ncheers ./daniel", "msg_date": "Thu, 27 Jun 2019 16:02:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "OpenSSL specific value under USE_SSL instead of USE_OPENSSL" }, { "msg_contents": "On Thu, Jun 27, 2019 at 04:02:45PM +0200, Daniel Gustafsson wrote:\n> The ssl_ciphers GUC for configuring cipher suites sets the default value based\n> on USE_SSL but it’s actually an OpenSSL specific value. As with other such\n> changes, it works fine now but will become an issue should we support other TLS\n> backends. Attached patch fixes this.\n\nThanks, patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Jul 2019 19:40:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL specific value under USE_SSL instead of USE_OPENSSL" } ]
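The guard change in this thread can be illustrated by modeling the conditional-compilation choice: the default cipher list is an OpenSSL-style string, so it must be selected under USE_OPENSSL rather than the generic USE_SSL. The Python below is a sketch of that selection logic, and the cipher string shown is the OpenSSL-era default at the time; both are illustrative, not the guc.c source:

```python
# Sketch of the #ifdef selection: an OpenSSL-syntax default must be tied
# to USE_OPENSSL, so a future non-OpenSSL TLS backend can supply its own.

def default_ssl_ciphers(use_openssl, use_other_tls=False):
    if use_openssl:                       # was: #ifdef USE_SSL, now USE_OPENSSL
        return "HIGH:MEDIUM:+3DES:!aNULL" # OpenSSL cipher-list syntax
    if use_other_tls:                     # hypothetical future backend
        return "none"                     # its own default would go here
    return "none"                         # built without SSL support

assert default_ssl_ciphers(use_openssl=True) == "HIGH:MEDIUM:+3DES:!aNULL"
assert default_ssl_ciphers(use_openssl=False) == "none"
```

With the old USE_SSL guard, any non-OpenSSL build would have inherited a default whose syntax its TLS library cannot parse, which is exactly the future problem the patch heads off.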
[ { "msg_contents": "Hi,\n\nHere is a patch for the pg_receivewal documentation to highlight that \nWAL isn't acknowledged to be applied.\n\nI'll add a CF entry for it.\n\nBest regards,\n Jesper", "msg_date": "Thu, 27 Jun 2019 10:06:46 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "pg_receivewal documentation" }, { "msg_contents": "On Thu, 2019-06-27 at 10:06 -0400, Jesper Pedersen wrote:\n> Here is a patch for the pg_receivewal documentation to highlight that \n> WAL isn't acknowledged to be applied.\n\nI think it is a good idea to document this, but I have a few quibbles\nwith the patch as it is:\n\n- I think there shouldn't be commas after the \"note\" and before the \"if\".\n Disclaimer: I am not a native speaker, so I am lacking authority.\n\n- The assertion is wrong. \"on\" (remote flush) is perfectly fine\n for synchronous_commit, only \"remote_apply\" is a problem.\n\n- There is already something about \"--synchronous\" in the \"Description\"\n section. It might make sense to add the additional information there.\n\nHow about the attached patch?\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 09 Jul 2019 11:16:55 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi Laurenz,\n\nOn 7/9/19 5:16 AM, Laurenz Albe wrote:\n> On Thu, 2019-06-27 at 10:06 -0400, Jesper Pedersen wrote:\n>> Here is a patch for the pg_receivewal documentation to highlight that\n>> WAL isn't acknowledged to be applied.\n> \n> I think it is a good idea to document this, but I have a few quibbles\n> with the patch as it is:\n> \n> - I think there shouldn't be commas after the \"note\" and before the \"if\".\n> Disclaimer: I am not a native speaker, so I am lacking authority.\n> \n> - The assertion is wrong. 
\"on\" (remote flush) is perfectly fine\n> for synchronous_commit, only \"remote_apply\" is a problem.\n> \n> - There is already something about \"--synchronous\" in the \"Description\"\n> section. It might make sense to add the additional information there.\n> \n> How about the attached patch?\n> \n\nThanks for the review, and the changes.\n\nHowever, I think it belongs in the --synchronous section, so what about \nmoving it there as attached ?\n\nBest regards,\n Jesper", "msg_date": "Tue, 9 Jul 2019 13:18:52 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Jesper Pedersen wrote:\n> Thanks for the review, and the changes.\n> \n> However, I think it belongs in the --synchronous section, so what about \n> moving it there as attached ?\n\nWorks for me.\n\nMarked as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 10 Jul 2019 00:22:02 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 10, 2019 at 12:22:02AM +0200, Laurenz Albe wrote:\n> Works for me.\n> \n> Marked as \"ready for committer\".\n\nHmm. synchronous_commit is user-settable, which means that it is\npossible to enforce a value in the connection string doing the\nconnection. Isn't that something we had better enforce directly in\nthe tool? In this case what could be fixed is GetConnection() which\nbuilds the connection string parameters. One thing that we would need\nto be careful about is that if the caller has provided a parameter for\n\"options\" (which is plausible as wal_sender_timeout is user-settable\nas of 12), then we need to make sure that the original value is\npreserved, and that the enforced of synchronous_commit is appended.\n\nOr, as you say, we just adjust the documentation. 
However I would\nrecommend adding at least an example of connection string which uses\n\"options\" if the server sets synchronous_commit to \"remote_apply\" in\nthis case. Still it seems to me that we have ways to reduce the\nconfusion automatically.\n--\nMichael", "msg_date": "Wed, 10 Jul 2019 17:04:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/9/19 6:22 PM, Laurenz Albe wrote:\n> Works for me.\n> \n> Marked as \"ready for committer\".\n> \n\nThank you !\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Wed, 10 Jul 2019 08:31:59 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/10/19 4:04 AM, Michael Paquier wrote:\n> On Wed, Jul 10, 2019 at 12:22:02AM +0200, Laurenz Albe wrote:\n>> Works for me.\n>>\n>> Marked as \"ready for committer\".\n> \n> Hmm. synchronous_commit is user-settable, which means that it is\n> possible to enforce a value in the connection string doing the\n> connection. Isn't that something we had better enforce directly in\n> the tool? In this case what could be fixed is GetConnection() which\n> builds the connection string parameters. One thing that we would need\n> to be careful about is that if the caller has provided a parameter for\n> \"options\" (which is plausible as wal_sender_timeout is user-settable\n> as of 12), then we need to make sure that the original value is\n> preserved, and that the enforced of synchronous_commit is appended.\n> \n\nI think that the above is out-of-scope for this patch. And ...\n\n> Or, as you say, we just adjust the documentation. However I would\n> recommend adding at least an example of connection string which uses\n> \"options\" if the server sets synchronous_commit to \"remote_apply\" in\n> this case. 
Still it seems to me that we have ways to reduce the\n> confusion automatically.\n\n\nThe patch tries to highlight that if you f.ex. have\n\npostgresql.conf\n===============\nsynchronous_commit = remote_apply\nsynchronous_standby_names = '*'\n\nand you _only_ have pg_receivewal connected then changes are only \napplied locally to the primary instance and any client (psql, ...) won't \nget acknowledged. The replay_lsn for the pg_receivewal connection will \nkeep increasing, so\n\nenv PGOPTIONS=\"-c synchronous_commit=remote_write\" pg_receivewal -D \n/tmp/wal -S replica1 --synchronous\n\nwon't help you.\n\nWe could add some wording around 'synchronous_standby_names' if it makes \nthe case clearer.\n\nBest regards,\n Jesper\n\n\n", "msg_date": "Wed, 10 Jul 2019 08:48:12 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On 2019-Jul-09, Jesper Pedersen wrote:\n\n> + <para>\n> + Note that while WAL will be flushed with this setting,\n> + it will never be applied, so <xref linkend=\"guc-synchronous-commit\"/> must\n> + not be set to <literal>remote_apply</literal> if <application>pg_receivewal</application>\n> + is the only synchronous standby.\n> + </para>\n\n+1 to document this caveat.\n\nHow about \n Note that while WAL will be flushed with this setting,\n <application>pg_receivewal</application> never applies it, so\n <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n <literal>remote_apply</literal> if <application>pg_receivewal</application>\n is the only synchronous standby.\n?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Jul 2019 10:24:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/10/19 10:24 AM, Alvaro Herrera 
wrote:\n> +1 to document this caveat.\n> \n> How about\n> Note that while WAL will be flushed with this setting,\n> <application>pg_receivewal</application> never applies it, so\n> <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n> <literal>remote_apply</literal> if <application>pg_receivewal</application>\n> is the only synchronous standby.\n> ?\n> \n\nSure.\n\nBest regards,\n Jesper", "msg_date": "Wed, 10 Jul 2019 11:26:04 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, 2019-07-10 at 17:04 +0900, Michael Paquier wrote:\n> Hmm. synchronous_commit is user-settable, which means that it is\n> possible to enforce a value in the connection string doing the\n> connection. Isn't that something we had better enforce directly in\n> the tool? In this case what could be fixed is GetConnection() which\n> builds the connection string parameters.\n\nI don't follow.\n\nAre you talking about the replication connection from pg_receivewal\nto the PostgreSQL server? That wouldn't do anything, because it is\nthe setting of \"synchronous_commit\" for an independent client\nconnection that is the problem:\n\n- pg_receivewal starts a replication connection.\n\n- It is added to \"synchronous_standby_names\" on the server.\n\n- A client connects. 
It sets \"synchronous_commit\" to \"remote_apply\".\n\n- If the client modifies data, COMMIT will hang indefinitely,\n because pg_receivewal will never send confirmation that it has\n applied the changes.\n\nOne alternative option I see is for pg_receivewal to confirm that\nit has applied the changes as soon as it flushed them.\nIt would be cheating somewhat, but it would work around the problem\nin a way that few people would find surprising.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 10 Jul 2019 21:12:46 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 10, 2019 at 09:12:46PM +0200, Laurenz Albe wrote:\n> Are you talking about the replication connection from pg_receivewal\n> to the PostgreSQL server? That wouldn't do anything, because it is\n> the setting of \"synchronous_commit\" for an independent client\n> connection that is the problem:\n\nDitto. My previous message was wrong and you are right. You are\nright that this had better be documented. 
I have not thought this one\nthrough completely.\n\n> One alternative option I see is for pg_receivewal to confirm that\n> it has applied the changes as soon as it flushed them.\n> It would be cheating somewhat, but it would work around the problem\n> in a way that few people would find surprising.\n\nYes, that's wrong as pg_receivewal applies nothing.\n--\nMichael", "msg_date": "Thu, 11 Jul 2019 13:58:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 10, 2019 at 11:26:04AM -0400, Jesper Pedersen wrote:\n> On 7/10/19 10:24 AM, Alvaro Herrera wrote:\n> > +1 to document this caveat.\n>> \n>> How about\n>> Note that while WAL will be flushed with this setting,\n>> <application>pg_receivewal</application> never applies it, so\n>> <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n>> <literal>remote_apply</literal> if <application>pg_receivewal</application>\n>> is the only synchronous standby.\n>> ?\n>> \n> \n> Sure.\n\nThis is not true in all cases as since 9.6 it is possible to specify\nmultiple synchronous standbys. So if for example pg_receivewal and\nanother synchronous standby are set in s_s_names and that the number\nof a FIRST (priority-based) or ANY (quorum set) is two, then the same\nissue exists, but this documentation is incorrect.
I think that we\nshould have a more extensive wording here, like \"if pg_receivewal is\npart of a quorum-based or priority-based set of synchronous standbys.\"\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 16 Jul 2019 14:05:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Tue, 2019-07-16 at 14:05 +0900, Michael Paquier wrote:\n> >> How about\n> >> Note that while WAL will be flushed with this setting,\n> >> <application>pg_receivewal</application> never applies it, so\n> >> <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n> >> <literal>remote_apply</literal> if <application>pg_receivewal</application>\n> >> is the only synchronous standby.\n> \n> This is not true in all cases as since 9.6 it is possible to specify\n> multiple synchronous standbys. So if for example pg_receivewal and\n> another synchronous standby are set in s_s_names and that the number\n> of a FIRST (priority-based) or ANY (quorum set) is two, then the same\n> issue exists, but this documentation is incorrect. I think that we\n> should have a more extensive wording here, like \"if pg_receivewal is\n> part of a quorum-based or priority-based set of synchronous standbys.\"\n\nI think this would be overly complicated.\nThe wording above seems to cover the priority-based case sufficiently\nin my opinion.\nMaybe a second sentence with more detail would be better:\n\n ...
must not be set to <literal>remote_apply</literal> if\n <application>pg_receivewal</application> is the only synchronous standby.\n Similarly, if <application>pg_receivewal</application> is part of\n a quorum-based set of synchronous standbys, it won't count towards\n the quorum if <xref linkend=\"guc-synchronous-commit\"/> is set to\n <literal>remote_apply</literal>.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 16 Jul 2019 18:28:57 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/16/19 12:28 PM, Laurenz Albe wrote:\n>> This is not true in all cases as since 9.6 it is possible to specify\n>> multiple synchronous standbys. So if for example pg_receivewal and\n>> another synchronous standby are set in s_s_names and that the number\n>> of a FIRST (priority-based) or ANY (quorum set) is two, then the same\n>> issue exists, but this documentation is incorrect. I think that we\n>> should have a more extensive wording here, like \"if pg_receivewal is\n>> part of a quorum-based or priority-based set of synchronous standbys.\"\n> \n> I think this would be overly complicated.\n> The wording above seems to cover the priority-based case sufficiently\n> in my opinion.\n> Maybe a second sentence with more detail would be better:\n> \n> ...
must not be set to <literal>remote_apply</literal> if\n> <application>pg_receivewal</application> is the only synchronous standby.\n> Similarly, if <application>pg_receivewal</application> is part of\n> a quorum-based set of synchronous standbys, it won't count towards\n> the quorum if <xref linkend=\"guc-synchronous-commit\"/> is set to\n> <literal>remote_apply</literal>.\n> \n\nHere is the patch for that.\n\nBest regards,\n Jesper", "msg_date": "Tue, 16 Jul 2019 13:03:12 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Tue, Jul 16, 2019 at 01:03:12PM -0400, Jesper Pedersen wrote:\n> Here is the patch for that.\n\n+ <para>\n+ Note that while WAL will be flushed with this setting,\n+ <application>pg_receivewal</application> never applies it, so\n+ <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n+ <literal>remote_apply</literal> if <application>pg_receivewal</application>\n+ is the only synchronous standby. Similarly, if\n+ <application>pg_receivewal</application> is part of a quorum-based\n+ set of synchronous standbys, it won't count towards the quorum if\n+ <xref linkend=\"guc-synchronous-commit\"/> is set to\n+ <literal>remote_apply</literal>.\n+ </para>\n\nI think we should really document the caveat with priority-based sets\nof standbys as much as quorum-based sets. For example if a user sets\nsynchronous_commit = remote_apply in postgresql.conf, and then sets\ns_s_names to '2(pg_receivewal, my_connected_standby)' to get a\npriority-based set, then you have the same problem, and pg_receivewal\nis not the only synchronous standby in this configuration.
The patch\ndoes not cover that case properly.\n--\nMichael", "msg_date": "Wed, 17 Jul 2019 10:38:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, 2019-07-17 at 10:38 +0900, Michael Paquier wrote:\n> + <para>\n> + Note that while WAL will be flushed with this setting,\n> + <application>pg_receivewal</application> never applies it, so\n> + <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n> + <literal>remote_apply</literal> if <application>pg_receivewal</application>\n> + is the only synchronous standby. Similarly, if\n> + <application>pg_receivewal</application> is part of a quorum-based\n> + set of synchronous standbys, it won't count towards the quorum if\n> + <xref linkend=\"guc-synchronous-commit\"/> is set to\n> + <literal>remote_apply</literal>.\n> + </para>\n> \n> I think we should really document the caveat with priority-based sets\n> of standbys as much as quorum-based sets. For example if a user sets\n> synchronous_commit = remote_apply in postgresql.conf, and then sets\n> s_s_names to '2(pg_receivewal, my_connected_standby)' to get a\n> priority-based set, then you have the same problem, and pg_receivewal\n> is not the only synchronous standby in this configuration.
The patch\n> does not cover that case properly.\n\nI understand the concern, I'm just worried that too much accuracy may\nrender the sentence hard to read.\n\nHow about adding \"or priority-based\" after \"quorum-based\"?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 17 Jul 2019 07:40:48 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 17, 2019 at 07:40:48AM +0200, Laurenz Albe wrote:\n> I understand the concern, I'm just worried that too much accuracy may\n> render the sentence hard to read.\n> \n> How about adding \"or priority-based\" after \"quorum-based\"?\n\nI would be fine with that for the first part. I am not sure of what a\ngood formulation would be for the second part of the sentence. Now it\nonly refers to quorum, but with priority sets that does not apply.\nAnd I am not sure what \"won't count towards the quorum\" actually\nmeans.\n--\nMichael", "msg_date": "Wed, 17 Jul 2019 17:04:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/17/19 4:04 AM, Michael Paquier wrote:\n>> How about adding \"or priority-based\" after \"quorum-based\"?\n> \n> I would be fine with that for the first part. I am not sure of what a\n> good formulation would be for the second part of the sentence. Now it\n> only refers to quorum, but with priority sets that does not apply.\n> And I am not sure what \"won't count towards the quorum\" actually\n> means.\n\nMaybe something like the attached ?
Although it doesn't help we need to \ninclude <literal>on</literal> as well...\n\nBest regards,\n Jesper", "msg_date": "Wed, 17 Jul 2019 13:59:55 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, 2019-07-17 at 13:59 -0400, Jesper Pedersen wrote:\n> + <para>\n> + Note that while WAL will be flushed with this setting,\n> + <application>pg_receivewal</application> never applies it, so\n> + <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n> + <literal>remote_apply</literal> or <literal>on</literal>\n> + if <application>pg_receivewal</application> is the only synchronous standby.\n> + Similarly, if <application>pg_receivewal</application> is part of a\n> + priority-based synchronous replication setup (<literal>FIRST</literal>),\n> + or a quorum-based setup (<literal>ANY</literal>) it won't count towards\n> + the policy specified if <xref linkend=\"guc-synchronous-commit\"/> is\n> + set to <literal>remote_apply</literal> or <literal>on</literal>.\n> + </para>\n\nThat's factually wrong.
\"on\" (wait for WAL flush) works fine with\npg_receivewal, only \"remote_apply\" doesn't.\n\nOk, here's another attempt:\n\n Note that while WAL will be flushed with this setting,\n <application>pg_receivewal</application> never applies it, so\n <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n <literal>remote_apply</literal> if <application>pg_receivewal</application>\n is the only synchronous standby.\n Similarly, it is no use adding <application>pg_receivewal</application> to a\n priority-based (<literal>FIRST</literal>) or a quorum-based\n (<literal>ANY</literal>) synchronous replication setup if\n <xref linkend=\"guc-synchronous-commit\"/> is set to <literal>remote_apply</literal>.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 17 Jul 2019 23:21:06 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 17, 2019 at 11:21:06PM +0200, Laurenz Albe wrote:\n> Ok, here's another attempt:\n> \n> Note that while WAL will be flushed with this setting,\n> <application>pg_receivewal</application> never applies it, so\n> <xref linkend=\"guc-synchronous-commit\"/> must not be set to\n> <literal>remote_apply</literal> if <application>pg_receivewal</application>\n> is the only synchronous standby.\n> Similarly, it is no use adding <application>pg_receivewal</application> to a\n> priority-based (<literal>FIRST</literal>) or a quorum-based\n> (<literal>ANY</literal>) synchronous replication setup if\n> <xref linkend=\"guc-synchronous-commit\"/> is set to <literal>remote_apply</literal>.\n\nOr more simply like that?\n\"Note that while WAL will be flushed with this setting,\npg_receivewal never applies it, so synchronous_commit must not be set\nto remote_apply if pg_receivewal is a synchronous standby, be it a\nmember of a priority-based (FIRST) or a quorum-based (ANY) synchronous\nreplication setup.\"\n--\nMichael", "msg_date": "Thu, 18 Jul 2019
14:29:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi Laurenz,\n\nOn 7/17/19 5:21 PM, Laurenz Albe wrote:\n> That's factually wrong. \"on\" (wait for WAL flush) works fine with\n> pg_receivewal, only \"remote_apply\" doesn't.\n> \n\nPlease, try\n\nmkdir /tmp/wal\ninitdb /tmp/pgsql\npg_ctl -D /tmp/pgsql -l /tmp/logfile start\npsql postgres\nSELECT pg_create_physical_replication_slot('replica1');\nCREATE ROLE repluser WITH LOGIN REPLICATION PASSWORD 'replpass';\n\\q\n\nsynchronous_commit = on\nsynchronous_standby_names = 'replica1'\n\npg_ctl -D /tmp/pgsql -l /tmp/logfile restart\npg_receivewal -D /tmp/wal -S replica1 --synchronous -h localhost -p 5432 \n-U repluser -W\npsql -c 'SELECT * FROM pg_stat_replication;' postgres\npsql -c 'SELECT * FROM pg_replication_slots;' postgres\npsql -c 'CREATE DATABASE test' postgres\n\nIn what scenarios do you see 'on' working ?\n\nBest regards,\n Jesper\n\n\n", "msg_date": "Thu, 18 Jul 2019 08:39:48 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/18/19 1:29 AM, Michael Paquier wrote:\n> Or more simply like that?\n> \"Note that while WAL will be flushed with this setting,\n> pg_receivewal never applies it, so synchronous_commit must not be set\n> to remote_apply if pg_receivewal is a synchronous standby, be it a\n> member of a priority-based (FIRST) or a quorum-based (ANY) synchronous\n> replication setup.\"\n\nYeah, better.\n\nBest regards,\n Jesper", "msg_date": "Thu, 18 Jul 2019 08:40:36 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Thu, Jul 18, 2019 at 08:39:48AM -0400, Jesper Pedersen wrote:\n> mkdir /tmp/wal\n> initdb /tmp/pgsql\n> pg_ctl -D /tmp/pgsql -l /tmp/logfile start\n>
psql postgres\n> SELECT pg_create_physical_replication_slot('replica1');\n> CREATE ROLE repluser WITH LOGIN REPLICATION PASSWORD 'replpass';\n> \\q\n> \n> synchronous_commit = on\n> synchronous_standby_names = 'replica1'\n> \n> pg_ctl -D /tmp/pgsql -l /tmp/logfile restart\n> pg_receivewal -D /tmp/wal -S replica1 --synchronous -h localhost -p 5432 -U\n> repluser -W\n> psql -c 'SELECT * FROM pg_stat_replication;' postgres\n> psql -c 'SELECT * FROM pg_replication_slots;' postgres\n> psql -c 'CREATE DATABASE test' postgres\n> \n> In what scenarios do you see 'on' working ?\n\nBecause the code says so, \"on\" is an alias for \"remote_flush\" (which\nis not user-visible by the way):\nsrc/include/access/xact.h:#define SYNCHRONOUS_COMMIT_ON\nSYNCHRONOUS_COMMIT_REMOTE_FLUSH\n\nAnd if you do that it works fine (pg_receivewal --synchronous runs in\nthe background and I created a dummy table):\n=# SELECT application_name, sync_state, flush_lsn, replay_lsn FROM\npg_stat_replication;\n application_name | sync_state | flush_lsn | replay_lsn\n------------------+------------+-----------+------------\n pg_receivewal | sync | 0/15E1F88 | null\n(1 row)\n=# set synchronous_commit to on ;\nSET\n=# insert into aa values (2);\nINSERT 0 1\n\nThis part however is as expected, just blocking:\n=# set synchronous_commit to remote_apply ;\nSET\n=# insert into aa values (3);\n^CCancel request sent\nWARNING: 01000: canceling wait for synchronous replication due to\nuser request\nDETAIL: The transaction has already committed locally, but might not\nhave been replicated to the standby.\nLOCATION: SyncRepWaitForLSN, syncrep.c:266\nINSERT 0 1\n--\nMichael", "msg_date": "Fri, 19 Jul 2019 10:09:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Thu, Jul 18, 2019 at 08:40:36AM -0400, Jesper Pedersen wrote:\n> On 7/18/19 1:29 AM, Michael Paquier wrote:\n>> Or more simply like that?\n>> \"Note that
while WAL will be flushed with this setting,\n>> pg_receivewal never applies it, so synchronous_commit must not be set\n>> to remote_apply if pg_receivewal is a synchronous standby, be it a\n>> member of a priority-based (FIRST) or a quorum-based (ANY) synchronous\n>> replication setup.\"\n> \n> Yeah, better.\n\nI was looking into committing that, and the part about\nsynchronous_commit = on is not right. The location of the warning is\nalso harder to catch for the reader, so instead let's move it to the\ntop where we have an extra description for --synchronous. I am\nfinishing with the attached that I would be fine to commit and\nback-patch as needed. Does that sound fine?\n--\nMichael", "msg_date": "Fri, 19 Jul 2019 10:27:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Fri, 2019-07-19 at 10:27 +0900, Michael Paquier wrote:\n> On Thu, Jul 18, 2019 at 08:40:36AM -0400, Jesper Pedersen wrote:\n> > On 7/18/19 1:29 AM, Michael Paquier wrote:\n> > > Or more simply like that?\n> > > \"Note that while WAL will be flushed with this setting,\n> > > pg_receivewal never applies it, so synchronous_commit must not be set\n> > > to remote_apply if pg_receivewal is a synchronous standby, be it a\n> > > member of a priority-based (FIRST) or a quorum-based (ANY) synchronous\n> > > replication setup.\"\n> > \n> > Yeah, better.\n> \n> I was looking into committing that, and the part about\n> synchronous_commit = on is not right. The location of the warning is\n> also harder to catch for the reader, so instead let's move it to the\n> top where we have an extra description for --synchronous. I am\n> finishing with the attached that I would be fine to commit and\n> back-patch as needed.
Does that sound fine?\n\nIt was my first reaction too that this had better be at the top.\n\nI'm happy with the patch as it is.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 19 Jul 2019 08:50:17 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/18/19 9:09 PM, Michael Paquier wrote:\n>> pg_receivewal -D /tmp/wal -S replica1 --synchronous -h localhost -p 5432 -U\n>> repluser -W\n>> psql -c 'SELECT * FROM pg_stat_replication;' postgres\n>> psql -c 'SELECT * FROM pg_replication_slots;' postgres\n>> psql -c 'CREATE DATABASE test' postgres\n>>\n>> In what scenarios do you see 'on' working ?\n> \n> Because the code says so, \"on\" is an alias for \"remote_flush\" (which\n> is not user-visible by the way):\n> src/include/access/xact.h:#define SYNCHRONOUS_COMMIT_ON\n> SYNCHRONOUS_COMMIT_REMOTE_FLUSH\n> \n> And if you do that it works fine (pg_receivewal --synchronous runs in\n> the background and I created a dummy table):\n> =# SELECT application_name, sync_state, flush_lsn, replay_lsn FROM\n> pg_stat_replication;\n> application_name | sync_state | flush_lsn | replay_lsn\n> ------------------+------------+-----------+------------\n> pg_receivewal | sync | 0/15E1F88 | null\n> (1 row)\n> =# set synchronous_commit to on ;\n> SET\n> =# insert into aa values (2);\n> INSERT 0 1\n> \n\nI forgot to use pg_receivewal -d with application_name instead of -h -p -U.\n\nMaybe we should have an explicit option for that, but that is a separate \nthread.\n\nBest regards,\n Jesper\n\n\n", "msg_date": "Fri, 19 Jul 2019 13:01:27 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/18/19 9:27 PM, Michael Paquier wrote:\n> The location of the warning is\n> also harder to catch for the reader, so instead let's move it to the\n> top where we have an extra
description for --synchronous. I am\n> finishing with the attached that I would be fine to commit and\n> back-patch as needed. Does that sound fine?\n\nLGTM.\n\nBest regards,\n Jesper\n\n\n\n\n", "msg_date": "Fri, 19 Jul 2019 13:02:21 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Tue, Jul 16, 2019 at 9:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I think we should really document the caveat with priority-based sets\n> of standbys as much as quorum-based sets. For example if a user sets\n> synchronous_commit = remote_apply in postgresql.conf, and then sets\n> s_s_names to '2(pg_receivewal, my_connected_standby)' to get a\n> priority-based set, then you have the same problem, and pg_receivewal\n> is not the only synchronous standby in this configuration. The patch\n> does not cover that case properly.\n\nI don't agree with this approach. It seems to me that the original was\ntoo precise already, and making it more precise only exacerbates the\nsituation. The point is that synchronous_commit = remote_apply is\n*categorically* a bad idea for sessions running pg_receivewal. The\nreason why you're adding all this complexity is to try to distinguish\nbetween the case where it's merely a bad idea and the case where it\nwill also completely fail to work. But why is it important to describe\nthe scenarios under which it will altogether fail to work?\n\nYou could just say something like:\n\nSince pg_receivewal does not apply WAL, you should not allow it to\nbecome a synchronous standby when synchronous_commit = remote_apply.\nIf it does, it will appear to be a standby which never catches up,\nwhich may cause commits to block.
To avoid this, you should either\nconfigure an appropriate value for synchronous_standby_names, or\nspecify an application_name for pg_receivewal that does not match it,\nor change the value of synchronous_commit to something other than\nremote_apply.\n\nI think that'd be a lot more useful than enumerating the total-failure\nscenarios.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Jul 2019 14:04:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Fri, Jul 19, 2019 at 02:04:03PM -0400, Robert Haas wrote:\n> You could just say something like:\n> \n> Since pg_receivewal does not apply WAL, you should not allow it to\n> become a synchronous standby when synchronous_commit = remote_apply.\n> If it does, it will appear to be a standby which never catches up,\n> which may cause commits to block. To avoid this, you should either\n> configure an appropriate value for synchronous_standby_names, or\n> specify an application_name for pg_receivewal that does not match it,\n> or change the value of synchronous_commit to something other than\n> remote_apply.\n> \n> I think that'd be a lot more useful than enumerating the total-failure\n> scenarios.\n\n+1. Thanks for the suggestions! Your wording looks good to me.\n--\nMichael", "msg_date": "Mon, 22 Jul 2019 10:48:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/21/19 9:48 PM, Michael Paquier wrote:\n>> Since pg_receivewal does not apply WAL, you should not allow it to\n>> become a synchronous standby when synchronous_commit = remote_apply.\n>> If it does, it will appear to be a standby which never catches up,\n>> which may cause commits to block.
To avoid this, you should either\n>> configure an appropriate value for synchronous_standby_names, or\n>> specify an application_name for pg_receivewal that does not match it,\n>> or change the value of synchronous_commit to something other than\n>> remote_apply.\n>>\n>> I think that'd be a lot more useful than enumerating the total-failure\n>> scenarios.\n> \n> +1. Thanks for the suggestions! Your wording looks good to me.\n\n+1\n\nHere is the patch for it, with Robert as the author.\n\nBest regards,\n Jesper", "msg_date": "Mon, 22 Jul 2019 13:25:41 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Mon, Jul 22, 2019 at 01:25:41PM -0400, Jesper Pedersen wrote:\n> Hi,\n> \n> On 7/21/19 9:48 PM, Michael Paquier wrote:\n> > > Since pg_receivewal does not apply WAL, you should not allow it to\n> > > become a synchronous standby when synchronous_commit = remote_apply.\n> > > If it does, it will appear to be a standby which never catches up,\n> > > which may cause commits to block. To avoid this, you should either\n> > > configure an appropriate value for synchronous_standby_names, or\n> > > specify an application_name for pg_receivewal that does not match it,\n> > > or change the value of synchronous_commit to something other than\n> > > remote_apply.\n> > > \n> > > I think that'd be a lot more useful than enumerating the total-failure\n> > > scenarios.\n> > \n> > +1. Thanks for the suggestions! Your wording looks good to me.\n> \n> +1\n> \n> Here is the patch for it, with Robert as the author.\n\n> + <xref linkend=\"guc-synchronous-commit\"/> to something other than\n\nLooks fine to me. Just a tiny nit. For the second reference to\nsynchronous_commit, I would change the link to a <varname> markup.
\n--\nMichael", "msg_date": "Tue, 23 Jul 2019 09:08:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/22/19 8:08 PM, Michael Paquier wrote:\n>> + <xref linkend=\"guc-synchronous-commit\"/> to something other than\n> \n> Looks fine to me. Just a tiny nit. For the second reference to\n> synchronous_commit, I would change the link to a <varname> markup.\n\nSure.\n\nBest regards,\n Jesper", "msg_date": "Tue, 23 Jul 2019 08:00:41 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Tue, Jul 23, 2019 at 08:00:41AM -0400, Jesper Pedersen wrote:\n> Sure.\n\nThanks. Applied down to 9.6 where remote_apply has been introduced,\nwith tweaks for 9.6 as the tool is named pg_receivexlog there.\n--\nMichael", "msg_date": "Wed, 24 Jul 2019 11:29:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn 7/23/19 10:29 PM, Michael Paquier wrote:\n> Thanks. Applied down to 9.6 where remote_apply has been introduced,\n> with tweaks for 9.6 as the tool is named pg_receivexlog there.\n\nThanks to everybody involved !\n\nBest regards,\n Jesper\n\n\n", "msg_date": "Wed, 24 Jul 2019 07:55:18 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "Hi,\n\nOn Wed, 24 Jul 2019 11:29:28 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 23, 2019 at 08:00:41AM -0400, Jesper Pedersen wrote:\n> > Sure. \n> \n> Thanks.
Applied down to 9.6 where remote_apply has been introduced,\n> with tweaks for 9.6 as the tool is named pg_receivexlog there.\n\nSorry to step in so lately.\n\nUnless I am missing something, another solution might be to use a dedicated\nrole to pg_receive{xlog|wal} with synchronous_commit lower than\nremote_apply.\n\nNot sure we want to add such detail, but if you consider it useful, you'll find\na patch in attachment.\n\nRegards,", "msg_date": "Wed, 24 Jul 2019 15:03:04 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Wed, Jul 24, 2019 at 03:03:04PM +0200, Jehan-Guillaume de Rorthais wrote:\n> Unless I am missing something, another solution might be to use a dedicated\n> role to pg_receive{xlog|wal} with synchronous_commit lower than\n> remote_apply.\n\nAren't you confused by the same thing as I was upthread [1]?\n[1]: https://www.postgresql.org/message-id/20190710080423.GG1031@paquier.xyz\n\nremote_apply affects all sessions. So even if you use a replication\nrole with synchronous_commit = on and have pg_receivewal use that with\nremote_apply set in postgresql.conf, then remote_apply is effective\nfor all the other sessions so these will still be stuck at commit\nwaiting for pg_receivewal to apply WAL if it is a synchronous\nstandby.\n--\nMichael", "msg_date": "Thu, 25 Jul 2019 16:58:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" }, { "msg_contents": "On Thu, 25 Jul 2019 16:58:17 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 24, 2019 at 03:03:04PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > Unless I am missing something, another solution might be to use a dedicated\n> > role to pg_receive{xlog|wal} with synchronous_commit lower than\n> > remote_apply.
\n> \n> Aren't you confused by the same thing as I was upthread [1]?\n> [1]: https://www.postgresql.org/message-id/20190710080423.GG1031@paquier.xyz\n> \n> remote_apply affects all sessions. So even if you use a replication\n> role with synchronous_commit = on and have pg_receivewal use that with\n> remote_apply set in postgresql.conf, then remote_apply is effective\n> for all the other sessions so these will still be stuck at commit\n> waiting for pg_receivewal to apply WAL if it is a synchronous\n> standby.\n\nArgh!\n\n(Sorry for the noise)\n\n\n", "msg_date": "Thu, 25 Jul 2019 10:29:44 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal documentation" } ]
[ { "msg_contents": "Does anyone know why there is no PG 12 stable branch in our git tree?\n\n\t$ git branch -l\n\t REL9_4_STABLE\n\t REL9_5_STABLE\n\t REL9_6_STABLE\n\t REL_10_STABLE\n\t REL_11_STABLE\n\t master\n\nThey exist for earlier releases. Is this a problem?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 27 Jun 2019 10:28:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Missing PG 12 stable branch" }, { "msg_contents": "On Thu, Jun 27, 2019 at 10:28:40AM -0400, Bruce Momjian wrote:\n> Does anyone know why there is no PG 12 stable branch in our git tree?\n> \n> \t$ git branch -l\n> \t REL9_4_STABLE\n> \t REL9_5_STABLE\n> \t REL9_6_STABLE\n> \t REL_10_STABLE\n> \t REL_11_STABLE\n> \t master\n> \n> They exist for earlier releases. Is this a problem?\n\nSorry, I now realize that we haven't branched git yet for PG 12, so\nthere is no branch. I have fixed pglife to handle that case:\n\n\thttps://pglife.momjian.us/\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 27 Jun 2019 10:53:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Missing PG 12 stable branch" }, { "msg_contents": "On Thu, Jun 27, 2019 at 10:28:40AM -0400, Bruce Momjian wrote:\n>Does anyone know why there is no PG 12 stable branch in our git tree?\n>\n>\t$ git branch -l\n>\t REL9_4_STABLE\n>\t REL9_5_STABLE\n>\t REL9_6_STABLE\n>\t REL_10_STABLE\n>\t REL_11_STABLE\n>\t master\n>\n>They exist for earlier releases.
Is this a problem?\n>\n\nWe haven't stamped master as 13dev yet ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 27 Jun 2019 16:56:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing PG 12 stable branch" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Does anyone know why there is no PG 12 stable branch in our git tree?\n\nFor the record, I'm intending to make the branch as soon as the\nJuly CF starts (i.e., first thing Monday).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 10:56:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing PG 12 stable branch" } ]
[ { "msg_contents": "Hello,\n\nI just realized that 7e534adcdc7 broke support for hypothetical\nindexes using BRIN am. Attached patch fix the issue.\n\nThere's no interface to provide the hypothetical pagesPerRange value,\nso I used the default one, and used simple estimates.\n\nI'll add this patch to the next commitfest.", "msg_date": "Thu, 27 Jun 2019 20:02:33 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Hi, thanks for the patch.\n\nOn 2019-Jun-27, Julien Rouhaud wrote:\n\n> I just realized that 7e534adcdc7 broke support for hypothetical\n> indexes using BRIN am. Attached patch fix the issue.\n> \n> There's no interface to provide the hypothetical pagesPerRange value,\n> so I used the default one, and used simple estimates.\n\nI think it would look nicer to have a routine parallel to brinGetStats()\n(brinGetStatsHypothetical?), instead of polluting selfuncs.c with these\ngory details.\n\nThis seems back-patchable ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Jun 2019 14:14:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Thu, Jun 27, 2019 at 8:14 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Hi, thanks for the patch.\n\nThanks for looking at it!\n\n> On 2019-Jun-27, Julien Rouhaud wrote:\n>\n> > I just realized that 7e534adcdc7 broke support for hypothetical\n> > indexes using BRIN am. 
Attached patch fix the issue.\n> >\n> > There's no interface to provide the hypothetical pagesPerRange value,\n> > so I used the default one, and used simple estimates.\n>\n> I think it would look nicer to have a routine parallel to brinGetStats()\n> (brinGetStatsHypothetical?), instead of polluting selfuncs.c with these\n> gory details.\n\nI'm not opposed to it, but I used the same approach as a similar fix\nfor gincostestimate() (see 7fb008c5ee5). If we add an hypothetical\nversion of brinGetStats(), we should also do it for ginGetStats().\n\n> This seems back-patchable ...\n\nI definitely hope so!\n\n\n", "msg_date": "Thu, 27 Jun 2019 20:32:56 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Jun-27, Julien Rouhaud wrote:\n\n> On Thu, Jun 27, 2019 at 8:14 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I think it would look nicer to have a routine parallel to brinGetStats()\n> > (brinGetStatsHypothetical?), instead of polluting selfuncs.c with these\n> > gory details.\n> \n> I'm not opposed to it, but I used the same approach as a similar fix\n> for gincostestimate() (see 7fb008c5ee5).\n\nHow many #define lines did you have to add to selfuncs there?\n\n> If we add an hypothetical\n> version of brinGetStats(), we should also do it for ginGetStats().\n\nDunno, seems pointless. 
The GIN case is different.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Jun 2019 14:46:01 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-27, Julien Rouhaud wrote:\n>> On Thu, Jun 27, 2019 at 8:14 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> I think it would look nicer to have a routine parallel to brinGetStats()\n>>> (brinGetStatsHypothetical?), instead of polluting selfuncs.c with these\n>>> gory details.\n\n>> I'm not opposed to it, but I used the same approach as a similar fix\n>> for gincostestimate() (see 7fb008c5ee5).\n\n> How many #define lines did you have to add to selfuncs there?\n\nFWIW, the proposed patch doesn't seem to me like it adds much more\nBRIN-specific knowledge to brincostestimate than is there already.\n\nI think a more useful response to your modularity concern would be\nto move all the [indextype]costestimate functions out of the common\nselfuncs.c file and into per-AM files. I fooled around with that\nwhile trying to refactor selfuncs.c back in February, but I didn't\ncome up with something that seemed clearly better. 
Still, as we\nmove into a world with external index AMs, I think we're going to\nhave to make that happen eventually.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 14:54:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Jun-27, Tom Lane wrote:\n\n> FWIW, the proposed patch doesn't seem to me like it adds much more\n> BRIN-specific knowledge to brincostestimate than is there already.\n\nIt's this calculation that threw me off:\n\tstatsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;\nISTM that selfuncs has no reason to learn about revmap low-level\ndetails.\n\n> I think a more useful response to your modularity concern would be\n> to move all the [indextype]costestimate functions out of the common\n> selfuncs.c file and into per-AM files.\n\nYeah, that would be nice, but then I'm not going to push Julien to do\nthat to fix just this one problem; and on the other hand, that's even\nless of a back-patchable fix.\n\n> I fooled around with that while trying to refactor selfuncs.c back in\n> February, but I didn't come up with something that seemed clearly\n> better. 
Still, as we move into a world with external index AMs, I\n> think we're going to have to make that happen eventually.\n\nNo disagreement.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Jun 2019 15:16:59 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-27, Tom Lane wrote:\n>> FWIW, the proposed patch doesn't seem to me like it adds much more\n>> BRIN-specific knowledge to brincostestimate than is there already.\n\n> It's this calculation that threw me off:\n> \tstatsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;\n> ISTM that selfuncs has no reason to learn about revmap low-level\n> details.\n\nUm ... it's accounting for revmap pages already (which is why it needs\nthis field set), so hasn't that ship sailed?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 15:22:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Jun-27, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jun-27, Tom Lane wrote:\n> >> FWIW, the proposed patch doesn't seem to me like it adds much more\n> >> BRIN-specific knowledge to brincostestimate than is there already.\n> \n> > It's this calculation that threw me off:\n> > \tstatsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;\n> > ISTM that selfuncs has no reason to learn about revmap low-level\n> > details.\n> \n> Um ... 
it's accounting for revmap pages already (which is why it needs\n> this field set), so hasn't that ship sailed?\n\nYes, but does it need to know how many items there are in a revmap page?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Jun 2019 15:24:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-27, Tom Lane wrote:\n>> Um ... it's accounting for revmap pages already (which is why it needs\n>> this field set), so hasn't that ship sailed?\n\n> Yes, but does it need to know how many items there are in a revmap page?\n\nDunno, I just can't get excited about exposing REVMAP_PAGE_MAXITEMS.\nEspecially not since we seem to agree on the long-term solution here,\nand what you're suggesting to Julien doesn't particularly fit into\nthat long-term solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 15:30:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Jun-27, Tom Lane wrote:\n\n> Dunno, I just can't get excited about exposing REVMAP_PAGE_MAXITEMS.\n> Especially not since we seem to agree on the long-term solution here,\n> and what you're suggesting to Julien doesn't particularly fit into\n> that long-term solution.\n\nWell, it was brin_page.h, which is supposed to be internal to BRIN\nitself. 
But since we admit that in its current state selfuncs.c is not\nsalvageable as a module and we'll redo the whole thing in the short\nterm, I withdraw my comment.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Jun 2019 16:09:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Thu, Jun 27, 2019 at 10:09 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jun-27, Tom Lane wrote:\n>\n> > Dunno, I just can't get excited about exposing REVMAP_PAGE_MAXITEMS.\n> > Especially not since we seem to agree on the long-term solution here,\n> > and what you're suggesting to Julien doesn't particularly fit into\n> > that long-term solution.\n>\n> Well, it was brin_page.h, which is supposed to be internal to BRIN\n> itself. But since we admit that in its current state selfuncs.c is not\n> salvageable as a module and we'll redo the whole thing in the short\n> term, I withdraw my comment.\n\nThanks. I'll also work soon on a patch to move the [am]costestimate\nfunctions in the am-specific files.\n\n\n", "msg_date": "Fri, 28 Jun 2019 07:49:00 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 27/06/2019 23:09, Alvaro Herrera wrote:\n> On 2019-Jun-27, Tom Lane wrote:\n> \n>> Dunno, I just can't get excited about exposing REVMAP_PAGE_MAXITEMS.\n>> Especially not since we seem to agree on the long-term solution here,\n>> and what you're suggesting to Julien doesn't particularly fit into\n>> that long-term solution.\n> \n> Well, it was brin_page.h, which is supposed to be internal to BRIN\n> itself. 
But since we admit that in its current state selfuncs.c is not\n> salvageable as a module and we'll redo the whole thing in the short\n> term, I withdraw my comment.\n\nThere seems to be consensus on the going with the approach from the \noriginal patch, so I had a closer look at it.\n\nThe patch first does this:\n\n> \n> \t/*\n> \t * Obtain some data from the index itself, if possible. Otherwise invent\n> \t * some plausible internal statistics based on the relation page count.\n> \t */\n> \tif (!index->hypothetical)\n> \t{\n> \t\tindexRel = index_open(index->indexoid, AccessShareLock);\n> \t\tbrinGetStats(indexRel, &statsData);\n> \t\tindex_close(indexRel, AccessShareLock);\n> \t}\n> \telse\n> \t{\n> \t\t/*\n> \t\t * Assume default number of pages per range, and estimate the number\n> \t\t * of ranges based on that.\n> \t\t */\n> \t\tindexRanges = Max(ceil((double) baserel->pages /\n> \t\t\t\t\t\t\t BRIN_DEFAULT_PAGES_PER_RANGE), 1.0);\n> \n> \t\tstatsData.pagesPerRange = BRIN_DEFAULT_PAGES_PER_RANGE;\n> \t\tstatsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;\n> \t}\n>\t...\n\nAnd later in the function, there's this:\n\n>\t/* work out the actual number of ranges in the index */\n>\tindexRanges = Max(ceil((double) baserel->pages / statsData.pagesPerRange),\n>\t\t\t\t\t 1.0);\n\nIt seems a bit error-prone that essentially the same formula is used \ntwice in the function, to compute 'indexRanges', with some distance \nbetween them. Perhaps some refactoring would help with, although I'm not \nsure what exactly would be better. Maybe move the second computation \nearlier in the function, like in the attached patch?\n\nThe patch assumes the default pages_per_range setting, but looking at \nthe code at https://github.com/HypoPG/hypopg, the extension actually \ntakes pages_per_range into account when it estimates the index size. I \nguess there's no easy way to pass the pages_per_range setting down to \nbrincostestimate(). 
I'm not sure what we should do about that, but seems \nthat just using BRIN_DEFAULT_PAGES_PER_RANGE here is not very accurate.\n\nThe attached patch is based on PG v11, because I tested this with \nhttps://github.com/HypoPG/hypopg, and it didn't compile with later \nversions. There's a small difference in the locking level used between \nv11 and 12, which makes the patch not apply across versions, but that's \neasy to fix by hand.\n\n- Heikki", "msg_date": "Fri, 26 Jul 2019 14:34:19 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Fri, Jul 26, 2019 at 1:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> There seems to be consensus on the going with the approach from the\n> original patch, so I had a closer look at it.\n>\n> The patch first does this:\n>\n> >\n> > /*\n> > * Obtain some data from the index itself, if possible. Otherwise invent\n> > * some plausible internal statistics based on the relation page count.\n> > */\n> > if (!index->hypothetical)\n> > {\n> > indexRel = index_open(index->indexoid, AccessShareLock);\n> > brinGetStats(indexRel, &statsData);\n> > index_close(indexRel, AccessShareLock);\n> > }\n> > else\n> > {\n> > /*\n> > * Assume default number of pages per range, and estimate the number\n> > * of ranges based on that.\n> > */\n> > indexRanges = Max(ceil((double) baserel->pages /\n> > BRIN_DEFAULT_PAGES_PER_RANGE), 1.0);\n> >\n> > statsData.pagesPerRange = BRIN_DEFAULT_PAGES_PER_RANGE;\n> > statsData.revmapNumPages = (indexRanges / REVMAP_PAGE_MAXITEMS) + 1;\n> > }\n> > ...\n>\n> And later in the function, there's this:\n>\n> > /* work out the actual number of ranges in the index */\n> > indexRanges = Max(ceil((double) baserel->pages / statsData.pagesPerRange),\n> > 1.0);\n>\n> It seems a bit error-prone that essentially the same formula is used\n> twice in the function, to compute 'indexRanges', with some 
distance\n> between them. Perhaps some refactoring would help with, although I'm not\n> sure what exactly would be better. Maybe move the second computation\n> earlier in the function, like in the attached patch?\n\nI had the same thought without a great idea on how to refactor it.\nI'm fine with the one in this patch.\n\n> The patch assumes the default pages_per_range setting, but looking at\n> the code at https://github.com/HypoPG/hypopg, the extension actually\n> takes pages_per_range into account when it estimates the index size. I\n> guess there's no easy way to pass the pages_per_range setting down to\n> brincostestimate(). I'm not sure what we should do about that, but seems\n> that just using BRIN_DEFAULT_PAGES_PER_RANGE here is not very accurate.\n\nYes, hypopg can use a custom pages_per_range as it intercepts it when\nthe hypothetical index is created. I didn't find any way to get that\ninformation in brincostestimate(), especially for something that could\nbackpatched. I don't like it, but I don't see how to do better than\njust using BRIN_DEFAULT_PAGES_PER_RANGE :(\n\n> The attached patch is based on PG v11, because I tested this with\n> https://github.com/HypoPG/hypopg, and it didn't compile with later\n> versions. 
There's a small difference in the locking level used between\n> v11 and 12, which makes the patch not apply across versions, but that's\n> easy to fix by hand.\n\nFTR I created a REL_1_STABLE branch for hypopg which is compatible\nwith pg12 (it's already used for debian packages), as master is still\nin beta and v12 compatibility worked on.\n\n\n", "msg_date": "Fri, 26 Jul 2019 13:52:19 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Jul 26, 2019 at 1:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> The patch assumes the default pages_per_range setting, but looking at\n>> the code at https://github.com/HypoPG/hypopg, the extension actually\n>> takes pages_per_range into account when it estimates the index size. I\n>> guess there's no easy way to pass the pages_per_range setting down to\n>> brincostestimate(). I'm not sure what we should do about that, but seems\n>> that just using BRIN_DEFAULT_PAGES_PER_RANGE here is not very accurate.\n\n> Yes, hypopg can use a custom pages_per_range as it intercepts it when\n> the hypothetical index is created. I didn't find any way to get that\n> information in brincostestimate(), especially for something that could\n> backpatched. I don't like it, but I don't see how to do better than\n> just using BRIN_DEFAULT_PAGES_PER_RANGE :(\n\nI can tell you what I think ought to happen, but making it happen might\nbe more work than this patch should take on.\n\nThe right answer IMO is basically for the brinGetStats call to go\naway from brincostestimate and instead happen during plancat.c's\nbuilding of the IndexOptInfo. In the case of a hypothetical index,\nit'd fall to the get_relation_info_hook to fill in suitable fake\ndata. Sounds simple, but:\n\n1. 
We really don't want even more AM-specific knowledge in plancat.c.\nSo I think the right way to do this would be something along the\nline of adding a \"void *amdata\" field to IndexOptInfo, and adding\nan AM callback to be called during get_relation_info that's allowed\nto fill that in with some AM-specific data (which the AM's costestimate\nroutine would know about). The existing btree-specific hacks in\nget_relation_info should migrate into btree's version of this callback,\nand IndexOptInfo.tree_height should probably go away in favor of\nkeeping that in btree's version of the amdata struct.\n\n2. This approach puts a premium on the get_relation_info callback\nbeing cheap, because there's no certainty that the data it fills\ninto IndexOptInfo.amdata will ever get used. For btree, the \n_bt_getrootheight call is cheap enough to not be a problem, because\nit just looks at the metapage data that btree keeps cached in the\nindex's relcache entry. The same cannot be said for brinGetStats\nas it stands: it goes off to read the index metapage. There are\nat least two ways to fix that:\n\n2a. Teach brin to keep the metapage cached like btree does.\nThis seems like it could be a performance win across the board,\nbut you'd need to work out invalidation behavior, and it'd be\na bit invasive.\n\n2b. Define IndexOptInfo.amdata as being filled lazily, that is\nbrincostestimate will invoke brinGetStats and fill in the data\nif the pointer is currently NULL. Then a hypothetical-index\nplugin could override that by pre-filling the field with the\ndesired fake data.\n\nI don't have a problem with allowing brincostestimate to fill\nin defaults based on BRIN_DEFAULT_PAGES_PER_RANGE if it sees\nthat amdata is null and the index is hypothetical. But there\nshould be a way for the get_relation_info_hook to do better.\n\nBTW, the current patch doesn't apply according to the cfbot,\nbut I think it just needs a trivial rebase over 9c703c169\n(ie, assume the index is already locked). 
I didn't bother\nto do that since what I recommend above would require a\nlot more change in that area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Sep 2019 20:21:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Sep-02, Tom Lane wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Fri, Jul 26, 2019 at 1:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> The patch assumes the default pages_per_range setting, but looking at\n> >> the code at https://github.com/HypoPG/hypopg, the extension actually\n> >> takes pages_per_range into account when it estimates the index size. I\n> >> guess there's no easy way to pass the pages_per_range setting down to\n> >> brincostestimate(). I'm not sure what we should do about that, but seems\n> >> that just using BRIN_DEFAULT_PAGES_PER_RANGE here is not very accurate.\n> \n> > Yes, hypopg can use a custom pages_per_range as it intercepts it when\n> > the hypothetical index is created. I didn't find any way to get that\n> > information in brincostestimate(), especially for something that could\n> > backpatched. I don't like it, but I don't see how to do better than\n> > just using BRIN_DEFAULT_PAGES_PER_RANGE :(\n> \n> I can tell you what I think ought to happen, but making it happen might\n> be more work than this patch should take on.\n> \n> The right answer IMO is basically for the brinGetStats call to go\n> away from brincostestimate and instead happen during plancat.c's\n> building of the IndexOptInfo. In the case of a hypothetical index,\n> it'd fall to the get_relation_info_hook to fill in suitable fake\n> data.\n\nSo I'm not clear on what the suggested strategy is, here. 
Do we want\nthat design change to occur in the bugfix that would be backpatched, or\ndo we want the backbranches to use the patch as posted and then we apply\nthe above design on master only?\n\nIf the former -- is Julien interested in trying to develop that?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Sep 2019 11:53:35 -0300", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n> On 2019-Sep-02, Tom Lane wrote:\n>> The right answer IMO is basically for the brinGetStats call to go\n>> away from brincostestimate and instead happen during plancat.c's\n>> building of the IndexOptInfo. In the case of a hypothetical index,\n>> it'd fall to the get_relation_info_hook to fill in suitable fake\n>> data.\n\n> So I'm not clear on what the suggested strategy is, here. Do we want\n> that design change to occur in the bugfix that would be backpatched, or\n> do we want the backbranches to use the patch as posted and then we apply\n> the above design on master only?\n\nThe API change I'm proposing is surely not back-patchable.\n\nWhether we should bother back-patching a less capable stopgap fix\nis unclear to me. 
Yeah, it's a bug that an index adviser can't\ntry a hypothetical BRIN index; but given that nobody noticed till\nnow, it doesn't seem like there's much field demand for it.\nAnd I'm not sure that extension authors would want to deal with\ntesting minor-release versions to see if the fix is in, so\neven if there were a back-patch, it might go unused.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Sep 2019 11:03:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Mon, Sep 9, 2019 at 5:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n> > On 2019-Sep-02, Tom Lane wrote:\n> >> The right answer IMO is basically for the brinGetStats call to go\n> >> away from brincostestimate and instead happen during plancat.c's\n> >> building of the IndexOptInfo. In the case of a hypothetical index,\n> >> it'd fall to the get_relation_info_hook to fill in suitable fake\n> >> data.\n>\n> > So I'm not clear on what the suggested strategy is, here. Do we want\n> > that design change to occur in the bugfix that would be backpatched, or\n> > do we want the backbranches to use the patch as posted and then we apply\n> > the above design on master only?\n>\n> The API change I'm proposing is surely not back-patchable.\n>\n> Whether we should bother back-patching a less capable stopgap fix\n> is unclear to me. 
Yeah, it's a bug that an index adviser can't\n> try a hypothetical BRIN index; but given that nobody noticed till\n> now, it doesn't seem like there's much field demand for it.\n> And I'm not sure that extension authors would want to deal with\n> testing minor-release versions to see if the fix is in, so\n> even if there were a back-patch, it might go unused.\n\nFWIW I maintain such an extension and testing for minor release\nversion is definitely not a problem.\n\n\n", "msg_date": "Tue, 24 Sep 2019 09:20:02 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On 2019-Sep-24, Julien Rouhaud wrote:\n\n> On Mon, Sep 9, 2019 at 5:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Whether we should bother back-patching a less capable stopgap fix\n> > is unclear to me. Yeah, it's a bug that an index adviser can't\n> > try a hypothetical BRIN index; but given that nobody noticed till\n> > now, it doesn't seem like there's much field demand for it.\n> > And I'm not sure that extension authors would want to deal with\n> > testing minor-release versions to see if the fix is in, so\n> > even if there were a back-patch, it might go unused.\n> \n> FWIW I maintain such an extension and testing for minor release\n> version is definitely not a problem.\n\nI think the danger is what happens if a version of your plugin that was\ncompiled with the older definition runs in a Postgres which has been\nrecompiled with the new code. This has happened to me with previous\nunnoticed ABI breaks, and it has resulted in crashes in production\nsystems. It's not a nice situation to be in.\n\nIf the break is benign, i.e. \"nothing happens\", then it's possibly a\nworthwhile change to consider. 
I suppose the only way to know is to\nwrite patches for both sides and try it out.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Sep 2019 18:53:25 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Tue, Sep 24, 2019 at 11:53 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> I think the danger is what happens if a version of your plugin that was\n> compiled with the older definition runs in a Postgres which has been\n> recompiled with the new code. This has happened to me with previous\n> unnoticed ABI breaks, and it has resulted in crashes in production\n> systems. It's not a nice situation to be in.\n\nIndeed.\n\n> If the break is benign, i.e. \"nothing happens\", then it's possibly a\n> worthwhile change to consider. I suppose the only way to know is to\n> write patches for both sides and try it out.\n\nIIUC, if something like Heikki's patch is applied on older branch the\nproblem will be magically fixed from the extension point of view so\nthat should be safe (an extension would only need to detect the minor\nversion to get a more useful error message for users), and all\nalternatives are too intrusive to be patckbatched.\n\n\n", "msg_date": "Wed, 25 Sep 2019 07:03:52 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Wed, Sep 25, 2019 at 07:03:52AM +0200, Julien Rouhaud wrote:\n> IIUC, if something like Heikki's patch is applied on older branch the\n> problem will be magically fixed from the extension point of view so\n> that should be safe (an extension would only need to detect the minor\n> version to get a more useful error message for users), and all\n> alternatives are too intrusive to be 
patckbatched.\n\nSo, Heikki, are you planning to work more on that and commit a change\nclose to what has been proposed upthread in [1]? It sounds to me that\nthis has the advantage to be non-intrusive and a similar solution has\nbeen used for GIN indexes. Moving the redesign out of the discussion,\nis there actually a downsize with back-patching something like\nHeikki's version?\n\nTom, Alvaro and Julien, do you have more thoughts to share?\n\n[1]: https://www.postgresql.org/message-id/b847493e-d263-3f2e-1802-689e778c9a58@iki.fi\n--\nMichael", "msg_date": "Fri, 15 Nov 2019 12:07:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Fri, Nov 15, 2019 at 12:07:15PM +0900, Michael Paquier wrote:\n> So, Heikki, are you planning to work more on that and commit a change\n> close to what has been proposed upthread in [1]? It sounds to me that\n> this has the advantage to be non-intrusive and a similar solution has\n> been used for GIN indexes. Moving the redesign out of the discussion,\n> is there actually a downsize with back-patching something like\n> Heikki's version?\n\nSo... I have been looking at this patch, and indeed it would be nice\nto pass down a better value than BRIN_DEFAULT_PAGES_PER_RANGE to be\nable to compute the stats in brincostestimate(). Still, it looks also\nto me that this allows the code to be able to compute some stats\ndirectly. As there is no consensus on a backpatch yet, my take would\nbe for now to apply just the attached on HEAD, and consider a\nback-patch later on if there are more arguments in favor of it. 
If\nyou actually test hypopg currently, the code fails when attempting to\nopen the relation to get the stats now.\n\nAttached are the patch for HEAD, as well as a patch to apply to hypopg\non branch REL1_STABLE to make the module compatible with PG13~.\n\nAny objections?\n\nNB @Julien: perhaps you'd want to apply the second patch to the\nupstream repo of hypopg, and add more tests for other index AMs like\nGIN and BRIN.\n--\nMichael", "msg_date": "Tue, 19 Nov 2019 14:40:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Tue, Nov 19, 2019 at 6:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 15, 2019 at 12:07:15PM +0900, Michael Paquier wrote:\n> > So, Heikki, are you planning to work more on that and commit a change\n> > close to what has been proposed upthread in [1]? It sounds to me that\n> > this has the advantage to be non-intrusive and a similar solution has\n> > been used for GIN indexes. Moving the redesign out of the discussion,\n> > is there actually a downsize with back-patching something like\n> > Heikki's version?\n>\n> So... I have been looking at this patch, and indeed it would be nice\n> to pass down a better value than BRIN_DEFAULT_PAGES_PER_RANGE to be\n> able to compute the stats in brincostestimate(). Still, it looks also\n> to me that this allows the code to be able to compute some stats\n> directly. As there is no consensus on a backpatch yet, my take would\n> be for now to apply just the attached on HEAD, and consider a\n> back-patch later on if there are more arguments in favor of it. If\n> you actually test hypopg currently, the code fails when attempting to\n> open the relation to get the stats now.\n>\n> Attached are the patch for HEAD, as well as a patch to apply to hypopg\n> on branch REL1_STABLE to make the module compatible with PG13~.\n>\n> Any objections?\n\nNone from me. 
I'm obviously biased, but I hope that it can get\nbackpatched. BRIN is probably seldom used, but we shouldn't make it\nharder to use it, even if that's only for hypothetical usage, and\neven if it'll still be quite inexact.\n\n> NB @Julien: perhaps you'd want to apply the second patch to the\n> upstream repo of hypopg, and add more tests for other index AMs like\n> GIN and BRIN.\n\nThanks! I didn't notice that the compatibility macro for heap_open\nwas removed in f25968c49, I'll commit this patch on hypopg with some\ncompatibility macros to make sure that it compiles against all\nversions. GIN (and some others) are unfortunately explicitly\ndisallowed with hypopg. Actually, most of the code already handles it\nbut I have no clear idea on how to estimate the number of tuples and\nthe size of such indexes. But yes, I should definitely add more tests\nfor supported AM, although I can't add any for BRIN until a fix is\ncommitted :(\n\n\n", "msg_date": "Tue, 19 Nov 2019 08:37:04 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Tue, Nov 19, 2019 at 08:37:04AM +0100, Julien Rouhaud wrote:\n> None from me. I'm obviously biased, but I hope that it can get\n> backpatched. BRIN is probably seldom used, but we shouldn't make it\n> harder to use it, even if that's only for hypothetical usage, and\n> even if it'll still be quite inexact.\n\nRe-reading the thread. Any design change should IMO just happen on\nmaster so as we don't take any risks with potential ABI breakages.\nEven if there is not much field demand for it, that's not worth the\nrisk. Thinking harder, I don't actually quite see why it would be an\nissue to provide default stats for an hypothetical BRIN index based\nusing the best estimations we can do down to 10 with the infra in\nplace. 
Taking the case of hypopg, one finishes with an annoying\n\"could not open relation with OID %u\", which is not that nice from the\nuser perspective. Let's wait a bit and see if others have more\narguments to offer.\n--\nMichael", "msg_date": "Tue, 19 Nov 2019 21:48:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" }, { "msg_contents": "On Tue, Nov 19, 2019 at 09:48:59PM +0900, Michael Paquier wrote:\n> Re-reading the thread. Any design change should IMO just happen on\n> master so as we don't take any risks with potential ABI breakages.\n> Even if there is not much field demand for it, that's not worth the\n> risk. Thinking harder, I don't actually quite see why it would be an\n> issue to provide default stats for an hypothetical BRIN index based\n> using the best estimations we can do down to 10 with the infra in\n> place. Taking the case of hypopg, one finishes with an annoying\n> \"could not open relation with OID %u\", which is not that nice from the\n> user perspective. Let's wait a bit and see if others have more\n> arguments to offer.\n\nOkay. Hearing nothing, committed down to 10.\n--\nMichael", "msg_date": "Thu, 21 Nov 2019 10:33:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Hypothetical indexes using BRIN broken since pg10" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the create_upper_paths_hook is never called for partially\ngrouped rels. Is this intentional or a bug? Only the FDW routine's\nGetForeignUpperPaths hook is called for partially grouped rels. This seems\nodd since the regular create_upper_paths_hook gets called for all other\nupper rels.\n\nRegards,\n\nErik\nTimescale", "msg_date": "Thu, 27 Jun 2019 20:54:04 +0200", "msg_from": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com>", "msg_from_op": true, "msg_subject": "Missing hook for UPPERREL_PARTIAL_GROUP_AGG rels?" }, { "msg_contents": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com> writes:\n> I noticed that the create_upper_paths_hook is never called for partially\n> grouped rels. Is this intentional or a bug? Only the FDW routine's\n> GetForeignUpperPaths hook is called for partially grouped rels. This seems\n> odd since the regular create_upper_paths_hook gets called for all other\n> upper rels.\n\nThis seems possibly related to the discussion re set_rel_pathlist_hook\nat\n\nhttps://www.postgresql.org/message-id/flat/CADsUR0AaPx4sVgmnuVJ_bOkocccQZGubv6HajzW826rbSmFpCg%40mail.gmail.com\n\nI don't think we've quite resolved what to do there, but maybe\ncreate_upper_paths_hook needs to be looked at at the same time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2019 14:58:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing hook for UPPERREL_PARTIAL_GROUP_AGG rels?" }, { "msg_contents": "Thanks for the quick response. 
I do think this might be a separate issue\nbecause in the set_rel_pathlist_hook case, that hook is actually called at\none point. In this case there is simply no place in the PostgreSQL code\nwhere a call is made to create_upper_paths_hook for the\nUPPERREL_PARTIAL_GROUP_AGG upper rel kind. See\ncreate_partial_grouping_paths().\n\nRegards,\n\nErik\n\nOn Thu, Jun 27, 2019 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com> writes:\n> > I noticed that the create_upper_paths_hook is never called for partially\n> > grouped rels. Is this intentional or a bug? Only the FDW routine's\n> > GetForeignUpperPaths hook is called for partially grouped rels. This\n> seems\n> > odd since the regular create_upper_paths_hook gets called for all other\n> > upper rels.\n>\n> This seems possibly related to the discussion re set_rel_pathlist_hook\n> at\n>\n>\n> https://www.postgresql.org/message-id/flat/CADsUR0AaPx4sVgmnuVJ_bOkocccQZGubv6HajzW826rbSmFpCg%40mail.gmail.com\n>\n> I don't think we've quite resolved what to do there, but maybe\n> create_upper_paths_hook needs to be looked at at the same time.\n>\n> regards, tom lane\n>
", "msg_date": "Thu, 27 Jun 2019 22:26:24 +0200", "msg_from": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <erik@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Missing hook for UPPERREL_PARTIAL_GROUP_AGG rels?" } ]
[ { "msg_contents": "During the unconference at PGCon in Ottawa, I asked about writing C-based\ntests for Postgres. There was interest in trying a tool and also some\nhesitation to depend on a third-party library. So, I wrote a tool that I'd\nlike to contribute to Postgres. I’ve been calling it cexpect [1]\n<https://github.com/berlin-ab/cexpect>.\n\ncexpect is a general-use library for creating test suites in C. It includes:\n\n- a core library for creating and running suites of tests.\n\n- a standard set of test expectations for C\n\n- a default formatter (dots)\n\n- an extensible matcher framework\n\n- an extensible formatter framework\n\n\nWhy add a testing framework for C to Postgres?\n\nAn xUnit-style test framework [2] is a tool for writing tests that is not\ncurrently an option for people hacking on Postgres.\n\n\n -\n\n C-based tests could help increase code coverage in parts of the codebase\n that are difficult to reach with a regress-style test (for example: gin\n posting list compression [3]).\n -\n\n Writing tests for internal components help developers become more\n familiar with a codebase.\n -\n\n Writing C-based tests go beyond providing regression value by providing\n feedback on design decisions like modularity and dependencies.\n -\n\n Test suites already existing in `src/test/modules` could benefit from\n having a consistent way to declare expectations, run suites, and display\n results.\n -\n\n Forks and extensions that write tests could benefit from a testing\n framework provided by the upstream project. An extensible framework will\n allow forks and extensions to create their own matchers and formatters\n without changing the core framework.\n\n\n\nThe extensible matcher framework has benefits for decoupled, isolated unit\ntests, and also high-level system tests. 
The complexity of a common\nassertion can be packaged into a matcher - abstracting the complexity and\nmaking the assertion easier to reuse.\n\nFor example, there could be expectation matchers specifically crafted for\nthe domain of Postgres:\n\n`expect(xlog, to_have_xlog_record(XLOG_XACT_PREPARE))`\n\n`expect(“postgresadmin”, to_be_an_admin_user())`\n\nThe matchers that come with cexpect out of the box are for C datatypes:\n\n`expect(1, is_int_equal_to(1))`\n\n`expect(1 == 2, is_false())`\n\n`expect(“some random string”, is_string_containing(“and”))`\n\n… and more, with the goal of having a matcher for all standard data types.\n\n\nThe extensible formatter framework could be used to create a test-anything\nprotocol (TAP) output, familiar to Postgres hackers. It could also be used\nto insert test results into a database table for later analysis, or create\na consistent way of reporting results from a user-defined function - the\npattern often used for creating tests under `src/test/modules`.\n\n\nHow does it work?\n\nCreate an executable that links to the core shared library, include the\ncore library headers, create a suite, add some tests, and run it.\n\ntest.c:\n\n\n```\n\n#include \"cexpect.h\"\n\n#include \"cexpect_cmatchers.h\"\n\nvoid some_passing_test(Test *test) {\n\n expect(test, 1, is_int_equal_to(1));\n\n}\n\nint main(int argc, char *args[]) {\n\n Suite *suite = create_suite(\"Example test\");\n\n add_test(suite, some_passing_test);\n\n start_cexpect(suite);\n\n}\n\n```\n\nRunning test.c:\n\n\n```bash\n\nexport DYLD_LIBRARY_PATH=$path_to_cexpect_library\n\nexport compile_flags=”-Wno-int-conversion -Wno-pointer-to-int-cast test.c\n-L $path_to_cexpect_library -I $path_to_cexpect_headers”\n\ngcc $compile_flags -l cexpect -o test.o\n\n./test.o\n\nRunning suite: Example test\n\n.\n\nSummary:\n\nRan 1 test(s).\n\n1 passed, 0 failed, 0 pending\n\n```\n\nRather than post a patch, I'd rather start a conversation first. 
I'm\nguessing there are some improvements that we'd want to make (for example:\nthe Makefile) before commiting a patch. Let's iterate on improvements\nbefore creating a formal patch.\n\n\nThoughts?\n\n\nThanks,\n\nAdam Berlin\n\nSoftware Engineer at Pivotal Greenplum\n\n[1] https://github.com/berlin-ab/cexpect\n\n[2] https://en.wikipedia.org/wiki/XUnit\n\n[3]\nhttps://coverage.postgresql.org/src/backend/access/gin/ginpostinglist.c.gcov.html
", "msg_date": "Thu, 27 Jun 2019 20:48:21 -0400", "msg_from": "Adam Berlin <aberlin@pivotal.io>", "msg_from_op": true, "msg_subject": "C testing for Postgres" }, { "msg_contents": "> On Fri, Jun 28, 2019 at 11:38 AM Adam Berlin <aberlin@pivotal.io> wrote:\n>\n> During the unconference at PGCon in Ottawa, I asked about writing C-based\n> tests for Postgres. There was interest in trying a tool and also some\n> hesitation to depend on a third-party library. So, I wrote a tool that I'd\n> like to contribute to Postgres. I’ve been calling it cexpect [1].\n\nCool, thanks!\n\n> Rather than post a patch, I'd rather start a conversation first. I'm guessing\n> there are some improvements that we'd want to make (for example: the\n> Makefile) before commiting a patch. 
Let's iterate on improvements before\n> creating a formal patch.\n\nJust to mention, there were similar discussions already in the past ([1], [2]),\nwith some concerns being raised, but looks like without any visible results.\n\n[1]: https://www.postgresql.org/message-id/flat/CAEepm%3D2heu%2B5zwB65jWap3XY-UP6PpJZiKLQRSV2UQH9BmVRXQ%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/Pine.LNX.4.58.0410111044030.14840%40linuxworld.com.au\n\n\n", "msg_date": "Fri, 28 Jun 2019 11:57:21 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "Here are my takeaways from the previous discussions:\n\n* there *is* interest in testing\n* we shouldn't take it too far\n* there are already tests being written under `src/test/modules`, but\nwithout a consistent way of describing expectations and displaying results\n* no tool was chosen\n\nIf we were to use this tool, would the community want to vendor the\nframework in the Postgres repository, or keep it in a separate repository\nthat produces a versioned shared library?\n\nOn Fri, Jun 28, 2019 at 5:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Fri, Jun 28, 2019 at 11:38 AM Adam Berlin <aberlin@pivotal.io> wrote:\n> >\n> > During the unconference at PGCon in Ottawa, I asked about writing C-based\n> > tests for Postgres. There was interest in trying a tool and also some\n> > hesitation to depend on a third-party library. So, I wrote a tool that\n> I'd\n> > like to contribute to Postgres. I’ve been calling it cexpect [1].\n>\n> Cool, thanks!\n>\n> > Rather than post a patch, I'd rather start a conversation first. I'm\n> guessing\n> > there are some improvements that we'd want to make (for example: the\n> > Makefile) before commiting a patch. 
Let's iterate on improvements before\n> > creating a formal patch.\n>\n> Just to mention, there were similar discussions already in the past ([1],\n> [2]),\n> with some concerns being raised, but looks like without any visible\n> results.\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CAEepm%3D2heu%2B5zwB65jWap3XY-UP6PpJZiKLQRSV2UQH9BmVRXQ%40mail.gmail.com\n> [2]:\n> https://www.postgresql.org/message-id/flat/Pine.LNX.4.58.0410111044030.14840%40linuxworld.com.au\n>
", "msg_date": "Fri, 28 Jun 2019 09:42:54 -0400", "msg_from": "Adam Berlin <aberlin@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "On Fri, Jun 28, 2019 at 09:42:54AM -0400, Adam Berlin wrote:\n> Here are my takeaways from the previous discussions:\n> \n> * there *is* interest in testing\n\nYep.\n\n> * we shouldn't take it too far\n> * there are already tests being written under `src/test/modules`, but\n> without a consistent way of describing expectations and displaying results\n\nThis is a giant problem.\n\n> * no tool was chosen\n\nIf there's a way to get this in the tree, assuming people agree it\nshould be there, that'd be fantastic.\n\nOur current system has been creaking for years.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Fri, 28 Jun 2019 23:54:44 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "On Fri, Jun 28, 2019 at 10:37 AM Adam Berlin <aberlin@pivotal.io> wrote:\n>\n> If we were to use this tool, would the community want to vendor the\n> framework in the Postgres repository, or keep it in a separate\n> repository that produces a versioned shared library?\n>\n\nIf the library is going to actively evolve, we should bring it into the\ntree. 
For a project like this, a \"versioned shared library\" is a massive\npain in the rear for both the consumer of such libraries and for their\nmaintainers.\n\nCheers,\nJesse\n\n\n", "msg_date": "Fri, 28 Jun 2019 22:11:11 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "On Fri, Jun 28, 2019 at 09:42:54AM -0400, Adam Berlin wrote:\n> If we were to use this tool, would the community want to vendor the\n> framework in the Postgres repository, or keep it in a separate repository\n> that produces a versioned shared library?\n\nWell, my take is that having a base infrastructure for a fault\ninjection framework is something that would prove to be helpful, and \nthat I am not against having something in core. While working on\nvarious issues, I have found myself doing many times crazy stat()\ncalls on an on-disk file to enforce an elog(ERROR) or elog(FATAL), and\nby experience fault points are things very *hard* to place correctly\nbecause they should not be single-purpose things.\n\nNow, we don't want to finish with an infinity of fault points in the\ntree, but being able to enforce a failure in a point added for a patch\nusing a SQL command can make the integration of tests in a patch\neasier for reviewers, for example isolation tests with elog(ERROR)\n(like what has been discussed for b4721f3).\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 15:25:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "On Mon, Jul 1, 2019 at 11:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jun 28, 2019 at 09:42:54AM -0400, Adam Berlin wrote:\n> > If we were to use this tool, would the community want to vendor the\n> > framework in the Postgres repository, or keep it in a separate repository\n> > that produces a versioned shared library?\n>\n> Well, my take is that having a base 
infrastructure for a fault\n> injection framework is something that would prove to be helpful, and\n> that I am not against having something in core. While working on\n> various issues, I have found myself doing many times crazy stat()\n> calls on an on-disk file to enforce an elog(ERROR) or elog(FATAL), and\n> by experience fault points are things very *hard* to place correctly\n> because they should not be single-purpose things.\n>\n> Now, we don't want to finish with an infinity of fault points in the\n> tree, but being able to enforce a failure in a point added for a patch\n> using a SQL command can make the integration of tests in a patch\n> easier for reviewers, for example isolation tests with elog(ERROR)\n> (like what has been discussed for b4721f3).\n>\n\nJust to clarify what Adam is proposing in this thread is *not* a fault\ninjection framework.\n\n
", "msg_date": "Tue, 2 Jul 2019 00:07:42 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: C testing for Postgres" }, { "msg_contents": "> Just to clarify what Adam is proposing in this thread is *not* a fault\n> injection framework.\n>\n\nYes, thanks for clarifying Ashwin.\n\nSorry Michael, this testing framework is more like these other frameworks:\n\nJava with Junit + Hamcrest: http://hamcrest.org/JavaHamcrest/tutorial\nRuby with Rspec:\nhttps://rspec.info/documentation/3.8/rspec-expectations/#Built-in_matchers\nJavascript with Jasmine: https://jasmine.github.io/\n\n\n>\n", "msg_date": "Tue, 2 Jul 2019 10:10:51 -0400", "msg_from": "Adam Berlin <aberlin@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: C testing for Postgres" } ]
[ { "msg_contents": "Hello hackers,\n\nThe first Commitfest[1] for the next major release of PostgreSQL\nbegins in a few days, and runs for the month of July. There are 218\npatches registered[2] right now, and I'm sure we'll see some more at\nthe last minute. PostgreSQL 13 needs you!\n\nI volunteered to be the CF manager for this one, and Jonathan Katz\nkindly offered to help me[3]. Assuming there are no objections and we\nland this coveted role (I didn't see any other volunteers for CF1?), I\nplan to start doing the sort of stuff listed on\nhttps://wiki.postgresql.org/wiki/Running_a_CommitFest shortly, and\nwill then provide updates on this thread. (Clearly some of that is\nout of date WRT the \"new\" Commitfest app and process, so if there's a\nbetter list somewhere please let me know; if not, perhaps one of our\ntasks should be to update that).\n\nEither way, please make sure your patches are in, and start signing up\nto review things that you're interested in or can help with or want to\nlearn about. If you've submitted patches, it'd be ideal if you could\ntry to review patches of similar size/complexity. Every review helps:\nwhether proof-reading or copy-editing the documentation and comments\n(or noting that they are missing), finding low level C programming\nerrors, providing high level architectural review, comparing against\nthe SQL standard or other relevant standards or products, seeing if\nappropriate regression tests are included, manual testing or ...\nanything in between. Testing might include functionality testing\n(does it work as described, do all the supplied tests pass?),\nperformance/scalability testing, portability testing (eg does it work\non your OS?), checking with tools like valgrind, feature combination\nchecks (are there hidden problems when combined with partitions,\nserializable, replication, triggers, ...?) 
and generally hunting for\nweird edge cases the author didn't think of[4].\n\nA couple of notes for new players: We don't bite, and your\ncontributions are very welcome. It's OK to review things that others\nare already reviewing. If you are interested in a patch and don't\nknow how to get started reviewing it or how to get it up and running\non your system, just ask and someone will be happy to point to or\nprovide more instructions. You'll need to subscribe to this mailing\nlist if you haven't already. If the thread for a CF entry began\nbefore you were subscribed, you might be able to download the whole\nthread as a mailbox file and import it into your email client so that\nyou can reply to the thread; if you can't do that (it can be\ntricky/impossible on some email clients), ping me and I'll CC you so\nyou can reply.\n\n*probably\n\n[1] https://wiki.postgresql.org/wiki/CommitFest\n[2] https://commitfest.postgresql.org/23/\n[3] https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting#11:10_-_11:25.09Commitfest_Management\n[4] \"A QA engineer walks into a bar. Orders a beer. Orders 0 beers.\nOrders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a\nueicbksjdhd. First real customer walks in and asks where the bathroom\nis. 
The bar bursts into flames, killing everyone.\" -Brenan Keller\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jun 2019 14:04:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "\nHello Thomas,\n\n> The first Commitfest[1] for the next major release of PostgreSQL\n> begins in a few days, and runs for the month of July.\n\n> There are 218 patches registered[2] right now,\n\nISTM that there are a couple of duplicates: 2084 & 2150, 2119 & 2180?\n\n> I volunteered to be the CF manager for this one, and Jonathan Katz\n> kindly offered to help me[3].\n\nThanks for volunteering, and good luck with this task!\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 28 Jun 2019 09:57:06 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Hello\n\nIs this commitfest for small patches and bugfixes, similar to 2018-07 one in last year?\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 28 Jun 2019 10:58:07 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On 2019-06-28 09:58, Sergei Kornilov wrote:\n> Is this commitfest for small patches and bugfixes, similar to 2018-07 one in last year?\n\nThere are no restrictions about what can be submitted to this commit\nfest. 
Review early and review often!\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Jun 2019 11:57:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> If the thread for a CF entry began\n> before you were subscribed, you might be able to download the whole\n> thread as a mailbox file and import it into your email client so that\n> you can reply to the thread; if you can't do that (it can be\n> tricky/impossible on some email clients), ping me and I'll CC you so\n> you can reply.\n\nshhhhh, don't look now, but there might be a \"Resend email\" button in\nthe archives now that you can click to have an email sent to you...\n\nNote that you have to be logged in, and the email will go to the email\naddress that you're logging into the community auth system with.\n\n(thank you Magnus)\n\nThanks!\n\nStephen", "msg_date": "Fri, 28 Jun 2019 11:52:03 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> shhhhh, don't look now, but there might be a \"Resend email\" button in\n> the archives now that you can click to have an email sent to you...\n\nOooh, lovely.\n\n> (thank you Magnus)\n\n+many\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2019 13:15:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On 6/28/19 1:15 PM, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> shhhhh, don't look now, but there might be a \"Resend email\" button in\n>> the 
archives now that you can click to have an email sent to you...\n> \n> Oooh, lovely.\n> \n>> (thank you Magnus)\n> \n> +many\n\nThank you, Magnus, this is really helpful!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 28 Jun 2019 17:47:20 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Sat, Jun 29, 2019 at 9:47 AM David Steele <david@pgmasters.net> wrote:\n> On 6/28/19 1:15 PM, Tom Lane wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> >> shhhhh, don't look now, but there might be a \"Resend email\" button in\n> >> the archives now that you can click to have an email sent to you...\n> >\n> > Oooh, lovely.\n> >\n> >> (thank you Magnus)\n> >\n> > +many\n>\n> Thank you, Magnus, this is really helpful!\n\nThanks, that's great news. So, just to recap for new people who want\nto get involved in testing and reviewing, the steps are:\n\n1. Subscribe to the pgsql-hackers mailing list, starting here:\nhttps://lists.postgresql.org/\n2. In the process of doing that, you'll create a PostgreSQL community account.\n3. Choose a patch you're interested in from\nhttps://commitfest.postgresql.org/23/ , and possibly add yourself as a\nreviewer.\n4. Follow the link to the email thread.\n5. Click on the shiny new \"Resend email\" link on the latest email in\nthe thread to receive a copy, if you didn't have it already.\n6. 
You can reply-all to that email to join the discussion.\n\n(As with all busy mailing lists, you'll probably want to set up\nfiltering to put pgsql-hackers messages into a seperate folder/label\ndue to volume.)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sat, 29 Jun 2019 20:05:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Sat, Jun 29, 2019 at 10:05 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> On Sat, Jun 29, 2019 at 9:47 AM David Steele <david@pgmasters.net> wrote:\n> > On 6/28/19 1:15 PM, Tom Lane wrote:\n> > > Stephen Frost <sfrost@snowman.net> writes:\n> > >> shhhhh, don't look now, but there might be a \"Resend email\" button in\n> > >> the archives now that you can click to have an email sent to you...\n> > >\n> > > Oooh, lovely.\n> > >\n> > >> (thank you Magnus)\n> > >\n> > > +many\n> >\n> > Thank you, Magnus, this is really helpful!\n>\n> Thanks, that's great news. So, just to recap for new people who want\n> to get involved in testing and reviewing, the steps are:\n>\n> 1. Subscribe to the pgsql-hackers mailing list, starting here:\n> https://lists.postgresql.org/\n> 2. In the process of doing that, you'll create a PostgreSQL community\n> account.\n> 3. Choose a patch you're interested in from\n> https://commitfest.postgresql.org/23/ , and possibly add yourself as a\n> reviewer.\n> 4. Follow the link to the email thread.\n> 5. Click on the shiny new \"Resend email\" link on the latest email in\n> the thread to receive a copy, if you didn't have it already.\n> 6. 
You can reply-all to that email to join the discussion.\n>\n> (As with all busy mailing lists, you'll probably want to set up\n> filtering to put pgsql-hackers messages into a seperate folder/label\n> due to volume.)\n>\n\nIt might also be worth noticing that for those who only care about\nfollowing one thread, you can subscribe to the pgsql-hackers list and then\ndisable mail delivery. That way you can still post on the thread, and the\nPostgreSQL convention to use \"reply all\" on emails and directly CC all\nparticipants will ensure you get a copy of any replies. This does assume\nyou either started the thread or at some point interacted with it of course\n-- otherwise your email address wouldn't be in the CC list.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Sat, 29 Jun 2019 11:55:30 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, 
{ "msg_contents": "From: Stephen Frost [mailto:sfrost@snowman.net]\n> shhhhh, don't look now, but there might be a \"Resend email\" button in the\n> archives now that you can click to have an email sent to you...\n> \n> Note that you have to be logged in, and the email will go to the email address\n> that you're logging into the community auth system with.\n> \n> (thank you Magnus)\n\nThank you so much, Magnus. This is very convenient. I'm forced to use Outlook at work, which doesn't allow to reply to a downloaded email. 
Your help eliminates the need to save all emails.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Mon, 1 Jul 2019 00:11:58 +0000", "msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Hello hackers,\n\nIt's now July everywhere on Earth, so I marked CF 2019-07 as\nin-progress, and 2019-09 as open for bumping patches into. I pinged\nmost of the \"Needs Review\" threads that don't apply and will do a few\nmore tomorrow, and then I'll try to chase patches that fail on CI, and\nthen see what I can do to highlight some entries that really need\nreview/discussion. I'll do end-of-week status reports.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2019 00:20:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Tue, Jul 2, 2019 at 12:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It's now July everywhere on Earth, so I marked CF 2019-07 as\n> in-progress, and 2019-09 as open for bumping patches into. I pinged\n> most of the \"Needs Review\" threads that don't apply and will do a few\n> more tomorrow, and then I'll try to chase patches that fail on CI, and\n> then see what I can do to highlight some entries that really need\n> review/discussion. 
I'll do end-of-week status reports.\n\nHello hackers,\n\nHere's a quick status report after the first week (I think only about\n10 commits happened during the week, the rest were pre-CF activity):\n\n status | count\n------------------------+-------\n Committed | 32\n Moved to next CF | 5\n Needs review | 146\n Ready for Committer | 7\n Rejected | 2\n Returned with feedback | 2\n Waiting on Author | 29\n Withdrawn | 8\n\nI wondered about reporting on the number of entries that didn't yet\nhave reviewers signed up, but then I noticed that there isn't a very\ngood correlation between signed up reviewers and reviews. So instead,\nhere is a list of twenty \"Needs review\" entries that have gone a long\ntime without communication on the thread. In some cases there has\nbeen plenty of review, so it's time to make decisions. In others,\nthere has been none at all.\n\nIf you're having trouble choosing something to review, please pick one\nof these and help us figure out how to proceed.\n\n 2026 | Spurious \"apparent wraparound\" via Simpl | {\"Noah Misch\"}\n 2003 | Fix Deadlock Issue in Single User Mode W | {\"Chengchao Yu\"}\n 1796 | documenting signal handling with readme | {\"Chris Travers\"}\n 2053 | NOTIFY options + COLLAPSE (make deduplic | {\"Filip Rembiałkowski\"}\n 1974 | pg_stat_statements should notice FOR UPD | {\"Andrew Gierth\"}\n 2061 | [WIP] Show a human-readable n_distinct i | {\"Maxence Ahlouche\"}\n 2077 | fix pgbench -R hanging on pg11 | {\"Fabien Coelho\"}\n 2062 | Unaccent extension python script Issue i | {\"Hugh\nRanalli\",\"Ramanarayana M\"}\n 1769 | libpq host/hostaddr consistency | {\"Fabien Coelho\"}\n 2060 | suppress errors thrown by to_reg*() | {\"takuma hoshiai\"}\n 2078 | Compile from source using latest Microso | {\"Peifeng Qiu\"}\n 2081 | parse time support function | {\"Pavel Stehule\"}\n 1800 | amcheck verification for GiST | {\"Andrey Borodin\"}\n 2018 | pg_basebackup to adjust existing data di | {\"Haribabu Kommi\"}\n 2095 | 
pg_upgrade version and path checking | {\"Daniel Gustafsson\"}\n 2044 | propagating replica identity to partitio | {\"Álvaro Herrera\"}\n 2090 | pgbench - implement strict TPC-B benchma | {\"Fabien Coelho\"}\n 2088 | Contribution to Perldoc for TestLib modu | {\"Ramanarayana M\"}\n 2087 | Problem during Windows service start | {\"Ramanarayana M\"}\n 2093 | Trigger autovacuum on tuple insertion | {\"Darafei Praliaskouski\"}\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\n\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing, and I'll highlight a different set of patches\nnext week.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2019 23:56:08 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Hello hackers,\n\nHere's a quick update at the end of the second week of CF1.\n\n status | w1 | w2\n------------------------+-----+-----\n Committed | 32 | 41\n Moved to next CF | 5 | 6\n Needs review | 146 | 128\n Ready for Committer | 7 | 9\n Rejected | 2 | 2\n Returned with feedback | 2 | 2\n Waiting on Author | 29 | 35\n Withdrawn | 8 | 8\n\nIt looks like we continued our commit rate of around 10/week, punted\none to the next CF, and returned/rejected nothing.\n\nLast week I highlighted 20 'Needs review' patches whose threads hadn't\nseen traffic for the longest time as places that could use some\nattention if our goal is to move all of these patches closer to their\ndestiny. A few of them made some progress and one was committed.\nHere are another 20 like that -- these are threads have been silent\nfor 24 to 90 days. 
That means they mostly apply and pass basic\ntesting (or I'd probably have reported the failure on the thread and\nthey wouldn't be on this list). Which means you can test them!\n\n 2080 | Minimizing pg_stat_statements performanc | {\"Raymond Martin\"}\n 2103 | Fix failure of identity columns if there | {\"Laurenz Albe\"}\n 1472 | SQL/JSON: functions | {\"Fedor\nSigaev\",\"Alexander Korotkov\",\"Nikita Glukhov\",\"Oleg Bartunov\"}\n 2124 | Introduce spgist quadtree @<(point,circl | {\"Matwey V. Kornilov\"}\n 1306 | pgbench - another attempt at tap test fo | {\"Fabien Coelho\"}\n 2126 | Rearrange postmaster startup order to im | {\"Tom Lane\"}\n 2128 | Fix issues with \"x SIMILAR TO y ESCAPE N | {\"Tom Lane\"}\n 2102 | Improve Append/MergeAppend EXPLAIN outpu | {\"David Rowley\"}\n 1774 | Block level parallel vacuum | {\"Masahiko Sawada\"}\n 2086 | pgbench - extend initialization phase co | {\"Fabien Coelho\"}\n 1348 | BRIN bloom and multi-minmax indexes | {\"Tomas Vondra\"}\n 2183 | Opclass parameters | {\"Nikita Glukhov\"}\n 1854 | libpq trace log | {\"Aya Iwata\"}\n 2147 | Parallel grouping sets | {\"Richard Guo\"}\n 2148 | vacuumlo: report the number of large obj | {\"Timur Birsh\"}\n 1984 | Fix performance issue in foreign-key-awa | {\"David Rowley\"}\n 1911 | anycompatible and anycompatiblearray pol | {\"Pavel Stehule\"}\n 2048 | WIP: Temporal primary and foreign keys | {\"Paul Jungwirth\"}\n 2160 | Multi insert in CTAS/MatView | {\"Paul Guo\",\"Taylor Vesely\"}\n 2154 | Race conditions with TAP test for syncre | {\"Michael Paquier\"}\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jul 2019 14:40:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": " Hello hackers,\n\nHere are the stats at the end of week 3 of the CF:\n\n status | w1 | w2 | w3\n------------------------+-----+-----+-----\n Committed | 32 
| 41 | 49\n Moved to next CF | 5 | 6 | 6\n Needs review | 146 | 128 | 114\n Ready for Committer | 7 | 9 | 10\n Rejected | 2 | 2 | 2\n Returned with feedback | 2 | 2 | 2\n Waiting on Author | 29 | 35 | 39\n Withdrawn | 8 | 8 | 9\n\nHere is the last batch of submissions that I want to highlight. These\n13 are all marked as \"Needs review\", but haven't yet seen any email\ntraffic since the CF began:\n\n 2119 | Use memcpy in pglz decompression | {\"Andrey\nBorodin\",\"Владимир Лесков\"}\n 2169 | Remove HeapTuple and Buffer dependency f | {\"Ashwin Agrawal\"}\n 2172 | fsync error handling in pg_receivewal, p | {\"Peter Eisentraut\"}\n 1695 | Global shared meta cache | {\"Takeshi Ideriha\"}\n 2175 | socket_timeout in interfaces/libpq | {\"Ryohei Nagaura\"}\n 2096 | psql - add SHOW_ALL_RESULTS option | {\"Fabien Coelho\"}\n 2023 | NOT IN to ANTI JOIN transformation | {\"James Finnerty\",\"Zheng Li\"}\n 2064 | src/test/modules/dummy_index -- way to t | {\"Nikolay Shaplov\"}\n 1712 | Remove self join on a unique column | {\"Alexander Kuzmenkov\"}\n 2180 | Optimize pglz compression | {\"Andrey\nBorodin\",\"Владимир Лесков\"}\n 2179 | Fix support for hypothetical indexes usi | {\"Julien Rouhaud\"}\n 2025 | SimpleLruTruncate() mutual exclusion (da | {\"Noah Misch\"}\n 2069 | Expose queryid in pg_stat_activity in lo | {\"Julien Rouhaud\"}\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2019 23:32:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Hello,\n\nHere are the numbers at the end of the 4th week, with just a few days to go:\n\n status | w1 | w2 | w3 | w4\n------------------------+-----+-----+-----+-----\n Committed | 32 | 41 | 49 | 59\n Moved to next CF | 5 | 6 | 6 | 6\n Needs review | 146 | 128 | 114 | 106\n Ready for Committer | 7 | 9 | 10 | 7\n Rejected | 2 | 2 | 2 | 2\n Returned with feedback | 2 | 2 | 
2 | 2\n Waiting on Author | 29 | 35 | 39 | 39\n Withdrawn | 8 | 8 | 9 | 10\n\nOne observation is that the number marked \"Ready for Committer\" floats\naround 7-10, and that's also about how many get committed each week\n(around 20 were already committed pre-'fest), which seems like a clue\nthat things are moving through that part of the state transition\ndiagram reasonably well.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2019 10:59:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Hi all,\n\nCF1 officially ends in about 8 hours, when August arrives on the\nvolcanic islands of Howard and Baker, according to CURRENT_TIMESTAMP\nAT TIME ZONE '+12'. I'll probably mark it closed at least 8 hours\nlater than that because I'll be asleep. Anything that is waiting on\nauthor and hasn't had any recent communication, I'm planning to mark\nas returned with feedback. Anything that is clearly making good\nprogress but isn't yet ready for committer, I'm going to move to the\nnext CF. If you're a patch owner or reviewer and you can help move\nyour patches in the right direction, or have other feedback on the\nappropriate state for any or all patches, then please speak up, I'd\nreally appreciate it. In all cases please feel free to change the\nstate or complain if you think I or someone else got it wrong; if I\nrecall correctly there is a way to get from \"returned\" to \"moved to\nnext CF\", perhaps via an intermediate state. 
Thanks!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 16:10:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Thu, Aug 1, 2019 at 12:10 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hi all,\n>\n> CF1 officially ends in about 8 hours, when August arrives on the\n> volcanic islands of Howard and Baker, according to CURRENT_TIMESTAMP\n> AT TIME ZONE '+12'. I'll probably mark it closed at least 8 hours\n> later than that because I'll be asleep. Anything that is waiting on\n> author and hasn't had any recent communication, I'm planning to mark\n> as returned with feedback. Anything that is clearly making good\n> progress but isn't yet ready for committer, I'm going to move to the\n> next CF. If you're a patch owner or reviewer and you can help move\n> your patches in the right direction, or have other feedback on the\n> appropriate state for any or all patches, then please speak up, I'd\n> really appreciate it. In all cases please feel free to change the\n> state or complain if you think I or someone else got it wrong; if I\n> recall correctly there is a way to get from \"returned\" to \"moved to\n> next CF\", perhaps via an intermediate state. 
Thanks!\n>\n\nAs a normal lurker on hackers, it has been nice seeing the weekly updates.\nThanks for those.\n\n-- Rob\n\n\n>\n> --\n> Thomas Munro\n>\n> https://urldefense.proofpoint.com/v2/url?u=https-3A__enterprisedb.com&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=51tHa8Iv1xJ6zHVF3Sip1AlXYA5E-AYBfRUwz6SDvrs&m=zzunjUZWnsNXR62PvYhl6kzf6VG6mHBPRpJodFEHOKg&s=b09bCdTOGVhOmxdWbWwiTx0FedVeDW7Ol0EJV6pN_BQ&e=\n>\n>\n>\n", "msg_date": "Thu, 1 Aug 2019 00:30:40 -0400", "msg_from": "Robert Eckhardt <reckhardt@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, 
{ "msg_contents": "Greetings, Robert.\n\nYou wrote 2019-08-01, 07:30:\n\n\n\n\n> On Thu, Aug 1, 2019 at 12:10 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hi all,\n\n> CF1 officially ends in about 8 hours,  when August arrives on the\n> volcanic islands of Howard and Baker, according to CURRENT_TIMESTAMP\n> AT TIME ZONE '+12'.  I'll probably mark it closed at least 8 hours\n> later than that because I'll be asleep.  Anything that is waiting on\n> author and hasn't had any recent communication, I'm planning to mark\n> as returned with feedback.  Anything that is clearly making good\n> progress but isn't yet ready for committer, I'm going to move to the\n> next CF.  If you're a patch owner or reviewer and you can help move\n> your patches in the right direction, or have other feedback on the\n> appropriate state for any or all patches, then please speak up, I'd\n> really appreciate it.  In all cases please feel free to change the\n> state or complain if you think I or someone else got it wrong; if I\n> recall correctly there is a way to get from \"returned\" to \"moved to\n> next CF\", perhaps via an intermediate state.  Thanks!\n\n\n\n\n> As a normal lurker on hackers, it has been nice seeing the weekly updates. Thanks for those. 
:)\n\n\n> -- Rob\n>  \n\n> -- \n> Thomas Munro\n> https://urldefense.proofpoint.com/v2/url?u=https-3A__enterprisedb.com&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=51tHa8Iv1xJ6zHVF3Sip1AlXYA5E-AYBfRUwz6SDvrs&m=zzunjUZWnsNXR62PvYhl6kzf6VG6mHBPRpJodFEHOKg&s=b09bCdTOGVhOmxdWbWwiTx0FedVeDW7Ol0EJV6pN_BQ&e=\n\n\n\n\n\n\n-- \nKind regards,\n Pavlo mailto:pavlo.golub@cybertec.at\n\n\n\n", "msg_date": "Thu, 1 Aug 2019 10:12:12 +0300", "msg_from": "Pavlo Golub <pavlo.golub@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Thu, Aug 1, 2019 at 7:12 PM Pavlo Golub <pavlo.golub@cybertec.at> wrote:\n> > As a normal lurker on hackers, it has been nice seeing the weekly updates. Thanks for those.\n>\n> Yeap! Great job! Please, do the same for the rest of our lifes. :)\n\nI guess the CF app could show those kind of metrics, but having a\nwritten report from a human seems to be a good idea (I got it from\nAlvaro's blog[1]). The CF is now closed, and here are the final\nnumbers:\n\n status | w1 | w2 | w3 | w4 | final\n------------------------+----+----+----+----+-------\n Committed | 32 | 41 | 49 | 59 | 64\n Moved to next CF | 5 | 6 | 6 | 6 | 145\n Rejected | 2 | 2 | 2 | 2 | 2\n Returned with feedback | 2 | 2 | 2 | 2 | 9\n Withdrawn | 8 | 8 | 9 | 10 | 11\n\nIn percentages, we returned and rejected 5%, withdrew 5%, committed\n28%, and pushed 62% to the next 'fest. That's a wrap. 
Thanks\neveryone.\n\n[1] https://www.2ndquadrant.com/en/blog/managing-a-postgresql-commitfest/\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2019 12:18:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Fri, Aug 02, 2019 at 12:18:12PM +1200, Thomas Munro wrote:\n> In percentages, we returned and rejected 5%, withdrew 5%, committed\n> 28%, and pushed 62% to the next 'fest. That's a wrap. Thanks\n> everyone.\n\nThanks Thomas for your efforts in making this possible.\n--\nMichael", "msg_date": "Fri, 2 Aug 2019 10:03:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Aug 02, 2019 at 12:18:12PM +1200, Thomas Munro wrote:\n>> In percentages, we returned and rejected 5%, withdrew 5%, committed\n>> 28%, and pushed 62% to the next 'fest. That's a wrap. Thanks\n>> everyone.\n\n> Thanks Thomas for your efforts in making this possible.\n\n+several --- this is a lot of tedious work, but it definitely helps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Aug 2019 22:13:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Fri, Aug 2, 2019 at 9:18 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Aug 1, 2019 at 7:12 PM Pavlo Golub <pavlo.golub@cybertec.at> wrote:\n> > > As a normal lurker on hackers, it has been nice seeing the weekly updates. Thanks for those.\n> >\n> > Yeap! Great job! Please, do the same for the rest of our lifes. 
:)\n>\n> I guess the CF app could show those kind of metrics, but having a\n> written report from a human seems to be a good idea (I got it from\n> Alvaro's blog[1]). The CF is now closed, and here are the final\n> numbers:\n>\n> status | w1 | w2 | w3 | w4 | final\n> ------------------------+----+----+----+----+-------\n> Committed | 32 | 41 | 49 | 59 | 64\n> Moved to next CF | 5 | 6 | 6 | 6 | 145\n> Rejected | 2 | 2 | 2 | 2 | 2\n> Returned with feedback | 2 | 2 | 2 | 2 | 9\n> Withdrawn | 8 | 8 | 9 | 10 | 11\n>\n> In percentages, we returned and rejected 5%, withdrew 5%, committed\n> 28%, and pushed 62% to the next 'fest. That's a wrap. Thanks\n> everyone.\n\nThank you Thomas!\n\nRegards,\nAmit\n\n\n", "msg_date": "Fri, 2 Aug 2019 13:55:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "On Fri, Aug 2, 2019 at 6:55 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Aug 2, 2019 at 9:18 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Aug 1, 2019 at 7:12 PM Pavlo Golub <pavlo.golub@cybertec.at> wrote:\n> > > > As a normal lurker on hackers, it has been nice seeing the weekly updates. Thanks for those.\n> > >\n> > > Yeap! Great job! Please, do the same for the rest of our lifes. :)\n> >\n> > I guess the CF app could show those kind of metrics, but having a\n> > written report from a human seems to be a good idea (I got it from\n> > Alvaro's blog[1]). 
The CF is now closed, and here are the final\n> > numbers:\n> >\n> > status | w1 | w2 | w3 | w4 | final\n> > ------------------------+----+----+----+----+-------\n> > Committed | 32 | 41 | 49 | 59 | 64\n> > Moved to next CF | 5 | 6 | 6 | 6 | 145\n> > Rejected | 2 | 2 | 2 | 2 | 2\n> > Returned with feedback | 2 | 2 | 2 | 2 | 9\n> > Withdrawn | 8 | 8 | 9 | 10 | 11\n> >\n> > In percentages, we returned and rejected 5%, withdrew 5%, committed\n> > 28%, and pushed 62% to the next 'fest. That's a wrap. Thanks\n> > everyone.\n>\n> Thank you Thomas!\n\nThanks a lot Thomas!\n\n\n", "msg_date": "Fri, 2 Aug 2019 07:54:42 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" }, { "msg_contents": "> I guess the CF app could show those kind of metrics, but having a\n> written report from a human seems to be a good idea (I got it from\n> Alvaro's blog[1]). The CF is now closed, and here are the final\n> numbers:\n>\n> status | w1 | w2 | w3 | w4 | final\n> ------------------------+----+----+----+----+-------\n> Committed | 32 | 41 | 49 | 59 | 64\n> Moved to next CF | 5 | 6 | 6 | 6 | 145\n> Rejected | 2 | 2 | 2 | 2 | 2\n> Returned with feedback | 2 | 2 | 2 | 2 | 9\n> Withdrawn | 8 | 8 | 9 | 10 | 11\n>\n> In percentages, we returned and rejected 5%, withdrew 5%, committed\n> 28%, and pushed 62% to the next 'fest. That's a wrap. Thanks\n> everyone.\n>\n> [1] https://www.2ndquadrant.com/en/blog/managing-a-postgresql-commitfest/\n\nThanks.\n\nAttached a small graphical display of CF results over time.\n\n-- \nFabien.", "msg_date": "Fri, 2 Aug 2019 14:11:47 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2019-07, the first of five* for PostgreSQL 13" } ]
[ { "msg_contents": "Hi,\n\nIn postgresAcquireSampleRowsFunc, we 1) determine the fetch size and\nthen 2) construct the fetch command in each iteration of fetching some\nrows from the remote, but that would be totally redundant. Attached\nis a patch for removing that redundancy.\n\nI'll add this to the upcoming commitfest.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 28 Jun 2019 18:38:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: Minor improvement to postgresAcquireSampleRowsFunc" }, { "msg_contents": "On Fri, Jun 28, 2019 at 11:39 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> In postgresAcquireSampleRowsFunc, we 1) determine the fetch size and\n> then 2) construct the fetch command in each iteration of fetching some\n> rows from the remote, but that would be totally redundant.\n\nIndeed.\n\n> Attached\n> is a patch for removing that redundancy.\n\nIt all looks good to me! I marked it as ready for committer.\n\n\n", "msg_date": "Fri, 28 Jun 2019 11:54:34 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: Minor improvement to postgresAcquireSampleRowsFunc" }, { "msg_contents": "Hi Julien,\n\nOn Fri, Jun 28, 2019 at 6:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Jun 28, 2019 at 11:39 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > In postgresAcquireSampleRowsFunc, we 1) determine the fetch size and\n> > then 2) construct the fetch command in each iteration of fetching some\n> > rows from the remote, but that would be totally redundant.\n>\n> Indeed.\n>\n> > Attached\n> > is a patch for removing that redundancy.\n>\n> It all looks good to me! I marked it as ready for committer.\n\nCool! I'll commit the patch if there are no objections. 
Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 28 Jun 2019 19:15:56 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: Minor improvement to postgresAcquireSampleRowsFunc" }, { "msg_contents": "On Fri, Jun 28, 2019 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Jun 28, 2019 at 6:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Fri, Jun 28, 2019 at 11:39 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > In postgresAcquireSampleRowsFunc, we 1) determine the fetch size and\n> > > then 2) construct the fetch command in each iteration of fetching some\n> > > rows from the remote, but that would be totally redundant.\n> >\n> > Indeed.\n> >\n> > > Attached\n> > > is a patch for removing that redundancy.\n> >\n> > It all looks good to me! I marked it as ready for committer.\n>\n> Cool! I'll commit the patch if there are no objections. Thanks for reviewing!\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 3 Jul 2019 18:01:36 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: Minor improvement to postgresAcquireSampleRowsFunc" } ]
[ { "msg_contents": "backend/libpq/be-secure-gssapi.c is including both libpq-be.h and libpq.h,\nwhich makes libpq-be.h superfluous as it gets included via libpq.h. The\nattached patch removes the inclusion of libpq-be.h to make be-secure-gssapi.c\nbehave like other files which need both headers.\n\ncheers ./daniel", "msg_date": "Fri, 28 Jun 2019 16:37:07 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Superfluous libpq-be.h include in GSSAPI code" }, { "msg_contents": "On Fri, Jun 28, 2019 at 4:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> backend/libpq/be-secure-gssapi.c is including both libpq-be.h and libpq.h,\n> which makes libpq-be.h superfluous as it gets included via libpq.h. The\n> attached patch removes the inclusion of libpq-be.h to make be-secure-gssapi.c\n> behave like other files which need both headers.\n\nLGTM.\n\n\n", "msg_date": "Fri, 28 Jun 2019 20:47:33 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Superfluous libpq-be.h include in GSSAPI code" }, { "msg_contents": "On Fri, Jun 28, 2019 at 08:47:33PM +0200, Julien Rouhaud wrote:\n> On Fri, Jun 28, 2019 at 4:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> backend/libpq/be-secure-gssapi.c is including both libpq-be.h and libpq.h,\n>> which makes libpq-be.h superfluous as it gets included via libpq.h. The\n>> attached patch removes the inclusion of libpq-be.h to make be-secure-gssapi.c\n>> behave like other files which need both headers.\n> \n> LGTM.\n\nThanks, committed.
I looked at the area in case but did not notice\nanything else strange.\n\n(We have in hba.h a kludge with hbaPort to avoid including libpq-be.h,\nI got to wonder if we could do something about that..)\n--\nMichael", "msg_date": "Sat, 29 Jun 2019 11:23:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Superfluous libpq-be.h include in GSSAPI code" }, { "msg_contents": "> On 29 Jun 2019, at 04:23, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jun 28, 2019 at 08:47:33PM +0200, Julien Rouhaud wrote:\n>> On Fri, Jun 28, 2019 at 4:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> backend/libpq/be-secure-gssapi.c is including both libpq-be.h and libpq.h,\n>>> which makes libpq-be.h superfluous as it gets included via libpq.h. The\n>>> attached patch removes the inclusion of libpq-be.h to make be-secure-gssapi.c\n>>> behave like other files which need both headers.\n>> \n>> LGTM.\n> \n> Thanks, committed. I looked at the area in case but did not notice\n> anything else strange.\n\nThanks!\n\n> (We have in hba.h a kludge with hbaPort to avoid including libpq-be.h,\n> I got to wonder if we could do something about that..)\n\nI looked at that one too at the time, but didn’t come up with anything less\nkludgy.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jul 2019 10:29:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Superfluous libpq-be.h include in GSSAPI code" } ]
[ { "msg_contents": "Hello,\n pg_dump ignores the dumping of data in foreign tables\n on purpose, this patch makes it optional as the user maybe\n wants to manage the data in the foreign servers directly from\n Postgres. Opinions?\n\nCheers\nLuis M Carril", "msg_date": "Fri, 28 Jun 2019 14:49:42 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Option to dump foreign data in pg_dump" }, { "msg_contents": "Hi\n\npá 28. 6. 2019 v 16:50 odesílatel Luis Carril <luis.carril@swarm64.com>\nnapsal:\n\n> Hello,\n> pg_dump ignores the dumping of data in foreign tables\n> on purpose, this patch makes it optional as the user maybe\n> wants to manage the data in the foreign servers directly from\n> Postgres. Opinions?\n>\n\nIt has sense for me\n\nPavel\n\n>\n> Cheers\n> Luis M Carril\n>\n", "msg_date": "Fri, 28 Jun 2019 17:16:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n\n> pg_dump ignores the dumping of data in foreign tables\n> on purpose, this patch makes it optional as the user maybe \n> wants to manage the data in the foreign servers directly from \n> Postgres. Opinions?\n\nWouldn’t that have the potential to make restores awkward for FDWs that aren’t\nwriteable?
Basically, how can the risk of foot-gunning be minimized to avoid\nusers ending up with dumps that are hard to restore?\n\ncheers ./daniel\n\n", "msg_date": "Fri, 28 Jun 2019 17:17:01 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "pá 28. 6. 2019 v 17:17 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> > pg_dump ignores the dumping of data in foreign tables\n> > on purpose, this patch makes it optional as the user maybe\n> > wants to manage the data in the foreign servers directly from\n> > Postgres. Opinions?\n>\n> Wouldn’t that have the potential to make restores awkward for FDWs that\n> aren’t\n> writeable? Basically, how can the risk of foot-gunning be minimized to\n> avoid\n> users ending up with dumps that are hard to restore?\n>\n\nIt can be used for migrations, porting, testing (where FDW sources are not\naccessible).\n\npg_dump has not any safeguards against bad usage. But this feature has\nsense only if foreign tables are dumped as classic tables - so some special\noption is necessary\n\nPavel\n\n>\n> cheers ./daniel\n>\n>\n\npá 28. 6. 2019 v 17:17 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n\n>   pg_dump ignores the dumping of data in foreign tables\n>   on purpose, this patch makes it optional as the user maybe \n>   wants to manage the data in the foreign servers directly from \n>   Postgres. Opinions?\n\nWouldn’t that have the potential to make restores awkward for FDWs that aren’t\nwriteable?  Basically, how can the risk of foot-gunning be minimized to avoid\nusers ending up with dumps that are hard to restore?It can be used for migrations, porting, testing (where FDW sources are not accessible). pg_dump has not any safeguards against bad usage.
But this feature has sense only if foreign tables are dumped as classic tables - so some special option is necessaryPavel\n\ncheers ./daniel", "msg_date": "Fri, 28 Jun 2019 17:20:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n>> pg_dump ignores the dumping of data in foreign tables\n>> on purpose, this patch makes it optional as the user maybe \n>> wants to manage the data in the foreign servers directly from \n>> Postgres. Opinions?\n\n> Wouldn’t that have the potential to make restores awkward for FDWs that aren’t\n> writeable?\n\nYeah, I think the feature as-proposed is a shotgun that's much more likely\nto cause problems than solve them. Almost certainly, what people would\nreally need is the ability to dump individual foreign tables' data not\neverything. (I also note that the main reason for \"dump everything\",\nnamely to get a guaranteed-consistent snapshot, isn't really valid for\nforeign tables anyhow.)\n\nI'm tempted to suggest that the way to approach this is to say that if you\nexplicitly select some foreign table(s) with \"-t\", then we'll dump their\ndata, unless you suppress that with \"-s\". No new switch needed.\n\nAnother way of looking at it, which responds more directly to Daniel's\npoint about non-writable FDWs, could be to have a switch that says \"dump\nforeign tables' data if their FDW is one of these\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2019 11:30:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "pá 28. 6.
2019 v 17:30 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n> >> pg_dump ignores the dumping of data in foreign tables\n> >> on purpose, this patch makes it optional as the user maybe\n> >> wants to manage the data in the foreign servers directly from\n> >> Postgres. Opinions?\n>\n> > Wouldn’t that have the potential to make restores awkward for FDWs that\n> aren’t\n> > writeable?\n>\n> Yeah, I think the feature as-proposed is a shotgun that's much more likely\n> to cause problems than solve them. Almost certainly, what people would\n> really need is the ability to dump individual foreign tables' data not\n> everything. (I also note that the main reason for \"dump everything\",\n> namely to get a guaranteed-consistent snapshot, isn't really valid for\n> foreign tables anyhow.)\n>\n\nI agree so major usage is dumping data. But can be interesting some\ntransformation from foreign table to classic table (when schema was created\nby IMPORT FOREIGN SCHEMA).\n\n\n> I'm tempted to suggest that the way to approach this is to say that if you\n> explicitly select some foreign table(s) with \"-t\", then we'll dump their\n> data, unless you suppress that with \"-s\". No new switch needed.\n>\n> Another way of looking at it, which responds more directly to Daniel's\n> point about non-writable FDWs, could be to have a switch that says \"dump\n> foreign tables' data if their FDW is one of these\".\n>\n\nRestoring content of FDW table via pg_restore or psql can be dangerous -\nthere I see a risk, and can be nice to allow it only with some form of\nsafeguard.\n\nI think so important questions is motivation for dumping FDW - a) clonning\n(has sense for me and it is safe), b) real backup (requires writeable FDW)\n- has sense too, but I see a possibility of unwanted problems.\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n>\n>\n\npá 28. 6.
2019 v 17:30 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 28 Jun 2019, at 16:49, Luis Carril <luis.carril@swarm64.com> wrote:\n>> pg_dump ignores the dumping of data in foreign tables\n>> on purpose, this patch makes it optional as the user maybe \n>> wants to manage the data in the foreign servers directly from \n>> Postgres. Opinions?\n\n> Wouldn’t that have the potential to make restores awkward for FDWs that aren’t\n> writeable?\n\nYeah, I think the feature as-proposed is a shotgun that's much more likely\nto cause problems than solve them.  Almost certainly, what people would\nreally need is the ability to dump individual foreign tables' data not\neverything.  (I also note that the main reason for \"dump everything\",\nnamely to get a guaranteed-consistent snapshot, isn't really valid for\nforeign tables anyhow.)I agree so major usage is dumping data. But can be interesting some transformation from foreign table to classic table (when schema was created by IMPORT FOREIGN SCHEMA).\n\nI'm tempted to suggest that the way to approach this is to say that if you\nexplicitly select some foreign table(s) with \"-t\", then we'll dump their\ndata, unless you suppress that with \"-s\".
No new switch needed.\n\nAnother way of looking at it, which responds more directly to Daniel's\npoint about non-writable FDWs, could be to have a switch that says \"dump\nforeign tables' data if their FDW is one of these\".Restoring content of FDW table via pg_restore or psql can be dangerous - there I see a risk, and can be nice to allow it only with some form of safeguard.I think so important questions is motivation for dumping FDW - a) clonning (has sense for me and it is safe), b) real backup (requires writeable FDW) - has sense too, but I see a possibility of unwanted problems.RegardsPavel\n\n                        regards, tom lane", "msg_date": "Fri, 28 Jun 2019 17:53:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": ">Restoring content of FDW table via pg_restore or psql can be dangerous - there I see a risk, and can be nice to allow it only >with some form of safeguard.\n>\n>I think so important questions is motivation for dumping FDW - a) clonning (has sense for me and it is safe), b) real backup >(requires writeable FDW) - has sense too, but I see a possibility of unwanted problems.\n\nWhat about providing a list of FDW servers instead of an all or nothing option? In that way the user really has to do a conscious decision to dump the content of the foreign tables for a specific server, this would allow distinction if multiple FDW are being used in the same DB.
Also I think it is responsibility of the user to know if the FDW that are being used are read-only or not.\n\nCheers\nLuis M Carril", "msg_date": "Fri, 28 Jun 2019 17:55:52 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 28 Jun 2019, at 17:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> > Yeah, I think the feature as-proposed is a shotgun that's much more likely\n> > to cause problems than solve them. Almost certainly, what people would\n> > really need is the ability to dump individual foreign tables' data not\n> > everything. (I also note that the main reason for \"dump everything\",\n> > namely to get a guaranteed-consistent snapshot, isn't really valid for\n> > foreign tables anyhow.)\n\n\nI think this is sort of key here, the consistency guarantees are wildly\ndifferent. A note about this should perhaps be added to the docs for the\noption discussed here?\n\n> On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:\n\n\n> What about providing a list of FDW servers instead of an all or nothing option?
In that way the user really has to do a conscious decision to dump the content of the foreign tables for a specific server, this would allow distinction if multiple FDW are being used in the same DB.\n\nI think this is a good option, the normal exclusion rules can then still apply\nin case not everything from a specific server is of interest.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jul 2019 11:29:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> > On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:\n> > What about providing a list of FDW servers instead of an all or nothing option? In that way the user really has to do a conscious decision to dump the content of the foreign tables for > > a specific server, this would allow distinction if multiple FDW are being used in the same DB.\n\n> I think this is a good option, the normal exclusion rules can then still apply\n> in case not everything from a specific server is of interest.\n\nHi, here is a new patch to dump the data of foreign tables using pg_dump.\nThis time the user specifies for which foreign servers the data will be dumped, which helps in case of having a mix of writeable and non-writeable fdw in the database.\nIt would be nice to emit an error if the fdw is read-only, but that information is not available in the catalog.\n\nCheers\nLuis M Carril", "msg_date": "Fri, 12 Jul 2019 14:08:28 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 12 Jul 2019, at 16:08, Luis Carril <luis.carril@swarm64.com> wrote:\n> \n> > > On 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:\n> > > What about providing a list of FDW servers instead of an all or nothing option?
In that way the user really has to do a conscious decision to dump the content of the foreign tables for > > a specific server, this would allow distinction if multiple FDW are being used in the same DB.\n> \n> > I think this is a good option, the normal exclusion rules can then still apply\n> > in case not everything from a specific server is of interest.\n> \n> Hi, here is a new patch to dump the data of foreign tables using pg_dump. \n\nCool! Please register this patch in the next commitfest to make sure it\ndoesn’t get lost on the way. Feel free to mark me as reviewer when adding it.\n\n> This time the user specifies for which foreign servers the data will be dumped, which helps in case of having a mix of writeable and non-writeable fdw in the database.\n\nLooks good, and works as expected.\n\nA few comments on the patch:\n\nDocumentation is missing, but you've waited with docs until the functionality\nof the patch was fleshed out?\n\nThis allows for adding a blanket wildcard with \"--include-foreign-data=“ which\nincludes every foreign server. This seems to go against the gist of the patch,\nto require an explicit opt-in per server.
Testing for an empty string should\ndo the trick.\n\n+\tcase 11:\t\t\t\t/* include foreign data */\n+\t\tsimple_string_list_append(&foreign_servers_include_patterns, optarg);\n+\t\tbreak;\n+\n\nI don’t think expand_foreign_server_name_patterns should use strict_names, but\nrather always consider failures to map as errors.\n\n+\texpand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n+\t\t\t\t\t &foreign_servers_include_oids,\n+\t\t\t\t\t strict_names);\n\nThis seems like a bit too ambiguous name, it would be good to indicate in the\nname that it refers to a foreign server.\n\n+\tOid\t\t\tserveroid; /* foreign server oid */\n\nAs coded there is no warning when asking for foreign data on a schema-only\ndump, maybe something like could make usage clearer as this option is similar\nin concept to data-only:\n\n+ if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)\n+ {\n+ pg_log_error(\"options -s/--schema-only and --include-foreign-data cannot be used together\");\n+ exit_nicely(1);\n+ }\n+\n\ncheers ./daniel\n\n\n\n", "msg_date": "Mon, 15 Jul 2019 12:06:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 15.07.19 12:06, Daniel Gustafsson wrote:\r\n\r\nOn 12 Jul 2019, at 16:08, Luis Carril <luis.carril@swarm64.com> wrote:\r\n\r\n\r\n\r\nOn 28 Jun 2019, at 19:55, Luis Carril <luis.carril@swarm64.com> wrote:\r\nWhat about providing a list of FDW servers instead of an all or nothing option?
In that way the user really has to do a conscious decision to dump the content of the foreign tables for > > a specific server, this would allow distinction if multiple FDW are\r\n being used in the same DB.\r\n\r\n\r\n\r\n\r\n\r\nI think this is a good option, the normal exclusion rules can then still apply\r\nin case not everything from a specific server is of interest.\r\n\r\n\r\n\r\nHi, here is a new patch to dump the data of foreign tables using pg_dump.\r\n\r\n\r\n\r\nCool! Please register this patch in the next commitfest to make sure it\r\ndoesn’t get lost on the way. Feel free to mark me as reviewer when adding it.\r\n\r\nThanks, I'll do!\r\n\r\nThis time the user specifies for which foreign servers the data will be dumped, which helps in case of having a mix of writeable and non-writeable fdw in the database.\r\n\r\n\r\n\r\nLooks good, and works as expected.\r\n\r\nA few comments on the patch:\r\n\r\nDocumentation is missing, but you've waited with docs until the functionality\r\nof the patch was fleshed out?\r\n\r\nI've added the documentation about the option in the pg_dump page\r\n\r\nThis allows for adding a blanket wildcard with \"--include-foreign-data=“ which\r\nincludes every foreign server. This seems to go against the gist of the patch,\r\nto require an explicit opt-in per server.
Testing for an empty string should\r\ndo the trick.\r\n\r\n+ case 11: /* include foreign data */\r\n+ simple_string_list_append(&foreign_servers_include_patterns, optarg);\r\n+ break;\r\n+\r\n\r\nNow it errors if any is an empty string.\r\n\r\n\r\n\r\nI don’t think expand_foreign_server_name_patterns should use strict_names, but\r\nrather always consider failures to map as errors.\r\n\r\n+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\r\n+ &foreign_servers_include_oids,\r\n+ strict_names);\r\n\r\nRemoved, ie if nothing match it throws an error.\r\n\r\n\r\n\r\nThis seems like a bit too ambiguous name, it would be good to indicate in the\r\nname that it refers to a foreign server.\r\n\r\n+ Oid serveroid; /* foreign server oid */\r\n\r\nChanged to foreign_server_oid.\r\n\r\n\r\n\r\nAs coded there is no warning when asking for foreign data on a schema-only\r\ndump, maybe something like could make usage clearer as this option is similar\r\nin concept to data-only:\r\n\r\n+ if (dopt.schemaOnly && foreign_servers_include_patterns.head != NULL)\r\n+ {\r\n+ pg_log_error(\"options -s/--schema-only and --include-foreign-data cannot be used together\");\r\n+ exit_nicely(1);\r\n+ }\r\n+\r\n\r\nAdded too\r\n\r\n\r\n\r\ncheers ./daniel\r\n\r\n\r\n\r\nThanks for the comments!\r\n\r\nCheers\r\nLuis M Carril", "msg_date": "Mon, 15 Jul 2019 12:39:00 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Hi Luis,\nHere is a few comment for me\n\n*I suggest the option to be just –foreign-data because if we make it\n–include-foreign-data its expected to have –exclude-foreign-data option\ntoo.\n\n*please add test case\n\n* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)\n\nfilter condition is not implemented completely yet so the logic only work\non foreign table so I think its better to handle it separately\n\n* I don’t
understand the need for changing SELECT query .we can use the\nsame SELECT query syntax for both regular table and foreign table\n\n\nregards\n\nSurafel\n\n", "msg_date": "Thu, 19 Sep 2019 09:38:03 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Mon, Jul 15, 2019 at 6:09 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> On 15.07.19 12:06, Daniel Gustafsson wrote:\n>\nFew comments:\n\nAs you have specified required_argument in above:\n+ {\"include-foreign-data\", required_argument, NULL, 11},\n\nThe below check may not be required:\n+ if (strcmp(optarg, \"\") == 0)\n+ {\n+ pg_log_error(\"empty string is not a valid pattern in --include-foreign-data\");\n+ exit_nicely(1);\n+ }\n\n+ if (foreign_servers_include_patterns.head != NULL)\n+ {\n+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n+ &foreign_servers_include_oids);\n+ if (foreign_servers_include_oids.head == NULL)\n+ fatal(\"no matching foreign servers were found\");\n+ }\n+\n\nThe above check if (foreign_servers_include_oids.head == NULL) may not\nbe required, as there is a check present inside\nexpand_foreign_server_name_patterns to handle this error:\n+\n+ res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n+ if (PQntuples(res) == 0)\n+ fatal(\"no
matching foreign servers were found for pattern \\\"%s\\\"\", cell->val);\n+\n\n+static void\n+expand_foreign_server_name_patterns(Archive *fout,\n+ SimpleStringList *patterns,\n+ SimpleOidList *oids)\n+{\n+ PQExpBuffer query;\n+ PGresult *res;\n+ SimpleStringListCell *cell;\n+ int i;\n+\n+ if (patterns->head == NULL)\n+ return; /* nothing to do */\n+\n\nThe above check for patterns->head may not be required as similar\ncheck exists before this function is called:\n+ if (foreign_servers_include_patterns.head != NULL)\n+ {\n+ expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n+ &foreign_servers_include_oids);\n+ if (foreign_servers_include_oids.head == NULL)\n+ fatal(\"no matching foreign servers were found\");\n+ }\n+\n\n+ /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */\n+ if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&\n+ (foreign_servers_include_oids.head == NULL ||\n+ !simple_oid_list_member(&foreign_servers_include_oids,\ntbinfo->foreign_server_oid)))\nsimple_oid_list_member can be split into two lines\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2019 15:08:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Thu, Sep 19, 2019 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Jul 15, 2019 at 6:09 PM Luis Carril <luis.carril@swarm64.com> wrote:\n> >\n> > On 15.07.19 12:06, Daniel Gustafsson wrote:\n> >\n> Few comments:\n>\n> As you have specified required_argument in above:\n> + {\"include-foreign-data\", required_argument, NULL, 11},\n>\n> The below check may not be required:\n> + if (strcmp(optarg, \"\") == 0)\n> + {\n> + pg_log_error(\"empty string is not a valid pattern in --include-foreign-data\");\n> + exit_nicely(1);\n> + }\n>\n> + if (foreign_servers_include_patterns.head != NULL)\n> + {\n> +
expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n> + &foreign_servers_include_oids);\n> + if (foreign_servers_include_oids.head == NULL)\n> + fatal(\"no matching foreign servers were found\");\n> + }\n> +\n>\n> The above check if (foreign_servers_include_oids.head == NULL) may not\n> be required, as there is a check present inside\n> expand_foreign_server_name_patterns to handle this error:\n> +\n> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n> + if (PQntuples(res) == 0)\n> + fatal(\"no matching foreign servers were found for pattern \\\"%s\\\"\", cell->val);\n> +\n>\n> +static void\n> +expand_foreign_server_name_patterns(Archive *fout,\n> + SimpleStringList *patterns,\n> + SimpleOidList *oids)\n> +{\n> + PQExpBuffer query;\n> + PGresult *res;\n> + SimpleStringListCell *cell;\n> + int i;\n> +\n> + if (patterns->head == NULL)\n> + return; /* nothing to do */\n> +\n>\n> The above check for patterns->head may not be required as similar\n> check exists before this function is called:\n> + if (foreign_servers_include_patterns.head != NULL)\n> + {\n> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n> + &foreign_servers_include_oids);\n> + if (foreign_servers_include_oids.head == NULL)\n> + fatal(\"no matching foreign servers were found\");\n> + }\n> +\n>\n> + /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */\n> + if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&\n> + (foreign_servers_include_oids.head == NULL ||\n> + !simple_oid_list_member(&foreign_servers_include_oids,\n> tbinfo->foreign_server_oid)))\n> simple_oid_list_member can be split into two lines\n>\nAlso can we include few tests for this feature.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2019 15:18:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents":
"Hello,\n thanks for the comments!\n\n\n*I suggest the option to be just –foreign-data because if we make it –include-foreign-data its expected to have –exclude-foreign-data option too.\n\nSeveral pg_dump options have no counterpart, e.g --enable-row-security does not have a disable (which is the default). Also calling it --foreign-data would sound similar to the --table, by default all tables are dumped, but with --table only the selected tables are dumped. While without --include-foreign-data all data is excluded, and only with the option some foreign data would be included.\n\n*please add test case\n\nI added tests cases for the invalid inputs. I'll try to make a test case for the actual dump of foreign data, but that requires more setup, because a functional fdw is needed there.\n\n* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)\n\nfilter condition is not implemented completely yet so the logic only work on foreign table so I think its better to handle it separately\n\nNote that there is another if condition that actually applies the the filtercondition if provided, also for a foreign table we need to do a COPY SELECT instead of a COPY TO\n\n* I don’t understand the need for changing SELECT query .we can use the same SELECT query syntax for both regular table and foreign table\n\nTo which query do you refer?
In the patch there are three queries: 1 retrieves foreign servers, another is the SELECT in the COPY that now it applies in case of a filter condition of a foreign table, and a third that retrieves the oid of a given foreign server.\n\n\n> As you have specified required_argument in above:\n> + {\"include-foreign-data\", required_argument, NULL, 11},\n>\n> The below check may not be required:\n> + if (strcmp(optarg, \"\") == 0)\n> + {\n> + pg_log_error(\"empty string is not a valid pattern in --include-foreign-data\");\n> + exit_nicely(1);\n> + }\n\nWe need to conserve this check to avoid that the use of '--include-foreign-data=', which would match all foreign servers. And in previous messages it was established that that behavior is too coarse.\n\n>\n> + if (foreign_servers_include_patterns.head != NULL)\n> + {\n> + expand_foreign_server_name_patterns(fout, &foreign_servers_include_patterns,\n> + &foreign_servers_include_oids);\n> + if (foreign_servers_include_oids.head == NULL)\n> + fatal(\"no matching foreign servers were found\");\n> + }\n> +\n>\n> The above check if (foreign_servers_include_oids.head == NULL) may not\n> be required, as there is a check present inside\n> expand_foreign_server_name_patterns to handle this error:\n> +\n> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n> + if (PQntuples(res) == 0)\n> + fatal(\"no matching foreign servers were found for pattern \\\"%s\\\"\", cell->val);\n> +\n\nRemoved\n\n>\n> +static void\n> +expand_foreign_server_name_patterns(Archive *fout,\n> + SimpleStringList *patterns,\n> + SimpleOidList *oids)\n> +{\n> + PQExpBuffer query;\n> + PGresult *res;\n> + SimpleStringListCell *cell;\n> + int i;\n> +\n> + if (patterns->head == NULL)\n> + return; /* nothing to do */\n> +\n>\n> The above check for patterns->head may not be required as similar\n> check exists before this function is called:\n> + if (foreign_servers_include_patterns.head != NULL)\n> + {\n> + expand_foreign_server_name_patterns(fout,
&foreign_servers_include_patterns,\n> + &foreign_servers_include_oids);\n> + if (foreign_servers_include_oids.head == NULL)\n> + fatal(\"no matching foreign servers were found\");\n> + }\n> +\n\nI think that it is better that the function expand_foreign_server_name do not rely on a non-NULL head, so it checks it by itself, and is closer to the other expand_* functions.\nInstead I've removed the check before the function is called.\n\n>\n> + /* Skip FOREIGN TABLEs (no data to dump) if not requested explicitly */\n> + if (tbinfo->relkind == RELKIND_FOREIGN_TABLE &&\n> + (foreign_servers_include_oids.head == NULL ||\n> + !simple_oid_list_member(&foreign_servers_include_oids,\n> tbinfo->foreign_server_oid)))\n> simple_oid_list_member can be split into two lines\n\nDone\n\nCheers\nLuis M Carril", "msg_date": "Fri, 20 Sep 2019 15:20:25 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Fri, Sep 20, 2019 at 6:20 PM Luis Carril <luis.carril@swarm64.com> wrote:\n\n> Hello,\n> thanks for the comments!\n>\n> * + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)\n>\n> filter condition is not implemented completely yet so the logic only work\n> on foreign table so I think its better to handle it separately\n>\n> Note that there is another if condition that actually applies the the\n> filtercondition if provided, also for a we need to do a COPY SELECT instead\n> of a COPY TO\n>\n\nbut we can't supplied where clause in pg_dump yet so filtercondtion is\nalways NULL and the logic became true only on foreign table.\n\n> * I don’t understand the need for changing SELECT query .we can use the\n> same SELECT query syntax for both regular table and foreign table\n>\n> To which query do you refer? 
In the patch there are three queries: 1\n> retrieves foreign servers, another is the SELECT in the COPY that now it\n> applies in case of a filter condition of a foreign table, and a third that\n> retrieves the oid of a given foreign server.\n>\n>\nSELECT on COPY\n\nregards\nSurafel", "msg_date": "Tue, 24 Sep 2019 09:25:01 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Fri, Sep 20, 2019 at 6:20 PM Luis Carril <luis.carril@swarm64.com<mailto:luis.carril@swarm64.com>> wrote:\nHello,\n thanks for the comments!\n\n* + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)\n\nfilter condition is not implemented completely yet so the logic only works on foreign tables so I think it's better to handle it separately\n\nNote that there is another if condition that actually applies the filter condition if provided, also for a foreign table we need to do a COPY SELECT instead of a COPY TO\n\nbut we can't supply a where clause in pg_dump yet so the filter condition is always NULL and the logic becomes true only on foreign tables.\n\n* I don’t understand the need for changing the SELECT query. We can use the same SELECT query syntax for both regular table and foreign table\n\nTo which query do you refer?
In the patch there are three queries: 1 retrieves foreign servers, another is the SELECT in the COPY that now it applies in case of a filter condition of a foreign table, and a third that retrieves the oid of a given foreign server.\n\n\nSELECT on COPY\n\nregards\nSurafel\nIf we have a non-foreign table and filtercond is NULL, then we can do a `COPY table columns TO stdout`.\nBut if the table is foreign, the `COPY foreign-table columns TO stdout` is not supported by Postgres, so we have to do a `COPY (SELECT columns FROM foreign-table) TO stdout`\n\nNow if in any case the filtercond is non-NULL, i.e. we have a WHERE clause, then for non-foreign and foreign tables we have to do a:\n`COPY (SELECT columns FROM table) TO stdout`\n\nSo the COPY of a foreign table has to be done using the sub-SELECT just as a non-foreign table with filtercond, not like a non-foreign table without filtercond.\n\nCheers\n\nLuis M Carril", "msg_date": "Tue, 24 Sep 2019 09:52:24 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 20 Sep 2019, at 17:20, Luis Carril <luis.carril@swarm64.com> wrote:\n\nI took a look at this patch again today for a review of the latest version.\nWhile I still think it's a potential footgun due to read-only FDW's, I can see\nusecases for having it so I'm mildly +1 on adding it.\n\nThe patch applies to master with a little bit of fuzz and builds without warnings.\n\nRegarding the functionality, it's a bit unfortunate that it works differently\nfor --inserts dumps and COPY dumps.  As an example, suppose we have a file_fdw\ntable in CSV format with 10 rows where row 10 is malformed.  The COPY dump will\ninclude 9 rows and exit on an error, the --inserts dump won't include any rows\nbefore the error.
Since the dump fails with an error, one can argue that it\ndoesn't matter too much, but it's still not good to have such different\nbehaviors based on program internals.  (the example is of course not terribly\nrealistic but can be extrapolated from.) Maybe I'm the only one concerned though.\n\n*I suggest the option to be just –foreign-data because if we make it –include-foreign-data its expected to have –exclude-foreign-data option too.\n\nSeveral pg_dump options have no counterpart, e.g --enable-row-security does not have a disable (which is the default). Also calling it --foreign-data would sound similar to the --table,  by default all tables are dumped, but with --table only the selected tables are dumped. While without --include-foreign-data all data is excluded, and only with the option some foreign data would be included.\n\nI agree that --include-foreign-data conveys the meaning of the option better,\n+1 for keeping this.\n\n*please add test case\n\nI added tests cases for the invalid inputs. I'll try to make a test case for the actual dump of foreign data, but that requires more setup, because a functional fdw is needed there.\n\nThis is where it becomes a bit messy IMO.  Given that there has been a lot of\neffort spent on adding test coverage for pg_dump, I think it would be a shame\nto add such niche functionality without testing more than the program options.\nYou are however right that in order to test, it requires a fully functional\nFDW.\n\nI took the liberty to add a testcase to the pg_dump TAP tests which includes a\ndummy FDW that always return 10 predetermined rows (in order to keep tests\nstable).  There is so far just a happy-path test, since I don't want to spend\ntime until it's deemed of interest (it is adding a lot of code for a small\ntest), but it at least illustrates how this patch could be tested.
The attached patch builds on top of yours.\n\nThe below check may not be required:\n+ if (strcmp(optarg, \"\") == 0)\n+ {\n+ pg_log_error(\"empty string is not a valid pattern in --include-foreign-data\");\n+ exit_nicely(1);\n+ }\n\nWe need to keep this check: otherwise '--include-foreign-data=' (an empty pattern) would match all foreign servers. And in previous messages it was established that that behavior is too coarse.\n\nI still believe that's the desired functionality.\n\nAlso, a few small nitpicks on the patch:\n\nThis should probably be PATTERN instead of SERVER, to match the rest of the\nhelp output:\n+   printf(_(\"  --include-foreign-data=SERVER\\n\"\n+            \"                               include data of foreign tables with the named\\n\"\n+            \"                               foreign servers in dump\\n\"));\n\nIt would be good to add a comment explaining the rationale for adding\nRELKIND_FOREIGN_TABLE to this block, to assist readers:\n-   if (tdinfo->filtercond)\n+   if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)\n\ncheers ./daniel", "msg_date": "Sat, 9 Nov 2019 21:38:55 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Sat, 2019-11-09 at 21:38 +0100, Daniel Gustafsson wrote:\n> I took a look at this patch again today for a review of the latest version.\n> While I still think it's a potential footgun due to read-only FDW's, I can see\n> usecases for having it so I'm mildly +1 on adding it.\n\nI don't feel good about this feature.\npg_dump should not dump any data that are not part of the database\nbeing dumped.\n\nIf you restore such a dump, the data will be inserted into the foreign table,\nright? Unless someone emptied the remote table first, this will add\nduplicated data to that table.\nI think that is an unpleasant surprise. I'd expect that if I drop a database\nand restore it from a dump, it should be as it was before.
This change would\nbreak that assumption.\n\nWhat are the use cases of a dump with foreign table data?\n\nUnless I misunderstood something there, -1.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 11 Nov 2019 21:04:17 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Hello\n\n a new version of the patch with the tests from Daniel (thanks!) and the nitpicks.\n\n\nI don't feel good about this feature.\npg_dump should not dump any data that are not part of the database\nbeing dumped.\n\nIf you restore such a dump, the data will be inserted into the foreign table,\nright? Unless someone emptied the remote table first, this will add\nduplicated data to that table.\nI think that is an unpleasant surprise. I'd expect that if I drop a database\nand restore it from a dump, it should be as it was before. This change would\nbreak that assumption.\n\nWhat are the use cases of a dump with foreign table data?\n\nUnless I misunderstood something there, -1.\n\nThis feature is opt-in, so if the user makes dumps of a remote server explicitly by other means, then the user would not need to use this option.\n\nBut, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.\nIn FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could expect that a dump of the database includes data from these FDWs.\n\n\nCheers\n\n\nLuis M Carril", "msg_date": "Tue, 12 Nov 2019 11:12:16 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 12 Nov 2019, at 12:12, Luis Carril <luis.carril@swarm64.com> wrote:\n\n> a new version of the patch with the tests from Daniel (thanks!) and the nitpicks.\n\nThe nitpicks have been addressed.
However, it seems that the new file\ncontaining the test FDW seems missing from the new version of the patch. Did\nyou forget to git add the file?\n\n>> I don't feel good about this feature.\n>> pg_dump should not dump any data that are not part of the database\n>> being dumped.\n>> \n>> If you restore such a dump, the data will be inserted into the foreign table,\n>> right? Unless someone emptied the remote table first, this will add\n>> duplicated data to that table.\n>> I think that is an unpleasant surprise. I'd expect that if I drop a database\n>> and restore it from a dump, it should be as it was before. This change would\n>> break that assumption.\n>> \n>> What are the use cases of a dump with foreign table data?\n>> \n>> Unless I misunderstood something there, -1.\n> \n> This feature is opt-in so if the user makes dumps of a remote server explicitly by other means, then the user would not need to use these option.\n> But, not all foreign tables are necessarily in a remote server like the ones referenced by the postgres_fdw.\n> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign tables are part of your database, and one could expect that a dump of the database includes data from these FDWs.\n\nRight, given the deliberate opt-in which is required I don't see much risk of\nunpleasant user surprises. There are no doubt foot-guns available with this\nfeature, as has been discussed upthread, but the current proposal is IMHO\nminimizing them.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 12 Nov 2019 14:28:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "The nitpicks have been addressed. However, it seems that the new file\ncontaining the test FDW seems missing from the new version of the patch. Did\nyou forget to git add the file?\n\nYes, I forgot, thanks for noticing. 
New patch attached again.\n\n\nCheers\n\n\nLuis M Carril", "msg_date": "Tue, 12 Nov 2019 14:21:38 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2019-Nov-12, Luis Carril wrote:\n\n> But, not all foreign tables are necessarily in a remote server like\n> the ones referenced by the postgres_fdw.\n> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign\n> tables are part of your database, and one could expect that a dump of\n> the database includes data from these FDWs.\n\nBTW these are not FDWs in the \"foreign\" sense at all; they're just\nabusing the FDW system in order to be able to store data in some\ndifferent way. The right thing to do IMO is to port these systems to be\nusers of the new storage abstraction (table AM). If we do that, what\nvalue is there to the feature being proposed here?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 Nov 2019 12:11:23 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 12 Nov 2019, at 15:21, Luis Carril <luis.carril@swarm64.com> wrote:\n> \n>> The nitpicks have been addressed. However, it seems that the new file\n>> containing the test FDW seems missing from the new version of the patch. Did\n>> you forget to git add the file?\n> Yes, I forgot, thanks for noticing. New patch attached again.\n\nThe patch applies, compiles and tests clean.
The debate whether we want to\nallow dumping of foreign data at all will continue but I am marking the patch\nas ready for committer as I believe it is ready for input on that level.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 12 Nov 2019 16:38:36 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Nov-12, Luis Carril wrote:\n>> But, not all foreign tables are necessarily in a remote server like\n>> the ones referenced by the postgres_fdw.\n>> In FDWs like swarm64da, cstore, citus or timescaledb, the foreign\n>> tables are part of your database, and one could expect that a dump of\n>> the database includes data from these FDWs.\n\n> BTW these are not FDWs in the \"foreign\" sense at all; they're just\n> abusing the FDW system in order to be able to store data in some\n> different way. The right thing to do IMO is to port these systems to be\n> users of the new storage abstraction (table AM). If we do that, what\n> value is there to the feature being proposed here?\n\nThat is a pretty valid point. I'm not sure however that there would\nbe *no* use-cases for the proposed option if all of those FDWs were\nconverted to table AMs. Also, even if the authors of those systems\nare all hard at work on such a conversion, it'd probably be years\nbefore the FDW implementations disappear from the wild.\n\nHaving said that, I'm ending up -0.5 or so on the patch as it stands,\nmainly because it seems like it is bringing way more maintenance\nburden than it's realistically worth. I'm particularly unhappy about\nthe proposed regression test additions --- the cycles added to\ncheck-world, and the maintenance effort that's inevitably going to be\nneeded for all that code, seem unwarranted for something that's at\nbest a very niche use-case. 
And, despite the bulk of the test\nadditions, they're in no sense offering an end-to-end test, because\nthat would require successfully reloading the data as well.\n\nThat objection could be addressed, perhaps, by scaling down the tests\nto just have a goal of exercising the new pg_dump option-handling\ncode, and not to attempt to do meaningful data extraction from a\nforeign table. You could do that with an entirely dummy foreign data\nwrapper and server (cf. sql/foreign_data.sql). I'm imagining perhaps\ncreate two dummy servers, of which only one has a table, and we ask to\ndump data from the other one. This would cover parsing and validation\nof the --include-foreign-data option, and make sure that we don't dump\nfrom servers we're not supposed to. It doesn't actually dump any\ndata, but that part is a completely trivial aspect of the patch,\nreally, and almost all of the code relevant to that does get tested\nalready.\n\nIn the department of minor nitpicks ... why bother with joining to\npg_foreign_server in the query that retrieves a foreign table's\nserver OID? ft.ftserver is already the answer you seek. Also,\nI think it'd be wise from a performance standpoint to skip doing\nthat query altogether in the normal case where --include-foreign-data\nhasn't been requested.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Nov 2019 14:27:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2019-Nov-12, Luis Carril wrote:\n\n> The nitpicks have been addressed. However, it seems that the new file\n> containing the test FDW seems missing from the new version of the patch. Did\n> you forget to git add the file?\n> \n> Yes, I forgot, thanks for noticing. New patch attached again.\n\nLuis,\n\nIt seems you've got enough support for this concept, so let's move\nforward with this patch. 
There are some comments from Tom about the\npatch; would you like to send an updated version perhaps?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 Nov 2019 11:31:16 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Luis,\n\nIt seems you've got enough support for this concept, so let's move\nforward with this patch. There are some comments from Tom about the\npatch; would you like to send an updated version perhaps?\n\nThanks\n\nHi,\n\n I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.\n\nAbout the testing issues that Tom mentioned:\nI do not see how we can have a pure SQL dummy FDW that tests the functionality. Because the only way to identify if the data of a foreign table for the chosen server is dumped is if the COPY statement appears in the output, but if the C callbacks of the FDW are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.\nAlso, to test that the include option chooses only the data of the specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear.
But I do not find these kind of tests in the test suite, even for other selective options like --table or --exclude-schema.\n\n\nCheers\nLuis M Carril\n\n________________________________\nFrom: Alvaro Herrera <alvherre@2ndquadrant.com>\nSent: Thursday, November 28, 2019 3:31 PM\nTo: Luis Carril <luis.carril@swarm64.com>\nCc: Daniel Gustafsson <daniel@yesql.se>; Laurenz Albe <laurenz.albe@cybertec.at>; vignesh C <vignesh21@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Option to dump foreign data in pg_dump\n\nOn 2019-Nov-12, Luis Carril wrote:\n\n> The nitpicks have been addressed. However, it seems that the new file\n> containing the test FDW seems missing from the new version of the patch. Did\n> you forget to git add the file?\n>\n> Yes, I forgot, thanks for noticing. New patch attached again.\n\nLuis,\n\nIt seems you've got enough support for this concept, so let's move\nforward with this patch. There are some comments from Tom about the\npatch; would you like to send an updated version perhaps?\n\nThanks\n\n--\nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 Nov 2019 08:40:38 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Fri, Nov 29, 2019 at 2:10 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> Luis,\n>\n> It seems you've got enough support for this concept, so let's move\n> forward with this patch. 
There are some comments from Tom about the\n> patch; would you like to send an updated version perhaps?\n>\n> Thanks\n>\n> Hi,\n>\n> I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.\n>\n> About the testing issues that Tom mentioned:\n> I do not see how can we have a pure SQL dummy FDW that tests the functionality. Because the only way to identify if the data of a foreign table for the chosen server is dumped is if the COPY statement appears in the output, but if the C callbacks of the FDW are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.\n> Also, to test that the include option chooses only the data of the specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find these kind of tests in the test suite, even for other selective options like --table or --exclude-schema.\n>\n\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jan 2020 06:18:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Can you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\nI tried with -j and found no issue. 
I guess that the foreign table needs locking anyway to prevent anyone to modify it while is being dumped.\n\nCheers,\n\nLuis M Carril\n________________________________\nFrom: vignesh C <vignesh21@gmail.com>\nSent: Tuesday, January 14, 2020 1:48 AM\nTo: Luis Carril <luis.carril@swarm64.com>\nCc: Alvaro Herrera <alvherre@2ndquadrant.com>; Daniel Gustafsson <daniel@yesql.se>; Laurenz Albe <laurenz.albe@cybertec.at>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Option to dump foreign data in pg_dump\n\nOn Fri, Nov 29, 2019 at 2:10 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> Luis,\n>\n> It seems you've got enough support for this concept, so let's move\n> forward with this patch. There are some comments from Tom about the\n> patch; would you like to send an updated version perhaps?\n>\n> Thanks\n>\n> Hi,\n>\n> I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.\n>\n> About the testing issues that Tom mentioned:\n> I do not see how can we have a pure SQL dummy FDW that tests the functionality. Because the only way to identify if the data of a foreign table for the chosen server is dumped is if the COPY statement appears in the output, but if the C callbacks of the FDW are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.\n> Also, to test that the include option chooses only the data of the specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. But I do not find these kind of tests in the test suite, even for other selective options like --table or --exclude-schema.\n>\n\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. 
May be this is\nnot required for foreign tables.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n\n\n\n\n\n\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\n\nI tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone to modify it while is being dumped.\n\n\nCheers,\n\n\n\nLuis M Carril\n\n\n\nFrom: vignesh C <vignesh21@gmail.com>\nSent: Tuesday, January 14, 2020 1:48 AM\nTo: Luis Carril <luis.carril@swarm64.com>\nCc: Alvaro Herrera <alvherre@2ndquadrant.com>; Daniel Gustafsson <daniel@yesql.se>; Laurenz Albe <laurenz.albe@cybertec.at>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Option to dump foreign data in pg_dump\n \n\n\nOn Fri, Nov 29, 2019 at 2:10 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> Luis,\n>\n> It seems you've got enough support for this concept, so let's move\n> forward with this patch.  There are some comments from Tom about the\n> patch; would you like to send an updated version perhaps?\n>\n> Thanks\n>\n> Hi,\n>\n>  I've attached a new version (v6) removing the superfluous JOIN that Tom identified, and not collecting the oids (avoiding the query) if the option is not used at all.\n>\n> About the testing issues that Tom mentioned:\n> I do not see how can we have a pure SQL dummy FDW that tests the functionality. Because the only way to identify if the data of a foreign table for the chosen server is dumped is if the COPY statement appears in the output, but if the C callbacks of the FDW\n are not implemented, then the SELECT that dumps the data to generate the COPY cannot be executed.\n> Also, to test that the include option chooses only the data of the  specified foreign servers we would need some negative testing, i.e. that the COPY statement for the non-desired table does not appear. 
But I do not find these kind of tests in the test suite,\n even for other selective options like --table or --exclude-schema.\n>\n\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jan 2020 11:52:49 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com> wrote:\n\n> Can you have a look at dump with parallel option. Parallel option will\n> take a lock on table while invoking lockTableForWorker. May be this is\n> not required for foreign tables.\n> Thoughts?\n>\n> I tried with -j and found no issue. I guess that the foreign table needs\n> locking anyway to prevent anyone to modify it while is being dumped.\n>\n>\nI'm able to get the problem with the following steps:\nBring up a postgres setup with servers running in 5432 & 5433 port.\n\nExecute the following commands in Server1 configured on 5432 port:\n\n - CREATE EXTENSION postgres_fdw;\n\n\n - CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS\n (host '127.0.0.1', port '5433', dbname 'postgres');\n\n\n - create user user1 password '123';\n\n\n - alter user user1 with superuser;\n\n\n - CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user\n 'user1', password '123');\n\n\nExecute the following commands in Server2 configured on 5433 port:\n\n - create user user1 password '123';\n\n\n - alter user user1 with superuser;\n\nExecute the following commands in Server2 configured on 5433 port as user1\nuser:\n\n - create schema test;\n\n\n - create table test.test1(id int);\n\n\n - insert into test.test1 values(10);\n\n\nExecute the following commands in Server1 configured on 5432 port as 
user1\nuser:\n\n - CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER\n foreign_server OPTIONS (schema_name 'test', table_name 'test1');\n\n\nWithout parallel option, the operation is successful:\n\n - ./pg_dump -d postgres -f dumpdir -U user1 -F d --include-foreign-data\n foreign_server\n\n\nWith parallel option it fails:\n\n - ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5\n --include-foreign-data foreign_server\n\npg_dump: error: could not obtain lock on relation \"public.foreign_table1\"\nThis usually means that someone requested an ACCESS EXCLUSIVE lock on the\ntable after the pg_dump parent process had gotten the initial ACCESS SHARE\nlock on the table.\npg_dump: error: a worker process died unexpectedly\n\nThere may be simpler steps than this to reproduce the issue, i have not try\nto optimize it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jan 2020 14:31:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, {
"msg_contents": "On Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com<mailto:luis.carril@swarm64.com>> wrote:\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\nI tried with -j and found no issue. I guess that the foreign table needs locking anyway to prevent anyone to modify it while is being dumped.\n\n\nI'm able to get the problem with the following steps:\nBring up a postgres setup with servers running in 5432 & 5433 port.\n\nExecute the following commands in Server1 configured on 5432 port:\n\n * CREATE EXTENSION postgres_fdw;\n\n * CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');\n\n * create user user1 password '123';\n\n * alter user user1 with superuser;\n\n * CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');\n\nExecute the following commands in Server2 configured on 5433 port:\n\n * create user user1 password '123';\n\n * alter user user1 with superuser;\n\nExecute the following commands in Server2 configured on 5433 port as user1 user:\n\n * create schema test;\n\n * create table test.test1(id int);\n\n * insert into test.test1 values(10);\n\nExecute the following commands in Server1 configured on 5432 port as user1 user:\n\n * CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');\n\nWithout parallel option, the operation is successful:\n\n * ./pg_dump -d postgres -f dumpdir -U user1 -F d --include-foreign-data foreign_server\n\nWith parallel option it fails:\n\n * ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server\n\npg_dump: error: could not obtain lock on relation \"public.foreign_table1\"\nThis usually means that someone requested an ACCESS EXCLUSIVE lock on the table 
after the pg_dump parent process had gotten the initial ACCESS SHARE lock on the table.\npg_dump: error: a worker process died unexpectedly\n\nThere may be simpler steps than this to reproduce the issue, i have not try to optimize it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\nHi Vignesh,\n\n yes you are right I could reproduce it also with 'file_fdw'. The issue is that LOCK is not supported on foreign tables, so I guess that the safest solution is to make the --include-foreign-data incompatible with --jobs, because skipping the locking for foreign tables maybe can lead to a deadlock anyway. Suggestions?\n\nCheers\nLuis M Carril\n\n________________________________\nFrom: vignesh C <vignesh21@gmail.com>\nSent: Thursday, January 16, 2020 10:01 AM\nTo: Luis Carril <luis.carril@swarm64.com>\nCc: Alvaro Herrera <alvherre@2ndquadrant.com>; Daniel Gustafsson <daniel@yesql.se>; Laurenz Albe <laurenz.albe@cybertec.at>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Option to dump foreign data in pg_dump\n\nOn Tue, Jan 14, 2020 at 5:22 PM Luis Carril <luis.carril@swarm64.com<mailto:luis.carril@swarm64.com>> wrote:\nCan you have a look at dump with parallel option. Parallel option will\ntake a lock on table while invoking lockTableForWorker. May be this is\nnot required for foreign tables.\nThoughts?\nI tried with -j and found no issue. 
I guess that the foreign table needs locking anyway to prevent anyone to modify it while is being dumped.\n\n\nI'm able to get the problem with the following steps:\nBring up a postgres setup with servers running in 5432 & 5433 port.\n\nExecute the following commands in Server1 configured on 5432 port:\n\n * CREATE EXTENSION postgres_fdw;\n\n * CREATE SERVER foreign_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'postgres');\n\n * create user user1 password '123';\n\n * alter user user1 with superuser;\n\n * CREATE USER MAPPING FOR user1 SERVER foreign_server OPTIONS (user 'user1', password '123');\n\nExecute the following commands in Server2 configured on 5433 port:\n\n * create user user1 password '123';\n\n * alter user user1 with superuser;\n\nExecute the following commands in Server2 configured on 5433 port as user1 user:\n\n * create schema test;\n\n * create table test.test1(id int);\n\n * insert into test.test1 values(10);\n\nExecute the following commands in Server1 configured on 5432 port as user1 user:\n\n * CREATE FOREIGN TABLE foreign_table1 (id integer NOT NULL) SERVER foreign_server OPTIONS (schema_name 'test', table_name 'test1');\n\nWithout parallel option, the operation is successful:\n\n * ./pg_dump -d postgres -f dumpdir -U user1 -F d --include-foreign-data foreign_server\n\nWith parallel option it fails:\n\n * ./pg_dump -d postgres -f dumpdir1 -U user1 -F d -j 5 --include-foreign-data foreign_server\n\npg_dump: error: could not obtain lock on relation \"public.foreign_table1\"\nThis usually means that someone requested an ACCESS EXCLUSIVE lock on the table after the pg_dump parent process had gotten the initial ACCESS SHARE lock on the table.\npg_dump: error: a worker process died unexpectedly\n\nThere may be simpler steps than this to reproduce the issue, i have not try to optimize it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Jan 2020 15:04:56 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Mon, Jan 20, 2020 at 8:34 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n>\n> Hi Vignesh,\n>\n> yes you are right I could reproduce it also with 'file_fdw'. The issue is that LOCK is not supported on foreign tables, so I guess that the safest solution is to make the --include-foreign-data incompatible with --jobs, because skipping the locking for foreign tables maybe can lead to a deadlock anyway. Suggestions?\n>\n\nYes we can support --include-foreign-data without parallel option and\nlater add support for parallel option as a different patch.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jan 2020 10:46:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Yes we can support --include-foreign-data without parallel option and\nlater add support for parallel option as a different patch.\n\nHi,\n\n I've attached a new version of the patch in which an error is emitted if the parallel backup is used with the --include-foreign-data option.\n\n\nCheers\n\n\nLuis M. Carril", "msg_date": "Tue, 21 Jan 2020 09:36:33 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Tue, Jan 21, 2020 at 3:06 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> Yes we can support --include-foreign-data without parallel option and\n> later add support for parallel option as a different patch.\n>\n> Hi,\n>\n> I've attached a new version of the patch in which an error is emitted if the parallel backup is used with the --include-foreign-data option.\n>\n\nThanks for working on the comments. I noticed one behavior is\ndifferent when --table option is specified.
When --table is specified\nthe following are not getting dumped:\nCREATE SERVER foreign_server\n\nI felt the above also should be included as part of the dump when\ninclude-foreign-data option is specified.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Jan 2020 07:29:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Thanks for working on the comments. I noticed one behavior is\ndifferent when --table option is specified. When --table is specified\nthe following are not getting dumped:\nCREATE SERVER foreign_server\n\nI felt the above also should be included as part of the dump when\ninclude-foreign-data option is specified.\n\nYes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, does not include the extension nor the server.\nI guess that the idea behind --table is that the table prerequisites should already exist on the database.\n\nA similar behavior can be reproduced for a non foreign table. If a table is created in a specific schema, dumping only the table with --table does not dump the schema definition.\n\nSo I think we do not need to dump the server with the table.\n\nCheers\n\nLuis M Carril", "msg_date": "Wed, 29 Jan 2020 08:30:16 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-01-21 10:36, Luis Carril wrote:\n>> Yes we can support --include-foreign-data without parallel option and\n>> later add support for parallel option as a different patch.\n> \n> Hi,\n> \n>    I've attached a new version of the patch in which an error is \n> emitted if the parallel backup is used with the --include-foreign-data \n> option.\n\nThis seems like an overreaction. The whole point of \nlockTableForWorker() is to avoid deadlocks, but foreign tables don't \nhave locks, so it's not a problem. I think you can just skip foreign \ntables in lockTableForWorker() using the same logic that getTables() uses.\n\nI think parallel data dump would be an especially interesting option \nwhen using foreign tables, so it's worth figuring this out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Jan 2020 17:05:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On Wed, Jan 29, 2020 at 2:00 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n> Thanks for working on the comments. I noticed one behavior is\n> different when --table option is specified. When --table is specified\n> the following are not getting dumped:\n> CREATE SERVER foreign_server\n>\n> I felt the above also should be included as part of the dump when\n> include-foreign-data option is specified.\n>\n> Yes, it also happens on master. A dump of a foreign table using --table, which only dumps the table definition, does not include the extension nor the server.\n> I guess that the idea behind --table is that the table prerequisites should already exist on the database.\n>\n> A similar behavior can be reproduced for a non foreign table. If a table is created in a specific schema, dumping only the table with --table does not dump the schema definition.\n>\n> So I think we do not need to dump the server with the table.\n>\n\nThanks for the clarification, the behavior sounds reasonable to me\nunless others have a different opinion on this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Jan 2020 09:56:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Hi Luis,\n\nOn 1/29/20 11:05 AM, Peter Eisentraut wrote:\n> On 2020-01-21 10:36, Luis Carril wrote:\n>>> Yes we can support --include-foreign-data without parallel option and\n>>> later add support for parallel option as a different patch.\n>>\n>> Hi,\n>>\n>>     I've attached a new version of the patch in which an error is \n>> emitted if the parallel backup is used with the --include-foreign-data \n>> option.\n> \n> This seems like an overreaction.  The whole point of \n> lockTableForWorker() is to avoid deadlocks, but foreign tables don't \n> have locks, so it's not a problem.  I think you can just skip foreign \n> tables in lockTableForWorker() using the same logic that getTables() uses.\n> \n> I think parallel data dump would be an especially interesting option \n> when using foreign tables, so it's worth figuring this out.\n\nWhat do you think of Peter's comment?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 3 Mar 2020 14:11:41 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, 
{ "msg_contents": "I am just responding on the latest mail on this thread. But the question is\nabout functionality. The proposal is to add a single flag\n--include-foreign-data which controls whether or not data is dumped for all\nthe foreign tables in a database. That may not serve the purpose. A foreign\ntable may point to a view, materialized view or inheritance tree, and so\non. A database can have foreign tables pointing to all of those kinds.\nRestoring data to a view won't be possible and restoring it into an\ninheritance tree would insert it into the parent only and not the children.\nFurther, a user may not want the data to be dumped for all the foreign\ntables since their usages are different esp. considering restore. I think a\nbetter option is to extract data in a foreign table using --table if that's\nthe only usage. Otherwise, we need a foreign table level flag indicating\nwhether pg_dump should dump the data for that foreign table or not.\n\nOn Wed, Mar 4, 2020 at 12:41 AM David Steele <david@pgmasters.net> wrote:\n\n> Hi Luis,\n>\n> On 1/29/20 11:05 AM, Peter Eisentraut wrote:\n> > On 2020-01-21 10:36, Luis Carril wrote:\n> >>> Yes we can support --include-foreign-data without parallel option and\n> >>> later add support for parallel option as a different patch.\n> >>\n> >> Hi,\n> >>\n> >> I've attached a new version of the patch in which an error is\n> >> emitted if the parallel backup is used with the --include-foreign-data\n> >> option.\n> >\n> > This seems like an overreaction. The whole point of\n> > lockTableForWorker() is to avoid deadlocks, but foreign tables don't\n> > have locks, so it's not a problem. 
I think you can just skip foreign\n> > tables in lockTableForWorker() using the same logic that getTables()\n> uses.\n> >\n> > I think parallel data dump would be an especially interesting option\n> > when using foreign tables, so it's worth figuring this out.\n>\n> What do you think of Peter's comment?\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n>\n>\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 4 Mar 2020 22:09:19 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Hi everyone,\n\nI am just responding on the latest mail on this thread. But the question is about functionality. The proposal is to add a single flag --include-foreign-data which controls whether or not data is dumped for all the foreign tables in a database. That may not serve the purpose. A foreign table may point to a view, materialized view or inheritance tree, and so on. A database can have foreign tables pointing to all of those kinds. Restoring data to a view won't be possible and restoring it into an inheritance tree would insert it into the parent only and not the children. Further, a user may not want the data to be dumped for all the foreign tables since their usages are different esp. considering restore. I think a better option is to extract data in a foreign table using --table if that's the only usage. Otherwise, we need a foreign table level flag indicating whether pg_dump should dump the data for that foreign table or not.\n\nThe option enables the user to dump data of tables backed by a specific foreign_server. It is up to the user to guarantee that the foreign server is also writable, that is the reason to make the option opt-in. The option can be combined with --table to dump specific tables if needed.
If the user has different foreign servers in the database has to make the conscious decision of dumping each one of them. Without this option the user is totally unable to do it.\n\n\n> On 2020-01-21 10:36, Luis Carril wrote:\n>>> Yes we can support --include-foreign-data without parallel option and\n>>> later add support for parallel option as a different patch.\n>>\n>> Hi,\n>>\n>> I've attached a new version of the patch in which an error is\n>> emitted if the parallel backup is used with the --include-foreign-data\n>> option.\n>\n> This seems like an overreaction. The whole point of\n> lockTableForWorker() is to avoid deadlocks, but foreign tables don't\n> have locks, so it's not a problem. I think you can just skip foreign\n> tables in lockTableForWorker() using the same logic that getTables() uses.\n>\n> I think parallel data dump would be an especially interesting option\n> when using foreign tables, so it's worth figuring this out.\n\nWhat do you think of Peter's comment?\nI took a look at it, we could skip foreign tables by checking the catalog in lockTableForWorker but this would imply an extra query per call to the function (as in getTables), which would be irrelevant for most of the cases. 
Or we could pass in the TocEntry that it is a foreign table (although that seems highly specific).\nAlso, would it not be possible to offer support of LOCK TABLE on foreign tables?\n\nAt this point I would like to leave the patch as is, and discuss further improvement in a future patch.\n\nLuis M.\n\n________________________________\nFrom: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nSent: Wednesday, March 4, 2020 5:39 PM\nTo: David Steele <david@pgmasters.net>\nCc: Luis Carril <luis.carril@swarm64.com>; vignesh C <vignesh21@gmail.com>; Peter Eisentraut <peter.eisentraut@2ndquadrant.com>; Alvaro Herrera <alvherre@2ndquadrant.com>; Daniel Gustafsson <daniel@yesql.se>; Laurenz Albe <laurenz.albe@cybertec.at>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Option to dump foreign data in pg_dump\n\nI am just responding on the latest mail on this thread. But the question is about functionality. The proposal is to add a single flag --include-foreign-data which controls whether or not data is dumped for all the foreign tables in a database. That may not serve the purpose. A foreign table may point to a view, materialized view or inheritance tree, and so on. A database can have foreign tables pointing to all of those kinds. Restoring data to a view won't be possible and restoring it into an inheritance tree would insert it into the parent only and not the children. Further, a user may not want the data to be dumped for all the foreign tables since their usages are different esp. considering restore. I think a better option is to extract data in a foreign table using --table if that's the only usage. 
Otherwise, we need a foreign table level flag indicating whether pg_dump should dump the data for that foreign table or not.\n\nOn Wed, Mar 4, 2020 at 12:41 AM David Steele <david@pgmasters.net<mailto:david@pgmasters.net>> wrote:\nHi Luis,\n\nOn 1/29/20 11:05 AM, Peter Eisentraut wrote:\n> On 2020-01-21 10:36, Luis Carril wrote:\n>>> Yes we can support --include-foreign-data without parallel option and\n>>> later add support for parallel option as a different patch.\n>>\n>> Hi,\n>>\n>> I've attached a new version of the patch in which an error is\n>> emitted if the parallel backup is used with the --include-foreign-data\n>> option.\n>\n> This seems like an overreaction. The whole point of\n> lockTableForWorker() is to avoid deadlocks, but foreign tables don't\n> have locks, so it's not a problem. I think you can just skip foreign\n> tables in lockTableForWorker() using the same logic that getTables() uses.\n>\n> I think parallel data dump would be an especially interesting option\n> when using foreign tables, so it's worth figuring this out.\n\nWhat do you think of Peter's comment?\n\nRegards,\n--\n-David\ndavid@pgmasters.net<mailto:david@pgmasters.net>\n\n\n\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 5 Mar 2020 13:51:35 +0000", "msg_from": "Luis Carril <luis.carril@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "Hi Luis,\n\nPlease don't top post. Also be careful to quote prior text when \nreplying. Your message was pretty hard to work through -- i.e. figuring \nout what you said vs. 
what you were replying to.\n\nOn 3/5/20 8:51 AM, Luis Carril wrote:\n> \n> At this point I would like to leave the patch as is, and discuss further \n> improvement in a future patch.\n\nI have marked this as Need Review since the author wants the patch \nconsidered as-is.\n\nI think Ashutosh, at least, has concerns about the patch as it stands, \nbut does anyone else want to chime in?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 16 Mar 2020 10:18:31 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-Jan-29, Peter Eisentraut wrote:\n\n> On 2020-01-21 10:36, Luis Carril wrote:\n> > > Yes we can support --include-foreign-data without parallel option and\n> > > later add support for parallel option as a different patch.\n> > \n> > I've attached a new version of the patch in which an error is\n> > emitted if the parallel backup is used with the --include-foreign-data\n> > option.\n> \n> This seems like an overreaction. The whole point of lockTableForWorker() is\n> to avoid deadlocks, but foreign tables don't have locks, so it's not a\n> problem. 
I think you can just skip foreign tables in lockTableForWorker()\n> using the same logic that getTables() uses.\n> \n> I think parallel data dump would be an especially interesting option\n> when using foreign tables, so it's worth figuring this out.\n\nI agree it would be nice to implement this, so I tried to implement it.\n\nI found it's not currently workable, because parallel.c only has a tocEntry\nto work with, not a DumpableObject, so it doesn't know that the table is\nforeign; to find that out, parallel.c could use findObjectByDumpId, but\nparallel.c is used by both pg_dump and pg_restore, and findObjectByDumpId is\nin common.c which cannot be linked in pg_restore because of numerous\nincompatibilities.\n\nOne way to make this work would be to put lockTableForWorker somewhere other\nthan parallel.c. For example, maybe have CreateArchive() set up a new \"lock\ntable\" ArchiveHandle function ptr that parallel.c can call;\nlockTableForWorker() becomes the pg_dump implementation of that, while\npg_restore uses NULL.\n\nAnyway, I think Luis has it right that this should not be a blocker for\nthis feature.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 23 Mar 2020 17:09:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-Mar-23, Alvaro Herrera wrote:\n\n> > This seems like an overreaction. The whole point of lockTableForWorker() is\n> > to avoid deadlocks, but foreign tables don't have locks, so it's not a\n> > problem. 
I think you can just skip foreign tables in lockTableForWorker()\n> > using the same logic that getTables() uses.\n> > \n> > I think parallel data dump would be an especially interesting option when\n> > using foreign tables, so it's worth figuring this out.\n> \n> I agree it would be nice to implement this, so I tried to implement it.\n\n(Here's a patch for this, which of course doesn't compile)\n\ndiff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c\nindex c25e3f7a88..b3000da409 100644\n--- a/src/bin/pg_dump/parallel.c\n+++ b/src/bin/pg_dump/parallel.c\n@@ -1316,17 +1316,33 @@ IsEveryWorkerIdle(ParallelState *pstate)\n * then we know that somebody else has requested an ACCESS EXCLUSIVE lock and\n * so we have a deadlock. We must fail the backup in that case.\n */\n+#include \"pg_dump.h\"\n+#include \"catalog/pg_class_d.h\"\n static void\n lockTableForWorker(ArchiveHandle *AH, TocEntry *te)\n {\n \tconst char *qualId;\n \tPQExpBuffer query;\n \tPGresult *res;\n+\tDumpableObject *obj;\n \n \t/* Nothing to do for BLOBS */\n \tif (strcmp(te->desc, \"BLOBS\") == 0)\n \t\treturn;\n \n+\t/*\n+\t * Nothing to do for foreign tables either, since they don't support LOCK\n+\t * TABLE.\n+\t */\n+\tobj = findObjectByDumpId(te->dumpId);\n+\tif (obj->objType == DO_TABLE_DATA)\n+\t{\n+\t\tTableInfo *tabinfo = (TableInfo *) obj;\n+\n+\t\tif (tabinfo->relkind == RELKIND_FOREIGN_TABLE)\n+\t\t\treturn;\n+\t}\n+\n \tquery = createPQExpBuffer();\n \n \tqualId = fmtQualifiedId(te->namespace, te->tag);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 23 Mar 2020 17:17:13 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "v8 attached.\n\nI modified Luis' v7 a little bit by putting the ftserver acquisition in\nthe main pg_class query instead of adding one 
separate query for each\nforeign table. That seems better overall.\n\nI don't understand why this code specifically disallows the empty string\nas an option to --dump-foreign-data. The other pattern-matching options\ndon't do that. This seems to have been added in response to Daniel's\nreview[1], but I don't quite understand the rationale. No other option\nbehaves that way. I'm inclined to remove that, and I have done so in\nthis version.\n\nI removed DumpOptions new bool flag. Seems pointless; we can just check\nthat the list is not null, as we do for other such lists.\n\nI split out the proposed test in a different commit; there's no\nconsensus that this test is acceptable as-is. Tom proposed a different\nstrategy[2]; if you try to dump a table with a dummy handler, you'll get\nthis:\n\nCOPY public.ft1 (c1, c2, c3) FROM stdin;\npg_dump: error: query failed: ERROR: foreign-data wrapper \"dummy\" has no handler\npg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;\n\nMaybe what we should do just verify that you do get that error (and no\nother errors).\n\n[1] https://postgr.es/m/E9C5B25C-52E4-49EC-9958-69CD5BD14EDA@yesql.se\n[2] https://postgr.es/m/8001.1573759651@sss.pgh.pa.us\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 23 Mar 2020 17:40:21 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "> On 23 Mar 2020, at 21:40, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> I don't understand why this code specifically disallows the empty string\n> as an option to --dump-foreign-data. The other pattern-matching options\n> don't do that. This seems to have been added in response to Daniel's\n> review[1], but I don't quite understand the rationale. No other option\n> behaves that way. 
I'm inclined to remove that, and I have done so in\n> this version.\n\nIt was a response to the discussion upthread about not allowing a blanket dump-\neverything statement for foreign data, but rather require some form of opt-in.\nThe empty string made the code wildcard to all foreign data, which was thought\nof as being a footgun for creating problematic dumps.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 24 Mar 2020 07:47:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-Mar-23, Alvaro Herrera wrote:\n\n> COPY public.ft1 (c1, c2, c3) FROM stdin;\n> pg_dump: error: query failed: ERROR: foreign-data wrapper \"dummy\" has no handler\n> pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;\n> \n> Maybe what we should do just verify that you do get that error (and no\n> other errors).\n\nDone that way. Will be pushing this shortly.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 24 Mar 2020 17:21:06 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-Mar-24, Alvaro Herrera wrote:\n\n> On 2020-Mar-23, Alvaro Herrera wrote:\n> \n> > COPY public.ft1 (c1, c2, c3) FROM stdin;\n> > pg_dump: error: query failed: ERROR: foreign-data wrapper \"dummy\" has no handler\n> > pg_dump: error: query was: COPY (SELECT c1, c2, c3 FROM public.ft1 ) TO stdout;\n> > \n> > Maybe what we should do just verify that you do get that error (and no\n> > other errors).\n> \n> Done that way. 
Will be pushing this shortly.\n\nHmm, but travis is failing on the cfbot, and I can't see why ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 19:22:45 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" }, { "msg_contents": "On 2020-Mar-24, Alvaro Herrera wrote:\n\n> Hmm, but travis is failing on the cfbot, and I can't see why ...\n\nMy only guess, without further output, is that getopt_long is not liking\nthe [ \"--include-foreign-data\", \"xxx\" ] style of arguments in the Perl\narray of the command to run (which we don't use anywhere else in\nthe files I looked), so I changed it to [ \"--include-foreign-data=xxx\" ].\nIf this was not the problem, we'll need more info, which the buildfarm\nwill give us.\n\nAnd pushed. Thanks, Luis, and thanks to all reviewers.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:25:53 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Option to dump foreign data in pg_dump" } ]
[ { "msg_contents": "Hi\n\nI returned to the possibility to sort the output of \\d* and \\l by size. There\nwere more experiments in this area, but without success. The last patch was an\nexample of over-engineering, and now I try to implement this feature as simply\nas possible. I don't think we need a too complex solution -\nif somebody needs a specific report, then it is not hard to run psql with the\n\"-E\" option, get and modify the used query (and use the power of SQL). But\ndisplaying database objects sorted by size is a very common case.\n\nThis proposal is based on a new psql variable \"SORT_BY_SIZE\". This variable\nwill be off by default. The value of this variable is used only in verbose\nmode (when the size is displayed - I don't see any benefit in sorting by size\nwithout showing the size). Usage is very simple, and so is the implementation:\n\n\\dt -- sorted by schema, name\n\\dt+ -- still sorted by schema, name\n\n\\set SORT_BY_SIZE on\n\\dt -- sorted by schema, name (size is not calculated and is not visible)\n\\dt+ -- sorted by size\n\n\\dt+ public.* -- sorted by size from schema public\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Fri, 28 Jun 2019 17:12:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - patch: psql - sort_by_size" }, { "msg_contents": "\nHello Pavel,\n\n> \\set SORT_BY_SIZE on\n> \\dt -- sorted by schema, name (size is not calculated and is not visible)\n> \\dt+ -- sorted by size\n\nPatch applies cleanly, compiles, runs. \"make check\" ok. doc build ok.\n\nThere are no tests. Some infrastructure should be in place so that such \nfeatures can be tested, eg so psql-specific TAP tests. ISTM that there was \na patch submitted for that, but I cannot find it:-( Maybe it is combined \nwith some other patch in the CF.\n\nI agree that the simpler the better for such a feature.\n\nISTM that the fact that the option is ignored on \\dt is a little bit \nannoying. 
It means that \\dt and \\dt+ would not show their results in the \nsame order. I understand that the point is to avoid the cost of computing \nthe sizes, but if the user asked for it, should it be done anyway?\n\nI'm wondering whether it would make sense to have a slightly more generic \ninterface allowing for more values, eg:\n\n  \\set DESCRIPTION_SORT \"name\"\n  \\set DESCRIPTION_SORT \"size\"\n\nWell, possibly this is a bad idea, so it is really a question.\n\n\n+   Setting this variable to <literal>on</literal> causes so results of\n+   <literal>\\d*</literal> commands will be sorted by size, when size\n+   is displayed.\n\nMaybe the simpler: \"Setting this variable on sorts \\d* outputs by size, \nwhen size is displayed.\"\n\nISTM that the documentation is more generic than reality. Does it work \nwith \\db+? It seems to work with \\dm+.\n\nOn equality, ISTM it should sort by name as a secondary criterion.\n\nI tested a few cases, although not partitioned tables.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 29 Jun 2019 09:32:21 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "so 29. 6. 2019 v 9:32 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n\n>\n> Hello Pavel,\n>\n> > \\set SORT_BY_SIZE on\n> > \\dt -- sorted by schema, name (size is not calculated and is not visible)\n> > \\dt+ -- sorted by size\n>\n> Patch applies cleanly, compiles, runs. \"make check\" ok. doc build ok.\n>\n> There are no tests. Some infrastructure should be in place so that such\n> features can be tested, eg so psql-specific TAP tests. ISTM that there was\n> a patch submitted for that, but I cannot find it:-( Maybe it is combined\n> with some other patch in the CF.\n>\n\nIt is not possible - the size of relations is not stable (can be different\non some platforms), and because showing the size is the base of this patch, I\ncannot write tests. 
Maybe only the set/unset of the variable.\n\n\n>\n> I agree that the simpler the better for such a feature.\n>\n> ISTM that the fact that the option is ignored on \\dt is a little bit\n> annoying. It means that \\dt and \\dt+ would not show their results in the\n> same order. I understand that the point is to avoid the cost of computing\n> the sizes, but if the user asked for it, should it be done anyway?\n>\n\nIt was one objection against some previous patches. At this moment I don't\nsee anything wrong with a different order between \\dt and \\dt+. When the\ncolumn \"size\" is displayed, then the ordering of the report will be clear.\n\nI am not strongly against this - implementing support of SORT_BY_SIZE for\nnon-verbose mode is +/- a few lines more. But now (and it is just my opinion\nand feeling, nothing more), I think sorting reports by invisible\ncolumns can be messy. But if somebody has a strongly different opinion on\nthis point, I am able to accept it. Both variants can have some sense, and\nsome benefits - both variants are consistent with some rules (but cannot be\ntogether).\n\n\n> I'm wondering whether it would make sense to have a slightly more generic\n> interface allowing for more values, eg:\n>\n> \\set DESCRIPTION_SORT \"name\"\n> \\set DESCRIPTION_SORT \"size\"\n>\n> Well, possibly this is a bad idea, so it is really a question.\n>\n\nWe were at this point already :). If you introduce this, then you have to\nsupport combinations schema_name, name_schema, size, schema_size, ...\n\nMy goal is the implementation of the most common missing alternative in psql -\nbut I would not do a too generic implementation - it needs a more complex\ndesign (and UI), and I don't think people would use it. SORT_BY_SIZE (on/off)\nlooks simple, and because (if it will not be changed) it has no impact on\nnon-verbose mode, it can be active permanently (and if not, it is not\nhard mental work to set it).\n\nI think a more generic solution needs an interactive UI. 
Now I am working on\nvertical cursor support for pspg https://github.com/okbob/pspg. The next step\nwill be sorting by the column under the vertical cursor. So, I hope, it can be\ngood enough for simply sorting by any column of a report (but to be user\nfriendly, it needs an interactive UI). Because pspg is not installed\neverywhere, I would like to push some simple solution (I prefer simplicity\nagainst generic) to psql.\n\n\n\n>\n> +   Setting this variable to <literal>on</literal> causes so results of\n> +   <literal>\\d*</literal> commands will be sorted by size, when size\n> +   is displayed.\n>\n> Maybe the simpler: \"Setting this variable on sorts \\d* outputs by size,\n> when size is displayed.\"\n>\n> ISTM that the documentation is more generic than reality. Does it work\n> with \\db+? It seems to work with \\dm+.\n>\n> On equality, ISTM it should sort by name as a secondary criterion.\n>\n> I tested a few cases, although not partitioned tables.\n>\n\nThank you - I now support relations (tables, indexes, ...), databases, and\ntablespaces. The column size is displayed for the data types report, but I am\nnot sure about any benefit in this case.\n\nRegards\n\nPavel\n\n\n> --\n> Fabien.\n>\n>\n>", "msg_date": "Sat, 29 Jun 2019 10:19:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "Hi\n\nso 29. 6. 2019 v 10:19 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 29. 6. 2019 v 9:32 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\n> napsal:\n>\n>>\n>> Hello Pavel,\n>>\n>> > \\set SORT_BY_SIZE on\n>> > \\dt -- sorted by schema, name (size is not calculated and is not\n>> visible)\n>> > \\dt+ -- sorted by size\n>>\n>> Patch applies cleanly, compiles, runs. \"make check\" ok. doc build ok.\n>>\n>> There are no tests. Some infrastructure should be in place so that such\n>> features can be tested, eg so psql-specific TAP tests. 
ISTM that there\n>> was\n>> a patch submitted for that, but I cannot find it:-( Maybe it is combined\n>> with some other patch in the CF.\n>>\n>\n> It is not possible - the size of relations is not stable (can be different\n> on some platforms), and because showing the size is base of this patch, I\n> cannot to write tests. Maybe only only set/unset of variable.\n>\n>\n>>\n>> I agree that the simpler the better for such a feature.\n>>\n>> ISTM that the fact that the option is ignored on \\dt is a little bit\n>> annoying. It means that \\dt and \\dt+ would not show their results in the\n>> same order. I understand that the point is to avoid the cost of computing\n>> the sizes, but if the user asked for it, should it be done anyway?\n>>\n>\n> It was one objection against some previous patches. In this moment I don't\n> see any wrong on different order between \\dt and \\dt+. When column \"size\"\n> will be displayed, then ordering of report will be clean.\n>\n> I am not strongly against this - implementation of support SORT_BY_SIZE\n> for non verbose mode is +/- few lines more. But now (and it is just my\n> opinion and filing, nothing more), I think so sorting reports by invisible\n> columns can be messy. But if somebody will have strong different option on\n> this point, I am able to accept it. Both variants can have some sense, and\n> some benefits - both variants are consistent with some rules (but cannot be\n> together).\n>\n>\n>> I'm wondering whether it would make sense to have a slightly more generic\n>> interface allowing for more values, eg:\n>>\n>> \\set DESCRIPTION_SORT \"name\"\n>> \\set DESCRIPTION_SORT \"size\"\n>>\n>> Well, possibly this is a bad idea, so it is really a question.\n>>\n>\n> We was at this point already :). 
If you introduce this, then you have to\n> support combinations schema_name, name_schema, size, schema_size, ...\n>\n> My goal is implementation of most common missing alternative into psql -\n> but I would not to do too generic implementation - it needs more complex\n> design (and UI), and I don't think so people use it. SORT_BY_SIZE (on/off)\n> looks simply, and because (if will not be changed) it has not impact on non\n> verbose mode, then it can be active permanently (and if not, it is not\n> mental hard work to set it).\n>\n> I think so more generic solution needs interactive UI. Now I working on\n> vertical cursor support for pspg https://github.com/okbob/pspg. Next step\n> will be sort by column under vertical cursor. So, I hope, it can be good\n> enough for simply sorting by any column of report (but to be user friendly,\n> it needs interactive UI). Because not everywhere is pspg installed, I would\n> to push some simple solution (I prefer simplicity against generic) to psql.\n>\n>\n>\n>>\n>> + Setting this variable to <literal>on</literal> causes so results of\n>> + <literal>\\d*</literal> commands will be sorted by size, when size\n>> + is displayed.\n>>\n>> Maybe the simpler: \"Setting this variable on sorts \\d* outputs by size,\n>> when size is displayed.\"\n>>\n>\nI used this text in today patch\n\nRegards\n\nPavel\n\n\n>\n>> ISTM that the documentation is more generic than reality. Does it work\n>> with \\db+? It seems to work with \\dm+.\n>>\n>> On equality, ISTM it it should sort by name as a secondary criterion.\n>>\n>> I tested a few cases, although not partitioned tables.\n>>\n>\n> Thank you - I support now relations (tables, indexes, ), databases, and\n> tablespaces. 
The column size is displayed for data types report, but I am\n> not sure about any benefit in this case.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Fabien.\n>>\n>>\n>>", "msg_date": "Sun, 30 Jun 2019 10:47:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "On Sun, Jun 30, 2019 at 8:48 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I used this text in today patch\n\nHi Pavel,\n\nCould you please post a rebased patch?\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2019 16:11:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "Hi\n\npo 8. 7. 2019 v 6:12 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sun, Jun 30, 2019 at 8:48 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I used this text in today patch\n>\n> Hi Pavel,\n>\n> Could you please post a rebased patch?\n>\n\nrebased patch attached\n\nRegards\n\nPavel\n\n>\n> Thanks,\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>", "msg_date": "Mon, 8 Jul 2019 06:57:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "\nHello Pavel,\n\n> rebased patch attached\n\nI prefer patches with a number rather than a date, if possible. For one \nthing, there may be several updates in one day.\n\nAbout this version (20180708, probably v3): applies cleanly, compiles, \nmake check ok, doc build ok. No tests.\n\nIt works for me on a few manual tests against a 11.4 server.\n\nDocumentation: if you say \"\\d*+\", then it already applies to \\db+ and \n\\dP+, so why listing them? Otherwise, state all commands or make it work \non all commands that have a size?\n\nAbout the text:\n - remove , before \"sorts\"\n - ... 
outputs by decreasing size, when size is displayed.\n - add: When size is not displayed, the output is sorted by names.\n\nI still think that the object name should be kept as a secondary sort \ncriterion, in case of size equality, so that the output is deterministic. \nHaving plenty of objects of the same size out of alphabetical order looks \nvery strange.\n\nI still do not like much the boolean approach. I understand that the name \napproach has been rejected, and I can understand why.\n\nI've been thinking about another more generic interface, that I'm putting \nhere for discussion, I do not claim that it is a good idea. Probably could \nfall under \"over engineering\", but it might not be much harder to \nimplement, and it solves a few potential problems.\n\nThe idea is to add an option to \\d commands, such as \"\\echo -n\":\n\n \\dt+ [-o 1d,2a] ...\n\nmeaning do the \\dt+, order by column 1 descending, column 2 ascending.\nWith this there would be no need for a special variable nor other \nextensions to specify some ordering, whatever the user wishes.\n\nMaybe it could be \"\\dt+ [-o '1 DESC, 2 ASC'] ...\" so that the string\nis roughly used as an ORDER BY specification by the query, but it would be \nlonger to specify.\n\nIt also solves the issue that if someone wants another sorting order we \nwould end with competing boolean variables such as SORT_BY_SIZE, \nSORT_BY_TYPE, SORT_BY_SCHEMA, which would be pretty unpractical. The \nboolean approach works for *one* sorting extension and breaks at the next \nextension.\n\nAlso, the boolean does not say that it is a descending order. I could be \ninterested in looking at the small tables.\n\nAnother benefit for me is that I do not like much variables with side \neffects, whereas with an explicit syntax there would be no such thing, the \nuser has what was asked for. 
Ok, psql is full of them, but I cannot say I \nlike it for that.\n\nThe approach could be extended to specify a limit, eg \\dt -l 10 would\nadd a LIMIT 10 on the query.\n\nAlso, the implementation could be high enough so that the description \nhandlers would not have to deal with it individually, it could return\nthe query which would then be completed with SORT/LIMIT clauses before \nbeing executed, possibly with a default order if none is specified.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 12 Jul 2019 15:10:26 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "pá 12. 7. 2019 v 15:10 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> Hello Pavel,\n>\n> > rebased patch attached\n>\n> I prefer patches with a number rather than a date, if possible. For one\n> thing, there may be several updates in one day.\n>\n> About this version (20180708, probably v3): applies cleanly, compiles,\n> make check ok, doc build ok. No tests.\n>\n> It works for me on a few manual tests against a 11.4 server.\n>\n> Documentation: if you say \"\\d*+\", then it already applies to \\db+ and\n> \\dP+, so why listing them? Otherwise, state all commands or make it work\n> on all commands that have a size?\n>\n\n\\dT+ show sizes too, and there is a mix of values \"1, 4, 8, 12, 24, var\". I\ndon't think so sort by size there has sense\n\n\n> About the text:\n> - remove , before \"sorts\"\n> - ... outputs by decreasing size, when size is displayed.\n> - add: When size is not displayed, the output is sorted by names.\n>\n\nok\n\n>\n> I still think that the object name should be kept as a secondary sort\n> criterion, in case of size equality, so that the output is deterministic.\n> Having plenty of objects of the same size out of alphabetical order looks\n> very strange.\n>\n> I still do not like much the boolean approach. 
I understand that the name\n> approach has been rejected, and I can understand why.\n>\n> I've been thinking about another more generic interface, that I'm putting\n> here for discussion, I do not claim that it is a good idea. Probably could\n> fall under \"over engineering\", but it might not be much harder to\n> implement, and it solves a few potential problems.\n>\n> The idea is to add an option to \\d commands, such as \"\\echo -n\":\n>\n> \\dt+ [-o 1d,2a] ...\n>\n> meaning do the \\dt+, order by column 1 descending, column 2 ascending.\n> With this there would be no need for a special variable nor other\n> extensions to specify some ordering, whatever the user wishes.\n>\n> Maybe it could be \"\\dt+ [-o '1 DESC, 2 ASC'] ...\" so that the string\n> is roughly used as an ORDER BY specification by the query, but it would be\n> longer to specify.\n>\n\nI have two objections - although I think so this functionality can coexists\nwith functionality implemented by this patch\n\n1. You cannot use column number for sort by size, because this value is\nprettified (use pg_size_pretty).\n\n2. Because @1, then there is not simple solution for sort by size\n\n3. This extension should be generic, and then it will be much bigger patch\n\n\n> It also solves the issue that if someone wants another sorting order we\n> would end with competing boolean variables such as SORT_BY_SIZE,\n> SORT_BY_TYPE, SORT_BY_SCHEMA, which would be pretty unpractical. The\n> boolean approach works for *one* sorting extension and breaks at the next\n> extension.\n>\n> Also, the boolean does not say that it is a descending order. I could be\n> interested in looking at the small tables.\n>\n> Another benefit for me is that I do not like much variables with side\n> effects, whereas with an explicit syntax there would be no such thing, the\n> user has what was asked for. 
Ok, psql is full of them, but I cannot say I\n> like it for that.\n>\n> The approach could be extended to specify a limit, eg \\dt -l 10 would\n> add a LIMIT 10 on the query.\n>\n\nIt is common problem - when you do some repeated task, then you want to do\nquickly. But sometimes you would to do some specialized task, and then you\nshould to overwrite default setting easy.\n\nGood system should to support both. But commands that allows\nparametrization can be hard for learning, hard for use. There are lot of\nusers of \"vim\" or \"emacs\", but most users prefers \"notepad\".\n\nAll is about searching some compromise.\n\n\n\n> Also, the implementation could be high enough so that the description\n> handlers would not have to deal with it individually, it could return\n> the query which would then be completed with SORT/LIMIT clauses before\n> being executed, possibly with a default order if none is specified.\n>\n\nI don't think so your proposal is bad, and it is not in conflict with this\npatch, but it\n\na) doesn't solve SORT BY SIZE problem\nb) requires modification of parser of any related \\command - so it will be\nbigger and massive patch.\n\nIn this moment I prefer my simple implementation still. My patch is related\njust for few describe commands. Your proposal should be really generic\n(there is not a reason limit it just for reports with size)\n\nSimple boolean design doesn't block any enhancing of future. The effect of\nSORT_BY_SIZE variable can be overwritten by some specialized future option\nused inside \\command.\n\nRegards\n\nPavel\n\n\n>\n> --\n> Fabien.\n>\n>\n>", "msg_date": "Fri, 12 Jul 2019 17:59:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "Hi\n\npá 12. 7. 2019 v 15:10 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> Hello Pavel,\n>\n> > rebased patch attached\n>\n> I prefer patches with a number rather than a date, if possible. For one\n> thing, there may be several updates in one day.\n>\n> About this version (20180708, probably v3): applies cleanly, compiles,\n> make check ok, doc build ok. No tests.\n>\n\nattached version 4\n\n\n> It works for me on a few manual tests against a 11.4 server.\n>\n> Documentation: if you say \"\\d*+\", then it already applies to \\db+ and\n> \\dP+, so why listing them? Otherwise, state all commands or make it work\n> on all commands that have a size?\n>\n> About the text:\n> - remove , before \"sorts\"\n> - ... outputs by decreasing size, when size is displayed.\n> - add: When size is not displayed, the output is sorted by names.\n>\n\nfixed\n\n\n> I still think that the object name should be kept as a secondary sort\n> criterion, in case of size equality, so that the output is deterministic.\n>\n\nfixed\n\nRegards\n\nPavel\n\n>\n> I still do not like much the boolean approach. I understand that the name\n> approach has been rejected, and I can understand why.\n>\n> I've been thinking about another more generic interface, that I'm putting\n> here for discussion, I do not claim that it is a good idea. 
Probably could\n> fall under \"over engineering\", but it might not be much harder to\n> implement, and it solves a few potential problems.\n>\n> The idea is to add an option to \\d commands, such as \"\\echo -n\":\n>\n> \\dt+ [-o 1d,2a] ...\n>\n> meaning do the \\dt+, order by column 1 descending, column 2 ascending.\n> With this there would be no need for a special variable nor other\n> extensions to specify some ordering, whatever the user wishes.\n>\n> Maybe it could be \"\\dt+ [-o '1 DESC, 2 ASC'] ...\" so that the string\n> is roughly used as an ORDER BY specification by the query, but it would be\n> longer to specify.\n>\n> It also solves the issue that if someone wants another sorting order we\n> would end with competing boolean variables such as SORT_BY_SIZE,\n> SORT_BY_TYPE, SORT_BY_SCHEMA, which would be pretty unpractical. The\n> boolean approach works for *one* sorting extension and breaks at the next\n> extension.\n>\n> Also, the boolean does not say that it is a descending order. I could be\n> interested in looking at the small tables.\n>\n> Another benefit for me is that I do not like much variables with side\n> effects, whereas with an explicit syntax there would be no such thing, the\n> user has what was asked for. 
Ok, psql is full of them, but I cannot say I\n> like it for that.\n>\n> The approach could be extended to specify a limit, eg \\dt -l 10 would\n> add a LIMIT 10 on the query.\n>\n> Also, the implementation could be high enough so that the description\n> handlers would not have to deal with it individually, it could return\n> the query which would then be completed with SORT/LIMIT clauses before\n> being executed, possibly with a default order if none is specified.\n>\n> --\n> Fabien.\n>\n>\n>", "msg_date": "Mon, 15 Jul 2019 06:12:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "On Mon, 15 Jul 2019 at 06:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> pá 12. 7. 2019 v 15:10 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\n>>\n>>\n>> Hello Pavel,\n>>\n>> > rebased patch attached\n>>\n>> I prefer patches with a number rather than a date, if possible. For one\n>> thing, there may be several updates in one day.\n>>\n>> About this version (20180708, probably v3): applies cleanly, compiles,\n>> make check ok, doc build ok. No tests.\n>\n>\n> attached version 4\n>\n>>\n>> It works for me on a few manual tests against a 11.4 server.\n>>\n>> Documentation: if you say \"\\d*+\", then it already applies to \\db+ and\n>> \\dP+, so why listing them? Otherwise, state all commands or make it work\n>> on all commands that have a size?\n>>\n>> About the text:\n>> - remove , before \"sorts\"\n>> - ... 
outputs by decreasing size, when size is displayed.\n>> - add: When size is not displayed, the output is sorted by names.\n>\n>\n> fixed\n>\n>>\n>> I still think that the object name should be kept as a secondary sort\n>> criterion, in case of size equality, so that the output is deterministic.\n>> Having plenty of objects of the same size out of alphabetical order looks\n>> very strange.\n>\n>\n> fixed\n>\n> Regards\n>\n> Pavel\n>>\n>>\n>> I still do not like much the boolean approach. I understand that the name\n>> approach has been rejected, and I can understand why.\n>>\n>> I've been thinking about another more generic interface, that I'm putting\n>> here for discussion, I do not claim that it is a good idea. Probably could\n>> fall under \"over engineering\", but it might not be much harder to\n>> implement, and it solves a few potential problems.\n>>\n>> The idea is to add an option to \\d commands, such as \"\\echo -n\":\n>>\n>> \\dt+ [-o 1d,2a] ...\n>>\n>> meaning do the \\dt+, order by column 1 descending, column 2 ascending.\n>> With this there would be no need for a special variable nor other\n>> extensions to specify some ordering, whatever the user wishes.\n>>\n>> Maybe it could be \"\\dt+ [-o '1 DESC, 2 ASC'] ...\" so that the string\n>> is roughly used as an ORDER BY specification by the query, but it would be\n>> longer to specify.\n>>\n>> It also solves the issue that if someone wants another sorting order we\n>> would end with competing boolean variables such as SORT_BY_SIZE,\n>> SORT_BY_TYPE, SORT_BY_SCHEMA, which would be pretty unpractical. The\n>> boolean approach works for *one* sorting extension and breaks at the next\n>> extension.\n>>\n>> Also, the boolean does not say that it is a descending order. 
I could be\n>> interested in looking at the small tables.\n>>\n>> Another benefit for me is that I do not like much variables with side\n>> effects, whereas with an explicit syntax there would be no such thing, the\n>> user has what was asked for. Ok, psql is full of them, but I cannot say I\n>> like it for that.\n>>\n>> The approach could be extended to specify a limit, eg \\dt -l 10 would\n>> add a LIMIT 10 on the query.\n>>\n>> Also, the implementation could be high enough so that the description\n>> handlers would not have to deal with it individually, it could return\n>> the query which would then be completed with SORT/LIMIT clauses before\n>> being executed, possibly with a default order if none is specified.\n\nI had a look at this patch, seems like a useful thing to have.\nOne clarification though,\nWhat is the reason for compatibility with different versions in\nlistAllDbs and describeTablespaces, precisely\n\nif (verbose && pset.sversion >= 90200)\n+ {\n appendPQExpBuffer(&buf,\n \",\\n pg_catalog.pg_size_pretty(pg_catalog.pg_tablespace_size(oid))\nAS \\\"%s\\\"\",\n gettext_noop(\"Size\"));\n+ sizefunc = \"pg_catalog.pg_tablespace_size(oid)\";\n+ }\nin describeTablespaces but\nif (verbose && pset.sversion >= 80200)\n+ {\n appendPQExpBuffer(&buf,\n \",\\n CASE WHEN pg_catalog.has_database_privilege(d.datname,\n'CONNECT')\\n\"\n \" THEN\npg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname))\\n\"\n \" ELSE 'No Access'\\n\"\n \" END as \\\"%s\\\"\",\n gettext_noop(\"Size\"));\n+ sizefunc = \"pg_catalog.pg_database_size(d.datname)\";\n+ }\nin listAllDbs.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Wed, 31 Jul 2019 14:54:31 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "On Fri, Jun 28, 2019 at 10:13 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> I returned to possibility to sort output of \\d* and \\l by 
size. There was\n> more a experiments in this area, but without success. Last patch was\n> example of over engineering, and now, I try to implement this feature\n> simply how it is possible. I don't think so we need too complex solution -\n> if somebody needs specific report, then it is not hard to run psql with\n> \"-E\" option, get and modify used query (and use a power of SQL). But\n> displaying databases objects sorted by size is very common case.\n>\n> This proposal is based on new psql variable \"SORT_BY_SIZE\". This variable\n> will be off by default. The value of this variable is used only in verbose\n> mode (when the size is displayed - I don't see any benefit sort of size\n> without showing size). Usage is very simple and implementation too:\n>\n> \\dt -- sorted by schema, name\n> \\dt+ -- still sorted by schema, name\n>\n> \\set SORT_BY_SIZE on\n> \\dt -- sorted by schema, name (size is not calculated and is not visible)\n> \\dt+ -- sorted by size\n>\n> \\dt+ public.* -- sorted by size from schema public\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n>\nOne oddity about pg_relation_size and pg_table_size is that they can be\neasily blocked by user activity. In fact it happens to us often in\nreporting environments and we have instead written different versions of\nthem that avoid the lock contention and still give \"close enough\" results.\n\nThis blocking could result in quite unexpected behavior, that someone uses\nyour proposed command and it never returns. Has that been considered as a\nreality at least to be documented?\n\nThanks,\nJeremy", "msg_date": "Wed, 31 Jul 2019 08:18:56 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "\nHello Jeremy,\n\n>> Comments, notes?\n>\n> One oddity about pg_relation_size and pg_table_size is that they can be\n> easily blocked by user activity. In fact it happens to us often in\n> reporting environments and we have instead written different versions of\n> them that avoid the lock contention and still give \"close enough\" results.\n>\n> This blocking could result in quite unexpected behavior, that someone uses\n> your proposed command and it never returns. 
Has that been considered as a\n> reality at least to be documented?\n\nISTM that it does not change anything wrt the current behavior because of \nthe prudent lazy approach: the sorting is *only* performed when the size \nis already available in one of the printed column.\n\nMaybe the more general question could be \"is there a caveat somewhere that \nwhen doing \\d.+ a user may have issues with locks because of the size \ncomputations?\".\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 31 Jul 2019 15:40:04 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "On 2019-Jul-31, Rafia Sabih wrote:\n\n> I had a look at this patch, seems like a useful thing to have.\n\nSo the two initial questions for this patch are\n\n1. Is this a feature we want?\n2. Is the user interface correct?\n\nI think the feature is useful, and Rafia also stated as much. Therefore\nISTM we're okay on that front.\n\nAs for the UI, Fabien thinks the patch adopts one that's far too\nsimplistic, and I agree. Fabien has proposed a number of different UIs,\nbut doesn't seem convinced of any of them. One of them was to have\n\"options\" in the command,\n \\dt+ [-o 1d,2a]\n\nAnother idea is to use variables in a more general form. So instead of\nPavel's proposal of SORT_BY_SIZE=on we could do something like\nSORT_BY=[list]\nwhere the list after the equal sign consists of predetermined elements\n(say SIZE, NAME, SCHEMA and so on) and indicates a specific column to\nsort by. 
This is less succint than Fabien's idea, and in particular you\ncan't specify it in the command itself but have to set the variable\nbeforehand instead.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Sep 2019 19:01:26 -0300", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "čt 12. 9. 2019 v 0:01 odesílatel Alvaro Herrera from 2ndQuadrant <\nalvherre@alvh.no-ip.org> napsal:\n\n> On 2019-Jul-31, Rafia Sabih wrote:\n>\n> > I had a look at this patch, seems like a useful thing to have.\n>\n> So the two initial questions for this patch are\n>\n> 1. Is this a feature we want?\n> 2. Is the user interface correct?\n>\n> I think the feature is useful, and Rafia also stated as much. Therefore\n> ISTM we're okay on that front.\n>\n> As for the UI, Fabien thinks the patch adopts one that's far too\n> simplistic, and I agree. Fabien has proposed a number of different UIs,\n> but doesn't seem convinced of any of them. One of them was to have\n> \"options\" in the command,\n> \\dt+ [-o 1d,2a]\n>\n\n> Another idea is to use variables in a more general form. So instead of\n> Pavel's proposal of SORT_BY_SIZE=on we could do something like\n> SORT_BY=[list]\n> where the list after the equal sign consists of predetermined elements\n> (say SIZE, NAME, SCHEMA and so on) and indicates a specific column to\n> sort by. This is less succint than Fabien's idea, and in particular you\n> can't specify it in the command itself but have to set the variable\n> beforehand instead.\n>\n\nfor more generic design probably you need redesign psql report systems. You\ncannot to use just ORDER BY 1,2 on some columns, but you need to produce\n(and later hide) some content (for size).\n\nSo it can be unfunny complex patch. 
I finished sort inside pspg and I it\nlooks to be better solution, than increase complexity (and less\nmaintainability (due support old releases)).\n\nRegards\n\nPavel\n\n\n> --\n> Álvaro Herrera                https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Fri, 13 Sep 2019 09:35:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" }, { "msg_contents": "pá 13. 9. 2019 v 9:35 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 12. 9. 2019 v 0:01 odesílatel Alvaro Herrera from 2ndQuadrant <\n> alvherre@alvh.no-ip.org> napsal:\n>\n>> On 2019-Jul-31, Rafia Sabih wrote:\n>>\n>> > I had a look at this patch, seems like a useful thing to have.\n>>\n>> So the two initial questions for this patch are\n>>\n>> 1. Is this a feature we want?\n>> 2. Is the user interface correct?\n>>\n>> I think the feature is useful, and Rafia also stated as much. Therefore\n>> ISTM we're okay on that front.\n>>\n>> As for the UI, Fabien thinks the patch adopts one that's far too\n>> simplistic, and I agree. Fabien has proposed a number of different UIs,\n>> but doesn't seem convinced of any of them. One of them was to have\n>> \"options\" in the command,\n>> \\dt+ [-o 1d,2a]\n>>\n>\n>> Another idea is to use variables in a more general form. So instead of\n>> Pavel's proposal of SORT_BY_SIZE=on we could do something like\n>> SORT_BY=[list]\n>> where the list after the equal sign consists of predetermined elements\n>> (say SIZE, NAME, SCHEMA and so on) and indicates a specific column to\n>> sort by. This is less succint than Fabien's idea, and in particular you\n>> can't specify it in the command itself but have to set the variable\n>> beforehand instead.\n>>\n>\n> for more generic design probably you need redesign psql report systems.\n> You cannot to use just ORDER BY 1,2 on some columns, but you need to\n> produce (and later hide) some content (for size).\n>\n> So it can be unfunny complex patch. I finished sort inside pspg and I it\n> looks to be better solution, than increase complexity (and less\n> maintainability (due support old releases)).\n>\n\nI changed status for this patch to withdrawn\n\nI like a idea with enhancing \\dt about some clauses like \" \\dt+ [-o\n1d,2a]\". But it needs probably significant redesign of describe.c module.\nMaybe implementation of some simple query generator for queries to system\ncatalogue can good.\n\nSurely - this should be implemented from scratch - I am not a volunteer for\nthat.\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Álvaro Herrera                https://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>", "msg_date": "Fri, 13 Sep 2019 10:06:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - patch: psql - sort_by_size" } ]
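Although the SORT_BY_SIZE patch was withdrawn, the effect can be had by hand today, following the "-E" suggestion made early in the thread: run psql with -E to see the catalog query behind \dt+, then add an ORDER BY. A simplified sketch (this is not psql's verbatim generated query; the column list and relkind filter are trimmed down):

```sql
-- \dt+-style listing ordered by decreasing size, with schema/name as the
-- secondary sort keys Fabien asked for.  Simplified from psql's query.
SELECT n.nspname AS "Schema",
       c.relname AS "Name",
       pg_catalog.pg_size_pretty(pg_catalog.pg_table_size(c.oid)) AS "Size"
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
ORDER BY pg_catalog.pg_table_size(c.oid) DESC, n.nspname, c.relname;

-- Jeremy's caveat applies: pg_table_size() takes a lock on each relation
-- and can block behind concurrent DDL.  A lock-light "close enough"
-- variant reads the planner's page-count estimate from pg_class instead
-- (relpages is only as fresh as the last VACUUM/ANALYZE):
SELECT n.nspname, c.relname,
       pg_catalog.pg_size_pretty(
           c.relpages::bigint * current_setting('block_size')::bigint
       ) AS approx_size
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
ORDER BY c.relpages DESC, n.nspname, c.relname;
```

The second query trades accuracy for non-blocking behavior, which is the same trade-off behind the "close enough" size functions Jeremy describes.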
[ { "msg_contents": "Resending patch v2.2, looks like the previous submission did not get attached to the original thread.\r\n\r\nThis version fixed an issue that involves CTE. Because we call subquery_planner before deciding whether to proceed with the transformation, we need to setup access to upper level CTEs at this point if the subquery contains any CTE RangeTblEntry.\r\n\r\nAlso added more test cases of NOT IN accessing CTEs, including recursive CTE. It's nice that CTE can use index now!\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL\r\n \r\n\r\nOn 6/28/19, 12:02 PM, \"Li, Zheng\" <zhelli@amazon.com> wrote:\r\n\r\n Rebased patch is attached.\r\n \r\n Comments are welcome.\r\n \r\n -----------\r\n Zheng Li\r\n AWS, Amazon Aurora PostgreSQL\r\n \r\n \r\n On 6/14/19, 5:39 PM, \"zhengli\" <zhelli@amazon.com> wrote:\r\n \r\n In our patch, we only proceed with the ANTI JOIN transformation if\r\n subplan_is_hashable(subplan) is\r\n false, it requires the subquery to be planned at this point.\r\n \r\n To avoid planning the subquery again later on, I want to keep a pointer of\r\n the subplan in SubLink so that we can directly reuse the subplan when\r\n needed. However, this change breaks initdb for some reason and I'm trying to\r\n figure it out.\r\n \r\n I'll send the rebased patch in the following email since it's been a while.\r\n \r\n \r\n \r\n --\r\n Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html", "msg_date": "Fri, 28 Jun 2019 16:18:52 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "NOT IN subquery optimization" }, { "msg_contents": "On Sat, Jun 29, 2019 at 4:19 AM Li, Zheng <zhelli@amazon.com> wrote:>\n> Resending patch v2.2, looks like the previous submission did not get attached to the original thread.\n>\n> This version fixed an issue that involves CTE. 
Because we call subquery_planner before deciding whether to proceed with the transformation, we need to setup access to upper level CTEs at this point if the subquery contains any CTE RangeTblEntry.\n>\n> Also added more test cases of NOT IN accessing CTEs, including recursive CTE. It's nice that CTE can use index now!\n\nHi Zheng, Jim,\n\nWith my Commitfest doozer hat on, I have moved this entry to the\nSeptember 'fest.  I noticed in passing that it needs to be adjusted\nfor the new pg_list.h API.  It'd be good to get some feedback from\nreviewers on these two competing proposals:\n\nhttps://commitfest.postgresql.org/24/2020/\nhttps://commitfest.postgresql.org/24/2023/\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2019 10:57:45 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" }, { "msg_contents": "On 2019-Aug-02, Thomas Munro wrote:\n\n> Hi Zheng, Jim,\n> \n> With my Commitfest doozer hat on, I have moved this entry to the\n> September 'fest. I noticed in passing that it needs to be adjusted\n> for the new pg_list.h API. It'd be good to get some feedback from\n> reviewers on these two competing proposals:\n> \n> https://commitfest.postgresql.org/24/2020/\n> https://commitfest.postgresql.org/24/2023/\n\nHello,\n\nIs this patch dead?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Sep 2019 17:36:02 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: NOT IN subquery optimization" } ]
[ { "msg_contents": "Setting up a standby instance is still quite complicated. You need to\nrun pg_basebackup with all the right options. You need to make sure\npg_basebackup has the right permissions for the target directories. The\ncreated instance has to be integrated into the operating system's start\nscripts. There is this slightly awkward business of the --recovery-conf\noption and how it interacts with other features. And you should\nprobably run pg_basebackup under screen. And then how do you get\nnotified when it's done. And when it's done you have to log back in and\nfinish up. Too many steps.\n\nMy idea is that the postmaster can launch a base backup worker, wait\ntill it's done, then proceed with the rest of the startup. initdb gets\na special option to create a \"minimal\" data directory with only a few\nfiles, directories, and the usual configuration files. Then you create\na $PGDATA/basebackup.signal, start the postmaster as normal. It sees\nthe signal file, launches an auxiliary process that runs the base\nbackup, then proceeds with normal startup in standby mode.\n\nThis makes a whole bunch of things much nicer: The connection\ninformation for where to get the base backup from comes from\npostgresql.conf, so you only need to specify it in one place.\npg_basebackup is completely out of the picture; no need to deal with\ncommand-line options, --recovery-conf, screen, monitoring for\ncompletion, etc. If something fails, the base backup process can\nautomatically be restarted (maybe). Operating system integration is\nmuch easier: You only call initdb and then pg_ctl or postgres, as you\nare already doing. Automated deployment systems don't need to wait for\npg_basebackup to finish: You only call initdb, then start the server,\nand then you're done -- waiting for the base backup to finish can be\ndone by the regular monitoring system.\n\nAttached is a very hackish patch to implement this. 
It works like this:\n\n # (assuming you have a primary already running somewhere)\n initdb -D data2 --minimal\n $EDITOR data2/postgresql.conf # set primary_conninfo\n pg_ctl -D data2 start\n\n(Curious side note: If you don’t set primary_conninfo in these steps,\nthen libpq defaults apply, so the default behavior might end up being\nthat a given instance attempts to replicate from itself.)\n\nIt works for basic cases. It's missing tablespace support, proper\nfsyncing, progress reporting, probably more. Those would be pretty\nstraightforward I think. The interesting bit is the delicate ordering\nof the postmaster startup: Normally, the pg_control file is read quite\nearly, but if starting from a minimal data directory, we need to wait\nuntil the base backup is done. There is also the question what you do\nif the base backup fails halfway through. Currently you probably need\nto delete the whole data directory and start again with initdb. Better\nmight be a way to start again and overwrite any existing files, but that\ncan clearly also be dangerous. All this needs some careful analysis,\nbut I think it's doable.\n\nAny thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 22:05:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "base backup client as auxiliary backend process" }, { "msg_contents": "On Sun, Jun 30, 2019 at 8:05 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Attached is a very hackish patch to implement this. It works like this:\n>\n> # (assuming you have a primary already running somewhere)\n> initdb -D data2 --minimal\n> $EDITOR data2/postgresql.conf # set primary_conninfo\n> pg_ctl -D data2 start\n\n+1, very nice. 
How about --replica?\n\nFYI Windows doesn't like your patch:\n\nsrc/backend/postmaster/postmaster.c(1396): warning C4013: 'sleep'\nundefined; assuming extern returning int\n[C:\\projects\\postgresql\\postgres.vcxproj]\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.45930\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jul 2019 15:07:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hello\n\n>>  Attached is a very hackish patch to implement this. It works like this:\n>>\n>>      # (assuming you have a primary already running somewhere)\n>>      initdb -D data2 --minimal\n>>      $EDITOR data2/postgresql.conf # set primary_conninfo\n>>      pg_ctl -D data2 start\n>\n> +1, very nice. How about --replica?\n\n+1\n\nIt also does not work with -DEXEC_BACKEND for me.\n\n> There is also the question what you do\n> if the base backup fails halfway through. Currently you probably need\n> to delete the whole data directory and start again with initdb. Better\n> might be a way to start again and overwrite any existing files, but that\n> can clearly also be dangerous.\n\nI think needing to delete the directory and rerun initdb is better than overwriting files.\n\n- we need to check the major version. Basebackup can work with different versions, but it would be useless to copy a cluster which we cannot run\n- basebackup silently overwrites the configs (pg_hba.conf, postgresql.conf, postgresql.auto.conf) in $PGDATA. This is ok for pg_basebackup but not for a backend process\n- I think we need to start the walreceiver. 
At best, without interruption during startup replay (if possible)\n\n> XXX Is there a use for\n> \t\t * switching into (non-standby) recovery here?\n\nI think not.\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 11 Jul 2019 14:12:36 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Em sáb, 29 de jun de 2019 às 17:05, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> Setting up a standby instance is still quite complicated. You need to\n> run pg_basebackup with all the right options. You need to make sure\n> Attached is a very hackish patch to implement this. It works like this:\n>\n> # (assuming you have a primary already running somewhere)\n> initdb -D data2 --minimal\n> $EDITOR data2/postgresql.conf # set primary_conninfo\n> pg_ctl -D data2 start\n>\nGreat! The main complaints about pg_basebackup usage in TB clusters\nare: (a) it can't be restarted and (b) it can't be parallelized.\nAFAICS your proposal doesn't solve them. It would be nice if it can be\nsolved in future releases (using rsync or another in-house tool is as\nfragile as using pg_basebackup).\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Thu, 11 Jul 2019 11:05:38 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Sat, Jun 29, 2019 at 4:05 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> My idea is that the postmaster can launch a base backup worker, wait\n> till it's done, then proceed with the rest of the startup. initdb gets\n> a special option to create a \"minimal\" data directory with only a few\n> files, directories, and the usual configuration files.\n\nWhy do we even have to do that much? 
Can we remove the need for an\ninitdb altogether?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Jul 2019 10:23:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Jun 29, 2019 at 4:05 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> My idea is that the postmaster can launch a base backup worker, wait\n>> till it's done, then proceed with the rest of the startup. initdb gets\n>> a special option to create a \"minimal\" data directory with only a few\n>> files, directories, and the usual configuration files.\n\n> Why do we even have to do that much? Can we remove the need for an\n> initdb altogether?\n\nGotta have config files in place already, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2019 10:36:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Thu, Jul 11, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Gotta have config files in place already, no?\n\nWhy?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Jul 2019 15:07:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jul 11, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Gotta have config files in place already, no?\n\n> Why?\n\nHow's the postmaster to know that it's supposed to run pg_basebackup\nrather than start normally? 
Where will it get the connection information?\nSeem to need configuration data *somewhere*.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2019 16:10:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Thu, Jul 11, 2019 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jul 11, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Gotta have config files in place already, no?\n>\n> > Why?\n>\n> How's the postmaster to know that it's supposed to run pg_basebackup\n> rather than start normally? Where will it get the connection information?\n> Seem to need configuration data *somewhere*.\n\nMaybe just:\n\n./postgres --replica='connstr' -D createme\n\n?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Jul 2019 16:20:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2019-07-11 22:20, Robert Haas wrote:\n> On Thu, Jul 11, 2019 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Thu, Jul 11, 2019 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Gotta have config files in place already, no?\n>>\n>>> Why?\n>>\n>> How's the postmaster to know that it's supposed to run pg_basebackup\n>> rather than start normally? 
Where will it get the connection information?\n>> Seem to need configuration data *somewhere*.\n> \n> Maybe just:\n> \n> ./postgres --replica='connstr' -D createme\n\nWhat you are describing is of course theoretically possible, but it\ndoesn't really fit with how existing tooling normally deals with this,\nwhich is one of the problems I want to address.\n\ninitdb has all the knowledge of how to create the data *directory*, how\nto set permissions, deal with existing and non-empty directories, how to\nset up a separate WAL directory. Packaged environments might wrap this\nfurther by using the correct OS users, creating the directory first as\nroot, then changing owner, etc. This is all logic that we can reuse and\nprobably don't want to duplicate elsewhere.\n\nFurthermore, we have for the longest time encouraged packagers *not* to\ncreate data directories automatically when a service is started, because\nthis might store data in places that will be hidden by a later mount.\nKeeping this property requires making the initialization of the data\ndirectory a separate step somehow. That step doesn't have to be called\n\"initdb\", it could be a new \"pg_mkdirs\", but for the reasons described\nabove, this would create a fair amount of code duplication and not really\ngain anything.\n\nFinally, many installations want to have the configuration files under\ncontrol of some centralized configuration management system. The way\nthose want to work is usually: (1) create file system structures, (2)\ninstall configuration files from some templates, (3) start service.\nThis is of course how setting up a primary works. Having such a system\nset up a standby is currently seemingly impossible in an elegant way,\nbecause the order and timing of how things work is all wrong. My\nproposed change would fix this because things would be set up in the\nsame three-step process. 
(As has been pointed out, this would require\nthat the base backup does not copy over the configuration files from the\nremote, which my patch currently doesn't do correctly.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Jul 2019 22:56:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-07-11 22:20, Robert Haas wrote:\n>> On Thu, Jul 11, 2019 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> How's the postmaster to know that it's supposed to run pg_basebackup\n>>> rather than start normally? Where will it get the connection information?\n>>> Seem to need configuration data *somewhere*.\n>> \n>> Maybe just:\n>> \n>> ./postgres --replica='connstr' -D createme\n\n> What you are describing is of course theoretically possible, but it\n> doesn't really fit with how existing tooling normally deals with this,\n> which is one of the problems I want to address.\n\nI don't care for Robert's suggestion for a different reason: it presumes\nthat all data that can possibly be needed to set up a new replica is\nfeasible to cram onto the postmaster command line, and always will be.\n\nAn immediate counterexample is that's not where you want to be specifying\nthe password for a replication connection. But even without that sort of\nsecurity issue, this approach won't scale. It also does not work even a\nlittle bit nicely for tooling in which the postmaster is not supposed to\nbe started directly by the user. 
(Which is to say, all postgres-service\ntooling everywhere.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2019 17:28:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hello.\r\n\r\nAt Sat, 29 Jun 2019 22:05:22 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in <61b8d18d-c922-ac99-b990-a31ba63cdcbb@2ndquadrant.com>\r\n> Setting up a standby instance is still quite complicated. You need to\r\n> run pg_basebackup with all the right options. You need to make sure\r\n> pg_basebackup has the right permissions for the target directories. The\r\n> created instance has to be integrated into the operating system's start\r\n> scripts. There is this slightly awkward business of the --recovery-conf\r\n> option and how it interacts with other features. And you should\r\n> probably run pg_basebackup under screen. And then how do you get\r\n> notified when it's done. And when it's done you have to log back in and\r\n> finish up. Too many steps.\r\n> \r\n> My idea is that the postmaster can launch a base backup worker, wait\r\n> till it's done, then proceed with the rest of the startup. initdb gets\r\n> a special option to create a \"minimal\" data directory with only a few\r\n> files, directories, and the usual configuration files. Then you create\r\n> a $PGDATA/basebackup.signal, start the postmaster as normal. It sees\r\n> the signal file, launches an auxiliary process that runs the base\r\n> backup, then proceeds with normal startup in standby mode.\r\n> \r\n> This makes a whole bunch of things much nicer: The connection\r\n> information for where to get the base backup from comes from\r\n> postgresql.conf, so you only need to specify it in one place.\r\n> pg_basebackup is completely out of the picture; no need to deal with\r\n> command-line options, --recovery-conf, screen, monitoring for\r\n> completion, etc. 
If something fails, the base backup process can\r\n> automatically be restarted (maybe). Operating system integration is\r\n> much easier: You only call initdb and then pg_ctl or postgres, as you\r\n> are already doing. Automated deployment systems don't need to wait for\r\n> pg_basebackup to finish: You only call initdb, then start the server,\r\n> and then you're done -- waiting for the base backup to finish can be\r\n> done by the regular monitoring system.\r\n> \r\n> Attached is a very hackish patch to implement this. It works like this:\r\n> \r\n> # (assuming you have a primary already running somewhere)\r\n> initdb -D data2 --minimal\r\n> $EDITOR data2/postgresql.conf # set primary_conninfo\r\n> pg_ctl -D data2 start\r\n\r\nNice idea! \r\n\r\n> (Curious side note: If you don’t set primary_conninfo in these steps,\r\n> then libpq defaults apply, so the default behavior might end up being\r\n> that a given instance attempts to replicate from itself.)\r\n\r\nWe may be able to have different settings for primary and replica\r\nif we could have sections in the configuration\r\nfile; defining, say, a [replica] section gives us more flexibility.\r\nThough it is a bit far from the topic, a dedicated command-line\r\nconfiguration editor that can find and replace a specified\r\nparameter would eliminate the subtle editing step. It is annoying\r\nto find the specific separator in the conf file, trim it, then add\r\nthe new content.\r\n\r\n> It works for basic cases. It's missing tablespace support, proper\r\n> fsyncing, progress reporting, probably more. Those would be pretty\r\n\r\nWhile catching up with the master, connections to the replica are\r\naccepted but then result in a FATAL error. I now and then receive\r\ninquiries about that. With the new feature, we also get FATAL during the\r\nbase backup phase. That could worry users more frequently.\r\n\r\n> straightforward I think. 
The interesting bit is the delicate ordering\r\n> of the postmaster startup: Normally, the pg_control file is read quite\r\n> early, but if starting from a minimal data directory, we need to wait\r\n> until the base backup is done. There is also the question what you do\r\n> if the base backup fails halfway through. Currently you probably need\r\n> to delete the whole data directory and start again with initdb. Better\r\n> might be a way to start again and overwrite any existing files, but that\r\n> can clearly also be dangerous. All this needs some careful analysis,\r\n> but I think it's doable.\r\n> \r\n> Any thoughts?\r\n\r\nJust overwriting won't work since files removed just before\r\nretrying are left alone in the replica. I think it should work\r\nsimilarly to initdb, that is, removing all then retrying.\r\n\r\nIt's easy if we don't consider reducing startup time. Just do\r\ninitdb then start the existing postmaster internally. But melding them\r\ntogether makes room for reducing the startup time. We could even\r\nredirect read-only queries to the master while setting up the server.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 12 Jul 2019 10:00:19 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "> Attached is a very hackish patch to implement this. It works like this:\n> \n> # (assuming you have a primary already running somewhere)\n> initdb -D data2 --replica\n> $EDITOR data2/postgresql.conf # set primary_conninfo\n> pg_ctl -D data2 start\n\nAttached is an updated patch for this. I have changed the initdb option\nname per suggestion. The WAL receiver is now started concurrently with\nthe base backup. There is progress reporting (ps display), fsyncing.\nConfiguration files are not copied anymore. There is a simple test\nsuite. 
Tablespace support is still missing, but it would be\nstraightforward.\n\nIt's still all to be considered experimental, but it's taking shape and\nworking pretty well.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 30 Aug 2019 21:10:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2019-Aug-30, Peter Eisentraut wrote:\n\n> Attached is an updated patch for this. I have changed the initdb option\n> name per suggestion. The WAL receiver is now started concurrently with\n> the base backup. There is progress reporting (ps display), fsyncing.\n> Configuration files are not copied anymore. There is a simple test\n> suite. Tablespace support is still missing, but it would be\n> straightforward.\n\nThis is an amazing feature. How come we don't have people cramming to\nreview this?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Sep 2019 19:15:24 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hello, thanks for pinging.\n\nAt Wed, 11 Sep 2019 19:15:24 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in <20190911221524.GA16563@alvherre.pgsql>\n> On 2019-Aug-30, Peter Eisentraut wrote:\n> \n> > Attached is an updated patch for this. I have changed the initdb option\n> > name per suggestion. The WAL receiver is now started concurrently with\n> > the base backup. There is progress reporting (ps display), fsyncing.\n> > Configuration files are not copied anymore. There is a simple test\n> > suite. Tablespace support is still missing, but it would be\n> > straightforward.\n> \n> This is an amazing feature. 
How come we don't have people cramming to\n> review this?\n\nI love it, too. As for me, the reason for hesitating to review this\nis that the patch is said to be experimental. That means 'the details\ndon't matter, let's discuss its design/outline.' So I wanted to\nsee what the past reviewers would say about the revised shape before I\nstirred up the discussion with a maybe-pointless comment. (Then\nI forgot about it.)\n\nI'll take another look at this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 12 Sep 2019 11:47:09 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Fri, Aug 30, 2019 at 09:10:10PM +0200, Peter Eisentraut wrote:\n> > Attached is a very hackish patch to implement this. It works like this:\n> > \n> > # (assuming you have a primary already running somewhere)\n> > initdb -D data2 --replica\n> > $EDITOR data2/postgresql.conf # set primary_conninfo\n> > pg_ctl -D data2 start\n> \n> Attached is an updated patch for this. I have changed the initdb option\n> name per suggestion. The WAL receiver is now started concurrently with\n> the base backup. There is progress reporting (ps display), fsyncing.\n> Configuration files are not copied anymore. There is a simple test\n> suite. 
Tablespace support is still missing, but it would be\n> straightforward.\n\nI find this idea and this spec neat.\n\n- * Verify XLOG status looks valid.\n+ * Check that contents look valid.\n */\n- if (ControlFile->state < DB_SHUTDOWNED ||\n- ControlFile->state > DB_IN_PRODUCTION ||\n- !XRecOffIsValid(ControlFile->checkPoint))\n+ if (!XRecOffIsValid(ControlFile->checkPoint))\n ereport(FATAL,\nDoesn't seem like a good idea to me to remove this sanity check for\nnormal deployments, but actually you moved that down in StartupXLOG().\nIt seems to me that this is unrelated and could be a separate patch so\nas the errors produced are more verbose. I think that we should also\nchange that code to use a switch/case on ControlFile->state.\n\nThe current defaults of pg_basebackup have been thought so as the\nbackups taken have a good stability and so as monitoring is eased\nthanks to --wal-method=stream and the use of replication slots.\nShouldn't the use of at least a temporary replication slot be mandatory\nfor the stability of the copy? It seems to me that there is a good\nargument for having a second process which streams WAL on top of the\nmain backup process, and just use a WAL receiver for that.\n\nOne problem which is not tackled here is what to do for the tablespace\nmap. pg_basebackup has its own specific trick for that, and with that\nnew feature we may want something equivalent? 
Not something to\nconsider as a first stage of course.\n\n */\n-static void\n+void\n WriteControlFile(void)\n[...]\n-static void\n+void\n ReadControlFile(void)\n[...]\nIf you begin to publish those routines, it seems to me that there\ncould be more consolidation with controldata_utils.c which includes\nnow a routine to update a control file.\n\n+#ifndef FRONTEND\n+extern void InitControlFile(uint64 sysidentifier);\n+extern void WriteControlFile(void);\n+extern void ReadControlFile(void);\n+#endif\nIt would be nice to avoid that.\n\n-extern char *cluster_name;\n+extern PGDLLIMPORT char *cluster_name;\nSeparate patch here?\n\n+ if (stat(BASEBACKUP_SIGNAL_FILE, &stat_buf) == 0)\n+ {\n+ int fd;\n+\n+ fd = BasicOpenFilePerm(STANDBY_SIGNAL_FILE, O_RDWR |\nPG_BINARY,\n+ S_IRUSR | S_IWUSR);\n+ if (fd >= 0)\n+ {\n+ (void) pg_fsync(fd);\n+ close(fd);\n+ }\n+ basebackup_signal_file_found = true;\n+ }\nI would put that in a different routine.\n\n+ /*\n+ * Wait until done. Start WAL receiver in the meantime, once\nbase\n+ * backup has received the starting position.\n+ */\n+ while (BaseBackupPID != 0)\n+ {\n+ PG_SETMASK(&UnBlockSig);\n+ pg_usleep(1000000L);\n+ PG_SETMASK(&BlockSig);\n\n+ primary_sysid = strtoull(walrcv_identify_system(wrconn,\n&primaryTLI), NULL, 10);\nNo more strtol with base 10 stuff please :)\n--\nMichael", "msg_date": "Wed, 18 Sep 2019 17:31:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Updated patch attached.\n\nOn 2019-09-18 10:31, Michael Paquier wrote:\n> - * Verify XLOG status looks valid.\n> + * Check that contents look valid.\n> */\n> - if (ControlFile->state < DB_SHUTDOWNED ||\n> - ControlFile->state > DB_IN_PRODUCTION ||\n> - !XRecOffIsValid(ControlFile->checkPoint))\n> + if (!XRecOffIsValid(ControlFile->checkPoint))\n> ereport(FATAL,\n> Doesn't seem like a good idea to me to remove this sanity check for\n> normal 
deployments, but actually you moved that down in StartupXLOG().\n> It seems to me that this is unrelated and could be a separate patch so\n> as the errors produced are more verbose. I think that we should also\n> change that code to use a switch/case on ControlFile->state.\n\nDone. Yes, this was really a change made to get more precise error messages\nduring debugging. It could be committed separately.\n\n> The current defaults of pg_basebackup have been thought so as the\n> backups taken have a good stability and so as monitoring is eased\n> thanks to --wal-method=stream and the use of replication slots.\n> Shouldn't the use of at least a temporary replication slot be mandatory\n> for the stability of the copy? It seems to me that there is a good\n> argument for having a second process which streams WAL on top of the\n> main backup process, and just use a WAL receiver for that.\n\nIs this something that the walreceiver should be doing independent of \nthis patch?\n\n> One problem which is not tackled here is what to do for the tablespace\n> map. pg_basebackup has its own specific trick for that, and with that\n> new feature we may want something equivalent? Not something to\n> consider as a first stage of course.\n\nThe updated patch has support for tablespaces without mapping. I'm thinking \nabout putting the mapping specification into a GUC list somehow. 
\nShouldn't be too hard.\n\n> */\n> -static void\n> +void\n> WriteControlFile(void)\n> [...]\n> -static void\n> +void\n> ReadControlFile(void)\n> [...]\n> If you begin to publish those routines, it seems to me that there\n> could be more consolidation with controldata_utils.c which includes\n> now a routine to update a control file.\n\nHmm, maybe long-term, but it seems too much dangerous surgery for this \npatch.\n\n> +#ifndef FRONTEND\n> +extern void InitControlFile(uint64 sysidentifier);\n> +extern void WriteControlFile(void);\n> +extern void ReadControlFile(void);\n> +#endif\n> It would be nice to avoid that.\n\nFixed by renaming a function in pg_resetwal.c.\n\n> + /*\n> + * Wait until done. Start WAL receiver in the meantime, once\n> base\n> + * backup has received the starting position.\n> + */\n> + while (BaseBackupPID != 0)\n> + {\n> + PG_SETMASK(&UnBlockSig);\n> + pg_usleep(1000000L);\n> + PG_SETMASK(&BlockSig);\n> \n> + primary_sysid = strtoull(walrcv_identify_system(wrconn,\n> &primaryTLI), NULL, 10);\n> No more strtol with base 10 stuff please :)\n\nHmm, why not? 
What's the replacement?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 28 Oct 2019 09:30:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Mon, Oct 28, 2019 at 09:30:52AM +0100, Peter Eisentraut wrote:\n> On 2019-09-18 10:31, Michael Paquier wrote:\n>> - * Verify XLOG status looks valid.\n>> + * Check that contents look valid.\n>> */\n>> - if (ControlFile->state < DB_SHUTDOWNED ||\n>> - ControlFile->state > DB_IN_PRODUCTION ||\n>> - !XRecOffIsValid(ControlFile->checkPoint))\n>> + if (!XRecOffIsValid(ControlFile->checkPoint))\n>> ereport(FATAL,\n>> Doesn't seem like a good idea to me to remove this sanity check for\n>> normal deployments, but actually you moved that down in StartupXLOG().\n>> It seems to me tha this is unrelated and could be a separate patch so\n>> as the errors produced are more verbose. I think that we should also\n>> change that code to use a switch/case on ControlFile->state.\n> \n> Done. Yes, this was really a change made to get more precise error messaged\n> during debugging. It could be committed separately.\n\nIf you wish to do so now, that's fine by me.\n\n>> The current defaults of pg_basebackup have been thought so as the\n>> backups taken have a good stability and so as monitoring is eased\n>> thanks to --wal-method=stream and the use of replication slots.\n>> Shouldn't the use of a least a temporary replication slot be mandatory\n>> for the stability of the copy? 
It seems to me that there is a good\n>> argument for having a second process which streams WAL on top of the\n>> main backup process, and just use a WAL receiver for that.\n> \n> Is this something that the walreceiver should be doing independent of this\n> patch?\n\nThere could be an argument for switching a WAL receiver to use a\ntemporary replication slot by default. Still, it seems to me that\nthis backup solution suffers from the same set of problems we have\nspent years to fix with pg_basebackup with missing WAL files caused by\nconcurrent checkpoints removing things needed while the copy of the\nmain data folder and other tablespaces happens.\n\n>> One problem which is not tackled here is what to do for the tablespace\n>> map. pg_basebackup has its own specific trick for that, and with that\n>> new feature we may want something equivalent? Not something to\n>> consider as a first stage of course.\n> \n> The updated has support for tablespaces without mapping. I'm thinking about\n> putting the mapping specification into a GUC list somehow. Shouldn't be too\n> hard.\n\nThat may become ugly if there are many tablespaces to take care of.\nAnother idea I can come up with would be to pass the new mapping to\ninitdb, still this requires an extra intermediate step to store the\nnew map, and then compare it with the mapping received at BASE_BACKUP\ntime. But perhaps you are looking for an experience different than\npg_basebackup. The first version of the patch does not actually\nrequire that anyway..\n\n>> No more strtol with base 10 stuff please :)\n> \n> Hmm, why not? What's the replacement?\n\nI was referring to this patch:\nhttps://commitfest.postgresql.org/25/2272/\nIt happens that all our calls of strtol in core use a base of 10. But\nplease just ignore this part.\n\nReceiveAndUnpackTarFile() is in both libpqwalreceiver.c and\npg_basebackup.c. 
It would be nice to refactor that.\n--\nMichael", "msg_date": "Thu, 7 Nov 2019 13:16:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2019-11-07 05:16, Michael Paquier wrote:\n>>> The current defaults of pg_basebackup have been thought so as the\n>>> backups taken have a good stability and so as monitoring is eased\n>>> thanks to --wal-method=stream and the use of replication slots.\n>>> Shouldn't the use of a least a temporary replication slot be mandatory\n>>> for the stability of the copy? It seems to me that there is a good\n>>> argument for having a second process which streams WAL on top of the\n>>> main backup process, and just use a WAL receiver for that.\n>> Is this something that the walreceiver should be doing independent of this\n>> patch?\n> There could be an argument for switching a WAL receiver to use a\n> temporary replication slot by default. Still, it seems to me that\n> this backup solution suffers from the same set of problems we have\n> spent years to fix with pg_basebackup with missing WAL files caused by\n> concurrent checkpoints removing things needed while the copy of the\n> main data folder and other tablespaces happens.\n\nI looked into this. It seems trivial to make walsender create and use a \ntemporary replication slot by default if no permanent replication slot \nis specified. This is basically the logic that pg_basebackup has but \ndone server-side. See attached patch for a demonstration. Any reason \nnot to do that?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 9 Nov 2019 22:13:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hello\n\nCould you rebase patch please? 
I get errors during patch apply. CFbot checks the latest demonstration patch.\n\n> I looked into this. It seems trivial to make walsender create and use a\n> temporary replication slot by default if no permanent replication slot\n> is specified. This is basically the logic that pg_basebackup has but\n> done server-side. See attached patch for a demonstration. Any reason\n> not to do that?\n\nSeems this would break the pg_basebackup --no-slot option?\n\n> + Do not copy configuration files, that is, files that end in\n> + <filename>.conf</filename>.\n\nPossibly we need to ignore *.signal files too?\n\n> +/*\n> + * XXX copied from pg_basebackup.c\n> + */\n> +\n> +unsigned long long totaldone;\n> +unsigned long long totalsize_kb;\n> +int tablespacenum;\n> +int tablespacecount;\n\nIs a variable declaration in the middle of the file correct coding style? Not a problem for me, I just want to clarify.\nShould they not be declared \"static\"?\nAlso, how about tablespacedone instead of tablespacenum?\n\n> The updated patch has support for tablespaces without mapping. I'm thinking \n> about putting the mapping specification into a GUC list somehow. \n> Shouldn't be too hard.\n\nI think we can leave tablespace mapping to pg_basebackup only. More powerful tool for less common scenarios. Or for another future patch.\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 15 Nov 2019 16:52:27 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2019-11-15 14:52, Sergei Kornilov wrote:\n>> I looked into this. It seems trivial to make walsender create and use a\n>> temporary replication slot by default if no permanent replication slot\n>> is specified. This is basically the logic that pg_basebackup has but\n>> done server-side. See attached patch for a demonstration.
Any reason\n>> not to do that?\n> Seems this would break pg_basebackup --no-slot option?\n\nAfter thinking about this a bit more, doing the temporary slot stuff on \nthe walsender side might lead to too many complications in practice.\n\nHere is another patch set that implements the temporary slot use on the \nwalreceiver side, essentially mirroring what pg_basebackup already does.\n\nI think this patch set might be useful on its own, even without the base \nbackup stuff to follow.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 22 Nov 2019 11:21:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Fri, Nov 22, 2019 at 11:21:53AM +0100, Peter Eisentraut wrote:\n> After thinking about this a bit more, doing the temporary slot stuff on the\n> walsender side might lead to too many complications in practice.\n> \n> Here is another patch set that implements the temporary slot use on the\n> walreceiver side, essentially mirroring what pg_basebackup already does.\n\nI have not looked at the patch, but controlling the generation of the\nslot from the client feels much more natural to me. 
This reuses the\nexisting interface, which is consistent, and we avoid a new class of\nbugs if there is any need to deal with the cleanup of the slot on the\nWAL sender side it itself created.\n--\nMichael", "msg_date": "Fri, 22 Nov 2019 21:56:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hi Peter,\n\nOn Fri, Nov 22, 2019 at 6:22 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Here is another patch set that implements the temporary slot use on the\n> walreceiver side, essentially mirroring what pg_basebackup already does.\n>\n> I think this patch set might be useful on its own, even without the base\n> backup stuff to follow.\n\n\nI very much like this idea of every replication connection should have a\nreplication slot, either permanent or temporary if user didn't specify. I\nagree\nthat this patch is useful on its own.\n\n> This makes a whole bunch of things much nicer: The connection\n> information for where to get the base backup from comes from\n> postgresql.conf, so you only need to specify it in one place.\n> pg_basebackup is completely out of the picture; no need to deal with\n> command-line options, --recovery-conf, screen, monitoring for\n> completion, etc. If something fails, the base backup process can\n> automatically be restarted (maybe). Operating system integration is\n> much easier: You only call initdb and then pg_ctl or postgres, as you\n> are already doing. Automated deployment systems don't need to wait for\n> pg_basebackup to finish: You only call initdb, then start the server,\n> and then you're done -- waiting for the base backup to finish can be\n> done by the regular monitoring system.\n\nBack to the base backup stuff, I don't quite understand all the benefits you\nmentioned above. 
It seems to me the greatest benefit with this patch is that\npostmaster takes care of pg_basebackup itself, which reduces the human wait\nin\nbetween running the pg_basebackup and pg_ctl/postgres commands. Is that\nright?\nI personally don't mind the --write-recovery-conf option because it helps me\nwrite the primary_conninfo and primary_slot_name gucs into\npostgresql.auto.conf, which to me as a developer is easier than editing\npostgres.conf without automation. Sorry about the dumb question but what's\nso\nbad about --write-recovery-conf? Are you planning to completely replace\npg_basebackup with this? Is there any use case that a user just need a\nbasebackup but not immediately start the backend process?\n\nAlso the patch doesn't apply to master any more, need a rebase.\n\n--\nAlexandra", "msg_date": "Thu, 9 Jan 2020 18:57:52 +0800", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Fri, 22 Nov 2019 at 19:22, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-11-15 14:52, Sergei Kornilov wrote:\n> >> I looked into this. It seems trivial to make walsender create and use a\n> >> temporary replication slot by default if no permanent replication slot\n> >> is specified. This is basically the logic that pg_basebackup has but\n> >> done server-side. See attached patch for a demonstration.
Any reason\n> >> not to do that?\n> > Seems this would break pg_basebackup --no-slot option?\n>\n> After thinking about this a bit more, doing the temporary slot stuff on\n> the walsender side might lead to too many complications in practice.\n>\n> Here is another patch set that implements the temporary slot use on the\n> walreceiver side, essentially mirroring what pg_basebackup already does.\n>\n> I think this patch set might be useful on its own, even without the base\n> backup stuff to follow.\n>\n\nI agreed that these patches are useful on its own and 0001 patch and\n0002 patch look good to me. For 0003 patch,\n\n+ linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n+ a temporary replication slot (determined by <xref\n+ linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n+ here.\n\nI think it's better to show the temporary slot name on\npg_stat_wal_receiver view. Otherwise user would have no idea about\nwhat wal receiver is using the temporary slot.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jan 2020 12:32:19 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-01-10 04:32, Masahiko Sawada wrote:\n> I agreed that these patches are useful on its own and 0001 patch and\n\ncommitted 0001\n\n> 0002 patch look good to me. For 0003 patch,\n> \n> + linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n> + a temporary replication slot (determined by <xref\n> + linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n> + here.\n> \n> I think it's better to show the temporary slot name on\n> pg_stat_wal_receiver view. Otherwise user would have no idea about\n> what wal receiver is using the temporary slot.\n\nMakes sense. 
It makes the code a bit more fiddly, but it seems worth \nit. New patches attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 11 Jan 2020 10:52:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Sat, 11 Jan 2020 at 18:52, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-10 04:32, Masahiko Sawada wrote:\n> > I agreed that these patches are useful on its own and 0001 patch and\n>\n> committed 0001\n>\n> > 0002 patch look good to me. For 0003 patch,\n> >\n> > + linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n> > + a temporary replication slot (determined by <xref\n> > + linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n> > + here.\n> >\n> > I think it's better to show the temporary slot name on\n> > pg_stat_wal_receiver view. Otherwise user would have no idea about\n> > what wal receiver is using the temporary slot.\n>\n> Makes sense. It makes the code a bit more fiddly, but it seems worth\n> it. New patches attached.\n\nThank you for updating the patch!\n\n- <entry>Replication slot name used by this WAL receiver</entry>\n+ <entry>\n+ Replication slot name used by this WAL receiver. This is only set if a\n+ permanent replication slot is set using <xref\n+ linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n+ a temporary replication slot (determined by <xref\n+ linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n+ here.\n+ </entry>\n\nNow that the slot name is shown even if it's a temp slot the above\ndocumentation changes needs to be changed. 
Other changes look good to\nme.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jan 2020 15:32:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-01-14 07:32, Masahiko Sawada wrote:\n> - <entry>Replication slot name used by this WAL receiver</entry>\n> + <entry>\n> + Replication slot name used by this WAL receiver. This is only set if a\n> + permanent replication slot is set using <xref\n> + linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n> + a temporary replication slot (determined by <xref\n> + linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n> + here.\n> + </entry>\n> \n> Now that the slot name is shown even if it's a temp slot the above\n> documentation changes needs to be changed. Other changes look good to\n> me.\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jan 2020 14:58:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Tue, 14 Jan 2020 at 22:58, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-14 07:32, Masahiko Sawada wrote:\n> > - <entry>Replication slot name used by this WAL receiver</entry>\n> > + <entry>\n> > + Replication slot name used by this WAL receiver. This is only set if a\n> > + permanent replication slot is set using <xref\n> > + linkend=\"guc-primary-slot-name\"/>. 
Otherwise, the WAL receiver may use\n> > + a temporary replication slot (determined by <xref\n> > + linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n> > + here.\n> > + </entry>\n> >\n> > Now that the slot name is shown even if it's a temp slot the above\n> > documentation changes needs to be changed. Other changes look good to\n> > me.\n>\n> committed, thanks\n\nThank you for committing these patches.\n\nCould you rebase the main patch that adds base backup client as\nauxiliary backend process since the previous version patch (v3)\nconflicts with the current HEAD?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jan 2020 09:40:38 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-01-15 01:40, Masahiko Sawada wrote:\n> Could you rebase the main patch that adds base backup client as\n> auxiliary backend process since the previous version patch (v3)\n> conflicts with the current HEAD?\n\nattached\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 15 Jan 2020 16:17:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-01-09 11:57, Alexandra Wang wrote:\n> Back to the base backup stuff, I don't quite understand all the benefits you\n> mentioned above. It seems to me the greatest benefit with this patch is that\n> postmaster takes care of pg_basebackup itself, which reduces the human \n> wait in\n> between running the pg_basebackup and pg_ctl/postgres commands. 
Is that \n> right?\n> I personally don't mind the --write-recovery-conf option because it helps me\n> write the primary_conninfo and primary_slot_name gucs into\n> postgresql.auto.conf, which to me as a developer is easier than editing\n> postgres.conf without automation.  Sorry about the dumb question but \n> what's so\n> bad about --write-recovery-conf?\n\nMaking it easier to automate is one major appeal of my proposal. The \ncurrent way of setting up a standby is very difficult to automate correctly.\n\n> Are you planning to completely replace\n> pg_basebackup with this? Is there any use case that a user just need a\n> basebackup but not immediately start the backend process?\n\nI'm not planning to replace or change pg_basebackup.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jan 2020 16:20:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Thu, 16 Jan 2020 at 00:17, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-15 01:40, Masahiko Sawada wrote:\n> > Could you rebase the main patch that adds base backup client as\n> > auxiliary backend process since the previous version patch (v3)\n> > conflicts with the current HEAD?\n>\n> attached\n\nThanks. I used and briefly looked at this patch. Here are some comments:\n\n1.\n+ /*\n+ * Wait until done. Start WAL receiver in the meantime, once base\n+ * backup has received the starting position.\n+ */\n+ while (BaseBackupPID != 0)\n+ {\n+ PG_SETMASK(&UnBlockSig);\n+ pg_usleep(1000000L);\n+ PG_SETMASK(&BlockSig);\n+ MaybeStartWalReceiver();\n+ }\n\nSince the postmaster is sleeping the new connection hangs without any\nmessage whereas normally we can get the message like \"the database\nsystem is starting up\" during not accepting new connections. 
I think\nsome programs that checks the connectivity of PostgreSQL starting up\nmight not work fine with this. So many we might want to refuse all new\nconnections while waiting for taking basebackup.\n\n2.\n+ initStringInfo(&stmt);\n+ appendStringInfo(&stmt, \"BASE_BACKUP PROGRESS NOWAIT EXCLUDE_CONF\");\n+ if (cluster_name && cluster_name[0])\n\nWhile using this patch I realized that the standby server cannot start\nwhen the master server has larger value of some GUC parameter such as\nmax_connections and max_prepared_transactions than the default values.\nAnd unlike taking basebackup using pg_basebacup or other methods the\ndatabase cluster initialized by this feature use default values for\nall configuration parameters regardless of values in the master. So I\nthink it's better to include .conf files but we will end up with\noverwriting the local .conf files instead. So I thought that\nbasebackup process can fetch .conf files from the master server and\nadd primary_conninfo to postgresql.auto.conf but I'm not sure.\n\n3.\n+ if (stat(BASEBACKUP_SIGNAL_FILE, &stat_buf) == 0)\n+ {\n+ int fd;\n+\n+ fd = BasicOpenFilePerm(STANDBY_SIGNAL_FILE, O_RDWR | PG_BINARY,\n+ S_IRUSR | S_IWUSR);\n+ if (fd >= 0)\n+ {\n+ (void) pg_fsync(fd);\n+ close(fd);\n+ }\n+ basebackup_signal_file_found = true;\n+ }\n+\n\nWhy do we open and just close the file?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Jan 2020 16:46:50 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hi,\n\nOn 2020-01-11 10:52:30 +0100, Peter Eisentraut wrote:\n> On 2020-01-10 04:32, Masahiko Sawada wrote:\n> > I agreed that these patches are useful on its own and 0001 patch and\n> \n> committed 0001\n\nover on -committers Robert complained:\n\nOn 2020-01-23 15:49:37 
-0500, Robert Haas wrote:\n> On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > walreceiver uses a temporary replication slot by default\n> >\n> > If no permanent replication slot is configured using\n> > primary_slot_name, the walreceiver now creates and uses a temporary\n> > replication slot. A new setting wal_receiver_create_temp_slot can be\n> > used to disable this behavior, for example, if the remote instance is\n> > out of replication slots.\n> >\n> > Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n> \n> Neither the commit message for this patch nor any of the comments in\n> the patch seem to explain why this is a desirable change.\n> \n> I assume that's probably discussed on the thread that is linked here,\n> but you shouldn't have to dig through the discussion thread to figure\n> out what the benefits of a change like this are.\n\nwhich I fully agree with.\n\n\nIt's not at all clear to me that the potential downsides of this have\nbeen fully thought through. And even if they have, they've not been\ndocumented.\n\nPreviously if a standby without a slot was slow receiving WAL,\ne.g. because the network bandwidth was insufficient, it'd at some point\njust fail because the required WAL is removed. But with this patch that\nwon't happen - instead the primary will just run out of space. At the\nvery least this would need to add documentation of this caveat to a few\nplaces.\n\nPerhaps that's worth doing anyway, because it's probably more common for\na standby to just temporarily run behind - but given that this feature\ndoesn't actually provide any robustness, due to e.g. 
the possibility of\ntemporary disconnections or restarts, I'm not sure it's providing all\nthat much compared to the dangers, for a feature on by default.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Feb 2020 01:37:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "Hi,\n\nComment:\n\n- It'd be good to split out the feature independent refactorings, like\n the introduction of InitControlFile(), into their own commit. Right\n now it's hard to separate out what should just should be moved code,\n and what should be behavioural changes. Especially when there's stuff\n like just reindenting comments as part of the patch.\n\n\n> @@ -886,12 +891,27 @@ PostmasterMain(int argc, char *argv[])\n> \t/* Verify that DataDir looks reasonable */\n> \tcheckDataDir();\n>\n> -\t/* Check that pg_control exists */\n> -\tcheckControlFile();\n> -\n> \t/* And switch working directory into it */\n> \tChangeToDataDir();\n>\n> +\tif (stat(BASEBACKUP_SIGNAL_FILE, &stat_buf) == 0)\n> +\t{\n> +\t\tint fd;\n> +\n> +\t\tfd = BasicOpenFilePerm(STANDBY_SIGNAL_FILE, O_RDWR | PG_BINARY,\n> +\t\t\t\t\t\t\t S_IRUSR | S_IWUSR);\n> +\t\tif (fd >= 0)\n> +\t\t{\n> +\t\t\t(void) pg_fsync(fd);\n> +\t\t\tclose(fd);\n> +\t\t}\n> +\t\tbasebackup_signal_file_found = true;\n> +\t}\n> +\n> +\t/* Check that pg_control exists */\n> +\tif (!basebackup_signal_file_found)\n> +\t\tcheckControlFile();\n> +\n\nThis should be moved into its own function, rather than open coded in\nPostmasterMain(). 
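As a hypothetical illustration of that factoring, the stat/fsync block could become a routine like the one below. Plain POSIX open()/fsync() stand in for BasicOpenFilePerm()/pg_fsync() so the sketch is self-contained, and the function name and signature are assumptions, not part of the patch.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Hypothetical extraction of the open-coded check in PostmasterMain():
 * returns true if the base backup signal file exists, and fsyncs the
 * standby signal file alongside it to make it durable, mirroring what
 * the reviewed hunk does inline.
 */
static bool
CheckBaseBackupSignalFile(const char *basebackup_signal_file,
						  const char *standby_signal_file)
{
	struct stat stat_buf;

	if (stat(basebackup_signal_file, &stat_buf) != 0)
		return false;

	/* Make sure any standby signal file created alongside it is durable. */
	int			fd = open(standby_signal_file, O_RDWR);

	if (fd >= 0)
	{
		(void) fsync(fd);
		close(fd);
	}
	return true;
}
```

PostmasterMain() would then call this once and keep the result in a local flag, instead of carrying the stat/open/fsync sequence inline.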
Imagine how PostmasterMain() would look if all the\ncheck/initialization functions weren't extracted into functions.\n\n\n> \t/*\n> \t * Check for invalid combinations of GUC settings.\n> \t */\n> @@ -970,7 +990,8 @@ PostmasterMain(int argc, char *argv[])\n> \t * processes will inherit the correct function pointer and not need to\n> \t * repeat the test.\n> \t */\n> -\tLocalProcessControlFile(false);\n> +\tif (!basebackup_signal_file_found)\n> +\t\tLocalProcessControlFile(false);\n>\n> \t/*\n> \t * Initialize SSL library, if specified.\n> @@ -1386,6 +1407,39 @@ PostmasterMain(int argc, char *argv[])\n> \t */\n> \tAddToDataDirLockFile(LOCK_FILE_LINE_PM_STATUS, PM_STATUS_STARTING);\n>\n> +\tif (basebackup_signal_file_found)\n> +\t{\n\nThis imo *really* should be a separate function.\n\n\n> +\t\tBaseBackupPID = StartBaseBackup();\n> +\n> +\t\t/*\n> +\t\t * Wait until done. Start WAL receiver in the meantime, once base\n> +\t\t * backup has received the starting position.\n> +\t\t */\n> +\t\twhile (BaseBackupPID != 0)\n> +\t\t{\n> +\t\t\tPG_SETMASK(&UnBlockSig);\n> +\t\t\tpg_usleep(1000000L);\n> +\t\t\tPG_SETMASK(&BlockSig);\n> +\t\t\tMaybeStartWalReceiver();\n> +\t\t}\n\n\nIs there seriously no better signalling that we can use than just\nlooping for a couple hours?\n\nIs it actully guaranteed that a compiler wouldn't just load\nBaseBackupPID into a register, and never see a change to it done in a\nsignal handler?\n\nThere should be a note mentioning that we'll just FATAL out if the base\nbackup process fails. Otherwise it's the obvious question reading this\ncode. Also - we have handling to restart WAL receiver, but there's no\nhandling for the base backup temporarily failing: Is that just because\nits easy to do in one, but not the other case?\n\n\n> +\t\t/*\n> +\t\t * Reread the control file that came in with the base backup.\n> +\t\t */\n> +\t\tReadControlFile();\n> +\t}\n\nIs it actualy rereading? 
I'm just reading the diff, so maybe I'm missing\nsomething, but you've made LocalProcessControlFile not enter this code\npath...\n\n\n> @@ -2824,6 +2880,8 @@ pmdie(SIGNAL_ARGS)\n>\n> \t\t\tif (StartupPID != 0)\n> \t\t\t\tsignal_child(StartupPID, SIGTERM);\n> +\t\t\tif (BaseBackupPID != 0)\n> +\t\t\t\tsignal_child(BaseBackupPID, SIGTERM);\n> \t\t\tif (BgWriterPID != 0)\n> \t\t\t\tsignal_child(BgWriterPID, SIGTERM);\n> \t\t\tif (WalReceiverPID != 0)\n> @@ -3062,6 +3120,23 @@ reaper(SIGNAL_ARGS)\n\n\n> \t\t\tcontinue;\n> \t\t}\n>\n> +\t\t/*\n> +\t\t * Was it the base backup process?\n> +\t\t */\n> +\t\tif (pid == BaseBackupPID)\n> +\t\t{\n> +\t\t\tBaseBackupPID = 0;\n> +\t\t\tif (EXIT_STATUS_0(exitstatus))\n> +\t\t\t\t;\n> +\t\t\telse if (EXIT_STATUS_1(exitstatus))\n> +\t\t\t\tereport(FATAL,\n> +\t\t\t\t\t\t(errmsg(\"base backup failed\")));\n> +\t\t\telse\n> +\t\t\t\tHandleChildCrash(pid, exitstatus,\n> +\t\t\t\t\t\t\t\t _(\"base backup process\"));\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n\nWhat's the error handling for the case we shut down either because of\nSIGTERM above, or here? Does all the code just deal with that the next\nstart? If not, what makes this safe?\n\n\n\n> +/*\n> + * base backup worker process (client) main function\n> + */\n> +void\n> +BaseBackupMain(void)\n> +{\n> +\tWalReceiverConn *wrconn = NULL;\n> +\tchar\t *err;\n> +\tTimeLineID\tprimaryTLI;\n> +\tuint64\t\tprimary_sysid;\n> +\n> +\t/* Load the libpq-specific functions */\n> +\tload_file(\"libpqwalreceiver\", false);\n> +\tif (WalReceiverFunctions == NULL)\n> +\t\telog(ERROR, \"libpqwalreceiver didn't initialize correctly\");\n> +\n> +\t/* Establish the connection to the primary */\n> +\twrconn = walrcv_connect(PrimaryConnInfo, false, cluster_name[0] ? 
cluster_name : \"basebackup\", &err);\n> +\tif (!wrconn)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"could not connect to the primary server: %s\", err)));\n> +\n> +\t/*\n> +\t * Get the remote sysid and stick it into the local control file, so that\n> +\t * the walreceiver is happy. The control file will later be overwritten\n> +\t * by the base backup.\n> +\t */\n> +\tprimary_sysid = strtoull(walrcv_identify_system(wrconn, &primaryTLI), NULL, 10);\n> +\tInitControlFile(primary_sysid);\n> +\tWriteControlFile();\n> +\n> +\twalrcv_base_backup(wrconn);\n> +\n> +\twalrcv_disconnect(wrconn);\n> +\n> +\tSyncDataDirectory(false, ERROR);\n> +\n> +\tereport(LOG,\n> +\t\t\t(errmsg(\"base backup completed\")));\n> +\tproc_exit(0);\n> +}\n\nSo there's no error handling here (as in a sigsetjmp)? Nor any signal\nhandlers set up, despite\n+\t\tcase BaseBackupProcess:\n+\t\t\t/* don't set signals, basebackup has its own agenda */\n+\t\t\tBaseBackupMain();\n+\t\t\tproc_exit(1);\t\t/* should never return */\n+\n\nYou did set up forwarding of things like SIGHUP - but afaict that's not\ncorrectly wired up?\n\n\n> diff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n> index e4fd1f9bb6..52819d504c 100644\n> --- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n> +++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n> @@ -17,20 +17,29 @@\n> +#include \"pgtar.h\"\n> #include \"pqexpbuffer.h\"\n> #include \"replication/walreceiver.h\"\n> #include \"utils/builtins.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/pg_lsn.h\"\n> +#include \"utils/ps_status.h\"\n> #include \"utils/tuplestore.h\"\n>\n> PG_MODULE_MAGIC;\n> @@ -61,6 +70,7 @@ static int\tlibpqrcv_server_version(WalReceiverConn *conn);\n> static void libpqrcv_readtimelinehistoryfile(WalReceiverConn *conn,\n> \t\t\t\t\t\t\t\t\t\t\t TimeLineID tli, char **filename,\n> 
\t\t\t\t\t\t\t\t\t\t\t char **content, int *len);\n> +static void libpqrcv_base_backup(WalReceiverConn *conn);\n> static bool libpqrcv_startstreaming(WalReceiverConn *conn,\n> \t\t\t\t\t\t\t\t\tconst WalRcvStreamOptions *options);\n> static void libpqrcv_endstreaming(WalReceiverConn *conn,\n> @@ -89,6 +99,7 @@ static WalReceiverFunctionsType PQWalReceiverFunctions = {\n> \tlibpqrcv_identify_system,\n> \tlibpqrcv_server_version,\n> \tlibpqrcv_readtimelinehistoryfile,\n> +\tlibpqrcv_base_backup,\n> \tlibpqrcv_startstreaming,\n> \tlibpqrcv_endstreaming,\n> \tlibpqrcv_receive,\n> @@ -358,6 +369,395 @@ libpqrcv_server_version(WalReceiverConn *conn)\n> \treturn PQserverVersion(conn->streamConn);\n> }\n>\n> +/*\n> + * XXX copied from pg_basebackup.c\n> + */\n> +\n> +unsigned long long totaldone;\n> +unsigned long long totalsize_kb;\n> +int tablespacenum;\n> +int tablespacecount;\n> +\n> +static void\n> +base_backup_report_progress(void)\n> +{\n\nPutting all of this into libpqwalreceiver.c seems like quite a\nsignificant modularity violation. The header says:\n\n * libpqwalreceiver.c\n *\n * This file contains the libpq-specific parts of walreceiver. It's\n * loaded as a dynamic module to avoid linking the main server binary with\n * libpq.\n\nwhich really doesn't agree with all of the new stuff you're putting\nhere.\n\n> --- a/src/backend/storage/file/fd.c\n> +++ b/src/backend/storage/file/fd.c\n> @@ -3154,21 +3154,14 @@ looks_like_temp_rel_name(const char *name)\n> * Other symlinks are presumed to point at files we're not responsible\n> * for fsyncing, and might not have privileges to write at all.\n> *\n> - * Errors are logged but not considered fatal; that's because this is used\n> - * only during database startup, to deal with the possibility that there are\n> - * issued-but-unsynced writes pending against the data directory. We want to\n> - * ensure that such writes reach disk before anything that's done in the new\n> - * run. 
However, aborting on error would result in failure to start for\n> - * harmless cases such as read-only files in the data directory, and that's\n> - * not good either.\n> - *\n> - * Note that if we previously crashed due to a PANIC on fsync(), we'll be\n> - * rewriting all changes again during recovery.\n> + * If pre_sync is true, issue flush requests to the kernel before starting the\n> + * actual fsync calls. This can be skipped if the caller has already done it\n> + * itself.\n> *\n\nHuh, what happened with the previous comments here?\n\n\n> diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c\n> index f9cfeae264..c9edeb54d3 100644\n> --- a/src/bin/pg_resetwal/pg_resetwal.c\n> +++ b/src/bin/pg_resetwal/pg_resetwal.c\n> @@ -76,7 +76,7 @@ static int\tWalSegSz;\n> static int\tset_wal_segsize;\n>\n> static void CheckDataVersion(void);\n> -static bool ReadControlFile(void);\n> +static bool read_controlfile(void);\n> static void GuessControlValues(void);\n> static void PrintControlValues(bool guessed);\n> static void PrintNewControlValues(void);\n> @@ -393,7 +393,7 @@ main(int argc, char *argv[])\n> \t/*\n> \t * Attempt to read the existing pg_control file\n> \t */\n> -\tif (!ReadControlFile())\n> +\tif (!read_controlfile())\n> \t\tGuessControlValues();\n>\n> \t/*\n> @@ -578,7 +578,7 @@ CheckDataVersion(void)\n> * to the current format. 
(Currently we don't do anything of the sort.)\n> */\n> static bool\n> -ReadControlFile(void)\n> +read_controlfile(void)\n> {\n> \tint\t\t\tfd;\n> \tint\t\t\tlen;\n\nHuh?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Feb 2020 04:47:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Mon, Feb 03, 2020 at 01:37:25AM -0800, Andres Freund wrote:\n> On 2020-01-23 15:49:37 -0500, Robert Haas wrote:\n>> I assume that's probably discussed on the thread that is linked here,\n>> but you shouldn't have to dig through the discussion thread to figure\n>> out what the benefits of a change like this are.\n> \n> which I fully agree with.\n> \n> It's not at all clear to me that the potential downsides of this have\n> been fully thought through. And even if they have, they've not been\n> documented.\n\nThere is this, and please let me add a reference to another complaint\nI had about this commit:\nhttps://www.postgresql.org/message-id/20200122055510.GH174860@paquier.xyz\n--\nMichael", "msg_date": "Tue, 4 Feb 2020 14:28:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On Mon, 3 Feb 2020 at 20:06, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-01-11 10:52:30 +0100, Peter Eisentraut wrote:\n> > On 2020-01-10 04:32, Masahiko Sawada wrote:\n> > > I agreed that these patches are useful on its own and 0001 patch and\n> >\n> > committed 0001\n>\n> over on -committers Robert complained:\n>\n> On 2020-01-23 15:49:37 -0500, Robert Haas wrote:\n> > On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > > walreceiver uses a temporary replication slot by default\n> > >\n> > > If no permanent replication slot is configured using\n> > > primary_slot_name, the walreceiver now creates and 
uses a temporary\n> > > replication slot. A new setting wal_receiver_create_temp_slot can be\n> > > used to disable this behavior, for example, if the remote instance is\n> > > out of replication slots.\n> > >\n> > > Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> > > Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n> >\n> > Neither the commit message for this patch nor any of the comments in\n> > the patch seem to explain why this is a desirable change.\n> >\n> > I assume that's probably discussed on the thread that is linked here,\n> > but you shouldn't have to dig through the discussion thread to figure\n> > out what the benefits of a change like this are.\n>\n> which I fully agree with.\n>\n>\n> It's not at all clear to me that the potential downsides of this have\n> been fully thought through. And even if they have, they've not been\n> documented.\n>\n> Previously if a standby without a slot was slow receiving WAL,\n> e.g. because the network bandwidth was insufficient, it'd at some point\n> just fail because the required WAL is removed. But with this patch that\n> won't happen - instead the primary will just run out of space. At the\n> very least this would need to add documentation of this caveat to a few\n> places.\n\n+1 to add downsides to the documentation.\n\nIt might not normally happen, but with this parameter we will need to\nset max_replication_slots high enough, because otherwise the standby\ncould fail to start after failover once all slots are in use.\n\nWAL required by the standby could be removed on the primary if the\nstandby falls far behind, for example when the standby was stopped for a\nlong time, or when it is running but delayed for some reason. This\nfeature prevents WAL removal in the latter case; that is, we can ensure\nthat required WAL is not removed while replication is running.\nFor the former case we can use a permanent replication slot. 
Although\nthere is a risk of running out of space, I personally think this\nbehavior is better for most cases.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 5 Feb 2020 13:52:05 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-02-03 13:47, Andres Freund wrote:\n> Comment:\n> \n> - It'd be good to split out the feature independent refactorings, like\n> the introduction of InitControlFile(), into their own commit. Right\n> now it's hard to separate out what should just should be moved code,\n> and what should be behavioural changes. Especially when there's stuff\n> like just reindenting comments as part of the patch.\n\nAgreed. Here are three refactoring patches extracted that seem useful\non their own.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Feb 2020 18:42:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-02-17 18:42, Peter Eisentraut wrote:\n> On 2020-02-03 13:47, Andres Freund wrote:\n>> Comment:\n>>\n>> - It'd be good to split out the feature independent refactorings, like\n>> the introduction of InitControlFile(), into their own commit. Right\n>> now it's hard to separate out what should just should be moved code,\n>> and what should be behavioural changes. Especially when there's stuff\n>> like just reindenting comments as part of the patch.\n> \n> Agreed. 
Here are three refactoring patches extracted that seem useful\n> on their own.\n\nThese have been committed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 22 Feb 2020 16:12:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "I have set this patch to \"returned with feedback\" in the upcoming commit \nfest, because I will not be able to finish it.\n\nUnsurprisingly, the sequencing of startup actions in postmaster.c is \nextremely tricky and needs more thinking. All the rest worked pretty \nwell, I thought.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 10:02:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: base backup client as auxiliary backend process" }, { "msg_contents": "On 2020-Jan-14, Peter Eisentraut wrote:\n\n> On 2020-01-14 07:32, Masahiko Sawada wrote:\n> > - <entry>Replication slot name used by this WAL receiver</entry>\n> > + <entry>\n> > + Replication slot name used by this WAL receiver. This is only set if a\n> > + permanent replication slot is set using <xref\n> > + linkend=\"guc-primary-slot-name\"/>. Otherwise, the WAL receiver may use\n> > + a temporary replication slot (determined by <xref\n> > + linkend=\"guc-wal-receiver-create-temp-slot\"/>), but these are not shown\n> > + here.\n> > + </entry>\n> > \n> > Now that the slot name is shown even if it's a temp slot the above\n> > documentation changes needs to be changed. 
Other changes look good to\n> > me.\n> \n> committed, thanks\n\nSergei has just proposed a change in semantics: if primary_slot_name is\nspecified as well as wal_receiver_create_temp_slot, then a temp slot is\nused and it uses the specified name, instead of ignoring the temp-slot\noption as currently.\n\nPatch is at https://postgr.es/m/3109511585392143@myt6-887fb48a9c29.qloud-c.yandex.net\n\n(To clarify: the current semantics if both options are set is that an\nexisting permanent slot is sought with the given name, and an error is\nraised if it doesn't exist.)\n\nWhat do you think? Preliminarily I think the proposed semantics are\nsaner.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 28 Mar 2020 10:49:02 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: base backup client as auxiliary backend process" } ]
[ { "msg_contents": "Hi,\n\nI think we should consider changing the effective_io_concurrency default\nvalue, i.e. the guc that determines how many pages we try to prefetch in\na couple of places (the most important being Bitmap Heap Scan).\n\nThe default is 1 since forever, but from my experience hardly the right\nvalue, no matter what storage system you use. I've always ended up with\nvalues that are either 0 (so, disabled prefetching) or significantly\nhigher (at least 8 or 16). In fact, e_i_c=1 can easily be detrimental\ndepending on the workload and storage system.\n\nWhich is an issue, because people often don't know how to tune this and\nI see systems with the default value quite often.\n\nSo I do propose to increase the default to a value between 4 and 16.\n\n\nI'm hardly the first person to notice this, as illustrated for example\nby this [1] post by Merlin Moncure on pgsql-hackers from 2017, which\nmeasured this behavior on Intel S3500 SSD:\n\n effective_io_concurrency 1: 46.3 sec, ~ 170 mb/sec peak via iostat\n effective_io_concurrency 2: 49.3 sec, ~ 158 mb/sec peak via iostat\n effective_io_concurrency 4: 29.1 sec, ~ 291 mb/sec peak via iostat\n effective_io_concurrency 8: 23.2 sec, ~ 385 mb/sec peak via iostat\n effective_io_concurrency 16: 22.1 sec, ~ 409 mb/sec peak via iostat\n effective_io_concurrency 32: 20.7 sec, ~ 447 mb/sec peak via iostat\n effective_io_concurrency 64: 20.0 sec, ~ 468 mb/sec peak via iostat\n effective_io_concurrency 128: 19.3 sec, ~ 488 mb/sec peak via iostat\n effective_io_concurrency 256: 19.2 sec, ~ 494 mb/sec peak via iostat\n\nThat's just one anecdotal example of behavior, of course, so I've\ndecided to do a couple of tests on different storage systems. Attached\nare a couple of scripts I used to generate synthetic data sets with data\nlaid out in different patterns (random vs. regular), and running queries\nscanning various fractions of the table (1%, 5%, ...) 
using plans using\nbitmap index scans.\n\nI've done that on three different storage systems:\n\n1) SATA RAID (3 x 7.2k drives in RAID0)\n2) SSD RAID (6 x SATA SSD in RAID0)\n3) NVMe drive\n\nAttached is a spreadsheet with a summary of results of the tested cases.\nIn general, the data support what I already wrote above - the current\ndefault is pretty bad.\n\nIn some cases it helps a bit, but a bit higher value (4 or 8) performs\nsignificantly better. Consider for example this \"sequential\" data set\nfrom the 6xSSD RAID system (x-axis shows e_i_c values, pct means what\nfraction of pages matches the query):\n\n pct 0 1 4 16 64 128\n ---------------------------------------------------------------\n 1 25990 18624 3269 2219 2189 2171\n 5 88116 60242 14002 8663 8560 8726\n 10 120556 99364 29856 17117 16590 17383\n 25 101080 184327 79212 47884 46846 46855\n 50 130709 309857 163614 103001 94267 94809\n 75 126516 435653 248281 156586 139500 140087\n\ncompared to the e_i_c=0 case, it looks like this:\n\n pct 1 4 16 64 128\n ----------------------------------------------------\n 1 72% 13% 9% 8% 8%\n 5 68% 16% 10% 10% 10%\n 10 82% 25% 14% 14% 14%\n 25 182% 78% 47% 46% 46%\n 50 237% 125% 79% 72% 73%\n 75 344% 196% 124% 110% 111%\n\nSo for 1% of the table the e_i_c=1 is faster by about ~30%, but with\ne_i_c=4 (or more) it's ~10x faster. This is a fairly common pattern, not\njust on this storage system.\n\nThe e_i_c=1 can perform pretty poorly, especially when the query matches\nlarge fraction of the table - for example in this example it's 2-3x\nslower compared to no prefetching, and higher e_i_c values limit the\ndamage quite a bit.\n\nIt's not entirely terrible because in most cases those queries would use\nseqscan (the benchmark forces queries to use bitmap heap scan), but it's\nnot something we can ignore either because of possible underestimates.\n\nFurthermore, there are cases with much worse behavior. 
For example, one\nof the tests on SATA RAID behaves like this:\n\n pct 1 4 16 64 128\n ----------------------------------------------------\n 1 147% 101% 61% 52% 55%\n 5 180% 106% 71% 71% 70%\n 10 208% 106% 73% 80% 79%\n 25 225% 118% 84% 96% 86%\n 50 234% 123% 91% 102% 95%\n 75 241% 127% 94% 103% 98%\n\nPretty much all cases are significantly slower with e_i_c=1.\n\nOf course, I'm sure there may be other things to consider. For example,\nthese tests were done in isolation, while on actual systems there will\nbe other queries running concurrently (and those may also generate I/O).\n\n\nregards\n\n[1] https://www.postgresql.org/message-id/flat/55AA2469.20306%40dalibo.com#dda46134fb309ae09233b1547411c029\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Jun 2019 22:15:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Increasing default value for effective_io_concurrency?" }, { "msg_contents": "Hi,\n\nOn 2019-06-29 22:15:19 +0200, Tomas Vondra wrote:\n> I think we should consider changing the effective_io_concurrency default\n> value, i.e. the guc that determines how many pages we try to prefetch in\n> a couple of places (the most important being Bitmap Heap Scan).\n\nMaybe we need improve the way it's used / implemented instead - it seems\njust too hard to determine the correct setting as currently implemented.\n\n\n> In some cases it helps a bit, but a bit higher value (4 or 8) performs\n> significantly better. 
Consider for example this \"sequential\" data set\n> from the 6xSSD RAID system (x-axis shows e_i_c values, pct means what\n> fraction of pages matches the query):\n\nI assume that the y axis is the time of the query?\n\nHow much data is this compared to memory available for the kernel to do\ncaching?\n\n\n> pct 0 1 4 16 64 128\n> ---------------------------------------------------------------\n> 1 25990 18624 3269 2219 2189 2171\n> 5 88116 60242 14002 8663 8560 8726\n> 10 120556 99364 29856 17117 16590 17383\n> 25 101080 184327 79212 47884 46846 46855\n> 50 130709 309857 163614 103001 94267 94809\n> 75 126516 435653 248281 156586 139500 140087\n> \n> compared to the e_i_c=0 case, it looks like this:\n> \n> pct 1 4 16 64 128\n> ----------------------------------------------------\n> 1 72% 13% 9% 8% 8%\n> 5 68% 16% 10% 10% 10%\n> 10 82% 25% 14% 14% 14%\n> 25 182% 78% 47% 46% 46%\n> 50 237% 125% 79% 72% 73%\n> 75 344% 196% 124% 110% 111%\n> \n> So for 1% of the table the e_i_c=1 is faster by about ~30%, but with\n> e_i_c=4 (or more) it's ~10x faster. This is a fairly common pattern, not\n> just on this storage system.\n> \n> The e_i_c=1 can perform pretty poorly, especially when the query matches\n> large fraction of the table - for example in this example it's 2-3x\n> slower compared to no prefetching, and higher e_i_c values limit the\n> damage quite a bit.\n\nI'm surprised the slowdown for small e_i_c values is that big - it's not\nobvious to me why that is. Which os / os version / filesystem / io\nscheduler / io scheduler settings were used?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 1 Jul 2019 16:32:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" 
}, { "msg_contents": "On Mon, Jul 01, 2019 at 04:32:15PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-06-29 22:15:19 +0200, Tomas Vondra wrote:\n>> I think we should consider changing the effective_io_concurrency default\n>> value, i.e. the guc that determines how many pages we try to prefetch in\n>> a couple of places (the most important being Bitmap Heap Scan).\n>\n>Maybe we need improve the way it's used / implemented instead - it seems\n>just too hard to determine the correct setting as currently implemented.\n>\n\nSure, if we can improve those bits, that'd be nice. It's definitely hard\nto decide what value is appropriate for a given storage system. But I'm\nnot sure it's something we can do easily, considering how opaque the\nhardware is for us ...\n\nI wonder \n\n>\n>> In some cases it helps a bit, but a bit higher value (4 or 8) performs\n>> significantly better. Consider for example this \"sequential\" data set\n>> from the 6xSSD RAID system (x-axis shows e_i_c values, pct means what\n>> fraction of pages matches the query):\n>\n>I assume that the y axis is the time of the query?\n>\n\nThe y-axis is the fraction of table matched by the query. The values in\nthe contingency table are query durations (average of 3 runs, but the\nnumbers were very close).\n\n>How much data is this compared to memory available for the kernel to do\n>caching?\n>\n\nMultiple of RAM, in all cases. 
The queries were hitting random subsets of\nthe data, and the page cache was dropped after each test, to eliminate\ncross-query caching.\n\n>\n>> pct 0 1 4 16 64 128\n>> ---------------------------------------------------------------\n>> 1 25990 18624 3269 2219 2189 2171\n>> 5 88116 60242 14002 8663 8560 8726\n>> 10 120556 99364 29856 17117 16590 17383\n>> 25 101080 184327 79212 47884 46846 46855\n>> 50 130709 309857 163614 103001 94267 94809\n>> 75 126516 435653 248281 156586 139500 140087\n>>\n>> compared to the e_i_c=0 case, it looks like this:\n>>\n>> pct 1 4 16 64 128\n>> ----------------------------------------------------\n>> 1 72% 13% 9% 8% 8%\n>> 5 68% 16% 10% 10% 10%\n>> 10 82% 25% 14% 14% 14%\n>> 25 182% 78% 47% 46% 46%\n>> 50 237% 125% 79% 72% 73%\n>> 75 344% 196% 124% 110% 111%\n>>\n>> So for 1% of the table the e_i_c=1 is faster by about ~30%, but with\n>> e_i_c=4 (or more) it's ~10x faster. This is a fairly common pattern, not\n>> just on this storage system.\n>>\n>> The e_i_c=1 can perform pretty poorly, especially when the query matches\n>> large fraction of the table - for example in this example it's 2-3x\n>> slower compared to no prefetching, and higher e_i_c values limit the\n>> damage quite a bit.\n>\n>I'm surprised the slowdown for small e_i_c values is that big - it's not\n>obvious to me why that is. 
Which os / os version / filesystem / io\n>scheduler / io scheduler settings were used?\n>\n\nThis is the system with NVMe storage, and SATA RAID:\n\nLinux bench2 4.19.26 #1 SMP Sat Mar 2 19:50:14 CET 2019 x86_64 Intel(R)\nXeon(R) CPU E5-2620 v4 @ 2.10GHz GenuineIntel GNU/Linux\n\n/dev/nvme0n1p1 on /mnt/data type ext4 (rw,relatime)\n/dev/md0 on /mnt/raid type ext4 (rw,relatime,stripe=48)\n\nThe other system looks pretty much the same (same kernel, ext4).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:03:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" }, { "msg_contents": "On Mon, Jul 1, 2019 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-29 22:15:19 +0200, Tomas Vondra wrote:\n> > I think we should consider changing the effective_io_concurrency default\n> > value, i.e. the guc that determines how many pages we try to prefetch in\n> > a couple of places (the most important being Bitmap Heap Scan).\n>\n> Maybe we need improve the way it's used / implemented instead - it seems\n> just too hard to determine the correct setting as currently implemented.\n\nPerhaps the translation from effective_io_concurrency to a prefetch\ndistance, which is found in the slightly-misnamed ComputeIoConcurrency\nfunction, should be changed. The comments therein say:\n\n * Experimental results show that both of these formulas\naren't aggressive\n * enough, but we don't really have any better proposals.\n\nPerhaps we could test experimentally what works well with N spindles\nand then fit a formula to that curve and stick it in here, so that our\ntuning is based on practice rather than theory.\n\nI'm not sure if that approach is adequate or not. 
It just seems like\nsomething to try.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2019 11:04:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" }, { "msg_contents": "On Wed, Jul 03, 2019 at 11:04:59AM -0400, Robert Haas wrote:\n>On Mon, Jul 1, 2019 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2019-06-29 22:15:19 +0200, Tomas Vondra wrote:\n>> > I think we should consider changing the effective_io_concurrency default\n>> > value, i.e. the guc that determines how many pages we try to prefetch in\n>> > a couple of places (the most important being Bitmap Heap Scan).\n>>\n>> Maybe we need improve the way it's used / implemented instead - it seems\n>> just too hard to determine the correct setting as currently implemented.\n>\n>Perhaps the translation from effective_io_concurrency to a prefetch\n>distance, which is found in the slightly-misnamed ComputeIoConcurrency\n>function, should be changed. The comments therein say:\n>\n> * Experimental results show that both of these formulas\n>aren't aggressive\n> * enough, but we don't really have any better proposals.\n>\n>Perhaps we could test experimentally what works well with N spindles\n>and then fit a formula to that curve and stick it in here, so that our\n>tuning is based on practice rather than theory.\n>\n>I'm not sure if that approach is adequate or not. It just seems like\n>something to try.\n>\n\nMaybe. And it would probably work for the systems I used for benchmarks. \n\nIt however assumes two things: (a) the storage system actually has\nspindles and (b) you know how many spindles there are. 
Which is becoming\nless and less safe these days - flash storage becomes pretty common, and\neven when there are spindles they are often hidden behind the veil of\nvirtualization in a SAN, or something.\n\nI wonder if we might provide something like pg_test_prefetch which would\nmeasure performance with different values, similarly to pg_test_fsync.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 3 Jul 2019 17:24:12 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" }, { "msg_contents": "On Wed, Jul 3, 2019 at 11:24 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Maybe. And it would probably work for the systems I used for benchmarks.\n>\n> It however assumes two things: (a) the storage system actually has\n> spindles and (b) you know how many spindles there are. Which is becoming\n> less and less safe these days - flash storage becomes pretty common, and\n> even when there are spindles they are often hidden behind the veil of\n> virtualization in a SAN, or something.\n\nYeah, that's true.\n\n> I wonder if we might provide something like pg_test_prefetch which would\n> measure performance with different values, similarly to pg_test_fsync.\n\nThat's not a bad idea, but I'm not sure if the results that we got in\na synthetic test - presumably unloaded - would be a good guide to what\nto use in a production situation. Maybe it would; I'm just not sure.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2019 11:42:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" 
}, { "msg_contents": "On Wed, Jul 3, 2019 at 11:42:49AM -0400, Robert Haas wrote:\n> On Wed, Jul 3, 2019 at 11:24 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > Maybe. And it would probably work for the systems I used for benchmarks.\n> >\n> > It however assumes two things: (a) the storage system actually has\n> > spindles and (b) you know how many spindles there are. Which is becoming\n> > less and less safe these days - flash storage becomes pretty common, and\n> > even when there are spindles they are often hidden behind the veil of\n> > virtualization in a SAN, or something.\n> \n> Yeah, that's true.\n> \n> > I wonder if we might provide something like pg_test_prefetch which would\n> > measure performance with different values, similarly to pg_test_fsync.\n> \n> That's not a bad idea, but I'm not sure if the results that we got in\n> a synthetic test - presumably unloaded - would be a good guide to what\n> to use in a production situation. Maybe it would; I'm just not sure.\n\nI think it would be better than what we have now.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Jul 2019 20:11:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" }, { "msg_contents": "On Mon, Jul 08, 2019 at 08:11:55PM -0400, Bruce Momjian wrote:\n>On Wed, Jul 3, 2019 at 11:42:49AM -0400, Robert Haas wrote:\n>> On Wed, Jul 3, 2019 at 11:24 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>> > Maybe. And it would probably work for the systems I used for benchmarks.\n>> >\n>> > It however assumes two things: (a) the storage system actually has\n>> > spindles and (b) you know how many spindles there are. 
Which is becoming\n>> > less and less safe these days - flash storage becomes pretty common, and\n>> > even when there are spindles they are often hidden behind the veil of\n>> > virtualization in a SAN, or something.\n>>\n>> Yeah, that's true.\n>>\n>> > I wonder if we might provide something like pg_test_prefetch which would\n>> > measure performance with different values, similarly to pg_test_fsync.\n>>\n>> That's not a bad idea, but I'm not sure if the results that we got in\n>> a synthetic test - presumably unloaded - would be a good guide to what\n>> to use in a production situation. Maybe it would; I'm just not sure.\n>\n>I think it would be better than what we have now.\n>\n\nTBH I don't know how useful that tool would be. AFAICS the key assumptions\nprefetching relies on are that (a) issuing the prefetch request is much\ncheaper than just doing the I/O, and (b) the prefetch request can be\ncompleted before we actually need the page.\n\n(a) is becoming not quite true on new hardware - if you look at results\nfrom the NVMe device, the improvements are much smaller compared to the\nother storage systems. The speedup is ~1.6x, no matter the e_i_c value,\nwhile on other storage types it's easily 10x in some cases.\n\nBut this is something we could measure using the new tool, because it's\nmostly hardware dependent.\n\nBut (b) is the hard bit, because it depends on how much time it takes to\nprocess a page read from the heap - if it takes a lot of time, lower e_i_c\nvalues are fine. If it's fast, we need to increase the prefetch distance.\n\nEssentially, from the tests I've done it seems fetching just 1 page in\nadvance is way too conservative, because (1) it does not really increase\nI/O concurrency at the storage level and (2) we often get into a situation\nwhere the prefetch is still in progress when we actually need the page.\n\nI don't know how to meaningfully benchmark this, though - it's way too\ndependent on the particular workload / query.
\n\nOf course, backend concurrency just makes it even more complicated.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 9 Jul 2019 19:49:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Increasing default value for effective_io_concurrency?" } ]
[ { "msg_contents": "bowerbird is failing the pg_dump regression tests with a lot of\n\nFATAL: SSPI authentication failed for user \"regress_postgres\"\n\nI think this is likely a consequence of ca129e58c0 having modified\n010_dump_connstr.pl to use \"regress_postgres\" not \"postgres\" as the\nbootstrap superuser name in the source cluster. I suppose I overlooked\nsome dependency on the user name that only affects SSPI ... but what?\nI don't see anything about the destination cluster configuration (which\nalready used a nondefault superuser name) that I didn't replicate\nin the source cluster configuration.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Jun 2019 16:36:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Where is SSPI auth username determined for TAP tests?" }, { "msg_contents": "On Sat, Jun 29, 2019 at 04:36:51PM -0400, Tom Lane wrote:\n> I think this is likely a consequence of ca129e58c0 having modified\n> 010_dump_connstr.pl to use \"regress_postgres\" not \"postgres\" as the\n> bootstrap superuser name in the source cluster. I suppose I overlooked\n> some dependency on the user name that only affects SSPI ... but what?\n> I don't see anything about the destination cluster configuration (which\n> already used a nondefault superuser name) that I didn't replicate\n> in the source cluster configuration.\n\nDidn't you get trapped with something similar to what has been fixed\nin d9f543e?
If you want pg_hba.conf to be correctly set up for SSPI\non Windows, you should pass \"auth_extra => ['--create-role',\n'regress_postgres']\" to the init() method initializing the node.\n\nLooking at the commit...\n my $node = get_new_node('main');\n-$node->init(extra => [ '--locale=C', '--encoding=LATIN1' ]);\n+$node->init(extra =>\n+ [ '-U', $src_bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);\n[...]\n $node->run_log(\n [\n $ENV{PG_REGRESS}, '--config-auth',\n\t$node->data_dir, '--create-role',\n- \"$dbname1,$dbname2,$dbname3,$dbname4\"\n+ \"$username1,$username2,$username3,$username4\"\n ]);\n \nThis part is wrong and just needs to be updated so that\n$src_bootstrap_super also gets its role added in --create-role, which\nwould set up pg_hba.conf as you would like.\n--\nMichael", "msg_date": "Sun, 30 Jun 2019 10:42:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Where is SSPI auth username determined for TAP tests?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Jun 29, 2019 at 04:36:51PM -0400, Tom Lane wrote:\n>> I think this is likely a consequence of ca129e58c0 having modified\n>> 010_dump_connstr.pl to use \"regress_postgres\" not \"postgres\" as the\n>> bootstrap superuser name in the source cluster. I suppose I overlooked\n>> some dependency on the user name that only affects SSPI ... but what?\n\n> Didn't you get trapped with something similar to what has been fixed\n> in d9f543e? If you want pg_hba.conf to be correctly set up for SSPI\n> on Windows, you should pass \"auth_extra => ['--create-role',\n> 'regress_postgres']\" to the init() method initializing the node.\n\nAfter further study, I think the root issue here is that pg_regress.c's\nconfig_sspi_auth() has no provision for non-default bootstrap superuser\nnames --- it makes a mapping entry for (what should be) the default\nsuperuser name whether the cluster is using that or not.
I now see that\n010_dump_connstr.pl is hacking around that by doing\n\nmy $envar_node = get_new_node('destination_envar');\n$envar_node->init(extra =>\n [ '-U', $dst_bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);\n$envar_node->run_log(\n [\n $ENV{PG_REGRESS}, '--config-auth',\n $envar_node->data_dir, '--create-role',\n \"$dst_bootstrap_super,$restore_super\"\n ]);\n\nthat is, it's explicitly listing the non-default bootstrap superuser\namong the roles to be \"created\". This is all pretty weird and\nundocumented ...\n\nWe could apply the same hack on the source node, but I wonder if it\nwouldn't be better to make this less of a kluge. I'm tempted to\npropose that \"pg_regress --config-auth --user XXX\" should understand\nXXX as the bootstrap superuser name, and then we could clean up\n010_dump_connstr.pl by using that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jun 2019 12:09:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Where is SSPI auth username determined for TAP tests?" }, { "msg_contents": "On Sun, Jun 30, 2019 at 12:09:18PM -0400, Tom Lane wrote:\n> We could apply the same hack on the source node, but I wonder if it\n> wouldn't be better to make this less of a kluge. I'm tempted to\n> propose that \"pg_regress --config-auth --user XXX\" should understand\n> XXX as the bootstrap superuser name, and then we could clean up\n> 010_dump_connstr.pl by using that.\n\nI have been reviewing that part, and the part to split the bootstrap\nuser from the set of extra roles created looks fine to me. Now, it\nseems to me that you can simplify 010_dump_connstr.pl as per the\nattached because PostgresNode::Init can take care of the --config-auth\npart with the correct options using auth_extra.
What do you think\nabout the cleanup attached?\n--\nMichael", "msg_date": "Wed, 3 Jul 2019 15:20:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Where is SSPI auth username determined for TAP tests?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have been reviewing that part, and the part to split the bootstrap\n> user from the set of extra roles created looks fine to me. Now, it\n> seems to me that you can simplify 010_dump_connstr.pl as per the\n> attached because PostgresNode::Init can take care of the --config-auth\n> part with the correct options using auth_extra. What do you think\n> about the cleanup attached?\n\nI haven't checked that this actually works, but it looks plausible,\nand I agree it's simpler/easier.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2019 09:53:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Where is SSPI auth username determined for TAP tests?" }, { "msg_contents": "On Wed, Jul 03, 2019 at 09:53:14AM -0400, Tom Lane wrote:\n> I haven't checked that this actually works, but it looks plausible,\n> and I agree it's simpler/easier.\n\nThanks, committed. While testing on Windows, I have been trapped by\nthe fact that IPC::Run mishandles double quotes, causing the tests to\nfail for the environment variable part because of a mismatching\npg_hba.conf entry. The difference is that we run pg_regress\n--config-auth using IPC::Run::run on HEAD but the patch switches to\nsystem(). So I have finished by removing the double-quote handling\nfrom the restore user name which makes the whole test suite more\nconsistent.
In the end, the patch has the advantage of removing from\npg_ident.conf the entry related to the OS user running the scripts,\nwhich makes the environment more restricted by default.\n--\nMichael", "msg_date": "Thu, 4 Jul 2019 11:35:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Where is SSPI auth username determined for TAP tests?" } ]
[ { "msg_contents": "Hi,\n\nWith the glibc 2.28 coming, all users will have to reindex almost\nevery index after a glibc upgrade to guarantee the lack of\ncorruption. Unfortunately, reindexdb is not ideal for that as it's\nprocessing everything using a single connection and isn't able to\ndiscard indexes that don't depend on a glibc collation.\n\nPFA a patchset to add parallelism to reindexdb (reusing the\ninfrastructure in vacuumdb with some additions) and an option to\ndiscard indexes that don't depend on glibc (without any specific\ncollation filtering or glibc version detection), with updated\nregression tests. Note that this should be applied on top of the\nexisting reindexdb cleanup & refactoring patch\n(https://commitfest.postgresql.org/23/2115/).\n\nThis was sponsored by VMware, and has been discussed internally with\nKevin and Michael, in Cc.", "msg_date": "Sun, 30 Jun 2019 11:45:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Sun, Jun 30, 2019 at 11:45:47AM +0200, Julien Rouhaud wrote:\n> With the glibc 2.28 coming, all users will have to reindex almost\n> every index after a glibc upgrade to guarantee the lack of\n> corruption. Unfortunately, reindexdb is not ideal for that as it's\n> processing everything using a single connection and isn't able to\n> discard indexes that don't depend on a glibc collation.\n\nWe have seen that with a database of up to 100GB we finish by cutting\nthe reindex time from 30 minutes to a couple of minutes with a schema\nwe work on. Julien, what were the actual numbers?\n\n> PFA a patchset to add parallelism to reindexdb (reusing the\n> infrastructure in vacuumdb with some additions) and an option to\n> discard indexes that don't depend on glibc (without any specific\n> collation filtering or glibc version detection), with updated\n> regression tests.
Note that this should be applied on top of the\n> existing reindexdb cleanup & refactoring patch\n> (https://commitfest.postgresql.org/23/2115/).\n\nPlease note that patch 0003 does not seem to apply correctly on HEAD\nas of c74d49d4. Here is also a small description of each patch:\n- 0001 refactors the connection slot facility from vacuumdb.c into a\nnew, separate file called parallel.c in src/bin/scripts/. This is not\nreally fancy as some code only moves around.\n- 0002 adds an extra option for simple lists to be able to use\npointers, with an interface to append elements in it.\n- 0003 begins to be the actual fancy thing with the addition of a\n--jobs option into reindexdb. The main issue here which should be\ndiscussed is that when it comes to reindex of tables, you basically\nare not going to have any conflicts between the objects manipulated.\nHowever if you wish to do a reindex on a set of indexes then things\nget more tricky as it is necessary to list items per-table so as\nmultiple connections do not conflict with each other if attempting to\nwork on multiple indexes of the same table. What this patch does is\nto select the set of indexes which need to be worked on (see the\naddition of cell in ParallelSlot), and then does a kind of\npre-planning of each item into the connection slots so as each\nconnection knows from the beginning which items it needs to process.\nThis is quite different from vacuumdb where a new item is distributed\nonly on a free connection from a unique list. I'd personally prefer\nif we keep the facility in parallel.c so as it is only\nexecution-dependent and that we have no pre-planning. This would\nrequire keeping within reindexdb.c an array of lists, with one list \ncorresponding to one connection instead which feels more natural.\n- 0004 is the part where the concurrent additions really matter as\nthis consists in applying an extra filter to the indexes selected so\nas only the glibc-sensitive indexes are chosen for the processing.\n--\nMichael", "msg_date": "Mon, 1 Jul 2019 17:54:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb"
}, { "msg_contents": "On Mon, Jul 1, 2019 at 10:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jun 30, 2019 at 11:45:47AM +0200, Julien Rouhaud wrote:\n> > With the glibc 2.28 coming, all users will have to reindex almost\n> > every index after a glibc upgrade to guarantee the lack of\n> > corruption. Unfortunately, reindexdb is not ideal for that as it's\n> > processing everything using a single connection and isn't able to\n> > discard indexes that don't depend on a glibc collation.\n>\n> We have seen that with a database of up to 100GB we finish by cutting\n> the reindex time from 30 minutes to a couple of minutes with a schema\n> we work on. Julien, what were the actual numbers?\n\nI did my benchmarking using a quite ideal database, having a large\nnumber of tables and various sets of indexes, for a 75 GB total size.\nThis was done on my laptop which has 6 multithreaded cores (and crappy\nIO), also keeping the default max_parallel_maintenance_worker = 2.\n\nA naive reindexdb took approximately 33 minutes.
Filtering the list\nof indexes took that down to slightly less than 15 min, but of course\neach database will have a different ratio there.\n\nThen, keeping the --glibc-dependent and using different levels of parallelism:\n\n-j1: ~ 14:50\n-j3: ~ 7:30\n-j6: ~ 5:23\n-j8: ~ 4:45\n\nThat's pretty much the kind of results I was expecting given the\nhardware I used.\n\n> > PFA a patchset to add parallelism to reindexdb (reusing the\n> > infrastructure in vacuumdb with some additions) and an option to\n> > discard indexes that don't depend on glibc (without any specific\n> > collation filtering or glibc version detection), with updated\n> > regression tests. Note that this should be applied on top of the\n> > existing reindexdb cleanup & refactoring patch\n> > (https://commitfest.postgresql.org/23/2115/).\n>\n> Please note that patch 0003 does not seem to apply correctly on HEAD\n> as of c74d49d4.\n\nYes, this is because this patchset has to be applied on top of the\nreindexdb refactoring patch mentioned. It's sad that we don't have a\ngood way to deal with that kind of dependency, as it's also breaking\nThomas' cfbot :(\n\n> - 0003 begins to be the actual fancy thing with the addition of a\n> --jobs option into reindexdb. The main issue here which should be\n> discussed is that when it comes to reindex of tables, you basically\n> are not going to have any conflicts between the objects manipulated.\n> However if you wish to do a reindex on a set of indexes then things\n> get more tricky as it is necessary to list items per-table so as\n> multiple connections do not conflict with each other if attempting to\n> work on multiple indexes of the same table. What this patch does is\n> to select the set of indexes which need to be worked on (see the\n> addition of cell in ParallelSlot), and then does a kind of\n> pre-planning of each item into the connection slots so as each\n> connection knows from the beginning which items it needs to process.\n> This is quite different from vacuumdb where a new item is distributed\n> only on a free connection from a unique list. I'd personally prefer\n> if we keep the facility in parallel.c so as it is only\n> execution-dependent and that we have no pre-planning. This would\n> require keeping within reindexdb.c an array of lists, with one list\n> corresponding to one connection instead which feels more natural.\n\nMy fear here is that this approach would add some extra complexity,\nespecially requiring to deal with free connection handling both in\nGetIdleSlot() and the main reindexdb loop. Also, the pre-planning\nallows us to start processing the biggest tables first, which\noptimises the overall runtime.\n\n\n", "msg_date": "Mon, 1 Jul 2019 12:30:28 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb"
}, { "msg_contents": "Now that we have REINDEX CONCURRENTLY, I think reindexdb is going to\ngain more popularity.\n\nPlease don't reuse a file name as generic as \"parallel.c\" -- it's\nannoying when navigating source. Maybe conn_parallel.c multiconn.c\nconnscripts.c admconnection.c ...?\n\nIf your server crashes or is stopped midway during the reindex, you\nwould have to start again from scratch, and it's tedious (if it's\npossible at all) to determine which indexes were missed. I think it\nwould be useful to have a two-phase mode: in the initial phase reindexdb\ncomputes the list of indexes to be reindexed and saves them into a work\ntable somewhere. In the second phase, it reads indexes from that table\nand processes them, marking them as done in the work table.
If the\nsecond phase crashes or is stopped, it can be restarted and consults the\nwork table. I would keep the work table, as it provides a bit of an\naudit trail. It may be important to be able to run even if unable to\ncreate such a work table (because of the <ironic>numerous</> users that\nDROP DATABASE postgres).\n\nMaybe we'd have two flags in the work table for each index:\n\"reindex requested\", \"reindex done\".\n \nThe \"glibc filter\" thing (which I take to mean \"indexes that depend on\ncollations\") would apply to the first phase: it just skips adding other\nindexes to the work table. I suppose ICU collations are not affected,\nso the filter would be for glibc collations only? The --glibc-dependent\nswitch seems too ad-hoc. Maybe \"--exclude-rule=glibc\"? That way we can\nadd other rules later. (Not \"--exclude=foo\" because we'll want to add\nthe possibility to ignore specific indexes by name.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jul 2019 09:51:12 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb"
}, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> - 0003 begins to be the actual fancy thing with the addition of a\n> --jobs option into reindexdb. The main issue here which should be\n> discussed is that when it comes to reindex of tables, you basically\n> are not going to have any conflicts between the objects manipulated.\n> However if you wish to do a reindex on a set of indexes then things\n> get more tricky as it is necessary to list items per-table so as\n> multiple connections do not conflict with each other if attempting to\n> work on multiple indexes of the same table. What this patch does is\n> to select the set of indexes which need to be worked on (see the\n> addition of cell in ParallelSlot), and then does a kind of\n> pre-planning of each item into the connection slots so as each\n> connection knows from the beginning which items it needs to process.\n> This is quite different from vacuumdb where a new item is distributed\n> only on a free connection from a unique list. I'd personally prefer\n> if we keep the facility in parallel.c so as it is only\n> execution-dependent and that we have no pre-planning. This would\n> require keeping within reindexdb.c an array of lists, with one list \n> corresponding to one connection instead which feels more natural.\n\nCouldn't we make this enormously simpler and less bug-prone by just\ndictating that --jobs applies only to reindex-table operations?\n\n> - 0004 is the part where the concurrent additions really matter as\n> this consists in applying an extra filter to the indexes selected so\n> as only the glibc-sensitive indexes are chosen for the processing.\n\nI think you'd be better off to define and document this as \"reindex\nonly collation-sensitive indexes\", without any particular reference\nto a reason why somebody might want to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2019 10:10:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 1, 2019 at 3:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Please don't reuse a file name as generic as \"parallel.c\" -- it's\n> annoying when navigating source.
Maybe conn_parallel.c multiconn.c\n> connscripts.c admconnection.c ...?\n\nI could use scripts_parallel.[ch] as I've already used it in the #define part?\n\n> If your server crashes or is stopped midway during the reindex, you\n> would have to start again from scratch, and it's tedious (if it's\n> possible at all) to determine which indexes were missed. I think it\n> would be useful to have a two-phase mode: in the initial phase reindexdb\n> computes the list of indexes to be reindexed and saves them into a work\n> table somewhere. In the second phase, it reads indexes from that table\n> and processes them, marking them as done in the work table. If the\n> second phase crashes or is stopped, it can be restarted and consults the\n> work table. I would keep the work table, as it provides a bit of an\n> audit trail. It may be important to be able to run even if unable to\n> create such a work table (because of the <ironic>numerous</> users that\n> DROP DATABASE postgres).\n\nOr we could create a table locally in each database, that would fix\nthis problem and probably make the code simpler?\n\nIt also raises some additional concerns about data expiration. I\nguess that someone could launch the tool by mistake, kill reindexdb,\nand run it again 2 months later while a lot of new objects have been\nadded for instance.\n\n> The \"glibc filter\" thing (which I take to mean \"indexes that depend on\n> collations\") would apply to the first phase: it just skips adding other\n> indexes to the work table. I suppose ICU collations are not affected,\n> so the filter would be for glibc collations only?\n\nIndeed, ICU shouldn't need such a filter. xxx_pattern_ops based\nindexes are also excluded.\n\n> The --glibc-dependent\n> switch seems too ad-hoc. Maybe \"--exclude-rule=glibc\"? That way we can\n> add other rules later.
(Not \"--exclude=foo\" because we'll want to add\n> the possibility to ignore specific indexes by name.)\n\nThat's a good point, I like the --exclude-rule switch.\n\n\n", "msg_date": "Mon, 1 Jul 2019 18:14:20 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb"
}, { "msg_contents": "On Mon, Jul 1, 2019 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > - 0003 begins to be the actual fancy thing with the addition of a\n> > --jobs option into reindexdb. The main issue here which should be\n> > discussed is that when it comes to reindex of tables, you basically\n> > are not going to have any conflicts between the objects manipulated.\n> > However if you wish to do a reindex on a set of indexes then things\n> > get more tricky as it is necessary to list items per-table so as\n> > multiple connections do not conflict with each other if attempting to\n> > work on multiple indexes of the same table. What this patch does is\n> > to select the set of indexes which need to be worked on (see the\n> > addition of cell in ParallelSlot), and then does a kind of\n> > pre-planning of each item into the connection slots so as each\n> > connection knows from the beginning which items it needs to process.\n> > This is quite different from vacuumdb where a new item is distributed\n> > only on a free connection from a unique list. I'd personally prefer\n> > if we keep the facility in parallel.c so as it is only\n> > execution-dependent and that we have no pre-planning. This would\n> > require keeping within reindexdb.c an array of lists, with one list\n> > corresponding to one connection instead which feels more natural.\n>\n> Couldn't we make this enormously simpler and less bug-prone by just\n> dictating that --jobs applies only to reindex-table operations?\n\nThat would also mean that we'll have to fall back on doing reindex at\ntable-level, even if we only want to reindex indexes that depend on\nglibc. I'm afraid that this will often add a huge penalty.\n\n> > - 0004 is the part where the concurrent additions really matter as\n> > this consists in applying an extra filter to the indexes selected so\n> > as only the glibc-sensitive indexes are chosen for the processing.\n>\n> I think you'd be better off to define and document this as \"reindex\n> only collation-sensitive indexes\", without any particular reference\n> to a reason why somebody might want to do that.\n\nWe should still document that indexes based on ICU would be excluded?\nI also realize that I totally forgot to update reindexdb.sgml.
Sorry\nabout that, I'll fix it with the next versions.\n\n\n", "msg_date": "Mon, 1 Jul 2019 18:28:13 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "\tJulien Rouhaud wrote:\n\n> > I think you'd be better off to define and document this as \"reindex\n> > only collation-sensitive indexes\", without any particular reference\n> > to a reason why somebody might want to do that.\n> \n> We should still document that indexes based on ICU would be excluded?\n\nBut why exclude them?\nAs a data point, in the last 5 years, the en_US collation in ICU\nhad 7 different versions (across 11 major versions of ICU):\n\nICU\tUnicode en_US\n\n54.1\t7.0\t137.56\n55.1\t7.0\t153.56\n56.1\t8.0\t153.64\n57.1\t8.0\t153.64\n58.2\t9.0\t153.72\n59.1\t9.0\t153.72\n60.2\t10.0\t153.80\n61.1\t10.0\t153.80\n62.1\t11.0\t153.88\n63.2\t11.0\t153.88\n64.2\t12.1\t153.97\n\nThe rightmost column corresponds to pg_collation.collversion\nin Postgres.\nEach time there's a new Unicode version, it seems\nall collation versions are bumped.
And there's a new Unicode\nversion pretty much every year these days.\nBased on this, most ICU upgrades in practice would require reindexing\naffected indexes.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 01 Jul 2019 22:13:18 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-01, Daniel Verite wrote:\n\n> But why exclude them?\n> As a data point, in the last 5 years, the en_US collation in ICU\n> had 7 different versions (across 11 major versions of ICU):\n\nSo we need a switch --include-rule=icu-collations?\n\n(I mentioned \"--exclude-rule=glibc\" elsewhere in the thread, but I think\nit should be --include-rule=glibc-collations instead, no?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jul 2019 16:33:16 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 1, 2019 at 10:13 PM Daniel Verite <daniel@manitou-mail.org> wrote:\n>\n> > > I think you'd be better off to define and document this as \"reindex\n> > > only collation-sensitive indexes\", without any particular reference\n> > > to a reason why somebody might want to do that.\n> >\n> > We should still document that indexes based on ICU would be excluded?\n>\n> But why exclude them?\n> As a data point, in the last 5 years, the en_US collation in ICU\n> had 7 different versions (across 11 major versions of ICU):\n>\n> ICU Unicode en_US\n>\n> 54.1 7.0 137.56\n> 55.1 7.0 153.56\n> 56.1 8.0 153.64\n> 57.1 8.0 153.64\n> 58.2 9.0 153.72\n> 59.1 9.0 153.72\n> 60.2 10.0 153.80\n> 61.1 10.0 153.80\n> 62.1 11.0 153.88\n>
63.2 11.0 153.88\n> 64.2 12.1 153.97\n>\n> The rightmost column corresponds to pg_collation.collversion\n> in Postgres.\n> Each time there's a new Unicode version, it seems\n> all collation versions are bumped. And there's a new Unicode\n> version pretty much every year these days.\n> Based on this, most ICU upgrades in practice would require reindexing\n> affected indexes.\n\nI have a vague recollection that ICU was providing some backward\ncompatibility so that even if you upgrade your lib you can still get\nthe sort order that was active when you built your indexes, though\nmaybe for a limited number of versions.\n\nEven if that's just me being delusional, I'd still prefer Alvaro's\napproach to have distinct switches for each collation system.\n\n\n", "msg_date": "Mon, 1 Jul 2019 22:34:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 1, 2019 at 1:34 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I have a vague recollection that ICU was providing some backward\n> compatibility so that even if you upgrade your lib you can still get\n> the sort order that was active when you built your indexes, though\n> maybe for a limited number of versions.\n\nThat isn't built in. Another database system that uses ICU handles\nthis by linking to multiple versions of ICU, each with its own UCA\nversion and associated collations.
I don't think that we want to go\nthere, so it makes sense to make an upgrade that crosses ICU or glibc\nversions as painless as possible.\n\nNote that ICU does at least provide a standard way to use multiple\nversions at once; the symbol names have the ICU version baked in.\nYou're actually calling the functions using the versioned symbol names\nwithout realizing it, because there is macro trickery involved.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Jul 2019 14:21:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 2, 2019 at 8:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Even if that's just me being delusional, I'd still prefer Alvaro's\n> approach to have distinct switches for each collation system.\n\nHi Julien,\n\nMakes sense. But why use the name \"glibc\" in the code and user\ninterface? The name of the collation provider in PostgreSQL is \"libc\"\n(for example in the CREATE COLLATION command), and the problem applies\nno matter who makes your libc.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2019 09:40:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-02, Thomas Munro wrote:\n\n> On Tue, Jul 2, 2019 at 8:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Even if that's just me being delusional, I'd still prefer Alvaro's\n> > approach to have distinct switches for each collation system.\n> \n> Hi Julien,\n> \n> Makes sense. But why use the name \"glibc\" in the code and user\n> interface? The name of the collation provider in PostgreSQL is \"libc\"\n> (for example in the CREATE COLLATION command), and the problem applies\n> no matter who makes your libc.\n\nMakes sense.
\"If your libc is glibc and you go across an upgrade over\nversion X, please use --include-rule=libc-collation\"\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jul 2019 17:46:48 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 01, 2019 at 06:28:13PM +0200, Julien Rouhaud wrote:\n> On Mon, Jul 1, 2019 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Couldn't we make this enormously simpler and less bug-prone by just\n>> dictating that --jobs applies only to reindex-table operations?\n\nI had the same argument about the first patch sets actually, but... :)\n\n> That would also mean that we'll have to fall back on doing reindex at\n> table-level, even if we only want to reindex indexes that depend on\n> glibc. I'm afraid that this will often add a huge penalty.\n\nYes, I would expect that most of the time glibc-sensitive indexes are\nalso mixed with other ones which we don't care about here. One\nadvantage of the argument from Tom though is that it is possible to\nintroduce --jobs with minimal steps:\n1) Refactor the code for connection slots, without the cell addition\n2) Introduce --jobs without INDEX support.\n\nIn short, the conflict business between indexes is something which\ncould be tackled afterwards and with a separate patch. 
Parallel\nindexes at table-level has value in itself, particularly with\nCONCURRENTLY coming in the picture.\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 11:49:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 01, 2019 at 06:14:20PM +0200, Julien Rouhaud wrote:\n> On Mon, Jul 1, 2019 at 3:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > Please don't reuse a file name as generic as \"parallel.c\" -- it's\n> > annoying when navigating source. Maybe conn_parallel.c multiconn.c\n> > connscripts.c admconnection.c ...?\n> \n> I could use scripts_parallel.[ch] as I've already used it in the\n> #define part?\n\nmulticonn.c sounds rather good, but I have a poor ear for any kind of\nnaming..\n\n>> If your server crashes or is stopped midway during the reindex, you\n>> would have to start again from scratch, and it's tedious (if it's\n>> possible at all) to determine which indexes were missed. I think it\n>> would be useful to have a two-phase mode: in the initial phase reindexdb\n>> computes the list of indexes to be reindexed and saves them into a work\n>> table somewhere. In the second phase, it reads indexes from that table\n>> and processes them, marking them as done in the work table. If the\n>> second phase crashes or is stopped, it can be restarted and consults the\n>> work table. I would keep the work table, as it provides a bit of an\n>> audit trail. It may be important to be able to run even if unable to\n>> create such a work table (because of the <ironic>numerous</> users that\n>> DROP DATABASE postgres).\n> \n> Or we could create a table locally in each database, that would fix\n> this problem and probably make the code simpler?\n> \n> It also raises some additional concerns about data expiration. 
I\n> guess that someone could launch the tool by mistake, kill reindexdb,\n> and run it again 2 months later while a lot of new objects have been\n> added for instance.\n\nThese look like fancy additions, still that's not the core of the\nproblem, no? If you begin to play in this area you would need more\ncontrol options, basically a \"continue\" mode to be able to restart a\npreviously failed attempt, and a \"reinit\" mode able to restart the\noperation completely from scratch, and perhaps even a \"reset\" mode\nwhich cleans up any data already present. Not really complex,\nbut this has to be maintained at the database level.\n\n>> The --glibc-dependent\n>> switch seems too ad-hoc. Maybe \"--exclude-rule=glibc\"? That way we can\n>> add other rules later. (Not \"--exclude=foo\" because we'll want to add\n>> the possibility to ignore specific indexes by name.)\n> \n> That's a good point, I like the --exclude-rule switch.\n\nSounds kind of nice.\n--\nMichael", "msg_date": "Tue, 2 Jul 2019 11:55:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-07-01 22:46, Alvaro Herrera wrote:\n> On 2019-Jul-02, Thomas Munro wrote:\n>> On Tue, Jul 2, 2019 at 8:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> Even if that's just me being delusional, I'd still prefer Alvaro's\n>>> approach to have distinct switches for each collation system.\n>>\n>> Makes sense. But why use the name \"glibc\" in the code and user\n>> interface? The name of the collation provider in PostgreSQL is \"libc\"\n>> (for example in the CREATE COLLATION command), and the problem applies\n>> no matter who makes your libc.\n> \n> Makes sense. \"If your libc is glibc and you go across an upgrade over\n> version X, please use --include-rule=libc-collation\"\n\nI think it might be better to put the logic of what indexes are\ncollation affected etc. 
into the backend REINDEX command. We are likely\nto enhance the collation version and dependency tracking over time,\npossibly soon, possibly multiple times, and it would be very cumbersome\nto have to keep updating reindexdb with this. Moreover, since for\nperformance you likely want to reindex by table, implementing a logic of\n\"reindex all collation-affected indexes on this table\" would be much\neasier to do in the backend.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jul 2019 08:19:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 1, 2019 at 11:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jul 1, 2019 at 1:34 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > I have a vague recollection that ICU was providing some backward\n> > compatibility so that even if you upgrade your lib you can still get\n> > the sort order that was active when you built your indexes, though\n> > maybe for a limited number of versions.\n>\n> That isn't built in. Another database system that uses ICU handles\n> this by linking to multiple versions of ICU, each with its own UCA\n> version and associated collations. 
I don't think that we want to go\n> there, so it makes sense to make an upgrade that crosses ICU or glibc\n> versions as painless as possible.\n>\n> Note that ICU does at least provide a standard way to use multiple\n> versions at once; the symbol names have the ICU version baked in.\n> You're actually calling the functions using the versioned symbol names\n> without realizing it, because there is macro trickery involved.\n\nAh, thanks for the clarification!\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:16:22 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Sun, Jun 30, 2019 at 11:45:47AM +0200, Julien Rouhaud wrote:\n>Hi,\n>\n>With glibc 2.28 coming, all users will have to reindex almost\n>every index after a glibc upgrade to guarantee the lack of\n>corruption. Unfortunately, reindexdb is not ideal for that as it's\n>processing everything using a single connection and isn't able to\n>discard indexes that don't depend on a glibc collation.\n>\n>PFA a patchset to add parallelism to reindexdb (reusing the\n>infrastructure in vacuumdb with some additions) and an option to\n>discard indexes that don't depend on glibc (without any specific\n>collation filtering or glibc version detection), with updated\n>regression tests. Note that this should be applied on top of the\n>existing reindexdb cleanup & refactoring patch\n>(https://commitfest.postgresql.org/23/2115/).\n>\n>This was sponsored by VMware, and has been discussed internally with\n>Kevin and Michael, in Cc.\n\nI wonder why this is necessary:\n\npg_log_error(\"cannot reindex glibc dependent objects and a subset of objects\");\n\nWhat's the reasoning behind that? 
It seems like a valid use case to me -\nimagine you have a bug database, but only a couple of tables are used by\nthe application regularly (the rest may be archive tables, for example).\nWhy not allow rebuilding glibc-dependent indexes on the used tables, so\nthat the database can be opened for users sooner.\n\nBTW now that we allow rebuilding only some of the indexes, it'd be great\nto have a dry-run mode, where we just print which indexes will be rebuilt\nwithout actually rebuilding them.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:28:04 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 2, 2019 at 9:19 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-07-01 22:46, Alvaro Herrera wrote:\n> > On 2019-Jul-02, Thomas Munro wrote:\n> >> On Tue, Jul 2, 2019 at 8:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>> Even if that's just me being delusional, I'd still prefer Alvaro's\n> >>> approach to have distinct switches for each collation system.\n> >>\n> >> Makes sense. But why use the name \"glibc\" in the code and user\n> >> interface? The name of the collation provider in PostgreSQL is \"libc\"\n> >> (for example in the CREATE COLLATION command), and the problem applies\n> >> no matter who makes your libc.\n> >\n> > Makes sense. \"If your libc is glibc and you go across an upgrade over\n> > version X, please use --include-rule=libc-collation\"\n>\n> I think it might be better to put the logic of what indexes are\n> collation affected etc. 
We are likely\n> to enhance the collation version and dependency tracking over time,\n> possibly soon, possibly multiple times, and it would be very cumbersome\n> to have to keep updating reindexdb with this. Moreover, since for\n> performance you likely want to reindex by table, implementing a logic of\n> \"reindex all collation-affected indexes on this table\" would be much\n> easier to do in the backend.\n\nThat's a great idea, and would make the parallelism in reindexdb much\nsimpler. There's however a downside, as users won't have a way to\nbenefit from index filtering until they upgrade to this version. OTOH\nglibc 2.28 is already there, and a hypothetical fancy reindexdb is far\nfrom being released.\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:30:57 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 2, 2019 at 10:28 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> I wonder why this is necessary:\n>\n> pg_log_error(\"cannot reindex glibc dependent objects and a subset of objects\");\n>\n> What's the reasoning behind that? It seems like a valid use case to me -\n> imagine you have a bug database, but only a couple of tables are used by\n> the application regularly (the rest may be archive tables, for example).\n> Why not to allow rebuilding glibc-dependent indexes on the used tables, so\n> that the database can be opened for users sooner.\n\nIt just seemed wrong to me to allow a partial processing for something\nthat's aimed to prevent corruption. I'd think that if users are\nknowledgeable enough to only reindex a subset of indexes/tables in\nsuch cases, they can also discard indexes that don't get affected by a\ncollation lib upgrade. 
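(For reference, one rough way such a user could pick out the libc-collation-dependent indexes is a catalog query along the lines of the sketch below. Everything in it is an illustrative assumption, not something from the patch: the database name "mydb" is invented, and indexes that rely only on the database default collation leave no pg_depend entry, so a query of this shape would not list them.)

```shell
# Sketch only: "mydb" and this exact query are assumptions for illustration.
DB="mydb"
QUERY="
SELECT DISTINCT d.objid::regclass
FROM pg_depend d
JOIN pg_collation c ON d.refclassid = 'pg_collation'::regclass
                   AND d.refobjid = c.oid
JOIN pg_index i ON d.classid = 'pg_class'::regclass
               AND d.objid = i.indexrelid
WHERE c.collprovider = 'c'            -- libc-provided collations
  AND c.collname NOT IN ('C', 'POSIX');
"
# Print the suggested pipeline instead of executing it, so this sketch has
# no server-side effects; each returned index name becomes a -i argument.
echo "psql -XAtc \"\$QUERY\" $DB | xargs -r -I{} reindexdb -i {} $DB"
```

A real run would execute the pipeline instead of printing it; note it opens one reindexdb connection per index, which is exactly the inefficiency the --jobs work is trying to avoid.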
I'm not strongly opposed to supporting it\nthough, as there indeed can be valid use cases.\n\n> BTW now that we allow rebuilding only some of the indexes, it'd be great\n> to have a dry-run mode, where we just print which indexes will be rebuilt\n> without actually rebuilding them.\n\n+1. If we end up doing the filter in the backend, we'd have to add\nsuch option in the REINDEX command, and actually issue all the orders\nto retrieve the list.\n\n\n", "msg_date": "Tue, 2 Jul 2019 10:45:44 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 02, 2019 at 10:45:44AM +0200, Julien Rouhaud wrote:\n>On Tue, Jul 2, 2019 at 10:28 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I wonder why this is necessary:\n>>\n>> pg_log_error(\"cannot reindex glibc dependent objects and a subset of objects\");\n>>\n>> What's the reasoning behind that? It seems like a valid use case to me -\n>> imagine you have a bug database, but only a couple of tables are used by\n>> the application regularly (the rest may be archive tables, for example).\n>> Why not allow rebuilding glibc-dependent indexes on the used tables, so\n>> that the database can be opened for users sooner.\n>\n>It just seemed wrong to me to allow a partial processing for something\n>that's aimed to prevent corruption. I'd think that if users are\n>knowledgeable enough to only reindex a subset of indexes/tables in\n>such cases, they can also discard indexes that don't get affected by a\n>collation lib upgrade. I'm not strongly opposed to supporting it\n>though, as there indeed can be valid use cases.\n>\n\nI don't know, it just seems like an unnecessary limitation.\n\n>> BTW now that we allow rebuilding only some of the indexes, it'd be great\n>> to have a dry-run mode, where we just print which indexes will be rebuilt\n>> without actually rebuilding them.\n>\n>+1. 
If we end up doing the filter in the backend, we'd have to add\n>such option in the REINDEX command, and actually issue all the orders\n>to retrieve the list.\n\nHmmm, yeah. FWIW I'm not requesting v0 to have that feature, but it'd be\ngood to design the feature in a way that allows adding it later.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jul 2019 12:12:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-07-02 10:30, Julien Rouhaud wrote:\n> That's a great idea, and would make the parallelism in reindexdb much\n> simpler. There's however a downside, as users won't have a way to\n> benefit from index filtering until they upgrade to this version. OTOH\n> glibc 2.28 is already there, and a hypothetical fancy reindexdb is far\n> from being released.\n\nIsn't that also the case for your proposal? We are not going to release\na new reindexdb before a new REINDEX.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jul 2019 18:16:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-07-02 10:45, Julien Rouhaud wrote:\n> It just seemed wrong to me to allow a partial processing for something\n> that's aimed to prevent corruption. I'd think that if users are\n> knowledgeable enough to only reindex a subset of indexes/tables in\n> such cases, they can also discard indexes that don't get affected by a\n> collation lib upgrade. I'm not strongly opposed to supporting it\n> though, as there indeed can be valid use cases.\n\nWe are moving in this direction. 
Thomas Munro has proposed an approach\nfor tracking collation versions on a per-object level rather than\nper-database. So then we'd need a way to reindex not those indexes\naffected by collation but only those affected by collation and not yet\nfixed.\n\nOne could also imagine a behavior where not-yet-fixed indexes are simply\nignored by the planner. So the gradual upgrading approach that Tomas\ndescribed is absolutely a possibility.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jul 2019 18:22:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 5, 2019 at 6:16 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-07-02 10:30, Julien Rouhaud wrote:\n> > That's a great idea, and would make the parallelism in reindexdb much\n> > simpler. There's however a downside, as users won't have a way to\n> > benefit from index filtering until they upgrade to this version. OTOH\n> > glibc 2.28 is already there, and a hypothetical fancy reindexdb is far\n> > from being released.\n>\n> Isn't that also the case for your proposal? We are not going to release\n> a new reindexdb before a new REINDEX.\n\nSure, but my point was that once the new reindexdb is released (or if\nyou're so desperate, using a nightly build or compiling your own), it\ncan be used against any previous major version. 
There is probably a\nlarge fraction of users who don't perform a postgres upgrade when they\nupgrade their OS, so that's IMHO also something to consider.\n\n\n", "msg_date": "Fri, 5 Jul 2019 19:25:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 05, 2019 at 07:25:41PM +0200, Julien Rouhaud wrote:\n> On Fri, Jul 5, 2019 at 6:16 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> Isn't that also the case for your proposal? We are not going to release\n>> a new reindexdb before a new REINDEX.\n> \n> Sure, but my point was that once the new reindexdb is released (or if\n> you're so desperate, using a nightly build or compiling your own), it\n> can be used against any previous major version. There is probably a\n> large fraction of users who don't perform a postgres upgrade when they\n> upgrade their OS, so that's IMHO also something to consider.\n\nI think that we need to think long-term here and be confident in the\nfact we will still see breakages with collations and glibc, using a\nsolution that we think is the right API. Peter's idea to make the\nbackend-aware command of the filtering is cool. On top of that, there\nis no need to add any conflict logic in reindexdb and we can live with\nrestricting --jobs support for non-index objects.\n--\nMichael", "msg_date": "Mon, 8 Jul 2019 16:57:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 8, 2019 at 9:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 05, 2019 at 07:25:41PM +0200, Julien Rouhaud wrote:\n> > On Fri, Jul 5, 2019 at 6:16 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> >> Isn't that also the case for your proposal? 
We are not going to release\n> >> a new reindexdb before a new REINDEX.\n> >\n> > Sure, but my point was that once the new reindexdb is released (or if\n> > you're so desperate, using a nightly build or compiling your own), it\n> > can be used against any previous major version. There is probably a\n> > large fraction of users who don't perform a postgres upgrade when they\n> > upgrade their OS, so that's IMHO also something to consider.\n>\n> I think that we need to think long-term here and be confident in the\n> fact we will still see breakages with collations and glibc, using a\n> solution that we think is the right API. Peter's idea to make the\n> backend-aware command of the filtering is cool. On top of that, there\n> is no need to add any conflict logic in reindexdb and we can live with\n> restricting --jobs support for non-index objects.\n\nDon't get me wrong, I do agree that implementing filtering in the\nbackend is a better design. What's bothering me is that I also agree\nthat there will be more glibc breakage, and if that happens within a\nfew years, a lot of people will still be using pg12- version, and they\nstill won't have an efficient way to rebuild their indexes. 
Now, it'd\nbe easy to publish an external tool that does a simple\nparallel-and-glibc-filtering reindex that will serve that purpose\nfor the few years it'll be needed, so everyone can be happy.\n\nFor now, I'll resubmit the parallel patch using a per-table-only\napproach, and will submit the filtering in the backend using a new\nREINDEX option in a different thread.\n\n\n", "msg_date": "Mon, 8 Jul 2019 21:08:43 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 8, 2019 at 9:08 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> I'll resubmit the parallel patch using a per-table-only\n> approach\n\nAttached.", "msg_date": "Mon, 8 Jul 2019 23:02:14 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 1, 2019 at 09:51:12AM -0400, Alvaro Herrera wrote:\n> Now that we have REINDEX CONCURRENTLY, I think reindexdb is going to\n> gain more popularity.\n> \n> Please don't reuse a file name as generic as \"parallel.c\" -- it's\n> annoying when navigating source. Maybe conn_parallel.c multiconn.c\n> connscripts.c admconnection.c ...?\n> \n> If your server crashes or is stopped midway during the reindex, you\n> would have to start again from scratch, and it's tedious (if it's\n> possible at all) to determine which indexes were missed. I think it\n> would be useful to have a two-phase mode: in the initial phase reindexdb\n> computes the list of indexes to be reindexed and saves them into a work\n> table somewhere. In the second phase, it reads indexes from that table\n> and processes them, marking them as done in the work table. If the\n> second phase crashes or is stopped, it can be restarted and consults the\n> work table. 
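(For illustration, the work table described in the quoted proposal could be as simple as the sketch below; every name in it is invented for this sketch, nothing comes from an actual patch.)

```shell
# Print an illustrative schema for the two-phase work table; all names
# here are made up for the sketch.
DDL='
CREATE TABLE reindex_worklist (
    indexrelid   regclass PRIMARY KEY,                -- index to process
    requested_at timestamptz NOT NULL DEFAULT now(),  -- "reindex requested"
    done_at      timestamptz                          -- NULL until "reindex done"
);
-- Phase 1 fills the table; phase 2 processes rows WHERE done_at IS NULL,
-- stamping done_at after each successful REINDEX, so a crashed or stopped
-- run can be restarted and simply resumes from the remaining NULL rows.
'
printf '%s\n' "$DDL"
```

Keeping the rows after completion also gives the audit trail mentioned in the proposal.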
I would keep the work table, as it provides a bit of an\n> audit trail. It may be important to be able to run even if unable to\n> create such a work table (because of the <ironic>numerous</> users that\n> DROP DATABASE postgres).\n> \n> Maybe we'd have two flags in the work table for each index:\n> \"reindex requested\", \"reindex done\".\n\nI think we have a similar issue with adding checksums, so let's address\nwith a generic framework and use it for all cases, like vacuumdb too.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Jul 2019 20:15:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 08, 2019 at 11:02:14PM +0200, Julien Rouhaud wrote:\n> On Mon, Jul 8, 2019 at 9:08 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> I'll resubmit the parallel patch using per-table only\n>> approach\n> \n> Attached.\n\nI have done a lookup of this patch set with a focus on the refactoring\npart, and the split is a bit confusing.\n\n+void\n+DisconnectDatabase(ParallelSlot *slot)\n+{\n+ char errbuf[256];\ncommon.c has already an API to connect to a database. It would be\nmore natural to move the disconnection part also to common.c and have\nthe caller of DisconnectDatabase reset the slot connection by itself?\ndisconnectDatabase() (lower case for the first character) would make\nthe set more consistent. 
We could also have a wrapper like say\nDiscardSlot() which does this work, but that seems like an overkill\nfor a single slot if one API could do the cleanup of the full set.\n\n$ git grep select_loop\nscripts_parallel.c: /* We must reconstruct the fd_set for each\ncall to select_loop */\nscripts_parallel.c: i = select_loop(maxFd, &slotset, &aborting);\nscripts_parallel.c:select_loop(int maxFd, fd_set *workerset, bool\n*aborting)\nscripts_parallel.h:extern int select_loop(int maxFd, fd_set\n*workerset, bool *aborting);\n\nselect_loop is only present in scripts_parallel.c, so it can remain\nstatic.\n\n+ slots = (ParallelSlot *) pg_malloc(sizeof(ParallelSlot) *\nconcurrentCons);\n+ init_slot(slots, conn);\n+ if (parallel)\n+ {\n+ for (i = 1; i < concurrentCons; i++)\n+ {\n+ conn = connectDatabase(dbname, host, port,\nusername, prompt_password,\n+\nprogname, echo, false, true);\n+ init_slot(slots + i, conn);\n+ }\n+ }\n\nThis comes from 0002 and could be more refactored as vacuumdb does the\nsame thing. Based on 0001, init_slot() is called now in vacuumdb.c\nand initializes a set of slots while connecting to a given database.\nIn short, in input we have a set of parameters and the ask to open\nconnections with N slots, and the return result is an pg_malloc'd\narray of slots ready to be used. We could just call that\nParallelSlotInit() (if you have a better name feel free).\n\n+ /*\n+ * Get the connection slot to use. If in parallel mode, here we wait\n+ * for one connection to become available if none already is. In\n+ * non-parallel mode we simply use the only slot we have, which we\n+ * know to be free.\n+ */\n+ if (parallel)\nThis business also is duplicated in both reindexdb.c and vacuumdb.c.\n\n+bool\n+GetQueryResult(PGconn *conn, const char *progname)\n+{\nThis also does not stick with the parallel stuff, as that's basically\nonly getting a query result. We could stick that into common.c.\n\nPatch 2 has no documentation. 
The option as well as the restrictions\nin place need to be documented properly.\n\nHere is a small idea about the set of routines we could have for the\nparallel stuff, with only three of them needed to work on the parallel\nslots and get free connections:\n- Initialization of the full slot set.\n- Cleanup and disconnection of the slots.\n- Fetch an idle connection and wait for one until available.\n--\nMichael", "msg_date": "Tue, 9 Jul 2019 16:24:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Thanks for the review.\n\nOn Tue, Jul 9, 2019 at 9:24 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 08, 2019 at 11:02:14PM +0200, Julien Rouhaud wrote:\n> > On Mon, Jul 8, 2019 at 9:08 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> I'll resubmit the parallel patch using per-table only\n> >> approach\n> >\n> > Attached.\n>\n> I have done a lookup of this patch set with a focus on the refactoring\n> part, and the split is a bit confusing.\n\nYes, that wasn't a smart split :(\n\n> +void\n> +DisconnectDatabase(ParallelSlot *slot)\n> +{\n> + char errbuf[256];\n> common.c has already an API to connect to a database. 
It would be\n> more natural to move the disconnection part also to common.c and have\n> the caller of DisconnectDatabase reset the slot connection by itself?\n\nOk.\n\n> $ git grep select_loop\n> scripts_parallel.c: /* We must reconstruct the fd_set for each\n> call to select_loop */\n> scripts_parallel.c: i = select_loop(maxFd, &slotset, &aborting);\n> scripts_parallel.c:select_loop(int maxFd, fd_set *workerset, bool\n> *aborting)\n> scripts_parallel.h:extern int select_loop(int maxFd, fd_set\n> *workerset, bool *aborting);\n>\n> select_loop is only present in scripts_parallel.c, so it can remain\n> static.\n\nGood point.\n\n> + slots = (ParallelSlot *) pg_malloc(sizeof(ParallelSlot) *\n> concurrentCons);\n> + init_slot(slots, conn);\n> + if (parallel)\n> + {\n> + for (i = 1; i < concurrentCons; i++)\n> + {\n> + conn = connectDatabase(dbname, host, port,\n> username, prompt_password,\n> +\n> progname, echo, false, true);\n> + init_slot(slots + i, conn);\n> + }\n> + }\n>\n> This comes from 0002 and could be more refactored as vacuumdb does the\n> same thing. Based on 0001, init_slot() is called now in vacuumdb.c\n> and initializes a set of slots while connecting to a given database.\n> In short, in input we have a set of parameters and the ask to open\n> connections with N slots, and the return result is an pg_malloc'd\n> array of slots ready to be used. We could just call that\n> ParallelSlotInit() (if you have a better name feel free).\n\nGiven how the rest of the functions are named, I'll probably use\nInitParallelSlots().\n\n>\n> + /*\n> + * Get the connection slot to use. If in parallel mode, here we wait\n> + * for one connection to become available if none already is. 
In\n> + * non-parallel mode we simply use the only slot we have, which we\n> + * know to be free.\n> + */\n> + if (parallel)\n> This business also is duplicated in both reindexdb.c and vacuumdb.c.\n>\n> +bool\n> +GetQueryResult(PGconn *conn, const char *progname)\n> +{\n> This also does not stick with the parallel stuff, as that's basically\n> only getting a query result. We could stick that into common.c.\n\nThis function also has a bad name, as it's discarding the result via\nProcessQueryResult. Maybe we should rename them to GetQuerySuccess()\nand ConsumeAndTrashQueryResult()?\n\n> Patch 2 has no documentation. The option as well as the restrictions\n> in place need to be documented properly.\n\nI forgot that I had forgotten to add documentation :( will fix this time.\n\n\n", "msg_date": "Tue, 9 Jul 2019 09:52:38 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-07-08 21:08, Julien Rouhaud wrote:\n> Don't get me wrong, I do agree that implementing filtering in the\n> backend is a better design. What's bothering me is that I also agree\n> that there will be more glibc breakage, and if that happens within a\n> few years, a lot of people will still be using pg12- version, and they\n> still won't have an efficient way to rebuild their indexes. 
Now, it'd\n> be easy to publish an external tool that does a simple\n> parallel-and-glibc-filtering reindex that will serve that purpose\n> for the few years it'll be needed, so everyone can be happy.\n\nYou can already do that: Run a query through psql to get a list of\naffected tables or indexes and feed those to reindexdb using -i or -t\noptions.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jul 2019 13:09:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 9, 2019 at 9:52 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Jul 9, 2019 at 9:24 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > I have done a lookup of this patch set with a focus on the refactoring\n> > part, and the split is a bit confusing.\n> [...]\n\nI finished to do a better refactoring, and ended up with this API in\nscripts_parallel:\n\nextern ParallelSlot *ConsumeIdleSlot(ParallelSlot *slots, int numslots,\nconst char *progname);\n\nextern ParallelSlot *SetupParallelSlots(const char *dbname, const char *host,\nconst char *port,\nconst char *username, bool prompt_password,\nconst char *progname, bool echo,\nPGconn *conn, int numslots);\n\nextern bool WaitForSlotsCompletion(ParallelSlot *slots, int numslots,\n const char *progname);\n\nConsumeIdleSlot() being a wrapper on top of (now static) GetIdleSlot,\nwhich handles parallelism and possible failure.\n\nAttached v3, including updated documentation for the new -j option.", "msg_date": "Tue, 9 Jul 2019 14:56:37 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 09, 2019 at 01:09:38PM +0200, Peter Eisentraut 
wrote:\n> You can already do that: Run a query through psql to get a list of\n> affected tables or indexes and feed those to reindexdb using -i or -t\n> options.\n\nSure, but that's limited if one can only afford a limited amount of\ndowntime for an upgrade window and you still need to handle properly\nthe index-level conflicts when doing the processing in parallel.\n--\nMichael", "msg_date": "Wed, 10 Jul 2019 13:46:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-09, Julien Rouhaud wrote:\n\n> I finished to do a better refactoring, and ended up with this API in\n> scripts_parallel:\n\nLooking good! I'm not sure about the \"Consume\" word in ConsumeIdleSlot;\nmaybe \"Reserve\"? \"Obtain\"? \"Get\"?\n\nCode commentary: I think the comment that sits atop the function should\ndescribe what the function does without getting too much in how it does\nit. For example in ConsumeIdleSlot you have \"If there are multiples\nslots, here we wait for one connection to become available if none\nalready is, returning NULL if an error occured. Otherwise, we simply\nuse the only slot we have, which we know to be free.\" which seems like\nit should be in another comment *inside* the function; make the external\none something like \"Reserve and return a connection that is currently\nidle, waiting until one becomes idle if none is\". Maybe you can put the\npart I first quoted as a second paragraph in the comment at top of\nfunction and keeping the second part I quoted as first paragraph; we\nseem to use that style too.\n\nPlacement: I think it's good if related functions stay together, or\nthere is some other rationale for placement within the file. I have two\nfavorite approaches: one is to put all externally callable functions at\ntop of file, followed by all the static helpers in the lower half of the\nfile. 
The other is to put each externally accessible function immediately\nfollowed by its specific static helpers. If you choose one of those,\nthat means that SetupParallelSlots should either move upwards, or move\ndownwards. The current ordering seems a dartboard kind of thing where\nthe thrower is not Green Arrow.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Jul 2019 10:15:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Hi Alvaro,\n\nThanks a lot for the review!\n\nOn Wed, Jul 10, 2019 at 4:15 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-09, Julien Rouhaud wrote:\n>\n> > I finished doing a better refactoring, and ended up with this API in\n> > scripts_parallel:\n>\n> Looking good!\n\nThanks!\n\n> I'm not sure about the \"Consume\" word in ConsumeIdleSlot;\n> maybe \"Reserve\"? \"Obtain\"? \"Get\"?\n\nYes, Consume is maybe a little bit weird. I wanted to make it clear\nthat this function is actually removing a slot from the free list,\nespecially since there's a (historical) get_idle_slot(). I like\nReserve, but Obtain and Get are probably too ambiguous.\n\n> Code commentary: I think the comment that sits atop the function should\n> describe what the function does without getting too much into how it does\n> it. For example in ConsumeIdleSlot you have \"If there are multiples\n> slots, here we wait for one connection to become available if none\n> already is, returning NULL if an error occured. Otherwise, we simply\n> use the only slot we have, which we know to be free.\" which seems like\n> it should be in another comment *inside* the function; make the external\n> one something like \"Reserve and return a connection that is currently\n> idle, waiting until one becomes idle if none is\". 
Maybe you can put the\n> part I first quoted as a second paragraph in the comment at top of\n> function and keeping the second part I quoted as first paragraph; we\n> seem to use that style too.\n\nGood point, I'll fix as you say.\n\n> Placement: I think it's good if related functions stay together, or\n> there is some other rationale for placement within the file. I have two\n> favorite approaches: one is to put all externally callable functions at\n> top of file, followed by all the static helpers in the lower half of the\n> file. The other is to put each externally accessible immediately\n> followed by its specific static helpers. If you choose one of those,\n> that means that SetupParallelSlots should either move upwards, or move\n> downwards. The current ordering seems a dartboard kind of thing where\n> the thrower is not Green Arrow.\n\n:) I tried to put everything in alphabetic order as it was previously\nbeing done, but I apparently failed again at sorting more than 3\ncharacters.\n\nI usually prefer to group exported functions together and static ones\ntogether, as I find it more maintainable in the long term, so upwards\nit'll be.\n\n\n", "msg_date": "Wed, 10 Jul 2019 21:44:14 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Wed, Jul 10, 2019 at 09:44:14PM +0200, Julien Rouhaud wrote:\n> On Wed, Jul 10, 2019 at 4:15 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Looking good!\n> \n> Thanks!\n\nConfirmed. The last set is much easier to go through.\n\n>> I'm not sure about the \"Consume\" word in ConsumeIdleSlot;\n>> maybe \"Reserve\"? \"Obtain\"? \"Get\"?\n> \n> Yes, Consume is maybe a little bit weird. I wanted to point out the\n> make it clear that this function is actually removing a slot from the\n> free list, especially since there's a (historical) get_idle_slot(). 
I\n> like Reserve, but Obtain and Get are probably too ambiguous.\n\nThe refactoring patch is getting in shape. Now reindex_one_database()\nis the only place setting and manipulating the slots. I am wondering\nif we should have a wrapper which disconnects all the slots (doing\nconn = NULL after the disconnectDatabase() call does not matter).\nGet* would be my choice, because we look at the set of parallel slots,\nand get an idle one. It would be nice to have more consistency in the\nnames for the routines, say:\n- ParallelSlotInit() instead of SetupParallelSlots (still my\nsuggestion is not perfect either as that sounds like one single slot,\nbut we have a set of these).\n- ParallelSlotGetIdle() instead of ConsumeIdleSlot(). Still that's\nmore a wait-then-get behavior.\n- ParallelSlotWaitCompletion() instead of WaitForSlotsCompletion()\n- ParallelSlotDisconnect, as a wrapper for the calls to\nDisconnectDatabase().\n\n>> Placement: I think it's good if related functions stay together, or\n>> there is some other rationale for placement within the file. I have two\n>> favorite approaches: one is to put all externally callable functions at\n>> top of file, followed by all the static helpers in the lower half of the\n>> file. The other is to put each externally accessible immediately\n>> followed by its specific static helpers. If you choose one of those,\n>> that means that SetupParallelSlots should either move upwards, or move\n>> downwards. The current ordering seems a dartboard kind of thing where\n>> the thrower is not Green Arrow.\n> \n> I usually prefer to group exported functions together and static ones\n> together, as I find it more maintainable in the long term, so upwards\n> it'll be.\n\nThat's mainly a matter of taste. Depending on the code path in the\ntree, sometimes the two approaches from above are used. 
We have some\nother files where the static routines are listed first at the top,\nfollowed by the exported ones at the bottom, as it saves some\ndeclarations of the functions at the top of the file. Keeping the\nfootprint of the author is not that bad either, and that depends also\non the context. For this one, as the static functions are linked to\nthe exported ones in a close manner, I would keep each set grouped\nthough.\n\n+ /*\n+ * Database-wide parallel reindex requires special processing. If\n+ * multiple jobs were asked, we have to reindex system catalogs first,\n+ * as they can't be processed in parallel.\n+ */\n+ if (process_type == REINDEX_DATABASE)\n\nIn patch 0002, a parallel database REINDEX first processes the catalog\nrelations in a serializable fashion, and then all the other relations\nin parallel, which is right. Could we have schema-level reindexes also\nprocess things in parallel with all the relations from all the schemas\nlisted? This would be profitable in particular for callers listing\nmultiple schemas with an unbalanced number of tables in each, and we'd\nneed to be careful of the same where pg_catalog is listed. Actually,\nyour patch breaks if we do a parallel run with pg_catalog and another\nschema, no?\n--\nMichael", "msg_date": "Thu, 11 Jul 2019 13:04:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 11, 2019 at 6:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 10, 2019 at 09:44:14PM +0200, Julien Rouhaud wrote:\n> > On Wed, Jul 10, 2019 at 4:15 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >> Looking good!\n> >\n> > Thanks!\n>\n> Confirmed. The last set is much easier to go through.\n>\n> >> I'm not sure about the \"Consume\" word in ConsumeIdleSlot;\n> >> maybe \"Reserve\"? \"Obtain\"? \"Get\"?\n> >\n> > Yes, Consume is maybe a little bit weird. 
I wanted to point out the\n> > make it clear that this function is actually removing a slot from the\n> > free list, especially since there's a (historical) get_idle_slot(). I\n> > like Reserve, but Obtain and Get are probably too ambiguous.\n>\n> The refactoring patch is getting in shape. Now reindex_one_database()\n> is the only place setting and manipulating the slots. I am wondering\n> if we should have a wrapper which disconnects all the slots (doing\n> conn = NULL after the disconnectDatabase() call does not matter).\n\nYou already mentioned that in a previous mail. I was afraid it'd be\noverkill, but it'll make caller code easier, so let's do it.\n\n> Get* would be my choice, because we look at the set of parallel slots,\n> and get an idle one.\n\nThat's what the former GetIdleSlot (that I renamed to get_idle_slot as\nit's not static) is doing. ConsumeIdleSlot() actually get an idle\nslot and mark it as being used. That's probably some leakage of\ninternal implementation, but having a GetIdleParallelSlot (or\nParallelSlotGetIdle) *and* a get_idle_slot sounds like a bad idea, and\nI don't have a better idea on how to rename get_idle_slot. If you\nreally prefer Get* and you're fine with a static get_idle_slot, that's\nfine by me.\n\n> It would be nice to have more consistency in the\n> names for the routines, say:\n> - ParallelSlotInit() instead of SetupParallelSlots (still my\n> suggestion is not perfect either as that sounds like one single slot,\n> but we have a set of these).\n> - ParallelSlotGetIdle() instead of ConsumeIdleSlot(). Still that's\n> more a wait-then-get behavior.\n> - ParallelSlotWaitCompletion() instead of WaitForSlotsCompletion()\n> - ParallelSlotDisconnect, as a wrapper for the calls to\n> DisconnectDatabase().\n\nI don't have an opinion on whether to use parallel slot as prefix or\npostfix, so I'm fine with postfixing.\n\n> + /*\n> + * Database-wide parallel reindex requires special processing. 
If\n> + * multiple jobs were asked, we have to reindex system catalogs first,\n> + * as they can't be processed in parallel.\n> + */\n> + if (process_type == REINDEX_DATABASE)\n>\n> In patch 0002, a parallel database REINDEX first processes the catalog\n> relations in a serializable fashion, and then all the other relations\n> in parallel, which is right Could we have schema-level reindexes also\n> process things in parallel with all the relations from all the schemas\n> listed? This would be profitable in particular for callers listing\n> multiple schemas with an unbalanced number of tables in each\n\nIt would also be beneficial for a parallel reindex of a single schema.\n\n> and we'd\n> need to be careful of the same where pg_catalog is listed. Actually,\n> your patch breaks if we do a parallel run with pg_catalog and another\n> schema, no?\n\nIt can definitely cause problems if you ask for pg_catalog and other\nschema, same as if you ask a parallel reindex of some catalog tables\n(possibly with other tables). There's a --system switch for that\nneed, so I don't know if documenting the limitation to avoid some\nextra code to deal with it is a good enough solution?\n\n\n", "msg_date": "Thu, 11 Jul 2019 11:48:20 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 11, 2019 at 11:48:20AM +0200, Julien Rouhaud wrote:\n> On Thu, Jul 11, 2019 at 6:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Get* would be my choice, because we look at the set of parallel slots,\n>> and get an idle one.\n> \n> That's what the former GetIdleSlot (that I renamed to get_idle_slot as\n> it's not static) is doing. ConsumeIdleSlot() actually get an idle\n> slot and mark it as being used. 
That's probably some leakage of\n> internal implementation, but having a GetIdleParallelSlot (or\n> ParallelSlotGetIdle) *and* a get_idle_slot sounds like a bad idea, and\n> I don't have a better idea on how to rename get_idle_slot. If you\n> really prefer Get* and you're fine with a static get_idle_slot, that's\n> fine by me.\n\nDo we actually need get_idle_slot? ConsumeIdleSlot is its only\ncaller.\n\n>> and we'd\n>> need to be careful of the same where pg_catalog is listed. Actually,\n>> your patch breaks if we do a parallel run with pg_catalog and another\n>> schema, no?\n> \n> It can definitely cause problems if you ask for pg_catalog and other\n> schema, same as if you ask a parallel reindex of some catalog tables\n> (possibly with other tables). There's a --system switch for that\n> need, so I don't know if documenting the limitation to avoid some\n> extra code to deal with it is a good enough solution?\n\nvacuumdb --full still has limitations in this area and we had some\nreports on the matter about this behavior being annoying. Its\ndocumentation also mentions that mixing catalog relations with --full\ncan cause deadlocks.\n\nDocumenting it may be fine at the end, but my take is that it would be\nnice to make sure that we don't have deadlocks if we can avoid them\neasily. It is also a matter of balance. If for example the patch\ngets 3 times bigger in size because of that we may have an argument\nfor not doing it and keep the code simple. What do people think about\nthat? I would be nice to get more opinions here.\n\nAnd while scanning the code...\n\n+ * getQuerySucess\nTypo here.\n\n- * Pump the conn till it's dry of results; return false if any are errors.\nThis comment could be improved on the way, like \"Go through all the\nconnections and make sure to consume any remaining results. 
If any\nerror is found, false is returned after processing all the parallel\nslots.\"\n--\nMichael", "msg_date": "Thu, 11 Jul 2019 22:34:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 11, 2019 at 3:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 11, 2019 at 11:48:20AM +0200, Julien Rouhaud wrote:\n> > On Thu, Jul 11, 2019 at 6:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Get* would be my choice, because we look at the set of parallel slots,\n> >> and get an idle one.\n> >\n> > That's what the former GetIdleSlot (that I renamed to get_idle_slot as\n> > it's not static) is doing. ConsumeIdleSlot() actually get an idle\n> > slot and mark it as being used. That's probably some leakage of\n> > internal implementation, but having a GetIdleParallelSlot (or\n> > ParallelSlotGetIdle) *and* a get_idle_slot sounds like a bad idea, and\n> > I don't have a better idea on how to rename get_idle_slot. If you\n> > really prefer Get* and you're fine with a static get_idle_slot, that's\n> > fine by me.\n>\n> Do we actually need get_idle_slot? ConsumeIdleSlot is its only\n> caller.\n\nI think that it makes the code quite cleaner to have the selection\noutside ConsumeIdleSlot.\n\n> And while scanning the code...\n>\n> + * getQuerySucess\n> Typo here.\n\nArgh, I thought I caught all of them, thanks!\n\n> - * Pump the conn till it's dry of results; return false if any are errors.\n> This comment could be improved on the way, like \"Go through all the\n> connections and make sure to consume any remaining results. If any\n> error is found, false is returned after processing all the parallel\n> slots.\"\n\nYou're talking about getQuerySuccess right? That was actually the\noriginal comment of a function I renamed. 
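The behavior that the suggested comment describes — drain every remaining result, but only report success when none of them failed — can be sketched in a self-contained way. The names below are illustrative stand-ins for the real libpq PGresult handling in the patch, not the patch's actual code:

```c
#include <stdbool.h>

/* Hypothetical per-result status, standing in for PGresult status checks. */
typedef enum
{
	RESULT_OK,
	RESULT_ERROR
} QueryStatus;

/*
 * Consume every remaining result; return false if any of them failed.
 * We keep draining even after the first error, mirroring the suggested
 * "process all the parallel slots" semantics.
 */
static bool
consume_all_results(const QueryStatus *results, int nresults)
{
	bool		success = true;

	for (int i = 0; i < nresults; i++)
	{
		if (results[i] == RESULT_ERROR)
			success = false;	/* remember the failure, keep consuming */
	}
	return success;
}
```

The point of the design is that the aggregate success flag is computed without short-circuiting, so no connection is left with unconsumed results.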
+1 to improve it, but this\nfunction is in common.c and doesn't deal with parallel slot at all, so\nI'll just drop the slang parts.\n\n\n", "msg_date": "Thu, 11 Jul 2019 18:22:25 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 11, 2019 at 06:22:25PM +0200, Julien Rouhaud wrote:\n> I think t hat it makes the code quite cleaner to have the selection\n> outside ConsumeIdleSlot.\n\nActually, you have an issue with ConsumeIdleSlot() if there is only\none parallel slot, no? In this case the current patch returns\nimmediately the slot available without waiting. I think that we\nshould wait until the slot becomes free in that case as well, and\nswitch isFree to false. If you want to keep things splitted, that's\nfine by me, I would still use \"Get\" within the name for the routine,\nand rename the other to get_idle_slot_internal() or\nget_idle_slot_guts() to point out that it has an internal role.\n\n> You're talking about getQuerySuccess right? That was actually the\n> original comment of a function I renamed. 
+1 to improve it, but this\n> function is in common.c and doesn't deal with parallel slot at all, so\n> I'll just drop the slang parts.\n\nIf we can design a clean interface with better comments, we can use\nthis occasion to browse the whole thing and make it better.\n--\nMichael", "msg_date": "Fri, 12 Jul 2019 10:20:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 12, 2019 at 3:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 11, 2019 at 06:22:25PM +0200, Julien Rouhaud wrote:\n> > I think that it makes the code quite cleaner to have the selection\n> > outside ConsumeIdleSlot.\n>\n> Actually, you have an issue with ConsumeIdleSlot() if there is only\n> one parallel slot, no? In this case the current patch returns\n> immediately the slot available without waiting. I think that we\n> should wait until the slot becomes free in that case as well, and\n> switch isFree to false. If you want to keep things split, that's\n> fine by me; I would still use \"Get\" within the name for the routine,\n> and rename the other to get_idle_slot_internal() or\n> get_idle_slot_guts() to point out that it has an internal role.\n\nIt shouldn't be a problem, I reused the same infrastructure as for\nvacuumdb: run_reindex_command has a new \"async\" parameter, so when\nthere's no parallelism it's using executeMaintenanceCommand (instead\nof PQsendQuery), which will block until query completion. 
That's why\nthere's no isFree usage at all in this case.\n\n> If you want to keep things splitted, that's\n> fine by me, I would still use \"Get\" within the name for the routine,\n> and rename the other to get_idle_slot_internal() or\n> get_idle_slot_guts() to point out that it has an internal role.\n\nOk, I'll change to get_idle_slot_internal then.\n\n\n", "msg_date": "Fri, 12 Jul 2019 07:49:13 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 12, 2019 at 07:49:13AM +0200, Julien Rouhaud wrote:\n> It shouldn't be a problem, I reused the same infrastructure as for\n> vacuumdb. so run_reindex_command has a new \"async\" parameter, so when\n> there's no parallelism it's using executeMaintenanceCommand (instead\n> of PQsendQuery) which will block until query completion. That's why\n> there's no isFree usage at all in this case.\n\nMy point is more about consistency and simplification with the case\nwhere n > 1 and that we could actually move the async/sync code paths\ninto the same banner as the async mode waits as well until a slot is\nfree, or in short when the query completes.\n--\nMichael", "msg_date": "Fri, 12 Jul 2019 14:57:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 12, 2019 at 7:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 12, 2019 at 07:49:13AM +0200, Julien Rouhaud wrote:\n> > It shouldn't be a problem, I reused the same infrastructure as for\n> > vacuumdb. so run_reindex_command has a new \"async\" parameter, so when\n> > there's no parallelism it's using executeMaintenanceCommand (instead\n> > of PQsendQuery) which will block until query completion. 
That's why\n> > there's no isFree usage at all in this case.\n>\n> My point is more about consistency and simplification with the case\n> where n > 1 and that we could actually move the async/sync code paths\n> into the same banner as the async mode waits as well until a slot is\n> free, or in short when the query completes.\n\nI attach v4 with all previous comments addressed.\n\nI also changed the code to handle the parallel and non-parallel cases\nthe same way. I kept the possibility for synchronous behavior in\nreindexdb, as there's an early need to run some queries in case of a\nparallel database-wide reindex. It avoids opening all the connections\nin case anything fails during this preliminary work, and it also\navoids another call to the async wait function. If we add parallelism\nto clusterdb (I'll probably work on that next time I have spare time),\nreindexdb would be the only caller left of\nexecuteMaintenanceCommand(), so that's something we may want to\nchange.\n\nI didn't change the behavior wrt. the possible deadlock if the user\nspecifies catalog objects using --index or --table and asks for\nmultiple connections, as I'm afraid that it'll add too much code for a\nlittle benefit. 
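The ordering constraint discussed earlier in the thread — system catalogs must be reindexed serially before the remaining relations are handed to parallel slots — amounts to a partitioning step over the task list. A stand-alone sketch of that idea (the real patch works on REINDEX targets and libpq connections; the names here are mine):

```c
#include <stdbool.h>

/* Hypothetical task descriptor; the real code deals in REINDEX targets. */
typedef struct
{
	const char *name;
	bool		is_catalog;
} ReindexTask;

/*
 * Reorder tasks so that all catalog relations come first: they have to be
 * processed serially on a single connection, while the rest can then be
 * dispatched to idle parallel slots.  Returns the number of catalog tasks,
 * i.e. the index where the parallel phase starts.
 */
static int
schedule_catalog_first(ReindexTask *tasks, int ntasks)
{
	int			next_catalog = 0;

	for (int i = 0; i < ntasks; i++)
	{
		if (tasks[i].is_catalog)
		{
			ReindexTask tmp = tasks[i];

			tasks[i] = tasks[next_catalog];
			tasks[next_catalog] = tmp;
			next_catalog++;
		}
	}
	return next_catalog;
}
```

A driver would then run `tasks[0 .. serial)` on one connection and feed `tasks[serial .. ntasks)` to the slot pool, which sidesteps the catalog deadlock risk without per-object special cases.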
Please shout if you think otherwise.\n\nSorry I meant schemas, not indexes.\n\nAfter more thinking about schema and multiple jobs, I think that\nerroring out is quite user unfriendly, as it's entirely ok to ask for\nmultiple indexes and multiple object that do support parallelism in a\nsingle call. So I think it's better to remove the error, ignore the\ngiven --jobs options for indexes and document this behavior.\n\n\n", "msg_date": "Tue, 16 Jul 2019 14:03:16 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Tue, Jul 16, 2019 at 02:03:16PM +0200, Julien Rouhaud wrote:\n> After more thinking about schema and multiple jobs, I think that\n> erroring out is quite user unfriendly, as it's entirely ok to ask\n> for\n> multiple indexes and multiple object that do support parallelism in\n> a\n> single call. So I think it's better to remove the error, ignore the\n> given --jobs options for indexes and document this behavior.\n\nNo objections to that. I still need to study a bit more 0002 though\nto come to a clear conclusion.\n\nActually, from patch 0002:\n+ free_slot = ParallelSlotsGetIdle(slots, concurrentCons, progname);\n+ if (!free_slot)\n+ {\n+ failed = true;\n+ goto finish;\n+ }\n+\n+ run_reindex_command(conn, process_type, objname, progname, echo,\n+ verbose, concurrently, true);\nThe same connection gets reused, shouldn't the connection come from\nthe free slot?\n\nOn top of that quick lookup, I have done an in-depth review on 0001 to\nbring it to a committable state, fixing a couple of typos, incorrect\ncomments (description of ParallelSlotsGetIdle was for example\nincorrect) on the way. 
Other things include that connectDatabase\nshould have an assertion for a non-NULL connection, calling pg_free()\non the slots terminate is more consistent as pg_malloc is used first.\nA comment at the top of processQueryResult still referred to\nvacuuming of a missing relation. Most of the patch was in a clean\nstate, with a clear interface for parallel slots, the place of the new\nroutines also makes sense, so I did not have much to do :)\n\nAnother thing I have noticed is that we don't really need to pass down\nprogname across all those layers as we finish by using pg_log_error()\nwhen processing results, so more simplifications can be done. Let's\nhandle all that in the same patch as we are messing with the area.\nconnectDatabase() and connectMaintenanceDatabase() still need it\nthough as this is used in the connection string, so\nParallelSlotsSetup() also needs it. This part is not really your\nfault but as I am looking at it, it does not hurt to clean up what we\ncan. Attached is an updated version of 0001 that I am comfortable\nwith. I'd like to commit that with the cleanups included and then\nlet's move to the real deal with 0002.\n--\nMichael", "msg_date": "Wed, 17 Jul 2019 16:59:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Wed, Jul 17, 2019 at 9:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 16, 2019 at 02:03:16PM +0200, Julien Rouhaud wrote:\n> > After more thinking about schema and multiple jobs, I think that\n> > erroring out is quite user unfriendly, as it's entirely ok to ask\n> > for\n> > multiple indexes and multiple object that do support parallelism in\n> > a\n> > single call. So I think it's better to remove the error, ignore the\n> > given --jobs options for indexes and document this behavior.\n>\n> No objections to that. 
I still need to study a bit more 0002 though\n> to come to a clear conclusion.\n>\n> Actually, from patch 0002:\n> + free_slot = ParallelSlotsGetIdle(slots, concurrentCons, progname);\n> + if (!free_slot)\n> + {\n> + failed = true;\n> + goto finish;\n> + }\n> +\n> + run_reindex_command(conn, process_type, objname, progname, echo,\n> + verbose, concurrently, true);\n> The same connection gets reused, shouldn't the connection come from\n> the free slot?\n\nOuch indeed.\n\n> On top of that quick lookup, I have done an in-depth review on 0001 to\n> bring it to a committable state, fixing a couple of typos, incorrect\n> comments (description of ParallelSlotsGetIdle was for example\n> incorrect) on the way. Other things include that connectDatabase\n> should have an assertion for a non-NULL connection,\n\ndisconnectDatabase you mean? Fine by me.\n\n> calling pg_free()\n> on the slots terminate is more consistent as pg_malloc is used first.\n> A comment at the top of processQueryResult still referred to\n> vacuuming of a missing relation. Most of the patch was in a clean\n> state, with a clear interface for parallel slots, the place of the new\n> routines also makes sense, so I did not have much to do :)\n\nThanks :)\n\n> Another thing I have noticed is that we don't really need to pass down\n> progname across all those layers as we finish by using pg_log_error()\n> when processing results, so more simplifications can be done. Let's\n> handle all that in the same patch as we are messing with the area.\n> connectDatabase() and connectMaintenanceDatabase() still need it\n> though as this is used in the connection string, so\n> ParallelSlotsSetup() also needs it. This part is not really your\n> fault but as I am looking at it, it does not hurt to clean up what we\n> can. Attached is an updated version of 0001 that I am comfortable\n> with. 
I'd like to commit that with the cleanups included and then\n> let's move to the real deal with 0002.\n\nGood catch, I totally missed this progname change. I read the patch\nyou attached, I have a few comments:\n\n+/*\n+ * Disconnect the given connection, canceling any statement if one is active.\n+ */\n+void\n+disconnectDatabase(PGconn *conn)\n\nNitpicking, but this comment doesn't follow the style of other\nfunctions' comments (it's also the case for existing comment on\nexecuteQuery at least).\n\n\nWhile reading the comments you added on ParallelSlotsSetup(), I\nwondered if we couldn't also add an Assert(conn) at the beginning?\n\n+void\n+ParallelSlotsTerminate(ParallelSlot *slots, int numslots)\n+{\n+ int i;\n+\n+ for (i = 0; i < numslots; i++)\n+ {\n+ PGconn *conn = slots[i].connection;\n+\n+ if (conn == NULL)\n+ continue;\n+\n+ disconnectDatabase(conn);\n+ }\n+\n+ pg_free(slots);\n+}\n\nIs it ok to call pg_free(slots) and let caller have a pointer pointing\nto freed memory?\n\n\n", "msg_date": "Wed, 17 Jul 2019 19:46:10 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Wed, Jul 17, 2019 at 07:46:10PM +0200, Julien Rouhaud wrote:\n> On Wed, Jul 17, 2019 at 9:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On top of that quick lookup, I have done an in-depth review on 0001 to\n>> bring it to a committable state, fixing a couple of typos, incorrect\n>> comments (description of ParallelSlotsGetIdle was for example\n>> incorrect) on the way. Other things include that connectDatabase\n>> should have an assertion for a non-NULL connection,\n> \n> disconnectDatabase you mean? Fine by me.\n\nOops, yes. I meant disconnectDatabase() here. 
The patch does so, not\nmy words.\n\n> +/*\n> + * Disconnect the given connection, canceling any statement if one is active.\n> + */\n> +void\n> +disconnectDatabase(PGconn *conn)\n> \n> Nitpicking, but this comment doesn't follow the style of other\n> functions' comments (it's also the case for existing comment on\n> executeQuery at least).\n\nconnectDatabase, connectMaintenanceDatabase, executeQuery and most of\nthe others follow that style, so I am just going to simplify\nconsumeQueryResult and processQueryResult to keep a consistent style.\n\n> While reading the comments you added on ParallelSlotsSetup(), I\n> wondered if we couldn't also add an Assert(conn) at the beginning?\n\nThat makes sense as this gets associated to the first slot. There\ncould be an argument for making a set of slots extensible to simplify\nthis interface, but that complicates the logic for the scripts.\n\n> Is it ok to call pg_free(slots) and let caller have a pointer pointing\n> to freed memory?\n\nThe interface has a Setup call which initializes the whole thing, and\nTerminate is the logical end point, so having the free logic within\nthe termination looks more consistent to me. We could now have actual\nInit() and Free() but I am not sure that this justifies the move as\nthis complicates the scripts using it.\n--\nMichael", "msg_date": "Thu, 18 Jul 2019 09:45:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 18, 2019 at 09:45:14AM +0900, Michael Paquier wrote:\n> On Wed, Jul 17, 2019 at 07:46:10PM +0200, Julien Rouhaud wrote:\n>> Is it ok to call pg_free(slots) and let caller have a pointer pointing\n> to freed memory?\n> \n> The interface has a Setup call which initializes the whole thing, and\n> Terminate is the logical end point, so having the free logic within\n> the termination looks more consistent to me. 
We could now have actual\n> Init() and Free() but I am not sure that this justifies the move as\n> this complicates the scripts using it.\n\nI have reconsidered this point, moved the pg_free() call out of the\ntermination logic, and committed the first patch after an extra lookup\nand more polishing.\n\nFor the second patch, could you send a rebase with a fix for the\nconnection slot when processing the reindex commands?\n--\nMichael", "msg_date": "Fri, 19 Jul 2019 09:35:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 19, 2019 at 2:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 18, 2019 at 09:45:14AM +0900, Michael Paquier wrote:\n> > On Wed, Jul 17, 2019 at 07:46:10PM +0200, Julien Rouhaud wrote:\n> >> Is it ok to call pg_free(slots) and let caller have a pointer pointing\n> > to freed memory?\n> >\n> > The interface has a Setup call which initializes the whole thing, and\n> > Terminate is the logical end point, so having the free logic within\n> > the termination looks more consistent to me. 
We could now have actual\n> > Init() and Free() but I am not sure that this justifies the move as\n> > this complicates the scripts using it.\n>\n> I have reconsidered this point, moved the pg_free() call out of the\n> termination logic, and committed the first patch after an extra lookup\n> and more polishing.\n\nThanks!\n\n> For the second patch, could you send a rebase with a fix for the\n> connection slot when processing the reindex commands?\n\nAttached, I also hopefully removed all the now unneeded progname usage.", "msg_date": "Fri, 19 Jul 2019 08:29:27 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 19, 2019 at 08:29:27AM +0200, Julien Rouhaud wrote:\n> On Fri, Jul 19, 2019 at 2:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> For the second patch, could you send a rebase with a fix for the\n>> connection slot when processing the reindex commands?\n> \n> Attached, I also hopefully removed all the now unneeded progname usage.\n\n+ Note that this mode is not compatible the <option>-i / --index</option>\n+ or the <option>-s / --system</option> options.\nNits: this is not a style consistent with the documentation. When\nreferring to both the long and short options the formulation \"-i or\n--index\" gets used. Here we could just use the long option. This\nsentence is missing a \"with\".\n\n simple_string_list_append(&tables, optarg);\n+ tbl_count++;\n break;\nThe number of items in a simple list is not counted, and vacuumdb does\nthe same thing to count objects. What do you think about extending\nsimple lists to track the number of items stored?\n\n+$node->issues_sql_like([qw(reindexdb -j2)],\n+ qr/statement: REINDEX TABLE public.test1/,\n+ 'Global and parallel reindex will issue per-table REINDEX');\nWould it make sense to have some tests for schemas here?\n\nOne of my comments in [1] has not been answered. 
What about\nthe decomposition of a list of schemas into a list of tables when\nusing the parallel mode?\n\n[1]: https://www.postgresql.org/message-id/20190711040433.GG4500@paquier.xyz\n--\nMichael", "msg_date": "Mon, 22 Jul 2019 13:11:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 22, 2019 at 6:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 19, 2019 at 08:29:27AM +0200, Julien Rouhaud wrote:\n> > On Fri, Jul 19, 2019 at 2:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> For the second patch, could you send a rebase with a fix for the\n> >> connection slot when processing the reindex commands?\n> >\n> > Attached, I also hopefully removed all the now unneeded progname usage.\n>\n> + Note that this mode is not compatible the <option>-i / --index</option>\n> + or the <option>-s / --system</option> options.\n> Nits: this is not a style consistent with the documentation. When\n> referring to both the long and short options the formulation \"-i or\n> --index\" gets used. Here we could just use the long option. This\n> sentence is missing a \"with\".\n\nRight, so I kept the long option. Also this comment was outdated, as\nthe --jobs is now just ignored with a list of indexes, so I fixed that\ntoo.\n\n>\n> simple_string_list_append(&tables, optarg);\n> + tbl_count++;\n> break;\n> The number of items in a simple list is not counted, and vacuumdb does\n> the same thing to count objects. What do you think about extending\n> simple lists to track the number of items stored?\n\nI considered this, but it would require to adapt all code that declare\nSimpleStringList stack variable (vacuumdb.c, clusterdb.c,\ncreateuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\ntrouble to avoid 2 local variables here and 1 in vacuumdb.c. 
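For reference, a minimal standalone sketch of the counted-list idea being discussed -- the names mirror fe_utils' SimpleStringList, but this self-contained version is illustrative only, not the actual PostgreSQL definition:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative stand-in for fe_utils' SimpleStringList, extended with
 * the n_items counter suggested in this thread.  Hypothetical sketch,
 * not the real PostgreSQL struct.
 */
typedef struct SimpleStringListCell
{
	struct SimpleStringListCell *next;
	char		val[1];			/* actually variable-length */
} SimpleStringListCell;

typedef struct SimpleStringList
{
	SimpleStringListCell *head;
	SimpleStringListCell *tail;
	int			n_items;		/* the proposed addition */
} SimpleStringList;

static void
simple_string_list_append(SimpleStringList *list, const char *val)
{
	SimpleStringListCell *cell;

	cell = malloc(offsetof(SimpleStringListCell, val) + strlen(val) + 1);
	cell->next = NULL;
	strcpy(cell->val, val);

	if (list->tail)
		list->tail->next = cell;
	else
		list->head = cell;
	list->tail = cell;
	list->n_items++;			/* callers no longer need a tbl_count */
}
```

With something along these lines the tbl_count++ in the option-parsing loop would go away, at the price of touching every file that initializes a SimpleStringList on the stack.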
I don't\nhave a strong opinion here, so I can go for it if you prefer.\n\n>\n> +$node->issues_sql_like([qw(reindexdb -j2)],\n> + qr/statement: REINDEX TABLE public.test1/,\n> + 'Global and parallel reindex will issue per-table REINDEX');\n> Would it make sense to have some tests for schemas here?\n>\n> One of my comments in [1] has not been answered. What about\n> the decomposition of a list of schemas into a list of tables when\n> using the parallel mode?\n\nI did that in attached v6, and added suitable regression tests.", "msg_date": "Mon, 22 Jul 2019 14:40:19 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-22, Julien Rouhaud wrote:\n\n> On Mon, Jul 22, 2019 at 6:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > simple_string_list_append(&tables, optarg);\n> > + tbl_count++;\n> > break;\n> > The number of items in a simple list is not counted, and vacuumdb does\n> > the same thing to count objects. What do you think about extending\n> > simple lists to track the number of items stored?\n> \n> I considered this, but it would require to adapt all code that declare\n> SimpleStringList stack variable (vacuumdb.c, clusterdb.c,\n> createuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\n> trouble to avoid 2 local variables here and 1 in vacuumdb.c. 
I don't\n> have a strong opinion here, so I can go for it if you prefer.\n\nCan we use List for this instead?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 11:11:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-19, Julien Rouhaud wrote:\n\n> > For the second patch, could you send a rebase with a fix for the\n> > connection slot when processing the reindex commands?\n> \n> Attached, I also hopefully removed all the now unneeded progname usage.\n\nBTW \"progname\" is a global variable in logging.c, and it's initialized\nby pg_logging_init(), so there's no point in having a local variable in\nmain() that's called the same and initialized the same way. You could\njust remove it from the signature of all those functions\n(connectDatabase and callers), and there would be no visible change.\n\nAlso: [see attached]\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 22 Jul 2019 11:18:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 22, 2019 at 5:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-22, Julien Rouhaud wrote:\n>\n> > On Mon, Jul 22, 2019 at 6:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > > simple_string_list_append(&tables, optarg);\n> > > + tbl_count++;\n> > > break;\n> > > The number of items in a simple list is not counted, and vacuumdb does\n> > > the same thing to count objects. 
What do you think about extending\n> > > simple lists to track the number of items stored?\n> >\n> > I considered this, but it would require to adapt all code that declare\n> > SimpleStringList stack variable (vacuumdb.c, clusterdb.c,\n> > createuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\n> > trouble to avoid 2 local variables here and 1 in vacuumdb.c. I don't\n> > have a strong opinion here, so I can go for it if you prefer.\n>\n> Can we use List for this instead?\n\nIsn't that for backend code only?\n\n\n", "msg_date": "Mon, 22 Jul 2019 17:23:30 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-22, Julien Rouhaud wrote:\n\n> On Mon, Jul 22, 2019 at 5:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > > I considered this, but it would require to adapt all code that declare\n> > > SimpleStringList stack variable (vacuumdb.c, clusterdb.c,\n> > > createuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\n> > > trouble to avoid 2 local variables here and 1 in vacuumdb.c. I don't\n> > > have a strong opinion here, so I can go for it if you prefer.\n> >\n> > Can we use List for this instead?\n> \n> Isn't that for backend code only?\n\nWell, we already have palloc() on the frontend side, and list.c doesn't\nhave any elog()/ereport(), so it should be possible to use it ... I do\nsee that it uses MemoryContextAlloc() in a few places. 
Maybe we can\njust #define that to palloc()?\n\n(Maybe we can use the impulse to get rid of these \"simple lists\"\naltogether?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 11:33:01 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-22, Julien Rouhaud wrote:\n>> On Mon, Jul 22, 2019 at 5:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> Can we use List for this instead?\n\n>> Isn't that for backend code only?\n\n> Well, we already have palloc() on the frontend side, and list.c doesn't\n> have any elog()/ereport(), so it should be possible to use it ... I do\n> see that it uses MemoryContextAlloc() in a few places. Maybe we can\n> just #define that to palloc()?\n\nI'm not happy about either the idea of pulling all of list.c into\nfrontend programs, or restricting it to be frontend-safe. That's\nvery fundamental infrastructure and I don't want it laboring under\nsuch a restriction. 
Furthermore, List usage generally leaks memory\nlike mad (cf nearby list_concat discussion) which doesn't seem like\nsomething we want for frontend code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2019 11:57:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On 2019-Jul-22, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jul-22, Julien Rouhaud wrote:\n> >> On Mon, Jul 22, 2019 at 5:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >>> Can we use List for this instead?\n> \n> >> Isn't that for backend code only?\n> \n> > Well, we already have palloc() on the frontend side, and list.c doesn't\n> > have any elog()/ereport(), so it should be possible to use it ... I do\n> > see that it uses MemoryContextAlloc() in a few places. Maybe we can\n> > just #define that to palloc()?\n> \n> I'm not happy about either the idea of pulling all of list.c into\n> frontend programs, or restricting it to be frontend-safe. That's\n> very fundamental infrastructure and I don't want it laboring under\n> such a restriction. Furthermore, List usage generally leaks memory\n> like mad (cf nearby list_concat discussion) which doesn't seem like\n> something we want for frontend code.\n\nFair enough. 
List has gotten quite sophisticated now, so I understand\nthe concern.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jul 2019 13:05:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 22, 2019 at 11:18:06AM -0400, Alvaro Herrera wrote:\n> BTW \"progname\" is a global variable in logging.c, and it's initialized\n> by pg_logging_init(), so there's no point in having a local variable in\n> main() that's called the same and initialized the same way. You could\n> just remove it from the signature of all those functions\n> (connectDatabase and callers), and there would be no visible change.\n\nSure, and I was really tempted to do that until I noticed that we pass\ndown progname for fallback_application_name in the connection string\nand that we would basically need to externalize progname in logging.h,\nas well as switch all the callers of pg_logging_init to now include\ntheir own definition of progname, which was much more invasive than\nthe initial refactoring intended. I am also under the impression that\nwe had better keep get_progname() and pg_logging_init() as rather\nindependent things.\n\n> Also: [see attached]\n\nMissed those in the initial cleanup. Applied, thanks!\n--\nMichael", "msg_date": "Tue, 23 Jul 2019 14:32:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 22, 2019 at 02:40:19PM +0200, Julien Rouhaud wrote:\n> Right, so I kept the long option. 
Also this comment was outdated, as\n> the --jobs is now just ignored with a list of indexes, so I fixed that\n> too.\n\n+ if (!parallel)\n+ {\n+ if (user_list == NULL)\n+ {\n+ /*\n+ * Create a dummy list with an empty string, as user requires an\n+ * element.\n+ */\n+ process_list = pg_malloc0(sizeof(SimpleStringList));\n+ simple_string_list_append(process_list, \"\");\n+ }\n+ }\nThis part looks a bit hacky and this is needed because we don't have a\nlist of objects when doing a non-parallel system or database reindex.\nThe deal is that we just want a list with one element: the database of\nthe connection. Wouldn't it be more natural to assign the database\nname here using PQdb(conn)? Then add an assertion at the top of\nrun_reindex_command() checking for a non-NULL name?\n\n> I considered this, but it would require to adapt all code that declare\n> SimpleStringList stack variable (vacuumdb.c, clusterdb.c,\n> createuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\n> trouble to avoid 2 local variables here and 1 in vacuumdb.c. I don't\n> have a strong opinion here, so I can go for it if you prefer.\n\nOkay. This is a tad wider than the original patch proposal, and this\nadds two lines. So let's discard that for now and keep it simple.\n\n>> +$node->issues_sql_like([qw(reindexdb -j2)],\n>> + qr/statement: REINDEX TABLE public.test1/,\n>> + 'Global and parallel reindex will issue per-table REINDEX');\n>> Would it make sense to have some tests for schemas here?\n>>\n>> One of my comments in [1] has not been answered. What about\n>> the decomposition of a list of schemas into a list of tables when\n>> using the parallel mode?\n> \n> I did that in attached v6, and added suitable regression tests.\n\nThe two tests for \"reindexdb -j2\" can be combined into a single call,\nchecking for both commands to be generated in the same output. 
The\nsecond command triggering a reindex on two schemas cannot be used to\ncheck for the generation of both s1.t1 and s2.t2 as the ordering may\nnot be guaranteed. The commands arrays also looked inconsistent with\nthe rest. Attached is an updated patch with some format modifications\nand the fixes for the tests.\n--\nMichael", "msg_date": "Tue, 23 Jul 2019 16:38:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Sorry for the late answer,\n\nOn Tue, Jul 23, 2019 at 9:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 22, 2019 at 02:40:19PM +0200, Julien Rouhaud wrote:\n> > Right, so I kept the long option. Also this comment was outdated, as\n> > the --jobs is now just ignored with a list of indexes, so I fixed that\n> > too.\n>\n> + if (!parallel)\n> + {\n> + if (user_list == NULL)\n> + {\n> + /*\n> + * Create a dummy list with an empty string, as user requires an\n> + * element.\n> + */\n> + process_list = pg_malloc0(sizeof(SimpleStringList));\n> + simple_string_list_append(process_list, \"\");\n> + }\n> + }\n> This part looks a bit hacky and this is needed because we don't have a\n> list of objects when doing a non-parallel system or database reindex.\n> The deal is that we just want a list with one element: the database of\n> the connection. Wouldn't it be more natural to assign the database\n> name here using PQdb(conn)? Then add an assertion at the top of\n> run_reindex_command() checking for a non-NULL name?\n\nGood point, fixed it that way.\n>\n> > I considered this, but it would require to adapt all code that declare\n> > SimpleStringList stack variable (vacuumdb.c, clusterdb.c,\n> > createuser.c, pg_dumpall.c and pg_dump.c), so it looked like too much\n> > trouble to avoid 2 local variables here and 1 in vacuumdb.c. 
I don't\n> > have a strong opinion here, so I can go for it if you prefer.\n>\n> Okay. This is a tad wider than the original patch proposal, and this\n> adds two lines. So let's discard that for now and keep it simple.\n\nOk!\n\n> >> +$node->issues_sql_like([qw(reindexdb -j2)],\n> >> + qr/statement: REINDEX TABLE public.test1/,\n> >> + 'Global and parallel reindex will issue per-table REINDEX');\n> >> Would it make sense to have some tests for schemas here?\n> >>\n> >> One of my comments in [1] has not been answered. What about\n> >> the decomposition of a list of schemas into a list of tables when\n> >> using the parallel mode?\n> >\n> > I did that in attached v6, and added suitable regression tests.\n>\n> The two tests for \"reindexdb -j2\" can be combined into a single call,\n> checking for both commands to be generated in the same output. The\n> second command triggering a reindex on two schemas cannot be used to\n> check for the generation of both s1.t1 and s2.t2 as the ordering may\n> not be guaranteed. The commands arrays also looked inconsistent with\n> the rest. 
Attached is an updated patch with some format modifications\n> and the fixes for the tests.\n\nAttached v8 based on your v7 + the fix for the dummy part.", "msg_date": "Thu, 25 Jul 2019 08:50:40 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHi\r\n\r\nI did some review and have few notes about behavior.\r\n\r\nreindex database does not work with concurrently option:\r\n\r\n> ./inst/bin/reindexdb --echo -d postgres -j 8 --concurrently\r\n> SELECT pg_catalog.set_config('search_path', '', false);\r\n> REINDEX SYSTEM CONCURRENTLY postgres;\r\n> reindexdb: error: reindexing of system catalogs on database \"postgres\" failed: ERROR: cannot reindex system catalogs concurrently\r\n\r\nI think we need print message and skip system catalogs for concurrently reindex.\r\nOr we can disallow concurrently database reindex with multiple jobs. I prefer first option.\r\n\r\n> +\t\t\treindex_one_database(dbname, REINDEX_SCHEMA, &schemas, host,\r\n> +\t\t\t\t\t\t\t\t port, username, prompt_password, progname,\r\n> +\t\t\t\t\t\t\t\t echo, verbose, concurrently,\r\n> +\t\t\t\t\t\t\t\t Min(concurrentCons, nsp_count));\r\n\r\nShould be just concurrentCons instead of Min(concurrentCons, nsp_count)\r\nreindex_one_database for REINDEX_SCHEMA will build tables list and then processing by available workers. So:\r\n-j 8 -S public -S public -S public -S poblic -S public -S public - will work with 6 jobs (and without multiple processing for same table)\r\n-j 8 -S public - will have only one worker regardless tables count\r\n\r\n> if (concurrentCons > FD_SETSIZE - 1)\r\n\r\n\"if (concurrentCons >= FD_SETSIZE)\" would not cleaner? 
Well, pgbench uses >= condition, vacuumdb uses > FD_SETSIZE - 1. No more FD_SETSIZE in conditions =)\r\n\r\nregards, Sergei", "msg_date": "Thu, 25 Jul 2019 08:16:56 +0000", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Thanks for the review!\n\nOn Thu, Jul 25, 2019 at 10:17 AM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, failed\n> Spec compliant: not tested\n> Documentation: tested, passed\n>\n> Hi\n>\n> I did some review and have few notes about behavior.\n>\n> reindex database does not work with concurrently option:\n>\n> > ./inst/bin/reindexdb --echo -d postgres -j 8 --concurrently\n> > SELECT pg_catalog.set_config('search_path', '', false);\n> > REINDEX SYSTEM CONCURRENTLY postgres;\n> > reindexdb: error: reindexing of system catalogs on database \"postgres\" failed: ERROR: cannot reindex system catalogs concurrently\n>\n> I think we need print message and skip system catalogs for concurrently reindex.\n> Or we can disallow concurrently database reindex with multiple jobs. I prefer first option.\n\nGood point. 
I agree with 1st option, as that's already what would\nhappen without the --jobs switch:\n\n$ reindexdb -d postgres --concurrently\nWARNING: cannot reindex system catalogs concurrently, skipping all\n\n(although this is emitted by the backend)\nI modified the client code to behave the same and added a regression test.\n\n> > + reindex_one_database(dbname, REINDEX_SCHEMA, &schemas, host,\n> > + port, username, prompt_password, progname,\n> > + echo, verbose, concurrently,\n> > + Min(concurrentCons, nsp_count));\n>\n> Should be just concurrentCons instead of Min(concurrentCons, nsp_count)\n\nIndeed, that changed with v8 and I forgot to update it, fixed.\n\n> reindex_one_database for REINDEX_SCHEMA will build tables list and then processing by available workers. So:\n> -j 8 -S public -S public -S public -S poblic -S public -S public - will work with 6 jobs (and without multiple processing for same table)\n> -j 8 -S public - will have only one worker regardless tables count\n>\n> > if (concurrentCons > FD_SETSIZE - 1)\n>\n> \"if (concurrentCons >= FD_SETSIZE)\" would not cleaner? Well, pgbench uses >= condition, vacuumdb uses > FD_SETSIZE - 1. No more FD_SETSIZE in conditions =)\n\nI don't have a strong opinion here. If we change for >=, it'd be\nbetter to also adapt vacuumdb for consistency. I didn't change it for\nnow, to stay consistent with vacuumdb.", "msg_date": "Thu, 25 Jul 2019 10:42:11 +0200", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Hi\n\nThank you, v9 code and behavior looks good for me. Builds cleanly, works with older releases. I'll mark Ready to Commiter.\n\n> I don't have a strong opinion here. If we change for >=, it'd be\n> better to also adapt vacuumdb for consistency. 
I didn't change it for\n> now, to stay consistent with vacuumdb.\n\nYep, no strong opinion from me too.\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 25 Jul 2019 12:12:40 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 25, 2019 at 12:12:40PM +0300, Sergei Kornilov wrote:\n> Thank you, v9 code and behavior looks good for me. Builds cleanly,\n> works with older releases. I'll mark Ready to Commiter.\n\nThe restriction with --jobs and --system is not documented, and that's\ngood to have from the start. This actually makes the parallel job\nhandling with --index inconsistent as we mask parallelism in this case\nby enforcing the use of one connection. I think that we should\nrevisit the interactions with --index and --jobs actually, because,\nassuming that users never read the documentation, they may actually be\nsurprised to see that something like --index idx1 .. --index idxN\n--jobs=N does not lead to any improvements at all, until they find out\nthe reason why. It is also much easier to have an error as starting\npoint because it can be lifted later one. There is an argument that\nwe may actually not have this restriction at all on --index as long as\nthe user knows what it is doing and does not define indexes from the\nsame relation, still I would keep an error.\n\nThinking deeper about it, there is also a point of gathering first all\nthe relations if one associates --schemas and --tables in the same\ncall of reindexdb and then pass down a list of decomposed relations\nwhich are processed in parallel. 
The code as currently presented is\nrather straight-forward, and I don't think that this is worth the\nextra complication, but this was not mentioned until now on this\nthread :)\n\nFor the non-parallel case in reindex_one_database(), I would add an\nAssert on REINDEX_DATABASE and REINDEX_SYSTEM with a comment to\nmention that a NULL list of objects can just be passed down only in\nthose two cases when the single-object list with the database name is\nbuilt.\n\n>> I don't have a strong opinion here. If we change for >=, it'd be\n>> better to also adapt vacuumdb for consistency. I didn't change it for\n>> now, to stay consistent with vacuumdb.\n> \n> Yep, no strong opinion from me too.\n\nMy opinion tends towards consistency. Consistency sounds good.\n--\nMichael", "msg_date": "Thu, 25 Jul 2019 19:02:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Hi\n\n>>>  I don't have a strong opinion here. If we change for >=, it'd be\n>>>  better to also adapt vacuumdb for consistency. I didn't change it for\n>>>  now, to stay consistent with vacuumdb.\n>>\n>>  Yep, no strong opinion from me too.\n>\n> My opinion tends towards consistency. Consistency sounds good.\n\nWhich one consistency you prefer? Currently we have just one >= FD_SETSIZE in pgbench and one > FD_SETSIZE -1 in vacuumdb. That's all.\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 25 Jul 2019 13:18:13 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 25, 2019 at 12:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 25, 2019 at 12:12:40PM +0300, Sergei Kornilov wrote:\n> > Thank you, v9 code and behavior looks good for me. Builds cleanly,\n> > works with older releases. 
I'll mark Ready to Commiter.\n>\n> The restriction with --jobs and --system is not documented, and that's\n> good to have from the start.\n\nThat's documented in the patch:\n+ Note that this option is ignored with the <option>--index</option>\n+ option to prevent deadlocks when processing multiple indexes from\n+ the same relation, and incompatible with the <option>--system</option>\n+ option.\n\nThe restriction with --jobs and --concurrently is indeed not\nspecifically documented in reindexdb.sgml, there's only a mention in\nreindex.sgml:\n\n <command>REINDEX SYSTEM</command> does not support\n <command>CONCURRENTLY</command> since system catalogs cannot be reindexed\n\nThe behavior doesn't really change with this patch, though we could\nenhance the documentation.\n\n> This actually makes the parallel job\n> handling with --index inconsistent as we mask parallelism in this case\n> by enforcing the use of one connection. I think that we should\n> revisit the interactions with --index and --jobs actually, because,\n> assuming that users never read the documentation, they may actually be\n> surprised to see that something like --index idx1 .. --index idxN\n> --jobs=N does not lead to any improvements at all, until they find out\n> the reason why.\n\nThe problem is that a user doing something like:\n\nreindexdb -j48 -i some_index -S s1 -S s2 -S s3....\n\nwill probably be disappointed to learn that he has to run a specific\ncommand for the index(es) that should be reindexed. Maybe we can\nissue a warning that parallelism isn't used when an index list is\nprocessed and user asked for multiple jobs?\n\n> Thinking deeper about it, there is also a point of gathering first all\n> the relations if one associates --schemas and --tables in the same\n> call of reindexdb and then pass down a list of decomposed relations\n> which are processed in parallel. 
The code as currently presented is\n> rather straight-forward, and I don't think that this is worth the\n> extra complication, but this was not mentioned until now on this\n> thread :)\n\n+1\n\n\n> For the non-parallel case in reindex_one_database(), I would add an\n> Assert on REINDEX_DATABASE and REINDEX_SYSTEM with a comment to\n> mention that a NULL list of objects can just be passed down only in\n> those two cases when the single-object list with the database name is\n> built.\n\nSomething like that?\n\n if (!parallel)\n {\n- if (user_list == NULL)\n+ /*\n+ * Database-wide and system catalogs processing should omit the list\n+ * of objects to process.\n+ */\n+ if (process_type == REINDEX_DATABASE || process_type == REINDEX_SYSTEM)\n {\n+ Assert(user_list == NULL);\n+\n process_list = pg_malloc0(sizeof(SimpleStringList));\n simple_string_list_append(process_list, PQdb(conn));\n }\n\nThere's another assert after the else-parallel that checks that a\nlist is present, so there's no need to also check it here.\n\nI don't send a new patch since the --index wanted behavior is not clear yet.\n\n\n", "msg_date": "Thu, 25 Jul 2019 13:00:34 +0200", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Thu, Jul 25, 2019 at 01:00:34PM +0200, Julien Rouhaud wrote:\n> The problem is that a user doing something like:\n> \n> reindexdb -j48 -i some_index -S s1 -S s2 -S s3....\n> \n> will probably be disappointed to learn that he has to run a specific\n> command for the index(es) that should be reindexed. 
Maybe we can\n> issue a warning that parallelism isn't used when an index list is\n> processed and user asked for multiple jobs?\n\nArguments go in both directions as some other users may be surprised\nby the performance of indexes as serialization is enforced.\n\n> I don't send a new patch since the --index wanted behavior is not\n> clear yet.\n\nSo I am sending one patch (actually two) after a closer review that I\nhave spent time shaping into a committable state. And for this part I\nhave another suggestion that is to use a switch/case without a default\nso as any newly-added object types would allow somebody to think about\nthose code paths as this would generate compiler warnings.\n\nWhile reviewing I have found an extra bug in the logic: when using a\nlist of tables, the number of parallel slots is the minimum between\nconcurrentCons and tbl_count, but this does not get applied after\nbuilding a list of tables for a schema or database reindex, so we had\nbetter recompute the number of items in reindex_one_database() before\nallocating the number of parallel slots. There was also a small gem\nin the TAP tests for one of the commands using \"-j2\" in one of the\ncommand arguments.\n\nSo here we go:\n- 0001 is your original thing, with --jobs enforced to 1 for the index\npart.\n- 0002 is my addition to forbid --index with --jobs.\n\nI am fine to be outvoted regarding 0002, and it is the case based on\nthe state of this thread with 2:1. 
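As an aside on the slot-count rule found during review (never allocate more parallel slots than there are objects to process, recomputed after schemas are decomposed into tables), the logic can be sketched as a standalone helper -- hypothetical name and shape, the real code computes this inline with Min():

```c
/*
 * Illustrative only: clamp the number of parallel connections to the
 * number of objects to process, recomputed after any list expansion
 * (e.g. schemas decomposed into tables).  Mirrors the Min() logic
 * discussed in this thread; not the actual reindexdb code.
 */
static int
clamp_slot_count(int concurrentCons, int item_count)
{
	if (concurrentCons < 1)
		concurrentCons = 1;		/* always keep one connection */
	if (item_count > 0 && concurrentCons > item_count)
		concurrentCons = item_count;
	return concurrentCons;
}
```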
We could always revisit this\ndecision in this development cycle anyway.\n--\nMichael", "msg_date": "Fri, 26 Jul 2019 12:27:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 26, 2019 at 5:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 25, 2019 at 01:00:34PM +0200, Julien Rouhaud wrote:\n> > The problem is that a user doing something like:\n> >\n> > reindexdb -j48 -i some_index -S s1 -S s2 -S s3....\n> >\n> > will probably be disappointed to learn that he has to run a specific\n> > command for the index(es) that should be reindexed. Maybe we can\n> > issue a warning that parallelism isn't used when an index list is\n> > processed and user asked for multiple jobs?\n>\n> Arguments go in both directions as some other users may be surprised\n> by the performance of indexes as serialization is enforced.\n\nSure, but there is no easy solution in that case, as you'd have to do\nall the work of spawning multiple reindexdb according to the\nunderlying table, so probably what will happen here is that there'll\njust be two simple calls to reindexdb, one for the indexes, serialized\nanyway, and one for everything else. My vote is still to allow it,\npossibly emitting a notice or a warning.\n\n> > I don't send a new patch since the --index wanted behavior is not\n> > clear yet.\n>\n> So I am sending one patch (actually two) after a closer review that I\n> have spent time shaping into a committable state. And for this part I\n> have another suggestion that is to use a switch/case without a default\n> so as any newly-added object types would allow somebody to think about\n> those code paths as this would generate compiler warnings.\n\nThanks for that! 
I'm fine with using switch to avoid future bad surprises.\n\n> While reviewing I have found an extra bug in the logic: when using a\n> list of tables, the number of parallel slots is the minimum between\n> concurrentCons and tbl_count, but this does not get applied after\n> building a list of tables for a schema or database reindex, so we had\n> better recompute the number of items in reindex_one_database() before\n> allocating the number of parallel slots.\n\nI see that you iterate over the SimpleStringList after it's generated.\nWhy not computing that while building it in get_parallel_object_list\n(and keep the provided table list count) instead?\n\n\n", "msg_date": "Fri, 26 Jul 2019 09:36:32 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 26, 2019 at 09:36:32AM +0200, Julien Rouhaud wrote:\n> I see that you iterate over the SimpleStringList after it's generated.\n> Why not computing that while building it in get_parallel_object_list\n> (and keep the provided table list count) instead?\n\nYeah. I was hesitating to do that, or just break out of the counting\nloop if there are more objects than concurrent jobs, but that's less\nintuitive. \n--\nMichael", "msg_date": "Fri, 26 Jul 2019 16:41:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Hi\n\n> So here we go:\n> - 0001 is your original thing, with --jobs enforced to 1 for the index\n> part.\n> - 0002 is my addition to forbid --index with --jobs.\n>\n> I am fine to be outvoted regarding 0002, and it is the case based on\n> the state of this thread with 2:1. 
We could always revisit this\n> decision in this development cycle anyway.\n\nExplicit is better than implicit, so I am +1 to commit both patches.\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 26 Jul 2019 10:53:03 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 26, 2019 at 10:53:03AM +0300, Sergei Kornilov wrote:\n> Explicit is better than implicit, so I am +1 to commit both patches.\n\nHence my count is incorrect:\n- Forbid --jobs and --index: Michael P, Sergei K.\n- Enforce --jobs=1 with --index: Julien R.\n- Have no restrictions: 0.\n--\nMichael", "msg_date": "Fri, 26 Jul 2019 17:03:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Fri, Jul 26, 2019 at 9:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 26, 2019 at 09:36:32AM +0200, Julien Rouhaud wrote:\n> > I see that you iterate over the SimpleStringList after it's generated.\n> > Why not computing that while building it in get_parallel_object_list\n> > (and keep the provided table list count) instead?\n>\n> Yeah. I was hesitating to do that, or just break out of the counting\n> loop if there are more objects than concurrent jobs, but that's less\n> intuitive.\n\nThat's probably still more intuitive than having the count coming from\neither main() or from get_parallel_object_list() depending on the\nprocess type, so I'm fine with that alternative. 
Maybe we could bite\nthe bullet and add a count member to Simple*List, also providing a\nmacro to initialize a new list so that next time a field is added\nthere won't be a massive boilerplate code change?\n\n\n", "msg_date": "Sat, 27 Jul 2019 11:44:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Sat, Jul 27, 2019 at 11:44:47AM +0200, Julien Rouhaud wrote:\n> That's probably still more intuitive than having the count coming from\n> either main() or from get_parallel_object_list() depending on the\n> process type, so I'm fine with that alternative. Maybe we could bite\n> the bullet and add a count member to Simple*List, also providing a\n> macro to initialize a new list so that next time a field is added\n> there won't be a massive boilerplate code change?\n\nPerhaps, we could discuss about that on a separate thread. For now I\nhave gone with the simplest approach of counting the items, and\nstopping the count if there are more items than jobs. While reviewing\nI have found a double-free in your patch when building a list of\nrelations for schemas or databases. If the list finishes empty,\nPQfinish() was called twice on the connection, leading to a crash. I\nhave added a test for that, done an extra pass on the patch adjusting\na couple of things then committed the patch with the restriction on\n--index and --jobs.
This entry is now marked as committed in the CF\napp.\n--\nMichael", "msg_date": "Sat, 27 Jul 2019 22:27:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Sat, Jul 27, 2019 at 3:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jul 27, 2019 at 11:44:47AM +0200, Julien Rouhaud wrote:\n> > That's probably still more intuitive than having the count coming from\n> > either main() or from get_parallel_object_list() depending on the\n> > process type, so I'm fine with that alternative. Maybe we could bite\n> > the bullet and add a count meber to Simple*List, also providing a\n> > macro to initialize a new list so that next time a field is added\n> > there won't be a massive boilerplate code change?\n>\n> Perhaps, we could discuss about that on a separate thread.\n\nAgreed.\n\n> For now I\n> have gone with the simplest approach of counting the items, and\n> stopping the count if there are more items than jobs. While reviewing\n> I have found a double-free in your patch when building a list of\n> relations for schemas or databases. If the list finishes empty,\n> PQfinish() was called twice on the connection, leading to a crash. I\n> have added a test for that\n\nOops, thanks for spotting and fixing.\n\n> , done an extra pass on the patch adjusting\n> a couple of things then committed the patch with the restriction on\n> --index and --jobs. This entry is now marked as committed in the CF\n> app.\n\nThanks!\n\n\n", "msg_date": "Sat, 27 Jul 2019 20:23:42 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "On Mon, Jul 22, 2019 at 01:05:32PM -0400, Alvaro Herrera wrote:\n> Fair enough. List has gotten quite sophisticated now, so I understand\n> the concern.\n\nJust wondering something... 
List cells include one pointer, one\nsigned integer and an OID. The two last entries are basically 4-byte\neach, hence could we reduce a bit the bloat by unifying both of them?\nI understand that the distinction exists because both may not be of\nthe same size..\n\n/me runs and hides\n--\nMichael", "msg_date": "Sun, 28 Jul 2019 16:28:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Just wondering something... List cells include one pointer, one\n> signed integer and an OID. The two last entries are basically 4-byte\n> each, hence could we reduce a bit the bloat by unifying both of them?\n\nWe couldn't really simplify the API that way; for example,\nlfirst_int and lfirst_oid *must* be different because they\nmust return different types. I think it'd be a bad idea\nto have some parts of the API that distinguish the two types\nwhile others pretend they're the same, so there's not much\nroom for shortening that.\n\nYou could imagine unifying the implementations of many of the\n_int and _oid functions, but I can't get excited about that.\nIt would add confusion for not a lot of code savings.\n\n> I understand that the distinction exists because both may not be of\n> the same size..\n\nWell, even more to the point, one's signed and one isn't.\n\nIn the long run, might we ever switch to 64-bit OIDs? 
I dunno.\nNow that we kicked them out of user tables, it might be feasible,\nbut by the same token there's not much pressure to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2019 10:07:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Hi,\n\nOn 2019-07-28 10:07:27 -0400, Tom Lane wrote:\n> In the long run, might we ever switch to 64-bit OIDs? I dunno.\n> Now that we kicked them out of user tables, it might be feasible,\n> but by the same token there's not much pressure to do it.\n\nDepends on the the table, I'd say. Having toast tables have 64bit ids,\nand not advance the oid counter, would be quite the advantage over the\ncurrent situation. Toasting performance craters once the oid counter has\nwrapped. But obviously there are upgrade problems there - presumably\nwe'd need 'narrow\" and 'wide' toast tables, or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 28 Jul 2019 09:34:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-28 10:07:27 -0400, Tom Lane wrote:\n>> In the long run, might we ever switch to 64-bit OIDs? I dunno.\n\n> Depends on the the table, I'd say. Having toast tables have 64bit ids,\n> and not advance the oid counter, would be quite the advantage over the\n> current situation. Toasting performance craters once the oid counter has\n> wrapped. But obviously there are upgrade problems there - presumably\n> we'd need 'narrow\" and 'wide' toast tables, or such.\n\nYeah, but I'd be inclined to fix toast tables as a special case,\nrather than widening OIDs in general. 
We could define the chunk\nnumber as being int8 not OID for the \"wide\" style.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2019 12:42:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add parallelism and glibc dependent only options to reindexdb" } ]
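The capped item counting that Michael settles on earlier in this thread — allocate min(jobs, number of list entries) parallel slots, stopping the walk as soon as the count reaches the number of jobs — can be sketched as follows. The struct and function names here are illustrative stand-ins, not the committed reindexdb code (the real SimpleStringList in src/fe_utils also carries the string values):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for fe_utils' SimpleStringList cells. */
typedef struct Cell
{
	struct Cell *next;
} Cell;

/*
 * Count list items, but stop as soon as max_jobs of them have been
 * seen: the result is the number of parallel slots worth allocating,
 * i.e. min(max_jobs, list length), without walking a long list to
 * its end.
 */
static int
parallel_slot_count(const Cell *head, int max_jobs)
{
	int			n = 0;
	const Cell *c;

	for (c = head; c != NULL && n < max_jobs; c = c->next)
		n++;
	return n;
}
```

Julien's alternative of a count member in Simple*List would make this O(1), at the cost of the boilerplate change deferred to a separate thread.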
[ { "msg_contents": "Hello hackers,\n\nPlease consider fixing the next bunch of typos and inconsistencies in\nthe tree:\n4.1. AccesShareLock -> AccessShareLock\n4.2. AdjustTimestampForTypemod -> AdjustTimestampForTypmod\n4.3. AExprConst -> AexprConst\n4.4. AlterExtensionOwner_oid - remove (orphaned after 994c36e0)\n4.5. AlterTableDropColumn -> ATExecDropColumn (renamed in 077db40f)\n4.6. ApplySortComparatorFull -> ApplySortAbbrevFullComparator\n4.7. arracontjoinsel -> arraycontjoinsel\n4.8. ArrayNItems -> ArrayGetNItems\n4.9. ArrayRef -> SubscriptingRef (renamed by 558d77f2)\n4.10. AtPrepare_Inval - remove (orphaned after efc16ea52)\n\n4.11. AttributeOffsetGetAttributeNumber - > AttrOffsetGetAttrNumber\n4.12. AttInMetaData -> AttInMetadata\n4.13. AuthenticationMD5 -> AuthenticationMD5Password (for the sake of\nconsistency with the docs)\n4.14. AUTH_REQ_GSSAPI -> AUTH_REQ_GSS\n4.15. autogened -> autogenerated\n4.16. BarrierWait -> BarrierArriveAndWait()\n4.17. bgprocno -> bgwprocno\n4.18. BGW_NVER_RESTART -> BGW_NEVER_RESTART\n4.19. BloomInitBuffer -> BloomInitPage\n4.20. br_deconstruct_tuple -> brin_deconstruct_tuple\n\n4.21. brin_tuples.c -> brin_tuple.c\n4.22. bt_parallel_done -> _bt_parallel_done\n4.23. bt_parallel_release -> _bt_parallel_release\n4.24. btree_insert_redo -> btree_xlog_insert\n4.25. bucket_has_garbage -> split_cleanup\n4.26. byta -> bytea\n4.27. CachePlan -> CachedPlan\n4.28. CheckBufferLeaks -> CheckForBufferLeaks\n4.29. check_for_free_segments -> check_for_freed_segments\n4.30. chunkbit -> schunkbit\n\n4.31. cking -> remove (the comment is old and irrelevant since PG95-1_01)\n4.32. ClearPgTM -> ClearPgTm\n4.33. close_ - > closept_\n4.34. CloseTransient File -> CloseTransientFile\n4.35. colorTrigramsGroups -> colorTrigramGroups\n4.36. combinedproj -> remove (orphaned after 69c3936a)\n4.37. contigous_pages -> contiguous_pages\n4.38. cookies_len -> cookies_size\n4.39. cost_tableexprscan -> remove (not used since introduction in fcec6caa)\n4.40. 
create_custom_plan -> create_customscan_plan\n\n4.41. CreateInitialDecodingContext -> CreateInitDecodingContext\n4.42. CreateSlot -> CreateSlotOnDisk\n4.43. create_tablexprscan_path -> remove (not used since introduction in\nfcec6caa)\n4.44. crypt_des -> px_crypt_des\n4.45. ctrigOid -> trigOid\n4.46. curCollations -> colCollations\n4.47. cur_mem & prev_mem -> cur_em & prev_em\n4.48. customer_id_indexdex -> customer_id_index\n4.49. custom_scan -> cscan\n\nI've split proposed patch to make the fixes checking simpler.\n\nBest regards,\nAlexander", "msg_date": "Sun, 30 Jun 2019 16:06:47 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Fix typos and inconsistencies for HEAD" }, { "msg_contents": "On Sun, Jun 30, 2019 at 04:06:47PM +0300, Alexander Lakhin wrote:\n> 4.33. close_ - > closept_\n\nThis one is incorrect as it refers to the various close_* routines\nbelow.\n\n> 4.36. combinedproj -> remove (orphaned after 69c3936a)\n\nThis looks intentional?\n\n> I've split proposed patch to make the fixes checking simpler.\n\nAgreed with the rest, and applied. Thanks!\n--\nMichael", "msg_date": "Mon, 1 Jul 2019 10:02:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos and inconsistencies for HEAD" } ]
[ { "msg_contents": "Hi All,\n\nThere is an interesting issue: I created one replication slot,\nspecifying pgouput plugin and one publication \"foobar\". The\npublication \"foobar\" was set to tables \"foo1, foo2\".\n\nThe slot was left unread, while the tables foo1 and foo2 get changed.\nThen, I alter the publication \"foobar\" to remove table \"foo2\" and\nstart logical replication.\n\nThe changes from foo2 still be sent from the slot!\n\nAnother try is even if I drop the publication \"foobar\", the slot still\nfind the original publication definition and send the changes without\nproblem.\n\nI check the source codes, and I think it's due to the snapshot, when\npgoutput load the publication, it would use the catalog tuples from\nthe snapshot instead of current version, so even if the publication\nget altered or get dropped, the original version is still there in the\nsnapshot.\n\nIs it expected or it's a bug? Anyways, alter publication would not\naffect the replication stream is unexpected.\n\nRegards,\nJinhua Luo\n\n\n", "msg_date": "Mon, 1 Jul 2019 19:12:45 +0800", "msg_from": "Jinhua Luo <luajit.io@gmail.com>", "msg_from_op": true, "msg_subject": "logical replication slot and publication alter" } ]
[ { "msg_contents": "Hi hackers,\n\nI have found two minor issues with unified logging system for \ncommand-line programs (commited by Peter cc8d415117), while was rebasing \nmy pg_rewind patch:\n\n1) forgotten new-line symbol in pg_fatal call inside pg_rewind, which \nwill cause the following Assert in common/logging.c to fire\n\nAssert(fmt[strlen(fmt) - 1] != '\\n');\n\nIt seems not to be a problem for a production Postgres installation \nwithout asserts, but should be removed for sanity.\n\n2) swapped progname <-> full_path in initdb.c setup_bin_paths's call \n[1], while logging message remained the same. So the output will be \nrather misleading, since in the pg_ctl and pg_dumpall the previous order \nis used.\n\nAttached is a small patch that fixes these issues.\n\n[1] \nhttps://github.com/postgres/postgres/commit/cc8d41511721d25d557fc02a46c053c0a602fed0#diff-c4414062a0071ec15df504d39a6df705R2500\n\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Mon, 1 Jul 2019 18:18:33 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fix two issues after moving to unified logging system for\n command-line utils" }, { "msg_contents": "On 2019-07-01 16:18, Alexey Kondratov wrote:\n> I have found two minor issues with unified logging system for \n> command-line programs (commited by Peter cc8d415117), while was rebasing \n> my pg_rewind patch:\n\nFixed, thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jul 2019 23:46:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix two issues after moving to unified logging system for\n command-line utils" } ]
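The assertion this report ran into — `Assert(fmt[strlen(fmt) - 1] != '\n')` in common/logging.c, which rejects format strings ending in a newline because the logging layer appends one itself — can be reproduced with a stand-alone sketch. `format_log_line` is a hypothetical name, not the real pg_log_error API:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/*
 * Render one log line into dst: a fixed "error: " prefix, the formatted
 * message, and a single trailing newline added here.  The assertion is
 * the same guard quoted above, so a format string that already ends in
 * '\n' (as in the stray pg_fatal call in pg_rewind) trips it in
 * assert-enabled builds.
 */
static void
format_log_line(char *dst, size_t dstlen, const char *fmt, ...)
{
	va_list		ap;
	size_t		n;

	assert(fmt[strlen(fmt) - 1] != '\n');

	n = snprintf(dst, dstlen, "error: ");
	va_start(ap, fmt);
	n += vsnprintf(dst + n, dstlen - n, fmt, ap);
	va_end(ap);
	snprintf(dst + n, dstlen - n, "\n");
}
```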
[ { "msg_contents": "Hello,\n\nwe use atoi for user argument processing in same place which return zero\nfor both invalid input and input value zero. In most case its ok because we\nerror out with appropriate error message for input zero but in same place\nwhere we accept zero as valued input it case a problem by preceding for\ninvalid input as input value zero. The attached patch change those place to\nstrtol which can handle invalid input\n\nregards\n\nSurafel", "msg_date": "Mon, 1 Jul 2019 20:48:27 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Change atoi to strtol in same place" }, { "msg_contents": "Hi Surafel,\n\nOn Mon, Jul 01, 2019 at 08:48:27PM +0300, Surafel Temesgen wrote:\n>Hello,\n>\n>we use atoi for user argument processing in same place which return zero\n>for both invalid input and input value zero. In most case its ok because we\n>error out with appropriate error message for input zero but in same place\n>where we accept zero as valued input it case a problem by preceding for\n>invalid input as input value zero. The attached patch change those place to\n>strtol which can handle invalid input\n>\n>regards\n>\n>Surafel\n\nThis seems to have bit-rotted (due to minor changes to pg_basebackup).\nCan you fix that and post an updated version?\n\nIn general, I think it's a good idea to fix those places, but I wonder\nif we need to change the error messages too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 6 Jul 2019 00:40:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "> Surafel Temesgen wrote:\n> > we use atoi for user argument processing in same place which return\n> > zero for both invalid input and input value zero. [...] 
in same\n> > place where we accept zero as valued input it case a problem by\n> > preceding for invalid input as input value zero. strtol which can\n> > handle invalid input\n\nNot only that, but atoi causes Undefined Behavior on erroneous input.\nThe C99 standard says this:\n\n7.20.1 Numeric conversion functions\n The functions atof, atoi, atol, and atoll need not affect the value of the\n integer expression errno on an error. If the value of the result cannot be\n represented, the behavior is undefined.\n\nTomas Vondra wrote:\n> This seems to have bit-rotted (due to minor changes to pg_basebackup).\n> Can you fix that and post an updated version?\n\nI adjusted the patch to apply cleanly on a0555ddab9.\n\n> In general, I think it's a good idea to fix those places, but I wonder\n> if we need to change the error messages too.\n\nI'll leave that decision for the community to debate. I did, however,\nremove the newlines for the new error messages being passed to\npg_log_error(). \n\nAs discussed in message [0], the logging functions in common/logging.c\nnow contain an assertion that messages do not end in newline:\n\n Assert(fmt[strlen(fmt) - 1] != '\\n');\n\n(in pg_log_error via pg_log_generic via pg_log_generic_v)\n\nI also added limits.h to some places it was missing, so the patch would\nbuild.\n\n0: https://postgr.es/m/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com", "msg_date": "Tue, 23 Jul 2019 23:02:37 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Wed, 24 Jul 2019 at 16:02, Joe Nelson <joe@begriffs.com> wrote:\n> > In general, I think it's a good idea to fix those places, but I wonder\n> > if we need to change the error messages too.\n>\n> I'll leave that decision for the community to debate. 
I did, however,\n> remove the newlines for the new error messages being passed to\n> pg_log_error().\n\nI'd like to put my vote not to add this complex code to each option\nvalidation that requires an integer number. I'm not sure there\ncurrently is a home for it, but if there was, wouldn't it be better\nwriting a function that takes a lower and upper bound and sets some\noutput param with the value and returns a bool to indicate if it's\nwithin range or not?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 24 Jul 2019 16:57:42 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On 2019-07-24 16:57:42 +1200, David Rowley wrote:\n> On Wed, 24 Jul 2019 at 16:02, Joe Nelson <joe@begriffs.com> wrote:\n> > > In general, I think it's a good idea to fix those places, but I wonder\n> > > if we need to change the error messages too.\n> >\n> > I'll leave that decision for the community to debate. I did, however,\n> > remove the newlines for the new error messages being passed to\n> > pg_log_error().\n> \n> I'd like to put my vote not to add this complex code to each option\n> validation that requires an integer number. I'm not sure there\n> currently is a home for it, but if there was, wouldn't it be better\n> writing a function that takes a lower and upper bound and sets some\n> output param with the value and returns a bool to indicate if it's\n> within range or not?\n\n+many\n\n\n", "msg_date": "Tue, 23 Jul 2019 22:15:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Wed, Jul 24, 2019 at 04:57:42PM +1200, David Rowley wrote:\n> I'd like to put my vote not to add this complex code to each option\n> validation that requires an integer number. 
I'm not sure there\n> currently is a home for it, but if there was, wouldn't it be better\n> writing a function that takes a lower and upper bound and sets some\n> output param with the value and returns a bool to indicate if it's\n> within range or not?\n\nPerhaps. When I see this patch calling strtol basically only for 10\nas base, this reminds me of Fabien Coelho's patch refactor all the\nstrtoint routines we have in the code:\nhttps://commitfest.postgresql.org/23/2099/\n\nThe conclusion that we are reaching on the thread is to remove more\ndependencies on strtol that we have in the code, and replace it with\nour own, more performant wrappers. This thread makes me wondering\nthat we had better wait before doing this move.\n--\nMichael", "msg_date": "Wed, 24 Jul 2019 14:16:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On 2019-Jul-24, Michael Paquier wrote:\n\n> The conclusion that we are reaching on the thread is to remove more\n> dependencies on strtol that we have in the code, and replace it with\n> our own, more performant wrappers. This thread makes me wondering\n> that we had better wait before doing this move.\n\nOkay, so who is submitting a new version here? Surafel, Joe?\n\nWaiting on Author.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Sep 2019 12:56:39 -0400", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Alvaro Herrera from 2ndQuadrant wrote:\n> Okay, so who is submitting a new version here? 
Surafel, Joe?\n\nWaiting on Author.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Sep 2019 12:56:39 -0400", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Alvaro Herrera from 2ndQuadrant wrote:\n> Okay, so who is submitting a new version here?
Maybe there's a\n> better place for the function.\n\nUsing a wrapper in src/fe_utils makes sense. I would have used\noptions.c for the new file, or would that be too much generic?\n\nThe code indentation is weird, particularly the switch/case in\npg_strtoint64_range().\n\nThe error handling is awkward. I think that you should just call\npg_log_error in pg_strtoint64_range instead of returning an error\nstring as you do. You could do that by passing down the option name\nto the routine, and generate a new set of error messages using that.\n\nWhy do you need to update common/Makefile?\n\nThe builds on Windows are broken. You are adding one file into\nfe_utils, but forgot to update @pgfeutilsfiles in Mkvcbuild.pm. \n--\nMichael", "msg_date": "Tue, 10 Sep 2019 14:35:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Tue, Sep 10, 2019 at 1:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> The error handling is awkward. I think that you should just call\n> pg_log_error in pg_strtoint64_range instead of returning an error\n> string as you do. You could do that by passing down the option name\n> to the routine, and generate a new set of error messages using that.\n\n-1. I think it's very useful to have routines for this sort of thing\nthat return an error message rather than emitting an error report\ndirectly. That gives the caller a lot more control.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 10 Sep 2019 08:03:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Tue, Sep 10, 2019 at 08:03:32AM -0400, Robert Haas wrote:\n> -1. 
I think it's very useful to have routines for this sort of thing\n> that return an error message rather than emitting an error report\n> directly. That gives the caller a lot more control.\n\nPlease let me counter-argue here. There are a couple of reasons to\nnot do as the patch proposes:\n1) Consistency with the error messages makes less work for translators,\nwho already have a lot to deal with. The patch is awkward in this\nsense, to give some examples:\n+ if (s != PG_STRTOINT_OK)\n {\n- pg_log_error(\"invalid status interval \\\"%s\\\"\", optarg);\n+ pg_log_error(\"invalid status interval: %s\", parse_error);\n\n}\n[...]\n- pg_log_error(\"invalid compression level \\\"%s\\\"\", optarg);\n+ pg_log_error(\"invalid compression level: %s\", parse_error);\n\n2) A centralized error message can provide the same level of details.\nHere are suggestions for each error status:\npg_log_error(\"could not parse value for option %s\", optname);\npg_log_error(\"invalid value for option %s\", optname);\noptname should be defined by the caller with strings like\n\"-t/--timeout\" or such. 
Then, if ranges are specified and the error\nis on a range, I think that we should just add a second error message\nto provide a hint to the user, if wanted by the caller of\npg_strtoint64_range() so an extra argument could do handle that:\npg_log_error(\"value must be in range %d..%d\")\n\n3) I think that we should not expose directly the status values of\npg_strtoint_status in pg_strtoint64_range(), that's less for module\nauthors to worry about, and that would be the same approach as we are\nusing for the wrappers of pg_strto[u]intXX() in the patch of the other\nthread (see pg_strto[u]intXX_check for example in [1]).\n\n[1]: https://www.postgresql.org/message-id/20190910030525.GA22934@paquier.xyz\n--\nMichael", "msg_date": "Wed, 11 Sep 2019 12:53:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Michael Paquier wrote:\n> Using a wrapper in src/fe_utils makes sense. I would have used\n> options.c for the new file, or would that be too much generic?\n\nSure, options.c sounds fine -- it doesn't seem any more generic than\n\"arg_utils\" and is a little simpler. The only other question I have is\nif the function inside -- with some adjustment in its interface -- might\nbe useful in other contexts beyond front-end args checking.\n\n> The code indentation is weird, particularly the switch/case in\n> pg_strtoint64_range().\n\nI did run pgindent... Do I need to tell it about the existence of the\nnew file?\n\n> The error handling is awkward.\n\nLet's continue this point in your follow-up\n<20190911035356.GE1953@paquier.xyz>.\n\n> Why do you need to update common/Makefile?\n\nThought I needed to do this so that other parts would link properly, but\njust removed the changes there and stuff still links OK, so I'll remove\nthat change.\n\n> The builds on Windows are broken. 
You are adding one file into\n> fe_utils, but forgot to update @pgfeutilsfiles in Mkvcbuild.pm. --\n\nThanks for the tip, I'm still learning about the build process. Is there\na service I can use to test my patches across multiple platforms? I'd\nrather not bother reviewers with build problems that I can catch in a\nmore automated way.\n\n-- \nJoe Nelson https://begriffs.com\n\n\n", "msg_date": "Wed, 11 Sep 2019 01:22:12 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Robert Haas wrote:\n> > -1. I think it's very useful to have routines for this sort of thing\n> > that return an error message rather than emitting an error report\n> > directly. That gives the caller a lot more control.\n\nMichael Paquier wrote:\n> 1) Consistency with the error messages makes less work for translators,\n> who already have a lot to deal with.\n\nAgreed that the messages can be slightly inconsistent. I tried to make\nthe new messages match the styles of other messages in their respective\nutilities. Maybe the bigger issue here is inconsistent output styles\nacross the utilities in general:\n\n\tpg_standby.c includes flag names\n\t\t%s: -r maxretries %s\n\tpg_basebackup.c writes the settings out in words\n\t\tinvalid compression level: %s\n\t\nNote that the final %s in those examples will expand to a more detailed\nmessage. For example passing \"-Z 10\" to pg_dump in the current patch will\noutput:\n\n\tpg_dump: error: invalid compression level: 10 is outside range 0..9\n\n> 2) A centralized error message can provide the same level of details.\n\nEven assuming we standardize the message format, different callers have\ndifferent means to handle the messages. 
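As a sketch of what I mean -- hypothetical code, not what the patch
actually contains -- the parser can hand the text back and stay agnostic
about how it gets reported:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch only: parse a compression level, writing a human-readable
 * reason into errbuf on failure instead of logging directly, so each
 * utility can route the text to whatever reporting call it uses.
 */
static bool
parse_compress_level(const char *str, int *level,
					 char *errbuf, size_t errlen)
{
	char	   *end;
	long		val;

	errno = 0;
	val = strtol(str, &end, 10);
	if (end == str || *end != '\0')
	{
		snprintf(errbuf, errlen, "could not parse '%s' as integer", str);
		return false;
	}
	if (errno == ERANGE || val < 0 || val > 9)
	{
		snprintf(errbuf, errlen, "%s is outside range 0..9", str);
		return false;
	}
	*level = (int) val;
	return true;
}
```

One caller could then print the buffer with pg_log_error("invalid
compression level: %s", errbuf), while another writes the same text to
stderr or hands it to pg_fatal.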
The front-end utilities affected in my\npatch use calls as varied as fprintf, pg_log_error, write_stderr and pg_fatal.\nThus pg_strtoint64_range needs more flexibility than calling pg_log_error\ninternally.\n\n> 3) I think that we should not expose directly the status values of\n> pg_strtoint_status in pg_strtoint64_range(), that's less for module\n> authors to worry about, and that would be the same approach as we are\n> using for the wrappers of pg_strto[u]intXX() in the patch of the other\n> thread (see pg_strto[u]intXX_check for example in [1]).\n\nThe pg_strto[u]intXX_check functions can return the integer directly only\nbecause they handle errors with ereport(ERROR, ...). However, as I mentioned\nearlier, this is not always what the front-end utilities need to do.\n\n-- \nJoe Nelson https://begriffs.com\n\n\n", "msg_date": "Wed, 11 Sep 2019 02:10:56 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Tue, Sep 10, 2019 at 11:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Sep 10, 2019 at 08:03:32AM -0400, Robert Haas wrote:\n> > -1. I think it's very useful to have routines for this sort of thing\n> > that return an error message rather than emitting an error report\n> > directly. That gives the caller a lot more control.\n>\n> Please let me counter-argue here.\n\nOK, but on the other hand, Joe's example of a custom message \"invalid\ncompression level: 10 is outside range 0..9\" is a world better than\n\"invalid compression level: %s\". We might even be able to do better\n\"argument to -Z must be a compression level between 0 and 9\". In\nbackend error-reporting, it's often important to show the misguided\nvalue, because it may be coming from a complex query where it's hard\nto find the source of the problematic value. 
But if the user types\n-Z42 or -Zborked, I'm not sure it's important to tell them that the\nproblem is with \"42\" or \"borked\". It's more important to explain the\nconcept, or such would be my judgement.\n\nAlso, consider an option where the value must be an integer between 1\nand 100 or one of several fixed strings (e.g. think of\nrecovery_target_timeline). The user clearly can't use the generic\nerror message in that case. Perhaps the answer is to say that such\nusers shouldn't use the provided range-checking function but rather\nimplement the logic from scratch. But that seems a bit limiting.\n\nAlso, suppose the user doesn't want to pg_log_error(). Maybe it's a\nwarning. Maybe it doesn't even need to be logged.\n\nWhat this boils down to in the end is that putting more of the policy\ndecisions into the function helps ensure consistency and save code\nwhen the function is used, but it also results in the function being\nused less often. Reasonable people can differ on the merits of\ndifferent approaches, but for me the down side of returning the error\nmessage appears minor at most, and the up sides seem significant.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 11 Sep 2019 08:24:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "... can we have a new patch? 
Not only because there seems to have been\nsome discussion points that have gone unaddressed (?), but also because\nAppveyor complains really badly about this one.\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.58672\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Sep 2019 17:18:56 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Alvaro Herrera wrote:\n> ... can we have a new patch? Not only because there seems to have\n> been some discussion points that have gone unaddressed (?)\n\nYes, I'll work on it over the weekend.\n\n> but also because Appveyor complains really badly about this one.\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.58672\n\nNote that it requires functions from str2int-10.patch, and will not\ncompile when applied to master by itself. I didn't want to duplicate\nfunctionality from that other uncommitted patch in mine.\n\n\n", "msg_date": "Fri, 27 Sep 2019 21:35:53 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Fri, Sep 27, 2019 at 09:35:53PM -0500, Joe Nelson wrote:\n> Note that it requires functions from str2int-10.patch, and will not\n> compile when applied to master by itself. I didn't want to duplicate\n> functionality from that other uncommitted patch in mine.\n\nIf we have a dependency between both threads, perhaps more people\ncould comment there? 
Here is the most relevant update:\nhttps://www.postgresql.org/message-id/20190917022913.GB1660@paquier.xyz\n--\nMichael", "msg_date": "Sat, 28 Sep 2019 15:06:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Alvaro Herrera wrote:\n> ... can we have a new patch?\n\nOK, I've attached v4. It works cleanly on 55282fa20f with\nstr2int-16.patch applied. My patch won't compile without the other one\napplied too.\n\nChanged:\n[x] revert my changes in common/Makefile\n[x] rename arg_utils.[ch] to option.[ch]\n[x] update @pgfeutilsfiles in Mkvcbuild.pm\n[x] pgindent everything\n[x] get rid of atoi() in more utilities\n\nOne question about how the utilities parse port numbers. I currently\nhave it check that the value can be parsed as an integer, and that its\nrange is within 1 .. (1<<16)-1. I wonder if the former restriction is\n(un)desirable, because ultimately getaddrinfo() takes a \"service name\ndescription\" for the port, which can be a name such as found in\n'/etc/services' as well as the string representation of a number. If\ndesired, I *could* treat only range errors as a failure for ports, and\nallow integer parse errors.\n\n-- \nJoe Nelson https://begriffs.com", "msg_date": "Sun, 29 Sep 2019 23:51:23 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Hello.\n\nAt Sun, 29 Sep 2019 23:51:23 -0500, Joe Nelson <joe@begriffs.com> wrote in <20190930045123.GC68117@begriffs.com>\n> Alvaro Herrera wrote:\n> > ... can we have a new patch?\n> \n> OK, I've attached v4. It works cleanly on 55282fa20f with\n> str2int-16.patch applied. 
My patch won't compile without the other one\n> applied too.\n> \n> Changed:\n> [x] revert my changes in common/Makefile\n> [x] rename arg_utils.[ch] to option.[ch]\n> [x] update @pgfeutilsfiles in Mkvcbuild.pm\n> [x] pgindent everything\n> [x] get rid of atoi() in more utilities\n\nCompiler complained as \"INT_MAX undeclared\" (gcc 7.3 / CentOS7.6).\n\n> One question about how the utilities parse port numbers. I currently\n> have it check that the value can be parsed as an integer, and that its\n> range is within 1 .. (1<<16)-1. I wonder if the former restriction is\n> (un)desirable, because ultimately getaddrinfo() takes a \"service name\n> description\" for the port, which can be a name such as found in\n> '/etc/services' as well as the string representation of a number. If\n> desired, I *could* treat only range errors as a failure for ports, and\n> allow integer parse errors.\n\nWe could do that, but perhaps no use for our usage. We are not\nlikely to use named ports other than 'postgres', if any.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Oct 2019 19:32:08 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "At Tue, 01 Oct 2019 19:32:08 +0900 (Tokyo Standard Time), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in <20191001.193208.264851337.horikyota.ntt@gmail.com>\n> Hello.\n> \n> At Sun, 29 Sep 2019 23:51:23 -0500, Joe Nelson <joe@begriffs.com> wrote in <20190930045123.GC68117@begriffs.com>\n> > Alvaro Herrera wrote:\n> > > ... can we have a new patch?\n> > \n> > OK, I've attached v4. It works cleanly on 55282fa20f with\n> > str2int-16.patch applied. 
My patch won't compile without the other one\n> > applied too.\n> > \n> > Changed:\n> > [x] revert my changes in common/Makefile\n> > [x] rename arg_utils.[ch] to option.[ch]\n> > [x] update @pgfeutilsfiles in Mkvcbuild.pm\n> > [x] pgindent everything\n> > [x] get rid of atoi() in more utilities\n\nI didn't check closely, but -k of pg_standby's message looks\nsomewhat strange. Needs a separator?\n\n> pg_standby: -k keepfiles could not parse 'hoge' as integer\n\nBuilding a sentence just concatenating multiple nonindependent\n(or incomplete) subphrases makes translation harder.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Oct 2019 19:46:21 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Kyotaro Horiguchi wrote:\n> > pg_standby: -k keepfiles could not parse 'hoge' as integer\n>\n> I didn't check closely, but -k of pg_standby's message looks\n> somewhat strange.\n\nGood point, how about this:\n\n\tpg_standby: -k keepfiles: <localized error message>\n\n> Building a sentence just concatenating multiple nonindependent\n> (or incomplete) subphrases makes translation harder.\n\nI could have pg_strtoint64_range() wrap its error messages in _() so\nthat translators could customize the messages prior to concatenation.\n\n\t*error = psprintf(_(\"could not parse '%s' as integer\"), str);\n\nWould this suffice?\n\n\n", "msg_date": "Thu, 3 Oct 2019 22:23:50 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On 2019-Oct-03, Joe Nelson wrote:\n\n> Kyotaro Horiguchi wrote:\n> > > pg_standby: -k keepfiles could not parse 'hoge' as integer\n> >\n> > I didn't check closely, but -k of pg_standby's message looks\n> > somewhat strange. 
Needs a separator?\n> \n> Good point, how about this:\n> \n> \tpg_standby: -k keepfiles: <localized error message>\n\nThe wording is a bit strange. How about something like\npg_standy: invalid argument to -k: %s\n\nwhere the %s is the error message produced like you propose:\n\n> I could have pg_strtoint64_range() wrap its error messages in _() so\n> that translators could customize the messages prior to concatenation.\n> \n> \t*error = psprintf(_(\"could not parse '%s' as integer\"), str);\n\n... except that they would rather be more explicit about what the\nproblem is. \"insufficient digits\" or \"extraneous character\", etc.\n\n> Would this suffice?\n\nI hope that no callers would like to have the messages not translated,\nbecause that seems like it would become a mess.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Oct 2019 12:33:08 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Hi Joe,\n\nOn a quick look, the patch seems to be going in a good direction\nalthough there is quite some pending work to be done.\n\nOne suggestion:\nThe start value for port number is set to 1, however it seems like the\nport number that falls in the range of 1-1023 is reserved and can't be\nused. 
So, is it possible to have the start value as 1024 instead of 1\n?\n\nFurther, I encountered one syntax error (INT_MAX undeclared) as the\nheader file \"limits.h\" has not been included in postgres_fe.h or\noption.h\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Fri, Oct 4, 2019 at 9:04 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Oct-03, Joe Nelson wrote:\n>\n> > Kyotaro Horiguchi wrote:\n> > > > pg_standby: -k keepfiles could not parse 'hoge' as integer\n> > >\n> > > I didn't checked closely, but -k of pg_standby's message looks\n> > > somewhat strange. Needs a separator?\n> >\n> > Good point, how about this:\n> >\n> > pg_standby: -k keepfiles: <localized error message>\n>\n> The wording is a bit strange. How about something like\n> pg_standy: invalid argument to -k: %s\n>\n> where the %s is the error message produced like you propose:\n>\n> > I could have pg_strtoint64_range() wrap its error messages in _() so\n> > that translators could customize the messages prior to concatenation.\n> >\n> > *error = psprintf(_(\"could not parse '%s' as integer\"), str);\n>\n> ... except that they would rather be more explicit about what the\n> problem is. \"insufficient digits\" or \"extraneous character\", etc.\n>\n> > Would this suffice?\n>\n> I hope that no callers would like to have the messages not translated,\n> because that seems like it would become a mess.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n\n", "msg_date": "Sat, 5 Oct 2019 08:36:46 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Ashutosh Sharma wrote:\n> One suggestion: The start value for port number is set to 1, however\n> it seems like the port number that falls in the range of 1-1023 is\n> reserved and can't be used. 
So, is it possible to have the start value\n> as 1024 instead of 1 ?\n\nGood idea -- changed it. I also created macros FE_UTILS_PORT_{MIN,MAX}\nso the range can be updated in one place for all utilities.\n\n> Further, I encountered one syntax error (INT_MAX undeclared) as the\n> header file \"limits.h\" has not been included in postgres_fe.h or\n> option.h\n\nOops. Added limits.h now in option.h. The Postgres build accidentally\nworked on my system without explicitly including this header because\n__has_builtin(__builtin_isinf) is true for me so src/include/port.h\npulled in math.h with an #if which pulled in limits.h. \n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > The wording is a bit strange. How about something like pg_standy:\n> > invalid argument to -k: %s\n\nI updated the error messages in pg_standby.\n\n> > > *error = psprintf(_(\"could not parse '%s' as integer\"), str);\n> >\n> > ... except that they would rather be more explicit about what the\n> > problem is. \"insufficient digits\" or \"extraneous character\", etc.\n\nSadly pg_strtoint64 returns the same error code for both cases. So we\ncould either petition for more detailed errors in the thread for that\nother patch, or examine the string ourselves to check. Maybe it's not\nneeded since \"could not parse 'abc' as integer\" kind of does show the\nproblem.\n\n> > I hope that no callers would like to have the messages not translated,\n> > because that seems like it would become a mess.\n\nThat's true... I think it should be OK though, since we return the\npg_strtoint_status so callers can inspect that rather than relying on certain\nwords being in the error string. 
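For instance, a caller might do something like this (placeholder names
and values only -- this is a throwaway sketch, not code from either
patch):

```c
#include <errno.h>
#include <stdlib.h>

/* Placeholder status values -- purely illustrative. */
typedef enum
{
	PG_STRTOINT_OK,
	PG_STRTOINT_SYNTAX_ERROR,
	PG_STRTOINT_RANGE_ERROR
} pg_strtoint_status;

static pg_strtoint_status
check_int_range(const char *str, long min, long max)
{
	char	   *end;
	long		val;

	errno = 0;
	val = strtol(str, &end, 10);
	if (end == str || *end != '\0')
		return PG_STRTOINT_SYNTAX_ERROR;
	if (errno == ERANGE || val < min || val > max)
		return PG_STRTOINT_RANGE_ERROR;
	return PG_STRTOINT_OK;
}

/*
 * The caller branches on the status, not on words inside the (possibly
 * translated) message, e.g. adding a range hint only for range errors.
 */
static const char *
hint_for(pg_strtoint_status status)
{
	switch (status)
	{
		case PG_STRTOINT_RANGE_ERROR:
			return "value must be in range 1024..65535";
		case PG_STRTOINT_SYNTAX_ERROR:
			return "value must be an integer";
		default:
			return NULL;
	}
}
```

So the behavior stays stable no matter how the message text itself is
translated.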
I'm guessing the translated string would be\nmost appropriate for end users.\n\n-- \nJoe Nelson https://begriffs.com", "msg_date": "Sun, 6 Oct 2019 19:21:50 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Mon, 7 Oct 2019 at 13:21, Joe Nelson <joe@begriffs.com> wrote:\n>\n> Ashutosh Sharma wrote:\n> > One suggestion: The start value for port number is set to 1, however\n> > it seems like the port number that falls in the range of 1-1023 is\n> > reserved and can't be used. So, is it possible to have the start value\n> > as 1024 instead of 1 ?\n>\n> Good idea -- changed it. I also created macros FE_UTILS_PORT_{MIN,MAX}\n> so the range can be updated in one place for all utilities.\n\n(I've only had a very quick look at this, and FWIW, here's my opinion)\n\nIt's not for this patch to decide what ports clients can connect to\nPostgreSQL on. As far as I'm aware Windows has no restrictions on what\nports unprivileged users can listen on. I think we're likely going to\nupset a handful of people if we block the client tools from connecting\nto ports < 1024.\n\n> > Further, I encountered one syntax error (INT_MAX undeclared) as the\n> > header file \"limits.h\" has not been included in postgres_fe.h or\n> > option.h\n>\n> Oops. Added limits.h now in option.h. The Postgres build accidentally\n> worked on my system without explicitly including this header because\n> __has_builtin(__builtin_isinf) is true for me so src/include/port.h\n> pulled in math.h with an #if which pulled in limits.h.\n>\n> > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > The wording is a bit strange. How about something like pg_standy:\n> > > invalid argument to -k: %s\n>\n> I updated the error messages in pg_standby.\n>\n> > > > *error = psprintf(_(\"could not parse '%s' as integer\"), str);\n> > >\n> > > ... 
except that they would rather be more explicit about what the\n> > > problem is. \"insufficient digits\" or \"extraneous character\", etc.\n\nThis part seems over-complex to me. What's wrong with just returning a\nbool and if the caller gets \"false\", then just have it emit the error,\nsuch as:\n\n\"compression level must be between %d and %d\"\n\nI see Michael's patch is adding this new return type, but really, is\nthere a good reason why we need to do something special when the user\ndoes not pass in an integer?\n\nCurrent patch:\n$ pg_dump -Z blah\ninvalid compression level: could not parse 'blah' as integer\n\nI propose:\n$ pg_dump -Z blah\ncompression level must be an integer in range 0..9\n\nThis might save a few round trips, e.g the current patch will do:\n$ pg_dump -Z blah\ninvalid compression level: could not parse 'blah' as integer\n$ pg_dump -Z 12345\ninvalid compression level: 12345 is outside range 0..9\n$ ...\n\nAlso:\n\n+ case PG_STRTOINT_RANGE_ERROR:\n+ *error = psprintf(_(\"%s is outside range \"\n+ INT64_FORMAT \"..\" INT64_FORMAT),\n+ str, min, max);\n\nThe translation string here must be consistent over all platforms. I\nthink this will cause issues if the translation string uses %ld and\nthe platform requires %lld?\n\nI think what this patch should be really aiming for is to simplify the\nclient command-line argument parsing and adding what benefits it can.\nI don't think there's really a need to make anything more complex than\nit is already here.\n\nI think you should maybe aim for 2 patches here.\n\nPatch 1: Add new function to validate range and return bool indicating\nif the string is an integer within range. Set output argument to the\nint value if it is valid. Modify all locations where we currently\nvalidate the range of the input arg to use the new function.\n\nPatch 2: Add additional validation where we don't currently do\nanything. 
e.g pg_dump -j\n\nWe can then see if there's any merit in patch 2 of if it's adding more\ncomplexity than is really needed.\n\nI also think some compilers won't like:\n\n+ compressLevel = parsed;\n\ngiven that \"parsed\" is int64 and \"compressLevel\" is int, surely some\ncompilers will warn of possible truncation? An explicit cast to int\nshould fix those or you could consider just writing a version of the\nfunction for int32 and int64 and directly passing in the variable to\nbe set.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 18:10:05 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Mon, Oct 7, 2019 at 10:40 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Mon, 7 Oct 2019 at 13:21, Joe Nelson <joe@begriffs.com> wrote:\n> >\n> > Ashutosh Sharma wrote:\n> > > One suggestion: The start value for port number is set to 1, however\n> > > it seems like the port number that falls in the range of 1-1023 is\n> > > reserved and can't be used. So, is it possible to have the start value\n> > > as 1024 instead of 1 ?\n> >\n> > Good idea -- changed it. I also created macros FE_UTILS_PORT_{MIN,MAX}\n> > so the range can be updated in one place for all utilities.\n>\n> (I've only had a very quick look at this, and FWIW, here's my opinion)\n>\n> It's not for this patch to decide what ports clients can connect to\n> PostgreSQL on. As far as I'm aware Windows has no restrictions on what\n> ports unprivileged users can listen on. I think we're likely going to\n> upset a handful of people if we block the client tools from connecting\n> to ports < 1024.\n>\n\nAFAIU from the information given in the wiki page -[1], the port\nnumbers in the range of 1-1023 are for the standard protocols and\nservices. 
And there is nowhere mentioned that it is only true for some\nOS and not for others. But, having said that I've just verified it on\nLinux so I'm not aware of the behaviour on Windows.\n\n[1] - https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers\n\n> > > Further, I encountered one syntax error (INT_MAX undeclared) as the\n> > > header file \"limits.h\" has not been included in postgres_fe.h or\n> > > option.h\n> >\n> > Oops. Added limits.h now in option.h. The Postgres build accidentally\n> > worked on my system without explicitly including this header because\n> > __has_builtin(__builtin_isinf) is true for me so src/include/port.h\n> > pulled in math.h with an #if which pulled in limits.h.\n> >\n> > > Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > > The wording is a bit strange. How about something like pg_standy:\n> > > > invalid argument to -k: %s\n> >\n> > I updated the error messages in pg_standby.\n> >\n> > > > > *error = psprintf(_(\"could not parse '%s' as integer\"), str);\n> > > >\n> > > > ... except that they would rather be more explicit about what the\n> > > > problem is. \"insufficient digits\" or \"extraneous character\", etc.\n>\n> This part seems over-complex to me. 
What's wrong with just returning a\n> bool and if the caller gets \"false\", then just have it emit the error,\n> such as:\n>\n> \"compression level must be between %d and %d\"\n>\n> I see Michael's patch is adding this new return type, but really, is\n> there a good reason why we need to do something special when the user\n> does not pass in an integer?\n>\n> Current patch:\n> $ pg_dump -Z blah\n> invalid compression level: could not parse 'blah' as integer\n>\n> I propose:\n> $ pg_dump -Z blah\n> compression level must be an integer in range 0..9\n>\n> This might save a few round trips, e.g the current patch will do:\n> $ pg_dump -Z blah\n> invalid compression level: could not parse 'blah' as integer\n> $ pg_dump -Z 12345\n> invalid compression level: 12345 is outside range 0..9\n> $ ...\n>\n> Also:\n>\n> + case PG_STRTOINT_RANGE_ERROR:\n> + *error = psprintf(_(\"%s is outside range \"\n> + INT64_FORMAT \"..\" INT64_FORMAT),\n> + str, min, max);\n>\n> The translation string here must be consistent over all platforms. I\n> think this will cause issues if the translation string uses %ld and\n> the platform requires %lld?\n>\n> I think what this patch should be really aiming for is to simplify the\n> client command-line argument parsing and adding what benefits it can.\n> I don't think there's really a need to make anything more complex than\n> it is already here.\n>\n> I think you should maybe aim for 2 patches here.\n>\n> Patch 1: Add new function to validate range and return bool indicating\n> if the string is an integer within range. Set output argument to the\n> int value if it is valid. Modify all locations where we currently\n> validate the range of the input arg to use the new function.\n>\n> Patch 2: Add additional validation where we don't currently do\n> anything. 
e.g pg_dump -j\n>\n> We can then see if there's any merit in patch 2 of if it's adding more\n> complexity than is really needed.\n>\n> I also think some compilers won't like:\n>\n> + compressLevel = parsed;\n>\n> given that \"parsed\" is int64 and \"compressLevel\" is int, surely some\n> compilers will warn of possible truncation? An explicit cast to int\n> should fix those or you could consider just writing a version of the\n> function for int32 and int64 and directly passing in the variable to\n> be set.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 10:57:09 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Mon, 7 Oct 2019 at 18:27, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> AFAIU from the information given in the wiki page -[1], the port\n> numbers in the range of 1-1023 are for the standard protocols and\n> services. And there is nowhere mentioned that it is only true for some\n> OS and not for others. But, having said that I've just verified it on\n> Linux so I'm not aware of the behaviour on Windows.\n>\n> [1] - https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers\n\nHere are the results of a quick test on a windows machine:\n\nL:\\Projects\\Postgres\\install\\bin>echo test > c:\\windows\\test.txt\nAccess is denied.\n\nL:\\Projects\\Postgres\\install\\bin>cat ../data/postgresql.conf | grep \"port = \"\nport = 543 # (change requires restart)\n\nL:\\Projects\\Postgres\\install\\bin>psql -p 543 postgres\npsql (11.5)\nWARNING: Console code page (850) differs from Windows code page (1252)\n 8-bit characters might not work correctly. 
See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\npostgres=#\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 18:35:21 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Mon, Oct 7, 2019 at 11:05 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Mon, 7 Oct 2019 at 18:27, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > AFAIU from the information given in the wiki page -[1], the port\n> > numbers in the range of 1-1023 are for the standard protocols and\n> > services. And there is nowhere mentioned that it is only true for some\n> > OS and not for others. But, having said that I've just verified it on\n> > Linux so I'm not aware of the behaviour on Windows.\n> >\n> > [1] - https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers\n>\n> Here are the results of a quick test on a windows machine:\n>\n> L:\\Projects\\Postgres\\install\\bin>echo test > c:\\windows\\test.txt\n> Access is denied.\n>\n> L:\\Projects\\Postgres\\install\\bin>cat ../data/postgresql.conf | grep \"port = \"\n> port = 543 # (change requires restart)\n>\n> L:\\Projects\\Postgres\\install\\bin>psql -p 543 postgres\n> psql (11.5)\n> WARNING: Console code page (850) differs from Windows code page (1252)\n> 8-bit characters might not work correctly. See psql reference\n> page \"Notes for Windows users\" for details.\n> Type \"help\" for help.\n>\n> postgres=#\n>\n\nOh then that means all the unused ports (be it dedicated for some\nparticular protocol or service) can be used on Windows. I just tried\nusing port number 21 and 443 for postgres on my old Windows setup and\nit worked. 
See below,\n\nC:\\Users\\ashu\\git-clone-postgresql\\postgresql\\TMP\\test\\bin>.\\pg_ctl -D\n..\\data -c -w -l logfile -o \"\n-p 21\" start\nwaiting for server to start.... done\nserver started\n\nC:\\Users\\ashu\\git-clone-postgresql\\postgresql\\TMP\\test\\bin>.\\psql -d\npostgres -p 21\npsql (10.5)\nWARNING: Console code page (437) differs from Windows code page (1252)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\npostgres=# \\q\n\nC:\\Users\\ashu\\git-clone-postgresql\\postgresql\\TMP\\test\\bin>.\\pg_ctl -D\n..\\data -c -w -l logfile stop\n\nwaiting for server to shut down.... done\nserver stopped\n\nC:\\Users\\ashu\\git-clone-postgresql\\postgresql\\TMP\\test\\bin>.\\pg_ctl -D\n..\\data -c -w -l logfile -o \"\n-p 443\" start\nwaiting for server to start.... done\nserver started\n\nC:\\Users\\ashu\\git-clone-postgresql\\postgresql\\TMP\\test\\bin>.\\psql -d\npostgres -p 443\npsql (10.5)\nWARNING: Console code page (437) differs from Windows code page (1252)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\nThis looks a weird behaviour to me. I think this is probably one\nreason why people don't prefer using Windows. Anyways, thanks David\nfor putting that point, it was really helpful.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Oct 2019 12:35:19 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "David Rowley wrote:\n> It's not for this patch to decide what ports clients can connect to\n> PostgreSQL on. As far as I'm aware Windows has no restrictions on what\n> ports unprivileged users can listen on. 
I think we're likely going to\n> upset a handful of people if we block the client tools from connecting\n> to ports < 1024.\n\nMakes sense. We can instead allow any port number and if it errors at\nconnection time then the user will find out at that point.\n\n> This part seems over-complex to me. What's wrong with just returning a\n> bool and if the caller gets \"false\", then just have it emit the error,\n> such as:\n> \n> \"compression level must be between %d and %d\"\n> \n> I see Michael's patch is adding this new return type, but really, is\n> there a good reason why we need to do something special when the user\n> does not pass in an integer?\n\nDisplaying the range when given a non-integer input could be misleading.\nFor instance, suppose we put as little restriction on the port number\nrange as possible, enforcing only that it's positive, between 1 and\nINT_MAX. If someone passes a non-integer value, they would get a\nmessage like:\n\n\tinvalid port number: must be an integer in range 1..2147483647\n\nSure, the parsing code will accept such a big number, but we don't want\nto *suggest* the number. Notice if the user had passed a well-formed\nnumber for the port it's unlikely to be greater than INT_MAX, so they\nwouldn't have to see this weird message.\n\nPerhaps you weren't suggesting to remove the upper limit from port\nchecking, just to change the lower limit from 1024 back to 1. In that\ncase we could keep it capped at 65535 and the error message above would\nbe OK.\n\nOther utilities do have command line args that are allowed the whole\nnon-negative (but signed) int range, and their error message would show\nthe big number. 
It's not misleading in that case, but a little\nostentatious.\n\n> Current patch:\n> $ pg_dump -Z blah\n> invalid compression level: could not parse 'blah' as integer\n> \n> I propose:\n> $ pg_dump -Z blah\n> compression level must be an integer in range 0..9\n> \n> This might save a few round trips, e.g. the current patch will do:\n\nI do see your reasoning that we're teasing people with a puzzle they\nhave to solve with multiple invocations. On the other hand, passing a\nnon-number for the compression level is pretty strange, and perhaps\nexplicitly calling out the mistake might make someone look more\ncarefully at what they -- or their script -- is doing.\n\n> The translation string here must be consistent over all platforms. I\n> think this will cause issues if the translation string uses %ld and\n> the platform requires %lld?\n\nA very good and subtle point. I'll change it to %lld so that a single\nformat string will work everywhere.\n\n> I think you should maybe aim for 2 patches here.\n> \n> Patch 1: ...\n> \n> Patch 2: Add additional validation where we don't currently do\n> anything. e.g. pg_dump -j\n> \n> We can then see if there's any merit in patch 2 or if it's adding more\n> complexity than is really needed.\n\nAre you saying that my current patch adds extra constraints for\npg_dump's -j argument, or that in the future we could do that? Because I\ndon't believe the current patch adds any appreciable complexity for that\nparticular argument, other than ensuring the value is positive, which\ndoesn't seem too contentious.\n\nMaybe we can see whether we can reach consensus on the current\nparse-and-check combo patch, and if discussion drags on much longer then\ntry to split it up?\n\n> I also think some compilers won't like:\n> \n> + compressLevel = parsed;\n> \n> given that \"parsed\" is int64 and \"compressLevel\" is int, surely some\n> compilers will warn of possible truncation?
An explicit cast to int\n> should fix those\n\nGood point, I bet some compilers (justly) warn about truncation. We've\nchecked the range so I'll add an explicit cast.\n\n> or you could consider just writing a version of the function for int32\n> and int64 and directly passing in the variable to be set.\n\nOne complication is that the destination values are often int rather\nthan int32, and I don't know their width in general (probably 32, maybe\n16, but *possibly* 64?). The pg_strtoint64_range() function with range\nargument of INT_MAX is flexible enough to handle whatever situation we\nencounter. Am I overthinking this part?\n\n-- \nJoe Nelson https://begriffs.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 01:46:51 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On Tue, 8 Oct 2019 at 19:46, Joe Nelson <joe@begriffs.com> wrote:\n>\n> David Rowley wrote:\n> > The translation string here must be consistent over all platforms. I\n> > think this will cause issues if the translation string uses %ld and\n> > the platform requires %lld?\n>\n> A very good and subtle point. I'll change it to %lld so that a single\n> format string will work everywhere.\n\nThe way to do this is to make a temp buffer and snprintf into that\nbuffer then use %s.\n\nSee basebackup.c where it does:\n\nchar buf[64];\n\nsnprintf(buf, sizeof(buf), INT64_FORMAT, total_checksum_failures);\n\nereport(WARNING,\n(errmsg(\"%s total checksum verification failures\", buf)));\n\nas an example.\n\n> > I think you should maybe aim for 2 patches here.\n> >\n> > Patch 1: ...\n> >\n> > Patch 2: Add additional validation where we don't currently do\n> > anything. 
e.g. pg_dump -j\n> >\n> > We can then see if there's any merit in patch 2 or if it's adding more\n> > complexity than is really needed.\n>\n> Are you saying that my current patch adds extra constraints for\n> pg_dump's -j argument, or that in the future we could do that? Because I\n> don't believe the current patch adds any appreciable complexity for that\n> particular argument, other than ensuring the value is positive, which\n> doesn't seem too contentious.\n\n> Maybe we can see whether we can reach consensus on the current\n> parse-and-check combo patch, and if discussion drags on much longer then\n> try to split it up?\n\nI just think you're more likely to get a committer onside if you made\nit so they didn't have to consider if throwing errors where we\npreviously didn't would be a bad thing. It's quite common to get core\ninfrastructure in first then follow up with code that uses it. This\nwould be core infrastructure plus some less controversial usages of\nit, then follow up with more. This was really just a suggestion. I\ndidn't dig into the patch in enough detail to decide on how many\nplaces could raise an error that would have silently just done\nsomething unexpected beforehand.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 8 Oct 2019 23:06:25 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Tue, 8 Oct 2019 at 19:46, Joe Nelson <joe@begriffs.com> wrote:\n>> David Rowley wrote:\n>>> The translation string here must be consistent over all platforms. I\n>>> think this will cause issues if the translation string uses %ld and\n>>> the platform requires %lld?\n\n>> A very good and subtle point.
I'll change it to %lld so that a single\n>> format string will work everywhere.\n\n> The way to do this is to make a temp buffer and snprintf into that\n> buffer then use %s.\n\nWe have done it that way in the past, but it was mainly because we\ncouldn't be sure that snprintf was on board with %lld. I think that\nthe new consensus is that forcing use of \"long long\" is a less messy\nsolution (unless you need to back-patch the code). See commit\n6a1cd8b92 for recent precedent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 16:51:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Here's v6 of the patch.\n\n[x] Rebase on 20961ceaf0\n[x] Don't call exit(1) after pg_fatal()\n[x] Use Tom Lane's suggestion for %lld in _() string\n[x] Allow full unsigned 16-bit range for ports (don't disallow ports 0-1023)\n[x] Explicitly cast parsed values to smaller integers\n\n-- \nJoe Nelson https://begriffs.com", "msg_date": "Fri, 11 Oct 2019 23:27:54 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "At Fri, 11 Oct 2019 23:27:54 -0500, Joe Nelson <joe@begriffs.com> wrote in \n> Here's v6 of the patch.\n> \n> [x] Rebase on 20961ceaf0\n> [x] Don't call exit(1) after pg_fatal()\n> [x] Use Tom Lane's suggestion for %lld in _() string\n> [x] Allow full unsigned 16-bit range for ports (don't disallow ports 0-1023)\n> [x] Explicitly cast parsed values to smaller integers\n\nThank you for the new version.\n\nBy the way in the upthread,\n\nAt Tue, 8 Oct 2019 01:46:51 -0500, Joe Nelson <joe@begriffs.com> wrote in \n> > I see Michael's patch is adding this new return type, but really, is\n> > there a good reason why we need to do something special when the user\n> > does not pass in an integer?\n\nI agree with David in that it's better to avoid that kind of complexity\nif
possible. The significant point of separating them was that you\ndon't want to suggest a false value range for non-integer inputs.\n\nLooking at the latest patch, the wrong suggestions and the complexity\nintroduced by the %lld alternative are already gone. So I think we're\nreaching the simple solution where pg_strtoint64_range doesn't need to\nbe involved in message building.\n\n\"<hoge> must be an integer in the range (mm .. xx)\"\n\nDoesn't the generic message work for all kinds of failure here?\n\n# It is also easier for translators than the split message case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 17 Oct 2019 17:52:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Kyotaro Horiguchi wrote:\n> Doesn't the generic message work for all kinds of failure here?\n\nYeah it does now. I updated pg_strtoint64_range to generate the same\nmessage for all errors.\n\n> So I think we're reaching the simple solution where\n> pg_strtoint64_range doesn't need to be involved in message building.\n\nEven though there's only one message, it still seems best to have the\nfunction create the error string. That way the string stays consistent\nand isn't duplicated across the code.\n\n-- \nJoe Nelson https://begriffs.com", "msg_date": "Sun, 27 Oct 2019 20:20:00 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "I see this patch has been moved to the next commitfest.
What's the next\nstep, does it need another review?\n\n-- \nJoe Nelson https://begriffs.com\n\n\n", "msg_date": "Fri, 6 Dec 2019 11:43:58 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "On 2019-12-06 18:43, Joe Nelson wrote:\n> I see this patch has been moved to the next commitfest. What's the next\n> step, does it need another review?\n\nI think you need to promote your patch better. The thread subject and \nthe commit fest entry title are somewhat nonsensical and no longer match \nwhat the patch actually does. The patch contains no commit message and \nno documentation or test changes, so it's not easy to make out what it's \nsupposed to do or verify that it does so correctly. A reviewer would \nhave to take this patch on faith or manually test every single command \nline option to see what it does. Moreover, a lot of this error message \ntweaking is opinion-based, so it's more burdensome to do an objective \nreview. This patch is competing for attention against more than 200 \nother patches that have more going for them in these aspects.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jan 2020 10:49:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Joe Nelson wrote:\n> > I see this patch has been moved to the next commitfest. 
What's the next\n> > step, does it need another review?\n\nPeter Eisentraut wrote:\n> I think you need to promote your patch better.\n\nThanks for taking the time to revive this thread.\n\nQuick sales pitch for the patch:\n\n* Standardizes bounds checking and error message format in utilities\n pg_standby, pg_basebackup, pg_receivewal, pg_recvlogical, pg_ctl,\n pg_dump, pg_restore, pg_upgrade, pgbench, reindexdb, and vacuumdb\n* Removes Undefined Behavior caused by atoi() as described\n in the C99 standard, section 7.20.1\n* Unifies integer parsing between the front- and back-end using\n functions provided in https://commitfest.postgresql.org/25/2272/\n\nIn reality I doubt my patch is fixing any pressing problem, it's just a\nsmall refactor.\n\n> The thread subject and the commit fest entry title are somewhat\n> nonsensical and no longer match what the patch actually does.\n\nI thought changing the subject line might be discouraged, but since you\nsuggest doing it, I just did. Updated the title of the commitfest entry\nhttps://commitfest.postgresql.org/26/2197/ as well.\n\n> The patch contains no commit message\n\nDoes this list not accept plain patches, compatible with git-apply?\n(Maybe your point is that I should make it as easy for committers as\npossible, and asking them to invent a commit message is just extra\nwork.)\n\n> and no documentation or test changes\n\nThe interfaces of the utilities remain the same. Same flags. The only\nchange would be the error messages produced for incorrect values.\n\nThe tests I ran passed successfully, but perhaps there were others I\ndidn't try running and should have.\n\n> Moreover, a lot of this error message tweaking is opinion-based, so\n> it's more burdensome to do an objective review. This patch is\n> competing for attention against more than 200 other patches that have\n> more going for them in these aspects.\n\nTrue. 
I think the code looks nicer,\nbut I'll leave it in the community's hands to determine if this is\nsomething they want.\n\nOnce again, I appreciate your taking the time to help me with this\nprocess.\n\n\n", "msg_date": "Sun, 12 Jan 2020 15:43:38 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: refactoring - standardize integer parsing in front-end utilities" }, { "msg_contents": "On 2019-Dec-06, Joe Nelson wrote:\n\n> I see this patch has been moved to the next commitfest. What's the next\n> step, does it need another review?\n\nThis patch doesn't currently apply; it has conflicts with at least\n01368e5d9da7 and 7e735035f208; even in 7e735035f208^ it applies with\nfuzz. Please post an updated version so that it can move forward.\n\nOn the other hand, I doubt that patching pg_standby is productive. I\nwould just leave that out entirely. See this thread from 2014\nhttps://postgr.es/m/545946E9.8060504@gmx.net\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 13:54:15 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "> On 11 Feb 2020, at 17:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2019-Dec-06, Joe Nelson wrote:\n> \n>> I see this patch has been moved to the next commitfest. What's the next\n>> step, does it need another review?\n> \n> This patch doesn't currently apply; it has conflicts with at least\n> 01368e5d9da7 and 7e735035f208; even in 7e735035f208^ it applies with\n> fuzz. Please post an updated version so that it can move forward.\n\nPing.
With the 2020-03 CommitFest now under way, are you able to supply a\nrebased patch for consideration?\n\ncheers ./daniel\n\n\n", "msg_date": "Mon, 2 Mar 2020 13:32:57 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "Daniel Gustafsson wrote:\n\n> > On 11 Feb 2020, at 17:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > This patch doesn't currently apply; it has conflicts with at least\n> > 01368e5d9da7 and 7e735035f208; even in 7e735035f208^ it applies with\n> > fuzz. Please post an updated version so that it can move forward.\n> \n> Ping. With the 2020-03 CommitFest now under way, are you able to supply a\n> rebased patch for consideration?\n\nMy patch relies on another that was previously returned with feedback in\nthe 2019-11 CF: https://commitfest.postgresql.org/25/2272/\n\nI've lost interest in continuing to rebase this. Someone else can take over the\nwork if they are interested in it. Otherwise just close the CF entry.\n\n\n", "msg_date": "Wed, 4 Mar 2020 23:06:33 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" }, { "msg_contents": "> On 5 Mar 2020, at 06:06, Joe Nelson <joe@begriffs.com> wrote:\n> \n> Daniel Gustafsson wrote:\n> \n>>> On 11 Feb 2020, at 17:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> \n>>> This patch doesn't currently apply; it has conflicts with at least\n>>> 01368e5d9da7 and 7e735035f208; even in 7e735035f208^ it applies with\n>>> fuzz. Please post an updated version so that it can move forward.\n>> \n>> Ping. With the 2020-03 CommitFest now under way, are you able to supply a\n>> rebased patch for consideration?\n> \n> My patch relies on another that was previously returned with feedback in\n> the 2019-11 CF: https://commitfest.postgresql.org/25/2272/\n> \n> I've lost interest in continuing to rebase this. 
Someone else can take over the\n> work if they are interested in it. Otherwise just close the CF entry.\n\nOk, I'm marking this as withdrawn in the CF app, anyone interested can pick it\nup where this thread left off and re-submit. Thanks for working on it!\n\ncheers ./daniel\n\n", "msg_date": "Thu, 5 Mar 2020 10:38:38 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Change atoi to strtol in same place" } ]
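The strtol-based validation discussed in the thread above can be sketched in a few lines of C. This is an illustrative, hypothetical helper -- not the actual pg_strtoint64_range() from the proposed patch, whose exact signature and message wording differ -- showing how strtol-family parsing catches everything atoi() silently accepts:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse str as a base-10 integer in [min, max].  Returns 1 on success
 * and stores the value in *result; returns 0 on failure and fills
 * errbuf with a single generic message ("must be an integer in range
 * min..max") covering non-numeric input, trailing garbage, overflow,
 * and out-of-range values alike -- unlike atoi(), which silently
 * returns 0 (or has undefined behavior) for such inputs.
 */
static int
parse_int_range(const char *str, long long min, long long max,
                long long *result, char *errbuf, size_t errbufsize)
{
    char       *end;
    long long   val;

    errno = 0;
    val = strtoll(str, &end, 10);

    if (end == str ||           /* no digits at all */
        *end != '\0' ||         /* trailing garbage, e.g. "5x" */
        errno == ERANGE ||      /* overflow or underflow */
        val < min || val > max)
    {
        snprintf(errbuf, errbufsize,
                 "must be an integer in range %lld..%lld", min, max);
        return 0;
    }

    *result = val;
    return 1;
}
```

A caller such as pg_dump's -Z handling could then report, say, "compression level must be an integer in range 0..9", which is the single generic message shape suggested upthread.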
[ { "msg_contents": "This is for design review. I have a patch (WIP) for Approach 1, and if\nthis discussion starts to converge on that approach I will polish and\npost it.\n\nLet's start at the beginning: why do we have two strategies -- hash\nand sort -- for aggregating data? The two are more similar than they\nfirst appear. A partitioned hash strategy writes randomly among the\npartitions, and later reads the partitions sequentially; a sort will\nwrite sorted runs sequentially, but then read the among the runs\nrandomly during the merge phase. A hash is a convenient small\nrepresentation of the data that is cheaper to operate on; sort uses\nabbreviated keys for the same reason.\n\nHash offers:\n\n* Data is aggregated on-the-fly, effectively \"compressing\" the amount\n of data that needs to go to disk. This is particularly important\n when the data contains skewed groups (see below).\n\n* Can output some groups after the first pass of the input data even\n if other groups spilled.\n\n* Some data types only support hashing; not sorting.\n\nSort+Group offers:\n\n* Only one group is accumulating at once, so if the transition state\n grows (like with ARRAY_AGG), it minimizes the memory needed.\n\n* The input may already happen to be sorted.\n\n* Some data types only support sorting; not hashing.\n\nCurrently, Hash Aggregation is only chosen if the optimizer believes\nthat all the groups (and their transition states) fit in\nmemory. Unfortunately, if the optimizer is wrong (often the case if the\ninput is not a base table), the hash table will\nkeep growing beyond work_mem, potentially bringing the entire system\nto OOM. 
This patch fixes that problem by extending the Hash\nAggregation strategy to spill to disk when needed.\n\nPrevious discussions:\n\n\nhttps://www.postgresql.org/message-id/1407706010.6623.16.camel@jeff-desktop\n\nhttps://www.postgresql.org/message-id/1419326161.24895.13.camel%40jeff-desktop\n\nhttps://www.postgresql.org/message-id/87be3bd5-6b13-d76e-5618-6db0a4db584d%40iki.fi\n\nA lot was discussed, which I will try to summarize and address here.\n\nDigression: Skewed Groups:\n\nImagine the input tuples have the following grouping keys:\n\n 0, 1, 0, 2, 0, 3, 0, 4, ..., 0, N-1, 0, N\n\nGroup 0 is a skew group because it consists of 50% of all tuples in\nthe table, whereas every other group has a single tuple. If the\nalgorithm is able to keep group 0 in memory the whole time until\nfinalized, that means that it doesn't have to spill any group-0\ntuples. In this example, that would amount to a 50% savings, and is a\nmajor advantage of Hash Aggregation versus Sort+Group.\n\nHigh-level approaches:\n\n1. When the in-memory hash table fills, keep existing entries in the\nhash table, and spill the raw tuples for all new groups in a\npartitioned fashion. When all input tuples are read, finalize groups\nin memory and emit. Now that the in-memory hash table is cleared (and\nmemory context reset), process a spill file the same as the original\ninput, but this time with a fraction of the group cardinality.\n\n2. When the in-memory hash table fills, partition the hash space, and\nevict the groups from all partitions except one by writing out their\npartial aggregate states to disk. Any input tuples belonging to an\nevicted partition get spilled to disk. When the input is read\nentirely, finalize the groups remaining in memory and emit. Now that\nthe in-memory hash table is cleared, process the next partition by\nloading its partial states into the hash table, and then processing\nits spilled tuples.\n\n3. 
Use some kind of hybrid[1][2] of hashing and sorting.\n\nEvaluation of approaches:\n\nApproach 1 is a nice incremental improvement on today's code. The\nfinal patch may be around 1KLOC. It's a single kind of on-disk data\n(spilled tuples), and a single algorithm (hashing). It also handles\nskewed groups well because the skewed groups are likely to be\nencountered before the hash table fills up the first time, and\ntherefore will stay in memory.\n\nApproach 2 is nice because it resembles the approach of Hash Join, and\nit can determine whether a tuple should be spilled without a hash\nlookup. Unfortunately, those upsides are fairly mild, and it has\nsignificant downsides:\n\n* It doesn't handle skew values well because it's likely to evict\n them.\n\n* If we leave part of the hash table in memory, it's difficult to\n ensure that we will be able to actually use the space freed by\n eviction, because the freed memory may be fragmented. That could\n force us to evict the entire in-memory hash table as soon as we\n partition, reducing a lot of the benefit of hashing.\n\n* It requires eviction for the algorithm to work. That may be\n necessary for handling cases like ARRAY_AGG (see below) anyway, but\n this approach constrains the specifics of eviction.\n\nApproach 3 is interesting because it unifies the two approaches and\ncan get some of the benefits of both. It's only a single path, so it\navoids planner mistakes. I really like this idea and it's possible we\nwill end up with approach 3. However:\n\n* It requires that all data types support sorting, or that we punt\n somehow.\n\n* Right now we are in a weird state because hash aggregation cheats,\n so it's difficult to evaluate whether Approach 3 is moving us in the\n right direction because we have no other correct implementation to\n compare against.
Even if Approach 3 is where we end up, it seems\n like we should fix hash aggregation as a stepping stone first.\n\n* It means we have a hash table and sort running concurrently, each\n using memory. Andres said this might not be a problem[3], but I'm\n not convinced that the problem is zero. If you use small work_mem\n for the write phase of sorting, you'll end up with a lot of runs to\n merge later and that has some kind of cost.\n\n* The simplicity might start to evaporate when we consider grouping\n sets and eviction strategy.\n\nMain topics to consider:\n\nARRAY_AGG:\n\nSome aggregates, like ARRAY_AGG, have a transition state that grows\nproportionally with the group size. In other words, it is not a\nsummary like COUNT or AVG, it contains all of the input data in a new\nform.\n\nThese aggregates are not a good candidate for hash aggregation. Hash\naggregation is about keeping many transition states running in\nparallel, which is just a bad fit for large transition states. Sorting\nis better because it advances one transition state at a time. We could:\n\n* Let ARRAY_AGG continue to exceed work_mem like today.\n\n* Block or pessimize use of hash aggregation for such aggregates.\n\n* Evict groups from the hash table when it becomes too large. This\n requires the ability to serialize and deserialize transition states,\n and some approaches here might also need combine_func\n specified. These requirements seem reasonable, but we still need\n some answer of what to do for aggregates that grow like ARRAY_AGG\n but don't have the required serialfunc, deserialfunc, or\n combine_func.\n\nGROUPING SETS:\n\nWith grouping sets, there are multiple hash tables and each hash table\nhas it's own hash function, so that makes partitioning more\ncomplex. In Approach 1, that means we need to either (a) not partition\nthe spilled tuples; or (b) have a different set of partitions for each\nhash table and spill the same tuple multiple times. 
In Approach 2, we\nwould be required to partition each hash table separately and spill\ntuples multiple times. In Approach 3 (depending on the exact approach\nbut taking a guess here) we would need to add a set of phases (one\nextra phase for each hash table) for spilled tuples.\n\nMEMORY TRACKING:\n\nI have a patch to track the total allocated memory by\nincrementing/decrementing it when blocks are malloc'd/free'd. This\ndoesn't do bookkeeping for each chunk, only each block. Previously,\nRobert Haas raised some concerns[4] about performance, which were\nmitigated[5] but perhaps not entirely eliminated (but did become\nelusive).\n\nThe only alternative is estimation, which is ugly and seems like a bad\nidea. Memory usage isn't just driven by inputs, it's also driven by\npatterns of use. Misestimates in the planner are fine (within reason)\nbecause we don't have any other choice, and a small-factor misestimate\nmight not change the plan anyway. But in the executor, a small-factor\nmisestimate seems like it's just not doing the job. If a user found\nthat hash aggregation was using 3X work_mem, and my only explanation\nis \"well, it's just an estimate\", I would be pretty embarrassed and\nthe user would likely lose confidence in the feature. I don't mean\nthat we must track memory perfectly everywhere, but using an estimate\nseems like a mediocre improvement of the current state.\n\nWe should proceed with memory context tracking and try to eliminate or\nmitigate performance concerns. I would not like to make any hurculean\neffort as a part of the hash aggregation work though; I think it's\nbasically just something a memory manager in a database system should\nhave supported all along. I think we will find other uses for it as\ntime goes on. 
We have more and more things happening in the executor\nand having a cheap way to check \"how much memory is this thing using?\"\nseems very likely to be useful.\n\nOther points:\n\n* Someone brought up the idea of using logtapes.c instead of writing\n separate files for each partition. That seems reasonable, but it's\n using logtapes.c slightly outside of its intended purpose. Also,\n it's awkward to need to specify the number of tapes up-front. Worth\n experimenting with to see if it's a win.\n\n* Tomas did some experiments regarding the number of batches to choose\n and how to choose them. It seems like there's room for improvement\n over ths simple calculation I'm doing now.\n\n* A lot of discussion about a smart eviction strategy. I don't see\n strong evidence that it's worth the complexity at this time. The\n smarter we try to be, the more bookkeeping and memory fragmentation\n problems we will have. If we evict something, we should probably\n evict the whole hash table or some large part of it.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://postgr.es/m/20180604185205.epue25jzpavokupf%40alap3.anarazel.de\n[2] \nhttps://postgr.es/m/message-id/CAGTBQpa__-NP7%3DkKwze_enkqw18vodRxKkOmNhxAPzqkruc-8g%40mail.gmail.com\n[3] \nhttps://www.postgresql.org/message-id/20180605175209.vavuqe4idovcpeie%40alap3.anarazel.de\n[4] \nhttps://www.postgresql.org/message-id/CA%2BTgmobnu7XEn1gRdXnFo37P79bF%3DqLt46%3D37ajP3Cro9dBRaA%40mail.gmail.com\n[5] \nhttps://www.postgresql.org/message-id/1413422787.18615.18.camel%40jeff-desktop\n\n\n\n\n", "msg_date": "Mon, 01 Jul 2019 12:13:53 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi Jeff,\n\nOn Mon, Jul 01, 2019 at 12:13:53PM -0700, Jeff Davis wrote:\n>This is for design review. 
I have a patch (WIP) for Approach 1, and if\n>this discussion starts to converge on that approach I will polish and\n>post it.\n>\n\nThanks for working on this.\n\n>Let's start at the beginning: why do we have two strategies -- hash\n>and sort -- for aggregating data? The two are more similar than they\n>first appear. A partitioned hash strategy writes randomly among the\n>partitions, and later reads the partitions sequentially; a sort will\n>write sorted runs sequentially, but then read the among the runs\n>randomly during the merge phase. A hash is a convenient small\n>representation of the data that is cheaper to operate on; sort uses\n>abbreviated keys for the same reason.\n>\n\nWhat does \"partitioned hash strategy\" do? It's probably explained in one\nof the historical discussions, but I'm not sure which one. I assume it\nsimply hashes the group keys and uses that to partition the data, and then\npassing it to hash aggregate.\n\n>Hash offers:\n>\n>* Data is aggregated on-the-fly, effectively \"compressing\" the amount\n> of data that needs to go to disk. This is particularly important\n> when the data contains skewed groups (see below).\n>\n>* Can output some groups after the first pass of the input data even\n> if other groups spilled.\n>\n>* Some data types only support hashing; not sorting.\n>\n>Sort+Group offers:\n>\n>* Only one group is accumulating at once, so if the transition state\n> grows (like with ARRAY_AGG), it minimizes the memory needed.\n>\n>* The input may already happen to be sorted.\n>\n>* Some data types only support sorting; not hashing.\n>\n>Currently, Hash Aggregation is only chosen if the optimizer believes\n>that all the groups (and their transition states) fit in\n>memory. Unfortunately, if the optimizer is wrong (often the case if the\n>input is not a base table), the hash table will\n>keep growing beyond work_mem, potentially bringing the entire system\n>to OOM. 
This patch fixes that problem by extending the Hash\n>Aggregation strategy to spill to disk when needed.\n>\n\nOK, makes sense.\n\n>Previous discussions:\n>\n>\n>https://www.postgresql.org/message-id/1407706010.6623.16.camel@jeff-desktop\n>\n>https://www.postgresql.org/message-id/1419326161.24895.13.camel%40jeff-desktop\n>\n>https://www.postgresql.org/message-id/87be3bd5-6b13-d76e-5618-6db0a4db584d%40iki.fi\n>\n>A lot was discussed, which I will try to summarize and address here.\n>\n>Digression: Skewed Groups:\n>\n>Imagine the input tuples have the following grouping keys:\n>\n> 0, 1, 0, 2, 0, 3, 0, 4, ..., 0, N-1, 0, N\n>\n>Group 0 is a skew group because it consists of 50% of all tuples in\n>the table, whereas every other group has a single tuple. If the\n>algorithm is able to keep group 0 in memory the whole time until\n>finalized, that means that it doesn't have to spill any group-0\n>tuples. In this example, that would amount to a 50% savings, and is a\n>major advantage of Hash Aggregation versus Sort+Group.\n>\n\nRight. I agree efficiently handling skew is important and may be crucial\nfor achieving good performance.\n\n>High-level approaches:\n>\n>1. When the in-memory hash table fills, keep existing entries in the\n>hash table, and spill the raw tuples for all new groups in a\n>partitioned fashion. When all input tuples are read, finalize groups\n>in memory and emit. Now that the in-memory hash table is cleared (and\n>memory context reset), process a spill file the same as the original\n>input, but this time with a fraction of the group cardinality.\n>\n>2. When the in-memory hash table fills, partition the hash space, and\n>evict the groups from all partitions except one by writing out their\n>partial aggregate states to disk. Any input tuples belonging to an\n>evicted partition get spilled to disk. When the input is read\n>entirely, finalize the groups remaining in memory and emit. 
Now that\n>the in-memory hash table is cleared, process the next partition by\n>loading its partial states into the hash table, and then processing\n>its spilled tuples.\n>\n>3. Use some kind of hybrid[1][2] of hashing and sorting.\n>\n\nUnfortunately the second link does not work :-(\n\n>Evaluation of approaches:\n>\n>Approach 1 is a nice incremental improvement on today's code. The\n>final patch may be around 1KLOC. It's a single kind of on-disk data\n>(spilled tuples), and a single algorithm (hashing). It also handles\n>skewed groups well because the skewed groups are likely to be\n>encountered before the hash table fills up the first time, and\n>therefore will stay in memory.\n>\n\nI'm not going to block Approach 1, although I'd really like to see\nsomething that helps with array_agg.\n\n>Approach 2 is nice because it resembles the approach of Hash Join, and\n>it can determine whether a tuple should be spilled without a hash\n>lookup. Unfortunately, those upsides are fairly mild, and it has\n>significant downsides:\n>\n>* It doesn't handle skew values well because it's likely to evict\n> them.\n>\n>* If we leave part of the hash table in memory, it's difficult to\n> ensure that we will be able to actually use the space freed by\n> eviction, because the freed memory may be fragmented. That could\n> force us to evict the entire in-memory hash table as soon as we\n> partition, reducing a lot of the benefit of hashing.\n>\n\nYeah, and it may not work well with the memory accounting if we only track\nthe size of allocated blocks, not chunks (because pfree likely won't free\nthe blocks).\n\n>* It requires eviction for the algorithm to work. That may be\n> necessary for handling cases like ARRAY_AGG (see below) anyway, but\n> this approach constrains the specifics of eviction.\n>\n>Approach 3 is interesting because it unifies the two approaches and\n>can get some of the benefits of both. It's only a single path, so it\n>avoids planner mistakes.
I really like this idea and it's possible we\n>will end up with approach 3. However:\n>\n>* It requires that all data types support sorting, or that we punt\n> somehow.\n>\n>* Right now we are in a weird state because hash aggregation cheats,\n> so it's difficult to evaluate whether Approach 3 is moving us in the\n> right direction because we have no other correct implementation to\n> compare against. Even if Approach 3 is where we end up, it seems\n> like we should fix hash aggregation as a stepping stone first.\n>\n\nAren't all three approaches a way to \"fix\" hash aggregate? In any case,\nit's certainly reasonable to make incremental changes. The question is\nwhether \"approach 1\" is sensible step towards some form of \"approach 3\"\n\n\n>* It means we have a hash table and sort running concurrently, each\n> using memory. Andres said this might not be a problem[3], but I'm\n> not convinced that the problem is zero. If you use small work_mem\n> for the write phase of sorting, you'll end up with a lot of runs to\n> merge later and that has some kind of cost.\n>\n\nWhy would we need to do both concurrently? I thought we'd empty the hash\ntable before doing the sort, no?\n\n>* The simplicity might start to evaporate when we consider grouping\n> sets and eviction strategy.\n>\n\nHmm, yeah :-/\n\n>Main topics to consider:\n>\n>ARRAY_AGG:\n>\n>Some aggregates, like ARRAY_AGG, have a transition state that grows\n>proportionally with the group size. In other words, it is not a\n>summary like COUNT or AVG, it contains all of the input data in a new\n>form.\n>\n\nStrictly speaking the state may grow even for count/avg aggregates, e.g.\nfor numeric types, but it's far less serious than array_agg etc.\n\n>These aggregates are not a good candidate for hash aggregation. Hash\n>aggregation is about keeping many transition states running in\n>parallel, which is just a bad fit for large transition states. Sorting\n>is better because it advances one transition state at a time. 
We could:\n>\n>* Let ARRAY_AGG continue to exceed work_mem like today.\n>\n>* Block or pessimize use of hash aggregation for such aggregates.\n>\n>* Evict groups from the hash table when it becomes too large. This\n> requires the ability to serialize and deserialize transition states,\n> and some approaches here might also need combine_func\n> specified. These requirements seem reasonable, but we still need\n> some answer of what to do for aggregates that grow like ARRAY_AGG\n> but don't have the required serialfunc, deserialfunc, or\n> combine_func.\n>\n\nDo we actually need to handle that case? How many such aggregates are\nthere? I think it's OK to just ignore that case (and keep doing what we do\nnow), and require serial/deserial functions for anything better.\n\n>GROUPING SETS:\n>\n>With grouping sets, there are multiple hash tables and each hash table\n>has its own hash function, so that makes partitioning more\n>complex. In Approach 1, that means we need to either (a) not partition\n>the spilled tuples; or (b) have a different set of partitions for each\n>hash table and spill the same tuple multiple times. In Approach 2, we\n>would be required to partition each hash table separately and spill\n>tuples multiple times. In Approach 3 (depending on the exact approach\n>but taking a guess here) we would need to add a set of phases (one\n>extra phase for each hash table) for spilled tuples.\n>\n\nNo thoughts about this yet.\n\n>MEMORY TRACKING:\n>\n>I have a patch to track the total allocated memory by\n>incrementing/decrementing it when blocks are malloc'd/free'd. This\n>doesn't do bookkeeping for each chunk, only each block. Previously,\n>Robert Haas raised some concerns[4] about performance, which were\n>mitigated[5] but perhaps not entirely eliminated (but did become\n>elusive).\n>\n>The only alternative is estimation, which is ugly and seems like a bad\n>idea. Memory usage isn't just driven by inputs, it's also driven by\n>patterns of use. 
Misestimates in the planner are fine (within reason)\n>because we don't have any other choice, and a small-factor misestimate\n>might not change the plan anyway. But in the executor, a small-factor\n>misestimate seems like it's just not doing the job. If a user found\n>that hash aggregation was using 3X work_mem, and my only explanation\n>is \"well, it's just an estimate\", I would be pretty embarrassed and\n>the user would likely lose confidence in the feature. I don't mean\n>that we must track memory perfectly everywhere, but using an estimate\n>seems like a mediocre improvement of the current state.\n\nI agree estimates are not the right tool here.\n\n>\n>We should proceed with memory context tracking and try to eliminate or\n>mitigate performance concerns. I would not like to make any herculean\n>effort as a part of the hash aggregation work though; I think it's\n>basically just something a memory manager in a database system should\n>have supported all along. I think we will find other uses for it as\n>time goes on. We have more and more things happening in the executor\n>and having a cheap way to check \"how much memory is this thing using?\"\n>seems very likely to be useful.\n>\n\nIMO we should just use the cheapest memory accounting (tracking the amount\nof memory allocated for blocks). I agree it's a feature we need, I don't\nthink we can devise anything cheaper than this.\n\n>Other points:\n>\n>* Someone brought up the idea of using logtapes.c instead of writing\n> separate files for each partition. That seems reasonable, but it's\n> using logtapes.c slightly outside of its intended purpose. Also,\n> it's awkward to need to specify the number of tapes up-front. Worth\n> experimenting with to see if it's a win.\n>\n>* Tomas did some experiments regarding the number of batches to choose\n> and how to choose them. It seems like there's room for improvement\n> over the simple calculation I'm doing now.\n>\n\nMe? 
I don't recall such benchmarks, but maybe I did. But I think we'll\nneed to repeat those with the new patches etc. I think the question is\nwhether we see this as an emergency solution - in that case I wouldn't\nobsess about getting the best possible parameters.\n\n>* A lot of discussion about a smart eviction strategy. I don't see\n> strong evidence that it's worth the complexity at this time. The\n> smarter we try to be, the more bookkeeping and memory fragmentation\n> problems we will have. If we evict something, we should probably\n> evict the whole hash table or some large part of it.\n>\n\nMaybe. For each \"smart\" eviction strategy there is a (trivial) example\nof data on which it performs poorly.\n\nI think it's the same thing as with the number of partitions - if we\nconsider this to be an emergency solution, it's OK if the performance is\nnot entirely perfect when it kicks in.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 3 Jul 2019 02:17:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, 2019-07-01 at 12:13 -0700, Jeff Davis wrote:\n> This is for design review. I have a patch (WIP) for Approach 1, and\n> if\n> this discussion starts to converge on that approach I will polish and\n> post it.\n\nWIP patch attached (based on 9a81c9fa); targeting September CF.\n\nNot intended for detailed review yet, but it seems to work in enough\ncases (including grouping sets and JIT) to be a good proof-of-concept\nfor the algorithm and its complexity.\n\nInitial performance numbers put it at 2X slower than sort for grouping\n10M distinct integers. 
There are quite a few optimizations I haven't\ntried yet and quite a few tunables I haven't tuned yet, so hopefully I\ncan close the gap a bit for the small-groups case.\n\nI will offer more details soon when I have more confidence in the\nnumbers.\n\nIt does not attempt to spill ARRAY_AGG at all yet.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 03 Jul 2019 18:07:46 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2019-07-03 at 02:17 +0200, Tomas Vondra wrote:\n> What does \"partitioned hash strategy\" do? It's probably explained in\n> one\n> of the historical discussions, but I'm not sure which one. I assume\n> it\n> simply hashes the group keys and uses that to partition the data, and\n> then\n> passing it to hash aggregate.\n\nYes. When spilling, it is cheap to partition on the hash value at the\nsame time, which dramatically reduces the need to spill multiple times.\nPrevious discussions:\n\n\n> Unfortunately the second link does not work :-(\n\nIt's supposed to be:\n\n\nhttps://www.postgresql.org/message-id/CAGTBQpa__-NP7%3DkKwze_enkqw18vodRxKkOmNhxAPzqkruc-8g%40mail.gmail.com\n\n\n> I'm not going to block Approach 1, althought I'd really like to see\n> something that helps with array_agg.\n\nI have a WIP patch that I just posted. It doesn't yet work with\nARRAY_AGG, but I think it can be made to work by evicting the entire\nhash table, serializing the transition states, and then later combining\nthem.\n\n> Aren't all three approaches a way to \"fix\" hash aggregate? In any\n> case,\n> it's certainly reasonable to make incremental changes. The question\n> is\n> whether \"approach 1\" is sensible step towards some form of \"approach\n> 3\"\n\nDisk-based hashing certainly seems like a reasonable algorithm on paper\nthat has some potential advantages over sorting. 
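To expand on "cheap to partition on the hash value": the hash is already computed for the (failed) hash table lookup, so the partition number can be taken from spare bits of that same hash. A hypothetical sketch, with invented names rather than anything from the patch:

```python
# Hypothetical sketch: derive the spill partition from the group hash we
# already computed for the hash table lookup, so spilling adds no extra
# hashing. Deeper recursion levels consume higher bits of the same hash,
# so re-spilling a partition re-partitions its data instead of sending
# everything back to one child.

N_PARTITIONS = 32    # a power of two, so a mask extracts the partition
PARTITION_BITS = 5   # log2(N_PARTITIONS)

def spill_partition(hashvalue, depth=0):
    return (hashvalue >> (depth * PARTITION_BITS)) & (N_PARTITIONS - 1)
```

Every tuple of a given group lands in the same partition at a given depth, which is what lets each spill file later be processed independently.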
It certainly seems\nsensible to me that we explore the disk-based hashing strategy first,\nand then we would at least know what we are missing (if anything) by\ngoing with the hybrid approach later.\n\nThere's also a fair amount of design space to explore in the hybrid\nstrategy. That could take a while to converge, especially if we don't\nhave anything in place to compare against.\n\n> > * It means we have a hash table and sort running concurrently, each\n> > using memory. Andres said this might not be a problem[3], but I'm\n> > not convinced that the problem is zero. If you use small work_mem\n> > for the write phase of sorting, you'll end up with a lot of runs\n> > to\n> > merge later and that has some kind of cost.\n> > \n> \n> Why would we need to do both concurrently? I thought we'd empty the\n> hash\n> table before doing the sort, no?\n\nSo you are saying we spill the tuples into a tuplestore, then feed the\ntuplestore through a tuplesort? Seems inefficient, but I guess we can.\n\n> Do we actually need to handle that case? How many such aggregates are\n> there? I think it's OK to just ignore that case (and keep doing what\n> we do\n> now), and require serial/deserial functions for anything better.\n\nPunting on a few cases is fine with me, if the user has a way to fix\nit.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 03 Jul 2019 19:03:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Jul 03, 2019 at 07:03:06PM -0700, Jeff Davis wrote:\n>On Wed, 2019-07-03 at 02:17 +0200, Tomas Vondra wrote:\n>> What does \"partitioned hash strategy\" do? It's probably explained in\n>> one\n>> of the historical discussions, but I'm not sure which one. I assume\n>> it\n>> simply hashes the group keys and uses that to partition the data, and\n>> then\n>> passing it to hash aggregate.\n>\n>Yes. 
When spilling, it is cheap to partition on the hash value at the\n>same time, which dramatically reduces the need to spill multiple times.\n>Previous discussions:\n>\n>\n>> Unfortunately the second link does not work :-(\n>\n>It's supposed to be:\n>\n>\n>https://www.postgresql.org/message-id/CAGTBQpa__-NP7%3DkKwze_enkqw18vodRxKkOmNhxAPzqkruc-8g%40mail.gmail.com\n>\n>\n>> I'm not going to block Approach 1, althought I'd really like to see\n>> something that helps with array_agg.\n>\n>I have a WIP patch that I just posted. It doesn't yet work with\n>ARRAY_AGG, but I think it can be made to work by evicting the entire\n>hash table, serializing the transition states, and then later combining\n>them.\n>\n>> Aren't all three approaches a way to \"fix\" hash aggregate? In any\n>> case,\n>> it's certainly reasonable to make incremental changes. The question\n>> is\n>> whether \"approach 1\" is sensible step towards some form of \"approach\n>> 3\"\n>\n>Disk-based hashing certainly seems like a reasonable algorithm on paper\n>that has some potential advantages over sorting. It certainly seems\n>sensible to me that we explore the disk-based hashing strategy first,\n>and then we would at least know what we are missing (if anything) by\n>going with the hybrid approach later.\n>\n>There's also a fair amount of design space to explore in the hybrid\n>strategy. That could take a while to converge, especially if we don't\n>have anything in place to compare against.\n>\n\nMakes sense. I haven't thought about how the hybrid approach would be\nimplemented very much, so I can't quite judge how complicated would it be\nto extend \"approach 1\" later. But if you think it's a sensible first step,\nI trust you. And I certainly agree we need something to compare the other\napproaches against.\n\n\n>> > * It means we have a hash table and sort running concurrently, each\n>> > using memory. Andres said this might not be a problem[3], but I'm\n>> > not convinced that the problem is zero. 
If you use small work_mem\n>> > for the write phase of sorting, you'll end up with a lot of runs\n>> > to\n>> > merge later and that has some kind of cost.\n>> >\n>>\n>> Why would we need to do both concurrently? I thought we'd empty the\n>> hash\n>> table before doing the sort, no?\n>\n>So you are saying we spill the tuples into a tuplestore, then feed the\n>tuplestore through a tuplesort? Seems inefficient, but I guess we can.\n>\n\nI think the question is whether we see this as \"emergency fix\" (for cases\nthat are misestimated and could/would fail with OOM at runtime), or as\nsomething that is meant to make \"hash agg\" more widely applicable.\n\nI personally see it as an emergency fix, in which cases it's perfectly\nfine if it's not 100% efficient, assuming it kicks in only rarely.\nEffectively, we're betting on hash agg, and from time to time we lose.\n\nBut even if we see it as a general optimization technique it does not have\nto be perfectly efficient, as long as it's properly costed (so the planner\nonly uses it when appropriate).\n\nIf we have a better solution (in terms of efficiency, code complexity,\netc.) then sure - let's use that. But considering we've started this\ndiscussion in ~2015 and we still don't have anything, I wouldn't hold my\nbreath. Let's do something good enough, and maybe improve it later.\n\n>> Do we actually need to handle that case? How many such aggregates are\n>> there? 
I think it's OK to just ignore that case (and keep doing what\n>> we do\n>> now), and require serial/deserial functions for anything better.\n>\n>Punting on a few cases is fine with me, if the user has a way to fix\n>it.\n>\n\n+1 to doing that\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 4 Jul 2019 16:24:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Jul 03, 2019 at 07:03:06PM -0700, Jeff Davis wrote:\n>On Wed, 2019-07-03 at 02:17 +0200, Tomas Vondra wrote:\n>> What does \"partitioned hash strategy\" do? It's probably explained in\n>> one\n>> of the historical discussions, but I'm not sure which one. I assume\n>> it\n>> simply hashes the group keys and uses that to partition the data, and\n>> then\n>> passing it to hash aggregate.\n>\n>Yes. When spilling, it is cheap to partition on the hash value at the\n>same time, which dramatically reduces the need to spill multiple times.\n>Previous discussions:\n>\n>\n>> Unfortunately the second link does not work :-(\n>\n>It's supposed to be:\n>\n>\n>https://www.postgresql.org/message-id/CAGTBQpa__-NP7%3DkKwze_enkqw18vodRxKkOmNhxAPzqkruc-8g%40mail.gmail.com\n>\n>\n>> I'm not going to block Approach 1, althought I'd really like to see\n>> something that helps with array_agg.\n>\n>I have a WIP patch that I just posted. It doesn't yet work with\n>ARRAY_AGG, but I think it can be made to work by evicting the entire\n>hash table, serializing the transition states, and then later combining\n>them.\n>\n>> Aren't all three approaches a way to \"fix\" hash aggregate? In any\n>> case,\n>> it's certainly reasonable to make incremental changes. 
The question\n>> is\n>> whether \"approach 1\" is sensible step towards some form of \"approach\n>> 3\"\n>\n>Disk-based hashing certainly seems like a reasonable algorithm on paper\n>that has some potential advantages over sorting. It certainly seems\n>sensible to me that we explore the disk-based hashing strategy first,\n>and then we would at least know what we are missing (if anything) by\n>going with the hybrid approach later.\n>\n>There's also a fair amount of design space to explore in the hybrid\n>strategy. That could take a while to converge, especially if we don't\n>have anything in place to compare against.\n>\n\nMakes sense. I haven't thought about how the hybrid approach would be\nimplemented very much, so I can't quite judge how complicated would it be\nto extend \"approach 1\" later. But if you think it's a sensible first step,\nI trust you. And I certainly agree we need something to compare the other\napproaches against.\n\n\n>> > * It means we have a hash table and sort running concurrently, each\n>> > using memory. Andres said this might not be a problem[3], but I'm\n>> > not convinced that the problem is zero. If you use small work_mem\n>> > for the write phase of sorting, you'll end up with a lot of runs\n>> > to\n>> > merge later and that has some kind of cost.\n>> >\n>>\n>> Why would we need to do both concurrently? I thought we'd empty the\n>> hash\n>> table before doing the sort, no?\n>\n>So you are saying we spill the tuples into a tuplestore, then feed the\n>tuplestore through a tuplesort? 
Seems inefficient, but I guess we can.\n>\n\nI think the question is whether we see this as \"emergency fix\" (for cases\nthat are misestimated and could/would fail with OOM at runtime), or as\nsomething that is meant to make \"hash agg\" more widely applicable.\n\nI personally see it as an emergency fix, in which cases it's perfectly\nfine if it's not 100% efficient, assuming it kicks in only rarely.\nEffectively, we're betting on hash agg, and from time to time we lose.\n\nBut even if we see it as a general optimization technique it does not have\nto be perfectly efficient, as long as it's properly costed (so the planner\nonly uses it when appropriate).\n\nIf we have a better solution (in terms of efficiency, code complexity,\netc.) then sure - let's use that. But considering we've started this\ndiscussion in ~2015 and we still don't have anything, I wouldn't hold my\nbreath. Let's do something good enough, and maybe improve it later.\n\n>> Do we actually need to handle that case? How many such aggregates are\n>> there? I think it's OK to just ignore that case (and keep doing what\n>> we do\n>> now), and require serial/deserial functions for anything better.\n>\n>Punting on a few cases is fine with me, if the user has a way to fix\n>it.\n>\n\n+1 to doing that\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 11 Jul 2019 17:55:58 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, 2019-07-11 at 17:55 +0200, Tomas Vondra wrote:\n> Makes sense. I haven't thought about how the hybrid approach would be\n> implemented very much, so I can't quite judge how complicated would\n> it be\n> to extend \"approach 1\" later. But if you think it's a sensible first\n> step,\n> I trust you. 
And I certainly agree we need something to compare the\n> other\n> approaches against.\n\nIs this a duplicate of your previous email?\n\nI'm slightly confused but I will use the opportunity to put out another\nWIP patch. The patch could use a few rounds of cleanup and quality\nwork, but the funcionality is there and the performance seems\nreasonable.\n\nI rebased on master and fixed a few bugs, and most importantly, added\ntests.\n\nIt seems to be working with grouping sets fine. It will take a little\nlonger to get good performance numbers, but even for group size of one,\nI'm seeing HashAgg get close to Sort+Group in some cases.\n\nYou are right that the missed lookups appear to be costly, at least\nwhen the data all fits in system memory. I think it's the cache misses,\nbecause sometimes reducing work_mem improves performance. I'll try\ntuning the number of buckets for the hash table and see if that helps.\nIf not, then the performance still seems pretty good to me.\n\nOf course, HashAgg can beat sort for larger group sizes, but I'll try\nto gather some more data on the cross-over point.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 11 Jul 2019 18:06:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Jul 11, 2019 at 06:06:33PM -0700, Jeff Davis wrote:\n>On Thu, 2019-07-11 at 17:55 +0200, Tomas Vondra wrote:\n>> Makes sense. I haven't thought about how the hybrid approach would be\n>> implemented very much, so I can't quite judge how complicated would\n>> it be\n>> to extend \"approach 1\" later. But if you think it's a sensible first\n>> step,\n>> I trust you. And I certainly agree we need something to compare the\n>> other\n>> approaches against.\n>\n>Is this a duplicate of your previous email?\n>\n\nYes. I don't know how I managed to send it again. Sorry.\n\n>I'm slightly confused but I will use the opportunity to put out another\n>WIP patch. 
The patch could use a few rounds of cleanup and quality\n>work, but the funcionality is there and the performance seems\n>reasonable.\n>\n>I rebased on master and fixed a few bugs, and most importantly, added\n>tests.\n>\n>It seems to be working with grouping sets fine. It will take a little\n>longer to get good performance numbers, but even for group size of one,\n>I'm seeing HashAgg get close to Sort+Group in some cases.\n>\n\nNice! That's a very nice progress!\n\n>You are right that the missed lookups appear to be costly, at least\n>when the data all fits in system memory. I think it's the cache misses,\n>because sometimes reducing work_mem improves performance. I'll try\n>tuning the number of buckets for the hash table and see if that helps.\n>If not, then the performance still seems pretty good to me.\n>\n>Of course, HashAgg can beat sort for larger group sizes, but I'll try\n>to gather some more data on the cross-over point.\n>\n\nYes, makes sense. I think it's acceptable as long as we consider this\nduring costing (when we know in advance we'll need this) or treat it to be\nemergency measure.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 12 Jul 2019 08:59:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "> High-level approaches:\n> \n> 1. When the in-memory hash table fills, keep existing entries in the\n> hash table, and spill the raw tuples for all new groups in a\n> partitioned fashion. When all input tuples are read, finalize groups\n> in memory and emit. Now that the in-memory hash table is cleared (and\n> memory context reset), process a spill file the same as the original\n> input, but this time with a fraction of the group cardinality.\n> \n> 2. 
When the in-memory hash table fills, partition the hash space, and\n> evict the groups from all partitions except one by writing out their\n> partial aggregate states to disk. Any input tuples belonging to an\n> evicted partition get spilled to disk. When the input is read\n> entirely, finalize the groups remaining in memory and emit. Now that\n> the in-memory hash table is cleared, process the next partition by\n> loading its partial states into the hash table, and then processing\n> its spilled tuples.\n\nI'm late to the party.\n\nThese two approaches both spill the input tuples, what if the skewed\ngroups are not encountered before the hash table fills up? The spill\nfiles' size and disk I/O could be downsides.\n\nGreenplum spills all the groups by writing the partial aggregate states,\nreset the memory context, process incoming tuples and build in-memory\nhash table, then reload and combine the spilled partial states at last,\nhow does this sound?\n\n-- \nAdam Lee\n\n\n", "msg_date": "Fri, 2 Aug 2019 14:44:05 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, 2019-08-02 at 14:44 +0800, Adam Lee wrote:\n> I'm late to the party.\n\nYou are welcome to join any time!\n\n> These two approaches both spill the input tuples, what if the skewed\n> groups are not encountered before the hash table fills up? The spill\n> files' size and disk I/O could be downsides.\n\nLet's say the worst case is that we encounter 10 million groups of size\none first; just enough to fill up memory. Then, we encounter a single\nadditional group of size 20 million, and need to write out all of those\n20 million raw tuples. 
That's still not worse than Sort+GroupAgg which\nwould need to write out all 30 million raw tuples (in practice Sort is\npretty fast so may still win in some cases, but not by any huge\namount).\n\n> Greenplum spills all the groups by writing the partial aggregate\n> states,\n> reset the memory context, process incoming tuples and build in-memory\n> hash table, then reload and combine the spilled partial states at\n> last,\n> how does this sound?\n\nThat can be done as an add-on to approach #1 by evicting the entire\nhash table (writing out the partial states), then resetting the memory\ncontext.\n\nIt does add to the complexity though, and would only work for the\naggregates that support serializing and combining partial states. It\nalso might be a net loss to do the extra work of initializing and\nevicting a partial state if we don't have large enough groups to\nbenefit.\n\nGiven that the worst case isn't worse than Sort+GroupAgg, I think it\nshould be left as a future optimization. That would give us time to\ntune the process to work well in a variety of cases.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 02 Aug 2019 08:11:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, Aug 02, 2019 at 08:11:19AM -0700, Jeff Davis wrote:\n>On Fri, 2019-08-02 at 14:44 +0800, Adam Lee wrote:\n>> I'm late to the party.\n>\n>You are welcome to join any time!\n>\n>> These two approaches both spill the input tuples, what if the skewed\n>> groups are not encountered before the hash table fills up? The spill\n>> files' size and disk I/O could be downsides.\n>\n>Let's say the worst case is that we encounter 10 million groups of size\n>one first; just enough to fill up memory. Then, we encounter a single\n>additional group of size 20 million, and need to write out all of those\n>20 million raw tuples. 
That's still not worse than Sort+GroupAgg which\n>would need to write out all 30 million raw tuples (in practice Sort is\n>pretty fast so may still win in some cases, but not by any huge\n>amount).\n>\n>> Greenplum spills all the groups by writing the partial aggregate\n>> states,\n>> reset the memory context, process incoming tuples and build in-memory\n>> hash table, then reload and combine the spilled partial states at\n>> last,\n>> how does this sound?\n>\n>That can be done as an add-on to approach #1 by evicting the entire\n>hash table (writing out the partial states), then resetting the memory\n>context.\n>\n>It does add to the complexity though, and would only work for the\n>aggregates that support serializing and combining partial states. It\n>also might be a net loss to do the extra work of initializing and\n>evicting a partial state if we don't have large enough groups to\n>benefit.\n>\n>Given that the worst case isn't worse than Sort+GroupAgg, I think it\n>should be left as a future optimization. 
That would give us time to\n>tune the process to work well in a variety of cases.\n>\n\n+1 to leaving that as a future optimization\n\nI think it's clear there's no perfect eviction strategy - for every\nalgorithm we came up with we can construct a data set on which it\nperforms terribly (I'm sure we could do that for the approach used by\nGreenplum, for example).\n\nSo I think it makes sense to do what Jeff proposed, and then maybe try\nimproving that in the future with a switch to different eviction\nstrategy based on some heuristics.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 2 Aug 2019 17:21:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "I started to review this patch yesterday with Melanie Plageman, so we\nrebased this patch over the current master. The main conflicts were\ndue to a simplehash patch that has been committed separately[1]. I've\nattached the rebased patch.\n\nI was playing with the code, and if one of the table's most common\nvalues isn't placed into the initial hash table it spills a whole lot\nof tuples to disk that might have been avoided if we had some way to\n'seed' the hash table with MCVs from the statistics. Seems to me that\nyou would need some way of dealing with values that are in the MCV\nlist, but ultimately don't show up in the scan. I imagine that this\nkind of optimization would most useful for aggregates on a full table\nscan.\n\nSome questions:\n\nRight now the patch always initializes 32 spill partitions. 
Have you given\nany thought into how to intelligently pick an optimal number of\npartitions yet?\n\n> That can be done as an add-on to approach #1 by evicting the entire\n> Hash table (writing out the partial states), then resetting the memory\n> Context.\n\nBy add-on approach, do you mean to say that you have something in mind\nto combine the two strategies? Or do you mean that it could be implemented\nas a separate strategy?\n\n> I think it's clear there's no perfect eviction strategy - for every\n> algorithm we came up with we can construct a data set on which it\n> performs terribly (I'm sure we could do that for the approach used by\n> Greenplum, for example).\n>\n> So I think it makes sense to do what Jeff proposed, and then maybe try\n> improving that in the future with a switch to different eviction\n> strategy based on some heuristics.\n\nI agree. It definitely feels like both spilling strategies have their\nown use case.\n\nThat said, I think it's worth mentioning that with parallel aggregates\nit might actually be more useful to spill the trans values instead,\nand have them combined in a Gather or Finalize stage.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/48abe675e1330f0c264ab2fe0d4ff23eb244f9ef.camel%40j-davis.com", "msg_date": "Wed, 28 Aug 2019 12:52:13 -0700", "msg_from": "Taylor Vesely <tvesely@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:\n> I started to review this patch yesterday with Melanie Plageman, so we\n> rebased this patch over the current master. The main conflicts were\n> due to a simplehash patch that has been committed separately[1]. 
I've\n> attached the rebased patch.\n\nGreat, thanks!\n\n> I was playing with the code, and if one of the table's most common\n> values isn't placed into the initial hash table it spills a whole lot\n> of tuples to disk that might have been avoided if we had some way to\n> 'seed' the hash table with MCVs from the statistics. Seems to me that\n> you would need some way of dealing with values that are in the MCV\n> list, but ultimately don't show up in the scan. I imagine that this\n> kind of optimization would most useful for aggregates on a full table\n> scan.\n\nInteresting idea, I didn't think of that.\n\n> Some questions:\n> \n> Right now the patch always initializes 32 spill partitions. Have you\n> given\n> any thought into how to intelligently pick an optimal number of\n> partitions yet?\n\nYes. The idea is to guess how many groups are remaining, then guess how\nmuch space they will need in memory, then divide by work_mem. I just\ndidn't get around to it yet. (Same with the costing work.)\n\n> By add-on approach, do you mean to say that you have something in\n> mind\n> to combine the two strategies? Or do you mean that it could be\n> implemented\n> as a separate strategy?\n\nIt would be an extension of the existing patch, but would add a fair\namount of complexity (dealing with partial states, etc.) and the\nbenefit would be fairly modest. We can do it later if justified.\n\n> That said, I think it's worth mentioning that with parallel\n> aggregates\n> it might actually be more useful to spill the trans values instead,\n> and have them combined in a Gather or Finalize stage.\n\nThat's a good point.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 29 Aug 2019 23:28:12 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:\n> Right now the patch always initializes 32 spill partitions. 
Have you\n> given\n> any thought into how to intelligently pick an optimal number of\n> partitions yet?\n\nAttached a new patch that addresses this.\n\n1. Divide hash table memory used by the number of groups in the hash\ntable to get the average memory used per group.\n2. Multiply by the number of groups spilled -- which I pessimistically\nestimate as the number of tuples spilled -- to get the total amount of\nmemory that we'd like to have to process all spilled tuples at once.\n3. Divide the desired amount of memory by work_mem to get the number of\npartitions we'd like to have such that each partition can be processed\nin work_mem without spilling.\n4. Apply a few sanity checks, fudge factors, and limits.\n\nUsing this runtime information should be substantially better than\nusing estimates and projections.\n\nAdditionally, I removed some branches from the common path. I think I\nstill have more work to do there.\n\nI also rebased of course, and fixed a few other things.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 27 Nov 2019 14:58:04 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Nov 27, 2019 at 02:58:04PM -0800, Jeff Davis wrote:\n>On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:\n>> Right now the patch always initializes 32 spill partitions. Have you\n>> given\n>> any thought into how to intelligently pick an optimal number of\n>> partitions yet?\n>\n>Attached a new patch that addresses this.\n>\n>1. Divide hash table memory used by the number of groups in the hash\n>table to get the average memory used per group.\n>2. Multiply by the number of groups spilled -- which I pessimistically\n>estimate as the number of tuples spilled -- to get the total amount of\n>memory that we'd like to have to process all spilled tuples at once.\n\nIsn't the \"number of tuples = number of groups\" estimate likely to be\nway too pessimistic? 
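For concreteness, the four-step sizing heuristic quoted above can be sketched as a small stand-alone function. This is a simplified model only: the function name, constants, and bounds are illustrative, not the patch's actual code.

```c
#include <stddef.h>

/* Illustrative bounds, loosely mirroring the patch's min/max partitions. */
#define MIN_SPILL_PARTITIONS 4
#define MAX_SPILL_PARTITIONS 256

/*
 * Sketch of the four steps: estimate the memory needed to process all
 * spilled groups at once, then ask how many work_mem-sized partitions
 * that requires.  Spilled tuples stand in, pessimistically, for spilled
 * groups.
 */
static int
choose_num_spill_partitions(size_t mem_used, size_t ngroups_in_table,
                            size_t ntuples_spilled, size_t work_mem_bytes)
{
    size_t      mem_per_group;
    size_t      mem_wanted;
    size_t      npartitions;

    /* 1. average memory per group in the in-memory hash table */
    mem_per_group = mem_used / (ngroups_in_table > 0 ? ngroups_in_table : 1);

    /* 2. pessimistically treat every spilled tuple as a distinct group */
    mem_wanted = mem_per_group * ntuples_spilled;

    /* 3. how many work_mem-sized chunks does that much memory need? */
    npartitions = mem_wanted / (work_mem_bytes > 0 ? work_mem_bytes : 1) + 1;

    /* 4. sanity limits */
    if (npartitions < MIN_SPILL_PARTITIONS)
        npartitions = MIN_SPILL_PARTITIONS;
    if (npartitions > MAX_SPILL_PARTITIONS)
        npartitions = MAX_SPILL_PARTITIONS;

    return (int) npartitions;
}
```

A less pessimistic variant of step 2 could scale the spilled-tuple count by the tuples-per-group ratio observed for the in-memory groups, which is essentially what the surrounding replies discuss.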
IIUC the consequence is that it pushes us to pick\nmore partitions than necessary, correct?\n\nCould we instead track how many tuples we actually consumed for the the\nin-memory groups, and then use this information to improve the estimate\nof number of groups? I mean, if we know we've consumed 1000 tuples which\ncreated 100 groups, then we know there's ~1:10 ratio.\n\n>3. Divide the desired amount of memory by work_mem to get the number of\n>partitions we'd like to have such that each partition can be processed\n>in work_mem without spilling.\n>4. Apply a few sanity checks, fudge factors, and limits.\n>\n>Using this runtime information should be substantially better than\n>using estimates and projections.\n>\n>Additionally, I removed some branches from the common path. I think I\n>still have more work to do there.\n>\n>I also rebased of course, and fixed a few other things.\n>\n\nA couple of comments based on eye-balling the patch:\n\n\n1) Shouldn't the hashagg_mem_overflow use the other GUC naming, i.e.\nmaybe it should be enable_hashagg_mem_overflow or something similar?\n\n\n2) I'm a bit puzzled by this code in ExecInterpExpr (there are multiple\nsuch blocks, this is just an example)\n\n aggstate = op->d.agg_init_trans.aggstate;\n pergroup_allaggs = aggstate->all_pergroups[op->d.agg_init_trans.setoff];\n pergroup = &pergroup_allaggs[op->d.agg_init_trans.transno];\n\n /* If transValue has not yet been initialized, do so now. */\n if (pergroup_allaggs != NULL && pergroup->noTransValue)\n { ... }\n\nHow could the (pergroup_allaggs != NULL) protect against anything? Let's\nassume the pointer really is NULL. 
Surely we'll get a segfault on the\npreceding line which does dereference it\n\n pergroup = &pergroup_allaggs[op->d.agg_init_trans.transno];\n\nOr am I missing anything?\n\n\n3) execGrouping.c\n\nA couple of functions would deserve a comment, explaining what it does.\n\n - LookupTupleHashEntryHash\n - prepare_hash_slot\n - calculate_hash\n\nAnd it's not clear to me why we should remove part of the comment before\nTupleHashTableHash.\n\n\n4) I'm not sure I agree with this reasoning that HASH_PARTITION_FACTOR\nmaking the hash tables smaller is desirable - it may be, but if that was\ngenerally the case we'd just use small hash tables all the time. It's a\nbit annoying to give user the capability to set work_mem and then kinda\noverride that.\n\n * ... Another benefit of having more, smaller partitions is that small\n * hash tables may perform better than large ones due to memory caching\n * effects.\n\n\n5) Not sure what \"directly\" means in this context?\n\n * partitions at the time we need to spill, and because this algorithm\n * shouldn't depend too directly on the internal memory needs of a\n * BufFile.\n\n#define HASH_PARTITION_MEM (HASH_MIN_PARTITIONS * BLCKSZ)\n\nDoes that mean we don't want to link to PGAlignedBlock, or what?\n\n\n6) I think we should have some protection against underflows in this\npiece of code:\n\n- this would probably deserve some protection against underflow if HASH_PARTITION_MEM gets too big\n\n if (hashagg_mem_overflow)\n aggstate->hash_mem_limit = SIZE_MAX;\n else\n aggstate->hash_mem_limit = (work_mem * 1024L) - HASH_PARTITION_MEM;\n\nAt the moment it's safe because work_mem is 64kB at least, and\nHASH_PARTITION_MEM is 32kB (4 partitions, 8kB each). 
But if we happen to\nbump HASH_MIN_PARTITIONS up, this can underflow.\n\n\n7) Shouldn't lookup_hash_entry briefly explain why/how it handles the\nmemory limit?\n\n\n8) The comment before lookup_hash_entries says:\n\n ...\n * Return false if hash table has exceeded its memory limit.\n ..\n\nBut that's clearly bogus, because that's a void function.\n\n\n9) Shouldn't the hash_finish_initial_spills calls in agg_retrieve_direct\nhave a comment, similar to the surrounding code? Might be an overkill,\nnot sure.\n\n\n10) The comment for agg_refill_hash_table says\n\n * Should only be called after all in memory hash table entries have been\n * consumed.\n\nCan we enforce that with an assert, somehow?\n\n\n11) The hash_spill_npartitions naming seems a bit confusing, because it\nseems to imply it's about the \"spill\" while in practice it just choses\nnumber of spill partitions. Maybe hash_choose_num_spill_partitions would\nbe better?\n\n\n12) It's not clear to me why we need HASH_MAX_PARTITIONS? What's the\nreasoning behind the current value (256)? Not wanting to pick too many\npartitions? 
Comment?\n\n if (npartitions > HASH_MAX_PARTITIONS)\n npartitions = HASH_MAX_PARTITIONS;\n\n\n13) As for this:\n\n /* make sure that we don't exhaust the hash bits */\n if (partition_bits + input_bits >= 32)\n partition_bits = 32 - input_bits;\n\nWe already ran into this issue (exhausting bits in a hash value) in\nhashjoin batching, we should be careful to use the same approach in both\nplaces (not the same code, just general approach).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 Nov 2019 18:46:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Nov 28, 2019 at 9:47 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Nov 27, 2019 at 02:58:04PM -0800, Jeff Davis wrote:\n> >On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:\n> >> Right now the patch always initializes 32 spill partitions. Have you\n> >> given\n> >> any thought into how to intelligently pick an optimal number of\n> >> partitions yet?\n> >\n> >Attached a new patch that addresses this.\n> >\n> >1. Divide hash table memory used by the number of groups in the hash\n> >table to get the average memory used per group.\n> >2. Multiply by the number of groups spilled -- which I pessimistically\n> >estimate as the number of tuples spilled -- to get the total amount of\n> >memory that we'd like to have to process all spilled tuples at once.\n>\n> Isn't the \"number of tuples = number of groups\" estimate likely to be\n> way too pessimistic? IIUC the consequence is that it pushes us to pick\n> more partitions than necessary, correct?\n\n\n> Could we instead track how many tuples we actually consumed for the the\n> in-memory groups, and then use this information to improve the estimate\n> of number of groups? 
I mean, if we know we've consumed 1000 tuples which\n> created 100 groups, then we know there's ~1:10 ratio.\n>\n\nWhat would the cost be of having many small partitions? Some of the\nspill files created may not be used if the estimate was pessimistic,\nbut that seems better than the alternative of re-spilling, since every\nspill writes every tuple again.\n\nAlso, number of groups = number of tuples is only for re-spilling.\nThis is a little bit unclear from the variable naming.\n\nIt looks like the parameter input_tuples passed to hash_spill_init()\nin lookup_hash_entries() is the number of groups estimated by planner.\nHowever, when reloading a spill file, if we run out of memory and\nre-spill, hash_spill_init() is passed batch->input_groups (which is\nactually set from input_ngroups which is the number of tuples in the\nspill file). So, input_tuples is groups and input_groups is\ninput_tuples. It may be helpful to rename this.\n\n\n>\n> 4) I'm not sure I agree with this reasoning that HASH_PARTITION_FACTOR\n> making the hash tables smaller is desirable - it may be, but if that was\n> generally the case we'd just use small hash tables all the time. It's a\n> bit annoying to give user the capability to set work_mem and then kinda\n> override that.\n>\n> * ... Another benefit of having more, smaller partitions is that small\n> * hash tables may perform better than large ones due to memory caching\n> * effects.\n>\n>\nSo, it looks like the HASH_PARTITION_FACTOR is only used when\nre-spilling. The initial hashtable will use work_mem.\nIt seems like the reason for using it when re-spilling is to be very\nconservative to avoid more than one re-spill and make sure each spill\nfile fits in a hashtable in memory.\nThe comment does seem to point to some other reason, though...\n\n\n>\n> 11) The hash_spill_npartitions naming seems a bit confusing, because it\n> seems to imply it's about the \"spill\" while in practice it just choses\n> number of spill partitions. 
Maybe hash_choose_num_spill_partitions would\n> be better?\n>\n\nAgreed that a name with \"choose\" or \"calculate\" as the verb would be\nmore clear.\n\n\n>\n> 12) It's not clear to me why we need HASH_MAX_PARTITIONS? What's the\n> reasoning behind the current value (256)? Not wanting to pick too many\n> partitions? Comment?\n>\n>     if (npartitions > HASH_MAX_PARTITIONS)\n>         npartitions = HASH_MAX_PARTITIONS;\n>\n>\n256 actually seems very large. hash_spill_npartitions() will be called\nfor every respill, so, HASH_MAX_PARTITIONS is not the total number of\nspill files permitted, but, actually, it is the number of respill\nfiles in a given spill (a spill set). So if you made X partitions\ninitially and every partition re-spills, now you would have (at most)\nX * 256 partitions.\nIf HASH_MAX_PARTITIONS is 256, wouldn't the metadata from the spill\nfiles take up a lot of memory at that point?\n\nMelanie & Adam Lee", "msg_date": "Wed, 4 Dec 2019 17:24:00 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, {
"msg_contents": "\nThanks very much for a great review! I've attached a new patch.\n\nThere are some significant changes in the new version also:\n\nIn the non-spilling path, removed the extra nullcheck branch in the\ncompiled evaltrans expression. When the first tuple is spilled, the\nbranch becomes necessary, so I recompile the expression using a new\nopcode that includes that branch.\n\nI also changed the read-from-spill path to use a slot with\nTTSOpsMinimalTuple (avoiding the need to make it into a virtual slot\nright away), which means I need to recompile the evaltrans expression\nfor that case, as well.\n\nI also improved the way we initialize the hash tables to use a better\nestimate for the number of groups.
You are probably right that it's not a good thing to rely\non, or at least not quite as readable, so I changed the order to put\nthe NULL check first.\n\n\n> \n> 3) execGrouping.c\n> \n> A couple of functions would deserve a comment, explaining what it\n> does.\n> \n> - LookupTupleHashEntryHash\n> - prepare_hash_slot\n> - calculate_hash\n\nDone, thank you.\n\n> And it's not clear to me why we should remove part of the comment\n> before\n> TupleHashTableHash.\n\nTrying to remember back to when I first did that, but IIRC the comment\nwas not updated from a previous change, and I was cleaning it up. I\nwill check over that again to be sure it's an improvement.\n\n> \n> 4) I'm not sure I agree with this reasoning that\n> HASH_PARTITION_FACTOR\n> making the hash tables smaller is desirable - it may be, but if that\n> was\n> generally the case we'd just use small hash tables all the time. It's\n> a\n> bit annoying to give user the capability to set work_mem and then\n> kinda\n> override that.\n\nI think adding some kind of headroom is reasonable to avoid recursively\nspilling, but perhaps it's not critical. I see this as a tuning\nquestion more than anything else. 
I don't see it as \"overriding\"\nwork_mem, but I can see where you're coming from.\n\n> 5) Not sure what \"directly\" means in this context?\n> \n> * partitions at the time we need to spill, and because this\n> algorithm\n> * shouldn't depend too directly on the internal memory needs of a\n> * BufFile.\n> \n> #define HASH_PARTITION_MEM (HASH_MIN_PARTITIONS * BLCKSZ)\n> \n> Does that mean we don't want to link to PGAlignedBlock, or what?\n\nThat's what I meant, yes, but I reworded the comment to not say that.\n\n> 6) I think we should have some protection against underflows in this\n> piece of code:\n> \n> - this would probably deserve some protection against underflow if\n> HASH_PARTITION_MEM gets too big\n> \n> if (hashagg_mem_overflow)\n> aggstate->hash_mem_limit = SIZE_MAX;\n> else\n> aggstate->hash_mem_limit = (work_mem * 1024L) -\n> HASH_PARTITION_MEM;\n> \n> At the moment it's safe because work_mem is 64kB at least, and\n> HASH_PARTITION_MEM is 32kB (4 partitions, 8kB each). But if we happen\n> to\n> bump HASH_MIN_PARTITIONS up, this can underflow.\n\nThank you, done.\n\n> 7) Shouldn't lookup_hash_entry briefly explain why/how it handles the\n> memory limit?\n\nImproved.\n\n> \n> 8) The comment before lookup_hash_entries says:\n> \n> ...\n> * Return false if hash table has exceeded its memory limit.\n> ..\n> \n> But that's clearly bogus, because that's a void function.\n\nThank you, improved comment.\n\n> 9) Shouldn't the hash_finish_initial_spills calls in\n> agg_retrieve_direct\n> have a comment, similar to the surrounding code? Might be an\n> overkill,\n> not sure.\n\nSure, done.\n\n> 10) The comment for agg_refill_hash_table says\n> \n> * Should only be called after all in memory hash table entries have\n> been\n> * consumed.\n> \n> Can we enforce that with an assert, somehow?\n\nIt's a bit awkward. Simplehash doesn't expose the number of groups, and\nwe would also have to check each hash table. 
Not a bad idea to add an\ninterface to simplehash to make that work, though.\n\n> 11) The hash_spill_npartitions naming seems a bit confusing, because\n> it\n> seems to imply it's about the \"spill\" while in practice it just\n> choses\n> number of spill partitions. Maybe hash_choose_num_spill_partitions\n> would\n> be better?\n\nDone.\n\n> 12) It's not clear to me why we need HASH_MAX_PARTITIONS? What's the\n> reasoning behind the current value (256)? Not wanting to pick too\n> many\n> partitions? Comment?\n> \n> if (npartitions > HASH_MAX_PARTITIONS)\n> npartitions = HASH_MAX_PARTITIONS;\n\nAdded a comment. There's no deep reasoning there -- I just don't want\nit to choose to create 5000 files and surprise a user.\n\n> 13) As for this:\n> \n> /* make sure that we don't exhaust the hash bits */\n> if (partition_bits + input_bits >= 32)\n> partition_bits = 32 - input_bits;\n> \n> We already ran into this issue (exhausting bits in a hash value) in\n> hashjoin batching, we should be careful to use the same approach in\n> both\n> places (not the same code, just general approach).\n\nDidn't investigate this yet, but will do.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 04 Dec 2019 18:55:43 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Dec 04, 2019 at 06:55:43PM -0800, Jeff Davis wrote:\n> \n> Thanks very much for a great review! I've attached a new patch.\n\nHi,\n\nAbout the `TODO: project needed attributes only` in your patch, when\nwould the input tuple contain columns not needed? 
It seems like anything\nyou can project has to be in the group or aggregates.\n\n-- \nMelanie Plageman & Adam\n\n\n", "msg_date": "Wed, 4 Dec 2019 19:50:13 -0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2019-12-04 at 19:50 -0800, Adam Lee wrote:\n> On Wed, Dec 04, 2019 at 06:55:43PM -0800, Jeff Davis wrote:\n> > \n> > Thanks very much for a great review! I've attached a new patch.\n> \n> Hi,\n> \n> About the `TODO: project needed attributes only` in your patch, when\n> would the input tuple contain columns not needed? It seems like\n> anything\n> you can project has to be in the group or aggregates.\n\nIf you have a table like:\n\n CREATE TABLE foo(i int, j int, x int, y int, z int);\n\nAnd do:\n\n SELECT i, SUM(j) FROM foo GROUP BY i;\n\nAt least from a logical standpoint, you might expect that we project\nonly the attributes we need from foo before feeding them into the\nHashAgg. But that's not quite how postgres works. Instead, it leaves\nthe tuples intact (which, in this case, means they have 5 attributes)\nuntil after aggregation and lazily fetches whatever attributes are\nreferenced. Tuples are spilled from the input, at which time they still\nhave 5 attributes; so naively copying them is wasteful.\n\nI'm not sure how often this laziness is really a win in practice,\nespecially after the expression evaluation has changed so much in\nrecent releases. So it might be better to just project all the\nattributes eagerly, and then none of this would be a problem. If we\nstill wanted to be lazy about attribute fetching, that should still be\npossible even if we did a kind of \"logical\" projection of the tuple so\nthat the useless attributes would not be relevant. 
Regardless, that's\noutside the scope of the patch I'm currently working on.\n\nWhat I'd like to do is copy just the attributes needed into a new\nvirtual slot, leave the unneeded ones NULL, and then write it out to\nthe tuplestore as a MinimalTuple. I just need to be sure to get the\nright attributes.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 04 Dec 2019 22:57:51 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2019-12-04 at 17:24 -0800, Melanie Plageman wrote:\n> \n> It looks like the parameter input_tuples passed to hash_spill_init() \n> in lookup_hash_entries() is the number of groups estimated by\n> planner.\n> However, when reloading a spill file, if we run out of memory and\n> re-spill, hash_spill_init() is passed batch->input_groups (which is\n> actually set from input_ngroups which is the number of tuples in the\n> spill file). So, input_tuples is groups and input_groups is\n> input_tuples. It may be helpful to rename this.\n\nYou're right; this is confusing. I will clarify this in the next patch.\n \n> So, it looks like the HASH_PARTITION_FACTOR is only used when\n> re-spilling. The initial hashtable will use work_mem.\n> It seems like the reason for using it when re-spilling is to be very\n> conservative to avoid more than one re-spill and make sure each spill\n> file fits in a hashtable in memory.\n\nIt's used any time a spill happens, even the first spill. I'm flexible\non the use of HASH_PARTITION_FACTOR though... it seems not everyone\nthinks it's a good idea. To me it's just a knob to tune and I tend to\nthink over-partitioning is the safer bet most of the time.\n\n> The comment does seem to point to some other reason, though...\n\nI have observed some anomalies where smaller work_mem values (for\nalready-low values of work_mem) result faster runtime. 
The only\nexplanation I have is caching effects.\n\n> 256 actually seems very large. hash_spill_npartitions() will be\n> called\n> for every respill, so, HASH_MAX_PARTITIONS it not the total number of\n> spill files permitted, but, actually, it is the number of respill\n> files in a given spill (a spill set). So if you made X partitions\n> initially and every partition re-spills, now you would have (at most)\n> X * 256 partitions.\n\nRight. Though I'm not sure there's any theoretical max... given enough\ninput tuples and it will just keep getting deeper. If this is a serious\nconcern maybe I should make it depth-first recursion by prepending new\nwork items rather than appending. That would still not bound the\ntheoretical max, but it would slow the growth.\n\n> If HASH_MAX_PARTITIONS is 256, wouldn't the metadata from the spill\n> files take up a lot of memory at that point?\n\nYes. Each file keeps a BLCKSZ buffer, plus some other metadata. And it\ndoes create a file, so it's offloading some work to the OS to manage\nthat new file.\n\nIt's annoying to properly account for these costs because the memory\nneeds to be reserved at the time we are building the hash table, but we\ndon't know how many partitions we want until it comes time to spill.\nAnd for that matter, we don't even know whether we will need to spill\nor not.\n\nThere are two alternative approaches which sidestep this problem:\n\n1. Reserve a fixed fraction of work_mem, say, 1/8 to make space for\nhowever many partitions that memory can handle. We would still have a\nmin and max, but the logic for reserving the space would be easy and so\nwould choosing the number of partitions to create.\n * Pro: simple\n * Con: lose the ability to choose the numer of partitions\n\n2. Use logtape.c instead (suggestion from Heikki). 
Supporting more\nlogical tapes doesn't impose costs on the OS, and we can potentially\nuse a lot of logical tapes.\n * Pro: can use lots of partitions without making lots of files\n * Con: buffering still needs to happen somewhere, so we still need\nmemory for each logical tape. Also, we risk losing locality of read\naccess when reading the tapes, or perhaps confusing readahead.\nFundamentally, logtapes.c was designed for sequential write, random\nread; but we are going to do random write and sequential read.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 04 Dec 2019 23:28:04 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, 2019-11-28 at 18:46 +0100, Tomas Vondra wrote:\n> And it's not clear to me why we should remove part of the comment\n> before\n> TupleHashTableHash.\n\nIt looks like 5dfc1981 changed the signature of TupleHashTableHash\nwithout updating the comment, so it doesn't really make sense any more.\nI just updated the comment as a part of my patch, but it's not related.\n\nAndres, comments? Maybe we can just commit a fix for that comment and\ntake it out of my patch.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 05 Dec 2019 12:55:51 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Dec 05, 2019 at 12:55:51PM -0800, Jeff Davis wrote:\n>On Thu, 2019-11-28 at 18:46 +0100, Tomas Vondra wrote:\n>> And it's not clear to me why we should remove part of the comment\n>> before\n>> TupleHashTableHash.\n>\n>It looks like 5dfc1981 changed the signature of TupleHashTableHash\n>without updating the comment, so it doesn't really make sense any more.\n>I just updated the comment as a part of my patch, but it's not related.\n>\n>Andres, comments? 
Maybe we can just commit a fix for that comment and\n>take it out of my patch.\n>\n\n+1 to push that as an independent fix\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 5 Dec 2019 23:04:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On 2019-12-05 12:55:51 -0800, Jeff Davis wrote:\n> On Thu, 2019-11-28 at 18:46 +0100, Tomas Vondra wrote:\n> > And it's not clear to me why we should remove part of the comment\n> > before\n> > TupleHashTableHash.\n> \n> It looks like 5dfc1981 changed the signature of TupleHashTableHash\n> without updating the comment, so it doesn't really make sense any more.\n> I just updated the comment as a part of my patch, but it's not related.\n> \n> Andres, comments? Maybe we can just commit a fix for that comment and\n> take it out of my patch.\n\nFine with me!\n\n- Andres\n\n\n", "msg_date": "Thu, 5 Dec 2019 14:14:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Dec 04, 2019 at 10:57:51PM -0800, Jeff Davis wrote:\n> > About the `TODO: project needed attributes only` in your patch, when\n> > would the input tuple contain columns not needed? It seems like\n> > anything\n> > you can project has to be in the group or aggregates.\n> \n> If you have a table like:\n> \n> CREATE TABLE foo(i int, j int, x int, y int, z int);\n> \n> And do:\n> \n> SELECT i, SUM(j) FROM foo GROUP BY i;\n> \n> At least from a logical standpoint, you might expect that we project\n> only the attributes we need from foo before feeding them into the\n> HashAgg. But that's not quite how postgres works. 
Instead, it leaves\n> the tuples intact (which, in this case, means they have 5 attributes)\n> until after aggregation and lazily fetches whatever attributes are\n> referenced. Tuples are spilled from the input, at which time they still\n> have 5 attributes; so naively copying them is wasteful.\n> \n> I'm not sure how often this laziness is really a win in practice,\n> especially after the expression evaluation has changed so much in\n> recent releases. So it might be better to just project all the\n> attributes eagerly, and then none of this would be a problem. If we\n> still wanted to be lazy about attribute fetching, that should still be\n> possible even if we did a kind of \"logical\" projection of the tuple so\n> that the useless attributes would not be relevant. Regardless, that's\n> outside the scope of the patch I'm currently working on.\n> \n> What I'd like to do is copy just the attributes needed into a new\n> virtual slot, leave the unneeded ones NULL, and then write it out to\n> the tuplestore as a MinimalTuple. I just need to be sure to get the\n> right attributes.\n> \n> Regards,\n> \tJeff Davis\n\nMelanie and I tried this, and have an installcheck-passing patch. 
The way we verified it was by composing a wide table with long,\nunnecessary text columns, then checking the size it writes on every\niteration.\n\nPlease check out the attachment, it's based on your 1204 version.\n\n-- \nAdam Lee", "msg_date": "Tue, 10 Dec 2019 13:34:22 -0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, 2019-11-28 at 18:46 +0100, Tomas Vondra wrote:\n> 13) As for this:\n> \n> /* make sure that we don't exhaust the hash bits */\n> if (partition_bits + input_bits >= 32)\n> partition_bits = 32 - input_bits;\n> \n> We already ran into this issue (exhausting bits in a hash value) in\n> hashjoin batching, we should be careful to use the same approach in\n> both\n> places (not the same code, just general approach).\n\nI assume you're talking about ExecHashIncreaseNumBatches(), and in\nparticular, commit 8442317b. But that's a 10-year-old commit, so\nperhaps you're talking about something else?\n\nIt looks like that code in HJ is protecting against having a very large\nnumber of batches, such that we can't allocate an array of pointers for\neach batch. And it seems like the concern is more related to a planner\nerror causing such a large nbatch.\n\nI don't quite see the analogous case in HashAgg. npartitions is already\nconstrained to a maximum of 256. 
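As an aside, the guard quoted above is small enough to sketch standalone. The constants and function name below are mine, not the actual code from the patch; it just illustrates how the partition-bit budget is clamped so that nested spill levels never exhaust the 32-bit hash:

```c
/*
 * Sketch of the hash-bit budget guard: choose enough bits to address
 * 'npartitions' spill partitions, but never spend bits that enclosing
 * spill levels ('input_bits') have already consumed from the 32-bit
 * hash value. Assumes 0 <= input_bits <= 32.
 */
static int
choose_partition_bits(int npartitions, int input_bits)
{
	int			partition_bits = 0;

	/* smallest power of two covering npartitions */
	while ((1 << partition_bits) < npartitions)
		partition_bits++;

	/* make sure that we don't exhaust the hash bits */
	if (partition_bits + input_bits >= 32)
		partition_bits = 32 - input_bits;

	return partition_bits;
}
```

Once input_bits gets close to 32, the clamp can leave fewer bits than needed for HASH_MIN_PARTITIONS, which is the "running out of bits" case mentioned below.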
And the batches are individually\nallocated, held in a list, not an array.\n\nIt could perhaps use some defensive programming to make sure that we\ndon't run into problems if the max is set very high.\n\nCan you clarify what you're looking for here?\n\nPerhaps I can also add a comment saying that we can have less than\nHASH_MIN_PARTITIONS when running out of bits.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 12 Dec 2019 18:10:50 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Dec 12, 2019 at 06:10:50PM -0800, Jeff Davis wrote:\n>On Thu, 2019-11-28 at 18:46 +0100, Tomas Vondra wrote:\n>> 13) As for this:\n>>\n>> /* make sure that we don't exhaust the hash bits */\n>> if (partition_bits + input_bits >= 32)\n>> partition_bits = 32 - input_bits;\n>>\n>> We already ran into this issue (exhausting bits in a hash value) in\n>> hashjoin batching, we should be careful to use the same approach in\n>> both\n>> places (not the same code, just general approach).\n>\n>I assume you're talking about ExecHashIncreaseNumBatches(), and in\n>particular, commit 8442317b. But that's a 10-year-old commit, so\n>perhaps you're talking about something else?\n>\n>It looks like that code in HJ is protecting against having a very large\n>number of batches, such that we can't allocate an array of pointers for\n>each batch. And it seems like the concern is more related to a planner\n>error causing such a large nbatch.\n>\n>I don't quite see the analogous case in HashAgg. npartitions is already\n>constrained to a maximum of 256. 
And the batches are individually\n>allocated, held in a list, not an array.\n>\n>It could perhaps use some defensive programming to make sure that we\n>don't run into problems if the max is set very high.\n>\n>Can you clarify what you're looking for here?\n>\n\nI'm talking about this recent discussion on pgsql-bugs:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLyafKXBMFqZCSeYikPbdYURbwr%2BjP6TAy8sY-8LO0V%2BQ%40mail.gmail.com\n\nI.e. when number of batches/partitions and buckets is high enough, we\nmay end up with very few bits in one of the parts.\n\n>Perhaps I can also add a comment saying that we can have less than\n>HASH_MIN_PARTITIONS when running out of bits.\n>\n\nMaybe.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Dec 2019 17:17:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Nov 27, 2019 at 02:58:04PM -0800, Jeff Davis wrote:\n>On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:\n>> Right now the patch always initializes 32 spill partitions. Have you\n>> given\n>> any thought into how to intelligently pick an optimal number of\n>> partitions yet?\n>\n>Attached a new patch that addresses this.\n>\n>1. Divide hash table memory used by the number of groups in the hash\n>table to get the average memory used per group.\n>2. Multiply by the number of groups spilled -- which I pessimistically\n>estimate as the number of tuples spilled -- to get the total amount of\n>memory that we'd like to have to process all spilled tuples at once.\n>3. Divide the desired amount of memory by work_mem to get the number of\n>partitions we'd like to have such that each partition can be processed\n>in work_mem without spilling.\n>4. 
Apply a few sanity checks, fudge factors, and limits.\n>\n>Using this runtime information should be substantially better than\n>using estimates and projections.\n>\n>Additionally, I removed some branches from the common path. I think I\n>still have more work to do there.\n>\n>I also rebased of course, and fixed a few other things.\n>\n\nI've done a bit more testing on this, after resolving a couple of minor\nconflicts due to recent commits (rebased version attached).\n\nIn particular, I've made a comparison with different dataset sizes,\ngroup sizes, GUC settings etc. The script and results from two different\nmachines are available here:\n\n * https://bitbucket.org/tvondra/hashagg-tests/src/master/\n\nThe script essentially runs a simple grouping query with different\nnumber of rows, groups, work_mem and parallelism settings. There's\nnothing particularly magical about it.\n\nI did run it both on master and patched code, allowing us to compare\nresults and assess impact of the patch. Overall, the changes are\nexpected and either neutral or beneficial, i.e. the timing are the same\nor faster.\n\nThe number of cases that regressed is fairly small, but sometimes the\nregressions are annoyingly large - up to 2x in some cases. Consider for\nexample this trivial example with 100M rows:\n\n CREATE TABLE t AS\n SELECT (100000000 * random())::int AS a\n FROM generate_series(1,100000000) s(i);\n\nOn the master, the plan with default work_mem (i.e. 
4MB) and\n\n SET max_parallel_workers_per_gather = 8;\n \nlooks like this:\n\nEXPLAIN SELECT * FROM (SELECT a, count(*) FROM t GROUP BY a OFFSET 1000000000) foo;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Limit (cost=16037474.49..16037474.49 rows=1 width=12)\n -> Finalize GroupAggregate (cost=2383745.73..16037474.49 rows=60001208 width=12)\n Group Key: t.a\n -> Gather Merge (cost=2383745.73..14937462.25 rows=100000032 width=12)\n Workers Planned: 8\n -> Partial GroupAggregate (cost=2382745.59..2601495.66 rows=12500004 width=12)\n Group Key: t.a\n -> Sort (cost=2382745.59..2413995.60 rows=12500004 width=4)\n Sort Key: t.a\n -> Parallel Seq Scan on t (cost=0.00..567478.04 rows=12500004 width=4)\n(10 rows)\n\nWhich kinda makes sense - we can't do hash aggregate, because there are\n100M distinct values, and that won't fit into 4MB of memory (and the\nplanner knows about that).\n\nAnd it completes in about 108381 ms, give or take. With the patch, the\nplan changes like this:\n\n\nEXPLAIN SELECT * FROM (SELECT a, count(*) FROM t GROUP BY a OFFSET 1000000000) foo;\n\n QUERY PLAN\n---------------------------------------------------------------------------\n Limit (cost=2371037.74..2371037.74 rows=1 width=12)\n -> HashAggregate (cost=1942478.48..2371037.74 rows=42855926 width=12)\n Group Key: t.a\n -> Seq Scan on t (cost=0.00..1442478.32 rows=100000032 width=4)\n(4 rows)\n\ni.e. it's way cheaper than the master plan, it's not parallel, but when\nexecuted it takes much longer (about 147442 ms). 
After forcing a\nparallel query (by setting parallel_setup_cost = 0) the plan changes to\na parallel one, but without a partial aggregate, but it's even slower.\n\nThe explain analyze for the non-parallel plan looks like this:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2371037.74..2371037.74 rows=1 width=12) (actual time=160180.718..160180.718 rows=0 loops=1)\n -> HashAggregate (cost=1942478.48..2371037.74 rows=42855926 width=12) (actual time=54462.728..157594.756 rows=63215980 loops=1)\n Group Key: t.a\n Memory Usage: 4096kB Batches: 8320 Disk Usage:4529172kB\n -> Seq Scan on t (cost=0.00..1442478.32 rows=100000032 width=4) (actual time=0.014..12198.044 rows=100000000 loops=1)\n Planning Time: 0.110 ms\n Execution Time: 160183.517 ms\n(7 rows)\n\nSo the cost is about 7x lower than for master, but the duration is much\nhigher. I don't know how much of this is preventable, but it seems there\nmight be something missing in the costing, because when I set work_mem to\n1TB on the master, and I tweak the n_distinct estimates for the column\nto be exactly the same on the two clusters, I get this:\n\nmaster:\n-------\n\nSET work_mem = '1TB';\nEXPLAIN SELECT * FROM (SELECT a, count(*) FROM t GROUP BY a OFFSET 1000000000) foo;\n\n QUERY PLAN \n---------------------------------------------------------------------------\n Limit (cost=2574638.28..2574638.28 rows=1 width=12)\n -> HashAggregate (cost=1942478.48..2574638.28 rows=63215980 width=12)\n Group Key: t.a\n -> Seq Scan on t (cost=0.00..1442478.32 rows=100000032 width=4)\n(4 rows)\n\n\npatched:\n--------\n\nEXPLAIN SELECT * FROM (SELECT a, count(*) FROM t GROUP BY a OFFSET 1000000000) foo;\n\n QUERY PLAN\n---------------------------------------------------------------------------\n Limit (cost=2574638.28..2574638.28 rows=1 width=12)\n -> HashAggregate (cost=1942478.48..2574638.28 rows=63215980 
width=12)\n Group Key: t.a\n -> Seq Scan on t (cost=0.00..1442478.32 rows=100000032 width=4)\n(4 rows)\n\nThat is, the cost is exactly the same, except that in the second case we\nexpect to do quite a bit of batching - there are 8320 batches (and we\nknow that, because on master we'd not use hash aggregate without the\nwork_mem tweak).\n\nSo I think we're not costing the batching properly / at all.\n\n\nA couple more comments:\n\n1) IMHO we should rename hashagg_mem_overflow to enable_hashagg_overflow\nor something like that. I think that describes the GUC purpose better\n(and it's more consistent with enable_hashagg_spill).\n\n\n2) show_hashagg_info\n\nI think there's a missing space after \":\" here:\n\n\t\t\t\t\" Batches: %d Disk Usage:%ldkB\",\n\nand maybe we should use just \"Disk:\" just like in we do for sort:\n\n-> Sort (actual time=662.136..911.558 rows=1000000 loops=1)\n Sort Key: t2.a\n Sort Method: external merge Disk: 13800kB\n\n\n3) I'm not quite sure what to think about the JIT recompile we do for\nEEOP_AGG_INIT_TRANS_SPILLED etc. I'm no llvm/jit expert, but do we do\nthat for some other existing cases?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 14 Dec 2019 18:32:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Sat, Dec 14, 2019 at 06:32:25PM +0100, Tomas Vondra wrote:\n> I've done a bit more testing on this, after resolving a couple of minor\n> conflicts due to recent commits (rebased version attached).\n> \n> In particular, I've made a comparison with different dataset sizes,\n> group sizes, GUC settings etc. The script and results from two different\n> machines are available here:\n> \n> The script essentially runs a simple grouping query with different\n> number of rows, groups, work_mem and parallelism settings. 
There's\n> nothing particularly magical about it.\n\nNice!\n\n> I did run it both on master and patched code, allowing us to compare\n> results and assess impact of the patch. Overall, the changes are\n> expected and either neutral or beneficial, i.e. the timing are the same\n> or faster.\n> \n> The number of cases that regressed is fairly small, but sometimes the\n> regressions are annoyingly large - up to 2x in some cases. Consider for\n> example this trivial example with 100M rows:\n\nI suppose this is because the patch has no costing changes yet. I hacked\na little to give hash agg a spilling penalty, just some value based on\n(groups_in_hashtable * num_of_input_tuples)/num_groups_from_planner, and it\nwould not choose hash aggregate in this case.\n\nHowever, that penalty is wrong, because compared to the external sort\nalgorithm, hash aggregate has the respilling, which involves even more\nI/O, especially with a very large number of groups but very few\ntuples in a single group, like the test you did. It would be a\nchallenge.\n\nBTW, Jeff, Greenplum has a test for hash agg spill; I modified it a little\nto check how many batches a query uses. It's attached, not sure if it\nwould help.\n\n-- \nAdam Lee", "msg_date": "Fri, 20 Dec 2019 17:16:26 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, 2019-12-10 at 13:34 -0800, Adam Lee wrote:\n> Melanie and I tried this, had a installcheck passed patch. The way\n> how\n> we verify it is composing a wide table with long unnecessary text\n> columns, then check the size it writes on every iteration.\n> \n> Please check out the attachment, it's based on your 1204 version.\n\nThank you. 
Attached a new patch that incorporates your projection work.\n\nA few comments:\n\n* You are only nulling out up to tts_nvalid, which means that you can\nstill end up storing more on disk if the wide column comes at the end\nof the table and hasn't been deserialized yet. I fixed this by copying\nneeded attributes to the hash_spill_slot and making it virtual.\n\n* aggregated_columns does not need to be a member of AggState; nor does\nit need to be computed inside of the perhash loop. Aside: if adding a\nfield to AggState is necessary, you need to bump the field numbers of\nlater fields that are labeled for JIT use, otherwise it will break JIT.\n\n* I used an array rather than a bitmapset. It makes it easier to find\nthe highest column (to do a slot_getsomeattrs), and it might be a\nlittle more efficient for wide tables with mostly useless columns.\n\n* Style nitpick: don't mix code and declarations\n\nThe updated patch also saves the transitionSpace calculation in the Agg\nnode for better hash table size estimating. This is a good way to\nchoose an initial number of buckets for the hash table, and also to cap\nthe number of groups we permit in the hash table when we expect the\ngroups to grow.\n\nRegards,\n\tJeff Davis", "msg_date": "Sun, 22 Dec 2019 15:05:46 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Sat, 2019-12-14 at 18:32 +0100, Tomas Vondra wrote:\n> So I think we're not costing the batching properly / at all.\n\nThank you for all of the testing! I think the results are good: even\nfor cases where HashAgg is the wrong choice, it's not too bad. You're\nright that costing is not done, and when it is, I think it will avoid\nthese bad choices most of the time.\n\n> A couple more comments:\n> \n> 1) IMHO we should rename hashagg_mem_overflow to\n> enable_hashagg_overflow\n> or something like that. 
I think that describes the GUC purpose better\n> (and it's more consistent with enable_hashagg_spill).\n\nThe other enable_* GUCs are all planner GUCs, so I named this one\ndifferently to stand out as an executor GUC.\n\n> 2) show_hashagg_info\n> \n> I think there's a missing space after \":\" here:\n> \n> \t\t\t\t\" Batches: %d Disk Usage:%ldkB\",\n> \n> and maybe we should use just \"Disk:\" just like in we do for sort:\n\nDone, thank you.\n\n> 3) I'm not quite sure what to think about the JIT recompile we do for\n> EEOP_AGG_INIT_TRANS_SPILLED etc. I'm no llvm/jit expert, but do we do\n> that for some other existing cases?\n\nAndres asked for that explicitly to avoid branches in the non-spilling\ncode path (or at least branches that are likely to be mispredicted).\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 22 Dec 2019 15:33:41 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Sat, 2019-12-14 at 18:32 +0100, Tomas Vondra wrote:\n> So I think we're not costing the batching properly / at all.\n\nHi,\n\nI've attached a new patch that adds some basic costing for disk during\nhashagg.\n\nThe accuracy is unfortunately not great, especially at smaller work_mem\nsizes and smaller entry sizes. The biggest discrepancy seems to be that the\nestimate for the average size of an entry in the hash table is\nsignificantly smaller than the actual average size. 
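For anyone following along, that per-entry average is the same quantity the partition heuristic from earlier in the thread is built on. The whole calculation reduces to roughly the sketch below; the variable and function names are mine, not the patch's, and in the patch the inputs come from runtime bookkeeping rather than planner estimates — the sketch just shows the arithmetic:

```c
/*
 * Sketch of the spill partition heuristic described upthread:
 *   1. average memory per group = hash table memory / groups in table
 *   2. desired memory = average group size * estimated spilled groups
 *      (pessimistically, one group per spilled tuple)
 *   3. npartitions = desired memory / work_mem, clamped to a min/max
 */
static int
estimate_hash_partitions(double hash_mem, double ngroups,
						 double ntuples_spilled, double work_mem,
						 int min_partitions, int max_partitions)
{
	double		group_size = hash_mem / ngroups;
	double		mem_wanted = group_size * ntuples_spilled;
	int			npartitions = (int) (mem_wanted / work_mem);

	if (npartitions < min_partitions)
		npartitions = min_partitions;
	if (npartitions > max_partitions)
		npartitions = max_partitions;

	return npartitions;
}
```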
I'm not sure how\nbig of a problem this accuracy is or how it compares to sort, for\ninstance (it's a bit hard to compare because sort works with\ntheoretical memory usage while hashagg looks at actual allocated\nmemory).\n\nCosting was the last major TODO, so I'm considering this feature\ncomplete, though it still needs some work on quality.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 27 Dec 2019 15:35:30 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi, Jeff\n\nI tried to use the logical tape APIs for hash agg spilling, based on\nyour 1220 version.\n\nTurns out it doesn't make much of performance difference with the\ndefault 8K block size (might be my patch's problem), but the disk space\n(not I/O) would be saved a lot because I force the respilling to use the\nsame LogicalTapeSet.\n\nLogtape APIs with default block size 8K:\n```\npostgres=# EXPLAIN ANALYZE SELECT avg(g) FROM generate_series(0,5000000) g GROUP BY g;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=75000.02..75002.52 rows=200 width=36) (actual time=7701.706..24473.002 rows=5000001 loops=1)\n Group Key: g\n Memory Usage: 4096kB Batches: 516 Disk: 116921kB\n -> Function Scan on generate_series g (cost=0.00..50000.01 rows=5000001 width=4) (actual time=1611.829..3253.150 rows=5000001 loops=1)\n Planning Time: 0.194 ms\n Execution Time: 25129.239 ms\n(6 rows)\n```\n\nBare BufFile APIs:\n```\npostgres=# EXPLAIN ANALYZE SELECT avg(g) FROM generate_series(0,5000000) g GROUP BY g;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=75000.02..75002.52 rows=200 width=36) (actual time=7339.835..24472.466 rows=5000001 loops=1)\n Group Key: g\n Memory Usage: 
4096kB Batches: 516 Disk: 232773kB\n -> Function Scan on generate_series g (cost=0.00..50000.01 rows=5000001 width=4) (actual time=1580.057..3128.749 rows=5000001 loops=1)\n Planning Time: 0.769 ms\n Execution Time: 26696.502 ms\n(6 rows)\n```\n\nEven though, I'm not sure which API is better, because we should avoid\nthe respilling as much as we could in the planner, and hash join uses\nthe bare BufFile.\n\nAttached my hacky and probably not robust diff for your reference.\n\n-- \nAdam Lee", "msg_date": "Wed, 8 Jan 2020 15:12:02 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On 28/12/2019 01:35, Jeff Davis wrote:\n> I've attached a new patch that adds some basic costing for disk during\n> hashagg.\n\nThis patch (hashagg-20191227.patch) doesn't compile:\n\nnodeAgg.c:3379:7: error: ‘hashagg_mem_overflow’ undeclared (first use in \nthis function)\n if (hashagg_mem_overflow)\n ^~~~~~~~~~~~~~~~~~~~\n\nLooks like the new GUCs got lost somewhere between \nhashagg-20191220.patch and hashagg-20191227.patch.\n\n> /*\n> * find_aggregated_cols\n> *\t Construct a bitmapset of the column numbers of aggregated Vars\n> *\t appearing in our targetlist and qual (HAVING clause)\n> */\n> static Bitmapset *\n> find_aggregated_cols(AggState *aggstate)\n> {\n> \tAgg\t\t *node = (Agg *) aggstate->ss.ps.plan;\n> \tBitmapset *colnos = NULL;\n> \tListCell *temp;\n> \n> \t/*\n> \t * We only want the columns used by aggregations in the targetlist or qual\n> \t */\n> \tif (node->plan.targetlist != NULL)\n> \t{\n> \t\tforeach(temp, (List *) node->plan.targetlist)\n> \t\t{\n> \t\t\tif (IsA(lfirst(temp), TargetEntry))\n> \t\t\t{\n> \t\t\t\tNode *node = (Node *)((TargetEntry *)lfirst(temp))->expr;\n> \t\t\t\tif (IsA(node, Aggref) || IsA(node, GroupingFunc))\n> \t\t\t\t\tfind_aggregated_cols_walker(node, &colnos);\n> \t\t\t}\n> \t\t}\n> \t}\n\nThis makes the assumption that all Aggrefs or GroupingFuncs 
are at the \ntop of the TargetEntry. That's not true, e.g.:\n\nselect 0+sum(a) from foo group by b;\n\nI think find_aggregated_cols() and find_unaggregated_cols() should be \nmerged into one function that scans the targetlist once, and returns two \nBitmapsets. They're always used together, anyway.\n\n- Heikki\n\n\n", "msg_date": "Wed, 8 Jan 2020 12:38:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-01-08 at 12:38 +0200, Heikki Linnakangas wrote:\n> This makes the assumption that all Aggrefs or GroupingFuncs are at\n> the \n> top of the TargetEntry. That's not true, e.g.:\n> \n> select 0+sum(a) from foo group by b;\n> \n> I think find_aggregated_cols() and find_unaggregated_cols() should\n> be \n> merged into one function that scans the targetlist once, and returns\n> two \n> Bitmapsets. They're always used together, anyway.\n\nI cut the projection out for now, because there's some work in that\narea in another thread[1]. If that work doesn't pan out, I can\nreintroduce the projection logic to this one.\n\nNew patch attached.\n\nIt now uses logtape.c (thanks Adam for prototyping this work) instead\nof buffile.c. This gives better control over the number of files and\nthe memory consumed for buffers, and reduces waste. It requires two\nchanges to logtape.c though:\n * add API to extend the number of tapes\n * lazily allocate buffers for reading (buffers for writing were\nalready allocated lazily) so that the total number of buffers needed at\nany time is bounded\n\nUnfortunately, I'm seeing some bad behavior (at least in some cases)\nwith logtape.c, where it's spending a lot of time qsorting the list of\nfree blocks. Adam, did you also see this during your perf tests? 
It\nseems to be worst with lower work_mem settings and a large number of\ninput groups (perhaps there are just too many small tapes?).\n\nIt also has some pretty major refactoring that hopefully makes it\nsimpler to understand and reason about, and hopefully I didn't\nintroduce too many bugs/regressions.\n\nA list of other changes:\n * added test that involves rescan\n * tweaked some details and tunables so that I think memory usage\ntracking and reporting (EXPLAIN ANALYZE) is better, especially for\nsmaller work_mem\n * simplified quite a few function signatures\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://postgr.es/m/CAAKRu_Yj=Q_ZxiGX+pgstNWMbUJApEJX-imvAEwryCk5SLUebg@mail.gmail.com", "msg_date": "Fri, 24 Jan 2020 17:01:35 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, Jan 24, 2020 at 5:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Unfortunately, I'm seeing some bad behavior (at least in some cases)\n> with logtape.c, where it's spending a lot of time qsorting the list of\n> free blocks. Adam, did you also see this during your perf tests? It\n> seems to be worst with lower work_mem settings and a large number of\n> input groups (perhaps there are just too many small tapes?).\n\nThat sounds weird. Might be pathological in some sense.\n\nI have a wild guess for you. Maybe this has something to do with the\n\"test for presorted input\" added by commit a3f0b3d68f9. That can\nperform very badly when the input is almost sorted, but has a few\ntuples that are out of order towards the end. (I have called these\n\"banana skin tuples\" in the past.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Jan 2020 17:16:47 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, 2020-01-24 at 17:16 -0800, Peter Geoghegan wrote:\n> That sounds weird. 
Might be pathological in some sense.\n> \n> I have a wild guess for you. Maybe this has something to do with the\n> \"test for presorted input\" added by commit a3f0b3d68f9. That can\n> perform very badly when the input is almost sorted, but has a few\n> tuples that are out of order towards the end. (I have called these\n> \"banana skin tuples\" in the past.)\n\nMy simple test case is: 'explain analyze select i from big group by\ni;', where \"big\" has 20M tuples.\n\nI tried without that change and it helped (brought the time from 55s to\n45s). But if I completely remove the sorting of the freelist, it goes\ndown to 12s. So it's something about the access pattern.\n\nAfter digging a bit more, I see that, for Sort, the LogicalTapeSet's\nfreelist hovers around 300 entries and doesn't grow larger than that.\nFor HashAgg, it gets up to almost 60K. The pattern in HashAgg is that\nthe space required is at a maximum after the first spill, and after\nthat point the used space declines with each batch (because the groups\nthat fit in the hash table were finalized and emitted, and only the\nones that didn't fit were written out). As the amount of required space\ndeclines, the size of the freelist grows.\n\nThat leaves a few options:\n\n1) Cap the size of the LogicalTapeSet's freelist. If the freelist is\ngrowing large, that's probably because it will never actually be used.\nI'm not quite sure how to pick the cap though, and it seems a bit hacky\nto just leak the freed space.\n\n2) Use a different structure more capable of handling a large fraction\nof free space. A compressed bitmap might make sense, but that seems\nlike overkill to waste effort tracking a lot of space that is unlikely\nto ever be used.\n\n3) Don't bother tracking free space for HashAgg at all. There's already\nan API for that so I don't need to further hack logtape.c.\n\n4) Try to be clever and shrink the file (or at least the tracked\nportion of the file) if the freed blocks are at the end. 
This wouldn't\nbe very useful in the first recursive level, but the problem is worst\nfor the later levels anyway. Unfortunately, I think this requires a\nbreadth-first strategy to make sure that blocks at the end get freed.\nIf I do change it to breadth-first also, this does amount to a\nsignificant speedup.\n\nI am leaning toward #1 or #3.\n\nAs an aside, I'm curious why the freelist is managed the way it is.\nNewly-released blocks are likely to be higher in number (or at least\nnot the lowest in number), but they are added to the end of an array.\nThe array is therefore likely to require repeated re-sorting to get\nback to descending order. Wouldn't a minheap or something make more\nsense?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 29 Jan 2020 14:48:56 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-01-29 at 14:48 -0800, Jeff Davis wrote:\n> 2) Use a different structure more capable of handling a large\n> fraction\n> of free space. A compressed bitmap might make sense, but that seems\n> like overkill to waste effort tracking a lot of space that is\n> unlikely\n> to ever be used.\n\nI ended up converting the freelist to a min heap.\n\nAttached is a patch which makes three changes to better support\nHashAgg:\n\n1. Use a minheap for the freelist. The original design used an array\nthat had to be sorted between a read (which frees a block) and a write\n(which needs to sort the array to consume the lowest block number). The\ncomments said:\n\n * sorted. This is an efficient way to handle it because we expect\ncycles\n * of releasing many blocks followed by re-using many blocks, due to\n * the larger read buffer. \n\nBut I didn't find a case where that actually wins over a simple\nminheap. With that in mind, a minheap seems closer to what one might\nexpect for that purpose, and more robust when the assumptions don't\nhold up as well. 
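To make the comparison concrete: with a minheap, freeing a block is an O(log n) sift-up insert, and consuming a block is an O(log n) pop of the root, which is always the lowest free block number. A minimal sketch (illustrative only; the real change operates on logtape.c's existing freelist array in place):

```c
/* Minimal binary minheap over free block numbers (illustrative only). */
typedef struct
{
	long		blocks[1024];	/* fixed capacity, enough for a sketch */
	int			nblocks;
} BlockHeap;

/* Release a block: sift-up insert, O(log n). */
static void
heap_free_block(BlockHeap *heap, long blocknum)
{
	int			i = heap->nblocks++;

	while (i > 0 && heap->blocks[(i - 1) / 2] > blocknum)
	{
		heap->blocks[i] = heap->blocks[(i - 1) / 2];
		i = (i - 1) / 2;
	}
	heap->blocks[i] = blocknum;
}

/* Reuse a block: pop the root, the lowest free block number, O(log n). */
static long
heap_get_block(BlockHeap *heap)
{
	long		result = heap->blocks[0];
	long		last = heap->blocks[--heap->nblocks];
	int			i = 0;

	for (;;)
	{
		int			child = 2 * i + 1;

		if (child >= heap->nblocks)
			break;
		if (child + 1 < heap->nblocks &&
			heap->blocks[child + 1] < heap->blocks[child])
			child++;
		if (last <= heap->blocks[child])
			break;
		heap->blocks[i] = heap->blocks[child];
		i = child;
	}
	heap->blocks[i] = last;
	return result;
}
```

Unlike the sorted array, nothing has to be re-sorted between a run of frees and a run of reuses; each operation pays its logarithmic cost immediately.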
If someone knows of a case where the re-sorting\nbehavior is important, please let me know.\n\nChanging to a minheap effectively solves the problem for HashAgg,\nthough in theory the memory consumption of the freelist itself could\nbecome significant (though it's only 0.1% of the free space being\ntracked).\n\n2. Lazily-allocate the read buffer. The write buffer was always lazily-\nallocated, so this patch creates better symmetry. More importantly, it\nmeans freshly-rewound tapes don't have any buffer allocated, so it\ngreatly expands the number of tapes that can be managed efficiently as\nlong as only a limited number are active at once.\n\n3. Allow expanding the number of tapes for an existing tape set. This\nis useful for HashAgg, which doesn't know how many tapes will be needed\nin advance.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 03 Feb 2020 10:29:55 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, 2020-02-03 at 10:29 -0800, Jeff Davis wrote:\n> I ended up converting the freelist to a min heap.\n> \n> Attached is a patch which makes three changes to better support\n> HashAgg:\n\nAnd now I'm attaching another version of the main Hash Aggregation\npatch to be applied on top of the logtape.c patch.\n\nNot a lot of changes from the last version; mostly some cleanup and\nrebasing. 
But it's faster now with the logtape.c changes.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 03 Feb 2020 18:24:14 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, Feb 03, 2020 at 06:24:14PM -0800, Jeff Davis wrote:\n> On Mon, 2020-02-03 at 10:29 -0800, Jeff Davis wrote:\n> > I ended up converting the freelist to a min heap.\n> > \n> > Attached is a patch which makes three changes to better support\n> > HashAgg:\n> \n> And now I'm attaching another version of the main Hash Aggregation\n> patch to be applied on top of the logtape.c patch.\n> \n> Not a lot of changes from the last version; mostly some cleanup and\n> rebasing. But it's faster now with the logtape.c changes.\n\nNice!\n\nJust back from the holiday. I had the perf test with Tomas's script,\ndidn't notice the freelist sorting regression at that time.\n\nThe minheap looks good, have you tested the performance and aggregate\nvalidation?\n\nAbout the \"Cap the size of the LogicalTapeSet's freelist\" and \"Don't\nbother tracking free space for HashAgg at all\" you mentioned in last\nmail, I suppose these two options will lost the disk space saving\nbenefit since some blocks are not reusable then?\n\n-- \nAdam Lee\n\n\n", "msg_date": "Tue, 4 Feb 2020 18:42:29 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On 03/02/2020 20:29, Jeff Davis wrote:\n> 1. Use a minheap for the freelist. The original design used an array\n> that had to be sorted between a read (which frees a block) and a write\n> (which needs to sort the array to consume the lowest block number). The\n> comments said:\n> \n> * sorted. 
This is an efficient way to handle it because we expect\n> cycles\n> * of releasing many blocks followed by re-using many blocks, due to\n> * the larger read buffer.\n> \n> But I didn't find a case where that actually wins over a simple\n> minheap. With that in mind, a minheap seems closer to what one might\n> expect for that purpose, and more robust when the assumptions don't\n> hold up as well. If someone knows of a case where the re-sorting\n> behavior is important, please let me know.\n\nA minheap certainly seems more natural for that. I guess re-sorting the \narray would be faster in the extreme case that you free almost all of \nthe blocks, and then consume almost all of the blocks, but I don't think \nthe usage pattern is ever that extreme. Because if all the data fit in \nmemory, we wouldn't be spilling in the first place.\n\nI wonder if a more advanced heap like the pairing heap or fibonacci heap \nwould perform better? Probably doesn't matter in practice, so better \nkeep it simple...\n\n> Changing to a minheap effectively solves the problem for HashAgg,\n> though in theory the memory consumption of the freelist itself could\n> become significant (though it's only 0.1% of the free space being\n> tracked).\n\nWe could fairly easily spill parts of the freelist to disk, too, if \nnecessary. But it's probably not worth the trouble.\n\n> 2. Lazily-allocate the read buffer. The write buffer was always lazily-\n> allocated, so this patch creates better symmetry. More importantly, it\n> means freshly-rewound tapes don't have any buffer allocated, so it\n> greatly expands the number of tapes that can be managed efficiently as\n> long as only a limited number are active at once.\n\nMakes sense.\n\n> 3. Allow expanding the number of tapes for an existing tape set. This\n> is useful for HashAgg, which doesn't know how many tapes will be needed\n> in advance.\n\nI'd love to change the LogicalTape API so that you could allocate and \nfree tapes more freely. 
I wrote a patch to do that, as part of replacing \ntuplesort.c's polyphase algorithm with a simpler one (see [1]), but I \nnever got around to committing it. Maybe the time is ripe to do that now?\n\n[1] \nhttps://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83@iki.fi\n\n- Heikki\n\n\n", "msg_date": "Tue, 4 Feb 2020 18:10:15 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, Feb 3, 2020 at 6:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> And now I'm attaching another version of the main Hash Aggregation\n> patch to be applied on top of the logtape.c patch.\n\nHave you tested this against tuplesort.c, particularly parallel CREATE\nINDEX? It would be worth trying to measure any performance impact.\nNote that most parallel CREATE INDEX tuplesorts will do a merge within\neach worker, and one big merge in the leader. It's much more likely to\nhave multiple passes than a regular serial external sort.\n\nParallel CREATE INDEX is currently accidentally disabled on the master\nbranch. That should be fixed in the next couple of days. You can\ntemporarily revert 74618e77 if you want to get it back for testing\npurposes today.\n\nHave you thought about integer overflow in your heap related routines?\nThis isn't as unlikely as you might think. See commit 512f67c8, for\nexample.\n\nHave you thought about the MaxAllocSize restriction as it concerns\nlts->freeBlocks? Will that be okay when you have many more tapes than\nbefore?\n\n> Not a lot of changes from the last version; mostly some cleanup and\n> rebasing. But it's faster now with the logtape.c changes.\n\nLogicalTapeSetExtend() seems to work in a way that assumes that the\ntape is frozen. It would be good to document that assumption, and\npossible enforce it by way of an assertion. 
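To make the overflow concern concrete, a hypothetical sketch (not code from the patch): with a signed int index, computing a heap child as 2*i + 1 silently overflows once i exceeds INT_MAX / 2; widening to size_t and checking before multiplying sidesteps that.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: return the left-child index of heap slot i, or
 * nelems (meaning "no child") when the child would be out of range or
 * the arithmetic would overflow. */
static size_t
left_child(size_t i, size_t nelems)
{
    size_t child;

    if (i > (SIZE_MAX - 1) / 2)
        return nelems;          /* 2*i + 1 would overflow */
    child = 2 * i + 1;
    return (child < nelems) ? child : nelems;
}
```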
The same remark applies to\nany other assumptions you're making there.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 4 Feb 2020 15:08:11 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Feb 5, 2020 at 12:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Parallel CREATE INDEX is currently accidentally disabled on the master\n> branch. That should be fixed in the next couple of days. You can\n> temporarily revert 74618e77 if you want to get it back for testing\n> purposes today.\n\n(Fixed -- sorry for the disruption.)\n\n\n", "msg_date": "Wed, 5 Feb 2020 12:53:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, 2020-02-04 at 18:42 +0800, Adam Lee wrote:\n> The minheap looks good, have you tested the performance and aggregate\n> validation?\n\nNot sure exactly what you mean, but I tested the min heap with both\nSort and HashAgg and it performs well.\n\n> About the \"Cap the size of the LogicalTapeSet's freelist\" and \"Don't\n> bother tracking free space for HashAgg at all\" you mentioned in last\n> mail, I suppose these two options will lost the disk space saving\n> benefit since some blocks are not reusable then?\n\nNo freelist at all will, of course, leak the blocks and not reuse the\nspace.\n\nA capped freelist is not bad in practice; it seems to still work as\nlong as the cap is reasonable. But it feels too arbitrary, and could\ncause unexpected leaks when our assumptions change. 
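For illustration, the capped-freelist alternative amounts to something like the following sketch (the cap and the names are hypothetical, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch: record a freed block unless the freelist has hit its cap;
 * past the cap the block is simply leaked (never reused).  Returns
 * false when the block was leaked. */
static bool
freelist_push_capped(long *freeBlocks, size_t *nfree, size_t cap,
                     long block)
{
    if (*nfree >= cap)
        return false;           /* cap reached: leak the block */
    freeBlocks[(*nfree)++] = block;
    return true;
}
```

In the real patch the cap would be derived from an allocation limit such as MaxAllocSize rather than a hard-coded count.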
I think a minheap\njust makes more sense unless the freelist just becomes way too large.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 Feb 2020 10:26:17 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, 2020-02-04 at 15:08 -0800, Peter Geoghegan wrote:\n> Have you tested this against tuplesort.c, particularly parallel\n> CREATE\n> INDEX? It would be worth trying to measure any performance impact.\n> Note that most parallel CREATE INDEX tuplesorts will do a merge\n> within\n> each worker, and one big merge in the leader. It's much more likely\n> to\n> have multiple passes than a regular serial external sort.\n\nI did not observe any performance regression when creating an index in\nparallel over 20M ints (random ints in random order). I tried 2\nparallel workers with work_mem=4MB and also 4 parallel workers with\nwork_mem=256kB.\n\n> Have you thought about integer overflow in your heap related\n> routines?\n> This isn't as unlikely as you might think. See commit 512f67c8, for\n> example.\n\nIt's dealing with blocks rather than tuples, so it's a bit less likely.\nBut changed it to use \"unsigned long\" instead.\n\n> Have you thought about the MaxAllocSize restriction as it concerns\n> lts->freeBlocks? Will that be okay when you have many more tapes than\n> before?\n\nI added a check. If it exceeds MaxAllocSize, before trying to perform\nthe allocation, just leak the block rather than adding it to the\nfreelist. Perhaps there's a usecase for an extraordinarily-long\nfreelist, but it's outside the scope of this patch.\n\n> LogicalTapeSetExtend() seems to work in a way that assumes that the\n> tape is frozen. It would be good to document that assumption, and\n> possible enforce it by way of an assertion. The same remark applies\n> to\n> any other assumptions you're making there.\n\nCan you explain? 
I am not freezing any tapes in Hash Aggregation, so\nwhat about LogicalTapeSetExtend() assumes the tape is frozen?\n\nAttached new logtape.c patches.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 05 Feb 2020 10:37:15 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Feb 5, 2020 at 10:37 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > LogicalTapeSetExtend() seems to work in a way that assumes that the\n> > tape is frozen. It would be good to document that assumption, and\n> > possible enforce it by way of an assertion. The same remark applies\n> > to\n> > any other assumptions you're making there.\n>\n> Can you explain? I am not freezing any tapes in Hash Aggregation, so\n> what about LogicalTapeSetExtend() assumes the tape is frozen?\n\nSorry, I was very unclear. I meant to write just the opposite: you\nassume that the tapes are *not* frozen. If you're adding a new\ncapability to logtape.c, it makes sense to be clear on the\nrequirements on tapeset state or individual tape state.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 Feb 2020 10:40:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, 2020-02-04 at 18:10 +0200, Heikki Linnakangas wrote:\n> I'd love to change the LogicalTape API so that you could allocate\n> and \n> free tapes more freely. I wrote a patch to do that, as part of\n> replacing \n> tuplesort.c's polyphase algorithm with a simpler one (see [1]), but\n> I \n> never got around to committing it. 
Maybe the time is ripe to do that\n> now?\n\nIt's interesting that you wrote a patch to pause the tapes a while ago.\nDid it just fall through the cracks or was there a problem with it?\n\nIs pause/resume functionality required, or is it good enough that\nrewinding a tape frees the buffer, to be lazily allocated later?\n\nRegarding the API, I'd like to change it, but I'm running into some\nperformance challenges when adding a layer of indirection. If I apply\nthe very simple attached patch, which simply makes a separate\nallocation for the tapes array, it seems to slow down sort by ~5%.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 05 Feb 2020 11:56:11 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-05 at 11:56 -0800, Jeff Davis wrote:\n> Regarding the API, I'd like to change it, but I'm running into some\n> performance challenges when adding a layer of indirection. If I apply\n> the very simple attached patch, which simply makes a separate\n> allocation for the tapes array, it seems to slow down sort by ~5%.\n\nI tried a few different approaches to allow a flexible number of tapes\nwithout regressing normal Sort performance. 
I found some odd hacks, but\nI can't explain why they perform better than the more obvious approach.\n\nThe LogicalTapeSetExtend() API is a natural evolution of what's already\nthere, so I think I'll stick with that to keep the scope of Hash\nAggregation under control.\n\nIf we improve the API later I'm happy to adapt the HashAgg work to use\nit -- anything to take more code out of nodeAgg.c!\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 05 Feb 2020 17:54:48 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, 2020-01-24 at 17:01 -0800, Jeff Davis wrote:\n> New patch attached.\n\nThree minor independent refactoring patches:\n\n1. Add new entry points for the tuple hash table:\n\n TupleHashTableHash()\n LookupTupleHashEntryHash()\n\nwhich are useful for saving and reusing hash values to avoid\nrecomputing.\n\n2. Refactor hash_agg_entry_size() so that the callers don't need to do\nas much work.\n\n3. Save calculated aggcosts->transitionSpace in the Agg node for later\nuse, rather than discarding it.\n\nThese are helpful for the upcoming Hash Aggregation work.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 05 Feb 2020 18:20:22 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On 05/02/2020 21:56, Jeff Davis wrote:\n> On Tue, 2020-02-04 at 18:10 +0200, Heikki Linnakangas wrote:\n>> I'd love to change the LogicalTape API so that you could allocate\n>> and\n>> free tapes more freely. I wrote a patch to do that, as part of\n>> replacing\n>> tuplesort.c's polyphase algorithm with a simpler one (see [1]), but\n>> I\n>> never got around to committing it. 
Maybe the time is ripe to do that\n>> now?\n> \n> It's interesting that you wrote a patch to pause the tapes a while ago.\n> Did it just fall through the cracks or was there a problem with it?\n> \n> Is pause/resume functionality required, or is it good enough that\n> rewinding a tape frees the buffer, to be lazily allocated later?\n\nIt wasn't strictly required for what I was hacking on then. IIRC it \nwould have saved some memory during sorting, but Peter G felt that it \nwasn't worth the trouble, because he made some other changes around the \nsame time, which made it less important \n(https://www.postgresql.org/message-id/CAM3SWZS0nwOPoJQHvxugA9kKPzky2QC2348TTWdSStZOkke5tg%40mail.gmail.com). \nI dropped the ball on both patches then, but I still think they would be \nworthwhile.\n\n- Heikki\n\n\n\n", "msg_date": "Thu, 6 Feb 2020 11:01:36 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Feb 6, 2020 at 12:01 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> It wasn't strictly required for what I was hacking on then. IIRC it\n> would have saved some memory during sorting, but Peter G felt that it\n> wasn't worth the trouble, because he made some other changes around the\n> same time, which made it less important\n\nFWIW, I am not opposed to the patch at all. I would be quite happy to\nget rid of a bunch of code in tuplesort.c that apparently isn't really\nnecessary anymore (by removing polyphase merge).\n\nAll I meant back in 2016 was that \"pausing\" tapes was orthogonal to my\nown idea of capping the number of tapes that could be used by\ntuplesort.c. The 500 MAXORDER cap thing hadn't been committed yet when\nI explained this in the message you linked to, and it wasn't clear if\nit would ever be committed (Robert committed it about a month\nafterwards, as it turned out). 
Capping the size of the merge heap made\nmarginal sorts faster overall, since a more cache efficient merge heap\nmore than made up for having more than one merge pass overall (thanks\nto numerous optimizations added in 2016, some of which were your\nwork).\n\nI also said that the absolute overhead of tapes was not that important\nback in 2016. Using many tapes within tuplesort.c can never happen\nanyway (with the 500 MAXORDER cap). Maybe the use of logtape.c by hash\naggregate changes the picture there now. Even if it doesn't, I still\nthink that your patch is a good idea.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 Feb 2020 13:45:11 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, 2020-02-03 at 18:24 -0800, Jeff Davis wrote:\n> On Mon, 2020-02-03 at 10:29 -0800, Jeff Davis wrote:\n> > I ended up converting the freelist to a min heap.\n> > \n> > Attached is a patch which makes three changes to better support\n> > HashAgg:\n> \n> And now I'm attaching another version of the main Hash Aggregation\n> patch to be applied on top of the logtape.c patch.\n> \n> Not a lot of changes from the last version; mostly some cleanup and\n> rebasing. But it's faster now with the logtape.c changes.\n\nAttaching latest version (combined logtape changes along with main\nHashAgg patch).\n\nI believe I've addressed all of the comments, except for Heikki's\nquestion about changing the logtape.c API. I think big changes to the\nAPI (such as Heikki's proposal) are out of scope for this patch,\nalthough I do favor the changes in general. This patch just includes\nthe LogicalTapeSetExtend() API by Adam Lee, which is less intrusive.\n\nI noticed (and fixed) a small regression for some in-memory hashagg\nqueries due to the way I was choosing the number of buckets when\ncreating the hash table. 
I don't think that it is necessarily worse in\ngeneral, but given that there is at least one case of a regression, I\nmade it more closely match the old behavior, and the regression\ndisappared.\n\nI improved costing by taking into account the actual number of\npartitions and the memory limits, at least for the first pass (in\nrecursive passes the number of partitions can change).\n\nAside from that, just some cleanup and rebasing.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 10 Feb 2020 15:57:21 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, 2020-02-10 at 15:57 -0800, Jeff Davis wrote:\n> Attaching latest version (combined logtape changes along with main\n> HashAgg patch).\n\nI ran a matrix of small performance tests to look for regressions.\n\nThe goal was to find out if the refactoring or additional branches\nintroduced by this patch caused regressions in in-memory HashAgg, Sort,\nor the JIT paths. Fortunately, I didn't find any.\n\nThis is *not* supposed to represent the performance benefits of the\npatch, only to see if I regressed somewhere else. 
The performance\nbenefits will be shown in the next round of tests.\n\nI tried with JIT on/off, work_mem='4MB' and also a value high enough to\nfit the entire working set, enable_hashagg on/off, and 4 different\ntables.\n\nThe 4 tables are (each containing 20 million tuples):\n\n t1k_20k_int4:\n 1K groups of 20K tuples each (randomly generated and ordered)\n t20m_1_int4:\n 20M groups of 1 tuple each (randomly generated and ordered)\n t1k_20k_text:\n the same as t1k_20k_int4 but cast to text (collation C.UTF-8)\n t20m_1_text:\n the same as t20m_1_int4 but cast to text (collation C.UTF-8)\n\nThe query is:\n\n select count(*) from (select i, count(*) from $TABLE group by i) s;\n\nI just did 3 runs in psql and took the median result.\n\nI ran against master (cac8ce4a, slightly older, before any of my\npatches went in) and my dev branch (attached patch applied against\n0973f560).\n\nResults were pretty boring, in a good way. All results within the\nnoise, and about as many results were better on dev than master as\nthere were better on master than dev.\n\nI also did some JIT-specific tests against only t1k_20k_int4. For that,\nthe hash table fits in memory anyway, so I didn't vary work_mem. 
The\nquery I ran included more aggregates to better test JIT:\n\n select i, sum(i), avg(i), min(i)\n from t1k_20k_int4\n group by i\n offset 1000000; -- offset so it doesn't return result\n\nI know these tests are simplistic, but I also think they represent a\nlot of areas where regressions could have potentially been introduced.\nIf someone else can find a regression, please let me know.\n\nThe new patch is basically just rebased -- a few other very minor\nchanges.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 12 Feb 2020 21:51:12 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-12 at 21:51 -0800, Jeff Davis wrote:\n> The new patch is basically just rebased -- a few other very minor\n> changes.\n\nI extracted out some minor refactoring of nodeAgg.c that I can commit\nseparately. That will make the main patch a little easier to review.\nAttached.\n\n* split build_hash_table() into two functions\n* separated hash calculation from lookup\n* changed lookup_hash_entry to return AggStatePerGroup directly instead\nof the TupleHashEntryData (which the caller only used to get the\nAggStatePerGroup, anyway)\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 13 Feb 2020 14:48:20 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Jan 8, 2020 at 2:38 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> This makes the assumption that all Aggrefs or GroupingFuncs are at the\n> top of the TargetEntry. That's not true, e.g.:\n>\n> select 0+sum(a) from foo group by b;\n>\n> I think find_aggregated_cols() and find_unaggregated_cols() should be\n> merged into one function that scans the targetlist once, and returns two\n> Bitmapsets. 
They're always used together, anyway.\n>\n>\nSo, I've attached a patch that does what Heikki recommended and gets\nboth aggregated and unaggregated columns in two different bitmapsets.\nI think it works for more cases than the other patch.\nI'm not sure it is the ideal interface, but, since there aren't many\nconsumers, I don't know.\nAlso, it needs some formatting/improved naming/etc.\n\nPer Jeff's comment in [1] I started looking into using the scanCols\npatch from the thread on extracting scanCols from PlannerInfo [2] to\nget the aggregated and unaggregated columns for this patch.\n\nSince we only make one bitmap for scanCols containing all of the\ncolumns that need to be scanned, there is no context about where the\ncolumns came from in the query.\nThat is, once the bit is set in the bitmapset, we have no way of\nknowing if that column was needed for aggregation or if it is filtered\nout immediately.\n\nWe could solve this by creating multiple bitmaps at the time that we\ncreate the scanCols field -- one for aggregated columns, one for\nunaggregated columns, and, potentially more if useful to other\nconsumers.\n\nThe initial problem with this is that we extract scanCols from the\nPlannerInfo->simple_rel_array and PlannerInfo->simple_rte_array.\nIf we wanted more context about where those columns were from in the\nquery, we would have to either change how we construct the scanCols or\nconstruct them early and add to the bitmap when adding columns to the\nsimple_rel_array and simple_rte_array (which, I suppose, is the same\nthing as changing how we construct scanCols).\n\nThis might decentralize the code for the benefit of one consumer.\nAlso, looping through the simple_rel_array and simple_rte_array a\ncouple times per query seems like it would add negligible overhead.\nI'm more hesitant to add code that, most likely, would involve a\nwalker to the codepath everybody uses if only agg will leverage the\ntwo distinct bitmapsets.\n\nOverall, I think it seems like a 
good idea to leverage scanCols for\ndetermining what columns hashagg needs to spill, but I can't think of\na way of doing it that doesn't seem bad. scanCols are currently just\nthat -- columns that will need to be scanned.\n\n[1]\nhttps://www.postgresql.org/message-id/e5566f7def33a9e9fdff337cca32d07155d7b635.camel%40j-davis.com\n[2]\nhttps://www.postgresql.org/message-id/flat/CAAKRu_Yj%3DQ_ZxiGX%2BpgstNWMbUJApEJX-imvAEwryCk5SLUebg%40mail.gmail.com\n\n-- \nMelanie Plageman", "msg_date": "Thu, 13 Feb 2020 18:01:38 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-12 at 21:51 -0800, Jeff Davis wrote:\n> On Mon, 2020-02-10 at 15:57 -0800, Jeff Davis wrote:\n> > Attaching latest version (combined logtape changes along with main\n> > HashAgg patch).\n> \n> I ran a matrix of small performance tests to look for regressions.\n\nI ran some more tests, this time comparing Hash Aggregation to\nSort+Group.\n\nSummary of trends:\n\n group key complexity : favors Hash\n group key size : favors Hash\n group size : favors Hash\n higher work_mem : favors Sort[1]\n data set size : favors Sort[1]\n number of aggregates : favors Hash[2]\n\n [1] I have closed the gap a bit with some post-experiment tuning.\n I have just begun to analyze this case so I think there is\n quite a bit more room for improvement.\n [2] Could use more exploration -- I don't have an explanation.\n\nData sets:\n t20m_1_int4: ~20 million groups of size ~1 (uniform)\n t1m_20_int4: ~1 million groups of size ~20 (uniform)\n t1k_20k_int4: ~1k groups of size ~20k (uniform)\n\n also, text versions of each of those with collate \"C.UTF-8\"\n\nResults:\n\n1. 
A general test to vary the group size, key type, and work_mem.\n\nQuery:\n select i from $TABLE group by i offset 100000000;\n\nwork_mem='4MB'\n\n+----------------+----------+-------------+--------------+\n| | sort(ms) | hashagg(ms) | sort/hashagg |\n+----------------+----------+-------------+--------------+\n| t20m_1_int4 | 11852 | 10640 | 1.11 |\n| t1m_20_int4 | 11108 | 8109 | 1.37 |\n| t1k_20k_int4 | 8575 | 2732 | 3.14 |\n| t20m_1_text | 80463 | 12902 | 6.24 |\n| t1m_20_text\n| 58586 | 9252 | 6.33 |\n| t1k_20k_text | 21781 | \n5739 | 3.80 |\n+----------------+----------+-------------+----\n----------+\n\nwork_mem='32MB'\n\n+----------------+----------+-------------+--------------+\n| | sort(ms) | hashagg(ms) | sort/hashagg |\n+----------------+----------+-------------+--------------+\n| t20m_1_int4 | 9656 | 11702 | 0.83 |\n| t1m_20_int4 | 8870 | 9804 | 0.90 |\n| t1k_20k_int4 | 6359 | 1852 | 3.43 |\n| t20m_1_text | 74266 | 14434 | 5.15 |\n| t1m_20_text | 56549 | 10180 | 5.55 |\n| t1k_20k_text | 21407 | 3989 | 5.37 |\n+----------------+----------+-------------+--------------+\n\n2. Test group key size\n\ndata set:\n 20m rows, four int4 columns.\n Columns a,b,c are all the constant value 1, forcing each\n comparison to look at all four columns.\n\nQuery: select a,b,c,d from wide group by a,b,c,d offset 100000000;\n\n work_mem='4MB'\n Sort : 30852ms\n HashAgg : 12343ms\n Sort/HashAgg : 2.50\n\nIn theory, if the first grouping column is highly selective, then Sort\nmay have a slight advantage because it can look at only the first\ncolumn, while HashAgg needs to look at all 4. But HashAgg only needs to\nperform this calculation once and it seems hard enough to show this in\npractice that I consider it an edge case. In \"normal\" cases, it appears\nthat more grouping columns significantly favors Hash Agg.\n\n3. 
Test number of aggregates\n\nData Set: same as for test #2 (group key size).\n\nQuery: select d, count(a),sum(b),avg(c),min(d)\n from wide group by d offset 100000000;\n\n work_mem='4MB'\n Sort : 22373ms\n HashAgg : 17338ms\n Sort/HashAgg : 1.29\n\nI don't have an explanation of why HashAgg is doing better here. Both\nof them are using JIT and essentially doing the same number of\nadvancements. This could use more exploration, but the effect isn't\nmajor.\n\n4. Test data size\n\nData 400 million rows of four random int8s. Group size of one.\n\nQuery: select a from t400m_1_int8 group by a offset 1000000000;\n\n work_mem='32MB'\n Sort : 300675ms\n HashAgg : 560740ms\n Sort/HashAgg : 0.54\n\nI tried increasing the max number of partitions and brought the HashAgg\nruntime down to 481985 (using 1024 partitions), which closes the gap to\n0.62. That's not too bad for HashAgg considering this is a group size\nof one with a simple group key. A bit more tuning might be able to\nclose the gap further.\n\nConclusion:\n\nHashAgg is winning in a lot of cases, and this will be an important\nimprovement for many workloads. Not only is it faster in a lot of\ncases, but it's also less risky. When an input has unknown group size,\nit's much easier for the planner to choose HashAgg -- a small downside\nand a big upside.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 14 Feb 2020 13:53:22 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi,\n\nI wanted to take a look at this thread and do a review, but it's not\nvery clear to me if the recent patches posted here are independent or\nhow exactly they fit together. 
I see\n\n1) hashagg-20200212-1.patch (2020/02/13 by Jeff)\n\n2) refactor.patch (2020/02/13 by Jeff)\n\n3) v1-0001-aggregated-unaggregated-cols-together.patch (2020/02/14 by\n Melanie)\n\nI suppose this also confuses the cfbot - it's probably only testing (3)\nas it's the last thing posted here, at least I think that's the case.\n\nAnd it fails:\n\nnodeAgg.c: In function ‘find_aggregated_cols_walker’:\nnodeAgg.c:1208:2: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n FindColsContext *find_cols_context = (FindColsContext *) context;\n ^\nnodeAgg.c: In function ‘find_unaggregated_cols_walker’:\nnodeAgg.c:1225:2: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n FindColsContext *find_cols_context = (FindColsContext *) context;\n ^\ncc1: all warnings being treated as errors\n<builtin>: recipe for target 'nodeAgg.o' failed\nmake[3]: *** [nodeAgg.o] Error 1\nmake[3]: *** Waiting for unfinished jobs....\n\n\nIt's probably a good idea to either start a separate thread for patches\nthat are only loosely related to the main topic, or always post the\nwhole patch series.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Feb 2020 19:57:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, 2020-02-18 at 19:57 +0100, Tomas Vondra wrote:\n> Hi,\n> \n> I wanted to take a look at this thread and do a review, but it's not\n> very clear to me if the recent patches posted here are independent or\n> how exactly they fit together. 
I see\n\nAttached latest version rebased on master.\n\n> It's probably a good idea to either start a separate thread for\n> patches\n> that are only loosely related to the main topic, or always post the\n> whole patch series.\n\nWill do, sorry for the confusion.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 18 Feb 2020 14:18:50 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Tue, Feb 18, 2020 at 10:57 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> I wanted to take a look at this thread and do a review, but it's not\n> very clear to me if the recent patches posted here are independent or\n> how exactly they fit together. I see\n>\n> 1) hashagg-20200212-1.patch (2020/02/13 by Jeff)\n>\n> 2) refactor.patch (2020/02/13 by Jeff)\n>\n> 3) v1-0001-aggregated-unaggregated-cols-together.patch (2020/02/14 by\n> Melanie)\n>\n> I suppose this also confuses the cfbot - it's probably only testing (3)\n> as it's the last thing posted here, at least I think that's the case.\n>\n> And it fails:\n>\n> nodeAgg.c: In function ‘find_aggregated_cols_walker’:\n> nodeAgg.c:1208:2: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> FindColsContext *find_cols_context = (FindColsContext *) context;\n> ^\n> nodeAgg.c: In function ‘find_unaggregated_cols_walker’:\n> nodeAgg.c:1225:2: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> FindColsContext *find_cols_context = (FindColsContext *) context;\n> ^\n> cc1: all warnings being treated as errors\n> <builtin>: recipe for target 'nodeAgg.o' failed\n> make[3]: *** [nodeAgg.o] Error 1\n> make[3]: *** Waiting for unfinished jobs....\n>\n>\nOops! Sorry, I would fix the code that those compiler warnings is\ncomplaining about, but that would confuse the cfbot more. 
So, I'll let\nJeff decide what he wants to do about the patch at all (e.g. include\nit in his overall patch or exclude it for now). Anyway it is trivial\nto move those declarations up, were he to decide to include it.\n\n-- \nMelanie Plageman", "msg_date": "Tue, 18 Feb 2020 15:31:22 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi,\n\nI've started reviewing the 20200218 version of the patch. In general it\nseems fine, but I have a couple minor comments and two crashes.\n\n\n1) explain.c currently does this:\n\nI wonder if we could show something for plain explain (without analyze).\nAt least the initial estimate of partitions, etc. I know not showing\nthose details until after execution is what e.g. sort does, but I find\nit a bit annoying.\n\nA related comment is that maybe this should report also the initial\nnumber of partitions, not just the total number. With just the total\nit's impossible to say if there were any repartitions, etc.\n\n\n2) The ExecBuildAggTrans comment should probably explain \"spilled\".\n\n\n3) I wonder if we need to invent new opcodes? Wouldn't it be simpler to\njust add a new flag to the agg_* structs instead? I haven't tried hacking\nthis, so maybe it's a silly idea.\n\n\n4) lookup_hash_entries says\n\n   /* check to see if we need to spill the tuple for this grouping set */\n\nBut that seems bogus, because AFAIK we can't spill tuples for grouping\nsets. So maybe this should say just \"grouping\"?\n\n\n5) Assert(nbuckets > 0);\n\nI was curious what happens in case of extreme skew, when a lot/all rows\nconsistently falls into a single partition. 
So I did this:\n\n create table t (a int, b real);\n\n insert into t select i, random()\n from generate_series(-2000000000, 2000000000) s(i)\n where mod(hashint4(i), 16384) = 0;\n\n analyze t;\n\n set work_mem = '64kB';\n set max_parallel_workers_per_gather = 0;\n set enable_sort = 0;\n\n explain select a, sum(b) from t group by a;\n\n QUERY PLAN \n ---------------------------------------------------------------\n HashAggregate (cost=23864.26..31088.52 rows=244631 width=8)\n Group Key: a\n -> Seq Scan on t (cost=0.00..3529.31 rows=244631 width=8)\n (3 rows)\n\nThis however quickly fails on this assert in BuildTupleHashTableExt (see\nbacktrace1.txt):\n\n Assert(nbuckets > 0);\n\nThe value is computed in hash_choose_num_buckets, and there seem to be\nno protections against returning bogus values like 0. So maybe we should\nreturn\n\n Min(nbuckets, 1024)\n\nor something like that, similarly to hash join. OTOH maybe it's simply\ndue to agg_refill_hash_table() passing bogus values to the function?\n\n\n6) Another thing that occurred to me was what happens to grouping sets,\nwhich we can't spill to disk. So I did this:\n\n create table t2 (a int, b int, c int);\n\n -- run repeatedly, until there are about 20M rows in t2 (1GB)\n with tx as (select array_agg(a) as a, array_agg(b) as b\n from (select a, b from t order by random()) foo),\n ty as (select array_agg(a) AS a\n from (select a from t order by random()) foo)\n insert into t2 select unnest(tx.a), unnest(ty.a), unnest(tx.b)\n from tx, ty;\n\n analyze t2;\n\nThis produces a table with two independent columns, skewed the same as\nthe column t.a. 
I don't know which of this actually matters, considering\ngrouping sets don't spill, so maybe the independence is sufficient and\nthe skew may be irrelevant?\n\nAnd then do this:\n\n set work_mem = '200MB';\n set max_parallel_workers_per_gather = 0;\n set enable_sort = 0;\n\n explain select a, b, sum(c) from t2 group by cube (a,b);;\n\n QUERY PLAN \n ---------------------------------------------------------------------\n MixedAggregate (cost=0.00..833064.27 rows=2756495 width=16)\n Hash Key: a, b\n Hash Key: a\n Hash Key: b\n Group Key: ()\n -> Seq Scan on t2 (cost=0.00..350484.44 rows=22750744 width=12)\n (6 rows)\n\nwhich fails with segfault at execution time:\n\n tuplehash_start_iterate (tb=0x18, iter=iter@entry=0x2349340)\n 870\t\tfor (i = 0; i < tb->size; i++)\n (gdb) bt\n #0 tuplehash_start_iterate (tb=0x18, iter=iter@entry=0x2349340)\n #1 0x0000000000654e49 in agg_retrieve_hash_table_in_memory ...\n\nThat's not surprising, because 0x18 pointer is obviously bogus. I guess\nthis is simply an offset 18B added to a NULL pointer?\n\nDisabling hashagg spill (setting both GUCs to off) makes no difference,\nbut on master it fails like this:\n\n ERROR: out of memory\n DETAIL: Failed on request of size 3221225472 in memory context \"ExecutorState\".\n\nwhich is annoying, but expected with an under-estimate and hashagg. And\nmuch better than just crashing the whole cluster.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 19 Feb 2020 20:16:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Feb 19, 2020 at 08:16:36PM +0100, Tomas Vondra wrote:\n> 4) lookup_hash_entries says\n> \n> /* check to see if we need to spill the tuple for this grouping set */\n> \n> But that seems bogus, because AFAIK we can't spill tuples for grouping\n> sets. 
So maybe this should say just \"grouping\"?\n\nAs I see it, it does traverse all hash sets, fill the hash table and\nspill if needed, for each tuple.\n\nThe segfault is probably related to this and MixedAggregate, I'm looking\ninto it.\n\n-- \nAdam Lee\n\n\n", "msg_date": "Thu, 20 Feb 2020 12:04:37 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Feb 19, 2020 at 08:16:36PM +0100, Tomas Vondra wrote:\n> 5) Assert(nbuckets > 0);\n> ... \n> This however quickly fails on this assert in BuildTupleHashTableExt (see\n> backtrace1.txt):\n> \n> Assert(nbuckets > 0);\n> \n> The value is computed in hash_choose_num_buckets, and there seem to be\n> no protections against returning bogus values like 0. So maybe we should\n> return\n> \n> Min(nbuckets, 1024)\n> \n> or something like that, similarly to hash join. OTOH maybe it's simply\n> due to agg_refill_hash_table() passing bogus values to the function?\n> \n> \n> 6) Another thing that occurred to me was what happens to grouping sets,\n> which we can't spill to disk. So I did this:\n> \n> create table t2 (a int, b int, c int);\n> \n> -- run repeatedly, until there are about 20M rows in t2 (1GB)\n> with tx as (select array_agg(a) as a, array_agg(b) as b\n> from (select a, b from t order by random()) foo),\n> ty as (select array_agg(a) AS a\n> from (select a from t order by random()) foo)\n> insert into t2 select unnest(tx.a), unnest(ty.a), unnest(tx.b)\n> from tx, ty;\n> \n> analyze t2;\n> ...\n> \n> which fails with segfault at execution time:\n> \n> tuplehash_start_iterate (tb=0x18, iter=iter@entry=0x2349340)\n> 870\t\tfor (i = 0; i < tb->size; i++)\n> (gdb) bt\n> #0 tuplehash_start_iterate (tb=0x18, iter=iter@entry=0x2349340)\n> #1 0x0000000000654e49 in agg_retrieve_hash_table_in_memory ...\n> \n> That's not surprising, because 0x18 pointer is obviously bogus. 
I guess\n> this is simply an offset 18B added to a NULL pointer?\n\nI did some investigation. Did you disable the assert when this panic\nhappened? If so, it's the same issue as \"5) nbucket == 0\", which passes a\nzero size to the allocator when creating the hash table that ends up at 0x18.\n\nSorry, my testing env is acting up right now; I haven't reproduced it yet.\n\n-- \nAdam Lee\n\n\n", "msg_date": "Thu, 20 Feb 2020 13:47:19 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-19 at 20:16 +0100, Tomas Vondra wrote:\n> 1) explain.c currently does this:\n> \n> I wonder if we could show something for plain explain (without\n> analyze).\n> At least the initial estimate of partitions, etc. I know not showing\n> those details until after execution is what e.g. sort does, but I\n> find\n> it a bit annoying.\n\nLooks like you meant to include some example explain output, but I\nthink I understand what you mean. I'll look into it.\n\n> 2) The ExecBuildAggTrans comment should probably explain \"spilled\".\n\nDone.\n\n> 3) I wonder if we need to invent new opcodes? Wouldn't it be simpler\n> to\n> just add a new flag to the agg_* structs instead? I haven't tried\n> hacking\n> this, so maybe it's a silly idea.\n\nThere was a reason I didn't do it this way, but I'm trying to remember\nwhy. I'll look into this, also.\n\n> 4) lookup_hash_entries says\n> \n>    /* check to see if we need to spill the tuple for this grouping\n> set */\n> \n> But that seems bogus, because AFAIK we can't spill tuples for\n> grouping\n> sets. So maybe this should say just \"grouping\"?\n\nYes, we can spill tuples for grouping sets. Unfortunately, I think my\ntests (which covered this case previously) don't seem to be exercising\nthat path well now. 
I am going to improve my tests, too.\n\n> 5) Assert(nbuckets > 0);\n\nI did not repro this issue, but I did set a floor of 256 buckets.\n\n> which fails with segfault at execution time:\n\nFixed. I was resetting the hash table context without setting the\npointers to NULL.\n\nThanks! I attached a new, rebased version. The fixes are quick fixes\nfor now and I will revisit them after I improve my test cases (which\nmight find more issues).\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 20 Feb 2020 16:56:38 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi,\n\nOn 2020-02-19 20:16:36 +0100, Tomas Vondra wrote:\n> 3) I wonder if we need to invent new opcodes? Wouldn't it be simpler to\n> just add a new flag to the agg_* structs instead? I haven't tried hacking\n> this, so maybe it's a silly idea.\n\nNew opcodes don't really cost that much - it's a jump table based\ndispatch already (yes, it increases the table size slightly, but not by\nmuch). But adding branches inside opcode implementation does add cost -\nand we're already bottlenecked by stalls.\n\nI assume code duplication is your primary concern here?\n\nIf so, I think the patch 0008 in\nhttps://postgr.es/m/20191023163849.sosqbfs5yenocez3%40alap3.anarazel.de\nwould improve the situation. I'll try to rebase that onto master.\n\nI'd also like to apply something like 0013 from that thread, I find the\nwhole curperagg, select_current_set, curaggcontext logic confusing as\nhell. I'd so far planned to put this on the backburner until this patch\nhas been committed, to avoid breaking it. 
But perhaps that's not the\nright call?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 Feb 2020 12:22:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, 2020-02-21 at 12:22 -0800, Andres Freund wrote:\n> I'd also like to apply something like 0013 from that thread, I find\n> the\n> whole curperagg, select_current_set, curaggcontext logic confusing as\n> hell. I'd so far planned to put this on the backburner until this\n> patch\n> has been committed, to avoid breaking it. But perhaps that's not the\n> right call?\n\nAt least for now, I appreciate you holding off on those a bit.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 22 Feb 2020 09:55:26 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi,\n\nOn 2020-02-22 09:55:26 -0800, Jeff Davis wrote:\n> On Fri, 2020-02-21 at 12:22 -0800, Andres Freund wrote:\n> > I'd also like to apply something like 0013 from that thread, I find\n> > the\n> > whole curperagg, select_current_set, curaggcontext logic confusing as\n> > hell. I'd so far planned to put this on the backburner until this\n> > patch\n> > has been committed, to avoid breaking it. But perhaps that's not the\n> > right call?\n> \n> At least for now, I appreciate you holding off on those a bit.\n\nBoth patches, or just 0013? Seems the earlier one might make the\naddition of the opcodes you add less verbose?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 22 Feb 2020 10:00:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Sat, 2020-02-22 at 10:00 -0800, Andres Freund wrote:\n> Both patches, or just 0013? Seems the earlier one might make the\n> addition of the opcodes you add less verbose?\n\nJust 0013, thank you. 
0008 looks like it will simplify things.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 22 Feb 2020 11:02:16 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-19 at 20:16 +0100, Tomas Vondra wrote:\n> 5) Assert(nbuckets > 0);\n\n...\n\n> 6) Another thing that occurred to me was what happens to grouping\n> sets,\n> which we can't spill to disk. So I did this:\n\n...\n\n> which fails with segfault at execution time:\n\nThe biggest problem was that my grouping sets test was not testing\nmultiple hash tables spilling, so a couple bugs crept in. I fixed them,\nthank you.\n\nTo fix the tests, I also had to fix the GUCs and the way the planner\nuses them with my patch. In master, grouping sets are planned by\ngenerating a path that tries to do as many grouping sets with hashing\nas possible (limited by work_mem). But with my patch, the notion of\nfitting hash tables in work_mem is not necessarily important. If we\nignore work_mem during path generation entirely (and only consider it\nduring costing and execution), it will change quite a few plans and\nundermine the concept of mixed aggregates entirely. That may be a good\nthing to do eventually as a simplification, but for now it seems like\ntoo much, so I am preserving the notion of trying to fit hash tables in\nwork_mem to create mixed aggregates.\n\nBut that created the testing problem: I need a reliable way to get\ngrouping sets with several hash tables in memory that are all spilling,\nbut the planner is trying to avoid exactly that. So, I am introducing a\nnew GUC called enable_groupingsets_hash_disk (better name suggestions\nwelcome), defaulting it to \"off\" (and turned on during the test).\n\nAdditionally, I removed the other GUCs I introduced in earlier versions\nof this patch. 
They were basically designed around the idea to revert\nback to the previous hash aggregation behavior if desired (by setting\nenable_hashagg_spill=false and hashagg_mem_overflow=true). That makes\nsome sense, but that was already covered pretty well by existing GUCs.\nIf you want to use HashAgg without spilling, just set work_mem higher;\nand if you want to avoid the planner from choosing HashAgg at all, you\nset enable_hashagg=false. So I just got rid of enable_hashagg_spill and\nhashagg_mem_overflow.\n\nI didn't forget about your explain-related suggestions. I'll address\nthem in the next patch.\n\nRegards,\n\tJeff Davis", "msg_date": "Sat, 22 Feb 2020 11:59:59 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Feb 20, 2020 at 04:56:38PM -0800, Jeff Davis wrote:\n>On Wed, 2020-02-19 at 20:16 +0100, Tomas Vondra wrote:\n>> 1) explain.c currently does this:\n>>\n>> I wonder if we could show something for plain explain (without\n>> analyze).\n>> At least the initial estimate of partitions, etc. I know not showing\n>> those details until after execution is what e.g. sort does, but I\n>> find\n>> it a bit annoying.\n>\n>Looks like you meant to include some example explain output, but I\n>think I understand what you mean. I'll look into it.\n>\n\nOh, right. What I wanted to include is this code snippet:\n\n if (es->analyze)\n show_hashagg_info((AggState *) planstate, es);\n\nbut I forgot to do the copy-paste.\n\n>> 2) The ExecBuildAggTrans comment should probably explain \"spilled\".\n>\n>Done.\n>\n>> 3) I wonder if we need to invent new opcodes? Wouldn't it be simpler\n>> to\n>> just add a new flag to the agg_* structs instead? I haven't tried\n>> hacking\n>> this, so maybe it's a silly idea.\n>\n>There was a reason I didn't do it this way, but I'm trying to remember\n>why. 
I'll look into this, also.\n>\n>> 4) lookup_hash_entries says\n>>\n>> /* check to see if we need to spill the tuple for this grouping\n>> set */\n>>\n>> But that seems bogus, because AFAIK we can't spill tuples for\n>> grouping\n>> sets. So maybe this should say just \"grouping\"?\n>\n>Yes, we can spill tuples for grouping sets. Unfortunately, I think my\n>tests (which covered this case previously) don't seem to be exercising\n>that path well now. I am going to improve my tests, too.\n>\n>> 5) Assert(nbuckets > 0);\n>\n>I did not repro this issue, but I did set a floor of 256 buckets.\n>\n\nHmmm. I can reproduce it reliably (on the patch from 2020/02/18) but it\nseems to only happen when the table is large enough. For me, doing\n\n   insert into t select * from t;\n\nuntil the table has ~7.8M rows does the trick. I can't reproduce it on\nthe current patch, so ensuring there are at least 256 buckets seems to\nhave helped. If I add an elog() to print nbuckets at the beginning of\nhash_choose_num_buckets, I see it starts as 0 from time to time (and\nthen gets tweaked to 256).\n\nI suppose this is due to how the input data is generated, i.e. all hash\nvalues should fall to the first batch, so all other batches should be\nempty. But in agg_refill_hash_table we use the number of input tuples as\na starting point, which is how we get nbuckets = 0.\n\nI think enforcing nbuckets to be at least 256 is OK.\n\n>> which fails with segfault at execution time:\n>\n>Fixed. I was resetting the hash table context without setting the\n>pointers to NULL.\n>\n\nYep, can confirm it's no longer crashing for me.\n\n>Thanks! I attached a new, rebased version. 
The fixes are quick fixes\n>for now and I will revisit them after I improve my test cases (which\n>might find more issues).\n>\n\nOK, sounds good.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 23 Feb 2020 01:42:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On 2020-02-22 11:02:16 -0800, Jeff Davis wrote:\n> On Sat, 2020-02-22 at 10:00 -0800, Andres Freund wrote:\n> > Both patches, or just 0013? Seems the earlier one might make the\n> > addition of the opcodes you add less verbose?\n> \n> Just 0013, thank you. 0008 looks like it will simplify things.\n\nPushed 0008.\n\n\n", "msg_date": "Mon, 24 Feb 2020 15:29:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Mon, 2020-02-24 at 15:29 -0800, Andres Freund wrote:\n> On 2020-02-22 11:02:16 -0800, Jeff Davis wrote:\n> > On Sat, 2020-02-22 at 10:00 -0800, Andres Freund wrote:\n> > > Both patches, or just 0013? Seems the earlier one might make the\n> > > addition of the opcodes you add less verbose?\n> > \n> > Just 0013, thank you. 0008 looks like it will simplify things.\n> \n> Pushed 0008.\n\nRebased on your change. This simplified the JIT and interpretation code\nquite a bit.\n\nAlso:\n* caching the compiled expressions so I can switch between the variants\ncheaply\n* added \"Planned Partitions\" to explain output\n* included tape buffers in the \"Memory Used\" output\n* Simplified the way I try to track memory usage and trigger spilling. 
\n* Reset hash tables always rather than rebuilding them from scratch.\n\nI will do another round of performance tests and see if anything\nchanged from last time.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 26 Feb 2020 19:14:18 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, 2020-02-26 at 19:14 -0800, Jeff Davis wrote:\n> Rebased on your change. This simplified the JIT and interpretation\n> code\n> quite a bit.\n\nAttached another version.\n\n * tweaked EXPLAIN output some more\n * rebased and cleaned up\n * Added back the enable_hashagg_disk flag (defaulting to on). I've\ngone back and forth on this, but it seems like a good idea to have it\nthere. So now there are a total of two GUCs: enable_hashagg_disk and\nenable_groupingsets_hash_disk\n\nUnless I (or someone else) finds something significant, this is close\nto commit.\n\nRegards,\n Jeff Davis", "msg_date": "Wed, 11 Mar 2020 23:55:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Mar 11, 2020 at 11:55:35PM -0700, Jeff Davis wrote:\n> * tweaked EXPLAIN output some more\n> Unless I (or someone else) finds something significant, this is close\n> to commit.\n\nThanks for working on this ; I finally made a pass over the patch.\n\n+++ b/doc/src/sgml/config.sgml\n+ <term><varname>enable_groupingsets_hash_disk</varname> (<type>boolean</type>)\n+ Enables or disables the query planner's use of hashed aggregation for\n+ grouping sets when the size of the hash tables is expected to exceed\n+ <varname>work_mem</varname>. See <xref\n+ linkend=\"queries-grouping-sets\"/>. Note that this setting only\n+ affects the chosen plan; execution time may still require using\n+ disk-based hash aggregation. ...\n...\n+ <term><varname>enable_hashagg_disk</varname> (<type>boolean</type>)\n+ ... 
This only affects the planner choice;\n+ execution time may still require using disk-based hash\n+ aggregation. The default is <literal>on</literal>.\n\nI don't understand what's meant by \"the chosen plan\".\nShould it say, \"at execution ...\" instead of \"execution time\" ?\n\n+ Enables or disables the query planner's use of hashed aggregation plan\n+ types when the memory usage is expected to exceed\n\nEither remove \"plan types\" for consistency with enable_groupingsets_hash_disk,\nOr add it there. Maybe it should say \"when the memory usage would OTHERWISE BE\nexpected to exceed..\"\n\n+show_hashagg_info(AggState *aggstate, ExplainState *es)\n+{\n+\tAgg\t\t*agg\t = (Agg *)aggstate->ss.ps.plan;\n+\tlong\t memPeakKb = (aggstate->hash_mem_peak + 1023) / 1024;\n\nI see this partially duplicates my patch [0] to show memory stats for (at\nAndres' suggestion) all of execGrouping.c. Perhaps you'd consider naming the\nfunction something more generic in case my patch progresses ? I'm using:\n|show_tuplehash_info(HashTableInstrumentation *inst, ExplainState *es);\n\nMine also shows:\n|ExplainPropertyInteger(\"Original Hash Buckets\", NULL,\n|ExplainPropertyInteger(\"Peak Memory Usage (hashtable)\", \"kB\",\n|ExplainPropertyInteger(\"Peak Memory Usage (tuples)\", \"kB\",\n\n[0] https://www.postgresql.org/message-id/20200306213310.GM684%40telsasoft.com\n\nYou added hash_mem_peak and hash_batches_used to struct AggState.\nIn my 0001 patch, I added instrumentation to struct TupleHashTable, and in my\n0005 patch I move it into AggStatePerHashData and other State nodes.\n\n+\tif (from_tape)\n+\t\tpartition_mem += HASHAGG_READ_BUFFER_SIZE;\n+\tpartition_mem = npartitions * HASHAGG_WRITE_BUFFER_SIZE;\n\n=> That looks wrong ; should say += ?\n\n+\t\t\tgettext_noop(\"Enables the planner's use of hashed aggregation plans that are expected to exceed work_mem.\"),\n\nshould say:\n\"when the memory usage is otherwise be expected to exceed..\"\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 
12 Mar 2020 16:01:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, 2020-03-12 at 16:01 -0500, Justin Pryzby wrote:\n> I don't understand what's meant by \"the chosen plan\".\n> Should it say, \"at execution ...\" instead of \"execution time\" ?\n\nI removed that wording; hopefully it's more clear without it?\n\n> Either remove \"plan types\" for consistency with\n> enable_groupingsets_hash_disk,\n> Or add it there. Maybe it should say \"when the memory usage would\n> OTHERWISE BE\n> expected to exceed..\"\n\nI added \"plan types\".\n\nI don't think \"otherwise be...\" would quite work there. \"Otherwise\"\nsounds to me like it's referring to another plan type (e.g.\nSort+GroupAgg), and that doesn't fit.\n\nIt's probably best to leave that level of detail out of the docs. I\nthink the main use case for enable_hashagg_disk is for users who\nexperience some plan changes and want the old behavior which favors\nSort when there are a lot of groups.\n\n> +show_hashagg_info(AggState *aggstate, ExplainState *es)\n> +{\n> +\tAgg\t\t*agg\t = (Agg *)aggstate->ss.ps.plan;\n> +\tlong\t memPeakKb = (aggstate->hash_mem_peak + 1023) / 1024;\n> \n> I see this partially duplicates my patch [0] to show memory stats for\n\n...\n\n> You added hash_mem_peak and hash_batches_used to struct AggState.\n> In my 0001 patch, I added instrumentation to struct TupleHashTable\n\nI replied in that thread and I'm not sure that tracking the memory in\nthe TupleHashTable is the right approach. The group keys and the\ntransition state data can't be estimated easily that way. Perhaps we\ncan do that if the THT owns the memory contexts (and can call\nMemoryContextMemAllocated()), rather than using passed-in ones, but\nthat might require more discussion. 
(I'm open to that idea, by the\nway.)\n\nAlso, my patch also considers the buffer space, so would that be a\nthird memory number?\n\nFor now, I think I'll leave the way I report it in a simpler form and\nwe can change it later as we sort out these details. That leaves mine\nspecific to HashAgg, but we can always refactor it later.\n\nI did change my code to put the metacontext in a child context of its\nown so that I could call MemoryContextMemAllocated() on it to include\nit in the memory total, and that will make reporting it separately\neasier when we want to do so.\n\n> +\tif (from_tape)\n> +\t\tpartition_mem += HASHAGG_READ_BUFFER_SIZE;\n> +\tpartition_mem = npartitions * HASHAGG_WRITE_BUFFER_SIZE;\n> \n> => That looks wrong ; should say += ?\n\nGood catch! Fixed.\n\nRegards,\n\tJeff Davis", "msg_date": "Sun, 15 Mar 2020 16:05:37 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "\nCommitted.\n\nThere's some future work that would be nice (some of these are just\nideas and may not be worth it):\n\n* Refactor MemoryContextMemAllocated() to be a part of\nMemoryContextStats(), but allow it to avoid walking through the blocks\nand freelists.\n\n* Improve the choice of the initial number of buckets in the hash\ntable. For this patch, I tried to preserve the existing behavior of\nestimating the number of groups and trying to initialize with that many\nbuckets. But my performance tests seem to indicate this is not the best\napproach. More work is needed to find what we should really do here.\n\n* For workloads that are not in work_mem *or* system memory, and need\nto actually go to storage, I see poor CPU utilization because it's not\neffectively overlapping CPU and IO work. 
Perhaps buffering or readahead\nchanges can improve this, or async IO (even better).\n\n* Project unnecessary attributes away before spilling tuples to disk.\n\n* Improve logtape.c API so that the caller doesn't need to manage a\nbunch of tape numbers.\n\n* Improve estimate of the hash entry size. This patch doesn't change\nthe way the planner estimates it, but I observe that actual size as\nseen at runtime is significantly different. This is connected to the\ninitial number of buckets for the hash table.\n\n* In recursive steps, I don't have a good estimate for the number of\ngroups, so I just estimate it as the number of tuples in that spill\ntape (which is pessimistic). That could be improved by doing a real\ncardinality estimate as the tuples are spilling (perhaps with HLL?).\n\n* Many aggregates with pass-by-ref transition states don't provide a\ngreat aggtransspace. We should consider doing something smarter, like\nhaving negative numbers represent a number that should be multiplied by\nthe size of the group (e.g. 
ARRAY_AGG would have a size dependent on\nthe group size, not a constant).\n\n* If we want to handle ARRAY_AGG (and the like) well, we can consider\nspilling the partial states in the hash table when the memory is full.\nThat would add a fair amount of complexity because there would be two\ntypes of spilled data (tuples and partial states), but it could be\nuseful in some cases.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 18 Mar 2020 16:35:57 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Wed, Mar 18, 2020 at 04:35:57PM -0700, Jeff Davis wrote:\n>\n>Committed.\n>\n\n\\\\o/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 00:54:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Sun, Mar 15, 2020 at 04:05:37PM -0700, Jeff Davis wrote:\n> > +\tif (from_tape)\n> > +\t\tpartition_mem += HASHAGG_READ_BUFFER_SIZE;\n> > +\tpartition_mem = npartitions * HASHAGG_WRITE_BUFFER_SIZE;\n> > \n> > => That looks wrong ; should say += ?\n> \n> Good catch! Fixed.\n\n> +++ b/src/backend/executor/nodeAgg.c\n> @@ -2518,9 +3499,36 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> \t */\n> \tif (use_hashing)\n> \t{\n> +\t\tPlan *outerplan = outerPlan(node);\n> +\t\tuint64\ttotalGroups = 0;\n> +\t\tfor (i = 0; i < aggstate->num_hashes; i++)\n> +\t\t\ttotalGroups = aggstate->perhash[i].aggnode->numGroups;\n> +\n> +\t\thash_agg_set_limits(aggstate->hashentrysize, totalGroups, 0,\n\nI realize that I missed the train but .. 
that looks like another += issue?\n\nAlso, Andres was educating me about the range of behavior of \"long\" type, and I\nsee now while rebasing that you did the same thing.\nhttps://www.postgresql.org/message-id/20200306175859.d56ohskarwldyrrw%40alap3.anarazel.de\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 19 Mar 2020 01:42:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hi,\n\nI happen to notice that \"set enable_sort to false\" cannot guarantee the\nplanner to use hashagg in test groupingsets.sql,\nthe following comparing results of sortagg and hashagg seems to have no\nmeaning.\n\nThanks,\nPengzhou\n\nOn Thu, Mar 19, 2020 at 7:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n>\n> Committed.\n>\n> There's some future work that would be nice (some of these are just\n> ideas and may not be worth it):\n>\n> * Refactor MemoryContextMemAllocated() to be a part of\n> MemoryContextStats(), but allow it to avoid walking through the blocks\n> and freelists.\n>\n> * Improve the choice of the initial number of buckets in the hash\n> table. For this patch, I tried to preserve the existing behavior of\n> estimating the number of groups and trying to initialize with that many\n> buckets. But my performance tests seem to indicate this is not the best\n> approach. More work is needed to find what we should really do here.\n>\n> * For workloads that are not in work_mem *or* system memory, and need\n> to actually go to storage, I see poor CPU utilization because it's not\n> effectively overlapping CPU and IO work. Perhaps buffering or readahead\n> changes can improve this, or async IO (even better).\n>\n> * Project unnecessary attributes away before spilling tuples to disk.\n>\n> * Improve logtape.c API so that the caller doesn't need to manage a\n> bunch of tape numbers.\n>\n> * Improve estimate of the hash entry size. 
This patch doesn't change\n> the way the planner estimates it, but I observe that actual size as\n> seen at runtime is significantly different. This is connected to the\n> initial number of buckets for the hash table.\n>\n> * In recursive steps, I don't have a good estimate for the number of\n> groups, so I just estimate it as the number of tuples in that spill\n> tape (which is pessimistic). That could be improved by doing a real\n> cardinality estimate as the tuples are spilling (perhaps with HLL?).\n>\n> * Many aggregates with pass-by-ref transition states don't provide a\n> great aggtransspace. We should consider doing something smarter, like\n> having negative numbers represent a number that should be multiplied by\n> the size of the group (e.g. ARRAY_AGG would have a size dependent on\n> the group size, not a constant).\n>\n> * If we want to handle ARRAY_AGG (and the like) well, we can consider\n> spilling the partial states in the hash table whem the memory is full.\n> That would add a fair amount of complexity because there would be two\n> types of spilled data (tuples and partial states), but it could be\n> useful in some cases.\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n>\n>\n", "msg_date": "Fri, 20 Mar 2020 13:20:33 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, Mar 20, 2020 at 1:20 PM Pengzhou Tang <ptang@pivotal.io> wrote:\n\n> Hi,\n>\n> I happen to notice that \"set enable_sort to false\" cannot guarantee the\n> planner to use hashagg in test groupingsets.sql,\n> the following comparing results of sortagg and hashagg seems to have no\n> meaning.\n>\n>\nPlease forget my comment, I should set enable_groupingsets_hash_disk too.\n", "msg_date": "Fri, 20 Mar 2020 13:26:43 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "Hello,\n\nWhen calculating the disk costs of hash aggregation that spills to disk,\nthere is something wrong with how we determine depth:\n\n> depth = ceil( log(nbatches - 1) / log(num_partitions) );\n\nIf nbatches is some number between 1.0 and 2.0, we would have a negative\ndepth. As a result, we may have a negative cost for hash aggregation\nplan node, as described in [1].\n\nI don't think 'log(nbatches - 1)' is what we want here. 
Should it be\njust '(nbatches - 1)'?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_maqdBnRR4x01pDpoV-CiQ%2BRvMQaPm4JoTPbA%3DmZmhMw%40mail.gmail.com\n\nThanks\nRichard\n\nOn Thu, Mar 19, 2020 at 7:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n>\n> Committed.\n>\n> There's some future work that would be nice (some of these are just\n> ideas and may not be worth it):\n>\n> * Refactor MemoryContextMemAllocated() to be a part of\n> MemoryContextStats(), but allow it to avoid walking through the blocks\n> and freelists.\n>\n> * Improve the choice of the initial number of buckets in the hash\n> table. For this patch, I tried to preserve the existing behavior of\n> estimating the number of groups and trying to initialize with that many\n> buckets. But my performance tests seem to indicate this is not the best\n> approach. More work is needed to find what we should really do here.\n>\n> * For workloads that are not in work_mem *or* system memory, and need\n> to actually go to storage, I see poor CPU utilization because it's not\n> effectively overlapping CPU and IO work. Perhaps buffering or readahead\n> changes can improve this, or async IO (even better).\n>\n> * Project unnecessary attributes away before spilling tuples to disk.\n>\n> * Improve logtape.c API so that the caller doesn't need to manage a\n> bunch of tape numbers.\n>\n> * Improve estimate of the hash entry size. This patch doesn't change\n> the way the planner estimates it, but I observe that actual size as\n> seen at runtime is significantly different. This is connected to the\n> initial number of buckets for the hash table.\n>\n> * In recursive steps, I don't have a good estimate for the number of\n> groups, so I just estimate it as the number of tuples in that spill\n> tape (which is pessimistic). 
That could be improved by doing a real\n> cardinality estimate as the tuples are spilling (perhaps with HLL?).\n>\n> * Many aggregates with pass-by-ref transition states don't provide a\n> great aggtransspace. We should consider doing something smarter, like\n> having negative numbers represent a number that should be multiplied by\n> the size of the group (e.g. ARRAY_AGG would have a size dependent on\n> the group size, not a constant).\n>\n> * If we want to handle ARRAY_AGG (and the like) well, we can consider\n> spilling the partial states in the hash table whem the memory is full.\n> That would add a fair amount of complexity because there would be two\n> types of spilled data (tuples and partial states), but it could be\n> useful in some cases.\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n>\n>\n", "msg_date": "Thu, 26 Mar 2020 17:56:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Thu, Mar 26, 2020 at 05:56:56PM +0800, Richard Guo wrote:\n>Hello,\n>\n>When calculating the disk costs of hash aggregation that spills to disk,\n>there is something wrong with how we determine depth:\n>\n>> depth = ceil( log(nbatches - 1) / log(num_partitions) );\n>\n>If nbatches is some number between 1.0 and 2.0, we would have a negative\n>depth. As a result, we may have a negative cost for hash aggregation\n>plan node, as described in [1].\n>\n>I don't think 'log(nbatches - 1)' is what we want here. Should it be\n>just '(nbatches - 1)'?\n>\n\nI think using log() is correct, but why should we allow fractional\nnbatches values between 1.0 and 2.0? You either have 1 batch or 2\nbatches, you can't have 1.5 batches. 
So I think the issue is here\n\n nbatches = Max((numGroups * hashentrysize) / mem_limit,\n numGroups / ngroups_limit );\n\nand we should probably do\n\n nbatches = ceil(nbatches);\n\nright after it.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Mar 2020 02:31:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-Bounded Hash Aggregation" }, { "msg_contents": "On Fri, 2020-03-27 at 02:31 +0100, Tomas Vondra wrote:\n> On Thu, Mar 26, 2020 at 05:56:56PM +0800, Richard Guo wrote:\n> > If nbatches is some number between 1.0 and 2.0, we would have a\n> > negative\n> > depth. As a result, we may have a negative cost for hash\n> > aggregation\n> > plan node, as described in [1].\n> > numGroups / ngroups_limit );\n> \n> and we should probably do\n> \n> nbatches = ceil(nbatches);\n> \n\nThank you both. I also protected against nbatches == 0 (shouldn't\nhappen), and against num_partitions <= 1. That allowed me to remove the\nconditional and simplify a bit.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 28 Mar 2020 12:29:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Memory-Bounded Hash Aggregation" } ]
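The negative-depth arithmetic discussed in the thread above can be checked numerically. Below is a minimal sketch (Python, with illustrative function names that are not from the PostgreSQL source; the real costing code is C) showing why a fractional nbatches between 1.0 and 2.0 yields a negative depth, and how rounding nbatches up first, as Tomas suggests, together with the guards Jeff mentions, avoids it:

```python
import math

def estimate_depth(nbatches, num_partitions):
    # The costing formula as quoted in the thread:
    #   depth = ceil( log(nbatches - 1) / log(num_partitions) )
    # For 1.0 < nbatches < 2.0, log(nbatches - 1) is negative,
    # so the estimated depth (and hence the disk cost) goes negative.
    return math.ceil(math.log(nbatches - 1) / math.log(num_partitions))

def estimate_depth_fixed(nbatches, num_partitions):
    # Sketch of the fix: you either have 1 batch or 2, never 1.5,
    # so round the fractional estimate up before taking the log,
    # and guard against nbatches <= 1 and num_partitions <= 1.
    nbatches = math.ceil(nbatches)
    if nbatches <= 1 or num_partitions <= 1:
        return 0
    return max(math.ceil(math.log(nbatches - 1) / math.log(num_partitions)), 0)

print(estimate_depth(1.1, 4))        # -1: the negative depth Richard observed
print(estimate_depth_fixed(1.1, 4))  # 0 after rounding nbatches up to 2
```

With the rounding in place, any fractional batch estimate maps onto a whole number of batches and the depth estimate stays nonnegative.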
[ { "msg_contents": "Hi,\n\nIn the below testcase, when I changed the storage option for the root\npartition, the newly attached partition does not include the changed\nstorage option.\nIs this an expected behavior?\n\npostgres=# CREATE TABLE tab1 (c1 INT, c2 text) PARTITION BY RANGE(c1);\nCREATE TABLE\npostgres=# create table tt_p1 as select * from tab1 where 1=2;\nSELECT 0\npostgres=# alter table tab1 alter COLUMN c2 set storage main;\nALTER TABLE\npostgres=#\npostgres=# alter table tab1 attach partition tt_p1 for values from (20) to\n(30);\nALTER TABLE\npostgres=# \\d+ tab1\n Partitioned table \"public.tab1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target\n| Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain |\n |\n c2 | text | | | | main |\n |\nPartition key: RANGE (c1)\nPartitions: tt_p1 FOR VALUES FROM (20) TO (30)\n\npostgres=# \\d+ tt_p1\n Table \"public.tt_p1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n--------+---------+-----------+----------+---------+----------+--------------+-------------\n c1 | integer | | | | plain |\n |\n c2 | text | | | | extended |\n |\nPartition of: tab1 FOR VALUES FROM (20) TO (30)\nPartition constraint: ((c1 IS NOT NULL) AND (c1 >= 20) AND (c1 < 30))\nAccess method: heap\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\n", "msg_date": "Tue, 2 Jul 2019 13:41:47 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Attached partition not considering altered column properties of root\n partition." }, { "msg_contents": "Hi Prabhat,\n\nOn Tue, Jul 2, 2019 at 5:12 PM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> In below testcase when I changed the staorage option for root partition, newly attached partition not including the changed staorage option.\n> Is this an expected behavior?\n\nThanks for the report. This seems like a bug. Documentation claims\nthat the child tables inherit column storage options from the parent\ntable. That's actually enforced in only some cases.\n\n1. 
If you create the child table as a child to begin with (that is,\nnot attach it as child after the fact):\n\ncreate table parent (a text);\ncreate table child () inherits (parent);\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n parent │ a │ x\n child │ a │ x\n(2 rows)\n\n\n2. If you change the parent's column's storage option, child's column\nis recursively changed.\n\nalter table parent alter a set storage main;\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n parent │ a │ m\n child │ a │ m\n(2 rows)\n\nHowever, we fail to enforce the rule when the child is attached after the fact:\n\ncreate table child2 (a text);\nalter table child2 inherit parent;\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('parent'::regclass, 'child'::regclass,\n'child2'::regclass) and attname = 'a';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n parent │ a │ m\n child │ a │ m\n child2 │ a │ x\n(3 rows)\n\nTo fix this, MergeAttributesIntoExisting() should check that the\nattribute options of a child don't conflict with the parent, which the\nattached patch implements. Note that partitioning uses the same code\nas inheritance, so the fix applies to it too. 
After the patch:\n\ncreate table p (a int, b text) partition by list (a);\ncreate table p1 partition of p for values in (1);\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n p │ b │ x\n p1 │ b │ x\n(2 rows)\n\nalter table p alter b set storage main;\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n p │ b │ m\n p1 │ b │ m\n(2 rows)\n\ncreate table p2 (like p);\nselect attrelid::regclass, attname, attstorage from pg_attribute where\nattrelid in ('p'::regclass, 'p1'::regclass, 'p2'::regclass) and\nattname = 'b';\n attrelid │ attname │ attstorage\n──────────┼─────────┼────────────\n p │ b │ m\n p1 │ b │ m\n p2 │ b │ x\n(3 rows)\n\nalter table p attach partition p2 for values in (2);\nERROR: child table \"p2\" has different storage option for column \"b\" than parent\nDETAIL: EXTENDED versus MAIN\n\n-- ok after changing p2 to match\nalter table p2 alter b set storage main;\nalter table p attach partition p2 for values in (2);\n\nThanks,\nAmit", "msg_date": "Wed, 3 Jul 2019 10:52:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." 
}, { "msg_contents": "Thanks Amit for the fix patch,\n\nI have applied the patch and verified the issue.\nThe attached partition with altered column properties shows error as below:\npostgres=# alter table p attach partition p2 for values in (2);\npsql: ERROR: child table \"p2\" has different storage option for column \"b\"\nthan parent\nDETAIL: EXTENDED versus MAIN\n\nThanks,\nPrabhat Sahu\n\nOn Wed, Jul 3, 2019 at 7:23 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Prabhat,\n>\n> On Tue, Jul 2, 2019 at 5:12 PM Prabhat Sahu\n> <prabhat.sahu@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > In below testcase when I changed the staorage option for root partition,\n> newly attached partition not including the changed staorage option.\n> > Is this an expected behavior?\n>\n> Thanks for the report. This seems like a bug. Documentation claims\n> that the child tables inherit column storage options from the parent\n> table. That's actually enforced in only some cases.\n>\n> 1. If you create the child table as a child to begin with (that is,\n> not attach it as child after the fact):\n>\n> create table parent (a text);\n> create table child () inherits (parent);\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> parent │ a │ x\n> child │ a │ x\n> (2 rows)\n>\n>\n> 2. 
If you change the parent's column's storage option, child's column\n> is recursively changed.\n>\n> alter table parent alter a set storage main;\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> parent │ a │ m\n> child │ a │ m\n> (2 rows)\n>\n> However, we fail to enforce the rule when the child is attached after the\n> fact:\n>\n> create table child2 (a text);\n> alter table child2 inherit parent;\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('parent'::regclass, 'child'::regclass,\n> 'child2'::regclass) and attname = 'a';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> parent │ a │ m\n> child │ a │ m\n> child2 │ a │ x\n> (3 rows)\n>\n> To fix this, MergeAttributesIntoExisting() should check that the\n> attribute options of a child don't conflict with the parent, which the\n> attached patch implements. Note that partitioning uses the same code\n> as inheritance, so the fix applies to it too. 
After the patch:\n>\n> create table p (a int, b text) partition by list (a);\n> create table p1 partition of p for values in (1);\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> p │ b │ x\n> p1 │ b │ x\n> (2 rows)\n>\n> alter table p alter b set storage main;\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> p │ b │ m\n> p1 │ b │ m\n> (2 rows)\n>\n> create table p2 (like p);\n> select attrelid::regclass, attname, attstorage from pg_attribute where\n> attrelid in ('p'::regclass, 'p1'::regclass, 'p2'::regclass) and\n> attname = 'b';\n> attrelid │ attname │ attstorage\n> ──────────┼─────────┼────────────\n> p │ b │ m\n> p1 │ b │ m\n> p2 │ b │ x\n> (3 rows)\n>\n> alter table p attach partition p2 for values in (2);\n> ERROR: child table \"p2\" has different storage option for column \"b\" than\n> parent\n> DETAIL: EXTENDED versus MAIN\n>\n> -- ok after changing p2 to match\n> alter table p2 alter b set storage main;\n> alter table p attach partition p2 for values in (2);\n>\n> Thanks,\n> Amit\n", "msg_date": "Wed, 3 Jul 2019 15:10:28 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." }, { "msg_contents": "On 2019-Jul-03, Amit Langote wrote:\n\n> Thanks for the report. This seems like a bug. Documentation claims\n> that the child tables inherit column storage options from the parent\n> table. 
That's actually enforced in only some cases.\n\n> To fix this, MergeAttributesIntoExisting() should check that the\n> attribute options of a child don't conflict with the parent, which the\n> attached patch implements. Note that partitioning uses the same code\n> as inheritance, so the fix applies to it too. After the patch:\n\nThanks for the patch!\n\nI'm not completely sold on back-patching this. Should we consider\nchanging it in 12beta and up only, or should we just backpatch it all\nthe way back to 9.4? It's going to raise errors in cases that\npreviously worked.\n\nOn the patch itself: I think ERRCODE_DATATYPE_MISMATCH is not the\ncorrect one to use for this; maybe ERRCODE_INVALID_OBJECT_DEFINITION or,\nmore likely, ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE.\n\n\"FOO versus BAR\" does not sound proper style. Maybe \"Child table has\nFOO, parent table expects BAR.\" Or maybe put it all in errmsg, \n errmsg(\"child table \\\"%s\\\" has storage option \\\"%s\\\" for column \\\"%s\\\" mismatching \\\"%s\\\" on parent\",\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jul 2019 18:16:56 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." }, { "msg_contents": "Hi Alvaro,\n\nThanks for looking at this.\n\nOn Sat, Jul 27, 2019 at 8:38 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Thanks for the patch!\n>\n> I'm not completely sold on back-patching this. Should we consider\n> changing it in 12beta and up only, or should we just backpatch it all\n> the way back to 9.4? It's going to raise errors in cases that\n> previously worked.\n\nApplying the fix to only 12beta and up is perhaps fine. 
AFAICT,\nthere's no clear design reason for why the attribute storage property\nmust be the same in all child tables, so most users wouldn't even be\naware that we ensure that in some cases. OTOH, getting an error now\nto ensure the invariant more strictly might be more annoying than\nhelpful.\n\n> On the patch itself: I think ERRCODE_DATATYPE_MISMATCH is not the\n> correct one to use for this; maybe ERRCODE_INVALID_OBJECT_DEFINITION or,\n> more likely, ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE.\n\nOK, I prefer the latter.\n\n> \"FOO versus BAR\" does not sound proper style. Maybe \"Child table has\n> FOO, parent table expects BAR.\" Or maybe put it all in errmsg,\n> errmsg(\"child table \\\"%s\\\" has storage option \\\"%s\\\" for column \\\"%s\\\" mismatching \\\"%s\\\" on parent\",\n\nI too found the \"FOO versus BAR\" style a bit odd, so changed to the\nerror message you suggest. There are other instances of this style\nthough:\n\n$ git grep \"%s versus %s\"\nsrc/backend/commands/tablecmds.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/commands/tablecmds.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/commands/tablecmds.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/commands/tablecmds.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/parser/parse_coerce.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/parser/parse_coerce.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/parser/parse_coerce.c:\nerrdetail(\"%s versus %s\",\nsrc/backend/parser/parse_coerce.c: errdetail(\"%s versus %s\",\nsrc/backend/parser/parse_coerce.c: errdetail(\"%s versus %s\",\nsrc/backend/parser/parse_param.c: errdetail(\"%s versus %s\",\n\nShould we leave them alone?\n\nAttached updated patch.\n\nThanks,\nAmit", "msg_date": "Mon, 29 Jul 2019 15:12:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." 
}, { "msg_contents": "On Tue, Jul 2, 2019 at 9:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Thanks for the report. This seems like a bug. Documentation claims\n> that the child tables inherit column storage options from the parent\n> table. That's actually enforced in only some cases.\n\nI realize I'm just repeating the same argument I've already made\nbefore on other related topics, but I don't think this is a bug.\nInherited-from-parent is not the same as\nenforced-to-always-be-the-same-as-parent. Note that this is allowed,\nchanging only the child:\n\nrhaas=# create table foo (a int, b text) partition by range (a);\nCREATE TABLE\nrhaas=# create table foo1 partition of foo for values from (0) to (10);\nCREATE TABLE\nrhaas=# alter table foo1 alter column b set storage plain;\nALTER TABLE\n\nAs is this, changing only the parent:\n\nrhaas=# alter table only foo1 alter column b set storage plain;\nALTER TABLE\n\nHow can you possibly argue that the intended behavior is\neverything-always-matches when that's not what's documented and\nthere's absolutely nothing that tries to enforce it?\n\nI'm getting really tired of people thinking that they can invent rules\nfor table partitioning that were (1) never intended by the original\nimplementation and (2) not even slightly enforced by the code, and\nthen decide that those behavior changes should not only be made but\nback-patched. That is just dead wrong. There is no problem here.\nThere is no need to change ANYTHING. There is even less need to do it\nin the back branches.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:38:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." 
}, { "msg_contents": "On Wed, Jul 31, 2019 at 2:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 2, 2019 at 9:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Thanks for the report. This seems like a bug. Documentation claims\n> > that the child tables inherit column storage options from the parent\n> > table. That's actually enforced in only some cases.\n>\n> I realize I'm just repeating the same argument I've already made\n> before on other related topics, but I don't think this is a bug.\n> Inherited-from-parent is not the same as\n> enforced-to-always-be-the-same-as-parent. Note that this is allowed,\n> changing only the child:\n>\n> rhaas=# create table foo (a int, b text) partition by range (a);\n> CREATE TABLE\n> rhaas=# create table foo1 partition of foo for values from (0) to (10);\n> CREATE TABLE\n> rhaas=# alter table foo1 alter column b set storage plain;\n> ALTER TABLE\n>\n> As is this, changing only the parent:\n>\n> rhaas=# alter table only foo1 alter column b set storage plain;\n> ALTER TABLE\n>\n> How can you possibly argue that the intended behavior is\n> everything-always-matches when that's not what's documented and\n> there's absolutely nothing that tries to enforce it?\n\nYou're right. The patch as proposed is barely enough to ensure the\neverything-always-matches behavior. Let's update^H^H^H^H^H^H^H forget\nabout the patch. :)\n\nI do remember that we made a list of things that we decided must match\nin all partitions, which ended up being slightly bigger than the same\nlist for regular inheritance children, but much smaller than the list\nof all the things that could be different among children. I forgot we\ndid that when replying to Prabhat's report. 
In this particular case,\nI do have doubts about whether we really need attstorage to be the\nsame in all the children, so maybe I should've first asked why we\nshould think of this as a bug.\n\n> I'm getting really tired of people thinking that they can invent rules\n> for table partitioning that were (1) never intended by the original\n> implementation and (2) not even slightly enforced by the code, and\n> then decide that those behavior changes should not only be made but\n> back-patched. That is just dead wrong. There is no problem here.\n> There is no need to change ANYTHING. There is even less need to do it\n> in the back branches.\n\nOK, I'm withdrawing my patch.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 31 Jul 2019 09:51:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Attached partition not considering altered column properties of\n root partition." } ]
[ { "msg_contents": "Hi Tom,\n\nI think an assumption of d25ea01275 breaks partitionwise join. Sorry\nit took me a while to report it.\n\nIn https://postgr.es/m/8168.1560446056@sss.pgh.pa.us, Tom wrote:\n> I poked into this and found the cause. For the sample query, we have\n> an EquivalenceClass containing the expression\n> COALESCE(COALESCE(Var_1_1, Var_2_1), Var_3_1)\n> where each of the Vars belongs to an appendrel parent.\n> add_child_rel_equivalences() needs to add expressions representing the\n> transform of that to each child relation. That is, if the children\n> of table 1 are A1 and A2, of table 2 are B1 and B2, and of table 3\n> are C1 and C2, what we'd like to add are the expressions\n> COALESCE(COALESCE(Var_A1_1, Var_2_1), Var_3_1)\n> COALESCE(COALESCE(Var_A2_1, Var_2_1), Var_3_1)\n> COALESCE(COALESCE(Var_1_1, Var_B1_1), Var_3_1)\n> COALESCE(COALESCE(Var_1_1, Var_B2_1), Var_3_1)\n> COALESCE(COALESCE(Var_1_1, Var_2_1), Var_C1_1)\n> COALESCE(COALESCE(Var_1_1, Var_2_1), Var_C2_1)\n> However, what it's actually producing is additional combinations for\n> each appendrel after the first, because each call also mutates the\n> previously-added child expressions. So in this example we also get\n> COALESCE(COALESCE(Var_A1_1, Var_B1_1), Var_3_1)\n> COALESCE(COALESCE(Var_A2_1, Var_B2_1), Var_3_1)\n> COALESCE(COALESCE(Var_A1_1, Var_2_1), Var_C1_1)\n> COALESCE(COALESCE(Var_A2_1, Var_2_1), Var_C2_1)\n> COALESCE(COALESCE(Var_A1_1, Var_B1_1), Var_C1_1)\n> COALESCE(COALESCE(Var_A2_1, Var_B2_1), Var_C2_1)\n> With two appendrels involved, that's O(N^2) expressions; with\n> three appendrels, more like O(N^3).\n>\n> This is by no means specific to FULL JOINs; you could get the same\n> behavior with join clauses like \"WHERE t1.a + t2.b + t3.c = t4.d\".\n>\n> These extra expressions don't have any use, since we're not\n> going to join the children directly to each other.\n\n...unless partition wise join thinks they can be joined. 
Partition\nwise join can't handle 3-way full joins today, but only because it's\nbroken itself when trying to match a full join clause to the partition\nkey due to one side being a COALESCE expression. Consider this\nexample query:\n\n-- p is defined as:\n-- create table p (a int) partition by list (a);\n-- create table p1 partition of p for values in (1);\n-- create table p2 partition of p for values in (2);\nexplain select * from p t1 full outer join p t2 using (a) full outer\njoin p t3 using (a) full outer join p t4 using (a) order by 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Sort (cost=16416733.32..16628145.85 rows=84565012 width=4)\n Sort Key: (COALESCE(COALESCE(COALESCE(t1.a, t2.a), t3.a), t4.a))\n -> Merge Full Join (cost=536957.40..1813748.77 rows=84565012 width=4)\n Merge Cond: (t4.a = (COALESCE(COALESCE(t1.a, t2.a), t3.a)))\n -> Sort (cost=410.57..423.32 rows=5100 width=4)\n Sort Key: t4.a\n -> Append (cost=0.00..96.50 rows=5100 width=4)\n -> Seq Scan on p1 t4 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t4_1 (cost=0.00..35.50\nrows=2550 width=4)\n -> Materialize (cost=536546.83..553128.21 rows=3316275 width=12)\n -> Sort (cost=536546.83..544837.52 rows=3316275 width=12)\n Sort Key: (COALESCE(COALESCE(t1.a, t2.a), t3.a))\n -> Merge Full Join (cost=14254.85..64024.48\nrows=3316275 width=12)\n Merge Cond: (t3.a = (COALESCE(t1.a, t2.a)))\n -> Sort (cost=410.57..423.32 rows=5100 width=4)\n Sort Key: t3.a\n -> Append (cost=0.00..96.50\nrows=5100 width=4)\n -> Seq Scan on p1 t3\n(cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t3_1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Sort (cost=13844.29..14169.41\nrows=130050 width=8)\n Sort Key: (COALESCE(t1.a, t2.a))\n -> Merge Full Join\n(cost=821.13..2797.38 rows=130050 width=8)\n Merge Cond: (t1.a = t2.a)\n -> Sort (cost=410.57..423.32\nrows=5100 width=4)\n Sort Key: t1.a\n -> Append\n(cost=0.00..96.50 
rows=5100 width=4)\n -> Seq Scan on p1\nt1 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2\nt1_1 (cost=0.00..35.50 rows=2550 width=4)\n -> Sort (cost=410.57..423.32\nrows=5100 width=4)\n Sort Key: t2.a\n -> Append\n(cost=0.00..96.50 rows=5100 width=4)\n -> Seq Scan on p1\nt2 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2\nt2_1 (cost=0.00..35.50 rows=2550 width=4)\n\n-- turn on enable_partitionwise_join\nset enable_partitionwise_join to on;\nexplain select * from p t1 full outer join p t2 using (a) full outer\njoin p t3 using (a) full outer join p t4 using (a) order by 1;\n QUERY PLAN\n───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Sort (cost=16385259.94..16596672.47 rows=84565012 width=4)\n Sort Key: (COALESCE(COALESCE(COALESCE(t1.a, t2.a), t3.a), t4.a))\n -> Merge Full Join (cost=505484.02..1782275.39 rows=84565012 width=4)\n Merge Cond: (t4.a = (COALESCE(COALESCE(t1.a, t2.a), t3.a)))\n -> Sort (cost=410.57..423.32 rows=5100 width=4)\n Sort Key: t4.a\n -> Append (cost=0.00..96.50 rows=5100 width=4)\n -> Seq Scan on p1 t4 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t4_1 (cost=0.00..35.50\nrows=2550 width=4)\n -> Materialize (cost=505073.45..521654.83 rows=3316275 width=12)\n -> Sort (cost=505073.45..513364.14 rows=3316275 width=12)\n Sort Key: (COALESCE(COALESCE(t1.a, t2.a), t3.a))\n -> Merge Full Join (cost=7653.92..32551.10\nrows=3316275 width=12)\n Merge Cond: (t3.a = (COALESCE(t1.a, t2.a)))\n -> Sort (cost=410.57..423.32 rows=5100 width=4)\n Sort Key: t3.a\n -> Append (cost=0.00..96.50\nrows=5100 width=4)\n -> Seq Scan on p1 t3\n(cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t3_1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Sort (cost=7243.35..7405.91 rows=65024 width=8)\n Sort Key: (COALESCE(t1.a, t2.a))\n -> Result (cost=359.57..2045.11\nrows=65024 width=8)\n -> Append\n(cost=359.57..2045.11 rows=65024 width=8)\n -> Merge Full 
Join\n(cost=359.57..860.00 rows=32512 width=8)\n Merge Cond: (t1.a = t2.a)\n -> Sort\n(cost=179.78..186.16 rows=2550 width=4)\n Sort Key: t1.a\n -> Seq Scan\non p1 t1 (cost=0.00..35.50 rows=2550 width=4)\n -> Sort\n(cost=179.78..186.16 rows=2550 width=4)\n Sort Key: t2.a\n -> Seq Scan\non p1 t2 (cost=0.00..35.50 rows=2550 width=4)\n -> Merge Full Join\n(cost=359.57..860.00 rows=32512 width=8)\n Merge Cond: (t1_1.a = t2_1.a)\n -> Sort\n(cost=179.78..186.16 rows=2550 width=4)\n Sort Key: t1_1.a\n -> Seq Scan\non p2 t1_1 (cost=0.00..35.50 rows=2550 width=4)\n -> Sort\n(cost=179.78..186.16 rows=2550 width=4)\n Sort Key: t2_1.a\n -> Seq Scan\non p2 t2_1 (cost=0.00..35.50 rows=2550 width=4)\n\nSee how it only managed to use partition wise join up to 2-way join,\nbut gives up at 3-way join and higher, because the join condition\nlooks like this: t3.a = (COALESCE(t1.a, t2.a). When building the join\nrelation (t1, t2, t3) between (t3) and (t1, t2), it fails to see that\nCOALESCE(t1.a, t2.a) actually matches the partition key of (t1, t2).\nWhen I fix the code that does the matching and run with merge joins\ndisabled, I can get a plan where the whole 4-way join is partitioned:\n\nexplain select * from p t1 full outer join p t2 using (a) full outer\njoin p t3 using (a) full outer join p t4 using (a) order by 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────────────────────────────\n Gather Merge (cost=831480.11..1859235.87 rows=8808720 width=4)\n Workers Planned: 2\n -> Sort (cost=830480.09..841490.99 rows=4404360 width=4)\n Sort Key: (COALESCE(COALESCE(COALESCE(t1.a, t2.a), t3.a), t4.a))\n -> Parallel Append (cost=202.12..224012.93 rows=4404360 width=4)\n -> Hash Full Join (cost=202.12..201991.13 rows=5285232 width=4)\n Hash Cond: (COALESCE(COALESCE(t1.a, t2.a), t3.a) = t4.a)\n -> Hash Full Join (cost=134.75..15904.32\nrows=414528 width=12)\n Hash Cond: (COALESCE(t1.a, t2.a) = t3.a)\n -> Hash Full Join 
(cost=67.38..1247.18\nrows=32512 width=8)\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on p1 t1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p1 t2\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p1 t3\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p1 t4 (cost=0.00..35.50\nrows=2550 width=4)\n -> Hash Full Join (cost=202.12..201991.13 rows=5285232 width=4)\n Hash Cond: (COALESCE(COALESCE(t1_1.a, t2_1.a),\nt3_1.a) = t4_1.a)\n -> Hash Full Join (cost=134.75..15904.32\nrows=414528 width=12)\n Hash Cond: (COALESCE(t1_1.a, t2_1.a) = t3_1.a)\n -> Hash Full Join (cost=67.38..1247.18\nrows=32512 width=8)\n Hash Cond: (t1_1.a = t2_1.a)\n -> Seq Scan on p2 t1_1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t2_1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t3_1\n(cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on p2 t4_1 (cost=0.00..35.50\nrows=2550 width=4)\n(31 rows)\n\nBut with merge joins enabled:\n\nexplain select * from p t1 full outer join p t2 using (a) full outer\njoin p t3 using (a) full outer join p t4 using (a) order by 1;\nERROR: could not find pathkey item to sort\n\nThat's because, there's no child COALESCE(t1_1.a, t2_1.a) expression\nin the EC that contains COALESCE(t1.a, t2.a), where t1_1 and t2_1\nrepresent the 1st partition of t1 and t2, resp. 
The problem is that\nadd_child_rel_equivalences(), as of d25ea01275, only adds the\nfollowing child expressions of COALESCE(t1.a, t2.a):\n\n-- when translating t1\nCOALESCE(t1_1.a, t2.a)\nCOALESCE(t1_2.a, t2.a)\n-- when translating t2\nCOALESCE(t1.a, t2_1.a)\nCOALESCE(t1.a, t2_2.a)\n\nwhereas previously, the following would be added too when translating t2:\n\nCOALESCE(t1_1.a, t2_1.a)\nCOALESCE(t1_1.a, t2_2.a)\nCOALESCE(t1_2.a, t2_1.a)\nCOALESCE(t1_2.a, t2_2.a)\n\nNote that of those, only COALESCE(t1_1.a, t2_1.a) and COALESCE(t1_2.a,\nt2_2.a) are interesting, because partition wise join will only ever\nconsider pairs (t1_1, t2_1) and (t1_2, t2_2) to be joined.\n\nWe can get the needed child expressions and still avoid the\ncombinatorial explosion in the size of resulting EC members list if we\ntaught add_child_rel_equivalences() to only translate ECs that the\ninput parent relation is capable of producing. So, COALESCE(t1.a,\nt2.a) will not be translated if the input relation is only (t1) or\n(t2), that is, when called from set_append_rel_size(). Instead it\nwould be translated if it's passed the joinrel (t1, t2). 
IOW, teach\nbuild_child_join_rel() to call add_child_rel_equivalences(), which\nI've tried to implement in the attached.\n\nI have attached two patches.\n\n0001 - fix partitionwise join to work correctly with n-way joins of\nwhich some are full joins (+ cosmetic improvements around the code\nthat was touched)\n0002 - fix to translate multi-relation EC members correctly\n\nThanks,\nAmit", "msg_date": "Tue, 2 Jul 2019 18:28:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "d25ea01275 and partitionwise join" }, { "msg_contents": "On Tue, Jul 2, 2019 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> 0001 - fix partitionwise join to work correctly with n-way joins of\n> which some are full joins (+ cosmetic improvements around the code\n> that was touched)\n\nHere are my comments about the cosmetic improvements: they seem pretty\nlarge to me, so I'd make a separate patch for that. In addition, I'd\nmove have_partkey_equi_join() and match_expr_to_partition_keys() to\nrelnode.c, because these functions are only used in that file.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 16 Jul 2019 20:22:26 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Fujita-san,\n\nThanks for looking at this.\n\nOn Tue, Jul 16, 2019 at 8:22 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> On Tue, Jul 2, 2019 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > 0001 - fix partitionwise join to work correctly with n-way joins of\n> > which some are full joins (+ cosmetic improvements around the code\n> > that was touched)\n>\n> Here are my comments about the cosmetic improvements: they seem pretty\n> large to me, so I'd make a separate patch for that.\n\nOK, my bad that I added so many cosmetic changes into a patch that is\nmeant to fix the main issue. 
Just to clarify, I'm proposing these\ncosmetic improvements to better clarify the terminological separation\nbetween nullable and non-nullable partition keys, which I found a bit\nhard to understand as is.\n\nI've broken the patch into two: 0001 contains only cosmetic changes\nand 0002 the fix for handling full joins properly. Would you rather\nthat be reversed?\n\n> In addition, I'd\n> move have_partkey_equi_join() and match_expr_to_partition_keys() to\n> relnode.c, because these functions are only used in that file.\n\nI hadn't noticed that. Makes sense to move them to relnode.c, which\nis implemented in 0001.\n\nThanks,\nAmit", "msg_date": "Thu, 18 Jul 2019 11:18:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Thu, Jul 18, 2019 at 11:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jul 16, 2019 at 8:22 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Tue, Jul 2, 2019 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > 0001 - fix partitionwise join to work correctly with n-way joins of\n> > > which some are full joins (+ cosmetic improvements around the code\n> > > that was touched)\n> >\n> > Here are my comments about the cosmetic improvements: they seem pretty\n> > large to me, so I'd make a separate patch for that.\n>\n> OK, my bad that I added so many cosmetic changes into a patch that is\n> meant to fix the main issue. Just to clarify, I'm proposing these\n> cosmetic improvements to better clarify the terminological separation\n> between nullable and non-nullable partition keys, which I found a bit\n> hard to understand as is.\n\nOK, thanks for the explanation!\n\n> I've broken the patch into two: 0001 contains only cosmetic changes\n> and 0002 the fix for handling full joins properly. 
Would you rather\n> that be reversed?\n\nI like this order.\n\n> > In addition, I'd\n> > move have_partkey_equi_join() and match_expr_to_partition_keys() to\n> > relnode.c, because these functions are only used in that file.\n>\n> I hadn't noticed that. Makes sense to move them to relnode.c, which\n> is implemented in 0001.\n\nThanks for including that! Will review.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 18 Jul 2019 20:10:06 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Fujita-san,\n\nOn Thu, Jul 18, 2019 at 8:10 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> On Thu, Jul 18, 2019 at 11:18 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jul 16, 2019 at 8:22 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Tue, Jul 2, 2019 at 6:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > 0001 - fix partitionwise join to work correctly with n-way joins of\n> > > > which some are full joins (+ cosmetic improvements around the code\n> > > > that was touched)\n> > >\n> > > Here are my comments about the cosmetic improvements: they seem pretty\n> > > large to me, so I'd make a separate patch for that.\n> >\n> > OK, my bad that I added so many cosmetic changes into a patch that is\n> > meant to fix the main issue. Just to clarify, I'm proposing these\n> > cosmetic improvements to better clarify the terminological separation\n> > between nullable and non-nullable partition keys, which I found a bit\n> > hard to understand as is.\n>\n> OK, thanks for the explanation!\n>\n> > I've broken the patch into two: 0001 contains only cosmetic changes\n> > and 0002 the fix for handling full joins properly. 
Would you rather\n> > that be reversed?\n>\n> I like this order.\n>\n> > > In addition, I'd\n> > > move have_partkey_equi_join() and match_expr_to_partition_keys() to\n> > > relnode.c, because these functions are only used in that file.\n> >\n> > I hadn't noticed that. Makes sense to move them to relnode.c, which\n> > is implemented in 0001.\n>\n> Thanks for including that! Will review.\n\nTo avoid losing track of this, I've added this to November CF.\n\nhttps://commitfest.postgresql.org/25/2278/\n\nI know there is one more patch beside the partitionwise join fix, but\nI've set the title to suggest that this is related mainly to\npartitionwise joins.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 4 Sep 2019 10:53:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Hi Amit,\n\nOn Wed, Sep 4, 2019 at 10:01 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Fujita-san,\n>\n> To avoid losing track of this, I've added this to November CF.\n>\n> https://commitfest.postgresql.org/25/2278/\n>\n> I know there is one more patch beside the partitionwise join fix, but\n> I've set the title to suggest that this is related mainly to\n> partitionwise joins.\n>\n\n Thank you for working on this. Currently partitionwise join does not\n take COALESCE expr into consideration when matching to partition keys.\n This is a problem.\n\n BTW, a rebase is needed for the patch set.\n\nThanks\nRichard\n\n", "msg_date": "Wed, 4 Sep 2019 15:30:20 +0800", "msg_from": "Richard Guo <riguo@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Hi Amit,\n\nOn Wed, Sep 4, 2019 at 3:30 PM Richard Guo <riguo@pivotal.io> wrote:\n\n> Hi Amit,\n>\n> On Wed, Sep 4, 2019 at 10:01 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n>\n>> Fujita-san,\n>>\n>> To avoid losing track of this, I've added this to November CF.\n>>\n>> https://commitfest.postgresql.org/25/2278/\n>>\n>> I know there is one more patch beside the partitionwise join fix, but\n>> I've set the title to suggest that this is related mainly to\n>> partitionwise joins.\n>>\n>\n> Thank you for working on this. Currently partitionwise join does not\n> take COALESCE expr into consideration when matching to partition keys.\n> This is a problem.\n>\n> BTW, a rebase is needed for the patch set.\n>\n\n\nI'm reviewing v2-0002 and I have concern about how COALESCE expr is\nprocessed in match_join_arg_to_partition_keys().\n\nIf there is a COALESCE expr with first arg being non-partition key expr\nand second arg being partition key, the patch would match it to the\npartition key, which may result in wrong results in some cases.\n\nFor instance, consider the partition table below:\n\ncreate table p (k int, val int) partition by range(k);\ncreate table p_1 partition of p for values from (1) to (10);\ncreate table p_2 partition of p for values from (10) to (100);\n\nSo with patch v2-0002, the following query will be planned with\npartitionwise join.\n\n# explain (costs off)\nselect * from (p as t1 full join p as t2 on t1.k = t2.k) as\nt12(k1,val1,k2,val2)\n full join p as t3 on COALESCE(t12.val1, t12.k1)\n= t3.k;\n QUERY PLAN\n----------------------------------------------------------\n 
Append\n -> Hash Full Join\n Hash Cond: (COALESCE(t1.val, t1.k) = t3.k)\n -> Hash Full Join\n Hash Cond: (t1.k = t2.k)\n -> Seq Scan on p_1 t1\n -> Hash\n -> Seq Scan on p_1 t2\n -> Hash\n -> Seq Scan on p_1 t3\n -> Hash Full Join\n Hash Cond: (COALESCE(t1_1.val, t1_1.k) = t3_1.k)\n -> Hash Full Join\n Hash Cond: (t1_1.k = t2_1.k)\n -> Seq Scan on p_2 t1_1\n -> Hash\n -> Seq Scan on p_2 t2_1\n -> Hash\n -> Seq Scan on p_2 t3_1\n(19 rows)\n\nBut as t1.val is not a partition key, actually we cannot use\npartitionwise join here.\n\nIf we insert below data into the table, we will get wrong results for\nthe query above.\n\ninsert into p select 5,15;\ninsert into p select 15,5;\n\nThanks\nRichard\n\n", "msg_date": "Wed, 4 Sep 2019 16:29:35 +0800", "msg_from": "Richard Guo <riguo@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Hello 
Richard,\n\nOn Wed, Sep 4, 2019 at 4:30 PM Richard Guo <riguo@pivotal.io> wrote:\n>\n> Hi Amit,\n>\n> On Wed, Sep 4, 2019 at 10:01 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> Fujita-san,\n>>\n>> To avoid losing track of this, I've added this to November CF.\n>>\n>> https://commitfest.postgresql.org/25/2278/\n>>\n>> I know there is one more patch beside the partitionwise join fix, but\n>> I've set the title to suggest that this is related mainly to\n>> partitionwise joins.\n>\n>\n> Thank you for working on this. Currently partitionwise join does not\n> take COALESCE expr into consideration when matching to partition keys.\n> This is a problem.\n>\n> BTW, a rebase is needed for the patch set.\n\nThanks a lot for looking at this.\n\nI tried rebasing today and found that adopting this patch to the\nfollowing recent commit to equivalence processing code would take some\ntime that I don't currently have.\n\ncommit 3373c7155350cf6fcd51dd090f29e1332901e329\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Sun Jul 21 17:30:58 2019 +1200\n\n Speed up finding EquivalenceClasses for a given set of rels\n\nI will come back to this in a couple of weeks, along with addressing\nyour other comments.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 6 Sep 2019 16:34:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Hi Richard,\n\nThanks a lot for taking a close look at the patch and sorry about the delay.\n\nOn Wed, Sep 4, 2019 at 5:29 PM Richard Guo <riguo@pivotal.io> wrote:\n>> On Wed, Sep 4, 2019 at 10:01 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> I'm reviewing v2-0002 and I have concern about how COALESCE expr is\n> processed in match_join_arg_to_partition_keys().\n>\n> If there is a COALESCE expr with first arg being non-partition key expr\n> and second arg being partition key, the patch would match it to the\n> partition key, which may result in 
wrong results in some cases.\n>\n> For instance, consider the partition table below:\n>\n> create table p (k int, val int) partition by range(k);\n> create table p_1 partition of p for values from (1) to (10);\n> create table p_2 partition of p for values from (10) to (100);\n>\n> So with patch v2-0002, the following query will be planned with\n> partitionwise join.\n>\n> # explain (costs off)\n> select * from (p as t1 full join p as t2 on t1.k = t2.k) as t12(k1,val1,k2,val2)\n> full join p as t3 on COALESCE(t12.val1, t12.k1) = t3.k;\n> QUERY PLAN\n> ----------------------------------------------------------\n> Append\n> -> Hash Full Join\n> Hash Cond: (COALESCE(t1.val, t1.k) = t3.k)\n> -> Hash Full Join\n> Hash Cond: (t1.k = t2.k)\n> -> Seq Scan on p_1 t1\n> -> Hash\n> -> Seq Scan on p_1 t2\n> -> Hash\n> -> Seq Scan on p_1 t3\n> -> Hash Full Join\n> Hash Cond: (COALESCE(t1_1.val, t1_1.k) = t3_1.k)\n> -> Hash Full Join\n> Hash Cond: (t1_1.k = t2_1.k)\n> -> Seq Scan on p_2 t1_1\n> -> Hash\n> -> Seq Scan on p_2 t2_1\n> -> Hash\n> -> Seq Scan on p_2 t3_1\n> (19 rows)\n>\n> But as t1.val is not a partition key, actually we cannot use\n> partitionwise join here.\n>\n> If we insert below data into the table, we will get wrong results for\n> the query above.\n>\n> insert into p select 5,15;\n> insert into p select 15,5;\n\nGood catch! It's quite wrong to use COALESCE(t12.val1, t12.k1) = t3.k\nfor partitionwise join as the COALESCE expression might as well output\nthe value of val1 which doesn't conform to partitioning.\n\nI've fixed match_join_arg_to_partition_keys() to catch that case and\nfail. 
Added a test case as well.\n\nPlease find attached updated patches.\n\nThanks,\nAmit", "msg_date": "Thu, 19 Sep 2019 17:15:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Thu, Sep 19, 2019 at 4:15 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi Richard,\n>\n> Thanks a lot for taking a close look at the patch and sorry about the\n> delay.\n>\n> On Wed, Sep 4, 2019 at 5:29 PM Richard Guo <riguo@pivotal.io> wrote:\n> >> On Wed, Sep 4, 2019 at 10:01 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > I'm reviewing v2-0002 and I have concern about how COALESCE expr is\n> > processed in match_join_arg_to_partition_keys().\n> >\n> > If there is a COALESCE expr with first arg being non-partition key expr\n> > and second arg being partition key, the patch would match it to the\n> > partition key, which may result in wrong results in some cases.\n> >\n> > For instance, consider the partition table below:\n> >\n> > create table p (k int, val int) partition by range(k);\n> > create table p_1 partition of p for values from (1) to (10);\n> > create table p_2 partition of p for values from (10) to (100);\n> >\n> > So with patch v2-0002, the following query will be planned with\n> > partitionwise join.\n> >\n> > # explain (costs off)\n> > select * from (p as t1 full join p as t2 on t1.k = t2.k) as\n> t12(k1,val1,k2,val2)\n> > full join p as t3 on COALESCE(t12.val1,\n> t12.k1) = t3.k;\n> > QUERY PLAN\n> > ----------------------------------------------------------\n> > Append\n> > -> Hash Full Join\n> > Hash Cond: (COALESCE(t1.val, t1.k) = t3.k)\n> > -> Hash Full Join\n> > Hash Cond: (t1.k = t2.k)\n> > -> Seq Scan on p_1 t1\n> > -> Hash\n> > -> Seq Scan on p_1 t2\n> > -> Hash\n> > -> Seq Scan on p_1 t3\n> > -> Hash Full Join\n> > Hash Cond: (COALESCE(t1_1.val, t1_1.k) = t3_1.k)\n> > -> Hash Full Join\n> > Hash Cond: (t1_1.k = t2_1.k)\n> > -> Seq Scan 
on p_2 t1_1\n> > -> Hash\n> > -> Seq Scan on p_2 t2_1\n> > -> Hash\n> > -> Seq Scan on p_2 t3_1\n> > (19 rows)\n> >\n> > But as t1.val is not a partition key, actually we cannot use\n> > partitionwise join here.\n> >\n> > If we insert below data into the table, we will get wrong results for\n> > the query above.\n> >\n> > insert into p select 5,15;\n> > insert into p select 15,5;\n>\n> Good catch! It's quite wrong to use COALESCE(t12.val1, t12.k1) = t3.k\n> for partitionwise join as the COALESCE expression might as well output\n> the value of val1 which doesn't conform to partitioning.\n>\n> I've fixed match_join_arg_to_partition_keys() to catch that case and\n> fail. Added a test case as well.\n>\n> Please find attached updated patches.\n>\n\nThank you for the fix. Will review.\n\nThanks\nRichard", "msg_date": "Fri, 20 Sep 2019 17:55:21 +0800", "msg_from": "Richard Guo <riguo@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Thu, Sep 19, 2019 at 05:15:37PM +0900, Amit Langote wrote:\n> Please find attached updated patches.\n\nTom pointed me to this thread, since we hit it in 12.0\nhttps://www.postgresql.org/message-id/flat/16802.1570989962%40sss.pgh.pa.us#070f6675a11dff17760b1cfccf1c038d\n\nI can't say much about the patch; there's a little typo:\n\"The nullability of inner relation keys prevents them to\"\n..should say \"prevent them from\".\n\nIn order to compile it against REL12, I tried to cherry-pick this one:\n3373c715: Speed up finding EquivalenceClasses for a given set of rels\n\nBut then it crashes in check-world (possibly due to misapplied hunks).\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Sun, 13 Oct 2019 15:02:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Sun, Oct 13, 2019 at 03:02:17PM -0500, Justin Pryzby wrote:\n> On Thu, Sep 19, 2019 at 05:15:37PM +0900, Amit Langote wrote:\n> > Please find attached updated patches.\n> \n> Tom pointed me to this thread, since we hit it in 12.0\n> https://www.postgresql.org/message-id/flat/16802.1570989962%40sss.pgh.pa.us#070f6675a11dff17760b1cfccf1c038d\n> \n> I can't say much about the patch; there's a little typo:\n> \"The nullability of inner relation keys prevents them to\"\n> ..should say \"prevent them from\".\n> \n> In order to compile it against REL12, I tried to cherry-pick this one:\n> 3373c715: Speed up finding EquivalenceClasses for a given set of rels\n> \n> But then it crashes in check-world (possibly due to misapplied hunks).\n\nI did it again paying more attention and got it to pass.\n\nThe PWJ + FULL JOIN query seems ok now.\n\nBut
I'll leave PWJ disabled until I can look more closely.\n\n$ PGOPTIONS='-c max_parallel_workers_per_gather=0 -c enable_mergejoin=off -c enable_hashagg=off -c enable_partitionwise_join=on' psql postgres -f tmp/sql-2019-10-11.1 \nSET\n Nested Loop (cost=80106964.13..131163200.28 rows=2226681567 width=6)\n Join Filter: ((s.site_location = ''::text) OR ((s.site_office)::integer = ((COALESCE(t1.site_id, t2.site_id))::integer)))\n -> Group (cost=80106964.13..80837945.46 rows=22491733 width=12)\n Group Key: (COALESCE(t1.start_time, t2.start_time)), ((COALESCE(t1.site_id, t2.site_id))::integer)\n -> Merge Append (cost=80106964.13..80613028.13 rows=22491733 width=12)\n Sort Key: (COALESCE(t1.start_time, t2.start_time)), ((COALESCE(t1.site_id, t2.site_id))::integer)\n -> Group (cost=25494496.54..25633699.28 rows=11136219 width=12)\n Group Key: (COALESCE(t1.start_time, t2.start_time)), ((COALESCE(t1.site_id, t2.site_id))::integer)\n -> Sort (cost=25494496.54..25522337.09 rows=11136219 width=12)\n Sort Key: (COALESCE(t1.start_time, t2.start_time)), ((COALESCE(t1.site_id, t2.site_id))::integer)\n -> Hash Full Join (cost=28608.75..24191071.36 rows=11136219 width=12)\n Hash Cond: ((t1.start_time = t2.start_time) AND (t1.site_id = t2.site_id))\n Filter: ((COALESCE(t1.start_time, t2.start_time) >= '2019-10-01 00:00:00'::timestamp without time zone) AND (COALESCE(t1.start_time, t2.start_time) < '2019-10-01 01:00:00'::timestamp without time zone))\n -> Seq Scan on t1 (cost=0.00..14495.10 rows=940910 width=10)\n -> Hash (cost=14495.10..14495.10 rows=940910 width=10)\n -> Seq Scan on t1 t2 (cost=0.00..14495.10 rows=940910 width=10)\n -> Group (cost=54612467.58..54754411.51 rows=11355514 width=12)\n Group Key: (COALESCE(t1_1.start_time, t2_1.start_time)), ((COALESCE(t1_1.site_id, t2_1.site_id))::integer)\n -> Sort (cost=54612467.58..54640856.37 rows=11355514 width=12)\n Sort Key: (COALESCE(t1_1.start_time, t2_1.start_time)), ((COALESCE(t1_1.site_id, t2_1.site_id))::integer)\n -> Hash 
Full Join (cost=28608.75..53281777.94 rows=11355514 width=12)\n Hash Cond: ((t1_1.start_time = t2_1.start_time) AND (t1_1.site_id = t2_1.site_id))\n Filter: ((COALESCE(t1_1.start_time, t2_1.start_time) >= '2019-10-01 00:00:00'::timestamp without time zone) AND (COALESCE(t1_1.start_time, t2_1.start_time) < '2019-10-01 01:00:00'::timestamp without time zone))\n -> Seq Scan on t2 t1_1 (cost=0.00..14495.10 rows=940910 width=10)\n -> Hash (cost=14495.10..14495.10 rows=940910 width=10)\n -> Seq Scan on t2 t2_1 (cost=0.00..14495.10 rows=940910 width=10)\n -> Materialize (cost=0.00..2.48 rows=99 width=6)\n -> Seq Scan on s (cost=0.00..1.99 rows=99 width=6)\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Sun, 13 Oct 2019 16:07:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Hi Justin,\n\nOn Mon, Oct 14, 2019 at 5:02 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Sep 19, 2019 at 05:15:37PM +0900, Amit Langote wrote:\n> > Please find attached updated patches.\n>\n> Tom pointed me to this thread, since we hit it in 12.0\n> https://www.postgresql.org/message-id/flat/16802.1570989962%40sss.pgh.pa.us#070f6675a11dff17760b1cfccf1c038d\n>\n> I can't say much about the patch; there's a little typo:\n> \"The nullability of inner relation keys prevents them to\"\n> ..should say \"prevent them from\".\n\nThanks, will fix.\n\nRegards,\nAmit\n\n\n", "msg_date": "Wed, 23 Oct 2019 12:28:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Oct 14, 2019 at 5:02 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I can't say much about the patch; there's a little typo:\n>> \"The nullability of inner relation keys prevents them to\"\n>> ..should say \"prevent 
them from\".\n\n> Thanks, will fix.\n\nJust to leave a breadcrumb in this thread --- the planner failure\ninduced by d25ea01275 has been fixed in 529ebb20a. The difficulty\nwith multiway full joins that Amit started this thread with remains\nopen, but I imagine the posted patches will need rebasing over\n529ebb20a.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 12:00:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Wed, Nov 6, 2019 at 2:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Oct 14, 2019 at 5:02 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> I can't say much about the patch; there's a little typo:\n> >> \"The nullability of inner relation keys prevents them to\"\n> >> ..should say \"prevent them from\".\n>\n> > Thanks, will fix.\n>\n> Just to leave a breadcrumb in this thread --- the planner failure\n> induced by d25ea01275 has been fixed in 529ebb20a. The difficulty\n> with multiway full joins that Amit started this thread with remains\n> open, but I imagine the posted patches will need rebasing over\n> 529ebb20a.\n\nHere are the rebased patches.\n\nI've divided the patch containing only cosmetic improvements into two\npatches, of which the latter only moves around code. Patch 0003\nimplements the actual fix to the problem with multiway joins.\n\nThanks,\nAmit", "msg_date": "Wed, 6 Nov 2019 18:40:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Nov 6, 2019 at 2:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Just to leave a breadcrumb in this thread --- the planner failure\n>> induced by d25ea01275 has been fixed in 529ebb20a. 
The difficulty\n>> with multiway full joins that Amit started this thread with remains\n>> open, but I imagine the posted patches will need rebasing over\n>> 529ebb20a.\n\n> Here are the rebased patches.\n\nThe cfbot shows these patches as failing regression tests. I think\nit is just cosmetic fallout from 6ef77cf46 having changed EXPLAIN's\nchoices of table alias names; but please look closer to confirm,\nand post updated patches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 18:18:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Sat, Feb 29, 2020 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Wed, Nov 6, 2019 at 2:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Just to leave a breadcrumb in this thread --- the planner failure\n> >> induced by d25ea01275 has been fixed in 529ebb20a. The difficulty\n> >> with multiway full joins that Amit started this thread with remains\n> >> open, but I imagine the posted patches will need rebasing over\n> >> 529ebb20a.\n>\n> > Here are the rebased patches.\n>\n> The cfbot shows these patches as failing regression tests. I think\n> it is just cosmetic fallout from 6ef77cf46 having changed EXPLAIN's\n> choices of table alias names; but please look closer to confirm,\n> and post updated patches.\n\nThanks for notifying.\n\nChecked and indeed fallout from 6ef77cf46 seems to be the reason a\ntest is failing.\n\nUpdated patches attached.\n\nThanks,\nAmit", "msg_date": "Sun, 1 Mar 2020 23:21:29 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Updated patches attached.\n\nI looked through these and committed 0001+0002, with some further\ncomment-polishing. However, I have no faith at all in 0003. 
It is\nblithely digging through COALESCE expressions with no concern for\nwhether they came from full joins or not, or whether the other values\nbeing coalesced to might completely change the semantics. Digging\nthrough PlaceHolderVars scares me even more; what's that got to do\nwith the problem, anyway? So while this might fix the complained-of\nissue of failing to use a partitionwise join, I think it wouldn't be\nhard to create examples that it would incorrectly turn into\npartitionwise joins.\n\nI wonder whether it'd be feasible to fix the problem by going in the\nother direction; that is, while constructing the nullable_partexprs\nlists for a full join, add synthesized COALESCE() expressions for the\noutput columns (by wrapping COALESCE around copies of the input rels'\npartition expressions), and then not need to do anything special in\nmatch_expr_to_partition_keys. We'd still need to convince ourselves\nthat this did the right thing and not any of the wrong things, but\nI think it might be easier to prove it that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 17:13:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Sat, Apr 4, 2020 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Updated patches attached.\n>\n> I looked through these and committed 0001+0002, with some further\n> comment-polishing. However, I have no faith at all in 0003.\n\nThanks for the review.\n\n> It is\n> blithely digging through COALESCE expressions with no concern for\n> whether they came from full joins or not, or whether the other values\n> being coalesced to might completely change the semantics. Digging\n> through PlaceHolderVars scares me even more; what's that got to do\n> with the problem, anyway? 
So while this might fix the complained-of\n> issue of failing to use a partitionwise join, I think it wouldn't be\n> hard to create examples that it would incorrectly turn into\n> partitionwise joins.\n>\n> I wonder whether it'd be feasible to fix the problem by going in the\n> other direction; that is, while constructing the nullable_partexprs\n> lists for a full join, add synthesized COALESCE() expressions for the\n> output columns (by wrapping COALESCE around copies of the input rels'\n> partition expressions), and then not need to do anything special in\n> match_expr_to_partition_keys. We'd still need to convince ourselves\n> that this did the right thing and not any of the wrong things, but\n> I think it might be easier to prove it that way.\n\nOkay, I tried that in the updated patch. I didn't try to come up with\nexamples that might break it though.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 4 Apr 2020 18:31:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Okay, I tried that in the updated patch. I didn't try to come up with\n> examples that might break it though.\n\nI looked through this.\n\n* Wasn't excited about inventing makeCoalesceExpr(); the fact that it only\nhad two potential call sites seemed to make it not worth the trouble.\nPlus, as defined, it could not handle the general case of COALESCE, which\ncan have N arguments; so that seemed like a recipe for confusion.\n\n* I think your logic for building the coalesce combinations was just\nwrong. We need combinations of left-hand inputs with right-hand inputs,\nnot left-hand with left-hand or right-hand with right-hand. Also the\nnesting should already have happened in the inputs, we don't need to\ntry to generate it here. 
The looping code was pretty messy as well.\n\n* I don't think using partopcintype is necessarily right; that could be\na polymorphic type, for instance. Safer to copy the type of the input\nexpressions. Likely we could have got away with using partcollation,\nbut for consistency I copied that as well.\n\n* You really need to update the data structure definitional comments\nwhen you make a change like this.\n\n* I did not like putting a test case that requires\nenable_partitionwise_aggregate in the partition_join test; that seems\nmisplaced. But it's not necessary to the test, is it?\n\n* I did not follow the point of your second test case. The WHERE\nconstraint on p1.a allows the planner to strength-reduce the joins,\nwhich is why there's no full join in that explain result, but then\nwe aren't going to get to this code at all.\n\nI repaired the above in the attached.\n\nI'm actually sort of pleasantly surprised that this worked; I was\nnot sure that building COALESCEs like this would provide the result\nwe needed. But it seems okay -- it does fix the behavior in this\n3-way test case, as well as the 4-way join you showed at the top\nof the thread. It's fairly dependent on the fact that the planner\nwon't rearrange full joins; otherwise, the COALESCE structures we\nbuild here might not match those made at parse time. But that's\nnot likely to change anytime soon; and this is hardly the only\nplace that would break, so I'm not sweating about it. (I have\nsome vague ideas about getting rid of the COALESCEs as part of\nthe Var redefinition I've been muttering about, anyway; there\nmight be a cleaner fix for this if that happens.)\n\nAnyway, I think this is probably OK for now. 
Given that the\nnullable_partexprs lists are only used in one place, it's pretty\nhard to see how it would break anything.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 05 Apr 2020 18:29:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Mon, Apr 6, 2020 at 7:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Okay, I tried that in the updated patch. I didn't try to come up with\n> > examples that might break it though.\n>\n> I looked through this.\n\nThank you.\n\n> * I think your logic for building the coalesce combinations was just\n> wrong. We need combinations of left-hand inputs with right-hand inputs,\n> not left-hand with left-hand or right-hand with right-hand. Also the\n> nesting should already have happened in the inputs, we don't need to\n> try to generate it here. The looping code was pretty messy as well.\n\nIt didn't occur to me that that many Coalesce combinations would be\nnecessary given the component rel combinations possible.\n\n> * I don't think using partopcintype is necessarily right; that could be\n> a polymorphic type, for instance. Safer to copy the type of the input\n> expressions. Likely we could have got away with using partcollation,\n> but for consistency I copied that as well.\n\nAh, seeing set_baserel_partition_key_exprs(), I suppose they will come\nfrom parttypid and parttypcoll of the base partitioned tables, which\nseems fine.\n\n> * You really need to update the data structure definitional comments\n> when you make a change like this.\n\nSorry, I should have.\n\n> * I did not like putting a test case that requires\n> enable_partitionwise_aggregate in the partition_join test; that seems\n> misplaced. 
But it's not necessary to the test, is it?\n\nEarlier in the discussion (which turned into a separate discussion),\nthere were test cases where partition-level grouping would fail with\nerrors in setrefs.c, but I think that was fixed last year by\n529ebb20aaa5. Agree that it has nothing to do with the problem being\nsolved here.\n\n> * I did not follow the point of your second test case. The WHERE\n> constraint on p1.a allows the planner to strength-reduce the joins,\n> which is why there's no full join in that explain result, but then\n> we aren't going to get to this code at all.\n\nOops, I thought I copy-pasted 4-way full join test not this one, but\nevidently didn't.\n\n> I repaired the above in the attached.\n>\n> I'm actually sort of pleasantly surprised that this worked; I was\n> not sure that building COALESCEs like this would provide the result\n> we needed. But it seems okay -- it does fix the behavior in this\n> 3-way test case, as well as the 4-way join you showed at the top\n> of the thread. It's fairly dependent on the fact that the planner\n> won't rearrange full joins; otherwise, the COALESCE structures we\n> build here might not match those made at parse time. But that's\n> not likely to change anytime soon; and this is hardly the only\n> place that would break, so I'm not sweating about it. (I have\n> some vague ideas about getting rid of the COALESCEs as part of\n> the Var redefinition I've been muttering about, anyway; there\n> might be a cleaner fix for this if that happens.)\n>\n> Anyway, I think this is probably OK for now. 
Given that the\n> nullable_partexprs lists are only used in one place, it's pretty\n> hard to see how it would break anything.\n\nMakes sense.\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Apr 2020 17:45:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Apr 6, 2020 at 7:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I think your logic for building the coalesce combinations was just\n>> wrong. We need combinations of left-hand inputs with right-hand inputs,\n>> not left-hand with left-hand or right-hand with right-hand. Also the\n>> nesting should already have happened in the inputs, we don't need to\n>> try to generate it here. The looping code was pretty messy as well.\n\n> It didn't occur to me that that many Coalesce combinations would be\n> necessary given the component rel combinations possible.\n\nWell, we don't of course: we only need the one pair that corresponds to\nthe COALESCE structures the parser would have generated. But we aren't\nsure which one that is. I thought about looking through the full join\nRTE's joinaliasvars list for COALESCE items instead of doing it like this,\nbut there is a problem: I don't believe that that data structure gets\nmaintained after flatten_join_alias_vars(). So it might contain\nout-of-date expressions that don't match what we need them to match.\n\nPossibly there will be a cleaner answer here if I succeed in redesigning\nthe Var data structure to account for outer joins better.\n\n>> * I did not follow the point of your second test case. 
The WHERE\n>> constraint on p1.a allows the planner to strength-reduce the joins,\n>> which is why there's no full join in that explain result, but then\n>> we aren't going to get to this code at all.\n\n> Oops, I thought I copy-pasted 4-way full join test not this one, but\n> evidently didn't.\n\nHave you got such a query at hand? I wondered whether we shouldn't\nuse a 4-way rather than 3-way test case; it'd offer more assurance\nthat nesting of these things works.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Apr 2020 10:09:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Mon, Apr 6, 2020 at 11:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Oops, I thought I copy-pasted 4-way full join test not this one, but\n> > evidently didn't.\n>\n> Have you got such a query at hand? I wondered whether we shouldn't\n> use a 4-way rather than 3-way test case; it'd offer more assurance\n> that nesting of these things works.\n\nHmm, I just did:\n\n-SELECT COUNT(*) FROM prt1 FULL JOIN prt2 p2(b,a,c) USING(a,b) FULL\nJOIN prt2 p3(b,a,c) USING (a, b)\n+SELECT COUNT(*) FROM prt1 FULL JOIN prt2 p2(b,a,c) USING(a,b) FULL\nJOIN prt2 p3(b,a,c) USING (a, b) FULL JOIN prt1 p4 (a,b,c) USING (a,\nb)\n\nwhich does succeed in using partitionwise join. Please see attached\ndelta that applies on your v7 if that is what you'd rather have.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Apr 2020 00:19:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> which does succeed in using partitionwise join. 
Please see attached\n> delta that applies on your v7 if that is what you'd rather have.\n\nI figured these queries were cheap enough that we could afford to run\nboth. With that and some revision of the comments (per attached),\nI was feeling like we were ready to go. However, re-reading the thread,\none of Richard's comments struck me as still relevant. If you try, say,\n\ncreate table p (k int, val int) partition by range(k);\ncreate table p_1 partition of p for values from (1) to (10);\ncreate table p_2 partition of p for values from (10) to (100);\n\nset enable_partitionwise_join = 1;\n\nexplain\nselect * from (p as t1 full join p as t2 on t1.k = t2.k) as t12(k1,val1,k2,val2)\n full join p as t3 on COALESCE(t12.k1, t12.k2) = t3.k;\n\nthis patch will give you a partitioned join, with a different plan\nthan you get without enable_partitionwise_join. This is scary,\nbecause it's not immediately obvious that the transformation is\ncorrect.\n\nI *think* that it might be all right, because although what we\nare matching to is a user-written COALESCE() not an actual\nFULL JOIN USING column, it has to behave in somewhat the same\nway. In particular, by construction it must be a coalesce of\nsome representation of the matching partition columns of the\nfull join's inputs. 
So, even though it might go to null in\ndifferent cases than an actual USING variable would do, it\ndoes not break the ability to partition the join.\n\nHowever, I have not spent a whole lot of time thinking about\npartitionwise joins, so rather than go ahead and commit I am\ngoing to toss that point back out for community consideration.\nAt the very least, what I'd written in the comment needs a\nlot more defense than it has now.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 06 Apr 2020 13:41:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Tue, Apr 7, 2020 at 2:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > which does succeed in using partitionwise join. Please see attached\n> > delta that applies on your v7 if that is what you'd rather have.\n>\n> I figured these queries were cheap enough that we could afford to run\n> both. With that and some revision of the comments (per attached),\n> I was feeling like we were ready to go.\n\nLooks good to me.\n\n> However, re-reading the thread,\n> one of Richard's comments struck me as still relevant. If you try, say,\n>\n> create table p (k int, val int) partition by range(k);\n> create table p_1 partition of p for values from (1) to (10);\n> create table p_2 partition of p for values from (10) to (100);\n>\n> set enable_partitionwise_join = 1;\n>\n> explain\n> select * from (p as t1 full join p as t2 on t1.k = t2.k) as t12(k1,val1,k2,val2)\n> full join p as t3 on COALESCE(t12.k1, t12.k2) = t3.k;\n>\n> this patch will give you a partitioned join, with a different plan\n> than you get without enable_partitionwise_join. 
This is scary,\n> because it's not immediately obvious that the transformation is\n> correct.\n>\n> I *think* that it might be all right, because although what we\n> are matching to is a user-written COALESCE() not an actual\n> FULL JOIN USING column, it has to behave in somewhat the same\n> way. In particular, by construction it must be a coalesce of\n> some representation of the matching partition columns of the\n> full join's inputs. So, even though it might go to null in\n> different cases than an actual USING variable would do, it\n> does not break the ability to partition the join.\n\nSeems fine to me too. Maybe users should avoid writing it by hand if\npossible anyway, because even slight variation in the way it's written\nwill affect this:\n\nset enable_partitionwise_join = 1;\n\n-- order of coalesce() arguments reversed\nexplain (costs off)\nselect * from (p as t1 full join p as t2 on t1.k = t2.k) as t12(k1,val1,k2,val2)\nfull join p as t3 on COALESCE(t12.k2, t12.k1) = t3.k;\n QUERY PLAN\n----------------------------------------------\n Hash Full Join\n Hash Cond: (COALESCE(t2.k, t1.k) = t3.k)\n -> Append\n -> Hash Full Join\n Hash Cond: (t1_1.k = t2_1.k)\n -> Seq Scan on p_1 t1_1\n -> Hash\n -> Seq Scan on p_1 t2_1\n -> Hash Full Join\n Hash Cond: (t1_2.k = t2_2.k)\n -> Seq Scan on p_2 t1_2\n -> Hash\n -> Seq Scan on p_2 t2_2\n -> Hash\n -> Append\n -> Seq Scan on p_1 t3_1\n -> Seq Scan on p_2 t3_2\n(17 rows)\n\n> However, I have not spent a whole lot of time thinking about\n> partitionwise joins, so rather than go ahead and commit I am\n> going to toss that point back out for community consideration.\n\nAgreed.\n\n> At the very least, what I'd written in the comment needs a\n> lot more defense than it has now.\n\nSorry, which comment are you referring to?\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:17:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": 
true, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Tue, Apr 7, 2020 at 2:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I *think* that it might be all right, because although what we\n>> are matching to is a user-written COALESCE() not an actual\n>> FULL JOIN USING column, it has to behave in somewhat the same\n>> way. In particular, by construction it must be a coalesce of\n>> some representation of the matching partition columns of the\n>> full join's inputs. So, even though it might go to null in\n>> different cases than an actual USING variable would do, it\n>> does not break the ability to partition the join.\n\n> Seems fine to me too. Maybe users should avoid writing it by hand if\n> possible anyway, because even slight variation in the way it's written\n> will affect this:\n\nI'm not particularly concerned about users intentionally trying to trigger\nthis behavior. I just want to be sure that if someone accidentally does\nso, we don't produce a wrong plan.\n\nI waited till after the \"advanced partitionwise join\" patch went\nin because that seemed more important (plus I wondered a bit if\nthat would subsume this). 
But this patch seems to still work,\nand the other thing doesn't fix the problem, so pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 22:17:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Wed, Apr 8, 2020 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But this patch seems to still work,\n> and the other thing doesn't fix the problem, so pushed.\n\nThanks for working on this!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:37:45 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: d25ea01275 and partitionwise join" }, { "msg_contents": "On Wed, Apr 8, 2020 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Tue, Apr 7, 2020 at 2:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I *think* that it might be all right, because although what we\n> >> are matching to is a user-written COALESCE() not an actual\n> >> FULL JOIN USING column, it has to behave in somewhat the same\n> >> way. In particular, by construction it must be a coalesce of\n> >> some representation of the matching partition columns of the\n> >> full join's inputs. So, even though it might go to null in\n> >> different cases than an actual USING variable would do, it\n> >> does not break the ability to partition the join.\n>\n> > Seems fine to me too. Maybe users should avoid writing it by hand if\n> > possible anyway, because even slight variation in the way it's written\n> > will affect this:\n>\n> I'm not particularly concerned about users intentionally trying to trigger\n> this behavior. 
I just want to be sure that if someone accidentally does\n> so, we don't produce a wrong plan.\n>\n> I waited till after the \"advanced partitionwise join\" patch went\n> in because that seemed more important (plus I wondered a bit if\n> that would subsume this). But this patch seems to still work,\n> and the other thing doesn't fix the problem, so pushed.\n\nThank you for your time on this.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:53:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: d25ea01275 and partitionwise join" } ]
[ { "msg_contents": "Hi Hackers,\n\nI think I found an issue in the TopoSort() function.\n\nAs the comments say,\n\n /* .....\n * ...... If there are any other processes\n * in the same lock group on the queue, set their number of\n * beforeConstraints to -1 to indicate that they should be\nemitted\n * with their groupmates rather than considered separately.\n */\n\nIf the line \"break;\" exists, there is no chance to set beforeConstraints to\n-1 for other processes in the same lock group.\n\nSo, I think we need delete the line \"break;\" . See the patch.\n\nI just took a look, and I found all the following versions have this line .\n\n\npostgresql-12beta2, postgresql-12beta1, postgresql-11.4,\npostgresql-11.3,postgresql-11.0,\npostgresql-10.9,postgresql-10.5, postgresql-10.0\n\n\nThanks,\nRuihai", "msg_date": "Tue, 2 Jul 2019 22:47:32 +0800", "msg_from": "Rui Hai Jiang <ruihaij@gmail.com>", "msg_from_op": true, "msg_subject": "TopoSort() fix" },
{ "msg_contents": "Rui Hai Jiang <ruihaij@gmail.com> writes:\n> I think I found an issue in the TopoSort() function.\n\nThis indeed seems like a live bug introduced by a1c1af2a.\nRobert?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2019 11:23:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "Could the attached patch fix this issue? Or does any one else plan to fix\nit?\n\nIf people are busy and have not time, I can go ahead to fix it. To fix\nthis issue, do we need a patch for each official branch?\n\n\nRegards,\nRuihai\n\nOn Tue, Jul 2, 2019 at 11:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Rui Hai Jiang <ruihaij@gmail.com> writes:\n> > I think I found an issue in the TopoSort() function.\n>\n> This indeed seems like a live bug introduced by a1c1af2a.\n> Robert?\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 3 Jul 2019 10:41:59 +0800", "msg_from": "Rui Hai Jiang <ruihaij@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "On Wed, Jul 03, 2019 at 10:41:59AM +0800, Rui Hai Jiang wrote:\n> Could the attached patch fix this issue? Or does any one else plan to fix\n> it?\n> \n> If people are busy and have not time, I can go ahead to fix it. To fix\n> this issue, do we need a patch for each official branch?\n\nOnly a committer could merge any fix you produce. What you have sent\nlooks fine to me, so let's wait for Robert, who has visiblu broken\nthis part to comment. Back-patched versions are usually taken care of\nby the committer merging the fix, and by experience it is better to\nagree about the shape of a patch on HEAD before working on other\nbranches. 
Depending on the review done, the patch's shape may change\nslightly...\n--\nMichael", "msg_date": "Wed, 3 Jul 2019 16:11:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "On Tue, Jul 2, 2019 at 11:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Rui Hai Jiang <ruihaij@gmail.com> writes:\n> > I think I found an issue in the TopoSort() function.\n>\n> This indeed seems like a live bug introduced by a1c1af2a.\n> Robert?\n\nThis is pretty thoroughly swapped out of my head, but it looks like\nthat analysis might be correct.\n\nIs it practical to come up with a test case that demonstrates the problem?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2019 09:38:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "I'll try to figure out some scenarios to do the test. A parallel process\ngroup is needed for the test.\n\nActually I was trying to do some testing against the locking mechanism. I\nhappened to see this issue.\n\nOn Wed, Jul 3, 2019 at 9:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 2, 2019 at 11:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Rui Hai Jiang <ruihaij@gmail.com> writes:\n> > > I think I found an issue in the TopoSort() function.\n> >\n> > This indeed seems like a live bug introduced by a1c1af2a.\n> > Robert?\n>\n> This is pretty thoroughly swapped out of my head, but it looks like\n> that analysis might be correct.\n>\n> Is it practical to come up with a test case that demonstrates the problem?\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Thu, 4 Jul 2019 11:15:50 +0800", "msg_from": "Rui Hai Jiang <ruihaij@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "On 2019-Jul-04, Rui Hai Jiang wrote:\n\n> I'll try to figure out some scenarios to do the test. A parallel process\n> group is needed for the test.\n> \n> Actually I was trying to do some testing against the locking mechanism. I\n> happened to see this issue.\n\nHello, is anybody looking into this issue?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jul 2019 18:05:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" },
{ "msg_contents": "Hi,\n\nOn 2019-07-26 18:05:38 -0400, Alvaro Herrera wrote:\n> On 2019-Jul-04, Rui Hai Jiang wrote:\n> \n> > I'll try to figure out some scenarios to do the test. A parallel process\n> > group is needed for the test.\n\nRui, have you made any progress on this?\n\n\n> > Actually I was trying to do some testing against the locking mechanism. 
I\n> > happened to see this issue.\n> \n> Hello, is anybody looking into this issue?\n\nI guess this is on Robert's docket otherwise. He's on vacation till\nearly next week...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Jul 2019 16:48:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-26 18:05:38 -0400, Alvaro Herrera wrote:\n>> Hello, is anybody looking into this issue?\n\n> I guess this is on Robert's docket otherwise. He's on vacation till\n> early next week...\n\nI think this is a sufficiently obvious bug, and a sufficiently\nobvious fix, that we should just fix it and not insist on getting\na reproducible test case first. I think a test case would soon\nbit-rot anyway, and no longer exercise the problem.\n\nI certainly do *not* want to wait so long that we miss the\nupcoming minor releases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2019 20:24:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Fri, Jul 26, 2019 at 08:24:16PM -0400, Tom Lane wrote:\n> I think this is a sufficiently obvious bug, and a sufficiently\n> obvious fix, that we should just fix it and not insist on getting\n> a reproducible test case first. I think a test case would soon\n> bit-rot anyway, and no longer exercise the problem.\n> \n> I certainly do *not* want to wait so long that we miss the\n> upcoming minor releases.\n\n+1. 
Any volunteers?\n--\nMichael", "msg_date": "Mon, 29 Jul 2019 10:50:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Jul 26, 2019 at 08:24:16PM -0400, Tom Lane wrote:\n>> I think this is a sufficiently obvious bug, and a sufficiently\n>> obvious fix, that we should just fix it and not insist on getting\n>> a reproducible test case first. I think a test case would soon\n>> bit-rot anyway, and no longer exercise the problem.\n>> I certainly do *not* want to wait so long that we miss the\n>> upcoming minor releases.\n\n> +1. Any volunteers?\n\nIf Robert doesn't weigh in pretty soon, I'll take responsibility for it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2019 10:56:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Mon, Jul 29, 2019 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Jul 26, 2019 at 08:24:16PM -0400, Tom Lane wrote:\n> >> I think this is a sufficiently obvious bug, and a sufficiently\n> >> obvious fix, that we should just fix it and not insist on getting\n> >> a reproducible test case first. I think a test case would soon\n> >> bit-rot anyway, and no longer exercise the problem.\n> >> I certainly do *not* want to wait so long that we miss the\n> >> upcoming minor releases.\n>\n> > +1. Any volunteers?\n>\n> If Robert doesn't weigh in pretty soon, I'll take responsibility for it.\n\nThat's fine, or if you prefer that I commit it, I will.\n\nI just got back from a week's vacation and am only very gradually\nunburying myself from mounds of email. 
(Of course, the way\npgsql-hackers is getting, there's sort of always a mound of email\nthese days.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Jul 2019 16:40:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "[ removing <ruihaij@gmail.com>, as that mailing address seems to be MIA ]\n\nRobert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 29, 2019 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If Robert doesn't weigh in pretty soon, I'll take responsibility for it.\n\n> That's fine, or if you prefer that I commit it, I will.\n\nFYI, I just got done inventing a way to reach that code, and I have\nto suspect that it's impossible to do so in production, because under\nordinary circumstances no parallel worker will take any exclusive lock\nthat isn't already held by its leader. (If you happen to know an\neasy counterexample, let's see it.)\n\nThe attached heavily-hacked version of deadlock-soft.spec makes it go by\nforcing duplicate advisory locks to be taken in worker processes, which\nof course first requires disabling PreventAdvisoryLocksInParallelMode().\nI kind of wonder if we should provide some debug-only, here-be-dragons\nway to disable that restriction so that we could make this an official\nregression test, because I'm now pretty suspicious that none of this code\nhas ever executed before.\n\nAnyway, armed with this, I was able to prove that HEAD just hangs up\non this test case; apparently the deadlock checker never detects that\nthe additional holders of the advisory lock need to be rearranged.\nAnd removing that \"break\" fixes it.\n\nSo I'll go commit the break-ectomy, but what do people think about\ntesting this better?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Jul 2019 17:55:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, 
"msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Mon, Jul 29, 2019 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FYI, I just got done inventing a way to reach that code, and I have\n> to suspect that it's impossible to do so in production, because under\n> ordinary circumstances no parallel worker will take any exclusive lock\n> that isn't already held by its leader. (If you happen to know an\n> easy counterexample, let's see it.)\n\nI think the way you could make that happen would be to run a parallel\nquery that calls a user-defined function which does LOCK TABLE.\n\n> Anyway, armed with this, I was able to prove that HEAD just hangs up\n> on this test case; apparently the deadlock checker never detects that\n> the additional holders of the advisory lock need to be rearranged.\n> And removing that \"break\" fixes it.\n\nNice!\n\n> So I'll go commit the break-ectomy, but what do people think about\n> testing this better?\n\nI think it's a great idea. I was never very happy with the amount of\nexercise I was able to give this code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Jul 2019 20:57:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 29, 2019 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> FYI, I just got done inventing a way to reach that code, and I have\n>> to suspect that it's impossible to do so in production, because under\n>> ordinary circumstances no parallel worker will take any exclusive lock\n>> that isn't already held by its leader. (If you happen to know an\n>> easy counterexample, let's see it.)\n\n> I think the way you could make that happen would be to run a parallel\n> query that calls a user-defined function which does LOCK TABLE.\n\nI tried that first. 
There are backstops preventing doing LOCK TABLE\nin a worker, just like for advisory locks.\n\nI believe the only accessible route to taking any sort of new lock\nin a parallel worker is catalog lookups causing AccessShareLock on\na catalog. In principle, maybe you could make a deadlock situation\nby combining parallel workers with something that takes\nAccessExclusiveLock on a catalog ... but making that into a reliable\ntest case sounds about impossible, because AEL on a catalog will\nhave all sorts of unpleasant side-effects, such as blocking\nisolationtester's own queries. (Getting it to work in a\nCLOBBER_CACHE_ALWAYS build seems right out, for instance.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2019 21:48:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Mon, Jul 29, 2019 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I tried that first. There are backstops preventing doing LOCK TABLE\n> in a worker, just like for advisory locks.\n>\n> I believe the only accessible route to taking any sort of new lock\n> in a parallel worker is catalog lookups causing AccessShareLock on\n> a catalog.\n\nCan't the worker just query a previously-untouched table, maybe by\nconstructing a string and then using EXECUTE to execute it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Jul 2019 22:06:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Mon, Jul 29, 2019 at 10:56:05AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> +1. 
Any volunteers?\n> \n> If Robert doesn't weigh in pretty soon, I'll take responsibility for it.\n\nThanks Tom for taking care of it!\n--\nMichael", "msg_date": "Tue, 30 Jul 2019 11:31:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On 2019-Jul-29, Tom Lane wrote:\n\n> FYI, I just got done inventing a way to reach that code, and I have\n> to suspect that it's impossible to do so in production, because under\n> ordinary circumstances no parallel worker will take any exclusive lock\n> that isn't already held by its leader.\n\nHmm, okay, so this wasn't a bug that would have bit anyone in practice,\nyeah? That's reassuring.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jul 2019 23:21:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-29, Tom Lane wrote:\n>> FYI, I just got done inventing a way to reach that code, and I have\n>> to suspect that it's impossible to do so in production, because under\n>> ordinary circumstances no parallel worker will take any exclusive lock\n>> that isn't already held by its leader.\n\n> Hmm, okay, so this wasn't a bug that would have bit anyone in practice,\n> yeah? 
That's reassuring.\n\nAt the least, you'd have to go well out of your way to make it happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 00:06:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 29, 2019 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I believe the only accessible route to taking any sort of new lock\n>> in a parallel worker is catalog lookups causing AccessShareLock on\n>> a catalog.\n\n> Can't the worker just query a previously-untouched table, maybe by\n> constructing a string and then using EXECUTE to execute it?\n\nHm, yeah, looks like you could get a new AccessShareLock that way too.\nBut not any exclusive lock.\n\nI also looked into whether one could use SELECT FOR UPDATE/SHARE to get\nstronger locks at a tuple level, but that's been blocked off as well.\nYou guys really did a pretty good job of locking that down.\n\nAfter thinking about this for awhile, though, I believe it might be\nreasonable to just remove PreventAdvisoryLocksInParallelMode()\naltogether. The \"parallel unsafe\" markings on the advisory-lock\nfunctions seem like adequate protection against somebody running\nthem in a parallel worker. If you defeat that by calling them from\na mislabeled-parallel-safe wrapper (as the proposed test case does),\nthen any negative consequences are on your own head. 
AFAICT the\nonly actual negative consequence is that the locks disappear the\nmoment the parallel worker exits, so we'd not be opening any large\nholes even to people who rip off the safety cover.\n\n(BTW, why aren't these functions just \"parallel restricted\"?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 10:27:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Tue, Jul 30, 2019 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I also looked into whether one could use SELECT FOR UPDATE/SHARE to get\n> stronger locks at a tuple level, but that's been blocked off as well.\n> You guys really did a pretty good job of locking that down.\n\nThanks. We learned from the master.\n\n> After thinking about this for awhile, though, I believe it might be\n> reasonable to just remove PreventAdvisoryLocksInParallelMode()\n> altogether. The \"parallel unsafe\" markings on the advisory-lock\n> functions seem like adequate protection against somebody running\n> them in a parallel worker. If you defeat that by calling them from\n> a mislabeled-parallel-safe wrapper (as the proposed test case does),\n> then any negative consequences are on your own head. AFAICT the\n> only actual negative consequence is that the locks disappear the\n> moment the parallel worker exits, so we'd not be opening any large\n> holes even to people who rip off the safety cover.\n>\n> (BTW, why aren't these functions just \"parallel restricted\"?)\n\nI don't exactly remember why we installed all of these restrictions\nany more. You might be able to find some discussion of it by\nsearching the archives. I believe we may have been concerned about\nthe fact that group locking would cause advisory locks taken in one\nprocess not to conflict with the same advisory lock taken in some\ncooperating process, and maybe that would be unwelcome behavior for\nsomeone. 
For example, suppose the user defines a function that takes\nan advisory lock on the number 1, does a bunch of stuff that should\nnever happen multiply at the same time, and then releases the lock.\nWithout parallel query, that will work. With parallel query, it\nwon't, because several workers running the same query might run the\nsame function simultaneously and their locks won't conflict.\n\nBut it is really pretty arguable whether we should feel responsible\nfor that problem. We could just decide that if you're doing that, and\nyou don't want the scenario described above to happen, you oughta mark\nthe function that contains this logic at least PARALLEL RESTRICTED,\nand if you don't, then it's your fault for doing a dumb thing. I\nbelieve when we were early on in the development of this we wanted to\nbe very conservative lest, ah, someone accuse us of not locking things\ndown well enough, but maybe at this point parallel query is a\nsufficiently well-established concept that we should lighten up on\nsome cases where we took an overly-stringent line. If we take that\nview, then I'm not sure why these functions couldn't be just marked\nPARALLEL SAFE.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2019 12:46:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 30, 2019 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (BTW, why aren't these functions just \"parallel restricted\"?)\n\n> ...\n> But it is really pretty arguable whether we should feel responsible\n> for that problem. 
We could just decide that if you're doing that, and\n> you don't want the scenario described above to happen, you oughta mark\n> the function that contains this logic at least PARALLEL RESTRICTED,\n> and if you don't, then it's your fault for doing a dumb thing. I\n> believe when we were early on in the development of this we wanted to\n> be very conservative lest, ah, someone accuse us of not locking things\n> down well enough, but maybe at this point parallel query is a\n> sufficiently well-established concept that we should lighten up on\n> some cases where we took an overly-stringent line. If we take that\n> view, then I'm not sure why these functions couldn't be just marked\n> PARALLEL SAFE.\n\nNo, there's a sufficient reason why we should force advisory locks\nto be taken in the leader process, namely that the behavior is totally\ndifferent if we don't: they will disappear at the end of the parallel\nworker run, not at end of transaction or session as documented.\n\nHowever, that argument doesn't seem to be a reason why the advisory-lock\nfunctions couldn't be parallel-restricted rather than parallel-unsafe.\n\nIn any case, my question at the moment is whether we need the belt-and-\nsuspenders-too approach of having both non-parallel-safe marking and an\nexplicit check inside these functions. We've largely moved away from\nhard-wired checks for e.g. 
superuserness, and surely these things are\nless dangerous than most formerly-superuser-only functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:36:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Tue, Jul 30, 2019 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No, there's a sufficient reason why we should force advisory locks\n> to be taken in the leader process, namely that the behavior is totally\n> different if we don't: they will disappear at the end of the parallel\n> worker run, not at end of transaction or session as documented.\n\nOh, good point. I forgot about that.\n\n> However, that argument doesn't seem to be a reason why the advisory-lock\n> functions couldn't be parallel-restricted rather than parallel-unsafe.\n\nAgreed.\n\n> In any case, my question at the moment is whether we need the belt-and-\n> suspenders-too approach of having both non-parallel-safe marking and an\n> explicit check inside these functions. We've largely moved away from\n> hard-wired checks for e.g. superuserness, and surely these things are\n> less dangerous than most formerly-superuser-only functions.\n\nIf we can't think of a way that the lack of these checks could crash\nit, then I think it's OK to remove the hardwired checks. If we can,\nI'd favor keeping them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:40:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 30, 2019 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In any case, my question at the moment is whether we need the belt-and-\n>> suspenders-too approach of having both non-parallel-safe marking and an\n>> explicit check inside these functions. 
We've largely moved away from\n>> hard-wired checks for e.g. superuserness, and surely these things are\n>> less dangerous than most formerly-superuser-only functions.\n\n> If we can't think of a way that the lack of these checks could crash\n> it, then I think it's OK to remove the hardwired checks. If we can,\n> I'd favor keeping them.\n\nWell, there'd be an actual isolation test that they work ;-), if you\noverride the marking. Admittedly, one test case does not prove that\nthere's no way to crash the system, but that can be said of most\nparts of Postgres.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:44:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Tue, Jul 30, 2019 at 1:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, there'd be an actual isolation test that they work ;-), if you\n> override the marking. Admittedly, one test case does not prove that\n> there's no way to crash the system, but that can be said of most\n> parts of Postgres.\n\nTrue. I'm just talking about what we can foresee.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:45:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 30, 2019 at 1:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, there'd be an actual isolation test that they work ;-), if you\n>> override the marking. Admittedly, one test case does not prove that\n>> there's no way to crash the system, but that can be said of most\n>> parts of Postgres.\n\n> True. I'm just talking about what we can foresee.\n\nSure. 
But I think what we can foresee is that if there are any bugs\nreachable this way, they'd be reachable and need fixing regardless.\nWe've already established that parallel workers can take and release locks\nthat their leader isn't holding. Apparently, they won't take anything\nstronger than RowExclusiveLock; but even AccessShare is enough to let a\nprocess participate in all interesting behaviors of the lock manager,\nincluding blocking, being blocked, and being released early by deadlock\nresolution. And the advisory-lock functions are pretty darn thin wrappers\naround the lock manager. So I'm finding it hard to see where there's\nincremental risk, even if a user does intentionally bypass the parallel\nsafety markings. And what we get in return is an easier way to add tests\nfor this area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 14:10:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" }, { "msg_contents": "On Tue, Jul 30, 2019 at 2:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Sure. But I think what we can foresee is that if there are any bugs\n> reachable this way, they'd be reachable and need fixing regardless.\n> We've already established that parallel workers can take and release locks\n> that their leader isn't holding. Apparently, they won't take anything\n> stronger than RowExclusiveLock; but even AccessShare is enough to let a\n> process participate in all interesting behaviors of the lock manager,\n> including blocking, being blocked, and being released early by deadlock\n> resolution. And the advisory-lock functions are pretty darn thin wrappers\n> around the lock manager. So I'm finding it hard to see where there's\n> incremental risk, even if a user does intentionally bypass the parallel\n> safety markings. 
And what we get in return is an easier way to add tests\n> for this area.\n\nSure, I was basically just asking whether you could foresee any\ncrash-risk of the proposed change. It sounds like the answer is \"no,\"\nso I'm fine with it on that basis.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 31 Jul 2019 11:46:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TopoSort() fix" } ]
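The lock-manager behaviors discussed in this thread — blocking, being blocked, and being released early by deadlock resolution — all hinge on the deadlock checker finding a cycle in the wait-for graph. A toy sketch of that core idea (the function name and graph representation are invented for illustration; PostgreSQL's actual TopoSort()/deadlock code in src/backend/storage/lmgr/ handles soft edges, lock groups, and much more):

```python
# Toy wait-for graph: each key is a process, each value is the set of
# processes it waits on. A deadlock is a cycle in this graph; the
# deadlock checker's job is to find one so a victim can be released early.
# Conceptual sketch only -- not PostgreSQL's actual algorithm.

def find_deadlock(waits_for):
    """Return a list of processes forming a wait cycle, or None."""
    visiting, done = set(), set()

    def dfs(proc, path):
        visiting.add(proc)
        path.append(proc)
        for blocker in waits_for.get(proc, ()):
            if blocker in visiting:          # back edge: we found a cycle
                return path[path.index(blocker):]
            if blocker not in done:
                cycle = dfs(blocker, path)
                if cycle:
                    return cycle
        visiting.discard(proc)
        done.add(proc)
        path.pop()
        return None

    for proc in list(waits_for):
        if proc not in done:
            cycle = dfs(proc, [])
            if cycle:
                return cycle
    return None
```

For example, `find_deadlock({"A": {"B"}, "B": {"C"}, "C": {"A"}})` returns some rotation of the cycle A→B→C→A, while an acyclic graph returns None — and advisory locks taken in parallel workers participate in exactly this kind of graph.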
[ { "msg_contents": "The single largest benefit of the v12 nbtree enhancement was its\nadaptability with indexes where a portion of the key space contains\nmany duplicates. Successive page splits choose split points in a way\nthat leaves duplicates together on their own page, and eventually pack\npages full of duplicates.\n\nI have thought up a specific case where the logic can be fooled into\nconsistently doing the wrong thing, leading to very poor space\nutilization:\n\ndrop table if exists descnums;\ncreate table descnums(nums int4);\ncreate index bloat_desc on descnums (nums desc nulls first);\n-- Fill first leaf page (leaf root page) with NULL values, to the\npoint where it almost splits:\ninsert into descnums select null from generate_series(0,400);\n-- Insert integers, which will be treated as descending insertions within index:\ninsert into descnums select i from generate_series(0,10000) i;\n-- Observe if we had 50:50 page splits here:\ncreate extension if not exists pgstattuple;\nselect avg_leaf_density, leaf_fragmentation from pgstatindex('bloat_desc');\n\nThe final output looks like this:\n\n avg_leaf_density | leaf_fragmentation\n------------------+--------------------\n 1.83 | 99.88\n(1 row)\n\nAlthough the case is contrived, it is clearly not okay that this is\npossible -- avg_leaf_density should be about 50 here, which is what\nyou'll see on v11. You'll also see an avg_leaf_density that's at least\n50 if you vary any of the details. For example, if you remove \"nulls\nfirst\" then you'll get an avg_leaf_density of ~50. Or, if you make the\nindex ASC then the avg_leaf_density is almost exactly 90 for the usual\nreason (the NULLs won't \"get in the way\" of consistently getting\nrightmost splits that way). 
Note that I've deliberately arranged for\nthe page splits to be as ineffective as possible by almost filling a\nleaf page with NULLs, leaving a tiny gap for all future non-NULL\ninteger insertions.\n\nThis is another case where a bimodal distribution causes trouble when\ncombined with auto-incrementing insertions -- it is slightly similar\nto the TPC-C issue that the v12 work fixed IMV. You could say that the\nreal root of the problem here is one of two things, depending on your\nperspective:\n\n1. Arguably, nbtsplitloc.c is already doing the right thing here, and\nthe root of the issue is that _bt_truncate() lacks any way of\ngenerating a new high key that is \"mid way between\" the value NULL in\nthe lastleft tuple and the integer in the firstright tuple during the\nfirst split. If _bt_truncate() created a \"mid point value\" of around\nINT_MAX/2 for the new high key during the first split, then everything\nwould work out -- we wouldn't keep splitting the same leftmost page\nagain and again. The behavior would stabilize in the same way as it\ndoes in the ASC + NULLS LAST case, without hurting any other case that\nalready works well. This isn't an academic point; we might actually\nneed to do that in order to be able to pack the leaf page 90% full\nwith DESC insertions, which ought to be a future goal for\nnbtsplitloc.c. But that's clearly not in scope for v12.\n\n2. The other way you could look at it (which is likely to be the basis\nof my fix for v12) is that nbtsplitloc.c has been fooled into treating\npage splits as \"many duplicate\" splits, when in fact there are not\nthat many duplicates involved -- there just appears to be many\nduplicates because they're so unevenly distributed on the page. It\nwould be fine for it to be wrong if there was some way that successive\npage splits could course correct (see point 1), but that isn't\npossible here because of the skew -- we get stuck with the same lousy\nchoice of split point again and again. 
(There also wouldn't be a\nproblem if the integer values were random, since we'd have just one\nor two uneven splits at the start.)\n\nI've already written a rough patch that fixes the issue by taking this\nsecond view of the problem. The patch makes nbtsplitloc.c more\nskeptical about finishing with the \"many duplicates\" strategy,\navoiding the problem -- it can just fall back on a 50:50 page split\nwhen it looks like this is happening (the related \"single value\"\nstrategy must already do something similar in _bt_strategy()).\nCurrently, it simply considers if the new item on the page has an\noffset number immediately to the right of the split point indicated by\nthe \"many duplicates\" strategy. We look for it within ~10 offset\npositions to the right, since that strongly suggests that there aren't\nthat many duplicates after all. I may make the check more careful\nstill, for example by performing additional comparisons on the page to\nmake sure that there are in fact very few distinct values on the whole\npage.\n\nMy draft fix doesn't cause any regressions in any of my test cases --\nthe fix barely affects the splits chosen for my real-world test data,\nand TPC test data. As far as I know, I already have a comprehensive\nfix. I will need to think about it much more carefully before\nproceeding, though.\n\nThoughts?\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Jul 2019 15:51:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Adversarial case for \"many duplicates\" nbtree split strategy in v12" }, { "msg_contents": "On Tue, Jul 2, 2019 at 3:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I've already written a rough patch that fixes the issue by taking this\n> second view of the problem. 
The patch makes nbtsplitloc.c more\n> skeptical about finishing with the \"many duplicates\" strategy,\n> avoiding the problem -- it can just fall back on a 50:50 page split\n> when it looks like this is happening (the related \"single value\"\n> strategy must already do something similar in _bt_strategy()).\n> Currently, it simply considers if the new item on the page has an\n> offset number immediately to the right of the split point indicated by\n> the \"many duplicates\" strategy. We look for it within ~10 offset\n> positions to the right, since that strongly suggests that there aren't\n> that many duplicates after all.\n\nAttached draft patch shows what I have in mind. I can't think of\nanother case that will make nbtsplitloc.c do the wrong thing, so I am\ncautiously optimistic about this being the last we'll hear about cases\nwhere we *consistently* do the wrong thing because somebody got very\nunlucky *once*.\n\nI continue to maintain the test suite used to develop the v12\nenhancements to nbtree. These are mostly smoke tests which take a long\ntime to run, but there are a few particularly ticklish behaviors that\nmerit inclusion in the standard regression test suite.\n\nI wonder if it would make sense to add some tests of the new\nnbtsplitloc.c behaviors to the regression tests of\ncontrib/pgstattuple, including the behavior that this patch is\nconcerned with, as well as the \"split after new tuple\" behavior -- we\ncould do something with pgstatindex()'s avg_leaf_density field to make\nthat work. These tests would need to work in a portable fashion, while\nstill being effective as tests, but that shouldn't be too difficult.\nThe leaf space utilization very often looks *identical* to what you'll\nsee with rightmost page splits when the \"split after new tuple\"\noptimization is applied, for example. The tests will need to be\ntolerant of variations in page layout due to alignment and BLCKSZ\ndifferences, but the tolerance can probably be quite small. 
Maybe +/-\n5%.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 10 Jul 2019 16:59:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Adversarial case for \"many duplicates\" nbtree split strategy in\n v12" } ]
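The shape of the pathology described in this thread — and why falling back to a 50:50 split sidesteps it — can be seen in a toy leaf-split simulation. Everything here (page capacity, the routing rule, and the deliberately naive "many duplicates" strategy) is invented for illustration; it is not how nbtsplitloc.c actually works:

```python
import bisect

# Toy model of a DESC NULLS FIRST leaf level. NULL sorts before every
# integer and bigger integers sort earlier, so map NULL -> -inf and the
# value v -> -v, keeping each page as an ascending sorted list.
PAGE_CAP = 100
NULL_KEY = float("-inf")

def route(pages, key):
    """Toy routing: rightmost page whose first key is <= key."""
    target = 0
    for i, page in enumerate(pages):
        if page[0] <= key:
            target = i
        else:
            break
    return target

def insert(pages, key, choose_split):
    i = route(pages, key)
    bisect.insort(pages[i], key)
    if len(pages[i]) > PAGE_CAP:
        cut = choose_split(pages[i])
        pages[i:i + 1] = [pages[i][:cut], pages[i][cut:]]

def split_5050(items):
    return len(items) // 2

def split_fooled(items):
    # Caricature of the "many duplicates" mistake: always cut right after
    # the leading duplicate run, however lopsided that leaves the halves.
    for i, item in enumerate(items):
        if item != items[0]:
            return i
    return len(items) // 2

def avg_leaf_density(choose_split, inserts=2000):
    pages = [[NULL_KEY] * (PAGE_CAP - 1)]   # leftmost page nearly full of NULLs
    for v in range(inserts):
        insert(pages, -v, choose_split)     # ascending v = descending keys
    return 100.0 * sum(len(p) for p in pages) / (len(pages) * PAGE_CAP)
```

With these toy numbers, the fooled strategy ends up with a leaf density in the low single digits — it sheds a nearly empty two-item page every couple of insertions, echoing the 1.83% pgstatindex result — while the 50:50 fallback stays around 50%.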
[ { "msg_contents": "Hi,\nBecause the destination file already exists, the\nsrc/makefiles/pgxs.mk line\nln -s $< $@\nfails, and make clean doesn't remove these links.\nln -sf\nis an option, but it's not tested in configure;\nor rm -f could be used first.\n\nRegards\nDidier\n\n\n", "msg_date": "Wed, 3 Jul 2019 02:26:49 +0200", "msg_from": "didier <did447@gmail.com>", "msg_from_op": true, "msg_subject": "contrib make check-world fail if data have been modified and there's\n vpath" }, { "msg_contents": "didier <did447@gmail.com> writes:\n> Because the destination file already exists, the\n> src/makefiles/pgxs.mk line\n> ln -s $< $@\n> fails, and make clean doesn't remove these links.\n> ln -sf\n> is an option, but it's not tested in configure;\n> or rm -f could be used first.\n\nCan you be more specific about what the problem case is?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jul 2019 21:15:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib make check-world fail if data have been modified and\n there's vpath" } ]
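The failure mode is easy to reproduce outside of make. A sketch in Python (file names invented; in pgxs.mk the rule is effectively `ln -s $< $@`, which errors out on a second vpath build because the destination already exists):

```python
import os
import tempfile

# First "build": creating the symlink works because nothing is in the way.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "module--1.0.sql")   # invented file names
dst = os.path.join(workdir, "link--1.0.sql")
open(src, "w").close()
os.symlink(src, dst)                  # like `ln -s $< $@`: succeeds

# Second build with the link still present: plain `ln -s` refuses.
try:
    os.symlink(src, dst)
except FileExistsError:
    print("second ln -s fails: destination already exists")

# The two fixes floated in the thread:
# 1. `ln -sf` (force-overwrite), which is equivalent to remove-then-link;
# 2. `rm -f` the destination first, then a plain `ln -s`, which avoids
#    depending on -f being tested/available.
os.remove(dst)
os.symlink(src, dst)
```

Either way, `make clean` would additionally need to `rm -f` such links so a later build doesn't trip over them.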
[ { "msg_contents": "Hi Everyone,\n\nI have installed PostgresSQL 11.2 version on Centos 7 and try to install\n*pguint* from source to install *TINYINT* datatype .\nBut installation had problem not able to resolve dependency packages. I\nhave followed below method to install , please help me to resolve this\nissue.\n\n1. yum install centos-release-scl-rh\n2.yum install llvm-toolset-7-clang-tools-extra\n3.yum install devtoolset-7\n4.yum install llvm-toolset-5\n5.make PG_CONFIG=/usr/pgsql-11/bin/pg_config\n6.make PG_CONFIG=/usr/pgsql-11/bin/pg_config install\nInstalled llvm5.0 packages successfully but when try to create pg extension\ngetting below error ,\n\n\n\n\n*[postgres] # CREATE EXTENSION uint;ERROR: XX000: could not load library\n\"/usr/pgsql-11/lib/uint.so\": /usr/pgsql-11/lib/uint.so: undefined symbol:\nGET_1_BYTELOCATION: internal_load_library, dfmgr.c:240Time: 17.247 ms*\n\nPlease help me to resolve this issue.\n\nRegards,\nSuresh Seema\n\n-- \nThanks & Regards\nSuresh S\n\n", "msg_date": "Wed, 3 Jul 2019 14:24:51 +0530", "msg_from": "Suresh Kumar <suresh01.s@gmail.com>", "msg_from_op": true, "msg_subject": "pguint Installation error in PostgreSQL server version 11.2" } ]
[ { "msg_contents": "Hi,\nCurrently there's 0 coverage of the CustomScan code path in core.\n\nWhat about adding a no-op custom_scan test in src/test/modules/ ? Or\nis it outside pg's perimeter, so each extension using it should take care\nof it themselves?\n\nIf there's interest, I'm willing to write and propose such a test suite.\n\nRegards\nDidier.\n\n\n", "msg_date": "Wed, 3 Jul 2019 22:34:52 +0200", "msg_from": "didier <did447@gmail.com>", "msg_from_op": true, "msg_subject": "Custom Scan coverage." }, { "msg_contents": "On 2019-Jul-03, didier wrote:\n\n> Currently there's 0 coverage of the CustomScan code path in core.\n\nYeah :-(\n\n> What about adding a no-op custom_scan test in src/test/modules/ ? Or\n> is it outside pg's perimeter, so each extension using it should take care\n> of it themselves?\n> \n> If there's interest, I'm willing to write and propose such a test suite.\n\nWe'd certainly like it if there was coverage of that infrastructure.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jul 2019 16:48:12 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Custom Scan coverage." } ]