[ { "msg_contents": "Dear all,\n\nI've been looking at tidying up some of the repeated code which now\nresides in tablecmds.c - in particular the ALTER TABLE ALTER COLUMN\ncode.\n\nMost of these routines share common code:\n\n1) AccessExclusive Lock on relation.\n\n2) Relation is a table, not a system table, user is owner.\n\n3) Recurse over child relations.\n\nAnd several routines then:\n\n4) check column exists\n\n5) check column is not a system attribute\n\nI would propose to combine these checks into two routines: \nCanAlterTable(relid,systemOK) [systemOK is for the set statistics case]\nGetTargetAttnum(relid,Attname) returns attnum\n\n[This would bring some consistency to checking, for example fixing the\ncurrent segfault if you try ALTER TABLE test ALTER COLUMN xmin SET\nDEFAULT 3;]\n\nand two macros:\n\nRECURSE_OVER_CHILDREN(relid);\nAlterTableDoSomething(childrel,...);\nRECURSE_OVER_CHILDREN_END;\n\n(this seems more straightforward than passing the text of the function\ncall as a macro parameter).\n\nALTER COLUMN RENAME\n\nCurrently, attributes in tables, views and sequences can be renamed. \n-tables and views make sense, of course.\nSequences still seem to work after they've had attributes renamed, but I\nsee little value in being able to do this. Is it OK to prohibit the\nrenaming of sequence columns?\n\ntcop/utility.c vs. commands/\n\nThere are also permissions checks made in tcop/utility.c before\nAlterTableOwner and renamerel are called. It may be best to move these\ninto commands/tablecmds.c. It seems that tcop/utility.c was supposed to\nhandle the permissions checks for statements, but the inheritance\nsupport has pushed some of that into commands/ . Should permissions\nchecking for other utility statements be migrated to commands/ for\nconsistency? 
I don't propose to do this now -but it might be a later\nstage in the process.\n\nIf this general outline is OK, I'll work on a patch -this shouldn't be\nquite as drastic as the last one :-)\n\nRegards\n\nJohn\n\n-- \nJohn Gray\tECHOES: sponsored walks for Christian Aid to the highest\nAzuli IT\t\tpoints of English counties, 4th-6th May 2002\nwww.azuli.co.uk\t\twww.stannesmoseley.care4free.net/echoes.html\n\n\n", "msg_date": "19 Apr 2002 20:12:06 +0100", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": true, "msg_subject": "commands subdirectory continued -code cleanup" }, { "msg_contents": "John Gray <jgray@azuli.co.uk> writes:\n> Sequences still seem to work after they've had attributes renamed, but I\n> see little value in being able to do this. Is it OK to prohibit the\n> renaming of sequence columns?\n\nThat seems like an error to me. Setting defaults, constraints, etc on a\nsequence is bogus too --- do we catch those?\n\n> There are also permissions checks made in tcop/utility.c before\n> AlterTableOwner and renamerel are called. It may be best to move these\n> into commands/tablecmds.c. It seems that tcop/utility.c was supposed to\n> handle the permissions checks for statements, but the inheritance\n> support has pushed some of that into commands/ . Should permissions\n> checking for other utility statements be migrated to commands/ for\n> consistency? I don't propose to do this now -but it might be a later\n> stage in the process.\n\nNot sure. There are subroutines in utility.c that are useful for\nthis purpose, and I don't really see the value of having them called\nfrom all over the place when it can be more localized. 
We should\nprobably be consistent about having tablecmds.c make all the relevant\npermissions checks for its operations, but I don't think that\nnecessarily translates into the same choice for the rest of commands/.\nAFAIR none of the rest of commands/ has the recursive-operations issue\nthat forces this approach for table commands.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Apr 2002 15:34:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: commands subdirectory continued -code cleanup " }, { "msg_contents": "John Gray wrote:\n> \n> and two macros:\n> \n> RECURSE_OVER_CHILDREN(relid);\n> AlterTableDoSomething(childrel,...);\n> RECURSE_OVER_CHILDREN_END;\n> \n> (this seems more straightforward than passing the text of the function\n> call as a macro parameter).\n> \n\nSuggestion:\n\nRECURSE_OVER_CHILDREN(inh, relid)\n{\n\tAlterTableDoSomething(childrel,...);\n}\n\n\n-- \nFernando Nasser\nRed Hat - Toronto E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Fri, 19 Apr 2002 15:38:07 -0400", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: commands subdirectory continued -code cleanup" }, { "msg_contents": "On Fri, 2002-04-19 at 20:34, Tom Lane wrote:\n> John Gray <jgray@azuli.co.uk> writes:\n> > Sequences still seem to work after they've had attributes renamed, but I\n> > see little value in being able to do this. Is it OK to prohibit the\n> > renaming of sequence columns?\n> \n> That seems like an error to me. Setting defaults, constraints, etc on a\n> sequence is bogus too --- do we catch those?\n> \nYes, we catch those. 
The current gaps in checking are:\n1) renameatt allows any type of relation to have columns renamed\n(including indexes, but that case is caught by heap_open objecting to\nthe relation being an index)\n\n2) add/drop default allows the addition/dropping of constraints on\nsystem attributes (Apropos of this, I think it might also be good to add\na check into AddRelationRawConstraints that verifies that attnum in\ncolDef is actually positive and <natts for the relation -just in case it\nis passed a bogus structure from somewhere else. This would be a cheap\ncheck to do.)\n\n> Not sure. There are subroutines in utility.c that are useful for\n> this purpose, and I don't really see the value of having them called\n> from all over the place when it can be more localized. We should\n> probably be consistent about having tablecmds.c make all the relevant\n> permissions checks for its operations, but I don't think that\n> necessarily translates into the same choice for the rest of commands/.\n> AFAIR none of the rest of commands/ has the recursive-operations issue\n> that forces this approach for table commands.\n> \n\nOK. I can remove the duplicate system relation check from renamerel and\njust leave a comment explaining that the permissions checks are\nperformed in utility.c. (renamerel is unlike other tablecmds.c functions\nin that it doesn't recurse. 
It is also called from cluster.c but again,\npermissions are checked for that in tcop/utility.c) \n\nRegards\n\nJohn\n\n", "msg_date": "20 Apr 2002 10:14:16 +0100", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": true, "msg_subject": "Re: commands subdirectory continued -code cleanup" }, { "msg_contents": "On Fri, 2002-04-19 at 20:38, Fernando Nasser wrote:\n> John Gray wrote:\n> > \n> > and two macros:\n> > \n> > RECURSE_OVER_CHILDREN(relid);\n> > AlterTableDoSomething(childrel,...);\n> > RECURSE_OVER_CHILDREN_END;\n> > \n> > (this seems more straightforward than passing the text of the function\n> > call as a macro parameter).\n> > \n> \n> Suggestion:\n> \n> RECURSE_OVER_CHILDREN(inh, relid)\n> {\n> \tAlterTableDoSomething(childrel,...);\n> }\n> \n\nYes, that would be nicer -I just have to work out a suitable rewrite of\nthe code so that it could fit that kind of macro setup -at present,\nthere is code that is executed in each iteration of the loop, which\nmeans there is more than one group to close after the AlterTable call.\n\nI'll think on it...\n\nRegards\n\nJohn\n-- \nJohn Gray\tECHOES: sponsored walks for Christian Aid to the highest\nAzuli IT\t\tpoints of English counties, 4th-6th May 2002\nwww.azuli.co.uk\t\twww.stannesmoseley.care4free.net/echoes.html\n\n\n", "msg_date": "20 Apr 2002 11:16:16 +0100", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": true, "msg_subject": "Re: commands subdirectory continued -code cleanup" }, { "msg_contents": "\n<snip>\n\n> and two macros:\n> \n> RECURSE_OVER_CHILDREN(relid);\n> AlterTableDoSomething(childrel,...);\n> RECURSE_OVER_CHILDREN_END;\n> \n> (this seems more straightforward than passing the text of the function\n> call as a macro parameter).\n\nThe above all looks fine. 
The other stuff I wouldn't really know about.\n\nChris\n\n", "msg_date": "Mon, 22 Apr 2002 10:11:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: commands subdirectory continued -code cleanup" } ]
[ { "msg_contents": "[Please CC any replies, I'm subscribed nomail]\n\nHi,\n\nChapter 7 of the Developers guide in about the Page Format on disk and it's\na little out of date not to mention somewhat incomplete.\n\n1. Is there documentation elsewhere (other than the source)?\n\n2. If not, would patches be accepted to correct the situation? I've been\nlooking into it a bit recently so I think I may be able to whip something\nuseful up.\n\nThanks in advance,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n", "msg_date": "Sun, 21 Apr 2002 00:22:17 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Documentation on page files" }, { "msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n> Chapter 7 of the Developers guide in about the Page Format on disk and it's\n> a little out of date not to mention somewhat incomplete.\n\nIndeed, this seems to have very little relation to reality :-(.\nI didn't even realize that we had such a description in the SGML docs.\nIt's obviously not been updated for many years. I'm not sure if the\n\"continuation\" mechanism it describes ever existed at all, but it sure\nhasn't been there since the code left Berkeley.\n\n> 1. Is there documentation elsewhere (other than the source)?\n\nNot that I can think of. The most accurate information seems to be in\nsrc/include/storage/bufpage.h; AFAICT all the comments in that file are\nup-to-date. In addition to this it'd be worth pulling out some\ndescription of the \"special space\" structures used by the various index\naccess methods.\n\n> 2. 
If not, would patches be accepted to correct the situation?\n\nGo for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Apr 2002 12:16:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files " }, { "msg_contents": "On Sat, 20 Apr 2002, Tom Lane wrote:\n\n> Martijn van Oosterhout <kleptog@svana.org> writes:\n> > Chapter 7 of the Developers guide in about the Page Format on disk and it's\n> > a little out of date not to mention somewhat incomplete.\n>\n> Indeed, this seems to have very little relation to reality :-(.\n\nI dunno, it seems to be not too bad to me, though woefully incomplete.\nI too was considering writing an updated version of this.\n\n> I'm not sure if the\n> \"continuation\" mechanism it describes ever existed at all, but it sure\n> hasn't been there since the code left Berkeley.\n\nYeah, I was wondering about that. This has been replaced by TOAST, right?\n\n> > 2. If not, would patches be accepted to correct the situation?\n>\n> Go for it.\n\nYes, please! I'd be happy to review and updated version.\n\nOne thing that would be good, since this is a developers' guide,\nwould be to include references to the source files and dates from\nwhich the information comes. That way one could see if updates are\nnecessary by doing a diff on those files between the given date\nand the head, to see what changes have been made since the description\nwas written. Also good would be to have the data structures explicitly\nnamed so that when one dives into the source, one already has a\ngood idea of what one's looking at.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Sun, 21 Apr 2002 15:46:07 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files " }, { "msg_contents": "On Sun, Apr 21, 2002 at 03:46:07PM +0900, Curt Sampson wrote:\n> On Sat, 20 Apr 2002, Tom Lane wrote:\n> \n> > Martijn van Oosterhout <kleptog@svana.org> writes:\n> > > 2. If not, would patches be accepted to correct the situation?\n> >\n> > Go for it.\n> \n> Yes, please! I'd be happy to review and updated version.\n\nOk, my first attempt can be seen here:\n\nhttp://svana.org/kleptog/pgsql/page.sgml.txt\n\nI don't know whatever SGML format this is using, so the layout is not great,\nbut the information should be accurate. I used it to create a program to\ndump the datafiles directly without the postmaster :).\n\nI'll submit a proper patch once we have something useful.\n\n> One thing that would be good, since this is a developers' guide,\n> would be to include references to the source files and dates from\n> which the information comes. That way one could see if updates are\n> necessary by doing a diff on those files between the given date\n> and the head, to see what changes have been made since the description\n> was written. Also good would be to have the data structures explicitly\n> named so that when one dives into the source, one already has a\n> good idea of what one's looking at.\n\nWell, I have included the names of the structures involved. Do you think\nit's worth adding filenames given that TAGS makes tracking them down easily\nenough? 
I can put in dates if you like.\n\nIssues to be dealt with:\n- Do I need to say more about TOAST?\n- Indexes?\n- Split into sections\n- How much detail is enough/too much?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n", "msg_date": "Sun, 21 Apr 2002 19:28:32 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Sun, Apr 21, 2002 at 07:28:32PM +1000, Martijn van Oosterhout wrote:\n> \n> http://svana.org/kleptog/pgsql/page.sgml.txt\n> \n> I don't know whatever SGML format this is using, so the layout is not great,\n> but the information should be accurate. I used it to create a program to\n> dump the datafiles directly without the postmaster :).\n\nExcellent - since this is a FRP (Frequently Requested Program) how do you\nfeel about dumping it in contrib? Even if it's hardcoded for your particular\ntable structure, it could serve as a starting point for some poor DBA\nwho's got to recover from a lost xlog, for example.\n\nRoss\n", "msg_date": "Mon, 22 Apr 2002 11:14:36 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "Martijn,\n\nIt may be useful to look at the pg_filedump utility located\nat http://sources.redhat.com/rhdb/tools.html \n\nThis utility dumps out information at the page level and is\ncommented to help the user understand the format/content of\nPostgreSQL heap/index/control files.\n\nCheers,\nPatrick\n-----------------\nPatrick Macdonald\nRed Hat Database\n\n\nTom Lane wrote:\n> \n> Martijn van Oosterhout <kleptog@svana.org> writes:\n> > Chapter 7 of the Developers guide in about the Page Format on disk and it's\n> > a little out of date not to mention somewhat incomplete.\n> \n> Indeed, this seems to have very little relation to reality :-(.\n> I didn't even realize that we had such a description in the SGML docs.\n> It's obviously not been updated for many years. I'm not sure if the\n> \"continuation\" mechanism it describes ever existed at all, but it sure\n> hasn't been there since the code left Berkeley.\n> \n> > 1. Is there documentation elsewhere (other than the source)?\n> \n> Not that I can think of. The most accurate information seems to be in\n> src/include/storage/bufpage.h; AFAICT all the comments in that file are\n> up-to-date. In addition to this it'd be worth pulling out some\n> description of the \"special space\" structures used by the various index\n> access methods.\n> \n> > 2. If not, would patches be accepted to correct the situation?\n> \n> Go for it.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Mon, 22 Apr 2002 13:57:31 -0400", "msg_from": "Patrick Macdonald <patrickm@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Mon, Apr 22, 2002 at 11:14:36AM -0500, Ross J. 
Reedstrom wrote:\n> On Sun, Apr 21, 2002 at 07:28:32PM +1000, Martijn van Oosterhout wrote:\n> > \n> > http://svana.org/kleptog/pgsql/page.sgml.txt\n> > \n> > I don't know whatever SGML format this is using, so the layout is not great,\n> > but the information should be accurate. I used it to create a program to\n> > dump the datafiles directly without the postmaster :).\n> \n> Excellent - since this is a FRP (Frequently Requested Program) how do you\n> feel about dumping it in contrib? Even if it's hardcoded for your particular\n> table structure, it could serve as a starting point for some poor DBA\n> who's got to recover from a lost xlog, for example.\n\nActually, it reads the table structure from the catalog. It also will find\nthe right files to open. It reads files from both PG 6.5 and 7.2 although it\nshouldn't be too hard to make work for other versions. And if you people\ndon't reorder the first few fields in pg_attribute, it will work for all\nfuture versions too.\n\nThe dumping is more of an extra, the original idea was to check for errors\nin the datafiles. Hence the working name of \"pgfsck\". At the moment the\ndumping dumps only tuples where xmax == 0 but I'm not sure if that's\ncorrect.\n\nIt doesn't handle compressed tuples nor toasted ones, though thats more\nadvanced really. And ofcourse outputing data in human readable format has to\nbe added for each type. 
I only started writing it on Sunday, so let me give\nit a usable interface and I'll let people try it out.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n", "msg_date": "Tue, 23 Apr 2002 09:29:28 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Tue, 2002-04-23 at 01:29, Martijn van Oosterhout wrote:\n> \n> The dumping is more of an extra, the original idea was to check for errors\n> in the datafiles. Hence the working name of \"pgfsck\". At the moment the\n> dumping dumps only tuples where xmax == 0 but I'm not sure if that's\n> correct.\n\nAFAIK it is not. As Tom once explained me, it is ok for tuples xmax to\nbe !=0 and still have a valid tuple. The validity is determined by some\nbits in tuple header.\n\n\nBut I think the most useful behaviour should be to dump system fields\ntoo, so mildly knowledgeable sysadmin can import the dump and do the\nright thing afterwards (like restore data as it was before transaction\nnr 7000)\n\n-------------\nHannu\n\n", "msg_date": "23 Apr 2002 09:15:22 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Tue, Apr 23, 2002 at 09:15:22AM +0200, Hannu Krosing wrote:\n> On Tue, 2002-04-23 at 01:29, Martijn van Oosterhout wrote:\n> > \n> > The dumping is more of an extra, the original idea was to check for errors\n> > in the datafiles. Hence the working name of \"pgfsck\". At the moment the\n> > dumping dumps only tuples where xmax == 0 but I'm not sure if that's\n> > correct.\n> \n> AFAIK it is not. As Tom once explained me, it is ok for tuples xmax to\n> be !=0 and still have a valid tuple. 
The validity is determined by some\n> bits in tuple header.\n\nWell, from my thinking about how you would use these fields in a logical\nway, it seems it's possible for xmax to be non-zero if the transaction\nnumbered xmax was not committed. But in that case (unless it was a delete)\nthere would be a newer tuple with the same oid but xmax == 0 (and this\nuncommitted transaction as xmin).\n\nThe problem is that inside the DB, you have a current transaction plus a\nlist of committed transactions. Externally, you have no idea, so xmax == 0\nis as valid a view as any other. This would have the effect of dumping out\nwhatever would be visible if every transaction were committed.\n\nI think. If anyone knows a good document on MVCC implementations, let me\nknow.\n\n> But I think the most useful behaviour should be to dump system fields\n> too, so mildly knowledgeable sysadmin can import the dump and do the\n> right thing afterwards (like restore data as it was before transaction\n> nr 7000)\n\nWell, i didn't think you could have statements of the form:\n\ninsert into table (xmin,xmax,cmin,cmax,...) values (...);\n\nSo you would have to leave it as a comment. In which case someone would have\nto go and by hand work out what would be in or out. I can make it an option\nbut I don't think it would be particularly useful. 
Maybe\n--pretend-uncommitted <xact>\n\nJust a thought, if I did a \"delete from table\" accedently, and stopped the\npostmaster and twiddled the xlog for that transaction, would that have the\neffect of undeleting those tuples?\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n", "msg_date": "Tue, 23 Apr 2002 17:52:00 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Tue, 23 Apr 2002, Martijn van Oosterhout wrote:\n\n> > But I think the most useful behaviour should be to dump system fields\n> > too, so mildly knowledgeable sysadmin can import the dump and do the\n> > right thing afterwards (like restore data as it was before transaction\n> > nr 7000)\n>\n> Well, i didn't think you could have statements of the form:\n>\n> insert into table (xmin,xmax,cmin,cmax,...) values (...);\n>\n> So you would have to leave it as a comment. In which case someone would have\n> to go and by hand work out what would be in or out. I can make it an option\n> but I don't think it would be particularly useful.\n\nWhat we really want is to be able to take that output, edit it,\nand put it into another program that creates the page files anew.\nOr maybe just generate new data ourselves and run it into that\nprogram. Then, if we can take a tablespace we've created this way\nand attach it to an already-running postgres system, we've got\nreally, really fast import/export capability.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Tue, 23 Apr 2002 17:22:00 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "On Tue, 2002-04-23 at 12:52, Martijn van Oosterhout wrote:\n\n> Well, from my thinking about how you would use these fields in a logical\n> way, it seems it's possible for xmax to be non-zero if the transaction\n> numbered xmax was not committed. But in that case (unless it was a delete)\n> there would be a newer tuple with the same oid but xmax == 0 (and this\n> uncommitted transaction as xmin).\n\nUnless it was an uncommitted DELETE and not UPDATE.\n\n> The problem is that inside the DB, you have a current transaction plus a\n> list of committed transactions. Externally, you have no idea, so xmax == 0\n> is as valid a view as any other. This would have the effect of dumping out\n> whatever would be visible if every transaction were committed.\n\nIIRC there are some bits that determine the commit status of tuple.\n\n> I think. If anyone knows a good document on MVCC implementations, let me\n> know.\n> \n> > But I think the most useful behaviour should be to dump system fields\n> > too, so mildly knowledgeable sysadmin can import the dump and do the\n> > right thing afterwards (like restore data as it was before transaction\n> > nr 7000)\n> \n> Well, i didn't think you could have statements of the form:\n> \n> insert into table (xmin,xmax,cmin,cmax,...) values (...);\n\nbut you can have\n\n insert into newtable values (...);\n\nSo you are free to name your xmin,... whatever you like\n\n> So you would have to leave it as a comment. In which case someone would have\n> to go and by hand work out what would be in or out. 
I can make it an option\n> but I don't think it would be particularly useful.\n\nI have written a small python script that does the above, and I did\nwrite it because I needed it, so it must have some use ;)\n\nAfter inserting the data to database I was then able to select all but\nlatest (before delete) version of each tuple.\n\n--------------\nHannu\n\n\n\n", "msg_date": "23 Apr 2002 23:59:17 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" }, { "msg_contents": "\nAdded to CVS.\n\n---------------------------------------------------------------------------\n\nMartijn van Oosterhout wrote:\n> On Sun, Apr 21, 2002 at 03:46:07PM +0900, Curt Sampson wrote:\n> > On Sat, 20 Apr 2002, Tom Lane wrote:\n> > \n> > > Martijn van Oosterhout <kleptog@svana.org> writes:\n> > > > 2. If not, would patches be accepted to correct the situation?\n> > >\n> > > Go for it.\n> > \n> > Yes, please! I'd be happy to review and updated version.\n> \n> Ok, my first attempt can be seen here:\n> \n> http://svana.org/kleptog/pgsql/page.sgml.txt\n> \n> I don't know whatever SGML format this is using, so the layout is not great,\n> but the information should be accurate. I used it to create a program to\n> dump the datafiles directly without the postmaster :).\n> \n> I'll submit a proper patch once we have something useful.\n> \n> > One thing that would be good, since this is a developers' guide,\n> > would be to include references to the source files and dates from\n> > which the information comes. That way one could see if updates are\n> > necessary by doing a diff on those files between the given date\n> > and the head, to see what changes have been made since the description\n> > was written. 
Also good would be to have the data structures explicitly\n> > named so that when one dives into the source, one already has a\n> > good idea of what one's looking at.\n> \n> Well, I have included the names of the structures involved. Do you think\n> it's worth adding filenames given that TAGS makes tracking them down easily\n> enough? I can put in dates if you like.\n> \n> Issues to be dealt with:\n> - Do I need to say more about TOAST?\n> - Indexes?\n> - Split into sections\n> - How much detail is enough/too much?\n> \n> Have a nice day,\n> -- \n> Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> > Canada, Mexico, and Australia form the Axis of Nations That\n> > Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 20 Jun 2002 17:46:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Documentation on page files" } ]
[ { "msg_contents": "\nHi all.\n\nI'd like to implement a .NET Data provider for\npostgresql using the protocol as specified in\n<http://developer.postgresql.org/docs/postgres/protocol.html>.\n\n\nI think this is important because there are many sites\nout there that would be using ASP.NET and a postgresql\ndatabase backend. I know that they could use the ODBC\nbridge but I think that a native implementation would\nbe cool, just as the JDBC implementation is :)\n\n\nThere are already implementations of .NET runtime\nframework for FreeBSD\n<http://msdn.microsoft.com/library/default.asp?url=/library/en-us/Dndotnet/html/mssharsourcecli.asp?frame=true>\n, and Linux <http://www.go-mono.org>.\n\nI'd like to know if there is already anybody working\nwith something like this because I'm creating a new\nproject at sourceforge.net and I don't want to overlap\nanywork already done :).\n\n\nThanks in advance.\n\n\nFrancisco Jr.\n\n_______________________________________________________________________________________________\nYahoo! Empregos\nO trabalho dos seus sonhos pode estar aqui. Cadastre-se hoje mesmo no Yahoo! Empregos e tenha acesso a milhares de vagas abertas!\nhttp://br.empregos.yahoo.com/\n", "msg_date": "Sat, 20 Apr 2002 21:11:15 -0300 (ART)", "msg_from": "\"=?iso-8859-1?q?Francisco=20Jr.?=\" <fxjrlists@yahoo.com.br>", "msg_from_op": true, "msg_subject": "Implement a .NET Data Provider" }, { "msg_contents": "Le Dimanche 21 Avril 2002 02:11, Francisco Jr. a écrit :\n> I'd like to know if there is already anybody working\n> with something like this because I'm creating a new\n> project at sourceforge.net and I don't want to overlap\n> anywork already done :).\n\nMaybe you should try contact the ODBC list which is mainly working on Windows \nfeatures / connectivity. Also, the ODBC team might open a CVS account for you \non Postgresql.org.\n\nSourceForge does not allow projects to leave. 
Therefore, when your project is \nmature enough to be included in PostgreSQL main tree, there will still be \ngarbage on Sourceforge in a Google search.\n\nCheers,\nJean-Michel POURE\n", "msg_date": "Sun, 21 Apr 2002 10:15:53 +0200", "msg_from": "Jean-Michel POURE <jmpoure@translationforge.com>", "msg_from_op": false, "msg_subject": "Re: Implement a .NET Data Provider" } ]
[ { "msg_contents": "In order to apply a dependency of foreign keys against a column set\nthe most obvious way to go is via the unique index which in turn\ndepends on the expected columns.\n\nA(id) -> B(id)\n\nA.id -> Foreign key -> Index on B.id -> B.id\n\nIf B.id is dropped it'll cascade forward.\n\n\nThe trick? Foreign keys are currently any number of triggers without\na central location for marking them as such. So...\n\nA.id\n -> Trigger on A -> Index on B.id -> B.id\n -> Trigger on A -> Index on B.id -> B.id\n -> Trigger on A -> Index on B.id -> B.id\n\nOf course, since Trigger on A depends on A we also have\nTrigger on A -> B.id\n\nNot so bad if we can go with the currently coded assumption that\ndependencies will be dropped starting with the columns (during DROP\nTABLE) and then do the relation.\n\nThis will allow dropping tons of stuff via foreign key relations and a\nCASCADE option but it won't make them very nice to look at. Not to\nmention the trigger creation code would need knowledge of foreign keys\nor more specifically indexes.\n\nIs everyone Ok with the above? Or do we go about making an pg_fkey\ntype table for tracking this stuff?\n\nFKey Triggers -> pg_fkey entry\nA.id -> pg_fkey entry\npg_fkey entry -> index on B.id -> B.id\n\nSelf scrubbing really. Makes foreign keys really obvious. Foreign\nkey code needs to know about triggers, not the other way around.\n\nOf course, this depends on the pg_depend stuff I just submitted to\npatches. Any thoughts on the data pg_fkey would need? Name, A.oid,\nA.<int2 vector -- column list>\n\n--\nRod\n\n", "msg_date": "Sat, 20 Apr 2002 22:07:18 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Foreign keys and indexes." 
}, { "msg_contents": "> Of course, since Trigger on A depends on A we also have\n> Trigger on A -> B.id\n\nShould read:\nTrigger on A -> relation A\n\nTriggers depend on relation which owns it :)\n\n", "msg_date": "Sat, 20 Apr 2002 22:16:17 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Foreign keys and indexes." }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Is everyone Ok with the above? Or do we go about making an pg_fkey\n> type table for tracking this stuff?\n\nIn general there ought to be a pg_constraint table that records all\ntypes of constraints (not only foreign keys). We blew it once already\nby making pg_relcheck (which only handles check constraints). Let's\nnot miss the boat again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 14:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Foreign keys and indexes. " } ]
[ { "msg_contents": "I'm trying to (finally) get my rather extensive patches (mostly\naddressing int8 versions of date/time storage) applied but am now having\ntrouble with the regression tests.\n\nI'm sure this has been on the list, but I'm not recalling the\nexplanation or workaround. My guess is that it is related to collation\ntroubles with the new locale-always-enabled feature. I've tended to\nnever enable this stuff in the past.\n\nThe first symptom is failures in the char regression test. An example\ndiff is\n\n*** expected/char.out\tTue Jun 5 07:20:01 2001\n--- results/char.out\tSun Apr 21 10:04:08 2002\n***************\n*** 63,74 ****\n WHERE c.f1 < 'a';\n five | f1 \n ------+----\n- | A\n | 1\n | 2\n | 3\n | \n! (5 rows)\n\nSo the 'A' row is left out of the result on my machine.\n\nAll other failures (there are 7 tests total which fail) are likely\nsimilar in nature. I've tried a \"make clean\", a \"make distclean\", and\nneed a hint on what to try next. I'd *really* like to get these patches\napplied, and am almost certain that they are not related to these\nregression failures, but...\n\nEarly help would be appreciated; I've got time in the next couple of\nhours to get this stuff finished!! :)\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 10:19:26 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "failed regression tests" }, { "msg_contents": "Thomas Lockhart writes:\n\n> I'm sure this has been on the list, but I'm not recalling the\n> explanation or workaround. My guess is that it is related to collation\n> troubles with the new locale-always-enabled feature. I've tended to\n> never enable this stuff in the past.\n>\n> The first symptom is failures in the char regression test. 
An example\n> diff is\n\ninitdb --no-locale\n\nI'm pondering ways to make the regression tests locale-aware, but it\nhasn't happened yet.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 21 Apr 2002 13:45:49 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: failed regression tests" }, { "msg_contents": "> > The first symptom is failures in the char regression test...\n> initdb --no-locale\n\nOoh. Thanks, that fixes it. Where would I have found this in the docs? I\nwas looking in the wrong place (in configure/build) rather than at\ninitdb. Should we have something in the release notes? I see (now that I\nlook) that there is a one-liner, but istm that this may deserve a\nparagraph in the \"significant changes\" category.\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 10:57:55 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: failed regression tests" }, { "msg_contents": "> > > The first symptom is failures in the char regression test...\n> > initdb --no-locale\n> \n> Ooh. Thanks, that fixes it. Where would I have found this in the docs? I\n> was looking in the wrong place (in configure/build) rather than at\n> initdb. Should we have something in the release notes? I see (now that I\n> look) that there is a one-liner, but istm that this may deserve a\n> paragraph in the \"significant changes\" category.\n\nSince once a user do initdb without knowing he is enabling the locale\nsupport, the only way to recover from it is doing initdb again. I\nsuggest something like showing a message in the initdb time\nemphasizing he is enabling the local support.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 22 Apr 2002 10:27:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: failed regression tests" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 19 April 2002 19:54\n> To: Rod Taylor\n> Cc: Hackers List\n> Subject: Re: Really annoying comments... \n> \n> \n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > COMMENT ON DATABASE db IS 'Comment';\n> > Now switch databases. Comment is gone.\n> \n> Yeah, it's not very helpful. I'm not sure why we bothered to \n> implement that in the first place.\n> \n> > I suppose in order to add a comment field to pg_database it \n> would need \n> > to be toasted or something (ton of work). Any other way to \n> fix this?\n> \n> I'm more inclined to rip it out ;-). \n\nEeep! pgAdmin handles comments coming from multiple pg_description tables\nand it works very well (IMHO) in the pgAdmin UI. By all means make them work\nmore sensibly in whatever way seems most appropriate - I'll fix pgAdmin to\nhandle it, but don't just rip them out please!!\n\nRegards, Dave.\n", "msg_date": "Sun, 21 Apr 2002 19:37:30 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Really annoying comments... " }, { "msg_contents": "Dave Page <dpage@vale-housing.co.uk> writes:\n>> I'm more inclined to rip it out ;-). \n\n> Eeep! pgAdmin handles comments coming from multiple pg_description tables\n> and it works very well (IMHO) in the pgAdmin UI. By all means make them work\n> more sensibly in whatever way seems most appropriate - I'll fix pgAdmin to\n> handle it, but don't just rip them out please!!\n\nWell, it would seem like the only sensible rule would be to allow\nCOMMENT ON DATABASE only for the *current* database. Then at least\nyou know which DB to look in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 14:44:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really annoying comments... " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 21 April 2002 19:45\n> To: Dave Page\n> Cc: Rod Taylor; Hackers List\n> Subject: Re: [HACKERS] Really annoying comments... \n> \n> \n> Dave Page <dpage@vale-housing.co.uk> writes:\n> >> I'm more inclined to rip it out ;-).\n> \n> > Eeep! pgAdmin handles comments coming from multiple pg_description \n> > tables and it works very well (IMHO) in the pgAdmin UI. By \n> all means \n> > make them work more sensibly in whatever way seems most \n> appropriate - \n> > I'll fix pgAdmin to handle it, but don't just rip them out please!!\n> \n> Well, it would seem like the only sensible rule would be to \n> allow COMMENT ON DATABASE only for the *current* database. \n> Then at least you know which DB to look in.\n\nThat wouldn't cause me any pain - in pgAdmin the comment is just a property\nof a pgDatabase object - if you modify it, it will always be set through a\nconnection to that database.\n\nRegards, Dave.\n", "msg_date": "Sun, 21 Apr 2002 19:49:24 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Really annoying comments... " }, { "msg_contents": "This makes the most sense. 
One could assume a user who doesn't have\naccess to a particular database shouldn't know what it's for either.\nSo making the comments global could be problematic in some cases.\n\nI'll enforce this and send in a patch.\n--\nRod\n----- Original Message -----\nFrom: \"Dave Page\" <dpage@vale-housing.co.uk>\nTo: \"'Tom Lane'\" <tgl@sss.pgh.pa.us>\nCc: \"Rod Taylor\" <rbt@zort.ca>; \"Hackers List\"\n<pgsql-hackers@postgresql.org>\nSent: Sunday, April 21, 2002 2:49 PM\nSubject: RE: [HACKERS] Really annoying comments...\n\n\n>\n>\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: 21 April 2002 19:45\n> > To: Dave Page\n> > Cc: Rod Taylor; Hackers List\n> > Subject: Re: [HACKERS] Really annoying comments...\n> >\n> >\n> > Dave Page <dpage@vale-housing.co.uk> writes:\n> > >> I'm more inclined to rip it out ;-).\n> >\n> > > Eeep! pgAdmin handles comments coming from multiple\npg_description\n> > > tables and it works very well (IMHO) in the pgAdmin UI. By\n> > all means\n> > > make them work more sensibly in whatever way seems most\n> > appropriate -\n> > > I'll fix pgAdmin to handle it, but don't just rip them out\nplease!!\n> >\n> > Well, it would seem like the only sensible rule would be to\n> > allow COMMENT ON DATABASE only for the *current* database.\n> > Then at least you know which DB to look in.\n>\n> That wouldn't cause me any pain - in pgAdmin the comment is just a\nproperty\n> of a pgDatabase object - if you modify it, it will always be set\nthrough a\n> connection to that database.\n>\n> Regards, Dave.\n>\n\n", "msg_date": "Sun, 21 Apr 2002 16:09:45 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Really annoying comments... " } ]
[ { "msg_contents": "I've applied patches to implement an int64-based data/time storage\nscheme. I've also accumulated some other minor fixes, which result in an\ninitdb being required (sorry!).\n\nNote that the *default* timestamp type is now TIMESTAMP WITHOUT TIME\nZONE. This is what we discussed previously for the transition to SQL9x\ncompliance.\n\nFull cvs log entry is included below.\n\n - Thomas\n\nSupport alternate storage scheme of 64-bit integer for date/time types.\n Use \"--enable-integer-datetimes\" in configuration to use this rather\n than the original float8 storage. I would recommend the integer-based\n storage for any platform on which it is available. We perhaps should\n make this the default for the production release.\nChange timezone(timestamptz) results to return timestamp rather than\n a character string. Formerly, we didn't have a way to represent\n timestamps with an explicit time zone other than freezing the info into\n a string. Now, we can reasonably omit the explicit time zone from the\n result and return a timestamp with values appropriate for the specified\n time zone. Much cleaner, and if you need the time zone in the result\n you can put it into a character string pretty easily anyway.\nAllow fractional seconds in date/time types even for dates prior to 1BC.\nLimit timestamp data types to 6 decimal places of precision. Just right\n for a micro-second storage of int8 date/time types, and reduces the\n number of places ad-hoc rounding was occuring for the float8-based\ntypes.\nUse lookup tables for precision/rounding calculations for timestamp and\n interval types. Formerly used pow() to calculate the desired value but\n with a more limited range there is no reason to not type in a lookup\n table. Should be *much* better performance, though formerly there were\n some optimizations to help minimize the number of times pow() was\ncalled.\nDefine a HAVE_INT64_TIMESTAMP variable. 
Based on the configure option\n \"--enable-integer-datetimes\" and the existing internal INT64_IS_BUSTED.\nAdd explicit date/interval operators and functions for addition and\n subtraction. Formerly relied on implicit type promotion from date to\n timestamp with time zone.\nChange timezone conversion functions for the timetz type from \"timetz()\"\n to \"timezone()\". This is consistant with other time zone coersion\n functions for other types.\nBump the catalog version to 200204201.\nFix up regression tests to reflect changes in fractional seconds\n representation for date/times in BC eras.\nAll regression tests pass on my Linux box.\n", "msg_date": "Sun, 21 Apr 2002 13:02:13 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Patches applied; initdb time!" }, { "msg_contents": "btw, I've updated the regression tests and results for my platform, but\nother platforms (e.g. Solaris) will need their results files updated...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 13:27:15 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "I'm seeing half a dozen gcc warnings as a result of these patches.\nDo you want to fix 'em, or shall I?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 16:41:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "> I'm seeing half a dozen gcc warnings as a result of these patches.\n> Do you want to fix 'em, or shall I?\n\nWhere are they? I haven't noticed anything in the files I have changes;\nare the warnings elsewhere?\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 13:50:50 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" 
}, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> I'm seeing half a dozen gcc warnings as a result of these patches.\n>> Do you want to fix 'em, or shall I?\n\n> Where are they?\n\nWith fairly vanilla configure options, I get\n\nmake[3]: Entering directory `/home/postgres/pgsql/src/backend/parser'\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -c -o gram.o gram.c\ngram.y:6688: warning: `set_name_needs_quotes' defined but not used\n\nmake[3]: Entering directory `/home/postgres/pgsql/src/backend/commands'\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -c -o sequence.o sequence.c\nIn file included from sequence.c:25:\n../../../src/include/utils/int8.h:33: warning: `INT64CONST' redefined\n../../../src/include/utils/pg_crc.h:83: warning: this is the location of the previous definition\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -c -o variable.o variable.c\nvariable.c: In function `parse_datestyle':\nvariable.c:262: warning: `rstat' might be used uninitialized in this function\nvariable.c:264: warning: `value' might be used uninitialized in this function\n\nmake[4]: Entering directory `/home/postgres/pgsql/src/backend/utils/adt'\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../../src/include -c -o selfuncs.o selfuncs.c\nIn file included from selfuncs.c:95:\n../../../../src/include/utils/int8.h:33: warning: `INT64CONST' redefined\n../../../../src/include/utils/pg_crc.h:83: warning: this is the location of the previous definition\n\nSeems not good to have INT64CONST separately defined in int8.h and \npg_crc.h. 
Offhand I'd either move it into c.h, or else consider that\nint8.h is the Right Place for it and make pg_crc.h include int8.h.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 16:57:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "> > I'm seeing half a dozen gcc warnings as a result of these patches.\n> Where are they?\n\nMore specifically, the *only* compiler warning I see (other than the\nusual yacc/lex symbol warnings) is that a routine in gram.y,\nset_name_needs_quotes(), is defined but not used. Don't know where that\nroutine came from, and afaik I didn't accidentally remove a reference\nwhen trying to merge changes...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 13:59:17 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "> With fairly vanilla configure options, I get...\n\nPlease be specific on the options and platform. I do *not* see these\nwarnings here with my \"fairly vanilla configure options\" ;)\n\nCan't fix what I can't see, and we should track down what interactions\nare happening to get these variables exposed...\n\nbtw, the INT64CONST must be defined for int8 (which is where I get the\ndefinition for the date/time stuff); not sure why it appears in two\nseparate places and not sure why my compiler (gcc-2.96.xxx) does not\nnotice it.\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 14:02:31 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> btw, I've updated the regression tests and results for my platform, but\n> other platforms (e.g. 
Solaris) will need their results files updated...\n\nI committed a fix for HPUX's horology file, and did some extrapolation\nto produce a Solaris version; someone please verify that it's OK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:10:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> With fairly vanilla configure options, I get...\n> Please be specific on the options and platform.\n\nHPUX 10.20,\n\n./configure --with-CXX --with-tcl --enable-cassert\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:13:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "Thomas Lockhart wrote:\n>>With fairly vanilla configure options, I get...\n> \n> \n> Please be specific on the options and platform. I do *not* see these\n> warnings here with my \"fairly vanilla configure options\" ;)\n> \n> Can't fix what I can't see, and we should track down what interactions\n> are happening to get these variables exposed...\n> \n> btw, the INT64CONST must be defined for int8 (which is where I get the\n> definition for the date/time stuff); not sure why it appears in two\n> separate places and not sure why my compiler (gcc-2.96.xxx) does not\n> notice it.\n> \n\nI just built from cvs tip using:\n./configure --enable-integer-datetimes --enable-locale --enable-debug \n--enable-cassert --enable-multibyte --enable-syslog --enable-nls \n--enable-depend\n\nand got:\n\ngram.y:6688: warning: `set_name_needs_quotes' defined but not used\n\nvariable.c: In function `parse_datestyle':\nvariable.c:262: warning: `rstat' might be used uninitialized in this \nfunction\nvariable.c:264: warning: `value' might be used uninitialized in this \nfunction\n\n-- and the usual lexer related warnings --\n\npgc.c: In function 
`yylex':\npgc.c:1249: warning: label `find_rule' defined but not used\npgc.l: At top level:\npgc.c:3073: warning: `yy_flex_realloc' defined but not used\nand\n\npl_scan.c: In function `plpgsql_base_yylex':\npl_scan.c:1020: warning: label `find_rule' defined but not used\nscan.l: At top level:\npl_scan.c:2321: warning: `yy_flex_realloc' defined but not used\n\nbut did *not* get the INT64CONST warning that Tom did. I'm using an \nupdated Red Hat 7.2 box.\n\nHTH,\n\nJoe\n\n\n", "msg_date": "Sun, 21 Apr 2002 14:14:52 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> More specifically, the *only* compiler warning I see (other than the\n> usual yacc/lex symbol warnings) is that a routine in gram.y,\n> set_name_needs_quotes(), is defined but not used. Don't know where that\n> routine came from, and afaik I didn't accidentally remove a reference\n> when trying to merge changes...\n\nYeah, you did. However the routine could possibly go away now.\nIt was a hack I put in recently to handle cases like\n\nregression=# create schema \"MySchema\";\nCREATE\nregression=# create schema \"MyOtherSchema\";\nCREATE\nregression=# set search_path TO \"MySchema\", \"MyOtherSchema\";\nERROR: SET takes only one argument for this parameter\n\nFormerly gram.y merged the list items into a single string, and so it\nneeded to double-quote mixed-case names to prevent case folding when\nthe string got re-parsed later.\n\nThis example worked last week, and probably would work again if the\nsystem were applying your new list-argument logic for search_path ...\nbut I'm not sure where to look to learn about that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:22:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! 
" }, { "msg_contents": "> >> With fairly vanilla configure options, I get...\n> > Please be specific on the options and platform.\n> HPUX 10.20,\n> ./configure --with-CXX --with-tcl --enable-cassert\n\nBoy, how plain-vanilla. *My* configure line is all of\n\n./configure --prefix=/home/thomas/local\n\nBut I do override some parameters in my Makefile.custom:\n\nCFLAGS+= -g -O0 -DUSE_ASSERT_CHECKING\nCFLAGS+= -DCOPY_PARSE_PLAN_TREES\n\nWhich gives me (except for the plan tree thing) something very similar.\n\nI've looked a bit more, and the set_name_needs_quotes() is probably\nobsoleted by my update, which generalizes parameter handling in SET\nvariables. I'll rip it out unless we get a test case in the regression\ntests which demonstrates a problem. I'm pretty sure that it may have\nallowed \n\n SET key='par1 w space,par2';\n\nbut that would be handled now by\n\n SET key='par1 w space',par2;\n\nfor cases in which \"key\" would accept multiple values. We now can allow\nsingle parameters with embedded commas *and* whitespace, which would\nhave been impossible before. Not sure why white space is desirable\nhowever, so the new behavior seems adequate to me.\n\nI'm still not sure why the INT64CONST conflict does not show up as a\nwarning on my machine, but looking at the code I'm not sure why we would\never have had two versions in the first place. Anyone want to take\nresponsibility for consolidating it into The Right Place? If not, I'll\ngo ahead and do it...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 14:24:57 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> but did *not* get the INT64CONST warning that Tom did. I'm using an \n> updated Red Hat 7.2 box.\n\nProbably it depends on compiler version? 
I'm using gcc 2.95.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:25:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> I'm still not sure why the INT64CONST conflict does not show up as a\n> warning on my machine, but looking at the code I'm not sure why we would\n> ever have had two versions in the first place. Anyone want to take\n> responsibility for consolidating it into The Right Place? If not, I'll\n> go ahead and do it...\n\nI think it was originally needed only for the CRC code, so we put it\nthere to begin with. Clearly should be in a more widely used place now.\nDo you have any opinion whether c.h or int8.h is the Right Place?\nI'm still dithering about that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:33:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>but did *not* get the INT64CONST warning that Tom did. I'm using an \n>>updated Red Hat 7.2 box.\n> \n> \n> Probably it depends on compiler version? I'm using gcc 2.95.3.\n> \n\ncould be:\n[postgres@jec-linux pgsql]$ gcc -v\nReading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs\ngcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-98)\n\nJoe\n\n\n\n", "msg_date": "Sun, 21 Apr 2002 14:37:18 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "> I think it was originally needed only for the CRC code, so we put it\n> there to begin with. 
Clearly should be in a more widely used place now.\n> Do you have any opinion whether c.h or int8.h is the Right Place?\n> I'm still dithering about that.\n\nIn looking at the code, istm that the versions should be merged with\nfeatures from both. The generated constants should be surrounded in\nparens, but the explicit coersion to (int64) should be omitted at least\nwith the \"LL\" version.\n\nI've got some other \"int64\" pushups to worry about; let's try fixing\nthose too (though afaict they may need to happen in different places).\nAt the moment, we have INT64_IS_BUSTED as an amalgam of other conditions\nor undefined variables. I've also got a HAVE_INT64_TIMESTAMP which comes\nfrom a configured variable USE_INTEGER_DATETIMES and an undefined\nINT64_IS_BUSTED. This is now housed in c.h, but istm that we *should*\ncheck for conflicting settings in configure itself, and carry forward a\nconsistant set of parameters from there.\n\nAnyway, at the moment some of this stuff is in c.h, and that is probably\nthe right place to put the INT64CONST definitions, at least until things\nsort out differently.\n\nbtw, I've updated gram.y and variable.c to suppress the reported\nwarnings (which I *still* don't see here; that is very annoying).\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 14:47:08 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> btw, I've updated gram.y and variable.c to suppress the reported\n> warnings (which I *still* don't see here; that is very annoying).\n> \n\nFWIW, I'm still seeing:\ngram.y:99: warning: `set_name_needs_quotes' declared `static' but never \ndefined\n\nJoe\n\n", "msg_date": "Sun, 21 Apr 2002 14:50:29 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" 
}, { "msg_contents": "> FWIW, I'm still seeing:\n> gram.y:99: warning: `set_name_needs_quotes' declared `static' but never\n> defined\n\nAck. Sloppy patching. Should be fixed now...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 14:53:31 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "> > But I do override some parameters in my Makefile.custom:\n> > CFLAGS+= -g -O0 -DUSE_ASSERT_CHECKING\n> If you use -O0 then you miss most of the interesting warnings.\n\n?? Not in this case. afaik -O0 suppresses most optimizations (and hence\ndoes not reorder instructions, which is why I use it for debugging; I\nknow, debuggers nowadays work pretty well even with instruction\nreordering, but...).\n\nAnyway, compiling with \"-O2\" on variable.c still does not show the\nwarnings with my 2.96.x compiler...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 15:06:10 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Thomas Lockhart writes:\n\n> But I do override some parameters in my Makefile.custom:\n>\n> CFLAGS+= -g -O0 -DUSE_ASSERT_CHECKING\n\nIf you use -O0 then you miss most of the interesting warnings.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 21 Apr 2002 18:06:59 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> But I do override some parameters in my Makefile.custom:\n> CFLAGS+= -g -O0 -DUSE_ASSERT_CHECKING\n>> If you use -O0 then you miss most of the interesting warnings.\n\n> ?? Not in this case. 
afaik -O0 suppresses most optimizations\n\nIn particular, you don't get \"unused variable\" and \"variable may not\nhave been set before being used\" warnings at -O0, because the\ncontrol-flow analysis needed to emit those warnings is not done at -O0.\n\nI generally use -O1 for development; it's sometimes a little hairy\nstepping through the generated code, but usually gcc works well enough\nat -O1, and I get the important warnings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 18:11:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time! " }, { "msg_contents": "Thomas Lockhart wrote:\n>>FWIW, I'm still seeing:\n>>gram.y:99: warning: `set_name_needs_quotes' declared `static' but never\n>>defined\n> \n> \n> Ack. Sloppy patching. Should be fixed now...\n> \n> - Thomas\n\nYup, did the trick.\n\nThanks,\n\nJoe\n\n\n", "msg_date": "Sun, 21 Apr 2002 15:35:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "...\n> In particular, you don't get \"unused variable\" and \"variable may not\n> have been set before being used\" warnings at -O0, because the\n> control-flow analysis needed to emit those warnings is not done at -O0.\n\nRight. The point is that I don't get those (apparently) with -O2 either,\nwith my particular compiler. Hmm. Actually, I *do* get those if I make\nsure that some of the other options are set too; my quick test added -O2\nbut left out some of the -w switches. OK, never mind...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 15:36:49 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" 
}, { "msg_contents": "\n\nThomas Lockhart wrote:\n\n>>>But I do override some parameters in my Makefile.custom:\n>>>CFLAGS+= -g -O0 -DUSE_ASSERT_CHECKING\n>>>\n>>If you use -O0 then you miss most of the interesting warnings.\n>>\n>\n>?? Not in this case. afaik -O0 suppresses most optimizations (and hence\n>does not reorder instructions, which is why I use it for debugging; I\n>know, debuggers nowadays work pretty well even with instruction\n>reordering, but...).\n>\n>Anyway, compiling with \"-O2\" on variable.c still does not show the\n>warnings with my 2.96.x compiler...\n>\nIt's actually the optimiser that allows a large number of the warnings \nto be uncovered. It generates extra code-path and coverage information, \nas well as other things, that are needed for the guts of GCC to squawk \nabout a number of odd behaviours.\n\n", "msg_date": "Sun, 21 Apr 2002 15:52:23 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Patches applied; initdb time!" }, { "msg_contents": "> Right. The point is that I don't get those (apparently) with -O2 either,\n> with my particular compiler. Hmm. Actually, I *do* get those if I make\n> sure that some of the other options are set too; my quick test added -O2\n> but left out some of the -w switches. OK, never mind...\n\nbtw, now that I've started using \"-O2\", my geometry regression test now\npasses as though it were the \"standard linux result\". It's been a *long*\ntime since that test passed for me, which probably says that it has been\nquite a while since I didn't force a \"-O0\"...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 19:57:55 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Patches applied; initdb time!" 
}, { "msg_contents": "Hi,\n\nTried to compile PG from CVS today, my platform is:\n\n$ uname -a\nLinux pastis 2.4.17-686 #2 Sat Dec 22 21:58:49 EST 2001 i686 unknown\n\n$ gcc -v\nReading specs from /usr/lib/gcc-lib/i386-linux/2.95.4/specs\ngcc version 2.95.4 20011002 (Debian prerelease)\n\nI do a simple ./configure then a simple make\n\nAnd the error is:\n\n\" [...]\nmake[3]: Entering directory\n`/home/jpargudo/etudes/postgresql-cvs/pgsql-cvs-snapshot-20020423/src/backend/bootstrap'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I.\n-I../../../src/include -c -o bootparse.o bootparse.c\nbootparse.y: In function `Int_yyparse':\nbootparse.y:276: structure has no member named `class'\nmake[3]: *** [bootparse.o] Erreur 1\nmake[3]: Leaving directory\n`/home/jpargudo/etudes/postgresql-cvs/pgsql-cvs-snapshot-20020423/src/backend/bootstrap'\nmake[2]: *** [bootstrap-recursive] Erreur 2\nmake[2]: Leaving directory\n`/home/jpargudo/etudes/postgresql-cvs/pgsql-cvs-snapshot-20020423/src/backend'\nmake[1]: *** [all] Erreur 2\nmake[1]: Leaving directory\n`/home/jpargudo/etudes/postgresql-cvs/pgsql-cvs-snapshot-20020423/src'\nmake: *** [all] Erreur 2\n\"\n\nI can't find anywhere such already notified bug :-(\n\nWhat am I doing wrong?...\n\nI'll watch the source and try to guess what's wrong in bootstrap.* ...\n\nCheers,\n\n-- \nJean-Paul ARGUDO IDEALX S.A.S\nConsultant bases de données 15-17, av. de Ségur\nhttp://www.idealx.com F-75007 PARIS\n", "msg_date": "Tue, 23 Apr 2002 10:42:08 +0200", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>", "msg_from_op": false, "msg_subject": "cvs update, configure, make, error in bootstrap.* ?..."
}, { "msg_contents": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com> writes:\n> make[3]: Entering directory\n> `/home/jpargudo/etudes/postgresql-cvs/pgsql-cvs-snapshot-20020423/src/backend/bootstrap'\n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I.\n> -I../../../src/include -c -o bootparse.o bootparse.c\n> bootparse.y: In function `Int_yyparse':\n> bootparse.y:276: structure has no member named `class'\n> make[3]: *** [bootparse.o] Erreur 1\n> make[3]: Leaving directory\n\nYou seem to have an out-of-date bootparse.c. Perhaps a timestamp skew\nproblem? Try removing bootstrap_tokens.h and bootparse.c, then try\nagain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Apr 2002 10:24:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs update, configure, make, error in bootstrap.* ?... " }, { "msg_contents": "> You seem to have an out-of-date bootparse.c. Perhaps a timestamp skew\n> problem? Try removing bootstrap_tokens.h and bootparse.c, then try\n> again.\n\nYup. Works.\n\nI had to do it many times to make it work... strange :)\n\nI noticed many .cvsignore in many folders (there is one in\nsrc/backend/bootstrap for example), is that ok?\n\nThanks for the right help :)\n\n-- \nJean-Paul ARGUDO IDEALX S.A.S\nConsultant bases de donn�es 15-17, av. de S�gur\nhttp://www.idealx.com F-75007 PARIS\n", "msg_date": "Tue, 23 Apr 2002 18:40:23 +0200", "msg_from": "Jean-Paul ARGUDO <jean-paul.argudo@idealx.com>", "msg_from_op": false, "msg_subject": "Re: cvs update, configure, make, error in bootstrap.* ?..." } ]
[ { "msg_contents": "OK I know it's been beaten nearly to death, but no clear action has come \nof it quite yet. We all seem to agree that there is some non-optimal \nway in which the planner handles edge cases (cost wise). I don't \nbelieve that there are any fundamental type faults in any of the logic \nbecause we'd have much more major problems. Instead I'd like to \ninvestigate these edge cases where the planner chooses sub-optimal plans \nand see if there is anything that can be done about it. \n\nNo clue if I can be of any help or not yet, just something I'm going to \nbe looking into. The reason I'm writing though is I need data samples \nand queries that evoke the non-optimal responses (IE choosing the wrong \nplan) in order to look into it. \n\nAlso I'd also like to know if there is a way to get the planner to burp \nout all the possible plans it considered before selecting a final plan \nor do I need to do a little surgery to get that done?\n\n\nTIA guys!\n\nMichael Loftis\n\nBTW I'm not masochistic, I'm just out of work and BORED :)\n\n", "msg_date": "Sun, 21 Apr 2002 13:30:06 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": true, "msg_subject": "Coster/planner and edge cases..." }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> Also I'd also like to know if there is a way to get the planner to burp \n> out all the possible plans it considered before selecting a final plan \n> or do I need to do a little surgery to get that done?\n\nYou can define OPTIMIZER_DEBUG but the interface leaves a lot to be\ndesired (output to backend stdout, no way to turn it on or off except\nrecompile...) Also, I believe all you will see are the paths that\nsurvived the initial pruning done by add_path. 
This is about the\nright level of detail for examining join choices, but perhaps not very\nhelpful for why-didn't-it-use-my-index choices; the paths you wanted\nto know about may not have got into the relation's candidate-path list\nin the first place.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 18:06:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Coster/planner and edge cases... " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Michael Loftis <mloftis@wgops.com> writes:\n>\n>>Also I'd also like to know if there is a way to get the planner to burp \n>>out all the possible plans it considered before selecting a final plan \n>>or do I need to do a little surgery to get that done?\n>>\n>\n>You can define OPTIMIZER_DEBUG but the interface leaves a lot to be\n>desired (output to backend stdout, no way to turn it on or off except\n>recompile...) Also, I believe all you will see are the paths that\n>survived the initial pruning done by add_path. This is about the\n>right level of detail for examining join choices, but perhaps not very\n>helpful for why-didn't-it-use-my-index choices; the paths you wanted\n>to know about may not have got into the relation's candidate-path list\n>in the first place.\n>\nAlright, that gives me some places to attack it at then anyway. Thanks \nvery much Tom. Sounds like I'll probably be doing a little bit of work \nIE I'd like to have the information come back as say a notice or maybe \nas extra information for an EXPLAIN for my purposes, but unless there is \ninterest, consensus on how it should be done, and a TODO item made of \nit, I won't be making a patch of that, no reason to clutter the backend \nwith stuff that hopefully won't be needed for long :)\n\nMichael\n\n", "msg_date": "Sun, 21 Apr 2002 15:48:43 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": true, "msg_subject": "Re: Coster/planner and edge cases..." 
}, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> IE I'd like to have the information come back as say a notice or maybe \n> as extra information for an EXPLAIN for my purposes, but unless there is \n> interest, consensus on how it should be done, and a TODO item made of \n> it, I won't be making a patch of that, no reason to clutter the backend \n> with stuff that hopefully won't be needed for long :)\n\nI think it'd be useful to have, actually, as long as we're not talking\nabout much code bloat. I tend to try to find a way to see what I want\nwith EXPLAIN, because using OPTIMIZER_DEBUG is such a pain. But it's\noften difficult to force the plan I'm interested in to rise to the top.\nA nicer user interface for looking at the rejected alternatives would\nseem like a step forward to me, whether or not ordinary users have any\nneed for it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 19:05:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Coster/planner and edge cases... " }, { "msg_contents": "Tom Lane wrote:\n> Michael Loftis <mloftis@wgops.com> writes:\n> > IE I'd like to have the information come back as say a notice or maybe \n> > as extra information for an EXPLAIN for my purposes, but unless there is \n> > interest, consensus on how it should be done, and a TODO item made of \n> > it, I won't be making a patch of that, no reason to clutter the backend \n> > with stuff that hopefully won't be needed for long :)\n> \n> I think it'd be useful to have, actually, as long as we're not talking\n> about much code bloat. I tend to try to find a way to see what I want\n> with EXPLAIN, because using OPTIMIZER_DEBUG is such a pain. 
But it's\n> often difficult to force the plan I'm interested in to rise to the top.\n> A nicer user interface for looking at the rejected alternatives would\n> seem like a step forward to me, whether or not ordinary users have any\n> need for it...\n\nI think there is consensus. Added to TODO:\n\n\tImprove ability to display optimizer analysis using OPTIMIZER_DEBUG\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 16:26:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Coster/planner and edge cases..." } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\tthomas@postgresql.org\t02/04/21 17:37:03\n\nModified files:\n\tsrc/backend/commands: variable.c \n\nLog message:\n\tInitialize or set a couple of variables to suppress compiler warnings.\n\tThese were for cases protected by elog(ERROR) exits, but may as well\n\tkeep the compiler happy. Not sure why they don't show up on my gcc-2.96.x\n\tversion of the compiler.\n\nModified files:\n\tsrc/backend/parser: gram.y \n\nLog message:\n\tRemove the definition for set_name_needs_quotes() on the assumption that\n\tit is now obsolete. Need some regression test cases to prove otherwise...\n\n", "msg_date": "Sun, 21 Apr 2002 17:37:03 -0400 (EDT)", "msg_from": "thomas@postgresql.org (Thomas Lockhart)", "msg_from_op": true, "msg_subject": "pgsql/src/backend commands/variable.c parser/g ..." }, { "msg_contents": "thomas@postgresql.org (Thomas Lockhart) writes:\n> Log message:\n> \tRemove the definition for set_name_needs_quotes() on the assumption that\n> \tit is now obsolete. Need some regression test cases to prove otherwise...\n\nI agree that we don't want to reinstate that hack on the gram.y side.\nHowever, it seems to me way past time that we did what needs to be done\nwith variable.c --- ie, get rid of it. All these special-cased\nvariables should be folded into GUC.\n\nThe code as committed has some problems beyond having broken support\nfor search_path with a list:\n\nregression=# set seed to 1,2;\nserver closed the connection unexpectedly\n\n(crash is due to assert failure)\n\nbut really there's no point in worrying about that one case. 
What we\nneed to do is figure out what needs to be done to GUC to let it support\nthese variables, and then merge the variable.c code into that structure.\n\nShould we allow GUC stuff to take a list of A_Const as being the most\ngeneral case, or is that overkill?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 17:52:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "> I agree that we don't want to reinstate that hack on the gram.y side.\n> However, it seems to me way past time that we did what needs to be done\n> with variable.c --- ie, get rid of it. All these special-cased\n> variables should be folded into GUC.\n\nOr in some cases into pg_database? We might want some of this to travel\nas database-specific properties adjustable using SQL or SET syntax.\n\n> The code as committed has some problems beyond having broken support\n> for search_path with a list:\n> regression=# set seed to 1,2;\n> server closed the connection unexpectedly\n\nOK. Would be nice to have a regression test covering this. And this is\nquite easy to fix of course.\n\n> but really there's no point in worrying about that one case. What we\n> need to do is figure out what needs to be done to GUC to let it support\n> these variables, and then merge the variable.c code into that structure.\n> Should we allow GUC stuff to take a list of A_Const as being the most\n> general case, or is that overkill?\n\nNot sure. Peter would like to change the SET DATESTYLE support if I\nremember correctly. But I've gotten the impression, perhaps wrongly,\nthat this extends to changing features in dates and times beyond style\nsettings. If it is just the two-dimensional nature of the datestyle\nparameters (euro vs non-euro, and output format) then I'm sure that some\nother reasonable syntax could be arranged. 
I'm not sure what he would\nrecommend wrt GUC in just the context of general capabilities for\nspecifying parameters.\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 15:01:42 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n>> However, it seems to me way past time that we did what needs to be done\n>> with variable.c --- ie, get rid of it. All these special-cased\n>> variables should be folded into GUC.\n\n> Or in some cases into pg_database? We might want some of this to travel\n> as database-specific properties adjustable using SQL or SET syntax.\n\nAh, but we *have* that ability right now; see Peter's recent changes\nto support per-database and per-user GUC settings. The functionality\navailable for handling GUC-ified variables is now so far superior to\nplain SET that it's really foolish to consider having any parameters\nthat are outside GUC control.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 18:16:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...) " }, { "msg_contents": "...\n> Ah, but we *have* that ability right now; see Peter's recent changes\n> to support per-database and per-user GUC settings. The functionality\n> available for handling GUC-ified variables is now so far superior to\n> plain SET that it's really foolish to consider having any parameters\n> that are outside GUC control.\n\nistm that with the recent discussion of transaction-fying SET variables\nthat table-fying some settable parameters may be appropriate. 
Leave out\nthe \"foolish\" from the discussion please ;)\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 15:32:30 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "...\n> regression=# set seed to 1,2;\n> server closed the connection unexpectedly\n> (crash is due to assert failure)\n\nNow that I look, the assert is one I put in explicitly to catch multiple\narguments! I wasn't sure what the behavior *should* be, though I could\nhave done worse than simply checking for multiple arguments and throwing\na more graceful elog(ERROR) with a message about having too many\narguments to SET SEED...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 16:22:40 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "Hmm. In looking at SET, why couldn't we develop this as an extendable\ncapability a la pg_proc? If PostgreSQL knew how to link up the set\nkeyword with a call to a subroutine, then we could go ahead and call\nthat routine generically, right? Do the proposals on the table call for\nthis kind of implementation, or are they all \"extra-tabular\"?\n\nWe could make this extensible by defining a separate table, or by\ndefining a convention for pg_proc as we do under different circumstances\nwith type coersion.\n\nThe side effects of the calls would still need some protection to be\nrolled back on transaction abort.\n\nComments?\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 16:36:44 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> Hmm. 
In looking at SET, why couldn't we develop this as an extendable\n> capability a la pg_proc?\n\nWell, my thoughts were along the line of providing specialized parsing\nsubroutines tied to specific GUC variables. There already are\nparse_hook and assign_hook concepts in GUC, but possibly they need a\nlittle more generalization to cover what these variables need to do.\n\nIf you're suggesting setting up an actual database table, I'm not\nsure I see the point. Any system parameter is going to have to be\ntied to backend code that knows what to do with the parameter, so\nit's not like you can expect to do anything useful purely by adding\ntable entries. The C-code tables existing inside guc.c seem like\nenough flexibility to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 19:53:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...) " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The only thing that I had suggested on occasion was that if nontrivial\n> work were to be put into SET DATESTYLE, we might want to consider if a\n> certain amount of \"cleanup\" could be done at the same time. For example,\n> the particular date styles have somewhat unfortunate names, as does the\n> \"european\" option. And the parameter could be separated into two. One\n> doesn't have to agree with these suggestions, but without them the work is\n> sufficiently complicated that no one has gotten around to it yet.\n\nI think you were mainly concerned that we not define two interacting\nGUC variables (ie, setting one could have side-effects on the other)?\n\nI don't see any inherent reason that DATESTYLE couldn't be imported into\nGUC as-is. 
The semantics might be uglier than you'd like, but why would\nthey be any worse than they are now?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Apr 2002 19:57:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...) " }, { "msg_contents": "Thomas Lockhart writes:\n\n> Not sure. Peter would like to change the SET DATESTYLE support if I\n> remember correctly. But I've gotten the impression, perhaps wrongly,\n> that this extends to changing features in dates and times beyond style\n> settings. If it is just the two-dimensional nature of the datestyle\n> parameters (euro vs non-euro, and output format) then I'm sure that some\n> other reasonable syntax could be arranged. I'm not sure what he would\n> recommend wrt GUC in just the context of general capabilities for\n> specifying parameters.\n\nThe only thing that I had suggested on occasion was that if nontrivial\nwork were to be put into SET DATESTYLE, we might want to consider if a\ncertain amount of \"cleanup\" could be done at the same time. For example,\nthe particular date styles have somewhat unfortunate names, as does the\n\"european\" option. And the parameter could be separated into two. One\ndoesn't have to agree with these suggestions, but without them the work is\nsufficiently complicated that no one has gotten around to it yet.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 21 Apr 2002 19:59:20 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" }, { "msg_contents": "...\n> If you're suggesting setting up an actual database table, I'm not\n> sure I see the point. Any system parameter is going to have to be\n> tied to backend code that knows what to do with the parameter, so\n> it's not like you can expect to do anything useful purely by adding\n> table entries. 
The C-code tables existing inside guc.c seem like\n> enough flexibility to me.\n\nAh, certainly not! (which is as close as I'll come to saying \"how\nfoolish!\" :)\n\nYou've done work on generalizing the extensibility features of\nPostgreSQL. A next step to take with that is to allow for a more generic\n\"package\" capability, where packages can be defined, and can have some\ninitialization code run at the time the database starts up. This would\nallow packages to have their own internal state as extensions to the\ninternal state of the core package.\n\nHaving SET be extensible is another piece to the puzzle, allowing these\nkinds of parameters to also be extensible. I'm not sure that this should\nbe considered a part of the GUC design (the parameters are designed to\nbe available *outside* the database itself, to allow startup issues to\nbe avoided, right?) but perhaps GUC should be considered a subset of the\nactual SET feature set.\n\nI got the strong feeling that Hiroshi was concerned that we were\nintending to lump all SET features into a single one-size-fits-all\nframework. This may be the flip side of it; just because we like SET to\nbe used in lots of places doesn't mean we should always limit it to\nthings built in to the core. And we should be wary of forcing all things\n\"SET\" to behave with transactional properties if that doesn't make\nsense. I've always been comfortable with the concept of \"out of band\"\nbehavior, which I think is reflected, for example, with DDL vs DML\naspects of the SQL language. 
Current SET behavior aside (where the\nparser is rejecting SET commands out of hand after errors within a\ntransaction) we should put as few *designed in* restrictions on SET as\npossible, at least until we are willing to introduce a richer set of\ncommands (that is, something in addition to SET) as alternatives.\n\nall imho of course :)\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 17:06:28 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GUC vs variable.c (was Patches applied...)" } ]
[ { "msg_contents": "I read a thread about table lock timeout but don't know whether anything has been done about it.\n\nHere is what I'd like to do:\nI don't want my transactions to be on hold for too long so I'd like to use a syntax I use on INFORMIX already:\nSET LOCK MODE TO [WAIT [second] | NOT WAIT]\nI'm using ecpg and I think I'm up to make a patch to it to support the WAIT [second] with the asynchronous functions but I need a way to know whether a statement would lock from the back-end to implement the NOT WAIT.\nIs there a way?", "msg_date": "Mon, 22 Apr 2002 12:00:09 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": true, "msg_subject": "How to know whether a table is locked ?" }, { "msg_contents": "Nicolas Bazin wrote:\n> I read a thread about table lock timeout but don't know whether\n> anything has been done about it.\n> \n> Here is what I'd like to do: I don't want my transactions to\n> be on hold for too long so I'd like to use a syntax I use on\n> INFORMIX already: SET LOCK MODE TO [WAIT [second] | NOT WAIT]\n> I'm using ecpg and I think I'm up to make a patch to it to\n> support the WAIT [second] with the asynchronous functions but\n> I need a way to know whether a statement would lock from the\n> back-end to implement the NOT WAIT. Is there a way?\n\nWe are discussing a SET timeout parameter that would do something\nsimilar. However, it will not be done until 7.3, at the earliest. The\nonly solution right now would be to do an alarm() and issue a query\ncancel after the alarm sounds.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 16:55:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to know whether a table is locked ?" } ]
[ { "msg_contents": "On FreeBSD/Alpha, current CVS:\n\ngmake -C common SUBSYS.o\ngmake[4]: Entering directory `/home/chriskl/pgsql/src/backend/access/common'\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/../src/include -c -o heaptuple.o heaptuple.c -MMD\nIn file included from ../../../../src/include/utils/timestamp.h:24,\n from ../../../../src/include/utils/nabstime.h:21,\n from ../../../../src/include/access/xact.h:19,\n from ../../../../src/include/utils/tqual.h:19,\n from ../../../../src/include/access/relscan.h:17,\n from ../../../../src/include/access/heapam.h:18,\n from heaptuple.c:23:\n../../../../src/include/utils/int8.h:35: warning: `INT64CONST' redefined\n../../../../src/include/utils/pg_crc.h:85: warning: this is the location of\nthe previous definition\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/../src/include -c -o indextuple.o indextuple.c -MMD\nIn file included from ../../../../src/include/utils/timestamp.h:24,\n from ../../../../src/include/utils/nabstime.h:21,\n from ../../../../src/include/access/xact.h:19,\n from ../../../../src/include/utils/tqual.h:19,\n from ../../../../src/include/access/relscan.h:17,\n from ../../../../src/include/access/heapam.h:18,\n from indextuple.c:19:\n../../../../src/include/utils/int8.h:35: warning: `INT64CONST' redefined\n../../../../src/include/utils/pg_crc.h:85: warning: this is the location of\nthe previous definition\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/../src/include -c -o indexvalid.o indexvalid.c -MMD\nIn file included from ../../../../src/include/utils/timestamp.h:24,\n from ../../../../src/include/utils/nabstime.h:21,\n from ../../../../src/include/access/xact.h:19,\n from ../../../../src/include/utils/tqual.h:19,\n from ../../../../src/include/access/relscan.h:17,\n from ../../../../src/include/nodes/execnodes.h:17,\n from ../../../../src/include/nodes/plannodes.h:17,\n from 
../../../../src/include/executor/execdesc.h:19,\n from ../../../../src/include/executor/executor.h:17,\n from ../../../../src/include/executor/execdebug.h:17,\n from indexvalid.c:19:\n../../../../src/include/utils/int8.h:35: warning: `INT64CONST' redefined\n../../../../src/include/utils/pg_crc.h:85: warning: this is the location of\nthe previous definition\n\n", "msg_date": "Mon, 22 Apr 2002 12:55:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "few probs with integer timestamps" }, { "msg_contents": "> On FreeBSD/Alpha, current CVS:\n...\n> ../../../../src/include/utils/int8.h:35: warning: `INT64CONST' redefined\n> ../../../../src/include/utils/pg_crc.h:85: warning: this is the location of\n> the previous definition\n\nafaict this is OK (that is, the compiled code should work) until we get\nthe definitions moved around and cleaned up. I'm not sure why I didn't\nsee that multiple definition on my machine...\n\n - Thomas\n", "msg_date": "Sun, 21 Apr 2002 22:54:56 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: few probs with integer timestamps" } ]
[ { "msg_contents": "I've committed a bit more to variable.c to handle integer inputs to GUC\nparameters (string and float were already supported). I've included the\ncvs log message below.\n\nFurther changes aren't precluded of course, but the code now supports\nstring, integer, and floating point inputs to parameters (for those\nparameters which can accept them ;).\n\n - Thomas\n\nConvert GUC parameters back to strings if input as integers.\nChange elog(ERROR) messages to say that a variable takes one parameter,\n rather than saying that it does not take multiple parameters.\n", "msg_date": "Mon, 22 Apr 2002 08:19:29 -0700", "msg_from": "Thomas Lockhart <thomas@fourpalms.org>", "msg_from_op": true, "msg_subject": "Fixups on variable.c" } ]
[ { "msg_contents": "Edward (edx@astercity.net) reports a bug with a severity of 1\nThe lower the number the more severe it is.\n\nShort Description\nECPG: inserting float numbers\n\nLong Description\nInserting records with single precision real variables having small value (range 1.0e-6 or less) frequently results in errors in ECPG translations putting into resulted sql statement unexpected characters => see fragments of sample code and ECPGdebug log where after value of rate variable the unexpected character '^A' appears\n\nSample Code\n/* fragment of the program code */\nexec sql begin declare section;\n/* ... */\nfloat rate; /* level change rate */\n/* ... */\nexec sql end declare section;\n/* ... */\nsprintf(buf,\"INSERT: %.8s.%.8s @%.32s -> %08x/%08x %f %f %f %f %f\", loc, name, ts, devstat, meastat, relative, level, ullage, rate, volume );\ndbg_message( __FILE__, __LINE__, DBG_DBI, MSG_SQL, buf );\nexec sql INSERT INTO rdr_meas ( name, loc, ts, devstat, meastat, relative, level, ullage, levelrate, volume ) VALUES ( :name, :loc, 'now', :devstat, :meastat, :relative, :level, :ullage, :rate, :volume ) ;\n/* ... 
*/\n---\nThe above produces in ECPG debug :\n...\n[2782]: ECPGexecute line 1042: QUERY: insert into rdr_meas ( name , loc , ts , devstat , meastat , relative , level , ullage , levelrate , volume ) values ( 'NR1 ' , 'Swedwood' , 'now' , 0 , 4096 , 37.388961791992 , 0.71039032936096 , 1.1896096467972 , -5.5060195336409e-06 ^A , 3.4871203899384 ) on connection radar\n[2782]: ECPGexecute line 1042: Error: ERROR: parser: parse error at or near \"^A\"\n[2782]: raising sqlcode -400 in line 1042, ''ERROR: parser: parse error at or near \"^A\"' in line 1042.'.\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Mon, 22 Apr 2002 12:41:43 -0400 (EDT)", "msg_from": "pgsql-bugs@postgresql.org", "msg_from_op": true, "msg_subject": "Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "On Monday, 22 April 2002 18:41, you wrote:\n> Edward (edx@astercity.net) reports a bug with a severity of 1\n> The lower the number the more severe it is.\n>\n> Short Description\n> ECPG: inserting float numbers\n>\n> Long Description\n> Inserting records with single precision real variables having small value\n> (range 1.0e-6 or less) frequently results in errors in ECPG translations\n> putting into resulted sql statement unexpected characters => see fragments\n> of sample code and ECPGdebug log where after value of rate variable the\n> unexpected character '^A' appears\n>\n> Sample Code\n> /* fragment of the program code */\n> exec sql begin declare section;\n> /* ... */\n> float rate; /* level change rate */\n> /* ... */\n> exec sql end declare section;\n> /* ... 
*/\n> sprintf(buf,\"INSERT: %.8s.%.8s @%.32s -> %08x/%08x %f %f %f %f %f\", loc,\n> name, ts, devstat, meastat, relative, level, ullage, rate, volume );\n> dbg_message( __FILE__, __LINE__, DBG_DBI, MSG_SQL, buf );\n> exec sql INSERT INTO rdr_meas ( name, loc, ts, devstat, meastat, relative,\n> level, ullage, levelrate, volume ) VALUES ( :name, :loc, 'now', :devstat,\n> :meastat, :relative, :level, :ullage, :rate, :volume ) ; /* ... */\n> ---\n> The above produces in ECPG debug :\n> ...\n> [2782]: ECPGexecute line 1042: QUERY: insert into rdr_meas ( name , loc ,\n> ts , devstat , meastat , relative , level , ullage , levelrate ,\n> volume ) values ( 'NR1 ' , 'Swedwood' , 'now' , 0 , 4096 ,\n> 37.388961791992 , 0.71039032936096 , 1.1896096467972 , -5.5060195336409e-06\n> ^A , 3.4871203899384 ) on connection radar [2782]: ECPGexecute line 1042:\n> Error: ERROR: parser: parse error at or near \"^A\" [2782]: raising sqlcode\n> -400 in line 1042, ''ERROR: parser: parse error at or near \"^A\"' in line\n> 1042.'.\n>\n>\n> No file was uploaded with this report\n>\n----------\nI am fighting with this bug on \"PostgreSQL 7.2 on i686-pc-linux-gnu, compiled \nby GCC 2.96\". It appears also on 7.1.x, which was reported previously \n(see buglist -> ecpg: unstable INSERT operation) on August 2001.\nThe temporary workaround I apllied here is \"if\" statement before INSERT:\nif( fabs( rate ) < 1.0e-3 ) rate = 0.0;\n\nEdward\n", "msg_date": "Wed, 1 May 2002 07:25:55 +0200", "msg_from": "Edward Pilipczuk <edx@astercity.net>", "msg_from_op": false, "msg_subject": "Re: Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "\nHas this been addressed? 
Can you supply a reproducable example?\n\n---------------------------------------------------------------------------\n\nEdward Pilipczuk wrote:\n> On Monday, 22 April 2002 18:41, you wrote:\n> > Edward (edx@astercity.net) reports a bug with a severity of 1\n> > The lower the number the more severe it is.\n> >\n> > Short Description\n> > ECPG: inserting float numbers\n> >\n> > Long Description\n> > Inserting records with single precision real variables having small value\n> > (range 1.0e-6 or less) frequently results in errors in ECPG translations\n> > putting into resulted sql statement unexpected characters => see fragments\n> > of sample code and ECPGdebug log where after value of rate variable the\n> > unexpected character '^A' appears\n> >\n> > Sample Code\n> > /* fragment of the program code */\n> > exec sql begin declare section;\n> > /* ... */\n> > float rate; /* level change rate */\n> > /* ... */\n> > exec sql end declare section;\n> > /* ... */\n> > sprintf(buf,\"INSERT: %.8s.%.8s @%.32s -> %08x/%08x %f %f %f %f %f\", loc,\n> > name, ts, devstat, meastat, relative, level, ullage, rate, volume );\n> > dbg_message( __FILE__, __LINE__, DBG_DBI, MSG_SQL, buf );\n> > exec sql INSERT INTO rdr_meas ( name, loc, ts, devstat, meastat, relative,\n> > level, ullage, levelrate, volume ) VALUES ( :name, :loc, 'now', :devstat,\n> > :meastat, :relative, :level, :ullage, :rate, :volume ) ; /* ... 
*/\n> > ---\n> > The above produces in ECPG debug :\n> > ...\n> > [2782]: ECPGexecute line 1042: QUERY: insert into rdr_meas ( name , loc ,\n> > ts , devstat , meastat , relative , level , ullage , levelrate ,\n> > volume ) values ( 'NR1 ' , 'Swedwood' , 'now' , 0 , 4096 ,\n> > 37.388961791992 , 0.71039032936096 , 1.1896096467972 , -5.5060195336409e-06\n> > ^A , 3.4871203899384 ) on connection radar [2782]: ECPGexecute line 1042:\n> > Error: ERROR: parser: parse error at or near \"^A\" [2782]: raising sqlcode\n> > -400 in line 1042, ''ERROR: parser: parse error at or near \"^A\"' in line\n> > 1042.'.\n> >\n> >\n> > No file was uploaded with this report\n> >\n> ----------\n> I am fighting with this bug on \"PostgreSQL 7.2 on i686-pc-linux-gnu, compiled \n> by GCC 2.96\". It appears also on 7.1.x, which was reported previously \n> (see buglist -> ecpg: unstable INSERT operation) on August 2001.\n> The temporary workaround I apllied here is \"if\" statement before INSERT:\n> if( fabs( rate ) < 1.0e-3 ) rate = 0.0;\n> \n> Edward\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jun 2002 15:57:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "Bruce, the attached source reproduces this on 7.2, I don't have a\nlater version at hand to test if it's been fixed:\n\n createdb floattest\n echo \"CREATE TABLE tab1(col1 FLOAT);\" | psql floattest\n ecpg insert-float.pgc\n gcc insert-float.c -lecpg -lpq\n ./a.out floattest\n\nresults in:\n\n col1: -0.000006\n *!*!* Error -400: 'ERROR: parser: parse error at or near \"a\"' in line 21.\n\nand in epcgdebug:\n\n [29189]: ECPGexecute line 21: QUERY: insert into tab1 ( col1 ) values ( -6.0000002122251e-06A ) on connection floattest\n [29189]: ECPGexecute line 21: Error: ERROR: parser: parse error at or near \"a\"\n [29189]: raising sqlcode -400 in line 21, ''ERROR: parser: parse error at or near \"a\"' in line 21.'.\n\nRegards, Lee Kindness.\n\nBruce Momjian writes:\n > Has this been addressed? 
Can you supply a reproducable example?\n > Edward Pilipczuk wrote:\n > > On Monday, 22 April 2002 18:41, you wrote:\n > > > Edward (edx@astercity.net) reports a bug with a severity of 1\n > > > ECPG: inserting float numbers\n > > > Inserting records with single precision real variables having small value\n > > > (range 1.0e-6 or less) frequently results in errors in ECPG translations\n > > > putting into resulted sql statement unexpected characters => see fragments\n > > > of sample code and ECPGdebug log where after value of rate variable the\n > > > unexpected character '^A' appears\n > > >\n > > > Sample Code\n > > > [ snip ]\n\n\n#include <stdlib.h>\n\nEXEC SQL INCLUDE sqlca;\n\nint main(int argc, char **argv)\n{\n EXEC SQL BEGIN DECLARE SECTION;\n char *db = argv[1];\n float col1;\n EXEC SQL END DECLARE SECTION;\n FILE *f;\n\n if( (f = fopen(\"ecpgdebug\", \"w\" )) != NULL )\n ECPGdebug(1, f);\n\n EXEC SQL CONNECT TO :db;\n EXEC SQL BEGIN;\n\n col1 = -6e-06;\n printf(\"col1: %f\\n\", col1);\n EXEC SQL INSERT INTO tab1(col1) VALUES (:col1);\n if( sqlca.sqlcode < 0 )\n {\n fprintf(stdout, \"*!*!* Error %ld: %s\\n\", sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc);\n EXEC SQL ABORT;\n EXEC SQL DISCONNECT;\n return( 1 );\n }\n else\n {\n EXEC SQL COMMIT;\n EXEC SQL DISCONNECT;\n return( 0 );\n }\n}", "msg_date": "Tue, 11 Jun 2002 11:51:50 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "\nOK, I have reproduced the problem on my machine:\n\t\n\t#$ ./a.out floattest\n\tcol1: -0.000006\n\t*!*!* Error -220: No such connection NULL in line 21.\n\nWow, how did that \"A\" get into the query string:\n\n insert into tab1 ( col1 ) values ( -6.0000002122251e-06A )\n\nQuite strange. 
Michael, any ideas?\n\n---------------------------------------------------------------------------\n\nLee Kindness wrote:\nContent-Description: message body text\n\n> Bruce, the attached source reproduces this on 7.2, I don't have a\n> later version at hand to test if it's been fixed:\n> \n> createdb floattest\n> echo \"CREATE TABLE tab1(col1 FLOAT);\" | psql floattest\n> ecpg insert-float.pgc\n> gcc insert-float.c -lecpg -lpq\n> ./a.out floattest\n> \n> results in:\n> \n> col1: -0.000006\n> *!*!* Error -400: 'ERROR: parser: parse error at or near \"a\"' in line 21.\n> \n> and in epcgdebug:\n> \n> [29189]: ECPGexecute line 21: QUERY: insert into tab1 ( col1 ) values ( -6.0000002122251e-06A ) on connection floattest\n> [29189]: ECPGexecute line 21: Error: ERROR: parser: parse error at or near \"a\"\n> [29189]: raising sqlcode -400 in line 21, ''ERROR: parser: parse error at or near \"a\"' in line 21.'.\n> \n> Regards, Lee Kindness.\n> \n> Bruce Momjian writes:\n> > Has this been addressed? 
Can you supply a reproducable example?\n> > Edward Pilipczuk wrote:\n> > > On Monday, 22 April 2002 18:41, you wrote:\n> > > > Edward (edx@astercity.net) reports a bug with a severity of 1\n> > > > ECPG: inserting float numbers\n> > > > Inserting records with single precision real variables having small value\n> > > > (range 1.0e-6 or less) frequently results in errors in ECPG translations\n> > > > putting into resulted sql statement unexpected characters => see fragments\n> > > > of sample code and ECPGdebug log where after value of rate variable the\n> > > > unexpected character '^A' appears\n> > > >\n> > > > Sample Code\n> > > > [ snip ]\n> \n\n> #include <stdlib.h>\n> \n> EXEC SQL INCLUDE sqlca;\n> \n> int main(int argc, char **argv)\n> {\n> EXEC SQL BEGIN DECLARE SECTION;\n> char *db = argv[1];\n> float col1;\n> EXEC SQL END DECLARE SECTION;\n> FILE *f;\n> \n> if( (f = fopen(\"ecpgdebug\", \"w\" )) != NULL )\n> ECPGdebug(1, f);\n> \n> EXEC SQL CONNECT TO :db;\n> EXEC SQL BEGIN;\n> \n> col1 = -6e-06;\n> printf(\"col1: %f\\n\", col1);\n> EXEC SQL INSERT INTO tab1(col1) VALUES (:col1);\n> if( sqlca.sqlcode < 0 )\n> {\n> fprintf(stdout, \"*!*!* Error %ld: %s\\n\", sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc);\n> EXEC SQL ABORT;\n> EXEC SQL DISCONNECT;\n> return( 1 );\n> }\n> else\n> {\n> EXEC SQL COMMIT;\n> EXEC SQL DISCONNECT;\n> return( 0 );\n> }\n> }\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 06:58:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "Bruce, after checking the libecpg source i'm fairly sure the problem\nis due to the malloc buffer that the float is being sprintf'd into\nbeing too small... 
It is always allocated 20 bytes but with a %.14g\nprintf specifier -6e-06 results in 20 characters:\n\n -6.0000000000000e-06\n\nand the NULL goes... bang! I guess the '-' wasn't factored in and 21\nbytes would be enough. Patch against current CVS (but untested):\n\nIndex: src/interfaces/ecpg/lib/execute.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/lib/execute.c,v\nretrieving revision 1.36\ndiff -r1.36 execute.c\n703c703\n< \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 20, stmt->lineno)))\n---\n> \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 21, stmt->lineno)))\n723c723\n< \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 20, stmt->lineno)))\n---\n> \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 21, stmt->lineno)))\n\nLee.\n\nBruce Momjian writes:\n > \n > OK, I have reproduced the problem on my machine:\n > \t\n > \t#$ ./a.out floattest\n > \tcol1: -0.000006\n > \t*!*!* Error -220: No such connection NULL in line 21.\n > \n > Wow, how did that \"A\" get into the query string:\n > \n > insert into tab1 ( col1 ) values ( -6.0000002122251e-06A )\n > \n > Quite strange. 
Michael, any ideas?\n > \n > Lee Kindness wrote:\n > Content-Description: message body text\n > \n > > Bruce, the attached source reproduces this on 7.2, I don't have a\n > > later version at hand to test if it's been fixed:\n > > \n > > createdb floattest\n > > echo \"CREATE TABLE tab1(col1 FLOAT);\" | psql floattest\n > > ecpg insert-float.pgc\n > > gcc insert-float.c -lecpg -lpq\n > > ./a.out floattest\n > > \n > > results in:\n > > \n > > col1: -0.000006\n > > *!*!* Error -400: 'ERROR: parser: parse error at or near \"a\"' in line 21.\n > > \n > > and in epcgdebug:\n > > \n > > [29189]: ECPGexecute line 21: QUERY: insert into tab1 ( col1 ) values ( -6.0000002122251e-06A ) on connection floattest\n > > [29189]: ECPGexecute line 21: Error: ERROR: parser: parse error at or near \"a\"\n > > [29189]: raising sqlcode -400 in line 21, ''ERROR: parser: parse error at or near \"a\"' in line 21.'.\n > > \n > > Regards, Lee Kindness.\n", "msg_date": "Tue, 11 Jun 2002 12:40:14 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "On Tue, Jun 11, 2002 at 06:58:54AM -0400, Bruce Momjian wrote:\n> Wow, how did that \"A\" get into the query string:\n\nBuffer overrun. :-(\n\nI just committed Lee's patch. Thanks a lot.\n\nBTW. I didn't even know that this bug was reported against ecpg. Somehow\nI missed it. I guess I need to rescan the bug archive.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 12 Jun 2002 14:10:10 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug #640: ECPG: inserting float numbers" }, { "msg_contents": "Michael Meskes wrote:\n> On Tue, Jun 11, 2002 at 06:58:54AM -0400, Bruce Momjian wrote:\n> > Wow, how did that \"A\" get into the query string:\n> \n> Buffer overrun. 
:-(\n> \n> I just committed Lee's patch. Thanks a lot.\n> \n> BTW. I didn't even know that this bug was reported against ecpg. Somehow\n> I missed it. I guess I need to rescan the bug archive.\n\nThanks. No problem. It is impossible to catch all the bug reports.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jun 2002 12:40:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUGS] Bug #640: ECPG: inserting float numbers" } ]
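The off-by-one that Lee Kindness identifies in the thread above is easy to reproduce in isolation. The sketch below is not ECPG code; it only borrows the two facts quoted in the thread — libecpg rendered float parameters with `%.14g` into a 20-byte allocation — and shows that a small negative float needs 21 bytes (20 characters plus the terminating NUL), which is exactly the one-byte overrun the committed patch fixes.

```c
#include <assert.h>
#include <stdio.h>

/* Render a float parameter the way the thread describes libecpg doing
 * it: widened to double, formatted with "%.14g".  Returns the number
 * of characters produced (excluding the terminating NUL). */
int rendered_length(float v)
{
    char buf[64];   /* deliberately generous scratch buffer */
    return snprintf(buf, sizeof buf, "%.14g", (double) v);
}

/* On an IEEE-754 platform, -6e-06 stored in a float widens to roughly
 * -6.0000002122251e-06 (the value visible in the ECPGdebug log above).
 * "%.14g" renders that as sign + 14 significant digits + "e-06":
 * 20 characters, so the NUL lands one byte past a 20-byte buffer. */
```

The `^A` in the failing query string is whatever byte happened to follow the undersized allocation; values whose 14-digit rendering is shorter (e.g. positive ones, or Edward's workaround of zeroing tiny rates) fit and never trip the bug.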
[ { "msg_contents": "I've been reading past threads, studying backend code, reviewing Alex \nPilosov's \"cursor foo\" patch (submitted last August/September, but never \napplied), and conversing off list with a few people regarding a possible \nimplementation of Set Returning Functions (or SRF for short). Below is \nmy proposal for how this might work. After discussion, and if there is \nno objection, I would like to work on this implementation with the hope \nthat it could be in place for 7.3.\n\n\nProposal for set returning functions (SRF):\n-----------------------------------------------------\n\nThe problem:\n-----------------------------------------------------\nCurrently the ability to return multiple row, multiple column result \nsets from a function is quite limited. In fact, it is not possible to \nreturn multiple columns directly. It is possible to work around this \nlimitation, but only in a clumsy way (see contrib/dblink for an \nexample). Alternatively refcursors may be used, but they have their own \nset of issues, not the least of which is they cannot be used in view \ndefinitions or exist outside of explicit transactions.\n\n\nThe feature:\n-----------------------------------------------------\nThe desired feature is the ability to return multiple row, multiple \ncolumn result sets from a function, or set returning functions (SRF) for \nshort.\n\n\nDo we want this feature?\n-----------------------------------------------------\nBased on the many posts on this topic, I think the answer to this is a \nresounding yes.\n\n\nHow do we want the feature to behave?\n-----------------------------------------------------\nA SRF should behave similarly to any other table_ref (RangeTblEntry), \ni.e. as a tuple source in a FROM clause. Currently there are three \nprimary kinds of RangeTblEntry: RTE_RELATION (ordinary relation), \nRTE_SUBQUERY (subquery in FROM), and RTE_JOIN (join). 
SRF would join \nthis list and behave in much the same manner.\n\n\nHow do we want the feature implemented? (my proposal)\n-----------------------------------------------------\n1. Add a new table_ref node type:\n - Current nodes are RangeVar, RangeSubselect, or JoinExpr\n - Add new RangePortal node as a possible table_ref. The RangePortal\n node will be extented from the current Portal functionality.\n\n2. Add support for three modes of operation to RangePortal:\n a. Repeated calls -- this is the existing API for SRF, but\n implemented as a tuple source instead of as an expression.\n b. Materialized results -- use a TupleStore to materialize the\n result set.\n c. Return query -- use current Portal functionality, fetch entire\n result set.\n\n3. Add support to allow the RangePortal to materialize modes 1 and 3, if \nneeded for a re-read.\n\n4. Add a WITH keyword to CREATE FUNCTION, allowing SRF mode to be \nspecified. This would default to mode a) for backward compatibility.\n\n5. Ignore the current code which allows functions to return multiple \nresults as expressions; we can leave it there, but deprecate it with the \nintention of eventual removal.\n\n-----------------------------------------------------\nThoughts/comments would be much appreciated.\n\nThanks,\n\nJoe\n\n", "msg_date": "Mon, 22 Apr 2002 10:16:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "[RFC] Set Returning Functions" } ]
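The three RangePortal modes in the proposal above differ mainly in when rows are computed and how cheap a re-read is. As a deliberately simplified illustration — plain C, not the PostgreSQL executor API, with made-up row values — mode (a) keeps per-call state while mode (b) materializes everything up front:

```c
#include <assert.h>
#include <stdlib.h>

/* Mode (a), "repeated calls": the function is invoked once per row and
 * carries its position in private state; nothing is materialized, so a
 * re-read means running the function again from the start. */
typedef struct { int next, limit; } srf_state;

int srf_next(srf_state *s, int *row_out)
{
    if (s->next >= s->limit)
        return 0;                     /* no more rows */
    *row_out = s->next * s->next;     /* stand-in for computing one row */
    s->next++;
    return 1;
}

/* Mode (b), "materialized results": run to completion on the first
 * call and park every row in a store (a TupleStore in the proposal);
 * later rescans just walk the store.  Caller frees the result. */
int *srf_materialize(int limit)
{
    int *store = malloc(limit * sizeof *store);
    for (int i = 0; i < limit; i++)
        store[i] = i * i;
    return store;
}
```

Mode (c), fetching a query's whole result through a Portal, looks like (b) to the caller once the rows are in hand — which is why point 3 of the proposal lets modes (a) and (c) be materialized on demand when a re-read is needed.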
[ { "msg_contents": "We are about to need to fix a fair number of places in client code\n(eg, psql and pg_dump) that presently do things like\n\nSELECT * FROM pg_attribute\nWHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'foo');\n\nThis does not work reliably anymore because there could be multiple\nrelations named 'foo' in different namespaces. The sub-select to\nget the relation OID will fail because it'll return multiple results.\n\nThe brute-force answer is\n\nSELECT * FROM pg_attribute\nWHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'foo'\n AND relnamespace = (SELECT oid FROM pg_namespace\n WHERE nspname = 'bar'));\n\nBut aside from being really ugly, this requires that the client code\nknow exactly which namespace contains the relation it's after. If\nthe client is relying on namespace search then it may not know that;\nin fact, the client code very possibly isn't even aware of the exact\nnamespace search path it's using. I am planning to introduce an\ninformational function CURRENT_SCHEMAS() (or some such name) that\nreturns the current effective search path, probably as a NAME[] array.\nBut it looks really, really messy to write an SQL query that makes\nuse of such a function to look up the first occurrence of 'foo' in\nthe search path. 
We need to encapsulate the lookup procedure somehow\nso that we don't have lots of clients reinventing this wheel.\n\nWe already have some functions that accept a text string and do a\nsuitable lookup of a relation; an example is nextval(), for which\nyou can presently write\n\n\tnextval('foo')\t\t--- searches namespace path for foo\n\tnextval('foo.bar')\t--- looks only in namespace foo\n\tnextval('\"Foo\".bar')\t--- quoting works for mixed-case names\n\nSeems like what we want to do is make the lookup part of this available\nseparately, as a function that takes such a string and returns an OID.\nWe'd need such functions for each of the namespace-ified object kinds:\nrelations, datatypes, functions, and operators.\n\nA variant of the idea of inventing functions is to extend the existing\ndatatype 'regproc' to do this, and invent also 'regclass', 'regtype',\n'regoperator' datatypes to do the lookups for the other object kinds.\nI proposed this in a different context last year,\n\thttp://archives.postgresql.org/pgsql-hackers/2001-08/msg00589.php\nbut it seemed too late to do anything with the idea for 7.2.\n\nIf we went with the datatype approach then we'd be able to write\nqueries like\n\nSELECT * FROM pg_attribute WHERE attrelid = 'foo'::regclass;\n\nor\n\nSELECT * FROM pg_attribute WHERE attrelid = 'foo.bar'::regclass;\n\nor for that matter you could do\n\nSELECT * FROM pg_attribute WHERE attrelid = regclass('foo');\n\nwhich'd be syntactically indistinguishable from using a function.\n\nThe datatype approach seems a little bit odder at first glance, but it\nhas some interesting possibilities with respect to implicit casting\n(see above-referenced thread). So I'm inclined to go that route unless\nsomeone's got an objection.\n\nWith a datatype, we also have outbound conversion to think of: so there\nmust be a function that takes an OID and produces a string. 
What I am\ninclined to do on that side is emit an unqualified name if the OID\nrefers to a relation/type/etc that would be found first in the current\nnamespace search path. Otherwise, a qualified name (foo.bar) would be\nemitted. This will have usefulness for applications like pg_dump, which\nwill have exactly this requirement (per discussion a few days ago that\npg_dump should not qualify names unnecessarily).\n\nOne question is what to do with invalid input. For example, if table\nfoo doesn't exist then what should 'foo'::regclass do? The existing\nregproc datatype throws an error, but I wonder whether it wouldn't be\nmore useful to return NULL. Any thoughts on that?\n\nAlso, for functions and operators the name alone is not sufficient to\nuniquely identify the object. Type regproc currently throws an error\nif asked to convert a nonunique function name; that severely limits its\nusefulness. I'm toying with allowing datatypes in the input string,\neg\n\t'sum(bigint)'::regproc\nbut I wonder if this will create compatibility problems. In particular,\nshould the regproc and regoperator output converters include datatype\nindicators in the output string? (Always, never, only if not unique?)\nDoing so would be a non-backwards-compatible change for regproc.\nWe might avoid that complaint by leaving regproc as-is and instead\ninventing a parallel datatype (say regfunction) that supports datatype\nindications. 
But I'm not sure whether regproc is used enough to make\nthis an important concern.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Apr 2002 16:23:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Simplifying OID lookups in the presence of namespaces" }, { "msg_contents": "Tom Lane wrote:\n> A variant of the idea of inventing functions is to extend the existing\n> datatype 'regproc' to do this, and invent also 'regclass', 'regtype',\n> 'regoperator' datatypes to do the lookups for the other object kinds.\n> I proposed this in a different context last year,\n> \thttp://archives.postgresql.org/pgsql-hackers/2001-08/msg00589.php\n> but it seemed too late to do anything with the idea for 7.2.\n> \n\nInteresting thread. It seems like the same basic facility could also \nsupport an enum datatype that people migrating from mysql are always \nlooking for.\n\n\n\n> One question is what to do with invalid input. For example, if table\n> foo doesn't exist then what should 'foo'::regclass do? The existing\n> regproc datatype throws an error, but I wonder whether it wouldn't be\n> more useful to return NULL. Any thoughts on that?\n\nNULL makes sense.\n\n> \n> Also, for functions and operators the name alone is not sufficient to\n> uniquely identify the object. Type regproc currently throws an error\n> if asked to convert a nonunique function name; that severely limits its\n> usefulness. I'm toying with allowing datatypes in the input string,\n> eg\n> \t'sum(bigint)'::regproc\n> but I wonder if this will create compatibility problems. In particular,\n> should the regproc and regoperator output converters include datatype\n> indicators in the output string? (Always, never, only if not unique?)\n\nI'd be inclined to include datatype always. 
If you don't, how can you \nuse this for pg_dump, etc?\n\n\nJoe\n\n", "msg_date": "Mon, 22 Apr 2002 20:35:03 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying OID lookups in the presence of namespaces" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> Also, for functions and operators the name alone is not sufficient to\n>> uniquely identify the object. Type regproc currently throws an error\n>> if asked to convert a nonunique function name; that severely limits its\n>> usefulness. I'm toying with allowing datatypes in the input string,\n>> eg\n>> 'sum(bigint)'::regproc\n>> but I wonder if this will create compatibility problems. In particular,\n>> should the regproc and regoperator output converters include datatype\n>> indicators in the output string? (Always, never, only if not unique?)\n\n> I'd be inclined to include datatype always. If you don't, how can you \n> use this for pg_dump, etc?\n\npg_dump would probably actually prefer not having type info in the\noutput string; it'll just have to strip it off in most places. But\nI don't have a good feeling for the needs of other applications,\nso I was asking what other people thought.\n\nIf we supported both ways via two datatypes, we'd have all the bases\ncovered; I'm just wondering if it's worth the trouble.\n\n\t\t\tregards, tom lane\n\nPS: interesting thought about enum ...\n", "msg_date": "Mon, 22 Apr 2002 23:46:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Simplifying OID lookups in the presence of namespaces " } ]
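The lookup being encapsulated in the thread above — take a possibly schema-qualified string, search only the named schema if one is given, otherwise take the first hit along the search path — can be sketched without any catalog machinery. Everything below is illustrative: the probe callback stands in for a catalog lookup, and quoting/mixed-case handling is omitted.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Catalog probe stand-in: does `name` exist in `schema`? */
typedef int (*probe_fn)(const char *schema, const char *name);

/* Resolve "name" or "schema.name" against a search path; returns the
 * schema the object is found in, or NULL if there is no match. */
const char *resolve_name(const char *input,
                         const char *path[], int npath,
                         probe_fn exists)
{
    static char schema[64];
    const char *dot = strchr(input, '.');

    if (dot != NULL) {
        /* Qualified: look only in the named schema. */
        snprintf(schema, sizeof schema, "%.*s", (int) (dot - input), input);
        return exists(schema, dot + 1) ? schema : NULL;
    }
    /* Unqualified: first occurrence along the search path wins. */
    for (int i = 0; i < npath; i++)
        if (exists(path[i], input))
            return path[i];
    return NULL;
}

/* Hypothetical catalog for the demo: table "bar" lives in schema
 * "foo", table "t" lives in "public". */
int demo_probe(const char *schema, const char *name)
{
    return (!strcmp(schema, "foo")    && !strcmp(name, "bar"))
        || (!strcmp(schema, "public") && !strcmp(name, "t"));
}
```

The output-side rule discussed for pg_dump is the same procedure run in reverse: emit the bare name when resolving it would land in the object's own schema, and a qualified `schema.name` otherwise.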
[ { "msg_contents": "From here:\nhttp://osdb.sourceforge.net/\nWe find this quote:\n\"For you long-suffering OSDB PostgreSQL users, we offer \n\n--postgresql=no_hash_index \n\nto work around the hash index problems of OSDB with PostgreSQL V7.1 and\n7.2. As always, let us know of any problems. May the source be with\nyou!\"\n\nDoes anyone know what the above is all about?\n", "msg_date": "Mon, 22 Apr 2002 14:15:37 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "What is wrong with hashed index usage?" }, { "msg_contents": "On Mon, 22 Apr 2002 14:15:37 -0700\n\"Dann Corbit\" <DCorbit@connx.com> wrote:\n> From here:\n> http://osdb.sourceforge.net/\n> We find this quote:\n> \"For you long-suffering OSDB PostgreSQL users, we offer \n> \n> --postgresql=no_hash_index \n> \n> to work around the hash index problems of OSDB with PostgreSQL V7.1 and\n> 7.2. As always, let us know of any problems. May the source be with\n> you!\"\n> \n> Does anyone know what the above is all about?\n\nYes -- search the list archives, or check the PostgreSQL docs. This problem\nhas been brought up several times: hash indexes deadlock under concurrent\nload. 
A run of pgbench with a reasonably high concurrency level (10 or 15)\nproduces the problem consistently.\n\nPreviously, I had volunteered to fix this, but\n\n (a) I'm busy with the PREPARE/EXECUTE stuff at the moment.\n\n (b) I'm not sure it's worth the investment of time: AFAIK,\n hash indexes don't have many advantages over btrees for\n scalar data.\n\nOn the other hand, if someone steps forward with some data on a\nspecific advantage that hash indexes have over btrees, I don't\nexpect that the concurrency problems should be too difficult to\nsolve.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 22 Apr 2002 17:59:16 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" } ]
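The trade-off Neil describes — an equality probe is the only thing a hash can answer, while an ordered structure answers both equality and ranges — is easy to see with toy stand-ins. Nothing here is PostgreSQL's access-method code; a small linear-probing table and a binary search over a sorted array merely play the roles of the two index types:

```c
#include <assert.h>
#include <string.h>

#define NBUCKET 16
static int bucket_key[NBUCKET];
static int bucket_used[NBUCKET];

/* Hash-index stand-in: O(1) expected probe, but bucket order scrambles
 * key order, so it cannot help with range scans or sorted output. */
void hash_insert(int key)
{
    unsigned b = (unsigned) key * 2654435761u % NBUCKET;
    while (bucket_used[b])
        b = (b + 1) % NBUCKET;          /* linear probing */
    bucket_key[b] = key;
    bucket_used[b] = 1;
}

int hash_probe(int key)
{
    unsigned b = (unsigned) key * 2654435761u % NBUCKET;
    while (bucket_used[b]) {
        if (bucket_key[b] == key)
            return 1;
        b = (b + 1) % NBUCKET;
    }
    return 0;                           /* empty bucket: key is absent */
}

/* Btree stand-in: binary search over sorted keys costs O(log n) for
 * the same equality probe, and the ordering also supports ranges. */
int sorted_probe(const int *keys, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (keys[mid] == key) return 1;
        if (keys[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return 0;
}
```

For table-sized n the log factor amounts to a handful of page descents either way, which is consistent with the small measured gap reported later in these threads — and only the ordered variant additionally serves `<`, `>`, and ORDER BY.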
[ { "msg_contents": " \n \n Thanks Jean.\n \n I will send a message to the ODBC list.\n At least I didn't create the project in sourceforge\n yet. I will try to get a cvs account at\n Postgresql.org\n as you said. :)\n \n Thanks very much!!!\n\nFrancisco Jr.\n", "msg_date": "Mon, 22 Apr 2002 18:35:08 -0300 (ART)", "msg_from": "\"=?iso-8859-1?q?Francisco=20Jr.?=\" <fxjrlists@yahoo.com.br>", "msg_from_op": true, "msg_subject": "Re: Implement a .NET Data" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Neil Conway [mailto:nconway@klamath.dyndns.org]\n> Sent: Monday, April 22, 2002 2:59 PM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgreSQL.org\n> Subject: Re: [HACKERS] What is wrong with hashed index usage?\n> \n> \n> On Mon, 22 Apr 2002 14:15:37 -0700\n> \"Dann Corbit\" <DCorbit@connx.com> wrote:\n> > From here:\n> > http://osdb.sourceforge.net/\n> > We find this quote:\n> > \"For you long-suffering OSDB PostgreSQL users, we offer \n> > \n> > --postgresql=no_hash_index \n> > \n> > to work around the hash index problems of OSDB with \n> PostgreSQL V7.1 and\n> > 7.2. As always, let us know of any problems. May the source be with\n> > you!\"\n> > \n> > Does anyone know what the above is all about?\n> \n> Yes -- search the list archives, or check the PostgreSQL \n> docs. This problem\n> has been brought up several times: hash indexes deadlock \n> under concurrent\n> load. A run of pgbench with a reasonably high concurrency \n> level (10 or 15)\n> produces the problem consistently.\n> \n> Previously, I had volunteered to fix this, but\n> \n> (a) I'm busy with the PREPARE/EXECUTE stuff at the moment.\n> \n> (b) I'm not sure it's worth the investment of time: AFAIK,\n> hash indexes don't have many advantages over btrees for\n> scalar data.\n> \n> On the other hand, if someone steps forward with some data on a\n> specific advantage that hash indexes have over btrees, I don't\n> expect that the concurrency problems should be too difficult to\n> solve.\n\nHere is where a hashed index shines:\nTo find a single item using a key, hashed indexes are enormously faster\nthan a btree.\n\nThat is typically speaking. I have not done performance benchmarks with\nPostgreSQL.\n\nIn general, hashed indexes are much to be preferred when you are doing\nfrequent keyed lookups for single items. 
Hashed indexes are (of course)\ncompletely useless for an ordered scan or for wide ranges of continuous\ndata.\n", "msg_date": "Mon, 22 Apr 2002 15:04:22 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "On Mon, 22 Apr 2002 15:04:22 -0700\n\"Dann Corbit\" <DCorbit@connx.com> wrote:\n> Here is where a hashed index shines:\n> To find a single item using a key, hashed indexes are enormously faster\n> than a btree.\n> \n> That is typically speaking. I have not done performance benchmarks with\n> PostgreSQL.\n\nYes -- but in the benchmarks I've done, the performance different\nis not more than 5% (for tables with ~ 600,000 rows, doing lookups\nbased on a PK with \"=\"). That said, my benchmarks could very well\nbe flawed, I didn't spend a lot of time on it. If you'd like to\ngenerate some interest in improving hash indexes, I'd like to see\nsome empirical data supporting your performance claims.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 22 Apr 2002 18:13:48 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "The benchmarks will depend mostly on the depth of the Btree. Hashes \nwill be markedly faster only in the case(s) where descending into the \ntree to produce a matching leaf node would take longer than walking to \nthe appropriate item in a hash.\n\nMost of the time until the btree gets deep they are nearly equivalent. 
\n When the tree ends up becoming many levels deep it can take longer to \nwalk than the hash.\n\nNeil Conway wrote:\n\n>On Mon, 22 Apr 2002 15:04:22 -0700\n>\"Dann Corbit\" <DCorbit@connx.com> wrote:\n>\n>>Here is where a hashed index shines:\n>>To find a single item using a key, hashed indexes are enormously faster\n>>than a btree.\n>>\n>>That is typically speaking. I have not done performance benchmarks with\n>>PostgreSQL.\n>>\n>\n>Yes -- but in the benchmarks I've done, the performance different\n>is not more than 5% (for tables with ~ 600,000 rows, doing lookups\n>based on a PK with \"=\"). That said, my benchmarks could very well\n>be flawed, I didn't spend a lot of time on it. If you'd like to\n>generate some interest in improving hash indexes, I'd like to see\n>some empirical data supporting your performance claims.\n>\n>Cheers,\n>\n>Neil\n>\n\n\n", "msg_date": "Mon, 22 Apr 2002 16:47:59 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Michael Loftis wrote:\n> The benchmarks will depend mostly on the depth of the Btree. Hashes \n> will be markedly faster only in the case(s) where descending into the \n> tree to produce a matching leaf node would take longer than walking to \n> the appropriate item in a hash.\n> \n> Most of the time until the btree gets deep they are nearly equivalent. \n> When the tree ends up becoming many levels deep it can take longer to \n> walk than the hash.\n\nAnd what causes the btree to get deep? Is it just the number of rows in\nthe index?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 19:54:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> [ on hash vs btree indexing ]\n> Most of the time until the btree gets deep they are nearly equivalent. \n> When the tree ends up becoming many levels deep it can take longer to \n> walk than the hash.\n\nMaybe. I've just completed a simple benchmark of btree vs hash indexes\nas implemented in Postgres, and I can't see any advantage.\n\nUsing current sources on Red Hat Linux 7.2, I built a simple test table\ncontaining one integer column, and filled it with 16 million random\nintegers generated by int4(1000000000 * random()). With a btree index,\n\"explain analyze select * from foo where f1 = 314888455\" (matching a\nsingle row of the table) took about 22 msec on first try (nothing in\ncache), and subsequent repetitions about 0.11 msec. With a hash index,\nthe first try took about 28 msec and repetitions about 0.15 msec.\nMoreover, the hash index was a whole lot bigger: main table size 674\nmeg, btree 301 meg, hash 574 meg, which possibly offers part of the\nexplanation for the greater access time.\n\nI would have tried a larger test case, but this one already taxed\nmy patience: it took 36 hours to build the hash index (vs 19 minutes\nfor the btree index). It looks like hash index build has an O(N^2)\nperformance curve --- the thing had 100 meg of hash index built within\nan hour of starting, but got slower and slower after that.\n\nIn short, lack of support for concurrent operations is hardly the\nworst problem with Postgres' hash indexes. If you wanna fix 'em,\nbe my guest ... 
but I think I shall spend my time elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 16:05:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage? " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Michael Loftis <mloftis@wgops.com> writes:\n>\n>>[ on hash vs btree indexing ]\n>>Most of the time until the btree gets deep they are nearly equivalent. \n>>When the tree ends up becoming many levels deep it can take longer to \n>>walk than the hash.\n>>\n>\n>Maybe. I've just completed a simple benchmark of btree vs hash indexes\n>as implemented in Postgres, and I can't see any advantage.\n>\n>Using current sources on Red Hat Linux 7.2, I built a simple test table\n>containing one integer column, and filled it with 16 million random\n>integers generated by int4(1000000000 * random()). With a btree index,\n>\"explain analyze select * from foo where f1 = 314888455\" (matching a\n>single row of the table) took about 22 msec on first try (nothing in\n>cache), and subsequent repetitions about 0.11 msec. With a hash index,\n>the first try took about 28 msec and repetitions about 0.15 msec.\n>Moreover, the hash index was a whole lot bigger: main table size 674\n>meg, btree 301 meg, hash 574 meg, which possibly offers part of the\n>explanation for the greater access time.\n>\n>I would have tried a larger test case, but this one already taxed\n>my patience: it took 36 hours to build the hash index (vs 19 minutes\n>for the btree index). It looks like hash index build has an O(N^2)\n>performance curve --- the thing had 100 meg of hash index built within\n>an hour of starting, but got slower and slower after that.\n>\n>In short, lack of support for concurrent operations is hardly the\n>worst problem with Postgres' hash indexes. If you wanna fix 'em,\n>be my guest ... but I think I shall spend my time elsewhere.\n>\nI said can, not will.
The particular btree implementation dictates what \nsorts of operations become bogged down. I do agree that in pretty much \nevery case, a well implemented btree will be better than a hash though. \n I don't know about PG's implementation, but since I assume you all \ninherited at least part of it from the Berkeley boys you should be in very \nsolid form.\n\n>\n>\t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Thu, 25 Apr 2002 13:19:33 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "\nNice report. I think we should start thinking of hiding the hash option\nfrom users, or warn them more forcefully, rather than hold it out as a\npossible option for them.\n\nPeople think hash is best for equals-only queries, and btree for others,\nand we can now see this clearly isn't the case.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Michael Loftis <mloftis@wgops.com> writes:\n> > [ on hash vs btree indexing ]\n> > Most of the time until the btree gets deep they are nearly equivalent. \n> > When the tree ends up becoming many levels deep it can take longer to \n> > walk than the hash.\n> \n> Maybe. I've just completed a simple benchmark of btree vs hash indexes\n> as implemented in Postgres, and I can't see any advantage.\n> \n> Using current sources on Red Hat Linux 7.2, I built a simple test table\n> containing one integer column, and filled it with 16 million random\n> integers generated by int4(1000000000 * random()). With a btree index,\n> \"explain analyze select * from foo where f1 = 314888455\" (matching a\n> single row of the table) took about 22 msec on first try (nothing in\n> cache), and subsequent repetitions about 0.11 msec.
With a hash index,\n> the first try took about 28 msec and repetitions about 0.15 msec.\n> Moreover, the hash index was a whole lot bigger: main table size 674\n> meg, btree 301 meg, hash 574 meg, which possibly offers part of the\n> explanation for the greater access time.\n> \n> I would have tried a larger test case, but this one already taxed\n> my patience: it took 36 hours to build the hash index (vs 19 minutes\n> for the btree index). It looks like hash index build has an O(N^2)\n> performance curve --- the thing had 100 meg of hash index built within\n> an hour of starting, but got slower and slower after that.\n> \n> In short, lack of support for concurrent operations is hardly the\n> worst problem with Postgres' hash indexes. If you wanna fix 'em,\n> be my guest ... but I think I shall spend my time elsewhere.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 16:38:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "On Thu, 25 Apr 2002 16:38:00 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> \n> Nice report. I think we should start thinking of hiding the hash option\n> from users, or warn them more forcefully, rather than hold it out as a\n> possible option for them.\n\nWhy not do something Peter E. 
suggested earlier: if the functionality of\nhash indexes is a subset of that offered by btrees, it might be good to\nremove the hash index code and treat USING 'hash' as an alias for\nUSING 'btree'?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Apr 2002 17:04:44 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Michael Loftis <mloftis@wgops.com> writes:\n> I don't know about PG's implementation, but since I assume you all \n> inherited at least part of it from the Berkeley boys you should be in very \n> solid form.\n\nOne would have thought so, wouldn't one? AFAIK the hash index code is\nlock-stock-and-barrel straight from Berkeley; we've not touched it\nexcept for minor tweaking (portability issues and such).\n\nI spent a little time reading the code whilst I was waiting for the hash\nindex build to complete, and was kind of wondering why it bothers to\nmaintain bitmaps of free space. Seems like it could just keep all the\nfree pages chained together in a list, for zero overhead cost, and skip\nthe bitmaps. It locks the metapage anyway when allocating or freeing\na page, so keeping the freelist head pointer there doesn't seem like it\nwould have any performance penalty...\n\n<<whacks self on head>> NO <<whack>> I am not getting involved with the\nhash index code. I don't think it's worth our trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 17:14:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?
" }, { "msg_contents": "The idea behind hte bitmap (correct me if I'm wrong) is that when larger \nallocationsa re asked for they can be quickly found and there is no need \nto maintain the coalescing of smaller adjacent blocks into larger ones.\n\nI don't know if pg does this or not, but thats the only sane reason I \ncan come up with.\n\n*quietly installs an rm -rf trigger if tom does any I/O on the has files \noutside of the compiler* This is for your own safety Tom... Well that \nand our amusement.... :)\n\nTom Lane wrote:\n\n>Michael Loftis <mloftis@wgops.com> writes:\n>\n>> I don't know about PGs implementation but since I assume oyu all \n>>inhereted atleast part of it from the berkely boys you should be in very \n>>solid form.\n>>\n>\n>One would have thought so, wouldn't one? AFAIK the hash index code is\n>lock-stock-and-barrel straight from Berkeley; we've not touched it\n>except for minor tweaking (portability issues and such).\n>\n>I spent a little time reading the code whilst I was waiting for the hash\n>index build to complete, and was kind of wondering why it bothers to\n>maintain bitmaps of free space. Seems like it could just keep all the\n>free pages chained together in a list, for zero overhead cost, and skip\n>the bitmaps. It locks the metapage anyway when allocating or freeing\n>a page, so keeping the freelist head pointer there doesn't seem like it\n>would have any performance penalty...\n>\n><<whacks self on head>> NO <<whack>> I am not getting involved with the\n>hash index code. I don't think it's worth our trouble.\n>\n>\t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Thu, 25 Apr 2002 14:33:57 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Neil Conway wrote:\n> On Thu, 25 Apr 2002 16:38:00 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > \n> > Nice report. 
I think we should start thinking of hiding the hash option\n> > from users, or warn them more forcefully, rather than hold it out as a\n> > possible option for them.\n> \n> Why not do something Peter E. suggested earlier: if the functionality of\n> hash indexes is a subset of that offered by btrees, it might be good to\n> remove the hash index code and treat USING 'hash' as an alias for\n> USING 'btree'?\n\nI hate to do that because it makes people think something special is\nhappening for hash, but it isn't. We could throw an elog(NOTICE)\nstating that hash is not recommended and btree is faster, or something\nlike that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 22:25:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I hate to do that because it makes people think something special is\n> happening for hash, but it isn't. We could throw an elog(NOTICE)\n> stating that hash is not recommended and btree is faster, or something\n> like that.\n\nI think the only action called for is some improvement in the\ndocumentation. Right now the docs are not honest about the state\nof any of the non-btree index methods. Ain't none of 'em ready\nfor prime time IMHO. GIST is the only one that's getting any\ndevelopment attention --- and probably the only one that deserves\nit, given limited resources. 
Hash offers no compelling advantage\nover btree AFAICS, and rtree is likewise dominated by GIST (or would\nbe, if we shipped rtree-equivalent GIST opclasses in the standard\ndistribution).\n\nI do not like \"throw an elog\" as a substitute for documentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 22:32:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I hate to do that because it makes people think something special is\n> > happening for hash, but it isn't. We could throw an elog(NOTICE)\n> > stating that hash is not recommended and btree is faster, or something\n> > like that.\n> \n> I think the only action called for is some improvement in the\n> documentation. Right now the docs are not honest about the state\n> of any of the non-btree index methods. Ain't none of 'em ready\n> for prime time IMHO. GIST is the only one that's getting any\n> development attention --- and probably the only one that deserves\n> it, given limited resources. Hash offers no compelling advantage\n> over btree AFAICS, and rtree is likewise dominated by GIST (or would\n> be, if we shipped rtree-equivalent GIST opclasses in the standard\n> distribution).\n> \n> I do not like \"throw an elog\" as a substitute for documentation.\n\nOK, documentation changes for hash attached. Do we need to also throw\na elog(WARNING) about its use? I don't think everyone is going to see\nthese documentation changes, and I hate to add it to the FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/indices.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/indices.sgml,v\nretrieving revision 1.31\ndiff -c -r1.31 indices.sgml\n*** doc/src/sgml/indices.sgml\t7 Jan 2002 02:29:12 -0000\t1.31\n--- doc/src/sgml/indices.sgml\t21 Jun 2002 03:13:47 -0000\n***************\n*** 181,192 ****\n </synopsis>\n <note>\n <para>\n! Because of the limited utility of hash indexes, a B-tree index\n! should generally be preferred over a hash index. We do not have\n! sufficient evidence that hash indexes are actually faster than\n! B-trees even for <literal>=</literal> comparisons. Moreover,\n! hash indexes require coarser locks; see <xref\n! linkend=\"locking-indexes\">.\n </para>\n </note> \n </para>\n--- 181,189 ----\n </synopsis>\n <note>\n <para>\n! Testing has shown that hash indexes are slower than btree indexes,\n! and the size and build time for hash indexes is much worse. For\n! these reasons, hash index use is discouraged.\n </para>\n </note> \n </para>\nIndex: doc/src/sgml/xindex.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/xindex.sgml,v\nretrieving revision 1.25\ndiff -c -r1.25 xindex.sgml\n*** doc/src/sgml/xindex.sgml\t29 May 2002 17:36:40 -0000\t1.25\n--- doc/src/sgml/xindex.sgml\t21 Jun 2002 03:13:48 -0000\n***************\n*** 11,19 ****\n \n <para>\n The procedures described thus far let you define new types, new\n! functions, and new operators. However, we cannot yet define a secondary\n! index (such as a B-tree, R-tree, or\n! hash access method) over a new type or its operators.\n </para>\n \n <para>\n--- 11,19 ----\n \n <para>\n The procedures described thus far let you define new types, new\n! functions, and new operators. However, we cannot yet define a\n! secondary index (such as a B-tree, R-tree, or hash access method)\n! 
over a new type or its operators.\n </para>\n \n <para>\nIndex: doc/src/sgml/ref/create_index.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_index.sgml,v\nretrieving revision 1.31\ndiff -c -r1.31 create_index.sgml\n*** doc/src/sgml/ref/create_index.sgml\t18 May 2002 15:44:47 -0000\t1.31\n--- doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 03:13:48 -0000\n***************\n*** 329,334 ****\n--- 329,339 ----\n an indexed attribute is involved in a comparison using\n the <literal>=</literal> operator.\n </para>\n+ <para>\n+ Testing has shown that hash indexes are slower than btree indexes,\n+ and the size and build time for hash indexes is much worse. For\n+ these reasons, hash index use is discouraged.\n+ </para>\n \n <para>\n Currently, only the B-tree and gist access methods support multicolumn", "msg_date": "Thu, 20 Jun 2002 23:25:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "We have documented current GiST interface but in russian.\nhttp://www.sai.msu.su/~megera/postgres/gist/doc/gist-inteface-r.shtml\nWe have no time to translate it to english :-)\nI'd appreciate if somebody could help us in documentation -\n\n\tOleg\nOn Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I hate to do that because it makes people think something special is\n> > > happening for hash, but it isn't. We could throw an elog(NOTICE)\n> > > stating that hash is not recommended and btree is faster, or something\n> > > like that.\n> >\n> > I think the only action called for is some improvement in the\n> > documentation. Right now the docs are not honest about the state\n> > of any of the non-btree index methods. Ain't none of 'em ready\n> > for prime time IMHO. 
GIST is the only one that's getting any\n> > development attention --- and probably the only one that deserves\n> > it, given limited resources. Hash offers no compelling advantage\n> > over btree AFAICS, and rtree is likewise dominated by GIST (or would\n> > be, if we shipped rtree-equivalent GIST opclasses in the standard\n> > distribution).\n> >\n> > I do not like \"throw an elog\" as a substitute for documentation.\n>\n> OK, documentation changes for hash attached. Do we need to also throw\n> a elog(WARNING) about its use? I don't think everyone is going to see\n> these documentation changes, and I hate to add it to the FAQ.\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 21 Jun 2002 08:45:46 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> <para>\n> ! Because of the limited utility of hash indexes, a B-tree index\n> ! should generally be preferred over a hash index. We do not have\n> ! sufficient evidence that hash indexes are actually faster than\n> ! B-trees even for <literal>=</literal> comparisons. Moreover,\n> ! hash indexes require coarser locks; see <xref\n> ! linkend=\"locking-indexes\">.\n> </para>\n> </note> \n> </para>\n> --- 181,189 ----\n> </synopsis>\n> <note>\n> <para>\n> ! Testing has shown that hash indexes are slower than btree indexes,\n> ! and the size and build time for hash indexes is much worse. For\n> ! these reasons, hash index use is discouraged.\n\nThis change strikes me as a step backwards. 
The existing wording tells\nthe truth; the proposed revision removes the facts in favor of a blanket\nassertion that is demonstrably false.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 09:25:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > <para>\n> > ! Because of the limited utility of hash indexes, a B-tree index\n> > ! should generally be preferred over a hash index. We do not have\n> > ! sufficient evidence that hash indexes are actually faster than\n> > ! B-trees even for <literal>=</literal> comparisons. Moreover,\n> > ! hash indexes require coarser locks; see <xref\n> > ! linkend=\"locking-indexes\">.\n> > </para>\n> > </note> \n> > </para>\n> > --- 181,189 ----\n> > </synopsis>\n> > <note>\n> > <para>\n> > ! Testing has shown that hash indexes are slower than btree indexes,\n> > ! and the size and build time for hash indexes is much worse. For\n> > ! these reasons, hash index use is discouraged.\n> \n> This change strikes me as a step backwards. The existing wording tells\n> the truth; the proposed revision removes the facts in favor of a blanket\n> assertion that is demonstrably false.\n\nOK, which part of is \"demonstrably false\"? I think the old \"should\ngenerally be preferred\" is too vague. No one has come up with a case\nwhere hash has shown to be faster, and a lot of cases where it is slower.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Jun 2002 09:32:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, which part of is \"demonstrably false\"? I think the old \"should\n> generally be preferred\" is too vague. No one has come up with a case\n> where hash has shown to be faster, and a lot of cases where it is slower.\n\nThe only thing I recall being lots worse is initial index build.\n\nI have not tested it much, but I would expect that hash holds up better\nin the presence of many equal keys than btree does...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 10:47:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, which part of is \"demonstrably false\"? I think the old \"should\n> > generally be preferred\" is too vague. No one has come up with a case\n> > where hash has shown to be faster, and a lot of cases where it is slower.\n> \n> The only thing I recall being lots worse is initial index build.\n> \n> I have not tested it much, but I would expect that hash holds up better\n> in the presence of many equal keys than btree does...\n\nI remember three problems: build time, index size, and concurrency\nproblems. I was wondering about the equal key case myself, and assumed\nhash may be a win there, but with the concurrency problems, is that even\npossible?\n\nOK, I have reworded it. Is that better? How about an elog(NOTICE) for\nhash use?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/indices.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/indices.sgml,v\nretrieving revision 1.32\ndiff -c -r1.32 indices.sgml\n*** doc/src/sgml/indices.sgml\t21 Jun 2002 03:25:53 -0000\t1.32\n--- doc/src/sgml/indices.sgml\t21 Jun 2002 15:00:32 -0000\n***************\n*** 181,188 ****\n </synopsis>\n <note>\n <para>\n! Testing has shown that hash indexes are slower than btree indexes,\n! and the size and build time for hash indexes is much worse. For\n these reasons, hash index use is discouraged.\n </para>\n </note> \n--- 181,188 ----\n </synopsis>\n <note>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree indexes,\n! and the index size and build time for hash indexes is much worse. For\n these reasons, hash index use is discouraged.\n </para>\n </note> \nIndex: doc/src/sgml/ref/create_index.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_index.sgml,v\nretrieving revision 1.32\ndiff -c -r1.32 create_index.sgml\n*** doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 03:25:53 -0000\t1.32\n--- doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 15:00:32 -0000\n***************\n*** 330,337 ****\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown that hash indexes are slower than btree indexes,\n! and the size and build time for hash indexes is much worse. For\n these reasons, hash index use is discouraged.\n </para>\n \n--- 330,337 ----\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree indexes,\n! and the index size and build time for hash indexes is much worse. 
For\n these reasons, hash index use is discouraged.\n </para>", "msg_date": "Fri, 21 Jun 2002 11:03:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I remember three problems: build time, index size, and concurrency\n> problems. I was wondering about the equal key case myself, and assumed\n> hash may be a win there, but with the concurrency problems, is that even\n> possible?\n\nSure. Many-equal-keys are a problem for btree whether you have any\nconcurrency or not.\n\n> OK, I have reworded it. Is that better?\n\nIt's better, but you've still discarded the original's explicit mention\nof concurrency problems. Why do you want to remove information?\n\n> How about an elog(NOTICE) for hash use?\n\nI don't think that's appropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jun 2002 11:47:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I remember three problems: build time, index size, and concurrency\n> > problems. I was wondering about the equal key case myself, and assumed\n> > hash may be a win there, but with the concurrency problems, is that even\n> > possible?\n> \n> Sure. Many-equal-keys are a problem for btree whether you have any\n> concurrency or not.\n> \n> > OK, I have reworded it. Is that better?\n> \n> It's better, but you've still discarded the original's explicit mention\n> of concurrency problems. Why do you want to remove information?\n\nOK, concurrency added. How is that?\n\n> \n> > How about an elog(NOTICE) for hash use?\n> \n> I don't think that's appropriate.\n\nI was thinking of this during CREATE INDEX ... hash:\n\n\tNOTICE: Hash index use is discouraged. 
See the CREATE INDEX\n\treference page for more information.\n\nDoes anyone else like/dislike that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/indices.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/indices.sgml,v\nretrieving revision 1.32\ndiff -c -r1.32 indices.sgml\n*** doc/src/sgml/indices.sgml\t21 Jun 2002 03:25:53 -0000\t1.32\n--- doc/src/sgml/indices.sgml\t21 Jun 2002 16:50:23 -0000\n***************\n*** 181,189 ****\n </synopsis>\n <note>\n <para>\n! Testing has shown that hash indexes are slower than btree indexes,\n! and the size and build time for hash indexes is much worse. For\n! these reasons, hash index use is discouraged.\n </para>\n </note> \n </para>\n--- 181,190 ----\n </synopsis>\n <note>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree\n! indexes, and the index size and build time for hash indexes is much\n! worse. Hash indexes also suffer poor performance under high\n! concurrency. For these reasons, hash index use is discouraged.\n </para>\n </note> \n </para>\nIndex: doc/src/sgml/ref/create_index.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_index.sgml,v\nretrieving revision 1.32\ndiff -c -r1.32 create_index.sgml\n*** doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 03:25:53 -0000\t1.32\n--- doc/src/sgml/ref/create_index.sgml\t21 Jun 2002 16:50:23 -0000\n***************\n*** 330,338 ****\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown that hash indexes are slower than btree indexes,\n! and the size and build time for hash indexes is much worse. For\n! 
these reasons, hash index use is discouraged.\n </para>\n \n <para>\n--- 330,339 ----\n the <literal>=</literal> operator.\n </para>\n <para>\n! Testing has shown hash indexes to be similar or slower than btree\n! indexes, and the index size and build time for hash indexes is much\n! worse. Hash indexes also suffer poor performance under high\n! concurrency. For these reasons, hash index use is discouraged.\n </para>\n \n <para>", "msg_date": "Fri, 21 Jun 2002 12:51:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "On Fri, 2002-06-21 at 11:51, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> \n> > \n> > > How about an elog(NOTICE) for hash use?\n> > \n> > I don't think that's appropriate.\n> \n> I was thinking of this during CREATE INDEX ... hash:\n> \n> \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> \treference page for more information.\n> \n> Does anyone else like/dislike that?\nI dislike it. Some clients/dba's will wonder why we even have them. \n\nWhy should we bug the DBA on EVERY index that is a hash? \n\nI know I personally hate the FreeBSD linker warnings about certain\nfunctions, and don't like that precedent. \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "21 Jun 2002 15:09:19 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" 
}, { "msg_contents": "Larry Rosenman wrote:\n> On Fri, 2002-06-21 at 11:51, Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > \n> > > \n> > > > How about an elog(NOTICE) for hash use?\n> > > \n> > > I don't think that's appropriate.\n> > \n> > I was thinking of this during CREATE INDEX ... hash:\n> > \n> > \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> > \treference page for more information.\n> > \n> > Does anyone else like/dislike that?\n> I dislike it. Some clients/dba's will wonder why we even have them. \n> \n> Why should we bug the DBA on EVERY index that is a hash? \n> \n> I know I personally hate the FreeBSD linker warnings about certain\n> functions, and don't like that precedent. \n\nOK, that's enough of a negative vote for me. So you feel the\ndocumentation change is enough? Tom thinks so too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Jun 2002 16:12:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" }, { "msg_contents": "On Fri, 2002-06-21 at 15:12, Bruce Momjian wrote:\n> Larry Rosenman wrote:\n> > On Fri, 2002-06-21 at 11:51, Bruce Momjian wrote:\n> > > Tom Lane wrote:\n> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> > > \n> > > > \n> > > > > How about an elog(NOTICE) for hash use?\n> > > > \n> > > > I don't think that's appropriate.\n> > > \n> > > I was thinking of this during CREATE INDEX ... hash:\n> > > \n> > > \tNOTICE: Hash index use is discouraged. See the CREATE INDEX\n> > > \treference page for more information.\n> > > \n> > > Does anyone else like/dislike that?\n> > I dislike it. Some clients/dba's will wonder why we even have them. 
\n> > \n> > Why should we bug the DBA on EVERY index that is a hash? \n> > \n> > I know I personally hate the FreeBSD linker warnings about certain\n> > functions, and don't like that precedent. \n> \n> OK, that's enough of a negative vote for me. So you feel the\n> documentation change is enough? Tom thinks so too.\nYup.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "21 Jun 2002 15:13:55 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: What is wrong with hashed index usage?" } ]
[ { "msg_contents": "[tgl@rh1 preproc]$ make\nbison -y -d preproc.y\nconflicts: 2 reduce/reduce\n\nThis is not good.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Apr 2002 18:46:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ecpg/preproc.y is generating reduce/reduce conflicts" } ]
[ { "msg_contents": "Using current CVS (yesterday) I've rerun the benchmarks to see the\neffects of various NAMEDATALEN settings.\n\n3 times per setting.\n\n\nFirst time is pgbench inserts (-s 5)\nSecond time is pgbench run (-t 3000 -s 5)\n\nThird time is the postmaster during both of the above.\n\nI'll run it again tonight on a computer with faster disk I/O to see if\nthat helps descrease the sys times.\n\n-----\nNAMEDATALEN: 32\n\n 89.34 real 1.85 user 0.13 sys\n 146.87 real 1.51 user 3.91 sys\n\n 246.63 real 66.11 user 19.21 sys\n\n\nNAMEDATALEN: 64\n\n 93.10 real 1.82 user 0.16 sys\n 147.30 real 1.45 user 3.90 sys\n\n 249.28 real 66.01 user 18.82 sys\n\n\nNAMEDATALEN: 128\n\n 99.13 real 1.80 user 0.51 sys\n 169.47 real 1.87 user 4.54 sys\n\n 279.16 real 67.93 user 29.72 sys\n\n\nNAMEDATALEN: 256\n\n 106.60 real 1.81 user 0.43 sys\n 166.61 real 1.69 user 4.25 sys\n\n 283.76 real 66.88 user 26.59 sys", "msg_date": "22 Apr 2002 20:25:59 -0300", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "NAMEDATALEN revisited" } ]
[ { "msg_contents": "I was in Boston for a few days for a wedding. Never got time to be\nonline. I am back now. I will read my email and apply outstanding\npatches tomorrow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 00:33:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "I am back" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Francisco Jr. [mailto:fxjrlists@yahoo.com.br] \n> Sent: 22 April 2002 22:35\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: Implement a .NET Data\n> \n> \n> \n> \n> Thanks Jean.\n> \n> I will send a message to the ODBC list.\n> At least I didn't create the project in sourceforge\n> yet. I will try to get a cvs account at\n> Postgresql.org\n> as you said. :)\n> \n> Thanks very much!!!\n\nYou might want to look at http://gborg.postgresql.org.\n\nRegards, Dave.\n", "msg_date": "Tue, 23 Apr 2002 08:25:55 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Implement a .NET Data" }, { "msg_contents": "\nThanks Dave.\n\nI will create the project at gborg.postgresql.org on\nFriday :)\n\n> You might want to look at\n> http://gborg.postgresql.org.\n> \n> Regards, Dave. \n\n_______________________________________________________________________________________________\nYahoo! Empregos\nO trabalho dos seus sonhos pode estar aqui. Cadastre-se hoje mesmo no Yahoo! Empregos e tenha acesso a milhares de vagas abertas!\nhttp://br.empregos.yahoo.com/\n", "msg_date": "Wed, 24 Apr 2002 18:15:53 -0300 (ART)", "msg_from": "\"=?iso-8859-1?q?Francisco=20Jr.?=\" <fxjrlists@yahoo.com.br>", "msg_from_op": false, "msg_subject": "Re: Implement a .NET Data" } ]
[ { "msg_contents": "I have tried using chunks technique when creating huge string for a stored \nprocedure (C code). I work like charm for small string, but when i create \nlarge strings i get a \"server closed the connection unexpectedly\" :-( I use \npalloc and repalloc for memory handling. Note! I made a standard C program \nthat just keept on making the chunk larger (read string), i never did crash \nany. So what is postgres doing? (ps. the same happens if I use standard malloc \nan realloc)\n\nAny ideas why? (No continues block of memory is large enough or?)\nAnd how does one normally handle Large strings in postgres?\n\n\\Steffen Nielsen\n", "msg_date": "Tue, 23 Apr 2002 10:55:52 +0200", "msg_from": "Steffen Nielsen <styf@cs.auc.dk>", "msg_from_op": true, "msg_subject": "Generating Huge String?" }, { "msg_contents": "Steffen Nielsen <styf@cs.auc.dk> writes:\n> I have tried using chunks technique when creating huge string for a stored \n> procedure (C code). I work like charm for small string, but when i create \n> large strings i get a \"server closed the connection unexpectedly\" :-(\n\nLook for bugs in your code ;-). I'd bet it's scribbling on memory that\ndoesn't belong to it.\n\n> And how does one normally handle Large strings in postgres?\n\nThe StringInfo functions are moderately convenient in most cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Apr 2002 09:49:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Generating Huge String? " }, { "msg_contents": "Hi!\n\nI checked out the StringInfo functions, they are basicly the same as the \nChunks functions I use, but I'll use the others instead even thought I don't \nthink they will help me on my problem. 
But anyway, I've checked out the malloc \nand realloc function, and believe that they won't allow allocation into an \nallready occopied memory area (at least on freebsd).\n\nBut if not, maybe I should create a new Chunk (malloc again, and copy; But \nthat would probably lead to heavy fragmentation of the memory) if realloc \ncan't allocate more continuos memory space?\n\nSorry if these question seem trivially, I'm a C newbie :-)\n\n/Steffen Nielsen\n\nQuoting Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Steffen Nielsen <styf@cs.auc.dk> writes:\n> > I have tried using chunks technique when creating huge string for a stored\n> \n> > procedure (C code). I work like charm for small string, but when i create\n> \n> > large strings i get a \"server closed the connection unexpectedly\" :-(\n> \n> Look for bugs in your code ;-). I'd bet it's scribbling on memory that\n> doesn't belong to it.\n> \n> > And how does one normally handle Large strings in postgres?\n> \n> The StringInfo functions are moderately convenient in most cases.\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n", "msg_date": "Tue, 23 Apr 2002 22:17:59 +0200", "msg_from": "Steffen Nielsen <styf@cs.auc.dk>", "msg_from_op": true, "msg_subject": "Re: Generating Huge String? " } ]
[ { "msg_contents": "On Tuesday 23 April 2002 04:27 pm, Bruce Momjian wrote:\n> OK, would people please vote on how to handle SET in an aborted\n> transaction? This vote will allow us to resolve the issue and move\n> forward if needed.\n\n> at the end, should 'x' equal:\n\n> \t1 - All SETs are rolled back in aborted transaction\n\nThis seems the correct behavior.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 23 Apr 2002 14:13:29 +0000", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "OK, would people please vote on how to handle SET in an aborted\ntransaction? This vote will allow us to resolve the issue and move\nforward if needed.\n\nIn the case of:\n\n\tSET x=1;\n\tBEGIN;\n\tSET x=2;\n\tquery_that_aborts_transaction;\n\tSET x=3;\n\tCOMMIT;\n\nat the end, should 'x' equal:\n\t\n\t1 - All SETs are rolled back in aborted transaction\n\t2 - SETs are ignored after transaction abort\n\t3 - All SETs are honored in aborted transaction\n\t? - Have SETs vary in behavior depending on variable\n\nOur current behavior is 2.\n\nPlease vote and I will tally the results.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:27:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Vote on SET in aborted transaction" }, { "msg_contents": "* Bruce Momjian (pgman@candle.pha.pa.us) [020423 12:30]:\n> \t\n> \t1 - All SETs are rolled back in aborted transaction\n> \t2 - SETs are ignored after transaction abort\n> \t3 - All SETs are honored in aborted transaction\n> \t? 
- Have SETs vary in behavior depending on variable\n> \n> Our current behavior is 2.\n> \n> Please vote and I will tally the results.\n\n#2, no change in behavior.\n\nBut I base that on the assumption that #1 or #3 involve serious amounts\nof work, and don't see the big benefit.\n\nI liked the line of thought that was distinguishing between in-band \n(rolled back) and out-of-band (honored) SETs, although I don't think\nany acceptable syntax was arrived at, and I don't have a suggestion.\nIf this were solved, I'd vote for '?'.\n\nHmm. Maybe I do have a suggestion: SET [TRANSACTIONAL] ...\nBut it might not be very practical.\n\n-Brad\n", "msg_date": "Tue, 23 Apr 2002 12:43:03 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bradley McLean wrote:\n> * Bruce Momjian (pgman@candle.pha.pa.us) [020423 12:30]:\n> > \t\n> > \t1 - All SETs are rolled back in aborted transaction\n> > \t2 - SETs are ignored after transaction abort\n> > \t3 - All SETs are honored in aborted transaction\n> > \t? - Have SETs vary in behavior depending on variable\n> > \n> > Our current behavior is 2.\n> > \n> > Please vote and I will tally the results.\n> \n> #2, no change in behavior.\n> \n> But I base that on the assumption that #1 or #3 involve serious amounts\n> of work, and don't see the big benefit.\n\nI don't want to make any big comments during the vote, but I should\nmention that #1 is needed by Tom's SET for namespace path, and #1 or #3\nis needed to clearly handle query timeouts.\n\nJust thought I would refresh people's memory on how this discussion got\nstarted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 12:46:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "\n 1\n\n SET should follow transaction semantics and rollback.\n\n\nJan\n\nBruce Momjian wrote:\n> OK, would people please vote on how to handle SET in an aborted\n> transaction? This vote will allow us to resolve the issue and move\n> forward if needed.\n> \n> In the case of:\n> \n> \tSET x=1;\n> \tBEGIN;\n> \tSET x=2;\n> \tquery_that_aborts_transaction;\n> \tSET x=3;\n> \tCOMMIT;\n> \n> at the end, should 'x' equal:\n> \t\n> \t1 - All SETs are rolled back in aborted transaction\n> \t2 - SETs are ignored after transaction abort\n> \t3 - All SETs are honored in aborted transaction\n> \t? - Have SETs vary in behavior depending on variable\n> \n> Our current behavior is 2.\n> \n> Please vote and I will tally the results.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n", "msg_date": "Tue, 23 Apr 2002 12:59:58 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \t1 - All SETs are rolled back in aborted transaction\n> \t2 - SETs are ignored after transaction abort\n> \t3 - All SETs are honored in aborted transaction\n> \t? - Have SETs vary in behavior depending on variable\n\nMy vote is 1 - roll back all SETs.\n\nI'd be willing to consider making the behavior variable-specific\nif anyone can identify particular variables that need to behave\ndifferently. But overall I think it's better that the behavior\nbe consistent --- so you'll need a good argument to convince me\nthat anything should behave differently ;-).\n\nThere is a variant case that should also have been illustrated:\nwhat if there is no error, but the user does ROLLBACK instead of\nCOMMIT? 
This vote will allow us to resolve the issue and move\n> forward if needed.\n> \n> In the case of:\n> \n> \tSET x=1;\n> \tBEGIN;\n> \tSET x=2;\n> \tquery_that_aborts_transaction;\n> \tSET x=3;\n> \tCOMMIT;\n> \n> at the end, should 'x' equal:\n> \t\n> \t1 - All SETs are rolled back in aborted transaction\n> \t2 - SETs are ignored after transaction abort\n> \t3 - All SETs are honored in aborted transaction\n> \t? - Have SETs vary in behavior depending on variable\n> \n> Our current behavior is 2.\n\n1 makes the most sense to me. I think it should be consistent for all \nSET variables.\n\nJoe\n\n", "msg_date": "Tue, 23 Apr 2002 11:27:52 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > 1 - All SETs are rolled back in aborted transaction\n> > 2 - SETs are ignored after transaction abort\n> > 3 - All SETs are honored in aborted transaction\n> > ? - Have SETs vary in behavior depending on variable\n> \n> My vote is 1 - roll back all SETs.\n\nHmm I don't understand which to vote, sorry.\nAre they all exclusive in the first place ?\n \n> I'd be willing to consider making the behavior variable-specific\n> if anyone can identify particular variables that need to behave\n> differently. But overall I think it's better that the behavior\n> be consistent --- so you'll need a good argument to convince me\n> that anything should behave differently ;-).\n> \n> There is a variant case that should also have been illustrated:\n> what if there is no error, but the user does ROLLBACK instead of\n> COMMIT? 
The particular case that is causing difficulty for me is\n> \n> begin;\n> create schema foo;\n> set search_path = foo;\n> rollback;\n> \n> There is *no* alternative here but to roll back the search_path\n> setting.\n\n\tbegin;\n\txxxx;\n\tERROR: parser: parse error at or near \"xxxx\"\n\nThere's *no* alternative here but to call *rollback*(commit).\nHowever PostgreSQL doesn't call *rollback* automatically and\nit's the user's responsibility to call *rollback* on errors.\nIMHO what to do with errors is users' responsibility basically.\nThe behavior of the *search_path\" variable is a *had better*\nor *convenient* kind of thing not a *no alternative* kind\nof thing.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Wed, 24 Apr 2002 15:13:59 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > I'd be willing to consider making the behavior variable-specific\n> > if anyone can identify particular variables that need to behave\n> > differently. But overall I think it's better that the behavior\n> > be consistent --- so you'll need a good argument to convince me\n> > that anything should behave differently ;-).\n> > \n> > There is a variant case that should also have been illustrated:\n> > what if there is no error, but the user does ROLLBACK instead of\n> > COMMIT? 
The particular case that is causing difficulty for me is\n> > \n> > begin;\n> > create schema foo;\n> > set search_path = foo;\n> > rollback;\n> > \n> > There is *no* alternative here but to roll back the search_path\n> > setting.\n> \n> \tbegin;\n> \txxxx;\n> \tERROR: parser: parse error at or near \"xxxx\"\n> \n> There's *no* alternative here but to call *rollback*(commit).\n> However PostgreSQL doesn't call *rollback* automatically and\n> it's the user's responsibility to call *rollback* on errors.\n> IMHO what to do with errors is users' responsibility basically.\n> The behavior of the *search_path\" variable is a *had better*\n> or *convenient* kind of thing not a *no alternative* kind\n> of thing.\n\nI understand from an ODBC perspective that it is the apps\nresponsibility, but we need some defined behavior for a psql script that\nis fed into the database.\n\nAssuming the SET commands continue to come after it is aborted but\nbefore the COMMIT/ROLLBACK, we need to define how to handle it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 09:56:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "> OK, would people please vote on how to handle SET in an aborted\n> transaction?\n> at the end, should 'x' equal:\n> 1 - All SETs are rolled back in aborted transaction\n> 2 - SETs are ignored after transaction abort\n> 3 - All SETs are honored in aborted transaction\n> ? - Have SETs vary in behavior depending on variable\n\nI'll vote for \"?\", if for no other reason that you are proposing taking\naway a huge chunk of \"language space\" by apriori disallowing out of band\nbehaviors for anything starting with \"SET\". 
I think that is likely\nHiroshi's concern also.\n\nIf we can fit all current \"SET\" behaviors into a transaction model, then\nI'm not against that (though we should review the list of attributes\nwhich *are* currently affected before settling on this). afaik we have\nnot reviewed current behaviors and have not thought through the \"what\nif's\" that some soft of premature policy decision might constrain in the\nfuture.\n\nLet me give you some examples. We might someday have nested\ntransactions, or transactions which can be resumed from the point of\nfailure. We *might* want to be able to affect recovery behaviors, and we\n*might* want to do so with something like\n\nbegin;\nupdate foo...\nupdate bar...\n<last update fails>\nset blah to blah\nupdate baz...\nupdate bar...\n<last update now succeeds>\nend;\n\nNow we currently *don't* support this behavior, but istm that we\nshouldn't preclude it in the language by forcing some blanket \"all SET\nstatements will be transaction aware\".\n\nWhat language elements would you propose to cover the out of band cases\nif you *do* disallow \"SET\" in that context? If you don't have a\ncandidate, I'd be even more reluctant to go along with the results of\nsome arbitrary vote which is done in a narrow context.\n\nAnd btw, if we *are* going to put transaction semantics on all of our\nglobal variables (which is the context for starting this \"SET\"\ndiscussion, right? Is that really the context we are still in, even\nthough you have phrased a much more general statement above?) then let's\nhave the discussion on *HOW* we are going to accomplish that *BEFORE*\ndeciding to make a semantic constraint on our language support.\n\nHmm, if we are going to use transaction semantics, then we should\nconsider using our existing transaction mechanisms, and if we use our\nexisting transaction mechanisms we should consider pushing these global\nvariables into tables or in memory tables a la \"temp tables\". 
We get the\ntransaction semantics for free, with the cost of value lookup at the\nbeginning of a transaction or statement (not sure what we can get away\nwith here).\n\nIf we are *not* going to use those existing mechanisms, then what\nmechanism *are* we going to use? Some sort of \"abort hook\" mechanism to\nallow SET to register things to be rolled back?\n\nIf we end up making changes and increasing constraints, then we should\nalso expect some increased functionality as part of the scheme,\nspecifically \"SET extensibility\". We should allow (future) packages to\ndefine their parameters and allow SET to help.\n\nJust some thoughts...\n\n - Thomas\n", "msg_date": "Wed, 24 Apr 2002 07:23:27 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Let me give you some examples. We might someday have nested\n> transactions, or transactions which can be resumed from the point of\n> failure. We *might* want to be able to affect recovery behaviors, and we\n> *might* want to do so with something like\n\n> begin;\n> update foo...\n> update bar...\n> <last update fails>\n> set blah to blah\n> update baz...\n> update bar...\n> <last update now succeeds>\n> end;\n\nSure, once we have savepoints or nested transactions I would expect SET\nto work like that. The \"alternative 1\" should better be phrased as\n\"SETs should work the same way as regular SQL commands do\".\n\nI agree with your comment that it would be useful to look closely at the\nlist of settable variables to see whether any of them need different\nsemantics. 
Here's the list of everything that can be SET after backend\nstart (some of these require superuser privilege to set, but that seems\nirrelevant):\n\ndatestyle\ntimezone\nXactIsoLevel\nclient_encoding\nserver_encoding\nseed\nsession_authorization\nenable_seqscan\nenable_indexscan\nenable_tidscan\nenable_sort\nenable_nestloop\nenable_mergejoin\nenable_hashjoin\nksqo\ngeqo\ndebug_assertions\ndebug_print_query\ndebug_print_parse\ndebug_print_rewritten\ndebug_print_plan\ndebug_pretty_print\nshow_parser_stats\nshow_planner_stats\nshow_executor_stats\nshow_query_stats\nshow_btree_build_stats\nexplain_pretty_print\nstats_command_string\nstats_row_level\nstats_block_level\ntrace_notify\ntrace_locks\ntrace_userlocks\ntrace_lwlocks\ndebug_deadlocks\nsql_inheritance\naustralian_timezones\npassword_encryption\ntransform_null_equals\ngeqo_threshold\ngeqo_pool_size\ngeqo_effort\ngeqo_generations\ngeqo_random_seed\nsort_mem\nvacuum_mem\ntrace_lock_oidmin\ntrace_lock_table\nmax_expr_depth\nwal_debug\ncommit_delay\ncommit_siblings\neffective_cache_size\nrandom_page_cost\ncpu_tuple_cost\ncpu_index_tuple_cost\ncpu_operator_cost\ngeqo_selection_bias\nclient_min_messages\ndefault_transaction_isolation\ndynamic_library_path\nsearch_path\nserver_min_messages\n\nRight offhand, I am not seeing anything here for which there's a\ncompelling case not to roll it back on error.\n\nIn fact, I have yet to hear *any* plausible example of a variable\nthat we would really seriously want not to roll back on error.\n\n> And btw, if we *are* going to put transaction semantics on all of our\n> global variables (which is the context for starting this \"SET\"\n> discussion, right? Is that really the context we are still in, even\n> though you have phrased a much more general statement above?) 
then let's\n> have the discussion on *HOW* we are going to accomplish that *BEFORE*\n> deciding to make a semantic constraint on our language support.\n\nHardly necessary: we'll just make guc.c keep track of the\nstart-of-transaction values of all variables that have changed in the\ncurrent transaction, and restore them to that value upon transaction\nabort. Doesn't seem like a big deal to me. We've got tons of other\ncode that does exactly the same sort of thing.\n\n> Hmm, if we are going to use transaction semantics, then we should\n> consider using our existing transaction mechanisms, and if we use our\n> existing transaction mechanisms we should consider pushing these global\n> variables into tables or in memory tables a la \"temp tables\".\n\nQuite a few of the GUC settings are values that need to be set and used\nduring startup, before we have table access up and running. I do not\nthink that it's very practical to expect them to be accessed through\ntable access mechanisms.\n\n> If we end up making changes and increasing constraints, then we should\n> also expect some increased functionality as part of the scheme,\n> specifically \"SET extensibility\".\n\nIt might well be a good idea to allow variables to be added to guc.c's\nlists on-the-fly by the initialization routines of loadable modules.\nBut that's orthogonal to this discussion, IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 11:31:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction " }, { "msg_contents": "Vote number 1 -- ROLL BACK\n\nBruce Momjian wrote:\n\n>OK, would people please vote on how to handle SET in an aborted\n>transaction? 
This vote will allow us to resolve the issue and move\n>forward if needed.\n>\n>In the case of:\n>\n>\tSET x=1;\n>\tBEGIN;\n>\tSET x=2;\n>\tquery_that_aborts_transaction;\n>\tSET x=3;\n>\tCOMMIT;\n>\n>at the end, should 'x' equal:\n>\t\n>\t1 - All SETs are rolled back in aborted transaction\n>\t2 - SETs are ignored after transaction abort\n>\t3 - All SETs are honored in aborted transaction\n>\t? - Have SETs vary in behavior depending on variable\n>\n>Our current behavior is 2.\n>\n>Please vote and I will tally the results.\n>\n\n\n", "msg_date": "Wed, 24 Apr 2002 10:33:19 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "On Wed, 24 Apr 2002, Michael Loftis wrote:\n\n> Vote number 1 -- ROLL BACK\n\nI agree.. Number 1 - ROLL BACK\n\n>\n> Bruce Momjian wrote:\n>\n> >OK, would people please vote on how to handle SET in an aborted\n> >transaction? This vote will allow us to resolve the issue and move\n> >forward if needed.\n> >\n> >In the case of:\n> >\n> >\tSET x=1;\n> >\tBEGIN;\n> >\tSET x=2;\n> >\tquery_that_aborts_transaction;\n> >\tSET x=3;\n> >\tCOMMIT;\n> >\n> >at the end, should 'x' equal:\n> >\n> >\t1 - All SETs are rolled back in aborted transaction\n> >\t2 - SETs are ignored after transaction abort\n> >\t3 - All SETs are honored in aborted transaction\n> >\t? 
- Have SETs vary in behavior depending on variable\n> >\n> >Our current behavior is 2.\n> >\n> >Please vote and I will tally the results.\n> >\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 24 Apr 2002 14:20:23 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, would people please vote on how to handle SET in an aborted\n> transaction? This vote will allow us to resolve the issue and move\n> forward if needed.\n> \n> In the case of:\n> \n> SET x=1;\n> BEGIN;\n> SET x=2;\n> query_that_aborts_transaction;\n> SET x=3;\n> COMMIT;\n> \n> at the end, should 'x' equal:\n> \n> 1 - All SETs are rolled back in aborted transaction\n> 2 - SETs are ignored after transaction abort\n> 3 - All SETs are honored in aborted transaction\n> ? 
- Have SETs vary in behavior depending on variable\n> \n> Our current behavior is 2.\n> \n> Please vote and I will tally the results.\n\nIs it a vote in the first place ?\nI will vote the current(2 + 3 + ?).\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 08:42:43 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Tom Lane wrote:\n> \n \n> Right offhand, I am not seeing anything here for which there's a\n> compelling case not to roll it back on error.\n> \n> In fact, I have yet to hear *any* plausible example of a variable\n> that we would really seriously want not to roll back on error.\n\nHonetsly I don't understand what kind of example you\nexpect. How about the following ?\n\n[The curren schema is schema1]\n\n\tbegin;\n\tcreate schema foo;\n\tset search_path = foo;\n\tcreate table t1 (....);\n\t.\n [error occurs]\n\trollback;\n\tinsert into t1 select * from schema1.t1;\n\nShould the search_path be put back in this case ?\nAs I mentioned already many times, it doesn't seem\n*should be* kind of thing.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 09:06:59 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "\nOK, the votes are in:\n\t\n\t#1\n\tLamar Owen\n\tJan Wieck\n\tTom Lane\n\tBruce Momjian\n\tJoe Conway\n\tCurt Sampson\n\tMichael Loftis\n\tVince Vielhaber\n\tSander Steffann\n\t \n\t#2\n\tBradley McLean\n\t \n\t \n\t \n\t#3\n\t\n\t#?\n\tThomas Lockhart\n\tHiroshi Inoue\n\nLooks like #1 is the clear winner.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> OK, would people please vote on how to handle SET in an aborted\n> transaction? 
This vote will allow us to resolve the issue and move\n> forward if needed.\n> \n> In the case of:\n> \n> \tSET x=1;\n> \tBEGIN;\n> \tSET x=2;\n> \tquery_that_aborts_transaction;\n> \tSET x=3;\n> \tCOMMIT;\n> \n> at the end, should 'x' equal:\n> \t\n> \t1 - All SETs are rolled back in aborted transaction\n> \t2 - SETs are ignored after transaction abort\n> \t3 - All SETs are honored in aborted transaction\n> \t? - Have SETs vary in behavior depending on variable\n> \n> Our current behavior is 2.\n> \n> Please vote and I will tally the results.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 20:39:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Tom Lane wrote:\n> >\n>\n> > Right offhand, I am not seeing anything here for which there's a\n> > compelling case not to roll it back on error.\n> >\n> > In fact, I have yet to hear *any* plausible example of a variable\n> > that we would really seriously want not to roll back on error.\n>\n> Honetsly I don't understand what kind of example you\n> expect. 
How about the following ?\n>\n> [The current schema is schema1]\n>\n> begin;\n> create schema foo;\n> set search_path = foo;\n> create table t1 (....);\n> .\n> [error occurs]\n> rollback;\n> insert into t1 select * from schema1.t1;\n>\n> Should the search_path be put back in this case ?\n> As I mentioned already many times, it doesn't seem\n> *should be* kind of thing.\n\n Sure should it! You gave an example for the need to roll\n back, because otherwise you would end up with an invalid\n search path \"foo\".\n\n I still believe that rolling back is the only right thing to\n do. What if your application doesn't even know that some\n changes happened? Have a trigger that sets seqscan off, does\n some stuff and intends to reset it later again. Now it elogs\n out before, so your application will have to live with this\n mis-setting on this pooled DB connection until the end? I\n don't think so!\n\n\nJan\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 24 Apr 2002 20:52:31 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Hiroshi Inoue wrote:\n> > Tom Lane wrote:\n> > >\n> >\n> > > Right offhand, I am not seeing anything here for which there's a\n> > > compelling case not to roll it back on error.\n> > >\n> > > In fact, I have yet to hear *any* plausible example of a variable\n> > > that we would really seriously want not to roll back on error.\n> >\n> > Honestly I don't understand what kind of example you\n> > expect.
How about the following ?\n> >\n> > [The current schema is schema1]\n> >\n> > begin;\n> > create schema foo;\n> > set search_path = foo;\n> > create table t1 (....);\n> > .\n> > [error occurs]\n> > rollback;\n> > insert into t1 select * from schema1.t1;\n> >\n> > Should the search_path be put back in this case ?\n> > As I mentioned already many times, it doesn't seem\n> > *should be* kind of thing.\n> \n> Sure should it! You gave an example for the need to roll\n> back, because\n\n> otherwise you would end up with an invalid\n> search path \"foo\".\n\nWhat's wrong with it ? The insert command after *rollback*\nwould fail. It seems the right thing to me. Otherwise\nthe insert command would try to append the data of the\ntable t1 to itself. The insert command is for copying\nschema1.t1 to foo.t1 in case the previous create schema\ncommand succeeded.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 10:06:04 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Honestly I don't understand what kind of example you\n> expect. How about the following ?\n\n> [The current schema is schema1]\n\n> \tbegin;\n> \tcreate schema foo;\n> \tset search_path = foo;\n> \tcreate table t1 (....);\n> \t.\n> [error occurs]\n> \trollback;\n> \tinsert into t1 select * from schema1.t1;\n\n> Should the search_path be put back in this case ?\n\nSure it should be.
Otherwise it's pointing at a nonexistent schema.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 21:08:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, the votes are in:\n> \n> #1\n> Lamar Owen\n> Jan Wieck\n> Tom Lane\n> Bruce Momjian\n> Joe Conway\n> Curt Sampson\n> Michael Loftis\n> Vince Vielhaber\n> Sander Steffann\n> \n> #2\n> Bradley McLean\n> \n> \n> \n> #3\n> \n> #?\n> Thomas Lockhart\n> Hiroshi Inoue\n> \n> Looks like #1 is the clear winner.\n\nI voted not only ? but also 2 and 3.\nAnd haven't I asked twice or so if it's a vote ?\n\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 10:10:26 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > OK, the votes are in:\n> > \n> > #1\n> > Lamar Owen\n> > Jan Wieck\n> > Tom Lane\n> > Bruce Momjian\n> > Joe Conway\n> > Curt Sampson\n> > Michael Loftis\n> > Vince Vielhaber\n> > Sander Steffann\n> > \n> > #2\n> > Bradley McLean\n> > \n> > \n> > \n> > #3\n> > \n> > #?\n> > Thomas Lockhart\n> > Hiroshi Inoue\n> > \n> > Looks like #1 is the clear winner.\n> \n> I voted not only ? but also 2 and 3.\n> And haven't I asked twice or so if it's a vote ?\n\nYes, it is a vote, and now that we see how everyone feels, we can\ndecide what to do.\n\nHiroshi, you can't vote for 2, 3, and ?. Please pick one. I picked '?'\nfor you because it seemed the closest to your intent. I can put you\ndown for 1/3 of a vote for all three if you wish.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:12:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > OK, the votes are in:\n> > >\n> > > #1\n> > > Lamar Owen\n> > > Jan Wieck\n> > > Tom Lane\n> > > Bruce Momjian\n> > > Joe Conway\n> > > Curt Sampson\n> > > Michael Loftis\n> > > Vince Vielhaber\n> > > Sander Steffann\n> > >\n> > > #2\n> > > Bradley McLean\n> > >\n> > >\n> > >\n> > > #3\n> > >\n> > > #?\n> > > Thomas Lockhart\n> > > Hiroshi Inoue\n> > >\n> > > Looks like #1 is the clear winner.\n> >\n> > I voted not only ? but also 2 and 3.\n> > And haven't I asked twice or so if it's a vote ?\n> \n> Yes, it is a vote, and now that we see how everyone feels, we can\n> decide what to do.\n> \n> Hiroshi, you can't vote for 2, 3, and ?.\n\nWhy ?\nI don't think the items are exclusive.\n \nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 25 Apr 2002 10:16:45 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > Bruce Momjian wrote:\n> > > >\n> > > > OK, the votes are in:\n> > > >\n> > > > #1\n> > > > Lamar Owen\n> > > > Jan Wieck\n> > > > Tom Lane\n> > > > Bruce Momjian\n> > > > Joe Conway\n> > > > Curt Sampson\n> > > > Michael Loftis\n> > > > Vince Vielhaber\n> > > > Sander Steffann\n> > > >\n> > > > #2\n> > > > Bradley McLean\n> > > >\n> > > >\n> > > >\n> > > > #3\n> > > >\n> > > > #?\n> > > > Thomas Lockhart\n> > > > Hiroshi Inoue\n> > > >\n> > > > Looks like #1 is the clear winner.\n> > >\n> > > I voted not only ?
but also 2 and 3.\n> > > And haven't I asked twice or so if it's a vote ?\n> > \n> > Yes, it is a vote, and now that we see how everyone feels, we can\n> > decide what to do.\n> > \n> > Hiroshi, you can't vote for 2, 3, and ?.\n> \n> Why ?\n> I don't think the items are exclusive.\n\nWell, 2 says roll back only after transaction aborts, 3 says honor all\nSETs, and ? says choose the behavior depending on the variable. How\ncan you have 2, 3, and ?. Seems ? is the catch-all vote because it\ndoesn't predefine the same behavior for all variables.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:17:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\n\nHiroshi Inoue wrote:\n\n>What's wrong with it ? The insert command after *rollback*\n>would fail. It seems the right thing to me. Otherwise\n>the insert command would try to append the data of the\n>table t1 to itself. The insert command is for copying\n>schema1.t1 to foo.t1 in case the previous create schema\n>command succeeded.\n>\nExactly, this example shows exactly why SETs should be part of the\ntransaction and roll back. Heck the insert may not even fail after all\nanyway and insert into the wrong schema. If the insert depends on the\nschema create succeeding it should be in the same transaction.
(IE it\nwould get rolled back or not happen at all)\n\n>\n>\n>regards, \n>Hiroshi Inoue\n>\thttp://w2422.nsk.ne.jp/~inoue/\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n", "msg_date": "Wed, 24 Apr 2002 18:26:20 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > >\n> > > > I voted not only ? but also 2 and 3.\n> > > > And haven't I asked twice or so if it's a vote ?\n> > >\n> > > Yes, it is a vote, and now that we see how everyone feels, we can\n> > > decide what to do.\n> > >\n> > > Hiroshi, you can't vote for 2, 3, and ?.\n> >\n> > Why ?\n> > I don't think the items are exclusive.\n> \n> Well, 2 says roll back only after transaction aborts,\n\nSorry for my poor understanding.\nIsn't it 1 ?\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 10:28:46 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > > > >\n> > > > > I voted not only ? but also 2 and 3.\n> > > > > And haven't I asked twice or so if it's a vote ?\n> > > >\n> > > > Yes, it is a vote, and now that we see how everyone feels, we can\n> > > > decide what to do.\n> > > >\n> > > > Hiroshi, you can't vote for 2, 3, and ?.\n> > >\n> > > Why ?\n> > > I don't think the items are exclusive.\n> > \n> > Well, 2 says roll back only after transaction aborts,\n> \n> Sorry for my poor understanding.\n> Isn't it 1 ?\n\nOK, original email attached. 1 rolls back all SETs in an aborted\ntransaction. 2 ignores SETs after transaction aborts, but SETs before\nthe transaction aborted are honored.
3 honors all SETs.\n\n---------------------------------------------------------------------------\n\n\nIn the case of:\n\n SET x=1;\n BEGIN;\n SET x=2;\n query_that_aborts_transaction;\n SET x=3;\n COMMIT;\n\nat the end, should 'x' equal:\n\n 1 - All SETs are rolled back in aborted transaction\n 2 - SETs are ignored after transaction abort\n 3 - All SETs are honored in aborted transaction\n ? - Have SETs vary in behavior depending on variable\n\nOur current behavior is 2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:29:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > > > >\n> > > > > > I voted not only ? but also 2 and 3.\n> > > > > > And haven't I asked twice or so if it's a vote ?\n> > > > >\n> > > > > Yes, it is a vote, and now that we see how everyone feels, we can\n> > > > > decide what to do.\n> > > > >\n> > > > > Hiroshi, you can't vote for 2, 3, and ?.\n> > > >\n> > > > Why ?\n> > > > I don't think the items are exclusive.\n> > >\n> > > Well, 2 says roll back only after transaction aborts,\n> >\n> > Sorry for my poor understanding.\n> > Isn't it 1 ?\n> \n> OK, original email attached. 1 rolls back all SETs in an aborted\n> transaction. \n> \n> > 2 ignores SETs after transaction aborts, but SETs before\n> > the transaction aborted are honored.\n\nMust I understand this from your previous posting\n(2 says roll back only after transaction aborts,)\nor original posting ?
What I understood was 2 only\nsays that SET fails between a failure and the\nsubsequent ROLLBACK call.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 10:41:19 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > Bruce Momjian wrote:\n> > > >\n> > > > > > >\n> > > > > > > I voted not only ? but also 2 and 3.\n> > > > > > > And haven't I asked twice or so if it's a vote ?\n> > > > > >\n> > > > > > Yes, it is a vote, and now that we see how everyone feels, we can\n> > > > > > decide what to do.\n> > > > > >\n> > > > > > Hiroshi, you can't vote for 2, 3, and ?.\n> > > > >\n> > > > > Why ?\n> > > > > I don't think the items are exclusive.\n> > > >\n> > > > Well, 2 says roll back only after transaction aborts,\n> > >\n> > > Sorry for my poor understanding.\n> > > Isn't it 1 ?\n> > \n> > OK, original email attached. 1 rolls back all SETs in an aborted\n> > transaction. \n> \n> > 2 ignores SETs after transaction aborts, but SETs before\n> > the transaction aborted are honored.\n> \n> Must I understand this from your previous posting\n> (2 says roll back only after transaction aborts,)\n> or original posting ? What I understood was 2 only\n> says that SET fails between a failure and the\n> subsequent ROLLBACK call.\n\nYes, 2 says that SET fails between failure query and COMMIT/ROLLBACK\ncall, which is current behavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:46:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Michael Loftis wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > \n> > >What's wrong with it ? The insert command after *rollback*\n> > >would fail. It seems the right thing to me.
Otherwise\n> > >the insert command would try to append the data of the\n> > >table t1 to itself. The insert command is for copying\n> > >schema1.t1 to foo.t1 in case the previous create schema\n> > >command succeeded.\n> > >\n> > Exactly, this example shows exactly why SETs should be part of the\n> > transaction and roll back. Heck the insert may not even fail after all\n> > anyway and insert into the wrong schema. If the insert depends on the\n> > schema create succeeding it should be in the same transaction. (IE it\n> > would get rolled back or not happen at all)\n> \n> Where's the restriction that all objects in a schema\n> must be created in a transaction ? Each user has his\n> reason and would need various kinds of command call sequences.\n> What I've mainly insisted is what to do with errors is\n> users' responsibility but I've never seen the agreement\n> for it. So my current understanding is you all\n> are thinking what to do with errors is system's\n> responsibility. Then no matter how users call commands\n> the dbms must behave appropriately, mustn't it ?\n\nHiroshi, we need a psql solution too. People are feeding query files\ninto psql all the time and we should have an appropriate behavior for\nthem.\n\nI now understand your point that from an ODBC perspective, you may not\nwant SETs rolled back and you would rather ODBC handle what to do with\nSETs. Not sure I like pushing that job off to the application\nprogrammer, but I think I see your point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 21:59:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > > \n> > > Hiroshi Inoue wrote:\n> > > > Bruce Momjian wrote:\n> > > > >\n> > > > > > > >\n> > > > > > > > I voted not only ? but also 2 and 3.\n> > > > > > > > And haven't I asked twice or so if it's a vote ?\n> > > > > > >\n> > > > > > > Yes, it is a vote, and now that we see how everyone feels, we can\n> > > > > > > decide what to do.\n> > > > > > >\n> > > > > > > Hiroshi, you can't vote for 2, 3, and ?.\n> > > > > >\n> > > > > > Why ?\n> > > > > > I don't think the items are exclusive.\n> > > > >\n> > > > > Well, 2 says roll back only after transaction aborts,\n> > > >\n> > > > Sorry for my poor understanding.\n> > > > Isn't it 1 ?\n> > > \n> > > OK, original email attached. 1 rolls back all SETs in an aborted\n> > > transaction. \n> > \n> > > 2 ignores SETs after transaction aborts, but SETs before\n> > > the transaction aborted are honored.\n> > \n> > Must I understand this from your previous posting\n> > (2 says roll back only after transaction aborts,)\n> > or original posting ? What I understood was 2 only\n> > says that SET fails between a failure and the\n> > subsequent ROLLBACK call.\n> \n> Yes, 2 says that SET fails between failure query and COMMIT/ROLLBACK\n> call, which is current behavior.\n\n What about a SET variable that controls the behaviour of\n SET variables :-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.
#\n#================================================== JanWieck@Yahoo.com #\n\n", "msg_date": "Wed, 24 Apr 2002 22:00:41 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > Sure should it! You gave an example for the need to roll\n> > back, because\n> > otherwise you would end up with an invalid\n> > search path \"foo\".\n>\n> What's wrong with it ? The insert command after *rollback*\n> would fail. It seems the right thing to me. Otherwise\n> the insert command would try to append the data of the\n> table t1 to itself. The insert command is for copying\n> schema1.t1 to foo.t1 in case the previous create schema\n> command succeeded.\n\n Wrong about your entire example is that the rollback is sheer\n wrong placed to make up your case ;-p\n\n There is absolutely no need to put the insert outside of the\n transaction that is intended to copy schema1.t1 to foo.t1.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Wed, 24 Apr 2002 22:06:10 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "\n\nHiroshi Inoue wrote:\n\n>Michael Loftis wrote:\n>\n>>Hiroshi Inoue wrote:\n>>\n>>>What's wrong with it ? The insert command after *rollback*\n>>>would fail. It seems the right thing to me. Otherwise\n>>>the insert command would try to append the data of the\n>>>table t1 to itself.
The insert command is for copying\n>>>schema1.t1 to foo.t1 in case the previous create schema\n>>>command succeeded.\n>>>\n>>Exactly, this example shows exactly why SETs should be part of the\n>>transaction and roll back. Heck the insert may not even fail after all\n>>anyway and insert into the wrong schema. If the insert depends on the\n>>schema create succeeding it should be in the same transaction. (IE it\n>>would get rolled back or not happen at all)\n>>\n>\n>Where's the restriction that all objects in a schema\n>must be created in a transaction ? Each user has his\n>reason and would need various kinds of command call sequences.\n>What I've mainly insisted is what to do with errors is\n>users' responsibility but I've never seen the agreement\n>for it. So my current understanding is you all\n>are thinking what to do with errors is system's\n>responsibility. Then no matter how users call commands\n>the dbms must behave appropriately, mustn't it ?\n>\nIMHO as a user and developer it's more important to behave consistently.\nA rollback should cause everything inside of a transaction block to\nrollback. If you need to keep something then it should either be done in\nits own transaction, or outside of an explicit transaction entirely.\n\nThere is no restriction. The system is handling an error in the way\ninstructed by the user either ROLLBACK or COMMIT. If you COMMIT with\nerrors, it's your problem. But if you ask the system to ROLLBACK it's\nthe user's expectation that the DBMS will ROLLBACK. Not ROLLBACK this and\nthat, but leave another thing alone. You say BEGIN ... COMMIT you expect\na COMMIT, you say BEGIN ... ROLLBACK you expect a ROLLBACK. You say\nBEGIN ... END the DBMS should 'do the right thing' (IE COMMIT if\nsuccessful, ROLLBACK if not). That's the behaviour I'd expect from ANY\ntransactional system.\n\nThe user will (and rightfully so) expect a ROLLBACK to do just that for\neverything.
Yes this will break the way things work currently, but on\nthe whole, and going forward, it makes the system consistent. Right now\nwe roll back SELECTs, CREATEs, UPDATEs, etc., but not SETs (or at least\nfrom what I can tell that's what we do.)\n\nI understand what you're saying Hiroshi-san, but really, it's a very\nweak reason. If you (as a programmer/developer) do something like in\nyour earlier example (perform an insert after ROLLBACK) then you know an\nerror occurred, and it's your own fault for inserting into the wrong\ntable outside of the transaction.\n\n\n", "msg_date": "Wed, 24 Apr 2002 19:06:12 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > Must I understand this from your previous posting\n> > (2 says roll back only after transaction aborts,)\n> > or original posting ? What I understood was 2 only\n> > says that SET fails between a failure and the\n> > subsequent ROLLBACK call.\n> \n> Yes, 2 says that SET fails between failure query and COMMIT/ROLLBACK\n> call, which is current behavior.\n\nOh I see. It was my mistake to have participated in this vote.\nI'm not qualified from the first because I wasn't able to\nunderstand your vote list. \n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 11:08:51 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\n\nBruce Momjian wrote:\n\n>Hiroshi, we need a psql solution too. People are feeding query files\n>into psql all the time and we should have an appropriate behavior for\n>them.\n>\n>I now understand your point that from an ODBC perspective, you may not\n>want SETs rolled back and you would rather ODBC handle what to do with\n>SETs.
Not sure I like pushing that job off to the application\n>programmer, but I think I see your point.\n>\n\nAhhh Hiroshi is talking from the aspect of ODBC? Well, that's an ODBC \nissue, should be handled by the ODBC driver. Compliance with ODBC spec \n(or non-compliance) is not the issue of PostgreSQL proper. That's the \nissue of the ODBC driver and its maintainers (sorry if I'm sounding \nlike a bastard but heh).\n\nIf we start catering to all the different driver layers then we'll end \nup with a huge mess. What we're 'catering' to is the SQLxx specs, and \nthe expectations of a user when running and developing programs, am I right?\n\n\n", "msg_date": "Wed, 24 Apr 2002 19:08:56 -0700", "msg_from": "Michael Loftis <mloftis@wgops.com>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Michael Loftis wrote:\n> \n> Hiroshi Inoue wrote:\n>\n> >Where's the restriction that all objects in a schema\n> >must be created in a transaction ? Each user has his\n> >reason and would need various kinds of command call sequences.\n> >What I've mainly insisted is what to do with errors is\n> >users' responsibility but I've never seen the agreement\n> >for it. So my current understanding is you all\n> >are thinking what to do with errors is system's\n> >responsibility.
Then no matter how users call commands\n> >the dbms must behave appropriately, mustn't it ?\n> >\n> IMHO as a user and developer it's more important to behave consistently.\n> A rollback should cause everything inside of a transaction block to\n> rollback.\n\nWhere does the *should* come from ?\nThe standard says that changes to the database should\nbe put back but doesn't say everything should be put back.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 11:20:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Hiroshi Inoue wrote:\n> > > Sure should it! You gave an example for the need to roll\n> > > back, because\n> > > otherwise you would end up with an invalid\n> > > search path \"foo\".\n> >\n> > What's wrong with it ? The insert command after *rollback*\n> > would fail. It seems the right thing to me. Otherwise\n> > the insert command would try to append the data of the\n> > table t1 to itself. The insert command is for copying\n> > schema1.t1 to foo.t1 in case the previous create schema\n> > command succeeded.\n> \n> Wrong about your entire example is that the rollback is sheer\n> wrong placed to make up your case ;-p\n\nIs this issue on the wrong(? not preferable) sequence\nof calls ?\nPlease don't miss the point.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 11:37:36 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> Hiroshi, we need a psql solution too. People are feeding query files\n> into psql all the time and we should have an appropriate behavior for\n> them.\n\nWhat are you expecting for psql e.g. the following\nwrong(?)
example ?\n\n\t[The current schema is schema1]\n begin;\n create schema foo;\n set search_path = foo;\n create table t1 (....); [error occurs]\n commit;\n insert into t1 select * from schema1.t1;\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 11:52:44 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > Hiroshi, we need a psql solution too. People are feeding query files\n> > into psql all the time and we should have an appropriate behavior for\n> > them.\n> \n> What are you expecting for psql e.g. the following\n> wrong(?) example ?\n> \n> \t[The current schema is schema1]\n> begin;\n> create schema foo;\n> set search_path = foo;\n> create table t1 (....); [error occurs]\n> commit;\n> insert into t1 select * from schema1.t1;\n\nI am expecting the INSERT will use the search_path value that existed\nbefore the error transaction began.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 22:53:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Michael Loftis wrote:\n> \n> Bruce Momjian wrote:\n> \n> >Hiroshi, we need a psql solution too. People are feeding query files\n> >into psql all the time and we should have an appropriate behavior for\n> >them.\n> >\n> >I now understand your point that from an ODBC perspective, you may not\n> >want SETs rolled back and you would rather ODBC handle what to do with\n> >SETs.
Not sure I like pushing that job off to the application\n> >programmer, but I think I see your point.\n> >\n> \n> Ahhh Hiroshi is talking from the aspect of ODBC? Well, that's an ODBC\n> issue, should be handled by the ODBC driver. \n\nNo. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 25 Apr 2002 11:54:36 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > What are you expecting for psql e.g. the following\n> > wrong(?) example ?\n> >\n> > [The current schema is schema1]\n> > begin;\n> > create schema foo;\n> > set search_path = foo;\n> > create table t1 (....); [error occurs]\n> > commit;\n> > insert into t1 select * from schema1.t1;\n> \n> I am expecting the INSERT will use the search_path value that existed\n> before the error transaction began.\n> \n\nSo you see foo.t1 which is a copy of schema1.t1\nif all were successful and you may be able to see\nthe doubled schema1.t1 in case of errors.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 25 Apr 2002 12:00:28 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > >\n> > > What are you expecting for psql e.g. the following\n> > > wrong(?)
example ?\n> > >\n> > > [The curren schema is schema1]\n> > > begin;\n> > > create schema foo;\n> > > set search_path = foo;\n> > > create table t1 (....); [error occurs]\n> > > commit;\n> > > insert into t1 select * from schema1.t1;\n> > \n> > I am expecting the INSERT will use the search_path value that existed\n> > before the error transaction began.\n> > \n> \n> So you see foo.t1 which is a copy of schema1.t1\n> if all were successful and you may be able to see\n> the doubled schema1.t1 in case of errors.\n\nYes, I think that is how it would behave. If you don't roll back 'set\nsearch_path', you are pointing to a non-existant schema.\n\nProbably the proper thing here would be to have the INSERT in the\ntransaction too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 23:03:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > >\n> > > > What are you expecting for psql e.g. the following\n> > > > wrong(?) example ?\n> > > >\n> > > > [The curren schema is schema1]\n> > > > begin;\n> > > > create schema foo;\n> > > > set search_path = foo;\n> > > > create table t1 (....); [error occurs]\n> > > > commit;\n> > > > insert into t1 select * from schema1.t1;\n> > >\n> > > I am expecting the INSERT will use the search_path value that existed\n> > > before the error transaction began.\n> > >\n> >\n> > So you see foo.t1 which is a copy of schema1.t1\n> > if all were successful and you may be able to see\n> > the doubled schema1.t1 in case of errors.\n> \n> Yes, I think that is how it would behave. If you don't roll back 'set\n> search_path', you are pointing to a non-existant schema.\n\nOK I see your standpoint. 
If Tom agrees with Bruce I don't\nobject any more.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 25 Apr 2002 12:11:51 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Vote on SET in aborted transaction" }, { "msg_contents": "> What about a SET variable that controls the behaviour of\n> SET variables :-)\n\nOr two commands for the same thing:\n- a SET command that behaves as it does now\n- a TSET command that is transaction-aware\n\nOuch... :-)\nSander\n\n\n", "msg_date": "Thu, 25 Apr 2002 14:05:43 +0200", "msg_from": "\"Sander Steffann\" <sander@steffann.nl>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\nJust curious here, but has anyone taken the time to see how others are\ndoing this? For instance, if we go with 1, are going against how everyone\nelse handles it? IMHO, its not a popularity contest ...\n\nPersonally, I do agree with #1, but I'm curious as to how those coming\nfrom other DBMS are going to have problems if this isn't what they are\nexpecting ...\n\n\nOn Wed, 24 Apr 2002, Bruce Momjian wrote:\n\n>\n> OK, the votes are in:\n>\n> \t#1\n> \tLamar Owen\n> \tJan Wieck\n> \tTom Lane\n> \tBruce Momjian\n> \tJoe Conway\n> \tCurt Sampson\n> \tMichael Loftis\n> \tVince Vielhaber\n> \tSander Steffann\n>\n> \t#2\n> \tBradley McLean\n>\n>\n>\n> \t#3\n>\n> \t#?\n> \tThomas Lockhart\n> \tHiroshi Inoue\n>\n> Looks like #1 is the clear winner.\n>\n> ---------------------------------------------------------------------------\n>\n> Bruce Momjian wrote:\n> > OK, would people please vote on how to handle SET in an aborted\n> > transaction? 
This vote will allow us to resolve the issue and move\n> > forward if needed.\n> >\n> > In the case of:\n> >\n> > \tSET x=1;\n> > \tBEGIN;\n> > \tSET x=2;\n> > \tquery_that_aborts_transaction;\n> > \tSET x=3;\n> > \tCOMMIT;\n> >\n> > at the end, should 'x' equal:\n> >\n> > \t1 - All SETs are rolled back in aborted transaction\n> > \t2 - SETs are ignored after transaction abort\n> > \t3 - All SETs are honored in aborted transaction\n> > \t? - Have SETs vary in behavior depending on variable\n> >\n> > Our current behavior is 2.\n> >\n> > Please vote and I will tally the results.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 25 Apr 2002 09:52:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Sander Steffann wrote:\n> > What about a SET variable that controls the behaviour of\n> > SET variables :-)\n>\n> Or two commands for the same thing:\n> - a SET command that behaves as it does now\n> - a TSET command that is transaction-aware\n>\n> Ouch... :-)\n> Sander\n\n Naw, that's far too easy. 
I got it now, a\n\n CONFIGURE variable ON ROLLBACK <action>\n\n action: SET DEFAULT (read again from .conf)\n | SET 'value' (might fail, fallback to .conf)\n | NO ACTION (ignore rollback)\n | ROLLBACK (return to value before transaction)\n\n Also, we should make all these settings DB dependent and be\n able to specify the configure settings in the .conf file, so\n that two databases running under the same postmaster behave\n completely differently, just to make the confusion perfect for\n every client.\n\n And for everyone who didn't get it, this was sarcasm!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Thu, 25 Apr 2002 10:18:26 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> \n> Just curious here, but has anyone taken the time to see how others are\n> doing this? For instance, if we go with 1, are going against how everyone\n> else handles it? IMHO, its not a popularity contest ...\n\nYes, good point. I don't know that they use SET, but if they do, we\nshould find out how they handle it, though I doubt they have thought\nthrough their SET handling as well as we have. My guess is that they do\n3, honor all SETs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 11:50:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Marc G. 
Fournier wrote:\n> >\n> > Just curious here, but has anyone taken the time to see how others are\n> > doing this? For instance, if we go with 1, are going against how everyone\n> > else handles it? IMHO, its not a popularity contest ...\n> \n> Yes, good point. I don't know that they use SET, but if they do, we\n> should find out how they handle it, though I doubt they have thought\n> through their SET handling as well as we have. My guess is that they do\n> 3, honor all SETs.\n\nConnected to:\nOracle8 Enterprise Edition Release 8.0.5.0.0 - Production\nPL/SQL Release 8.0.5.0.0 - Production\n\nSQL> SELECT TO_CHAR(SYSDATE) FROM DUAL;\n\nTO_CHAR(S\n---------\n25-APR-02\n\nSQL> COMMIT;\n\nCommit complete.\n\nSQL> ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY MM DD';\n\nSession altered.\n\nSQL> ROLLBACK;\n\nRollback complete.\n\nSQL> SELECT TO_CHAR(SYSDATE) FROM DUAL;\n\nTO_CHAR(SY\n----------\n2002 04 25\n\nOf course, with Oracle, the only operations which can be rolled back are\nINSERTs, UPDATEs, and DELETEs (DML statements). A long time ago, on a\nplanet far, far away, I argued that PostgreSQL should follow Oracle's\nbehavior in this regard. I stand corrected. The ability to rollback DROP\nTABLE is a very nice feature Oracle doesn't have, and to remain\nconsistent, I agree with all of those that have voted for #1.\n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Thu, 25 Apr 2002 13:16:51 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Mike Mascari wrote:\n\n> Bruce Momjian wrote:\n> >\n> > Marc G. Fournier wrote:\n> > >\n> > > Just curious here, but has anyone taken the time to see how others are\n> > > doing this? For instance, if we go with 1, are going against how everyone\n> > > else handles it? IMHO, its not a popularity contest ...\n> >\n> > Yes, good point. 
I don't know that they use SET, but if they do, we\n> > should find out how they handle it, though I doubt they have thought\n> > through their SET handling as well as we have. My guess is that they do\n> > 3, honor all SETs.\n>\n> Connected to:\n> Oracle8 Enterprise Edition Release 8.0.5.0.0 - Production\n> PL/SQL Release 8.0.5.0.0 - Production\n>\n> SQL> SELECT TO_CHAR(SYSDATE) FROM DUAL;\n>\n> TO_CHAR(S\n> ---------\n> 25-APR-02\n>\n> SQL> COMMIT;\n>\n> Commit complete.\n>\n> SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY MM DD';\n>\n> Session altered.\n>\n> SQL> ROLLBACK;\n>\n> Rollback complete.\n>\n> SQL> SELECT TO_CHAR(SYSDATE) FROM DUAL;\n>\n> TO_CHAR(SY\n> ----------\n> 2002 04 25\n>\n> Of course, with Oracle, the only operations which can be rolled back are\n> INSERTs, UPDATEs, and DELETEs (DML statements). A long time ago, on a\n> planet far, far away, I argued that PostgreSQL should follow Oracle's\n> behavior in this regard. I stand corrected. The ability to rollback DROP\n> TABLE is a very nice feature Oracle doesn't have, and to remain\n> consistent, I agree with all of those that have voted for #1.\n\nOkay, based on this, I'm pseudo-against ... I think, for reasons of\nreducing headaches for ppl posting, there should be some sort of 'SET\noracle_quirks' operation that would allow for those with largish legacy\napps trying to migrate over to do so without having to check for \"odd\"\nbehaviours like this ...\n\nOr maybe \"SET set_rollbacks = oracle\"? with default being #1 as discussed\n...\n\n\n", "msg_date": "Thu, 25 Apr 2002 14:59:43 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> Okay, based on this, I'm pseudo-against ... 
I think, for reasons of\n> reducing headaches for ppl posting, there should be some sort of 'SET\n> oracle_quirks' operation that would allow for those with largish legacy\n> apps trying to migrate over to do so without having to check for \"odd\"\n> behaviours like this ...\n> \n> Or maybe \"SET set_rollbacks = oracle\"? with default being #1 as discussed\n\nYes, I understand. However, seeing that we have gone 6 years with this\nnever being an issue, I think we should just shoot for #1 and keep open\nto the idea of having a compatibility mode, and the possibility that #1\nmay not fit for all SET variables and we may have to do some special\ncases for those.\n\nMy guess is that we should implement #1 and see what feedback we get in\n7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 14:26:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > Okay, based on this, I'm pseudo-against ... I think, for reasons of\n> > reducing headaches for ppl posting, there should be some sort of 'SET\n> > oracle_quirks' operation that would allow for those with largish legacy\n> > apps trying to migrate over to do so without having to check for \"odd\"\n> > behaviours like this ...\n> >\n> > Or maybe \"SET set_rollbacks = oracle\"? with default being #1 as discussed\n>\n> Yes, I understand. 
However, seeing that we have gone 6 years with this\n> never being an issue, I think we should just shoot for #1 and keep open\n> to the idea of having a compatibility mode, and the possibility that #1\n> may not fit for all SET variables and we may have to do some special\n> cases for those.\n>\n> My guess is that we should implement #1 and see what feedback we get in\n> 7.3.\n\nIMHO, it hasn't been thought out well enough to be implemented yet ... the\noptions have been, but which to implement haven't ... right now, #1 is\nproposing to implement something that goes against what *at least* one of\nDBMS does ... so now you have programmers coming from that environment\nexpecting one thing to happen, when a totally different thing results ...\n\n\n", "msg_date": "Thu, 25 Apr 2002 16:01:21 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> > My guess is that we should implement #1 and see what feedback we get in\n> > 7.3.\n> \n> IMHO, it hasn't been thought out well enough to be implemented yet ... the\n> options have been, but which to implement haven't ... right now, #1 is\n> proposing to implement something that goes against what *at least* one of\n> DBMS does ... so now you have programmers coming from that environment\n> expecting one thing to happen, when a totally different thing results ...\n\nBut, they don't expect our current behavior either (which is really\nweird). At least I haven't seen anyone complaining about our current\nweird behavior, and we are improving it, at least as our users request\nit.\n\nIn fact, Oracle doesn't implement rollback for DROP TABLE, and we\nclearly wanted that feature, so do we ignore rollback for SET too?\n\nI guess I don't see it as a killer if we can do better than Oracle, or\nat least most of our users (including you) think it is better than\nOracle. 
If someone wants Oracle behavior after we do #1, we can add it,\nright?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 16:32:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > > My guess is that we should implement #1 and see what feedback we get in\n> > > 7.3.\n> >\n> > IMHO, it hasn't been thought out well enough to be implemented yet ... the\n> > options have been, but which to implement haven't ... right now, #1 is\n> > proposing to implement something that goes against what *at least* one of\n> > DBMS does ... so now you have programmers coming from that environment\n> > expecting one thing to happen, when a totally different thing results ...\n>\n> But, they don't expect our current behavior either (which is really\n> weird). At least I haven't seen anyone complaining about our current\n> weird behavior, and we are improving it, at least as our users request\n> it.\n>\n> In fact, Oracle doesn't implement rollback for DROP TABLE, and we\n> clearly wanted that feature, so do we ignore rollback for SET too?\n>\n> I guess I don't see it as a killer if we can do better than Oracle, or\n> at least most of our users (including you) think it is better than\n> Oracle. If someone wants Oracle behavior after we do #1, we can add it,\n> right?\n\nI've often wondered why the \"but that's how the other RDBMS is doing\nit\" is only used when convenient. Case in point is the issue (that's\nbeen resolved) with the insert into foo(foo.bar) ... where every one\nI checked accepted it, but that wasn't a good enough reason for us to\nsupport it. 
Until the fact that applications that were using that\nsyntax were causing PostgreSQL not to be used was the issue resolved.\nNow I'm seeing the \"but that's the way Oracle does it\" excuse being\nused to justify a change. Can we try for some consistency?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 25 Apr 2002 17:04:30 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\nMarc is suggesting we may want to match Oracle somehow.\n\nI just want to have our SET work in a sane manner.\n\n---------------------------------------------------------------------------\n\nVince Vielhaber wrote:\n> On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > > > My guess is that we should implement #1 and see what feedback we get in\n> > > > 7.3.\n> > >\n> > > IMHO, it hasn't been thought out well enough to be implemented yet ... the\n> > > options have been, but which to implement haven't ... right now, #1 is\n> > > proposing to implement something that goes against what *at least* one of\n> > > DBMS does ... so now you have programmers coming from that environment\n> > > expecting one thing to happen, when a totally different thing results ...\n> >\n> > But, they don't expect our current behavior either (which is really\n> > weird). 
At least I haven't seen anyone complaining about our current\n> > weird behavior, and we are improving it, at least as our users request\n> > it.\n> >\n> > In fact, Oracle doesn't implement rollback for DROP TABLE, and we\n> > clearly wanted that feature, so do we ignore rollback for SET too?\n> >\n> > I guess I don't see it as a killer if we can do better than Oracle, or\n> > at least most of our users (including you) think it is better than\n> > Oracle. If someone wants Oracle behavior after we do #1, we can add it,\n> > right?\n> \n> I've often wondered why the \"but that's how the other RDBMS is doing\n> it\" is only used when convenient. Case in point is the issue (that's\n> been resolved) with the insert into foo(foo.bar) ... where every one\n> I checked accepted it, but that wasn't a good enough reason for us to\n> support it. Until the fact that applications that were using that\n> syntax was causing PostgreSQL not to be used was the issue resolved.\n> Now I'm seeing the \"but that's the way Oracle does it\" excuse being\n> used to justify a change. Can we try for some consistancy?\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 17:25:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n>\n> Marc is suggesting we may want to match Oracle somehow.\n>\n> I just want to have our SET work on a sane manner.\n\nAs do I. But to Marc's suggestion, we discussed an oracle compatibility\nfactor in the past and it was dismissed. I seem to recall someone even\nvolunteering to write it for us.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 25 Apr 2002 17:42:33 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n>\n> Marc is suggesting we may want to match Oracle somehow.\n>\n> I just want to have our SET work on a sane manner.\n\nMyself, I wonder why Oracle went the route they went ... does anyone have\naccess to a Sybase / Informix system, to confirm how they do it? Is\nOracle the 'odd man out', or are we going to be that? *Adding* something\n(ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\nbut changing the behaviour is a totally different ...\n\n> ---------------------------------------------------------------------------\n>\n> Vince Vielhaber wrote:\n> > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> >\n> > > Marc G. 
Fournier wrote:\n> > > > > My guess is that we should implement #1 and see what feedback we get in\n> > > > > 7.3.\n> > > >\n> > > > IMHO, it hasn't been thought out well enough to be implemented yet ... the\n> > > > options have been, but which to implement haven't ... right now, #1 is\n> > > > proposing to implement something that goes against what *at least* one of\n> > > > DBMS does ... so now you have programmers coming from that environment\n> > > > expecting one thing to happen, when a totally different thing results ...\n> > >\n> > > But, they don't expect our current behavior either (which is really\n> > > weird). At least I haven't seen anyone complaining about our current\n> > > weird behavior, and we are improving it, at least as our users request\n> > > it.\n> > >\n> > > In fact, Oracle doesn't implement rollback for DROP TABLE, and we\n> > > clearly wanted that feature, so do we ignore rollback for SET too?\n> > >\n> > > I guess I don't see it as a killer if we can do better than Oracle, or\n> > > at least most of our users (including you) think it is better than\n> > > Oracle. If someone wants Oracle behavior after we do #1, we can add it,\n> > > right?\n> >\n> > I've often wondered why the \"but that's how the other RDBMS is doing\n> > it\" is only used when convenient. Case in point is the issue (that's\n> > been resolved) with the insert into foo(foo.bar) ... where every one\n> > I checked accepted it, but that wasn't a good enough reason for us to\n> > support it. Until the fact that applications that were using that\n> > syntax was causing PostgreSQL not to be used was the issue resolved.\n> > Now I'm seeing the \"but that's the way Oracle does it\" excuse being\n> > used to justify a change. 
Can we try for some consistancy?\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Thu, 25 Apr 2002 22:56:57 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> \n> >\n> > Marc is suggesting we may want to match Oracle somehow.\n> >\n> > I just want to have our SET work on a sane manner.\n> \n> Myself, I wonder why Oracle went the route they went ... does anyone have\n> access to a Sybase / Informix system, to confirm how they do it? Is\n> Oracle the 'odd man out', or are we going to be that? *Adding* something\n> (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> but changing the behaviour is a totally different ...\n\nYes, let's find out what the others do. I don't see DROP TABLE\nrollbacking as totally different. How is it different from SET?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 22:20:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> \n> >\n> > Marc is suggesting we may want to match Oracle somehow.\n> >\n> > I just want to have our SET work on a sane manner.\n> \n> As do I. But to Marc's suggestion, we discussed an oracle compatibility\n> factor in the past and it was dismissed. I seem to recall someone even\n> volunteering to write it for us.\n\nYes, doing SET the Oracle way would be part of a much larger project\nthat turns on Oracle compatibility. We can add some comment to the code\nand come back to this area if we start to consider an Oracle mode more\nseriously.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 22:22:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > Marc is suggesting we may want to match Oracle somehow.\n> > >\n> > > I just want to have our SET work on a sane manner.\n> >\n> > Myself, I wonder why Oracle went the route they went ... does anyone have\n> > access to a Sybase / Informix system, to confirm how they do it? Is\n> > Oracle the 'odd man out', or are we going to be that? *Adding* something\n> > (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> > but changing the behaviour is a totally different ...\n>\n> Yes, let's find out what the others do. 
I don't see DROP TABLE\n> rollbacking as totally different. How is it different from SET?\n\nSET currently has an \"accepted behaviour\" with other DBMSs, or, at least,\nwith Oracle, and that is to ignore the rollback ...\n\nDROP TABLE also had an \"accepted behaviour\", and that was to leave it\nDROPed, so \"oops, I screwed up and just lost a complete table as a\nresult\", which, IMHO, isn't particularly good ...\n\nNOTE that I *do* think that #1 is what *should* happen, but there should\nbe some way of turning off that behaviour so that we don't screw up ppl\nexpecting \"Oracles behaviour\" ... I just think that implementing #1\nwithout the 'switch' is implementing a half-measure that is gonna come\nback and bite us ...\n\n", "msg_date": "Fri, 26 Apr 2002 00:31:18 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "At 04:01 PM 4/25/02 -0300, Marc G. Fournier wrote:\n> > My guess is that we should implement #1 and see what feedback we get in\n> > 7.3.\n>\n>IMHO, it hasn't been thought out well enough to be implemented yet ... the\n>options have been, but which to implement haven't ... right now, #1 is\n>proposing to implement something that goes against what *at least* one of\n>DBMS does ... so now you have programmers coming from that environment\n>expecting one thing to happen, when a totally different thing results ...\n\nI don't know about those programmers, but AFAIK when I shift from one DBMS \nto another I expect weird things to happen, because the whole DBMS world is \nfilled with all sorts of \"no standard\" behaviour.\n\nSET XXX doesn't even directly map to Oracle's stuff in the first place. \nSince it looks different, I think the migrator shouldn't be surprised if it \nworks differently. 
They might expect it to work the same, but if it doesn't \nthey'll just go \"OK yet another one of those\".\n\nWhat would be good are \"RDBMS X to Postgresql\" migration docs. I believe \nthere's already an Oracle to Postgresql migration document. So putting all \nthese things there and linking to them would be helpful.\n---\n\nI'm sorry if this has been discussed already:\n\nThere may be some SETs which operate on a different level of the \napplication. We may wish to clearly differentiate them from those that are \ntransactional and can operate in the domain of other SQL statements. Or put \nthose in config files and they never appear in SETs?\n\nCoz some things should not be rolled back. So you guys might come up with a \ndifferent keyword for it.\n\ne.g.\nCONFIG: for non transactional stuff that can appear as SQL statements.\nSET: for stuff that can be transactional.\n\nPractical example: Does doing an enable seqscan affect OTHER db connections \nand transactions as well? If it doesn't then yes it should be \ntransactional, whereas if it does then it shouldn't bother being \ntransactional. And there could well be two cases operating in different \ndomains. e.g. CONFIG globalseqscan=0 and SET seqscan=0.\n\nRegards,\nLink.\n\n", "msg_date": "Fri, 26 Apr 2002 11:48:49 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> > Yes, let's find out what the others do. I don't see DROP TABLE\n> > rollbacking as totally different. 
How is it different from SET?\n> \n> SET currently has an \"accepted behaviour\" with other DBMSs, or, at least,\n> with Oracle, and that is to ignore the rollback ...\n> \n> DROP TABLE also had an \"accepted behaviour\", and that was to leave it\n> DROPed, so \"oops, I screwed up and just lost a complete table as a\n> result\", which, IMHO, isn't particularly good ...\n> \n> NOTE that I *do* think that #1 is what *should* happen, but there should\n> be some way of turning off that behaviour so that we don't screw up ppl\n> expecting \"Oracles behaviour\" ... I just think that implementing #1\n> without the 'switch' is implementing a half-measure that is gonna come\n> back and bite us ...\n\nYes, I understand, and the logical place would be GUC. However, if we\nadd every option someone would ever want to GUC, the number of options\nwould be huge.\n\nWe currently have a problem doing #2. My suggestion is that we go to #1\nand wait to see if anyone actually asks for the option of choosing #3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 00:43:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Fri, 26 Apr 2002, Marc G. Fournier wrote:\n\n> NOTE that I *do* think that #1 is what *should* happen, but there should\n> be some way of turning off that behaviour so that we don't screw up ppl\n> expecting \"Oracles behaviour\" ...\n\nI don't think this follows. If it's only for people's expectations,\nbut we default to #1, their expectations will be violated until\nthey figure out that the option is there. 
After they figure out\nit's there, well, they don't expect it to behave like Oracle any\nmore, so they don't need the switch, right?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 26 Apr 2002 14:36:37 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier writes:\n > Myself, I wonder why Oracle went the route they went ... does anyone have\n > access to a Sybase / Informix system, to confirm how they do it? Is\n > Oracle the 'odd man out', or are we going to be that? *Adding* something\n > (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n > but changing the behaviour is a totally different ..\n\nFWIW, Ingres also doesn't rollback SET. However all its SET\nfunctionality is the sort of stuff you wouldn't assume to rollback:\n\n auto-commit\n connection\n journaling\n logging\n session\n work locations\n maxidle\n\nYou cannot do something sane like modify the date output through SET.\n\nLee.\n", "msg_date": "Fri, 26 Apr 2002 09:46:04 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> Marc G. Fournier wrote:\n> > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> >\n> > >\n> > > Marc is suggesting we may want to match Oracle somehow.\n> > >\n> > > I just want to have our SET work on a sane manner.\n> >\n> > Myself, I wonder why Oracle went the route they went ... does anyone have\n> > access to a Sybase / Informix system, to confirm how they do it? Is\n> > Oracle the 'odd man out', or are we going to be that? *Adding* something\n> > (ie. 
DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> > but changing the behaviour is a totally different ...\n>\n> Yes, let's find out what the others do. I don't see DROP TABLE\n> rollbacking as totally different. How is it different from SET?\n\n Man, you should know that our transactions are truly all or\n nothing. If you discard a transaction, the stamps xmin and\n xmax are ignored. This is a fundamental feature of Postgres,\n and if you're half through a utility command when you ERROR\n out, it guarantees consistency of the catalog. And now you\n want us to violate this concept for compatibility to Oracle's\n misbehaviour? No, thanks!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 26 Apr 2002 09:16:30 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Curt Sampson wrote:\n> On Fri, 26 Apr 2002, Marc G. Fournier wrote:\n>\n> > NOTE that I *do* think that #1 is what *should* happen, but there should\n> > be some way of turning off that behaviour so that we don't screw up ppl\n> > expecting \"Oracles behaviour\" ...\n>\n> I don't think this follows. If it's only for people's expectations,\n> but we default to #1, their expectations will be violated until\n> they figure out that the option is there. After they figure out\n> it's there, well, they don't expect it to behave like Oracle any\n> more, so they don't need the switch, right?\n\n Being able to \"read\" is definitely an advantage in the IT\n world.
Someone just has to do it before finishing the\n implementation based on assumptions :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 26 Apr 2002 09:22:36 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Fri, 26 Apr 2002, Jan Wieck wrote:\n\n> Bruce Momjian wrote:\n> > Marc G. Fournier wrote:\n> > > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > Marc is suggesting we may want to match Oracle somehow.\n> > > >\n> > > > I just want to have our SET work on a sane manner.\n> > >\n> > > Myself, I wonder why Oracle went the route they went ... does anyone have\n> > > access to a Sybase / Informix system, to confirm how they do it? Is\n> > > Oracle the 'odd man out', or are we going to be that? *Adding* something\n> > > (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> > > but changing the behaviour is a totally different ...\n> >\n> > Yes, let's find out what the others do. I don't see DROP TABLE\n> > rollbacking as totally different. How is it different from SET?\n>\n> Man, you should know that our transactions are truly all or\n> nothing. If you discard a transaction, the stamps xmin and\n> xmax are ignored. This is a fundamental feature of Postgres,\n> and if you're half through a utility command when you ERROR\n> out, it guarantees consistency of the catalog. And now you\n> want us to violate this concept for compatibility to Oracle's\n> misbehaviour? No, thanks!\n\nHow does SET relate to xmin/xmax? :)\n\n\n", "msg_date": "Fri, 26 Apr 2002 10:24:12 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Fri, 26 Apr 2002, Jan Wieck wrote:\n>\n> > Bruce Momjian wrote:\n> > > Marc G. Fournier wrote:\n> > > > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> > > >\n> > > > >\n> > > > > Marc is suggesting we may want to match Oracle somehow.\n> > > > >\n> > > > > I just want to have our SET work on a sane manner.\n> > > >\n> > > > Myself, I wonder why Oracle went the route they went ... does anyone have\n> > > > access to a Sybase / Informix system, to confirm how they do it? Is\n> > > > Oracle the 'odd man out', or are we going to be that? *Adding* something\n> > > > (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> > > > but changing the behaviour is a totally different ...\n> > >\n> > > Yes, let's find out what the others do. I don't see DROP TABLE\n> > > rollbacking as totally different. How is it different from SET?\n> >\n> > Man, you should know that our transactions are truly all or\n> > nothing. If you discard a transaction, the stamps xmin and\n> > xmax are ignored. This is a fundamental feature of Postgres,\n> > and if you're half through a utility command when you ERROR\n> > out, it guarantees consistency of the catalog. And now you\n> > want us to violate this concept for compatibility to Oracle's\n> > misbehaviour? No, thanks!\n>\n> How does SET relate to xmin/xmax? :)\n>\n\n SET does not. But Bruce said he doesn't see DROP TABLE beeing\n totally different. That is related to xmin/xmax, isn't it?\n What I pointed out (or wanted to point out) is, that we\n cannot ignore rollback for catalog changes like DROP TABLE.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 26 Apr 2002 09:33:25 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Coz some things should not be rolled back. So you guys might come up with a \n> different keyword for it.\n\n> CONFIG: for non transactional stuff that can appear as SQL statements.\n> SET: for stuff that can be transactional.\n\nPeople keep suggesting this, and I keep asking for a concrete example\nwhere non-rollback is needed, and I keep not getting one. I can't see\nthe value of investing work in creating an alternative behavior when\nwe have no solid example to justify it.\n\nThe \"Oracle compatibility\" argument would have some weight if we were\nmaking any concerted effort to be Oracle-compatible across the board;\nbut I have not detected any enthusiasm for that. Given that it's not\neven the same syntax (\"SET ...\" vs \"ALTER SESSION ...\") I'm not sure\nwhy an Oracle user would expect it to behave exactly the same.\n\n> Practical example: Does doing an enable seqscan affect OTHER db connections \n> and transactions as well?\n\nThere are no SET commands that affect other backends. (There are\nGUC variables with system-wide effects, but we don't allow them to be\nchanged by SET; rollback or not won't affect that.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 10:34:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > Marc G. 
Fournier wrote:\n> > > On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > Marc is suggesting we may want to match Oracle somehow.\n> > > >\n> > > > I just want to have our SET work on a sane manner.\n> > >\n> > > Myself, I wonder why Oracle went the route they went ... does anyone have\n> > > access to a Sybase / Informix system, to confirm how they do it? Is\n> > > Oracle the 'odd man out', or are we going to be that? *Adding* something\n> > > (ie. DROP TABLE rollbacks) that nobody appears to have is one thing ...\n> > > but changing the behaviour is a totally different ...\n> >\n> > Yes, let's find out what the others do. I don't see DROP TABLE\n> > rollbacking as totally different. How is it different from SET?\n> \n> Man, you should know that our transactions are truly all or\n> nothing. If you discard a transaction, the stamps xmin and\n> xmax are ignored. This is a fundamental feature of Postgres,\n> and if you're half through a utility command when you ERROR\n> out, it guarantees consistency of the catalog. And now you\n> want us to violate this concept for compatibility to Oracle's\n> misbehaviour? No, thanks!\n\nSo you do see a difference between SET and DROP TABLE because the second\nis a utility command. OK, I'll buy that, but my point was different.\n\nMy point was that we don't match Oracle for DROP TABLE, so why is\nmatching for SET so important?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 10:46:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Tom Lane wrote:\n> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > Coz some things should not be rolled back. 
So you guys might come up with a \n> > different keyword for it.\n> \n> > CONFIG: for non transactional stuff that can appear as SQL statements.\n> > SET: for stuff that can be transactional.\n> \n> People keep suggesting this, and I keep asking for a concrete example\n> where non-rollback is needed, and I keep not getting one. I can't see\n> the value of investing work in creating an alternative behavior when\n> we have no solid example to justify it.\n> \n> The \"Oracle compatibility\" argument would have some weight if we were\n> making any concerted effort to be Oracle-compatible across the board;\n> but I have not detected any enthusiasm for that. Given that it's not\n> even the same syntax (\"SET ...\" vs \"ALTER SESSION ...\") I'm not sure\n> why an Oracle user would expect it to behave exactly the same.\n\nAgreed. OK, let me summarize.\n\nWe had a vote that was overwhelmingly #1. Marc made a good point that we\nshould see how other databases behave, and we now know that Oracle and\nIngres do #3 (honor all SETs in an aborted transaction). Does anyone\nwant to change their vote from #1 to #3?\n\nSecond, there is the idea of doing #1, and having a GUC variable for #3.\nDoes anyone want that? I think Marc may. Anyone else?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 10:49:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Bruce Momjian wrote:\n> So you do see a difference between SET and DROP TABLE because the second\n> is a utility command.
OK, I'll buy that, but my point was different.\n>\n> My point was that we don't match Oracle for DROP TABLE, so why is\n> matching for SET so important?\n\n Good point, I never understood the compatibility issue on\n this level either. Applications that create/drop tables at\n runtime are IMNSVHO self-modifying code. Thus, I don't\n consider it a big porting issue. Applications that do it\n should be \"replaced\", not ported.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 26 Apr 2002 11:15:08 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> SET does not. But Bruce said he doesn't see DROP TABLE beeing\n> totally different. That is related to xmin/xmax, isn't it?\n\nI think what Bruce meant was \"if rollback is good for DROP TABLE,\nwhy isn't it good for SET\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 11:20:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "At 10:34 AM 4/26/02 -0400, Tom Lane wrote:\n>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > Coz some things should not be rolled back. So you guys might come up \n> with a\n> > different keyword for it.\n>\n> > CONFIG: for non transactional stuff that can appear as SQL statements.\n> > SET: for stuff that can be transactional.\n>\n>People keep suggesting this, and I keep asking for a concrete example\n>where non-rollback is needed, and I keep not getting one. I can't see\n\nSorry, I wasn't clear enough. 
I'm not asking for non-rollback behaviour.\n\nI was trying to say that _IF_ one ever needs to \"SET\" stuff that can't be \nrolled back then it may be better to use some other keyword for that feature.\n\nI'm actually for #1 SET being rolled back and to not have any \"Oracle \nbehaviour\" settings at all. Anything that can't be rolled back shouldn't \nuse SET.\n\n> > Practical example: Does doing an enable seqscan affect OTHER db \n> connections\n> > and transactions as well?\n>\n>There are no SET commands that affect other backends. (There are\n>GUC variables with system-wide effects, but we don't allow them to be\n>changed by SET; rollback or not won't affect that.)\n\nOK.\n\nCheerio,\nLink\n\n\n", "msg_date": "Fri, 26 Apr 2002 23:35:36 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> I was trying to say that _IF_ one ever needs to \"SET\" stuff that can't be \n> rolled back then it may be better to use some other keyword for that feature.\n> I'm actually for #1 SET being rolled back and to not have any \"Oracle \n> behaviour\" settings at all. Anything that can't be rolled back shouldn't \n> use SET.\n\nAh, I understand. Okay, I see a perfect candidate for the other syntax:\n\n\tALTER SESSION SET ...\n\n(or whatever the heck that Oracle syntax was). But I'm still looking\nfor a case of a variable where we actually want this behavior.\n\nThe Ingres examples Lee cited were interesting --- but they all appear\nto me to correspond to system-wide settings, which we do not allow SET\nto modify anyway. (To change system-wide settings, you have to change\npostgresql.conf, and then SIGHUP or restart the postmaster. This\nensures all the backends get the word. 
And indeed this behavior is\noutside transactional control.)\n\nI'm still looking for an example of something that is (a) reasonable\nto set on a per-backend basis, and (b) not reasonable to roll back\nif it's set in a transaction that fails.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 11:49:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "On Fri, 26 Apr 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > > Coz some things should not be rolled back. So you guys might come up with a\n> > > different keyword for it.\n> >\n> > > CONFIG: for non transactional stuff that can appear as SQL statements.\n> > > SET: for stuff that can be transactional.\n> >\n> > People keep suggesting this, and I keep asking for a concrete example\n> > where non-rollback is needed, and I keep not getting one. I can't see\n> > the value of investing work in creating an alternative behavior when\n> > we have no solid example to justify it.\n> >\n> > The \"Oracle compatibility\" argument would have some weight if we were\n> > making any concerted effort to be Oracle-compatible across the board;\n> > but I have not detected any enthusiasm for that. Given that it's not\n> > even the same syntax (\"SET ...\" vs \"ALTER SESSION ...\") I'm not sure\n> > why an Oracle user would expect it to behave exactly the same.\n>\n> Agreed. OK, let me summarize.\n>\n> We had a vote that was overwhemingly #1. Marc made a good point that we\n> should see how other databases behave, and we now know that Oracle and\n> Ingres do #3 (honor all SETs in an aborted transaction). Does anyone\n> want to change their vote from #1 to #3.\n>\n> Second, there is the idea of doing #1, and having a GUC variable for #3.\n> Does anyone want that? I think Marc may. 
Anyone else?\n\nActually, in light of Tom's comment about it not being the same syntax, I\nhave to admit that I missed that syntax difference in the original post :(\nI withdraw my GUC variable desire, unless/until someone does go with an\n'ALTER SESSION' command ...\n\n\n", "msg_date": "Fri, 26 Apr 2002 13:58:06 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. Fournier wrote:\n> > Second, there is the idea of doing #1, and having a GUC variable for #3.\n> > Does anyone want that? I think Marc may. Anyone else?\n> \n> Actually, in light of Tom's comment about it not being the same syntax, I\n> have to admit that I missed that syntax difference in the original post :(\n> I withdraw my GUC variable desire, unless/until someone does go with an\n> 'ALTER SESSION' command ...\n\nIt is good we had the 'compatibility' discussion. It is an important\npoint to always consider.\n\nTODO updated:\n\n\to Abort all SET changes made in an aborted transaction \n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 14:32:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "At 11:49 AM 4/26/02 -0400, Tom Lane wrote:\n>I'm still looking for an example of something that is (a) reasonable\n>to set on a per-backend basis, and (b) not reasonable to roll back\n>if it's set in a transaction that fails.\n\nThe way I see it is if (a) and you don't want it rolled back, you could put \nit in a transaction of its own.\nBEGIN;\nSET backend pref;\nCOMMIT;\n\nAnd if that transaction fails, maybe it should :).\n\nSo other than for performance, the example should also have a reason to \nbelong with other statements in a transaction.\n\nHave a nice weekend,\nLink.\n\n", "msg_date": "Sat, 27 Apr 2002 08:12:23 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "At 11:50 25/04/02 -0400, Bruce Momjian wrote:\n>Marc G. Fournier wrote:\n> >\n> > Just curious here, but has anyone taken the time to see how others are\n> > doing this? For instance, if we go with 1, are going against how everyone\n> > else handles it? IMHO, its not a popularity contest ...\n\nDec/RDB (and I think Oracle as well) ignores transactions. Even \nconfiguration commands (eg. setting date formats etc) ignore transactions.\n\nI think the key thing here is that they view variables as part of a \nprogramming language built on top of the database backend (like plpgsql). \nAs a result they separate variable management from database management.\n\nFWIW, I would be in the '?' 
camp - assuming that means some kind of \nsession-specific setting...failing that, I'd probably start looking for an \ninteractive form of plpgsql, so I could get persistent variables.\n\n\n\n", "msg_date": "Sat, 27 Apr 2002 11:54:17 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "I've been thinking this over and over, and it seems to me, that the way \nSETS in transactions SHOULD work is that they are all rolled back, period, \nwhether the transaction successfully completes OR NOT.\n\nTransactions ensure that either all or none of the DATA in the database is \nchanged. That nature is good. But does it make sense to apply \ntransactional mechanics to SETtings? I don't think it does.\n\nSETtings aren't data operators, so they don't need to be rolled back / \ncommitted so to speak. Their purpose is to affect the way things like the \ndatabase works in a more overreaching sense, not the data underneath it.\n\nFor this reason, I propose that a transaction should \"inherit\" its \nenvironment, and that all changes EXCEPT for those affecting tuples should \nbe rolled back after completion, leaving the environment the way we found \nit. If you need the environment changed, do it OUTSIDE the transaction.\n\nI would argue that the rollback on failure / don't rollback on completion \nis actually the worst possible way to handle this, because, again, this \nisn't about data, it's about environment. And I don't think things inside \na transaction should be mucking with the environment around them when \nthey're done.\n\nBut that's just my opinion, I could be wrong.
Scott Marlowe\n\n", "msg_date": "Mon, 29 Apr 2002 09:09:54 -0600 (MDT)", "msg_from": "Scott Marlowe <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "> I've been thinking this over and over, and it seems to me, that the way\n> SETS in transactions SHOULD work is that they are all rolled back, period,\n> whether the transaction successfully completes OR NOT.\n\nVery interesting! This is a *consistent* use of SET which allows\ntransactions to be constructed as self-contained units without\nside-effects on subsequent transactions. Beautifully powerful.\n\n - Thomas\n\nI've got some other thoughts on features for other aspects of schemas\nand table and query properties, but this proposal for SET behavior\nstands on its own so I'll hold off on muddying the discussion.\n", "msg_date": "Mon, 29 Apr 2002 08:29:56 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Scott Marlowe <scott.marlowe@ihs.com> writes:\n> I've been thinking this over and over, and it seems to me, that the way \n> SETS in transactions SHOULD work is that they are all rolled back, period, \n> whether the transaction successfully completes OR NOT.\n\nThis would make it impossible for SET to have any persistent effect\nat all. (Every SQL command is inside a transaction --- an\nimplicitly-established one if necessary, but there is one.)\n\nIt might well be useful to have some kind of LOCAL SET command that\nbehaves the way you describe (effects good only for current transaction\nblock), but I don't think it follows that that should be the only\nbehavior available.\n\nWhat would you expect if LOCAL SET were followed by SET on the same\nvariable in the same transaction?
Presumably the LOCAL SET would then\nbe nullified; or is this an error condition?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 11:30:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Hannu Krosing wrote:\n> On Mon, 2002-04-29 at 17:09, Scott Marlowe wrote:\n> > For this reason, I propose that a transaction should \"inherit\" its \n> > environment, and that all changes EXCEPT for those affecting tuples should \n> > be rolled back after completion, leaving the environment the way we found \n> > it. If you need the environment changed, do it OUTSIDE the transaction.\n> \n> Unfortunately there is no such time in postgresql where commands are\n> done outside transaction.\n> \n> If you don't issue BEGIN; then each command is implicitly run in its own\n> transaction. \n> \n> Rolling each command back unless it is in implicit transaction would\n> really confuse the user.\n\nAgreed, very non-intuitive. And can you imagine how many applications\nwe would break.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 11:33:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "...\n> This would make it impossible for SET to have any persistent effect\n> at all. 
(Every SQL command is inside a transaction --- an\n> implicitly-established one if necessary, but there is one.)\n\nOf course the behavior would need to be defined from the user's\nviewpoint, not from a literal description of how the internals work.\nThere *is* a difference from a user's PoV between explicit transactions\nand single queries, no matter how that is implemented in the PostgreSQL\nbackend...\n\nLet's not let trivial English semantics divert the discussion please.\n\n - Thomas\n", "msg_date": "Mon, 29 Apr 2002 08:44:26 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "...\n> Agreed, very non-intuitive. And can you imagine how many applications\n> we would break.\n\nWhat is non-intuitive about it? What it *does* do is free the programmer\nfrom worrying about side effects which *do* break applications.\n\nRather than dismissing this out of hand, try to look at what it *does*\nenable. It allows developers to tune specific queries without having to\nrestore values afterwards. Values or settings which may change from\nversion to version, so end up embedding time bombs into applications.\n\nAnd the number of current applications \"broken\"? None, as a starting\npoint ;)\n\n - Thomas\n", "msg_date": "Mon, 29 Apr 2002 08:51:34 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Perhaps we could do \n> SET SET TO LOCAL TO TRANSACTION;\n> Which would affect itself and all subsequent SET commands up to \n> SET SET TO GLOBAL;\n> or end of transaction.\n\nThis makes my head hurt.
If I do\n\n\tSET foo TO bar;\n\tbegin;\n\tSET SET TO GLOBAL;\n\tSET foo TO baz;\n\tSET SET TO LOCAL TO TRANSACTION;\n\tend;\n\n(assume no errors) what is the post-transaction state of foo?\n\nWhat about this case?\n\n\tSET foo TO bar;\n\tbegin;\n\tSET SET TO GLOBAL;\n\tSET foo TO baz;\n\tSET SET TO LOCAL TO TRANSACTION;\n\tSET foo TO quux;\n\tend;\n\nOf course this last case also exists with my idea of a LOCAL SET\ncommand,\n\n\tSET foo TO bar;\n\tbegin;\n\tSET foo TO baz;\n\tLOCAL SET foo TO quux;\n\t-- presumably SHOW foo will show quux here\n\tend;\n\t-- does SHOW foo now show bar, or baz?\n\nArguably you'd need to keep track of up to three values of a SET\nvariable to make this work --- the permanent (pre-transaction) value,\nto roll back to if error; the SET value, which will become permanent\nif we commit; and the LOCAL SET value, which may mask the pending\npermanent value. This seems needlessly complex though. Could we get\naway with treating the above case as an error?\n\nIn any case I find a LOCAL SET command more reasonable than making\nSET's effects depend on the value of a SETtable setting. There is\ncircular logic there. If I do\n\n\tbegin;\n\tSET SET TO LOCAL TO TRANSACTION;\n\tend;\n\nwhat is the post-transaction behavior of SET? And if you say LOCAL,\nhow do you justify it? Why wouldn't the effects of this SET be local?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 11:53:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Let's not let trivial english semantics divert the discussion please.\n\nIt's hardly a trivial point, seeing that transactions are such a\nfundamental aspect of the system. 
The statements that we have now that\ndepend on being-in-a-transaction-block-or-not (eg, VACUUM) are ugly\nkluges IMHO.\n\nLet me give you another reason why having only local SET would be a bad\nidea: how are you going to issue a SET with any persistent effect when\nworking through an interface like JDBC that wraps every command you give\nin a BEGIN/END block? We have also talked about modifying the backend's\nbehavior to act like BEGIN is issued implicitly as soon as you execute\nany command, so that explicit COMMIT is always needed (at least some\npeople think this is necessary for SQL spec compliance). Either one of\nthese are going to pose severe problems for the user-friendliness of SET\nif it only comes in a local flavor.\n\nI can certainly think of uses for a local-effects flavor of SET.\nBut I don't want that to be the only flavor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 12:06:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Rather than dismissing this out of hand, try to look at what it *does*\n> enable. It allows developers to tune specific queries without having to\n> restore values afterwards. Values or settings which may change from\n> version to version, so end up embedding time bombs into applications.\n\nI think it's a great idea. I just want it to be a different syntax from\nthe existing SET, so as not to break existing applications that expect\nSET to be persistent. It seems to me that marking such a command with\na new syntax is reasonable from a user-friendliness point of view too:\nif you write \"LOCAL SET foo\" or some similar syntax, it is obvious to\nevery onlooker what your intentions are. 
If we redefine \"SET\" to have\ncontext-dependent semantics, I think we are just creating a recipe for\nconfusion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 12:20:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "On Mon, 2002-04-29 at 17:09, Scott Marlowe wrote:\n> For this reason, I propose that a transaction should \"inherit\" its \n> environment, and that all changes EXCEPT for those affecting tuples should \n> be rolled back after completion, leaving the environment the way we found \n> it. If you need the environment changed, do it OUTSIDE the transaction.\n\nUnfortunately there is no such time in postgresql where commands are\ndone outside transaction.\n\nIf you don't issue BEGIN; then each command is implicitly run in its own\ntransaction. \n\nRolling each command back unless it is in implicit transaction would\nreally confuse the user.\n \n> I would argue that the rollback on failure / don't rollback on completion \n> is actually the worse possible way to handle this, because, again, this \n> isn't about data, it's about environment. And I don't think things inside \n> a transaction should be mucking with the environment around them when \n> they're done.\n\nThat would assume nested transactions which we don't have yet.\n \n---------------\nHannu\n\n", "msg_date": "29 Apr 2002 18:27:10 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\nOh, I like ... 
kinda like in perl where if you set a variable 'my' inside\nof conditional, it no longer exists outside of that conditional ...\n\nI do like this ...\n\nOn Mon, 29 Apr 2002, Scott Marlowe wrote:\n\n> I've been thinking this over and over, and it seems to me, that the way\n> SETS in transactions SHOULD work is that they are all rolled back, period,\n> whether the transaction successfully completes OR NOT.\n>\n> Transactions ensure that either all or none of the DATA in the database is\n> changed. That nature is good. But does it make sense to apply\n> transactional mechanics to SETtings? I don't think it does.\n>\n> SETtings aren't data operators, so they don't need to be rolled back /\n> committed so to speak. Their purpose is to affect the way things like the\n> database works in a more overreaching sense, not the data underneath it.\n>\n> For this reason, I propose that a transaction should \"inherit\" its\n> environment, and that all changes EXCEPT for those affecting tuples should\n> be rolled back after completion, leaving the environment the way we found\n> it. If you need the environment changed, do it OUTSIDE the transaction.\n>\n> I would argue that the rollback on failure / don't rollback on completion\n> is actually the worse possible way to handle this, because, again, this\n> isn't about data, it's about environment. And I don't think things inside\n> a transaction should be mucking with the environment around them when\n> they're done.\n>\n> But that's just my opinion, I could be wrong. Scott Marlowe\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Mon, 29 Apr 2002 13:29:33 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "> It's hardly a trivial point, seeing that transactions are such a\n> fundamental aspect of the system. The statements that we have now that\n> depend on being-in-a-transaction-block-or-not (eg, VACUUM) are ugly\n> kluges IMHO.\n\nThis is certainly not in the same category. And I'm sure you can see\nupon rereading my post that I made no claim that this is a trivial\npoint. Though it certainly can be fun to pick and choose words to make\nthem into whatever we want them to say, I *meant* that focusing on\ntrivial details in semantics of postings was diverting the discussion\nfrom the underlying technical issues which I'm sure you see. But here we\ngo again... ;)\n\n> Let me give you another reason why having only local SET would be a bad\n> idea: how are you going to issue a SET with any persistent effect when\n> working through an interface like JDBC that wraps every command you give\n> in a BEGIN/END block? We have also talked about modifying the backend's\n> behavior to act like BEGIN is issued implicitly as soon as you execute\n> any command, so that explicit COMMIT is always needed (at least some\n> people think this is necessary for SQL spec compliance). Either one of\n> these are going to pose severe problems for the user-friendliness of SET\n> if it only comes in a local flavor.\n\nAh, good, a technical issue :) And you are right, this would need to be\naddressed. But that certainly is not a fundamental problem.\n\n> I can certainly think of uses for a local-effects flavor of SET.\n> But I don't want that to be the only flavor.\n\nRight. And there was no suggestion that there be so; the original\nproposal used \"BEGIN/END blocks\" to differentiate the usage. Think about\nSET SESSION... 
as a possible syntax to completely decouple the behaviors\nif an explicit notation is desired.\n\nWe currently have a specific behavior of SET which does not quite match\nother databases. We are considering changing the behavior *farther away*\nfrom conventional behavior. I have no problem with that. But if we are\nchanging it, look farther ahead to see where we want to end up. We now\nhave schemas to help encapsulate information. We could start attaching\nproperties to schemas to help encapsulate behaviors. We may someday have\nnested transactions, which encapsulate transaction behaviors at a finer\ngrain than we have now. Let's choose approaches and behaviors which\ncould support these things in the future, as well as supporting our\ncurrent feature set.\n\n - Thomas\n", "msg_date": "Mon, 29 Apr 2002 09:30:32 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Mon, 29 Apr 2002, Tom Lane wrote:\n\n> Scott Marlowe <scott.marlowe@ihs.com> writes:\n> > I've been thinking this over and over, and it seems to me, that the way\n> > SETS in transactions SHOULD work is that they are all rolled back, period,\n> > whether the transaction successfully completes OR NOT.\n>\n> This would make it impossible for SET to have any persistent effect\n> at all. (Every SQL command is inside a transaction --- an\n> implicitly-established one if necesary, but there is one.)\n\nWhy? What I think Scott is proposing is that on COMMIT *or* ABORT, all\nSETs since the BEGIN are reversed ... hrmmm ... that didnt' sound right\neither ... is there no way of distiguishing between an IMPLICT transcation\nvs an EXPLICIT one?\n\nINSERT ...\n\nvs\n\nBEGIN\nINSERT ...\nCOMMIT\n\n?\n\n\n", "msg_date": "Mon, 29 Apr 2002 13:32:12 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "On Mon, 29 Apr 2002, Bruce Momjian wrote:\n\n> Hannu Krosing wrote:\n> > On Mon, 2002-04-29 at 17:09, Scott Marlowe wrote:\n> > > For this reason, I propose that a transaction should \"inherit\" its\n> > > environment, and that all changes EXCEPT for those affecting tuples should\n> > > be rolled back after completion, leaving the environment the way we found\n> > > it. If you need the environment changed, do it OUTSIDE the transaction.\n> >\n> > Unfortunately there is no such time in postgresql where commands are\n> > done outside transaction.\n> >\n> > If you don't issue BEGIN; then each command is implicitly run in its own\n> > transaction.\n> >\n> > Rolling each command back unless it is in implicit transaction would\n> > really confuse the user.\n>\n> Agreed, very non-intuitive. And can you imagine how many applications\n> we would break.\n\nSince there is obviously no defined standard for how a SET should be\ntreated within a transaction ... who cares? God, how many changes have we\nmade in the past that \"break applications\" but did them anyway?\n\nJust as a stupid question here ... but, why do we wrap single queries into\na transaction anyway? IMHO, a transaction is meant to tell the backend to\nremember this sequence of events, so that if it fails, you can roll it\nback ... with a single INSERT/UPDATE/DELETE, why 'auto-wrapper' it with a\nBEGIN/END?\n\n", "msg_date": "Mon, 29 Apr 2002 13:38:44 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Mon, 2002-04-29 at 17:30, Tom Lane wrote:\n> Scott Marlowe <scott.marlowe@ihs.com> writes:\n> > I've been thinking this over and over, and it seems to me, that the way \n> > SETS in transactions SHOULD work is that they are all rolled back, period, \n> > whether the transaction successfully completes OR NOT.\n> \n> This would make it impossible for SET to have any persistent effect\n> at all. (Every SQL command is inside a transaction --- an\n> implicitly-established one if necesary, but there is one.)\n> \n> It might well be useful to have some kind of LOCAL SET command that\n> behaves the way you describe (effects good only for current transaction\n> block), but I don't think it follows that that should be the only\n> behavior available.\n> \n> What would you expect if LOCAL SET were followed by SET on the same\n> variable in the same transaction? Presumably the LOCAL SET would then\n> be nullified; or is this an error condition?\n\nPerhaps we could do \n\nSET SET TO LOCAL TO TRANSACTION;\n\nWhich would affect itself and all subsequent SET commands up to \n\nSET SET TO GLOBAL;\n\nor end of transaction.\n\n-------------\n\nSET SET TO GLOBAL \n\ncould also be written as \n\nSET SET TO NOT LOCAL TO TRANSACTION;\n\nto comply with genral verbosity of SQL ;)\n\n----------\nHannu\n\n\n", "msg_date": "29 Apr 2002 18:41:17 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Marc G. 
Fournier wrote:\n> On Mon, 29 Apr 2002, Bruce Momjian wrote:\n> \n> > Hannu Krosing wrote:\n> > > On Mon, 2002-04-29 at 17:09, Scott Marlowe wrote:\n> > > > For this reason, I propose that a transaction should \"inherit\" its\n> > > > environment, and that all changes EXCEPT for those affecting tuples should\n> > > > be rolled back after completion, leaving the environment the way we found\n> > > > it. If you need the environment changed, do it OUTSIDE the transaction.\n> > >\n> > > Unfortunately there is no such time in postgresql where commands are\n> > > done outside transaction.\n> > >\n> > > If you don't issue BEGIN; then each command is implicitly run in its own\n> > > transaction.\n> > >\n> > > Rolling each command back unless it is in implicit transaction would\n> > > really confuse the user.\n> >\n> > Agreed, very non-intuitive. And can you imagine how many applications\n> > we would break.\n> \n> Since there is obviously no defined standard for how a SET should be\n> treated within a transaction ... who cares? God, how many changes have we\n> made in the past that \"break applications\" but did them anyway?\n\nWell, I think SET being always rolled back in a multi-statement\ntransaction is not the behavior most people would want. I am sure there\nare some cases people would want it, but I doubt it should be the\ndefault.\n\n> Just as a stupid question here ... but, why do we wrap single queries into\n> a transaction anyway? IMHO, a transaction is meant to tell the backend to\n> remember this sequence of events, so that if it fails, you can roll it\n> back ... 
with a single INSERT/UPDATE/DELETE, why 'auto-wrapper' it with a\n> BEGIN/END?\n\nBecause INSERT/UPDATE/DELETE is actually INSERT/UPDATE/DELETE on every\neffected row, with tiggers and all, so it is not as _single_ as it\nappears.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 12:45:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> I can certainly think of uses for a local-effects flavor of SET.\n>> But I don't want that to be the only flavor.\n\n> Right. And there was no suggestion that there be so; the original\n> proposal used \"BEGIN/END blocks\" to differentiate the usage.\n\nRight. But I don't like the notion of making SET's behavior vary\ndepending on context. I think it's better both from a user-friendliness\nstandpoint and from a compatibility standpoint to use different syntaxes\nto indicate the desired behavior.\n\n> Think about\n> SET SESSION... as a possible syntax to completely decouple the behaviors\n> if an explicit notation is desired.\n\nWell, if you accept the notion of distinguishing it by syntax, then\nwe're down to arguing about which case should be associated with the\nexisting syntax. And I think persistent has to win on compatibility\ngrounds. (Doesn't the Perl DBI driver also do the automatic-begin\nthing? 
Breaking all Java apps and all Perl apps that issue SETs is\nrather a big compatibility problem IMHO...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 12:51:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> If we go with your syntax I would prefer SET LOCAL to LOCAL SET , so\n> that LOCAL feels tied more to variable rather than to SET .\n\nI agree. I was originally thinking that that way might require LOCAL to\nbecome a reserved word, but we should be able to avoid it.\n\nWith Thomas' nearby suggestion of SET SESSION ..., we'd have\n\n\tSET [ SESSION | LOCAL ] varname TO value\n\nand it only remains to argue which case is the default ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 13:04:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> And I also think that this will solve the original issue, which iirc was\n> rolling back SET TIMEOUT at ABORT.\n\nIt does provide a way to deal with that problem. But we still have the\nexample of\n\n\tbegin;\n\tcreate schema foo;\n\tset search_path = foo;\n\trollback;\n\nto mandate changing the behavior of plain SET to roll back on error.\n\n> If we have LOCAL SET, there is no need to have any other mechanism for\n> ROLLING BACK/COMMITing SET's - SET and DML can be kept totally separate,\n> as they should be based on fact that SET does not directly affect data.\n\nThat can only work if you have no connection at all between SETs and\ndata that is in the database; which seems to me to be a rather large\nrestriction on what SET can be used for. 
(In particular, search_path\ncouldn't be treated as a SET variable at all; we'd have to invent some\nother specialized command for it.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 13:09:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> Just as a stupid question here ... but, why do we wrap single queries into\n> a transaction anyway? IMHO, a transaction is meant to tell the backend to\n> remember this sequence of events, so that if it fails, you can roll it\n> back ... with a single INSERT/UPDATE/DELETE, why 'auto-wrapper' it with a\n> BEGIN/END?\n\nWell, a single query (from the user's perspective) may involve a\nfunciton call that itself executes one or more other queries. I think\nyou want these to be under transactional control.\n\nPlus, it's my understanding that the whole MVCC implementation depends\non \"everything is in a transaction.\"\n\n-Doug\n", "msg_date": "29 Apr 2002 13:14:43 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Mon, 2002-04-29 at 17:53, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Perhaps we could do \n> > SET SET TO LOCAL TO TRANSACTION;\n> > Which would affect itself and all subsequent SET commands up to \n> > SET SET TO GLOBAL;\n> > or end of transaction.\n> \n> This makes my head hurt. If I do\n> \n> \tSET foo TO bar;\n> \tbegin;\n> \tSET SET TO GLOBAL;\n> \tSET foo TO baz;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tend;\n> \n> (assume no errors) what is the post-transaction state of foo?\n\nshould be baz\n\nI'm elaborating the idea of SET with transaction scope here with\npossibility to do global SETs as well. 
Any global SET will also affect\nlocal set (by either setting it or just unsetting the local one).\n\n> \n> What about this case?\n> \n> \tSET foo TO bar;\n> \tbegin;\n> \tSET SET TO GLOBAL;\n> \tSET foo TO baz;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tSET foo TO quux;\n> \tend;\n\nbaz again, as local foo==quux disappears at transaction end\n \n> Of course this last case also exists with my idea of a LOCAL SET\n> command,\n> \n> \tSET foo TO bar;\n> \tbegin;\n> \tSET foo TO baz;\n> \tLOCAL SET foo TO quux;\n> \t-- presumably SHOW foo will show quux here\n> \tend;\n> \t-- does SHOW foo now show bar, or baz?\n\nbaz\n\nI assume here only two kinds of SETs - global ones that happen always\nand local ones that are valid only within the transaction\n\n> Arguably you'd need to keep track of up to three values of a SET\n> variable to make this work --- the permanent (pre-transaction) value,\n> to roll back to if error;\n\nI started from the idea of not rolling back SETs as they do not affect\ndata but I think that transaction-local SETs are valuable.\n\nIf we go with your syntax I would prefer SET LOCAL to LOCAL SET , so\nthat LOCAL feels tied more to variable rather than to SET .\n\n> the SET value, which will become permanent\n> if we commit; and the LOCAL SET value, which may mask the pending\n> permanent value. This seems needlessly complex though. Could we get\n> away with treating the above case as an error?\n> \n> In any case I find a LOCAL SET command more reasonable than making\n> SET's effects depend on the value of a SETtable setting. There is\n> circular logic there. 
If I do\n> \n> \tbegin;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tend;\n> \n> what is the post-transaction behavior of SET?\n\nIt is always GLOBAL unless SET TO LOCAL\n\nI explicitly defined this command as applying to itself and all\nfollowing commands in order to avoid this circularity so END would\ninvalidate it\n\nBut I already think that LOCAL SET / SET LOCAL is better and more clear.\n\n> And if you say LOCAL,\n> how do you justify it? Why wouldn't the effects of this SET be local?\n\n------------\nHannu\n\n\n\n\n\n\n", "msg_date": "29 Apr 2002 19:39:25 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "On Mon, 2002-04-29 at 18:20, Tom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > Rather than dismissing this out of hand, try to look at what it *does*\n> > enable. It allows developers to tune specific queries without having to\n> > restore values afterwards. Values or settings which may change from\n> > version to version, so end up embedding time bombs into applications.\n> \n> I think it's a great idea. \n\nSo do I. \n\nAnd I also think that this will solve the original issue, which iirc was\nrolling back SET TIMEOUT at ABORT.\n\nIf we have LOCAL SET, there is no need to have any other mechanism for\nROLLING BACK/COMMITing SET's - SET and DML can be kept totally separate,\nas they should be based on fact that SET does not directly affect data.\n\n--------------\nHannu\n\n", "msg_date": "29 Apr 2002 19:47:54 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\nWhat happens inside of a nested transaction, assuming we do have those\nevenually ... 
?\n\nOn Mon, 29 Apr 2002, Tom Lane wrote:\n\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Perhaps we could do\n> > SET SET TO LOCAL TO TRANSACTION;\n> > Which would affect itself and all subsequent SET commands up to\n> > SET SET TO GLOBAL;\n> > or end of transaction.\n>\n> This makes my head hurt. If I do\n>\n> \tSET foo TO bar;\n> \tbegin;\n> \tSET SET TO GLOBAL;\n> \tSET foo TO baz;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tend;\n>\n> (assume no errors) what is the post-transaction state of foo?\n>\n> What about this case?\n>\n> \tSET foo TO bar;\n> \tbegin;\n> \tSET SET TO GLOBAL;\n> \tSET foo TO baz;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tSET foo TO quux;\n> \tend;\n>\n> Of course this last case also exists with my idea of a LOCAL SET\n> command,\n>\n> \tSET foo TO bar;\n> \tbegin;\n> \tSET foo TO baz;\n> \tLOCAL SET foo TO quux;\n> \t-- presumably SHOW foo will show quux here\n> \tend;\n> \t-- does SHOW foo now show bar, or baz?\n>\n> Arguably you'd need to keep track of up to three values of a SET\n> variable to make this work --- the permanent (pre-transaction) value,\n> to roll back to if error; the SET value, which will become permanent\n> if we commit; and the LOCAL SET value, which may mask the pending\n> permanent value. This seems needlessly complex though. Could we get\n> away with treating the above case as an error?\n>\n> In any case I find a LOCAL SET command more reasonable than making\n> SET's effects depend on the value of a SETtable setting. There is\n> circular logic there. If I do\n>\n> \tbegin;\n> \tSET SET TO LOCAL TO TRANSACTION;\n> \tend;\n>\n> what is the post-transaction behavior of SET? And if you say LOCAL,\n> how do you justify it? 
Why wouldn't the effects of this SET be local?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Mon, 29 Apr 2002 15:10:13 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "On Mon, 29 Apr 2002, Tom Lane wrote:\n\n> Hannu Krosing <hannu@tm.ee> writes:\n> > If we go with your syntax I would prefer SET LOCAL to LOCAL SET , so\n> > that LOCAL feels tied more to variable rather than to SET .\n>\n> I agree. I was originally thinking that that way might require LOCAL to\n> become a reserved word, but we should be able to avoid it.\n>\n> With Thomas' nearby suggestion of SET SESSION ..., we'd have\n>\n> \tSET [ SESSION | LOCAL ] varname TO value\n>\n> and it only remains to argue which case is the default ;-)\n\nAh, I do like the syntax ... and would go with SESSION as default, but\nthat is based on me tinking about how 'local' variables work in perl,\nwhere if you don't explicitly state its local, its automatically global\n...\n\n\n\n", "msg_date": "Mon, 29 Apr 2002 15:13:44 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> What happens inside of a nested transaction, assuming we do have those\n> evenually ... 
?\n\nPresumably, an error inside a nested transaction would cause you to\nrevert back to whatever the SET situation was at start of that\nsubtransaction.\n\nOffhand this doesn't seem any harder than any other part of what we'd\nhave to do for nested transactions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 14:19:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "Marc G. Fournier wrote:\n>\n> What happens inside of a nested transaction, assuming we do have those\n> evenually ... ?\n\nFolks,\n\n I don't really get it. We had a voting and I think I saw a\n clear enough result with #1, transactional behaviour, as the\n winner. Maybe I missed something, but what's this\n disscussion about?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Mon, 29 Apr 2002 14:39:38 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> I don't really get it. We had a voting and I think I saw a\n> clear enough result with #1, transactional behaviour, as the\n> winner. Maybe I missed something, but what's this\n> disscussion about?\n\nWe agreed on transactional behavior ... but Scott is proposing a variant\nthat was not considered earlier, and it seems worth considering.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 14:49:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " }, { "msg_contents": "On Mon, 29 Apr 2002, Jan Wieck wrote:\n\n> Marc G. 
Fournier wrote:\n> >\n> > What happens inside of a nested transaction, assuming we do have those\n> > evenually ... ?\n>\n> Folks,\n>\n> I don't really get it. We had a voting and I think I saw a\n> clear enough result with #1, transactional behaviour, as the\n> winner. Maybe I missed something, but what's this\n> disscussion about?\n\nThis discussion is about a #4 option that nobody considered ...\n\n\n", "msg_date": "Mon, 29 Apr 2002 15:52:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "> I don't really get it. We had a voting and I think I saw a\n> clear enough result with #1, transactional behaviour, as the\n> winner. Maybe I missed something, but what's this\n> disscussion about?\n\nGetting the right solution ;)\n\nThere was not a consensus, just a vote, and the *reasons* for the lack\nof consensus were not yet being addressed. They are now (or some are\nanyway), and the new proposal helped set that in motion.\n\nI would think that a vote in the absence of consensus is not always\noptimal (I'll leave aside stating my view on this case ;), but it has\nhelped focus the discussion. It is always amazing to me how threads\nemerge which bring a consensus when there wasn't even one on the\nhorizon.\n\n - Thomas\n", "msg_date": "Mon, 29 Apr 2002 13:06:48 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction" }, { "msg_contents": "\n----- Original Message -----\nFrom: \"Marc G. Fournier\" <scrappy@hub.org>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"Hannu Krosing\" <hannu@tm.ee>; \"Scott Marlowe\" <scott.marlowe@ihs.com>;\n\"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Monday, April 29, 2002 2:10 PM\nSubject: Re: [HACKERS] Vote totals for SET in aborted transaction\n\nLOCAL <NESTED TRANSACTION NAME> SET .... 
?\n\n>>\n\n>\n> What happens inside of a nested transaction, assuming we do have those\n> evenually ... ?\n>\n> On Mon, 29 Apr 2002, Tom Lane wrote:\n>\n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > Perhaps we could do\n> > > SET SET TO LOCAL TO TRANSACTION;\n> > > Which would affect itself and all subsequent SET commands up to\n> > > SET SET TO GLOBAL;\n> > > or end of transaction.\n> >\n> > This makes my head hurt. If I do\n> >\n> > SET foo TO bar;\n> > begin;\n> > SET SET TO GLOBAL;\n> > SET foo TO baz;\n> > SET SET TO LOCAL TO TRANSACTION;\n> > end;\n> >\n> > (assume no errors) what is the post-transaction state of foo?\n> >\n> > What about this case?\n> >\n> > SET foo TO bar;\n> > begin;\n> > SET SET TO GLOBAL;\n> > SET foo TO baz;\n> > SET SET TO LOCAL TO TRANSACTION;\n> > SET foo TO quux;\n> > end;\n> >\n> > Of course this last case also exists with my idea of a LOCAL SET\n> > command,\n> >\n> > SET foo TO bar;\n> > begin;\n> > SET foo TO baz;\n> > LOCAL SET foo TO quux;\n> > -- presumably SHOW foo will show quux here\n> > end;\n> > -- does SHOW foo now show bar, or baz?\n> >\n> > Arguably you'd need to keep track of up to three values of a SET\n> > variable to make this work --- the permanent (pre-transaction) value,\n> > to roll back to if error; the SET value, which will become permanent\n> > if we commit; and the LOCAL SET value, which may mask the pending\n> > permanent value. This seems needlessly complex though. Could we get\n> > away with treating the above case as an error?\n> >\n> > In any case I find a LOCAL SET command more reasonable than making\n> > SET's effects depend on the value of a SETtable setting. There is\n> > circular logic there. If I do\n> >\n> > begin;\n> > SET SET TO LOCAL TO TRANSACTION;\n> > end;\n> >\n> > what is the post-transaction behavior of SET? And if you say LOCAL,\n> > how do you justify it? 
Why wouldn't the effects of this SET be local?\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Mon, 29 Apr 2002 22:55:56 -0400", "msg_from": "\"John Ingram\" <jingram@ncinter.net>", "msg_from_op": false, "msg_subject": "Re: Vote totals for SET in aborted transaction " } ]
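The three-value bookkeeping Tom Lane sketches in the thread above — a pre-transaction value restored on rollback, a pending SET value that persists on commit, and a SET LOCAL value that masks both until the transaction ends — can be modelled in a few lines of Python. This is a hypothetical illustration of the semantics being debated (the `GucModel` name and its methods are invented here), not PostgreSQL source code:

```python
# Toy model of the SET / SET LOCAL semantics discussed in the thread.
# Three scopes: committed session values, values SET inside the open
# transaction (kept on COMMIT, dropped on ROLLBACK), and SET LOCAL
# values (dropped at transaction end either way).

class GucModel:
    def __init__(self):
        self.session = {}    # committed session-level values
        self.pending = None  # plain SETs issued inside the open transaction
        self.local = None    # SET LOCAL values, discarded at transaction end

    def begin(self):
        self.pending, self.local = {}, {}

    def set(self, name, value):
        if self.pending is None:        # outside a transaction block
            self.session[name] = value
        else:
            self.pending[name] = value

    def set_local(self, name, value):
        if self.local is None:
            raise RuntimeError("SET LOCAL only applies inside a transaction")
        self.local[name] = value

    def show(self, name):
        # SET LOCAL masks a pending SET, which masks the session value.
        for scope in (self.local, self.pending):
            if scope and name in scope:
                return scope[name]
        return self.session.get(name)

    def commit(self):
        self.session.update(self.pending)  # plain SETs become permanent
        self.pending = self.local = None   # SET LOCALs evaporate

    def rollback(self):
        self.pending = self.local = None   # everything in the txn reverts
```

Running Tom's example through this model (`SET foo TO bar; begin; SET foo TO baz; SET LOCAL foo TO quux; end;`) gives `quux` inside the transaction and `baz` after commit — the answer Hannu gives in the thread — while an aborted transaction leaves the pre-transaction value untouched.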
[ { "msg_contents": "\nDiscription of patch for python.\n\n---------------------------------------------------------------------------\n\n\nAndrew Johnson wrote:\n> On Wed, Apr 17, 2002 at 10:12:10PM -0400, Bruce Momjian wrote:\n> > \n> > No one has replied, so I worked up a patch that I will apply in a few\n> > days. Let me know if you don't like it.\n> \n> Hmm... problem I have with that is it changes the interface, and might\n> break existing code. Only other option is to use ordered, rather than named\n> parameters in the _pg.connect call.\n> \n> \n> > Index: src/interfaces/python/pgdb.py\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/src/interfaces/python/pgdb.py,v\n> > retrieving revision 1.10\n> > diff -c -r1.10 pgdb.py\n> > *** src/interfaces/python/pgdb.py\t19 Mar 2002 02:47:57 -0000\t1.10\n> > --- src/interfaces/python/pgdb.py\t18 Apr 2002 02:10:20 -0000\n> > ***************\n> > *** 337,343 ****\n> > ### module interface\n> > \n> > # connects to a database\n> > ! def connect(dsn = None, user = None, password = None, host = None, database = None):\n> > \t# first get params from DSN\n> > \tdbport = -1\n> > \tdbhost = \"\"\n> > --- 337,343 ----\n> > ### module interface\n> > \n> > # connects to a database\n> > ! def connect(dsn = None, user = None, password = None, xhost = None, database = None):\n> > \t# first get params from DSN\n> > \tdbport = -1\n> > \tdbhost = \"\"\n> > ***************\n> > *** 364,372 ****\n> > \t\tdbpasswd = password\n> > \tif database != None:\n> > \t\tdbbase = database\n> > ! \tif host != None:\n> > \t\ttry:\n> > ! \t\t\tparams = string.split(host, \":\")\n> > \t\t\tdbhost = params[0]\n> > \t\t\tdbport = int(params[1])\n> > \t\texcept:\n> > --- 364,372 ----\n> > \t\tdbpasswd = password\n> > \tif database != None:\n> > \t\tdbbase = database\n> > ! \tif xhost != None:\n> > \t\ttry:\n> > ! 
\t\t\tparams = string.split(xhost, \":\")\n> > \t\t\tdbhost = params[0]\n> > \t\t\tdbport = int(params[1])\n> > \t\texcept:\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 13:31:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PyGreSQL bug" } ]
[ { "msg_contents": "Based on this report, I am adding a FUNC_MAX_ARGS define to\nsrc/include/pg_config.h.win32. Certainly if we have INDEX_MAX_KEYS in\nthere, we should have FUNC_MAX_ARGS too.\n\n---------------------------------------------------------------------------\n\nChris Ryan wrote:\n> Bruce,\n> \n> \tI'm not active on the pgsql lists so I figured I would pass this on to\n> you.\n> \n> \tWe were working on compiling some apps using the libpq interface on\n> windows and ran into a problem where FUNC_MAX_ARGS was not defined in\n> the src/include/pg_config.h.win32 file. We added the following line to\n> the file:\n> \n> #define FUNC_MAX_ARGS INDEX_MAX_KEYS\n> \n> \tAfter an nmake -f win32.mak cleanl; nmake -f win32.mak everything\n> worked fine.\n> \n> Thanks in advance.\n> Chris\n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/include/pg_config.h.win32\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/pg_config.h.win32,v\nretrieving revision 1.3\ndiff -c -r1.3 pg_config.h.win32\n*** src/include/pg_config.h.win32\t22 Jan 2002 19:02:40 -0000\t1.3\n--- src/include/pg_config.h.win32\t23 Apr 2002 23:42:59 -0000\n***************\n*** 19,24 ****\n--- 19,25 ----\n #define BLCKSZ\t8192\n \n #define INDEX_MAX_KEYS\t\t16\n+ #define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n \n #define HAVE_ATEXIT\n #define HAVE_MEMMOVE", "msg_date": "Tue, 23 Apr 2002 19:46:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: ?Missing '#define FUNC_MAX_ARGS' in pg_config.h.win32?" 
}, { "msg_contents": "Bruce Momjian writes:\n\n> Based on this report, I am adding a FUNC_MAX_ARGS define to\n> src/include/pg_config.h.win32. Certainly if we have INDEX_MAX_KEYS in\n> there, we should have FUNC_MAX_ARGS too.\n\nWe probably shouldn't have either one in there.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 23 Apr 2002 20:24:47 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ?Missing '#define FUNC_MAX_ARGS' in pg_config.h.win32?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Based on this report, I am adding a FUNC_MAX_ARGS define to\n> > src/include/pg_config.h.win32. Certainly if we have INDEX_MAX_KEYS in\n> > there, we should have FUNC_MAX_ARGS too.\n> \n> We probably shouldn't have either one in there.\n\nI thought so too, but the reporter complained something didn't compile\nwithout it. What should we do?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 Apr 2002 20:57:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: ?Missing '#define FUNC_MAX_ARGS' in pg_config.h.win32?" }, { "msg_contents": "Bruce Momjian writes:\n\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> >\n> > > Based on this report, I am adding a FUNC_MAX_ARGS define to\n> > > src/include/pg_config.h.win32. Certainly if we have INDEX_MAX_KEYS in\n> > > there, we should have FUNC_MAX_ARGS too.\n> >\n> > We probably shouldn't have either one in there.\n>\n> I thought so too, but the reporter complained something didn't compile\n> without it. What should we do?\n\nI'm working right now on removing traces of FUNC_MAX_ARGS, INDEX_MAX_KEYS,\nand NAMEDATALEN from frontend code (pg_dump and psql). 
When I'm done I\nwill remove the definition from pg_config.h.win32.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 23 Apr 2002 21:10:20 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ?Missing '#define FUNC_MAX_ARGS' in pg_config.h.win32?" } ]
[ { "msg_contents": "I also want to implement the same feature but for ecpg. On INFORMIX there is the following syntax to control timeouts:\nSET LOCK MODE TO [WAIT [seconds] | NO WAIT]\n\nThere are 2 possibilities: \n- either the pre-processor implements execution of the statement with the asynchronous functions. Then I have 2 issues: the polling delay will slow down the transaction performance, and I don't have a way to implement the NO WAIT, for I would have to know if the statement is being processed right away or is waiting for another transaction to complete.\n- the backend implements this syntax and returns an error code for timeout.", "msg_date": "Wed, 24 Apr 2002 10:04:06 +1000", "msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>", "msg_from_op": true, "msg_subject": "timeout implementation issues" } ]
[ { "msg_contents": "Each test set has 3 time sets.\n\nFirst is on pgbench -i (-s 5)\nSecond is on pgbench -t 3000 -s 5\n\nThird is on postmaster during the run of the first 2.\n\n\nThe first test on a slow harddrive has a large effect for increasing the\nnamedatalen length.\n\nSecond through 4th sets don't really show any issues when the drives are\nquite a bit quicker -- IBM Deskstars stripped). Looks like something\nthrew the results off near the end of the second set / beginning of the\n3rd set. Probably nightly maintenance. I'm afraid I don't have a good\nbox for benchmarking (potential for many variables).\n\nAnyway, if people are using machines that are heavily disk bound,\nthey're probably not running a production Postgresql installation. \nThose who are will benefit from the longer length of namedatalen.\n\nThat said, the pg_depend patch going through fixes 50% of the serial\nproblem (auto-drop on table drop). The other 50% (short non-conflicting\nnames) is very easy to complete.\n\n\n\n------ SLOW SINGLE IDE DRIVE ---------\nNAMEDATALEN: 32\n\n 89.34 real 1.85 user 0.13 sys\n 146.87 real 1.51 user 3.91 sys\n\n 246.63 real 66.11 user 19.21 sys\n\n\nNAMEDATALEN: 64\n\n 93.10 real 1.82 user 0.16 sys\n 147.30 real 1.45 user 3.90 sys\n\n 249.28 real 66.01 user 18.82 sys\n\n\nNAMEDATALEN: 128\n\n 99.13 real 1.80 user 0.51 sys\n 169.47 real 1.87 user 4.54 sys\n\n 279.16 real 67.93 user 29.72 sys\n\n\nNAMEDATALEN: 256\n\n 106.60 real 1.81 user 0.43 sys\n 166.61 real 1.69 user 4.25 sys\n\n 283.76 real 66.88 user 26.59 sys\n\n\nNAMEDATALEN: 512\n\n 88.13 real 1.83 user 0.22 sys\n 160.77 real 1.64 user 4.48 sys\n\n 259.56 real 67.54 user 21.89 sys\n\n\n------ 2 IDE Raid 0 (Set 1 ---------\nNAMEDATALEN: 32\n\n 61.00 real 1.85 user 0.12 sys\n 87.07 real 1.66 user 3.80 sys\n\n 156.17 real 65.65 user 20.41 sys\n\n\nNAMEDATALEN: 64\n\n 60.20 real 1.79 user 0.19 sys\n 93.26 real 1.54 user 4.00 sys\n\n 162.31 real 65.79 user 20.30 sys\n\n\nNAMEDATALEN: 128\n\n 60.27 real 
1.86 user 0.12 sys\n 86.23 real 1.59 user 3.83 sys\n\n 154.65 real 65.76 user 20.45 sys\n\n\nNAMEDATALEN: 256\n\n 62.17 real 1.86 user 0.12 sys\n 89.92 real 1.31 user 4.13 sys\n\n 160.37 real 66.58 user 20.08 sys\n\n\nNAMEDATALEN: 512\n\n 62.12 real 1.82 user 0.16 sys\n 87.08 real 1.50 user 4.00 sys\n\n 157.37 real 66.54 user 19.57 sys\n\n------ 2 IDE Raid 0 (Set 2) ---------\nNAMEDATALEN: 32\n\n 62.33 real 1.83 user 0.13 sys\n 91.30 real 1.62 user 3.96 sys\n\n 161.91 real 65.93 user 20.80 sys\n\n\nNAMEDATALEN: 64\n\n 62.39 real 1.83 user 0.13 sys\n 93.78 real 1.68 user 3.85 sys\n\n 164.34 real 65.72 user 20.70 sys\n\n\nNAMEDATALEN: 128\n\n 62.91 real 1.84 user 0.15 sys\n 90.87 real 1.58 user 3.91 sys\n\n 161.86 real 66.14 user 20.21 sys\n\n\nNAMEDATALEN: 256\n\n 72.59 real 1.79 user 0.52 sys\n 97.40 real 1.62 user 4.71 sys\n\n 178.38 real 68.78 user 31.35 sys\n\n\nNAMEDATALEN: 512\n\n 80.64 real 1.87 user 0.41 sys\n 99.19 real 1.64 user 4.87 sys\n\n 188.45 real 67.33 user 35.22 sys\n\n------ 2 IDE Raid 0 (Set 3) ---------\nNAMEDATALEN: 32\n\n 79.63 real 1.80 user 0.41 sys\n 89.69 real 1.54 user 4.69 sys\n\n 177.55 real 68.65 user 28.34 sys\n\n\nNAMEDATALEN: 64\n\n 74.94 real 1.89 user 0.52 sys\n 91.44 real 1.70 user 4.32 sys\n\n 174.74 real 66.25 user 33.59 sys\n\n\nNAMEDATALEN: 128\n\n 64.07 real 1.86 user 0.16 sys\n 79.06 real 1.53 user 4.35 sys\n\n 151.27 real 67.92 user 22.96 sys\n\n\nNAMEDATALEN: 256\n\n 78.43 real 1.81 user 0.58 sys\n 89.23 real 1.58 user 4.27 sys\n\n 175.71 real 66.94 user 37.41 sys\n\n\nNAMEDATALEN: 512\n\n 60.67 real 1.87 user 0.10 sys\n 84.61 real 1.47 user 3.88 sys\n\n 153.49 real 66.04 user 19.84 sys", "msg_date": "23 Apr 2002 23:36:14 -0300", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "namedatalen part 2 (cont'd)" }, { "msg_contents": "On 23 Apr 2002 23:36:14 -0300\n\"Rod Taylor\" <rbt@zort.ca> wrote:\n> First is on pgbench -i (-s 5)\n> Second is on pgbench -t 3000 -s 5\n\nHaven't several people 
observed that the results from pgbench are\nvery inconsistent? Perhaps some results from OSDB would be worthwhile...\n\n> The first test on a slow harddrive has a large effect for increasing the\n> namedatalen length.\n> \n> Second through 4th sets don't really show any issues when the drives are\n> quite a bit quicker -- IBM Deskstars stripped).\n\nAFAICS, the only consistent results are the first set (on the slow\nIDE drive) -- in which the performance degradation is quite high.\nBased on that data, I'd vote against making any changes to NAMEDATALEN.\n\nFor the other data sets, performance is inconsistent. I'd be more inclined\nto write that off as simply unreliable data and not necessarily an\nindication that high NAMEDATALEN values don't have a performance impact\non that machine.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 23 Apr 2002 22:50:37 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: namedatalen part 2 (cont'd)" }, { "msg_contents": "> Haven't several people observed that the results from pgbench are\n> very inconsistent? Perhaps some results from OSDB would be\nworthwhile...\n\nI've not looked very hard at OSDB. But it doesn't seem to run out of\nthe box.\n\n> Based on that data, I'd vote against making any changes to\nNAMEDATALEN.\n\nI'm fine with that so long as that SERIAL thing is fixed before 7.3 is\nreleased, but someone was asking about benches with recent changes to\nname hashing. 
Seems degradation is closer to 10% per double rather\nthan 15% as before.\n\n\n", "msg_date": "Tue, 23 Apr 2002 23:12:15 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: namedatalen part 2 (cont'd)" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> ...Based on that data, I'd vote against making any changes to NAMEDATALEN.\n\nIt looked to me like the cost for going to NAMEDATALEN = 64 would be\nreasonable. Based on these numbers I'd have a problem with 128 or more.\n\nBut as you observe, pgbench numbers are not very repeatable. It'd be\nnice to have some similar experiments with another benchmark before\nmaking a decision.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 01:40:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: namedatalen part 2 (cont'd) " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > ...Based on that data, I'd vote against making any changes to NAMEDATALEN.\n> \n> It looked to me like the cost for going to NAMEDATALEN = 64 would be\n> reasonable. Based on these numbers I'd have a problem with 128 or more.\n> \n> But as you observe, pgbench numbers are not very repeatable. It'd be\n> nice to have some similar experiments with another benchmark before\n> making a decision.\n\nYes, 64 looked like the appropriate value too. Actually, I was\nsurprised to see as much of a slowdown as we did.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 09:57:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: namedatalen part 2 (cont'd)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, 64 looked like the appropriate value too. Actually, I was\n> surprised to see as much of a slowdown as we did.\n\nI was too. pgbench runs the same backend(s) throughout the test,\nso it shouldn't be paying anything meaningful in disk I/O for the\nlarger catalog size. After the first set of queries all the relevant\ncatalog rows will be cached in syscache. So where's the performance\nhit coming from?\n\nIt'd be interesting to redo these runs with profiling turned on\nand compare the profiles at, say, 32 and 512 to see where the time\nis going for larger NAMEDATALEN. Might be something that's easy\nto fix once we identify it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 10:20:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: namedatalen part 2 (cont'd) " } ]
[ { "msg_contents": "At 16:46 15/04/02 +0200, Mario Weilguni wrote:\n>And how about getting database internals via SQL-functions - e.g. getting \n>BLCSIZE, LOBBLCSIZE?\n\nISTM that there would be some merit in making a selection of compile-time \noptions available via SQL. Is this worth considering?\n\n\n\n", "msg_date": "Wed, 24 Apr 2002 15:58:15 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 16:46 15/04/02 +0200, Mario Weilguni wrote:\n>> And how about getting database internals via SQL-functions - e.g. getting \n>> BLCSIZE, LOBBLCSIZE?\n\n> ISTM that there would be some merit in making a selection of compile-time \n> options available via SQL. Is this worth considering?\n\nThis could usefully be combined with the nearby thread about recording\nconfiguration options (started by Thomas). I'd be inclined to make it\na low-footprint affair where you do something like\n\n\tselect compilation_options();\n\nand you get back a long textual list of var=value settings, say\n\nVERSION=7.3devel\nPLATFORM=hppa-hp-hpux10.20, compiled by GCC 2.95.3\nBLCKSZ=8192\nMULTIBYTE=yes\netc etc etc etc\n\nThis would solve the diagnostic need as-is, and it doesn't seem\nunreasonable to me to expect applications to look through the output\nfor a particular line if they need to get the state of a specific\nconfiguration option. It's also trivial to extend/modify as the set\nof options changes over time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 10:30:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "Tom Lane writes:\n\n> This could usefully be combined with the nearby thread about recording\n> configuration options (started by Thomas). 
I'd be inclined to make it\n> a low-footprint affair where you do something like\n>\n> \tselect compilation_options();\n>\n> and you get back a long textual list of var=value settings, say\n>\n> VERSION=7.3devel\n> PLATFORM=hppa-hp-hpux10.20, compiled by GCC 2.95.3\n> BLCKSZ=8192\n> MULTIBYTE=yes\n> etc etc etc etc\n\nThis assumes that compilation options only matter in the server and that\nonly SQL users would be interested in them. In fact, compilation options\naffect client and utility programs as well, and it's not unusual to have a\nwild mix (if only unintentional).\n\nIMHO, the best place to put this information is in the version output, as\nin:\n\n$ ./psql --version\npsql (PostgreSQL) 7.3devel\ncontains support for: readline\n\nAn SQL interface in addition to that would be OK, too. But let's not dump\neverything into one SHOW command.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 24 Apr 2002 14:24:48 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> select compilation_options();\n\n> This assumes that compilation options only matter in the server and that\n> only SQL users would be interested in them. In fact, compilation options\n> affect client and utility programs as well, and it's not unusual to have a\n> wild mix (if only unintentional).\n\nGood point. It'd be worthwhile to have some way of extracting such\ninformation from the clients as well.\n\n> IMHO, the best place to put this information is in the version output, as\n> in:\n\n> $ ./psql --version\n> psql (PostgreSQL) 7.3devel\n> contains support for: readline\n\nIs that sufficient? The clients probably are not affected by quite as\nmany config options as the server, but they still have a nontrivial\nlist. (Multibyte, SSL, Kerberos come to mind at once.) 
I'd not like\nto see us assume that a one-line output format will do the job.\n\nA way to interrogate the libpq being used by psql might be good too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 18:13:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch " }, { "msg_contents": "On Wed, Apr 24, 2002 at 06:13:28PM -0400, Tom Lane wrote:\n> \n> Is that sufficient? The clients probably are not affected by quite as\n> many config options as the server, but they still have a nontrivial\n> list. (Multibyte, SSL, Kerberos come to mind at once.) I'd not like\n> to see us assume that a one-line output format will do the job.\n> \n> A way to interrogate the libpq being used by psql might be good too.\n\nI like the way perl handles this, for example\n\nperl -MExtUtils::Embed -e ccopts\n\nfor options to cc used when compiling(-I and stuff) and\n\nperl -MExtUtils::Embed -e ldopts\n\nfor options to ld used when compiling(-L and stuff).\nI think it would be really nice if we could have psql to ask its libpq to\nspew out something similiar. Then you could do stuff like\n\ncc -o ex ex.c `psql -ccopts -ldopts` \n\nand not having to worry about where the libraries are.\n\n-- Magnus\n", "msg_date": "Fri, 3 May 2002 14:20:47 +0200", "msg_from": "Magnus Enbom <dot@rockstorm.se>", "msg_from_op": false, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" } ]
[ { "msg_contents": "Hi!\n\nI seem to have trouble returning large strings of text from stored procedures \n(coded in C), It works fine for smaller data?\nAnd elog prints out the result nicely; but when the result is \"returned\" it \nmakes the server disconnect???? I use the standard Datum methods (actually \nI've tried any possible way returning the damn text :-( ).\n\nAny Ideas why? Are there restriction on the size of text a stored procedure \ncan return?\n\nAny help is appreciated :-)\n\n/Steffen Nielsen\n\n", "msg_date": "Wed, 24 Apr 2002 15:44:44 +0200", "msg_from": "Steffen Nielsen <styf@cs.auc.dk>", "msg_from_op": true, "msg_subject": "Returning text from stored procedures??" }, { "msg_contents": "Steffen Nielsen <styf@cs.auc.dk> writes:\n> Any Ideas why? Are there restriction on the size of text a stored procedure \n> can return?\n\nOne gig ...\n\nIt's hard to guess what you're doing wrong when you haven't shown us\nyour code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 11:43:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Returning text from stored procedures?? " } ]
[ { "msg_contents": "\n\nCaveat: I'm not a pg hacker, I apologize\nin advance if this is a dumb question, but\nit has been nagging at me, and I don't\nknow who else to ask.\n\nIf the WAL is a record of all transactions,\nand if the checkpoint process can be managed\ntightly, is it possible to copy the WAL\nfiles from a master DB and use them to keep\na slave DB in sync? This seems like it \nwould be an easy way to slave a backup system\nwithout additional load on the primary....\n\n\n", "msg_date": "Wed, 24 Apr 2002 10:05:42 -0400", "msg_from": "\"Mike Biamonte\" <mbiamonte@affinitysolutions.com>", "msg_from_op": true, "msg_subject": "WAL -> Replication" }, { "msg_contents": "Mike Biamonte wrote:\n> \n> \n> Caveat: I'm not a pg hacker, I apologize\n> in advance if this is a dumb question, but\n> it has been nagging at me, and I don't\n> know who else to ask.\n> \n> If the WAL is a record of all transactions,\n> and if the checkpoint process can be managed\n> tightly, is it possible to copy the WAL\n> files from a master DB and use them to keep\n> a slave DB in sync? This seems like it \n> would be an easy way to slave a backup system\n> without additional load on the primary....\n\nWAL files are kept only until an fsync(), checkpoint, then reused. \nAlso, the info is tied to direct locations in the file. You could do\nthis for hot backup, but it would require quite bit of coding to make it\nwork.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 10:37:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication" }, { "msg_contents": "On Thu, 25 Apr 2002, Bruce Momjian wrote:\n\n> WAL files are kept only until an fsync(), checkpoint, then reused.\n\nOne could keep them longer though, if one really wanted to.\n\n> Also, the info is tied to direct locations in the file. You could do\n> this for hot backup, but it would require quite bit of coding to make it\n> work.\n\nThat's kind of too bad, since log shipping is a very popular method of\nbackup and replication.\n\nNot that I'm volunteering to fix this. :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 26 Apr 2002 14:38:05 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication" }, { "msg_contents": "On Fri, 2002-04-26 at 07:38, Curt Sampson wrote:\n> On Thu, 25 Apr 2002, Bruce Momjian wrote:\n> \n> > WAL files are kept only until an fsync(), checkpoint, then reused.\n> \n> One could keep them longer though, if one really wanted to.\n> \n> > Also, the info is tied to direct locations in the file. You could do\n> > this for hot backup, but it would require quite bit of coding to make it\n> > work.\n> \n> That's kind of too bad, since log shipping is a very popular method of\n> backup and replication.\n\nNow again from my just acquired DB2 knowledge:\n\nDB2 can run in two modes \n\n1) similar to ours, where logs are reused after checkpoints/commits\nallow it.\n\n2) with log archiving: logs are never reused, but when system determines\nit no longer needs them, it will hand said log over to archiving process\nthat will archive it (usually do a backup to some other place and then\ndelete it). 
This mode is used when online backup and restore\nfunctionality is desired. This is something that could be interesting\nfor 24x7 reliability.\n\n-----------------\nHannu\n\n\n", "msg_date": "26 Apr 2002 13:33:30 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> DB2 can run in two modes \n> 1) similar to ours, where logs are reused after checkpoints/commits\n> allow it.\n> 2) with log archiving: logs are never reused, but when system determines\n> it no longer needs them, it will hand said log over to archiving process\n> that will archive it (usually do a backup to some other place and then\n> delete it).\n\nThere is in fact the skeleton of support in xlog.c for passing unwanted\nlog segments over to an archiver, rather than recycling them. So far\nno one's done anything with the facility. I think the main problem is\nthe one Bruce cited: because the WAL representation is tied to physical\ntuple locations and so forth, it's only useful to a slave that has an\n*exact* duplicate of the master's entire database cluster. That's not\nuseless, but it's pretty restrictive.\n\nIt could be useful for incremental backup, though I'm not sure how\nefficient it is for the purpose. 
WAL logs tend to be pretty voluminous.\nAt the very least you'd probably want enough smarts in the archiver to\nstrip out the page-image records.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 10:41:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication " }, { "msg_contents": "Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > DB2 can run in two modes \n> > 1) similar to ours, where logs are reused after checkpoints/commits\n> > allow it.\n> > 2) with log archiving: logs are never reused, but when system determines\n> > it no longer needs them, it will hand said log over to archiving process\n> > that will archive it (usually do a backup to some other place and then\n> > delete it).\n> \n> There is in fact the skeleton of support in xlog.c for passing unwanted\n> log segments over to an archiver, rather than recycling them. So far\n> no one's done anything with the facility. I think the main problem is\n> the one Bruce cited: because the WAL representation is tied to physical\n> tuple locations and so forth, it's only useful to a slave that has an\n> *exact* duplicate of the master's entire database cluster. That's not\n> useless, but it's pretty restrictive.\n> \n> It could be useful for incremental backup, though I'm not sure how\n> efficient it is for the purpose. WAL logs tend to be pretty voluminous.\n> At the very least you'd probably want enough smarts in the archiver to\n> strip out the page-image records.\n\nYes, I think the bottom line is that we would need to add some things to\nthe WAL file to make archiving the logs work, for either point-in-time\nrecovery, or replication, both of which we need.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 16:33:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication" }, { "msg_contents": "On Fri, 2002-04-26 at 19:41, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > DB2 can run in two modes \n> > 1) similar to ours, where logs are reused after checkpoints/commits\n> > allow it.\n> > 2) with log archiving: logs are never reused, but when system determines\n> > it no longer needs them, it will hand said log over to archiving process\n> > that will archive it (usually do a backup to some other place and then\n> > delete it).\n> \n> There is in fact the skeleton of support in xlog.c for passing unwanted\n> log segments over to an archiver, rather than recycling them. So far\n> no one's done anything with the facility. I think the main problem is\n> the one Bruce cited: because the WAL representation is tied to physical\n> tuple locations and so forth, it's only useful to a slave that has an\n> *exact* duplicate of the master's entire database cluster. That's not\n> useless, but it's pretty restrictive.\n\nIt is probably the fastest way to creating functionality for a hot spare\ndatabase.\n\nIf we could ship the log changes even earlier than whole logs are\ncomplete, we can get near-realtime backup server.\n\n> It could be useful for incremental backup, though I'm not sure how\n> efficient it is for the purpose. WAL logs tend to be pretty voluminous.\n\nBut if they contain enough repeated data they should compress quite\nwell.\n\n> At the very least you'd probably want enough smarts in the archiver to\n> strip out the page-image records.\n\nIf we aim for ability to restore the last known good state and not any\npoint of time in between, the archiving can be just playing back the\nlogs over sparse files + keeping record (bitmap or list) of pages that\nhave been updated and thus are really present in the file. 
Then doing\nfull restore would be just restoring some point of time online backup\nplus copying over changed pages.\n\n----------------\nHannu\n\n\n\n\n\n", "msg_date": "27 Apr 2002 16:25:41 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: WAL -> Replication" } ]
[ { "msg_contents": "I wanted to correct the patch this evening after work, and will check your changes. Thanks!\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Wednesday, 24 April 2002 16:03\nTo: Peter Eisentraut\nCc: Mario Weilguni; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Inefficient handling of LO-restore + Patch\n\n\n\nOK, I have applied the following patch to fix these warnings. However,\nI need Mario to confirm these are the right changes. Thanks.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> This patch does not compile correctly:\n> \n> pg_backup_archiver.c: In function `ahwrite':\n> pg_backup_archiver.c:1252: warning: pointer of type `void *' used in arithmetic\n> pg_backup_archiver.c:1259: warning: pointer of type `void *' used in arithmetic\n> pg_backup_archiver.c:1263: warning: pointer of type `void *' used in arithmetic\n> make: *** [pg_backup_archiver.o] Error 1\n> \n> \n> Bruce Momjian writes:\n> \n> >\n> > Patch applied. Thanks.\n> >\n> > ---------------------------------------------------------------------------\n> >\n> >\n> > Mario Weilguni wrote:\n> > > On Thursday, 11 April 2002 17:44, Tom Lane wrote:\n> > > > \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > > > > And I did not find out how I can detect the large object\n> > > > > chunksize, either from getting it from the headers (include\n> > > > > \"storage/large_object.h\" did not work)\n> > > >\n> > >\n> > > You did not answer if it's ok to post the patch, hope it's ok:\n> >\n> >\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 24 Apr 2002 16:28:05 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Inefficient handling of LO-restore + Patch" } ]
[ { "msg_contents": "[Please CC any replies, I'm subscribed nomail]\n\nAs promised I've given it a bit of polish and it's actually almost useful.\nYou can have a look at it http://svana.org/kleptog/pgsql/pgfsck.html\n\nJust unpack the files into a directory. It's just a perl script with two\nmodules so no compiling necessary. You can download the package directly at\nhttp://svana.org/kleptog/pgsql/pgfsck-0.01.tar.gz\n\nI've tested it on versions 6.5, 7.0 and 7.2 and it works. It shouldn't\ncrash, no matter how bad a file you feed it. It can output insert statements\nalso to help reconstruction.\n\nHere is an example of the program being run over a suitably hexedited file.\n# ./pgfsck -r 16559 kleptog website\n-- Detected database format 7.2\n-- Table pg_class(1259):Page 1:Tuple 0: Unknown type _aclitem (1034)\n-- Table pg_class(1259):Page 1:Tuple 49: Unknown type _aclitem (1034)\n-- Table website(16559):Page 0:Tuple 7: Tuple incorrect length (parsed data=57,length=1638)\n-- Table website(16559):Page 0:Tuple 44: Decoding tuple runs off end: 627338916 > 69\n-- Table website(16559):Page 0:Tuple 70: Bad tuple offset. Should be: 3784 <= 11592 < 8192\n\nCurrently the following features are not supported:\n\n- Toasted / compressed tuples\n- Checking indexes doesn't work (should it?)\n- Views just produce empty output (because they are)\n- Arrays don't work\n- Since each type output has to be written, many types are not correctly output\n- Split tables (1GB) are not supported past the first part.\n- Some system tables in some versions have a strange layout. 
You may get many\n harmless warnings about the formats of pg_class, pg_attribute and/or pg_type.\n\nMost of these are basically because I don't know how they work, but with a\nbit of work some of these should be fixable.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n", "msg_date": "Thu, 25 Apr 2002 00:57:43 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": true, "msg_subject": "Table checking/dumping program" } ]
[ { "msg_contents": "Here's today's tidbit for those who like to argue about nitty little\ndetails of behavior ...\n\nPresently, the parser (in particular gram.y) has quite a few special\ntransformations for certain type and function names. For example,\n\nYou write\t\t\tYou get\n\nchar(N)\t\t\t\tbpchar\ntrim(BOTH foo)\t\t\tbtrim(foo)\n\nThe question for the day is: should these transformations be applied to\nschema-qualified names? And should the parser force the transformed\nnames to be looked up in the system schema (pg_catalog), or should it\nallow them to be searched for using the regular namespace search path?\n\nI want to make the following proposal:\n\n1. Transformations are applied only to unqualified names. If you\nwrite a qualified name then it is treated as a plain-vanilla identifier\nand looked up in the catalogs without transformation, even if the name\ncomponent happens to match a name that would be transformed standing\nalone.\n\n2. If a transformation is applied then the resulting name will always\nbe forced to be looked up in the system schema; ie, the output will\neffectively be \"pg_catalog.something\" not just \"something\".\n\nSome examples:\n\nYou write\t\t\tYou get\n\nchar(N)\t\t\t\tpg_catalog.bpchar\npg_catalog.char\t\t\tpg_catalog.char (not bpchar)\nreal\t\t\t\tpg_catalog.float4\nmyschema.real\t\t\tmyschema.real (not float4)\ntrim(BOTH foo)\t\t\tpg_catalog.btrim(foo)\npg_catalog.trim(BOTH foo)\tan error (since the special production\n\t\t\t\tallowing BOTH won't be used)\n\nI have a number of reasons for thinking that this is a reasonable way to\ngo. Point one: transforming qualified names seems to violate the\n\"principle of least surprise\". If I write myschema.real I would not\nexpect that to be converted to myschema.float4, especially if I weren't\naware that Postgres internally calls REAL \"float4\". 
Point two: I don't\nbelieve that we need to do it to meet the letter of the SQL spec.\nAFAICT the spec treats all the names of built-in types and functions as\nkeywords, not as names belonging to a system schema. So special\nbehavior is required for TRIM(foo) but not for DEFINITION_SCHEMA.TRIM(foo).\nPoint three: if we do transform a name, then we are expecting a\nparticular system type or function to be selected, and we ought to\nensure that that happens; thus explicitly qualifying the output name\nseems proper. Again, this seems less surprising than other alternatives.\nIf I have a datatype myschema.float4 that I've put into the search path\nin front of pg_catalog, I think I'd be surprised to have it get picked\nwhen I write REAL.\n\nAnother reason for doing it this way is that I think it's necessary for\nreversibility. For example, consider what format_type should put out\nwhen it's trying to write a special-cased type name. If it needs to\nemit numeric(10,4) then it *cannot* stick \"pg_catalog.\" on the front of\nthat --- the result wouldn't parse. (At least not unless we uglify the\ngrammar a whole lot more to allow pg_catalog.FOO everywhere that just\nFOO currently has a special production.) So we need to make parsing\nrules that guarantee that numeric(10,4) will be interpreted as\npg_catalog.numeric and not something else, regardless of the active\nsearch path. On the other hand, a plain user datatype that happens\nto be named \"real\" should be accessible as myschema.real without\ninterference from the real->float4 transformation.\n\nA corner case that maybe requires more discussion is what about type and\nfunction names that are reserved per spec, but which we do not need any\nspecial transformation for? For example, the spec thinks that\nOCTET_LENGTH() is a keyword, but our implementation treats it as an\nordinary function name. 
I feel that the grammar should not prefix\n\"pg_catalog.\" to any name that it hasn't transformed or treated\nspecially in any way, even if that name is reserved per spec. Note that\nthis will not actually lead to any non-spec-compliant behavior as long\nas one allows the system to search pg_catalog before any user-provided\nschemas --- which is in fact the default behavior, as it's currently set\nup.\n\nAnother point is that I believe that REAL should be transformed to\npg_catalog.float4, but the quoted identifier \"real\" should not be.\nThis would require a bit of surgery --- presently xlateSqlType is\napplied to pretty much everything whether it's a quoted identifier\nor not. But if we allow xlateSqlType to continue to work that way,\nthen user-schema names that happen to match one of its target names\nare going to behave strangely. I think we will need to get rid of\nxlateSqlType/xlateSqlFunc and instead implement all the name\ntransformations we want as special productions, so that the target\nnames are shown as keywords in keywords.c. Without this, ruleutils.c\nwill not have a clue that the user type name \"real\" needs to be quoted\nto keep it from being transformed.\n\nBTW: as the code stands today, gram.y is prefixing \"pg_catalog.\" to\nsystem function names but not to type names, which is inconsistent;\nI hadn't thought carefully enough about these issues when I was hacking\nthe grammar for schematized datatypes. 
So in any case I have some work\nto do...\n\nComments, concerns, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Apr 2002 12:53:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Parser translations and schemas" }, { "msg_contents": "Tom Lane writes:\n\n> You write\t\t\tYou get\n>\n> char(N)\t\t\t\tpg_catalog.bpchar\n> pg_catalog.char\t\t\tpg_catalog.char (not bpchar)\n> real\t\t\t\tpg_catalog.float4\n> myschema.real\t\t\tmyschema.real (not float4)\n> trim(BOTH foo)\t\t\tpg_catalog.btrim(foo)\n> pg_catalog.trim(BOTH foo)\tan error (since the special production\n> \t\t\t\tallowing BOTH won't be used)\n\nExactly my thoughts.\n\n> A corner case that maybe requires more discussion is what about type and\n> function names that are reserved per spec, but which we do not need any\n> special transformation for? For example, the spec thinks that\n> OCTET_LENGTH() is a keyword, but our implementation treats it as an\n> ordinary function name. I feel that the grammar should not prefix\n> \"pg_catalog.\" to any name that it hasn't transformed or treated\n> specially in any way, even if that name is reserved per spec.\n\nI agree.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 24 Apr 2002 14:28:06 -0400 (EDT)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Parser translations and schemas" } ]
[ { "msg_contents": "I posted this some time ago to pgsql-bugs[1], to no response. So\nI'll venture to try here.\n\nPostgres breaks the standard for string literals by supporting\nC-like escape sequences. This causes pain for people trying to\nwrite portable applications. Is there any hope for an option to\nfollow the standard strictly?\n\nCc's of replies appreciated.\n\nThanks,\nAndrew\n\n[1] http://archives.postgresql.org/pgsql-bugs/2001-12/msg00048.php\n", "msg_date": "Wed, 24 Apr 2002 14:29:28 -0400", "msg_from": "pimlott@idiomtech.com (Andrew Pimlott)", "msg_from_op": true, "msg_subject": "non-standard escapes in string literals" }, { "msg_contents": "Andrew Pimlott wrote:\n> I posted this some time ago to pgsql-bugs[1], to no response. So\n> I'll venture to try here.\n> \n> Postgres breaks the standard for string literals by supporting\n> C-like escape sequences. This causes pain for people trying to\n> write portable applications. Is there any hope for an option to\n> follow the standard strictly?\n\nThis is actually the first time this has come up (that I remember). We\ndo support C escaping, but you are the first to mention that it can\ncause problems for portable applications.\n\nAnyone else want to comment? I don't know how to address this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 10:41:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals" }, { "msg_contents": "On Thu, 25 Apr 2002 10:41:56 EDT, Bruce Momjian wrote:\n> Andrew Pimlott wrote:\n> > I posted this some time ago to pgsql-bugs[1], to no response. So\n> > I'll venture to try here.\n> > \n> > Postgres breaks the standard for string literals by supporting\n> > C-like escape sequences. 
This causes pain for people trying to\n> > write portable applications. Is there any hope for an option to\n> > follow the standard strictly?\n> \n> This is actually the first time this has come up (that I remember). We\n> do support C escaping, but you are the first to mention that it can\n> cause problems for portable applications.\n> \n> Anyone else want to comment? I don't know how to address this.\n\nIMHO, I agree that I would like to see the ANSI standard implemented.\n\nWhile I really like PostgreSQL, it currently does not scale as large\nas other DBMS systems. Due to this, we try to code as database\nagnostic as possible so that a port requires a minimum of effort.\nCurrently there are only a few areas remaining that are at issue.\n(Intervals and implicit type conversion have/are being addressed).\n\nI believe that the reason that it hasn't come up as an issue, per se,\nis that it would only affect strings with a backslash in them.\nBackslash is not a commonly used character. In addition, MySQL, also\nbroken, uses backslashes in the same/similar way. Lots of people\nusing PostgreSQL are stepping up from MySQL.\n\nThis also poses the biggest problem in terms of legacy compatibility.\nPerhaps the answer is to add a runtime config option (and default it\nto ANSI) and possibly deprecate the C escaping.\n\nThanks,\nF Harvell\n\n-- \nMr. F Harvell Phone: +1.407.673.2529\nFTS International Data Systems, Inc. 
Cell: +1.407.467.1919\n7457 Aloma Ave, Suite 302 Fax: +1.407.673.4472\nWinter Park, FL 32792 mailto:fharvell@fts.net\n\n\n", "msg_date": "Thu, 25 Apr 2002 13:30:34 -0400", "msg_from": "F Harvell <fharvell@fts.net>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals " }, { "msg_contents": "F Harvell <fharvell@fts.net> writes:\n> This also poses the biggest problem in terms of legacy compatibility.\n> Perhaps the answer is to add a runtime config option (and default it\n> to ANSI) and possibly deprecate the C escaping.\n\nWhile I wouldn't necessarily object to a runtime option, I do object\nto both the other parts of your proposal ;-). Backslash escaping is\nnot broken; we aren't going to remove it or deprecate it, and I would\nvote against making it non-default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 15:07:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals " }, { "msg_contents": "Tom Lane wrote:\n> F Harvell <fharvell@fts.net> writes:\n> > This also poses the biggest problem in terms of legacy compatibility.\n> > Perhaps the answer is to add a runtime config option (and default it\n> > to ANSI) and possibly deprecate the C escaping.\n> \n> While I wouldn't necessarily object to a runtime option, I do object\n> to both the other parts of your proposal ;-). Backslash escaping is\n> not broken; we aren't going to remove it or deprecate it, and I would\n> vote against making it non-default.\n\nAdded to TODO:\n\n * Allow backslash handling in quoted strings to be disabled for portability\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 16:22:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals" }, { "msg_contents": "On Thu, 25 Apr 2002 15:07:44 EDT, Tom Lane wrote:\n> F Harvell <fharvell@fts.net> writes:\n> > This also poses the biggest problem in terms of legacy compatibility.\n> > Perhaps the answer is to add a runtime config option (and default it\n> > to ANSI) and possibly deprecate the C escaping.\n> \n> While I wouldn't necessarily object to a runtime option, I do object\n> to both the other parts of your proposal ;-). Backslash escaping is\n> not broken; we aren't going to remove it or deprecate it, and I would\n> vote against making it non-default.\n> \n\nSorry, didn't mean to imply that backslash escaping was broken, just\nnon-compliant. Beyond that, your recommendations are also probably\nthe best course of action.\n\nI do desire that the \"default\" operation of the database be as ANSI\nstandard compliant as possible, however, I certainly understand the\nneed to be as backwards compliant as possible. The only issue that I\ncan see with keeping the backslash escaping default is that new,\nnon-PostgreSQL programmers will not be expecting the escaping and will\nbe potentially blindsided by it. (A bigger deal since backslashes are\nunusual and are not often tested for/with.) Perhaps prominent notice\nin the documentation will be adequate/appropriate. Maybe a section on\ndifferences with the ANSI standard should be created. (Is there\ncurrently a compilation of differences anywhere or are they all\ndispersed within the documentation?).\n\nThanks,\n F\n\n-- \nMr. F Harvell Phone: +1.407.673.2529\nFTS International Data Systems, Inc. 
Cell: +1.407.467.1919\n7457 Aloma Ave, Suite 302 Fax: +1.407.673.4472\nWinter Park, FL 32792 mailto:fharvell@fts.net\n\n\n", "msg_date": "Thu, 25 Apr 2002 17:31:36 -0400", "msg_from": "F Harvell <fharvell@fts.net>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Added to TODO:\n>\n> * Allow backslash handling in quoted strings to be disabled for portability\n\nBTW, what about embedded NUL characters in text strings? ;-)\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "Fri, 03 May 2002 23:58:07 +0200", "msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals" }, { "msg_contents": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE> writes:\n> BTW, what about embedded NUL characters in text strings? ;-)\n\nThere's approximately zero chance of that happening in the foreseeable\nfuture. Since null-terminated strings are the API for both the parser\nand all datatype I/O routines, there'd have to be a lot of code changed\nto support this. To take just one example: strcoll() uses\nnull-terminated strings, therefore we'd not be able to support\nlocale-aware text comparisons unless we write our own replacement for\nthe entire locale library. (Which we might do someday, but it's not\na trivial task.)\n\nThe amount of pain involved seems to far outweigh the gain...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 18:29:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: non-standard escapes in string literals " } ]
[ { "msg_contents": "We have had several threads about index usage, specifically when PostgreSQL has\nthe choice of using one or not.\n\nThere seems to be a few points of view:\n\n(1) The planner and statistics need to improve, so that erroneously using an\nindex (or not) happens less frequently or not at all.\n\n(2) Use programmatic hints which allow coders specify which indexes are used\nduring a query. (ala Oracle)\n\n(3) It is pretty much OK as-is, just use enable_seqscan=false in the query.\n\nMy point of view is about this subject is one from personal experience. I had a\ndatabase on which PostgreSQL would always (erroneously) choose not to use an\nindex. Are my experiences typical? Probably not, but are experiences like it\nvery common? I don't know, but we see a number \"Why won't PostgreSQL use my\nindex\" messages to at least conclude that it happens every now and then. In my\nexperience, when it happens, it is very frustrating.\n\nI think statement (1) is a good idea, but I think it is optimistic to expect\nthat a statistical analysis of a table will contain enough information for all\npossible cases.\n\nStatement (2) would allow the flexibility needed, but as was pointed out, the\nhints may become wrong over time as characteristics of the various change.\n\nStatement (3) is not good enough because disabling sequential scans affect\nwhole queries and sub-queries which would correctly not use an index would be\nforced to do so.\n\nMy personal preference is that some more specific mechanism than enable_seqscan\nbe provided for the DBA to assure an index is used. Working on the statistics\nand the planner is fine, but I suspect there will always be a strong argument\nfor manual override in the exceptional cases where it will be needed.\n\nWhat do you all think? 
What would be a good plan of attack?\n", "msg_date": "Wed, 24 Apr 2002 18:46:10 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "PostgreSQL index usage discussion." }, { "msg_contents": "* mlw (markw@mohawksoft.com) [020424 18:51]:\n> \n> (2) Use programmatic hints which allow coders specify which indexes are used\n> during a query. (ala Oracle)\n\nWe would certainly use this if it were available. Hopefully not to\nshoot ourselves in the foot, but for the rather common case of having\na small set of important predefined queries that run over data sets\nthat neither grow significantly nor change in characteristics (for\nexample, a table of airline routes and fares, with a few million\nrows).\n\nWe want to squeeze every last bit of performance out of certain\nqueries, and we're willing to spend the time to verify that the\nmanual tuning beats the planner.\n\n> What do you all think? What would be a good plan of attack?\n\nI dunno. If someone comes up with one that I can reasonably\ncontribute to, I will.\n\n-Brad\n", "msg_date": "Wed, 24 Apr 2002 21:37:48 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL index usage discussion." }, { "msg_contents": ">\n> (2) Use programmatic hints which allow coders specify which indexes are\nused\n> during a query. (ala Oracle)\n>\nAs I said before it would be useful a way to improve(not force) using\nindexes on particular queries, i.e. lowering the cost of using this index on\nthis query.\nRegards\n\n", "msg_date": "Thu, 25 Apr 2002 08:42:03 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL index usage discussion." 
}, { "msg_contents": "> I was told that DB2 has per-table (or rather per-tablespace) knowledge\n> of disk speeds, so keeping separate random and seqsqan costs for each\n> table and index could be a good way here (to force use of a particular\n> index make its use cheap)\n>\n\nI was wondering something even easier, keeping 1 cost per index, 1 cost per\nseqscan, but being allowed to scale cost for each index on each\nquery(recommended, null or unrecommended)\nRegards\n\n", "msg_date": "Thu, 25 Apr 2002 08:56:44 +0200", "msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL index usage discussion." }, { "msg_contents": "On Thu, 2002-04-25 at 00:46, mlw wrote:\n> We have had several threads about index usage, specifically when PostgreSQL has\n> the choice of using one or not.\n> \n> There seems to be a few points of view:\n> \n> (1) The planner and statistics need to improve, so that erroneously using an\n> index (or not) happens less frequently or not at all.\n> \n> (2) Use programmatic hints which allow coders specify which indexes are used\n> during a query. (ala Oracle)\n> \n> (3) It is pretty much OK as-is, just use enable_seqscan=false in the query.\n> \n> My point of view is about this subject is one from personal experience. I had a\n> database on which PostgreSQL would always (erroneously) choose not to use an\n> index. Are my experiences typical? Probably not, but are experiences like it\n> very common?\n\nI have currently 2 databases that run with enable_seqscan=false to avoid\nchoosing plans that take forever.\n\n> I don't know, but we see a number \"Why won't PostgreSQL use my\n> index\" messages to at least conclude that it happens every now and then. 
In my\n> experience, when it happens, it is very frustrating.\n> \n> I think statement (1) is a good idea, but I think it is optimistic to expect\n> that a statistical analysis of a table will contain enough information for all\n> possible cases.\n\nPerhaps we can come up with some special rules to avoid grossly pessimal\nplans.\n\n--------------------\nHannu\n\n\n", "msg_date": "25 Apr 2002 09:39:12 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL index usage discussion." }, { "msg_contents": "On Thu, 2002-04-25 at 08:42, Luis Alberto Amigo Navarro wrote:\n> >\n> > (2) Use programmatic hints which allow coders specify which indexes are\n> used\n> > during a query. (ala Oracle)\n> >\n> As I said before it would be useful a way to improve(not force) using\n> indexes on particular queries, i.e. lowering the cost of using this index on\n> this query.\n> Regards\n\nI was told that DB2 has per-table (or rather per-tablespace) knowledge\nof disk speeds, so keeping separate random and seqsqan costs for each \ntable and index could be a good way here (to force use of a particular\nindex make its use cheap)\n\n------------\nHannu\n\n\n\n", "msg_date": "25 Apr 2002 09:48:26 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL index usage discussion." } ]
[ { "msg_contents": "It seems we can create a forein key using REFERENCES privilege but\ncannot drop the table if its owner is not same as the referenced\ntable. Is this a feature or bug?\n\n-- create a table as user foo\n\\c - foo\ncreate table t1(i int primary key);\n-- grant reference privilege to user bar\ngrant references on t1 to bar;\n-- create a table as user bar\n\\c - bar\ncreate table t2(i int references t1);\n-- cannot drop t2 as user bar?\ndrop table t2;\nNOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"t1\"\nERROR: t1: Must be table owner.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 25 Apr 2002 11:07:27 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "referential integrity problem" } ]
[ { "msg_contents": "Hi all,\n\nWhy does the password_encryption GUC variable default to false?\n\nAFAICT there shouldn't be any issues with client compatibility -- in\nfact, I'd be inclined to rip out all support for storing cleartext\npasswords...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Apr 2002 01:21:16 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "md5 passwords and pg_shadow" }, { "msg_contents": "Neil Conway wrote:\n> Hi all,\n> \n> Why does the password_encryption GUC variable default to false?\n> \n> AFAICT there shouldn't be any issues with client compatibility -- in\n> fact, I'd be inclined to rip out all support for storing cleartext\n> passwords...\n\nIt is false so passwords can be handled by pre-7.2 clients. Once you\nencrypt them, you can't use passwords on pre-7.2 clients because they\ndon't understand the double-md5 hash required. We will set it to true,\nbut when are most pre-7.2 clients gone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 01:50:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow" }, { "msg_contents": "On Thu, 25 Apr 2002 01:50:32 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> Neil Conway wrote:\n> > Hi all,\n> > \n> > Why does the password_encryption GUC variable default to false?\n> > \n> > AFAICT there shouldn't be any issues with client compatibility -- in\n> > fact, I'd be inclined to rip out all support for storing cleartext\n> > passwords...\n> \n> It is false so passwords can be handled by pre-7.2 clients. 
Once you\n> encrypt them, you can't use passwords on pre-7.2 clients because they\n> don't understand the double-md5 hash required.\n\nIMHO, there are two separate processes going on here:\n\n (1) password storage in pg_shadow\n\n (2) password submission over the wire\n\nYou want to use a hash like MD5 for #1 so that someone who breaks\ninto the server can't read all the passwords in pg_shadow. You\nwant to use a hash for #2 so that someone sniffing the network\nwon't be able to read passwords. Aren't these two goals\northagonal? In other words, what does the format in which the\npassword is stored have to do with the format in which data\nis sent over the wire?\n\nHow about this scheme:\n\n- store all passwords md5 hashed: never store the cleartext\npassword.\n- if the client is using 'password' authentication, they will\nsend in the cleartext password: MD5 hash it and compare it\nwith the store MD5 hash. If they match, authentication\nsucceeds.\n- if the client is using 'md5' authentication, use the\nexisting double-md5 hash technique\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Apr 2002 12:48:13 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: md5 passwords and pg_shadow" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> IMHO, there are two separate processes going on here:\n\nThe connection you are missing is that hashed password storage is\nincompatible with crypt-style password transmission. If we force\nhashed storage then the only password transmission style available\nto pre-7.2 clients is cleartext. 
It's not at all clear that securing\nthe on-disk representation is a more important goal than wire security.\n(Perhaps it is for some cases, but in other cases it's surely not.)\nSo the parameter variable is there to let the DBA choose which he's\nmore worried about.\n\nWe should probably change the default setting for 7.3, but I don't\nthink we'll be able to force hashed storage of passwords in all\ninstallations for awhile longer yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 13:32:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow " }, { "msg_contents": "\nOK, I remember now. 'Password' is fine for MD5-encrypted pg_shadow\nbecause you are using the password supplied over the wire to compare to\nthe md5. (Of couse, no one should be using 'password'.)\n\nIt is 'crypt' that is the problem. You get a random salted crypted\npassword from the user, and you can't compare that to the MD5.\n\nIn the 7.2 setup, the client knows the password and can double-md5 \nencrypts and sends it to you. The double-md5 uses the pg_shadow salt,\nand then a random salt.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> On Thu, 25 Apr 2002 01:50:32 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > Neil Conway wrote:\n> > > Hi all,\n> > > \n> > > Why does the password_encryption GUC variable default to false?\n> > > \n> > > AFAICT there shouldn't be any issues with client compatibility -- in\n> > > fact, I'd be inclined to rip out all support for storing cleartext\n> > > passwords...\n> > \n> > It is false so passwords can be handled by pre-7.2 clients. 
Once you\n> > encrypt them, you can't use passwords on pre-7.2 clients because they\n> > don't understand the double-md5 hash required.\n> \n> IMHO, there are two separate processes going on here:\n> \n> (1) password storage in pg_shadow\n> \n> (2) password submission over the wire\n> \n> You want to use a hash like MD5 for #1 so that someone who breaks\n> into the server can't read all the passwords in pg_shadow. You\n> want to use a hash for #2 so that someone sniffing the network\n> won't be able to read passwords. Aren't these two goals\n> orthagonal? In other words, what does the format in which the\n> password is stored have to do with the format in which data\n> is sent over the wire?\n> \n> How about this scheme:\n> \n> - store all passwords md5 hashed: never store the cleartext\n> password.\n> - if the client is using 'password' authentication, they will\n> send in the cleartext password: MD5 hash it and compare it\n> with the store MD5 hash. If they match, authentication\n> succeeds.\n> - if the client is using 'md5' authentication, use the\n> existing double-md5 hash technique\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 13:37:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow" }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > IMHO, there are two separate processes going on here:\n> \n> The connection you are missing is that hashed password storage is\n> incompatible with crypt-style password transmission. If we force\n> hashed storage then the only password transmission style available\n> to pre-7.2 clients is cleartext. 
It's not at all clear that securing\n> the on-disk representation is a more important goal than wire security.\n> (Perhaps it is for some cases, but in other cases it's surely not.)\n> So the parameter variable is there to let the DBA choose which he's\n> more worried about.\n> \n> We should probably change the default setting for 7.3, but I don't\n> think we'll be able to force hashed storage of passwords in all\n> installations for awhile longer yet.\n\nIf we change that default in 7.3, pg_dump reload will md5 encrypt the\npasswords supplied from 7.2. Is that OK, and we can require them to set\nit to 'false' if they want pre-7.2 crypt compatibility?\n\nIf so, I can make the change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 13:39:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow" }, { "msg_contents": "On Thu, 25 Apr 2002 13:32:27 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > IMHO, there are two separate processes going on here:\n> \n> The connection you are missing is that hashed password storage is\n> incompatible with crypt-style password transmission.\n\nAh, ok -- that makes sense.\n\n> If we force\n> hashed storage then the only password transmission style available\n> to pre-7.2 clients is cleartext. It's not at all clear that securing\n> the on-disk representation is a more important goal than wire security.\n\nI'd agree they are both important.\n\nHow many pre-7.2 clients are actually out there? If 'crypt' authentication\nis deprecated in 7.2, is there any chance it will be removed in\n7.3? 
If it is, it should be safe to switch to the scheme I mentioned\nin my previous email, which is both less complicated, and\n\"secure-by-default\".\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Apr 2002 14:33:46 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: md5 passwords and pg_shadow" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> How many pre-7.2 clients are actually out there? If 'crypt' authentication\n> is deprecated in 7.2, is there any chance it will be removed in\n> 7.3? If it is, it should be safe to switch to the scheme I mentioned\n> in my previous email, which is both less complicated, and\n> \"secure-by-default\".\n\nI don't see any particular need to change the implementation; what we\nhave works and it's flexible. I do think we should change the default\npassword_encryption setting soon. IIRC, we agreed to default to FALSE\nat a time when we didn't have md5 password support in the jdbc and odbc\ndrivers. We probably should have revisited the decision once we knew\nthat 7.2 would ship with md5 support in all client libraries --- but\nwe didn't think to.\n\nIt seems unlikely to me that FALSE will be the preferred setting for\nvery many 7.3 installations. There might be a few people out there\nstill using 7.1 clients with 7.3 servers, but a majority? No.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Apr 2002 14:54:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > How many pre-7.2 clients are actually out there? If 'crypt' authentication\n> > is deprecated in 7.2, is there any chance it will be removed in\n> > 7.3? 
If it is, it should be safe to switch to the scheme I mentioned\n> > in my previous email, which is both less complicated, and\n> > \"secure-by-default\".\n> \n> I don't see any particular need to change the implementation; what we\n> have works and it's flexible. I do think we should change the default\n> password_encryption setting soon. IIRC, we agreed to default to FALSE\n> at a time when we didn't have md5 password support in the jdbc and odbc\n> drivers. We probably should have revisited the decision once we knew\n> that 7.2 would ship with md5 support in all client libraries --- but\n> we didn't think to.\n\nI did think of it but decided we couldn't release a 7.2 that had crypt\nbroken for 7.1 clients. 90% of folks moving to 7.2 are from 7.1,\nand they don't want to be required to upgrade all their clients at the\nsame time as the server upgrade.\n\nIf no one objects, I will change the default to md5 encrypted pg_shadow\npasswords for 7.3.\n\nObjections? To use crypt in pre-7.2 clients, people will have to change\ntheir postgresql.conf setting _before_ loading the database.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 16:26:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: md5 passwords and pg_shadow" } ]
[ { "msg_contents": "Hi,\n\nI'm running Postgres on Mac OSX (10.1.4). Every once in a while, I \nget the following problem: for some reason the postmaster seems to \nstop running postgres. When I look at the pid attributed to postgres \n(in postmaster.pid) and check it against ps -aux, I see that either \nthe process doesn't exist anymore or that it has been overwritten by \nsome other program (e.g. MySQL). It's not a big problem since it is \nenough to restart for the pids to get sorted (just once the problem \nhappened twice in a row), but does anyone have an idea how I could \navoid this?\n\nThanks.\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"\n", "msg_date": "Thu, 25 Apr 2002 09:47:49 +0200", "msg_from": "Francois Suter <dba@paragraf.ch>", "msg_from_op": true, "msg_subject": "pid gets overwritten in OSX" }, { "msg_contents": "Francois Suter sez:\n} I'm running Postgres on Mac OSX (10.1.4). Every once in a while, I \n} get the following problem: for some reason the postmaster seems to \n} stop running postgres. When I look at the pid attributed to postgres \n} (in postmaster.pid) and check it against ps -aux, I see that either \n} the process doesn't exist anymore or that it has been overwritten by \n} some other program (e.g. MySQL). It's not a big problem since it is \n} enough to restart for the pids to get sorted (just once the problem \n} happened twice in a row), but does anyone have an idea how I could \n} avoid this?\n\nYou'll have to provide more information. I am running OSX 10.1.4 and both\nPostgreSQL 7.1.2 and MySQL and I have never seen any such behavior. 
The\nonly way I could even imagine them interacting is if you are trying to use\nthe same directory for both, and even then it shouldn't happen since MySQL\nand PostgreSQL use different naming schemes for their pid files.\n\nIs it possible that PostgreSQL isn't coming up after a reboot and the pid\nfile just happens to have an old pid from the last boot?\n\n} Thanks.\n} François\n--Greg\n\n", "msg_date": "Thu, 25 Apr 2002 08:41:54 -0400", "msg_from": "Gregory Seidman <gss+pg@cs.brown.edu>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": "Francois Suter wrote:\n> Hi,\n> \n> I'm running Postgres on Mac OSX (10.1.4). Every once in a while, I \n> get the following problem: for some reason the postmaster seems to \n> stop running postgres. When I look at the pid attributed to postgres \n> (in postmaster.pid) and check it against ps -aux, I see that either \n> the process doesn't exist anymore or that it has been overwritten by \n> some other program (e.g. MySQL). It's not a big problem since it is \n> enough to restart for the pids to get sorted (just once the problem \n> happened twice in a row), but does anyone have an idea how I could \n> avoid this?\n\nThat is strange. The odds that a pid would get reused by another\nlong-running program, and that it would be another database, is very\nsmall.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 25 Apr 2002 11:44:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": ">You'll have to provide more information. I am running OSX 10.1.4 and both\n>PostgreSQL 7.1.2 and MySQL and I have never seen any such behavior. 
The\n>only way I could even imagine them interacting is if you are trying to use\n>the same directory for both, and even then it shouldn't happen since MySQL\n>and PostgreSQL use different naming schemes for their pid files.\n\nNo, I'm definitely not using the same directory for both. As for more \ninfo, I'm using Postgres 7.2.\n\n>Is it possible that PostgreSQL isn't coming up after a reboot and the pid\n>file just happens to have an old pid from the last boot?\n\nIt could be. I have been thinking along this line. I could imagine \nthe following scenario: Postgres starts after quite a few other \nprocesses, tries to start with the pid stored in the postmaster.pid \nfile and actually doesn't start because the pid is already in use. Is \nthere an error log somewhere where such an error might appear?\n\nThanks.\n\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"\n", "msg_date": "Fri, 26 Apr 2002 08:52:33 +0200", "msg_from": "Francois Suter <dba@paragraf.ch>", "msg_from_op": true, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": "Francois Suter <dba@paragraf.ch> writes:\n> the following scenario: Postgres starts after quite a few other \n> processes, tries to start with the pid stored in the postmaster.pid \n> file and actually doesn't start because the pid is already in use.\n\nPostgres does not \"try to start with the stored pid\"; that's entirely\nimpossible under any flavor of Unix. You get the PID the kernel assigns\nyou, and that's that. This could well be a problem of failure to start\nup, but you're barking up the wrong tree as to why.\n\nWhat is needed at this point is more observation. You need to determine\nwhether the postmaster is in fact starting (and later dying) or\nfailing to start at all --- ie, is the postmaster.pid file left over\nfrom a previous system boot cycle? 
Checking the mod date of the pid\nfile might be enough to tell.\n\n> Is there an error log somewhere where such an error might appear?\n\nWhat are you doing with the postmaster's stderr? If your start script\nfor the postmaster is routing it to /dev/null, send it someplace more\nhelpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 10:46:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX " }, { "msg_contents": "Thanks for the leads. I will investigate for a while and keep you \nposted if I find anything that might be of interest to everybody.\n\n>What is needed at this point is more observation. You need to determine\n>whether the postmaster is in fact starting (and later dying) or\n>failing to start at all --- ie, is the postmaster.pid file left over\n>from a previous system boot cycle? Checking the mod date of the pid\n>file might be enough to tell.\n>\n>What are you doing with the postmaster's stderr? If your start script\n>for the postmaster is routing it to /dev/null, send it someplace more\n>helpful.\n\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"\n", "msg_date": "Sat, 27 Apr 2002 12:26:09 +0200", "msg_from": "Francois Suter <dba@paragraf.ch>", "msg_from_op": true, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": ">>What is needed at this point is more observation. You need to determine\n>>whether the postmaster is in fact starting (and later dying) or\n>>failing to start at all --- ie, is the postmaster.pid file left over\n>>from a previous system boot cycle? 
Checking the mod date of the pid\n>>file might be enough to tell.\n\nThe error happened again during the week-end and I was able to \ncollect the following from Postgres' logfile:\n\nLock file \"/usr/local/pgsql/data/postmaster.pid\" already exists.\nIs another postmaster (pid 217) running in \"/usr/local/pgsql/data\"?\n\nSo it seems that the problem is that the postmaster.pid file can't be \noverwritten. I checked the last mod date and it is indeed left over \nfrom last startup. Any idea what could be causing this problem?\n\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"", "msg_date": "Mon, 29 Apr 2002 10:46:49 +0200", "msg_from": "Francois Suter <dba@paragraf.ch>", "msg_from_op": true, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": "Francois Suter <dba@paragraf.ch> writes:\n> The error happened again during the week-end and I was able to=20\n> collect the following from Postgres' logfile:\n\n> Lock file \"/usr/local/pgsql/data/postmaster.pid\" already exists.\n> Is another postmaster (pid 217) running in \"/usr/local/pgsql/data\"?\n\n> So it seems that the problem is that the postmaster.pid file can't be=20\n> overwritten. I checked the last mod date and it is indeed left over=20\n> from last startup. Any idea what could be causing this problem?\n\nWell, it *could* be overwritten, but Postgres won't do it if it sees\nthat there is a process of that PID in the system.\n\nWhat I think is happening is that there's some small variation in the\nnumber or ordering of processes launched during system boot. Maybe one\ntime Postgres is PID 217, the next time it is PID 218 and some other\ndaemon happens to get 217. 
But if 217 is what is in the lockfile, and\nwe see *any* other existent process with PID 217, we cravenly refuse\nto overwrite the lockfile.\n\nI have seen this sort of thing before with other daemons --- on my\nsystem, sendmail occasionally refuses to start after a power failure &\nreboot because it has the same sort of lockfile checking behavior.\n\nWe could perhaps avoid this scenario by being a little tighter about\nwhat we will believe is a conflicting process --- for example, if PID\n217 exists but isn't our same userID, don't assume it's the old\npostmaster still running. But I could easily see that cure being worse\nthan the disease. If it ever let us start two conflicting postmasters\nin the same data directory, data corruption would be the certain result.\nThat's exactly what the lockfile is there to prevent.\n\nThe real problem is that the old postmaster was evidently not allowed\nto shut down cleanly (else it'd have removed its lockfile). How are\nyou powering down the system, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 10:28:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX " }, { "msg_contents": ">The real problem is that the old postmaster was evidently not allowed\n>to shut down cleanly (else it'd have removed its lockfile). How are\n>you powering down the system, anyway?\n\nI'm shutting down normally (ok, I mean most of the time I press the \npower-up button and choose \"Shut down\" rather than going via the \nApple menu). I haven't had a system crash in ages! The only \ndifference I can see (and I would have to test if it makes any \ndifference) is that sometimes I'm working stand-alone at home and \nsometimes on the network in my office (I'm using a PowerBook G4), but \nI'm pretty sure I don't have this problem popping up everytime I go \nback to the office after having used my machine at home.\n\nMaybe there's some operation missing at shutdown. 
I installed \nPostgreSQL using Mark Liyanage's package. Could there be something \nmissing? Is Postgres taking care of the removal of the postmaster.pid \nfile or do you have to do it yourself in some shutdown script?\n\nBest regards.\n\n\n--------\nFrançois\n\nHome page: http://www.monpetitcoin.com/\n\"A fox is a wolf who sends flowers\"\n", "msg_date": "Mon, 29 Apr 2002 17:05:27 +0200", "msg_from": "Francois Suter <dba@paragraf.ch>", "msg_from_op": true, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": "Francois Suter <dba@paragraf.ch> writes:\n> Maybe there's some operation missing at shutdown. I installed \n> PostgreSQL using Mark Liyanage's package. Could there be something \n> missing? Is Postgres taking care of the removal of the postmaster.pid \n> file or do you have to do it yourself in some shutdown script?\n\nNo, you shouldn't need to do it yourself. The approved way to shut down\nPg is to send the postmaster a SIGTERM signal --- which I believe all\nUnixen will do automatically during the shutdown sequence. What may be\nhappening is that the system is not giving the postmaster a long enough\ngrace period between SIGTERM and hard kill. We need a minimum of about\nthree seconds I believe (there's a 2-second sleep() in the checkpoint\nsync code, which maybe should not be there, but it's there at the\nmoment). Traditionally systems have allowed 10 seconds or more to\nrespond to SIGTERM, but perhaps Apple thought they could shave some\ntime there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 11:25:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX " }, { "msg_contents": "On Mon, 2002-04-29 at 17:05, Francois Suter wrote:\n> >The real problem is that the old postmaster was evidently not allowed\n> >to shut down cleanly (else it'd have removed its lockfile). 
How are\n> >you powering down the system, anyway?\n> \n> I'm shutting down normally (ok, I mean most of the time I press the \n> power-up button and choose \"Shut down\" rather than going via the \n> Apple menu). I haven't had a system crash in ages! The only \n> difference I can see (and I would have to test if it makes any \n> difference) is that sometimes I'm working stand-alone at home and \n> sometimes on the network in my office (I'm using a PowerBook G4), but \n> I'm pretty sure I don't have this problem popping up every time I go \n> back to the office after having used my machine at home.\n> \n> Maybe there's some operation missing at shutdown. I installed \n> PostgreSQL using Mark Liyanage's package. Could there be something \n> missing? Is Postgres taking care of the removal of the postmaster.pid \n> file or do you have to do it yourself in some shutdown script?\n\nFrançois\n\nI would definitely quit postgres before shutting down. And Mac OS X does\nnot in my experience like working in \"offline\" mode. I had all sorts of\nproblems getting networking set up right in that mode. All my problems\ndisappeared when the machine was plugged in to the adsl router...\n\nI would say that DNS could be an issue here.\n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "29 Apr 2002 18:42:13 +0200", "msg_from": "tony <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: pid gets overwritten in OSX" }, { "msg_contents": "I've been looking into Francois Suter's recent reports of Postgres not\nshutting down cleanly on Mac OS X 10.1. I find that it's quite\nreproducible. 
If you tell the system to shut down in the normal\nfashion (eg, pick \"Shut Down\" from the Apple menu), the postmaster\ndoes not terminate, leading to WAL recovery upon restart --- or\neven worse, failure to restart if the postmaster PID recorded in the\nlockfile happens to get assigned to some other daemon.\n\nObserve the normal trace of postmaster shutdown (running with -d4,\nlogging of timestamps and PIDs enabled):\n\n2002-04-30 00:08:30 [315] DEBUG: pmdie 15\n2002-04-30 00:08:30 [315] DEBUG: smart shutdown request\n2002-04-30 00:08:30 [331] DEBUG: shutting down\n2002-04-30 00:08:32 [331] DEBUG: database system is shut down\n2002-04-30 00:08:32 [331] DEBUG: proc_exit(0)\n2002-04-30 00:08:32 [331] DEBUG: shmem_exit(0)\n2002-04-30 00:08:32 [331] DEBUG: exit(0)\n2002-04-30 00:08:32 [315] DEBUG: reaping dead processes\n2002-04-30 00:08:32 [315] DEBUG: proc_exit(0)\n2002-04-30 00:08:32 [315] DEBUG: shmem_exit(0)\n2002-04-30 00:08:32 [315] DEBUG: exit(0)\n\nThe postmaster (here PID 315) forks a subprocess to flush shared buffers\nand checkpoint the WAL log. When the subprocess exits, the postmaster\nremoves its lockfile and shuts down. The subprocess takes a minimum of\n2 seconds because there's a sleep(2) in the checkpoint fsync code.\n\nNow here's what I see in the case of shutting down the OS X system:\n\n2002-04-30 00:25:35 [376] DEBUG: pmdie 15\n2002-04-30 00:25:35 [376] DEBUG: smart shutdown request\n\n... and nothing more. Actual system shutdown (power down) occurred at\napproximately 00:26:06 by my watch, over thirty seconds later than the\npostmaster received SIGTERM. So there was plenty of time to do the\ncheckpoint subprocess. (Indeed, I believe that thirty seconds is the\ngrace period Darwin's init process allows SIGTERM'd processes before\ngiving up and hard-killing them. So the system was actually sitting and\nwaiting for the postmaster.)\n\nWhat we appear to have here is that the kernel is not allowing the\npostmaster to fork a checkpoint subprocess. 
But there's no indication\nthat the postmaster got a fork() error return, either. Seems like it's\njust hung.\n\nDoes this ring a bell with anyone? Is it an OSX bug, or a \"feature\";\nand if the latter, how can we work around it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 01:26:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Mac OS X: system shutdown prevents checkpoint" }, { "msg_contents": "I showed this to my friend who's a FreeBSD committer (Adrian Chadd) and he's\nactually setting up a MacOS/X box at the moment and will look into it -\nassuming you don't discover the problem first...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Tuesday, 30 April 2002 1:26 PM\n> To: pgsql-hackers@postgresql.org\n> Cc: Francois Suter\n> Subject: [HACKERS] Mac OS X: system shutdown prevents checkpoint\n>\n>\n> I've been looking into Francois Suter's recent reports of Postgres not\n> shutting down cleanly on Mac OS X 10.1. I find that it's quite\n> reproducible. 
If you tell the system to shut down in the normal\n> fashion (eg, pick \"Shut Down\" from the Apple menu), the postmaster\n> does not terminate, leading to WAL recovery upon restart --- or\n> even worse, failure to restart if the postmaster PID recorded in the\n> lockfile happens to get assigned to some other daemon.\n>\n> Observe the normal trace of postmaster shutdown (running with -d4,\n> logging of timestamps and PIDs enabled):\n>\n> 2002-04-30 00:08:30 [315] DEBUG: pmdie 15\n> 2002-04-30 00:08:30 [315] DEBUG: smart shutdown request\n> 2002-04-30 00:08:30 [331] DEBUG: shutting down\n> 2002-04-30 00:08:32 [331] DEBUG: database system is shut down\n> 2002-04-30 00:08:32 [331] DEBUG: proc_exit(0)\n> 2002-04-30 00:08:32 [331] DEBUG: shmem_exit(0)\n> 2002-04-30 00:08:32 [331] DEBUG: exit(0)\n> 2002-04-30 00:08:32 [315] DEBUG: reaping dead processes\n> 2002-04-30 00:08:32 [315] DEBUG: proc_exit(0)\n> 2002-04-30 00:08:32 [315] DEBUG: shmem_exit(0)\n> 2002-04-30 00:08:32 [315] DEBUG: exit(0)\n>\n> The postmaster (here PID 315) forks a subprocess to flush shared buffers\n> and checkpoint the WAL log. When the subprocess exits, the postmaster\n> removes its lockfile and shuts down. The subprocess takes a minimum of\n> 2 seconds because there's a sleep(2) in the checkpoint fsync code.\n>\n> Now here's what I see in the case of shutting down the OS X system:\n>\n> 2002-04-30 00:25:35 [376] DEBUG: pmdie 15\n> 2002-04-30 00:25:35 [376] DEBUG: smart shutdown request\n>\n> ... and nothing more. Actual system shutdown (power down) occurred at\n> approximately 00:26:06 by my watch, over thirty seconds later than the\n> postmaster received SIGTERM. So there was plenty of time to do the\n> checkpoint subprocess. (Indeed, I believe that thirty seconds is the\n> grace period Darwin's init process allows SIGTERM'd processes before\n> giving up and hard-killing them. 
So the system was actually sitting and\n> waiting for the postmaster.)\n>\n> What we appear to have here is that the kernel is not allowing the\n> postmaster to fork a checkpoint subprocess. But there's no indication\n> that the postmaster got a fork() error return, either. Seems like it's\n> just hung.\n>\n> Does this ring a bell with anyone? Is it an OSX bug, or a \"feature\";\n> and if the latter, how can we work around it?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Tue, 30 Apr 2002 14:30:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Mac OS X: system shutdown prevents checkpoint" }, { "msg_contents": "At 1:26 AM -0400 4/30/02, Tom Lane wrote:\n>I've been looking into Francois Suter's recent reports of Postgres not\n>shutting down cleanly on Mac OS X 10.1.\n>\n>Now here's what I see in the case of shutting down the OS X system:\n>\n>2002-04-30 00:25:35 [376] DEBUG: pmdie 15\n>2002-04-30 00:25:35 [376] DEBUG: smart shutdown request\n>\n>... and nothing more. Actual system shutdown (power down) occurred at\n>approximately 00:26:06 by my watch, over thirty seconds later than the\n>postmaster received SIGTERM. So there was plenty of time to do the\n>checkpoint subprocess. (Indeed, I believe that thirty seconds is the\n>grace period Darwin's init process allows SIGTERM'd processes before\n>giving up and hard-killing them. So the system was actually sitting and\n>waiting for the postmaster.)\n>\n>What we appear to have here is that the kernel is not allowing the\n>postmaster to fork a checkpoint subprocess. But there's no indication\n>that the postmaster got a fork() error return, either. 
Seems like it's\n>just hung.\n\n\nUnfortunately, I don't have time right now to look into this myself, and because I just moved, I don't have a machine I can give someone an account on to try it themselves (PacBell says 20 days for DSL xfer). But I asked around, and got a pair of tips from the Mac OS X Core OS group. If you want to converse with either of the people named below, they're both active on the darwin-development mailing list. (http://lists.apple.com/mailman/listinfo/darwin-development)\n\n-pmb\n\n\nAt 1:52 PM -0700 5/1/02, Jim Magee wrote:\n>On Wednesday, May 1, 2002, at 01:34 PM, Peter Bierman wrote:\n>\n>> Is fork() disallowed after shutdown starts?\n>\n>No, it's allowed. But, depending upon timing, the new process may be\n>hammered with a SIGTERM right away (maybe even before main()). It is\n>always very tricky to fork() as the result of a daemon getting a signal.\n>They are often process group leader, and so their children may get the\n>same signal they just got.\n>\n>POSIX is very ambiguous on whether a new process in the group should also\n>get the signal while we're still delivering them, or whether it shouldn't\n>because it wasn't in the group at the time the signal was first\n>delivered. Both choices have their problems, and so developers have to\n>deal with either case. Do you have signals masked off correctly before\n>the fork()/exec()?\n>\n>Is fork really returning a PID in the parent, and it just looks like the\n>child didn't make it to returning from its fork() call? There are some\n>preparation things that happen in dyld and libc as part of returning from\n>fork in the child, and these run before we make it look like fork()\n>returned in the child. 
If they encounter an error (maybe because the\n>services they need to talk to are no longer available), they have nothing\n>else to do but call _exit() - making it look like the child never returned\n>from fork().\n>\n>But in either the dyld/libc exit case, or the signal case, the parent\n>should get a wait result indicating why the child went away so\n>prematurely. If it was an exit(), maybe using vfork() will yield better\n>results, as there is no need for child-side setup in the vfork() case.\n>\n>--Jim\n\n\nAt 2:01 PM -0700 5/1/02, Matt Watson wrote:\n>\n>It could be that the child has blocked trying to contact a dead lookupd.\n\n\n\n\n", "msg_date": "Wed, 1 May 2002 14:51:52 -0700", "msg_from": "Peter Bierman <bierman@apple.com>", "msg_from_op": false, "msg_subject": "Re: Mac OS X: system shutdown prevents checkpoint" }, { "msg_contents": "Peter Bierman <bierman@apple.com> writes:\n> Is fork() disallowed after shutdown starts?\n>> \n>> No, it's allowed. But, depending upon timing, the new process may be\n>> hammered with a SIGTERM right away (maybe even before main()).\n\nGood point. The fork is executed with SIGTERM blocked --- but the\ncheckpoint child process currently will enable SIGTERM shortly after\nbeing forked. On reflection that seems like a bad idea; probably the\ncheckpoint process should ignore SIGTERM so that it won't get killed\nprematurely during system shutdown.\n\nHowever, that doesn't explain our OS X problem. I added some debug\nprintouts, and can now report positively that (a) the fork() call\nreturns normally in the parent process, providing an apparently-correct\nchild PID value; but (b) the fork never returns in the child. It\ndoesn't ever get as far as trying to enable SIGTERM.\n\n>> Is fork really returning a PID in the parent, and it just looks like the\n>> child didn't make it to returning from its fork() call? 
There are some\n>> preparation things that happen in dyld and libc as part of returning from\n>> fork in the child, and these run before we make it look like fork()\n>> returned in the child. If they encounter an error (maybe because the\n>> services they need to talk to are no longer available), they have nothing\n>> else to do but call _exit() - making it look like the child never returned\n>> from fork().\n\nHmmm ... that seems very close to what I'm seeing.\n\n>> But in either the dyld/libc exit case, or the signal case, the parent\n>> should get a wait result indicating why the child went away so\n>> prematurely.\n\nThe parent is not getting any wait() result indicating that its child died.\n(If it were, we'd not have the problem being complained of.)\n\nIs it possible that something in the child's fork() processing will wait\naround for a response from a service that's already died? Why is fork()\ndependent on any outside service whatever --- isn't that a certain\nrecipe for system failures?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 00:45:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mac OS X: system shutdown prevents checkpoint " }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu, 02 May 2002 00:45:19 -0400\n\n;;; However, that doesn't explain our OS X problem. I added some debug\n;;; printouts, and can now report positively that (a) the fork() call\n;;; returns normally in the parent process, providing an apparently-correct\n;;; child PID value; but (b) the fork never returns in the child. It\n;;; doesn't ever get as far as trying to enable SIGTERM.\n&\n;;; Is it possible that something in the child's fork() processing will wait\n;;; around for a response from a service that's already died? Why is fork()\n;;; dependent on any outside service whatever --- isn't that a certain\n;;; recipe for system failures?\n\nI asked Apple about this issue. This is a bug in Mac OS X. 
The problem is registered\nto their bug database for the appropriate eingineers for investigation.\n\n\nKenji Sugita\n\n", "msg_date": "Tue, 16 Jul 2002 10:22:14 +0900 (JST)", "msg_from": "sugita@sra.co.jp", "msg_from_op": false, "msg_subject": "Re: Mac OS X: system shutdown prevents checkpoint " } ]
[ { "msg_contents": "Assuming the following fetch statement in embedded SQL/C:\n\n EXEC SQL FETCH ALL IN selectFromTable_cur INTO\n\t:array1,\n\t:array2;\n\nis memory automatically allocated (by experimentation I guess so)?\nShould the input pointers be NULL initialised? How should the memory\nbe freed?\n\nAssuming the following fetch statement:\n\n while( 1 )\n {\n EXEC SQL FETCH 1000 IN selectFromTable_cur INTO\n\t :array1,\n\t :array2;\n if( (sqlca.sqlcode < 0) || (sqlca.sqlcode != 0) )\n break;\n }\n\nis memory automatically allocated (by experimentation I guess so)?\nShould the input pointers be NULL initialised before each fetch, or\nonly before the first one? How should the memory be freed?\n\nAny pointers to useful documentation?\n\nThanks, Lee Kindness.\n", "msg_date": "Thu, 25 Apr 2002 12:42:00 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "On Thu, Apr 25, 2002 at 12:42:00PM +0100, Lee Kindness wrote:\n> Assuming the following fetch statement in embedded SQL/C:\n> \n> EXEC SQL FETCH ALL IN selectFromTable_cur INTO\n> \t:array1,\n> \t:array2;\n> \n> is memory automatically allocated (by experimentation I guess so)?\n\nOnly if the pointers are NULL. If they have a value libecpg assumes that\nthis value points to enough memory to store all data.\n\n> Should the input pointers be NULL initialised? How should the memory\n> be freed?\n\nA simple free() will do. You also can free all automatically\nallocated memory from the most recent executed statement by calling\nECPGfree_auto_mem(). But this is not documented and will never be.\n\nThe correct way is to free(array1) and free(array2) while libecpg will\nfree the internal structures when the next statement is executed.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Thu, 25 Apr 2002 15:07:13 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "Michael Meskes wrote:\n> On Thu, Apr 25, 2002 at 12:42:00PM +0100, Lee Kindness wrote:\n>>Should the input pointers be NULL initialised? How should the memory\n>>be freed?\n> \n> \n> A simple free() will do. You also can free all automatically\n> allocated memory from the most recent executed statement by calling\n> ECPGfree_auto_mem(). But this is not documented and will never be.\n> \n> The correct way is to free(array1) and free(array2) while libecpg will\n> free the internal structures when the next statement is executed.\n\nNever, never mix these two! ECPGfree_auto_mem will free even memory \nwhich has already been free'd by the user, perhaps we should get rid of \nthis method (any allocated memory regions are stored in a list, if you \nnever call ECPGfree_auto_mem, this list grows and grows).\n\n Christof\n\n", "msg_date": "Mon, 06 May 2002 09:37:18 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "Okay, lets see if i've got this right...\n\nIf I allocate the memory before the FETCH then I (naturally) free\nit. However If I NULL initialise the pointer then libecpg will\nallocate the memory and I must NOT free it - libecpg will free it\nautomatically... Yeah?\n\nI think this highlights the need for some documentation on this\naspect.\n\nRegards, Lee Kindness.\n\nChristof Petig writes:\n > Michael Meskes wrote:\n > > On Thu, Apr 25, 2002 at 12:42:00PM +0100, Lee Kindness wrote:\n > >>Should the input pointers be NULL initialised? How should the memory\n > >>be freed?\n > > \n > > \n > > A simple free() will do. 
You also can free all automatically\n > > allocated memory from the most recent executed statement by calling\n > > ECPGfree_auto_mem(). But this is not documented and will never be.\n > > \n > > The correct way is to free(array1) and free(array2) while libecpg will\n > > free the internal structures when the next statement is executed.\n > \n > Never, never mix these two! ECPGfree_auto_mem will free even memory \n > which has already been free'd by the user, perhaps we should get rid of \n > this method (any allocated memory regions are stored in a list, if you \n > never call ECPGfree_auto_mem, this list grows and grows).\n > \n > Christof\n > \n", "msg_date": "Mon, 6 May 2002 11:24:36 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "On Mon, May 06, 2002 at 09:37:18AM +0200, Christof Petig wrote:\n> Never, never mix these two! ECPGfree_auto_mem will free even memory \n> which has already been free'd by the user, perhaps we should get rid of \n\nThat's why I discourage the usage of ECPGfree_auto_mem by the user.\nThere is only one reason why the symbol is not static and that is that\nit is used by another module in libecpg.\n\nI never thought about this as an end user routine, it's just meant as a\nclean up method in case of an error during statement execution.\n\nBTW Christof, ECPGfree_auto_mem is used by testdynalloc.pgc. Maybe we\nshould change that.\n\n> this method (any allocated memory regions are stored in a list, if you \n> never call ECPGfree_auto_mem, this list grows and grows).\n\nThat is not true. Before a statement is executed libecpg calls\nECPGclear_auto_mem which just frees ecpg's own structure but not the\nmemory used for data.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Mon, 6 May 2002 12:59:58 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "On Mon, May 06, 2002 at 11:24:36AM +0100, Lee Kindness wrote:\n> If I allocate the memory before the FETCH then I (naturally) free\n> it. However If I NULL initialise the pointer then libecpg will\n> allocate the memory and I must NOT free it - libecpg will free it\n> automatically... Yeah?\n\nNo. No matter who allocates the memory. You have to free it yourself.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 6 May 2002 13:00:51 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?" }, { "msg_contents": "Lee Kindness wrote:\n> Okay, lets see if i've got this right...\n> \n> If I allocate the memory before the FETCH then I (naturally) free\n> it. However If I NULL initialise the pointer then libecpg will\n> allocate the memory and I must NOT free it - libecpg will free it\n> automatically... Yeah?\n\nNo, I only said: Never mix free and ECPGfree_auto_mem because \nECPGfree_auto_mem will double free it if you free'd it already.\n\nAnd also: it might be a good idea to kill the undocumented function (and \nthe list).\n\nAnd: You need to free it (by one of the two methods above).\n\n> \n> I think this highlights the need for some documentation on this\n> aspect.\n\nYes it does.\n\n Christof\n\n", "msg_date": "Mon, 06 May 2002 14:47:31 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?" } ]
[ { "msg_contents": "I am going to compare a 16KB PostgreSQL system to an 8KB system. I am working\non the assumption that 16K takes about as long to read as 8K, and that the CPU\noverhead of working with a 16K block is not too significant. \n\nI know with toast, block size is no longer an issue, but 8K is not a lot these\ndays, and it seems like a lot of syscall and block management overhead could be\nreduced by doubling it. Any comments?\n\nThe test system is a dual 850MHZ PIII, 1G memory, RedHat 7.2, 2 IBM SCSI 18G\nhard disks, intel motherboard with onboard adaptec SCSI ULVD.\n\nBesides pgbench, anyone have any tests that they would like to try?\n\nHas anyone already done this test and found it useful/useless?\n", "msg_date": "Thu, 25 Apr 2002 09:04:07 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Block size: 8K or 16K?" }, { "msg_contents": "Jean-Paul ARGUDO wrote:\n> \n> > I know with toast, block size is no longer an issue, but 8K is not a lot these\n> > days, and it seems like a lot of syscall and block management overhead could be\n> > reduced by doubling it. Any comments?\n> \n> IMHO, I think this would enhance performance only if tuple length is\n> above 8k, huh?..\n> \n> I mean, I think this would enhance databases with many large objects. On\n> the contrary, database with classical varchar and integers won't benefit\n> it, don't you think?\n\nSee, I'm not sure. I can make many arguments pro or con, and I could defend\neither, but my gut tells me that using 16K blocks will increase performance\nover 8K. Already I have seen a sequential scan of a large table go from 20\nseconds using 8K to 17.3 seconds using 16K.\n\nselect * from zsong where song like '%fubar%';\n\n\nI am copying my pgbench database to the new block size to test that.\n\n8K vs 16K\nPros: \nA sequential scan will require 1/2 the number of system calls for the same\namount of data.\nBlock \"cache\" management costs will be cut in half, for the same amount of\ndata.\nMore index information per fetch.\nLarger tuples can be stored without toasting.\n\nCons:\nTime to search a block for a tuple may increase.\nMore memory used per block (These days, I don't think this is too much of an\nissue.)\n\nThis is based on the assumption that reading an 8K chunk is about as costly as\nreading a 16K chunk. If this assumption is not true, then the arguments do not\nwork.\n", "msg_date": "Thu, 25 Apr 2002 09:49:40 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Block size: 8K or 16K?" }, { "msg_contents": "On Thu, 25 Apr 2002 09:04:07 -0400\n\"mlw\" <markw@mohawksoft.com> wrote:\n> I am going to compare a 16KB PostgreSQL system to an 8KB system. I am working\n> on the assumption that 16K takes about as long to read as 8K, and that the CPU\n> overhead of working with a 16K block is not too significant. \n> \n> I know with toast, block size is no longer an issue, but 8K is not a lot these\n> days, and it seems like a lot of syscall and block management overhead could be\n> reduced by doubling it. Any comments?\n\nIt's something I was planning to investigate, FWIW. I'd be interested to see\nthe results...\n\n> The test system is a dual 850MHZ PIII, 1G memory, RedHat 7.2, 2 IBM SCSI 18G\n> hard disks, intel motherboard with onboard adaptec SCSI ULVD.\n> \n> Besides pgbench, anyone have any tests that they would like to try?\n\nPerhaps OSDB? 
http://osdb.sf.net\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Thu, 25 Apr 2002 11:21:14 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Block size: 8K or 16K?" }, { "msg_contents": "On Thu, 25 Apr 2002, mlw wrote:\n\n> ...but my gut tells me that using 16K blocks will increase performance\n> over 8K. Aleady I have seen a sequential scan of a large table go from 20\n> seconds using 8K to 17.3 seconds using 16K.\n\nYou should be able to get the same performance increase with 8K\nblocks by reading two blocks at a time while doing sequential scans.\nThat's why I've been promoting this idea of changing postgres to\ndo its own read-ahead.\n\nOf course, Bruce might be right that the OS read-ahead may take\ncare of this anyway, but then why would switching to 16K blocks\nimprove sequential scans? Possibly because I'm missing something here.\n\nAnyway, we now know how to test the change, should someone do it:\ncompare sequential scans with and without readahead on 8K blocks,\nand then compare that against a server without readahead but with\nblock sizes the size of the readahead (64K, I propose--oh wait, we\ncan only do 32K....)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Fri, 26 Apr 2002 14:09:23 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Block size: 8K or 16K?" }, { "msg_contents": "Curt Sampson wrote:\n> On Thu, 25 Apr 2002, mlw wrote:\n> \n> > ...but my gut tells me that using 16K blocks will increase performance\n> > over 8K. 
Aleady I have seen a sequential scan of a large table go from 20\n> > seconds using 8K to 17.3 seconds using 16K.\n> \n> You should be able to get the same performance increase with 8K\n> blocks by reading two blocks at a time while doing sequential scans.\n> That's why I've been promoting this idea of changing postgres to\n> do its own read-ahead.\n> \n> Of course, Bruce might be right that the OS read-ahead may take\n> care of this anyway, but then why would switching to 16K blocks\n> improve sequential scans? Possibly because I'm missing something here.\n\nI am almost sure that increasing the block size or doing read-ahead in\nthe db will only improve performance if someone is performing seeks in\nthe file at the same time, and hence OS readahead is being turned off.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 26 Apr 2002 01:28:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Block size: 8K or 16K?" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Curt Sampson wrote:\n> > On Thu, 25 Apr 2002, mlw wrote:\n> >\n> > > ...but my gut tells me that using 16K blocks will increase performance\n> > > over 8K. Aleady I have seen a sequential scan of a large table go from 20\n> > > seconds using 8K to 17.3 seconds using 16K.\n> >\n> > You should be able to get the same performance increase with 8K\n> > blocks by reading two blocks at a time while doing sequential scans.\n> > That's why I've been promoting this idea of changing postgres to\n> > do its own read-ahead.\n> >\n> > Of course, Bruce might be right that the OS read-ahead may take\n> > care of this anyway, but then why would switching to 16K blocks\n> > improve sequential scans? 
Possibly because I'm missing something here.\n> \n> I am almost sure that increasing the block size or doing read-ahead in\n> the db will only improve performance if someone is performing seeks in\n> the file at the same time, and hence OS readahead is being turned off.\n\nI largely agree with you, however, don't underestimate the overhead of a read()\ncall. By doubling the block size, the overhead of my full table scan was cut in\nhalf, thus potentially more efficient, 20 seconds was reduced to 17. (That was\non a machine only doing one query, not one under full load, so the real effect\nmay be much more subtle.)\n\nIn fact, I posted some results of a comparison between 16k and 8k blocks, I saw\nvery little difference on most tests while a couple looked pretty interesting.\n", "msg_date": "Fri, 26 Apr 2002 08:27:24 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Block size: 8K or 16K?" } ]
[ { "msg_contents": "I have been doing some benchmarking. The idea is to measure the difference\nbetween PostgreSQL 7.2.1 compiled for an 8K block or a 16K block. pgbench is\nalmost useless for measuring changes in performance, I can't seem to get any\nreal consistency, so I used the osdb to measure performance.\n\nIn the 16K block configuration, I reduced the number of blocks by half, so as\nto keep caching memory similarly sized.\n\n\nThe machine:\nRedHat 7.2, dual PIII 650, 1G ram, 2 IBM scsi 18G drives.\nuname -a:\nLinux slave1.mohawksoft.com 2.4.7-10smp #1 SMP Thu Sep 6 17:09:31 EDT 2001 i686\nunknown\n\nmount:\n[markw@slave1 markw]$ mount\n/dev/sda3 on / type ext3 (rw)\nnone on /proc type proc (rw)\nusbdevfs on /proc/bus/usb type usbdevfs (rw)\n/dev/sda1 on /boot type ext3 (rw)\n/dev/sda4 on /u01 type ext2 (rw)\n/dev/sdb1 on /u02 type ext2 (rw)\nnone on /dev/pts type devpts (rw,gid=5,mode=620)\nnone on /dev/shm type tmpfs (rw)\n\nThe 8K data directory:\n[postgres@slave1 data]$ pwd\n/home/postgres/data\n[postgres@slave1 data]$ ls -l\ntotal 40\nlrwxrwxrwx 1 root root 9 Mar 6 07:21 base -> /u02/base\ndrwx------ 2 postgres postgres 4096 Apr 25 21:51 global\ndrwx------ 2 postgres postgres 4096 Apr 25 21:29 pg_clog\n-rw------- 1 postgres postgres 10068 Mar 6 07:26 pg_hba.conf\n-rw------- 1 postgres postgres 1250 Mar 6 07:20 pg_ident.conf\n-rw------- 1 postgres postgres 4 Mar 6 07:20 PG_VERSION\nlrwxrwxrwx 1 root root 12 Mar 6 07:22 pg_xlog -> /u01/pg_xlog\n-rw------- 1 postgres postgres 3095 Apr 24 19:23 postgresql.conf\n-rw------- 1 postgres postgres 56 Apr 25 20:54 postmaster.opts\n-rw------- 1 postgres postgres 44 Apr 25 20:54 postmaster.pid\n\nThe 16K data directory:\n[postgres@slave1 data16kb]$ pwd\n/home/postgres/data16kb\n[postgres@slave1 data16kb]$ ls -l\ntotal 36\nlrwxrwxrwx 1 postgres postgres 13 Apr 25 08:47 base -> /u02/base16kb\ndrwx------ 2 postgres postgres 4096 Apr 25 20:53 global\ndrwx------ 2 postgres postgres 4096 Apr 25 20:33 pg_clog\n-rw------- 
1 postgres postgres 10068 Apr 25 08:44 pg_hba.conf\n-rw------- 1 postgres postgres 1250 Apr 25 08:43 pg_ident.conf\n-rw------- 1 postgres postgres 4 Apr 25 08:43 PG_VERSION\nlrwxrwxrwx 1 postgres postgres 16 Apr 25 08:46 pg_xlog ->\n/u01/pg_xlog16kb\n-rw------- 1 postgres postgres 3095 Apr 25 08:44 postgresql.conf\n-rw------- 1 postgres postgres 64 Apr 25 14:59 postmaster.opts", "msg_date": "Thu, 25 Apr 2002 22:16:00 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "8K vs 16K block size report" } ]
[ { "msg_contents": "For tracking of Foreign Keys, Check constraints, and maybe NULL / NOT\nNULL (specific type of check constraint) I intend to create (as per\nsuggestion) pg_constraint.\n\nconrelid\nconname\ncontype ('c'heck, 'f'oreign key, ???)\nconkey (int2vector of columns of relid, like pg_index.indkey)\nconnum int4 -- unique identifying constraint number for the relation\nid.\nconsrc\nconbin\n\nDependencies would be on conrelid, and connum in pg_depend. If each\nconstraint has a unique number for the relation, OIDs aren't required\nhere. Much like pg_attribute.\n\npg_class.relchecks would change to mean relconstraints, though I won't\nchange the column name.\n\nA view to pg_relcheck would be created.\n\nEnd result? Foreign Keys will be tracked and restricted / cascaded\nthrough via pg_depend code (see patches archive in April). This would\nknock several items off the TODO list.\n\nI'm not exactly sure how to find out what columns a check constraint\ndepends on, but I'm sure I'll figure that out sooner or later.\n\nAny thoughts or suggestions? Is there any reason to allow a check in\na namespace other than the relation it's tied to? Spec seems to allow\nthat, but is it actually useful?\n--\nRod\n\n", "msg_date": "Thu, 25 Apr 2002 22:30:45 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "pg_constraint" }, { "msg_contents": "> For tracking of Foreign Keys, Check constraints, and maybe NULL / NOT\n> NULL (specific type of check constraint) I intend to create (as per\n> suggestion) pg_constraint.\n\nHmmm...I don't see the need at all for NOT NULL constraint tracking. The\nspec doesn't seem to require it and we do not have names for them anyway.\nEven if they were given names, it'd be pointless, as there's only one per\ncolumn.\n\nPrimary keys and unique keys are SQL constraints - are you going to bother\ntracking them as well or leave them in the current format? 
Maybe you could\ndo it with a view or something.\n\nWhy not just create a pg_references table and leave pg_relcheck as is?\n\nChris\n\n", "msg_date": "Fri, 26 Apr 2002 15:34:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_constraint" }, { "msg_contents": "\n> > For tracking of Foreign Keys, Check constraints, and maybe NULL /\nNOT\n> > NULL (specific type of check constraint) I intend to create (as\nper\n> > suggestion) pg_constraint.\n>\n> Hmmm...I don't see the need at all for NOT NULL constraint tracking.\nThe\n> spec doesn't seem to require it and we do not have names for them\nanyway.\n> Even if they were given names, it'd be pointless, as there's only\none per\n> column.\n\nCorrect me if I'm wrong, but aren't NOT NULL constraints a short form\nof the similar CHECK constraint (according to spec, which I don't have\nin front of me)? I've been debating combining the 2 and allowing names\non them, but won't do this yet. CHECK (VALUE NOT NULL) would mark the\npg_attribute column and assign the name.\n\n> Primary keys and unique keys are SQL constraints - are you going to\nbother\n> tracking them as well or leave them in the current format? Maybe\nyou could\n> do it with a view or something.\n\n> Why not just create a pg_references table and leave pg_relcheck as\nis?\n\nrelcheck needs changes anyway. It needs to track the specific columns\nthat it depends on, rather than simply the table. This is for reasons\nof DROP COLUMN. Last thing you want is a bad check constraint after\nthat ;) The other reason is that they're supposed to be in the same\nnamespace (which makes sense) and having each constraint in its own\ntable would be silly.\n\nOf note, the above table should also have immediate and deferrable\nbools attached to it.\n\nI debated about the primary / unique keys, but indices seem to do a\ngood enough job with those.\n\n\n\n\n", "msg_date": "Fri, 26 Apr 2002 07:49:42 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: pg_constraint" }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> For tracking of Foreign Keys, Check constraints, and maybe NULL / NOT\n> NULL (specific type of check constraint) I intend to create (as per\n> suggestion) pg_constraint.\n\n> conrelid\n> conname\n> contype ('c'heck, 'f'oreign key, ???)\n\n'u'unique, 'p'rimary key, 'n'ot null seem to cover it\n\n> conkey (int2vector of columns of relid, like pg_index.indkey)\n> connum int4 -- unique identifying constraint number for the relation\n> id.\n> consrc\n> conbin\n\n> Dependencies would be on conrelid, and connum in pg_depend. If each\n> constraint has a unique number for the relation OIDs aren't required\n> here. Much like pg_attribute.\n\nCould we instead insist on a unique name per-table, and make this table's\nkey be (conrelid, conname)? Assigning a number seems quite artificial.\n\nconsrc/conbin seem to only cover the check-constraint case. 
Need some\nthought about what to store for foreign keys (ideally, enough info for\npg_dump to reconstruct the REFERENCES spec without looking at the\ntriggers) and unique/primary keys (a link to the implementing index\nseems like a good idea here).\n\n> I'm not exactly sure how to find out what columns a check constraint\n> depends on, but I'm sure I'll figure that out sooner or later.\n\npull_var_clause() on the nodetree representation is your friend.\nI see a difficulty in the above representation though: what if a check\nconstraint refers to > INDEX_MAX_KEY columns? Maybe conkey had better\nbe an int2[] variable-length array.\n\n> Any thoughts or suggestions? Is there any reason to allow a check in\n> a namespace other than the relation it's tied to? Spec seems to allow\n> that, but is it actually useful?\n\nFor constraints tied to tables, namespaces are irrelevant.\n\nThere is something in the spec about stand-alone assertions that can\nspecify cross-table constraints, but I think that's a task for some\nfuture year.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 10:25:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_constraint " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Hmmm...I don't see the need at all for NOT NULL constraint tracking. The\n> spec doesn't seem to require it and we do not have names for them anyway.\n> Even if they were given names, it'd be pointless, as there's only one per\n> column.\n\nHmm, you're probably right. Way back when, I was thinking of naming\nthem as a route to allowing DROP CONSTRAINT for them --- but given the\nALTER TABLE SET/DROP NOT NULL syntax that we have now, supporting DROP\nCONSTRAINT is not really necessary. 
So I concur that not-null isn't a\nfeature that pg_constraint needs to deal with.\n\n> Why not just create a pg_references table and leave pg_relcheck as is?\n\nOne reason is that that structure wouldn't guarantee that\ncheck-constraint names are distinct from references/unique-constraint\nnames, which'd make life difficult for DROP CONSTRAINT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 10:50:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_constraint " }, { "msg_contents": "> Could we instead insist on a unique name per-table, and make this\ntable's\n> key be (conrelid, conname)? Assigning a number seems quite\nartificial.\n\nThe only problem with this is that I don't want the rename of a\nconstraint to have to fall over into the pg_depend table. pg_depend\nis currently happy with system OIDS or a Relation OID and some unique\nnumber to represent it -- much as pg_description wouldn't want to know\nthe name of the constraint for the ability to add a comment to it.\n\n> consrc/conbin seem to only cover the check-constraint case. Need\nsome\n> thought about what to store for foreign keys (ideally, enough info\nfor\n> pg_dump to reconstruct the REFERENCES spec without looking at the\n> triggers) and unique/primary keys (a link to the implementing index\n> seems like a good idea here).\n\nI will implement the various flags required for these. conupdtyp,\ncondeltyp (on update type and on delete type respectively) as well as\nimmediate and deferrable bools.\n\n> > I'm not exactly sure how to find out what columns a check\nconstraint\n> > depends on, but I'm sure I'll figure that out sooner or later.\n>\n> pull_var_clause() on the nodetree representation is your friend.\n\nThanks for the tip.\n\n> I see a difficulty in the above representation though: what if a\ncheck\n> constraint refers to > INDEX_MAX_KEY columns? 
Maybe conkey had\nbetter\n> be an int2[] variable-length array.\n\nGood point.\n\n", "msg_date": "Fri, 26 Apr 2002 10:58:53 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: pg_constraint " }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n>> Could we instead insist on a unique name per-table, and make this\n>> table's\n>> key be (conrelid, conname)? Assigning a number seems quite\n>> artificial.\n\n> The only problem with this is that I don't want the rename of a\n> constraint to have to fall over into the pg_depend table. pg_depend\n> is currently happy with system OIDS or a Relation OID and some unique\n> number to represent it -- much as pg_description wouldn't want to know\n> the name of the constraint for the ability to add a comment to it.\n\nGood points, but I think those argue for assigning OIDs to constraints\nafter all. If that is what you want connum for then I have a *big*\nproblem with it: aren't you assuming that connum will be distinct from\nany attribute number that the relation might have? What's going to\nenforce that? 
Besides, the approach doesn't scale to allow other\nkinds of objects associated with a relation (just try keeping attnum,\nconnum, foonum, and barnum from overlapping...).\n\nI had once thought that we could avoid assigning OIDs to rules and\ntriggers, but learned differently as I got into the implementation.\nI'm thinking that constraints will be the same kind of thing; it'll\nbe a lot easier if you give them OIDs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 11:44:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_constraint " }, { "msg_contents": "> > The only problem with this is that I don't want the rename of a\n> > constraint to have to fall over into the pg_depend table.\npg_depend\n> > is currently happy with system OIDS or a Relation OID and some\nunique\n> > number to represent it -- much as pg_description wouldn't want to\nknow\n> > the name of the constraint for the ability to add a comment to it.\n>\n> Good points, but I think those argue for assigning OIDs to\nconstraints\n> after all. If that is what you want connum for then I have a *big*\n\nYes, OIDs are probably the right way to go.\n\n> problem with it: aren't you assuming that connum will be distinct\nfrom\n> any attribute number that the relation might have? What's going to\n\nAs far as pg_depend goes, it doesn't care whether they overlap or not\nas it knows the source (class) relation is pg_constraint.\n\nComment on stuff would need to be changed though.\n\n> I had once thought that we could avoid assigning OIDs to rules and\n> triggers, but learned differently as I got into the implementation.\n> I'm thinking that constraints will be the same kind of thing; it'll\n> be a lot easier if you give them OIDs.\n\nSounds like a plan. I'll\n\n\n", "msg_date": "Fri, 26 Apr 2002 13:28:53 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: pg_constraint " } ]
[ { "msg_contents": "I just recently upgraded from 7.0.x to 7.2.1. I installed from\npostgresql-7.2.1-2PGDG.i386.rpm on a Linux Redhat 7.1 system. I was\nable to resolve most dependancies, except for it telling me that I\nneeded libreadline.so.4, which \" ldconfig -p|grep readline\" showed me I\nalready had, so forced a --nodeps on it.\nHere's a self explanitory paste of what happens when I use \\x or \\l in\nPSQL\n-------------------------------------------------------------------------------------\n\npsql --version\npsql (PostgreSQL) 7.2.1\ncontains support for: readline, history, multibyte\nPortions Copyright (c) 1996-2001, PostgreSQL Global Development Group\nPortions Copyright (c) 1996, Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n\n psql -E template1\n********* QUERY **********\nSELECT usesuper FROM pg_user WHERE usename = 'root'\n**************************\n\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntemplate1=# \\z\n********* QUERY **********\nSELECT relname as \"Table\",\n relacl as \"Access privileges\"\nFROM pg_class\nWHERE relkind in ('r', 'v', 'S') AND\n relname NOT LIKE 'pg$_%' ESCAPE '$'\nORDER BY 1;\n**************************\n\nERROR: parser: parse error at or near \"escape\"\ntemplate1=# \\l\n********* QUERY **********\nSELECT d.datname as \"Name\",\n u.usename as \"Owner\",\n pg_encoding_to_char(d.encoding) as \"Encoding\"\nFROM pg_database d LEFT JOIN pg_user u ON d.datdba = u.usesysid\nORDER BY 1;\n**************************\n\nERROR: OUTER JOIN is not yet supported\ntemplate1=# \\q\n--------------------------------------------------------------------\n\nAs you can see, \\x and \\l in PSQL fail to work straight from\ninstallation in my case. 
Anybody have any ideas?\n\n\n\n", "msg_date": "Fri, 26 Apr 2002 03:37:50 -0500", "msg_from": "Shad <shad@okcmobiletech.com>", "msg_from_op": true, "msg_subject": "PSQL \\x \\l command issues" }, { "msg_contents": "Shad <shad@okcmobiletech.com> writes:\n\n> I just recently upgraded from 7.0.x to 7.2.1. I installed from\n> postgresql-7.2.1-2PGDG.i386.rpm on a Linux Redhat 7.1 system. I was\n> able to resolve most dependancies, except for it telling me that I\n> needed libreadline.so.4, which \" ldconfig -p|grep readline\" showed me I\n> already had, so forced a --nodeps on it.\n> Here's a self explanitory paste of what happens when I use \\x or \\l in\n> PSQL\n\nIt looks like you may still have some of the old installation\naround--what does \"select version();\" tell you?\n\n-Doug\n", "msg_date": "26 Apr 2002 09:38:55 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: PSQL \\x \\l command issues" }, { "msg_contents": "Shad <shad@okcmobiletech.com> writes:\n> I just recently upgraded from 7.0.x to 7.2.1.\n\nYou are clearly still talking to the 7.0 server:\n\n> ERROR: OUTER JOIN is not yet supported\n\nIn general, psql's backslash commands tend to be version-specific,\nand may fail when talking to a server of a different version.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Apr 2002 11:10:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PSQL \\x \\l command issues " }, { "msg_contents": "\n\n\n> You are clearly still talking to the 7.0 server:\n> In general, psql's backslash commands tend to be version-specific,\n> and may fail when talking to a server of a different version.\n>\n\nThat was it.
I never thought to check the postgresql version, I just\ndid a quick check of the psql version to verifiy the rpm installation.\nThanks for the insight\n\n\n", "msg_date": "Fri, 26 Apr 2002 22:28:33 -0500", "msg_from": "Shad <shad@okcmobiletech.com>", "msg_from_op": true, "msg_subject": "Re: PSQL \\x \\l command issues" } ]
[ { "msg_contents": "I have enabled the multibyte support by default. The default encoding\nis SQL_ASCII. Note that I just modify configure minimu, and I will\nremove unnecessary staffs including #ifdef MULTIBYTE step by step...\n--\nTatsuo Ishii\n", "msg_date": "Fri, 26 Apr 2002 22:59:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "multibyte support is now enabled by default" } ]
[ { "msg_contents": "\nHi all,\n\nI have a problem with metadata.........just read this .......\n\nLet us simply suppose a table \"test\" with 2 fileds \"name (varchar(10))\" and \n\"age (numeric)\" and there b values as name=\"abc\",age=\"20\".\n\nNow in a function i need to develop a list where the column header info has \nto b made in this format i.e., as \"column name, column type, column \nwidth\".........\n\nam getting the column name using PQfname function, column type using PQftype \nand column size using PQfsize....... am also able to get each filed values n \ntheir size correctly..........\n\nbut the problem now is ......when i use PQfsize to get the column header size \nthen i get a -1 if the field is a variable length in nature else i get the \nexact size........i.e., in the above declared table for both the name and age \nbeing variable fields am getting -1 for their size.......now i need some \nmechanism to get 10 for \"name\" header and number of bytes allocated for \n\"age\"(30,6).......is their any way to overcome this problem? \n\nIs this a draw back in Postgre? Is there any way i can get the exact size \nthat i allocated when creating the table.....infact most of the other \ndatabases do provide APIs with not having this problem........can anyone help \nme, please.........\n\nShra\n\n", "msg_date": "Sat, 27 Apr 2002 00:59:55 +0530", "msg_from": "Shra <shravan@yaskatech.com>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "Shra <shravan@yaskatech.com> writes:\n> [ where to get declared-length info for var-length columns ]\n\nThis is encoded in the atttypmod field (see PQfmod). 
I'd recommend\nusing format_type() to decode the info, but if you don't mind possibly\nhaving to change your code from time to time, you could just wire\nknowledge of the interpretation of typmod into your code.\n\nIn future, kindly be more selective about the lists you cross-post to.\nI generally make a practice of ignoring questions posted to the bugs\nlist, for example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Apr 2002 11:07:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUGS] " }, { "msg_contents": "Hi tom,\n\nhow to decode the atttypmod field........when i allocate 10 it shows it as 14 \nn for 20 it shows it as 24.......my problem is not just with \nvarchar........it is for all fields that has variable length .........the \nsame atttypmod returns -1 for integer? am confused........\n\nI want the size allocated for each field i a table whether its a var-length \nor fixed length........\n\nShra\n\nOn Saturday 27 April 2002 20:37, you wrote:\n> Shra <shravan@yaskatech.com> writes:\n> > [ where to get declared-length info for var-length columns ]\n>\n> This is encoded in the atttypmod field (see PQfmod). I'd recommend\n> using format_type() to decode the info, but if you don't mind possibly\n> having to change your code from time to time, you could just wire\n> knowledge of the interpretation of typmod into your code.\n>\n> In future, kindly be more selective about the lists you cross-post to.\n> I generally make a practice of ignoring questions posted to the bugs\n> list, for example.\n>\n> \t\t\tregards, tom lane\n\n-- \nShra\n\n", "msg_date": "Mon, 29 Apr 2002 11:50:25 +0530", "msg_from": "Shra <shravan@yaskatech.com>", "msg_from_op": true, "msg_subject": "MetaData (size of datatype)" }, { "msg_contents": "Hi Hannu,\n\n> -1 is given for types that are of fixed size and whose length can be\n> read from pg_type.typlen for that type.\n\nI don't think so...jsut look into this file pq_type.h....
it says.........\n*****************************************************************\n typlen is the number of bytes we use to represent a value of this type, e.g. \n4 for an int4. But for a variable length type, typlen is -1\n*****************************************************************\n\n-1 is for variable length n not for fixed length.........this point is very \nclear even in documentation..........\n\nnow how to find length for a numeric, varchar or anyother one that has \nvariable length where the system PQfsize returns -1.........?\n\nAs tom said.....The type is encoded in the atttypmod field (see PQfmod) and \nrecommended using format_type().....\nbut when this is used, it returns -1 for integer , real n other fixed \ndatatypes .........\n\nany better way , please.........\n\nShra\n\n", "msg_date": "Mon, 29 Apr 2002 16:46:34 +0530", "msg_from": "Shra <shravan@yaskatech.com>", "msg_from_op": true, "msg_subject": "Re: MetaData (size of datatype)" }, { "msg_contents": "On Mon, 2002-04-29 at 13:16, Shra wrote:\n> Hi Hannu,\n> \n> > -1 is given for types that are of fixed size and whose length can be\n> > read from pg_type.typlen for that type.\n> \n> I don't think so...jsut look into this file pq_type.h.... it says.........\n> *****************************************************************\n> typlen is the number of bytes we use to represent a value of this type, e.g. \n> 4 for an int4.
But for a variable length type, typlen is -1\n> *****************************************************************\n> \n> -1 is for variable length n not for fixed length.........this point is very \n> clear even in documentation..........\n\nYes, it in pg_type.typlen it is -1 is for variable length and actual\nlength for fixed-length types\n\nin pg_attribute.attypmod it is -1 for _fixed_ length types and actual\nlength for variable length types (actual length = defined length + 4\nbytes of length bytes)\n\n> now how to find length for a numeric, varchar or anyother one that has \n> variable length where the system PQfsize returns -1.........?\n> \n> As tom said.....The type is encoded in the atttypmod field (see PQfmod) and \n> recommended using format_type().....\n> but when this is used, it returns -1 for integer , real n other fixed \n> datatypes .........\n\nso do as Tom said - \n\nif\n PQfsize returns -1\nthen\n use PQfmod\nelse\n use PQfsize\n\n-----------\nHannu\n\n", "msg_date": "29 Apr 2002 15:54:01 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: MetaData (size of datatype)" }, { "msg_contents": "Shra <shravan@yaskatech.com> writes:\n> As tom said.....The type is encoded in the atttypmod field (see PQfmod) and \n> recommended using format_type().....\n> but when this is used, it returns -1 for integer , real n other fixed \n> datatypes .........\n\nThe interpretation of typmod is datatype-specific. That's why I\nrecommended using format_type. But if you want to know what format_type\nknows, look at the source code (src/backend/utils/adt/format_type.c).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 10:39:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MetaData (size of datatype) " } ]
[ { "msg_contents": "Just exactly how does one get an array into a system table?\n\nOf course, _int2 and int2[] aren't normal C constructs so using it\nwithin CATALOG won't work.\n\nI suppose thats why the vector types were invented?\n--\nRod\n\n", "msg_date": "Sat, 27 Apr 2002 00:49:35 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Arrays in system tables" }, { "msg_contents": "Rod Taylor wrote:\n> Just exactly how does one get an array into a system table?\n> \n> Of course, _int2 and int2[] aren't normal C constructs so using it\n> within CATALOG won't work.\n> \n> I suppose thats why the vector types were invented?\n\nWell, pg_shadow had pg_class has:\n\n\t relacl | aclitem[] | \n\nand pg_shadow has:\n\n\t useconfig | text[] | \n\nso I would use those as guides.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 27 Apr 2002 02:06:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Arrays in system tables" }, { "msg_contents": "Ahh.. 
no wonder my aimless greps couldn't find anything.\n\nI should just have read the BKI stuff ;)\n\nThanks\n--\nRod\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Saturday, April 27, 2002 2:06 AM\nSubject: Re: [HACKERS] Arrays in system tables\n\n\n> Rod Taylor wrote:\n> > Just exactly how does one get an array into a system table?\n> >\n> > Of course, _int2 and int2[] aren't normal C constructs so using it\n> > within CATALOG won't work.\n> >\n> > I suppose thats why the vector types were invented?\n>\n> Well, pg_shadow had pg_class has:\n>\n> relacl | aclitem[] |\n>\n> and pg_shadow has:\n>\n> useconfig | text[] |\n>\n> so I would use those as guides.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Sat, 27 Apr 2002 02:08:34 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: Arrays in system tables" } ]
[ { "msg_contents": "On Saturday 27 April 2002 02:56 pm, Thomas Lockhart wrote:\n> I've posted Mandrake RPMs for PostgreSQL 7.2.1 at\n\n> ftp://ftp.postgresql.org/pub/binary/v7.2.1/RPMS/mandrake-8.1\n\n> Thanks to Lamar Owens for the source RPM; I didn't have to change a\n> thing to get these built for Mandrake!\n\nThat's good news. Really good news.\n\nThe delay was worth it, I guess. I have also had a report of a Red Hat 7.1 \nuser getting a rebuild without difficulty. Good things.\n\nAlthough I wonder how many have downloaded the Red Hat 6.2 SPARC RPM's I \nuploaded. :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 27 Apr 2002 11:05:22 +0000", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Mandrake 8.1 RPMs posted" }, { "msg_contents": "I've posted Mandrake RPMs for PostgreSQL 7.2.1 at\n\n ftp://ftp.postgresql.org/pub/binary/v7.2.1/RPMS/mandrake-8.1\n\nThanks to Lamar Owens for the source RPM; I didn't have to change a\nthing to get these built for Mandrake!\n\n - Thomas\n", "msg_date": "Sat, 27 Apr 2002 07:56:53 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Mandrake 8.1 RPMs posted" }, { "msg_contents": "I have a file from a sybase/novell system that supposedly\ncontains an entire copy of the database. The file extension\nis .db. Does anyone know a way to convert that to a\npostgres database or even text that I could further convert\nmyself?\n\nThanks.\n\n\n", "msg_date": "Mon, 29 Apr 2002 16:12:40 -0500", "msg_from": "\"Frank Morton\" <fmorton@base2inc.com>", "msg_from_op": false, "msg_subject": "sybase db conversion" } ]
[ { "msg_contents": "I will be taking a vacation in April 30 to May 31. I will try to check\nemail in late May while on vacation, but I am not sure.\n\nI will of course catch up on all email and patches when I return.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 27 Apr 2002 19:28:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Vacation in May" }, { "msg_contents": "On Sat, 27 Apr 2002 19:28:09 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> I will be taking a vacation in April 30 to May 31. I will try to check\n> email in late May while on vacation, but I am not sure.\n> \n> I will of course catch up on all email and patches when I return.\n\nWould it be a good idea for someone else to step up to handle patches\nwhile you're away? Leaving everything for a month will probably cause\nsome breakage (e.g. what happened after the 7.2 tree opened).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Mon, 29 Apr 2002 14:41:33 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Vacation in May" }, { "msg_contents": "Neil Conway wrote:\n> On Sat, 27 Apr 2002 19:28:09 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > I will be taking a vacation in April 30 to May 31. I will try to check\n> > email in late May while on vacation, but I am not sure.\n> > \n> > I will of course catch up on all email and patches when I return.\n> \n> Would it be a good idea for someone else to step up to handle patches\n> while you're away? Leaving everything for a month will probably cause\n> some breakage (e.g. 
what happened after the 7.2 tree opened).\n\nTom has offered to apply patches that would drift too much from a\none-month delay. If patches do drift, I will take responsibility for\nmaking sure they merge cleanly. It's the least I can do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 16:36:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Vacation in May" } ]
[ { "msg_contents": "This is a set of odd ball string operations for searching text.", "msg_date": "Sun, 28 Apr 2002 16:44:48 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "a new contrib " } ]
[ { "msg_contents": "Decode is a simple function inspired by Oracle's decode function, useful for\ncreating queries that work on both Oracle and PostgreSQL.\n\nconcat(...) concatinates multiple strings into one string.", "msg_date": "Sun, 28 Apr 2002 18:29:48 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "new contrib: decode(..) concat(..)" } ]
[ { "msg_contents": "Looks like regproctooid() was removed here:\n/pgsql/src/backend/utils/adt/regproc.c\nversion 1.66 Thu Apr 25 02:56:55 2002\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/regproc.c.diff?r1=1.65&r2=1.66\n\nbut it was added to pg_dump.c here:\n/pgsql/src/bin/pg_dump/pg_dump.c\nversion 1.255 Wed Apr 24 22:39:49 2002 UTC\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump.c.diff?r1=1.254&r2=1.255\n\nand now pg_dumpall gives me this:\npg_dump: query to obtain list of indexes failed: ERROR: Function \n'regproctooid(regproc)' does not exist\n Unable to identify a function that satisfies the given argument \ntypes\n You may need to add explicit typecasts\npg_dump failed on template1, exiting\n\n\nJoe\n\n", "msg_date": "Sun, 28 Apr 2002 18:24:28 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "pg_dump broken in cvs tip" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Looks like regproctooid() was removed here:\n> /pgsql/src/backend/utils/adt/regproc.c\n> version 1.66 Thu Apr 25 02:56:55 2002\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/regproc.c.diff?r1=1.65&r2=1.66\n\n> but it was added to pg_dump.c here:\n> /pgsql/src/bin/pg_dump/pg_dump.c\n> version 1.255 Wed Apr 24 22:39:49 2002 UTC\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/bin/pg_dump/pg_dump.c.diff?r1=1.254&r2=1.255\n\nWups, looks like left hand (me) was not talking to right hand (Peter).\nWill fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Apr 2002 23:57:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump broken in cvs tip " } ]
[ { "msg_contents": "Feeding `pg_config --configure` into configure no longer works, as the\noutput of `pg_config --configure` now includes hypens (as in\n\"'--enable-cassert' '--enable-debug'\"), which configure rejects.\n\nThis appears to come from the change in the Makefile\n(/src/bin/pg_config/Makefile), where\n\ndiff -r1.3 -r1.4\n1c1\n< # $Header: /projects/cvsroot/pgsql/src/bin/pg_config/Makefile,v 1.3\n2001/09/16 16:11:11 petere Exp $\n---\n> # $Header: /projects/cvsroot/pgsql/src/bin/pg_config/Makefile,v 1.4\n2002/03/29 17:32:55 petere Exp $\n16c15\n< -e \"s,@configure@,$$configure,g\" \\\n---\n> -e \"s,@configure@,$(configure_args),g\" \\\n\nIs there a reason to keep this change if it breaks this feature, or is there\nan easy way to fix this? (I'm not a serious Makefile user, sorry!)\n\nThanks!\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Mon, 29 Apr 2002 10:08:20 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": true, "msg_subject": "pg_config Makefile includes hyphens in configure arguments" }, { "msg_contents": "Joel Burton writes:\n\n> Feeding `pg_config --configure` into configure no longer works, as the\n> output of `pg_config --configure` now includes hypens (as in\n> \"'--enable-cassert' '--enable-debug'\"), which configure rejects.\n\n[ apostrophes, I assume ]\n\nTry\n\n eval configure `pg_config --configure`\n\nThis allows correct behaviour with arguments that contain spaces.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 5 May 2002 19:52:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_config Makefile includes hyphens in configure arguments" } ]
[ { "msg_contents": "Has anyone considered adding support for GSSAPI\nauthentication/communication? The Java SDK version\n1.4 adds support for GSSAPI and it would be nice to\nuse it with the JDBC driver to provide Kerberos\nauthentication and secure communication.\n\nI played with the code and got authentication working\nwith the MIT GSSAPI library in auth.c (much like the\nKerberos V5 code) and the org.ietf.jgss package in the\nJDBC driver. I haven't looked at what it would take\nto add the per message verification/encryption stuff. \nI couldn't figure out from the mail list archives what\nthe plan and timeline is for the JDBC driver to\ncompile with JDK 1.4 but this change couldn't be added\nuntil that problem get resolved.\n\nAny thoughts?\n\nPhil\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Tax Center - online filing with TurboTax\nhttp://taxes.yahoo.com/\n", "msg_date": "Mon, 29 Apr 2002 07:09:07 -0700 (PDT)", "msg_from": "Phil Dodderidge <jspoon02@yahoo.com>", "msg_from_op": true, "msg_subject": "GSSAPI/Kerberos" } ]
[ { "msg_contents": "Hi all,\n\nI've already posted, a few weeks ago a concern about timestamp and\nindices.\n\nMy problem was that on my 7.1.3 db, I have timestamp columns with\nfonctinal indices as date(column).\n\nThes indices (indexes) wont work anymore because they're not marked loosy.\n\nI've been told that I should go for timestamp without timezone instead,\nwitch is okay because all timezones are the same. However, what happens\nwith daylight saving calculations. Is it still take into account??\n\nMany thaks for that great product!\n\nRegards\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Mon, 29 Apr 2002 17:14:20 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "clarification of timestamp" }, { "msg_contents": "> My problem was that on my 7.1.3 db, I have timestamp columns with\n> fonctinal indices as date(column).\n> Thes indices (indexes) wont work anymore because they're not marked loosy.\n> I've been told that I should go for timestamp without timezone instead,\n> witch is okay because all timezones are the same. However, what happens\n> with daylight saving calculations. Is it still take into account??\n\nNo, because there is no time zone to think about. What daylight savings\ncalculations would you need to consider? 
I'm a big believer in making\ndatabases time zone aware, but if everything really is happening within\nthe same time zone then your application should probably not notice an\neffect.\n\n - Thomas\n", "msg_date": "Mon, 29 Apr 2002 09:38:41 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: clarification of timestamp" }, { "msg_contents": "High Thomas!\n\nThanks for your reply.\n\nThe columns in question are radius start and end of a connection of a\ncustommer.\n\nNow, it's perfectly valid that start and stop cross a DST bondary in that\ncase, the duration computed as (stop-start) maybe false.\n\nWhat do you think?\n\nI'm perfectly OK with leaving time sones on but the table is growing and I\nneed dates indexes for my statistics.\n\nHope I made my self clear (i.d pardon my english)\n\nRegards,\nOn Mon, 29 Apr 2002, Thomas Lockhart wrote:\n\n> > My problem was that on my 7.1.3 db, I have timestamp columns with\n> > fonctinal indices as date(column).\n> > Thes indices (indexes) wont work anymore because they're not marked loosy.\n> > I've been told that I should go for timestamp without timezone instead,\n> > witch is okay because all timezones are the same. However, what happens\n> > with daylight saving calculations. Is it still take into account??\n> \n> No, because there is no time zone to think about. What daylight savings\n> calculations would you need to consider? I'm a big believer in making\n> databases time zone aware, but if everything really is happening within\n> the same time zone then your application should probably not notice an\n> effect.\n> \n> - Thomas\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality.
(St Exupery)\n\n", "msg_date": "Mon, 29 Apr 2002 18:46:10 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: clarification of timestamp" } ]
[ { "msg_contents": "In utils/int8.h we currently have\n\n/* this should be set in pg_config.h, but just in case it wasn't: */\n#ifndef INT64_FORMAT\n#warning \"Broken pg_config.h should have defined INT64_FORMAT\"\n#define INT64_FORMAT \"%ld\"\n#endif\n\nI would like to remove this. The #warning command is not standard C.\nI get a warning about it from HP's cc, and it may produce hard errors\non other non-gcc compilers with even less forgiving preprocessors.\nWe could just take out that one line --- but we are not in the habit of\nbackstopping configure/pg_config.h for any other settings, so I don't\nsee the point of doing it for INT64_FORMAT. I'd like to take out all\nfive lines.\n\nObjections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 13:26:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "#warning possibly dangerous?" } ]
[ { "msg_contents": "I've been thinking more about how to fold the routines in variable.c\ninto the standard GUC structure. I think that we really want to do\nthis, so that (a) these parameters can be set from GUC sources such as\npostgresql.conf, pg_database.datconfig, and pg_shadow.useconfig, and\n(b) we only have to solve the SET-parameter-rollback issue once as\na generic GUC feature, not fix it for each of these special-case\nvariables too.\n\nISTM that we'd need to do the following:\n\n1. Go back to a pure text-string-oriented interface to the per-variable\nroutines. Thomas' recent changes to make some of the variables have\na parsetree-based interface are okay as long as SETs coming from the\nparser are all you worry about --- but for GUC there has to be a textual\nequivalent that can be read from postgresql.conf or stored into\npg_database.datconfig/pg_shadow.useconfig. It seems to me to be cleanest\nto flatten the parsetree down into a string and then let the per-variable\nroutines parse that back. It might waste a few cycles, but the\nalternative is to support two interfaces (string and parsetree based)\nthroughout GUC. And we'll still need the parsing and flattening code\nto support postgresql.conf and pg_database.datconfig --- so what's the\nuse of supporting two interfaces?\n\n2. Add an optional \"show\" hook to GUC's set of per-variable hooks. If\npresent, this routine is called to produce the string that is used to\nSHOW the variable, rather than simply repeating the stored value. 
I see\nthis as being mainly useful for the datestyle and timezone variables,\nfor which the show routine might emit info that's not present in the\nmost-recently-assigned input string --- but it might be used for any\nvariable that would like to emit a \"canonical form\" representation of\nits value, rather than whatever was last passed to it.\n\nA variant approach would be to allow the assign_hook to return a new\nstring that becomes the actually stored value string; this would amount\nto performing the canonical-form calculation at assign time rather than\nat show time. The show hook seems more general though, and less work\nsince existing assign_hook code wouldn't need to be touched.\n\nThoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 14:05:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Folding variable.c into the GUC structure, redux" } ]
[ { "msg_contents": "I'm planning to add a mechanism to backend/utils/cache/inval.c that will\nallow additional modules to register to get notification of syscache and\nrelcache invalidation events. Right now, only the syscache and relcache\nget told about it --- but there's no reason we couldn't call additional\nroutines here.\n\nThe immediate thing I want this for is so that the namespace.c routines\ncan safely use a cached list of OIDs for accessible namespaces. Without\nthis cache, we'll have to repeatedly look up pg_namespace entries and\ncheck their USAGE privilege bits. That strikes me as a fairly expensive\noperation to have to do for each table, function, and operator name in\nevery query, when in practice the results aren't going to be changing\nvery often. I'd like to only do the lookups when search_path changes\nor we receive a notification of an update in pg_namespace.\n\nThis mechanism would also allow us to solve the plpgsql-cached-plans\nproblem that keeps coming up. If plpgsql registers a callback routine\nfor relcache events, then it'll get a notification every time something\nhappens to a relation schema. It could search its cached plans to see\nif there is any reference to that relation OID. If so, blow away the\ncached plan. (Or at least prevent it from being used again. There'd\nbe some possibility of this happening for a plan that's currently in\nuse, I believe, so you'd probably need to avoid discarding the plan\nuntil the active call is done.)\n\nWe'll have the same problem with the PREPAREd-plan feature that Neil is\nworking on, so it seems like time to do this. 
Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 15:43:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Syscache/relcache invalidation event callbacks" }, { "msg_contents": "On Mon, Apr 29, 2002 at 03:43:30PM -0400, Tom Lane wrote:\n> I'm planning to add a mechanism to backend/utils/cache/inval.c that will\n> allow additional modules to register to get notification of syscache and\n> relcache invalidation events. Right now, only the syscache and relcache\n> get told about it --- but there's no reason we couldn't call additional\n> routines here.\n> \n> The immediate thing I want this for is so that the namespace.c routines\n> can safely use a cached list of OIDs for accessible namespaces. Without\n> this cache, we'll have to repeatedly look up pg_namespace entries and\n> check their USAGE privilege bits. That strikes me as a fairly expensive\n> operation to have to do for each table, function, and operator name in\n> every query, when in practice the results aren't going to be changing\n> very often. I'd like to only do the lookups when search_path changes\n> or we receive a notification of an update in pg_namespace.\n> \n> This mechanism would also allow us to solve the plpgsql-cached-plans\n> problem that keeps coming up. If plpgsql registers a callback routine\n> for relcache events, then it'll get a notification every time something\n> happens to a relation schema. It could search its cached plans to see\n> if there is any reference to that relation OID. If so, blow away the\n\n IMHO is clean call a callback if a change is relavant for cached\n plan -- it means if Oid is used in plan.\n\n> cached plan. (Or at least prevent it from being used again. There'd\n> be some possibility of this happening for a plan that's currently in\n> use, I believe, so you'd probably need to avoid discarding the plan\n> until the active call is done.)\n> \n> We'll have the same problem with the PREPAREd-plan feature that Neil is\n> working on, so it seems like time to do this. Comments?\n\n Wanted! It's very good idea.\n\n I have a question, how I will know how changes are relevant for my \n query plan? IMHO is needful some hight-level API, like\n\n list = ExtractQueryPlanOids( plan );\n reg = RegisterOidsCallback( list, mycallback, mycallbackdata );\n \n and now I can do:\n\n mycallback(reg, mycallbackdata)\n {\n remove_plan_from_my_cache( (MyKey *) mycallbackdata );\n UnregisterOidsCallback(reg);\n }\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 30 Apr 2002 10:29:47 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Syscache/relcache invalidation event callbacks" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> I have a question, how I will know how changes are relevant for my \n> query plan? IMHO is needful some hight-level API, like\n\n> list = ExtractQueryPlanOids( plan );\n> reg = RegisterOidsCallback( list, mycallback, mycallbackdata );\n\nYes, some kind of routine to extract all the referenced relation OIDs\nin a plan tree would be a good idea. I can provide that. The inval\ncallback just tells you the OID of the relation that got flushed; it's\nup to you to get from there to the plans you need to rebuild. Perhaps\na hash table would work well.\n\nBTW, the inval callback had better just mark the plans invalid, not\ntry to rebuild them right away. I don't think it's safe to try to\ndo more database accesses while we're in the relcache invalidation\npath.
There'd\n> be some possibility of this happening for a plan that's currently in\n> use, I believe, so you'd probably need to avoid discarding the plan\n> until the active call is done.)\n> \n> We'll have the same problem with the PREPAREd-plan feature that Neil is\n> working on, so it seems like time to do this. Comments?\n\n Wanted! It's very good idea.\n\n I have a question, how I will know how changes are relevant for my \n query plan? IMHO is needful some hight-level API, like\n\n list = ExtractQueryPlanOids( plan );\n reg = RegisterOidsCallback( list, mycallback, mycallbackdata );\n \n and now I can do:\n\n mycallback(reg, mycallbackdata)\n {\n remove_plan_from_my_cache( (MyKey *) mycallbackdata );\n UnregisterOidsCallback(reg);\n }\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 30 Apr 2002 10:29:47 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Syscache/relcache invalidation event callbacks" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> I have a question, how I will know how changes are relevant for my \n> query plan? IMHO is needful some hight-level API, like\n\n> list = ExtractQueryPlanOids( plan );\n> reg = RegisterOidsCallback( list, mycallback, mycallbackdata );\n\nYes, some kind of routine to extract all the referenced relation OIDs\nin a plan tree would be a good idea. I can provide that. The inval\ncallback just tells you the OID of the relation that got flushed; it's\nup to you to get from there to the plans you need to rebuild. Perhaps\na hash table would work well.\n\nBTW, the inval callback had better just mark the plans invalid, not\ntry to rebuild them right away. I don't think it's safe to try to\ndo more database accesses while we're in the relcache invalidation\npath. 
\"Rebuild plan on next attempted use\" seems like a better idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 09:43:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscache/relcache invalidation event callbacks " }, { "msg_contents": "On Tue, Apr 30, 2002 at 09:43:29AM -0400, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > I have a question, how I will know how changes are relevant for my \n> > query plan? IMHO is needful some hight-level API, like\n> \n> > list = ExtractQueryPlanOids( plan );\n> > reg = RegisterOidsCallback( list, mycallback, mycallbackdata );\n> \n> Yes, some kind of routine to extract all the referenced relation OIDs\n\n The relations or others things like operators, functions, etc. Right?\n\n> in a plan tree would be a good idea. I can provide that. The inval\n> callback just tells you the OID of the relation that got flushed; it's\n> up to you to get from there to the plans you need to rebuild. Perhaps\n> a hash table would work well.\n\n There must be possible define some callback specific data too, the\n callback maybe will search query in some own specific cache and it\n require some key.\n \n> BTW, the inval callback had better just mark the plans invalid, not\n> try to rebuild them right away. I don't think it's safe to try to\n\n Hmm, it can depend on action, I can imagine:\n\n DROP TABLE tab;\n\n ERROR: mycallback(): can't rebuild the query used in PL/SQL function\n 'xyz'. Please drop this function first.\n \n ...table drop failed.\n \n This is maybe not possible implement now, but it's ideal conception\n :-)\n \n> do more database accesses while we're in the relcache invalidation\n> path. \"Rebuild plan on next attempted use\" seems like a better idea.\n\n Agree. 
It means store to cache query string too (I not sure if it's\n used in current qcache, but it's simple).\n\n There can be one query cache and one cached planns checking only, for RI, \n SPI, PL/SQL, PREPARE/EXECUTE. Or not? I think implement 4x same things is \n terrible.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 30 Apr 2002 16:26:55 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Syscache/relcache invalidation event callbacks" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> There must be possible define some callback specific data too, the\n> callback maybe will search query in some own specific cache and it\n> require some key.\n\nYeah, I have it set up similarly to on_proc_exit callbacks: when you\nregister the callback function you can also provide a Datum (typically\na pointer). But I'm not sure that there's any real value in this.\nAFAICS, such callbacks are always going to be interested in looking\nthrough a cache data structure of some kind, and so there will always\nbe global variables pointing to what they need to get to. (I'm\nenvisioning *one* callback function handling a whole cache, not a\ncallback for each entry!)\n \n>> BTW, the inval callback had better just mark the plans invalid, not\n>> try to rebuild them right away. I don't think it's safe to try to\n\n> Hmm, it can depend on action, I can imagine:\n\n> DROP TABLE tab;\n\n> ERROR: mycallback(): can't rebuild the query used in PL/SQL function\n> 'xyz'. 
Please drop this function first.\n\n> ...table drop failed.\n\nNope, that is NOT okay.\n\n(1) The drop might have been done by some other backend.\n\n(2) Even if it was your own backend, the drop is already committed\n by the time you hear about it.\n\nYou invalidate the plan, and then when and if that function is called\nagain, you'll flag an error as a natural result of trying to recompute\nthe plan. No shortcuts.\n\n(If we do want to prevent a drop in cases like this, it has to be done\nvia the kind of dependency mechanism that Rod is working on.)\n\n> There can be one query cache and one cached planns checking only, for RI, \n> SPI, PL/SQL, PREPARE/EXECUTE. Or not?\n\nHmm. Seems like it might be a good idea, but I'm not certain that all\nof these have exactly the same requirements. If they can all share one\ncache that'd definitely be a win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 10:39:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Syscache/relcache invalidation event callbacks " }, { "msg_contents": "On Tue, Apr 30, 2002 at 10:39:38AM -0400, Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > There must be possible define some callback specific data too, the\n> > callback maybe will search query in some own specific cache and it\n> > require some key.\n> \n> Yeah, I have it set up similarly to on_proc_exit callbacks: when you\n> register the callback function you can also provide a Datum (typically\n> a pointer). But I'm not sure that there's any real value in this.\n> AFAICS, such callbacks are always going to be interested in looking\n> through a cache data structure of some kind, and so there will always\n> be global variables pointing to what they need to get to. 
(I'm\n> envisioning *one* callback function handling a whole cache, not a\n> callback for each entry!)\n\n I understand.\n \n> >> BTW, the inval callback had better just mark the plans invalid, not\n> >> try to rebuild them right away. I don't think it's safe to try to\n> \n> > Hmm, it can depend on action, I can imagine:\n> \n> > DROP TABLE tab;\n> \n> > ERROR: mycallback(): can't rebuild the query used in PL/SQL function\n> > 'xyz'. Please drop this function first.\n> \n> > ...table drop failed.\n> \n> Nope, that is NOT okay.\n\n It was dream.\n\n> (If we do want to prevent a drop in cases like this, it has to be done\n> via the kind of dependency mechanism that Rod is working on.)\n\n It's good that someone works on dreams:-)\n\n> > There can be one query cache and one cached planns checking only, for RI, \n> > SPI, PL/SQL, PREPARE/EXECUTE. Or not?\n> \n> Hmm. Seems like it might be a good idea, but I'm not certain that all\n> of these have exactly the same requirements. If they can all share one\n> cache that'd definitely be a win.\n\n All they needs save query planns -- IMHO it's enough. It's always\n couple \"plan-identificator\" + \"plan-tree\".\n\n There is not problem create branchs of the cache, separate for SPI,\n PL/SQL ..etc. But the _routines_ and API will _same_.\n\n branch = qCache_CreateBranch(\"PL/SQL\");\n qCache_AddPlan(branch, plan, hashkey);\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 30 Apr 2002 17:22:10 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Syscache/relcache invalidation event callbacks" } ]
[ { "msg_contents": "On Tuesday 30 April 2002 02:36 am, Christopher Kings-Lynne wrote:\n> > We have been very fortunate to have avoided such problems since we\n> > started six years ago, and I hope it never happens.\n\n> There sure are a lot of arguments in the hackers list tho :) I do wish\n> people would be a little less 'ad hominem' in their argument styles,\n> however.\n\nWell, Chris, having been a Usenet user for over ten years (and a C-News admin \nfor the first three of that), I've seen 'discussions' that were really ad \nhominem. I've seen stuff that wouldn't even be found on slashdot. Visit the \nold archives on google of alt.flame. Or news.groups. Or even the civil \nnews.admin.... :-)\n\nThis group is far and away the most civil public development group I have ever \nseen. Really. No joke.\n\n> It would be an interesting thing to consider what would happen to the\n> Postgres project if Tom left one day...\n\nOooohhhh, don't give me nightmares! Tom is the original 'Bugzilla' in my \nbook. But it's still educational to see how he got started, and how recent \nthat really was.\n\nOf course, PostgreSQL existed before he came in, and PostgreSQL would exist \nafterwards -- that is, after all, the beauty of free software.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 29 Apr 2002 23:25:30 +0000", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Here is the resignation letter from Jordan Hubbard, long time FreeBSD\ncore member. The interesting part is where he explains that being in\nthe core/hackers group isn't fun anymore:\n\n\thttp://daily.daemonnews.org/view_story.php3?story_id=2837\n\nFreeBSD is certainly a larger project, so I don't know how relevant it\nis to our group, but I do think it is important to recognize how easily\na group can slip into an unhealthy situation. 
The killer line is:\n\n ...core still feels too much like the pre-WWII Polish Parliament\n sometimes, where we're fully capable of arguing some issue right up to\n the point where tanks are rolling through the front door and rendering\n the whole debate somewhat moot.\n\nWe have been very fortunate to have avoided such problems since we\nstarted six years ago, and I hope it never happens. \n\nFor those curious what the PostgreSQL core group discusses behind closed\ndoors -- basically nothing. I don't think we have had any meaningful\ndiscussion for many months, so it is not like things are being debated\nin core that you aren't hearing about; nothing is happening in core\nbecause there are no conflicts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 21:50:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Civility of core/hackers group" }, { "msg_contents": "> We have been very fortunate to have avoided such problems since we\n> started six years ago, and I hope it never happens.\n\nThere sure are a lot of arguments in the hackers list tho :) I do wish\npeople would be a little less 'ad hominem' in their argument styles,\nhowever.\n\nIt would be an interesting thing to consider what would happen to the\nPostgres project if Tom left one day...\n\nChris\n\n", "msg_date": "Tue, 30 Apr 2002 10:36:57 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > We have been very fortunate to have avoided such problems since we\n> > started six years ago, and I hope it never happens.\n> \n> There sure are a lot of arguments in the hackers list tho :) I do wish\n> people would 
be a little less 'ad hominem' in their argument styles,\n> however.\n\nYes, things do get a little testy sometimes, and it does worry me, but\nit seems to blow over quickly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 22:38:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Wrote a rather long message first time through. Anyway, basic problem\nis the major tickmarks on the next release.\n\nSMPng, KSEs, and various security overhauls are touching many portions\nof the sourcecode in a single shot. Normal development since the\nproject started has been fairly isolated. So theres a bit of a\nheadache with people committing where they normally wouldn't, but\nwaiting for due process could take forever (commits across 4 or 5\ndifferent maintainers sections). So, after the big commits, the\nmaintainers do bug triage.\n\nAll in all, it's working rather well. A ton of progress has been made\nand v5 release will be rather nice.\n\nIt'd be kinda like PostgreSQL implementing a fully object based\nstorage mechanism and interface, going threaded, and replacing node\ntrees with something else for v7.3. Once you start one, might as well\ndo the others since you hit most of the code anyway. With 3 or 4\npeople doing that it can be done. Getting 100+ comitters (not to\nmention those sending in patches) to work along side that mess and you\ncan see why Jordan is tired of baby sitting.\n\nCan't blame him. I think he had the most fun when he was simply\ncoming up with brilliant ideas like the ports tree. I still think he\nhas a few ideas left. 
Hopefully this will allow him the time to\nimplement -- probably on Darwin, but if it's good it'll float to <name\nproject here> shortly after.\n--\nRod\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Monday, April 29, 2002 9:50 PM\nSubject: [HACKERS] Civility of core/hackers group\n\n\n> Here is the resignation letter from Jordan Hubbard, long time\nFreeBSD\n> core member. The interesting part is where he explains that being\nin\n> the core/hackers group isn't fun anymore:\n>\n> http://daily.daemonnews.org/view_story.php3?story_id=2837\n>\n> FreeBSD is certainly a larger project, so I don't know how relevant\nit\n> is to our group, but I do think it is important to recognize how\neasily\n> a group can slip into a unhealthy situation. The killer line is:\n>\n> ...core still feels too much like the pre-WWII Polish Parliament\n> sometimes, where we're fully capable of arguing some issue right\nup to\n> the point where tanks are rolling through the front door and\nrendering\n> the whole debate somewhat moot.\n>\n> We have been very fortunate to have avoided such problems since we\n> started six years ago, and I hope it never happens.\n>\n> For those curious what the PostgreSQL core group discusses behind\nclosed\n> doors -- basically nothing. I don't think we have had any\nmeaningful\n> discussion for many months, so it is not like things are being\ndebated\n> in core that you aren't hearing about; nothing is happening in core\n> because there no conflicts.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n", "msg_date": "Mon, 29 Apr 2002 22:38:34 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "> > There sure are a lot of arguments in the hackers list tho :) I do\nwish\n> > people would be a little less 'ad hominem' in their argument\nstyles,\n> > however.\n>\n> Yes, things do get a little testy sometimes, and it does worry me,\nbut\n> it seems to blow over quickly.\n\nBah.. You can't beat a good whiteboard dual.\n\nMailing lists don't make good whiteboards though...\n\n", "msg_date": "Mon, 29 Apr 2002 22:51:57 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Rod Taylor wrote:\n> > > There sure are a lot of arguments in the hackers list tho :) I do\n> wish\n> > > people would be a little less 'ad hominem' in their argument\n> styles,\n> > > however.\n> >\n> > Yes, things do get a little testy sometimes, and it does worry me,\n> but\n> > it seems to blow over quickly.\n> \n> Bah.. You can't beat a good whiteboard dual.\n> \n> Mailing lists don't make good whiteboards though...\n\n\"Whiteboard dual\" is probably a good characterization.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 22:54:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Rod Taylor wrote:\n> > > > There sure are a lot of arguments in the hackers list tho :) I do\n> > wish\n> > > > people would be a little less 'ad hominem' in their argument\n> > styles,\n> > > > however.\n> > >\n> > > Yes, things do get a little testy sometimes, and it does worry me,\n> > but\n> > > it seems to blow over quickly.\n> >\n> > Bah.. You can't beat a good whiteboard dual.\n> >\n> > Mailing lists don't make good whiteboards though...\n> \n> \"Whiteboard dual\" is probably a good characterization.\n\nOr, is it \"dualing whiteboards\" (banjo player not included.)\n", "msg_date": "Mon, 29 Apr 2002 23:01:34 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "On Tue, 30 Apr 2002, Christopher Kings-Lynne wrote:\n\n> > We have been very fortunate to have avoided such problems since we\n> > started six years ago, and I hope it never happens.\n> \n> There sure are a lot of arguments in the hackers list tho :) I do wish\n> people would be a little less 'ad hominem' in their argument styles,\n> however.\n\nI'd be more concerned if hackers didn't argue for their own point of\nview/code/methodology.\n\nGavin\n\n", "msg_date": "Tue, 30 Apr 2002 13:30:27 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> It would be an interesting thing to consider what would happen to the\n>> Postgres project if Tom left one day...\n\n> Of course, PostgreSQL existed before he came in, and PostgreSQL would exist \n> afterwards -- that is, after all, the beauty of free 
software.\n\nI was about to make the same comment. The project can survive the loss\nof any individual member(s).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 01:09:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Civility of core/hackers group " } ]
[ { "msg_contents": "Appears psql needs to know how to differentiate between its own temp\ntables and those of another connection. On the plus side, this takes\ncare of a TODO item to add temp table listings to psql.\n\nConnection 1:\n\ntemplate1=# create temp table junk(col1 int4);\nCREATE\ntemplate1=# select * from junk;\n col1 \n------\n(0 rows)\n\n\nConnection 2:\ntemplate1=# \\d\n List of relations\n Name | Type | Owner \n------+-------+-------\n junk | table | rbt\n(1 row)\n\ntemplate1=# select * from junk;\nERROR: Relation \"junk\" does not exist\n\ntemplate1=# create temp table junk (col4 text);\nCREATE\n\n List of relations\n Name | Type | Owner \n------+-------+-------\n junk | table | rbt\n junk | table | rbt\n\n\n\n", "msg_date": "29 Apr 2002 20:32:23 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Temp tables are curious creatures...." }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Appears psql needs to know how to differentiate between its own temp\n> tables and those of another connection.\n\nMore generally, psql is as yet clueless about schemas.\n\nregression=# create schema foo;\nCREATE\nregression=# create schema bar;\nCREATE\nregression=# create table foo.tab1 (f1 int);\nCREATE\nregression=# create table bar.tab1 (f2 int);\nCREATE\nregression=# \\d tab1\n Table \"tab1\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | integer |\n f2 | integer |\n\nThis is ... um ... wrong. I am not real sure what the right behavior\nis, however. Should \\d accept patterns like schema.table (and how\nshould its wildcard pattern matching fit with that?) If you don't\nspecify a schema, should it only show tables visible in your search\npath?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Apr 2002 21:35:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures.... 
" }, { "msg_contents": "\nI think you have to use the backend pid to find your own. I think\nthere is a libpq function that returns the backend pid so psql can\nframe the proper query.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> Appears psql needs to know how to differentiate between its own temp\n> tables and those of another connection. On the plus side, this takes\n> care of a TODO item to add temp table listings to psql.\n> \n> Connection 1:\n> \n> template1=# create temp table junk(col1 int4);\n> CREATE\n> template1=# select * from junk;\n> col1 \n> ------\n> (0 rows)\n> \n> \n> Connection 2:\n> template1=# \\d\n> List of relations\n> Name | Type | Owner \n> ------+-------+-------\n> junk | table | rbt\n> (1 row)\n> \n> template1=# select * from junk;\n> ERROR: Relation \"junk\" does not exist\n> \n> template1=# create temp table junk (col4 text);\n> CREATE\n> \n> List of relations\n> Name | Type | Owner \n> ------+-------+-------\n> junk | table | rbt\n> junk | table | rbt\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Apr 2002 21:35:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures...." 
}, { "msg_contents": "On Tue, 2002-04-30 at 03:35, Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > Appears psql needs to know how to differentiate between it's own temp\n> > tables and those of another connection.\n> \n> More generally, psql is as yet clueless about schemas.\n> \n> regression=# create schema foo;\n> CREATE\n> regression=# create schema bar;\n> CREATE\n> regression=# create table foo.tab1 (f1 int);\n> CREATE\n> regression=# create table bar.tab1 (f2 int);\n> CREATE\n> regression=# \\d tab1\n> Table \"tab1\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | integer |\n> f2 | integer |\n>\n> This is ... um ... wrong. I am not real sure what the right behavior\n> is, however. Should \\d accept patterns like schema.table (and how\n> should its wildcard pattern matching fit with that?) If you don't\n> specify a schema, should it only show tables visible in your search\n> path?\n\nYes.\n\n\nFor me the intuitive answer would be\n\nregression=# \\d tab1\n Table \"foo.tab1\"\n Column | Type | Modifiers\n --------+---------+-----------\n f1 | integer |\n\n Table \"bar.tab1\"\n Column | Type | Modifiers\n --------+---------+-----------\n f2 | integer |\n\n\ni.e. default wildcarding of missing pieces\n\n-------------\nHannu\n\n\n", "msg_date": "30 Apr 2002 12:37:24 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures...." }, { "msg_contents": "On Tue, 2002-04-30 at 03:35, Bruce Momjian wrote:\n> \n> I think you have to use the backend pid to find your own. 
I think\n> there is a libpq function that returns the backend pid so psql can\n> frame the proper query.\n\nIs anyone working on information schema (or pg_xxx views) for use in\npsql and other development frontends?\n\nAlso, are there plans to have SQL-accessible backend_pid function in the\nbackend by default ?\n\nOn RH 7.1 I can create it as:\n\nCREATE FUNCTION getpid() RETURNS integer\n AS '/lib/libc.so.6','getpid'\nLANGUAGE 'C';\n\nBut I'd like it to be a builtin from the start so one can query it\nwithout relying on libpq\n\n---------------------------------------------------------------------------\nHannu\n\n\n", "msg_date": "30 Apr 2002 13:09:57 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures...." }, { "msg_contents": "> Is anyone working on information schema (or pg_xxx views) for use\nin\n> psql and other development frontends?\n\nI had started to try an information schema. Didn't make it very far.\nWay too much information missing to come anywhere near spec -- so I've\nstarted trying to fill in those holes.\n\nGive me some time to finish my current set of patches and I'll go back\nat the information schema (hopefully with more luck).\n\n", "msg_date": "Tue, 30 Apr 2002 08:18:14 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures...." }, { "msg_contents": "\nAdd to TODO:\n\n\t* Add getpid() function to backend \nWe have this in libpq, but it should be in the backend code as a\nfunction call too.\n\n---------------------------------------------------------------------------\n\nHannu Krosing wrote:\n> On Tue, 2002-04-30 at 03:35, Bruce Momjian wrote:\n> > \n> > I think you have to use the backend pid to find your own. 
I think\n> > there is a libpq function that returns the backend pis so psql can\n> > frame the proper query.\n> \n> Is anyoune working on information schema (or pg_xxx views) for use in\n> psql and other development frontends?\n> \n> Also, are there plans to have SQL-accessible backend_pid function in the\n> backend by default ?\n> \n> On RH 7.1 I can create it as:\n> \n> CREATE FUNCTION getpid() RETURNS integer\n> AS '/lib/libc.so.6','getpid'\n> LANGUAGE 'C';\n> \n> But I'd like it to be a builtin from the start so one can query it\n> without relying on libpq\n> \n> ---------------------------------------------------------------------------\n> Hannu\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 May 2002 18:43:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables are curious creatures...." } ]
[ { "msg_contents": "> Do we want this feature?\n> -----------------------------------------------------\n> Based on the many posts on this topic, I think the answer to this is a\n> resounding yes.\n\nDefinitely!\n\n> How do we want the feature to behave?\n> -----------------------------------------------------\n> A SRF should behave similarly to any other table_ref (RangeTblEntry),\n> i.e. as a tuple source in a FROM clause. Currently there are three\n> primary kinds of RangeTblEntry: RTE_RELATION (ordinary relation),\n> RTE_SUBQUERY (subquery in FROM), and RTE_JOIN (join). SRF would join\n> this list and behave in much the same manner.\n\nYes - I don't see any point in adhering to the SQL standard lame definition.\nWe can just make \"CALL proc()\" map to \"SELECT * FROM proc()\" in the parser\nfor compliance.\n\n> How do we want the feature implemented? (my proposal)\n> -----------------------------------------------------\n> 1. Add a new table_ref node type:\n> - Current nodes are RangeVar, RangeSubselect, or JoinExpr\n> - Add new RangePortal node as a possible table_ref. The RangePortal\n> node will be extented from the current Portal functionality.\n>\n> 2. Add support for three modes of operation to RangePortal:\n> a. Repeated calls -- this is the existing API for SRF, but\n> implemented as a tuple source instead of as an expression.\n> b. Materialized results -- use a TupleStore to materialize the\n> result set.\n> c. Return query -- use current Portal functionality, fetch entire\n> result set.\n>\n> 3. Add support to allow the RangePortal to materialize modes 1 and 3, if\n> needed for a re-read.\n\nLooks cool. That's stuff outta my league tho.\n\n> 4. Add a WITH keyword to CREATE FUNCTION, allowing SRF mode to be\n> specified. This would default to mode a) for backward compatibility.\n\nInteresting idea. Didn't occur to me that we could specify it on a\nper-function level. How do Oracle and Firebird do it? 
What about the issue\nof people maybe wanting different behaviours at different times? ie.\nstatement level, rather than function level?\n\n> 5. Ignore the current code which allows functions to return multiple\n> results as expressions; we can leave it there, but deprecate it with the\n> intention of eventual removal.\n\nWhat does the current 'setof' pl/pgsql business actually _do_?\n\nChris\n\n", "msg_date": "Tue, 30 Apr 2002 14:59:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [RFC] Set Returning Functions" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> 5. Ignore the current code which allows functions to return multiple\n>> results as expressions; we can leave it there, but deprecate it with the\n>> intention of eventual removal.\n\n> What does the current 'setof' pl/pgsql business actually _do_?\n\nplpgsql doesn't handle setof at all, AFAIR. SQL-language functions do.\nThe gold is hidden in src/backend/executor/*.c. The SQL function\nexecutor (functions.c) suspends the query plan for the function's final\nSELECT, and re-executes it to get one more result row each time it's\nre-called. That's okay as far as it goes; but look at what happens when\nsuch a function is called from a SELECT targetlist.\n\nThe ExprMultipleResult flag from the function propagates up through\nexecQual.c, to ExecTargetList which forms a new result tuple for each\nfunction result. All the node executor routines that call ExecProject\nhave to be prepared to deal with that (eg, first if() in ExecScan).\n\nThis is all really messy, both in the implementation and in the\nconception IMHO; for example, the behavior with multiple SRFs in the\nsame targetlist is really pretty stupid (and it was worse when the\ncode left Berkeley). I'd like to deprecate and eventually remove the\nwhole feature. 
SRFs in FROM (as table sources) make way more sense\nthan SRFs in targetlists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 09:58:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [RFC] Set Returning Functions " } ]
[ { "msg_contents": "Current CVS tip has most of the needed infrastructure for SQL-spec\nschema support: you can create schemas, and you can create objects\nwithin schemas, and search-path-based lookup for named objects works.\nThere's still a number of things to be done in the backend, but it's\ntime to start working on schema support in the various frontends that\nhave been broken by these changes. I believe that pretty much every\nfrontend library and client application that looks at system catalogs\nwill need revisions. So, this is a call for help --- I don't have the\ntime to fix all the frontends, nor sufficient familiarity with many\nof them.\n\nJDBC and ODBC metadata code is certainly broken; so are the catalog\nlookups in pgaccess, pgadmin, and so on. psql and pg_dump are broken\nas well (though I will take responsibility for fixing pg_dump, and will\nthen look at psql if no one else has done it by then). I'm not even\nsure what else might need to change.\n\nHere's an example of what's broken:\n\ntest=# create schema foo;\nCREATE\ntest=# create table foo.mytab (f1 int, f2 text);\nCREATE\ntest=# create schema bar;\nCREATE\ntest=# create table bar.mytab (f1 text, f3 int);\nCREATE\ntest=# \\d mytab\n Table \"mytab\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | text |\n f1 | integer |\n f2 | text |\n f3 | integer |\n\npsql's \\d command hasn't the foggiest idea that there might now be more\nthan one pg_class entry with the same relname. It needs to be taught\nabout that --- but even before that, we need to work out schema-aware\ndefinitions of the wildcard expansion rules for psql's backslash\ncommands that accept wildcarded names. In the above example, probably\n\"\\d mytab\" should have said \"no such table\" --- because neither foo nor\nbar were in my search path, so I should not see them unless I give a\nqualified name (eg, \"\\d foo.mytab\" or \"\\d bar.mytab\"). 
For commands\nthat accept wildcard patterns, what should happen --- should \"\\z my*\"\nfind these tables, if they're not in my search path? Is \"\\z f*.my*\"\nsensible to support? I dunno yet.\n\nIf you've got time to work on fixing frontend code, or even helping\nto work out definitional questions like these, please check out current\nCVS tip or a nightly snapshot tarball and give it a try. (But do NOT\nput any valuable data into current sources --- until pg_dump is fixed,\nyou won't be able to produce a useful backup of a database that uses\nmultiple schemas.)\n\nSome documentation can be found at\nhttp://developer.postgresql.org/docs/postgres/sql-naming.html\nhttp://developer.postgresql.org/docs/postgres/sql-createschema.html\nhttp://developer.postgresql.org/docs/postgres/sql-grant.html\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-GENERAL (see SEARCH_PATH)\nbut more needs to be written. (In particular, I think the Tutorial\ncould stand to have a short section added about schemas; and the Admin\nGuide ought to be revised to discuss running one database with per-user\nschemas as a good alternative to per-user databases. Any volunteers to\nwrite that stuff?)\n\nSome things that don't work yet in the backend:\n\n1. There's no DROP SCHEMA. (If you need to, you can drop the contained\nobjects and then manually delete the pg_namespace row for the schema.)\nNo ALTER SCHEMA RENAME either (though you can just UPDATE the\npg_namespace row if you need that).\n\n2. CREATE SCHEMA with sub-statements isn't up to SQL spec requirements\nyet. Best bet is to create the schema and then create contained objects\nseparately, as in the above example.\n\n3. I'm not sure that the newly-defined GRANT privileges are all checked\neverywhere they should be. Also, the default privilege settings\nprobably need fine-tuning still.\n\n4. 
We probably need more helper functions and/or predefined system views\nto make it possible to fix the frontends in a reasonable way --- for\nexample, it's still quite difficult for something looking at pg_class to\ndetermine which tables are visible in the current search path. Thoughts\nabout what should be provided are welcome.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 13:31:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Schemas: status report, call for developers" }, { "msg_contents": "<snip>\n\n>\n>Here's an example of what's broken:\n>\n>test=# create schema foo;\n>CREATE\n>test=# create table foo.mytab (f1 int, f2 text);\n>CREATE\n>test=# create schema bar;\n>CREATE\n>test=# create table bar.mytab (f1 text, f3 int);\n>CREATE\n>test=# \\d mytab\n> Table \"mytab\"\n> Column | Type | Modifiers\n>--------+---------+-----------\n> f1 | text |\n> f1 | integer |\n> f2 | text |\n> f3 | integer |\n>\n\nI would think this should produce the following:\n\n Table \"bar.mytab\"\n Column | Type | Modifiers\n--------+---------+-----------\n f1 | text |\n f1 | integer |\n\n Table \"foo.mytab\"\n Column | Type | Modifiers\n--------+---------+-----------\n f2 | text |\n f3 | integer |\n\nWhat do you think?\n\n\n- Bill Cunningham\n\n\n\n", "msg_date": "Tue, 30 Apr 2002 10:45:45 -0700", "msg_from": "Bill Cunningham <billc@ballydev.com>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Bill Cunningham <billc@ballydev.com> writes:\n> I would think this should produce the following:\n\n> test=# \\d mytab\n> Table \"bar.mytab\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | text |\n> f1 | integer |\n\n> Table \"foo.mytab\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f2 | text |\n> f3 | integer |\n\nEven when schemas bar and foo are not in your search path? 
(And,\nperhaps, not even accessible to you?)\n\nMy gut feeling is that \"\\d mytab\" should tell you about the same\ntable that \"select * from mytab\" would find. Anything else is\nprobably noise to you --- if you wanted to know about foo.mytab,\nyou could say \"\\d foo.mytab\".\n\nHowever, \\d is not a wildcardable operation AFAIR. For the commands\nthat do take wildcard patterns (like \\z), I'm not as sure what should\nhappen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 14:23:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "I think DBD::Pg driver very much depends on system tables.\nHope, Jeffrey (current maintainer) is online.\n\n\tregards,\n\n\t\tOleg\nOn Tue, 30 Apr 2002, Tom Lane wrote:\n\n> Current CVS tip has most of the needed infrastructure for SQL-spec\n> schema support: you can create schemas, and you can create objects\n> within schemas, and search-path-based lookup for named objects works.\n> There's still a number of things to be done in the backend, but it's\n> time to start working on schema support in the various frontends that\n> have been broken by these changes. I believe that pretty much every\n> frontend library and client application that looks at system catalogs\n> will need revisions. So, this is a call for help --- I don't have the\n> time to fix all the frontends, nor sufficient familiarity with many\n> of them.\n>\n> JDBC and ODBC metadata code is certainly broken; so are the catalog\n> lookups in pgaccess, pgadmin, and so on. psql and pg_dump are broken\n> as well (though I will take responsibility for fixing pg_dump, and will\n> then look at psql if no one else has done it by then). 
I'm not even\n> sure what else might need to change.\n>\n> Here's an example of what's broken:\n>\n> test=# create schema foo;\n> CREATE\n> test=# create table foo.mytab (f1 int, f2 text);\n> CREATE\n> test=# create schema bar;\n> CREATE\n> test=# create table bar.mytab (f1 text, f3 int);\n> CREATE\n> test=# \\d mytab\n> Table \"mytab\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> f1 | text |\n> f1 | integer |\n> f2 | text |\n> f3 | integer |\n>\n> psql's \\d command hasn't the foggiest idea that there might now be more\n> than one pg_class entry with the same relname. It needs to be taught\n> about that --- but even before that, we need to work out schema-aware\n> definitions of the wildcard expansion rules for psql's backslash\n> commands that accept wildcarded names. In the above example, probably\n> \"\\d mytab\" should have said \"no such table\" --- because neither foo nor\n> bar were in my search path, so I should not see them unless I give a\n> qualified name (eg, \"\\d foo.mytab\" or \"\\d bar.mytab\"). For commands\n> that accept wildcard patterns, what should happen --- should \"\\z my*\"\n> find these tables, if they're not in my search path? Is \"\\z f*.my*\"\n> sensible to support? I dunno yet.\n>\n> If you've got time to work on fixing frontend code, or even helping\n> to work out definitional questions like these, please check out current\n> CVS tip or a nightly snapshot tarball and give it a try. 
(But do NOT\n> put any valuable data into current sources --- until pg_dump is fixed,\n> you won't be able to produce a useful backup of a database that uses\n> multiple schemas.)\n>\n> Some documentation can be found at\n> http://developer.postgresql.org/docs/postgres/sql-naming.html\n> http://developer.postgresql.org/docs/postgres/sql-createschema.html\n> http://developer.postgresql.org/docs/postgres/sql-grant.html\n> http://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-GENERAL (see SEARCH_PATH)\n> but more needs to be written. (In particular, I think the Tutorial\n> could stand to have a short section added about schemas; and the Admin\n> Guide ought to be revised to discuss running one database with per-user\n> schemas as a good alternative to per-user databases. Any volunteers to\n> write that stuff?)\n>\n> Some things that don't work yet in the backend:\n>\n> 1. There's no DROP SCHEMA. (If you need to, you can drop the contained\n> objects and then manually delete the pg_namespace row for the schema.)\n> No ALTER SCHEMA RENAME either (though you can just UPDATE the\n> pg_namespace row if you need that).\n>\n> 2. CREATE SCHEMA with sub-statements isn't up to SQL spec requirements\n> yet. Best bet is to create the schema and then create contained objects\n> separately, as in the above example.\n>\n> 3. I'm not sure that the newly-defined GRANT privileges are all checked\n> everywhere they should be. Also, the default privilege settings\n> probably need fine-tuning still.\n>\n> 4. We probably need more helper functions and/or predefined system views\n> to make it possible to fix the frontends in a reasonable way --- for\n> example, it's still quite difficult for something looking at pg_class to\n> determine which tables are visible in the current search path. 
Thoughts\n> about what should be provided are welcome.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 30 Apr 2002 21:41:47 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Tom Lane wrote:\n\n>Bill Cunningham <billc@ballydev.com> writes:\n>\n>>I would think this should produce the following:\n>>\n>\n>>test=# \\d mytab\n>> Table \"bar.mytab\"\n>> Column | Type | Modifiers\n>>--------+---------+-----------\n>> f1 | text |\n>> f1 | integer |\n>>\n>\n>> Table \"foo.mytab\"\n>> Column | Type | Modifiers\n>>--------+---------+-----------\n>> f2 | text |\n>> f3 | integer |\n>>\n>\n>Even when schemas bar and foo are not in your search path? (And,\n>perhaps, not even accessible to you?)\n>\n>My gut feeling is that \"\\d mytab\" should tell you about the same\n>table that \"select * from mytab\" would find. Anything else is\n>probably noise to you --- if you wanted to know about foo.mytab,\n>you could say \"\\d foo.mytab\".\n>\n>However, \\d is not a wildcardable operation AFAIR. For the commands\n>that do take wildcard patterns (like \\z), I'm not as sure what should\n>happen.\n>\n>\t\t\tregards, tom lane\n>\nSo we now have a default schema name of the current user? For example:\n\nfoobar@somewhere> psql testme\ntestme=# select * from mytab\n\n Table \"foobar.mytab\"\n Column | Type | Modifiers\n--------+---------+-----------\n f2 | text |\n f3 | integer |\n\n\nlike that? 
This is exactly how DB2 operates, implicit schemas for each user.\n\n- Bill Cunningham\n\n\n", "msg_date": "Tue, 30 Apr 2002 13:46:34 -0700", "msg_from": "Bill Cunningham <billc@ballydev.com>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Bill Cunningham <billc@ballydev.com> writes:\n> So we now have a default schema name of the current user?\n> ... This is exactly how DB2 operates, implicit schemas for each user.\n\nYou can operate that way. It's not the default though; the DBA will\nhave to explicitly do a CREATE SCHEMA for each user. For instance:\n\ntest=# CREATE USER tgl;\nCREATE USER\ntest=# CREATE SCHEMA tgl AUTHORIZATION tgl;\nCREATE\ntest=# \\c - tgl\nYou are now connected as new user tgl.\ntest=> select current_schemas();\n current_schemas\n-----------------\n {tgl,public}\t\t\t-- my search path is now tgl, public\n(1 row)\n\n-- this creates tgl.foo:\ntest=> create table foo(f1 int);\nCREATE\ntest=> select * from foo;\n f1\n----\n(0 rows)\n\ntest=> select * from tgl.foo;\n f1\n----\n(0 rows)\n\n\nIf you don't create schemas then you get backwards-compatible behavior\n(all the users end up sharing the \"public\" schema as their current\nschema).\n\nSee the development-docs pages I mentioned before for details.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 17:16:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "> For commands\n> that accept wildcard patterns, what should happen --- should \"\\z my*\"\n> find these tables, if they're not in my search path? Is \"\\z f*.my*\"\n> sensible to support? 
I dunno yet.\n\nTechnical question - this query:\n\n SELECT nspname AS schema,\n relname AS object\n FROM pg_class c\nINNER JOIN pg_namespace n\n ON c.relnamespace=n.oid\n WHERE relkind in ('r', 'v', 'S') AND\n relname NOT LIKE 'pg$_%%' ESCAPE '$'\n\nproduces a result like this:\n\n schema | object \n--------+--------\n public | abc\n foo | abc\n foo | xyz\n bar | xyz\n(4 rows)\n\nHow can I restrict the query to the schemas in the \ncurrent search path, i.e. the schema names returned\nby SELECT current_schemas() ?\n\n\nIan Barwick\n\n", "msg_date": "Wed, 1 May 2002 03:46:37 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "> test=# CREATE USER tgl;\n> CREATE USER\n> test=# CREATE SCHEMA tgl AUTHORIZATION tgl;\n> CREATE\n\nWhat about \"CREATE USER tgl WITH SCHEMA;\" ?\n\nWhich will implicitly do a \"CREATE SCHEMA tgl AUTHORIZATION tgl;\"\n\nChris\n\n", "msg_date": "Wed, 1 May 2002 10:09:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "> produces a result like this:\n>\n> schema | object\n> --------+--------\n> public | abc\n> foo | abc\n> foo | xyz\n> bar | xyz\n> (4 rows)\n>\n> How can I restrict the query to the schemas in the\n> current search path, i.e. 
the schema names returned\n> by SELECT current_schemas() ?\n\nNow, if we had functions-returning-sets, this would all be easy as all you'd\nneed to do would be to join it with the function returning the set of\nschemas in your search path :)\n\nChris\n\n", "msg_date": "Wed, 1 May 2002 10:15:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What about \"CREATE USER tgl WITH SCHEMA;\" ?\n\nUh, what about it? It's not a standard syntax AFAIK.\n\nIf I were running an installation where I wanted \"one schema per user\"\nas default, I'd rather have an \"auto_create_schema\" SET parameter that\ntold CREATE USER to do the dirty work for me automatically.\n\nBut the sneaky part of this is that users are installation-wide,\nwhereas schemas are only database-wide. To make this really work\npainlessly, you'd want some kind of mechanism that'd auto-create\na schema for the user in every database he's allowed access to.\nHow can we define that cleanly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Apr 2002 23:20:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "Ian Barwick <barwick@gmx.net> writes:\n> How can I restrict the query to the schemas in the \n> current search path, i.e. the schema names returned\n> by SELECT current_schemas() ?\n\nWell, this is the issue open for public discussion.\n\nWe could define some function along the lines of\n\"is_visible_table(oid) returns bool\", and then you could use\nthat as a WHERE clause in your query. 
But I'm worried about\nthe performance implications --- is_visible_table() would have\nto do several retail probes of the system tables, and I don't\nsee any way to optimize that across hundreds of table OIDs.\n\nI have a nagging feeling that this could be answered by defining\na view on pg_class that only shows visible tables ... but I don't\nquite see how to define that efficiently, either. Ideas anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 00:38:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "Tom Lane wrote:\n> psql's \\d command hasn't the foggiest idea that there might now be more\n> than one pg_class entry with the same relname. It needs to be taught\n> about that --- but even before that, we need to work out schema-aware\n> definitions of the wildcard expansion rules for psql's backslash\n> commands that accept wildcarded names. In the above example, probably\n> \"\\d mytab\" should have said \"no such table\" --- because neither foo nor\n> bar were in my search path, so I should not see them unless I give a\n> qualified name (eg, \"\\d foo.mytab\" or \"\\d bar.mytab\").\n\n(and also in mail to Bill Cunningham)\n> My gut feeling is that \"\\d mytab\" should tell you about the same\n> table that \"select * from mytab\" would find. Anything else is\n> probably noise to you --\n\nGeneral consistency with SELECT behaviour sounds right to me.\n\n> For commands\n> that accept wildcard patterns, what should happen --- should \"\\z my*\"\n> find these tables, if they're not in my search path? Is \"\\z f*.my*\"\n> sensible to support? 
I dunno yet.\n\nMy digestive organs tell me: an unqualified wildcard pattern should\nstick to the search path; the search path should only be overridden\nwhen the user explicitly provides a wildcard pattern for schema names.\nThis would be consistent with the behaviour of \\d etc., i.e.\n\"\\d mytab\" should look for 'mytab' in the current search path;\n\"\\dt my*\" should look for tables beginning with \"my\" in the current\nsearch path; \"\\dt f*.my*\" would look for same in all schemas beginning\nwith \"f\"; and \"\\dt *.my*\" would look in all schemas.\n\nProblem: \"wildcard pattern\" is a bit of a misnomer, the relevant\ncommands take regular expressions, which means the dot in \"\\z f*.my*\"\nwon't necessarily be the dot in \"\\z foo.mytab\" - it would have to\nbe written \"\\z f*\\\\.my*\". Though technically correct this\nstrikes me as counterintuitive, especially with the double escaping\n(once for psql, once for the regex literal).\n\nAn alternative solution would be to allow the pattern matching\ncommands to accept either one (\"\\z my*\") or two (\"\\z f* my*\") regular\nexpressions; in the latter case the first regex is for the schema name,\nthe second for the object name. However, doing away with the dot altogether\nis also counterintuitive and breaks with the usual schema denotation.\n\nProposal: in \"wildcard\" slash commands drop regular expressions and\nuse LIKE for pattern matching. This would enable commands such as\n\"\\z f%.my%\". (Would this mean major breakage? Is there an installed\nbase of scripts which rely on psql slash commands and regular expressions?)\nI can't personally recall ever having needed to use a regular expression\nany more complex than the wildcard pattern matching which could be implemented\njust as well with LIKE. 
(Anyone requiring regular expression matching could\nstill create appropriate views).\n\nQuestion - which output format is preferable?:\n\n schema_test=# \\z\n Access privileges for database \"schema_test\"\n Schema | Object | Access privileges\n --------+--------+-------------------\n public | bar |\n foo | bar |\n (2 rows)\n\nor\n\n schema_test=# \\z\n Access privileges for database \"schema_test\"\n Object | Access privileges\n ------------+-------------------\n public.bar |\n foo.bar |\n (2 rows)\n\n> If you've got time to work on fixing frontend code, or even helping\n> to work out definitional questions like these (...)\n\nHmm, time for \"ask not what your database can do for you but what\nyou can do for your database\". I'm willing to put my keyboard where\nmy mouth is and take on psql once any outstanding questions are\ncleared up, if no one better qualified than me comes\nforward and provided someone takes a critical look at anything I do.\n\n\nYours\n\nIan Barwick\n\n\n", "msg_date": "Thu, 2 May 2002 00:54:18 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "On Tue, Apr 30, 2002 at 09:41:47PM +0300, Oleg Bartunov wrote:\n> I think DBD::Pg driver very much depends on system tables.\n> Hope, Jeffrey (current maintainer) is online.\n\nThese changes may break DBD::Pg. What is the expected\ntime of this release? I will review my code for impact.\n\nThanks for the warning,\nJeffrey\n", "msg_date": "Wed, 1 May 2002 16:04:10 -0700", "msg_from": "\"Jeffrey W. Baker\" <jwbaker@acm.org>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\nOn Thu, 2 May 2002, Ian Barwick wrote:\n\n> Tom Lane wrote:\n> [snipped]\n> > My gut feeling is that \"\\d mytab\" should tell you about the same\n> > table that \"select * from mytab\" would find. 
Anything else is\n> > probably noise to you --\n> \n> General consistency with SELECT behaviour sounds right to me.\n\n\nI take it temporary tables are going to be included in such a list, since that\nwould seem sensible from the SELECT behaviour point of view, and may be even\nalso from the user's point of view.\n\nSo, how does one determine the current schema for temporary tables, i.e. what\nname would be in search_path if it wasn't implicitly included? (Just throwing\nideas around in my head)\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Thu, 2 May 2002 03:05:41 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> So, how does one determine the current schema for temporary tables,\n> i.e. what name would be in search_path if it wasn't implicitly included?\n\nThe temp schema is pg_temp_nnn where nnn is your BackendId (PROC array\nslot number). AFAIK there isn't any exported way to determine your\nBackendId from an SQL query. Another problem is that the pg_temp\nschema is \"lazily evaluated\" --- it's not actually attached to and\ncleaned out until you first try to create a temp table in a particular\nsession. This seems a clear win from a performance point of view,\nbut it makes life even more difficult for queries that are trying to\ndetermine which pg_class entries are visible in one's search path.\n\nI have already had occasion to write subroutines that answer the\nquestion \"is this relation (resp. type, function, operator) visible\nin the current search path?\" --- where visible means not just that\nits namespace is in the path, but that this object is the frontmost\nentry of its particular name. 
Perhaps it'd make sense to export these\nroutines as SQL functions, along the lines of \"relation_is_visible(oid)\nreturns bool\". Then one could use queries similar to\n\n\tselect * from pg_class p\n\twhere p.relname like 'match_pattern'\n\t and relation_is_visible(p.oid);\n\nto implement a psql command that requires finding tables matching\nan (unqualified) relation-name pattern. The tables found would be\nonly those that you could reference with unqualified table names.\n\nThis doesn't yield much insight about cases where the match pattern\nincludes a (partial?) schema-name specification, though. If I'm\nallowed to write something like \"\\z s*.t*\" to find tables beginning\nwith t in schemas beginning with s, should that include all schemas\nbeginning with s? Only those in my search path (probably wrong)?\nOnly those that I have USAGE privilege on? Not sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 23:33:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Thu, 2002-05-02 at 05:33, Tom Lane wrote:\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > So, how does one determine the current schema for temporary tables,\n> > i.e. what name would be in search_path if it wasn't implicitly included?\n> \n> The temp schema is pg_temp_nnn where nnn is your BackendId (PROC array\n> slot number). 
AFAIK there isn't any exported way to determine your\n> BackendId from an SQL query.\n\nThe non-portable way on Linux RH 7.2 :\n\n>create function getpid() returns int as '/lib/libc.so.6','getpid' language 'C';\nCREATE\n>select getpid()\n getpid1 \n---------\n 31743\n(1 row)\n\nI think that useful libc stuff things like this should be put in some\nspecial schema, initially available to superusers only.\n\nperhaps LIBC.GETPID()\n\n----------\nHannu\n\n", "msg_date": "02 May 2002 09:25:03 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "On Thursday 02 May 2002 05:33, Tom Lane wrote:\n\n[on establishing whether a relation is in the search path]\n> This doesn't yield much insight about cases where the match pattern\n> includes a (partial?) schema-name specification, though. If I'm\n> allowed to write something like \"\\z s*.t*\" to find tables beginning\n> with t in schemas beginning with s, should that include all schemas\n> beginning with s? Only those in my search path (probably wrong)?\n> Only those that I have USAGE privilege on? Not sure.\n\nIf namespace privileges are based around the Unix directory/file protection \nmodel (as you stated in another thread, see:\nhttp://geocrawler.com/archives/3/10/2002/4/450/8433871/ ), then\na wildcard search on the schema name should logically include\nall visible schemas, not just the ones where the user has USAGE privilege.\n\nOr put it another way, is there any reason to exclude information from\nsay \\z which the user can find out by querying pg_class? 
At the moment\n(at least in CVS from 30.4.02) a user can see permissions on tables in schemas \non which he/she has no USAGE privileges:\n\ntemplate1=# create database schema_test;\nCREATE DATABASE\ntemplate1=# \\c schema_test \nYou are now connected to database schema_test.\nschema_test=# create schema foo;\nCREATE\nschema_test=# create table foo.bar (pk int, txt text);\nCREATE\nschema_test=# create schema foo2;\nCREATE\nschema_test=# create table foo2.bar (pk int, txt text);\nCREATE\nschema_test=# create user joe;\nCREATE USER\nschema_test=# grant usage on schema foo to joe;\nGRANT\nschema_test=# \\c - joe\nYou are now connected as new user joe.\nschema_test=> SELECT nspname AS schema,\nschema_test-> relname AS object,\nschema_test-> relkind AS type,\nschema_test-> relacl AS access\nschema_test-> FROM pg_class c\nschema_test-> INNER JOIN pg_namespace n\nschema_test-> ON c.relnamespace=n.oid\nschema_test-> WHERE relkind in ('r', 'v', 'S') AND\nschema_test-> relname NOT LIKE 'pg$_%%' ESCAPE '$' AND\nschema_test-> nspname || '.' || relname LIKE 'f%.b%';\n schema | object | type | access \n--------+--------+------+--------\n foo | bar | r | \n foo2 | bar | r | \n(2 rows)\n\ni.e. user \"joe\" can see which objects exist in schema \"foo2\", even though\nhe has no USAGE privilege. (Is this behaviour intended?)\n\nYours\n\nIan Barwick\n\n", "msg_date": "Thu, 2 May 2002 09:37:13 +0200", "msg_from": "Ian Barwick <barwick@gmx.de>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-05-02 at 05:33, Tom Lane wrote:\n>> The temp schema is pg_temp_nnn where nnn is your BackendId (PROC array\n>> slot number). 
AFAIK there isn't any exported way to determine your\n>> BackendId from an SQL query.\n\n> The non-portable way on Linux RH 7.2 :\n\n>> create function getpid() returns int as '/lib/libc.so.6','getpid' language 'C';\n\nBut PID is not BackendId.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 09:48:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Wed, 1 May 2002, Jeffrey W. Baker wrote:\n\n> On Tue, Apr 30, 2002 at 09:41:47PM +0300, Oleg Bartunov wrote:\n> > I think DBD::Pg driver very much depends on system tables.\n> > Hope, Jeffrey (current maintainer) is online.\n>\n> These changes may break DBD::Pg. What is the expected\n> time of this release? I will review my code for impact.\n\nJeffrey,\n\nbtw, DBD-Pg 1.13 doesn't passed all tests\n(Linux 2.4.17, pgsql 7.2.1, DBI-1.21)\n\nt/02prepare.........ok\nt/03bind............ok\nt/04execute.........FAILED tests 5-7\n Failed 3/10 tests, 70.00% okay\nt/05fetch...........ok\nt/06disconnect......ok\n\n>\n> Thanks for the warning,\n> Jeffrey\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 2 May 2002 17:28:36 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Is \"PROC array slot number\" something internal to postgres ?\n\nYes.\n\nIf we used PID then we'd eventually have 64K (or whatever the 
range of\nPIDs is on your platform) different pg_temp_nnn entries cluttering\npg_namespace. But we only need MaxBackends different entries at any one\ntime. So the correct nnn value is 1..MaxBackends. BackendId meets the\nneed perfectly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:52:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Wed, 1 May 2002, Jeffrey W. Baker wrote:\n>> These changes may break DBD::Pg. What is the expected\n>> time of this release? I will review my code for impact.\n\nI think the current plan is to go beta in late summer. So there's\nno tremendous hurry. I was just sending out a wake-up call ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:54:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Thu, 2002-05-02 at 15:48, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Thu, 2002-05-02 at 05:33, Tom Lane wrote:\n> >> The temp schema is pg_temp_nnn where nnn is your BackendId (PROC array\n> >> slot number). 
AFAIK there isn't any exported way to determine your\n> >> BackendId from an SQL query.\n> \n> > The non-portable way on Linux RH 7.2 :\n> \n> >> create function getpid() returns int as '/lib/libc.so.6','getpid' language 'C';\n> \n> But PID is not BackendId.\n\nAre you sure ?\n\nI was assuming that BackendId was the process id of current backend\nand that's what getpid() returns.\n\n\nWhat is the Backend ID then ?\n\nIs \"PROC array slot number\" something internal to postgres ?\n\n-------------\nHannu\n\n\n", "msg_date": "02 May 2002 17:00:42 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "On Thu, May 02, 2002 at 05:28:36PM +0300, Oleg Bartunov wrote:\n> On Wed, 1 May 2002, Jeffrey W. Baker wrote:\n> \n> > On Tue, Apr 30, 2002 at 09:41:47PM +0300, Oleg Bartunov wrote:\n> > > I think DBD::Pg driver very much depends on system tables.\n> > > Hope, Jeffrey (current maintainer) is online.\n> >\n> > These changes may break DBD::Pg. What is the expected\n> > time of this release? I will review my code for impact.\n> \n> Jeffrey,\n> \n> btw, DBD-Pg 1.13 doesn't passed all tests\n> (Linux 2.4.17, pgsql 7.2.1, DBI-1.21)\n> \n> t/02prepare.........ok\n> t/03bind............ok\n> t/04execute.........FAILED tests 5-7\n> Failed 3/10 tests, 70.00% okay\n> t/05fetch...........ok\n> t/06disconnect......ok\n\nThese tests were failing when I inherited the code. I'll fix them\nwhen I rewrite the parser.\n\n-jwb\n", "msg_date": "Thu, 2 May 2002 08:43:23 -0700", "msg_from": "\"Jeffrey W. Baker\" <jwbaker@acm.org>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Ian Barwick <barwick@gmx.de> writes:\n> i.e. user \"joe\" can see which objects exist in schema \"foo2\", even though\n> he has no USAGE privilege. (Is this behaviour intended?)\n\nIt's open for debate I suppose. 
Historically we have not worried about\npreventing people from looking into the system tables, except for cases\nsuch as pg_statistic where this might expose actual user data.\n\nAFAICS we could only prevent this by making selective views on the\nsystem tables and then prohibiting ordinary users from accessing the\nunderlying tables directly. I'm not in a big hurry to do that myself,\nif only for backward-compatibility reasons.\n\nWe still do have the option of separate databases, and I'd be inclined\nto tell people to use those if they want airtight separation between\nusers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 18:05:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-05-02 at 16:52, Tom Lane wrote:\n>> If we used PID then we'd eventually have 64K (or whatever the range of\n>> PIDs is on your platform) different pg_temp_nnn entries cluttering\n>> pg_namespace.\n\n> Should they not be cleaned up at backend exit even when they are in\n> range 1..MaxBackends ?\n\nHm. We currently remove the schema contents (ie the temp tables) but\nnot the pg_namespace entry itself. 
Seems like deleting that only to\nhave to recreate it would be a waste of cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 10:44:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Thu, 2002-05-02 at 16:52, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Is \"PROC array slot number\" something internal to postgres ?\n> \n> Yes.\n> \n> If we used PID then we'd eventually have 64K (or whatever the range of\n> PIDs is on your platform) different pg_temp_nnn entries cluttering\n> pg_namespace.\n\nShould they not be cleaned up at backend exit even when they are in\nrange 1..MaxBackends ?\n\n> But we only need MaxBackends different entries at any one\n> time. So the correct nnn value is 1..MaxBackends. BackendId meets the\n> need perfectly.\n\n----------\nHannu\n\n\n", "msg_date": "03 May 2002 17:39:36 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\n\nRe: BackendID and the schema search path\n\nComing back to this subject if I may but only briefly, I hope. How about making\na slight change to current_schemas() and including an optional argument such\nthat something like:\n\n current_schemas(1)\n\nreturns the complete list of schemas in the search path including the implicit\ntemporary space and the pg_catalog (if not already listed obviously), while\ncurrent_schemas() and current_schemas(0) behave as now.\n\nAn alternative is to provide a get_backend_id() call, but I don't think that's\nreally appropriate, and it then means the client has to know how to construct the\nname of the temporary schema, which isn't a good idea.\n\nHaving something like this would enable clients like PgAccess to determine the\ncomplete list of visible objects. 
Without it, it's difficult to see how it is\npossible to include temporary objects in a list of tables and such. In such\na circumstance I'm inclined to say temporary objects are intermediate items and\nso of no interest to the PgAccess user.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Mon, 6 May 2002 16:31:05 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> Coming back to this subject if I may but only briefly, I hope.  How\n> about making a slight change to current_schemas() and including an\n> optional argument such that something like:\n> current_schemas(1)\n> returns the complete list of schemas in the search path including the\n> implicit temporary space and the pg_catalog (if not already listed\n> obviously), while current_schemas() and current_schemas(0) behave as\n> now.\n\nI don't really care for that syntax, but certainly we could talk about\nproviding a version of current_schemas that tells the Whole Truth.\n\n> Having something like this would enable clients like PgAccess to\n> determine the complete list of visible objects.\n\nWell, no, it wouldn't. Say there are multiple tables named foo in\ndifferent namespaces in your search path (eg, a temp table hiding a\npermanent table of the same name). 
A test like \"where current_schemas\n*= relnamespace\" won't reflect this correctly.\n\nI'm suspecting that what we really need is some kind of\n\"is_visible_table()\" test function, and then you'd do\n\tselect * from pg_class where is_visible_table(oid);\nAt least I've not been able to think of a better idea than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 11:43:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "\nOn Mon, 6 May 2002, Tom Lane wrote:\n\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > Coming back to this subject if I may but only briefly, I hope. How\n> > about making a slight change to current_schemas() and including an\n> > optional argument such that something like:\n> > current_schemas(1)\n> > returns the complete list of schemas in the search path including the\n> > implicit temporary space and the pg_catalog (if not already listed\n> > obviously), while current_schemas() and current_schemas(0) behave as\n> > now.\n> \n> I don't really care for that syntax, but certainly we could talk about\n> providing a version of current_schemas that tells the Whole Truth.\n> \n> > Having something like this would enable client's like PgAccess to\n> > determine the complete list of visible objects.\n> \n> Well, no, it wouldn't. Say there are multiple tables named foo in\n> different namespaces in your search path (eg, a temp table hiding a\n> permanent table of the same name). 
A test like \"where current_schemas\n> *= relnamespace\" won't reflect this correctly.\n> \n> I'm suspecting that what we really need is some kind of\n> \"is_visible_table()\" test function, and then you'd do\n> \tselect * from pg_class where is_visible_table(oid);\n> At least I've not been able to think of a better idea than that.\n\nOk, where I was coming from was the idea of the client (I'm most interested in\nPgAccess at the moment) retrieving the search path and cross-referencing that\nagainst the results of the queries for tables etc.\n\nI seemed to remember mention of an is_visible() function earlier in the thread\nbut that for some reason this would mean a performance hit across the board, or\nat least in many places. However, reviewing my emails I see no such comment\nabout performance. Tom originally suggested relation_is_visible(oid) as the\nfunction.\n\nI also got it wrong about when the temporary space is emptied. I had been\nthinking it was when the connection terminated. However, I see from the same\nold message that this happens when the first temporary item is created in a\nsession. Therefore, my way would be invalid anyway; or would it?\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Mon, 6 May 2002 20:12:52 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> I also got it wrong about when the temporary space is emptied. I had been\n> thinking it was when the connection terminated. However, I see from the same\n> old message that this happens when the first temporary item is created in a\n> session. 
Therefore, my way would be invalid anyway; or would it?\n\nIt would work as long as the variant form of current_schemas() truly\nreflects the effective search path --- because until you create a\ntemporary item, there is no temp schema in the effective path.\n\nStill, the issue of hiding seems to be a good reason not to code\nclients that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 15:51:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Mon, 6 May 2002, Nigel J. Andrews wrote:\n> \n> On Mon, 6 May 2002, Tom Lane wrote:\n> \n> > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > Coming back to this subject if I may but only briefly, I hope. How\n> > > about making a slight change to current_schemas() and including an\n> > > optional argument such that something like:\n> > > current_schemas(1)\n> > > returns the complete list of schemas in the search path including the\n> > > implicit temporary space and the pg_catalog (if not already listed\n> > > obviously), while current_schemas() and current_schemas(0) behave as\n> > > now.\n> > \n> > I don't really care for that syntax, but certainly we could talk about\n> > providing a version of current_schemas that tells the Whole Truth.\n\nWouldn't such a function just be based on\n backend/catalog/namespace.c:RelnameGetRelid(const char *relname) ?\n\n\n> > I'm suspecting that what we really need is some kind of\n> > \"is_visible_table()\" test function, and then you'd do\n> > \tselect * from pg_class where is_visible_table(oid);\n> > At least I've not been able to think of a better idea than that.\n>\n> [snip]\n\nFor this if we look once again at RelnameGetRelid(relname) in\nbackend/catalog/namespace.c wouldn't this is_visible() function simply be a\nwrapper around it? 
Obviously the parameter [probably] wouldn't be an OID but\nrather a name.\n\nIf I knew which file would be most appropriate for this (utils/adt/name.c?) I'd\nhave had a go at making a patch.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Mon, 6 May 2002 22:09:25 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> For this if we look once again at RelnameGetRelid(relname) in\n> backend/catalog/namespace.c wouldn't this is_visible() function simply be a\n> wrapper around it?\n\nSort of.  It's there already, see RelationIsVisible.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 06 May 2002 18:56:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "On Mon, 6 May 2002, Tom Lane wrote:\n\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > For this if we look once again at RelnameGetRelid(relname) in\n> > backend/catalog/namespace.c wouldn't this is_visible() function simply be a\n> > wrapper around it?\n> \n> Sort of. It's there already, see RelationIsVisible.\n> \n\nDoh. Next function down.\n\nI see there are routines doing similar things but for functions and others. I'm\nright in saying that OID isn't unique in a database (necessarily) and so we\ncouldn't have a general object_is_visible(oid) function that did the appropriate\nthing for the type of object referred to?\n\nIt just seems that if we're interested in showing tables according to\nvisibility then shouldn't we be doing the same for these other things?\n\n\n-- \nNigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> I see there are routines doing similar things but for functions and\n> others. I'm right in saying that OID isn't unique in a database\n> (necessarily) and so we couldn't have a general object_is_visible(oid)\n> function that did the appropriate thing for the type of object referred to?\n\nNot in the current structure. Even if OID were guaranteed unique across\nthe database, how would you determine which kind of object a given OID\nreferred to? Seems like it would take expensive probing of a lot of\ndifferent tables until you found a match --- which is a bit silly when\nthe calling query generally knows darn well where it got the OID from.\n\nI suppose we could define an object_is_visible(tableoid, oid) function,\nbut I'm not sure if it has any real usefulness.\n\n> It just seems that if we're interested in showing tables according to\n> visibility then shouldn't we be doing the same for these other things?\n\nSure; if we go this route then all five of the FooIsVisible routines\nwill need to be exported as SQL functions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 May 2002 09:11:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "Tom Lane wrote:\n> bar were in my search path, so I should not see them unless I give a\n> qualified name (eg, \"\\d foo.mytab\" or \"\\d bar.mytab\"). For commands\n> that accept wildcard patterns, what should happen --- should \"\\z my*\"\n> find these tables, if they're not in my search path? Is \"\\z f*.my*\"\n> sensible to support? 
I dunno yet.\n\nI am still reading the thread, but I thought \\z mytab should show only\nthe first match, like SELECT * from mytab, and \\z *.mytab should show\nall matching tables in the schema search path. This does make '.' a\nspecial character in the psql wildcard character set, but as no one uses\n'.' in a table name, I think it is OK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 May 2002 00:42:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "On Wednesday 01 May 2002 06:38, Tom Lane wrote:\n> Ian Barwick <barwick@gmx.net> writes:\n> > How can I restrict the query to the schemas in the\n> > current search path, i.e. the schema names returned\n> > by SELECT current_schemas() ?\n>\n> Well, this is the issue open for public discussion.\n>\n> We could define some function along the lines of\n> \"is_visible_table(oid) returns bool\", and then you could use\n> that as a WHERE clause in your query. But I'm worried about\n> the performance implications --- is_visible_table() would have\n> to do several retail probes of the system tables, and I don't\n> see any way to optimize that across hundreds of table OIDs.\n>\n> I have a nagging feeling that this could be answered by defining\n> a view on pg_class that only shows visible tables ... but I don't\n> quite see how to define that efficiently, either. Ideas anyone?\n\n(time passes...)\n\nHow about a function such as the one attached: \"select_schemas_setof()\"\nwhich returns the OIDs of the schemas in the current search path as\na set. 
(Note: \"select_schemas_setof()\" as shown is a userspace C function.)\n\nIt works like this:\n\n template1=# CREATE DATABASE schema_test;\n CREATE DATABASE\n template1=# \\c schema_test\n You are now connected to database schema_test.\n schema_test=# CREATE OR REPLACE FUNCTION current_schemas_setof()\n schema_test-# RETURNS setof OID\n schema_test-# as '/path/to/current_schemas_setof.so'\n schema_test-# LANGUAGE 'C';\n CREATE FUNCTION\n\n\nI can then do this:\n\n schema_test=# CREATE SCHEMA foo;\n CREATE SCHEMA\n schema_test=# CREATE TABLE foo.mytab(col1 int, col2 text);\n CREATE TABLE\n schema_test=# CREATE SCHEMA bar;\n CREATE SCHEMA\n schema_test=# CREATE TABLE bar.mytab(col1 int, col2 text);\n CREATE TABLE\n schema_test=# SET search_path = public, foo, bar;\n SET\n schema_test=# SELECT current_schemas();\n current_schemas\n ------------------\n {public,foo,bar}\n (1 row)\n\n schema_test=# SELECT current_schemas_setof, n.nspname\n schema_test-# FROM public.current_schemas_setof() cs, pg_namespace n\n schema_test-# WHERE cs.current_schemas_setof = n.oid;\n current_schemas_setof | nspname\n ----------------------+------------\n 16563 | pg_temp_1\n 11 | pg_catalog\n 2200 | public\n 24828 | foo\n 24835 | bar\n (3 rows)\n\n\nWith the function in place I can then create an SQL function like this:\n\n CREATE OR REPLACE FUNCTION public.first_visible_namespace(name)\n RETURNS oid\n AS\n 'SELECT n.oid\n FROM pg_namespace n, pg_class c, public.current_schemas_setof() cs\n WHERE c.relname= $1\n AND c.relnamespace=n.oid\n AND n.oid= cs.current_schemas_setof\n LIMIT 1'\n LANGUAGE 'sql';\n\nwhich can be used like this:\n\n schema_test=# select public.first_visible_namespace('mytab');\n first_visible_namespace\n -------------------------\n 24828\n (1 row)\n\ni.e. 
finds the first visible schema containing an unqualified relation name.\n24828 corresponds to the OID of schema \"foo\".\n\nThe following VIEW:\n\n CREATE VIEW public.desc_table_view AS\n SELECT n.nspname AS \"Schema\",\n c.relname AS \"Table\",\n a.attname AS \"Column\",\n format_type\t(a.atttypid, a.atttypmod) AS \"Type\"\n FROM pg_class c, pg_attribute a, pg_namespace n\n WHERE a.attnum > 0\n AND c.relkind IN ('r', 'v', 'S')\n AND a.attrelid = c.oid\n AND c.relnamespace=n.oid\n AND n.oid IN (SELECT first_visible_namespace(c.relname))\n ORDER BY a.attnum;\n\nthen provides a simplified simulation of psql's slash command \\d [NAME] for\nunqualified relation names, e.g.:\n\n schema_test=# SELECT * FROM public.desc_table_view WHERE \"Table\" = 'mytab';\n Schema | Table | Column | Type\n --------+-------+--------+---------\n foo | mytab | col1 | integer\n foo | mytab | col2 | text\n (2 rows)\n schema_test=# SET search_path= bar, foo, public;\n SET\n schema_test=# SELECT * FROM public.desc_table_view WHERE \"Table\" = 'mytab';\n Schema | Table | Column | Type\n --------+-------+--------+---------\n bar | mytab | col1 | integer\n bar | mytab | col2 | text\n (2 rows)\n\n schema_test=# SET search_path= public;\n SET\n schema_test=# SELECT * FROM public.desc_table_view WHERE \"Table\" = 'mytab';\n Schema | Table | Column | Type\n --------+-------+--------+------\n (0 rows)\n\n\nwhich I think is the desired behaviour. Currently \\d [NAME] produces this:\n \n schema_test=# SET search_path= bar, foo, public;\n SET\n schema_test=# \\d mytab\n Table \"mytab\"\n Column | Type | Modifiers\n --------+---------+-----------\n col1 | integer |\n col1 | integer |\n col2 | text |\n col2 | text |\n\ni.e. 
finds and describes \"foo.mytab\" and \"bar.mytab\".\n\n(Note: \"SELECT * FROM public.desc_table_view\" will just dump an unordered\nlist of all columns for the first visible instance of each table name).\n\nAssuming \"current_schemas_setof()\" can be implemented as an internal function,\n(I haven't managed it myself yet :-( ), I suspect it is a more efficient\nalternative to a putative \"is_visible_table(oid)\" and could be used in psql \n(and elsewhere) to resolve the schemas of unqualified relation names.\nThoughts? (Or am I barking up the wrong tree?)\n\nBTW is anyone working on schema support in psql? If the various definition\nissues raised by Tom Lane at the start of this thread are resolved (discussion\nseems to have trailed off without a consensus), I have some free time in June\nand would be willing to take it on.\n\n\nIan Barwick", "msg_date": "Sun, 26 May 2002 19:58:19 +0200", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Schemas: status report, call for developers" }, { "msg_contents": "Ian Barwick <barwick@gmx.net> writes:\n> CREATE OR REPLACE FUNCTION public.first_visible_namespace(name)\n> RETURNS oid\n> AS\n> 'SELECT n.oid\n> FROM pg_namespace n, pg_class c, public.current_schemas_setof() cs\n> WHERE c.relname= $1\n> AND c.relnamespace=n.oid\n> AND n.oid= cs.current_schemas_setof\n> LIMIT 1'\n> LANGUAGE 'sql';\n\nI don't believe this is correct. 
The LIMIT clause will ensure you\nget at most one answer, but it'd be pure luck whether it is the right\nanswer, when there are multiple tables of the same name in the\nnamespaces of the search path.\n\n> The following VIEW:\n\n> CREATE VIEW public.desc_table_view AS\n> SELECT n.nspname AS \"Schema\",\n> c.relname AS \"Table\",\n> a.attname AS \"Column\",\n> format_type\t(a.atttypid, a.atttypmod) AS \"Type\"\n> FROM pg_class c, pg_attribute a, pg_namespace n\n> WHERE a.attnum > 0\n> AND c.relkind IN ('r', 'v', 'S')\n> AND a.attrelid = c.oid\n> AND c.relnamespace=n.oid\n> AND n.oid IN (SELECT first_visible_namespace(c.relname))\n> ORDER BY a.attnum;\n\nI was hoping to find something more efficient than that --- quite aside\nfrom the speed or correctness of first_visible_namespace(), a query\ndepending on an IN is not going to be fast.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 May 2002 16:12:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas: status report, call for developers " }, { "msg_contents": "Tom Lane wrote:\n> If you don't create schemas then you get backwards-compatible behavior\n> (all the users end up sharing the \"public\" schema as their current\n> schema).\n\nI am a little uncomfortable about this. It means that CREATE TABLE will\ncreate a table in 'public' if the user doesn't have a schema of their\nown, and in their private schema if it exists. It seems strange to have\nsuch a distinction based on whether a private schema exists. Is this OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 00:42:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am a little uncomfortable about this. It means that CREATE TABLE will\n> create a table in 'public' if the user doesn't have a schema of their\n> own, and in their private schema if it exists. It seems strange to have\n> such a distinction based on whether a private schema exists. Is this OK?\n\nYou have a better idea?\n\nGiven that we want to support both backwards-compatible and SQL-spec-\ncompatible behavior, I think some such ugliness is inevitable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 00:53:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am a little uncomfortable about this. It means that CREATE TABLE will\n> > create a table in 'public' if the user doesn't have a schema of their\n> > own, and in their private schema if it exists. It seems strange to have\n> > such a distinction based on whether a private schema exists. Is this OK?\n> \n> You have a better idea?\n> \n> Given that we want to support both backwards-compatible and SQL-spec-\n> compatible behavior, I think some such ugliness is inevitable.\n\nI don't have a better idea, but I am wondering how this will work. 
If I\ncreate a schema with my name, does it get added to the front of my\nschema search path automatically, or do I set it with SET,\nperhaps in my per-user startup SET column?\n\nIf I want to prevent some users from creating tables in my database, do\nI remove CREATE on the schema using REVOKE SCHEMA, then create a schema\nfor every user using the database?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 01:42:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't have a better idea, but I am wondering how this will work. If I\n> create a schema with my name, does it get added to the front of my\n> schema search path automatically,\n\nYes (unless you've futzed with the standard value of search_path).\n\n> If I want to prevent some users from creating tables in my database, do\n> I remove CREATE on the schema using REVOKE SCHEMA, then create a schema\n> for every user using the database?\n\nWell, you revoke world create access on the public schema (or maybe even\ndelete the public schema, if you don't need it). I don't see why you'd\ngive people their own schemas if the intent is to keep them from\ncreating tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 10:45:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't have a better idea, but I am wondering how this will work. 
If I\n> > create a schema with my name, does it get added to the front of my\n> > schema search path automatically,\n> \n> Yes (unless you've futzed with the standard value of search_path).\n> \n> > If I want to prevent some users from creating tables in my database, do\n> > I remove CREATE on the schema using REVOKE SCHEMA, then create a schema\n> > for every user using the database?\n> \n> Well, you revoke world create access on the public schema (or maybe even\n> delete the public schema, if you don't need it). I don't see why you'd\n> give people their own schemas if the intent is to keep them from\n> creating tables.\n\nNo, I was saying you would have to create schemas for the people who you\n_want_ to be able to create tables.\n\nWith the old NOCREATE patch, you could just remove create permission\nfrom a user. With schemas, you have to remove all permission for table\ncreation, then grant it to those you want by creating schemas for them.\n\nThis is similar to handling of Unix permissions. If you want to\nrestrict access to a file or directory, you remove public permission,\nand add group permission, then add the people you want to have access to\nthat group.\n\nThere are no _negative_ permissions, as there are no negative\npermissions in the Unix file system. I just wanted to be clear that\nrestricting access will be a multi-step process.\n\nIf I remove public create access to public, can the super user or db\nowner still create tables?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 11:12:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If I remove public create access to public, can the super user or db\n> owner still create tables?\n\nSuperusers can always do whatever they want.\n\nThe DB owner (assume he's not a superuser) has no special privileges\nw.r.t. the public schema at the moment. We could perhaps put in a\nkluge to change this, but it would definitely be a kluge --- I don't\nsee any clean way to make the behavior different.\n\nOne possible approach would be for a superuser to change the ownership\nof public to be the DB owner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 12:03:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Schemas: status report, call for developers " }, { "msg_contents": "There was discussion of how template1's \"public\" schema should behave. \nI think the only solution is to make template1's public schema writable\nonly by the super-user. This way, we can allow utility commands to\nconnect to template1, but they can't change anything or add their own\ntables.\n\nAs part of createdb, the new database will have to have its public\nschema changed to world-writable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 18:24:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Schemas and template1" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> As part of createdb, the new database will have to have its public\n> schema changed to world-writable.\n\nThat ain't gonna happen, unfortunately. CREATE DATABASE runs in some\ndatabase other than the target one, so it's essentially impossible for\nthe newly-created DB to contain any internal state that's different\nfrom the template DB. Next idea please?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jun 2002 22:57:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas and template1 " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > As part of createdb, the new database will have to have its public\n> > schema changed to world-writable.\n> \n> That ain't gonna happen, unfortunately. CREATE DATABASE runs in some\n> database other than the target one, so it's essentially impossible for\n> the newly-created DB to contain any internal state that's different\n> from the template DB. Next idea please?\n\nYes, there was an even bigger problem with my argument. If someone\nwanted to make public no-write, and have all created databases inherit\nfrom that, it wouldn't work because it would clear that on creation.\n\nHow about if we hard-wire template1 as being no-write to public\nsomewhere in the code, rather than in the db tables?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 23:13:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas and template1" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How about if we hard-wire template1 as being no-write to public\n> somewhere in the code, rather than in the db tables?\n\nSeems pretty icky :-(\n\nIt occurs to me that maybe we don't need to worry. The main reason why\nwe've offered the advice \"don't fill template1 with junk\" in the past\nis that it was so hard to clear out the junk without zapping built-in\nentries. But now, you really have to work hard at it to shoot yourself\nin the foot that way. If you created junk in template1.public, no\nsweat:\n\t\\c template1 postgres\n\tDROP SCHEMA public;\n\tCREATE SCHEMA public;\n\t-- don't forget to set its permissions appropriately\n(This assumes we get DROP SCHEMA implemented in time for 7.3, but\nI think we can build that based on Rod's pg_depend stuff.) (Which\nI really really gotta review and apply soon.)\n\nI'm of the opinion that template1 and public are not very special\nat the moment; the C-level code doesn't think either of them are\nspecial, which is why you can drop and recreate them if you have to.\nWe should try not to re-introduce any low-level specialness.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jun 2002 23:42:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Schemas and template1 " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How about if we hard-wire template1 as being no-write to public\n> > somewhere in the code, rather than in the db tables?\n> \n> Seems pretty icky :-(\n> \n> It occurs to me that maybe we don't need to worry. 
The main reason why\n> we've offered the advice \"don't fill template1 with junk\" in the past\n> is that it was so hard to clear out the junk without zapping built-in\n> entries. But now, you really have to work hard at it to shoot yourself\n> in the foot that way. If you created junk in template1.public, no\n> sweat:\n> \t\\c template1 postgres\n> \tDROP SCHEMA public;\n> \tCREATE SCHEMA public;\n> \t-- don't forget to set its permissions appropriately\n> (This assumes we get DROP SCHEMA implemented in time for 7.3, but\n> I think we can build that based on Rod's pg_depend stuff.) (Which\n> I really really gotta review and apply soon.)\n> \n> I'm of the opinion that template1 and public are not very special\n> at the moment; the C-level code doesn't think either of them are\n> special, which is why you can drop and recreate them if you have to.\n> We should try not to re-introduce any low-level specialness.\n\nIt is strange we have to allow template1 open just for client stuff. I\nwould really like to lock it down read-only. I guess we can tell admins\nto lock down public in template1, and all newly created databases will\nbe the same.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 23:44:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas and template1" } ]
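Tom's drop-and-recreate recipe can be spelled out in full. This is only a sketch — it assumes a release where schemas and per-schema permissions exist (7.3 or later), and the final GRANT is the "set its permissions appropriately" step; omit it to keep the schema locked down, as Bruce suggests:

```sql
-- As a superuser, connected to template1 (e.g. psql template1 postgres):
DROP SCHEMA public;                    -- removes any accumulated junk with it
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO PUBLIC;  -- restore the world-writable default
```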
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 30 April 2002 18:32\n> To: pgsql-hackers@postgresql.org; pgsql-interfaces@postgresql.org\n> Subject: [INTERFACES] Schemas: status report, call for developers\n> \n> \n> Current CVS tip has most of the needed infrastructure for \n> SQL-spec schema support: you can create schemas, and you can \n> create objects within schemas, and search-path-based lookup \n> for named objects works. There's still a number of things to \n> be done in the backend, but it's time to start working on \n> schema support in the various frontends that have been broken \n> by these changes. I believe that pretty much every frontend \n> library and client application that looks at system catalogs \n> will need revisions. So, this is a call for help --- I don't \n> have the time to fix all the frontends, nor sufficient \n> familiarity with many of them.\n> \n> JDBC and ODBC metadata code is certainly broken; so are the \n> catalog lookups in pgaccess, pgadmin, and so on. psql and \n> pg_dump are broken as well (though I will take responsibility \n> for fixing pg_dump, and will then look at psql if no one else \n> has done it by then). I'm not even sure what else might need \n> to change.\n> \n\nThanks Tom, this is just the post I've been waiting for!\n\nTo anyone thinking of hacking pgAdmin at the moment -> now would\nprobably not be the best time as I will be *seriously* restructuring\npgSchema.\n\nRegards, Dave.\n", "msg_date": "Tue, 30 Apr 2002 19:54:07 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" }, { "msg_contents": "> > JDBC and ODBC metadata code is certainly broken; so are the\n> > catalog lookups in pgaccess, pgadmin, and so on. 
psql and\n> > pg_dump are broken as well (though I will take responsibility\n> > for fixing pg_dump, and will then look at psql if no one else\n> > has done it by then). I'm not even sure what else might need\n> > to change.\n\nphpPgAdmin (WebDB) will be broken as well. I think myself and at least a\nfew other committers lurk here tho.\n\nOther things that will break:\n\nTOra\nVarious KDE interfaces\n\nChris\n\n", "msg_date": "Wed, 1 May 2002 10:05:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [INTERFACES] Schemas: status report,\n call for developers" }, { "msg_contents": "On Wed, May 01, 2002 at 10:05:23AM +0800, Christopher Kings-Lynne wrote:\n> \n> phpPgAdmin (WebDB) will be broken as well. I think myself and at least a\n> few other committers lurk here tho.\n> \n> Other things that will break:\n> \n> TOra\n> Various KDE interfaces\n\nGNUe will break, as well.\n\nRoss\n", "msg_date": "Tue, 30 Apr 2002 22:28:33 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [INTERFACES] Schemas: status report,\n call for developers" }, { "msg_contents": "I think it would be much faster simply to list of the programs that\nuse Postgresql internals that won't break.\n--\nRod\n----- Original Message -----\nFrom: \"Ross J. Reedstrom\" <reedstrm@rice.edu>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Dave Page\" <dpage@vale-housing.co.uk>; \"Tom Lane\"\n<tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n<pgsql-interfaces@postgresql.org>; <pgadmin-hackers@postgresql.org>\nSent: Tuesday, April 30, 2002 11:28 PM\nSubject: Re: [HACKERS] [INTERFACES] Schemas: status report, call for\ndevelopers\n\n\n> On Wed, May 01, 2002 at 10:05:23AM +0800, Christopher Kings-Lynne\nwrote:\n> >\n> > phpPgAdmin (WebDB) will be broken as well. 
I think myself and at\nleast a\n> > few other committers lurk here tho.\n> >\n> > Other things that will break:\n> >\n> > TOra\n> > Various KDE interfaces\n>\n> GNUe will break, as well.\n>\n> Ross\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 30 Apr 2002 23:43:13 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> GNUe will break, as well.\n\nDo I hear a volunteer to fix it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 00:03:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> I think it would be much faster simply to list of the programs that\n> use Postgresql internals that won't break.\n\nApproximately none, I'm sure :-(. This thread isn't about that, it's\nabout stirring up the troops to fix everything that must be fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 00:09:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers " }, { "msg_contents": "On Wed, May 01, 2002 at 12:03:00AM -0400, Tom Lane wrote:\n> \"Ross J. 
Reedstrom\" <reedstrm@rice.edu> writes:\n> > GNUe will break, as well.\n> \n> Do I hear a volunteer to fix it?\n\nI'm willing to implement whatever clever solution we all come up with.\nI'll have to coordinate w/ the GNUe IRC folks to get it checked in.\n\nRoss\n", "msg_date": "Tue, 30 Apr 2002 23:49:10 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n>>> GNUe will break, as well.\n> I'm willing to implement whatever clever solution we all come up with.\n
\nIf you need help in inventing a solution, it'd be a good idea to explain\nthe problem. Personally I'm not familiar with GNUe ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 00:56:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" }, { "msg_contents": "On Wed, May 01, 2002 at 12:56:00AM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> >>> GNUe will break, as well.\n> > I'm willing to implement whatever clever solution we all come up with.\n> \n> If you need help in inventing a solution, it'd be a good idea to explain\n> the problem. Personally I'm not familiar with GNUe ...\n
\nI think all the interfaces are having the same fundamental problem: how to\nlimit the tables 'seen' to a particular list of schemas (those in the path).\n\nGNUe is GNU Enterprise System - a somewhat grandiose name for a business\nmiddleware solutions project. It's highly modular, with a common core to\ndeal with things like DB access. There's a reasonably nice forms designer\nto handle quickie 2-tier DB apps (client-server, skip the middleware).\n
\nRight now, it's mostly coded in python. I'm taking off on a\nbusiness trip for the remainder of the week, starting tomorrow (err\ntoday?!) morning. I'll take the GNUe code along and see what its db\n schema discovery code is actually doing, and think about what sort of\nclever things to do. I think for GNUe, we might get away with requiring\nthe end-user (designer) to select a particular schema to work in, and then\njust qualify everything.\n\nLater,\nRoss\n", "msg_date": "Wed, 1 May 2002 00:21:46 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Schemas: status report, call for developers" } ]
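The question Ross keeps circling — limiting the tables "seen" to those in the search path — is the change every frontend's catalog queries need. A hedged sketch of a schema-aware table listing, assuming the 7.3-era catalogs (where `pg_namespace` and the visibility helper `pg_table_is_visible()` exist):

```sql
SELECT n.nspname, c.relname
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'                          -- plain tables only
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(c.oid)    -- reachable via search_path
ORDER BY 1, 2;
```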
[ { "msg_contents": "I'm working on a revised patch for PREPARE/EXECUTE. The basic code\nhas been written (although I've been delayed due to the workload at\nschool). I'm now trying to add support for preparing queries with\nparameters, but it is failing at an early stage of the game:\n\nnconway=> prepare q1 as select 1;\nPREPARE\nnconway=> prepare q2 as select $1;\nERROR: Parameter '$1' is out of range\n\n(You'll see the same parse error with simply \"select $1;\")\n\nThe shortened version of the grammar I'm using is:\n\nPrepareStmt: PREPARE name AS OptimizableStmt\n\nWhat modifications need to be made to allow these kinds of\nparametized queries?\n\nBTW, is this a legacy from postquel? (from include/nodes/primnodes.h)\n\n--------------\n * Param\n * paramkind - specifies the kind of parameter. The possible values\n * for this field are specified in \"params.h\", and they are:\n *\n * PARAM_NAMED: The parameter has a name, i.e. something\n * like `$.salary' or `$.foobar'.\n--------------\n\nSpecifically, the \"something like ...\" stuff.\n\nThanks in advance,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Tue, 30 Apr 2002 20:11:10 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "the parsing of parameters" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> nconway=> prepare q2 as select $1;\n> ERROR: Parameter '$1' is out of range\n\n> (You'll see the same parse error with simply \"select $1;\")\n\nYou need to tell the parser the number of parameters to expect and their\ndatatypes. This is what the last two arguments to parser() are all\nabout. 
Look at _SPI_prepare for an example (I think plpgsql uses that).\nAlso, the plpgsql code for parameterized cursors might be a helpful\nreference.\n\nThe actual syntax of PREPARE probably has to be something like\n\n\tPREPARE queryname(parameter type list) FROM query\n\nelse you'll not have any way to get the type info.\n\n> BTW, is this a legacy from postquel? (from include/nodes/primnodes.h)\n\nI don't believe anything is using named parameters presently. PARAM_NEW\nand PARAM_OLD also seem to be leftovers from an old implementation of\nrules.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 14:32:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > nconway=> prepare q2 as select $1;\n> > ERROR: Parameter '$1' is out of range\n>\n> > (You'll see the same parse error with simply \"select $1;\")\n>\n> You need to tell the parser the number of parameters to expect and their\n> datatypes. This is what the last two arguments to parser() are all\n> about. Look at _SPI_prepare for an example (I think plpgsql uses that).\n> Also, the plpgsql code for parameterized cursors might be a helpful\n> reference.\n>\n> The actual syntax of PREPARE probably has to be something like\n>\n> PREPARE queryname(parameter type list) FROM query\n>\n> else you'll not have any way to get the type info.\n>\n> > BTW, is this a legacy from postquel? (from include/nodes/primnodes.h)\n>\n> I don't believe anything is using named parameters presently. 
PARAM_NEW\n> and PARAM_OLD also seem to be leftovers from an old implementation of\n> rules.\n\n I have a little patch that actually allows SPI_prepare() to\n use UNKNOWN_OID in the passed in parameter type array and\n put's the choosen datatypes Oid back into there.\n\n The parser treats those parameters like single quoted\n literals of unknown type and chooses what would be the most\n useful datatype here.\n\n Any objections?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Thu, 9 May 2002 16:49:11 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> I have a little patch that actually allows SPI_prepare() to\n> use UNKNOWN_OID in the passed in parameter type array and\n> put's the choosen datatypes Oid back into there.\n\n> The parser treats those parameters like single quoted\n> literals of unknown type and chooses what would be the most\n> useful datatype here.\n\n> Any objections?\n\nFor this particular application, at least, I do not see the value ...\nin fact this seems more likely to break stuff than help. If the\napplication does not know what the datatypes are supposed to be,\nhow is it going to call the prepared statement?\n\nYou could possibly get away with that for a textual interface (\"always\npass quoted literals\"), but it would surely destroy any chance of having\na binary protocol for passing parameters to prepared statements.\n\nOffhand I'm having a hard time visualizing why you'd want this at\nthe SPI_prepare level, either ... 
what's the application?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 May 2002 17:50:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > I have a little patch that actually allows SPI_prepare() to\n> > use UNKNOWN_OID in the passed in parameter type array and\n> > put's the choosen datatypes Oid back into there.\n>\n> > The parser treats those parameters like single quoted\n> > literals of unknown type and chooses what would be the most\n> > useful datatype here.\n>\n> > Any objections?\n>\n> For this particular application, at least, I do not see the value ...\n> in fact this seems more likely to break stuff than help. If the\n> application does not know what the datatypes are supposed to be,\n> how is it going to call the prepared statement?\n\n Right now using UNKNOWN_OID in that place leads to a parse\n error, what makes me feel absolutely comfortable that there\n will be nobody using it today. So what kind of \"break\" are\n you talking about?\n\n>\n> You could possibly get away with that for a textual interface (\"always\n> pass quoted literals\"), but it would surely destroy any chance of having\n> a binary protocol for passing parameters to prepared statements.\n\n Right. And BTW, how do you propose that the client\n application passes the values in binary form anyway? Are you\n going to maintain that process for backwards compatibility\n when we change the internal representation of stuff (like we\n want to for numeric) or who? And what about byte ordering?\n User defined types?\n\n I think the backend is the only one who can convert into it's\n personal, binary format. Wouldn't anything else lead to\n security holes?\n\n>\n> Offhand I'm having a hard time visualizing why you'd want this at\n> the SPI_prepare level, either ... what's the application?\n\n It propagates up to the SPI level. 
In fact it is down in the\n parser/analyzer.\n\n There are DB interfaces that allow a generic application to\n get a description of the result set (column names, types)\n even before telling the data types of all parameters.\n\n Our ODBC driver for example has it's own more or less\n complete SQL parser to deal with that case! I don't see THAT\n implementation very superior compared to the ability to ask\n the DB server for a guess. I thought that this PREPARE\n statement will be used by such interfaces in the future, no?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 10 May 2002 06:12:47 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Tom Lane wrote:\n>> For this particular application, at least, I do not see the value ...\n>> in fact this seems more likely to break stuff than help. If the\n>> application does not know what the datatypes are supposed to be,\n>> how is it going to call the prepared statement?\n\n> Right now using UNKNOWN_OID in that place leads to a parse\n> error, what makes me feel absolutely comfortable that there\n> will be nobody using it today. So what kind of \"break\" are\n> you talking about?\n\nWhat I mean is that I don't see how an application is going to use\nPREPARE/EXECUTE without knowing the data types of the values it\nhas to send for EXECUTE. Inside SPI you could maybe do it, since\nthe calling code can examine the modified argtype array, but there\nis no such back-communication channel for PREPARE. 
This holds\nfor both textual and binary kinds of EXECUTE: how do you know what\nyou are supposed to send?\n\n>> You could possibly get away with that for a textual interface (\"always\n>> pass quoted literals\"), but it would surely destroy any chance of having\n>> a binary protocol for passing parameters to prepared statements.\n\n> Right. And BTW, how do you propose that the client\n> application passes the values in binary form anyway?\n\nSame way as binary cursors work today, with the same ensuing platform\nand version dependencies. Maybe someday we'll improve on that, but\nthat's a different project from supporting PREPARE/EXECUTE.\n\n> I think the backend is the only one who can convert into it's\n> personal, binary format. Wouldn't anything else lead to\n> security holes?\n\nGood point; might need to restrict the operation to superusers.\n\n> There are DB interfaces that allow a generic application to\n> get a description of the result set (column names, types)\n> even before telling the data types of all parameters.\n\n> Our ODBC driver for example has it's own more or less\n> complete SQL parser to deal with that case! I don't see THAT\n> implementation very superior compared to the ability to ask\n> the DB server for a guess. I thought that this PREPARE\n> statement will be used by such interfaces in the future, no?\n\nHmm. So your vision of PREPARE would allow the backend to reply\nwith a list of parameter types. How would you envision that working\nexactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 May 2002 11:17:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters " }, { "msg_contents": "On Fri, May 10, 2002 at 11:17:39AM -0400, Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Tom Lane wrote:\n> >> For this particular application, at least, I do not see the value ...\n> >> in fact this seems more likely to break stuff than help. 
If the\n> >> application does not know what the datatypes are supposed to be,\n> >> how is it going to call the prepared statement?\n> \n> > Right now using UNKNOWN_OID in that place leads to a parse\n> > error, what makes me feel absolutely comfortable that there\n> > will be nobody using it today. So what kind of \"break\" are\n> > you talking about?\n> \n> What I mean is that I don't see how an application is going to use\n> PREPARE/EXECUTE without knowing the data types of the values it\n> has to send for EXECUTE. Inside SPI you could maybe do it, since\n> the calling code can examine the modified argtype array, but there\n> is no such back-communication channel for PREPARE. This holds\n> for both textual and binary kinds of EXECUTE: how do you know what\n> you are supposed to send?\n\n In my original PREPARE/EXECUTE patch (it works in 7.1):\n\n PREPARE name AS select * from tab where data=$1 USING text;\n EXECUTE name USING 'nice text data';\n\n IMHO is possible think about\n\n EXECUTE name USING 'nice text'::text;\n\n or other cast methods.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 10 May 2002 18:09:05 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > There are DB interfaces that allow a generic application to\n> > get a description of the result set (column names, types)\n> > even before telling the data types of all parameters.\n>\n> > Our ODBC driver for example has it's own more or less\n> > complete SQL parser to deal with that case! I don't see THAT\n> > implementation very superior compared to the ability to ask\n> > the DB server for a guess. I thought that this PREPARE\n> > statement will be used by such interfaces in the future, no?\n>\n> Hmm. 
So your vision of PREPARE would allow the backend to reply\n> with a list of parameter types. How would you envision that working\n> exactly?\n\n I guess there's some sort of statement identifier you use to\n refer to something you've prepared. Wouldn't a function call\n returning a list of names or type oid's be sufficient?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 10 May 2002 13:05:33 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n>> Hmm. So your vision of PREPARE would allow the backend to reply\n>> with a list of parameter types. How would you envision that working\n>> exactly?\n\n> I guess there's some sort of statement identifier you use to\n> refer to something you've prepared. Wouldn't a function call\n> returning a list of names or type oid's be sufficient?\n\nI was thinking of having the type names returned unconditionally,\nperhaps like a SELECT result (compare the new behavior of EXPLAIN).\nBut if we assume that this won't be a commonly used feature, maybe\na separate inquiry operation is better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 May 2002 13:14:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> >> Hmm. So your vision of PREPARE would allow the backend to reply\n> >> with a list of parameter types. How would you envision that working\n> >> exactly?\n>\n> > I guess there's some sort of statement identifier you use to\n> > refer to something you've prepared. 
Wouldn't a function call\n> > returning a list of names or type oid's be sufficient?\n>\n> I was thinking of having the type names returned unconditionally,\n> perhaps like a SELECT result (compare the new behavior of EXPLAIN).\n> But if we assume that this won't be a commonly used feature, maybe\n> a separate inquiry operation is better.\n\n I wouldn't mind. One way or the other is okay with me.\n\n Reminds me though of another feature we should have on the\n TODO. INSERT/UPDATE/DELETE ... RETURNING ...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Fri, 10 May 2002 14:25:14 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" }, { "msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > Jan Wieck <janwieck@yahoo.com> writes:\n> > >> Hmm. So your vision of PREPARE would allow the backend to reply\n> > >> with a list of parameter types. How would you envision that working\n> > >> exactly?\n> >\n> > > I guess there's some sort of statement identifier you use to\n> > > refer to something you've prepared. Wouldn't a function call\n> > > returning a list of names or type oid's be sufficient?\n> >\n> > I was thinking of having the type names returned unconditionally,\n> > perhaps like a SELECT result (compare the new behavior of EXPLAIN).\n> > But if we assume that this won't be a commonly used feature, maybe\n> > a separate inquiry operation is better.\n> \n> I wouldn't mind. One way or the other is okay with me.\n> \n> Reminds me though of another feature we should have on the\n> TODO. INSERT/UPDATE/DELETE ... RETURNING ...\n\nTODO already has:\n\n o Allow INSERT/UPDATE ... 
RETURNING new.col or old.col; handle\n RULE cases (Philip)\n\nDo we need DELETE too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 May 2002 21:16:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: the parsing of parameters" } ]
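For context on where this thread ended up: the adopted syntax follows Tom's outline, with the parameter types declared up front so the parser has the information it needs. A brief sketch (PostgreSQL 7.3 and later):

```sql
PREPARE q2(integer) AS
    SELECT $1 + 1;   -- $1 now has a declared type, so it parses

EXECUTE q2(41);      -- run the prepared plan with $1 = 41
DEALLOCATE q2;       -- discard the prepared statement
```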
[ { "msg_contents": "Hi,\n\nPureFTPd has got really good Postgres support:\n\nAuthenticates off postgres, with definable queries to return stuff like\nhomedirs, quotas, password hashes, etc.\n\nCool.\n\nChris\n\n", "msg_date": "Wed, 1 May 2002 10:07:13 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "PureFTPd" } ]
[ { "msg_contents": "Hi all,\n\nI'm having problems restoring a dump. I get this:\n\nYou are now connected as new user chriskl.\nERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\n Pre-installed languages are SQL, C, and internal.\n Additional languages may be installed using 'createlang'.\nERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\n Pre-installed languages are SQL, C, and internal.\n Additional languages may be installed using 'createlang'.\n\nI've done a \"createlang plpgsql template1\" before starting my restore, but I\njust cannot get it to recognise the language.\n\nThe dump format is the complete one that first tries to drop each database\nand then recreates it from scratch, so each of my databases is being dropped\nand then totally recreated.\n\nHow do I get this to work?\n\nChris\n\n", "msg_date": "Wed, 1 May 2002 17:08:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Problem with restoring a 7.1 dump" }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> wrote:\n[snap]\n> How do I get this to work?\n>\n> Chris\n\n\nI think i did this:\n\nCREATE FUNCTION \"plpgsql_call_handler\" () RETURNS opaque AS\n'/usr/local/pgsql/lib/plpgsql.so', 'plpgsql_call_handler' LANGUAGE 'C';\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER\n\"plpgsql_call_handler\" LANCOMPILER 'PL/pgSQL';\n\nThis might be in the docs also.\nTry it :)\n\nRegards,\nMagnus\n\n", "msg_date": "Wed, 1 May 2002 14:23:49 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": false, "msg_subject": "Re: Problem with restoring a 7.1 dump" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> ERROR: Unrecognized language specified in a CREATE FUNCTION: 'plpgsql'.\n> Pre-installed languages are SQL, C, and internal.\n> Additional languages may be installed using 'createlang'.\n\n> I've done a 
\"createlang plpgsql template1\" before starting my restore, but I\n> just cannot get it to recognise the language.\n\nAre you sure you did that (and it worked)? I don't see how a definition\nin template1 would not propagate to new databases.\n\n> The dump format is the complete one that first tries to drop each database\n> and then recreates it from scratch, so each of my databases is being dropped\n> and then totally recreated.\n\nHm ... is the script also trying to drop individual elements in each\ndatabase? If so I wonder if it's actually dropping the plpgsql language\n(or at least the handler for it) and then failing to recreate it for\nsome reason. But that failure would provoke an error, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 10:34:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with restoring a 7.1 dump " } ]
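Pulling the thread's advice together: because the dump drops and recreates each database, plpgsql must survive in (or be re-added to) every one. A diagnostic-plus-fix sketch based on Magnus's commands above — the shared-library path is installation-specific, so treat it as a placeholder:

```sql
-- Check whether plpgsql exists in the restored database:
SELECT lanname FROM pg_language;

-- If it is missing, recreate it by hand (adjust the .so path for your install):
CREATE FUNCTION plpgsql_call_handler () RETURNS opaque AS
'/usr/local/pgsql/lib/plpgsql.so', 'plpgsql_call_handler' LANGUAGE 'C';
CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER
plpgsql_call_handler LANCOMPILER 'PL/pgSQL';
```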
[ { "msg_contents": "I've run into an interesting issue. A very long running transaction\ndoing data loads is getting quite slow. I really don't want to break\nup the transactions (and for now it's ok), but it makes me wonder what\nexactly analyze counts.\n\nSince dead, or yet to be visible tuples affect the plan that should be\ntaken (until vacuum anyway) are these numbers reflected in the stats\nanywhere?\n\nTook an empty table, with a transaction I inserted a number of records\nand before comitting I ran analyze.\n\nAnalyze obviously saw the table as empty, as the pg_statistic row for\nthat relation doesn't exist.\n\nCommit, then analyze again and the values were taken into account.\n\n\nCertainly for large dataloads doing an analyze on the table after a\nsubstantial (non-comitted) change has taken place would be worth while\nfor all elements involved. An index scan on the visible records may\nbe faster, but on the actual tuples in the table a sequential scan\nmight be best.\n\nOf course, for small transactions no-effect will be seen. But this\nmay help with the huge dataloads, especially where triggers or\nconstraints are in effect.\n--\nRod\n\n", "msg_date": "Wed, 1 May 2002 10:21:30 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Analyze on large changes..." }, { "msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Since dead, or yet to be visible tuples affect the plan that should be\n> taken (until vacuum anyway) are these numbers reflected in the stats\n> anywhere?\n\nAnalyze just uses SnapshotNow visibility rules, so it sees the same set\nof tuples that you would see if you did a SELECT.\n\nIt might be interesting to try to estimate the fraction of dead tuples\nin the table, though I'm not sure quite how to fold that into the cost\nestimates. [ thinks... ] Actually I think we might just be\ndouble-counting if we did. The dead tuples surely should not count as\npart of the number of returned rows. 
We already do account for the\nI/O effort to read them (because I/O is estimated based on the total\nnumber of blocks in the table, which will include the space used by\ndead tuples). We're only missing the CPU time involved in the tuple\nvalidity check, which is pretty small.\n\n> Took an empty table, with a transaction I inserted a number of records\n> and before comitting I ran analyze.\n\nI tried to repeat this:\n\nregression=# begin;\nBEGIN\nregression=# create table foo (f1 int);\nCREATE\nregression=# insert into foo [ ... some data ... ]\n\nregression=# analyze foo;\nERROR: ANALYZE cannot run inside a BEGIN/END block\n\nThis seems a tad silly; I can't see any reason why ANALYZE couldn't be\ndone inside a BEGIN block. I think this is just a hangover from\nANALYZE's origins as part of VACUUM. Can anyone see a reason not to\nallow it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 10:53:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes... " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> My limited understanding of current behaviour is the search for a valid \n> row's tuple goes from older tuples to newer ones via forward links\n\nNo. Each tuple is independently indexed and independently visited.\nGiven the semantics of MVCC I think that's correct --- after all, what's\ndead to you is not necessarily dead to someone else.\n\nThere's been some speculation about trying to determine whether a dead\ntuple is dead to everyone (essentially the same test VACUUM makes), and\nif so propagating a marker back to the index tuple so that future\nindexscans don't have to make useless visits to the heap tuple.\n(I don't think we want to try to actually remove the index tuple; that's\nVACUUM's job, and there are locking problems if we try to make it happen\nin plain SELECTs. 
But we could set a marker bit in the index entry.)\nUnder normal circumstances where all transactions are short, it wouldn't\nbe very long before a dead tuple could be marked, so this should fix the\nperformance issue. Doing it in a reasonably clean fashion is the sticky\npart.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 14:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes... " }, { "msg_contents": "Hi Tom,\n\n(Please correct me where I'm wrong)\n\nIs it possible to reduce the performance impact of dead tuples esp when the \nindex is used? Right now performance goes down gradually till we vacuum \n(something like a 1/x curve).\n\nMy limited understanding of current behaviour is the search for a valid \nrow's tuple goes from older tuples to newer ones via forward links (based \non some old docs[1]).\n\nHow about searching from newer tuples to older tuples instead, using \nbackward links?\n\nMy assumption is newer tuples are more likely to be wanted than older ones \n- and so the number of tuples to search through will be less this way.\n\n**If index update is ok.\nIf a tuple is inserted, the index record is updated to point to inserted \ntuple, and the inserted tuple is made to point to a previous tuple.\ne.g.\n\nIndex-> old tuple->older tuple->oldest tuple\nIndex-> New tuple->old tuple->older tuple->oldest tuple\n\n**if index update not desirable\nIndex points to first tuple (valid or not).\n\nIf a tuple is inserted, the first tuple is updated to point to inserted \ntuple, and the inserted tuple is made to point to a previous tuple.\ne.g.\n\nIndex-> first tuple->old tuple->older tuple->oldest tuple\nIndex-> first tuple-> New tuple->old tuple->older tuple->oldest tuple\n\nIf this is done performance might not deteriorate as much when using index \nscans right?
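The newest-first chain proposed above can be sketched as a toy in C (illustrative only — these structures are not PostgreSQL's; the point is simply that a lookup walking backward links can stop at the first visible version, so recently inserted tuples are checked first):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Toy model of the proposal: the index keeps one entry per logical row,
 * pointing at the newest version; each version links back to an older one. */
typedef struct TupleVersion {
    int value;
    bool visible;                /* stands in for the MVCC visibility test */
    struct TupleVersion *older;  /* backward link to the previous version */
} TupleVersion;

/* "Insert": the new version becomes the head the index entry points to. */
static TupleVersion *push_version(TupleVersion *head, int value, bool visible)
{
    TupleVersion *v = malloc(sizeof *v);
    v->value = value;
    v->visible = visible;
    v->older = head;
    return v;
}

/* Lookup stops at the first visible version, so newer (presumably more
 * likely wanted) tuples are examined before older ones. */
static TupleVersion *first_visible(TupleVersion *head)
{
    for (TupleVersion *v = head; v != NULL; v = v->older)
        if (v->visible)
            return v;
    return NULL;
}
```

As Tom notes in the follow-ups, this isn't how the btree actually works — every version is separately indexed and all of them are visited — so the sketch shows the proposal, not current behavior.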
I'm not sure if a backward links would help for sequential \nscans, which are usually best done forward.\n\nRegards,\nLink.\n\n[1] http://developer.postgresql.org/pdf/transactions.pdf\nTuple headers contain:\n• xmin: transaction ID of inserting transaction\n• xmax: transaction ID of replacing/ deleting transaction (initially NULL)\n• forward link: link to newer version of same logical row, if any\nBasic idea: tuple is visible if xmin is valid and xmax is not. \"Valid\"\nmeans\n\"either committed or the current transaction\".\nIf we plan to update rather than delete, we first add new version of row\nto table, then set xmax and forward link in old tuple. Forward link will\nbe needed by concurrent updaters (but not by readers).\n\nAt 10:53 AM 5/1/02 -0400, Tom Lane wrote:\n>estimates. [ thinks... ] Actually I think we might just be\n>double-counting if we did. The dead tuples surely should not count as\n>part of the number of returned rows. We already do account for the\n>I/O effort to read them (because I/O is estimated based on the total\n\n\n\n", "msg_date": "Thu, 02 May 2002 02:13:59 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes... " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> But does Postgresql visit the older tuples first moving to the newer ones, \n> or the newer ones first?\n\nIt's going to visit them *all*. Reordering won't improve the\nperformance.\n\nFWIW I think that with the present implementation of btree, the newer\ntuples actually will be visited first --- when inserting a duplicate\nkey, the new entry will be inserted to the left of the equal key(s)\nalready present. But it doesn't matter. 
The only way to speed this\nup is to eliminate some of the visitings, which requires keeping more\ninfo in the index than we presently do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 00:49:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first? " }, { "msg_contents": "At 02:10 PM 5/1/02 -0400, Tom Lane wrote:\n>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > My limited understanding of current behaviour is the search for a valid\n> > row's tuple goes from older tuples to newer ones via forward links\n>\n>No. Each tuple is independently indexed and independently visited.\n>Given the semantics of MVCC I think that's correct --- after all, what's\n>dead to you is not necessarily dead to someone else.\n\nBut does Postgresql visit the older tuples first moving to the newer ones, \nor the newer ones first? From observation it seems to be starting from the \nolder ones. I'm thinking visiting the newer ones first would be better. \nWould that reduce the slowing down effect?\n\nAnyway, are you saying:\nIndex row X entry #1 -> oldest tuple\n...\nIndex row X entry #2 -> older tuple\n...\nIndex row X entry #3 -> old tuple\n...\nIndex row X entry #4 -> just inserted tuple\n\nAnd a search for a valid tuple goes through each index entry and visits \neach tuple to see if it is visible.\n\nThat seems like a lot of work to do, any docs/urls which explain this? Are \nthe index tuples for the same row generally in the same physical location?\n\nWhereas the following still looks like less work and still compatible with \nMVCC:\nindex tuple -> new tuple -> rolled back tuple -> old tuple -> older tuple.\n\nJust one index tuple per row. The tuples are checked from newer to older \nfor visibility via backward links.\n\nThe docs I mentioned say updates use the forward links. 
Repeated updates \ndefinitely slow down, so backward links might help?\n\nRegards,\nLink.\n\n\n\n", "msg_date": "Thu, 02 May 2002 12:54:46 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Search from newer tuples first, vs older tuples first?" }, { "msg_contents": "At 12:49 AM 5/2/02 -0400, Tom Lane wrote:\n>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > But does Postgresql visit the older tuples first moving to the newer ones,\n> > or the newer ones first?\n>\n>It's going to visit them *all*. Reordering won't improve the\n>performance.\n\nAck! I thought it went through them till the first valid tuple and was just \ngoing the wrong way.\n\n>FWIW I think that with the present implementation of btree, the newer\n>tuples actually will be visited first --- when inserting a duplicate\n>key, the new entry will be inserted to the left of the equal key(s)\n>already present. But it doesn't matter. The only way to speed this\n>up is to eliminate some of the visitings, which requires keeping more\n>info in the index than we presently do.\n\nOK I'm starting to get it :). Will the index behaviour be changed soon?\n\nHmm, then what are the row tuple forward links for? Why forward?\n\nRegards,\nLink.\n\n", "msg_date": "Thu, 02 May 2002 16:52:03 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first? " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> OK I'm starting to get it :). Will the index behaviour be changed soon?\n\nWhen someone steps up and does it. I've learned not to predict\nschedules for this project.\n\n> Hmm, then what are the row tuple forward links for? 
Why forward?\n\nUpdates in READ COMMITTED mode have to be able to find the newest\nversion of a tuple they've already found.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:00:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first? " }, { "msg_contents": "Tom Lane wrote:\n> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > OK I'm starting to get it :). Will the index behaviour be changed soon?\n> \n> When someone steps up and does it. I've learned not to predict\n> schedules for this project.\n\nIt is not that hard to implement, just messy. When the index returns a\nheap row and the heap row is viewed for visibility, if _no_one_ can see\nthe row, the index can be marked as expired. It could be a single bit\nin the index tuple, and doesn't need to be flushed to disk, though the\nindex page has to be marked as dirty. However, we are going to need to\nflush a pre-change image to WAL so it may as well be handled as a normal\nindex page change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 02:09:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It is not that hard to implement, just messy. When the index returns a\n> heap row and the heap row is viewed for visibility, if _no_one_ can see\n> the row, the index can be marked as expired. It could be a single bit\n> in the index tuple, and doesn't need to be flushed to disk, though the\n> index page has to be marked as dirty. 
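A toy model of the marking scheme Bruce describes (purely illustrative — the real index and heap structures are far more involved): once a scan discovers that a heap tuple is dead to every transaction, it sets a hint bit on the in-memory index entry so later scans can skip the heap visit entirely.

```c
#include <stdbool.h>

/* Toy model: an index entry carries a hint bit saying the heap tuple it
 * points at is dead to all transactions.  The bit is safe to lose in a
 * crash -- it can always be set again, like XMIN_COMMITTED etc. */
typedef struct {
    int  heap_slot;
    bool killed;          /* hint bit set once the tuple is dead to all */
} IndexEntry;

static int heap_fetches;  /* counts heap visits, to show the saving */

/* Stand-in for "is this tuple dead to everyone?" (VACUUM's test). */
static bool heap_tuple_dead_to_all(const bool *dead_heap, int slot)
{
    heap_fetches++;
    return dead_heap[slot];
}

/* One index-scan step: skip entries already marked killed; otherwise
 * visit the heap, and mark the entry if nobody can see the tuple. */
static bool index_scan_visit(IndexEntry *e, const bool *dead_heap)
{
    if (e->killed)
        return false;                 /* no heap access needed */
    if (heap_tuple_dead_to_all(dead_heap, e->heap_slot))
    {
        e->killed = true;             /* index page marked dirty in real life */
        return false;
    }
    return true;                      /* tuple is (or may be) visible */
}
```

The payoff is that after the first scan pays the visibility check, every subsequent scan skips that heap fetch — which is why, with short transactions, the slowdown from accumulating dead tuples is bounded until VACUUM removes them for good.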
However, we are going to need to\n> flush a pre-change image to WAL so it may as well be handled as a normal\n> index page change.\n\nThis did actually get done while you were on vacation. It does *not*\nneed a WAL entry, on the same principle that setting XMIN_COMMITTED,\nXMAX_ABORTED, etc hint bits do not need WAL entries --- namely the\nbits can always get set again if they are lost in a crash.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Jun 2002 18:34:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It is not that hard to implement, just messy. When the index returns a\n> > heap row and the heap row is viewed for visibility, if _no_one_ can see\n> > the row, the index can be marked as expired. It could be a single bit\n> > in the index tuple, and doesn't need to be flushed to disk, though the\n> > index page has to be marked as dirty. However, we are going to need to\n> > flush a pre-change image to WAL so it may as well be handled as a normal\n> > index page change.\n> \n> This did actually get done while you were on vacation. It does *not*\n> need a WAL entry, on the same principle that setting XMIN_COMMITTED,\n> XMAX_ABORTED, etc hint bits do not need WAL entries --- namely the\n> bits can always get set again if they are lost in a crash.\n\nOh, thanks. That is great news. I am having trouble determining when a\nthread ends so please be patient with me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Jun 2002 18:44:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first?" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It is not that hard to implement, just messy. When the index returns a\n> > heap row and the heap row is viewed for visibility, if _no_one_ can see\n> > the row, the index can be marked as expired. It could be a single bit\n> > in the index tuple, and doesn't need to be flushed to disk, though the\n> > index page has to be marked as dirty. However, we are going to need to\n> > flush a pre-change image to WAL so it may as well be handled as a normal\n> > index page change.\n> \n> This did actually get done while you were on vacation. It does *not*\n> need a WAL entry, on the same principle that setting XMIN_COMMITTED,\n> XMAX_ABORTED, etc hint bits do not need WAL entries --- namely the\n> bits can always get set again if they are lost in a crash.\n\nTODO item marked as done:\n\n\t* -Add deleted bit to index tuples to reduce heap access\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Jun 2002 18:45:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Search from newer tuples first, vs older tuples first?" }, { "msg_contents": "Tom Lane wrote:\n> I tried to repeat this:\n> \n> regression=# begin;\n> BEGIN\n> regression=# create table foo (f1 int);\n> CREATE\n> regression=# insert into foo [ ... some data ... 
]\n> \n> regression=# analyze foo;\n> ERROR: ANALYZE cannot run inside a BEGIN/END block\n> \n> This seems a tad silly; I can't see any reason why ANALYZE couldn't be\n> done inside a BEGIN block. I think this is just a hangover from\n> ANALYZE's origins as part of VACUUM. Can anyone see a reason not to\n> allow it?\n\nThe following patch allows analyze to be run inside a transaction. \nVacuum and vacuum analyze still can not be run in a transaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/analyze.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/analyze.c,v\nretrieving revision 1.35\ndiff -c -r1.35 analyze.c\n*** src/backend/commands/analyze.c\t24 May 2002 18:57:55 -0000\t1.35\n--- src/backend/commands/analyze.c\t11 Jun 2002 21:38:51 -0000\n***************\n*** 156,170 ****\n \t\televel = DEBUG1;\n \n \t/*\n- \t * Begin a transaction for analyzing this relation.\n- \t *\n- \t * Note: All memory allocated during ANALYZE will live in\n- \t * TransactionCommandContext or a subcontext thereof, so it will all\n- \t * be released by transaction commit at the end of this routine.\n- \t */\n- \tStartTransactionCommand();\n- \n- \t/*\n \t * Check for user-requested abort.\tNote we want this to be inside a\n \t * transaction, so xact.c doesn't issue useless WARNING.\n \t */\n--- 156,161 ----\n***************\n*** 177,186 ****\n \tif (!SearchSysCacheExists(RELOID,\n \t\t\t\t\t\t\t ObjectIdGetDatum(relid),\n \t\t\t\t\t\t\t 0, 0, 0))\n- \t{\n- \t\tCommitTransactionCommand();\n \t\treturn;\n- \t}\n \n \t/*\n \t * Open the class, getting only a read lock on it, and check\n--- 168,174 ----\n***************\n*** 196,202 ****\n \t\t\telog(WARNING, \"Skipping \\\"%s\\\" --- only table or database owner can ANALYZE 
it\",\n \t\t\t\t RelationGetRelationName(onerel));\n \t\trelation_close(onerel, AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 184,189 ----\n***************\n*** 211,217 ****\n \t\t\telog(WARNING, \"Skipping \\\"%s\\\" --- can not process indexes, views or special system tables\",\n \t\t\t\t RelationGetRelationName(onerel));\n \t\trelation_close(onerel, AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 198,203 ----\n***************\n*** 222,228 ****\n \t\tstrcmp(RelationGetRelationName(onerel), StatisticRelationName) == 0)\n \t{\n \t\trelation_close(onerel, AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 208,213 ----\n***************\n*** 283,289 ****\n \tif (attr_cnt <= 0)\n \t{\n \t\trelation_close(onerel, NoLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 268,273 ----\n***************\n*** 370,378 ****\n \t * entries we made in pg_statistic.)\n \t */\n \trelation_close(onerel, NoLock);\n- \n- \t/* Commit and release working memory */\n- \tCommitTransactionCommand();\n }\n \n /*\n--- 354,359 ----\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.226\ndiff -c -r1.226 vacuum.c\n*** src/backend/commands/vacuum.c\t24 May 2002 18:57:56 -0000\t1.226\n--- src/backend/commands/vacuum.c\t11 Jun 2002 21:38:59 -0000\n***************\n*** 110,117 ****\n \n \n /* non-export function prototypes */\n- static void vacuum_init(VacuumStmt *vacstmt);\n- static void vacuum_shutdown(VacuumStmt *vacstmt);\n static List *getrels(const RangeVar *vacrel, const char *stmttype);\n static void vac_update_dbstats(Oid dbid,\n \t\t\t\t TransactionId vacuumXID,\n--- 110,115 ----\n***************\n*** 178,190 ****\n \t * user's transaction too, which would certainly not be the desired\n \t * behavior.\n \t */\n! 
\tif (IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n \t/* Running VACUUM from a function would free the function context */\n! \tif (!MemoryContextContains(QueryContext, vacstmt))\n \t\telog(ERROR, \"%s cannot be executed from a function\", stmttype);\n! \n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */\n--- 176,188 ----\n \t * user's transaction too, which would certainly not be the desired\n \t * behavior.\n \t */\n! \tif (vacstmt->vacuum && IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n \t/* Running VACUUM from a function would free the function context */\n! \tif (vacstmt->vacuum && !MemoryContextContains(QueryContext, vacstmt))\n \t\telog(ERROR, \"%s cannot be executed from a function\", stmttype);\n! \n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */\n***************\n*** 207,215 ****\n \tvrl = getrels(vacstmt->relation, stmttype);\n \n \t/*\n! \t * Start up the vacuum cleaner.\n \t */\n! \tvacuum_init(vacstmt);\n \n \t/*\n \t * Process each selected relation.\tWe are careful to process each\n--- 205,255 ----\n \tvrl = getrels(vacstmt->relation, stmttype);\n \n \t/*\n! \t *\t\tFormerly, there was code here to prevent more than one VACUUM from\n! \t *\t\texecuting concurrently in the same database. However, there's no\n! \t *\t\tgood reason to prevent that, and manually removing lockfiles after\n! \t *\t\ta vacuum crash was a pain for dbadmins. So, forget about lockfiles,\n! \t *\t\tand just rely on the locks we grab on each target table\n! \t *\t\tto ensure that there aren't two VACUUMs running on the same table\n! \t *\t\tat the same time.\n! \t *\n! \t *\t\tThe strangeness with committing and starting transactions in the\n! \t *\t\tinit and shutdown routines is due to the fact that the vacuum cleaner\n! \t *\t\tis invoked via an SQL command, and so is already executing inside\n! 
\t *\t\ta transaction.\tWe need to leave ourselves in a predictable state\n! \t *\t\ton entry and exit to the vacuum cleaner. We commit the transaction\n! \t *\t\tstarted in PostgresMain() inside vacuum_init(), and start one in\n! \t *\t\tvacuum_shutdown() to match the commit waiting for us back in\n! \t *\t\tPostgresMain().\n \t */\n! \tif (vacstmt->vacuum)\n! \t{\n! \t\tif (vacstmt->relation == NULL)\n! \t\t{\n! \t\t\t/*\n! \t\t\t * Compute the initially applicable OldestXmin and FreezeLimit\n! \t\t\t * XIDs, so that we can record these values at the end of the\n! \t\t\t * VACUUM. Note that individual tables may well be processed with\n! \t\t\t * newer values, but we can guarantee that no (non-shared)\n! \t\t\t * relations are processed with older ones.\n! \t\t\t *\n! \t\t\t * It is okay to record non-shared values in pg_database, even though\n! \t\t\t * we may vacuum shared relations with older cutoffs, because only\n! \t\t\t * the minimum of the values present in pg_database matters. We\n! \t\t\t * can be sure that shared relations have at some time been\n! \t\t\t * vacuumed with cutoffs no worse than the global minimum; for, if\n! \t\t\t * there is a backend in some other DB with xmin = OLDXMIN that's\n! \t\t\t * determining the cutoff with which we vacuum shared relations,\n! \t\t\t * it is not possible for that database to have a cutoff newer\n! \t\t\t * than OLDXMIN recorded in pg_database.\n! \t\t\t */\n! \t\t\tvacuum_set_xid_limits(vacstmt, false,\n! \t\t\t\t\t\t\t\t &initialOldestXmin, &initialFreezeLimit);\n! \t\t}\n! \n! \t\t/* matches the StartTransaction in PostgresMain() */\n! \t\tCommitTransactionCommand();\n! \t}\n \n \t/*\n \t * Process each selected relation.\tWe are careful to process each\n***************\n*** 225,305 ****\n \t\tif (vacstmt->vacuum)\n \t\t\tvacuum_rel(relid, vacstmt, RELKIND_RELATION);\n \t\tif (vacstmt->analyze)\n \t\t\tanalyze_rel(relid, vacstmt);\n \t}\n \n \t/* clean up */\n! \tvacuum_shutdown(vacstmt);\n! }\n! \n! /*\n! 
*\tvacuum_init(), vacuum_shutdown() -- start up and shut down the vacuum cleaner.\n! *\n! *\t\tFormerly, there was code here to prevent more than one VACUUM from\n! *\t\texecuting concurrently in the same database. However, there's no\n! *\t\tgood reason to prevent that, and manually removing lockfiles after\n! *\t\ta vacuum crash was a pain for dbadmins. So, forget about lockfiles,\n! *\t\tand just rely on the locks we grab on each target table\n! *\t\tto ensure that there aren't two VACUUMs running on the same table\n! *\t\tat the same time.\n! *\n! *\t\tThe strangeness with committing and starting transactions in the\n! *\t\tinit and shutdown routines is due to the fact that the vacuum cleaner\n! *\t\tis invoked via an SQL command, and so is already executing inside\n! *\t\ta transaction.\tWe need to leave ourselves in a predictable state\n! *\t\ton entry and exit to the vacuum cleaner. We commit the transaction\n! *\t\tstarted in PostgresMain() inside vacuum_init(), and start one in\n! *\t\tvacuum_shutdown() to match the commit waiting for us back in\n! *\t\tPostgresMain().\n! */\n! static void\n! vacuum_init(VacuumStmt *vacstmt)\n! {\n! \tif (vacstmt->vacuum && vacstmt->relation == NULL)\n \t{\n! \t\t/*\n! \t\t * Compute the initially applicable OldestXmin and FreezeLimit\n! \t\t * XIDs, so that we can record these values at the end of the\n! \t\t * VACUUM. Note that individual tables may well be processed with\n! \t\t * newer values, but we can guarantee that no (non-shared)\n! \t\t * relations are processed with older ones.\n! \t\t *\n! \t\t * It is okay to record non-shared values in pg_database, even though\n! \t\t * we may vacuum shared relations with older cutoffs, because only\n! \t\t * the minimum of the values present in pg_database matters. We\n! \t\t * can be sure that shared relations have at some time been\n! \t\t * vacuumed with cutoffs no worse than the global minimum; for, if\n! 
\t\t * there is a backend in some other DB with xmin = OLDXMIN that's\n! \t\t * determining the cutoff with which we vacuum shared relations,\n! \t\t * it is not possible for that database to have a cutoff newer\n! \t\t * than OLDXMIN recorded in pg_database.\n! \t\t */\n! \t\tvacuum_set_xid_limits(vacstmt, false,\n! \t\t\t\t\t\t\t &initialOldestXmin, &initialFreezeLimit);\n! \t}\n! \n! \t/* matches the StartTransaction in PostgresMain() */\n! \tCommitTransactionCommand();\n! }\n! \n! static void\n! vacuum_shutdown(VacuumStmt *vacstmt)\n! {\n! \t/* on entry, we are not in a transaction */\n \n! \t/* matches the CommitTransaction in PostgresMain() */\n! \tStartTransactionCommand();\n \n! \t/*\n! \t * If we did a database-wide VACUUM, update the database's pg_database\n! \t * row with info about the transaction IDs used, and try to truncate\n! \t * pg_clog.\n! \t */\n! \tif (vacstmt->vacuum && vacstmt->relation == NULL)\n! \t{\n! \t\tvac_update_dbstats(MyDatabaseId,\n! \t\t\t\t\t\t initialOldestXmin, initialFreezeLimit);\n! \t\tvac_truncate_clog(initialOldestXmin, initialFreezeLimit);\n \t}\n \n \t/*\n--- 265,303 ----\n \t\tif (vacstmt->vacuum)\n \t\t\tvacuum_rel(relid, vacstmt, RELKIND_RELATION);\n \t\tif (vacstmt->analyze)\n+ \t\t{\n+ \t\t\t/* If we vacuumed, use new transaction for analyze. */\n+ \t\t\tif (vacstmt->vacuum)\n+ \t\t\t\tStartTransactionCommand();\n \t\t\tanalyze_rel(relid, vacstmt);\n+ \t\t\tif (vacstmt->vacuum)\n+ \t\t\t\tCommitTransactionCommand();\n+ /*\n+ \t\t\telse\n+ \t\t\t\tMemoryContextReset();\n+ */\n+ \t\t}\n \t}\n \n \t/* clean up */\n! \tif (vacstmt->vacuum)\n \t{\n! \t\t/* on entry, we are not in a transaction */\n \n! \t\t/* matches the CommitTransaction in PostgresMain() */\n! \t\tStartTransactionCommand();\n \n! \t\t/*\n! \t\t * If we did a database-wide VACUUM, update the database's pg_database\n! \t\t * row with info about the transaction IDs used, and try to truncate\n! \t\t * pg_clog.\n! \t\t */\n! 
\t\tif (vacstmt->relation == NULL)\n! \t\t{\n! \t\t\tvac_update_dbstats(MyDatabaseId,\n! \t\t\t\t\t\t\t initialOldestXmin, initialFreezeLimit);\n! \t\t\tvac_truncate_clog(initialOldestXmin, initialFreezeLimit);\n! \t\t}\n \t}\n \n \t/*", "msg_date": "Tue, 11 Jun 2002 17:46:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes..." }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > I tried to repeat this:\n> > \n> > regression=# begin;\n> > BEGIN\n> > regression=# create table foo (f1 int);\n> > CREATE\n> > regression=# insert into foo [ ... some data ... ]\n> > \n> > regression=# analyze foo;\n> > ERROR: ANALYZE cannot run inside a BEGIN/END block\n> > \n> > This seems a tad silly; I can't see any reason why ANALYZE couldn't be\n> > done inside a BEGIN block. I think this is just a hangover from\n> > ANALYZE's origins as part of VACUUM. Can anyone see a reason not to\n> > allow it?\n> \n> The following patch allows analyze to be run inside a transaction. \n> Vacuum and vacuum analyze still can not be run in a transaction.\n\nOne change in this patch is that because analyze now runs in the outer\ntransaction, I can't clear the memory used to support each analyzed\nrelation. Not sure if this is an issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 17:50:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> One change in this patch is that because analyze now runs in the outer\n> transaction, I can't clear the memory used to support each analyzed\n> relation. 
Not sure if this is an issue.\n\nSeems like a pretty serious (not to say fatal) objection to me. Surely\nyou can fix that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jun 2002 23:05:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes... " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > One change in this patch is that because analyze now runs in the outer\n> > transaction, I can't clear the memory used to support each analyzed\n> > relation. Not sure if this is an issue.\n> \n> Seems like a pretty serious (not to say fatal) objection to me. Surely\n> you can fix that.\n\nOK, suggestions. I know CommandCounterIncrement will not help. Should\nI do more pfree'ing?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 23:11:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Seems like a pretty serious (not to say fatal) objection to me. Surely\n>> you can fix that.\n\n> OK, suggestions. I know CommandCounterIncrement will not help. Should\n> I do more pfree'ing?\n\nNo, retail pfree'ing is not a maintainable solution. I was thinking\nmore along the lines of a MemoryContextResetAndDeleteChildren() on\nwhatever the active context is. 
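The memory-context approach suggested here — allocate freely in a scratch context, then release everything at once instead of retail pfree'ing — can be sketched with a toy arena in C (these names are stand-ins, not the actual palloc/MemoryContext machinery):

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for a memory context: every allocation is linked into the
 * context, and one reset call frees them all.  Illustrative only. */
typedef struct Chunk {
    struct Chunk *next;
} Chunk;

typedef struct {
    Chunk *chunks;
    int    live;        /* number of live allocations, for the demo */
} Context;

static void *ctx_alloc(Context *ctx, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);
    c->next = ctx->chunks;
    ctx->chunks = c;
    ctx->live++;
    return c + 1;       /* caller's usable memory starts past the header */
}

/* One call releases every allocation made in the context -- no
 * per-pointer bookkeeping in the calling code. */
static void ctx_reset(Context *ctx)
{
    while (ctx->chunks)
    {
        Chunk *c = ctx->chunks;
        ctx->chunks = c->next;
        free(c);
        ctx->live--;
    }
}
```

This is the pattern the patch ultimately adopts: give each analyzed relation its scratch allocations in a dedicated context and reclaim them wholesale between relations, rather than tracking individual frees.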
If that doesn't work straight off,\nyou might have to create a new working context and switch into it\nbefore calling the analyze subroutine --- then deleting that context\nwould do the trick.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jun 2002 23:22:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes... " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Seems like a pretty serious (not to say fatal) objection to me. Surely\n> >> you can fix that.\n> \n> > OK, suggestions. I know CommandCounterIncrement will not help. Should\n> > I do more pfree'ing?\n> \n> No, retail pfree'ing is not a maintainable solution. I was thinking\n> more along the lines of a MemoryContextResetAndDeleteChildren() on\n> whatever the active context is. If that doesn't work straight off,\n> you might have to create a new working context and switch into it\n> before calling the analyze subroutine --- then deleting that context\n> would do the trick.\n\nOK, how is this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/analyze.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/analyze.c,v\nretrieving revision 1.35\ndiff -c -r1.35 analyze.c\n*** src/backend/commands/analyze.c\t24 May 2002 18:57:55 -0000\t1.35\n--- src/backend/commands/analyze.c\t12 Jun 2002 03:59:45 -0000\n***************\n*** 156,170 ****\n \t\televel = DEBUG1;\n \n \t/*\n- \t * Begin a transaction for analyzing this relation.\n- \t *\n- \t * Note: All memory allocated during ANALYZE will live in\n- \t * TransactionCommandContext or a subcontext thereof, so it will all\n- \t * be released by transaction commit at the end of this routine.\n- \t */\n- \tStartTransactionCommand();\n- \n- \t/*\n \t * Check for user-requested abort.\tNote we want this to be inside a\n \t * transaction, so xact.c doesn't issue useless WARNING.\n \t */\n--- 156,161 ----\n***************\n*** 177,186 ****\n \tif (!SearchSysCacheExists(RELOID,\n \t\t\t\t\t\t\t ObjectIdGetDatum(relid),\n \t\t\t\t\t\t\t 0, 0, 0))\n- \t{\n- \t\tCommitTransactionCommand();\n \t\treturn;\n- \t}\n \n \t/*\n \t * Open the class, getting only a read lock on it, and check\n--- 168,174 ----\n***************\n*** 196,202 ****\n \t\t\telog(WARNING, \"Skipping \\\"%s\\\" --- only table or database owner can ANALYZE it\",\n \t\t\t\t RelationGetRelationName(onerel));\n \t\trelation_close(onerel, AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 184,189 ----\n***************\n*** 211,217 ****\n \t\t\telog(WARNING, \"Skipping \\\"%s\\\" --- can not process indexes, views or special system tables\",\n \t\t\t\t RelationGetRelationName(onerel));\n \t\trelation_close(onerel, AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 198,203 ----\n***************\n*** 222,228 ****\n \t\tstrcmp(RelationGetRelationName(onerel), StatisticRelationName) == 0)\n \t{\n \t\trelation_close(onerel, 
AccessShareLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 208,213 ----\n***************\n*** 283,289 ****\n \tif (attr_cnt <= 0)\n \t{\n \t\trelation_close(onerel, NoLock);\n- \t\tCommitTransactionCommand();\n \t\treturn;\n \t}\n \n--- 268,273 ----\n***************\n*** 370,378 ****\n \t * entries we made in pg_statistic.)\n \t */\n \trelation_close(onerel, NoLock);\n- \n- \t/* Commit and release working memory */\n- \tCommitTransactionCommand();\n }\n \n /*\n--- 354,359 ----\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.226\ndiff -c -r1.226 vacuum.c\n*** src/backend/commands/vacuum.c\t24 May 2002 18:57:56 -0000\t1.226\n--- src/backend/commands/vacuum.c\t12 Jun 2002 03:59:54 -0000\n***************\n*** 110,117 ****\n \n \n /* non-export function prototypes */\n- static void vacuum_init(VacuumStmt *vacstmt);\n- static void vacuum_shutdown(VacuumStmt *vacstmt);\n static List *getrels(const RangeVar *vacrel, const char *stmttype);\n static void vac_update_dbstats(Oid dbid,\n \t\t\t\t TransactionId vacuumXID,\n--- 110,115 ----\n***************\n*** 160,165 ****\n--- 158,165 ----\n void\n vacuum(VacuumStmt *vacstmt)\n {\n+ \tMemoryContext anl_context,\n+ \t\t\t\t old_context;\n \tconst char *stmttype = vacstmt->vacuum ? \"VACUUM\" : \"ANALYZE\";\n \tList\t *vrl,\n \t\t\t *cur;\n***************\n*** 178,190 ****\n \t * user's transaction too, which would certainly not be the desired\n \t * behavior.\n \t */\n! \tif (IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n \t/* Running VACUUM from a function would free the function context */\n! \tif (!MemoryContextContains(QueryContext, vacstmt))\n \t\telog(ERROR, \"%s cannot be executed from a function\", stmttype);\n! 
\n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */\n--- 178,190 ----\n \t * user's transaction too, which would certainly not be the desired\n \t * behavior.\n \t */\n! \tif (vacstmt->vacuum && IsTransactionBlock())\n \t\telog(ERROR, \"%s cannot run inside a BEGIN/END block\", stmttype);\n \n \t/* Running VACUUM from a function would free the function context */\n! \tif (vacstmt->vacuum && !MemoryContextContains(QueryContext, vacstmt))\n \t\telog(ERROR, \"%s cannot be executed from a function\", stmttype);\n! \n \t/*\n \t * Send info about dead objects to the statistics collector\n \t */\n***************\n*** 203,215 ****\n \t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_INITSIZE,\n \t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_MAXSIZE);\n \n \t/* Build list of relations to process (note this lives in vac_context) */\n \tvrl = getrels(vacstmt->relation, stmttype);\n \n \t/*\n! \t * Start up the vacuum cleaner.\n \t */\n! \tvacuum_init(vacstmt);\n \n \t/*\n \t * Process each selected relation.\tWe are careful to process each\n--- 203,264 ----\n \t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_INITSIZE,\n \t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_MAXSIZE);\n \n+ \tif (vacstmt->analyze && !vacstmt->vacuum)\n+ \t\tanl_context = AllocSetContextCreate(QueryContext,\n+ \t\t\t\t\t\t\t\t\t\t\t\"Analyze\",\n+ \t\t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_MINSIZE,\n+ \t\t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_INITSIZE,\n+ \t\t\t\t\t\t\t\t\t\t\tALLOCSET_DEFAULT_MAXSIZE);\n+ \n \t/* Build list of relations to process (note this lives in vac_context) */\n \tvrl = getrels(vacstmt->relation, stmttype);\n \n \t/*\n! \t *\t\tFormerly, there was code here to prevent more than one VACUUM from\n! \t *\t\texecuting concurrently in the same database. However, there's no\n! \t *\t\tgood reason to prevent that, and manually removing lockfiles after\n! \t *\t\ta vacuum crash was a pain for dbadmins. So, forget about lockfiles,\n! \t *\t\tand just rely on the locks we grab on each target table\n! 
\t *\t\tto ensure that there aren't two VACUUMs running on the same table\n! \t *\t\tat the same time.\n! \t *\n! \t *\t\tThe strangeness with committing and starting transactions in the\n! \t *\t\tinit and shutdown routines is due to the fact that the vacuum cleaner\n! \t *\t\tis invoked via an SQL command, and so is already executing inside\n! \t *\t\ta transaction.\tWe need to leave ourselves in a predictable state\n! \t *\t\ton entry and exit to the vacuum cleaner. We commit the transaction\n! \t *\t\tstarted in PostgresMain() inside vacuum_init(), and start one in\n! \t *\t\tvacuum_shutdown() to match the commit waiting for us back in\n! \t *\t\tPostgresMain().\n \t */\n! \tif (vacstmt->vacuum)\n! \t{\n! \t\tif (vacstmt->relation == NULL)\n! \t\t{\n! \t\t\t/*\n! \t\t\t * Compute the initially applicable OldestXmin and FreezeLimit\n! \t\t\t * XIDs, so that we can record these values at the end of the\n! \t\t\t * VACUUM. Note that individual tables may well be processed with\n! \t\t\t * newer values, but we can guarantee that no (non-shared)\n! \t\t\t * relations are processed with older ones.\n! \t\t\t *\n! \t\t\t * It is okay to record non-shared values in pg_database, even though\n! \t\t\t * we may vacuum shared relations with older cutoffs, because only\n! \t\t\t * the minimum of the values present in pg_database matters. We\n! \t\t\t * can be sure that shared relations have at some time been\n! \t\t\t * vacuumed with cutoffs no worse than the global minimum; for, if\n! \t\t\t * there is a backend in some other DB with xmin = OLDXMIN that's\n! \t\t\t * determining the cutoff with which we vacuum shared relations,\n! \t\t\t * it is not possible for that database to have a cutoff newer\n! \t\t\t * than OLDXMIN recorded in pg_database.\n! \t\t\t */\n! \t\t\tvacuum_set_xid_limits(vacstmt, false,\n! \t\t\t\t\t\t\t\t &initialOldestXmin, &initialFreezeLimit);\n! \t\t}\n! \n! \t\t/* matches the StartTransaction in PostgresMain() */\n! 
\t\tCommitTransactionCommand();\n! \t}\n \n \t/*\n \t * Process each selected relation.\tWe are careful to process each\n***************\n*** 225,305 ****\n \t\tif (vacstmt->vacuum)\n \t\t\tvacuum_rel(relid, vacstmt, RELKIND_RELATION);\n \t\tif (vacstmt->analyze)\n \t\t\tanalyze_rel(relid, vacstmt);\n \t}\n \n \t/* clean up */\n! \tvacuum_shutdown(vacstmt);\n! }\n! \n! /*\n! *\tvacuum_init(), vacuum_shutdown() -- start up and shut down the vacuum cleaner.\n! *\n! *\t\tFormerly, there was code here to prevent more than one VACUUM from\n! *\t\texecuting concurrently in the same database. However, there's no\n! *\t\tgood reason to prevent that, and manually removing lockfiles after\n! *\t\ta vacuum crash was a pain for dbadmins. So, forget about lockfiles,\n! *\t\tand just rely on the locks we grab on each target table\n! *\t\tto ensure that there aren't two VACUUMs running on the same table\n! *\t\tat the same time.\n! *\n! *\t\tThe strangeness with committing and starting transactions in the\n! *\t\tinit and shutdown routines is due to the fact that the vacuum cleaner\n! *\t\tis invoked via an SQL command, and so is already executing inside\n! *\t\ta transaction.\tWe need to leave ourselves in a predictable state\n! *\t\ton entry and exit to the vacuum cleaner. We commit the transaction\n! *\t\tstarted in PostgresMain() inside vacuum_init(), and start one in\n! *\t\tvacuum_shutdown() to match the commit waiting for us back in\n! *\t\tPostgresMain().\n! */\n! static void\n! vacuum_init(VacuumStmt *vacstmt)\n! {\n! \tif (vacstmt->vacuum && vacstmt->relation == NULL)\n \t{\n! \t\t/*\n! \t\t * Compute the initially applicable OldestXmin and FreezeLimit\n! \t\t * XIDs, so that we can record these values at the end of the\n! \t\t * VACUUM. Note that individual tables may well be processed with\n! \t\t * newer values, but we can guarantee that no (non-shared)\n! \t\t * relations are processed with older ones.\n! \t\t *\n! 
\t\t * It is okay to record non-shared values in pg_database, even though\n! \t\t * we may vacuum shared relations with older cutoffs, because only\n! \t\t * the minimum of the values present in pg_database matters. We\n! \t\t * can be sure that shared relations have at some time been\n! \t\t * vacuumed with cutoffs no worse than the global minimum; for, if\n! \t\t * there is a backend in some other DB with xmin = OLDXMIN that's\n! \t\t * determining the cutoff with which we vacuum shared relations,\n! \t\t * it is not possible for that database to have a cutoff newer\n! \t\t * than OLDXMIN recorded in pg_database.\n! \t\t */\n! \t\tvacuum_set_xid_limits(vacstmt, false,\n! \t\t\t\t\t\t\t &initialOldestXmin, &initialFreezeLimit);\n! \t}\n! \n! \t/* matches the StartTransaction in PostgresMain() */\n! \tCommitTransactionCommand();\n! }\n \n! static void\n! vacuum_shutdown(VacuumStmt *vacstmt)\n! {\n! \t/* on entry, we are not in a transaction */\n! \n! \t/* matches the CommitTransaction in PostgresMain() */\n! \tStartTransactionCommand();\n \n! \t/*\n! \t * If we did a database-wide VACUUM, update the database's pg_database\n! \t * row with info about the transaction IDs used, and try to truncate\n! \t * pg_clog.\n! \t */\n! \tif (vacstmt->vacuum && vacstmt->relation == NULL)\n! \t{\n! \t\tvac_update_dbstats(MyDatabaseId,\n! \t\t\t\t\t\t initialOldestXmin, initialFreezeLimit);\n! \t\tvac_truncate_clog(initialOldestXmin, initialFreezeLimit);\n \t}\n \n \t/*\n--- 274,317 ----\n \t\tif (vacstmt->vacuum)\n \t\t\tvacuum_rel(relid, vacstmt, RELKIND_RELATION);\n \t\tif (vacstmt->analyze)\n+ \t\t{\n+ \t\t\t/* If we vacuumed, use new transaction for analyze. 
*/\n+ \t\t\tif (vacstmt->vacuum)\n+ \t\t\t\tStartTransactionCommand();\n+ \t\t\telse\n+ \t\t\t\told_context = MemoryContextSwitchTo(anl_context);\n+ \n \t\t\tanalyze_rel(relid, vacstmt);\n+ \n+ \t\t\tif (vacstmt->vacuum)\n+ \t\t\t\tCommitTransactionCommand();\n+ \t\t\telse\n+ \t\t\t{\n+ \t\t\t\tMemoryContextResetAndDeleteChildren(anl_context);\n+ \t\t\t\tMemoryContextSwitchTo(old_context);\n+ \t\t\t}\n+ \t\t}\n \t}\n \n \t/* clean up */\n! \tif (vacstmt->vacuum)\n \t{\n! \t\t/* on entry, we are not in a transaction */\n \n! \t\t/* matches the CommitTransaction in PostgresMain() */\n! \t\tStartTransactionCommand();\n \n! \t\t/*\n! \t\t * If we did a database-wide VACUUM, update the database's pg_database\n! \t\t * row with info about the transaction IDs used, and try to truncate\n! \t\t * pg_clog.\n! \t\t */\n! \t\tif (vacstmt->relation == NULL)\n! \t\t{\n! \t\t\tvac_update_dbstats(MyDatabaseId,\n! \t\t\t\t\t\t\t initialOldestXmin, initialFreezeLimit);\n! \t\t\tvac_truncate_clog(initialOldestXmin, initialFreezeLimit);\n! \t\t}\n \t}\n \n \t/*\n***************\n*** 309,314 ****\n--- 321,330 ----\n \t */\n \tMemoryContextDelete(vac_context);\n \tvac_context = NULL;\n+ \n+ \tif (vacstmt->analyze && !vacstmt->vacuum)\n+ \t\tMemoryContextDelete(anl_context);\n+ \n }\n \n /*", "msg_date": "Wed, 12 Jun 2002 00:02:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analyze on large changes..." } ]
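The fix in the patch above runs each ANALYZE inside a scratch memory context that is reset between relations (Tom's MemoryContextResetAndDeleteChildren suggestion). The sketch below is an abstract illustration of that pattern in plain Python — not backend C; the names are illustrative only:

```python
# Abstract illustration (plain Python, not backend C) of the pattern in
# the patch above: every allocation made while analyzing one relation
# goes into a scratch context, which is reset before the next relation,
# so per-relation garbage never accumulates across the whole run.
class ScratchContext:
    def __init__(self):
        self.allocations = []

    def alloc(self, obj):
        self.allocations.append(obj)
        return obj

    def reset(self):
        # analogue of MemoryContextResetAndDeleteChildren()
        self.allocations.clear()

def analyze_all(relations, analyze_one):
    ctx = ScratchContext()
    peak = 0
    for rel in relations:
        analyze_one(rel, ctx)             # work in the scratch context
        peak = max(peak, len(ctx.allocations))
        ctx.reset()                       # free everything before next rel
    return peak                           # bounded, not O(len(relations))
```

The point of the reset-per-iteration shape is that peak memory is bounded by the most expensive single relation rather than by the sum over all relations.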
[ { "msg_contents": "The following steps should fix the configure and\ncompile problems that were occurring when installing\nPostgres 7.2 on HPUX 11.11 using the gcc compiler\n(see below for reference posts).\n\nFor compiling postgres 7.2 or 7.2.1 on HPUX 11.11 \nrun ./configure \n=> checking types of arguments for accept()...\nconfigure: error: could not determine argument types \nopen configure file : vi configure \nfind this line : \ncat >> confdefs.h <<EOF \n#define PG_VERSION \"$VERSION\" \nEOF \nand replace it by \ncat >> confdefs.h <<EOF \n#define _XOPEN_SOURCE_EXTENDED \n#define PG_VERSION \"$VERSION\" \nEOF \n\nand run ./configure again \nnow open src/Makefile.global \nfind the line beginning with CFLAGS= \nand add to this line : \n-D_XOPEN_SOURCE_EXTENDED \n\nAfter this step you need to modify 3 files \n->src/backend/libpq/pqformat.c \n->src/interfaces/libpq/fe-misc.c \n->src/interfaces/odbc/socket.c \nto add : \n#include <arpa/inet.h> \n#include <netinet/in.h> \n\nand run gmake... \n\n-------------------------------------------------------\n\nRe: Second call for platform testing\n\nFrom: Joe Conway <joseph.conway@home.com\n<mailto:joseph.conway@home.com>> \nTo: lockhart@fourpalms.org\n<mailto:lockhart@fourpalms.org> \nSubject: Re: Second call for platform testing \nDate: Thu, 29 Nov 2001 12:07:28 -0800 \n\nThomas Lockhart wrote:\n\n>>Are you still looking for HPUX 11.0+ ? I can arrange\nfor access to one\n>>if we still need it (gcc though, I don't have access\nto HP's compiler).\n>>\n> \n> Yes, that would be great. 10.20 is pretty old\nafaik...\n> \n> - Thomas\n> \n> \n\n\nI ran into a problem on HPUX 11 right off with:\n\n===============================\nconfigure --enable-debug\n.\n.\n.\nchecking for struct sockaddr_un... yes\nchecking for int timezone... yes\nchecking types of arguments for accept()... 
configure:\nerror: could not \ndetermine argument types\n\n\n===============================\n\nI won't pretend to be very knowledgable about HPUX or\nconfigure, but it \nlooks like the following in configure is where it\ndies:\n\n===============================\n\nelse\n for ac_cv_func_accept_arg1 in 'int' 'unsigned\nint'; do\n for ac_cv_func_accept_arg2 in 'struct sockaddr\n*' 'const struct \nsockaddr *' 'void *'; do\n for ac_cv_func_accept_arg3 in 'int' 'size_t'\n'socklen_t' \n'unsigned int' 'void'; do\n cat > conftest.$ac_ext <<EOF\n\n=======================================\n\nand here's what the HPUX man page says:\n\n=======================================\naccept(2)\n\n NAME\n accept - accept a connection on a socket\n\n SYNOPSIS\n #include <sys/socket.h>\n\n AF_CCITT only\n #include <x25/x25addrstr.h>\n\n int accept(int s, void *addr, int *addrlen);\n\n _XOPEN_SOURCE_EXTENDED only (UNIX 98)\n int accept(int s, struct sockaddr *addr,\nsocklen_t *addrlen);\n\n=======================================\n\nso it looks like configure expects the third argument\nto be (int), when \non HPUX 11 the third argument is (int *).\n\nAny ideas what I'm doing wrong?\n\n-- Joe\n\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Health - your guide to health and wellness\nhttp://health.yahoo.com\n", "msg_date": "Wed, 1 May 2002 13:16:32 -0700 (PDT)", "msg_from": "Umang Patel <umang87@yahoo.com>", "msg_from_op": true, "msg_subject": "None" } ]
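The configure edit described in the steps above is done by hand in the generated script. The sketch below is only an illustration of the same substitution applied to an in-memory copy of the relevant fragment — it is not part of the actual fix procedure:

```python
# Illustrative only: the real fix hand-edits the generated `configure`
# script.  This shows the same substitution on a text fragment: the
# _XOPEN_SOURCE_EXTENDED define is inserted just before PG_VERSION.
def add_xopen_define(configure_text):
    """Insert #define _XOPEN_SOURCE_EXTENDED ahead of the PG_VERSION define."""
    needle = '#define PG_VERSION'
    return configure_text.replace(
        needle, '#define _XOPEN_SOURCE_EXTENDED\n' + needle, 1)

fragment = 'cat >> confdefs.h <<EOF\n#define PG_VERSION "$VERSION"\nEOF\n'
print(add_xopen_define(fragment))
```

With `_XOPEN_SOURCE_EXTENDED` defined, HP-UX 11's headers expose the UNIX 98 `accept(int, struct sockaddr *, socklen_t *)` prototype that configure's argument-type probe expects.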
[ { "msg_contents": "Hi all,\n\nI've been taking a look at fixing the TODO item:\n\n o Allow INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..)\n\nMy first plan of attack was to replace the current list of ResTargets\nin InsertStmt with a list of lists. The problem with that approach is\nthat:\n\n (a) the InsertStmt is converted to a Query. I could also change Query\n to use a list of lists (instead of a list) for holding TargetEntry\n items, but that would be ugly (since Query is generic, and this\n would only be needed for Inserts)\n\n (b) modifying Query would mean a lot of work (e.g. in the rewriter),\n adapting all the places that expect targetList to be a list to\n instead use a list of lists. Once again, this would be messy.\n\nSo, that seems like a bad idea.\n\nISTM that a better way to do this would be to parse the InsertStmt,\nand then execute an INSERT for every targetList in the query.\nFor example:\n\n INSERT INTO t1 (c1) VALUES (1), (2);\n\n would be executed in a similar fashion to:\n\n INSERT INTO t1 (c1) VALUES (1);\n INSERT INTO t1 (c1) VALUES (2);\n\nDoes this sound reasonable?\n\nAny suggestions would be welcome.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Wed, 1 May 2002 19:16:56 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "insert with multiple targetLists" }, { "msg_contents": "> INSERT INTO t1 (c1) VALUES (1), (2);\n>\n> would be executed in a similar fashion to:\n>\n> INSERT INTO t1 (c1) VALUES (1);\n> INSERT INTO t1 (c1) VALUES (2);\n>\n> Does this sound reasonable?\n\nI debated doing the above too. In fact, I had a partial\nimplementation at one point.\n\nHowever, the resulting purpose of allowing such a construct is to\nenable the speeds copy achieves with the variation that is found in an\ninsert. So, the above transformation method really doesn't accomplish\nmuch except a new style for many inserts. 
But it is quite a bit\neasier simply to code each insert individually if there is a minimal\nspeed gain. Large strings may reach query length limits in other\nsystems using this style (look at a MySQL dump sometime). You're\nreally only good for about 50 or 60 records in a single insert\nstatement there.\n\nI'd tend to run it like a copy that can resolve expressions and\ndefaults.\n\n", "msg_date": "Wed, 1 May 2002 19:51:46 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: insert with multiple targetLists" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I've been taking a look at fixing the TODO item:\n> o Allow INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..)\n> My first plan of attack was to replace the current list of ResTargets\n> in InsertStmt with a list of lists.\n\nIf you look at the SQL spec, they actually consider VALUES to be a\n<table value constructor> which is one of the base cases for <query\nexpression>. Thus for example this is legal SQL (copied and pasted\nstraight from the spec):\n\n CONSTRAINT VIEWS_IS_UPDATABLE_CHECK_OPTION_CHECK\n CHECK ( ( IS_UPDATABLE, CHECK_OPTION ) NOT IN\n ( VALUES ( 'NO', 'CASCADED' ), ( 'NO', 'LOCAL' ) ) )\n\nSo one should really think of INSERT...VALUES as a form of\nINSERT...SELECT rather than a special case of its own. INSERT...SELECT\nis currently extremely klugy (look at the hacks in the rewriter for it)\nso I think you will not get very far until you redesign the querytree\nstructure for INSERT...SELECT.\n\nBTW, all the non-trivial cases for VALUES are Full SQL only, not entry\nor even intermediate level. So I don't see much point in providing a\nhalf-baked implementation. 
We've still got lots left to do to cover\nall of the entry-SQL spec, and IMHO we ought to focus on that stuff\nfirst...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 May 2002 22:58:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: insert with multiple targetLists " }, { "msg_contents": "Rod Taylor wrote:\n> > INSERT INTO t1 (c1) VALUES (1), (2);\n> >\n> > would be executed in a similar fashion to:\n> >\n> > INSERT INTO t1 (c1) VALUES (1);\n> > INSERT INTO t1 (c1) VALUES (2);\n> >\n> > Does this sound reasonable?\n\n\nSounds good to me.\n\n> I debated doing the above too. In fact, I had a partial\n> implementation at one point.\n> \n> However, the resulting purpose of allowing such a construct is to\n> enable the speeds copy achieves with the variation that is found in an\n> insert. ...\n\nI thought the purpose of the item was merely for compatibility with\nother databases that support this syntax. I don't think it will ever\nmatch COPY performance, and I don't think stuffing a huge INSERT into\nthe database rather than COPY rows will ever be a preferred method.\n\nI only see VALUES used by INSERT so if you can think of a clean way to\nmake that work as multiple INSERTs, I think it would be a good idea. \nHopefully, it will be one localized change, and we can remove it if we\never want to support VALUES in more complex situations, as Tom\nmentioned.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 01:53:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: insert with multiple targetLists" } ]
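The per-row rewrite Neil proposes in the thread above — executing one INSERT per targetList — can be sketched outside the backend as a plain string transformation. This is not the parser/rewriter code, and the table and column names are illustrative only:

```python
# Rough sketch (not backend code) of the rewrite Neil describes:
# one multi-row VALUES list is expanded into one INSERT per row.
def expand_insert(table, columns, rows):
    """Rewrite INSERT ... VALUES (r1), (r2), ... as one INSERT per row."""
    col_list = ", ".join(columns)
    stmts = []
    for row in rows:
        vals = ", ".join(str(v) for v in row)
        stmts.append("INSERT INTO %s (%s) VALUES (%s);" % (table, col_list, vals))
    return stmts

# INSERT INTO t1 (c1) VALUES (1), (2);  becomes two single-row inserts:
for stmt in expand_insert("t1", ["c1"], [(1,), (2,)]):
    print(stmt)
```

As the thread notes, this buys only syntax compatibility, not COPY-level speed: each expanded statement still goes through full per-insert processing.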
[ { "msg_contents": "I've got a table, view, and rules as below. The permissions are set up in\nsuch a way that I can use it just fine as myself via psql. When I try to\naccess the data using an ms access interface via odbc, I get the first\nrecord in the view, but any attempts to go to other records cause ms access\nto tell me that they've been deleted (it's lying though, because I can still\nsee them through the psql interface).\n\nI thought I had seen a mention on one of the web pages that said some\ninterfaces don't handle views very well, but can't seem to find that page\nagain (in hopes that it also had some suggestions to get around the\nproblem).\n\nHere are my questions:\n1) Is this a known problem?\n2) Am I doing something wrong?\n3) Are there work arounds?\n\n-ron\n\ncreate table log (\n id serial not null,\n whenentered timestamp,\n username name not null default user,\n class varchar(10),\n entry varchar\n);\n\ncreate view myview as select id,whenentered,class,entry from log where\nusername=user;\n\ncreate rule ins_mylog as on insert to myview do instead insert into log\n(whenentered,class,entry) values (now(), NEW.class, NEW.entry);\n\n-- create the rule that will actually do an update on the right record\ncreate rule upd_mylog as on update to myview do instead update log set\nwhenentered=NEW.whenentered, class=NEW.class, entry=NEW.entry where\nid=OLD.id;\n\n-- create a rule that satisfies postgres' need for completeness\ncreate rule upd_mylog0 as on update to myview do instead nothing;\n\n\ncreate rule del_mylog0 as on delete to myview do instead nothing;\ncreate rule del_mylog as on delete to myview do instead delete from log\nwhere id=OLD.id;\n", "msg_date": "Wed, 1 May 2002 19:11:35 -0700 ", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Using views and MS access via odbc" }, { "msg_contents": "Ron Snyder wrote:\n> \n> I've got a table, view, and rules as below. 
The permissions are set up in\n> such a way that I can use it just fine as myself via psql. When I try to\n> access the data using an ms access interface via odbc, I get the first\n> record in the view, but any attempts to go to other records cause ms access\n> to tell me that they've been deleted (it's lying though, because I can still\n> see them through the psql interface).\n\nAre you using 7.2 ?\nYour settings probably worked well under 7.1 but\ndoesn't in 7.2 due to the following change in\ntcop/postgres.c.\n\n /* \n * It is possible that the original query was removed due to \n * a DO INSTEAD rewrite rule. In that case we will still have \n * the default completion tag, which is fine for most purposes, \n * but it may confuse clients if it's INSERT/UPDATE/DELETE. \n * Clients expect those tags to have counts after them (cf. \n * ProcessQuery). \n */ \n if (strcmp(commandTag, \"INSERT\") == 0) \n commandTag = \"INSERT 0 0\"; \n else if (strcmp(commandTag, \"UPDATE\") == 0) \n commandTag = \"UPDATE 0\"; \n .\n .\n\n * UPDATE 0 * means no tuple was updated.\n \nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Thu, 02 May 2002 14:48:00 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Your settings probably worked well under 7.1 but\n> doesn't in 7.2 due to the following change in\n> tcop/postgres.c.\n\nAFAIR, there is only a visible change of behavior for\nINSERT/UPDATE/DELETE queries, not for SELECTs. 
So I don't think\nthis change explains Ron's complaint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:15:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Your settings probably worked well under 7.1 but\n> > doesn't in 7.2 due to the following change in\n> > tcop/postgres.c.\n> \n> AFAIR, there is only a visible change of behavior for\n> INSERT/UPDATE/DELETE queries, not for SELECTs. So I don't think\n> this change explains Ron's complaint.\n\nFor a view a_view\n\n UPDATE a_view set ... where xxxxx;\nreturns UPDATE 0 in any case in 7.2.\n\nThe psqlodbc driver understands that no row was updated\nand returns the info to the upper application if requested.\nMS access( and I) think there's no such case other than\nthe row was changed or deleted after it was SELECTed.\nNote that MS access doesn't issue any SELECT commands\nto check the optimistic concurrency of the row. The where\nclause of the UPDATE command contains *a_item = old_value*\nfor all items to check the optimisitic concurrency at the\nsame time.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 3 May 2002 07:52:05 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Your settings probably worked well under 7.1 but\n> > doesn't in 7.2 due to the following change in\n> > tcop/postgres.c.\n> \n> AFAIR, there is only a visible change of behavior for\n> INSERT/UPDATE/DELETE queries, not for SELECTs. So I don't think\n> this change explains Ron's complaint.\n\nIf you'd not like to change the behavior, I would change it, OK ? 
\n\nregards,\nHiroshi Inoue\n", "msg_date": "Sat, 4 May 2002 08:45:01 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> If you'd not like to change the behavior, I would change it, OK ? \n\nTo what? I don't want to simply undo the 7.2 change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 20:06:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > If you'd not like to change the behavior, I would change it, OK ? \n> \n> To what? I don't want to simply undo the 7.2 change.\n\nWhat I'm thinking is the following makeshift fix.\nI expect it solves Ron's case though I'm not sure.\nReturning UPDATE 0 seem to make no one happy.\n\nregards,\nHiroshi Inoue\n\n*** postgres.c.orig\tThu Feb 28 08:17:01 2002\n--- postgres.c\tSat May 4 22:53:03 2002\n***************\n*** 805,811 ****\n \t\t\t\t\tif (DebugLvl > 1)\n \t\t\t\t\t\telog(DEBUG, \"ProcessQuery\");\n \n! \t\t\t\t\tif (querytree->originalQuery)\n \t\t\t\t\t{\n \t\t\t\t\t\t/* original stmt can override default tag string */\n \t\t\t\t\t\tProcessQuery(querytree, plan, dest, completionTag);\n--- 805,811 ----\n \t\t\t\t\tif (DebugLvl > 1)\n \t\t\t\t\t\telog(DEBUG, \"ProcessQuery\");\n \n! 
\t\t\t\t\tif (querytree->originalQuery || length(querytree_list) == 1)\n \t\t\t\t\t{\n \t\t\t\t\t\t/* original stmt can override default tag string */\n \t\t\t\t\t\tProcessQuery(querytree, plan, dest, completionTag);\n\n", "msg_date": "Sat, 4 May 2002 23:09:23 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> If you'd not like to change the behavior, I would change it, OK ? \n>> \n>> To what? I don't want to simply undo the 7.2 change.\n\n> What I'm thinking is the following makeshift fix.\n> I expect it solves Ron's case though I'm not sure.\n> Returning UPDATE 0 seem to make no one happy.\n\nAgreed, that doesn't seem like it's going over well. Let's see, you\npropose returning the tag if there is only one replacement query, ie,\nwe had just one DO INSTEAD rule. [ thinks... ] I guess the only thing\nthat bothers me about this is the prospect that the returned tag is\ncompletely different from what the client expects. For example,\nconsider a rule like ON UPDATE DO INSTEAD INSERT INTO history_table...\nWith your patch, this would return an \"INSERT nnn nnn\" tag, which'd\nconfuse a client that expects an \"UPDATE nnn\" response. (This is one\nof the issues that prompted changing the behavior to begin with.)\n\nWould it be reasonable to allow the rewritten query to return a tag\nonly if (a) it's the only query, per your patch AND (b) it's the same\nquery type as the original, unrewritten query?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 11:20:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > If you'd not like to change the behavior, I would change it, OK ? 
\n> >> \n> >> To what? I don't want to simply undo the 7.2 change.\n> \n> > What I'm thinking is the following makeshift fix.\n> > I expect it solves Ron's case though I'm not sure.\n> > Returning UPDATE 0 seem to make no one happy.\n> \n> Agreed, that doesn't seem like it's going over well. Let's see, you\n> propose returning the tag if there is only one replacement query, ie,\n> we had just one DO INSTEAD rule. [ thinks... ] I guess the only thing\n> that bothers me about this is the prospect that the returned tag is\n> completely different from what the client expects. For example,\n> consider a rule like ON UPDATE DO INSTEAD INSERT INTO history_table...\n> With your patch, this would return an \"INSERT nnn nnn\" tag, which'd\n> confuse a client that expects an \"UPDATE nnn\" response. \n\nIs it worse than returning \"UPDATE 0\" ?\nUnfortunately \"UPDATE 0\" never means the result is unknown\nbut clearly means no rows were affected. It can never be safe\nto return \"UPDATE 0\". \n\nregards,\nHiroshi Inoue\n", "msg_date": "Sun, 5 May 2002 08:20:46 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " } ]
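Hiroshi's description above — MS Access checks optimistic concurrency by putting every column's old value into the UPDATE's WHERE clause, so a reported row count of 0 is read as "row was changed or deleted" — can be illustrated with a small sketch. This is a hypothetical reconstruction of the statement shape, not what the psqlodbc driver actually emits:

```python
# Hedged illustration of the Access-style optimistic-concurrency UPDATE
# Hiroshi describes: old values of all columns go into the WHERE clause.
# If a DO INSTEAD rule on a view makes the server report "UPDATE 0",
# Access concludes the row was deleted even though the rule fired.
def access_style_update(table, new_values, old_values):
    """Build the UPDATE shape; table/column names are illustrative."""
    sets = ", ".join("%s = %r" % (c, v) for c, v in new_values.items())
    where = " AND ".join("%s = %r" % (c, v) for c, v in old_values.items())
    return "UPDATE %s SET %s WHERE %s;" % (table, sets, where)

print(access_style_update("myview", {"entry": "new"},
                          {"id": 7, "entry": "old"}))
```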
[ { "msg_contents": "Dear Team,\n\nI've read your comments. No, I don't think that MVD's are the best thing since sliced bread....but they do have a certain \"simplicity\" that seems to hold the key to high speed analysis of large volumes of streaming data from the \"eyes\" of a robot. This stream of data must be quickly analyzed and subdivided into recognizable objects...I realize this area is wide open to debate...however....an MVD might at least offer a \"useful\" theoretical \"angle\" in a \"pre-PostGreSQL\" stage of processing...just a thought...", "msg_date": "Thu, 2 May 2002 00:05:20 -0700", "msg_from": "\"Arthur@LinkLine.com\" <arthur@linkline.com>", "msg_from_op": true, "msg_subject": "mV database tools" } ]
[ { "msg_contents": "Dear Team,\n\nI'm wide open to other ideas for the support of robotic vision through tools already built into PostGreSQL. But you've already admitted to certain speed limitations...and robotic vision is going to require much more intense processing power. An MVD might allow the data stream to be quickly \"re-arranged\" during the recognition processing and then converted to RDBMS form later...after object recognition is \"stabilized\"...\n\nI really would love to see little \"Short Circuits\" running around teaching themselves...ok? Just a dream of mine...", "msg_date": "Thu, 2 May 2002 00:27:56 -0700", "msg_from": "\"Arthur@LinkLine.com\" <arthur@linkline.com>", "msg_from_op": true, "msg_subject": "mV database tools" } ]
[ { "msg_contents": "Surely the real strength of Pick, Unidata et al. is not so much the mv\nfields (which can be relatively easily emulated using array types) but\nthe data dictionaries and the way the same field can be defined in\nmultiple ways (data formats) with different names, or that you can\ncreate pseudo fields that are actually functions or correlatives?\nHowever, these again are things that can be achieved in PostgreSQL, just\nnot in the same way.\n \nOf course, PostgreSQL's array types are multidimensional whereas iirc\nyou are limited to values & subvalues in most Pick-like DBs and even\nthen support for subvalues in 4GLs such as SB+ is limited.\n \nRegards, Dave.\n\n\t-----Original Message-----\n\tFrom: Arthur@LinkLine.com [mailto:arthur@linkline.com] \n\tSent: 01 May 2002 19:37\n\tTo: PostGreSQL Hackers\n\tSubject: mV database tools\n\t\n\t\n\tDear Team,\n\t \n\tI have been monitoring this list for quite some time now and\nhave been studying PostGreSQL for a while. I also did some internet\nresearch on the subject of \"multi valued\" database theory. I know that\nthis is the basis for the \"Pick\" database system, FileMaker Pro, \"D3\",\nand a few other database systems. After carefully reviewing the\ntheoretical arguments in favor of this type of database, I am thoroughly\nconvinced that there are certain advantages to it that will never be\nmatched by a traditional \"relational database\".\n\t \n\tI won't waste your time in reviewing the technical advantages\nhere, because you can do your own research. However, I will say that it\nis obvious to me that an mV database will be an integral part of any\ntruly practical AI robotics system. It will probably be necessary to\n\"marry\" the technologies of both relational databases and mV databases\nin such a system.\n\t \n\tIMHO, this is something that you, as the leaders in the most\nadvanced database system ever developed, should carefully consider. The\nLinux community needs to be aware of the special advantages that an mV\ndatabase offers, the way to interface an mV system with a traditional\nRDBMS, and the potential application theory as it relates to AI systems.\n\t \n\tWe, as a community of leaders in GPL'd software need to make\nsure that this technology is part of the \"knowledge base\" of our\ncommunity. Thanks for listening.\n\t \n\tArthur", "msg_date": "Thu, 2 May 2002 08:47:17 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: mV database tools" } ]
[ { "msg_contents": "\nHello:\n\nI took a look at the SSL code in libpq/fe-misc.c and noticed what I \nthink is a small problem. A patch is included at the bottom of this \nemail against anoncvs TopOfTree this evening.\n\nThe SSL library buffers input data internally. Nowhere in libpq's code \nis this buffer being checked via SSL_pending(), which can lead to a \ncondition where once in a while a socket appears to \"hang\" or \"lag\". \n This is because select() won't see bytes buffered by the library. A \ncondition like this is most likely to occur when the library's read \nbuffer has been filled previously and another read is to be performed. \n If the end of the backend's transmission was less than one SSL frame \npayload away from the last byte returned in the previous read, this will \nlikely hang. Trust me that I learned of this most painfully... \n\nI am looking deeper at how to enable non-blocking SSL sockets in libpq. \n As Tom Lane states, this is primarily a matter of checking SSL error \ncodes, particularly for SSL_WANT_READ and SSL_WANT_WRITE, and reacting \nappropriately. 
I'll see about that as I have more free time.\n\nEven though I'm doing this, I tend to agree with Tom that SSH tunnels \nare a really good way to make the whole SSL problem just go away.\n\nMy quick patch to perform the SSL_pending() checks:\n\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.70\ndiff -r1.70 fe-misc.c\n350a351\n > * -or- if SSL is enabled and used, is it buffering bytes?\n361a363,371\n > /* Check for SSL library buffering read bytes */\n > #ifdef USE_SSL\n > if (conn->ssl && SSL_pending(conn->ssl) > 0)\n > {\n > /* short-circuit the select */\n > return 1;\n > }\n > #endif\n >\n784a795,797\n > * If SSL enabled and used and forRead, buffered bytes short-circuit the\n > * call to select().\n > *\n801a815,823\n >\n > /* Check for SSL library buffering read bytes */\n > #ifdef USE_SSL\n > if (forRead && conn->ssl && SSL_pending(conn->ssl) > 0)\n > {\n > /* short-circuit the select */\n > return 0;\n > }\n > #endif\n\n_Of_course_ I am just fine with this patch being under a Berkeley-style \nlicense and included in PostgreSQL.\n\nCheers.\n\n-- \n\nJack Bates\nPortland, OR, USA\nhttp://www.floatingdoghead.net\n\nGot privacy?\nMy PGP key: http://www.floatingdoghead.net/pubkey.txt\n\n\n", "msg_date": "Thu, 02 May 2002 00:59:29 -0700", "msg_from": "Jack Bates <pgsql@floatingdoghead.net>", "msg_from_op": true, "msg_subject": "PATCH SSL_pending() checks in libpq/fe-misc.c" }, { "msg_contents": "\nWould you send over a context diff, diff -c?\n\n\n---------------------------------------------------------------------------\n\nJack Bates wrote:\n> \n> Hello:\n> \n> I took a look at the SSL code in libpq/fe-misc.c and noticed what I \n> think is a small problem. A patch is included at the bottom of this \n> email against anoncvs TopOfTree this evening.\n> \n> The SSL library buffers input data internally. 
Nowhere in libpq's code \n> is this buffer being checked via SSL_pending(), which can lead to a \n> condition where once in a while a socket appears to \"hang\" or \"lag\". \n> This is because select() won't see bytes buffered by the library. A \n> condition like this is most likely to occur when the library's read \n> buffer has been filled previously and another read is to be performed. \n> If the end of the backend's transmission was less than one SSL frame \n> payload away from the last byte returned in the previous read, this will \n> likely hang. Trust me that I learned of this most painfully... \n> \n> I am looking deeper at how to enable non-blocking SSL sockets in libpq. \n> As Tom Lane states, this is primarily a matter of checking SSL error \n> codes, particularly for SSL_WANT_READ and SSL_WANT_WRITE, and reacting \n> appropriately. I'll see about that as I have more free time.\n> \n> Even though I'm doing this, I tend to agree with Tom that SSH tunnels \n> are a really good way to make the whole SSL problem just go away.\n> \n> My quick patch to perform the SSL_pending() checks:\n> \n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.70\n> diff -r1.70 fe-misc.c\n> 350a351\n> > * -or- if SSL is enabled and used, is it buffering bytes?\n> 361a363,371\n> > /* Check for SSL library buffering read bytes */\n> > #ifdef USE_SSL\n> > if (conn->ssl && SSL_pending(conn->ssl) > 0)\n> > {\n> > /* short-circuit the select */\n> > return 1;\n> > }\n> > #endif\n> >\n> 784a795,797\n> > * If SSL enabled and used and forRead, buffered bytes short-circuit the\n> > * call to select().\n> > *\n> 801a815,823\n> >\n> > /* Check for SSL library buffering read bytes */\n> > #ifdef USE_SSL\n> > if (forRead && conn->ssl && SSL_pending(conn->ssl) > 0)\n> > {\n> > /* short-circuit the select */\n> > return 0;\n> > }\n> > #endif\n> \n> _Of_course_ I am just fine with 
this patch being under a Berkeley-style \n> license and included in PostgreSQL.\n> \n> Cheers.\n> \n> -- \n> \n> Jack Bates\n> Portland, OR, USA\n> http://www.floatingdoghead.net\n> \n> Got privacy?\n> My PGP key: http://www.floatingdoghead.net/pubkey.txt\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 18:39:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH SSL_pending() checks in libpq/fe-misc.c" }, { "msg_contents": "Bruce:\n\nI've attached the original source file that I modified, my modified\nversion, and the output from 'diff -c fe-misc.c fe-misc.c.jack'.\n\nLack of CVS tags makes this the best way for me to get this to you.\n\nLet me know if you need anything else.\n\nI am no longer pursuing a total non-blocking implementation. I haven't\nfound a good way to test it with the type of work that I do with\nPostgreSQL. I do use blocking SSL sockets with this mod and have had no\nproblem whatsoever. 
The bug that I fixed in this patch is exceptionally\nhard to reproduce reliably.\n\nTHANKS AGAIN!!!\n\n-- \n\nJack Bates\nPortland, OR, USA\nhttp://www.floatingdoghead.net\n\nGot privacy?\nMy PGP key: http://www.floatingdoghead.net/pubkey.txt", "msg_date": "Thu, 13 Jun 2002 14:41:17 -0700 (PDT)", "msg_from": "<jack@floatingdoghead.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PATCH SSL_pending() checks in libpq/fe-misc.c" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\njack@floatingdoghead.net wrote:\n> \n> Bruce:\n> \n> I've attached the original source file that I modified, my modified\n> version, and the output from 'diff -c fe-misc.c fe-misc.c.jack'.\n> \n> Lack of CVS tags makes this the best way for me to get this to you.\n> \n> Let me know if you need anything else.\n> \n> I am no longer pursuing a total non-blocking implementation. I haven't\n> found a good way to test it with the type of work that I do with\n> PostgreSQL. I do use blocking SSL sockets with this mod and have had no\n> problem whatsoever. The bug that I fixed in this patch is exceptionally\n> hard to reproduce reliably.\n> \n> THANKS AGAIN!!!\n> \n> -- \n> \n> Jack Bates\n> Portland, OR, USA\n> http://www.floatingdoghead.net\n> \n> Got privacy?\n> My PGP key: http://www.floatingdoghead.net/pubkey.txt\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 14 Jun 2002 00:40:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PATCH SSL_pending() checks in libpq/fe-misc.c" } ]
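The "hang" Jack describes comes from select() watching only the raw socket while OpenSSL may already be holding decrypted bytes in its internal buffer. A toy model of the patch's short-circuit logic (all names below are invented stand-ins for illustration, not the real libpq or OpenSSL API):

```c
#include <assert.h>

/* Toy stand-in for a connection whose transport layer buffers
 * decrypted data internally, as OpenSSL does.  Neither this struct
 * nor the function names below come from libpq. */
typedef struct
{
    int socket_readable;  /* what select() would report on the raw fd */
    int buffered_bytes;   /* decrypted bytes already held by the library */
} toy_conn;

/* Without the patch: only the raw socket is consulted, so bytes that
 * are already decrypted and buffered are invisible and the caller
 * blocks -- the "hang" reported in the thread. */
static int
readable_select_only(const toy_conn *conn)
{
    return conn->socket_readable;
}

/* With the patch's logic: buffered bytes short-circuit the select(),
 * mirroring the SSL_pending(conn->ssl) > 0 check in the diff above. */
static int
readable_with_pending_check(const toy_conn *conn)
{
    if (conn->buffered_bytes > 0)
        return 1;             /* short-circuit the select */
    return conn->socket_readable;
}
```

The real fix, as the diff shows, is simply to consult SSL_pending() before falling through to select().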
[ { "msg_contents": "There is a report from a debian user about a vulnerability in\nPostgreSQL pre 7.2. Here is a possible attack scenario which allows to\nexecute ANY SQL in PostgreSQL.\n\nA web application accepts an input as a part of SELECT qualification\nclause. With the user input, the web server program would build a\nquery for example:\n\nSELECT * FROM t1 WHERE foo = 'input_string_from_user'\n\nOf course above method is too simple, since a user could input a\nstring such as:\n\nfoo'; DROP TABLE t1\n\nTo prevent the unwanted SQL statement being executed, the usual method\nmost applications are taking is quoting ' by \\. With this, above\nstring would be turned into:\n\nfoo\\'; DROP TABLE t1\n\nwhich would make it impossible to execute the DROP TABLE statement.\nFor example in PHP, addslashes() function does the job.\n\nNow, suppose the database encoding is set to SQL_ASCII and the client\nencoding is, say, LATIN1 and \"foo\" in above string is a latin\ncharacter which cannot be converted to ASCII. In this case, PostgreSQL\nwould produce something like:\n\n(0x81a2)\\'; DROP TABLE t1\n\nUnfortunately there was a bug in pre 7.2's multibyte support that\nwould eat the next character after the\nimpossible-to-convert-character, and would produce:\n\n(0x81a2)'; DROP TABLE t1\n\n(notice that \\ before ' is disappeared)\n\nIn this case actual query sent to the backend is:\n\nSELECT * FROM t1 WHERE foo = '(0x81a2)'; DROP TABLE t1'\n\nThe last ' will cause an SQL error which prevents the DROP TABLE\nstatement from being executed, except for 6.5.x. (correct me if I am\nwrong)\n\nHere are the precise conditions to trigger the scenario:\n\n(1) the backend is PostgreSQL 6.5.x\n(2) multibyte support is enabled (--enable-multibyte)\n(3) the database encoding is SQL_ASCII (other encodings are not\n affected by the bug). \n(4) the client encoding is set to other than SQL_ASCII\n\nI think I am responsible for this since I originally wrote the\ncode. Sorry for this. 
I'm going to make back port patches to fix the\nproblem for pre 7.2 versions.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 02 May 2002 17:18:30 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "a vulnerability in PostgreSQL" }, { "msg_contents": "> Not tested: but how about the string being\n> foo'; DROP TABLE T1; foo\n> \n> Would the last ' be eaten up then resulting in no error?\n\nEven the last ' is eaten up, the remaining string is (81a2), which\nwould cause parser errors since they are not valid SQL, I think.\n\n> Also normally a \\ would be quoted by \\\\ right? Would a foo\\ result in an \n> unquoted \\ ? An unquoted backslash may allow some possibilities.\n> \n> There could be other ways to get rid of the last ', comments etc, so it may \n> not be just 6.5.x.\n\nPlease provide concrete examples. I could not find such that case.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 02 May 2002 17:50:46 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Not tested: but how about the string being\nfoo'; DROP TABLE T1; foo\n\nWould the last ' be eaten up then resulting in no error?\n\nAlso normally a \\ would be quoted by \\\\ right? Would a foo\\ result in an \nunquoted \\ ? An unquoted backslash may allow some possibilities.\n\nThere could be other ways to get rid of the last ', comments etc, so it may \nnot be just 6.5.x.\n\nRegards,\nLink.\n\nAt 05:18 PM 5/2/02 +0900, Tatsuo Ishii wrote:\n>There is a report from a debian user about a vulnerability in\n>PostgreSQL pre 7.2. Here is a possible attack scenario which allows to\n>execute ANY SQL in PostgreSQL.\n>\n>A web application accepts an input as a part of SELECT qualification\n>clause. 
With the user input, the web server program would build a\n>query for example:\n>\n>SELECT * FROM t1 WHERE foo = 'input_string_from_user'\n>\n>Of course above method is too simple, since a user could input a\n>string such as:\n>\n>foo'; DROP TABLE t1\n>\n>To prevent the unwanted SQL statement being executed, the usual method\n>most applications are taking is quoting ' by \\. With this, above\n>string would be turned into:\n>\n>foo\\'; DROP TABLE t1\n>\n>which would make it impossible to execute the DROP TABLE statement.\n>For example in PHP, addslashes() function does the job.\n>\n>Now, suppose the database encoding is set to SQL_ASCII and the client\n>encoding is, say, LATIN1 and \"foo\" in above string is a latin\n>character which cannot be converted to ASCII. In this case, PostgreSQL\n>would produce something like:\n>\n>(0x81a2)\\'; DROP TABLE t1\n>\n>Unfortunately there was a bug in pre 7.2's multibyte support that\n>would eat the next character after the\n>impossible-to-convert-character, and would produce:\n>\n>(0x81a2)'; DROP TABLE t1\n>\n>(notice that \\ before ' is disappeared)\n\n\n", "msg_date": "Thu, 02 May 2002 16:51:15 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Oops. How about:\n\nfoo'; DROP TABLE t1; -- foo\n\nThe last ' gets removed, leaving -- (81a2).\n\nSo you get:\nselect ... '(0x81a2)'; DROP TABLE t1; -- (0x81a2)\n\nWould that work? Or do you need to put a semicolon after the --?\n\nAlternatively would select (0x81a2) be a syntax error? If it isn't then \nthat's another way to terminate it properly.\n\nAs for the backslash, how does postgresql treat \\000 and other naughty \ncodes? 
Too bad there are too many characters to backspace over - that is if \nbackspacing (\\b) over commands works in the first place ;)...\n\nI'll let you know if I think of other ways (I'm sure there are - I probably \nhave to go through the postgresql syntax and commands more closely). Got to \ngo :).\n\nCheerio,\nLink.\n\nAt 05:50 PM 5/2/02 +0900, Tatsuo Ishii wrote:\n> > Not tested: but how about the string being\n> > foo'; DROP TABLE T1; foo\n> >\n> > Would the last ' be eaten up then resulting in no error?\n>\n>Even the last ' is eaten up, the remaining string is (81a2), which\n>would cause parser errors since they are not valid SQL, I think.\n>\n> > Also normally a \\ would be quoted by \\\\ right? Would a foo\\ result in an\n> > unquoted \\ ? An unquoted backslash may allow some possibilities.\n> >\n> > There could be other ways to get rid of the last ', comments etc, so it \n> may\n> > not be just 6.5.x.\n>\n>Please provide concrete examples. I could not find such that case.\n>--\n>Tatsuo Ishii\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "Thu, 02 May 2002 19:17:28 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "> Oops. How about:\n> \n> foo'; DROP TABLE t1; -- foo\n> \n> The last ' gets removed, leaving -- (81a2).\n> \n> So you get:\n> select ... '(0x81a2)'; DROP TABLE t1; -- (0x81a2)\n\nThis surely works:-< Ok, you gave me an enough example that shows even\n7.1.x and 7.0.x are not safe.\n\nIncluded are patches for 7.1.3. 
Patches for 7.0.3 and 6.5.3 will be\nposted soon.", "msg_date": "Thu, 02 May 2002 22:37:19 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Here are the precise conditions to trigger the scenario:\n\n> (1) the backend is PostgreSQL 6.5.x\n> (2) multibyte support is enabled (--enable-multibyte)\n> (3) the database encoding is SQL_ASCII (other encodings are not\n> affected by the bug). \n> (4) the client encoding is set to other than SQL_ASCII\n\n> I think I am responsible for this since I originally wrote the\n> code. Sorry for this. I'm going to make back port patches to fix the\n> problem for pre 7.2 versions.\n\nIt doesn't really seem worth the trouble to make patches for 6.5.x.\nIf someone hasn't upgraded yet, they aren't likely to install patches\neither. (ISTR there are other known security risks in 6.5, anyway.)\nIf the problem is fixed in 7.0 and later, why not just tell people to\nupgrade?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:23:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL " }, { "msg_contents": "I hope you won't make this standard practice. Because there are quite \nsignificant differences that make upgrading from 7.1.x to 7.2 troublesome. \nI can't name them offhand but they've appeared on the list from time to time.\n\nFor 6.5.x to 7.1.x I believe there are smaller differences, even so there \nmight be people who would patch for security/bug issues but not upgrade. \nI'm still on Windows 95 for instance (Microsoft has stopped supporting it \ntho :( ). I think there are still lots of people on Oracle 7.\n\nYes support of older software is a pain. But the silver lining is: it's \nopen source they can feasibly patch it themselves if they are really hard \npressed. 
If the bug report is descriptive enough DIY might not be so bad. \nAnd just think of it as people really liking your work :).\n\nAny idea which versions of Postgresql have been bundled with O/S CDs?\n\nRegards,\nLink.\n\nAt 10:23 AM 5/2/02 -0400, Tom Lane wrote:\n>Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Here are the precise conditions to trigger the scenario:\n>\n> > (1) the backend is PostgreSQL 6.5.x\n> > (2) multibyte support is enabled (--enable-multibyte)\n> > (3) the database encoding is SQL_ASCII (other encodings are not\n> > affected by the bug).\n> > (4) the client encoding is set to other than SQL_ASCII\n>\n> > I think I am responsible for this since I originally wrote the\n> > code. Sorry for this. I'm going to make back port patches to fix the\n> > problem for pre 7.2 versions.\n>\n>It doesn't really seem worth the trouble to make patches for 6.5.x.\n>If someone hasn't upgraded yet, they aren't likely to install patches\n>either. (ISTR there are other known security risks in 6.5, anyway.)\n>If the problem is fixed in 7.0 and later, why not just tell people to\n>upgrade?\n>\n> regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Fri, 03 May 2002 11:43:31 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL " }, { "msg_contents": "Or else people in our situation where it takes forever to upgrade the \nsoftware because of its heavy use and the risk involved in upgrading, not\nto mention the problems encountered when we did test-runs of the upgrade.\n\nThen there is always the thorny issue of loads of software that uses the\ndatabases, most of it under user control and any incompatibility between \nversions, \nno matter how small, could have horrible implications for our clients and \ntherefore, us.\n\nYou see, we are an ISP and a consultancy specialising in 
database-driven\nweb sites and corporate infrastructure. We based nearly everything that \nwe\ndo on PostgreSQL and although we upgrade when we can, our hands are\nvery tied. For us, patching is a necessity, not an option. To migrate\na client means rebuilding an entire UAT (user acceptance test site) for\nthem, extensively testing what we can ourselves, then asking the \nclient to allocate time, money and people to test their systems and their\nown code as well. They will also need to allocate development time, money\nand people to fix any problems that they find in compatibility. \n\nThen there is the thorny issue of both companies needing to synchronise\ntheir resources and schedules so that we can work together solving any\nproblems that arise.\n\nFinally, because we have no control over the customer's quality control, \nand\nbecause customers very often don't have the inhouse expertise to even \nunderstand\nwhat a proper test is about (they most often have hired in expensive \nconsultants\nor contracted other companies to do the work), we have no guarantees that \nwhen the
PG is extremely well suited for this \nand\nthe benefits over closed-source systems is enormous, not to mention the \nfact\nthat PG has email lists like this one with all the fine brains on this \nlist\npooled together. It shows in the code and it shows in the satisfaction \nlevel\nof those who use it (I have never once had a client who was dissatisfied \nwith\nPG. Most clients were surprised that OpenSource existed, that it is free \nand\nthat it is such great quality without any catches or got-yous that \nnormally\ncomes with \"free\" things from commercial companies.).\n\nWell, that's my hat in the ring. Hope that it helps someone out there or \nat\nleast adds something to our pooled knowledge!\n\nBrad\nKieser.net\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 5/3/02, 4:43:31 AM, Lincoln Yeoh <lyeoh@pop.jaring.my> wrote regarding \nRe: [HACKERS] a vulnerability in PostgreSQL :\n\n\n> I hope you won't make this standard practice. Because there are quite\n> significant differences that make upgrading from 7.1.x to 7.2 \ntroublesome.\n> I can't name them offhand but they've appeared on the list from time to \ntime.\n\n> For 6.5.x to 7.1.x I believe there are smaller differences, even so there\n> might be people who would patch for security/bug issues but not upgrade.\n> I'm still on Windows 95 for instance (Microsoft has stopped supporting it\n> tho :( ). I think there are still lots of people on Oracle 7.\n\n> Yes support of older software is a pain. But the silver lining is: it's\n> open source they can feasibly patch it themselves if they are really hard\n> pressed. 
If the bug report is descriptive enough DIY might not be so bad.\n> And just think of it as people really liking your work :).\n\n> Any idea which versions of Postgresql have been bundled with O/S CDs?\n\n> Regards,\n> Link.\n\n> At 10:23 AM 5/2/02 -0400, Tom Lane wrote:\n> >Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > > Here are the precise conditions to trigger the scenario:\n> >\n> > > (1) the backend is PostgreSQL 6.5.x\n> > > (2) multibyte support is enabled (--enable-multibyte)\n> > > (3) the database encoding is SQL_ASCII (other encodings are not\n> > > affected by the bug).\n> > > (4) the client encoding is set to other than SQL_ASCII\n> >\n> > > I think I am responsible for this since I originally wrote the\n> > > code. Sorry for this. I'm going to make back port patches to fix the\n> > > problem for pre 7.2 versions.\n> >\n> >It doesn't really seem worth the trouble to make patches for 6.5.x.\n> >If someone hasn't upgraded yet, they aren't likely to install patches\n> >either. 
(ISTR there are other known security risks in 6.5, anyway.)\n> >If the problem is fixed in 7.0 and later, why not just tell people to\n> >upgrade?\n> >\n> > regards, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 4: Don't 'kill -9' the postmaster\n\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Fri, 03 May 2002 08:11:16 GMT", "msg_from": "Bradley Kieser <brad@kieser.net>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL " }, { "msg_contents": "On Thursday 02 May 2002 11:43 pm, Lincoln Yeoh wrote:\n> Any idea which versions of Postgresql have been bundled with O/S CDs?\n\nFor RedHat:\n5.0\t-> PG6.2.1\n5.1\t-> PG6.3.2\n5.2\t-> PG6.3.2\n6.0\t-> PG6.4.2\n6.1\t-> PG6.5.2 (I think -- this was my first RPMset in Red Hat Linux, but I'm \nnot 100% sure it was 6.5.2 -- it might have been 6.5.3)\n6.2\t-> PG6.5.3\n7.0\t-> PG7.0.2\n7.1\t-> PG7.0.3\n7.2\t-> PG7.1.3\n7.2.93 > PG7.2.1\n\nRed Hat 7.2 is the current official Red Hat, and _currently_ ships with 7.1.3. \nIf this bug applies there, it should be backpatched, and I would be willing \nto roll another 7.1.3 RPM with the backpatch in it.\n\nPrior to that -- well, I don't have any machines running those versions any \nmore. I stay pretty much on the frontline of things -- not the bleeding edge \nof RawHide, but close. I have had the 7.2.93 beta installed, for instance. \nI'm even going to get out of the Red Hat 6.2 on SPARC business at some point, \nby going to the Aurora version (current Red Hat version ported to SPARC). \n6.2 is just old, and iptables on the 2.4 kernel is just too useful.\n\nI guess I _could_ reinstall an OS to provide a security patch -- but methinks \nRed Hat would do that as an errata instead. 
If a patch can be worked up, it \nshould be passed through those channels. Unless we want to consider rolling \n6.5.4, 7.0.4, and 7.1.4 security bugfix releases.\n\nOf course, this is open source, and there's nothing preventing a third party \nfrom forking off and releasing a 6.5.4 bugfix release. But I wouldn't count \non getting core developers interested in it -- the bug is fixed in the \ncurrent version, and their time is far better spent on fixing bugs and \ndeveloping new features in the current version. \n\nAnd I'm sure that if someone wanted to volunteer to provide a patchset for \neach affected version, Bruce might just apply them, and you might talk Marc \ninto rolling them up. But good luck doing so. Then I'd be happy building \nRPMs out of them -- on my current box. You would then have to rebuild \nthe RPMs for your box from my src.rpm.\n\n'Upgrade to the next version' is not a good answer, either, particularly since \nwe don't have a true upgrade path, and the problems that dump/restore \nreinstalls have brought to light.\n\nIn a similar vein, due to some baroque dependencies, I still have a client \nrunning RedHat 5.2 in production. Not pretty to support. Still at 6.5.3, \ntoo.\n\nWe need a better upgrade path, but that's a different discussion.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 3 May 2002 13:32:53 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Here are the precise conditions to trigger the scenario:\n> \n> > (1) the backend is PostgreSQL 6.5.x\n> > (2) multibyte support is enabled (--enable-multibyte)\n> > (3) the database encoding is SQL_ASCII (other encodings are not\n> > affected by the bug). 
\n> > (4) the client encoding is set to other than SQL_ASCII\n> \n> > I think I am responsible for this since I originally wrote the\n> > code. Sorry for this. I'm going to make back port patches to fix the\n> > problem for pre 7.2 versions.\n> \n> It doesn't really seem worth the trouble to make patches for 6.5.x.\n> If someone hasn't upgraded yet, they aren't likely to install patches\n> either. (ISTR there are other known security risks in 6.5, anyway.)\n> If the problem is fixed in 7.0 and later, why not just tell people to\n> upgrade?\n\nPostgresql doesn't support upgrades[1], so if we're going to release\nupgrades[2], we'd need the backported fixes for 6.5, 7.0 and 7.1 \n\n[1] Not the first time I mention this, is it?\n[2] We got lucky - 6.5.x is not compiled with multibyte support.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "03 May 2002 15:50:37 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "> > Oops. How about:\n> > \n> > foo'; DROP TABLE t1; -- foo\n> > \n> > The last ' gets removed, leaving -- (81a2).\n> > \n> > So you get:\n> > select ... '(0x81a2)'; DROP TABLE t1; -- (0x81a2)\n> \n> This surely works:-< Ok, you gave me an enough example that shows even\n> 7.1.x and 7.0.x are not safe.\n> \n> Included are patches for 7.1.3. Patches for 7.0.3 and 6.5.3 will be\n> posted soon.\n\nIncluded are patches for 7.0.3 and 6.5.3 I promised.\n\nBTW,\n\n>I hope you won't make this standard practice. Because there are quite \n>significant differences that make upgrading from 7.1.x to 7.2 troublesome. \n>I can't name them offhand but they've appeared on the list from time to time.\n\nI tend to agree above but am not sure making backport patches are\ncore's job. I have been providing patches for PostgreSQL for years in\nJapan, and people there seem to be welcome such kind of\nservices. 
However, supporting previous versions is not a trivial job\nand I don't want core members to spend their valuable time for that\nkind of job, since making backport patches could be done by anyone who\nare familiar with PostgreSQL.\n--\nTatsuo Ishii", "msg_date": "Sat, 04 May 2002 08:56:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Trond Eivind Glomsrød wrote:\n> Postgresql doesn't support upgrades[1], so if we're going to release\n> upgrades[2], we'd need the backported fixes for 6.5, 7.0 and 7.1 \n> \n> [1] Not the first time I mention this, is it?\n\nThere is now /contrib/pg_upgrade. It has all the things I can think of\nfor upgrading. Hopefully it can be tested extensively for 7.3 and fully\nsupported.\n\nIf people don't like that it is a shell script, it can be rewritten in\nanother language, but the basic steps it takes will have to be done no\nmatter what language it is written in.\n\nHowever, as I have warned before, a major change from 7.2 to 7.3 could\nmake it unusable.  My point is that it isn't that I haven't tried to\nmake an upgrade script --- the problem is that making one sometimes is\nimpossible.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 02:21:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "Tatsuo Ishii wrote:\n> >I hope you won't make this standard practice. Because there are quite \n> >significant differences that make upgrading from 7.1.x to 7.2 troublesome. \n> >I can't name them offhand but they've appeared on the list from time to time.\n> \n> I tend to agree above but am not sure making backport patches are\n> core's job. 
I have been providing patches for PostgreSQL for years in\n> Japan, and people there seem to be welcome such kind of\n> services. However, supporting previous versions is not a trivial job\n> and I don't want core members to spend their valuable time for that\n> kind of job, since making backport patches could be done by anyone who\n> are familiar with PostgreSQL.\n\nYes, I know SRA and Red Hat provide extensive backpatches for older\nversions; it is a nice value-add for their support customers. Not\nsure about other PostgreSQL support companies.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 02:23:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "\nDo we need to do any more work to document this problem?\n\n---------------------------------------------------------------------------\n\nTatsuo Ishii wrote:\n> > Oops. How about:\n> > \n> > foo'; DROP TABLE t1; -- foo\n> > \n> > The last ' gets removed, leaving -- (81a2).\n> > \n> > So you get:\n> > select ... '(0x81a2)'; DROP TABLE t1; -- (0x81a2)\n> \n> This surely works:-< Ok, you gave me an enough example that shows even\n> 7.1.x and 7.0.x are not safe.\n> \n> Included are patches for 7.1.3. Patches for 7.0.3 and 6.5.3 will be\n> posted soon.\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jun 2002 14:11:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: a vulnerability in PostgreSQL" }, { "msg_contents": "> Do we need to do any more work to document this problem?\n\nBetter documentation will be welcome. However, which document?\n--\nTatsuo Ishii\n\n> ---------------------------------------------------------------------------\n> \n> Tatsuo Ishii wrote:\n> > > Oops. How about:\n> > > \n> > > foo'; DROP TABLE t1; -- foo\n> > > \n> > > The last ' gets removed, leaving -- (81a2).\n> > > \n> > > So you get:\n> > > select ... '(0x81a2)'; DROP TABLE t1; -- (0x81a2)\n> > \n> > This surely works:-< Ok, you gave me an enough example that shows even\n> > 7.1.x and 7.0.x are not safe.\n> > \n> > Included are patches for 7.1.3. Patches for 7.0.3 and 6.5.3 will be\n> > posted soon.\n> \n> [ Attachment, skipping... ]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -- \n>   Bruce Momjian                        |  http://candle.pha.pa.us\n>   pgman@candle.pha.pa.us               |  (610) 853-3000\n>   +  If your life is a hard drive,     |  830 Blythe Avenue\n>   +  Christ can be your backup.        
| Drexel Hill, Pennsylvania 19026\n> \n", "msg_date": "Thu, 13 Jun 2002 10:10:45 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: a vulnerability in PostgreSQL" } ]
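The quote-swallowing attack discussed in the thread above can be illustrated outside PostgreSQL. The following is a toy Python model — not PostgreSQL's actual lexer, and both function names are invented for this sketch — of a scanner that assumes any byte >= 0x80 starts a two-byte character without validating the trail byte. A lone lead byte then consumes the first of the doubled quotes, so the second quote closes the literal early and the attacker's text escapes it:

```python
def escape(value: bytes) -> bytes:
    # Client-side escaping that only doubles single quotes (the common
    # practice at the time); it does not validate multibyte sequences.
    return value.replace(b"'", b"''")

def find_literal_end(query: bytes, start: int) -> int:
    # Toy model of a multibyte-aware scanner: any byte >= 0x80 is
    # assumed to be the lead byte of a two-byte character, and the
    # following byte is skipped *unvalidated*.
    i = start
    while i < len(query):
        if query[i] >= 0x80:
            i += 2              # blindly swallow the "trail byte"
        elif query[i] == 0x27:  # a single quote
            return i            # closing quote of the literal
        else:
            i += 1
    return -1

# Attacker input ends its injected quote with a bare SJIS-style lead byte.
user_input = b"\x81'; DROP TABLE t1; --"
query = b"SELECT * FROM t WHERE name = '" + escape(user_input) + b"'"

open_quote = query.index(b"'")
end = find_literal_end(query, open_quote + 1)
leaked = query[end + 1:]
# The 0x81 byte consumed the first of the doubled quotes, so the second
# quote closed the literal early: leaked == b"; DROP TABLE t1; --'"
```

The orphaned final quote is hidden behind the `--` comment, which matches the "the last ' gets removed" behavior described in the thread; the real fix (validating or escaping incomplete multibyte sequences server-side) is what the posted patches implement.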
[ { "msg_contents": "\nHi,\n\nOn 7.2.1 debian-unstable PG hangs when trying to drop a table which\ncontains a field referencing another field in the same table as a\nforeign key. \n\nIs it legal/orthodox to use a \"references\" on another field of the same\ntable?\n \nStrangely after restarting PG the drop succeeds without hanging.\n\n-- \n OENONE: Vous aimez. On ne peut vaincre sa destinée.\n         Par un charme fatal vous fûtes entraînée.\n                         (Phèdre, J-B Racine, acte 4, scène 6)\n", "msg_date": "Thu, 2 May 2002 14:38:32 +0200", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": true, "msg_subject": "DROP TABLE hangs because of same table foreign key" }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> On 7.2.1 debian-unstable PG hangs when trying to drop a table which\n> contains a field referencing another field in the same table as a\n> foreign key. \n\nI'm a little confused. Could you post a complete example?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 10:33:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DROP TABLE hangs because of same table foreign key " }, { "msg_contents": "On Thu, 2 May 2002, Louis-David Mitterrand wrote:\n\n>\n> Hi,\n>\n> On 7.2.1 debian-unstable PG hangs when trying to drop a table which\n> contains a field referencing another field in the same table as a\n> foreign key.\n>\n> Is it legal/orthodox to use a \"references\" on another field of the same\n> table?\n\nShould be.\nWere there any other transactions open at the time? Given it went away\nafter restarting, I'd first guess that something else might have a lock\non the table.\n\n", "msg_date": "Thu, 2 May 2002 08:32:57 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: DROP TABLE hangs because of same table foreign key" } ]
[ { "msg_contents": "I have postgres 7.0 working on a red hat 7.1 installation, but now need to\nupgrade to postgres 7.2.1 to have the benefit of non-locking vacuuming.\n\nIs it possible to install postgresql 7.2.1 on to a red hat 7.1 installation?\nI've been trying but it complains that it can't find dependencies\nlibssl.so.2, libcrypto.so.so.2 and libreadline.so.4.\nI've copied them off a redhat 7.2 machine but it still can't find them!\n(Also, I can't just go to Red Hat 7.2 as that version of X doesn't support\nmy graphics hardware)\n\n\nOn a further note, I've noticed that when inserting multiple records into a\ndatabase using the copy command via libpq++, any notifies on the tables are\nnot processed. Is this true and if so is it fixed in a later version.\n\nCheers\nSteve\n\n\n", "msg_date": "Thu, 2 May 2002 17:04:20 +0100", "msg_from": "Steve King <steve.king@ecmsys.co.uk>", "msg_from_op": true, "msg_subject": "postgres 7.2.1 on redhat 7.1" }, { "msg_contents": "On Thursday 02 May 2002 12:04 pm, Steve King wrote:\n> Is it possible to install postgresql 7.2.1 on to a red hat 7.1\n> installation?\n\nYes. Please read README.rpm-dist in /usr/share/doc/postgresql-7.2.1 for \ndetails on how to rebuild from the source RPM. You can then install the RPMs \nyou just built.\n\nThere was another fellow built the RPMset on RH 7.1 a week or so ago, and he \nsaid the rebuild worked just fine.\n\nAs I don't have a RH 7.1 machine to build on, this is the best I can do. \nSorry.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 2 May 2002 15:56:42 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: postgres 7.2.1 on redhat 7.1" } ]
[ { "msg_contents": "Hi,\n\nhaving been chased away from pgsql-novice by Rasmus Mohr, I come here\nto try my luck :-) I'm still new to this; so please be patient, if I\nask silly questions.\n\nThere has been a discussion recently about saving some bytes per tuple\nheader. Well, I have the suspicion, we can eliminate 4 bytes more by\nusing one single field for t_cmin and t_cmax. Here are my thoughts\nsupporting this feeling:\n\n(1) The purpose of the command ids is to prevent a command from\nworking on the same tuple more than once. 2002-04-20 Tom Lane wrote\nin \"On-disk Tuple Size\":\n> You don't want an individual command's effects to become visible\n> until you do CommandCounterIncrement.\nand\n> The command IDs aren't interesting anymore once the originating\n> transaction is over, but I don't see a realistic way to recycle the\n> space ...\n\n(2) The command ids are of any meaning (2a) only during the active\ntransaction (with the exception of the (ab)use of t_cmin by vacuum);\nand (2b) we are only interested in whether t_cxxx is less than the\ncurrent command id, if it is, we don't care about the exact value.\n\n(3) From a command's view a tuple can be in one of these states:\n\n(3a) Neither t_xmin nor t_xmax is the current transaction. The tuple\nhas been neither inserted nor deleted (and thus not updated) by this\ntransaction, and the command ids are irrelevant.\n\n(3b) t_xmin is the current transaction, t_xmax is\nInvalidTransactionId; i.e. the tuple has been inserted by the current\ntransaction and it has not been deleted (or replaced). In this case\nt_cmin identifies the command having inserted the tuple, and t_cmax is\nirrelevant.\n\n(3c) t_xmin is some other transaction, t_xmax is the current\ntransaction; i.e. the current transaction has deleted the tuple.\nThen t_cmax identifies the command having deleted the tuple, t_cmin is\nirrelevant.\n\n(3d) t_xmin == t_xmax == current transaction. 
The tuple has been\ninserted and then deleted by the current transaction. Then I claim\n(but I'm not absolutely sure), that insert and delete cannot have\nhappened in the same command,\nso t_cmin < t_cmax,\nso t_cmin < CurrentCommandId,\nso the exact value of t_cmin is irrelevant.\n\nSo at any moment at most one of the two fields t_cmin and t_cmax is\nneeded.\n\n(4) If (3) is true, we can have a single field t_cnum in\nHeapTupleHeaderData, the meaning of which is t_cmax, if t_xmax is the\ncurrent transaction, otherwise t_cmin.\n\nt_cmin is used in:\n. backend/access/common/heaptuple.c\n. backend/access/heap/heapam.c\n. backend/access/transam/xlogutils.c\n. backend/commands/vacuum.c\n. backend/utils/time/tqual.c\n\nt_cmax is used in:\n. backend/access/common/heaptuple.c\n. backend/access/heap/heapam.c\n. backend/utils/time/tqual.c\n\nAs far as I have checked these sources (including the abuse of t_cmin\nby vacuum) my suggested change should be possible, but as I told you\nI'm new here and so I have the following questions:\n\n(Q1) Is assumption (3d) true? Do you have any counter examples?\n\n(Q2) Is there any possibility of t_cmax being set and t_cmin still\nbeing needed? (Preferred answer: no :-)\n\n(Q3) Are my thoughts WAL compatible?\n\n(Q4) Is it really easy to change the size of HeapTupleHeaderData? Are\nthe data of this struct only accessed by field names or are there\ndirty tricks using memcpy() and pointer arithmetic?\n\n(Q5) Are these thoughts obsolete as soon as nested transactions are\nconsidered?\n\nThank you for reading this long message.\n\nServus\n Manfred\n", "msg_date": "Thu, 02 May 2002 19:43:34 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Per tuple overhead, cmin, cmax" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> (3d) t_xmin == t_xmax == current transaction. The tuple has been\n> inserted and then deleted by the current transaction. 
Then I claim\n> (but I'm not absolutely sure), that insert and delete cannot have\n> happened in the same command,\n> so t_cmin < t_cmax,\n> so t_cmin < CurrentCommandId,\n> so the exact value of t_cmin is irrelevant.\n\nThe hole in this logic is that there can be multiple active scans with\ndifferent values of CurrentCommandId (eg, within a function\nCurrentCommandId may be different than it is outside). If you overwrite\ncmin with cmax then you are destroying the information needed by a scan\nwith smaller CurrentCommandId than yours.\n\n> (Q4) Is it really easy to change the size of HeapTupleHeaderData? Are\n> the data of this struct only accessed by field names or are there\n> dirty tricks using memcpy() and pointer arithmetic?\n\nAFAIK there are no dirty tricks there. I am hesitant to change the\nheader layout without darn good reason, because it breaks any chance\nof having a working pg_upgrade process. But that's strictly a\nproduction-system concern, and need not discourage you from\nexperimenting.\n\n> (Q5) Are these thoughts obsolete as soon as nested transactions are\n> considered?\n\nPossibly. We haven't worked out exactly how nested transactions would\nwork, but to the extent that they are handled as different CommandIds\nwe'd have the same issue already mentioned: we should not assume that\nexecution of different CommandIds can't overlap in time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 17:16:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax " }, { "msg_contents": "Tom,\nthanks for answering.\n\nOn Thu, 02 May 2002 17:16:38 -0400, you wrote:\n>The hole in this logic is that there can be multiple active scans with\n>different values of CurrentCommandId (eg, within a function\n>CurrentCommandId may be different than it is outside). 
If you overwrite\n>cmin with cmax then you are destroying the information needed by a scan\n>with smaller CurrentCommandId than yours.\n\nOh, I see :-(\nLet me throw in one of my infamous wild ideas in an attempt to rescue\nmy proposal: We have 4 32-bit-numbers: xmin, cmin, xmax, and cmax.\nThe only case, where we need cmin *and* cmax, is, when xmin == xmax.\nSo if we find a single bit to flag this case, we only need 3\n32-bit-numbers to store this information on disk.\n\nTo keep the code readable we probably would need some accessor\nfunctions or macros to access these fields.\n\nAs of 7.2 there are three unused bits in t_infomask.\n\n>\n>> (Q4) Is it really easy to change the size of HeapTupleHeaderData? Are\n>> the data of this struct only accessed by field names or are there\n>> dirty tricks using memcpy() and pointer arithmetic?\n>\n>AFAIK there are no dirty tricks there. I am hesitant to change the\n>header layout without darn good reason, because it breaks any chance\n>of having a working pg_upgrade process. But that's strictly a\n>production-system concern, and need not discourage you from\n>experimenting.\n\nIs saving 4 bytes per tuple a \"darn good reason\"? Is a change\nacceptable for 7.3? 
Do you think it's worth the effort?\n\n> We haven't worked out exactly how nested transactions would\n>work, but to the extent that they are handled as different CommandIds\n>we'd have the same issue already mentioned: we should not assume that\n>execution of different CommandIds can't overlap in time.\n\nAssuming that a subtransaction is completely contained in the outer\ntransaction and there is no activity by the outer transaction while\nthe subtransaction is active, I believe, this problem can be solved\n...\nIt's late now, I'll try to think clearer tomorrow.\n\nGood night\n Manfred\n", "msg_date": "Fri, 03 May 2002 01:25:26 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax " }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Let me throw in one of my infamous wild ideas in an attempt to rescue\n> my proposal: We have 4 32-bit-numbers: xmin, cmin, xmax, and cmax.\n> The only case, where we need cmin *and* cmax, is, when xmin == xmax.\n> So if we find a single bit to flag this case, we only need 3\n> 32-bit-numbers to store this information on disk.\n\nHmm ... that might work. Actually, we are trying to stuff *five*\nnumbers into these fields: xmin, xmax, cmin, cmax, and a VACUUM FULL\ntransaction id (let's call it xvac just to have a name). The code\ncurrently assumes that cmin is not interesting simultaneously with xvac.\nI think it might be true that cmax is not interesting simultaneously\nwith xvac either, in which case this could be made to work. (Vadim,\nyour thoughts?)\n\n> To keep the code readable we probably would need some accessor\n> functions or macros to access these fields.\n\nAmen. But that would be cleaner than now, at least for VACUUM;\nit's just using cmin where it means xvac.\n\n> Is saving 4 bytes per tuple a \"darn good reason\"? Is a change\n> acceptable for 7.3? Do you think it's worth the effort?\n\nI'm on the fence about it. 
My thoughts are probably colored by the\nfact that I prefer platforms that have MAXALIGN=8, so half the time\n(including all null-free rows) there'd be no savings at all. Now if\nwe could get rid of 8 bytes in the header, I'd get excited ;-)\n\nAny other opinions out there?\n\n\t\t\tregards, tom lane\n\nPS: I did like your point about BITMAPLEN; I think that might be\na free savings. I was waiting for you to bring it up on hackers\nbefore commenting though...\n", "msg_date": "Thu, 02 May 2002 21:10:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax " }, { "msg_contents": "On Thu, 02 May 2002 21:10:40 -0400, Tom Lane wrote:\n\n>Hmm ... that might work. Actually, we are trying to stuff *five*\n>numbers into these fields: xmin, xmax, cmin, cmax, and a VACUUM FULL\n>transaction id (let's call it xvac just to have a name). The code\n>currently assumes that cmin is not interesting simultaneously with xvac.\n>I think it might be true that cmax is not interesting simultaneously\n>with xvac either, in which case this could be made to work. (Vadim,\n>your thoughts?)\nHaving read the sources recently I'm pretty sure you're right.\n\n>I'm on the fence about it. My thoughts are probably colored by the\n>fact that I prefer platforms that have MAXALIGN=8, so half the time\n>(including all null-free rows) there'd be no savings at all.\nBut the other half of the time we'd save 8 bytes. 
So on average we\nget savings of 4 bytes per tuple, don't we?\n\n> Now if\n>we could get rid of 8 bytes in the header, I'd get excited ;-)\nI keep trying :-)\n\nServus\n Manfred\n", "msg_date": "Fri, 03 May 2002 09:51:46 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax " }, { "msg_contents": "On Thu, 02 May 2002 21:10:40 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred Koizar <mkoi-pg@aon.at> writes:\n>> Is saving 4 bytes per tuple a \"darn good reason\"?\n>\n>[...] Now if\n>we could get rid of 8 bytes in the header, I'd get excited ;-)\n\nTom,\n\nwhat about WITHOUT OIDS? I know dropping the OID from some tables and\nkeeping it for others is not trivial, because t_oid is the _first_\nfield of HeapTupleHeaderData. I'm vaguely considering a few possible\nimplementations and will invest more work in a detailed proposal, if\nit's wanted.\n\nServus\n Manfred\n", "msg_date": "Tue, 21 May 2002 12:54:10 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> what about WITHOUT OIDS? I know dropping the OID from some tables and\n> keeping it for others is not trivial, because t_oid is the _first_\n> field of HeapTupleHeaderData. I'm vaguely considering a few possible\n> implementations and will invest more work in a detailed proposal, if\n> it's wanted.\n\nYeah, I had been toying with the notion of treating OID like a user\nfield --- ie, it'd be present in the variable-length part of the record\nif at all. It'd be a bit tricky to find all the places that would need\nto change, but I think there are not all that many.\n\nAs usual, the major objection to any such change is losing the chance\nof doing pg_upgrade. But we didn't have pg_upgrade during the 7.2\ncycle either. 
If we put together several such changes and did them\nall at once, the benefit might be enough to overcome that complaint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 09:57:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "On Tue, 21 May 2002 09:57:32 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred Koizar <mkoi-pg@aon.at> writes:\n>> what about WITHOUT OIDS? I know dropping the OID from some tables and\n>> keeping it for others is not trivial, because t_oid is the _first_\n>> field of HeapTupleHeaderData. I'm vaguely considering a few possible\n>> implementations and will invest more work in a detailed proposal, if\n>> it's wanted.\n>\n>Yeah, I had been toying with the notion of treating OID like a user\n>field --- ie, it'd be present in the variable-length part of the record\n>if at all. It'd be a bit tricky to find all the places that would need\n>to change, but I think there are not all that many.\nThat was one of the possible solutions I thought of, unfortunately the\none I'm most afraid of. Not because I think it's not the cleanest\nway, but I don't (yet) feel comfortable enough with the code to rip\nout oids from system tables. However, if you tell me it's feasible\nand if you give me some hints where to start, I'll give it a try...\n\nOther possible implementations would leave the oid in the tuple\nheader:\n\n. typedef two structs HeapTupleHeaderDataWithOid and\nHeapTupleHeaderDataWithoutOid, wrap access to *all* HeapTupleHeader\nfields in accessor macros/functions, give these accessors enough\ninformation to know which variant to use.\n\n. Decouple on-disk format from in-memory structures, use\nHeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract\nheader data to/from disk buffers. Concurrency?\n\n>As usual, the major objection to any such change is losing the chance\n>of doing pg_upgrade. 
But we didn't have pg_upgrade during the 7.2\n>cycle either.\n\nI thought, it is quite common to need pg_dump/restore when upgrading\nbetween releases. Or are you talking about those hackers, testers and\nusers(?), who are using a cvs version now?\n\nAnyway, as long as our changes don't make heap tuples larger, it\nshould be possible to write a tool that converts version x data files\nto version y data files. I've done that before (not for PG though)\nand I know it's a lot of work, but wouldn't it be great for the PG\nmarketing department ;-)\n\n>If we put together several such changes [...]\n\nI can't guarantee that; my ideas come in drop by drop :-)\nBTW, is there a 7.3 schedule?\n\nServus\n Manfred\n", "msg_date": "Tue, 21 May 2002 17:40:30 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> That was one of the possible solutions I thought of, unfortunately the\n> one I'm most afraid of. Not because I think it's not the cleanest\n> way, but I don't (yet) feel comfortable enough with the code to rip\n> out oids from system tables.\n\nThe system tables that have OIDs will certainly continue to have OIDs.\n\nI suppose the messiest aspect of that solution would be changing all\nthe places that currently do \"tuple->t_data->t_oid\". If OID is not at\na fixed offset in the tuple then it'll be necessary to change *all*\nthose places. Ugh. While certainly we should have been using accessor\nmacros for that, I'm not sure I want to try to change it.\n\n> Other possible implementations would leave the oid in the tuple\n> header:\n\n> . 
typedef two structs HeapTupleHeaderDataWithOid and\n> HeapTupleHeaderDataWithoutOid, wrap access to *all* HeapTupleHeader\n> fields in accessor macros/functions, give these accessors enough\n> information to know which variant to use.\n\nIf OID is made to be the last fixed-offset field, instead of the first,\nthen this approach would be fairly workable. Actually I'd still use\njust one struct definition, but do offsetof() calculations to decide\nwhere the null-bitmap starts.\n\n> Decouple on-disk format from in-memory structures, use\n> HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract\n> header data to/from disk buffers. Concurrency?\n\nInefficient, and you'd have problems still with the changeable fields\n(t_infomask etc).\n\n>> As usual, the major objection to any such change is losing the chance\n>> of doing pg_upgrade. But we didn't have pg_upgrade during the 7.2\n>> cycle either.\n\n> I thought, it is quite common to need pg_dump/restore when upgrading\n> between releases.\n\nYes, and we get loud complaints every time we require it...\n\n> Anyway, as long as our changes don't make heap tuples larger, it\n> should be possible to write a tool that converts version x data files\n> to version y data files. I've done that before (not for PG though)\n> and I know it's a lot of work, but wouldn't it be great for the PG\n> marketing department ;-)\n\nI'd be afraid to use a conversion-in-place tool for this sort of thing.\nIf it crashes halfway through, what then?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 11:53:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "On Tue, 21 May 2002 11:53:04 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>The system tables that have OIDs will certainly continue to have OIDs.\n\nThat's clear. I should have written: \"... rip out oids from tuple\nheaders of system tables.\"\n\n>Ugh. 
While certainly we should have been using accessor\n>macros for that, I'm not sure I want to try to change it.\n\nI already did this for xmin, xmax, cmin, cmax, and xvac (see my patch\nposted 2002-05-12).\n\n>If OID is made to be the last fixed-offset field, instead of the first,\n\nThat would introduce some padding.\n\n>then this approach would be fairly workable. Actually I'd still use\n>just one struct definition, but do offsetof() calculations to decide\n>where the null-bitmap starts.\n\n... and for calculating the tuple header size.\n\n>> Decouple on-disk format from in-memory structures, use\n>> HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract\n>> header data to/from disk buffers. Concurrency?\n>\n>Inefficient,\n\nJust to be sure: You mean the CPU cycles wasted by Pack() and\nUnpack()?\n\n>I'd be afraid to use a conversion-in-place tool for this sort of thing.\n\nMe too. No, not in place! I thought of a filter reading an old\nformat data file, one page at a time, and writing a new format data\nfile. This would work as long as the conversions don't cause page\noverflow.\n\nNo comment on a planned 7.3 timeframe? :-(\n\nServus\n Manfred\n", "msg_date": "Tue, 21 May 2002 19:30:17 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> No comment on a planned 7.3 timeframe? :-(\n\nI think we are planning to go beta in late summer (end of August, say).\nProbably in July we'll start pressing people to finish up any major\ndevelopment items, or admit that they won't happen for 7.3. 
So we've\nstill got a couple months of full-tilt development mode before we\nstart to worry about tightening up for release.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 May 2002 13:44:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "Tom Lane wrote:\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > No comment on a planned 7.3 timeframe? :-(\n> \n> I think we are planning to go beta in late summer (end of August, say).\n> Probably in July we'll start pressing people to finish up any major\n> development items, or admit that they won't happen for 7.3. So we've\n> still got a couple months of full-tilt development mode before we\n> start to worry about tightening up for release.\n\nI am concerned about slowing down too early. We did that in previous\nreleases and didn't get the beta focus we needed, and it was too\nparalyzing on people to know what is to be slowed and what to keep\ngoing. I think a slowdown two weeks before beta would be fine.\n\nBasically, I think the slowdown lengthened the time we were not doing\nsomething productive.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 01:59:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Tom Lane wrote:\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > what about WITHOUT OIDS? I know dropping the OID from some tables and\n> > keeping it for others is not trivial, because t_oid is the _first_\n> > field of HeapTupleHeaderData. 
I'm vaguely considering a few possible\n> > implementations and will invest more work in a detailed proposal, if\n> > it's wanted.\n> \n> Yeah, I had been toying with the notion of treating OID like a user\n> field --- ie, it'd be present in the variable-length part of the record\n> if at all. It'd be a bit tricky to find all the places that would need\n> to change, but I think there are not all that many.\n> \n> As usual, the major objection to any such change is losing the chance\n> of doing pg_upgrade. But we didn't have pg_upgrade during the 7.2\n> cycle either. If we put together several such changes and did them\n> all at once, the benefit might be enough to overcome that complaint.\n\nI think it is inevitable that there will be enough binary file changes that\npg_upgrade will not work for 7.3 --- it just seems it is only a matter\nof time.\n\nOne idea is to allow alternate page layouts using the heap page version\nnumber, but that will be difficult/confusing in the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 02:01:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "On Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Manfred Koizar <mkoi-pg@aon.at> writes:\n> > > No comment on a planned 7.3 timeframe? :-(\n> >\n> > I think we are planning to go beta in late summer (end of August, say).\n> > Probably in July we'll start pressing people to finish up any major\n> > development items, or admit that they won't happen for 7.3. So we've\n> > still got a couple months of full-tilt development mode before we\n> > start to worry about tightening up for release.\n>\n> I am concerned about slowing down too early. 
We did that in previous\n> releases and didn't get the beta focus we needed, and it was too\n> paralyzing on people to know what is to be slowed and what to keep\n> going. I think a slowdown two weeks before beta would be fine.\n\nOther than personal slowdowns, there is no reason for anything to \"slow\ndown\" until beta itself starts ...\n\n\n", "msg_date": "Fri, 7 Jun 2002 09:09:45 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > Manfred Koizar <mkoi-pg@aon.at> writes:\n> > > > No comment on a planned 7.3 timeframe? :-(\n> > >\n> > > I think we are planning to go beta in late summer (end of August, say).\n> > > Probably in July we'll start pressing people to finish up any major\n> > > development items, or admit that they won't happen for 7.3. So we've\n> > > still got a couple months of full-tilt development mode before we\n> > > start to worry about tightening up for release.\n> >\n> > I am concerned about slowing down too early. 
Of course, another option is to continue in full development\nuntil the end of August, then start beta in September as soon as the\ncode is stable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 10:39:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "On Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> > On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> >\n> > > Tom Lane wrote:\n> > > > Manfred Koizar <mkoi-pg@aon.at> writes:\n> > > > > No comment on a planned 7.3 timeframe? :-(\n> > > >\n> > > > I think we are planning to go beta in late summer (end of August, say).\n> > > > Probably in July we'll start pressing people to finish up any major\n> > > > development items, or admit that they won't happen for 7.3. So we've\n> > > > still got a couple months of full-tilt development mode before we\n> > > > start to worry about tightening up for release.\n> > >\n> > > I am concerned about slowing down too early. We did that in previous\n> > > releases and didn't get the beta focus we needed, and it was too\n> > > paralyzing on people to know what is to be slowed and what to keep\n> > > going. I think a slowdown two weeks before beta would be fine.\n> >\n> > Other then personal slow downs, there is no reason for anything to \"slow\n> > down\" until beta itself starts ...\n>\n> I assume Tom doesn't want a huge patch applied the day before beta, and\n\nNo offence to Tom, but who cares whether Tom wants a huge patch or not?\n\n> I can understand that, but patch problems usually appear within two\n> weeks of application, so I think we only have to worry about those last\n> two weeks. 
Of course, another option is to continue in full development\n> until the end of August, then start beta in September as soon as the\n> code is stable.\n\nThat kinda was the plan ... code will be frozen around the 1st of\nSeptember, with a release packaged then that will be label'd beta1 ...\nregardless of how stable it is ... beta is \"we're not taking any more\nchanges except fixes, so pound on this and let us know what needs to be\nfixed\" ... there shouldn't be any 'lead up' to beta, the lead up is the\ndevelopment period itself ... *shrug*\n\n", "msg_date": "Fri, 7 Jun 2002 11:56:13 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> > > > I am concerned about slowing down too early. We did that in previous\n> > > > releases and didn't get the beta focus we needed, and it was too\n> > > > paralyzing on people to know what is to be slowed and what to keep\n> > > > going. I think a slowdown two weeks before beta would be fine.\n> > >\n> > > Other then personal slow downs, there is no reason for anything to \"slow\n> > > down\" until beta itself starts ...\n> >\n> > I assume Tom doesn't want a huge patch applied the day before beta, and\n> \n> No offence to Tom, but who cares whether Tom wants a huge patch or not?\n\nWell, he does clean up lots of breakage, so I was willing to settle on a\ncompromise of 2 weeks. However, if other feel we should just go\nfull-bore until beta starts, I am fine with that.\n\nI will say that I was disapointed by previous release delays and will be\nmore vocal about moving things forward than I have in the past.\n\n> > I can understand that, but patch problems usually appear within two\n> > weeks of application, so I think we only have to worry about those last\n> > two weeks. 
Of course, another option is to continue in full development\n> > until the end of August, then start beta in September as soon as the\n> > code is stable.\n> \n> That kinda was the plan ... code will be frozen around the 1st of\n> September, with a release packaged then that will be label'd beta1 ...\n> regardless of how stable it is ... beta is \"we're not taking any more\n> changes except fixes, so pound on this and let us know what needs to be\n> fixed\" ... there shouldn't be any 'lead up' to beta, the lead up is the\n> development period itself ... *shrug*\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 12:00:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "On Fri, 7 Jun 2002, Bruce Momjian wrote:\n\n> I will say that I was disapointed by previous release delays and will be\n> more vocal about moving things forward than I have in the past.\n\nI don't know ... I kinda like being able to confidently say to clients\nthat \"the latest release is always the most stable/reliable\" and not being\nburned by it :) Hell, last release, I started moving to v7.2 on some of\nour production servers around beta4 or so, since a) I was confident and b)\nI needed some of the features ...\n\nNot many software packages you can say that about anymore, eh?\n\n", "msg_date": "Fri, 7 Jun 2002 13:18:20 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Marc G. 
Fournier wrote:\n> On Fri, 7 Jun 2002, Bruce Momjian wrote:\n> \n> > I will say that I was disappointed by previous release delays and will be\n> > more vocal about moving things forward than I have in the past.\n> \n> I don't know ... I kinda like being able to confidently say to clients\n> that \"the latest release is always the most stable/reliable\" and not being\n> burned by it :) Hell, last release, I started moving to v7.2 on some of\n> our production servers around beta4 or so, since a) I was confident and b)\n> I needed some of the features ...\n> \n> Not many software packages you can say that about anymore, eh?\n\nYes, true, I am not going to push the schedule. What I want to try to\ndo is keep people more informed of where we are in the release process,\nand more informed about the open issues involved.\n\nThat squishy freeze is one of those items that is very hard to organize\nbecause it is so arbitrary about what should be added. I think we need\nto communicate CODE-CODE-CODE until September 1, then communicate\nTEST-TEST-TEST after that. We will not release before it is ready, but\nwe will marshal our forces better.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 12:27:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "On Fri, 7 Jun 2002 02:01:40 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>I think it is inevitable that there be enough binary file changes the\n>pg_upgrade will not work for 7.3 --- it just seems it is only a matter\n>of time.\n\nAs far as it concerns changes proposed by me, I'll (try to) provide a\nconversion program, if that's necessary for my patches to be accepted.\nThen move_objfiles() in pg_upgrade would have to call pg_convert, or\nwhatever we call it, instead of mv. And yes, users would need twice\nthe disk space during pg_upgrade.\n\n>One idea is to allow alternate page layouts using the heap page version\n>number, but that will be difficult/confusing in the code.\n\nThis is a good idea, IMHO. By saying \"*the* heap page version number\"\ndo you mean that there already is such a number by now? I could not\nfind one in bufpage.h. Should I have looked somewhere else?\n\nWhile we're at it, does each file start with a magic number\nidentifying its type? AFAICS nbtree does; but I can't tell for heap\nand have not yet checked for rtree, gist, ... 
This is the reason for\nthe \"try to\" in the first paragraph.\n\nServus\n Manfred\n", "msg_date": "Fri, 07 Jun 2002 20:22:23 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Manfred Koizar wrote:\n> On Fri, 7 Jun 2002 02:01:40 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >I think it is inevitable that there be enough binary file changes the\n> >pg_upgrade will not work for 7.3 --- it just seems it is only a matter\n> >of time.\n> \n> As far as it concerns changes proposed by me, I'll (try to) provide a\n> conversion program, if that's necessary for my patches to be accepted.\n> Then move_objfiles() in pg_upgrade would have to call pg_convert, or\n> whatever we call it, instead of mv. And yes, users would need twice\n> the disk space during pg_upgrade.\n\n> \n> >One idea is to allow alternate page layouts using the heap page version\n> >number, but that will be difficult/confusing in the code.\n> \n> This is a good idea, IMHO. By saying \"*the* heap page version number\"\n> do you mean that there already is such a number by now? I could not\n> find one in bufpage.h. Should I have looked somewhere else?\n\nOops, I see I added to TODO lately:\n\n\t* Add version file format stamp to heap and other table types\n\nGuess we would have to add it. Btree has it in nbtree.h:\n\n\t uint32 btm_version;\n\nI thought heap should have it too. Of course, there are problems with\nhaving the tuple read _know_ its version, and preventing mixing of\ntuples of different versions in the same page.\n\n> \n> While we're at it, does each file start with a magic number\n> identifying its type? AFAICS nbtree does; but I can't tell for heap\n> and have not yet checked for rtree, gist, ... This is the reason for\n> the \"try to\" in the first paragraph.\n\nYep, only btree. I guess I didn't add it because it would cause\nproblems for pg_upgrade. 
:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jun 2002 17:14:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Uh guys ... what I *said* was:\n\n> I think we are planning to go beta in late summer (end of August, say).\n> Probably in July we'll start pressing people to finish up any major\n> development items, or admit that they won't happen for 7.3.\n\nBy which I meant that in July we should start hounding anyone who's got\nmajor unfinished work. (Like, say, me, if the schema changes are still\nincomplete then.) Not that we won't accept the work when it gets here,\njust that that'll be the time to start demanding closure on big 7.3\nchanges.\n\nAnd yes, I *would* be pretty upset with the idea of applying major\npatches in the last weeks of August, if they are changes that pop up\nout-of-the-blue at that time. If it's finishing up work that the\ncommunity has already approved, that's a different scenario. But big,\npoorly-reviewed feature additions right before beta are exactly my idea\nof how to mess up that reputation for stability that Marc was touting...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 00:10:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID " }, { "msg_contents": "Tom Lane wrote:\n> Uh guys ... what I *said* was:\n> \n> > I think we are planning to go beta in late summer (end of August, say).\n> > Probably in July we'll start pressing people to finish up any major\n> > development items, or admit that they won't happen for 7.3.\n> \n> By which I meant that in July we should start hounding anyone who's got\n> major unfinished work. 
(Like, say, me, if the schema changes are still\n> incomplete then.) Not that we won't accept the work when it gets here,\n> just that that'll be the time to start demanding closure on big 7.3\n> changes.\n\nOK.\n\n> And yes, I *would* be pretty upset with the idea of applying major\n> patches in the last weeks of August, if they are changes that pop up\n> out-of-the-blue at that time. If it's finishing up work that the\n> community has already approved, that's a different scenario. But big,\n> poorly-reviewed feature additions right before beta are exactly my idea\n> of how to mess up that reputation for stability that Marc was touting...\n\nYes, but there is a downside to this. We have trouble enough figuring\nout if a patch is a \"feature\" or \"bug fix\" during beta. How are people\ngoing to decide if a feature is \"big\" or not to work on during August?\nIt has a paralyzing effect on our developers. \n\nNow, I don't want to apply a partially-implemented feature in the last\nweek of August, but I don't want to slow things down during August,\nbecause the last time we did this we were all looking at each other\nwaiting for beta, and nothing was getting done. This is the paralyzing\neffect I want to avoid.\n\nWe have beta for testing. That's where our reliability comes from too. \nAnd last beta, we did almost nothing because we had shut down\ndevelopment so early, and it dragged because there _wasn't_ a clear line\nbetween development time and beta.\n\nSo, I think we should:\n\n\tWarn people in July that beta is September 1 and all features\n\thave to be complete by then, or they get ripped out.\n\n\tReject non-complete patches during August, meaning accepted\n\tpatches in August have to be fully functional features; no\n\tpartial patches and I will work on the rest later.\n\n\tVote on any patches where there is disagreement.\n\nSo, in summary, for me, August patches have to be 100% complete. That\ntakes the guesswork out of the deadline. 
There isn't the question of\nwhether we will accept such a feature or not. The burden is on the\ndeveloper to provide a 100% complete patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 18:01:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Now, I don't want to apply a partially-implemented feature in the last\n> week of August, but I don't want to slow things down during August,\n> because the last time we did this we were all looking at each other\n> waiting for beta, and nothing was getting done. This is the paralyzing\n> effect I want to avoid.\n\nWell, my take on it is that the reason beta was delayed the last two\ngo-rounds was that we allowed major work to be committed in an\nincomplete state, and then we were stuck waiting for those people to\nfinish. 
(The fact that the people in question were core members didn't\nimprove my opinion of the situation ;-)) I'd like to stop making that\nmistake.\n\n> So, I think we should:\n> \tWarn people in July that beta is September 1 and all features\n> \thave to be complete by then, or they get ripped out.\n> \tReject non-complete patches during August, meaning accepted\n> \tpatches in August have to be fully functional features; no\n> \tpartial patches and I will work on the rest later.\n\nI thought that was more or less the same thing I was proposing...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jun 2002 19:40:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Project scheduling issues (was Re: Per tuple overhead, cmin, cmax,\n\tOID)" }, { "msg_contents": "On Sat, 8 Jun 2002, Bruce Momjian wrote:\n\n> Yes, but there is a downside to this. We have trouble enough figuring\n> out if a patch is a \"feature\" or \"bug fix\" during beta. How are people\n> going to decide if a feature is \"big\" or not to work on during August?\n> It has a paralyzing effect on our developers.\n\nHow is this any different than our other releases? I think you've totally\nlost me as to where the problem is ... reading your above, you are\nsuggesting that ppl don't work on big projects during the month of August,\nsince it might not get in for the release? We've never advocated that\nbefore, nor do I believe we should at this point ... in fact, I think it's\nabout time we start dealing with beta using the tools that we have\navailable ...\n\nBeta starts, we branch out a -STABLE vs -DEVELOPMENT branch in CVS ... we\nrelease a beta1 and deal with bug reports as they come in, followed by a\nbeta2 until we are ready for release ... I think everyone is old enough\nnow to be able to decide what fixes have gone into -STABLE that should be\nreflected in -DEVELOPMENT, no? 
Our mistake last release wasn't how long\nbeta lasted, but how long we stalled development ...\n\n\n", "msg_date": "Sat, 8 Jun 2002 21:10:43 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Sat, 8 Jun 2002, Bruce Momjian wrote:\n> \n> > Yes, but there is a downside to this. We have trouble enough figuring\n> > out if a patch is a \"feature\" or \"bug fix\" during beta. How are people\n> > going to decide if a feature is \"big\" or not to work on during August?\n> > It has a paralyzing effect on our developers.\n> \n> How is this any different than our other releases? I think you've totally\n> lost me as to where the problem is ... reading your above, you are\n> suggesting that ppl don't work on big projects during the month of August,\n> since it might not get in for the release? We've never advocated that\n> before, nor do I believe we should at this point ... in fact, I think it's\n> about time we start dealing with beta using the tools that we have\n> available ...\n\nIn previous releases, we had this \"it is too close to beta to add\nfeature X\" mentality, and I see Tom reiterating that in his email.\n\nIt is the idea that we are supposed to go into beta with a bug-free release\nthat bothers me. August is prime time for open-source development. Many\ncountries have holidays, and business is slow, so people have time to\nwork on projects. Let's use that time to improve PostgreSQL, and leave\nbeta for fixing.\n\n> Beta starts, we branch out a -STABLE vs -DEVELOPMENT branch in CVS ... we\n> release a beta1 and deal with bug reports as they come in, followed by a\n> beta2 until we are ready for release ... I think everyone is old enough\n> now to be able to decide what fixes have gone into -STABLE that should be\n> reflected in -DEVELOPMENT, no? 
Our mistake last release wasn't how long\n> beta lasted, but how long we stalled development ...\n\nAgreed. Let's split sometime during beta as soon as we are ready to\nwork on 7.4. Sooner and we just double-patch for no purpose, later and\nwe stall development.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 21:53:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Now, I don't want to apply a partially-implemented feature in the last\n> > week of August, but I don't want to slow things down during August,\n> > because the last time we did this we were all looking at each other\n> > waiting for beta, and nothing was getting done. This is the paralyzing\n> > effect I want to avoid.\n> \n> Well, my take on it is that the reason beta was delayed the last two\n> go-rounds was that we allowed major work to be committed in an\n> incomplete state, and then we were stuck waiting for those people to\n> finish. (The fact that the people in question were core members didn't\n> improve my opinion of the situation ;-)) I'd like to stop making that\n> mistake.\n\nI am going to recommend disabling features that people can't fix in a\ntimely manner during beta. Sounds harsh, but we can't have the whole\nproject waiting on one person to have a free weekend. If they can\ngenerate a patch, we can re-enable the feature, but we need to get some\ndiscipline for everyone's benefit. 
I don't think any of us wants to be\nembarrassed by the beta duration again.\n\n> > So, I think we should:\n> > \tWarn people in July that beta is September 1 and all features\n> > \thave to be complete by then, or they get ripped out.\n> > \tReject non-complete patches during August, meaning accepted\n> > \tpatches in August have to be fully functional features; no\n> > \tpartial patches and I will work on the rest later.\n> \n> I thought that was more or less the same thing I was proposing...\n\nThis is the text I objected to:\n\nTom Lane wrote:\n> And yes, I *would* be pretty upset with the idea of applying major\n> patches in the last weeks of August, if they are changes that pop up\n> out-of-the-blue at that time. If it's finishing up work that the\n> community has already approved, that's a different scenario. But big,\n> poorly-reviewed feature additions right before beta are exactly my idea\n> of how to mess up that reputation for stability that Marc was touting...\n\nIt emphasizes August as primarily finish-up time. And there is that\n\"pre-approved\" part I don't like. Feature has to be done by the end of\nAugust, doesn't matter whether it is approved or not. If someone wants\nto start and complete a feature during August, \"go ahead\" is my motto.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jun 2002 22:08:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "On Sat, 8 Jun 2002, Bruce Momjian wrote:\n\n> It is the idea that we are supposed to go into beta with a bug-free release\n> that bothers me.\n\nBut it's you that's always tried to advocate that ... no? If not, then I\nam confused, cause I know *I've* never ... 
to me, switching to beta mode\nhas always been the switch from 'add features' to 'fix the bugs to\nrelease' ...\n\n> Agreed. Let's split sometime during beta as soon as we are ready to\n> work on 7.4. Sooner and we just double-patch for no purpose, later and\n> we stall development.\n\nNo, let's split *at* beta ... I imagine there are several ppl out there\nthat have a lot of work they wish to do, and sitting around twiddling their\nthumbs waiting for someone to *maybe* report a bug in their area/code is a\nwaste of everyone's time ... not to mention a delay in being ready to\nrelease the next version ...\n\nHell, each 'beta period' we've done so far has caused that ... where we\nget in patches and changes that aren't appropriate for beta and they get\nsat on until after it's been released ... then you risk the fun of merging\nin conflicting changes, so the patch that was perfect when it was\nsubmitted has to be redone because someone else's patch changed enough of\nthe code that it doesn't apply cleanly ...\n\nIt's not like it was years ago when there were a couple of us in the code\n... there are enough developers out there now (and growing) that\n'stalling' things isn't fair to them (or, in some cases, the companies\nthat are paying them to develop features) ...\n\nWe have the tools to do this, it's time to start using them the way they\nwere meant to be used ...\n\n", "msg_date": "Sun, 9 Jun 2002 02:17:10 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Per tuple overhead, cmin, cmax, OID" }, { "msg_contents": "On Sat, 8 Jun 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Now, I don't want to apply a partially-implemented feature in the last\n> > > week of August, but I don't want to slow things down during August,\n> > > because the last time we did this we were all looking at each other\n> > > waiting for beta, and nothing was getting done. 
This is the paralyzing\n> > > effect I want to avoid.\n> >\n> > Well, my take on it is that the reason beta was delayed the last two\n> > go-rounds was that we allowed major work to be committed in an\n> > incomplete state, and then we were stuck waiting for those people to\n> > finish. (The fact that the people in question were core members didn't\n> > improve my opinion of the situation ;-)) I'd like to stop making that\n> > mistake.\n>\n> I am going to recommend disabling features that people can't fix in a\n> timely manner during beta. Sounds harsh, but we can't have the whole\n> project waiting on one person to have a free weekend. If they can\n> generate a patch, we can re-enable the feature, but we need to get some\n> discipline for everyone's benefit. I don't think any of us wants to be\n> embarrassed by the beta duration again.\n\nI wasn't embarrassed by it ... when I talk to ppl asking about QA on\nPostgreSQL, I quite proudly point out that we'd rather delay than release\nsomething we aren't confident about *shrug*\n\n> It emphasizes August as primarily finish-up time. And there is that\n> \"pre-approved\" part I don't like. Feature has to be done by the end of\n> August, doesn't matter whether it is approved or not. If someone wants\n> to start and complete a feature during August, \"go ahead\" is my motto.\n\nPersonally ... I'm really curious as to why you are even trying to\n'formalize' stuff that has been done for years now ... end of August rolls\naround and someone submits a feature patch, we do as we've always done ...\nwe discuss its merits, and its risk factor ... if it presents too high a\nrisk, it gets put on the 'patch stack' for the next release ... or do you\nthink our judgement in such matters is such that we have to formalize/set\nin stone this common sense stuff beforehand?\n\nI *really* wish ppl would stop harping on the length of the last beta\ncycle ... 
I will always rather delay a release due to a *known*\noutstanding bug, especially one that just needs a little bit more time to\nwork out, than to release software \"on time\" ala Microsoft ...\n\nHell, you are trying to set in stone when beta starts (end of August) ...\nbut with some of the massive changes that we tend to see over the course\nof a development project, for all we know, Tom will be 90% finished\nsomething and only need another week to get it complete ... personally,\nhe's one of many whose code I wouldn't question, so giving another week to\nget it done and in, IMHO, is perfectly acceptable ... but, for what you\nare trying to get set in stone, it wasn't finished by Sept 1st, so we'll\nthrow it all out until the next release ...\n\nRight now, Sept 1st is the \"preferred date to go beta\" ... when Sept 1st\nrolls around, like we've always done in the past, we will review that and\nif we need to delay a little, we will *shrug*\n\n", "msg_date": "Sun, 9 Jun 2002 02:32:22 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> I *really* wish ppl would stop harping on the length of the last beta\n> cycle ... I will always rather delay a release due to a *known*\n> outstanding bug, especially one that just needs a little bit more time to\n> work out, than to release software \"on time\" ala Microsoft ...\n\nI don't think that's at issue here. No one was suggesting that we'd\nforce an *end* to the beta cycle because of schedule issues. We ship when\nwe're satisfied and not before. I'm saying that I want to try to\n*start* the beta test period on-time, rather than letting the\nalmost-beta state drag on for months --- which we did in each of the\nlast two cycles. 
Development time is productive, and beta-test time\nis productive, but we're-trying-to-start-beta time is not very\nproductive ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 01:41:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead, cmin,\n cmax,\n\tOID)" }, { "msg_contents": "On Sun, 9 Jun 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > I *really* wish ppl would stop harping on the length of the last beta\n> > cycle ... I will always rather delay a release due to a *known*\n> > outstanding bug, especially one that just needs a little bit more time to\n> > work out, than to release software \"on time\" ala Microsoft ...\n>\n> I don't think that's at issue here.  No one was suggesting that we'd\n> force an *end* to beta cycle because of schedule issues.  We ship when\n> we're satisfied and not before.  I'm saying that I want to try to\n> *start* the beta test period on-time, rather than letting the\n> almost-beta state drag on for months --- which we did in each of the\n> last two cycles.  Development time is productive, and beta-test time\n> is productive, but we're-trying-to-start-beta time is not very\n> productive ...\n\nAgreed on all accounts ... which is why this time, I want to do a proper\nbranch when beta starts ... hell, from what I've seen suggested here so\nfar, we have no choice ... At least then we can 'rip out' something from\nthe beta tree without having to remove and re-add it to the development\none later, hoping that their changes haven't been affected by someone\nelse's ...\n\n\n", "msg_date": "Sun, 9 Jun 2002 13:28:22 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Agreed on all accounts ... 
which is why this time, I want to do a proper\n> branch when beta starts ... hell, from what I've seen suggested here so\n> far, we have no choice ... At least then we can 'rip out' something from\n> the beta tree without having to remove and re-add it to the development\n> one later, hoping that they're changes haven't been affected by someone\n> else's ...\n\nWell, let's give that a try and see how it goes. I'm a bit worried\nabout the amount of double-patching we'll have to do, but other projects\nseem to manage to cope with multiple active branches...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jun 2002 12:35:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead, cmin,\n cmax,\n\tOID)" }, { "msg_contents": "On Sun, Jun 09, 2002 at 02:32:22AM -0300, Marc G. Fournier wrote:\n\n> Right now, Sept 1st is the \"preferred date to go beta\" ... when Sept 1st\n\n I agree with Bruce, Sept 1st is the deadline and right time for all\n discussion about shift of this date is Sept 2nd. Not now, else you\n never will see end of the cycle :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Jun 2002 09:35:54 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > I *really* wish ppl would stop harping on the length of the last beta\n> > cycle ... I will always rather delay a release due to an *known*\n> > outstanding bug, especially one that just needs a little bit more time to\n> > work out, then to release software \"on time\" ala Microsoft ...\n> \n> I don't think that's at issue here. 
No one was suggesting that we'd\n> force an *end* to beta cycle because of schedule issues. We ship when\n> we're satisfied and not before. I'm saying that I want to try to\n> *start* the beta test period on-time, rather than letting the\n> almost-beta state drag on for months --- which we did in each of the\n> last two cycles. Development time is productive, and beta-test time\n> is productive, but we're-trying-to-start-beta time is not very\n> productive ...\n\nYes, this was exactly my point. By slowing down in August, we enter\nthat \"almost beta\" period where there is uncertainty over what should be\nworked on. I know myself I am uncertain what is appropriate to work on,\nso I usually end up doing nothing, which is a waste.\n\nI think the only message should be \"finish before the end of August\". \nPeople can understand that, and it is under the control of the\ncontributor. The message \"no big patches in August\" is too imprecise and\nleads to uncertainty.\n\nOf course, if we don't finish by the end of August, our new message may\nbe \"finish before the end of September\". This brings up another point. \nWe have delayed beta to wait for single patches in the past, usually a\nweek at a time. When that week drags to two, and then four, we have\nlost development time. If we had just said \"four weeks\" from the start,\npeople could have continued development, knowing they had a month, but\nour one-week-at-a-time strategy basically holds up the whole group\nwaiting for single developer to finish a patch. What I am suggesting is\nthat our small delays for beta are hurting us _if_ the delay drags\nlonger than anticipated, and we keep pushing back the deadline. In such\ncases, we would be better just choosing a longer deadline from the\nstart. 
Perhaps we should have delays that are a month at a time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jun 2002 13:29:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Agreed on all accounts ... which is why this time, I want to do a proper\n> > branch when beta starts ... hell, from what I've seen suggested here so\n> > far, we have no choice ... At least then we can 'rip out' something from\n> > the beta tree without having to remove and re-add it to the development\n> > one later, hoping that they're changes haven't been affected by someone\n> > else's ...\n> \n> Well, let's give that a try and see how it goes. I'm a bit worried\n> about the amount of double-patching we'll have to do, but other projects\n> seem to manage to cope with multiple active branches...\n\nYes, Marc has been advocating this, and perhaps it is time to give it a\ntry. There are some downsides:\n\n\to All committers need to know that they have to double-patch\n\to We might have developers working on new features rather than\n\t focusing on beta testing/fixing.\n\nOne interesting idea would be to create a branch for 7.4, and apply\n_only_ 7.4 patches to that branch. Then, when we release 7.3, we merge\nthat branch back into the main CVS tree. That would eliminate\ndouble-patching _and_ give people a place to commit 7.4 changes. 
I\ndon't think the merge would be too difficult because 7.3 will not change\nsignificantly during beta.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jun 2002 13:33:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "On Mon, 10 Jun 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Agreed on all accounts ... which is why this time, I want to do a proper\n> > > branch when beta starts ... hell, from what I've seen suggested here so\n> > > far, we have no choice ... At least then we can 'rip out' something from\n> > > the beta tree without having to remove and re-add it to the development\n> > > one later, hoping that their changes haven't been affected by someone\n> > > else's ...\n> >\n> > Well, let's give that a try and see how it goes.  I'm a bit worried\n> > about the amount of double-patching we'll have to do, but other projects\n> > seem to manage to cope with multiple active branches...\n>\n> Yes, Marc has been advocating this, and perhaps it is time to give it a\n> try.  There are some downsides:\n>\n> \to  All committers need to know that they have to double-patch\n\n*Wrong* .. only if its a fix for a problem with -STABLE .. otherwise it\n*just* goes in the development tree ...\n\n> \to  We might have developers working on new features rather than\n> \t   focusing on beta testing/fixing.\n\nIts not the developers responsibility to beta test the software, it is\ntheir responsibility to test patches as they are applied during the\n'development cycle' ... and even after we've branched in the past, ppl\nhave \"fixed reported bugs\" and applied such fixes to the -STABLE branch\n... 
why would that be any different now?  All we're doing is letting\ndevelopers work on their projects instead of sitting on their hands\nwaiting for a bug report ...\n\n*Plus* ... good chance that any bugs that are reported are in the -DEV\nbranch also, so it has to be fixed regardless ...\n\n> One interesting idea would be to create a branch for 7.4, and apply\n> _only_ 7.4 patches to that branch.  Then, when we release 7.3, we merge\n> that branch back into the main CVS tree.  That would eliminate\n> double-patching _and_ give people a place to commit 7.4 changes.  I\n> don't think the merge would be too difficult because 7.3 will not change\n> significantly during beta.\n\nFour words: when hell freezes over\n\nWhy must you overcomplicate a process most *large* projects seem to find\nsooooo simple to deal with?  God, what you are proposing above requires\nppl to predict what v7.3 is going to look like when its finished, so that\ntheir work on v7.4 can follow?\n\nBruce, I think this whole thread has just about dried up now ... when v7.3\ngoes beta, we will branch just like other large projects do so that we\ndon't hold up any developers until we release the software, which, based\non past experiences and history, will end up being delayed ... hell, just\nthink, we branch on the 1st of Sept, release on the 15 of October (lets\nsay one month for beta plus a bit of delay), and are ready to go with the\nnext beta around the 1st of January since we didn't lose that 1.5mo of\ndevelopment time ... wow, imagine a *solid* 4 month development cycle\nbefore beta? :)\n\nBased on everything I've heard/seen in this thread, we seem to be looking\nat:\n\n1. Branch on Sept 1st, regardless of almost anything\n\n2. Once Branch created, any *partially implemented* features will get\n   rip'd out of the -STABLE branch and only fixes to the existing, fully\n   implemented features will go in\n\n3. 
Beta1 released once developers comfortable with the state of the code\n\nNow, *if*, the week before the Branch, someone submits a big patch that in\n*any way* concerns someone to apply, we can hold it off for a week and put\nit into the -DEV branch so that its not shelved for a couple of months,\nand possibly going out of date ... but that would be a judgement call at\nthe time, nothing set in stone ...\n\nThe only thing we are really \"setting in stone\" here is when we are\nbranching/freezing the code for release ...\n\n\n\n", "msg_date": "Mon, 10 Jun 2002 15:46:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Marc G. Fournier wrote:\n> On Mon, 10 Jun 2002, Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > > Agreed on all accounts ... which is why this time, I want to do a proper\n> > > > branch when beta starts ... hell, from what I've seen suggested here so\n> > > > far, we have no choice ... At least then we can 'rip out' something from\n> > > > the beta tree without having to remove and re-add it to the development\n> > > > one later, hoping that their changes haven't been affected by someone\n> > > > else's ...\n> > >\n> > > Well, let's give that a try and see how it goes.  I'm a bit worried\n> > > about the amount of double-patching we'll have to do, but other projects\n> > > seem to manage to cope with multiple active branches...\n> >\n> > Yes, Marc has been advocating this, and perhaps it is time to give it a\n> > try.  There are some downsides:\n> >\n> > \to  All committers need to know that they have to double-patch\n> \n> *Wrong* .. only if its a fix for a problem with -STABLE .. otherwise it\n> *just* goes in the development tree ...\n\nNot totally wrong.  I predict >90% of patches during those first two\nweeks will have to be double-applied.  Do you disagree? 
Remember, we\nused to branch earlier and double-apply, and it did get confusing when\npeople forgot to double-patch. I am not saying it is impossible, but I\ndon't want to minimize it either.\n\n\n> > \to We might have developers working on new features rather than\n> > \t focusing on beta testing/fixing.\n> \n> Its not the developers responsibility to beta test the software, its is\n> their responsibility to test patches as they are applied during the\n> 'development cycle' ... and even after we've branched in the past, ppl\n> have \"fixed reported bugs\" and applied such fixes to the -STABLE branch\n> ... why would that be any different now? All we're doing is letting\n> developers work on their projects instead of sitting on their hands\n> waiting for a bug report ...\n\nYes, good point. Developers are not testing. However, when we do need\npeople to track down bugs and fixes, I hope they aren't too busy working\non new features to help us.\n\n> *Plus* ... good chance that any bugs that are reports are in the -DEV\n> branch also, so it has to be fixed regardless ...\n> \n> > One interesting idea would be to create a branch for 7.4, and apply\n> > _only_ 7.4 patches to that branch. Then, when we release 7.3, we merge\n> > that branch back into the main CVS tree. That would eliminate\n> > double-patching _and_ give people a place to commit 7.4 changes. I\n> > don't think the merge would be too difficult because 7.3 will not change\n> > significantly during beta.\n> \n> Four words: when hell freezes over\n> \n> Why must you overcomplicate a process most *large* projects seem to find\n> sooooo simple to deal with? God, what you are proposing above requires\n> ppl to predict what v7.3 is going to look like when its finished, so that\n> their work on v7.4 can follow?\n\nOnly bug fixes are going into 7.3 during beta, so how much is it going\nto change?\n\nAnd I have done the double-patching, so I remember the problems. 
Aside\nfrom the hassle of doing everything twice, as development drifts from\nbeta, the patches do become harder to apply.\n\n> Bruce, I think this whole thread has just about dried up now ... when v7.3\n> goes beta, we will branch just like other large projects do so that we\n> don't hold up any developers until we release the software, which, based\n> on past experiences and history, will end up being delayed ... hell, just\n> think, we branch on the 1st of Sept, release on the 15 of October (lets\n> say one month for beta plus a bit of delay), and are ready to go with the\n> next beta around the 1st of January since we did't lose that 1.5mo of\n> development time ... wow, imagine a *solid* 4 month development cycle\n> before beta? :)\n\nYes, it will be good.\n\n> Based on everything I've heard/seen in this thread, we seem to be looking\n> at:\n> \n> 1. Branch on Sept 1st, regardless of almost anything\n> \n> 2. Once Branch created, any *partially implemented* features will get\n> rip'd out of the -STABLE branch and only fixes to the existing, fully\n> implement features will go in\n\nNow, that is an interesting idea.\n\n> 3. Beta1 released once developers comfortable with the state of the code\n> \n> Now, *if*, the week before the Branch, someone submits a bit patch that in\n> *anyway* concerns someone to apply, we can hold it off for a week and put\n> it into the -DEV branch so that its not shelved for a couple of months,\n> and possibly going out of date ... but that would be a judgement call at\n> the time, nothing set in stone ...\n> \n> The only thing we are really \"setting in stone\" here is when we are\n> branching/freezing the code for release ...\n\nOK. I am making these points because the previous betas have been very\ndisorganized, with lots of wasted time. I don't want it to happen\nagain. We can't say we don't understand the issues. It has happened so\nmany times that we are destined to repeat those problems unless we do\nsomething differently. 
Clearly, branch at beta, patch development tree,\ndisable partially implemented features, is a change.  I still think not\ndouble patching for the first few weeks will be a win, though.\n\nRemember, if you fix something in current, you have to generate a patch\nand apply it to the STABLE tree for _anything_ you fix in stable.  And\nalmost any changes in STABLE have to be applied in current.  Also, bug\nreports will have to identify stable or development tree.\n\nI know FreeBSD does it this way, but I don't hold them up as a model of\ngood organization.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jun 2002 15:10:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "On Monday 10 June 2002 02:46 pm, Marc G. Fournier wrote:\n> Based on everything I've heard/seen in this thread, we seem to be looking\n> at:\n\n> 1. Branch on Sept 1st, regardless of almost anything\n\n> 2. Once Branch created, any *partially implemented* features will get\n>    rip'd out of the -STABLE branch and only fixes to the existing, fully\n>    implemented features will go in\n\n> 3. Beta1 released once developers comfortable with the state of the code\n\n> Now, *if*, the week before the Branch, someone submits a big patch that in\n> *any way* concerns someone to apply, we can hold it off for a week and put\n> it into the -DEV branch so that its not shelved for a couple of months,\n> and possibly going out of date ... but that would be a judgement call at\n> the time, nothing set in stone ...\n\n> The only thing we are really \"setting in stone\" here is when we are\n> branching/freezing the code for release ...\n\nThis seems to me to be reasonable. 
My only question would be 'why haven't we \nalways done it this way' but that isn't terribly productive. I actually know \nthe answer to my question, in fact, but that's not relevant to the future.\n\nMany large projects do this, in some form or another. FreeBSD, Debian, even \nthe Linux kernel all follow this basic form.\n\nHistorically we've concentrated our development efforts during beta to 'fixing \nbeta problems only' -- but that model produces these extraordinarily long \ncycles, IMHO. In the meantime people are literally chomping at the bit to do \na new feature -- to the point that one developer got rather upset that his \npatch wasn't being looked at and 'stomped off' in a huff. All because we \nwere in beta-only mode.\n\nHowever, I do think at that point we need to look at what the patch manager \n(historically Bruce) can deal with realistically. Is it a job for two patch \nmanagers, one for the STABLE and one for the DEV? Only Bruce can answer \nwhether he can realistically handle it (I personally have confidence he can).\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 10 Jun 2002 15:54:13 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Historically we've concentrated our development efforts during beta to\n> 'fixing beta problems only' -- but that model produces these\n> extraordinarily long cycles, IMHO. In the meantime people are\n> literally chomping at the bit to do a new feature -- to the point that\n> one developer got rather upset that his patch wasn't being looked at\n> and 'stomped off' in a huff. All because we were in beta-only mode.\n\nThere is a downside to changing away from that approach. 
Bruce\nmentioned it but didn't really give it the prominence I think it\ndeserves: beta mode encourages developers to work on testing, debugging,\nand oh yes documenting. Without that forced \"non development\" time,\nsome folks will just never get around to the mop-up stages of their\nprojects; they'll be off in new-feature-land all the time. I won't name\nnames, but there are more than a couple around here ;-)\n\nI think our develop mode/beta mode pattern has done a great deal to\ncontribute to the stability of our releases. If we go over to the same\napproach that everyone else uses, you can bet your last dollar that our\nreleases will be no better than everyone else's. How many people here\nrun dot-zero releases of the Linux kernel, or gcc? Anyone find them\ntrustworthy? Anyone really eager to have to maintain old releases for\nseveral years, because no sane DBA will touch the latest release?\n\nI'm not trying to sound like Cassandra, but we've done very very well\nwith only limited resources over the past several years. We should not\nbe too eager to mess with a proven-successful approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jun 2002 16:11:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead, " }, { "msg_contents": "On Monday 10 June 2002 04:11 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Historically we've concentrated our development efforts during beta to\n> > 'fixing beta problems only' \n\n> There is a downside to changing away from that approach.\n\nThere are downsides to every approach. The question is 'Which set of \ndownsides are we most comfortable with?'\n\n> Bruce\n> mentioned it but didn't really give it the prominence I think it\n> deserves: beta mode encourages developers to work on testing, debugging,\n> and oh yes documenting. 
Without that forced \"non development\" time,\n> some folks will just never get around to the mop-up stages of their\n> projects; they'll be off in new-feature-land all the time. I won't name\n> names, but there are more than a couple around here ;-)\n\nWell, this is the one downside the Marc's proposal. It boils down to \nself-discipline, though. Unfortunately not everyone is as disciplined as you \nseem to be in the area, Tom. I certainly cannot claim a great deal of \nself-discipline. BTW, that is meant as a compliment to you, Tom.\n\n> I think our develop mode/beta mode pattern has done a great deal to\n> contribute to the stability of our releases. If we go over to the same\n> approach that everyone else uses, you can bet your last dollar that our\n> releases will be no better than everyone else's.\n\nI'll have to agree here -- but I also must remind people that our 'dot zero' \nreleases are typically solid, but our 'dot one' releases have not been so \nsolid. So I wouldn't be too confident in our existing model. \n\nAnd I'm not so sure the model is the producer of our sterling record \nheretofore. I'm more of the mindset that the quality and discipline of the \ndevelopers is the real reason.\n\n> How many people here\n> run dot-zero releases of the Linux kernel, or gcc? Anyone find them\n> trustworthy? Anyone really eager to have to maintain old releases for\n> several years, because no sane DBA will touch the latest release?\n\nWe already have some of that problem due to the difficulty in upgrading. \nPeople wait and see if the features warrant the downtime and pain of \nupgrading. Meantime they live with security holes and bugs in our own \nunmaintained older releases. And dump and restore upgrades are not painless. \nI will admit that I've not used pg_upgrade in some time -- I understand \nmoving from 7.1 to 7.2 is much less painful using pg_upgrade. 
However, \npg_upgrade was released in contrib as being a 'handle with great care' \nutility that no sane DBA is going to touch.... Catch 22.\n\nSo, I don't necessarily agree that we should hold up our development model as \nthe panacea, and I'm not thoroughly convinced that the quality of our \nreleases is related directly to the development model. I believe it is \ndirectly related to the caliber of the developers.\n\nThat said, good developers can produce good quality regardless of the model \nused if they will discipline themselves accordingly.\n\n> I'm not trying to sound like Cassandra, but we've done very very well\n> with only limited resources over the past several years. We should not\n> be too eager to mess with a proven-successful approach.\n\nInteresting reference....\n\nWhy not try this one cycle and see what happens? No one is going to force \nanyone else to develop new features when they want to fix bugs.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 10 Jun 2002 22:10:00 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "On Mon, 10 Jun 2002, Tom Lane wrote:\n\n> There is a downside to changing away from that approach. Bruce\n> mentioned it but didn't really give it the prominence I think it\n> deserves: beta mode encourages developers to work on testing, debugging,\n> and oh yes documenting. Without that forced \"non development\" time,\n> some folks will just never get around to the mop-up stages of their\n> projects; they'll be off in new-feature-land all the time. I won't name\n> names, but there are more than a couple around here ;-)\n\nWell, in alot of ways we have control over this ... we have a very limited\nnumber of committers ... 
start requiring that any patches that come\nthrough, instead of \"just being applied and worry about documentation\nlater\", require the documentation to be included at the same time ...\nwould definitely save a lot of headaches down the road chasing down that\ndocumentation ... I think we've actually done that a few times in the\npast, where we've held up a patch waiting for the documentation, but its\nnever been a real requirement, but I don't think its an unreasonable one\n...\n\n> I think our develop mode/beta mode pattern has done a great deal to\n> contribute to the stability of our releases.  If we go over to the same\n> approach that everyone else uses, you can bet your last dollar that our\n> releases will be no better than everyone else's.  How many people here\n> run dot-zero releases of the Linux kernel, or gcc?  Anyone find them\n> trustworthy?  Anyone really eager to have to maintain old releases for\n> several years, because no sane DBA will touch the latest release?\n\nAgain, we do have a lot of control over this ... the only ppl that we\n*really* have to worry about \"not mopping up\" their code are those with\ncommitters access ... everyone else has to go through us, which means that\nwe can always \"stale\" a patch from a developer due to requirements for bug\nfixes ...\n\n... but, quite honestly, have we ever truly had a problem with this even\nduring development period?  How many *large* OSS projects out there have?\nMy experience(s) with FreeBSD, for an example, are that most developers\ntake pride in their code ... if someone reports a bug, and its\nrecreatable, its generally fixed quite quickly ... its only the \"hard to\nrecreate\" bugs that take a long time to fix ... wasn't that just the case\nwith us with the sequences bug? 
You yourself, if I recall, admitted that\nits always been there, but it obviously wasn't the easiest to\nre-create/trigger, else we would have had more ppl yelling about it ...\nonce someone was able to narrow down the problem and how to re-create it\nconsistently, it was fixed ...\n\nWe've never really run \"a tight ship\" as far as code has gone ... Bruce\nhas been known to helter-skelter apply patches, even a couple that I\nrecall so obviously shouldn't have been that we beat him for it, but that\nhas never prevented us (or even slowed us down) from having *solid*\nreleases ... everyone that I've met so far working on this project, IMHO,\nhave been *passionate* about what they do ... and, in some way or another,\n*rely* on it being rock-solid ...\n\n\n", "msg_date": "Tue, 11 Jun 2002 06:30:39 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "On Mon, 10 Jun 2002, Bruce Momjian wrote:\n\n> > 2. Once Branch created, any *partially implemented* features will get\n> >    rip'd out of the -STABLE branch and only fixes to the existing, fully\n> >    implemented features will go in\n>\n> Now, that is an interesting idea.\n\nYa, I thought it was when you -and- Tom proposed it :)\n\nI quote from a message on June 8th:\n\n=======================\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So, I we should:\n> > \tWarn people in July that beta is September 1 and all features\n> > \thave to be complete by then, or they get ripped out.\n>\n> I thought that was more or less the same thing I was proposing...\n========================\n\n\n\n\n", "msg_date": "Tue, 11 Jun 2002 06:36:34 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Marc G. 
Fournier wrote:\n> On Mon, 10 Jun 2002, Bruce Momjian wrote:\n> \n> > > 2. Once Branch created, any *partially implemented* features will get\n> > > rip'd out of the -STABLE branch and only fixes to the existing, fully\n> > > implement features will go in\n> >\n> > Now, that is an interesting idea.\n> \n> Ya, I thought it was when you -and- Tom proposed it :)\n> \n> I quote from a message on June 8th:\n> \n> =======================\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > So, I we should:\n> > > Warn people in July that beta is September 1 and all features\n> > > have to be complete by then, or they get ripped out.\n> >\n> > I thought that was more or less the same thing I was proposing...\n> ========================\n\nWhat I thought was interesting was having the CURRENT branch keep the\nfeature so the guy could continue development, even if we disable in\nSTABLE.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jun 2002 05:56:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," }, { "msg_contents": "Lamar Owen wrote:\n> On Monday 10 June 2002 04:11 pm, Tom Lane wrote:\n> > I think our develop mode/beta mode pattern has done a great deal to\n> > contribute to the stability of our releases. If we go over to the same\n> > approach that everyone else uses, you can bet your last dollar that our\n> > releases will be no better than everyone else's.\n>\n> I'll have to agree here -- but I also must remind people that our 'dot zero'\n> releases are typically solid, but our 'dot one' releases have not been so\n> solid. 
So I wouldn't be too confident in our existing model.\n\n If that's a pattern, then we should discourage people from\n using odd dot-releases.\n\n My opinion? With each release we ship improvements and new\n functionality people have long waited for. Think about\n vacuum, toast, referential integrity. People need those\n things and have great confidence in our releases. The\n willingness to upgrade their production systems to dot zero\n releases is the biggest compliment users can make.\n\n Everything that endangers that quality is bad(tm). Our\n develop/beta mode pattern keeps people from diving into the\n next bigger thing, distracting them from the current beta or\n release candidate. I don't think that would do us a really\n good job.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Tue, 11 Jun 2002 09:06:04 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Project scheduling issues (was Re: Per tuple overhead," } ]
[ { "msg_contents": "\tI am a bit new to postgresql. I have used it in a few applications, however, \nin the current application we are beginning to see a need for replicated \ndatabases. I am trying to find out more information about how to do automatic \nreplication with postgresql. \n\tI have gone through the basic rserv tutorial, and I have a basic \nunderstanding of how it works. However, I have noticed there are a number of \nreplication projects currently underway. \n\n\tMy questions are :\n\n\t1) Is this the correct forum to ask this question?\n\t2) What is the \"recommended\" replication solution project?\n\t3) How are others doing replication? \n\t\n\n\n\tThanks,\n\n\tDoug\n\n\t\n", "msg_date": "Thu, 2 May 2002 14:10:56 -0400", "msg_from": "Doug Needham <dneedham@pbo.net>", "msg_from_op": true, "msg_subject": "replication questions" }, { "msg_contents": "\n>databases. I am trying to find out more information about how to do automatic \n>replication with postgresql. \n>\n\nWe did some research on this several months ago, and published the \nresults here\n\nhttp://gborg.postgresql.org/genpage?replication_research\n\n>\n>\n>\n>\tMy questions are :\n>\n>\t1) Is this the correct forum to ask this question?\n>\nThis is probably better suited for general, but it's a topic I'm \ninterested in.\n\n>\n>\t2) What is the \"recommended\" replication solution project?\n>\nIMHO, Postgres-R has the potential to be a great solution, but I'm sure \nothers have different\nneeds. ;-)\n\n\n>\n>\t3) How are others doing replication? \n>\nIf you're asking for approaches, I'd say the majority are master/slave \nasynchronous\nwith either triggers or transaction logs. \n\n\nI would ask what you are trying to solve with replication. Do you want \nto be able to\nupdate all systems in the replica? How much bandwidth do you have \nbetween the\nservers? Do the systems need to be identical at all times? 
What is \nyour time frame?\n\nDepending on how you answer questions like these, your \"recommendation\" \nwill change.\n\nGood luck,\n\nDarren\n\n\n\n\n", "msg_date": "Thu, 02 May 2002 19:54:11 -0400", "msg_from": "Darren Johnson <darren@up.hrcoxmail.com>", "msg_from_op": false, "msg_subject": "Re: replication questions" }, { "msg_contents": "On Thursday 02 May 2002 07:54 pm, Darren Johnson wrote:\n> >databases. I am trying to find out more information about how to do\n> > automatic replication with postgresql.\n>\n> We did some research on this several months ago, and published the\n> results here\n>\n> http://gborg.postgresql.org/genpage?replication_research\n>\n> >\tMy questions are :\n> >\n> >\t1) Is this the correct forum to ask this question?\n>\n> This is probably better suited for general, but it's a topic I'm\n> interested in.\n>\n> >\t2) What is the \"recommended\" replication solution project?\n>\n> IMHO, Postgres-R has the potential to be a great solution, but I'm sure\n> others have different\n> needs. ;-)\n>\n> >\t3) How are others doing replication?\n>\n> If you're asking for approaches, I'd say the majority are master/slave\n> asynchronous\n> with either triggers or transaction logs.\n>\n>\n> I would ask what you are trying to solve with replication. Do you want\n> to be able to\n> update all systems in the replica? \nyes.\n How much bandwidth do you have\n> between the\n> servers? \nWe will have plenty. \n> Do the systems need to be identical at all times? \nPretty much. \n>What is\n> your time frame?\nPossibly as soon as six-nine months. \n\nOur replication solution is shooting for the moon. \nThe scenario pitched to me (I'm the DBA for this application) is potentially \nhaving multiple web applications accessing multiple databases and having \nreplication keep them all in sync. 
\nI have to decide if in this scenario we have the multiple front ends connect \nto a single master database and then have replication move the data to all of \nthe backend databases and then notify the backend processes or to have a \ndifferent scenario. \n\nThe specific scenario that marketing/management has given me is to have two \ninstances of the same front-end located in different parts of the country, \neach updating a local database and the database ensures that the updates from \nboth locations get to the \"peer-database\". So an end user in location A is able \nto see all updates done to database A and database B for the data that they \nhave permission for. \n\nI know this sounds confusing; it is. I have been doing DBA things for quite a \nwhile, but I am a little new to the replication thing. \n\n\nThanks, \n\nDoug\n \n>\n> Depending on how you answer questions like these, your \"recommendation\"\n> will change.\n>\n> Good luck,\n>\n> Darren\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Fri, 3 May 2002 00:03:21 -0400", "msg_from": "Doug Needham <dneedham@pbo.net>", "msg_from_op": true, "msg_subject": "Re: replication questions" } ]
[ { "msg_contents": "Dear Team,\n\nThis sounds good to me. Especially the comment about software patents.\nSoftware source code can be written with different variable names and\nslightly different coding styles. But when a specific function needs\nimplementation, often there is only one logical approach. If that approach\nis patented, then it becomes \"un-usable\" by most programmers. It is\nessentially by nature the same thing as allowing artists to patent certain\nshades of color, because they have used it first in a famous painting. The\nidea strikes me as completely ludicrous.\n\nHowever, I am all for giving credit where credit is due. I like to see\ncopyright notices and references to the GPL in Linux oriented code. It\ngives me a better \"feel\" for those \"upon whose shoulders I stand\".\n\nArthur\n\n----- Original Message -----\nFrom: \"mlw\" <markw@mohawksoft.com>\nTo: <jm.poure@freesurf.fr>\nCc: \"David Terrell\" <dbt@meat.net>; \"PostgreSQL-development\"\n<pgsql-hackers@postgresql.org>\nSent: Thursday, May 02, 2002 8:44 AM\nSubject: Re: [HACKERS] PostgreSQL mission statement?\n\n\n> Jean-Michel POURE wrote:\n> >\n> > The PostgreSQL community is committed to creating and maintaining the\nbest,\n> > most reliable, open-source multi-purpose standards based database, and\nwith\n> > it, promote free(dom) and open source software world wide.\n> >\n> > I hope you don't mind writing \"free(dom)\" with the idea of fighting\npatent\n> > abuses.\n>\n> No, the mission statement is about what the postgresql group, as a whole,\nis\n> all about.\n>\n> I know it seems silly to have such a thing, but really, the more I read on\nthis\n> discussion, the more it seems like it is a useful \"call to arms\" for\ndevelopers\n> and users alike.\n>\n> Now, I do not wish to have a manifesto, but a short and sweet \"this is who\nwe\n> are, and this is what we do\" could be a positive thing.\n>\n> P.S. I think every software engineer worth anything should fight software\n> patents. 
If Donald Knuth didn't patent his algorithms, practically none of\nus\n> deserve patents. I mean seriously, most of the software patents are\ntrivial and\n> obvious. Knuth did something, most of us only build on his work.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 2 May 2002 13:04:34 -0700", "msg_from": "\"Arthur@LinkLine.com\" <arthur@linkline.com>", "msg_from_op": true, "msg_subject": "Fw: PostgreSQL mission statement?" } ]
[ { "msg_contents": "In current sources:\n\nregression=# select '60'::interval;\n interval\n----------\n 00:01\n(1 row)\n\nregression=# select '1.5'::interval;\n interval\n-------------\n 00:00:01.50\n(1 row)\n\nThat is, '60' is read as so many hours, '1.5' is read as so many\nseconds. This seems a tad inconsistent.\n\n7.2 does the same thing, 7.1 says\nERROR: Bad interval external representation '60'\nbut takes '1.5' as meaning 1.5 seconds.\n\nI'd prefer to standardize on a unit of seconds myself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 17:35:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Consistency problem with unlabeled intervals" }, { "msg_contents": "...\n> That is, '60' is read as so many hours, '1.5' is read as so many\n> seconds. This seems a tad inconsistent.\n\nThey fulfill two separate use cases. Time zones can now be specified as\nintervals, and the default unit must be hours. A number with a decimal\npoint is usually in units of seconds, and matches past behavior afaik.\n\nThe current behavior makes a choice that likely breaks \"expected\nbehavior\" if it were changed. Not to mention dealing with the upgrade\nissues if the conventions were changed.\n\nI don't have my heels dug in on this, but this example doesn't cover the\nrange of cases the behavior was designed to handle. I'll go back and\nresearch it if turns out to be required.\n\n - Thomas\n", "msg_date": "Thu, 02 May 2002 17:50:17 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Consistency problem with unlabeled intervals" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> That is, '60' is read as so many hours, '1.5' is read as so many\n>> seconds. This seems a tad inconsistent.\n\n> They fulfill two separate use cases. Time zones can now be specified as\n> intervals, and the default unit must be hours. 
A number with a decimal\n> point is usually in units of seconds, and matches past behavior afaik.\n\nHm. Well, if this behavior is intentional, it'd be nice to document it.\nThe existing paragraphs about interval's I/O format don't mention\nbehavior for unitless numbers at all, much less explain that a decimal\npoint has semantic significance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 22:01:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Consistency problem with unlabeled intervals " } ]
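The convention described in the exchange above (a bare whole number defaults to hours, for the time-zone use case, while a bare number containing a decimal point is read as seconds) can be captured as a one-line predicate. This is only a sketch of the stated rule, not the actual PostgreSQL interval parser, and the function name is invented for illustration:

```c
#include <assert.h>
#include <string.h>

/* Implied unit for an unlabeled numeric interval field, following the
 * convention stated in the thread: '60' -> hours, '1.5' -> seconds. */
const char *implied_interval_unit(const char *field)
{
    /* A decimal point marks the value as seconds; otherwise hours. */
    return strchr(field, '.') != NULL ? "seconds" : "hours";
}
```

A real implementation would also have to validate the numeric syntax; the point here is only that the presence of a decimal point is what selects the unit.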
[ { "msg_contents": "Hi\n\nThe info about DLM is from some Compaq and Oracle\nemployees whom I know from Digital days.\n\nWhile it is not confidential information \n(the DLM is not being marketed as such) the\nSW engineers have lifted the code from VMS --> \nCompaqTRU64 (was Digital Unix on Alpha) -->\nRDB Port to DUNIX.\nSubsequently Oracle needed the DLM to keep RDB going,\nsuddenly realised that the DLM could be *really* useful :-)\nand sub-licensed the code from Compaq.\n\nWhat both Compaq and Oracle are marketing is clustering\nwith near linear scalability.\nNB: If you use Oracle 9i RAC on non-Compaq environments\nthe Oracle data must be on RAW partitions to get proper\nperformance. If on Compaq you can use the normal (AFC) file\nsystem.\n\nSun also have a DLM-like clustering capability, but not \nas mature.\n\nThe BAD news - DLM is closed/proprietary. It is not being\nlicensed as such. It is part of Tru64 and Oracle.\n\nCompaq to donate the DLM to GNU/FSF?\n - I'd like to see that ;-)\n\nMy guess is if the PostgreSQL project wanted to implement \nparallel homogeneous clustering the project would need to \nhave the following:\n\n1. Design and implement an OpenDLM \n (possibly as a new open source project)\n My guess, assuming input from ex Digits, DECUS and other \n VMS people, 10-20 man years over 12-18 months\n\n2. Linux/FreeBSD incorporate OpenDLM into file system\n\n3. PostgreSQL use OpenDLM to manage all DB locking\n\nWell! There's a macro road map if I've ever seen one!\n \nRegards, Kym Farnik (mailto:kym@recalldesign.com)\n-- Recall Design http://www.recalldesign.com\n53 Gilbert Street, Adelaide, South Australia 5000 \nDirect: (61-8) 8217 0556\nFax: (61-8) 8217 0555 \nMobile: 0438 014 007\n\nKeith wrote:\n> I've waited, well, more than a decade for someone to really start using\n> the DLM, or something like it, in a really useful way again. Oddly, at\n> the time VAXclusters flourished, database vendors (except for DEC;\n> remember RDB?) 
seemed to go around the DLM and roll their own locking.\n> \n> A standard like this could be really useful in open source databases.\n> \n> * Is there a link that talks about the DLM being used in the RACs? \n> Oracle's documentation doesn't seem to mention the DLM.\n> \n> * Is there anything open about the DLM spec or code? I.e., could it\n> actually be used in a project like PostgreSQL?\n> \n> * Is there any open source equivalent?\n> \n> * Have you looked at any PostgreSQL code and thought about how the DLM\n> might be incorporated? (I'm not even really much of a C coder, but the\n> idea is exciting to me).\n> \n> I suspect a lot of folks on the list have no idea how useful something\n> like the DLM could be.\n>", "msg_date": "Fri, 3 May 2002 09:08:08 +0930", "msg_from": "\"Kym Farnik\" <kym@recalldesign.com>", "msg_from_op": true, "msg_subject": "DLM Oracle/Compaq/OpenVMS" }, { "msg_contents": "(redirected to -hackers)\n\n> > * Is there anything open about the DLM spec or code? I.e., could it\n> > actually be used in a project like PostgreSQL?\n\nThe code is not open (despite the OpenDLM name :/\n\n> > * Is there any open source equivalent?\n\nhttp://oss.software.ibm.com/dlm/\n\n> > * Have you looked at any PostgreSQL code and thought about how the DLM\n> > might be incorporated? (I'm not even really much of a C coder, but the\n> > idea is exciting to me).\n\nYes. But it wasn't worth thinking about too hard because afaict there\nwere no reasonable distributed lock managers available.\n\nThe gborg-based replication project probably needs a distributed lock\nmanager. Not sure what it uses currently, and it looks to me that IBM's\ncontribution stalled out for the last few months. 
There is a little\nactivity on their mailing list but they missed an \"any day now\" release\nback in October or November...\n\n - Thomas\n", "msg_date": "Thu, 02 May 2002 18:06:42 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DLM Oracle/Compaq/OpenVMS" } ]
[ { "msg_contents": "Mission Statement:\n\"To KICK Gluteus Maximus!\"\n\n\n", "msg_date": "Thu, 2 May 2002 19:32:35 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL mission statement?" }, { "msg_contents": "Lol\nThis gets my vote ;-)\n\nDali\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org \n> [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Dann Corbit\n> Sent: Friday, 3 May 2002 14:33\n> To: PostgreSQL-development\n> Subject: Re: [HACKERS] PostgreSQL mission statement?\n> \n> \n> Mission Statement:\n> \"To KICK Gluteus Maximus!\"\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Fri, 3 May 2002 15:06:41 +1200", "msg_from": "\"Dalibor Andzakovic\" <dali@dali.net.nz>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL mission statement?" } ]
[ { "msg_contents": "Boban Acimovic was kind enough to give me access to a Solaris 8 system\nto track down a reproducible server crash. What I find is that strxfrm\nis buggy on that system. Given locale is_IS.ISO8859-1, the call\n\n\tstrxfrm(<ptr>, \"pg_amop_opc_strategy_index\", 58)\n\nwas observed to scribble on 108 bytes of memory at <ptr>, not the 58\nthat it was allowed to. This naturally led to death and destruction\nupon next use of the adjacent data structures.\n\nI don't know yet whether this is a known/repaired problem, or whether\nit occurs in any locales besides Icelandic. But I thought I'd give\nthe list a heads-up. If anyone recognizes this bug, more info would\nbe appreciated.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 May 2002 22:56:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Solaris + locale bug identified" } ]
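The overrun described above is exactly the failure mode the standard two-call strxfrm idiom defends against: the return value of strxfrm is the length the transformed string needs, independent of the buffer passed in. A minimal defensive sketch (this is the generic C idiom, not the PostgreSQL code, and the helper name is invented):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Transform `src` for collation-aware comparison, sizing the buffer
 * from strxfrm's return value instead of guessing.  Returns a
 * malloc'd string the caller must free, or NULL on failure. */
char *xfrm_alloc(const char *src)
{
    /* First call: NULL/0 is permitted; the return value is the
     * required length, not counting the terminating NUL. */
    size_t needed = strxfrm(NULL, src, 0);
    char *buf = malloc(needed + 1);

    if (buf == NULL)
        return NULL;
    /* Second call: a conforming library never writes more than the
     * size we pass.  If the return value exceeds `needed`, the
     * library disagreed with itself (the Solaris behavior reported
     * above), so give up rather than trust the buffer contents. */
    if (strxfrm(buf, src, needed + 1) > needed)
    {
        free(buf);
        return NULL;
    }
    return buf;
}
```

A caller worried about a buggy libc, as in the Solaris report above, can additionally over-allocate some slack and re-check the second return value before using the result.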
[ { "msg_contents": "Hi,\n\nIs there any rhyme or reason to these ISO format date parsing rules?\n\ntest=# select '1-1-1'::date;\nERROR: Bad date external representation '1-1-1'\ntest=# select '69-1-1'::date;\n date\n------------\n 2069-01-01\n(1 row)\n\ntest=# select '50-1-1'::date;\n date\n------------\n 2050-01-01\n(1 row)\n\ntest=# select '40-1-1'::date;\n date\n------------\n 2040-01-01\n(1 row)\n\ntest=# select '30-1-1'::date;\nERROR: Bad date external representation '30-1-1'\ntest=# select '100-1-1'::date;\nERROR: Bad date external representation '100-1-1'\ntest=# select '999-1-1'::date;\nERROR: Bad date external representation '999-1-1'\ntest=# select '1000-1-1'::date;\n date\n------------\n 1000-01-01\n(1 row)\n\nWhy can't someone store the year without having to pad with zeros for years\nbetween 100 and 999?\n\nWhat's wrong with 30-1-1 and below? Why does 40 work and not 30?\n\nChris\n\n", "msg_date": "Fri, 3 May 2002 15:17:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "3 digit year problem" }, { "msg_contents": "> Is there any rhyme or reason to these ISO format date parsing rules?\n\nYes. Though adjustments to the rules are possible, so things are not set\nin concrete. There *should* be a complete description of the date/time\nparsing rules in the User's Guide appendix.\n\n> Why can't someone store the year without having to pad with zeros for years\n> between 100 and 999?\n\nTo help distinguish between day numbers and years. We used to allow more\nvariations in the length of a year field, but have tightened it up a bit\nover the years.\n\n> What's wrong with 30-1-1 and below? Why does 40 work and not 30?\n\nBecause \"30\" *could* be a day. \"40\" can only be something intended to be\na year. 
And input is not enforced to be strictly ISO-compliant, so\n\"30-1-1\" *could* be interpreted multiple ways.\n\n - Thomas\n", "msg_date": "Fri, 03 May 2002 07:07:29 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: 3 digit year problem" } ]
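The disambiguation rule Lockhart sketches (a short leading number is accepted as a year only when it could not be a day, and three-digit years are rejected outright) can be written down as a standalone heuristic. This is an illustration of the behavior shown in the examples above, not the actual datetime.c logic, and the function name is invented:

```c
#include <assert.h>

/* Can this leading numeric field of an ISO-style date be a year?
 * Mirrors the behavior in the examples above: values that could be a
 * day of the month (1..31) are rejected, two-digit values above 31
 * are accepted as abbreviated years, three-digit values are rejected
 * as ambiguous, and four or more digits are always a year. */
int leading_field_is_year(int value, int ndigits)
{
    if (ndigits >= 4)
        return 1;           /* '1000-1-1' and later: unambiguous */
    if (ndigits == 3)
        return 0;           /* '100-1-1' .. '999-1-1': rejected */
    return value > 31;      /* '40' cannot be a day; '30' could be */
}
```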
[ { "msg_contents": "Tom Lane wrote in another thread:\n> PS: I did like your point about BITMAPLEN; I think that might be\n> a free savings. I was waiting for you to bring it up on hackers\n> before commenting though...\nSo here we go...\n\nHi,\n\nin htup.h MinHeapTupleBitmapSize is defined to be 32, i.e. the bitmap\nuses at least so many bits, if the tuple has at least one null\nattribute. The bitmap starts at offset 31 in the tuple header. The\nmacro BITMAPLEN calculates, for a given number of attributes NATTS,\nthe length of the bitmap in bytes. BITMAPLEN is the smallest number n\ndivisible by 4, such that 8*n >= NATTS.\n\nThe size of the tuple header is rounded up to a multiple of 4 (on a\ntypical(?) architecture) by MAXALIGN(...). So we get:\n\nNATTS BITMAPLEN THSIZE\n 8 4 36\n 16 4 36\n 33 8 40\n\nI don't quite understand the definition of BITMAPLEN:\n\n#define BITMAPLEN(NATTS) \\\n ((((((int)(NATTS) - 1) >> 3) + 4 - (MinHeapTupleBitmapSize >> 3)) \\\n & ~03) + (MinHeapTupleBitmapSize >> 3))\n\nAFAICS only for MinHeapTupleBitmapSize == 32 do we get a meaningful\nresult, namely a multiple of MinHeapTupleBitmapSize, converted from a\nnumber of bits to a number of bytes. 
If this is true, we don't need\nthe \"+ 4 - (MinHeapTupleBitmapSize >> 3)\" and the definition could be\nsimplified to\n\t(((((int)(NATTS) - 1) >> 3) & ~03) + 4)\n\nSome examples, writing MBMB for (MinHeapTupleBitmapSize >> 3):\n\nMBMB = 4:\n((((NATTS - 1) >> 3) + 4 - MBMB) & ~03) + MBMB\n 32 31 3 7 3 0 4\n 33 32 4 8 4 4 8\n 64 63 7 11 7 4 8\n 65 64 8 12 8 8 12\n\nMBMB = 1:\n((((NATTS - 1) >> 3) + 4 - MBMB) & ~03) + MBMB\n 8 7 0 4 3 0 1\n 9 8 1 5 4 4 5\n 32 31 3 7 6 4 5\n 33 32 4 8 7 4 5\n 56 55 6 10 9 8 9\n 64 63 7 11 10 8 9\n 65 64 8 12 11 8 9\n\nMBMB = 8:\n((((NATTS - 1) >> 3) + 4 - MBMB) & ~03) + MBMB\n 8 7 0 4 -4 -4 4\n 9 8 1 5 -3 -4 4\n 32 31 3 7 -1 -4 4\n 33 32 4 8 0 0 8\n 56 55 6 10 2 0 8\n 64 63 7 11 3 0 8\n 65 64 8 12 4 4 12\n\nProposal 1:\n#define BitMapBytes 4 // or any other power of 2\n#define MinHeapTupleBitmapSize (BitMapBytes * 8)\n#define BITMAPLEN(NATTS) \\\n (((((int)(NATTS) - 1) >> 3) & ~(BitMapBytes - 1)) + BitMapBytes)\n\nProposal 2: Let BITMAPLEN calculate the minimum number of bytes\nnecessary to have one bit for every attribute.\n\n#define BitMapBytes 1\n\n old old new new\nNATTS BITMAPLEN THSIZE BITMAPLEN THSIZE\n 8 4 36 1 32\n 16 4 36 2 36\n 33 8 40 5 36\n\nThis looks so simple. Is there something wrong with it?\nDoes it need further discussion or should I submit a patch?\n\nServus\n Manfred\n", "msg_date": "Fri, 03 May 2002 09:37:25 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Trying to reduce per tuple overhead (bitmap)" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Proposal 2: Let BITMAPLEN calculate the minimum number of bytes\n> necessary to have one bit for every attribute.\n\n> #define BitMapBytes 1\n\n> old old new new\n> NATTS BITMAPLEN THSIZE BITMAPLEN THSIZE\n> 8 4 36 1 32\n> 16 4 36 2 36\n> 33 8 40 5 36\n\n> This looks so simple. Is there something wrong with it?\n\nOffhand I cannot see a reason not to change this. 
There's no reason for\nBITMAPLEN() to be padding the bitmap length --- every caller does its\nown MAXALIGN() of the total header length, which is what we actually\nneed. I suspect that BITMAPLEN's behavior is leftover from a time when\nthose MAXALIGN's weren't there. But (a) this would only be helpful if\nthe t_bits field started on a word boundary, which it doesn't and hasn't\nfor a very long time; and (b) padding to a multiple of 4 is wrong\nanyway, since there are machines where MAXALIGN is 8.\n\nSince the data offset is stored in t_hoff and not recalculated on the\nfly, I think that fixing BITMAPLEN is completely free from a storage\ncompatibility point of view; you wouldn't even need initdb. If some\ntuples have the excess padding and some do not, everything will still\nwork.\n\nIn short, looks good to me. Please try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 09:54:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trying to reduce per tuple overhead (bitmap) " }, { "msg_contents": "Manfred Koizar wrote:\n> Tom Lane wrote in another tread:\n> > PS: I did like your point about BITMAPLEN; I think that might be\n> > a free savings. I was waiting for you to bring it up on hackers\n> > before commenting though...\n> So here we go...\n> \n> Hi,\n> \n> in htup.h MinHeapTupleBitmapSize is defined to be 32, i.e. the bitmap\n> uses at least so many bits, if the tuple has at least one null\n> attribute. The bitmap starts at offset 31 in the tuple header. The\n> macro BITMAPLEN calculates, for a given number of attributes NATTS,\n> the length of the bitmap in bytes. BITMAPLEN is the smallest number n\n> divisible by 4, so that 8*n >= NATTS.\n> \n> The size of the tuple header is rounded up to a multiple of 4 (on a\n> typical(?) architecture) by MAXALIGN(...). 
So we get:\n> \n> NATTS BITMAPLEN THSIZE\n> 8 4 36\n> 16 4 36\n> 33 8 40\n> \n> I don't quite understand the definition of BITMAPLEN:\n> \n> #define BITMAPLEN(NATTS) \\\n> ((((((int)(NATTS) - 1) >> 3) + 4 - (MinHeapTupleBitmapSize >> 3)) \\\n> & ~03) + (MinHeapTupleBitmapSize >> 3))\n\nThanks for improving this. I had to look at this macro recently and it\nwas quite confusing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 18:08:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trying to reduce per tuple overhead (bitmap)" } ]
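The macro algebra from the thread is easy to verify in isolation. The sketch below is not PostgreSQL source (the names BITMAPLEN_OLD/P1/P2 and the helper function are invented); it pins MinHeapTupleBitmapSize at 32, as in htup.h at the time, and compares the historical macro against both of Manfred's proposals:

```c
#include <assert.h>

#define MinHeapTupleBitmapSize 32   /* value in htup.h at the time */

/* Historical definition, as quoted in the thread */
#define BITMAPLEN_OLD(NATTS) \
    ((((((int)(NATTS) - 1) >> 3) + 4 - (MinHeapTupleBitmapSize >> 3)) \
      & ~03) + (MinHeapTupleBitmapSize >> 3))

/* Proposal 1: same results, simpler expression */
#define BITMAPLEN_P1(NATTS) \
    (((((int)(NATTS) - 1) >> 3) & ~03) + 4)

/* Proposal 2: minimum bytes holding one bit per attribute */
#define BITMAPLEN_P2(NATTS) \
    ((((int)(NATTS) - 1) >> 3) + 1)

/* Returns 1 if proposal 1 agrees with the historical macro for
 * every attribute count from 1 to maxatts. */
int bitmaplen_proposals_agree(int maxatts)
{
    int natts;

    for (natts = 1; natts <= maxatts; natts++)
        if (BITMAPLEN_OLD(natts) != BITMAPLEN_P1(natts))
            return 0;
    return 1;
}
```

With MinHeapTupleBitmapSize >> 3 equal to 4, the "+ 4 - 4" term in the historical macro cancels, so proposal 1 is value-for-value identical to it, while proposal 2 yields the smaller bitmap sizes in Manfred's table (1, 2, and 5 bytes for 8, 16, and 33 attributes).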
[ { "msg_contents": "Using this configuration:\n./configure --enable-locale --enable-recode --enable-multibyte\n--enable-nls --with-pgport=9631 --with-CXX --with-perl --with-python\n--with-tcl --enable-odbc--with-unixodbc --with-openssl --with-pam\n--enable-syslog --enable-debug --enable-cassert --enable-depend \n--with-tkconfig=/usr/lib/tk8.3 --with-tclconfig=/usr/lib/tcl8.3\n--with-includes=/usr/include/tcl8.3\n\ncurrent cvs would not compile. I found it necessary to make the\nfollowing corrections:\n\nIndex: src/backend/utils/init/miscinit.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/utils/init/miscinit.c,v\nretrieving revision 1.87\ndiff -c -r1.87 miscinit.c\n*** src/backend/utils/init/miscinit.c\t2002/04/27 21:24:34\t1.87\n--- src/backend/utils/init/miscinit.c\t2002/05/03 05:15:14\n***************\n*** 39,44 ****\n--- 39,45 ----\n #ifdef CYR_RECODE\n unsigned char RecodeForwTable[128];\n unsigned char RecodeBackTable[128];\n+ static void GetCharSetByHost(char *TableName, int host, const char *DataDir);\n #endif\n \n ProcessingMode Mode = InitProcessing;\n***************\n*** 236,249 ****\n \n #ifdef CYR_RECODE\n \n! SetCharSet(void)\n {\n \tFILE\t *file;\n \tchar\t *filename;\n \tchar\t *map_file;\n \tchar\t\tbuf[MAX_TOKEN];\n! \tint\t\t\ti,\n! \t\t\t\tc;\n \tunsigned char FromChar,\n \t\t\t\tToChar;\n \tchar\t\tChTable[MAX_TOKEN];\n--- 237,249 ----\n \n #ifdef CYR_RECODE\n \n! void SetCharSet(void)\n {\n \tFILE\t *file;\n \tchar\t *filename;\n \tchar\t *map_file;\n \tchar\t\tbuf[MAX_TOKEN];\n! \tint\t\t\ti;\n \tunsigned char FromChar,\n \t\t\t\tToChar;\n \tchar\t\tChTable[MAX_TOKEN];\n***************\n*** 289,295 ****\n \t\t\t\t\twhile (!feof(file) && buf[0])\n \t\t\t\t\t{\n \t\t\t\t\t\tnext_token(file, buf, sizeof(buf));\n! 
\t\t\t\t\t\telog(LOG, \"SetCharSet: unknown tag %s in file %s\"\n \t\t\t\t\t\t\tbuf, filename);\n \t\t\t\t\t}\n \t\t\t\t}\n--- 289,295 ----\n \t\t\t\t\twhile (!feof(file) && buf[0])\n \t\t\t\t\t{\n \t\t\t\t\t\tnext_token(file, buf, sizeof(buf));\n! \t\t\t\t\t\telog(LOG, \"SetCharSet: unknown tag %s in file %s\",\n \t\t\t\t\t\t\tbuf, filename);\n \t\t\t\t\t}\n \t\t\t\t}\n***************\n*** 445,451 ****\n \t\t\telse if (strcasecmp(buf, \"RecodeTable\") == 0)\n \t\t\t\tkey = KEY_TABLE;\n \t\t\telse\n! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\"\n \t\t\t\t\tbuf, CHARSET_FILE);\n \n \t\t\tswitch (key)\n--- 445,451 ----\n \t\t\telse if (strcasecmp(buf, \"RecodeTable\") == 0)\n \t\t\t\tkey = KEY_TABLE;\n \t\t\telse\n! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\",\n \t\t\t\t\tbuf, CHARSET_FILE);\n \n \t\t\tswitch (key)\n***************\n*** 501,507 ****\n \t\t\twhile (!feof(file) && buf[0])\n \t\t\t{\n \t\t\t\tnext_token(file, buf, sizeof(buf));\n! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\"\n \t\t\t\t\tbuf, CHARSET_FILE);\n \t\t\t}\n \t\t}\n--- 501,507 ----\n \t\t\twhile (!feof(file) && buf[0])\n \t\t\t{\n \t\t\t\tnext_token(file, buf, sizeof(buf));\n! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\",\n \t\t\t\t\tbuf, CHARSET_FILE);\n \t\t\t}\n \t\t}\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Rejoice with them that do rejoice, and weep with them \n that weep.\" Romans 12:15", "msg_date": "03 May 2002 09:36:24 +0100", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "Compilation failed when --with-recode specified (patch)" }, { "msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> current cvs would not compile. I found it necessary to make the\n> following corrections:\n\nA little software rot setting in there :-(. 
My compiler complained\nabout even more stuff than yours did. Patch applied.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 16:44:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compilation failed when --with-recode specified (patch) " }, { "msg_contents": "\nGlad you are testing recode because I changed its token handling to use\nthe new unified token code used by pg_hba.conf and pg_ident.conf.\n\n---------------------------------------------------------------------------\n\nOliver Elphick wrote:\n-- Start of PGP signed section.\n> Using this configuration:\n> ./configure --enable-locale --enable-recode --enable-multibyte\n> --enable-nls --with-pgport=9631 --with-CXX --with-perl --with-python\n> --with-tcl --enable-odbc--with-unixodbc --with-openssl --with-pam\n> --enable-syslog --enable-debug --enable-cassert --enable-depend \n> --with-tkconfig=/usr/lib/tk8.3 --with-tclconfig=/usr/lib/tcl8.3\n> --with-includes=/usr/include/tcl8.3\n> \n> current cvs would not compile. I found it necessary to make the\n> following corrections:\n> \n> Index: src/backend/utils/init/miscinit.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/backend/utils/init/miscinit.c,v\n> retrieving revision 1.87\n> diff -c -r1.87 miscinit.c\n> *** src/backend/utils/init/miscinit.c\t2002/04/27 21:24:34\t1.87\n> --- src/backend/utils/init/miscinit.c\t2002/05/03 05:15:14\n> ***************\n> *** 39,44 ****\n> --- 39,45 ----\n> #ifdef CYR_RECODE\n> unsigned char RecodeForwTable[128];\n> unsigned char RecodeBackTable[128];\n> + static void GetCharSetByHost(char *TableName, int host, const char *DataDir);\n> #endif\n> \n> ProcessingMode Mode = InitProcessing;\n> ***************\n> *** 236,249 ****\n> \n> #ifdef CYR_RECODE\n> \n> ! SetCharSet(void)\n> {\n> \tFILE\t *file;\n> \tchar\t *filename;\n> \tchar\t *map_file;\n> \tchar\t\tbuf[MAX_TOKEN];\n> ! \tint\t\t\ti,\n> ! 
\t\t\t\tc;\n> \tunsigned char FromChar,\n> \t\t\t\tToChar;\n> \tchar\t\tChTable[MAX_TOKEN];\n> --- 237,249 ----\n> \n> #ifdef CYR_RECODE\n> \n> ! void SetCharSet(void)\n> {\n> \tFILE\t *file;\n> \tchar\t *filename;\n> \tchar\t *map_file;\n> \tchar\t\tbuf[MAX_TOKEN];\n> ! \tint\t\t\ti;\n> \tunsigned char FromChar,\n> \t\t\t\tToChar;\n> \tchar\t\tChTable[MAX_TOKEN];\n> ***************\n> *** 289,295 ****\n> \t\t\t\t\twhile (!feof(file) && buf[0])\n> \t\t\t\t\t{\n> \t\t\t\t\t\tnext_token(file, buf, sizeof(buf));\n> ! \t\t\t\t\t\telog(LOG, \"SetCharSet: unknown tag %s in file %s\"\n> \t\t\t\t\t\t\tbuf, filename);\n> \t\t\t\t\t}\n> \t\t\t\t}\n> --- 289,295 ----\n> \t\t\t\t\twhile (!feof(file) && buf[0])\n> \t\t\t\t\t{\n> \t\t\t\t\t\tnext_token(file, buf, sizeof(buf));\n> ! \t\t\t\t\t\telog(LOG, \"SetCharSet: unknown tag %s in file %s\",\n> \t\t\t\t\t\t\tbuf, filename);\n> \t\t\t\t\t}\n> \t\t\t\t}\n> ***************\n> *** 445,451 ****\n> \t\t\telse if (strcasecmp(buf, \"RecodeTable\") == 0)\n> \t\t\t\tkey = KEY_TABLE;\n> \t\t\telse\n> ! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\"\n> \t\t\t\t\tbuf, CHARSET_FILE);\n> \n> \t\t\tswitch (key)\n> --- 445,451 ----\n> \t\t\telse if (strcasecmp(buf, \"RecodeTable\") == 0)\n> \t\t\t\tkey = KEY_TABLE;\n> \t\t\telse\n> ! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\",\n> \t\t\t\t\tbuf, CHARSET_FILE);\n> \n> \t\t\tswitch (key)\n> ***************\n> *** 501,507 ****\n> \t\t\twhile (!feof(file) && buf[0])\n> \t\t\t{\n> \t\t\t\tnext_token(file, buf, sizeof(buf));\n> ! \t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\"\n> \t\t\t\t\tbuf, CHARSET_FILE);\n> \t\t\t}\n> \t\t}\n> --- 501,507 ----\n> \t\t\twhile (!feof(file) && buf[0])\n> \t\t\t{\n> \t\t\t\tnext_token(file, buf, sizeof(buf));\n> ! 
\t\t\t\telog(LOG, \"GetCharSetByHost: unknown tag %s in file %s\",\n> \t\t\t\t\tbuf, CHARSET_FILE);\n> \t\t\t}\n> \t\t}\n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> \n> \"Rejoice with them that do rejoice, and weep with them \n> that weep.\" Romans 12:15 \n-- End of PGP section.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 18:18:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compilation failed when --with-recode specified (patch)" } ]
[ { "msg_contents": "\nMorning all ...\n\n\tJust a heads up that over the next little while, I'm planning on\nmaking a bunch of commits in order to work on making the code able to work\nnatively in the above environments ... my work will mostly focus on Win32\n(since I have no OS2/BeOS installs), but alot of the changes will be such\nthat it will benefit the others as well ...\n\n\tThe initial changes will be to just wrapper all our shared memory\ncode, so that I can make use of Apache's libapr libraries *if* they are\ninstalled ... if not, it will just fall back to \"the current code\" ...\n\n\n", "msg_date": "Fri, 3 May 2002 10:18:13 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> Morning all ...\n> \n> Just a heads up that over the next little while, I'm planning on\n> making a bunch of commits in order to work on making the code able to work\n> natively in the above environments ... my work will mostly focus on Win32\n> (since I have no OS2/BeOS installs), but alot of the changes will be such\n> that it will benefit the others as well ...\n> \n> The initial changes will be to just wrapper all our shared memory\n> code, so that I can make use of Apache's libapr libraries *if* they are\n> installed ... if not, it will just fall back to \"the current code\" ...\n\nIf you want any assistance, drop me an email. I spent a long time (> decade)\ndoing Windows applications and drivers and know a good number of the cool\ntricks.\n", "msg_date": "Fri, 03 May 2002 09:23:26 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "On Fri, 3 May 2002, mlw wrote:\n\n> \"Marc G. 
Fournier\" wrote:\n> >\n> > Morning all ...\n> >\n> > Just a heads up that over the next little while, I'm planning on\n> > making a bunch of commits in order to work on making the code able to work\n> > natively in the above environments ... my work will mostly focus on Win32\n> > (since I have no OS2/BeOS installs), but alot of the changes will be such\n> > that it will benefit the others as well ...\n> >\n> > The initial changes will be to just wrapper all our shared memory\n> > code, so that I can make use of Apache's libapr libraries *if* they are\n> > installed ... if not, it will just fall back to \"the current code\" ...\n>\n> If you want any assistance, drop me an email. I spent a long time (> decade)\n> doing Windows applications and drivers and know a good number of the cool\n> tricks.\n\nhrmmmm ... do you have a working Windows development environment? I'm\nrunning WinXP at home, but don't have any of the compilers or anything\nyet, so all my work for the first part is going to be done under Unix ...\n\nbut someone that knows something about building makefiles for Windows, and\ncompiling under it, will definitely be a major asset ;)\n\n", "msg_date": "Fri, 3 May 2002 10:47:33 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Will there really be a need for a BeOS development with the sale of Be to\nPalm? Is BeOS even still available? It might not be worth the time to\ndevelop for BeOS until you see what Palm decides to do with the software.\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\nSent: Friday, May 03, 2002 9:48 AM\nTo: mlw\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n\n\nOn Fri, 3 May 2002, mlw wrote:\n\n> \"Marc G. 
Fournier\" wrote:\n> >\n> > Morning all ...\n> >\n> > Just a heads up that over the next little while, I'm planning\non\n> > making a bunch of commits in order to work on making the code able to\nwork\n> > natively in the above environments ... my work will mostly focus on\nWin32\n> > (since I have no OS2/BeOS installs), but alot of the changes will be\nsuch\n> > that it will benefit the others as well ...\n> >\n> > The initial changes will be to just wrapper all our shared\nmemory\n> > code, so that I can make use of Apache's libapr libraries *if* they\nare\n> > installed ... if not, it will just fall back to \"the current code\" ...\n>\n> If you want any assistance, drop me an email. I spent a long time (>\ndecade)\n> doing Windows applications and drivers and know a good number of the\ncool\n> tricks.\n\nhrmmmm ... do you have a working Windows development environment? I'm\nrunning WinXP at home, but don't have any of the compilers or anything\nyet, so all my work for the first part is going to be done under Unix ...\n\nbut someone that knows something about building makefiles for Windows, and\ncompiling under it, will definitely be a major asset ;)\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly", "msg_date": "Fri, 3 May 2002 09:55:14 -0400", "msg_from": "\"Travis Hoyt\" <thoyt@npc.net>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "On Fri, 3 May 2002, Travis Hoyt wrote:\n\n> Will there really be a need for a BeOS development with the sale of Be to\n> Palm? Is BeOS even still available? 
It might not be worth the time to\n> develop for BeOS until you see what Palm decides to do with the software.\n\nNote that the changes I'm making are to make use of what is available\nthrough the libapr API that the Apache group has developed ... so, as long\nas they have the hooks in for BeOS, we will ... doesn't mean PgSQL will\nactually have makefiles for, and will compile under it, unless someone\n*with* BeOS steps forward, but alot of the core functionality that has\nheld back native ports should work ...\n\n\n\n\n >\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Friday, May 03, 2002 9:48 AM\n> To: mlw\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n>\n>\n> On Fri, 3 May 2002, mlw wrote:\n>\n> > \"Marc G. Fournier\" wrote:\n> > >\n> > > Morning all ...\n> > >\n> > > Just a heads up that over the next little while, I'm planning\n> on\n> > > making a bunch of commits in order to work on making the code able to\n> work\n> > > natively in the above environments ... my work will mostly focus on\n> Win32\n> > > (since I have no OS2/BeOS installs), but alot of the changes will be\n> such\n> > > that it will benefit the others as well ...\n> > >\n> > > The initial changes will be to just wrapper all our shared\n> memory\n> > > code, so that I can make use of Apache's libapr libraries *if* they\n> are\n> > > installed ... if not, it will just fall back to \"the current code\" ...\n> >\n> > If you want any assistance, drop me an email. I spent a long time (>\n> decade)\n> > doing Windows applications and drivers and know a good number of the\n> cool\n> > tricks.\n>\n> hrmmmm ... do you have a working Windows development environment? 
I'm\n> running WinXP at home, but don't have any of the compilers or anything\n> yet, so all my work for the first part is going to be done under Unix ...\n>\n> but someone that knows something about building makefiles for Windows, and\n> compiling under it, will definitely be a major asset ;)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Fri, 3 May 2002 10:59:47 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> \tThe initial changes will be to just wrapper all our shared memory\n> code, so that I can make use of Apache's libapr libraries *if* they are\n> installed ... if not, it will just fall back to \"the current code\" ...\n\nI think we should redesign the shared memory API (and even more so the\nsemaphore API), not just put a wrapper layer on it. A lot of the\ninternal API is unnecessarily dependent on SysV shmem/sem behavior.\n\nNote however that there are some things you will break if you are not\nvery careful. We are depending on shmem/sem behavior to catch a number\nof multiple-postmaster conflict situations. 
If there's not a more or\nless SysV-ish kernel underneath us, those situations will have to be\nrethought and some other interlock invented.\n\nIn short, I want to see a design review first, not a bunch of\noff-the-cuff commits.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 10:11:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Hi Marc,\n\nHow about using Dev-C++?\n\nIt's a Windows IDE with a GCC backend, and has a nice rep (and a Linux\nport):\n\nhttp://sourceforge.net/projects/dev-cpp/\n\nIt's always in SF.net's \"Top 10\" most worked on projects too, with about\nroughly 7,000 downloads per day. It can generate mingwin code too.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"Marc G. Fournier\" wrote:\n> \n> On Fri, 3 May 2002, mlw wrote:\n> \n> > \"Marc G. Fournier\" wrote:\n> > >\n> > > Morning all ...\n> > >\n> > > Just a heads up that over the next little while, I'm planning on\n> > > making a bunch of commits in order to work on making the code able to work\n> > > natively in the above environments ... my work will mostly focus on Win32\n> > > (since I have no OS2/BeOS installs), but alot of the changes will be such\n> > > that it will benefit the others as well ...\n> > >\n> > > The initial changes will be to just wrapper all our shared memory\n> > > code, so that I can make use of Apache's libapr libraries *if* they are\n> > > installed ... if not, it will just fall back to \"the current code\" ...\n> >\n> > If you want any assistance, drop me an email. I spent a long time (> decade)\n> > doing Windows applications and drivers and know a good number of the cool\n> > tricks.\n> \n> hrmmmm ... do you have a working Windows development environment? 
I'm\n> running WinXP at home, but don't have any of the compilers or anything\n> yet, so all my work for the first part is going to be done under Unix ...\n> \n> but someone that knows something about building makefiles for Windows, and\n> compiling under it, will definitely be a major asset ;)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 04 May 2002 00:26:42 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "On Fri, 3 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > \tThe initial changes will be to just wrapper all our shared memory\n> > code, so that I can make use of Apache's libapr libraries *if* they are\n> > installed ... if not, it will just fall back to \"the current code\" ...\n>\n> I think we should redesign the shared memory API (and even more so the\n> semaphore API), not just put a wrapper layer on it. A lot of the\n> internal API is unnecessarily dependent on SysV shmem/sem behavior.\n>\n> Note however that there are some things you will break if you are not\n> very careful. We are depending on shmem/sem behavior to catch a number\n> of multiple-postmaster conflict situations. 
If there's not a more or\n> less SysV-ish kernel underneath us, those situations will have to be\n> rethought and some other interlock invented.\n>\n> In short, I want to see a design review first, not a bunch of\n> off-the-cuff commits.\n\nAll I'm planning on doing is changing the appropriate shm_* functions iwth\npg_shm_* functions ... if !(libapr), all those pg_shm_* functions will\nhave in them is the original call we've always used ... there will even be\na --disable-libapr configure option so that if someone already has Apache2\ninstalled, but doesn't wanna use libapr for PgSQL, they don't have to ...\n\nBasically, all I'm looking at is allowing PgSQL to use a different library\nfor its shared memory calls then the standard one, nothing else ...\n\n", "msg_date": "Fri, 3 May 2002 11:37:52 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> All I'm planning on doing is changing the appropriate shm_* functions iwth\n> pg_shm_* functions ... if !(libapr), all those pg_shm_* functions will\n> have in them is the original call we've always used ... there will even be\n> a --disable-libapr configure option so that if someone already has Apache2\n> installed, but doesn't wanna use libapr for PgSQL, they don't have to ...\n\n> Basically, all I'm looking at is allowing PgSQL to use a different library\n> for its shared memory calls then the standard one, nothing else ...\n\nOh. I guess my next question is how closely that Apache library\nemulates the SysV shmem semantics. In particular, can you reliably\ntell how many processes are attached to a shmem block? 
(Cf\nSharedMemoryIsInUse() in storage/ipc/ipc.c) Without that feature we\nhave an interlock problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 10:42:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Fri, 3 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > All I'm planning on doing is changing the appropriate shm_* functions iwth\n> > pg_shm_* functions ... if !(libapr), all those pg_shm_* functions will\n> > have in them is the original call we've always used ... there will even be\n> > a --disable-libapr configure option so that if someone already has Apache2\n> > installed, but doesn't wanna use libapr for PgSQL, they don't have to ...\n>\n> > Basically, all I'm looking at is allowing PgSQL to use a different library\n> > for its shared memory calls then the standard one, nothing else ...\n>\n> Oh. I guess my next question is how closely that Apache library\n> emulates the SysV shmem semantics. In particular, can you reliably\n> tell how many processes are attached to a shmem block? (Cf\n> SharedMemoryIsInUse() in storage/ipc/ipc.c) Without that feature we\n> have an interlock problem.\n\nWill investigate this ... my immediate goal is to just get it so that an\nalternate library can be used ... default behaviour will be to stick with\nour current function calls ... to use libapr, you will/would have to use a\nconfigure option for it (sorry, meant --enable above, not --disable) ...\n\nThe only '#ifdef's I'm planning on for this will be in a central shmem.*\nfile(s), so there isn't going to be a string of those all over the place\nor anything stupid like that ...\n\n", "msg_date": "Fri, 3 May 2002 11:54:27 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > All I'm planning on doing is changing the appropriate shm_* functions iwth\n> > pg_shm_* functions ... if !(libapr), all those pg_shm_* functions will\n> > have in them is the original call we've always used ... there will even be\n> > a --disable-libapr configure option so that if someone already has Apache2\n> > installed, but doesn't wanna use libapr for PgSQL, they don't have to ...\n> \n> > Basically, all I'm looking at is allowing PgSQL to use a different library\n> > for its shared memory calls then the standard one, nothing else ...\n> \n> Oh. I guess my next question is how closely that Apache library\n> emulates the SysV shmem semantics. In particular, can you reliably\n> tell how many processes are attached to a shmem block? (Cf\n> SharedMemoryIsInUse() in storage/ipc/ipc.c) Without that feature we\n> have an interlock problem.\n\nI am not familiar with the Apache code, but I see no reason why all the\nfeatures in SysV SHM should not be implementable in a Windows modules. IMHO\nthat's what should be done.\n", "msg_date": "Fri, 03 May 2002 11:02:25 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On Fri, 3 May 2002, Tom Lane wrote:\n> \n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > All I'm planning on doing is changing the appropriate shm_* functions iwth\n> > > pg_shm_* functions ... if !(libapr), all those pg_shm_* functions will\n> > > have in them is the original call we've always used ... 
there will even be\n> > > a --disable-libapr configure option so that if someone already has Apache2\n> > > installed, but doesn't wanna use libapr for PgSQL, they don't have to ...\n> >\n> > > Basically, all I'm looking at is allowing PgSQL to use a different library\n> > > for its shared memory calls then the standard one, nothing else ...\n> >\n> > Oh. I guess my next question is how closely that Apache library\n> > emulates the SysV shmem semantics. In particular, can you reliably\n> > tell how many processes are attached to a shmem block? (Cf\n> > SharedMemoryIsInUse() in storage/ipc/ipc.c) Without that feature we\n> > have an interlock problem.\n> \n> Will investigate this ... my immediate goal is to just get it so that an\n> alternate library can be used ... default behaviour will be to stick with\n> our current function calls ... to use libapr, you will/would have to use a\n> configure option for it (sorry, meant --enable above, not --disable) ...\n> \n> The only '#ifdef's I'm planning on for this will be in a central shmem.*\n> file(s), so there isn't going to be a string of those all over the place\n> or anything stupid like that ...\n\nI think that you should create a verbatim implementation of the SysV shared\nmemory API in native Win32. It may have to be a pgsysvshm.dll or something like\nit, but I think it is the best possible approach.\n\nLet me look at it, I may be able to have something pretty quick.\n", "msg_date": "Fri, 03 May 2002 11:11:23 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I think that you should create a verbatim implementation of the SysV\n> > shared memory API in native Win32. 
It may have to be a pgsysvshm.dll\n> > or something like it, but I think it is the best possible approach.\n> \n> > Let me look at it, I may be able to have something pretty quick.\n> \n> The notion of redesigning the internal API shouldn't be forgotten,\n> though. I'm not so dissatisfied with the shmem API (mainly because\n> it's only relevant at startup; once we've created and attached the\n> shmem segment, we're done worrying about it). But the SysV semaphore\n> API is really kind of ugly, and the ugliness doesn't buy anything except\n> porting difficulty. Moreover, putting a cleaner API layer there would\n> make it easier to experiment with cheaper semaphore primitives, such\n> as POSIX mutexes.\n> \n> There was a thread last fall concerning redesigning that code --- I've\n> forgotten the guy's name, but IIRC he wanted to make a port to QNX6,\n> and the sema code was getting in the way. We put the work on hold\n> because we were getting close to 7.2 release (or thought we were,\n> anyway) but the project ought to be taken up again.\n\nI will commit to writing a windows version of what ever shm/semaphore/mutex\ncode you guys specify.\n\n\n> \n> regards, tom lane\n", "msg_date": "Fri, 03 May 2002 11:23:37 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I think that you should create a verbatim implementation of the SysV\n> shared memory API in native Win32. It may have to be a pgsysvshm.dll\n> or something like it, but I think it is the best possible approach.\n\n> Let me look at it, I may be able to have something pretty quick.\n\nThe notion of redesigning the internal API shouldn't be forgotten,\nthough. I'm not so dissatisfied with the shmem API (mainly because\nit's only relevant at startup; once we've created and attached the\nshmem segment, we're done worrying about it). 
But the SysV semaphore\nAPI is really kind of ugly, and the ugliness doesn't buy anything except\nporting difficulty. Moreover, putting a cleaner API layer there would\nmake it easier to experiment with cheaper semaphore primitives, such\nas POSIX mutexes.\n\nThere was a thread last fall concerning redesigning that code --- I've\nforgotten the guy's name, but IIRC he wanted to make a port to QNX6,\nand the sema code was getting in the way. We put the work on hold\nbecause we were getting close to 7.2 release (or thought we were,\nanyway) but the project ought to be taken up again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 11:25:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "sysv shm/sem\n\nI am writing a Win32 DLL implementation of :\n\nint semget(key_t key, int nsems, int semflg);\nint semctl(int semid, int semnum, int cmd, union semun arg);\nint semop(int semid, struct sembuf * sops, unsigned nsops);\nint shmctl(int shmid, int cmd, struct shmid_ds *buf);\nint shmget(key_t key, int size, int shmflg);\nvoid * shmat(int shmid, const void *shmaddr, int shmfl);\nint shmdt(const void *shmaddr);\n\nI will donate it do PostgreSQL.\n\nUNIX permissions will be ignored, i.e. uig/gid will be 0\nDo you see any need for the msgxxx calls?\nIs the function ipc() ever used?\n", "msg_date": "Fri, 03 May 2002 13:35:02 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> UNIX permissions will be ignored, i.e. uig/gid will be 0\n\nWin32 has no security anyway, right? 
;-)\n\n> Do you see any need for the msgxxx calls?\n> Is the function ipc() ever used?\n\nNope, and nope.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 15:18:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> mlw <markw@mohawksoft.com> writes:\n> > I think that you should create a verbatim implementation of the SysV\n> > shared memory API in native Win32. It may have to be a pgsysvshm.dll\n> > or something like it, but I think it is the best possible approach.\n>\n> > Let me look at it, I may be able to have something pretty quick.\n>\n> The notion of redesigning the internal API shouldn't be forgotten,\n> though. I'm not so dissatisfied with the shmem API (mainly because\n> it's only relevant at startup; once we've created and attached the\n> shmem segment, we're done worrying about it). But the SysV semaphore\n> API is really kind of ugly, and the ugliness doesn't buy anything except\n> porting difficulty. Moreover, putting a cleaner API layer there would\n> make it easier to experiment with cheaper semaphore primitives, such\n> as POSIX mutexes.\n>\n> There was a thread last fall concerning redesigning that code --- I've\n> forgotten the guy's name, but IIRC he wanted to make a port to QNX6,\n\nThat would be me.\n\n> and the sema code was getting in the way. We put the work on hold\n> because we were getting close to 7.2 release (or thought we were,\n> anyway) but the project ought to be taken up again.\n>\n\nYes, I am intended to give it another spin soon. I think it is bad idea to\nimpose SysV ugliness on systems which have better solutions. Main problem\nwith SysV primitives is that they are 'sticky' (i.e., not cleaned up if\nprocess dies/exits by the system). So Postgres has to deal with issues like\ndiscovering leftovers, finding unused IPC keys, etc. It is inelegant and\ntakes up lot of code. 
POSIX primitives are anonymous and cleaned up\nautomatically. So you just say 'give me a semaphore' and you get it, nothing\ngets into your way.\n\nPerformance of POSIX mutexes and semaphores (on platforms where they are\nimplemented properly) is also better than SysV semaphores. Unfortunately\nsome systems have rather lame POSIX support, for example semaphores and\nmutexes can't be shared across processes on Linux. That's basically the\nreason why people keep sticking to SysV.\n\nWhat really need to be done is new abstraction layer which would cover SysV\nAPI, POSIX and whatever native APIs are better for BeOS/OS2/Win32. I almost\ndid it last time...\n\n-- igor\n\n\n", "msg_date": "Fri, 3 May 2002 16:35:22 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I am writing a Win32 DLL implementation of :\n\n> int semget(key_t key, int nsems, int semflg);\n> int semctl(int semid, int semnum, int cmd, union semun arg);\n> int semop(int semid, struct sembuf * sops, unsigned nsops);\n\nRather than propagating the SysV semaphore API still further, why don't\nwe kill it now? (I'm willing to keep the shmem API, however.)\n\nAfter looking over the uses of these functions, I believe that we could\neasily develop a non-SysV-centric internal API. Here's a first cut:\n\n1. Define a struct type PGSemaphore that has implementation-specific\ncontents (the generic code will never look inside it). Operations on\nsemaphores will take \"PGSemaphore *\" arguments. When implementing atop\nSysV semaphores, PGSemaphore will contain two fields, the semaphore id\nand semaphore number. In other cases the contents could be different.\n\n2. 
All PGSemaphore structs will be physically stored in shared memory.\nThis doesn't matter for SysV support, where the id/number are constants\nanyway; but it will allow implementations based on mutexes.\n\n3. The operations needed are\n\n* Reserve semaphores. This will be told the number of semaphores\nneeded. On SysV it will do the necessary semget()s, but on some\nimplementations it might be a no-op. This should also be prepared\nto clean up after a failed postmaster, if it is possible for sema\nresources to outlive the creating postmaster.\n\n* Create semaphore. Given a pointer to an uninitialized PGSemaphore\nstruct, initialize it to a new semaphore with count 1. (On SysV this\nwould hand out the individual semas previously allocated by Reserve.)\nNote that this is not responsible for allocating the memory occupied\nby the PGSemaphore struct --- I envision the structs being part of\nlarger objects such as PROC structures.\n\n* Release semaphores. Release all resources allocated by previous\nReserve and Create operations. This is called when shutting down\nor when resetting shared memory after a backend crash.\n\n* Reset semaphore. Reset an existing PGSemaphore to count zero.\n\n* Lock semaphore. Identical to current IpcSemaphoreLock(), except\nparameter is a PGSemaphore *. See code of that routine for detailed\nsemantics.\n\n* Unlock semaphore. Identical to current IpcSemaphoreUnlock(), except\nparameter is a PGSemaphore *.\n\n* Conditional lock semaphore. Identical to current\nIpcSemaphoreTryLock(), except parameter is a PGSemaphore *.\n\nReserve/create/release would all be called in the postmaster process,\nso they could communicate via malloc'd private memory (eg, an array\nof semaphore IDs would be needed in the SysV case). 
The remaining\noperations would be invokable by any backend.\n\nComments?\n\nI'd be willing to work on refactoring the existing SysV-based code\nto meet this spec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 18:07:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> What really need to be done is new abstraction layer which would cover SysV\n> API, POSIX and whatever native APIs are better for BeOS/OS2/Win32. I almost\n> did it last time...\n\nYes. I just sent off a proposal for a cleaner semaphore API --- please\ncomment on it.\n\nMy inclination is to stick with the SysV API for shared memory, however.\nThe \"stickiness\" is actually not a bad thing for us in the shared memory\ncase, because it allows a new postmaster to detect the situation where\nold backends are still running: it can see that there is an old shmem\nsegment still present with attached processes. Without that, we have no\ngood defense against the scenario where an old postmaster dumped core\nleaving backends still running. The backends are fine as long as they\nare left to finish out their operations, or even killed with whatever\ndegree of prejudice the admin wants. But what we must *not* do is allow\na new postmaster to start while the old backends are still running;\nthat would mean two sets of backends running without contact with each\nother, which would be fatal for data integrity. The SysV API lets us\ndetect that case, but I don't see any equally good way to do it if we\nare using anonymous shared memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 18:18:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Like I told Marc, I don't care. 
You spec out what you want and I'll write it\nfor Windows. \n\nThat being said, a SysV IPC interface for native Windows would be kind of cool\nto have.\n\n\nTom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I am writing a Win32 DLL implementation of :\n> \n> > int semget(key_t key, int nsems, int semflg);\n> > int semctl(int semid, int semnum, int cmd, union semun arg);\n> > int semop(int semid, struct sembuf * sops, unsigned nsops);\n> \n> Rather than propagating the SysV semaphore API still further, why don't\n> we kill it now? (I'm willing to keep the shmem API, however.)\n> \n> After looking over the uses of these functions, I believe that we could\n> easily develop a non-SysV-centric internal API. Here's a first cut:\n> \n> 1. Define a struct type PGSemaphore that has implementation-specific\n> contents (the generic code will never look inside it). Operations on\n> semaphores will take \"PGSemaphore *\" arguments. When implementing atop\n> SysV semaphores, PGSemaphore will contain two fields, the semaphore id\n> and semaphore number. In other cases the contents could be different.\n> \n> 2. All PGSemaphore structs will be physically stored in shared memory.\n> This doesn't matter for SysV support, where the id/number are constants\n> anyway; but it will allow implementations based on mutexes.\n> \n> 3. The operations needed are\n> \n> * Reserve semaphores. This will be told the number of semaphores\n> needed. On SysV it will do the necessary semget()s, but on some\n> implementations it might be a no-op. This should also be prepared\n> to clean up after a failed postmaster, if it is possible for sema\n> resources to outlive the creating postmaster.\n> \n> * Create semaphore. Given a pointer to an uninitialized PGSemaphore\n> struct, initialize it to a new semaphore with count 1. 
(On SysV this\n> would hand out the individual semas previously allocated by Reserve.)\n> Note that this is not responsible for allocating the memory occupied\n> by the PGSemaphore struct --- I envision the structs being part of\n> larger objects such as PROC structures.\n> \n> * Release semaphores. Release all resources allocated by previous\n> Reserve and Create operations. This is called when shutting down\n> or when resetting shared memory after a backend crash.\n> \n> * Reset semaphore. Reset an existing PGSemaphore to count zero.\n> \n> * Lock semaphore. Identical to current IpcSemaphoreLock(), except\n> parameter is a PGSemaphore *. See code of that routine for detailed\n> semantics.\n> \n> * Unlock semaphore. Identical to current IpcSemaphoreUnlock(), except\n> parameter is a PGSemaphore *.\n> \n> * Conditional lock semaphore. Identical to current\n> IpcSemaphoreTryLock(), except parameter is a PGSemaphore *.\n> \n> Reserve/create/release would all be called in the postmaster process,\n> so they could communicate via malloc'd private memory (eg, an array\n> of semaphore IDs would be needed in the SysV case). The remaining\n> operations would be invokable by any backend.\n> \n> Comments?\n> \n> I'd be willing to work on refactoring the existing SysV-based code\n> to meet this spec.\n> \n> regards, tom lane\n", "msg_date": "Fri, 03 May 2002 19:09:53 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "> \"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> > What really need to be done is new abstraction layer which would cover\nSysV\n> > API, POSIX and whatever native APIs are better for BeOS/OS2/Win32. I\nalmost\n> > did it last time...\n>\n> Yes. I just sent off a proposal for a cleaner semaphore API --- please\n> comment on it.\n>\n\nI will look. 
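The spec quoted above might be rendered roughly as follows — a sketch backed by POSIX semaphores purely for illustration; the function names, struct contents, and backing are my assumptions, not the API as actually committed:

```c
#include <assert.h>
#include <semaphore.h>

/* One possible rendering of the proposed abstraction.  The struct's
 * contents are implementation-specific; here a POSIX sem_t lives
 * directly in it.  In real use it would sit in shared memory with
 * pshared = 1; this in-process demo uses pshared = 0. */
typedef struct PGSemaphore
{
    sem_t pss_sem;
} PGSemaphore;

/* Create: initialize to count 1, per the spec. */
static void PGSemaphoreCreate(PGSemaphore *s)  { sem_init(&s->pss_sem, 0, 1); }
/* Reset: drive the count down to zero. */
static void PGSemaphoreReset(PGSemaphore *s)   { while (sem_trywait(&s->pss_sem) == 0) /* drain */; }
static void PGSemaphoreLock(PGSemaphore *s)    { sem_wait(&s->pss_sem); }
static void PGSemaphoreUnlock(PGSemaphore *s)  { sem_post(&s->pss_sem); }
/* Conditional lock: nonzero if the lock was obtained. */
static int  PGSemaphoreTryLock(PGSemaphore *s) { return sem_trywait(&s->pss_sem) == 0; }
```

A SysV-backed variant would instead store the semaphore id and number in the struct and implement the same five operations with semop()/semctl(); the generic code never looks inside.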
I remember from my last attempt that it actually did not\ninvolve a lot of changes in your existing abstraction layer (which already\nexists, just being SysV-centric). I believe only one function prototype had\nto be changed... Your proposal sounds like more changes will be needed...\n\n> My inclination is to stick with the SysV API for shared memory, however.\n> The \"stickiness\" is actually not a bad thing for us in the shared memory\n> case, because it allows a new postmaster to detect the situation where\n> old backends are still running: it can see that there is an old shmem\n> segment still present with attached processes. Without that, we have no\n> good defense against the scenario where an old postmaster dumped core\n> leaving backends still running. The backends are fine as long as they\n> are left to finish out their operations, or even killed with whatever\n> degree of prejudice the admin wants. But what we must *not* do is allow\n> a new postmaster to start while the old backends are still running;\n> that would mean two sets of backends running without contact with each\n> other, which would be fatal for data integrity. The SysV API lets us\n> detect that case, but I don't see any equally good way to do it if we\n> are using anonymous shared memory.\n\nIt does not have to be anonymous. POSIX also defines shm_open(same arguments\nas open) API which will create named object in whatever location corresponds\nto shared memory storage on that platform (object is then grown to needed\nsize by ftruncate() and the fd is then passed to mmap). The object will\nexist in name space and can be detected by subsequent calls to shm_open()\nwith same name. 
It is not really different from doing open(), but more\nportable (mmap() on regular files may not be supported).\n\nI suggest we do IPC abstraction which would cover shared memory as well as\nsemaphores, otherwise it will be only half of solution - platforms without\nSysV API would still have to emulate SysV shared memory.\n\n-- igor\n\n\n", "msg_date": "Fri, 3 May 2002 18:16:47 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> It does not have to be anonymous. POSIX also defines shm_open(same arguments\n> as open) API which will create named object in whatever location corresponds\n> to shared memory storage on that platform (object is then grown to needed\n> size by ftruncate() and the fd is then passed to mmap). The object will\n> exist in name space and can be detected by subsequent calls to shm_open()\n> with same name. It is not really different from doing open(), but more\n> portable (mmap() on regular files may not be supported).\n\nYes, but can you detect whether other processes have the same file open?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 20:05:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Fri, 3 May 2002, Tom Lane wrote:\n\n> But what we must *not* do is allow a new postmaster to start while the\n> old backends are still running; that would mean two sets of backends\n> running without contact with each other, which would be fatal for data\n> integrity. 
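Igor's shm_open/ftruncate/mmap recipe might look like the sketch below (the segment name is invented for the demo; older systems may need -lrt). Note that with O_EXCL a second creator does detect the leftover *name* — but, answering Tom's question, nothing here reveals whether any process still has the segment mapped, which is what SysV's attach count reports.

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a named POSIX shared memory segment, failing if the name
 * already exists in the shared-memory namespace. */
static void *create_segment(const char *name, size_t size)
{
    void *p;
    int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, 0600);

    if (fd < 0)
        return NULL;            /* name exists: a previous creator got here */
    if (ftruncate(fd, (off_t) size) < 0)
    {
        close(fd);
        shm_unlink(name);
        return NULL;
    }
    p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */
    return (p == MAP_FAILED) ? NULL : p;
}
```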
The SysV API lets us detect that case, but I don't see any\n> equally good way to do it if we are using anonymous shared memory.\n\nIt's a hack (and has slight security implications), but you\ncould just allow the postgres backends to keep the listening\nsocket(s) open.\n\nMatthew.\n\n", "msg_date": "Sat, 4 May 2002 10:59:20 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, May 03, 2002 6:07 PM\n> To: mlw\n> Cc: Marc G. Fournier; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n>\n>\n> Rather than propagating the SysV semaphore API still further, why don't\n> we kill it now? (I'm willing to keep the shmem API, however.)\n\nWould this have the benefit of allowing PostgreSQL to work properly in BSD\njails, since lack of really working SysV IPC was the problem there?\n\n- J.\n\n", "msg_date": "Sat, 4 May 2002 09:33:49 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n>> Rather than propagating the SysV semaphore API still further, why don't\n>> we kill it now? 
(I'm willing to keep the shmem API, however.)\n\n> Would this have the benefit of allow PostgreSQL to work properly in BSD\n> jails, since lack of really working SysV IPC was the problem there?\n\nWas the problem just with semas, or was shmem an issue too?\n\nIn any case, unless someone actually writes an alternative sema\nimplementation that will work on BSD, nothing will happen...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 11:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> On Fri, 3 May 2002, Tom Lane wrote:\n>> The SysV API lets us detect that case, but I don't see any\n>> equally good way to do it if we are using anonymous shared memory.\n\n> It's a hack (and has slight security implications), but you\n> could just allow the postgres backends to keep the listening\n> socket(s) open.\n\nHmm. That might be workable, but it feels shaky to me. The problem\nis that you are using a lock based on port number to interlock a data\ndirectory --- and port number and data directory are independently\nvariable parameters. Consider\n\t$ postmaster -D /my/dir &\n\t-- dba thinks \"oops, forgot to specify port\"\n\t$ kill -9 pm-pid # bad idea\n\t$ postmaster -D /my/dir -p myport &\nAny backends started by the first postmaster will not be noticed by\nthe second one, if the interlock is based on port number.\n\nWe could get around this, of course: record the port number in the data\ndirectory lockfile, and test for existence of the old socket\nindependently of trying to create a new one. 
But it seems ugly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 12:37:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "I have just committed changes to create a platform-independent internal\nAPI for semaphores, along the lines discussed yesterday.\n\nAt this point, the Darwin (Mac OS X), BeOS, and QNX4 ports are probably\nbroken. I will fix the Darwin port (probably not till tomorrow though);\nvolunteers to clean up the BeOS and QNX4 ports are needed.\n\nBTW, there is a quick hack attempt at a POSIX-semaphore-based\nimplementation in src/backend/port/posix_sema.c. I have not tested\nthis yet, but expect to do so as part of fixing the Darwin port.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 20:08:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> \"Joel Burton\" <joel@joelburton.com> writes:\n> >> Rather than propagating the SysV semaphore API still further, why don't\n> >> we kill it now? (I'm willing to keep the shmem API, however.)\n>\n> > Would this have the benefit of allow PostgreSQL to work properly in BSD\n> > jails, since lack of really working SysV IPC was the problem there?\n>\n> Was the problem just with semas, or was shmem an issue too?\n\nNot sure -- doesn't get far enough for me to tell. 
initdb dies with:\n\ncreating template1 database in /usr/local/pgsql/data/base/1...\nIpcSemaphoreCreate: semget(key=1, num=17, 03600) failed:\nFunction not implemented\n\n> In any case, unless someone actually writes an alternative sema\n> implementation that will work on BSD, nothing will happen...\n\nWas hoping that the discussions about the APR might let this work under BSD\njails, assuming I can get the APR to compile.\n\n(For others: apparently PG will work under BSD jails if you recompile the\nBSD kernel w/some new settings, but my ISP for this project was unwilling to\ndo that. Search the mailing list for messages on how to do this.)\n\nJ.\n\n", "msg_date": "Sun, 5 May 2002 02:44:31 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n> Would this have the benefit of allow PostgreSQL to work properly in BSD\n> jails, since lack of really working SysV IPC was the problem there?\n>> \n>> Was the problem just with semas, or was shmem an issue too?\n\n> Not sure -- doesn't get far enough for me to tell. initdb dies with:\n\n> creating template1 database in /usr/local/pgsql/data/base/1...\n> IpcSemaphoreCreate: semget(key=1, num=17, 03600) failed:\n> Function not implemented\n\nWe create shared memory before semaphores, so if you got this far then\nthe shmem code is probably working (at least minimally).\n\nDo you have working sem_open or sem_init (ie, POSIX semaphores)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 10:44:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> > Rather than propagating the SysV semaphore API still further, why don't\n> > we kill it now? 
(I'm willing to keep the shmem API, however.)\n>\n> Would this have the benefit of allow PostgreSQL to work properly in BSD\n> jails, since lack of really working SysV IPC was the problem there?\n\nI have postgresql working quite happily in FreeBSD jails! (Just make sure\nyou go \"sysctl jail.sysvipc_allowed=1\").\n\nChris\n\n", "msg_date": "Mon, 6 May 2002 09:43:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> (For others: apparently PG will work under BSD jails if you recompile the\n> BSD kernel w/some new settings, but my ISP for this project was \n> unwilling to\n> do that. Search the mailing list for messages on how to do this.)\n\nWorks fine. You don't need to recompile - just use the sysctl.\n\nChris\n\n", "msg_date": "Mon, 6 May 2002 09:49:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Marc G. Fournier wrote:\n\n> hrmmmm ... do you have a working Windows development environment? I'm\n> running WinXP at home, but don't have any of the compilers or anything\n> yet, so all my work for the first part is going to be done under Unix ...\n> \n> but someone that knows something about building makefiles for Windows, and\n> compiling under it, will definitely be a major asset ;)\n\nI think if you are familiar with make and gcc (and perhaps autoconf), \nMinGW and MSys are the development environment of choice on Windows. You \neven get /bin/sh. But the generated program does not depend on any \ncustom library (like cygwin does). 
It's even possible to cross compile \nfrom a Linux box (actully powerpc in my case).\n\nLook at http://mingw.sourceforge.net (and there for msys).\n\n Christof\n\n", "msg_date": "Mon, 06 May 2002 09:49:01 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "> > > Rather than propagating the SysV semaphore API still further,\n> why don't\n> > > we kill it now? (I'm willing to keep the shmem API, however.)\n> >\n> > Would this have the benefit of allow PostgreSQL to work properly in BSD\n> > jails, since lack of really working SysV IPC was the problem there?\n>\n> I have postgresql working quite happily in FreeBSD jails! (Just make sure\n> you go \"sysctl jail.sysvipc_allowed=1\").\n\nYep, Alastair D'Silva helpfully pointed this out a month or two ago, and for\nmany people, this would be a workable solution. Unfortunately, it appears\nthat you have to run this command outside the jail, which I don't have\naccess to.\n\nI forwarded the suggestion to my ISP (imeme, a Zope provider), who said\nthat:\n\n\"This will allow you to run a single postgres in a single jail only one\nuser would have access to it. If you try to run more then one it will\ntry to use the same shared memory and crash.\"\n\nAnd therefore they refused to make the change. (More annoyingly, they kept\ntrying to convince me that I should quit my whining and use MySQL since it's\n\"ACID compliant\").\n\nSo, I'm holding out hope that since this ISP seems unenlightened, one day\nPostgreSQL will simply run in BSD jails without a cooperating jailmaster,\nand it sounded like using the APR _might_ make this possible. 
(All of my\nother projects use PG; I'd sure love to get this one switched over!)\n\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Mon, 6 May 2002 07:07:00 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> I forwarded the suggestion to my ISP (imeme, a Zope provider), who said\n> that:\n> \n> \"This will allow you to run a single postgres in a single jail only one\n> user would have access to it. If you try to run more then one it will\n> try to use the same shared memory and crash.\"\n\nNot true. But I'll avoid digging up any more on that old issue...\n\nChris\n\n\n", "msg_date": "Mon, 6 May 2002 19:36:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Sat, 4 May 2002, Joel Burton wrote:\n\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> > Sent: Friday, May 03, 2002 6:07 PM\n> > To: mlw\n> > Cc: Marc G. Fournier; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n> >\n> >\n> > Rather than propagating the SysV semaphore API still further, why don't\n> > we kill it now? (I'm willing to keep the shmem API, however.)\n>\n> Would this have the benefit of allow PostgreSQL to work properly in BSD\n> jails, since lack of really working SysV IPC was the problem there?\n\nThere is no problem with SysV IPC in the jail, per se ... jail's were just\nnot coded to delimite/segregate such IPC from other jails ... its one of\nthose \"caveat empor\"(sp?) situations ... 
you can do it, but at your own\nrisk, as somoene in another jail has the ability to 'attach' to your\nsegments ...\n\n\n", "msg_date": "Mon, 6 May 2002 08:52:00 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\n> Sent: Monday, May 06, 2002 7:36 AM\n> To: Joel Burton; Tom Lane; mlw\n> Cc: Marc G. Fournier; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n>\n>\n> > I forwarded the suggestion to my ISP (imeme, a Zope provider), who said\n> > that:\n> >\n> > \"This will allow you to run a single postgres in a single jail only one\n> > user would have access to it. If you try to run more then one it will\n> > try to use the same shared memory and crash.\"\n>\n> Not true. But I'll avoid digging up any more on that old issue...\n\nOh, I'm sure it's not true. But sometimes things end up on the \"nyah, nyah,\nit's my server and I say so\" level. Sigh.\n\nSo, I guess that's where it leaves me: waiting for some solution other than\nISP cluefulness. :-)\n\n- J.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n", "msg_date": "Mon, 6 May 2002 07:54:27 -0400", "msg_from": "\"Joel Burton\" <joel@joelburton.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Sat, 4 May 2002, Tom Lane wrote:\n\n> Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> > On Fri, 3 May 2002, Tom Lane wrote:\n> >> The SysV API lets us detect that case, but I don't see any\n> >> equally good way to do it if we are using anonymous shared memory.\n>\n> > It's a hack (and has slight security implications), but you\n> > could just allow the postgres backends to keep the listening\n> > socket(s) open.\n>\n> Hmm. 
That might be workable, but it feels shaky to me. The problem\n> is that you are using a lock based on port number to interlock a data\n> directory --- and port number and data directory are independently\n> variable parameters. Consider\n> \t$ postmaster -D /my/dir &\n> \t-- dba thinks \"oops, forgot to specify port\"\n> \t$ kill -9 pm-pid # bad idea\n> \t$ postmaster -D /my/dir -p myport &\n> Any backends started by the first postmaster will not be noticed by\n> the second one, if the interlock is based on port number.\n>\n> We could get around this, of course: record the port number in the data\n> directory lockfile, and test for existence of the old socket\n> independently of trying to create a new one. But it seems ugly.\n\nHow about a second, data directory based socket simply named something\nlike '.inuse', that is not port dependent?\n\n", "msg_date": "Mon, 6 May 2002 08:54:52 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Sun, 5 May 2002, Joel Burton wrote:\n\n> > \"Joel Burton\" <joel@joelburton.com> writes:\n> > >> Rather than propagating the SysV semaphore API still further, why don't\n> > >> we kill it now? (I'm willing to keep the shmem API, however.)\n> >\n> > > Would this have the benefit of allow PostgreSQL to work properly in BSD\n> > > jails, since lack of really working SysV IPC was the problem there?\n> >\n> > Was the problem just with semas, or was shmem an issue too?\n>\n> Not sure -- doesn't get far enough for me to tell. initdb dies with:\n>\n> creating template1 database in /usr/local/pgsql/data/base/1...\n> IpcSemaphoreCreate: semget(key=1, num=17, 03600) failed:\n> Function not implemented\n\nRead the jail manpage:\n\n jail.sysvipc_allowed\n This MIB entry determines whether or not processes within a jail\n have access to System V IPC primitives. 
In the current jail imple-\n mentation, System V primitives share a single namespace across the\n host and jail environments, meaning that processes within a jail\n would be able to communicate with (and potentially interfere with)\n processes outside of the jail, and in other jails. As such, this\n functionality is disabled by default, but can be enabled by setting\n this MIB entry to 1.\n\n", "msg_date": "Mon, 6 May 2002 08:57:18 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\nOr changing ISPs to a place more enlightened ...\n\nOn Mon, 6 May 2002, Joel Burton wrote:\n\n> > -----Original Message-----\n> > From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\n> > Sent: Monday, May 06, 2002 7:36 AM\n> > To: Joel Burton; Tom Lane; mlw\n> > Cc: Marc G. Fournier; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n> >\n> >\n> > > I forwarded the suggestion to my ISP (imeme, a Zope provider), who said\n> > > that:\n> > >\n> > > \"This will allow you to run a single postgres in a single jail only one\n> > > user would have access to it. If you try to run more then one it will\n> > > try to use the same shared memory and crash.\"\n> >\n> > Not true. But I'll avoid digging up any more on that old issue...\n>\n> Oh, I'm sure it's not true. But sometimes things end up on the \"nyah, nyah,\n> it's my server and I say so\" level. Sigh.\n>\n> So, I guess that's where it leaves me: waiting for some solution other than\n> ISP cluefulness. :-)\n>\n> - J.\n>\n> Joel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\n> Knowledge Management & Technology Consultant\n>\n>\n\n", "msg_date": "Mon, 6 May 2002 09:01:36 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> We could get around this, of course: record the port number in the data\n>> directory lockfile, and test for existence of the old socket\n>> independently of trying to create a new one. But it seems ugly.\n\n> How about a second, data directory based socket simply named something\n> like '.inuse', that is not port dependent?\n\nHmm ... but how do you use that to tell if there are still backends\naround?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 09:40:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Mon, 6 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> We could get around this, of course: record the port number in the data\n> >> directory lockfile, and test for existence of the old socket\n> >> independently of trying to create a new one. But it seems ugly.\n>\n> > How about a second, data directory based socket simply named something\n> > like '.inuse', that is not port dependent?\n>\n> Hmm ... but how do you use that to tell if there are still backends\n> around?\n\nAs a backend is started up, connect to that socket ... if socket is open\nwhen trying to start a new frontend, fail as there are currently other\nconnections attached to it?\n\n", "msg_date": "Mon, 6 May 2002 10:43:15 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> Hmm ... but how do you use that to tell if there are still backends\n>> around?\n\n> As a backend is started up, connect to that socket ... 
if socket is open\n> when trying to start a new frontend, fail as there are currently other\n> connections attached to it?\n\nBut the backends would only have the socket open, they'd not be actively\nlistening to it. So how could you tell whether anyone had the socket\nopen or not?\n\nISTM we gave up on exactly that technique for the main postmaster's\nsocket; we now create a separate lockfile to protect the socket, and\ndon't rely on the socket itself to give us any interlocking help at all.\nBut the lockfile just contains the postmaster's PID, so it's no help\nin detecting the case where the old postmaster has gone away but there\nare still orphaned backends laying about.\n\nI'm not entirely thrilled with the lockfile technique; it'd be nice to\nfind something better. (In particular, we've seen a couple cases now\nwhere people had trouble with PG refusing to start after a system\nreboot, because some other daemon process had been assigned the PID\nthat the postmaster had in its previous incarnation; so the lockfile\ncheck code mistakenly thinks there's still an old postmaster.) But\nso far, the only thing worse than lockfiles is everything else :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 10:17:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "I said:\n> But the backends would only have the socket open, they'd not be actively\n> listening to it. So how could you tell whether anyone had the socket\n> open or not?\n\nOh, I take that back, I see how you could do it: the postmaster opens\nthe socket *for writing*, but never actually writes. All its child\nprocesses inherit that same open file descriptor and just keep it\naround. Then, to tell if anyone's home, you open the socket *for\nreading* and try to read in O_NONBLOCK mode. 
You get an EOF indication\nif and only if no one has the socket open for writing; otherwise you\nget an EAGAIN error.\n\nThat would work ... but is it more portable than depending on SysV\nshmem connection counts? ISTR that some of the platforms we support\ndon't have Unix-style sockets at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 10:25:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Mon, 6 May 2002, Tom Lane wrote:\n\n> I said:\n> > But the backends would only have the socket open, they'd not be actively\n> > listening to it. So how could you tell whether anyone had the socket\n> > open or not?\n>\n> Oh, I take that back, I see how you could do it: the postmaster opens\n> the socket *for writing*, but never actually writes. All its child\n> processes inherit that same open file descriptor and just keep it\n> around. Then, to tell if anyone's home, you open the socket *for\n> reading* and try to read in O_NONBLOCK mode. You get an EOF indication\n> if and only if no one has the socket open for writing; otherwise you\n> get an EAGAIN error.\n>\n> That would work ... but is it more portable than depending on SysV\n> shmem connection counts? ISTR that some of the platforms we support\n> don't have Unix-style sockets at all.\n\nWouldn't the same thing work with a simple file? Does it have to be a\nUnixDomainSocket?\n\n\n", "msg_date": "Mon, 6 May 2002 11:35:20 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> That would work ... but is it more portable than depending on SysV\n>> shmem connection counts? ISTR that some of the platforms we support\n>> don't have Unix-style sockets at all.\n\n> Wouldn't the same thing work with a simple file? 
Does it have to be a\n> UnixDomainSocket?\n\nNo, and yes. If it's not a pipe/fifo then you don't get the\nEOF-only-when-no-possible-writers-remain behavior. TCP and UDP\nsockets don't show this sort of behavior either. So AFAICS we\nreally need a named pipe, ie, socket.\n\nWe could maybe do something approximately similar with TCP connection\nattempts (per the prior suggestion of letting backends hold the\npostmaster's listen socket open; then see if you get \"connection\nrefused\" or a timeout from trying to connect) but I don't think it'd be\nas trustworthy. Simple mistakes like overly aggressive ipchains filters\nwould confuse this kind of test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 10:48:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\nSince our default behavior (at startup) is to have TCP sockets disabled,\nhow many OSs are there that don't support UD sockets? Enough to really be\nworried about?\n\n\n\n\nOn Mon, 6 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> >> That would work ... but is it more portable than depending on SysV\n> >> shmem connection counts? ISTR that some of the platforms we support\n> >> don't have Unix-style sockets at all.\n>\n> > Wouldn't the same thing work with a simple file? Does it have to be a\n> > UnixDomainSocket?\n>\n> No, and yes. If it's not a pipe/fifo then you don't get the\n> EOF-only-when-no-possible-writers-remain behavior. TCP and UDP\n> sockets don't show this sort of behavior either. So AFAICS we\n> really need a named pipe, ie, socket.\n>\n> We could maybe do something approximately similar with TCP connection\n> attempts (per the prior suggestion of letting backends hold the\n> postmaster's listen socket open; then see if you get \"connection\n> refused\" or a timeout from trying to connect) but I don't think it'd be\n> as trustworthy. 
Simple mistakes like overly aggressive ipchains filters\n> would confuse this kind of test.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 6 May 2002 11:55:48 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Since our default behavior (at startup) is to have TCP sockets disabled,\n> how many OSs are there that don't support UD sockets?\n\nA quick look in the sources shows that we #undef HAVE_UNIX_SOCKETS for\nQNX, BeOS, and old cygwin versions ... which are exactly the platforms\nthat don't have SysV shmem support, so those are exactly the guys who\nwe're trying to fix the problem for.\n\nI do like the idea of using a Unix socket this way where available,\nthough. It'd let us switch over the shmem code to using IPC_PRIVATE\nshmem key, which'd simplify that code tremendously; and we could make\nsome progress against the dead-PID-in-lockfile problem.\n\nCould we get away with saying that the Unix-socket-less platforms have\nweaker protection against mistakenly restarting the postmaster? We\ncould have a plain-vanilla lockfile instead of a socket lockfile on\nthose platforms, which would not catch the dead-postmaster-live-backends\ncase, but it'd be better than nothing. And I am not convinced that the\nshmem-connection-count check should be trusted on QNX or BeOS, anyway,\nso I'm not sure that they actually have a functioning check now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 11:19:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Tom Lane wrote:\n> I said:\n> > But the backends would only have the socket open, they'd not be actively\n> > listening to it. 
So how could you tell whether anyone had the socket\n> > open or not?\n>\n> Oh, I take that back, I see how you could do it: the postmaster opens\n> the socket *for writing*, but never actually writes. All its child\n> processes inherit that same open file descriptor and just keep it\n> around. Then, to tell if anyone's home, you open the socket *for\n> reading* and try to read in O_NONBLOCK mode. You get an EOF indication\n> if and only if no one has the socket open for writing; otherwise you\n> get an EAGAIN error.\n>\n> That would work ... but is it more portable than depending on SysV\n> shmem connection counts? ISTR that some of the platforms we support\n> don't have Unix-style sockets at all.\n\n I think what you describe is a named pipe, not a socket. The\n underlying implementation might be a socketpair, but the\n behaviour of named pipes is exactly that since Version 7 at\n least. This worked under Minix already.\n\n\n>\n> regards, tom lane\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Mon, 6 May 2002 14:34:41 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Since our default behavior (at startup) is to have TCP sockets disabled,\n> > how many OSs are there that don't support UD sockets?\n>\n> A quick look in the sources shows that we #undef HAVE_UNIX_SOCKETS for\n> QNX, BeOS, and old cygwin versions ... 
which are exactly the platforms\n> that don't have SysV shmem support, so those are exactly the guys who\n> we're trying to fix the problem for.\n\nNext release of QNX (6.2) will add support for UDS, but they are still not\nquite portable.\n\n>\n> I do like the idea of using a Unix socket this way where available,\n> though. It'd let us switch over the shmem code to using IPC_PRIVATE\n> shmem key, which'd simplify that code tremendously; and we could make\n> some progress against the dead-PID-in-lockfile problem.\n>\n> Could we get away with saying that the Unix-socket-less platforms have\n> weaker protection against mistakenly restarting the postmaster? We\n> could have a plain-vanilla lockfile instead of a socket lockfile on\n> those platforms, which would not catch the dead-postmaster-live-backends\n> case, but it'd be better than nothing. And I am not convinced that the\n> shmem-connection-count check should be trusted on QNX or BeOS, anyway,\n> so I'm not sure that they actually have a functioning check now.\n\nWhy can't we use a named pipe (aka FIFO file) instead of UDS? I think that is\nmore portable... The socketpair() function also tends to be more portable\nthan whole UDS in general... It works even on QNX4, but I'm not sure about BeOS.\n\nAnother thought is, why can't we use bind() to the postmaster port to detect\nother postmasters? I might be missing something, so pardon my ignorance. But\nshouldn't bind() to the same port fail with EADDRINUSE unless SO_REUSEADDR is\nset? 
I don't really know if it is set in postgres or not ...\n\n-- igor\n\n\n", "msg_date": "Mon, 6 May 2002 17:25:21 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n>> Could we get away with saying that the Unix-socket-less platforms have\n>> weaker protection against mistakenly restarting the postmaster?\n\n> Why can't we use named pipe (aka FIFO file) instead of UDS?\n\nThat's exactly what I'm talking about.\n\n> Another thought is, why can't we use bind() to the postmaster port to detect\n> other postmasters?\n\nBecause port number and data directory are independent parameters. The\ninterlock on port number is not related to the interlock on data\ndirectory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 18:59:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Mon, 6 May 2002, Tom Lane wrote:\n\n> > As a backend is started up, connect to that socket ... if socket is open\n> > when trying to start a new frontend, fail as there are currently other\n> > connections attached to it?\n>\n> But the backends would only have the socket open, they'd not be\n> actively listening to it. So how could you tell whether anyone\n> had the socket open or not?\n\nIt's easy. At startup, the postmaster (or standalone\nbackend) creates a Unix socket, binds it to the filename\nand calls listen on it.\n\nIf another backend is running, it'll get EADDRINUSE from\nthe bind or listen.\n\nNobody actually needs to connect to the socket. 
Simple,\nrace-free, 10 lines of code.\n\nMatthew.\n\n", "msg_date": "Tue, 7 May 2002 12:15:32 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> Nobody actually needs to connect to the socket. Simple,\n> race-free, 10 lines of code.\n\n... and we already do it. But it protects the port number, not\nthe data directory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 May 2002 09:25:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Tue, 7 May 2002, Tom Lane wrote:\n\n> > Nobody actually needs to connect to the socket. Simple,\n> > race-free, 10 lines of code.\n>\n> ... and we already do it. But it protects the port number, not\n> the data directory.\n\nIf I understood him correctly, Marc was suggesting a further\ndomain socket inside the data directory.\n\nMatthew.\n\n", "msg_date": "Tue, 7 May 2002 14:37:59 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n>> ... and we already do it. But it protects the port number, not\n>> the data directory.\n\n> If I understood him correctly, Marc was suggesting a further\n> domain socket inside the data directory.\n\nRight, and that would work because we would reference it as\n$PGDATA/.socket --- exact, one-to-one correspondence between data\ndirectory and interlock file. A TCP socket isn't going to have any\nsuch direct connection to the data directory.\n\nWe could try to make such a connection (eg, pick a free port number at\nrandom, and record the number in a lockfile in $PGDATA). 
But that will\nsuffer from a bunch of failure modes, starting with the same one that's\nbeen biting us for PID interlocking: after a system restart, someone\nelse may hold the port number that we chose at random last time.\n\nBasically, the reason that we want this interlock is because we are\ngoing after five-nines kind of reliability. An interlock technology\nthat's not itself five-nines reliable isn't going to make things better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 May 2002 10:15:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Just a friendly reminder that it should be named pipe rather than UDS ;)\n-- igor\n\n> Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> >> ... and we already do it. But it protects the port number, not\n> >> the data directory.\n> \n> > If I understood him correctly, Marc was suggesting a further\n> > domain socket inside the data directory.\n> \n> Right, and that would work because we would reference it as\n> $PGDATA/.socket --- exact, one-to-one correspondence between data\n> directory and interlock file. A TCP socket isn't going to have any\n> such direct connection to the data directory.\n> \n> We could try to make such a connection (eg, pick a free port number at\n> random, and record the number in a lockfile in $PGDATA). But that will\n> suffer from a bunch of failure modes, starting with the same one that's\n> been biting us for PID interlocking: after a system restart, someone\n> else may hold the port number that we chose at random last time.\n> \n> Basically, the reason that we want this interlock is because we are\n> going after five-nines kind of reliability. 
An interlock technology\n> that's not itself five-nines reliable isn't going to make things better.\n> \n> regards, tom lane\n> \n", "msg_date": "Tue, 7 May 2002 15:53:19 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Tue, 7 May 2002, Igor Kovalenko wrote:\n\n> Just a friendly reminder that it should be named pipe rather than UDS\n> ;)\n\nNamed pipes don't have the required syntax. Perhaps for\nplatforms which have neither SysV shm nor Unix sockets, something like\nPOSIX named semaphores are the way forward.\n\nMatthew.\n\n", "msg_date": "Wed, 8 May 2002 10:19:59 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Can you be more specific? What required syntax? I was talking about named\npipe vs UDS socket...\n\n> On Tue, 7 May 2002, Igor Kovalenko wrote:\n>\n> > Just a friendly reminder that it should be named pipe rather than UDS\n> > ;)\n>\n> Named pipes don't have the required syntax. Perhaps for\n> platforms which have neither SysV shm, something like\n> POSIX named semaphores are the way forward.\n>\n> Matthew.\n>\n", "msg_date": "Wed, 8 May 2002 12:43:13 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> I was talking about named pipe vs UDS socket...\n\nAren't those the same thing? 
You get a socket file either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 May 2002 15:26:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "> \"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> > I was talking about named pipe vs UDS socket...\n>\n> Aren't those the same thing? You get a socket file either way.\n>\n\nOn QNX a named pipe will have type 'FIFO file', which indeed has features similar\nto a socket but is implemented differently; that is beside the point, though. On\nSysV derivatives both will be implemented as 2 connected STREAMS heads.\nOn BSD they will both be the same thing. Not sure about other systems. The UDS\nAPI however was originally limited to BSD4.3 and only later started to\nspread, whereas named pipes have been around longer and probably exist in\nany Unix variant and probably other types of systems.\n\n-- igor\n\n\n", "msg_date": "Wed, 8 May 2002 14:42:56 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "On Wed, 8 May 2002, Igor Kovalenko wrote:\n\n> Can you be more specific? What required syntax? I was talking about\n> named pipe vs UDS socket...\n\nSorry, I meant semantics.\n\nA pipe can have multiple readers and multiple writers. This is\nno use for us.\n\nA listening SOCK_STREAM Unix domain socket can have no readers or\nwriters, but only one listener (well, except that other processes\ncan inherit or be passed the socket). You have to connect() (and\nthe server must accept()) before read and write do anything. But\nwe have no use for that here. 
It's just an exclusive-only mutex\nwhose namespace is the filesystem.\n\nIt really is like a TCP socket, except that the address namespace\nis the filesystem, and thus it's not available remotely.\n\nThink of it as a TCP socket without the \"which address and port\ndo I use, and how do I keep it secure\" issues.\n\nMatthew.\n\n", "msg_date": "Thu, 9 May 2002 01:25:40 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Ahh... you want a named semaphore... There is such a thing in POSIX but it\nis only portable if its name begins with \"/\" (which tells the OS to put it\nwhere appropriate). I believe that without a leading slash they end up in the\ncurrent directory, but we can't rely on that... too bad. Glad UDS is getting\nsupported on my platform, lol ;)\n\nThis will however leave QNX4 in the dust, if anyone cares. And most likely\nBeOS, MP/X and half a dozen other platforms. Which prompts me to wonder whether\nit would not be better to come up with a platform-independent 'namespace sync'\nmechanism. Can't we use an fcntl()-based lock for that purpose? That's what\napache is doing apparently (one of several variants).\n\n-- igor\n\n> On Wed, 8 May 2002, Igor Kovalenko wrote:\n>\n> > Can you be more specific? What required syntax? I was talking about\n> > named pipe vs UDS socket...\n>\n> Sorry, I meant semantics.\n>\n> A pipe can have multiple readers and multiple writers. This is\n> no use for us.\n>\n> A listening SOCK_STREAM Unix domain socket can have no readers or\n> writers, but only one listener (well, except that other processes\n> can inherit or be passed the socket). You have to connect() (and\n> the server must accept()) before read and write do anything. But\n> we have no use for that here. 
It's just an exclusive-only mutex\n> whose namespace is the filesystem.\n>\n> It really is like a TCP socket, except that the address namespace\n> is the filesystem, and thus it's not available remotely.\n>\n> Think of it as a TCP socket without the \"which address and port\n> do I use, and how do I keep it secure\" issues.\n>\n> Matthew.\n>\n\n", "msg_date": "Wed, 8 May 2002 21:53:15 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> Can't we use fcntl()-based lock for that purpose?\n\nI'm pretty sure that fcntl locking has an evil reputation as well.\n(Didn't we use that up till a couple years ago, and give up on it?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 May 2002 01:12:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " }, { "msg_contents": "Tom Lane wrote:\n> \"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> writes:\n> > I was talking about named pipe vs UDS socket...\n>\n> Aren't those the same thing? You get a socket file either way.\n\n No they are not. The former is a FIFO file, the latter a\n socket. FIFO's can be used via open(2), sockets via\n connect(2). And as said before, FIFO's are there since UNIX\n Version 7 (at least, I haven't been around before that). So\n there is a good chance that these are available on every\n UNIX.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Thu, 9 May 2002 09:48:33 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Igor Kovalenko wrote:\n> It does not have to be anonymous. POSIX also defines shm_open(same arguments\n> as open) API which will create named object in whatever location corresponds\n> to shared memory storage on that platform (object is then grown to needed\n> size by ftruncate() and the fd is then passed to mmap). The object will\n> exist in name space and can be detected by subsequent calls to shm_open()\n> with same name. It is not really different from doing open(), but more\n> portable (mmap() on regular files may not be supported).\n\nActually, I think the best shared memory implementation would be\nMAP_ANON | MAP_SHARED mmap(), which could be called from the postmaster\nand passed to child processes.\n\nWhile all our platforms have mmap(), many don't have MAP_ANON, but those\nthat do could use it. You need MAP_ANON to prevent the shared memory\nfrom being written to a disk file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 20:47:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "mlw wrote:\n> Like I told Marc, I don't care. 
\n> \n> That being said, a SysV IPC interface for native Windows would be kind of cool\n> to have.\n\nI am wondering why we don't just use the Cygwin shm/sem code in our\nproject, or maybe the Apache stuff; why bother reinventing the wheel.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 20:49:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "\nYou might want to go to the archives and catch up on the whole thread and\nits digressions :)\n\nOn Sun, 2 Jun 2002, Bruce Momjian wrote:\n\n> mlw wrote:\n> > Like I told Marc, I don't care. You spec out what you want and I'll write it\n> > for Windows.\n> >\n> > That being said, a SysV IPC interface for native Windows would be kind of cool\n> > to have.\n>\n> I am wondering why we don't just use the Cygwin shm/sem code in our\n> project, or maybe the Apache stuff; why bother reinventing the wheel.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Sun, 2 Jun 2002 22:29:34 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> mlw wrote:\n> > Like I told Marc, I don't care. 
You spec out what you want and I'll write it\n> > for Windows.\n> >\n> > That being said, a SysV IPC interface for native Windows would be kind of cool\n> > to have.\n> \n> I am wondering why we don't just use the Cygwin shm/sem code in our\n> project, or maybe the Apache stuff; why bother reinventing the wheel.\n\nI have not been participating on the list; I don't know why I'm still receiving\nmail.\n\nBut! In the course of testing some code, I managed to gain some experience with\ncygwin. I have seen fork() problems with a large number of processes. \n\nFor PostgreSQL to be as good on Windows as it is on UNIX, it has to be a native\nprogram without cygwin. The shared memory and semaphore management should be\ndone with the postmaster process.\n\nThe apache stuff is OK; it is just as good as anything else. You may be able to\nuse critical sections in shared memory to implement a fast semaphore, but that\nwould take a bit of experimentation.\n\nI think what Tom had in mind is to take out the SysV and various OS-specific\nAPIs and replace them with a more generic one, behind which you guys can tune\nthe implementation.\n", "msg_date": "Sun, 02 Jun 2002 21:33:57 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "\nYes, I am having trouble figuring out if I have seen the whole thread yet.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> \n> You might want to go to the archives and catch up on the whole thread and\n> its digressions :)\n> \n> On Sun, 2 Jun 2002, Bruce Momjian wrote:\n> \n> > mlw wrote:\n> > > Like I told Marc, I don't care. 
You spec out what you want and I'll write it\n> > > for Windows.\n> > >\n> > > That being said, a SysV IPC interface for native Windows would be kind of cool\n> > > to have.\n> >\n> > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 2 Jun 2002 21:36:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Bruce,\n\nOn Sun, Jun 02, 2002 at 08:49:21PM -0400, Bruce Momjian wrote:\n> mlw wrote:\n> > Like I told Marc, I don't care. You spec out what you want and I'll write it\n> > for Windows. \n> > \n> > That being said, a SysV IPC interface for native Windows would be kind of\n> > cool to have.\n> \n> I am wondering why we don't just use the Cygwin shm/sem code in our\n> project, or maybe the Apache stuff; why bother reinventing the wheel.\n\nAre you referring to cygipc above? If so, then even one of the original\ncygipc authors would discourage this:\n\n http://sources.redhat.com/ml/cygwin-apps/2001-09/msg00017.html\n\nSpecifically, Ludovic Lange states the following:\n\n > I really think the solution would be to start again from scratch\n > another implementation, as was suggested. The way we did it was\n > quick and dirty, the goals weren't to have production systems\n > running on it but only to run prototypes. 
So the internal design\n > (if there is any) may not be adequate for the cygwin project.\n\nHowever, Rob Collins has contributed a MinGW daemon to Cygwin to support\nswitching users, System V IPC, etc. So, this code base may be a more\nsuitable starting point to satisfy PostgreSQL's native Win32 System V\nIPC needs.\n\nJason\n", "msg_date": "Mon, 03 Jun 2002 09:18:40 -0400", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "On Sun, Jun 02, 2002 at 09:33:57PM -0400, mlw wrote:\n> Bruce Momjian wrote:\n> > mlw wrote:\n> > > Like I told Marc, I don't care. You spec out what you want and I'll write\n> > > it for Windows.\n> > >\n> > > That being said, a SysV IPC interface for native Windows would be kind of\n> > > cool to have.\n> > \n> > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> \n> but! in the course of testing some code, I managed to gain some experience\n> with cygwin. I have seen fork() problems with a large number of processes. \n\nSince Cygwin's fork() is implemented with WaitForMultipleObjects(),\nit has a limitation of only 63 children per parent. Also, there can\nbe DLL base address conflicts (causing Cygwin fork() to fail) that are\navoidable by rebasing the appropriate DLLs. AFAICT, Cygwin PostgreSQL is\ncurrently *not* affected by this issue whereas other Cygwin applications\nsuch as Python and Apache are.\n\nJason\n", "msg_date": "Mon, 03 Jun 2002 09:28:48 -0400", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Jason Tishler wrote:\n> \n> On Sun, Jun 02, 2002 at 09:33:57PM -0400, mlw wrote:\n> > Bruce Momjian wrote:\n> > > mlw wrote:\n> > > > Like I told Marc, I don't care. 
You spec out what you want and I'll write\n> > > > it for Windows.\n> > > >\n> > > > That being said, a SysV IPC interface for native Windows would be kind of\n> > > > cool to have.\n> > >\n> > > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> >\n> > but! in the course of testing some code, I managed to gain some experience\n> > with cygwin. I have seen fork() problems with a large number of processes.\n> \n> Since Cygwin's fork() is implemented with WaitForMultipleObjects(),\n> it has a limitation of only 63 children per parent. Also, there can\n> be DLL base address conflicts (causing Cygwin fork() to fail) that are\n> avoidable by rebasing the appropriate DLLs. AFAICT, Cygwin PostgreSQL is\n> currently *not* affected by this issue where as other Cygwin applications\n> such as Python and Apache are.\n\nWhy would not PostgreSQL be affected by this?\n", "msg_date": "Mon, 03 Jun 2002 09:36:51 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Jason Tishler wrote:\n> On Sun, Jun 02, 2002 at 09:33:57PM -0400, mlw wrote:\n> > Bruce Momjian wrote:\n> > > mlw wrote:\n> > > > Like I told Marc, I don't care. You spec out what you want and I'll write\n> > > > it for Windows.\n> > > >\n> > > > That being said, a SysV IPC interface for native Windows would be kind of\n> > > > cool to have.\n> > >\n> > > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> >\n> > but! in the course of testing some code, I managed to gain some experience\n> > with cygwin. I have seen fork() problems with a large number of processes.\n>\n> Since Cygwin's fork() is implemented with WaitForMultipleObjects(),\n> it has a limitation of only 63 children per parent. 
Also, there can\n> be DLL base address conflicts (causing Cygwin fork() to fail) that are\n> avoidable by rebasing the appropriate DLLs. AFAICT, Cygwin PostgreSQL is\n> currently *not* affected by this issue where as other Cygwin applications\n> such as Python and Apache are.\n\n Whatever technical problems there are, we can debate on and\n on whether it's worth working around them in PostgreSQL or fixing\n them in CygWIN or whatever.\n\n The main problem will remain: using PostgreSQL under\n CygWIN requires some UNIX know-how. So a pure Windows\n user/shop needs UNIX knowledge to run our \"Windows port\" of\n PostgreSQL? Interesting definition of \"port\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n", "msg_date": "Mon, 3 Jun 2002 09:44:38 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Hi,\n\nYou may want to have a look at: http://www.garret.ru/~knizhnik/\nThere you will find code for 'Fast synchronized access to shared \nmemory for Windows and for i86 Unix-es'.\n\nkind regards,\n\nRobert\n\n> Bruce,\n>\n> On Sun, Jun 02, 2002 at 08:49:21PM -0400, Bruce Momjian wrote:\n> > mlw wrote:\n> > > Like I told Marc, I don't care. You spec out what you want and I'll\n> > > write it for Windows.\n> > >\n> > > That being said, a SysV IPC interface for native Windows would be kind\n> > > of cool to have.\n> >\n> > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > project, or maybe the Apache stuff; why bother reinventing the wheel.\n>\n> Are you referring to cygipc above? 
If so, they even one of the original\n> cygipc authors would discourage this:\n>\n> http://sources.redhat.com/ml/cygwin-apps/2001-09/msg00017.html\n>\n> Specifically, Ludovic Lange states the following:\n> > I really think the solution would be to start again from scratch\n> > another implementation, as was suggested. The way we did it was\n> > quick and dirty, the goals weren't to have production systems\n> > running on it but only to run prototypes. So the internal design\n> > (if there is any) may not be adequate for the cygwin project.\n>\n> However, Rob Collins has contributed a MinGW daemon to Cygwin to support\n> switching users, System V IPC, etc. So, this code base may be a more\n> suitable starting point to satisfy PostgreSQL's native Win32 System V\n> IPC needs.\n>\n> Jason\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Mon, 3 Jun 2002 16:08:14 +0200", "msg_from": "Robert Schrem <robert.schrem@WiredMinds.de>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports - the 'BEST OPEN SOURCE\n\tdatabase backend'" }, { "msg_contents": "Hi,\n\nSome of you might already know GOODS, programmed\nalmost entirely by Konstantin Knizhnik - if not you should \nreally have a look at it right now (be warned: consuming this \nextraordinary work might change your standards for the \nrequired quality of a 'good programmer' forever. At least \nthis happened to me... 
;):\nhttp://www.garret.ru/~knizhnik/goods.html\n\nSome core features of this backend (as they come to mind):\n-> full ACID transaction support\n-> distributed storage management (->distributed transactions)\n-> multiple reader/single writer (is this called MVCC within PostgreSQL?)\n-> dual client side object cache\n-> online backup (snapshot backup AND permanent backup)\n-> nested transactions on object level\n-> transaction isolation levels on object level\n-> object level shared and exclusive locks\n-> excellent C++ programming interface\n-> WAL\n-> garbage collection for no-longer-referenced database objects\n-> fully thread safe client interface\n-> JAVA client API\n-> very high performance as a result of a lot of fine tuning\n-> asynchronous event notification on object instance modification\n-> extremely high code quality\n-> a one-person effort, hence a very clean design\n-> the most relevant platforms are supported out of the box\n-> complete build is done in less than a minute on my machine\n-> it's documented\n...\n\nThe licensing of this coding wonder: >>> PUBLIC DOMAIN <<<\n\nI've been using GOODS for quite a while now in the context of my\ndevelopment activities for a native XML database and have \nvery promising experiences concerning performance and \nstability of GOODS. E.g.: The performance seems to be \nbetter than Sleepycat's Berkeley DB library - especially \nwith multiple simultaneous transactions...\n\nMaybe the only restriction to use this backend in postgres \nfrom now on: it's completely C++ ...\n\nI'm wondering why there is no SQL frontend yet for this\nexcellent backend...\n\nYou may also want to look at a comparison chart of some \nother backends than GOODS (some of them from the same \nauthor!!! 
I'm wondering how he was able to code all this...): \nhttp://www.garret.ru/~knizhnik/compare.html\n\nkind regards,\n\nRobert\n\n", "msg_date": "Mon, 3 Jun 2002 16:21:56 +0200", "msg_from": "Robert Schrem <robert.schrem@WiredMinds.de>", "msg_from_op": false, "msg_subject": "GOODS - a sensational public domain database backend that deserves a\n\tSQL frontend" }, { "msg_contents": "On Mon, Jun 03, 2002 at 09:36:51AM -0400, mlw wrote:\n> Jason Tishler wrote:\n> > \n> > On Sun, Jun 02, 2002 at 09:33:57PM -0400, mlw wrote:\n> > > Bruce Momjian wrote:\n> > > > mlw wrote:\n> > > > > Like I told Marc, I don't care. You spec out what you want and I'll\n> > > > > write it for Windows.\n> > > > >\n> > > > > That being said, a SysV IPC interface for native Windows would be\n> > > > > kind of cool to have.\n> > > >\n> > > > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > > > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> > >\n> > > but! in the course of testing some code, I managed to gain some experience\n> > > with cygwin. I have seen fork() problems with a large number of processes.\n> > \n> > Since Cygwin's fork() is implemented with WaitForMultipleObjects(),\n> > it has a limitation of only 63 children per parent. Also, there can\n> > be DLL base address conflicts (causing Cygwin fork() to fail) that are\n> > avoidable by rebasing the appropriate DLLs. AFAICT, Cygwin PostgreSQL is\n> > currently *not* affected by this issue where as other Cygwin applications\n> > such as Python and Apache are.\n> \n> Why would not PostgreSQL be affected by this?\n\nSorry, if I was unclear -- I should have used two paragraphs above and\nmaybe a few more words... 
:,)\n\nCygwin PostgreSQL *is* affected by the Cygwin 63 children per parent\nfork limitation.\n\nPostgreSQL *can* be affected by the Cygwin DLL base address conflict\nfork issue, but in my experience (both personal and by monitoring the\nCygwin and pgsql-cygwin lists), no one has been affected yet. The DLL\nbase address conflict is a \"probability\" thing. The more DLLs loaded\nthe greater the chance of a conflict (and fork() failing). Since Cygwin\nPostgreSQL loads only a few DLLs, this has not become an issue (yet).\n\nJason\n", "msg_date": "Mon, 03 Jun 2002 10:29:57 -0400", "msg_from": "Jason Tishler <jason@tishler.net>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "Kostya is a well-qualified programmer. I know him and he is always open for\nchallenges. Some time ago, Teodor and I asked him about GiST support\nin his other database (Gigabase). It was a sort of challenge (we wanted\nto port our contrib/tsearch module) and he did that (using libgist).\nWe work with the Gigabase database embedded into our application under\nWindows (we had a lot of troubles with performance of postgresql under\nCygwin:-) and are quite happy.\n\n\nOn Mon, 3 Jun 2002, Robert Schrem wrote:\n\n> Hi,\n>\n> Some of you might already know GOODS, programmed\n> almost entirely by Konstantin Knizhnik - if not you should\n> really have a look at it right now (be warned: consuming this\n> extraordinary work might change your levels about the\n> required quality of a 'good programmer' forever. At least\n> this happened to me... 
;):\n> http://www.garret.ru/~knizhnik/goods.html\n>\n> Some core features of this backend (as they come to my mind):\n> -> full ACID transaction support\n> -> distributed storage management (->distributed transactions)\n> -> multiple reader/single writer (is this called MVCC within PostgreSQL?)\n> -> dual client side object cache\n> -> online backup (snapshot backup AND permanent backup)\n> -> nested transactions on object level\n> -> transaction isolation levels on object level\n> -> object level shared and exclusive locks\n> -> excellent C++ programming interface\n> -> WAL\n> -> garbage collection for no longer referenced database objects\n> -> fully thread safe client interface\n> -> JAVA client API\n> -> very high performance as a result of a lot of fine tuning\n> -> asynchronous event notification on object instance modification\n> -> extremely high code quality\n> -> a one person effort, hence a very clean design\n> -> the most relevant platforms are supported out of the box\n> -> complete build is done in less than a minute on my machine\n> -> it's documented\n> ...\n>\n> The licensing of this coding wonder: >>> PUBLIC DOMAIN <<<\n>\n> I'm using GOODS quite a while now in the context of my\n> development activities for a native XML database and have\n> very promising experiences concerning performance and\n> stability of GOODS. E.g.: The performance seems to be\n> better than sleepycat's berkeley db library - especially\n> with multiple simultaneous transactions...\n>\n> Maybe the only restriction to use this backend in postgres\n> from now on: it's completely C++ ...\n>\n> I'm wondering why there is no SQL frontend yet for this\n> excellent backend...\n>\n> You may want to look also at a comparison chart of some\n> other backends than GOODS (some of them from the same\n> author!!! 
I'm wondering how he was able to code all this...):\n> http://www.garret.ru/~knizhnik/compare.html\n>\n> kind regards,\n>\n> Robert\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 3 Jun 2002 19:02:52 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: GOODS - a sensational public domain database backend" }, { "msg_contents": "Jason Tishler wrote:\n> \n> On Mon, Jun 03, 2002 at 09:36:51AM -0400, mlw wrote:\n> > Jason Tishler wrote:\n> > >\n> > > On Sun, Jun 02, 2002 at 09:33:57PM -0400, mlw wrote:\n> > > > Bruce Momjian wrote:\n> > > > > mlw wrote:\n> > > > > > Like I told Marc, I don't care. You spec out what you want and I'll\n> > > > > > write it for Windows.\n> > > > > >\n> > > > > > That being said, a SysV IPC interface for native Windows would be\n> > > > > > kind of cool to have.\n> > > > >\n> > > > > I am wondering why we don't just use the Cygwin shm/sem code in our\n> > > > > project, or maybe the Apache stuff; why bother reinventing the wheel.\n> > > >\n> > > > but! in the course of testing some code, I managed to gain some experience\n> > > > with cygwin. I have seen fork() problems with a large number of processes.\n> > >\n> > > Since Cygwin's fork() is implemented with WaitForMultipleObjects(),\n> > > it has a limitation of only 63 children per parent. Also, there can\n> > > be DLL base address conflicts (causing Cygwin fork() to fail) that are\n> > > avoidable by rebasing the appropriate DLLs. 
AFAICT, Cygwin PostgreSQL is\n> > > currently *not* affected by this issue whereas other Cygwin applications\n> > > such as Python and Apache are.\n> >\n> > Why would not PostgreSQL be affected by this?\n> \n> Sorry if I was unclear -- I should have used two paragraphs above and\n> maybe a few more words... :,)\n> \n> Cygwin PostgreSQL *is* affected by the Cygwin 63 children per parent\n> fork limitation.\n> \n> PostgreSQL *can* be affected by the Cygwin DLL base address conflict\n> fork issue, but in my experience (both personal and by monitoring the\n> Cygwin and pgsql-cygwin lists), no one has been affected yet. The DLL\n> base address conflict is a \"probability\" thing. The more DLLs loaded\n> the greater the chance of a conflict (and fork() failing). Since Cygwin\n> PostgreSQL loads only a few DLLs, this has not become an issue (yet).\n\nI'm not sure the DLL load address is a big issue for PostgreSQL, AFAIK no\noptional DLLs will be loaded by Postmaster. So, with fork() it will be a simple\nprocess. A PostgreSQL child will die upon completion, and never execute fork().\n\nMy concern would be the limit on the number of child processes allowed. 63 is\nfar below what would be considered a usable number in production, and as long\nas that is an issue, I don't think anyone would take PostgreSQL seriously.\n\nA Windows version of PostgreSQL must run within the confines of the Windows OS.\nThe reason, IMHO, that no one has found any serious bugs in the cygwin version,\nis because no one is seriously using it. Anyone who *would* seriously use it,\nknows better.\n", "msg_date": "Mon, 03 Jun 2002 17:38:17 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" }, { "msg_contents": "That's what Apache does. Note, on most platforms MAP_ANON is equivalent to\nmmap-ing /dev/zero. 
Solaris for example does not provide MAP_ANON but using\n\nfd=open(/dev/zero)\nmmap(fd, ...)\nclose(fd)\n\nworks just fine.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; \"mlw\" <markw@mohawksoft.com>; \"Marc G.\nFournier\" <scrappy@hub.org>; <pgsql-hackers@postgresql.org>\nSent: Sunday, June 02, 2002 7:47 PM\nSubject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n\n\n> Igor Kovalenko wrote:\n> > It does not have to be anonymous. POSIX also defines shm_open(same\narguments\n> > as open) API which will create named object in whatever location\ncorresponds\n> > to shared memory storage on that platform (object is then grown to\nneeded\n> > size by ftruncate() and the fd is then passed to mmap). The object will\n> > exist in name space and can be detected by subsequent calls to\nshm_open()\n> > with same name. It is not really different from doing open(), but more\n> > portable (mmap() on regular files may not be supported).\n>\n> Actually, I think the best shared memory implemention would be\n> MAP_ANON | MAP_SHARED mmap(), which could be called from the postmaster\n> and passed to child processes.\n>\n> While all our platforms have mmap(), many don't have MAP_ANON, but those\n> that do could use it. You need MAP_ANON to prevent the shared memory\n> from being written to a disk file.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Mon, 3 Jun 2002 16:53:51 -0500", "msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports" } ]
[ { "msg_contents": "Hi all,\n\nThe SQL92 spec has this to say about SET CONSTRAINTS DEFERRED:\n\n a) If ALL is specified, then the constraint mode in TXN of all\n constraints that are DEFERRABLE is set to deferred.\n\n b) Otherwise, the constraint mode in TXN for the constraints\n identified by the <constraint name>s in the <constraint name\n list> is set to deferred.\n\n(section 14.2, page 401)\n\nMy reading of this: if you specify ALL, only the constraints marked\nas DEFERRABLE are affected. If you specify a specific constraint,\nit is deferred, whether the constraint is marked as DEFERRABLE or\nnot.\n\nCurrent Postgres behavior is incompatible with this interpretation:\n\nnconway=> create table pk (id serial primary key);\nNOTICE: CREATE TABLE will create implicit sequence 'pk_id_seq' for SERIAL column 'pk.id'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pk_pkey' for table 'pk'\nCREATE\nnconway=> create table fk (pk_ref int constraint my_constraint references pk);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\nnconway=> begin;\nBEGIN\nnconway=> set constraints my_constraint deferred;\nERROR: Constraint 'my_constraint' is not deferrable\n\nSecond question: SQL92 also specifies this for SET CONSTRAINTS --\n\n 1) If an SQL-transaction is currently active, then let TXN be the\n currently active SQL-transaction. Otherwise, let TXN be the next\n SQL-transaction for the SQL-agent.\n\n(section 14.2, page 400)\n\nIn PostgreSQL, SET CONSTRAINTS only affects the current\ntransaction. 
Is it possible to make this more compliant?\nIf not, it should be noted in the docs for SET CONSTRAINTS.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 3 May 2002 13:15:32 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "set constraints behavior" }, { "msg_contents": "\nOn Fri, 3 May 2002, Neil Conway wrote:\n\n> Hi all,\n>\n> The SQL92 spec has this to say about SET CONSTRAINTS DEFERRED:\n>\n> a) If ALL is specified, then the constraint mode in TXN of all\n> constraints that are DEFERRABLE is set to deferred.\n>\n> b) Otherwise, the constraint mode in TXN for the constraints\n> identified by the <constraint name>s in the <constraint name\n> list> is set to deferred.\n>\n> (section 14.2, page 401)\n>\n> My reading of this: if you specify ALL, only the constraints marked\n> as DEFERRABLE are affected. If you specify a specific constraint,\n> it is deferred, whether the constraint is marked as DEFERRABLE or\n> not.\n>\n> Current Postgres behavior is incompatible with this interpretation:\n\nI think you missed Syntax Rule 2:\n\"The constraint specified by <constraint name> shall be DEFERRABLE\"\n\n", "msg_date": "Fri, 3 May 2002 10:39:28 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: set constraints behavior" }, { "msg_contents": "On Fri, 3 May 2002 10:39:28 -0700 (PDT)\n\"Stephan Szabo\" <sszabo@megazone23.bigpanda.com> wrote:\n> \n> On Fri, 3 May 2002, Neil Conway wrote:\n> > My reading of this: if you specify ALL, only the constraints marked\n> > as DEFERRABLE are affected. 
If you specify a specific constraint,\n> > it is deferred, whether the constraint is marked as DEFERRABLE or\n> > not.\n> >\n> > Current Postgres behavior is incompatible with this interpretation:\n> \n> I think you missed Syntax Rule 2:\n> \"The constraint specified by <constraint name> shall be DEFERRABLE\"\n\nAh, okay. Yeah, I missed that part. Stupid standards, they're\npractically unreadable :-)\n\n(My other question, regarding transaction and SET CONSTRAINTS,\nis still valid)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Fri, 3 May 2002 14:12:01 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: set constraints behavior" }, { "msg_contents": "\nOn Fri, 3 May 2002, Neil Conway wrote:\n\n> On Fri, 3 May 2002 10:39:28 -0700 (PDT)\n> \"Stephan Szabo\" <sszabo@megazone23.bigpanda.com> wrote:\n> >\n> > On Fri, 3 May 2002, Neil Conway wrote:\n> > > My reading of this: if you specify ALL, only the constraints marked\n> > > as DEFERRABLE are affected. If you specify a specific constraint,\n> > > it is deferred, whether the constraint is marked as DEFERRABLE or\n> > > not.\n> > >\n> > > Current Postgres behavior is incompatible with this interpretation:\n> >\n> > I think you missed Syntax Rule 2:\n> > \"The constraint specified by <constraint name> shall be DEFERRABLE\"\n>\n> Ah, okay. Yeah, I missed that part. 
Stupid standards, they're\n> practically unreadable :-)\n>\n> (My other question, regarding transaction and SET CONSTRAINTS,\n> is still valid)\n\nDidn't answer that part because I'm not sure what's best for that\ngiven the way we handle \"out of transaction\" statements (the\nother I remembered from past readings and rechecked).\n\n", "msg_date": "Fri, 3 May 2002 11:46:30 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: set constraints behavior" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Second question: SQL92 also specifies this for SET CONSTRAINTS --\n\n> 1) If an SQL-transaction is currently active, then let TXN be the\n> currently active SQL-transaction. Otherwise, let TXN be the next\n> SQL-transaction for the SQL-agent.\n\n> (section 14.2, page 400)\n\n> In PostgreSQL, SET CONSTRAINTS only affects the current\n> transaction. Is it possible to make this more compliant?\n\nWell, what definition do you propose? I don't think there's currently\nany usefulness to SET CONSTRAINTS outside a transaction block, so we\ncould change its behavior without breaking anything.\n\nGiven that we don't define transaction boundaries the same way SQL92\ndoes (BEGIN isn't SQL), I'm not sure that exact spec compliance is\nthe right consideration here anyway.\n\nNote however that there are proposals floating around to allow a more\nspec-compliant transaction behavior --- eg, a SET variable to cause an\n\"implicit BEGIN\" on any SQL command outside a transaction block.\nIt'd be a good idea to keep that in mind while thinking about how SET\nCONSTRAINTS ought to behave.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 May 2002 15:42:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: set constraints behavior " } ]
[ { "msg_contents": "This small patch reloads dynamic libraries whose modification time is\ngreater than that at the time it was initially loaded. This means that\nconnections do not need to be reinitialised when a library is recompiled.\n\nThere is a problem with this, however: if dlopen()'ing the new library\nfails, the functions registered in the system against this library will also\npresumably break. This is, of course, what would happen if a new\nconnection came in after the library was broken and it attempted to use\nany of the functions in it.\n\nAny ideas about ways around this? Need there be? Is this desired\nbehaviour?\n\nGavin", "msg_date": "Sat, 4 May 2002 21:05:35 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Auto-reload of dynamic libraries" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> This small patch reloads dynamic libraries whose modification time is\n> greater than that at the time it was initially loaded. This means that\n> connections do not need to be reinitialised when a library is recompiled.\n\nIs that a good idea? It's easy to imagine cases where a library is not\ndesigned to be unloaded (eg, it hooks into things in the main backend\nand doesn't have a way to unhook). I'd rather stick with the current\nbehavior of unload/reload only when specifically commanded to.\n\nThe patch as given fails in the same inode/different path case, btw.\nI think you wanted to make the test further down.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 11:36:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Auto-reload of dynamic libraries " } ]
[ { "msg_contents": "Upon doing some inspection of apache 2.x, it seems that me making a SysV\nWindows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n\nThe APR (Apache Portable Runtime) seems to have all the necessary support. The\nproblem is that it has its own API.\n\nWe should find a way to extract the APR out of apache and make it a library\nwithin PostgreSQL. A quick look at the license seems to indicate this is OK.\nShould we notify the Apache guys just to be polite?\n\nIt looks like the APR is pretty analogous to SysV with a few changes, so it\nshould not be too hard to code it into PostgreSQL.\n", "msg_date": "Sat, 04 May 2002 09:46:05 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Native Windows, Apache Portable Runtime" }, { "msg_contents": "mlw wrote:\n> \n> Upon doing some inspection of apache 2.x, it seems that me making a SysV\n> Windows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n> \n> The APR (Apache Portable Runtime) seems to have all the necessary support. The\n> problem is that it has its own API.\n> \n> We should find a way to extract the APR out of apache and make it a library\n> within PostgreSQL. A quick look at the license seems to indicate this is OK.\n> Should we notify the Apache guys just to be polite?\n\nDefinitely. They may also be able to offer some tips and guidance\n(i.e. there may already be a way of splitting it off easily, etc), but\nwe'll never know if we don't say hello. 
(Hey, that rhymes!)\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> It looks like the APR is pretty analogous to SysV with a few changes, so it\n> should not be too hard to code it into PostgreSQL.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 05 May 2002 00:21:41 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "Justin Clift wrote:\n> \n> mlw wrote:\n> >\n> > Upon doing some inspection of apache 2.x, it seems that me making a SysV\n> > Windows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n> >\n> > The APR (Apache Portable Runtime) seems to have all the necessary support. The\n> > problem is that it has its own API.\n> >\n> > We should find a way to extract the APR out of apache and make it a library\n> > within PostgreSQL. A quick look at the license seems to indicate this is OK.\n> > Should we notify the Apache guys just to be polite?\n> \n> Definitely. They may also be able to offer some tips and guidance\n> (i.e. there may already be a way of splitting it off easily, etc), but\n> we'll never know if we don't say hello. (Hey, that rhymes!)\n\nIt is so easy to extract the APR it is silly. They even describe how in the\nREADME.dev. 
Except for NIH, I can't see any reason not to use it.\n", "msg_date": "Sat, 04 May 2002 10:22:42 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Upon doing some inspection of apache 2.x, it seems that me making a SysV\n> Windows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n\n> The APR (Apache Portable Runtime) seems to have all the necessary support.\n\nDoes it? AFAICT they intend to provide mutexes, not counting semaphores.\nTheir implementation atop SysV semaphores would work the way we need\n(ie, remember multiple unlocks arriving before a lock operation), but\nI'm unclear on whether any of the other ones would.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 12:18:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Upon doing some inspection of apache 2.x, it seems that me making a SysV\n> > Windows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n> \n> > The APR (Apache Portable Runtime) seems to have all the necessary support.\n> \n> Does it? AFAICT they intend to provide mutexes, not counting semaphores.\n> Their implementation atop SysV semaphores would work the way we need\n> (ie, remember multiple unlocks arriving before a lock operation), but\n> I'm unclear on whether any of the other ones would.\n\nOk, you got me, they do not provide a semaphore, only mutexes. We should be\nable to use APR in PostgreSQL, I'll implement a semaphore directory for Windows\nand UNIX. 
Someone will have to handle the Netware, beos and OS/2.\n \n(I may be able to do the OS/2 stuff, but it has, in fact, been 6 years since I\neven looked at OS/2 in any form.)\n\nWe could provide a PGSemaphore based on an APR mutex and a counter, but I'm not\nsure of the performance impact. We may want to implement a \"generic\" semaphore\nlike this and one optimized for platforms which we have development resources.\n", "msg_date": "Sat, 04 May 2002 12:28:43 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> We could provide a PGSemaphore based on an APR mutex and a counter,\n> but I'm not sure of the performance impact. We may want to implement a\n> \"generic\" semaphore like this and one optimized for platforms which we\n> have development resources.\n\nOnce we have the internal API redone, it should be fairly easy to\nexperiment with alternatives like that.\n\nI'm planning to work on this today (need a break from thinking about\nschemas ;-)). I'll run with the API I sketched yesterday, since no one\nobjected. Although I'm not planning on doing anything to the API of the\nshared-mem routines, I'll break them out into a replaceable file as\nwell, just in case anyone wants to try a non-SysV implementation.\nWhat I plan to do is:\n\n1. Replace include/storage/ipc.h with \"pg_sema.h\" and \"pg_shmem.h\"\ncontaining the hopefully-platform-independent API definitions, plus\nifdef'd sections along the lines of\n\n\t#ifdef USE_SYSV_SEMAPHORES\n\n\ttypedef struct PGSemaphoreData {\n\t\tint id;\n\t\tint num;\n\t} PGSemaphoreData;\n\n\ttypedef PGSemaphoreData *PGSemaphore;\n\n\t#endif\n\nAFAICS at the moment, only this typedef needs to vary across different\nimplementations as far as the header is concerned.\n\n2. Break out the SysV-dependent code into backend/port/sysv_sema.c\nand backend/port/sysv_shmem.c. 
storage/ipc/ipc.c will either go away\ncompletely or get lots smaller.\n\n3. Extend configure to define the correct USE_foo_SEMAPHORES symbol\nin pg_config.h, and to symlink the correct implementation files to\nbackend/port/pg_sema.c and backend/port/pg_shmem.c. These two \"files\"\nwill be the ones compiled and linked into the backend.\n\nI'm expecting that configure will default to use SysV semas and shared\nmemory unless told differently by the \"template\" script selected for\nthe platform. For instance src/template/darwin might contain something\nlike\n\tUSE_DARWIN_SEMAPHORES=1\n\tSEMA_IMPLEMENTATION=src/backend/port/darwin/sem.c\nto override the default semaphore implementation. Later we might want\nsome more-dynamic way of configuring the sema type, but this seems like\nenough to get started.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 May 2002 12:56:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": " Hi Tom,\n\n I'll do the necessary changes for the BeOS port. On a first look, this\nwill greatly simplify the semaphore layer as the new API maps quite well with\nthe BeOS one.\n\n I find the semaphore API quite clean but have some questions on the\nShared memory one. The Id's passed to PGSharedMemoryIsInUse aren't clear to\nme. 
How are id1 and id2 related to the port parameter of\nPGSharedMemoryCreate ?\n Also why not do the header fillup outside of PGSharedMemoryCreate ?\n\n\n What about using an API similar to the sema one :\n PGShmemHeader * PGSharedMemoryCreate(PGShmem shmem,uint32 size, bool\nmakePrivate, int memKey);\n bool PGSharedMemoryIsInUse(PGShmem shmem);\n\n where PGShmem is an implementation defined struct (including header\ndata).\n\n\n On a side note, after these API changes, BeOS will still need a hack for\nshared memory, because all shared memory segments are in copy on write mode\nin the forked process. One solution could be to have an explicit attach call\nin the forked process :\n PGSharedMemoryAttach(PGShmem shmem);\n\n This could be a no op for SYSV shmem.\n\n This will allow the following calls for each fork to be removed :\n\n beos_before_backend_startup\n beos_backend_startup_failed\n beos_backend_startup\n\n cyril\n\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"mlw\" <markw@mohawksoft.com>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Saturday, May 04, 2002 6:56 PM\nSubject: Re: [HACKERS] Native Windows, Apache Portable Runtime\n\n\n> mlw <markw@mohawksoft.com> writes:\n> > We could provide a PGSemaphore based on an APR mutex and a counter,\n> > but I'm not sure of the performance impact. We may want to implement a\n> > \"generic\" semaphore like this and one optimized for platforms which we\n> > have development resources.\n>\n> Once we have the internal API redone, it should be fairly easy to\n> experiment with alternatives like that.\n>\n> I'm planning to work on this today (need a break from thinking about\n> schemas ;-)). I'll run with the API I sketched yesterday, since no one\n> objected. 
Although I'm not planning on doing anything to the API of the\n> shared-mem routines, I'll break them out into a replaceable file as\n> well, just in case anyone wants to try a non-SysV implementation.\n> What I plan to do is:\n>\n> 1. Replace include/storage/ipc.h with \"pg_sema.h\" and \"pg_shmem.h\"\n> containing the hopefully-platform-independent API definitions, plus\n> ifdef'd sections along the lines of\n>\n> #ifdef USE_SYSV_SEMAPHORES\n>\n> typedef struct PGSemaphoreData {\n> int id;\n> int num;\n> } PGSemaphoreData;\n>\n> typedef PGSemaphoreData *PGSemaphore;\n>\n> #endif\n>\n> AFAICS at the moment, only this typedef needs to vary across different\n> implementations as far as the header is concerned.\n>\n> 2. Break out the SysV-dependent code into backend/port/sysv_sema.c\n> and backend/port/sysv_shmem.c. storage/ipc/ipc.c will either go away\n> completely or get lots smaller.\n>\n> 3. Extend configure to define the correct USE_foo_SEMAPHORES symbol\n> in pg_config.h, and to symlink the correct implementation files to\n> backend/port/pg_sema.c and backend/port/pg_shmem.c. These two \"files\"\n> will be the ones compiled and linked into the backend.\n>\n> I'm expecting that configure will default to use SysV semas and shared\n> memory unless told differently by the \"template\" script selected for\n> the platform. For instance src/template/darwin might contain something\n> like\n> USE_DARWIN_SEMAPHORES=1\n> SEMA_IMPLEMENTATION=src/backend/port/darwin/sem.c\n> to override the default semaphore implementation. 
Later we might want\n> some more-dynamic way of configuring the sema type, but this seems like\n> enough to get started.\n>\n> Comments, better ideas?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Sun, 5 May 2002 21:24:44 +0200", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "On Sat, 2002-05-04 at 21:56, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n> > We could provide a PGSemaphore based on an APR mutex and a counter,\n> > but I'm not sure of the performance impact. We may want to implement a\n> > \"generic\" semaphore like this and one optimized for platforms which we\n> > have development resources.\n> \n> Once we have the internal API redone, it should be fairly easy to\n> experiment with alternatives like that.\n> \n> I'm planning to work on this today (need a break from thinking about\n> schemas ;-)). I'll run with the API I sketched yesterday, since no one\n> objected. Although I'm not planning on doing anything to the API of the\n> shared-mem routines, I'll break them out into a replaceable file as\n> well, just in case anyone wants to try a non-SysV implementation.\n\nWould it be too hard to make them macros, so those which don't need\nshared mem at all (embedded single-user systems) could avoid the\nperformance impact altogether.\n\n----------------\nHannu\n\n\n", "msg_date": "06 May 2002 00:31:04 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> I find the semaphore API quite clean but have some questions on the\n> Shared memory one. The Id's passed to PGSharedMemoryIsInUse aren't clear to\n> me. 
How are id1 and id2 related to the port parameter of\n> PGSharedMemoryCreate ?\n\nYou can define 'em however you want. For SysV shmem they are the shmem\nkey and id.\n\n> Also why not do the header fillup outside of PGSharedMemoryCreate ?\n\nWell, (a) I wasn't really concerned about defining an all-new API for\nshmem, and (b) I think the header is largely dependent on the semantics\nof SysV shmem anyway. A different shmem implementation might need\ndifferent fields in there.\n\n> What about using an API similar to the sema one :\n> PGShmemHeader * PGSharedMemoryCreate(PGShmem shmem,uint32 size, bool\n> makePrivate, int memKey);\n> bool PGSharedMemoryIsInUse(PGShmem shmem);\n\nHow does that solve the problem of determining whether a *previously*\ncreated shmem block is still in use? The point here is to be able to\ntrace the connection from a data directory to active backends via their\nconnections to a shared memory block --- take a look at\nRecordSharedMemoryInLockFile, which is the flip side of\nSharedMemoryIsInUse.\n\n> On a side note, after these API changes, BeOS will still need a hack for\n> shared memory, because all shared memory segments are in copy on write mode\n> in the forked process. One solution could be to have an explicit attach call\n> in the forked process :\n> PGSharedMemoryAttach(PGShmem shmem);\n\nNo strong feelings about this --- it looks like the same BeOS-specific\nhack under a different name ;-)\n\n> This will allow the following calls for each fork to be removed :\n\n> beos_before_backend_startup\n> beos_backend_startup_failed\n> beos_backend_startup\n\nHow so? 
If those calls were needed before, why won't all three still\nbe needed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 19:21:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "On Sat, 4 May 2002, mlw wrote:\n\n> Upon doing some inspection of apache 2.x, it seems that me making a SysV\n> Windows .DLL for PostgreSQL, while a cool project, would be unnecessary.\n>\n> The APR (Apache Portable Runtime) seems to have all the necessary support. The\n> problem is that it has its own API.\n>\n> We should find a way to extract the APR out of apache and make it a library\n> within PostgreSQL. A quick look at the license seems to indicate this is OK.\n> Should we notify the Apache guys just to be polite?\n>\n> It looks like the APR is pretty analogous to SysV with a few changes, so it\n> should not be too hard to code it into PostgreSQL.\n\nThis is the wrong route to take ... I've already discussed it with the\nApache folk, and, yes, we could include it into our tree, but I would\n*really* like to avoid that ... main reason, there is an active group over\nthere working on this API and I'd rather not have to maintain a second\nversion of it ...\n\nAs is my intention, I'm going to pull out the shared memory stuff that we\nare currently using and putting it into a central file, with a pg_*\nassociated command to match it ... this way, if someone wants to use the\n'standard shared memory' that comes with their OS, they can ... if someone\nwants to use libapr.*, which I've been informed will be released as a\nseparate package from Apache once v1.0 of it is available (and associated\n.dll's), then they will have that option too ...\n\n\n\n\n", "msg_date": "Mon, 6 May 2002 00:41:00 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "\nWell, I guess that just saved *me* alot of work ... thanks ...\n\nOn Sat, 4 May 2002, Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > We could provide a PGSemaphore based on an APR mutex and a counter,\n> > but I'm not sure of the performance impact. We may want to implement a\n> > \"generic\" semaphore like this and one optimized for platforms which we\n> > have development resources.\n>\n> Once we have the internal API redone, it should be fairly easy to\n> experiment with alternatives like that.\n>\n> I'm planning to work on this today (need a break from thinking about\n> schemas ;-)). I'll run with the API I sketched yesterday, since no one\n> objected. Although I'm not planning on doing anything to the API of the\n> shared-mem routines, I'll break them out into a replaceable file as\n> well, just in case anyone wants to try a non-SysV implementation.\n> What I plan to do is:\n>\n> 1. Replace include/storage/ipc.h with \"pg_sema.h\" and \"pg_shmem.h\"\n> containing the hopefully-platform-independent API definitions, plus\n> ifdef'd sections along the lines of\n>\n> \t#ifdef USE_SYSV_SEMAPHORES\n>\n> \ttypedef struct PGSemaphoreData {\n> \t\tint id;\n> \t\tint num;\n> \t} PGSemaphoreData;\n>\n> \ttypedef PGSemaphoreData *PGSemaphore;\n>\n> \t#endif\n>\n> AFAICS at the moment, only this typedef needs to vary across different\n> implementations as far as the header is concerned.\n>\n> 2. Break out the SysV-dependent code into backend/port/sysv_sema.c\n> and backend/port/sysv_shmem.c. storage/ipc/ipc.c will either go away\n> completely or get lots smaller.\n>\n> 3. Extend configure to define the correct USE_foo_SEMAPHORES symbol\n> in pg_config.h, and to symlink the correct implementation files to\n> backend/port/pg_sema.c and backend/port/pg_shmem.c. 
These two \"files\"\n> will be the ones compiled and linked into the backend.\n>\n> I'm expecting that configure will default to use SysV semas and shared\n> memory unless told differently by the \"template\" script selected for\n> the platform. For instance src/template/darwin might contain something\n> like\n> \tUSE_DARWIN_SEMAPHORES=1\n> \tSEMA_IMPLEMENTATION=src/backend/port/darwin/sem.c\n> to override the default semaphore implementation. Later we might want\n> some more-dynamic way of configuring the sema type, but this seems like\n> enough to get started.\n>\n> Comments, better ideas?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Mon, 6 May 2002 00:43:02 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Well, I guess that just saved *me* alot of work ... thanks ...\n\nUh, not yet. Don't you still need a semaphore implementation that\nworks on Windows?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 00:00:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Well, I guess that just saved *me* alot of work ... thanks ...\n> \n> Uh, not yet. Don't you still need a semaphore implementation that\n> works on Windows?\n> \n\nI have a LOT of experience with Windows development. You tell me what you want\nand I'll write it. 
This is not an issue.\n", "msg_date": "Mon, 06 May 2002 00:08:24 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Native Windows, Apache Portable Runtime" }, { "msg_contents": "> \"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> > Also why not do the header fillup outside of PGSharedMemoryCreate ?\n>\n> Well, (a) I wasn't really concerned about defining an all-new API for\n> shmem, and (b) I think the header is largely dependent on the semantics\n> of SysV shmem anyway. A different shmem implementation might need\n> different fields in there.\n\n Are the PGShmemHeader fields only used by PGSharedMemoryCreate ?\n\n>\n> > What about using an API similar to the sema one :\n> > PGShmemHeader * PGSharedMemoryCreate(PGShmem shmem,uint32 size,\nbool\n> > makePrivate, int memKey);\n> > bool PGSharedMemoryIsInUse(PGShmem shmem);\n>\n> How does that solve the problem of determining whether a *previously*\n> created shmem block is still in use? The point here is to be able to\n> trace the connection from a data directory to active backends via their\n> connections to a shared memory block --- take a look at\n> RecordSharedMemoryInLockFile, which is the flip side of\n> SharedMemoryIsInUse.\n>\n\n Ok, I overlooked that, my proposal for PGSharedMemoryIsInUse doesn't\nmake sense (and it doesn't matter on Beos because shared mem segments are\nautomaticaly reaped at the end of the process).\n\n\n> > On a side note, after these API change, Beos will still need an Hack\nfor\n> > shared memory, because all shared memory segments are in copy on write\nmode\n> > in the forked process. 
One solution could be to have an explicit attach\ncall\n> > in the forked process :\n> > PGSharedMemoryAttach(PGShmem shmem);\n>\n> No strong feelings about this --- it looks like the same BeOS-specific\n> hack under a different name ;-)\n>\n> > This will allow the following calls for each fork to removed :\n>\n> > beos_before_backend_startup\n> > beos_backend_startup_failed\n> > beos_backend_startup\n>\n> How so? If those calls were needed before, why won't all three still\n> be needed?\n\n\n   In the current hack, I've to iterate over all sharedmem segments (system\nwide) to find the original one by name. There is a race condition here if\nseveral backend are starting at the same time. beos_before_backend_startup\nbeos_backend_startup_failed acquire / release a semaphore which prevent\nseveral fork at the same time.\n\n\n   With the proposed API, I would be able to store some specific info about\nthe shared mem segment (the beos handle of the original one created by\npostmaster) which will be accessible to the backend after the fork. This\nwill remove the race condition and the need of the three calls. This will\nalso improve multiple backend startup time cause now forks are serialized.\n\n   This was just a thought, current implementation works fine.\n\n\n   cyril\n\n\n\n", "msg_date": "Mon, 6 May 2002 10:21:37 +0200", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "On Mon, 6 May 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Well, I guess that just saved *me* alot of work ... thanks ...\n>\n> Uh, not yet. Don't you still need a semaphore implementation that\n> works on Windows?\n\nYup ... next steps, but I believe that is what Mark is working on ...\n\n\n\n", "msg_date": "Mon, 6 May 2002 08:59:17 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> Are the PGShmemHeader fields only used by PGSharedMemoryCreate ?\n\nOther than totalsize and freeoffset, I believe so. I see no reason\nthat a particular port couldn't stick different fields in there if it\nhad a mind to.\n\n>> How does that solve the problem of determining whether a *previously*\n>> created shmem block is still in use?\n\n> Ok, I overlooked that, my proposal for PGSharedMemoryIsInUse doesn't\n> make sense (and it doesn't matter on Beos because shared mem segments are\n> automaticaly reaped at the end of the process).\n\nWell, SharedMemoryIsInUse is *not* just about ensuring that the shared\nmemory gets reaped. The point is to ensure that you can't start a new\npostmaster until the last old backend is gone. (Consider situations\nwhere the parent postmaster process crashes, or perhaps is kill -9'd\nby a careless DBA, but there are still active backends. We want to\ndetect that situation and ensure that a new postmaster will refuse to\nstart.)\n\n>> How so? If those calls were needed before, why won't all three still\n>> be needed?\n\n> In the current hack, I've to iterate over all sharedmem segments (system\n> wide) to find the original one by name. There is a race condition here if\n> several backend are starting at the same time. beos_before_backend_startup\n> beos_backend_startup_failed acquire / release a semaphore which prevent\n> several fork at the same time.\n\nDoes keeping the shmem segment name around solve that? 
Seems like you\ndon't need a PGShmemHeader field for that; just store it in a static\nvariable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 09:36:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "> Well, SharedMemoryIsInUse is *not* just about ensuring that the shared\n> memory gets reaped. The point is to ensure that you can't start a new\n> postmaster until the last old backend is gone. (Consider situations\n> where the parent postmaster process crashes, or perhaps is kill -9'd\n> by a careless DBA, but there are still active backends. We want to\n> detect that situation and ensure that a new postmaster will refuse to\n> start.)\n\n Yes I remember that now (the current code do that correctly).\n\n>\n> >> How so? If those calls were needed before, why won't all three still\n> >> be needed?\n>\n> > In the current hack, I've to iterate over all sharedmem segments\n(system\n> > wide) to find the original one by name. There is a race condition here\nif\n> > several backend are starting at the same time.\nbeos_before_backend_startup\n> > beos_backend_startup_failed acquire / release a semaphore which prevent\n> > several fork at the same time.\n>\n> Does keeping the shmem segment name around solve that? Seems like you\n> don't need a PGShmemHeader field for that; just store it in a static\n> variable.\n\nNo the name is not enough, I need the beos handle for each shared mem\nsegment. I'll try to find a cleaner solution using existing API.\n\n\n cyril\n\n", "msg_date": "Mon, 6 May 2002 19:51:58 +0200", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime " }, { "msg_contents": "On Mon, 6 May 2002, mlw wrote:\n\n> Tom Lane wrote:\n> >\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > Well, I guess that just saved *me* alot of work ... 
thanks ...\n> >\n> > Uh, not yet. Don't you still need a semaphore implementation that\n> > works on Windows?\n> >\n>\n> I have a LOT of experience with Windows development. You tell me what you want\n> and I'll write it. This is not an issue.\n\nappropriate sem* replacements in Windows?\n\n\n", "msg_date": "Mon, 6 May 2002 20:20:09 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Native Windows, Apache Portable Runtime" } ]
[ { "msg_contents": "Although I can't tell for sure, I really don't think it's the output of the\nUPDATE 0 that is causing the problem. I configured the server to log all\nqueries last night, and it looks to me like it (MS Access) is doing stupid\nstuff. (Like issuing a select on all fields (but not *), and then issuing a\ngiant select to make sure that all the records are still there.) When I \n\nAt this point, I'm suspecting that the problem may be more related to my\ninexperience with Access than with postgres. Tom had thought that the\nproblem might be related to the lack of a column that access recognized as\nthe unique identifier, and I finally found a page last night that discusses\nthe exact behavior that I'm seeing. (Although I still haven't figured out\nhow to fix it.)\n\nhttp://joelburton.com/resources/pgaccess/faq.html has some mention of the\nproblem and a link to the MS knowledge base, but I'm seeing behavior from\nthe MS client that leads me to believe the problem is closer to the user. :)\n(I tried to set up the view so that the user couldn't change the ID or set\nthe timestamp of the record (each record is a work journal entry, so there's\nan ID as well as a timestamp.))\n\n-ron \"who's off to sub to pgsql-odbc now\"\n\n\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp] \n> Sent: Saturday, May 04, 2002 7:09 AM\n> To: Tom Lane\n> Cc: Ron Snyder; pgsql-general@postgresql.org; pgsql-hackers\n> Subject: RE: [GENERAL] Using views and MS access via odbc \n> \n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > \n> > \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > > If you'd not like to change the behavior, I would change it, OK ? \n> > \n> > To what? 
I don't want to simply undo the 7.2 change.\n> \n> What I'm thinking is the following makeshift fix.\n> I expect it solves Ron's case though I'm not sure.\n> Returning UPDATE 0 seem to make no one happy.\n> \n> regards,\n> Hiroshi Inoue\n> \n> *** postgres.c.orig\tThu Feb 28 08:17:01 2002\n> --- postgres.c\tSat May 4 22:53:03 2002\n> ***************\n> *** 805,811 ****\n> \t\t\t\t\tif (DebugLvl > 1)\n> \t\t\t\t\t\telog(DEBUG, \n> \"ProcessQuery\");\n> \n> ! \t\t\t\t\tif (querytree->originalQuery)\n> \t\t\t\t\t{\n> \t\t\t\t\t\t/* original \n> stmt can override default tag string */\n> \t\t\t\t\t\t\n> ProcessQuery(querytree, plan, dest, completionTag);\n> --- 805,811 ----\n> \t\t\t\t\tif (DebugLvl > 1)\n> \t\t\t\t\t\telog(DEBUG, \n> \"ProcessQuery\");\n> \n> ! \t\t\t\t\tif \n> (querytree->originalQuery || length(querytree_list) == 1)\n> \t\t\t\t\t{\n> \t\t\t\t\t\t/* original \n> stmt can override default tag string */\n> \t\t\t\t\t\t\n> ProcessQuery(querytree, plan, dest, completionTag);\n> \n", "msg_date": "Sat, 4 May 2002 14:40:07 -0700 ", "msg_from": "Ron Snyder <snyder@roguewave.com>", "msg_from_op": true, "msg_subject": "Re: Using views and MS access via odbc " }, { "msg_contents": "> -----Original Message-----\n> From: Ron Snyder [mailto:snyder@roguewave.com]\n> \n> Although I can't tell for sure, I really don't think it's the \n> output of the UPDATE 0 that is causing the problem.\n\nYou may have other problems.\nHowever you can't get expected results anyway \nas long as you are using ordinary updatable views\nin 7.2. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 6 May 2002 08:15:09 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using views and MS access via odbc " } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, May 03, 2002 3:07 PM\n> To: mlw\n> Cc: Marc G. Fournier; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports \n> \n> \n> mlw <markw@mohawksoft.com> writes:\n> > I am writing a Win32 DLL implementation of :\n> \n> > int semget(key_t key, int nsems, int semflg);\n> > int semctl(int semid, int semnum, int cmd, union semun arg);\n> > int semop(int semid, struct sembuf * sops, unsigned nsops);\n> \n> Rather than propagating the SysV semaphore API still further, \n> why don't\n> we kill it now? (I'm willing to keep the shmem API, however.)\n> \n> After looking over the uses of these functions, I believe \n> that we could\n> easily develop a non-SysV-centric internal API. Here's a first cut:\n> \n> 1. Define a struct type PGSemaphore that has implementation-specific\n> contents (the generic code will never look inside it). Operations on\n> semaphores will take \"PGSemaphore *\" arguments. When \n> implementing atop\n> SysV semaphores, PGSemaphore will contain two fields, the semaphore id\n> and semaphore number. In other cases the contents could be different.\n> \n> 2. All PGSemaphore structs will be physically stored in \n> shared memory.\n> This doesn't matter for SysV support, where the id/number are \n> constants\n> anyway; but it will allow implementations based on mutexes.\n> \n> 3. The operations needed are\n> \n> * Reserve semaphores. This will be told the number of semaphores\n> needed. On SysV it will do the necessary semget()s, but on some\n> implementations it might be a no-op. This should also be prepared\n> to clean up after a failed postmaster, if it is possible for sema\n> resources to outlive the creating postmaster.\n> \n> * Create semaphore. Given a pointer to an uninitialized PGSemaphore\n> struct, initialize it to a new semaphore with count 1. 
(On SysV this\n> would hand out the individual semas previously allocated by Reserve.)\n> Note that this is not responsible for allocating the memory occupied\n> by the PGSemaphore struct --- I envision the structs being part of\n> larger objects such as PROC structures.\n> \n> * Release semaphores. Release all resources allocated by previous\n> Reserve and Create operations. This is called when shutting down\n> or when resetting shared memory after a backend crash.\n> \n> * Reset semaphore. Reset an existing PGSemaphore to count zero.\n> \n> * Lock semaphore. Identical to current IpcSemaphoreLock(), except\n> parameter is a PGSemaphore *. See code of that routine for detailed\n> semantics.\n> \n> * Unlock semaphore. Identical to current IpcSemaphoreUnlock(), except\n> parameter is a PGSemaphore *.\n> \n> * Conditional lock semaphore. Identical to current\n> IpcSemaphoreTryLock(), except parameter is a PGSemaphore *.\n> \n> Reserve/create/release would all be called in the postmaster process,\n> so they could communicate via malloc'd private memory (eg, an array\n> of semaphore IDs would be needed in the SysV case). The remaining\n> operations would be invokable by any backend.\n> \n> Comments?\n> \n> I'd be willing to work on refactoring the existing SysV-based code\n> to meet this spec.\n\nIt's already been done. Here is a freely available C++ implementation\n(licensing similar to PostgreSQL):\nhttp://www.cs.wustl.edu/~schmidt/ACE.html\n\n\n", "msg_date": "Sun, 5 May 2002 00:12:02 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports " } ]
[ { "msg_contents": "help\n", "msg_date": "Sun, 05 May 2002 11:40:50 +0300", "msg_from": "Vladimir Zolotykh <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "help" } ]
[ { "msg_contents": "Hi\n\nI found a strange error (at least at first glance I had thought it\nseems so):\n\nmail=# select * from accounts_log where login='trading';\n id | login | debet | credit | when \n------+---------+-------+-----------+------------------------------\n 6289 | trading | 1170 | 1294.9071 | Wed 21 Mar 18:07:19 2001 EET\n(1 row)\n\nmail=# select * from accounts_log where login='trading' and \"when\" = '2001-03-21 18:07:19';\n id | login | debet | credit | when \n------+---------+-------+-----------+------------------------------\n 6289 | trading | 1170 | 1294.9071 | Wed 21 Mar 18:07:19 2001 EET\n(1 row)\n\nmail=# select * from accounts_log where login='trading' and \"when\" >= '2001-03-21 18:07:19';\nERROR: Bad timestamp external representation 'Wed 04 Apr 20:00:56 2001 EEST'\nmail=# \n\nCould you add some comments to this ?\n\nAlso I'd like to question if you don't mind: While now() outputs\n\n Sun 05 May 11:53:44.731416 2002 EEST\n\nIt seems I can't use EEST (Eastern Europe Summer Time) in input:\n\n proba=# select * from temp;\n n | date \n ---+------\n (0 rows)\n\n proba=# \\d temp\n\t\t Table \"temp\"\n Column | Type | Modifiers \n --------+--------------------------+-----------\n n | integer | \n date | timestamp with time zone | \n\n proba=# select * from temp where date = 'Sun 05 May 11:53:44.731416 2002 EEST';\n ERROR: Bad timestamp external representation 'Sun 05 May 11:53:44.731416 2002 EEST'\n proba=# \n\nThe EETDST time zone abbreviation works but it is inconvenient because\nall files produced with pg_dump utility or copy command contains EEST\nand I can't use then without some modifications e.g\n\n $ psql -e -f copy-command.sql proba\n Using pager is off.\n COPY \"temp\" FROM stdin;\n psql:copy-command.sql:1: ERROR: copy: line 2952, Bad timestamp external representation 'Mon 26 Mar 18:45:36 2001\nEEST'\n psql:copy-command.sql:1: lost synchronization with server, resetting connection\n $ \n\nTo be precise, DST time was started at 25 
Mar 2001 at 01:00 UTC for\nour time zone (UTC+2) if it does matter.\n\nCould you suggest something ?\n\nUsing PostgreSQL 7.2 on Slackware 8.0\n\n\nBest regards\n\n-- \nVladimir Zolotykh\n", "msg_date": "Sun, 05 May 2002 12:06:59 +0300", "msg_from": "Vladimir Zolotykh <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "Bad timestamp external representation 'Sun 05 May 11:53:44.731416\n\t2002 EEST'" } ]
[ { "msg_contents": "It is sunday morning and I have been musing about some PostgreSQL issues. As\nsome of you are aware, my dot com, dot died, and I am working on a business\nplan for a consulting company which, amongst other things, will feature\nPostgreSQL. As I am working on the various aspects, some issue pop up about\nPostgreSQL.\n\nPlease don't take any of these personally, they are only my observations, if\nyou say they are non issues I would rather just accept that we disagree than\nget into a nasty fight. They *are* issues to a corporate acceptance, I have\nbeen challenged by IT people about them.\n\n(1) Major version upgrade. This is a hard one, having to dump out and restore a\ndatabase to go from 7.1 to 7.2 or 7.2 to 7.3 is really a hard sell. If a\ncustomer has a very large database, this represents a large amount of\ndown-time. If they are running on an operating system with file-size\nlimitations it is not an easy task. It also means that they have to have\nadditional storage which amount to at least a copy of the whole database.\n\n(2) Update behavior, the least recently updated (LRU) tuple order in storage is\na problem. To have performance degrade as it does from updates is hard to\nexplain to a customer, and quite honestly, tells me I can not recommend\nPostgreSQL for an environment in which the primary behavior is updating data. \n\n[Index] --> [Target]->[LRU]->[1]->[2]->[3]->[MRU]\n\nupdate tbl set foo = x where bar = y\n\nThe most recently updated (MRU) tuple, becomes [4] and the new tuple becomes\nthe MRU tuple.\n\n[Index] --> [Target]->[LRU]->[1]->[2]->[3]->[4]->[MRU]\n\nThe above represents what PostgreSQL seems to currently do. Correct me if I'm\nwrong. (I would love to be wrong here.) 
If we break the list at the beginning\nand put the MRU tuple right after the target tuple (target tuple is the one\nwhich the index points to), say like this:\n\n[Index] --> [Target]->[MRU]->[1]->[2]->[3]->[LRU]\n\nupdate tbl set foo = x where bar = y\n\n[Index] --> [Target]->[MRU]->[4]->[3]->[2]->[1]->[LRU]\n\nAgain, the MRU becomes [4] but, rather than scanning each obsolete tuple to\nfind the end, the target tuple's next value is the MRU. \n\nIf updates and deletes could be handled this way, that would limit the update\nand select performance degradation between vacuums.\n", "msg_date": "Sun, 05 May 2002 10:01:57 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Musings" }, { "msg_contents": "On Sun, 05 May 2002 10:01:57 EDT, the world broke into rejoicing as\nmlw <markw@mohawksoft.com> said:\n> It is sunday morning and I have been musing about some PostgreSQL issues. As\n> some of you are aware, my dot com, dot died, and I am working on a business\n> plan for a consulting company which, amongst other things, will feature\n> PostgreSQL. As I am working on the various aspects, some issue pop up about\n> PostgreSQL.\n> \n> Please don't take any of these personally, they are only my observations, if\n> you say they are non issues I would rather just accept that we disagree than\n> get into a nasty fight. They *are* issues to a corporate acceptance, I have\n> been challenged by IT people about them.\n> \n> (1) Major version upgrade. This is a hard one, having to dump out and\n> restore a database to go from 7.1 to 7.2 or 7.2 to 7.3 is really a\n> hard sell. If a customer has a very large database, this represents a\n> large amount of down-time. If they are running on an operating system\n> with file-size limitations it is not an easy task. 
It also means that\n> they have to have additional storage which amount to at least a copy\n> of the whole database.\n\nAll of these things are true, and what you should throw back at the IT\npeople is the question:\n\n \"So what do you do when you upgrade from Oracle 7 to Oracle 8? How\n about the process of doing major Informix upgrades? Sybase? Does it\n not involve some appreciable amounts of down-time?\"\n\nThere may well be possible improvements to the PostgreSQL upgrade\nprocess; \"zero-downtime, zero-extra space upgrades\" do not seem likely\nto be amongst those things.\n\nThe last time I did an SAP upgrade, there were _five days_ of\ndown-time. Not 15 minutes, not \"none,\" but rather a figure rather close\nto a week.\n\nFor the IT guys to have sour grapes over upgrades requiring some time\nand disk space is unsurprising; for them to pretend it is only a problem\nwith PostgreSQL is just dishonest.\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www.cbbrowne.com/info/\n\"Marketing Division, Sirius Cybernetics Corp: A bunch of mindless\njerks who'll be the first against the wall when the revolution comes.\"\n-- The Hitchhiker's Guide to the Galaxy\n", "msg_date": "Sun, 05 May 2002 10:21:50 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "On Sun, 5 May 2002 cbbrowne@cbbrowne.com wrote:\n\n> On Sun, 05 May 2002 10:01:57 EDT, the world broke into rejoicing as\n> mlw <markw@mohawksoft.com> said:\n> > It is sunday morning and I have been musing about some PostgreSQL issues. As\n> > some of you are aware, my dot com, dot died, and I am working on a business\n> > plan for a consulting company which, amongst other things, will feature\n> > PostgreSQL. 
As I am working on the various aspects, some issue pop up about\n> > PostgreSQL.\n> > \n> > Please don't take any of these personally, they are only my observations, if\n> > you say they are non issues I would rather just accept that we disagree than\n> > get into a nasty fight. They *are* issues to a corporate acceptance, I have\n> > been challenged by IT people about them.\n> > \n> > (1) Major version upgrade. This is a hard one, having to dump out and\n> > restore a database to go from 7.1 to 7.2 or 7.2 to 7.3 is really a\n> > hard sell. If a customer has a very large database, this represents a\n> > large amount of down-time. If they are running on an operating system\n> > with file-size limitations it is not an easy task. It also means that\n> > they have to have additional storage which amount to at least a copy\n> > of the whole database.\n> \n> All of these things are true, and what you should throw back at the IT\n> people is the question:\n> \n> \"So what do you do when you upgrade from Oracle 7 to Oracle 8? How\n> about the process of doing major Informix upgrades? Sybase? Does it\n> not involve some appreciable amounts of down-time?\"\n\n\nThis is most definitely the wrong way of thinking about this. I'm not\nsaying that Mark sets a simple task, but the goals of Postgres should\nnever be limited to the other products out there. \n\nGavin\n\n", "msg_date": "Mon, 6 May 2002 00:50:25 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> [Index] --> [Target]->[LRU]->[1]->[2]->[3]->[MRU]\n\nThis diagram is entirely unrelated to reality. 
See, eg,\nhttp://archives.postgresql.org/pgsql-hackers/2002-05/msg00012.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 10:51:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > [Index] --> [Target]->[LRU]->[1]->[2]->[3]->[MRU]\n> \n\nRE: http://archives.postgresql.org/pgsql-hackers/2002-05/msg00030.php\n\nThere are a few variations, but it seems I am making the same assumptions as\nLincoln Yeoh. So, you are saying that when a search for a specific tuple\nhappens, you have to hit every version of the tuple, no matter what? It isn't a\nlinked list?\n\nI guess I don't understand. Why does it have to visit all of them? If ordering\nthem from newest to oldest, and then take the first transaction ID that is\nsmaller than current transaction id, doesn't that work?\n", "msg_date": "Sun, 05 May 2002 11:33:11 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Musings" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I guess I don't understand. Why does it have to visit all of them?\n\nBecause it doesn't have any way to know in advance which one(s) are\nvisible to it.\n\n> If ordering\n> them from newest to oldest, and then take the first transaction ID that is\n> smaller than current transaction id, doesn't that work?\n\nNo. For starters, we couldn't guarantee that insertion order is the\nsame as transaction commit order. Even if we did, your assumption\nthat commit order is the same as visibility is too simplistic. 
And\nnone of this works if the index isn't unique.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 14:01:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "I said:\n> none of this works if the index isn't unique.\n\nOn the other hand --- if the index *is* unique, and we are checking\nequality on all columns (a fairly easily checked condition), then we\nknow we should retrieve at most one visible tuple. So, without making\nany incorrect assumptions, we could terminate the indexscan after the\nfirst successful match. Hmm ... you might be right that there's a\ncheap win to be had there. I still think that we also need to do\nsomething with propagating tuple deadness flags into the index, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 14:15:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "On Mon, 06 May 2002 00:50:25 +1000, the world broke into rejoicing as\nGavin Sherry <swm@linuxworld.com.au> said:\n> On Sun, 5 May 2002 cbbrowne@cbbrowne.com wrote:\n> > On Sun, 05 May 2002 10:01:57 EDT, the world broke into rejoicing as\n> > mlw <markw@mohawksoft.com> said:\n> > > It is sunday morning and I have been musing about some PostgreSQL issues. As\n> > > some of you are aware, my dot com, dot died, and I am working on a business\n> > > plan for a consulting company which, amongst other things, will feature\n> > > PostgreSQL. As I am working on the various aspects, some issue pop up about\n> > > PostgreSQL.\n> > > \n> > > Please don't take any of these personally, they are only my observations, if\n> > > you say they are non issues I would rather just accept that we disagree than\n> > > get into a nasty fight. They *are* issues to a corporate acceptance, I have\n> > > been challenged by IT people about them.\n> > > \n> > > (1) Major version upgrade. 
This is a hard one, having to dump out and\n> > > restore a database to go from 7.1 to 7.2 or 7.2 to 7.3 is really a\n> > > hard sell. If a customer has a very large database, this represents a\n> > > large amount of down-time. If they are running on an operating system\n> > > with file-size limitations it is not an easy task. It also means that\n> > > they have to have additional storage which amounts to at least a copy\n> > > of the whole database.\n> > \n> > All of these things are true, and what you should throw back at the IT\n> > people is the question:\n> > \n> > \"So what do you do when you upgrade from Oracle 7 to Oracle 8? How\n> > about the process of doing major Informix upgrades? Sybase? Does it\n> > not involve some appreciable amounts of down-time?\"\n\n> This is most definitely the wrong way of thinking about this. I'm not\n> saying that Mark sets a simple task, but the goals of Postgres should\n> never be limited to the other products out there.\n\nApparently you decided to fire back an email before bothering to read\nthe paragraph that followed, which read:\n\n There may well be possible improvements to the PostgreSQL upgrade\n process; \"zero-downtime, zero-extra space upgrades\" do not seem likely\n to be amongst those things.\n\nYes, there may well be improvements possible.
I'd think it unlikely\nthat they'd emerge today or tomorrow, and I think it's silly to assume\nthat all responses must necessarily be of a technical nature.\n\nIT guys that are firing shots to the effect of \"We expect zero time\nupgrades\" are more than likely playing some other agenda than merely\n\"we'd like instant upgrades.\"\n\nFor them to expect instant upgrades when _much_ more expensive systems\noffer nothing of the sort suggests to me that the _true_ agenda has\nnothing to do with upgrade time, and everything to do with FUD.\n\nIf that's the case, and I expect FUD is in play in this sort of\nsituation, then the purely technical response of \"we might try that\nsomeday\" is a Dead Loss of an answer.\n\nIf they refuse to move from Oracle to PostgreSQL because PostgreSQL has\nno \"instant transparent upgrade\" scheme as compared to Oracle, which\n_also_ has no \"instant transparent upgrade,\" then do you realistically\nthink that the lack of a \"instant transparent upgrade\" has ANYTHING to\ndo with the choice?\n\nI'm merely suggesting that suitable questions head back to determine if\nthe question is an honest one, or if it's merely FUD.\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www.cbbrowne.com/info/lisp.html\nWhen man stands on toilet, man is high on pot. -Confucius\n", "msg_date": "Sun, 05 May 2002 15:09:05 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "Tom Lane wrote:\n> No. For starters, we couldn't guarantee that insertion order is the\n> same as transaction commit order. Even if we did, your assumption\n> that commit order is the same as visibility is too simplistic. And\n> none of this works if the index isn't unique.\n\nAhh, I get it, (again, correct me if I am wrong) multiple references in a\nnon-unique index are handled the same way as multiple versions of the same\ntuple. 
When an index entry is found, presumably, all the tuples are loaded, all\nthe unique \"rows\" are identified and the latest \"visible\" version of each of\nthem are returned.\n\nI wonder, is there some way inexpensive ordering up front on updates can help\nincrease select performance? A very good problem indeed.\n", "msg_date": "Sun, 05 May 2002 16:05:33 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Musings" }, { "msg_contents": "I said:\n> On the other hand --- if the index *is* unique, and we are checking\n> equality on all columns (a fairly easily checked condition), then we\n> know we should retrieve at most one visible tuple. So, without making\n> any incorrect assumptions, we could terminate the indexscan after the\n> first successful match. Hmm ... you might be right that there's a\n> cheap win to be had there.\n\nI tried this out on a quick-hack basis of just teaching IndexNext to\nterminate an indexscan once it's gotten a single visible tuple out of\na unique index. It works pretty much as expected, but I didn't see any\nnoticeable improvement in pgbench speed. Investigation with gprof\nled to the following conclusions:\n\n1. pgbench spends an awful lot of its time in _bt_check_unique, which\nI hadn't touched. (AFAICS it couldn't be sped up anyway with this\ntechnique, since in the non-error case it won't find any visible\ntuples.) I think the only real hope for speeding up _bt_check_unique\nis to mark dead index entries so that we can avoid repeated heap_fetches.\n\n2. 
When I said that new index entries would be visited first because\nthey're inserted to the left of existing entries of the same key,\nI forgot that that's only true when there's room for them there.\nThe comments in nbtinsert.c give a more complete picture:\n\n * NOTE: if the new key is equal to one or more existing keys, we can\n * legitimately place it anywhere in the series of equal keys --- in fact,\n * if the new key is equal to the page's \"high key\" we can place it on\n * the next page. If it is equal to the high key, and there's not room\n * to insert the new tuple on the current page without splitting, then\n * we can move right hoping to find more free space and avoid a split.\n * (We should not move right indefinitely, however, since that leads to\n * O(N^2) insertion behavior in the presence of many equal keys.)\n * Once we have chosen the page to put the key on, we'll insert it before\n * any existing equal keys because of the way _bt_binsrch() works.\n\nIf we repeatedly update the same tuple (keeping its index key the same),\nafter awhile the first btree page containing that index key will fill\nup, and subsequently we will tend to insert duplicate entries somewhere\nin the middle of the multiple-page sequence of duplicates. We could\nguarantee that the newest tuple is visited first only if we were\nprepared to split the first btree page in the above-described case,\nrather than looking for subsequent pages with sufficient room to insert\nthe index entry. 
This seems like a bad idea --- it would guarantee\ninefficient space usage in the index, because as soon as we had a series\nof duplicate keys spanning multiple pages, we would *never* attempt to\ninsert into the middle of that series, and thus freed space within the\nseries of pages would go forever unused.\n\nSo my conclusion is that this idea is probably worth doing, but it's not\nearthshaking by itself.\n\nThe major difficulty with doing it in a non-hack fashion is that there\nisn't a clean place to insert the logic. index_getnext and subroutines\ncannot apply the test, because they don't know anything about tuple\nvisibility (they don't get passed the snapshot being used). So without\nrestructuring, we'd have to teach all the couple-dozen callers of\nindex_getnext what to do.\n\nI have been thinking for awhile that we need to restructure the\nindexscan API, however. There is no good reason why index_getnext\n*shouldn't* be responsible for time qual checking, and if it were\nthen it could correctly apply the one-returned-tuple rule. Even\nmore important, it would then become possible for index_getnext to\nalso be the place that detects completely-dead tuples and notifies\nthe index AMs to mark those index entries as uninteresting.\n\nBasically I'd like to make index_getnext have an API essentially the\nsame as heap_getnext. There are a few callers that actually want\nindex_getnext's behavior (fetch index tuples but not heap tuples),\nbut we could create an alternative subroutine for them to use.\n\nA more detailed proposal will follow by and by...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 23:50:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Musings " }, { "msg_contents": "...\n> (1) Major version upgrade. 
This is a hard one, having to dump out and restore a\n> database to go from 7.1 to 7.2 or 7.2 to 7.3 is really a hard sell.\n\nHmm, maybe it would be more acceptable if we charged $40k per license,\nbut refunded $40k if you *want* to dump/reload. Gets that motivation\nlevel up a bit... ;)\n\n - Thomas\n", "msg_date": "Sun, 05 May 2002 23:19:45 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Musings" } ]
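The visibility argument Tom makes in the thread above — that an index scan cannot know in advance which heap tuple versions are visible, so it must inspect each matching version against the snapshot, and that only the unique-index, all-columns-equality case allows stopping at the first visible match — can be sketched with a toy model. All names below (`TupleVersion`, `Snapshot`, `index_scan`) are invented for illustration; this is not PostgreSQL's actual storage API:

```python
# Toy model of the snapshot-visibility rules discussed in the thread.
# Illustrative names only -- NOT PostgreSQL's real data structures.

from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class TupleVersion:
    key: int             # the indexed column value
    payload: str
    xmin: int            # transaction that created this version
    xmax: Optional[int]  # transaction that deleted/updated it, if any

@dataclass
class Snapshot:
    committed: Set[int]  # transactions this snapshot sees as committed

    def visible(self, t: TupleVersion) -> bool:
        # A version is visible iff its creating transaction committed
        # before the snapshot and its deleting transaction (if any) did not.
        if t.xmin not in self.committed:
            return False
        return t.xmax is None or t.xmax not in self.committed

def index_scan(entries: List[TupleVersion], key: int,
               snap: Snapshot, unique: bool = False) -> List[TupleVersion]:
    """Fetch the versions of `key` visible to `snap`.

    Normally every matching entry must be inspected, because neither
    insertion order nor xid order tells us which version this snapshot
    can see.  Only when the index is unique (and all columns are
    constrained by equality) may we stop after the first visible match."""
    result = []
    for t in entries:
        if t.key != key:
            continue
        if snap.visible(t):
            result.append(t)
            if unique:
                break  # at most one visible version can exist
    return result
```

Here a row with key 1 updated once gives two versions: a snapshot taken before the updating transaction committed still sees the old version, a later snapshot sees only the new one — which version "wins" depends entirely on the snapshot, not on the physical order of the index entries.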
[ { "msg_contents": "\nI'm using:\n\nCVSROOT=:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\nStill no tag for 7.2.1.\n\nCould I (again) request that a tag be set for the current public release \nof this product?\n\nCheers.\n\n-- \n\nJack Bates\nPortland, OR, USA\nhttp://www.floatingdoghead.net\n\nGot privacy?\nMy PGP key: http://www.floatingdoghead.net/pubkey.txt\n\n\n", "msg_date": "Sun, 05 May 2002 11:46:09 -0700", "msg_from": "Jack Bates <pgsql@floatingdoghead.net>", "msg_from_op": true, "msg_subject": "STILL LACKING: CVS tag for release 7.2.1" }, { "msg_contents": "On Sunday 05 May 2002 02:46 pm, Jack Bates wrote:\n> CVSROOT=:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n> Still no tag for 7.2.1.\n\n> Could I (again) request that a tag be set for the current public release\n> of this product?\n\nWhy? There is typically a REL tag set for the major, then a REL PATCHES tag \nset for the minors as a collective. If you want to track the current stable, \nyou get the PATCHES tag, if one is set...However, our tags have never been \nconsistent, unfortunately. We have a right hodgepodge of tags, according to \nthe web interface (http://developer.postgresql.org/cvsweb.cgi/pgsql/)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 5 May 2002 21:13:26 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: STILL LACKING: CVS tag for release 7.2.1" }, { "msg_contents": "On Sun, 5 May 2002, Lamar Owen wrote:\n\n> On Sunday 05 May 2002 02:46 pm, Jack Bates wrote:\n> > CVSROOT=:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n>\n> > Still no tag for 7.2.1.\n>\n> > Could I (again) request that a tag be set for the current public release\n> > of this product?\n>\n> Why? ...\n\nAside from being a near-universal \"best practice\", it makes it easier for\nsomeone to analyze whether local patches to 7.2.1 conflict with work that\nthe team has committed. 
This makes it easier for patches to be massaged\nand submitted to the maintainers successfully even if the patch is not\noriginally written for CVS head. Not everyone wants to develop on the\nbleeding edge all the time. Code that has passed local acceptance testing\nneeds to be supported carefully at its existing release level, if at all\npossible.\n\nThere was a tag for the 7.1.2 release, which was my previous baseline. A\nbunch of BETAs leading to 7.2 are tagged. Why not the current public\nrelease?\n\nI'm not here to bully and I apologize if my tone irritated. I'm a\nseasoned software engineer, and I'm happy to help out a bit in areas where\nI am qualified to do so. I submitted a patch last week for an obscure SSL\nissue in libpq and I'm looking at enabling, generally, non-blocking client\nIO over SSL in that library.\n\nBTW - I _LOVE_ 7.2's non-locking VACUUM ANALYZE - many, many thanks!\nRecent murmurings about propagating \"deadness\" of tuples to reduce index\nscan time are quite interesting to me, as is point-in-time recovery.\nEven without these features, PostgreSQL works _very_ well and quite\npredictably for me. I beat on this DBMS very hard, and I have not been\nable to break 7.2[.1] (nor 7.1.2, previously).
Real good stuff.\n\nCheers.\n\n-- \n\nJack Bates\nPortland, OR, USA\nhttp://www.floatingdoghead.net\n\nGot privacy?\nMy PGP key: http://www.floatingdoghead.net/pubkey.txt\n\n", "msg_date": "Mon, 6 May 2002 15:23:38 -0700 (PDT)", "msg_from": "<jack@floatingdoghead.net>", "msg_from_op": false, "msg_subject": "Re: STILL LACKING: CVS tag for release 7.2.1" }, { "msg_contents": "<jack@floatingdoghead.net> writes:\n> Aside from being a near-universal \"best practice\", it makes it easier for\n> someone to analyze whether local patches to 7.2.1 conflict with work that\n> the team has committed.\n\nThere is a 7.2 branch, and I would think that the tip of that branch is\ngenerally what you are interested in if you do not want the HEAD tip.\n\nApplying a tag to indicate exactly what state of that branch got\nreleased as 7.2.1 would be good from a historical-documentation\npoint of view, but I can't see that it has any direct relevance\nfor either current development or maintenance work.\n\nOf course, the real answer to your question is that Marc Fournier does\nthat work, and the rest of us long ago gave up trying to get him to be\nperfectly consistent in his tagging practices ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 May 2002 10:04:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: STILL LACKING: CVS tag for release 7.2.1 " }, { "msg_contents": "hi,\ncan v define our own datatype n use it in PostgreSQL tables?\nShra\n\n", "msg_date": "Wed, 22 May 2002 18:34:03 +0530", "msg_from": "Shra <shravan@yaskatech.com>", "msg_from_op": false, "msg_subject": "None" }, { "msg_contents": "Shra wrote:\n> hi,\n> can v define our own datatype n use it in PostgreSQL tables?\n> Shra\n\n u can\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n", "msg_date": "Wed, 22 May 2002 10:17:51 -0400 (EDT)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "Currently there's an int16 t_natts in HeapTupleHeaderData. This\nnumber is stored on disk for every single tuple. Assuming that the\nnumber of attributes is constant for all tuples of one relation we\nhave a lot of redundancy here.\n\nAlmost everywhere in the sources, where HeapTupleHeader->t_natts is\nused, there is a HeapTuple and/or TupleDesc around. In struct\ntupleDesc there is int natts /* Number of attributes in the tuple */.\nIf we move t_natts from struct HeapTupleHeaderData to struct\nHeapTupleData, we'd have this number whenever we need it and didn't\nhave to write it to disk millions of times.\n\nTwo years ago there have been thoughts about ADD COLUMN and whether it\nshould touch all tuples or just change the metadata. Could someone\ntell me, what eventually came out of this discussion and where I find\nthe relevant pieces of source code, please. What about DROP COLUMN?\n\nIf there is interest in reducing on-disk tuple header size and I have\nnot missed any strong arguments against dropping t_natts, I'll\ninvestigate further. Comments?\n\nOn Fri, 3 May 2002 01:40:42 +0000 (UTC), tgl@sss.pgh.pa.us (Tom Lane)\nwrote:\n> Now if\n>we could get rid of 8 bytes in the header, I'd get excited ;-)\n\nIf this is doable, we arrive at 6 bytes. And what works for t_natts,\nshould also work for t_hoff; that's another byte. Are we getting\nnearer?\n\nServus\n Manfred\n", "msg_date": "Sun, 05 May 2002 23:48:31 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Number of attributes in HeapTupleHeader" }, { "msg_contents": "On Sun, 05 May 2002 23:48:31 +0200\n\"Manfred Koizar\" <mkoi-pg@aon.at> wrote:\n> Two years ago there have been thoughts about ADD COLUMN and whether it\n> should touch all tuples or just change the metadata. 
Could someone\n> tell me, what eventually came out of this discussion and where I find\n> the relevant pieces of source code, please.\n\nSee AlterTableAddColumn() in commands/tablecmds.c\n\n> If there is interest in reducing on-disk tuple header size and I have\n> not missed any strong arguments against dropping t_natts, I'll\n> investigate further. Comments?\n\nI'd definately be interested -- let me know if you'd like any help...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n", "msg_date": "Sun, 5 May 2002 18:07:27 -0400", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "On Sun, 5 May 2002 18:07:27 -0400, Neil Conway\n<nconway@klamath.dyndns.org> wrote:\n>See AlterTableAddColumn() in commands/tablecmds.c\nThanks. Sounds obvious. Should have looked before asking...\nThis doesn't look too promising:\n * Implementation restrictions: because we don't touch the table rows,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n * the new column values will initially appear to be NULLs. (This\n * happens because the heap tuple access routines always check for\n * attnum > # of attributes in tuple, and return NULL if so.)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nScratching my head and pondering on ...\nI'll be back :-)\n\n>I'd definately be interested -- let me know if you'd like any help...\nWell, currently I'm in the process of making myself familiar with the\ncode. That mainly takes hours of reading and searching. Anyway,\nthanks; I'll post here, if I have questions.\n\nServus\n Manfred\n", "msg_date": "Mon, 06 May 2002 00:54:02 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Currently there's an int16 t_natts in HeapTupleHeaderData. This\n> number is stored on disk for every single tuple. 
Assuming that the\n> number of attributes is constant for all tuples of one relation we\n> have a lot of redundancy here.\n\n... but that's a false assumption.\n\nNo, I don't think removing 2 bytes from the header is worth making\nALTER TABLE ADD COLUMN orders of magnitude slower. Especially since\nthe actual savings will be *zero*, unless you can find another 2 bytes\nsomeplace.\n\n> If this is doable, we arrive at 6 bytes. And what works for t_natts,\n> should also work for t_hoff; that's another byte. Are we getting\n> nearer?\n\nSorry, you used up your chance at claiming that t_hoff is dispensable.\nIf we apply your already-submitted patch, it isn't.\n\nThe bigger picture here is that the more redundancy we squeeze out\nof tuple headers, the more fragile the table data structure becomes.\nEven if we could remove t_natts at zero runtime cost, I'd be concerned\nabout the implications for reliability (ie, ability to detect\ninconsistencies) and post-crash data reconstruction. I've spent enough\ntime staring at tuple dumps to be fairly glad that we don't run the\ndata through a compressor ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 19:41:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "> -----Original Message-----\n> From: Manfred Koizar\n> \n> If there is interest in reducing on-disk tuple header size and I have\n> not missed any strong arguments against dropping t_natts, I'll\n> investigate further. Comments?\n\nIf a dbms is proper, it prepares a mechanism from the first\nto handle ADD COLUMN without touching the tuples. 
If the\nmechanism is lost (I believe so) by removing t_natts, I would\nsay good bye to PostgreSQL.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 6 May 2002 08:44:27 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "On Mon, 6 May 2002 08:44:27 +0900\n\"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> > -----Original Message-----\n> > From: Manfred Koizar\n> > \n> > If there is interest in reducing on-disk tuple header size and I have\n> > not missed any strong arguments against dropping t_natts, I'll\n> > investigate further. Comments?\n> \n> If a dbms is proper, it prepares a mechanism from the first\n> to handle ADD COLUMN without touching the tuples. If the\n> mechanism is lost (I believe so) by removing t_natts, I would\n> say good bye to PostgreSQL.\n\nIMHO, the current ADD COLUMN mechanism is a hack.
Besides requiring\n> redundant on-disk data (t_natts), it isn't SQL compliant (because\n> default values or NOT NULL can't be specified), and depends on\n> a low-level kludge (that the storage system will return NULL for\n> any attnums > the # of the attributes stored in the tuple).\n\nIt could be improved if anyone felt like working on it.\n\nHint: instead of returning NULL for col > t_natts, you could instead\nreturn whatever default value is specified for the column... at least\nfor the case of a constant default, which is the main thing people\nare interested in IMHO.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 May 2002 21:48:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "> IMHO, the current ADD COLUMN mechanism is a hack. Besides requiring\n> redundant on-disk data (t_natts), it isn't SQL compliant (because\n> default values or NOT NULL can't be specified), and depends on\n> a low-level kludge (that the storage system will return NULL for\n> any attnums > the # of the attributes stored in the tuple).\n>\n> While instantaneous ADD COLUMN is nice, I think it's counter-\n> productive to not take advantage of a storage space optimization\n> just to preserve a feature that is already semi-broken.\n\nI actually started working on modifying ADD COLUMN to allow NOT NULL and\nDEFAULT clauses. Tom's idea of having col > n_atts return the default\ninstead of NULL is cool - I didn't think of that. My changes would have\nbasically made the plain add column we have at the moment work instantly,\nbut if they specified NOT NULL it would touch every row. 
That way it's up\nto the DBA which one they want (as good HCI should always do).\n\nHowever, now that my SET/DROP NOT NULL patch is in there, it's easy to do\nthe whole add column process, just in a transaction:\n\nBEGIN;\nALTER TABLE foo ADD bar int4;\nUPDATE foo SET bar=3;\nALTER TABLE foo ALTER bar SET NOT NULL;\nALTER TABLE foo ALTER bar SET DEFAULT 3;\nALTER TABLE foo ADD FOREIGN KEY (bar) REFERENCES noik;\nCOMMIT;\n\nWith the advantage that you have full control over every step...\n\nChris\n\n", "msg_date": "Mon, 6 May 2002 10:07:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "I said:\n> Sorry, you used up your chance at claiming that t_hoff is dispensable.\n> If we apply your already-submitted patch, it isn't.\n\nWait, I take that back. t_hoff is important to distinguish how much\nbitmap padding there is on a particular tuple --- but that's really\nonly interesting as long as we aren't forcing dump/initdb/reload.\nIf we are changing anything else about tuple headers, then that\nargument becomes irrelevant anyway.\n\nHowever, I'm still concerned about losing safety margin by removing\n\"redundant\" fields.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 May 2002 00:20:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "Neil Conway wrote:\n> \n> On Mon, 6 May 2002 08:44:27 +0900\n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> > > -----Original Message-----\n> > > From: Manfred Koizar\n> > >\n> > > If there is interest in reducing on-disk tuple header size and I have\n> > > not missed any strong arguments against dropping t_natts, I'll\n> > > investigate further. Comments?\n> >\n> > If a dbms is proper, it prepares a mechanism from the first\n> > to handle ADD COLUMN without touching the tuples.
If the\n> > machanism is lost(I believe so) by removing t_natts, I would\n> > say good bye to PostgreSQL.\n> \n> IMHO, the current ADD COLUMN mechanism is a hack. Besides requiring\n> redundant on-disk data (t_natts), it isn't SQL compliant (because\n> default values or NOT NULL can't be specified), and depends on\n> a low-level kludge (that the storage system will return NULL for\n> any attnums > the # of the attributes stored in the tuple).\n\nI think it's neither a hack nor a kludge.\nThe value of data which are non-existent at the appearance\nis basically unknown. So there could be an implementation\nof ALTER TABLE ADD COLUMN .. DEFAULT which doesn't touch\nexistent tuples at all as Oracle does.\nThough I don't object to touch tuples to implement ADD COLUMN\n.. DEFAULT, please don't change the existent stuff together.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Tue, 07 May 2002 10:08:51 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "I think the real trick is keeping track of the difference between:\n\nbegin;\nALTER TABLE tab ADD COLUMN col1 int4 DEFAULT 4;\ncommit;\n\nand\n\nbegin;\nALTER TABLE tab ADD COLUMN col1;\nALTER TABLE tab ALTER COLUMN col1 SET DEFAULT 4;\ncommit;\n\nThe first should populate the column with the value of '4', the second\nshould populate the column with NULL and have new entries with default\nof 4.\n\nNot to mention\nbegin;\nALTER TABLE tab ADD COLUMN col1 DEFAULT 5;\nALTER TABLE tab ALTER COLUMN col1 SET DEFAULT 4;\ncommit;\n\nNew tuples with default value of 4, but the column creation should\nhave 5.\n--\nRod\n----- Original Message -----\nFrom: \"Hiroshi Inoue\" <Inoue@tpf.co.jp>\nTo: \"Neil Conway\" <nconway@klamath.dyndns.org>\nCc: <mkoi-pg@aon.at>; <pgsql-hackers@postgresql.org>\nSent: Monday, May 06, 2002 9:08 PM\nSubject: Re: [HACKERS] Number of attributes in HeapTupleHeader\n\n\n> 
Neil Conway wrote:\n> >\n> > On Mon, 6 May 2002 08:44:27 +0900\n> > \"Hiroshi Inoue\" <Inoue@tpf.co.jp> wrote:\n> > > > -----Original Message-----\n> > > > From: Manfred Koizar\n> > > >\n> > > > If there is interest in reducing on-disk tuple header size and\nI have\n> > > > not missed any strong arguments against dropping t_natts, I'll\n> > > > investigate further. Comments?\n> > >\n> > > If a dbms is proper, it prepares a mechanism from the first\n> > > to handle ADD COLUMN without touching the tuples. If the\n> > > machanism is lost(I believe so) by removing t_natts, I would\n> > > say good bye to PostgreSQL.\n> >\n> > IMHO, the current ADD COLUMN mechanism is a hack. Besides\nrequiring\n> > redundant on-disk data (t_natts), it isn't SQL compliant (because\n> > default values or NOT NULL can't be specified), and depends on\n> > a low-level kludge (that the storage system will return NULL for\n> > any attnums > the # of the attributes stored in the tuple).\n>\n> I think it's neither a hack nor a kludge.\n> The value of data which are non-existent at the appearance\n> is basically unknown. So there could be an implementation\n> of ALTER TABLE ADD COLUMN .. DEFAULT which doesn't touch\n> existent tuples at all as Oracle does.\n> Though I don't object to touch tuples to implement ADD COLUMN\n> .. 
DEFAULT, please don't change the existent stuff together.\n>\n> regards,\n> Hiroshi Inoue\n> http://w2422.nsk.ne.jp/~inoue/\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Mon, 6 May 2002 21:52:30 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "Rod Taylor wrote:\n> \n> I think the real trick is keeping track of the difference between:\n> \n> begin;\n> ALTER TABLE tab ADD COLUMN col1 int4 DEFAULT 4;\n> commit;\n> \n> and\n> \n> begin;\n> ALTER TABLE tab ADD COLUMN col1;\n> ALTER TABLE tab ALTER COLUMN col1 SET DEFAULT 4;\n> commit;\n> \n> The first should populate the column with the value of '4', the second\n> should populate the column with NULL and have new entries with default\n> of 4.\n\nI know the difference. Though I don't love the standard\nspec of the first, I don't object to introducing it.\nMy only anxiety is that the implementation of the first\nwould replace the current implementation of ADD COLUMN\n(without default) together to touch tuples.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Tue, 07 May 2002 11:07:32 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "On Sun, 05 May 2002 19:41:00 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>No, I don't think removing 2 bytes from the header is worth making\n>ALTER TABLE ADD COLUMN orders of magnitude slower.
This\nwill at least benefit those, who run PG on machines with 4 byte\nalignment.\n\n>The bigger picture here is that the more redundancy we squeeze out\n>of tuple headers, the more fragile the table data structure becomes.\n>Even if we could remove t_natts at zero runtime cost, I'd be concerned\n>about the implications for reliability (ie, ability to detect\n>inconsistencies) and post-crash data reconstruction. I've spent enough\n>time staring at tuple dumps to be fairly glad that we don't run the\n>data through a compressor ;-)\n\nWell, that's a matter of taste. You are around for several years and\nyou are used to having natts in each tuple. Others might wish to have\nmore redundant metadata in tuple headers, or less. It's hard to draw\na sharp line here.\nServus\n Manfred\n", "msg_date": "Wed, 08 May 2002 17:29:38 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "On Mon, 6 May 2002 21:52:30 -0400, \"Rod Taylor\" <rbt@zort.ca> wrote:\n>I think the real trick is keeping track of the difference between:\n>\n>begin;\n>ALTER TABLE tab ADD COLUMN col1 int4 DEFAULT 4;\n>commit;\n>\n>begin;\n>ALTER TABLE tab ADD COLUMN col1;\n>ALTER TABLE tab ALTER COLUMN col1 SET DEFAULT 4;\n>commit;\n>[...]\n>begin;\n>ALTER TABLE tab ADD COLUMN col1 DEFAULT 5;\n>ALTER TABLE tab ALTER COLUMN col1 SET DEFAULT 4;\n>commit;\n\nThis starts to get interesting. Wouldn't it be cool, if PG could do\nall these ALTER TABLE statements without touching any existing tuple?\nThis is possible; it needs a feature we could call MVMD (multi version\nmetadata). How could that work? I think of something like:\n\nAn ALTER TABLE statement makes a new copy of the metadata describing\nthe table, modifies the copy and gives it a unique (for this table)\nversion number. 
It does not change or remove old metadata.\n\nEvery tuple knows the current metadata version as of the tuple's\ncreation.\n\nWhenever a tuple is read, the correct version of the tuple descriptor\nis associated to it. All conversions to make the old tuple format\nlook like the current one are done on the fly.\n\nWhen a tuple is updated, this clearly is handled like an insert, so\nthe tuple is converted to the most recent format.\n\nThe version number could be a small (1 byte) integer. If we maintain\nmin and max valid version in the table metadata, we could even allow\nthe version to roll over to 0 after the highest possible value. Max\nversion would be incremented by ALTER TABLE, min version could be\nadvanced by VACUUM.\n\nThe key point to make this work is whether we can keep the runtime\ncost low. I think there should be no problem regarding memory\nfootprint (just a few more tuple descriptors), but cannot (yet)\nestimate the cpu overhead.\n\nWith MVMD nobody could call handling of pre ALTER TABLE tuples a hack\nor a kludge. There would be a well defined concept.\n\nNo, this concept is neither new nor is it mine. I just like the idea,\nand I hope I have described it correctly.\n\nAnd no, I'm not whining that I think I need a feature and want you to\nimplement it for me. I've got myself a shovel and a hoe and I'm ready\nto dig, as soon as the hackers agree, where it makes sense.\n\nOh, just one wish: please try to find friendly words, if you have to\ntell me, that this is all bullshit :-)\n\nServus\n Manfred\n", "msg_date": "Wed, 08 May 2002 18:19:18 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Number of attributes in HeapTupleHeader" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> An ALTER TABLE statement makes a new copy of the metadata describing\n> the table, modifies the copy and gives it a unique (for this table)\n> version number. 
It does not change or remove old metadata.\n\nThis has been discussed before --- in PG terms, it'd mean keeping the\nOID of a rowtype in the tuple header. (No, I won't let you get away\nwith a 1-byte integer. But you could remove natts and hoff, thus\nbuying back 3 of the 4 bytes.)\n\nI was actually going to suggest it again earlier in this thread; but\npeople weren't excited about the idea last time it was brought up,\nso I decided not to bother. It'd be a *lot* of work and a lot of\nbreakage of existing clients (eg, pg_attribute would need to link\nto pg_type not pg_class, pg_class.relnatts would move to pg_type,\netc etc). The flexibility looks cool, but people seem to feel that\nthe price is too high for the actual amount of usefulness.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 May 2002 16:54:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Manfred Koizar\" <mkoi-pg@aon.at>\nCc: \"Rod Taylor\" <rbt@zort.ca>; \"Hiroshi Inoue\" <Inoue@tpf.co.jp>;\n\"Neil Conway\" <nconway@klamath.dyndns.org>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, May 08, 2002 4:54 PM\nSubject: Re: [HACKERS] Number of attributes in HeapTupleHeader\n\n\n> This has been discussed before --- in PG terms, it'd mean keeping\nthe\n> OID of a rowtype in the tuple header. (No, I won't let you get away\n> with a 1-byte integer. But you could remove natts and hoff, thus\n> buying back 3 of the 4 bytes.)\n\nCould the OID be on a per page basis? Rather than versioning each\ntuple, much with a page at a time? 
Means when you update one in a\npage the rest need to be tested to ensure that they have the most\nrecent type, but it certainly makes storage requirements smaller when\nToast isn't involved (8k rows).\n\n> I was actually going to suggest it again earlier in this thread; but\n> people weren't excited about the idea last time it was brought up,\n> so I decided not to bother. It'd be a *lot* of work and a lot of\n> breakage of existing clients (eg, pg_attribute would need to link\n> to pg_type not pg_class, pg_class.relnatts would move to pg_type,\n> etc etc). The flexibility looks cool, but people seem to feel that\n> the price is too high for the actual amount of usefulness.\n\nThere would be no cost if we had an information schema of somekind.\nJust change how the views are made. Getting everything to use the\ninformation schema in the first place is tricky though...\n\n", "msg_date": "Wed, 8 May 2002 17:33:08 -0400", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "On Wed, 8 May 2002 17:33:08 -0400, \"Rod Taylor\" <rbt@zort.ca> wrote:\n>From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n>> This has been discussed before --- in PG terms, it'd mean keeping\n>the\n>> OID of a rowtype in the tuple header. (No, I won't let you get away\n>> with a 1-byte integer. But you could remove natts and hoff, thus\n>> buying back 3 of the 4 bytes.)\n>\n>Could the OID be on a per page basis? Rather than versioning each\n>tuple, much with a page at a time? 
Means when you update one in a\n>page the rest need to be tested to ensure that they have the most\n>recent type, [...]\n\nRod,\n\"to be tested\" is not enough, they'd have to be converted, which means\nthey could grow, thus possibly using up the free space on the page.\nOr did you mean to treat this just like a normal update?\n\nI was rather thinking of some kind of a translation vector: having 1\narray of rowtype OIDs per relation and 1 byte per tuple pointing into\nthis array. But that has been rejected.\n\nSo it seems we are getting off topic. Initially this thread was about\nreducing tuple header size, and now we've arrived at increasing the\nsize by one byte :-)\n\nServus\n Manfred\n", "msg_date": "Thu, 09 May 2002 12:07:58 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Number of attributes in HeapTupleHeader " }, { "msg_contents": "Tom Lane wrote:\n> I said:\n> > Sorry, you used up your chance at claiming that t_hoff is dispensable.\n> > If we apply your already-submitted patch, it isn't.\n> \n> Wait, I take that back. t_hoff is important to distinguish how much\n> bitmap padding there is on a particular tuple --- but that's really\n> only interesting as long as we aren't forcing dump/initdb/reload.\n> If we are changing anything else about tuple headers, then that\n> argument becomes irrelevant anyway.\n> \n> However, I'm still concerned about losing safety margin by removing\n> \"redundant\" fields.\n\nI just wanted to comment that redundancy in the tuple header, while\nadding a very marginal amount to stability, is really too high a cost. \nIf we can save 4 bytes on every row stored, I think that is a clear win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jun 2002 14:23:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Number of attributes in HeapTupleHeader" } ]
[ { "msg_contents": "On Sat, 2002-05-04 at 21:56, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n> > We could provide a PGSemaphore based on an APR mutex and a counter,\n> > but I'm not sure of the performance impact. We may want to implement\na\n> > \"generic\" semaphore like this and one optimized for platforms which\nwe\n> > have development resources.\n> \n> Once we have the internal API redone, it should be fairly easy to\n> experiment with alternatives like that.\n> \n> I'm planning to work on this today (need a break from thinking about\n> schemas ;-)). I'll run with the API I sketched yesterday, since no\none\n> objected. Although I'm not planning on doing anything to the API of\nthe\n> shared-mem routines, I'll break them out into a replaceable file as\n> well, just in case anyone wants to try a non-SysV implementation.\n\nWould it be too hard to make them macros, so those which don't need\nshared mem at all (embedded single-user systems) could avoid the\nperformance impact altogether?\n\n----------------\nHannu\n\n\n\n", "msg_date": "06 May 2002 12:48:32 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Native Windows, Apache Portable Runtime" } ]