[ { "msg_contents": "> > Are we ready for RC1 yet?\n> \n> I think so. The NO_MKTIME_BEFORE_1970 issue was bothering me, but I\n> feel that's resolved now. (It'd be nice to hear a crosscheck from\n> some AIX users though...)\n\nabstime, tinterval and horology fail on AIX. \nThe rest is now working (AIX 4.3.2 xlc 5.0.0.2).\n\nI am just now rebuilding with removing the #define NO_MKTIME_BEFORE_1970.\nMy feeling is, that there is no difference. Can that be ?\nAttached are the regression diffs for vanilla 7.3b5\n\nAndreas", "msg_date": "Tue, 12 Nov 2002 17:34:09 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: RC1? " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> abstime, tinterval and horology fail on AIX.=20\n\nI would expect them now (without NO_MKTIME_BEFORE_1970) to match the\nsolaris-1947 comparison files for these tests. Could you confirm that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Nov 2002 13:12:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1? " } ]
[ { "msg_contents": "Hello,\nOur application development group has observed what we \nfeel is inconsistent behavior when comparing numeric \ncolumn references to constant/literal values in SQL. \n\nI would appreciate comments on the best approach to \nthis problem that will allow for the highest\nportability of our application code. I have searched\nthe archives and online docs, but so far have not found \nanyone addressing the problem quite this way.\n\nAssume wuActive is a numeric field ( with scale but no\nprecision ) in the table WU:\n select count(wuid) from WU where wuActive = 0 --works fine\n select count(wuid) from WU where wuActive = '0' --works fine\n select count(wuid) from WU where wuActive = '0.0' --works fine\n select count(wuid) from WU where wuActive = 0.0 --throws the \nfollowing exception:\n\n\"Unable to identify an operator '=' for types 'numeric' and 'double\nprecision' You will have to retype this query using an explicit cast\"\n\nSecond, assume tPct is a numeric field ( having scale of 4 and\nprecision of 1 ) in the table T\n select count(tid) from T where tPct > 77 --works fine\n select count(tid) from T where tPct > '77' --works fine\n select count(tid) from T where tPct > '77.5' --works fine\n select count(tid) from T where tPct > 77.5 -- again throws \nthe exception:\n\n\"Unable to identify an operator '>' for types 'numeric' and 'double\nprecision' You will have to retype this query using an explicit cast\"\n\nThis seems to occur regardless of connectivity drivers used \n(ODBC, JDBC, etc..)\n\nI am aware of the use of type casting to force the desired \nbehavior in these situations. 
I have also started to go down \nthe road of creating functions and operators to force numeric \nto numeric comparison operations when comparing numeric to float, \nbut realize that this approach is fraught with pitfalls, in fact \nit is interesting to us to note that with an operator in place \nto force numeric = float comparisons to parse as numeric = numeric, \nwe started getting the opposite behavior. Queries with 'column \nreference' = 0.0 worked fine, but queries with 'column reference' = 0 \nthrew a variant of the previous exception:\n\n\"Unable to identify an operator '=' for types 'numeric' and 'integer'\"\n\nOverall, this behavior appears to be inconsistent and is not \nthe same behavior I have experienced with many other DBMS's.\nSpecifically, it seems strange that the parser does not treat \nvalues 0.0 or 77.5 as numeric(s[,p]) when comparing the values\nto a column reference known to be of type numeric (s,[p]). \n\nIs an unquoted number in the form of NN.N always treated as a \nfloat? If the planner could somehow recognize that the constant/\nliteral value was being compared to a column reference of the\ntype numeric (s,p) and treat the value accordingly, then would\noperator identification no longer be a problem?\n\nWe are looking to maintain a high degree of portability in our \napplication code, and while \"CAST ( expression as type )\" is \nfairly portable, no one here feels that it is a portable as\ncolumn reference = literal/constant value. 
If someone knows\nof a better approach, or can point us to documentation of build or\nrun-time configuration that affects the query planner where this \nissue is concerned, it would be much appreciated.\n\nThanks,\n\nPaul Ogden\nDatabase Administrator/Programmer\nClaresco Corporation\n(510) 549-2290\t \n", "msg_date": "Tue, 12 Nov 2002 09:39:13 -0800", "msg_from": "Paul Ogden <pogden@claresco.com>", "msg_from_op": true, "msg_subject": "Inconsistent or incomplete behavior obverse in where clause" }, { "msg_contents": "Paul,\n\n> \"Unable to identify an operator '=' for types 'numeric' and 'double\n> precision' You will have to retype this query using an explicit cast\"\n\nThis is due, as you surmised, to decimal values defaulting to floats.\n While there is little problem with an = operator for numeric and\nfloat, you would not want an implicit cast for a / operator with\nnumeric and float. As a result, I believe that all numeric and float\noperators have been left undefined.\n\n> I am aware of the use of type casting to force the desired \n> behavior in these situations. I have also started to go down \n> the road of creating functions and operators to force numeric \n> to numeric comparison operations when comparing numeric to float, \n> but realize that this approach is fraught with pitfalls, in fact \n> it is interesting to us to note that with an operator in place \n> to force numeric = float comparisons to parse as numeric = numeric, \n> we started getting the opposite behavior. Queries with 'column \n> reference' = 0.0 worked fine, but queries with 'column reference' = 0\n> \n> threw a variant of the previous exception:\n> \n> \"Unable to identify an operator '=' for types 'numeric' and\n> 'integer'\"\n\nNow, that's interesting. Why would defining a \"numeric = float\" have\nbroken \"numeric = integer\"? 
There's no reason I can think of.\n Perhaps I will try this myself and see if I encounter the same\nproblem, or if your team modified the numeric = integer operator by\nmistake.\n\n> Overall, this behavior appears to be inconsistent and is not \n> the same behavior I have experienced with many other DBMS's.\n> Specifically, it seems strange that the parser does not treat \n> values 0.0 or 77.5 as numeric(s[,p]) when comparing the values\n> to a column reference known to be of type numeric (s,[p]). \n> \n> Is an unquoted number in the form of NN.N always treated as a \n> float? \n\nYes. I believe that this is from the SQL 92 spec; hopefully someone\non this list with a copy of the Guide to the SQL Standard can quote it\nfor you.\n\n> If the planner could somehow recognize that the constant/\n> literal value was being compared to a column reference of the\n> type numeric (s,p) and treat the value accordingly, then would\n> operator identification no longer be a problem?\n\nIt's an interesting idea, and would be wonderful if it could be made to\nwork. However, the challenge of getting the program to correctly\nrecognize the context for all literal values *without* making any wrong\nassumptions that would afffect the data could be substantial.\n\nMost other RDBMSs deal with this, not by any kind of data type\ncontext-sensitivity, but simply by supporting a large number of\nimplicit casts. This approach can have its own perils, as I have\nexperienced with MS SQL Server, where the average of splits for 120,000\ntransactions is significantly different if you accidentally let the\ndatabase implicitly cast the values as Float instead of Numeric.\n\nAs such, there was talk on the Hackers list at one time of *reducing*\nthe number of implicit casts instead of increasing them. 
This would\nobviously make your particular problem even worse, but the proponents\nof reduction point out that implicit casts can get you into real\ntrouble if you're not aware of them, wheras forcing explicit casts just\ngets you error messages.\n\nHmmm ... in fact, I'd think the perfect solution would be a\ncompile-time option or contrib package which allows you to\nenable/disable implicit casts for many data types.\n\n> We are looking to maintain a high degree of portability in our \n> application code, and while \"CAST ( expression as type )\" is \n> fairly portable, no one here feels that it is a portable as\n> column reference = literal/constant value. If someone knows\n> of a better approach, or can point us to documentation of build or\n> run-time configuration that affects the query planner where this \n> issue is concerned, it would be much appreciated.\n\nHopefully someone else will respond to your message as well. I'll\nre-phrase one of your questions for the Hackers list:\n\nQUESTION: Is there any way we could distinguish between literals and\ncolumn references when processing operators? That is, while we would\n*not* want to implicitly convert a float column to numeric for equality\ncomparison, would it be possible to convert a literal value to match\nthe column to which it is compared? Or is literal processing completed\nbefore any expressions are evaluated?\n\n-Josh Berkus\n\n\n\n\n", "msg_date": "Tue, 12 Nov 2002 10:23:02 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent or incomplete behavior obverse in where" }, { "msg_contents": "Paul Ogden <pogden@claresco.com> writes:\n> select count(wuid) from WU where wuActive = 0.0 --throws the \n> following exception:\n\n> \"Unable to identify an operator '=' for types 'numeric' and 'double\n> precision' You will have to retype this query using an explicit cast\"\n\nThis is fixed as of 7.3. 
(We still have related issues for smallint\nand bigint columns, unfortunately.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Nov 2002 13:34:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent or incomplete behavior obverse in where clause " }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Now, that's interesting. Why would defining a \"numeric = float\" have\n> broken \"numeric = integer\"? There's no reason I can think of.\n\nThe problem probably is that the parser now finds two possible\ninterpretations that look equally good to it, so it can't choose.\nIt could coerce the integer constant to numeric (and use numeric=numeric)\nor to float (and use the added numeric=float operator), and there's no\nrule that can break the tie.\n\nIn 7.3 and 7.4 we are actually going in the direction of removing\ncross-data-type operators, not adding them, because they tend to create\ntoo many options for the parser to choose from.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Nov 2002 13:44:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inconsistent or incomplete behavior obverse in where " }, { "msg_contents": "Josh,\nThanks for the reply. Much of what you say is as we expected. \nI see that 7.3 has addressed the \"Unable to identify an operator \n'=' for types 'numeric' and 'double precision'\" problem, but \nI'm not sure how. Context-sensitive approach? Overloaded operator\napproach? Something else ( is there )?\n\nIf the release of 7.3 is soon, perhaps we can get by with the \nband-aid approach of overloading the comparison operators \nuntil such time as the new version is available. 
Production\nfor us is next spring, so maybe we'll be okay on this one.\nThis approach would certainly allow our development team to\nright their code one way.\n\n> \n> \n> Paul,\n> \n> > \"Unable to identify an operator '=' for types 'numeric' and 'double\n> > precision' You will have to retype this query using an explicit cast\"\n> \n> This is due, as you surmised, to decimal values defaulting to floats.\n> While there is little problem with an = operator for numeric and\n> float, you would not want an implicit cast for a / operator with\n> numeric and float. As a result, I believe that all numeric and float\n> operators have been left undefined.\n> \n> > I am aware of the use of type casting to force the desired \n> > behavior in these situations. I have also started to go down \n> > the road of creating functions and operators to force numeric \n> > to numeric comparison operations when comparing numeric to float, \n> > but realize that this approach is fraught with pitfalls, in fact \n> > it is interesting to us to note that with an operator in place \n> > to force numeric = float comparisons to parse as numeric = numeric, \n> > we started getting the opposite behavior. Queries with 'column \n> > reference' = 0.0 worked fine, but queries with 'column reference' = 0\n> > \n> > threw a variant of the previous exception:\n> > \n> > \"Unable to identify an operator '=' for types 'numeric' and\n> > 'integer'\"\n> \n> Now, that's interesting. Why would defining a \"numeric = float\" have\n> broken \"numeric = integer\"? There's no reason I can think of.\n> Perhaps I will try this myself and see if I encounter the same\n> problem, or if your team modified the numeric = integer operator by\n> mistake.\n> \n\nNo, we made no modifications to numeric = integer. In fact, issuing\nDROP OPERATOR (numeric,float8); \ncleared that problem right up. 
And brought us back to square one.\n\n> > Overall, this behavior appears to be inconsistent and is not \n> > the same behavior I have experienced with many other DBMS's.\n> > Specifically, it seems strange that the parser does not treat \n> > values 0.0 or 77.5 as numeric(s[,p]) when comparing the values\n> > to a column reference known to be of type numeric (s,[p]). \n> > \n> > Is an unquoted number in the form of NN.N always treated as a \n> > float? \n> \n> Yes. I believe that this is from the SQL 92 spec; hopefully someone\n> on this list with a copy of the Guide to the SQL Standard can quote it\n> for you.\n> \n> > If the planner could somehow recognize that the constant/\n> > literal value was being compared to a column reference of the\n> > type numeric (s,p) and treat the value accordingly, then would\n> > operator identification no longer be a problem?\n> \n> It's an interesting idea, and would be wonderful if it could be made to\n> work. However, the challenge of getting the program to correctly\n> recognize the context for all literal values *without* making any wrong\n> assumptions that would afffect the data could be substantial.\n> \n> Most other RDBMSs deal with this, not by any kind of data type\n> context-sensitivity, but simply by supporting a large number of\n> implicit casts. This approach can have its own perils, as I have\n> experienced with MS SQL Server, where the average of splits for 120,000\n> transactions is significantly different if you accidentally let the\n> database implicitly cast the values as Float instead of Numeric.\n> \n> As such, there was talk on the Hackers list at one time of *reducing*\n> the number of implicit casts instead of increasing them. This would\n> obviously make your particular problem even worse, but the proponents\n> of reduction point out that implicit casts can get you into real\n> trouble if you're not aware of them, wheras forcing explicit casts just\n> gets you error messages.\n> \n> Hmmm ... 
in fact, I'd think the perfect solution would be a\n> compile-time option or contrib package which allows you to\n> enable/disable implicit casts for many data types.\n> \n\nI think this is a great idea. We're more of the postgres user class \nthan hacker class, so that's out of our scope to undertake, but\nwe'd sure use it.\n\n> > We are looking to maintain a high degree of portability in our \n> > application code, and while \"CAST ( expression as type )\" is \n> > fairly portable, no one here feels that it is a portable as\n> > column reference = literal/constant value. If someone knows\n> > of a better approach, or can point us to documentation of build or\n> > run-time configuration that affects the query planner where this \n> > issue is concerned, it would be much appreciated.\n> \n> Hopefully someone else will respond to your message as well. I'll\n> re-phrase one of your questions for the Hackers list:\n> \n> QUESTION: Is there any way we could distinguish between literals and\n> column references when processing operators? That is, while we would\n> *not* want to implicitly convert a float column to numeric for equality\n> comparison, would it be possible to convert a literal value to match\n> the column to which it is compared? Or is literal processing completed\n> before any expressions are evaluated?\n> \n> -Josh Berkus\n> \n\nThanks for doing that.\n- Paul Ogden\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 12 Nov 2002 10:52:20 -0800", "msg_from": "Paul Ogden <pogden@claresco.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent or incomplete behavior obverse in where" }, { "msg_contents": "\nPaul,\n\n> Thanks for the reply. Much of what you say is as we expected. \n> I see that 7.3 has addressed the \"Unable to identify an operator \n> '=' for types 'numeric' and 'double precision'\" problem, but \n> I'm not sure how. 
Context-sensitive approach? Overloaded operator\n> approach? Something else ( is there )?\n\nA modification of the operators available.\n\n> If the release of 7.3 is soon, perhaps we can get by with the \n> band-aid approach of overloading the comparison operators \n> until such time as the new version is available. Production\n> for us is next spring, so maybe we'll be okay on this one.\n> This approach would certainly allow our development team to\n> right their code one way.\n\n7.3 final is expected before December.\n\n-- \n-Josh Berkus\n\n", "msg_date": "Tue, 12 Nov 2002 11:13:09 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent or incomplete behavior obverse in where" } ]
[ { "msg_contents": "> Wouldn't it work for cntxDirty to be set not by LockBuffer, but by\n> XLogInsert for each buffer that is included in its argument list?\n\nI thought to add separate call to mark context dirty but above\nshould work if all callers to XLogInsert always pass all\nmodified buffers - please check.\n\nVadim\n", "msg_date": "Tue, 12 Nov 2002 10:11:01 -0800", "msg_from": "\"Mikheev, Vadim\" <VMIKHEEV@sectordata.com>", "msg_from_op": true, "msg_subject": "Re: Idea for better handling of cntxDirty" }, { "msg_contents": "\"Mikheev, Vadim\" <VMIKHEEV@sectordata.com> writes:\n>> Wouldn't it work for cntxDirty to be set not by LockBuffer, but by\n>> XLogInsert for each buffer that is included in its argument list?\n\n> I thought to add separate call to mark context dirty but above\n> should work if all callers to XLogInsert always pass all\n> modified buffers - please check.\n\nAFAICT it is safe. There are some places (in sequences and btree)\nwhere not all the modified buffers are explicitly listed in XLogInsert's\narguments, but redo of those types of WAL records will always reinit the\naffected pages anyway. So we don't need to worry about forcing\ncheckpoint to write the pages early.\n\nIn general I don't think this adds any fragility to the system. A WAL\nrecord that is not set up to restore all buffers modified by the logged\noperation would be broken by definition, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Nov 2002 14:49:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea for better handling of cntxDirty " } ]
[ { "msg_contents": "\n> \tI have removed the NO_MKTIME_BEFORE_1970 symbol from irix5.h,\n> rebuilt 7.3b2, and reran the regression. The three time tests\n> (tinterval, horology, abstime) now match the Solaris expected files.\n> \tI checked the timezone files, and the system does not appear to\n> have savings time defined for 1947, but it does report it as such\n> in the PostgreSQL regression tests.\n\nI think that is because both irix and aix seem to use TZ or the current \nyear's DST rules for dates before 1970. Can that be ?\n\nAndreas\n", "msg_date": "Tue, 12 Nov 2002 19:32:48 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Problem with 7.3 on Irix with dates before 1970" } ]
[ { "msg_contents": "> > I think so. The NO_MKTIME_BEFORE_1970 issue was bothering me, but I\n> > feel that's resolved now. (It'd be nice to hear a crosscheck from\n> > some AIX users though...)\n> \n> abstime, tinterval and horology fail on AIX. \n> The rest is now working (AIX 4.3.2 xlc 5.0.0.2).\n> \n> I am just now rebuilding with removing the #define NO_MKTIME_BEFORE_1970.\n\nOk, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the results \nmatch the Solaris files.\n\nAttached is a patch to make AIX match Solaris. Please apply and add AIX to\nthe supported platforms.\n\nThank you\nAndreas\n\nPS: what should we do with the rest of the resultmap entries for no-DST-before-1970 ?", "msg_date": "Tue, 12 Nov 2002 19:49:18 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: RC1? " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Ok, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the\n> results match the Solaris files.\n\nGreat!\n\n> Attached is a patch to make AIX match Solaris. Please apply and add AIX\n> to the supported platforms.\n\nPatch applied to 7.3 and CVS tip --- Bruce, you're maintaining the\nsupported-platforms list, right?\n\n> PS: what should we do with the rest of the resultmap entries for\n> no-DST-before-1970 ?\n\nI can tell you that the hppa entry is correct. I presume the cygwin\nfolks would've mentioned it by now if theirs wasn't.\n\nI suspect we are looking at two different behaviors for systems with no\nold DST data: either assume all before 1970 is standard time (hppa does\nthis) or assume that years before 1970 use the same transition rule as\n1970 (I'll bet that's what Solaris, AIX, etc are doing).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Nov 2002 15:08:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1? 
" }, { "msg_contents": "Tom Lane wrote:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Ok, when #define NO_MKTIME_BEFORE_1970 is removed from aix.h, then the\n> > results match the Solaris files.\n> \n> Great!\n> \n> > Attached is a patch to make AIX match Solaris. Please apply and add AIX\n> > to the supported platforms.\n> \n> Patch applied to 7.3 and CVS tip --- Bruce, you're maintaining the\n> supported-platforms list, right?\n\nAIX updated in 7.3 and CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Nov 2002 15:20:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1?" } ]
[ { "msg_contents": "\n\n if (ic_flag == 1) {\n /*only select those non-IC/Spyder nodes that has full update set*/\n EXEC SQL DECLARE full_dyn_node CURSOR FOR\n SELECT node_name FROM NODE\n WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22\n AND node_id != 0 AND NODE_NAME != :nodename;\n }\n else{\n EXEC SQL DECLARE full_dyn_node CURSOR FOR\n SELECT node_name FROM NODE\n WHERE dynamic_community = 'f'\n AND node_id != 0 AND NODE_NAME != :nodename; (line#493)\n }\n\nthe above code generates the following error:\n\nThe compiler complains:\n../subapi.pgc:493: ERROR: cursor full_dyn_node already defined\n\nsince its envelop'd in an if/else clause, shouldn't it work?\n\n\n\n", "msg_date": "Tue, 12 Nov 2002 14:58:17 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "ecpg \"problem\" ... " }, { "msg_contents": "hi,\n\ni think that ecpg is only text preprocessor. it doesn't understand the c\nsemantics - it goes from the top to the end of the file row by row and\nsees your declaration twice.\n\nkuba\n\nOn Tue, 12 Nov 2002, Marc G. 
Fournier wrote:\n\n>\n>\n> if (ic_flag == 1) {\n> /*only select those non-IC/Spyder nodes that has full update set*/\n> EXEC SQL DECLARE full_dyn_node CURSOR FOR\n> SELECT node_name FROM NODE\n> WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22\n> AND node_id != 0 AND NODE_NAME != :nodename;\n> }\n> else{\n> EXEC SQL DECLARE full_dyn_node CURSOR FOR\n> SELECT node_name FROM NODE\n> WHERE dynamic_community = 'f'\n> AND node_id != 0 AND NODE_NAME != :nodename; (line#493)\n> }\n>\n> the above code generates the following error:\n>\n> The compiler complains:\n> ../subapi.pgc:493: ERROR: cursor full_dyn_node already defined\n>\n> since its envelop'd in an if/else clause, shouldn't it work?\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Tue, 12 Nov 2002 21:05:52 +0100 (CET)", "msg_from": "Jakub Ouhrabka <jouh8664@ss1000.ms.mff.cuni.cz>", "msg_from_op": false, "msg_subject": "Re: ecpg \"problem\" ... " }, { "msg_contents": "Marc,\n\nMarc G. 
Fournier writes:\n > if (ic_flag == 1) {\n > /*only select those non-IC/Spyder nodes that has full update set*/\n > EXEC SQL DECLARE full_dyn_node CURSOR FOR\n > SELECT node_name FROM NODE\n > WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22\n > AND node_id != 0 AND NODE_NAME != :nodename;\n > }\n > else{\n > EXEC SQL DECLARE full_dyn_node CURSOR FOR\n > SELECT node_name FROM NODE\n > WHERE dynamic_community = 'f'\n > AND node_id != 0 AND NODE_NAME != :nodename; (line#493)\n > }\n > \n > the above code generates the following error:\n > \n > The compiler complains:\n > ../subapi.pgc:493: ERROR: cursor full_dyn_node already defined\n > \n > since its envelop'd in an if/else clause, shouldn't it work?\n\nUnfortuantely no, you can only ever have one \"EXEC SQL DECLARE\" for a\ngiven cursor name due to ecpg/ESQL simple parsing. What you would do\nin a situation like this is something like:\n\n if( ic_flag == 1 )\n /* only select those non-IC/Spyder nodes that has full update set */\n sprintf(stmt, \"SELECT node_name FROM NODE WHERE dynamic_community = 'f' AND ic_flag = 'n' AND machine_type != 22 AND node_id != 0 AND NODE_NAME != %s\", nodename);\n else\n sprintf(stmt, \"SELECT node_name FROM NODE WHERE dynamic_community = 'f' AND node_id != 0 AND NODE_NAME != %s\", nodename);\n\n EXEC SQL PREPARE s_statement FROM :stmt;\n EXEC SQL DECLARE full_dyn_node CURSOR FOR s_statement;\n\nRegards, Lee.\n", "msg_date": "Wed, 13 Nov 2002 09:29:58 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "ecpg \"problem\" ... " }, { "msg_contents": "On Tue, Nov 12, 2002 at 02:58:17PM -0400, Marc G. 
Fournier wrote:\n> \n> \n> if (ic_flag == 1) {\n> /*only select those non-IC/Spyder nodes that has full update set*/\n> EXEC SQL DECLARE full_dyn_node CURSOR FOR\n> SELECT node_name FROM NODE\n> WHERE dynamic_community = 'f' AND ic_flag='n' AND machine_type!=22\n> AND node_id != 0 AND NODE_NAME != :nodename;\n> }\n> else{\n> EXEC SQL DECLARE full_dyn_node CURSOR FOR\n> SELECT node_name FROM NODE\n> WHERE dynamic_community = 'f'\n> AND node_id != 0 AND NODE_NAME != :nodename; (line#493)\n> }\n> ...\n> since its envelop'd in an if/else clause, shouldn't it work?\n\nBy definition no. You could compare it to C preprocessor commands like\n#define.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 14 Nov 2002 12:07:25 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg \"problem\" ..." } ]
[ { "msg_contents": "Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to\npgbench?\n\nIf not, I can do it myself and if you are interested, I'll send you the\npatch.\n\n- Curtis\n\n", "msg_date": "Tue, 12 Nov 2002 15:41:40 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Prepare enabled pgbench" }, { "msg_contents": "> Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to\n> pgbench?\n\nAs far as I know, no one is working on that.\n\n> If not, I can do it myself and if you are interested, I'll send you the\n> patch.\n\nThanks. I can commit it for 7.4. BTW, it would be nice if we could\nhave a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\ncould see how PREPARE/EXECUTE could improve the performance...\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 13 Nov 2002 10:50:51 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > Tatsuo, are you or anyone else working on adding PREPARE, EXECUTE support to\n> > pgbench?\n> \n> As far as I know, no one is working on that.\n> \n> > If not, I can do it myself and if you are interested, I'll send you the\n> > patch.\n> \n> Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n> have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n> could see how PREPARE/EXECUTE could improve the performance...\n\nWe could probably just run before-after patch tests to see the\nperformance change. I am afraid adding that switch into the code may\nmake it messy.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Nov 2002 20:59:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "> > Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n> > have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n> > could see how PREPARE/EXECUTE could improve the performance...\n> \n> We could probably just run before-after patch tests to see the\n> performance change. I am afraid adding that switch into the code may\n> make it messy.\n\nBut one of the purposes of pgbench is examining performance on\ndifferent environments, doesn't it? I'm afraid hard coded\nPREPARE/EXECUTE makes it harder.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 13 Nov 2002 11:16:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > > Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n> > > have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n> > > could see how PREPARE/EXECUTE could improve the performance...\n> > \n> > We could probably just run before-after patch tests to see the\n> > performance change. I am afraid adding that switch into the code may\n> > make it messy.\n> \n> But one of the purposes of pgbench is examining performance on\n> different environments, doesn't it? I'm afraid hard coded\n> PREPARE/EXECUTE makes it harder.\n\nI was just thinking that pgbench is for measuring code changes, not for\ntesting changes _in_ pgbench. Once we know the performance difference\nfor PERFORM, would we still keep the code in pgbench? Maybe to test\nlater, I guess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Nov 2002 21:20:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "> > But one of the purposes of pgbench is examining performance on\n> > different environments, doesn't it? I'm afraid hard coded\n> > PREPARE/EXECUTE makes it harder.\n> \n> I was just thinking that pgbench is for measuring code changes, not for\n> testing changes _in_ pgbench. Once we know the performance difference\n> for PERFORM, would we still keep the code in pgbench? Maybe to test\n> later, I guess.\n\nMy concern is PREPARE/EXECUTE may NOT always improve the\nperformance. I guess we have very few data to judge PREPARE/EXECUTE is\ngood or not. Moreover PREPARE/EXECUTE might be improved in the\nfuture. If that happens, keeping that switch would help examining the\neffect, no?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 13 Nov 2002 11:32:35 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > > But one of the purposes of pgbench is examining performance on\n> > > different environments, doesn't it? I'm afraid hard coded\n> > > PREPARE/EXECUTE makes it harder.\n> > \n> > I was just thinking that pgbench is for measuring code changes, not for\n> > testing changes _in_ pgbench. Once we know the performance difference\n> > for PERFORM, would we still keep the code in pgbench? Maybe to test\n> > later, I guess.\n> \n> My concern is PREPARE/EXECUTE may NOT always improve the\n> performance. I guess we have very few data to judge PREPARE/EXECUTE is\n> good or not. Moreover PREPARE/EXECUTE might be improved in the\n> future. If that happens, keeping that switch would help examining the\n> effect, no?\n\nIt would. I was just concerned that having both in there would be a\nmaintenance headache and would perhaps double the amount of code and\nmake it complicated. Let's see what the author does and we can decide\nthen.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Nov 2002 21:42:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "> > My concern is PREPARE/EXECUTE may NOT always improve the\n> > performance. I guess we have very few data to judge PREPARE/EXECUTE is\n> > good or not. Moreover PREPARE/EXECUTE might be improved in the\n> > future. If that happens, keeping that switch would help examining the\n> > effect, no?\n> \n> It would. I was just concerned that having both in there would be a\n> maintenance headache and would perhaps double the amount of code and\n> make it complicated. Let's see what the author does and we can decide\n> then.\n\nOk.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 13 Nov 2002 11:55:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n> have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n> could see how PREPARE/EXECUTE could improve the performance...\n\nThat is a *must*. Otherwise, you've simply made an arbitrary change\nin the benchmark ... which is no benchmark at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 00:08:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n> > have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n> > could see how PREPARE/EXECUTE could improve the performance...\n\ntom lane replies:\n> That is a *must*. Otherwise, you've simply made an arbitrary change\n> in the benchmark ... which is no benchmark at all.\n> \n> \t\t\tregards, tom lane\n\nI will add it as a switched option.\n\nIt should be possible to keep most of the code common for the two cases.\n\n- Curtis\n", "msg_date": "Wed, 13 Nov 2002 10:24:36 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Prepare enabled pgbench " }, { "msg_contents": "Hi Curtis,\n\nHave you had time to get this done?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nCurtis Faith wrote:\n>>Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>>\n>>>Thanks. I can commit it for 7.4. BTW, it would be nice if we could\n>>>have a switch to turn on/off PREPARE/EXECUTE in pgbench so that we\n>>>could see how PREPARE/EXECUTE could improve the performance...\n> \n> \n> tom lane replies:\n> \n>>That is a *must*. Otherwise, you've simply made an arbitrary change\n>>in the benchmark ... which is no benchmark at all.\n>>\n>>\t\t\tregards, tom lane\n> \n> \n> I will add it as a switched option.\n> \n> It should be possible to keep most of the code common for the two cases.\n> \n> - Curtis\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 20 Jan 2003 14:34:16 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Prepare enabled pgbench" } ]
[ { "msg_contents": "I have been doing some research about how to create new routines for\nstring collation and character case mapping that would allow us to break\nout of the one-locale-per-process scheme. I have found that the Unicode\nstandard provides a lot of useful specifications and data for this. The\nUnicode data can be mapped to the other character sets (so you don't\nactually have to use Unicode), and it should also be future-proof, in case\none day the entire world uses Unicode.\n\nI am looking for replacements for the following C functions:\n\nisalpha(), isdigit(), etc. --> \"character properties\"\ntoupper(), tolower() --> \"case mapping\"\nstrcoll(), strxfrm() --> \"string collation\"\n\n(Those should be all the cases relying on LC_CTYPE and LC_COLLATE that\nwe're interested in.)\n\nWhat we basically need is an API that allows passing locale and character\nencoding as parameters, so they can be used flexibly in a server that\nmight be using many locales and encodings.\n\n(These musings do not cover how to actually implement per-column or\nper-datum locales, but they represent prerequisite work.)\n\nCharacter properties are easy to handle, because they aren't\nlocale-dependent at all. (A letter is a letter and a digit is a digit any\nway you look at it.) The Unicode standard defines a character category\nfor each character which can be mapped to the POSIX categories (alpha,\ndigit, punct, blank, etc.). (Some details on the exact mapping are a bit\nfuzzy to me, because the POSIX standard is a bit vague on these points,\nbut that isn't a terribly hard problem to resolve.)\n\nI imagine that for each encoding supported by the PostgreSQL server we\ncreate a simple lookup array indexed by character code. Those arrays can\nbe created from Unicode data and conversion mapping files using a bit of\nPerl.\n\nCase mapping is only minimally locale-dependent. For most languages, the\ncorrespondence between lower-case and upper-case letters is the same (even\nif the language wouldn't normally use many of those letters). The Unicode\nstandard only enumerates a handful of exceptions, which can easily be\nhard-coded. (The only exception we will really be interested in is the\nmapping of the Turkish i and I. The other exceptions mostly apply to\nesoteric Unicode features about which way accents are combined --\nsomething we don't support yet, to my knowledge.)\n\nThus, we can create for each supported encoding a pair of functions\n(tolower/toupper) that maps an input character to the corresponding\nlower/upper-case character. The function only needs to cover the Turkish\nexception if all the involved characters are contained in the respective\ncharacter set (which they aren't, for example, in ISO 8859-1). Again, we\ncan create these functions from Unicode data and conversion map files\nusing a bit of Perl.\n\nI've already created prototypes for the mentioned character property and\ncase mapping tables. The only thing that remains to be done is figuring\nout a reasonable space/time tradeoff.\n\nThe Unicode standard also defines a collation algorithm. The collation\nalgorithm essentially converts a string (of Unicode characters) into a\nsequence of numbers which can be compared with, say, memcmp -- much like\nstrxfrm() does. The assignment of numbers for characters is the tricky\npart. The Unicode standard defines a default collation order, which is a\nnice compromise but not appropriate for all languages. Thus, making up\nvarious linguistically correct collation tables is the laborious part of\nthis proposal.\n\nThere are a couple of possible implementation approaches for the collation\nalgorithm:\n\nWe can follow the earlier ideas and preprocess collation tables for each\ncombination of language and character set. Considering that my system\nknows 69 different languages and PostgreSQL supports 28 server-side\ncharacter sets, this would give far more than a thousand combinations.\n\nOr we could pick out the combinations that are actually distinct and\ndeemed to be useful. (For example, Finnish collation with a character set\ntypically used for Vietnamese does not seem useful, although it's\nperfectly possible.) I can't really judge what volume of data and work\nthis would give us.\n\nFinally, we could transcode a given character string to Unicode on the fly\nbefore computing the collation transformation. This would simply require\ntwo transformations instead of the one we need anyway, and it would keep\neverything quite manageable since we'd only need one routine to do it all\nand only one set of collation tables. (Note that the collation problem\ndoes not require round trip transcoding. We only need conversion *to*\nUnicode, which should always be possible for those characters that matter\nin collation.)\n\nSo, any thoughts on these ideas?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 12 Nov 2002 21:18:57 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Collation and case mapping thoughts (long)" } ]
[ { "msg_contents": "Hi,\n\nHas anyone given much thought to improving pg_dump's object order algorithm\nfor 7.4? It seems that now we have dependencies, it should just be a matter\nof doing a breadth-first or depth-first search over the pg_depend table to\ngenerate a valid order of oids.\n\nTo allow for mess-ups in that table, the next step would be to add to the\nend of the list of oids any objects that for whatever reason aren't in the\ndependency system. (Is this possible? Manual hacking can do it\nmethinks...)\n\nDoes this sound like an idea?\n\nI've just become rather frustrated trying to do a test reload of our 7.2.3\ndump into 7.3b5. The problem is all the tsearch types are declared after\nthe tables that actually use them!\n\nChris\n\n", "msg_date": "Wed, 13 Nov 2002 13:33:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "pg_dump in 7.4" }, { "msg_contents": "At 01:33 PM 13/11/2002 +0800, Christopher Kings-Lynne wrote:\n>Does this sound like an idea?\n\nIt does, but in keeping with allowing pg_restore to be quite flexible, I'd \nlike to see the dependency data stored in the dump file, then processed at \nrestore-time.\n\n\n>I've just become rather frustrated trying to do a test reload of our 7.2.3\n>dump into 7.3b5. The problem is all the tsearch types are declared after\n>the tables that actually use them!\n\npg_dump already has rudimentary dependency tracking (one level deep); each \nitem can have a list of oid's it depends on. You *could* patch it to add \nthe types to the table dependencies.\n\nIn the future I'd imagine we'll just dump the OIDs of all first level \ndependencies for each object, then at restore-time, process them in \nwhatever order the user requests (defaulting to dependency-order).\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 13 Nov 2002 16:40:52 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "> pg_dump already has rudimentary dependency tracking (one level\n> deep); each\n> item can have a list of oid's it depends on. You *could* patch it to add\n> the types to the table dependencies.\n>\n> In the future I'd imagine we'll just dump the OIDs of all first level\n> dependencies for each object, then at restore-time, process them in\n> whatever order the user requests (defaulting to dependency-order).\n\nWell, the problem is that you can add a new type and then add a column to a\nreally old table that uses that type - that causes pain. Lots of other\npeople have also reported the \"view dumped before table it is based on\"\nproblem.\n\nChris\n\n", "msg_date": "Wed, 13 Nov 2002 13:50:43 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Has anyone given much thought to improving pg_dump's object order algorithm\n> for 7.4? It seems that now we have dependencies, it should just be a matter\n> of doing a breadth-first or depth-first search over the pg_depend table to\n> generate a valid order of oids.\n\nI've thought about this a little. Using the pg_depend data would\nobviously give pg_dump a big leg up on the problem, but it's by no means\na trivial task even so. Some things to think about:\n\n* We don't store dependencies for SQL functions to things mentioned in\nthe SQL function body. (Maybe we should, but we don't.) So there's\ndata missing in that case, and possibly other cases.\n\n* pg_dump really ought to still work for pre-7.3 databases. What will\nits fallback strategy be like when it hasn't got pg_depend?\n\n* With ALTER TABLE, CREATE OR REPLACE FUNCTION, etc, it's not hard at\nall to create situations with circular dependencies. Right now pg_dump\nis guaranteed to produce an unusable dump in such cases, but our\nambition should be to make it work. That means pg_dump needs a strategy\nfor breaking circular dependency paths, by itself using ALTER when\nneeded to postpone a reference.\n\n\nThe thought that I'd been toying with is to build a list of inter-object\ndependencies (using pg_depend if available, else fall back on pg_dump's\nnative wit, ie, the rather limited set of dependencies it already\nunderstands). Then do a topological sort, preferring to maintain OID\norder in cases where the object ordering is underspecified. When the\nsort fails (ie, there's a circular dependency) then modify the set of\ndumpable objects by breaking some object into two parts (a base\ndeclaration and an ALTER command); this changes the dependencies too.\nRepeat the sort and adjustment steps until the sort succeeds.\n\nIf you're not familiar with topological sorts, look at Knuth or your\nfavorite algorithms text. There are one or two instances of the method\nin PG already (deadlock detection, for example).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 08:33:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 01:33 PM 13/11/2002 +0800, Christopher Kings-Lynne wrote:\n>> Does this sound like an idea?\n\n> It does, but in keeping with allowing pg_restore to be quite flexible, I'd \n> like to see the dependency data stored in the dump file, then processed at \n> restore-time.\n\nNo, because that doesn't help for the plain-text-dump case. I think we\nshould solve the dependencies during pg_dump and output the objects in a\nsafe order. pg_restore can keep its options for rearranging the order,\nbut those should become vestigial, or at least only needed in bizarre\ncases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 08:36:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "On Wed, 2002-11-13 at 00:33, Christopher Kings-Lynne wrote:\n> for 7.4? It seems that now we have dependencies, it should just be a matter\n> of doing a breadth-first or depth-first search over the pg_depend table to\n> generate a valid order of oids.\n\nThe biggest trick will be trying to re-combine the ALTER ... ADD\nCONSTRAINT and ALTER ... SET DEFAULT statements back into CREATE TABLE,\nbut that seems to partially solve itself simply by using an ALAP\nalgorithm (as late as possible) and being picky about the paths you try\nfirst, as they'll get pushed together.\n\n> To allow for mess-ups in that table, the next step would be to add to the\n> end of the list of oids any objects that for whatever reason aren't in the\n> dependency system. (Is this possible? Manual hacking can do it\n> methinks...)\n\nAre there any objects which are not in the dependency system, other than\ncomments? Comments can be done at the time the object is created.\n\n-- \n Rod Taylor\n\n", "msg_date": "13 Nov 2002 08:52:25 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "At 08:36 AM 13/11/2002 -0500, Tom Lane wrote:\n>No, because that doesn't help for the plain-text-dump case.\n\nWrong. The way pg_dump does a plain-text dump is to do a fake restore. Same \ncode. Which means if we make the restore work correctly, everything works.\n\n\n> I think we\n>should solve the dependencies during pg_dump and output the objects in a\n>safe order.\n\nWe should do this anyway, just to make the restorations quicker (as we do \nalready).\n\n\n> pg_restore can keep its options for rearranging the order,\n>but those should become vestigial, or at least only needed in bizarre\n>cases.\n\nOrdering is not the problem; it's allowing the user to dump a single table \nand associated type definitions that requires the dependency data.\n\npg_dump really *must* dump dependency information. Then the dump/restore \ncode can sort it appropriately.\n\nThe way the code works at the moment is:\n\n- Dump definitions in any convenient order, creating an in-memory TOC.\n- Sort the TOC entries appropriately (using code in pg_backup_archiver).\n- Dump the definitions and data to file/stdout (using code in \npg_backup_archiver).\n\nWe need to replace the naive quicksort with a more complex sorting system.\n\nThe suggestion of breaking items into create/alter etc is interesting - I \nassume you are thinking of function bodies? Or is there something else?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 00:53:03 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "At 01:50 PM 13/11/2002 +0800, Christopher Kings-Lynne wrote:\n>Well, the problem is that you can add a new type and then add a column to a\n>really old table that uses that type - that causes pain\\\n\nYou may have misunderstood; I meant to add each type used by the table to \nthe deps list for a table (and each function used by a view etc etc). \nCurrent implementation leaves the deps list blank for tables, and only \nlists the tables for a view (I think).\n\nThe problem with the current deps list is that (a) it assumes that OID \norder is important and (b) it does not do any analysis of the topology of \nthe dependencies.\n\nThe latter will be substantially improved if we can get pg_depend deps into \nthe dump file, and if we can do a useful analysis of the dependencies.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 00:57:50 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "At 08:52 AM 13/11/2002 -0500, Rod Taylor wrote:\n>The biggest trick will be trying to re-combine the ALTER ... ADD\n>CONSTRAINT and ALTER ... SET DEFAULT statements back into CREATE TABLE\n\nI'm not sure this would be worth the effort - I'll grant it would be cute, \nbut getting pg_dump to understand SQL seems a little ambitious. We'd \nprobably end up defining a portable schema definition language just for \ndump files.\n\nTo achieve Tom's suggestion it might be simpler to store two versions - the \n'full' version, and the 'fully deconstructed' version. If our analysis of \nthe dependencies meant we needed to break up an object, then we use the \nlatter.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 01:08:53 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "On Wed, 2002-11-13 at 09:08, Philip Warner wrote:\n> At 08:52 AM 13/11/2002 -0500, Rod Taylor wrote:\n> >The biggest trick will be trying to re-combine the ALTER ... ADD\n> >CONSTRAINT and ALTER ... SET DEFAULT statements back into CREATE TABLE\n> \n> I'm not sure this would be worth the effort - I'll grant it would be cute, \n> but getting pg_dump to understand SQL seems a little ambitious. We'd \n> probably end up defining a portable schema definition language just for \n> dump files.\n\n> To achieve Tom's suggestion it might be simpler to store two versions - the \n> 'full' version, and the 'fully deconstructed' version. If our analysis of \n> the dependencies meant we needed to break up an object, then we use the \n> latter.\n\nDifferent approaches to the same result. To me, the dependency tree is\nalready (mostly) broken up to start with. So at some point you need to\nteach something to re-combine in the pg_attrdef -> pg_class dependencies\namong others. But the opposite approach of starting with the large\nobjects and breaking up where required would be just as good, especially\nif you only break up the little bits that are required.\n\nStarting with all functions broken up into their two parts (definition\nand body) with the body dependent on the definition and recombining\nlater if they sort side by side seems much easier than trying to resolve\ncycles and break it up at a later time.\n\nAn ALAP scheduling algorithm will almost always sort these things to be\nside by side to allow combining on a second pass by something with the\nintelligence.\n\n-- \n Rod Taylor\n\n", "msg_date": "13 Nov 2002 09:29:21 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "At 09:29 AM 13/11/2002 -0500, Rod Taylor wrote:\n>An ALAP scheduling algorithm will almost always sort these things to be\n>side by side to allow combining on a second pass by something with the\n>intelligence.\n\nDo we have a list of dependency data that we collect? eg. do we know about \nfunctions used in views and indexes? At this stage it's probably worth \nmaking a list of what we think is achievable in 7.4, and what we want to \nachieve ultimately. I certainly agree about recombining function headers \nand bodies, but there are a bunch of things that I think we can leave \ndeconstructed and always move to the end of the restore, eg:\n\n- constraints\n- sequences set (not really a dependency problem)\n- indexes\n- comments\n\nAFAIR, we can only split function bodies for non-SQL functions.\n\nFor views we need to know the types of casts within the view, the types \nreturned by the view, the functions used by the view as well as the tables \nreferenced. AFAIR, there is no way to break up a view, so it has to go \nafter each ancestor.\n\nFor a table, it should be sufficient to know the constraints & types; we \ncan add constraints later, but I'd be reluctant to get into doing 'ALTER \nTABLE ADD COLUMN...'.\n\nIndexes may have a function and/or a cast? Create Index I on T( cast(id as \nmy_type) )?\n\nI'd guess constraints can depend on multiple tables, views(?), types, & \nfunctions. Not sure what else. We can't really break these down.\n\nSo it looks like the only contentious item might be table attrs? is that right?\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 01:52:34 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> The suggestion of breaking items into create/alter etc is interesting - I \n> assume you are thinking of function bodies? Or is there something else?\n\nLet's see --- foreign-key constraints are an obvious source of possible\ncircularities, but I see pg_dump already dumps those as separate\nobjects. I recall thinking that column default and constraint clauses\nmight need to be broken out too, but I'm not sure why I thought that\n(maybe because they can call SQL functions?). Anything else?\n\nA simple-minded approach would be to *always* add these things via\nALTER commands at the end, but to keep dumps legible it would be\nnicer to keep them in the original table definition whenever\npossible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 10:22:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "> Do we have a list of dependency data that we collect? eg. do we know about \n> functions used in views and indexes? At this stage it's probably worth \n\n> - constraints\n> - sequences set (not really a dependency problem)\n> - indexes\n> - comments\n\nI can make a complete list tonight of what's captured. Shall we tack\nthe list onto the end of section 3.13 (pg_depend):\n\nhttp://developer.postgresql.org/docs/postgres/catalog-pg-depend.html\n\n> For a table, it should be sufficient to know the constraints & types; we \n> can add constraints later, but I'd be reluctant to get into doing 'ALTER \n> TABLE ADD COLUMN...'.\n\nShouldn't ever need to do an ALTER TABLE ADD COLUMN. But I can\ncertainly come up with a case for ALTER TABLE SET DEFAULT.\n\n> Indexes may have a function and/or a cast? Create Index I on T( cast(id as \n> my_type) )?\n> \n> I'd guess constraints can depend on multiple tables, views(?), types, & \n> functions. Not sure what else. We can't really break these down.\n\nThey can via functions. And you can break down a function and table,\nbut not really types or views.\n\n\nCREATE FUNCTION func .... 'SELECT TRUE;' LANGUAGE 'sql';\n\nCREATE <items requiring function>;\n\n-- Fill in function body.\nCREATE OR REPLACE FUNCTION func ... '<real query>' LANGUAGE 'sql';\n\n> So it looks like the only contentious item might be table attrs? is that right?\n\nMore likely to be functions. As everything else (I can think of) is\neasily altered into place.\n\n-- \n Rod Taylor\n\n", "msg_date": "13 Nov 2002 14:53:28 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "> The thought that I'd been toying with is to build a list of inter-object\n> dependencies (using pg_depend if available, else fall back on pg_dump's\n> native wit, ie, the rather limited set of dependencies it already\n> understands). Then do a topological sort, preferring to maintain OID\n> order in cases where the object ordering is underspecified. When the\n> sort fails (ie, there's a circular dependency) then modify the set of\n> dumpable objects by breaking some object into two parts (a base\n> declaration and an ALTER command); this changes the dependencies too.\n> Repeat the sort and adjustment steps until the sort succeeds.\n>\n> If you're not familiar with topological sorts, look at Knuth or your\n> favorite algorithms text. There are one or two instances of the method\n> in PG already (deadlock detection, for example).\n\nC'mon - I _do_ have an honours degree in computer science ;)\n\nI was actually trying to think of the correct term when I wrote my original\nemail. I had 'PERT charts' in my head, but I couldn't remember the term for\nthe required order of activities - thanks for reminding me :)\n\nChris\n\n", "msg_date": "Thu, 14 Nov 2002 11:33:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "At 02:53 PM 13/11/2002 -0500, Rod Taylor wrote:\n>I can make a complete list tonight of what's captured.\n\nSounds good.\n\nIf you can also indicate which parts of functions are captured - arguments, \nreturn type and body? IIRC, only SQL functions are compiled at define-time, \nso other functions *shouldn't* be a major problem if they are out of order \nin *most* cases.\n\nSince pg_dump already has some intrinsic knowledge of dependencies, if \nthere is much missing from pg_depend, we could probably add it to pg_dump \n(in many cases it will just be a matter of getting oids as well as names in \nSQL statements).\n\nIn terms of supporting older dump files, I think it should all work: they \nalready have space for deps, mostly empty, and as Tom suggested, we should \nbe able to dump leaf nodes in OID order. From the PoV of pg_dump, the \nalgorithm becomes:\n\n1. generate in-memory TOC in a convenient format.\n2. pick the lowest OID *leaf* node. If none, goto 5.\n3. remove it from deps of other TOC entries\n4. goto 2.\n\n[cyclic]\n5. pick the lowest OID node. See if we can break it. If not repeat with \nmore nodes until we can break one. If none, then goto step 8.\n6. Break up the node.\n7. Goto 2.\n\n[cyclic, no resolution]\n8. Pick the lowest OID node.\n9. Goto 3.\n\n\nThere are a few issues here:\n\n(a) we need to know dependencies of *parts* of objects in order to do the \nbreakup. To me this implies we should break them up at dump time (ie. have \nFUNCTION_DEFINITION and FUNCTION_BODY TOC entries; it gets nastier with \ntables, but TABLE_DEFINITION, TABLE_CONSTRAINT, ATTRIBUTE_CONSTRAINT, and \nATTRIBUTE_DEFAULT come to mind). Some time later we can write cunning code to \nrecombine them (volunteers?).\n\n(b) Step 6 may be pointless. We probably need to verify that we will end up \nwith a leaf node as a result of the breakup: I presume someone has a good \nalgorithm. But if we do have a cycle, I'd guess we should just revert to an \nOID ordering, since while we may pick the topologically optimal node, it \nmay well not be optimal from PostgreSQL point of view: if the node fails to \nbe defined, then everything else that depends on it will fail. This has the \nadvantage of being simple.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 15:37:13 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> * We don't store dependencies for SQL functions to things mentioned in\n> the SQL function body. (Maybe we should, but we don't.) So there's\n> data missing in that case, and possibly other cases.\n\nThis might be interesting to do, and we could tie it into the need to\ninvalidate PL/PgSQL functions that depend on a database object when\nthe object is changed.\n\nPerhaps when the function is defined, we run all the SQL queries in\nthe function body through the parser/analyzer/rewriter, and then\ngenerate dependencies on the Query trees we get back?\n\nIn any case, there would be a limit to what we could divine from the\nfunction definition (e.g. we'd get practically no info about a\nfunction defined in C) -- but this might make things a little nicer,\nanyway.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 00:39:52 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "At 12:39 AM 14/11/2002 -0500, Neil Conway wrote:\n>Perhaps when the function is defined, we run all the SQL queries in\n>the function body through the parser/analyzer/rewriter, and then\n>generate dependencies on the Query trees we get back?\n\nWon't work for functions that build dynamic queries. But it might be \ninteresting/worthwhile to allow user-specified dependencies; that way if a \nuser has problems with database dumps etc, they could manually add \ndependencies for C functions, dynamic query functions (where possible) etc. \nIt's probably more trouble than it's worth, but worth considering.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 14 Nov 2002 16:43:46 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Won't work for functions that build dynamic queries.\n\nGranted, but the intent is to\n\n (a) solve some, but not necessarily all, of the dump-order\n problems\n\n (b) drop functions that depend on a database object when the\n database object itself is dropped -- i.e. 
you don't *want*\n to worry about dynamically built queries, as they are fine\n already\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 00:47:16 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 12:39 AM 14/11/2002 -0500, Neil Conway wrote:\n>> Perhaps when the function is defined, we run all the SQL queries in\n>> the function body through the parser/analyzer/rewriter, and then\n>> generate dependencies on the Query trees we get back?\n\n> Won't work for functions that build dynamic queries.\n\nThat's irrelevant for SQL functions, though, and pg_dump does not need\nto worry about dependencies inside other types of functions (since only\nSQL functions' bodies are examined during CREATE FUNCTION).\n\nNeil's solution is exactly what I was thinking we should do. Or almost\nexactly, anyway: we don't want to run the rewriter before extracting\ndependencies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 14:35:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4 " }, { "msg_contents": "On Wed, 2002-11-13 at 23:37, Philip Warner wrote:\n> At 02:53 PM 13/11/2002 -0500, Rod Taylor wrote:\n> >I can make a complete list tonight of whats captured.\n> \n> Sounds good\n\nBelow is a summary of what pg_depend tracks that might be useful. \nSkipped a number of dependencies that are internal only (ie. 
toast table\ndependencies) as they will be regenerated correctly if their 'owners'\nare generated correctly.\n\n\n<Expression Dependencies> include:\n\t- Operators\n\t- Functions\n\t- Relations (and columns)\n\t- Aggregates\n\n\nAttributes (Columns) depend on:\n\t- Type of attribute\n\nTables depend on:\n\t- Namespace\n\t- Parent tables (if inheritance)\n\nDefault expressions depend on:\n\t- Table\n\t- <Expression Dependencies>\n\nIndexes depend on:\n\t- Constraint (where unique / primary key constraint)\n\t- Index procedure\n\t- Index operator\n\t- Attributes of indexed relation\n\nAggregates depend on:\n\t- Transformation function\n\t- Final function (if required)\n\nForeign Keys depend on:\n\t- Foreign key'd relation and its attributes\n\t- Constrained relation and its attributes\n\t- Unique Index on the foreign key'd relation\n\nCheck Constraints depend on:\n\t- <Expression Dependencies> <- includes parent relation\n\t- Domain type (if check constraint on domain -- v7.4)\n\t\nOperators depend on:\n\t- Namespace\n\t- Left operator type\n\t- Right operator type\n\t- Result operator type\n\t- Code function\n\t- Rest function\n\t- Join function\n\nFunctions depend on:\n\t- Namespace\n\t- Language\n\t- Return type\n\t- Argument types (all)\n\nTypes (domains included) depend on:\n\t- Namespace\n\t- Input type\n\t- Output type\n\t- Element type (if array)\n\t- Base type (if domain)\n\t- Default value -> <Expression Dependencies>\n\nCasts depend on:\n\t- Source type\n\t- Target type\n\t- Cast function\n\nOperator Classes depend on:\n\t- Namespaces\n\t- Input type\n\t- Key data type (if different than input type)\n\t- Dependencies on operators in the class\n\t- Dependencies on procedures in the class\n\nLanguages depend on:\n\t- Call function\n\t- Validation function\n\nTriggers depend on:\n\t- Trigger function\n\t- Relation trigger is on\n\t- Constrained relation (if constraint trigger)\n\nRules depend on:\n\t- Relation rule is on\n\t- Qualifying condition -> 
<Expression Dependencies>\n\t- resulting Query Tree -> <Expression Dependencies>\n\nMissing:\n\t- Body of all functions\n\n-- \nRod Taylor <rbt@rbt.ca>\n", "msg_date": "15 Nov 2002 11:43:47 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "At 11:43 AM 15/11/2002 -0500, Rod Taylor wrote:\n>Below is a summary of what pg_depend tracks that might be useful.\n\nThis looks excellent. If people are happy with my earlier outline, it \nshould be reasonably easy to proceed...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sat, 16 Nov 2002 13:41:19 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" }, { "msg_contents": "On Sat, 2002-11-16 at 15:49, Alvaro Herrera wrote:\n> On Fri, Nov 15, 2002 at 11:43:47AM -0500, Rod Taylor wrote:\n> \n> > Below is a summary of what pg_depend tracks that might be useful. \n> > Skipped a number of dependencies that are internal only (ie. toast table\n> > dependencies) as they will be regenerated correctly if their 'owners'\n> > are generated correctly.\n> > \n> > \n> > Tables depend on:\n> > \t- Namespace\n> > \t- Parent tables (if inheritance)\n> \n> And columns?\n\nI only see table dependencies.\n\ntablecmds.c line 943\n\n> > Indexes depend on:\n> > \t- Constraint (where unique / primary key constraint)\n> > \t- Index procedure\n> > \t- Index operator\n> > \t- Attributes of indexed relation\n> \n> On function if functional maybe? (Is this \"procedure\"?) 
\n\nYes, forgot that marker beside 'Index Procedure'.\n\n-- \nRod Taylor <rbt@rbt.ca>\n", "msg_date": "16 Nov 2002 16:08:58 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg_dump in 7.4" } ]
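The dump-ordering outline above (minus the node-breaking refinement in steps 5-7) amounts to a Kahn-style topological sort that prefers the lowest OID among ready nodes and falls back to plain OID order on an unbreakable cycle. A hypothetical sketch with invented names, not pg_dump code:

```python
def dump_order(deps):
    """Order objects for dumping.

    deps maps each OID to the set of OIDs it depends on.  Repeatedly emit
    the lowest-OID node with no remaining dependencies (steps 2-4); if no
    node is ready, we are in a cycle, so just emit the lowest-OID member
    (the 'cyclic, no resolution' branch, steps 8-9)."""
    remaining = {oid: set(d) for oid, d in deps.items()}
    order = []
    while remaining:
        ready = [oid for oid, d in remaining.items() if not d]
        pick = min(ready) if ready else min(remaining)  # fall back to OID order
        order.append(pick)
        del remaining[pick]
        for d in remaining.values():
            d.discard(pick)  # step 3: remove it from deps of other TOC entries
    return order
```

With no cycles this reproduces a dependency-safe ordering that stays close to OID order; with a cycle it degrades gracefully rather than failing.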
[ { "msg_contents": "Hey Hackers - \nI was testing beta5 and found a performance regression involving\napplication of constraints into a VIEW - I've got a view that is fairly\nexpensive, involving a subselet and an aggregate. When the query is\nrewritten in 7.2.3, the toplevel constraint is used to filter before\nthe subselect - in 7.3b5, it comes after.\n\nFor this query, the difference is 160 ms vs. 2 sec. Any reason for this\nchange?\n\nHere's the view def., and explain analyzes for the view, and two hand\nrewritten versions (since the explain analyze in 7.2.3 doesn't display\nthe filter parameters)\n\nRoss\n\nCREATE VIEW current_modules AS \n SELECT * FROM modules m \n WHERE module_ident = \n (SELECT max(module_ident) FROM modules \n WHERE m.moduleid = moduleid GROUP BY moduleid);\n\nrepository=# explain analyze select * from current_modules where name ~ 'Fourier';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on modules m (cost=0.00..116090.23 rows=1 width=135) (actual time=18.74..1968.01 rows=37 loops=1)\n Filter: ((module_ident = (subplan)) AND (name ~ 'Fourier'::text))\n SubPlan\n -> Aggregate (cost=0.00..25.57 rows=1 width=13) (actual time=0.41..0.41 rows=1 loops=4534)\n -> Group (cost=0.00..25.55 rows=6 width=13) (actual time=0.08..0.37 rows=10 loops=4534)\n -> Index Scan using moduleid_idx on modules (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.27 rows=10 loops=4534)\n Index Cond: ($0 = moduleid)\n Total runtime: 1968.65 msec\n(8 rows)\n\nrepository=# explain analyze select module_ident from modules m where m.name ~ 'Fourier' and m.module_ident = (SELECT max(modules.module_ident) as max from modules where (m.moduleid=moduleid) group by modules.moduleid);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on 
modules m (cost=0.00..116090.23 rows=1 width=4) (actual time=2.46..158.33 rows=37 loops=1)\n Filter: ((name ~ 'Fourier'::text) AND (module_ident = (subplan)))\n SubPlan\n -> Aggregate (cost=0.00..25.57 rows=1 width=13) (actual time=0.35..0.35 rows=1 loops=270)\n -> Group (cost=0.00..25.55 rows=6 width=13) (actual time=0.07..0.31 rows=9 loops=270)\n -> Index Scan using moduleid_idx on modules (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.22 rows=9 loops=270)\n Index Cond: ($0 = moduleid)\n Total runtime: 158.81 msec\n(8 rows)\n\nrepository=# explain analyze select module_ident from modules m where m.module_ident = (SELECT max(modules.module_ident) as max from modules where (m.moduleid=moduleid) group by modules.moduleid) and m.name ~ 'Fourier';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on modules m (cost=0.00..116090.23 rows=1 width=4) (actual time=18.66..1959.31 rows=37 loops=1)\n Filter: ((module_ident = (subplan)) AND (name ~ 'Fourier'::text))\n SubPlan\n -> Aggregate (cost=0.00..25.57 rows=1 width=13) (actual time=0.41..0.41 rows=1 loops=4534)\n -> Group (cost=0.00..25.55 rows=6 width=13) (actual time=0.08..0.37 rows=10 loops=4534)\n -> Index Scan using moduleid_idx on modules (cost=0.00..25.54 rows=6 width=13) (actual time=0.06..0.27 rows=10 loops=4534)\n Index Cond: ($0 = moduleid)\n Total runtime: 1959.84 msec\n(8 rows)\n", "msg_date": "Wed, 13 Nov 2002 00:22:11 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": true, "msg_subject": "performance regression, 7.2.3 -> 7.3b5 w/ VIEW" }, { "msg_contents": "Ross J. Reedstrom wrote:\n> Hey Hackers - \n> I was testing beta5 and found a performance regression involving\n> application of constraints into a VIEW - I've got a view that is fairly\n> expensive, involving a subselet and an aggregate. 
When the query is\n> rewritten in 7.2.3, the toplevel constraint is used to filter before\n> the subselect - in 7.3b5, it comes after.\n> \n> For this query, the difference is 160 ms vs. 2 sec. Any reason for this\n> change?\n\nI could be way off base, but here's a shot in the dark:\n\nhttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=3D0885E1.8F369ACA%40mascari.com&rnum=3&prev=/groups%3Fq%3DMike%2BMascari%2Bsecurity%2BTom%2BLane%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3Den\n\nAt the time I thought PostgreSQL was doing something naughty by \nallowing user functions to be invoked on data that would \nultimately not be returned. Now I know how Oracle uses VIEWS for \nrow security: Oracle functions invoked in DML statements can't \nrecord any changes to the database. So if the above is the \ncause, I wouldn't have any problems with the patch being \nreversed. Maybe separate privileges for read-only vs. read-write \nfunctions are in order at some point in the future though...\n\nMike Mascari\nmascarm@mascari.com\n\n\n", "msg_date": "Wed, 13 Nov 2002 02:40:40 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW" }, { "msg_contents": "On Wed, Nov 13, 2002 at 02:40:40AM -0500, Mike Mascari wrote:\n> Ross J. Reedstrom wrote:\n> >\n> >For this query, the difference is 160 ms vs. 2 sec. Any reason for this\n> >change?\n> \n> I could be way off base, but here's a shot in the dark:\n> \n> http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=3D0885E1.8F369ACA%40mascari.com&rnum=3&prev=/groups%3Fq%3DMike%2BMascari%2Bsecurity%2BTom%2BLane%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3Den\n> \n> At the time I thought PostgreSQL was doing something naughty by \n> allowing user functions to be invoked on data that would \n> ultimately not be returned. 
Now I know how Oracle uses VIEWS for \n> row security: Oracle functions invoked in DML statements can't \n> record any changes to the database. So if the above is the \n> cause, I wouldn't have any problems with the patch being \n> reversed. Maybe separate privileges for read-only vs. read-write \n> functions are in order at some point in the future though...\n\nBingo, that solved it. I'm back to 160 ms. What does Tom feel about\nremoving this? Is there some way the planner could have known which\nwas the smarter/faster order of application?\n\nRoss\n", "msg_date": "Wed, 13 Nov 2002 02:02:38 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": true, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW" }, { "msg_contents": "Am Mittwoch, 13. November 2002 07:22 schrieb Ross J. Reedstrom:\n> Hey Hackers -\n...\n>\n> CREATE VIEW current_modules AS\n> SELECT * FROM modules m\n> WHERE module_ident =\n> (SELECT max(module_ident) FROM modules\n> WHERE m.moduleid = moduleid GROUP BY moduleid);\n>\n...\n\nI just wonder if you really need the GROUP BY. The subselect should return \nexactly one row and so max does without GROUP BY:\n CREATE VIEW current_modules AS\n SELECT * FROM modules m\n WHERE module_ident =\n (SELECT max(module_ident) FROM modules\n WHERE m.moduleid = moduleid);\n\n\nTommi\n\n-- \nDr. Eckhardt + Partner GmbH\nhttp://www.epgmbh.de\n", "msg_date": "Wed, 13 Nov 2002 09:28:38 +0100", "msg_from": "Tommi Maekitalo <t.maekitalo@epgmbh.de>", "msg_from_op": false, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> Bingo, that solved it. I'm back to 160 ms. What does Tom feel about\n> removing this? 
Is there some way the planner could have known which\n> was the smarter/faster order of application?\n\nAs I said in the previous thread, I don't have a lot of patience with\nthe notion of expecting the planner to promise anything about evaluation\norder of WHERE clauses. I wasn't thrilled with adding the patch, but\nI'm even less thrilled with the idea of backing it out now.\n\nThere has been some discussion of reordering WHERE clauses based on\nestimated cost --- a simple form of this would be to push any clauses\ninvolving subplans to the end of the list. I haven't done anything\nabout that yet, mainly because I'm unsure if there are cases where it\nwould be worse than not doing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 08:58:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW " }, { "msg_contents": "On Wed, Nov 13, 2002 at 08:58:04AM -0500, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> > Bingo, that solved it. I'm back to 160 ms. What does Tom feel about\n> > removing this? Is there some way the planner could have known which\n> > was the smarter/faster order of application?\n> \n> As I said in the previous thread, I don't have a lot of patience with\n> the notion of expecting the planner to promise anything about evaluation\n> order of WHERE clauses. I wasn't thrilled with adding the patch, but\n> I'm even less thrilled with the idea of backing it out now.\n\nHaving read the previous thread, I realized you wouldn't be thrilled\nabout it, that's why I asked. 
While I agree in principle (don't promise\na particular order), the pragmatic corollary of that principle would say\nif you don't favor a particular order, then don't change the order from\nprevious stable releases.\n\nUnlike the previous thread, I'm not looking for a particular order:\nthere're no side-effects I'm trying to exploit, I just want the best\npossible performance.\n\n> There has been some discussion of reordering WHERE clauses based on\n> estimated cost --- a simple form of this would be to push any clauses\n> involving subplans to the end of the list. I haven't done anything\n> about that yet, mainly because I'm unsure if there are cases where it\n> would be worse than not doing it.\n\nMe either, though my gut says subplans are expensive. I _can_ trivially\nwrite queries that do the wrong thing (suboptimal order of WHERE clauses)\nwith or without this patch. \n\nIt's clearly the wrong time to try to do anything fancier, but the\nconservative thing to do (in my unbiased opinion ;-) is put it back\nthe way it was for the last stable release, on the principle of least\nsurprise - there seems to be no bug fixed or functionality gained by\nkeeping the change.\n\nSeems like this is at least worth a TODO:\n\n* Examine WHERE clause order optimization possibilities, particularly\nwith subplans\n\nRoss\n", "msg_date": "Wed, 13 Nov 2002 09:14:15 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": true, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW" }, { "msg_contents": "You're right, I should remove that (cruft left over from when the\nsubselect wasn't). However, it has no impact on the planner at hand:\nremoving it does trim 25% from the execution time, but getting the\nWHERE clauses used in the right order gains an order of magnitude.\n\nBoth apply. Thanks, I'll fix it.\n\nRoss\n\nOn Wed, Nov 13, 2002 at 09:28:38AM +0100, Tommi Maekitalo wrote:\n> Am Mittwoch, 13. November 2002 07:22 schrieb Ross J. 
Reedstrom:\n> > Hey Hackers -\n> ...\n> >\n> > CREATE VIEW current_modules AS\n> > SELECT * FROM modules m\n> > WHERE module_ident =\n> > (SELECT max(module_ident) FROM modules\n> > WHERE m.moduleid = moduleid GROUP BY moduleid);\n> >\n> ...\n> \n> I just wonder if you really need the GROUP BY. The subselect should return \n> exactly one row and so max does without GROUP BY:\n> CREATE VIEW current_modules AS\n> SELECT * FROM modules m\n> WHERE module_ident =\n> (SELECT max(module_ident) FROM modules\n> WHERE m.moduleid = moduleid);\n", "msg_date": "Wed, 13 Nov 2002 09:20:11 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": true, "msg_subject": "Re: performance regression, 7.2.3 -> 7.3b5 w/ VIEW" } ]
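The cost asymmetry Ross measured comes from short-circuit evaluation of the ANDed filter clauses: with the cheap regex test first, the expensive subplan only runs for the rows that survive it. A toy model of clause ordering (invented names, not planner code):

```python
def run(rows, predicates):
    """Apply ANDed predicates left-to-right with short-circuiting, as in
    the Filter lines of the plans above; count calls per predicate."""
    calls = [0] * len(predicates)
    matched = []
    for row in rows:
        ok = True
        for i, pred in enumerate(predicates):
            calls[i] += 1
            if not pred(row):
                ok = False
                break
        if ok:
            matched.append(row)
    return matched, calls

rows = list(range(4534))                 # same row count as the seq scan
cheap = lambda r: r % 100 == 0           # plays the role of name ~ 'Fourier'
expensive = lambda r: True               # plays the role of the subplan
_, calls_fast = run(rows, [cheap, expensive])
_, calls_slow = run(rows, [expensive, cheap])
```

In the fast ordering the expensive predicate fires only 46 times instead of 4534 — the same order-of-magnitude gap as the 160 ms vs. 2 sec timings.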
[ { "msg_contents": "(I posted this on the bugs and jdbc newsgroups last week\nbut have seen no response. Imho, this really needs to\nbe fixed since the bug makes it impossible to use the\ndriver in a multithreaded environment so I'm reposting\nto hackers and patches.)\n\n _\nMats Lofkvist\nmal@algonet.se\n\n\n\n\nThe optimization added in\nsrc/interfaces/jdbc/org/postgresql/core/Encoding.java\nversion 1.7 breaks JDBC since it is not thread safe.\n\nThe new method decodeUTF8() uses a static (i.e. class member)\nbut is synchronized on the instance so it won't work with multiple\ninstances used in parallel by multiple threads.\n(Quick and dirty patch below.)\n\n(The method also isn't using the 'length' parameter correctly,\nbut since offset always seems to be zero, this bug doesn't show up.)\n\n _\nMats Lofkvist\nmal@algonet.se\n\n\n*** org/postgresql/core/Encoding.java~ Sun Oct 20 04:55:50 2002\n--- org/postgresql/core/Encoding.java Fri Nov 8 16:13:20 2002\n***************\n*** 233,239 ****\n */\n private static final int pow2_6 = 64; // 2^6\n private static final int pow2_12 = 4096; // 2^12\n! private static char[] cdata = new char[50];\n \n private synchronized String decodeUTF8(byte data[], int offset, int length) {\n char[] l_cdata = cdata;\n--- 233,239 ----\n */\n private static final int pow2_6 = 64; // 2^6\n private static final int pow2_12 = 4096; // 2^12\n! private char[] cdata = new char[50];\n \n private synchronized String decodeUTF8(byte data[], int offset, int length) {\n char[] l_cdata = cdata;\n\n", "msg_date": "13 Nov 2002 12:28:13 -0000", "msg_from": "Mats Lofkvist <mal@algonet.se>", "msg_from_op": true, "msg_subject": "JDBC access is broken in 7.3 beta" }, { "msg_contents": "Mats,\n\nPatch applied. (I also fixed the 'length' problem you reported as well).\n\nthanks,\n--Barry\n\n\nMats Lofkvist wrote:\n> (I posted this on the bugs and jdbc newsgroups last week\n> but have seen no response.
Imho, this really needs to\n> be fixed since the bug makes it impossible to use the\n> driver in a multithreaded environment so I'm reposting\n> to hackers and patches.)\n> \n> _\n> Mats Lofkvist\n> mal@algonet.se\n> \n> \n> \n> \n> The optimization added in\n> src/interfaces/jdbc/org/postgresql/core/Encoding.java\n> version 1.7 breaks JDBC since it is not thread safe.\n> \n> The new method decodeUTF8() uses a static (i.e. class member)\n> but is synchronized on the instance so it won't work with multiple\n> instances used in parallel by multiple threads.\n> (Quick and dirty patch below.)\n> \n> (The method also isn't using the 'length' parameter correctly,\n> but since offset always seems to be zero, this bug doesn't show up.)\n> \n> _\n> Mats Lofkvist\n> mal@algonet.se\n> \n> \n> *** org/postgresql/core/Encoding.java~ Sun Oct 20 04:55:50 2002\n> --- org/postgresql/core/Encoding.java Fri Nov 8 16:13:20 2002\n> ***************\n> *** 233,239 ****\n> */\n> private static final int pow2_6 = 64; // 2^6\n> private static final int pow2_12 = 4096; // 2^12\n> ! private static char[] cdata = new char[50];\n> \n> private synchronized String decodeUTF8(byte data[], int offset, int length) {\n> char[] l_cdata = cdata;\n> --- 233,239 ----\n> */\n> private static final int pow2_6 = 64; // 2^6\n> private static final int pow2_12 = 4096; // 2^12\n> ! private char[] cdata = new char[50];\n> \n> private synchronized String decodeUTF8(byte data[], int offset, int length) {\n> char[] l_cdata = cdata;\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Thu, 14 Nov 2002 02:56:43 -0800", "msg_from": "Barry Lind <blind@xythos.com>", "msg_from_op": false, "msg_subject": "Re: JDBC access is broken in 7.3 beta" } ]
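The root cause Mats describes is language-independent: a lock held per instance cannot protect state shared at the class level. A minimal Python analogy with invented class names — the patch corresponds to moving the scratch buffer from class level to instance level, after which the per-instance lock is again sufficient:

```python
import threading

class SharedBufferDecoder:
    # class-level buffer, like the original `static char[] cdata`
    cdata = bytearray(50)

    def __init__(self):
        # per-instance lock, like `synchronized` on `this`
        self.lock = threading.Lock()

class PrivateBufferDecoder:
    def __init__(self):
        self.cdata = bytearray(50)   # per-instance buffer, matching the fix
        self.lock = threading.Lock()

# Two threads may each hold their own decoder, but in the buggy version
# they still scribble on one shared buffer, and the locks don't help
# because each thread acquires a *different* lock.
a, b = SharedBufferDecoder(), SharedBufferDecoder()
c, d = PrivateBufferDecoder(), PrivateBufferDecoder()
```

Here `a.cdata is b.cdata` holds for the shared-buffer version, while the private-buffer version gives every instance its own array.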
[ { "msg_contents": "I noticed that the planner is unable to select an index scan when a partial\nindex is available, the partial index is based on a \"NOT NULL\" condition.\n\nExample:\n\nstart with no index:\nmydb=# EXPLAIN ANALYZE select id from str where url='foobar';\nNOTICE: QUERY PLAN:\n\nSeq Scan on str (cost=0.00..88.91 rows=1 width=4) (actual time=5.93..5.93\nrows=0 loops=1)\nTotal runtime: 6.01 msec\n\nEXPLAIN\nmydb=# create index str_idx_url on str(url) where url is not null;\nCREATE\nmydb=# analyze str;\nANALYZE\nmydb=# EXPLAIN ANALYZE select id from str where url='foobar';\nNOTICE: QUERY PLAN:\n\nSeq Scan on str (cost=0.00..91.05 rows=3 width=4) (actual time=6.24..6.24\nrows=0 loops=1)\nTotal runtime: 6.30 msec\n\nEXPLAIN\nmydb=# drop index str_idx_url;\nDROP\nmydb=# create index str_idx_url on str(url);\nCREATE\nmydb=# analyze str;\nANALYZE\nmydb=# EXPLAIN ANALYZE select id from str where url='foobar';\nNOTICE: QUERY PLAN:\n\nIndex Scan using str_idx_url on str (cost=0.00..2.56 rows=1 width=4) (actual\ntime=0.53..0.53 rows=0 loops=1)\nTotal runtime: 0.60 msec\n\nEXPLAIN\n\n\n\nIt's no big deal in my application, speed is more than fast enough, I just\nnoticed it. The documentation says:\n\"However, keep in mind that the predicate must match the conditions used in\nthe queries that are supposed to benefit from the index. To be precise, a\npartial index can be used in a query only if the system can recognize that\nthe query's WHERE condition mathematically implies the index's predicate.\nPostgreSQL does not have a sophisticated theorem prover that can recognize\nmathematically equivalent predicates that are written in different forms.\n(Not only is such a general theorem prover extremely difficult to create, it\nwould probably be too slow to be of any real use.) 
The system can recognize\nsimple inequality implications, for example \"x < 1\" implies \"x < 2\";\notherwise the predicate condition must exactly match the query's WHERE\ncondition or the index will not be recognized to be usable. \"\n\nNormally a \"IS NOT NULL\"/\"IS NULL\" should be easy to recognise, since NULL is\nvery special. This would allow much smaller indices in some applications, for\nexample I've a case with a table with 200000 rows where 4 values (of type\ntext) are not null. The index size would be much smaller without all those\nNULL values. \n\nBest regards,\n\tMario Weilguni\n\n", "msg_date": "Wed, 13 Nov 2002 13:43:42 +0100", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "null values / partial indices" }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> I noticed that the planner is unable to select an index scan when a partial\n> index is available, the partial index is based on a \"NOT NULL\" condition.\n\nIt wants you to do this:\n\nselect id from str where url='foobar' and url is not null;\n\nI know and you know that \"url='foobar'\" implies url is not null,\nbut the code that checks for applicability of partial indexes is not\nthat bright.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 09:43:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: null values / partial indices " } ]
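The matching rule the documentation describes — verbatim match of the index predicate, plus the one simple inequality implication — can be modeled in a few lines. This is a toy model with invented names, not the planner's actual proof code:

```python
def predicate_implied(where, pred):
    """Crude model of the 7.3-era partial-index proof.  Clauses are
    (column, operator, value) triples.  The index predicate is accepted
    only if it appears verbatim among the WHERE clauses, or via the one
    simple rule that  x < a  implies  x < b  when a <= b.  There is no
    rule deriving 'x IS NOT NULL' from 'x = const', which is why the
    explicit clause (or Andreas's  url >= ''  trick) is needed."""
    for clause in where:
        if clause == pred:
            return True                      # exact match
        if (clause[1] == '<' and pred[1] == '<'
                and clause[0] == pred[0] and clause[2] <= pred[2]):
            return True                      # simple inequality implication
    return False
```

So `WHERE url = 'foobar'` alone fails to prove `url IS NOT NULL`, even though the implication is trivially true, while adding the redundant clause makes the index usable.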
[ { "msg_contents": "\n> mydb=# create index str_idx_url on str(url) where url is not null;\n> CREATE\n> mydb=# analyze str;\n> ANALYZE\n> mydb=# EXPLAIN ANALYZE select id from str where url='foobar';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on str (cost=0.00..91.05 rows=3 width=4) (actual \n\nYou can try an index like:\ncreate index str_idx_url on str(url) where url >= ''; \n\nI think that should be identical. ('' is the smallest string, no ?)\n\nAndreas\n", "msg_date": "Wed, 13 Nov 2002 14:02:16 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: null values / partial indices" } ]
[ { "msg_contents": ">You can try an index like:\n>create index str_idx_url on str(url) where url >= ''; \n>\n>I think that should be identical. ('' is the smallest string, no ?)\n\nThanks a lot, it works now. But I still think the NOT NULL case would be\nuseful.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Wed, 13 Nov 2002 14:09:34 +0100", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: null values / partial indices" } ]
[ { "msg_contents": "\n> My suspicion falls on the very-recently-added awk calls. Try changing\n> \n> (echo \"SET autocommit TO 'on';\"; awk 'BEGIN {printf \n> \"\\\\set ECHO all\\n\"}'; cat \"$inputdir/sql/$1.sql\") |\n\nWhy use awk for this at all ? and not:\necho \"\\\\set ECHO all\"\n\n??\nAndreas\n", "msg_date": "Wed, 13 Nov 2002 14:23:11 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: RC1? " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Why use awk for this at all ? and not:\n> echo \"\\\\set ECHO all\"\n\nI think Bruce is worried about portability; some versions of echo might\ndo something weird with the backslash. OTOH, it's not obvious to me\nthat awk is better on that score. Bruce?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 10:06:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1? " }, { "msg_contents": "> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Why use awk for this at all ? and not:\n>> echo \"\\\\set ECHO all\"\n\nActually, some googling revealed the following advice (in the Autoconf\nmanual):\n\n Because of these problems, do not pass a string containing\n arbitrary characters to echo. For example, echo \"$foo\" is safe if\n you know that foo's value cannot contain backslashes and cannot\n start with -, but otherwise you should use a here-document like\n this:\n\n cat <<EOF\n $foo\n EOF\n\nThis seems obviously safer, so I shall make it do that instead\nof relying on either awk or echo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 11:28:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1? 
" }, { "msg_contents": "On Wed, 13 Nov 2002 10:06:15 EST, the world broke into rejoicing as\nTom Lane <tgl@sss.pgh.pa.us> said:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Why use awk for this at all ? and not:\n> > echo \"\\\\set ECHO all\"\n> \n> I think Bruce is worried about portability; some versions of echo might\n> do something weird with the backslash. OTOH, it's not obvious to me\n> that awk is better on that score. Bruce?\n\nThe problem is that the regress script isn't pointing to the version of\nawk that was picked up in the autoconf phase.\n\n(More detailed comments forwarded directly :-).)\n\nThe \"real deal\" on what happens on Solaris is thus, from the awk FAQ,\nwhere Patrick McPhee writes:\n\n> SunOS includes three versions of awk. /usr/bin/awk is the old\n> (pre-1989) version. /usr/bin/nawk is the new awk which appeared in\n> 1989, and /usr/xpg4/bin/awk is supposed to conform to the single unix\n> specification. No one knows why Sun continues to ship old awk.\n\nI would be /very/ inclined to trust Patrick's wisdom on this.\n\nSo long as we fix up the regression script to grab the \"nawk\" that\nwe expect to work, that's probably nicer than figuring out which\necho parameters are needed...\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://cbbrowne.com/info/wp.html\nThe first cup of coffee recapitulates phylogeny.\n", "msg_date": "Wed, 13 Nov 2002 11:42:01 -0500", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: RC1? " } ]
[ { "msg_contents": "\nIt seems to me that about the only major issue right now is testing the\nvarious platforms ... would anyone disagree with putting out an RC1 on\nFriday whose primary purpose is platform testing? I don't believe there\nis anything outstanding right now that would require us to do a beta6, and\nRC1 might give enough stability to ppl to get them testing for the\nremaining platforms ... ?\n\n", "msg_date": "Wed, 13 Nov 2002 11:39:56 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Propose RC1 for Friday ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> It seems to me that about the only major issue right now is testing the\n> various platforms ... would anyone disagree with putting out an RC1 on\n> Friday whose primary purpose is platform testing?\n\nWorks for me. We should be able to resolve this awk issue by then,\nand hopefully have confirmation on that GB18030 change too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 11:00:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > It seems to me that about the only major issue right now is testing the\n> > various platforms ... would anyone disagree with putting out an RC1 on\n> > Friday whose primary purpose is platform testing?\n> \n> Works for me. We should be able to resolve this awk issue by then,\n> and hopefully have confirmation on that GB18030 change too.\n\nSounds good to me!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 13 Nov 2002 23:43:14 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "On Wed, Nov 13, 2002 at 11:43:14PM -0500, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > It seems to me that about the only major issue right now is testing the\n> > > various platforms ... would anyone disagree with putting out an RC1 on\n> > > Friday whose primary purpose is platform testing?\n> > \n> > Works for me. We should be able to resolve this awk issue by then,\n> > and hopefully have confirmation on that GB18030 change too.\n\nSorry to be a pest, but I'd like to re-raise the issue I brought up\nregarding a performance regression from 7.2.3, when subqueries are pulled\nup and merged with their parent. What happened is that the default order\nthat WHERE clauses get merged changed. (The original discussion and\npatch was over on GENERAL, and doesn't seem to be in the FTS archives?):\n\nhttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=3D0885E1.8F369ACA%40mascari.com&rnum=3&prev=/groups%3Fq%3DMike%2BMascari%2Bsecurity%2BTom%2BLane%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3Den \n\nThe reason for doing this was a theoretical hole in VIEW-based data access\nrestrictions. The consequence is that a class of queries got an order\nof magnitude slower (my particular example goes from 160 ms to 2000 ms).\n\nTom was not excited about making the original change (we don't guarantee\nthe order of WHERE clauses, which is what would be required for this to\nbe a real fix), and is resisting changing it back, partly because neither\norder is the right thing. My argument is that we can't do the right thing\nright now, anyway (feature freeze), so let's put it back the way it was in\nthe last stable release, so as not to break (o.k., dramatically slow down)\nexisting queries. 
(patch attached)\n\nAny other opinions?\n\nRoss", "msg_date": "Thu, 14 Nov 2002 10:20:20 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> Sorry to be a pest, but I'd like to re-raise the issue I brought up\n> regarding a performance regression from 7.2.3, when subqueries are pulled\n> up and merged with their parent.\n> ...\n> Tom was not excited about making the original change (we don't guarantee\n> the order of WHERE clauses, which is what would be required for this to\n> be a real fix), and is resisting changing it back, partly because neither\n> order is the right thing. My argument is that we can't do the right thing\n> right now, anyway (feature freeze), so let's put it back the way it was in\n> the last stable release, so as not to break (o.k., dramatically slow down)\n> existing queries.\n\nWell, we could define it as a bug ;-) --- that is, a performance regression.\nI'd be happier about adding a dozen lines of code to sort quals by\nwhether or not they contain a subplan than about flip-flopping on the\noriginal patch. That would actually solve the class of problem you\nexhibited, whereas the other is just a band-aid that happens to work for\nyour particular example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 12:34:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "I said:\n> Well, we could define it as a bug ;-) --- that is, a performance regression.\n> I'd be happier about adding a dozen lines of code to sort quals by\n> whether or not they contain a subplan than about flip-flopping on the\n> original patch. 
That would actually solve the class of problem you\n> exhibited, whereas the other is just a band-aid that happens to work for\n> your particular example.\n\nThe attached patch does the above. I think it's a very low-risk change,\nbut am tossing it out on the list to see if anyone objects to applying\nit in the 7.3 branch. (I intend to put it in 7.4devel in any case.)\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/optimizer/plan/createplan.c.orig\tWed Nov 6 17:31:24 2002\n--- src/backend/optimizer/plan/createplan.c\tThu Nov 14 13:18:04 2002\n***************\n*** 70,75 ****\n--- 70,76 ----\n \t\t\t\t\t IndexOptInfo *index,\n \t\t\t\t\t Oid *opclass);\n static List *switch_outer(List *clauses);\n+ static List *order_qual_clauses(Query *root, List *clauses);\n static void copy_path_costsize(Plan *dest, Path *src);\n static void copy_plan_costsize(Plan *dest, Plan *src);\n static SeqScan *make_seqscan(List *qptlist, List *qpqual, Index scanrelid);\n***************\n*** 182,187 ****\n--- 183,191 ----\n \t */\n \tscan_clauses = get_actual_clauses(best_path->parent->baserestrictinfo);\n \n+ \t/* Sort clauses into best execution order */\n+ \tscan_clauses = order_qual_clauses(root, scan_clauses);\n+ \n \tswitch (best_path->pathtype)\n \t{\n \t\tcase T_SeqScan:\n***************\n*** 359,364 ****\n--- 363,369 ----\n {\n \tResult\t *plan;\n \tList\t *tlist;\n+ \tList\t *constclauses;\n \tPlan\t *subplan;\n \n \tif (best_path->path.parent)\n***************\n*** 371,377 ****\n \telse\n \t\tsubplan = NULL;\n \n! \tplan = make_result(tlist, (Node *) best_path->constantqual, subplan);\n \n \treturn plan;\n }\n--- 376,384 ----\n \telse\n \t\tsubplan = NULL;\n \n! \tconstclauses = order_qual_clauses(root, best_path->constantqual);\n! \n! 
\tplan = make_result(tlist, (Node *) constclauses, subplan);\n \n \treturn plan;\n }\n***************\n*** 1210,1215 ****\n--- 1217,1259 ----\n \t\t\tt_list = lappend(t_list, clause);\n \t}\n \treturn t_list;\n+ }\n+ \n+ /*\n+ * order_qual_clauses\n+ *\t\tGiven a list of qual clauses that will all be evaluated at the same\n+ *\t\tplan node, sort the list into the order we want to check the quals\n+ *\t\tin at runtime.\n+ *\n+ * Ideally the order should be driven by a combination of execution cost and\n+ * selectivity, but unfortunately we have so little information about\n+ * execution cost of operators that it's really hard to do anything smart.\n+ * For now, we just move any quals that contain SubPlan references (but not\n+ * InitPlan references) to the end of the list.\n+ */\n+ static List *\n+ order_qual_clauses(Query *root, List *clauses)\n+ {\n+ \tList\t *nosubplans;\n+ \tList\t *withsubplans;\n+ \tList\t *l;\n+ \n+ \t/* No need to work hard if the query is subselect-free */\n+ \tif (!root->hasSubLinks)\n+ \t\treturn clauses;\n+ \n+ \tnosubplans = withsubplans = NIL;\n+ \tforeach(l, clauses)\n+ \t{\n+ \t\tNode *clause = lfirst(l);\n+ \n+ \t\tif (contain_subplans(clause))\n+ \t\t\twithsubplans = lappend(withsubplans, clause);\n+ \t\telse\n+ \t\t\tnosubplans = lappend(nosubplans, clause);\n+ \t}\n+ \n+ \treturn nconc(nosubplans, withsubplans);\n }\n \n /*\n", "msg_date": "Thu, 14 Nov 2002 13:33:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "I've tested this under 7.3, and it works beautifully for the cases I've\nbuilt over the last 2 days. I can no longer bugger a plan up mearly\nby reordering the WHERE clauses. Note that 2 of the five parts won't\npatch in (involving constantqual). Looks to be code refactoring between\nhere and planmain.c on the 7.4 branch? I tried to hand-patch it in,\nand gave up. 
it _seems_ to work without it, but I probably haven't\ncovered that codepath.\n\nRoss\n\nOn Thu, Nov 14, 2002 at 01:33:05PM -0500, Tom Lane wrote:\n> I said:\n> > Well, we could define it as a bug ;-) --- that is, a performance regression.\n> > I'd be happier about adding a dozen lines of code to sort quals by\n> > whether or not they contain a subplan than about flip-flopping on the\n> > original patch. That would actually solve the class of problem you\n> > exhibited, whereas the other is just a band-aid that happens to work for\n> > your particular example.\n> \n> The attached patch does the above. I think it's a very low-risk change,\n> but am tossing it out on the list to see if anyone objects to applying\n> it in the 7.3 branch. (I intend to put it in 7.4devel in any case.)\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 14:09:01 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> I've tested this under 7.3, and it works beautifully for the cases I've\n> built over the last 2 days. I can no longer bugger a plan up merely\n> by reordering the WHERE clauses. Note that 2 of the five parts won't\n> patch in (involving constantqual). Looks to be code refactoring between\n> here and planmain.c on the 7.4 branch?\n\nYeah. I'm not going to bother covering the resconstantqual case in the\n7.3 version; that's a little-used path anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 15:13:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "\nI'd ask for a quick beta6 ... 
even knowing everyone would hate me :)\n\n\n\nOn Thu, 14 Nov 2002, Tom Lane wrote:\n\n> I said:\n> > Well, we could define it as a bug ;-) --- that is, a performance regression.\n> > I'd be happier about adding a dozen lines of code to sort quals by\n> > whether or not they contain a subplan than about flip-flopping on the\n> > original patch. That would actually solve the class of problem you\n> > exhibited, whereas the other is just a band-aid that happens to work for\n> > your particular example.\n>\n> The attached patch does the above. I think it's a very low-risk change,\n> but am tossing it out on the list to see if anyone objects to applying\n> it in the 7.3 branch. (I intend to put it in 7.4devel in any case.)\n>\n> \t\t\tregards, tom lane\n>\n>\n> *** src/backend/optimizer/plan/createplan.c.orig\tWed Nov 6 17:31:24 2002\n> --- src/backend/optimizer/plan/createplan.c\tThu Nov 14 13:18:04 2002\n> ***************\n> *** 70,75 ****\n> --- 70,76 ----\n> \t\t\t\t\t IndexOptInfo *index,\n> \t\t\t\t\t Oid *opclass);\n> static List *switch_outer(List *clauses);\n> + static List *order_qual_clauses(Query *root, List *clauses);\n> static void copy_path_costsize(Plan *dest, Path *src);\n> static void copy_plan_costsize(Plan *dest, Plan *src);\n> static SeqScan *make_seqscan(List *qptlist, List *qpqual, Index scanrelid);\n> ***************\n> *** 182,187 ****\n> --- 183,191 ----\n> \t */\n> \tscan_clauses = get_actual_clauses(best_path->parent->baserestrictinfo);\n>\n> + \t/* Sort clauses into best execution order */\n> + \tscan_clauses = order_qual_clauses(root, scan_clauses);\n> +\n> \tswitch (best_path->pathtype)\n> \t{\n> \t\tcase T_SeqScan:\n> ***************\n> *** 359,364 ****\n> --- 363,369 ----\n> {\n> \tResult\t *plan;\n> \tList\t *tlist;\n> + \tList\t *constclauses;\n> \tPlan\t *subplan;\n>\n> \tif (best_path->path.parent)\n> ***************\n> *** 371,377 ****\n> \telse\n> \t\tsubplan = NULL;\n>\n> ! 
\tplan = make_result(tlist, (Node *) best_path->constantqual, subplan);\n>\n> \treturn plan;\n> }\n> --- 376,384 ----\n> \telse\n> \t\tsubplan = NULL;\n>\n> ! \tconstclauses = order_qual_clauses(root, best_path->constantqual);\n> !\n> ! \tplan = make_result(tlist, (Node *) constclauses, subplan);\n>\n> \treturn plan;\n> }\n> ***************\n> *** 1210,1215 ****\n> --- 1217,1259 ----\n> \t\t\tt_list = lappend(t_list, clause);\n> \t}\n> \treturn t_list;\n> + }\n> +\n> + /*\n> + * order_qual_clauses\n> + *\t\tGiven a list of qual clauses that will all be evaluated at the same\n> + *\t\tplan node, sort the list into the order we want to check the quals\n> + *\t\tin at runtime.\n> + *\n> + * Ideally the order should be driven by a combination of execution cost and\n> + * selectivity, but unfortunately we have so little information about\n> + * execution cost of operators that it's really hard to do anything smart.\n> + * For now, we just move any quals that contain SubPlan references (but not\n> + * InitPlan references) to the end of the list.\n> + */\n> + static List *\n> + order_qual_clauses(Query *root, List *clauses)\n> + {\n> + \tList\t *nosubplans;\n> + \tList\t *withsubplans;\n> + \tList\t *l;\n> +\n> + \t/* No need to work hard if the query is subselect-free */\n> + \tif (!root->hasSubLinks)\n> + \t\treturn clauses;\n> +\n> + \tnosubplans = withsubplans = NIL;\n> + \tforeach(l, clauses)\n> + \t{\n> + \t\tNode *clause = lfirst(l);\n> +\n> + \t\tif (contain_subplans(clause))\n> + \t\t\twithsubplans = lappend(withsubplans, clause);\n> + \t\telse\n> + \t\t\tnosubplans = lappend(nosubplans, clause);\n> + \t}\n> +\n> + \treturn nconc(nosubplans, withsubplans);\n> }\n>\n> /*\n>\n\n", "msg_date": "Thu, 14 Nov 2002 16:20:18 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "\nIf we are going to go for a beta6, I vote we reverse out the patch. 
Of\ncourse, I prefer neither.\n\nDo we have to do a delay/feature analysis on this?\n\nMarc, there will always be 7.3.1 to fix any problems. They will surely\nhappen so I think it is safe to push forward for tomorrow's RC1. Of\ncourse, if you disagree, let's back it out.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> \n> I'd ask for a quick beta6 ... even knowing everyone would hate me :)\n> \n> \n> \n> On Thu, 14 Nov 2002, Tom Lane wrote:\n> \n> > I said:\n> > > Well, we could define it as a bug ;-) --- that is, a performance regression.\n> > > I'd be happier about adding a dozen lines of code to sort quals by\n> > > whether or not they contain a subplan than about flip-flopping on the\n> > > original patch. That would actually solve the class of problem you\n> > > exhibited, whereas the other is just a band-aid that happens to work for\n> > > your particular example.\n> >\n> > The attached patch does the above. I think it's a very low-risk change,\n> > but am tossing it out on the list to see if anyone objects to applying\n> > it in the 7.3 branch. 
(I intend to put it in 7.4devel in any case.)\n> >\n> > \t\t\tregards, tom lane\n> >\n> >\n> > *** src/backend/optimizer/plan/createplan.c.orig\tWed Nov 6 17:31:24 2002\n> > --- src/backend/optimizer/plan/createplan.c\tThu Nov 14 13:18:04 2002\n> > ***************\n> > *** 70,75 ****\n> > --- 70,76 ----\n> > \t\t\t\t\t IndexOptInfo *index,\n> > \t\t\t\t\t Oid *opclass);\n> > static List *switch_outer(List *clauses);\n> > + static List *order_qual_clauses(Query *root, List *clauses);\n> > static void copy_path_costsize(Plan *dest, Path *src);\n> > static void copy_plan_costsize(Plan *dest, Plan *src);\n> > static SeqScan *make_seqscan(List *qptlist, List *qpqual, Index scanrelid);\n> > ***************\n> > *** 182,187 ****\n> > --- 183,191 ----\n> > \t */\n> > \tscan_clauses = get_actual_clauses(best_path->parent->baserestrictinfo);\n> >\n> > + \t/* Sort clauses into best execution order */\n> > + \tscan_clauses = order_qual_clauses(root, scan_clauses);\n> > +\n> > \tswitch (best_path->pathtype)\n> > \t{\n> > \t\tcase T_SeqScan:\n> > ***************\n> > *** 359,364 ****\n> > --- 363,369 ----\n> > {\n> > \tResult\t *plan;\n> > \tList\t *tlist;\n> > + \tList\t *constclauses;\n> > \tPlan\t *subplan;\n> >\n> > \tif (best_path->path.parent)\n> > ***************\n> > *** 371,377 ****\n> > \telse\n> > \t\tsubplan = NULL;\n> >\n> > ! \tplan = make_result(tlist, (Node *) best_path->constantqual, subplan);\n> >\n> > \treturn plan;\n> > }\n> > --- 376,384 ----\n> > \telse\n> > \t\tsubplan = NULL;\n> >\n> > ! \tconstclauses = order_qual_clauses(root, best_path->constantqual);\n> > !\n> > ! 
\tplan = make_result(tlist, (Node *) constclauses, subplan);\n> >\n> > \treturn plan;\n> > }\n> > ***************\n> > *** 1210,1215 ****\n> > --- 1217,1259 ----\n> > \t\t\tt_list = lappend(t_list, clause);\n> > \t}\n> > \treturn t_list;\n> > + }\n> > +\n> > + /*\n> > + * order_qual_clauses\n> > + *\t\tGiven a list of qual clauses that will all be evaluated at the same\n> > + *\t\tplan node, sort the list into the order we want to check the quals\n> > + *\t\tin at runtime.\n> > + *\n> > + * Ideally the order should be driven by a combination of execution cost and\n> > + * selectivity, but unfortunately we have so little information about\n> > + * execution cost of operators that it's really hard to do anything smart.\n> > + * For now, we just move any quals that contain SubPlan references (but not\n> > + * InitPlan references) to the end of the list.\n> > + */\n> > + static List *\n> > + order_qual_clauses(Query *root, List *clauses)\n> > + {\n> > + \tList\t *nosubplans;\n> > + \tList\t *withsubplans;\n> > + \tList\t *l;\n> > +\n> > + \t/* No need to work hard if the query is subselect-free */\n> > + \tif (!root->hasSubLinks)\n> > + \t\treturn clauses;\n> > +\n> > + \tnosubplans = withsubplans = NIL;\n> > + \tforeach(l, clauses)\n> > + \t{\n> > + \t\tNode *clause = lfirst(l);\n> > +\n> > + \t\tif (contain_subplans(clause))\n> > + \t\t\twithsubplans = lappend(withsubplans, clause);\n> > + \t\telse\n> > + \t\t\tnosubplans = lappend(nosubplans, clause);\n> > + \t}\n> > +\n> > + \treturn nconc(nosubplans, withsubplans);\n> > }\n> >\n> > /*\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + 
Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 15:27:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> I'd ask for a quick beta6 ... even knowing everyone would hate me :)\n\nWhat's wrong with calling it \"RC1\"?\n\nI think pushing out an RC tarball is the only way we'll shake loose any\nmore port reports. Putting out \"beta6\" isn't going to attract attention\nfrom anyone who ignored the first five. So I think delaying RC1 is just\ngoing to delay the release, without actually gaining anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 15:30:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If we are going to go for a beta6, I vote we reverse out the patch.\n\nIt's not applied yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 15:30:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If we are going to go for a beta6, I vote we reverse out the patch. Of\n> course, I prefer neither.\n\nI read this several times and am still not quite sure which path you are\nvoting for. We can:\n\n1. not apply the patch to fix Ross' problem, and ship RC1 tomorrow;\n2. apply the patch, and ship RC1 tomorrow;\n3. apply the patch and delay RC1.\n\nWhich do you like?\n\nPersonally I think this is a low-risk patch and so choice 2 is\nappropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 15:35:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ... 
" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If we are going to go for a beta6, I vote we reverse out the patch. Of\n> > course, I prefer neither.\n> \n> I read this several times and am still not quite sure which path you are\n> voting for. We can:\n> \n> 1. not apply the patch to fix Ross' problem, and ship RC1 tomorrow;\n> 2. apply the patch, and ship RC1 tomorrow;\n> 3. apply the patch and delay RC1.\n> \n> Which do you like?\n> \n> Personally I think this is a low-risk patch and so choice 2 is\n> appropriate.\n\nSorry, I was vague. I think we should apply and go to RC1 tomorrow. \nThere will always be tweaks and fixes. If we expect it to be perfect,\nwe will never make a final release. We are 2.5 months into beta, and\nif we don't want +3 months beta, we should get going.\n\nWe have to start taking some _reasonable_ risks to move this forward.\nUntil we make a final, we will not increase our test pool, and at this\npoint we are all sort of staring at each other. Let's go!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 15:39:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n<snip>\n> > Personally I think this is a low-risk patch and so choice 2 is\n> > appropriate.\n\nIf this is the only change, then 2 does seem like the best mix of\nrisk/progress.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Sorry, I was vague. I think we should apply and go to RC1 tomorrow.\n> There will always be tweaks and fixes. If we expect it to be perfect,\n> we will never make a final release. 
We are 2.5 months into beta, and\n> if we don't want +3 months beta, we should get going.\n> \n> We have to start taking some _reasonable_ risks to move this forward.\n> Until we make a final, we will not increase our test pool, and at this\n> point we are all sort of staring at each other. Let's go!\n<snip>\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 15 Nov 2002 08:16:11 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> 2. apply the patch, and ship RC1 tomorrow;\n\nI think that's the best bet.\n\n(That said, the philosophy of \"there's always 7.3.1\" that Bruce alluded\nto is not one that I agree with.)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 16:44:07 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "> Sorry, I was vague. I think we should apply and go to RC1 tomorrow. \n> There will always be tweaks and fixes. If we expect it to be perfect,\n> we will never make a final release. We are 2.5 months into beta, and\n> if we don't want +3 months beta, we should get going.\n> \n> We have to start taking some _reasonable_ risks to move this forward.\n> Until we make a final, we will not increase our test pool, and at this\n> point we are all sort of staring at each other. Let's go!\n\nYeah - RC1 guys - I'm really hanging out for 7.3...\n\nChris\n\n", "msg_date": "Fri, 15 Nov 2002 09:43:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." 
}, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Sorry, I was vague. I think we should apply and go to RC1 tomorrow. \n> > There will always be tweaks and fixes. If we expect it to be perfect,\n> > we will never make a final release. We are 2.5 months into beta, and\n> > if we don't want +3 months beta, we should get going.\n> > \n> > We have to start taking some _reasonable_ risks to move this forward.\n> > Until we make a final, we will not increase our test pool, and at this\n> > point we are all sort of staring at each other. Let's go!\n> \n> Yeah - RC1 guys - I'm really hanging out for 7.3...\n\nYes, that was my lame attempt to get some excitement for RC1. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 20:45:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose RC1 for Friday ..." }, { "msg_contents": "On Thu, 14 Nov 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If we are going to go for a beta6, I vote we reverse out the patch. Of\n> > course, I prefer neither.\n>\n> I read this several times and am still not quite sure which path you are\n> voting for. We can:\n>\n> 1. not apply the patch to fix Ross' problem, and ship RC1 tomorrow;\n> 2. apply the patch, and ship RC1 tomorrow;\n> 3. apply the patch and delay RC1.\n>\n> Which do you like?\n>\n> Personally I think this is a low-risk patch and so choice 2 is\n> appropriate.\n\nGo for it ... I'd like to get RC1 out this week, and if you feel its\nlow-risk, the worst case, we have to do an RC2 ...\n\n\n", "msg_date": "Thu, 14 Nov 2002 23:02:25 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Propose RC1 for Friday ... " } ]
[ { "msg_contents": "I've got a question about the foreign key constraint behavior.\n\nIt looks to me that inserts within transactions into a child table, which have the same FK value back to the parent will block until the first txn will commit or rollback. (see example below)\n\nThis seems to be based on the fact that the RI_FKey_check function will lock the parent row for update, so any other child row referring the same row will be locked out.\n\nI've added a debug stmt into the RI_FKey_check function to see the query it does:\nNOTICE: RI_FKey_check: PLAN2: SELECT 1 FROM ONLY \"public\".\"parent\" x WHERE \"id\" = $1 FOR UPDATE OF x\n\nI think I basically understand, why this is done. To make sure that the parent row can't be deleted before the child row is committed and there would have an orphan reference.\n\nBut, if a lot of inserts happens into the child table and there is a mix of short and long running transactions, the likelihood of blocking is very high, even the inserts are independent and everything is ok (prim. key etc.). This is even more extreme, the smaller parent table is.\n\nFYI, I've tried the same with Oracle and there is no such problem. The insert in the second session will come back immediately without blocking, though it will still maintain the integrity from other txns.\n\nI wonder if there is a lower level way to maintain the locking and having the same behavior as oracle.\nSo, instead of using a \"SELECT ... 
FOR UPDATE\", using some pg function to lock a row with a different mode?\n\nOverall, I find this restriction pretty bad and renders the use of foreign key constraints almost useless from the performance point of view as that leads to real serialization of transaction, even they don't have any overlaps.\n\n\nin session1:\n============\ndrop table child;\ndrop table parent;\n\ncreate table parent (id integer not null);\nALTER TABLE parent ADD CONSTRAINT parent_PK PRIMARY KEY(ID);\n\ncreate table child (id integer not null, parent_id integer not null);\nALTER TABLE child ADD CONSTRAINT child_PK PRIMARY KEY(ID);\nALTER TABLE child ADD CONSTRAINT child_parent_id FOREIGN KEY (parent_id) REFERENCES parent (ID);\n\ninsert into parent values (1);\ninsert into parent values (2);\n\nbegin;\ninsert into child values (1,1);\n<this will be ok>\n\nin session2 after the last insert in session1:\n==============================================\nbegin;\ninsert into child values (2,1);\n<this will block now until the session1 does commit or rollback>\n\n-- \nBest regards,\nPeter Schindler\n", "msg_date": "Wed, 13 Nov 2002 22:03:58 +0100", "msg_from": "Peter Schindler <pschindler@synchronicity.com>", "msg_from_op": true, "msg_subject": "RI_FKey_check: foreign key constraint blocks parallel independent\n\tinserts" }, { "msg_contents": "\nOn Wed, 13 Nov 2002, Peter Schindler wrote:\n\n> But, if a lot of inserts happens into the child table and there is a\n> mix of short and long running transactions, the likelihood of blocking\n> is very high, even the inserts are independent and everything is ok\n> (prim. key etc.). This is even more extreme, the smaller parent table\n> is.\n>\n> FYI, I've tried the same with Oracle and there is no such problem. 
The\n> insert in the second session will come back immediately without\n> blocking, though it will still maintain the integrity from other txns.\n>\n> I wonder if there is a lower level way to maintain the locking and\n> having the same behavior as oracle. So, instead of using a \"SELECT ...\n> FOR UPDATE\", using some pg function to lock a row with a different\n> mode?\n\nI've been working on something of the sort. I've got a test patch\n(against about 7.3b2) that I'm trying to validate which cases it does and\ndoes not work for. I'm still looking for more volunteers if you've got a\ndev system you're willing to use. :)\n\nRight now, I know that it has a hole that lets through invalid data in one\ncase that it got while trying to fix a deadlock case. Hopefully in the\nnext week or so I'll have figured out a way around it.\n\n", "msg_date": "Wed, 13 Nov 2002 14:22:51 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks parallel" }, { "msg_contents": "Stephan Szabo wrote:\n> I've been working on something of the sort. I've got a test patch\n> (against about 7.3b2) that I'm trying to validate which cases it does and\n> does not work for. I'm still looking for more volunteers if you've got a\n> dev system you're willing to use. :)\nI'd be willing to do some testing, though I can't promise too much time as we are \njust in the middle of final qualification for our release. Also we are still \nusing 7.2.1 and haven't ported to 7.3 yet. But I could do some systematic manual \ntesting with psql if you want. Could you send me the patch please.\n\nBTW, I forgot to mention in my orig. mail, even if it's probably obvious to you,\nthat this behavior is there in several (if not all) pg releases. I've tested it with \n7.2.1, 7.3b2 and 7.3b5.\n\n> Right now, I know that it has a hole that lets through invalid data in one\n> case that it got while trying to fix a deadlock case. 
Hopefully in the\n> next week or so I'll have figured out a way around it.\nAfter our and the pg7.3 release is out we'll port there and I really would like\nto get rid of this restriction with that release than. So it would be wonderful\nif that still goes into the final of 7.3.\n\nRgs,\nPeter\n", "msg_date": "Thu, 14 Nov 2002 08:46:36 +0100", "msg_from": "Peter Schindler <pschindler@synchronicity.com>", "msg_from_op": true, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks" }, { "msg_contents": "> After our and the pg7.3 release is out we'll port there and I\n> really would like\n> to get rid of this restriction with that release than. So it\n> would be wonderful\n> if that still goes into the final of 7.3.\n\nI'm not a core developer, but I'll tell you right now that there's pretty\nmuch zero chance of it being in 7.3 - it's about to go to release candidate.\nSince it changes pretty important functionality, it will be left for 7.4.\n\nChris\n\n", "msg_date": "Thu, 14 Nov 2002 15:53:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks" }, { "msg_contents": "On Wed, 13 Nov 2002 14:22:51 -0800 (PST), Stephan Szabo\n<sszabo@megazone23.bigpanda.com> wrote:\n>Right now, I know that it has a hole that lets through invalid data\n\nStephan, your patch has been posted to -general (Subject: Re:\n[GENERAL] Help..Help...). Is this version still valid?\n\n> void\n> heap_mark4fk_lock_acquire(Relation relation, HeapTuple tuple) {\n> [...]\n> /* try to find the list for the table in question */\nThis part of the patch works, if the list\n(a) is initially empty or\n(b) already contains relid or\n(c) starts with a table > relid.\n\n> while (ptr!=NULL) {\n> if (relid>ptr->table) {\n> ptr=ptr->next;\n> oldptr=ptr;\n// AFAICT above two lines should be swapped ...\n> }\n> else \n> break;\n> }\n\n... 
otherwise\n(d) if the new relid is to be inserted between two existing entries,\nwe get two items pointing to each other\n(e) if the new relid is > the last table in the list, we lose the\nwhole list.\n\nServus\n Manfred\n", "msg_date": "Fri, 15 Nov 2002 23:11:52 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks parallel" }, { "msg_contents": "On Fri, 15 Nov 2002, Manfred Koizar wrote:\n\n> On Wed, 13 Nov 2002 14:22:51 -0800 (PST), Stephan Szabo\n> <sszabo@megazone23.bigpanda.com> wrote:\n> >Right now, I know that it has a hole that lets through invalid data\n>\n> Stephan, your patch has been posted to -general (Subject: Re:\n> [GENERAL] Help..Help...). Is this version still valid?\n\nI have a newer version of it on my machine, but I was still sending out\nthat version of the patch. :( Thanks for letting me know before even more\npeople got a version that was broken. :)\n\nFor anyone working with the patch, you need to fix the lines below as\nnoted by Manfred. This is mostly unrelated to the hole mentioned in the\nquoted message above (it's a bug that with the bug you actually partially\nfill the hole but instead deadlock). I wonder if there were any other\nstupdities in there.\n\n> > void\n> > heap_mark4fk_lock_acquire(Relation relation, HeapTuple tuple) {\n> > [...]\n> > /* try to find the list for the table in question */\n> This part of the patch works, if the list\n> (a) is initially empty or\n> (b) already contains relid or\n> (c) starts with a table > relid.\n>\n> > while (ptr!=NULL) {\n> > if (relid>ptr->table) {\n> > ptr=ptr->next;\n> > oldptr=ptr;\n> // AFAICT above two lines should be swapped ...\n> > }\n> > else\n> > break;\n> > }\n>\n> ... 
otherwise\n> (d) if the new relid is to be inserted between two existing entries,\n> we get two items pointing to each other\n> (e) if the new relid is > the last table in the list, we lose the\n> whole list.\n\n", "msg_date": "Fri, 15 Nov 2002 15:38:46 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks parallel" }, { "msg_contents": "\n(Just noticed that this was sent to hackers, but -general would probably\nget to more of the people who might want to see it)\n\nOn Fri, 15 Nov 2002, Stephan Szabo wrote:\n\n> On Fri, 15 Nov 2002, Manfred Koizar wrote:\n>\n> > On Wed, 13 Nov 2002 14:22:51 -0800 (PST), Stephan Szabo\n> > <sszabo@megazone23.bigpanda.com> wrote:\n> > >Right now, I know that it has a hole that lets through invalid data\n> >\n> > Stephan, your patch has been posted to -general (Subject: Re:\n> > [GENERAL] Help..Help...). Is this version still valid?\n>\n> I have a newer version of it on my machine, but I was still sending out\n> that version of the patch. :( Thanks for letting me know before even more\n> people got a version that was broken. :)\n>\n> For anyone working with the patch, you need to fix the lines below as\n> noted by Manfred. This is mostly unrelated to the hole mentioned in the\n> quoted message above (it's a bug that with the bug you actually partially\n> fill the hole but instead deadlock). 
I wonder if there were any other\n> stupidities in there.\n>\n> > > void\n> > > heap_mark4fk_lock_acquire(Relation relation, HeapTuple tuple) {\n> > > [...]\n> > > /* try to find the list for the table in question */\n> > This part of the patch works, if the list\n> > (a) is initially empty or\n> > (b) already contains relid or\n> > (c) starts with a table > relid.\n> >\n> > > while (ptr!=NULL) {\n> > > if (relid>ptr->table) {\n> > > ptr=ptr->next;\n> > > oldptr=ptr;\n> > // AFAICT above two lines should be swapped ...\n> > > }\n> > > else\n> > > break;\n> > > }\n> >\n> > ... otherwise\n> > (d) if the new relid is to be inserted between two existing entries,\n> > we get two items pointing to each other\n> > (e) if the new relid is > the last table in the list, we lose the\n> > whole list.\n\n\n", "msg_date": "Fri, 15 Nov 2002 15:40:34 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Fwd: [HACKERS] RI_FKey_check: foreign key constraint blocks parallel" } ]
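The sorted-list traversal discussed in the thread above is easy to get right in isolation. Below is a small, self-contained sketch of the corrected order Manfred describes — remember the node you are leaving first, *then* advance, so the trailing pointer always lags by exactly one node. The names here (`LockListItem`, `locklist_insert`) are simplified, hypothetical stand-ins for the patch's structures, not the actual patch code; the comments map each branch to Manfred's cases (a)-(e).

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the patch's per-table list entry. */
typedef struct LockListItem
{
    unsigned int table;             /* relid, kept in ascending order */
    struct LockListItem *next;
} LockListItem;

/*
 * Insert relid into the ascending list, returning the (possibly new) head.
 * The two statements in the loop body are in the corrected order:
 * remember the node we are leaving, then advance to the next one.
 */
static LockListItem *
locklist_insert(LockListItem *head, unsigned int relid)
{
    LockListItem *ptr = head;
    LockListItem *oldptr = NULL;
    LockListItem *item;

    while (ptr != NULL)
    {
        if (relid > ptr->table)
        {
            oldptr = ptr;           /* trailing pointer: last node < relid */
            ptr = ptr->next;        /* advance after remembering, not before */
        }
        else
            break;
    }

    if (ptr != NULL && ptr->table == relid)
        return head;                /* case (b): relid already in the list */

    item = malloc(sizeof(LockListItem));
    item->table = relid;
    item->next = ptr;               /* cases (d)/(e): link to the remainder */

    if (oldptr == NULL)
        return item;                /* cases (a)/(c): new smallest, new head */

    oldptr->next = item;            /* splice between oldptr and ptr */
    return head;
}

static int
locklist_length(const LockListItem *head)
{
    int n = 0;

    for (; head != NULL; head = head->next)
        n++;
    return n;
}

static int
locklist_is_sorted(const LockListItem *head)
{
    for (; head != NULL && head->next != NULL; head = head->next)
        if (head->table >= head->next->table)
            return 0;               /* out of order or duplicated */
    return 1;
}
```

With the original statement order, `oldptr` ends up equal to `ptr` after every step, which is exactly why a middle insertion (case d) produced two nodes pointing at each other and a tail insertion (case e) dropped the whole list.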
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n> Discussing the press release on IRC, we realized that we really want \n> the number of code, /contrib, and GBORG contributors. Can anyone think \n> of a way we could get that?\n\nThat's a tough one. One way would be to scour the CVS logs: the \ncore maintainers usually attribute who a certain change was from. \nHow you could automate that is beyond me: I have some ideas, however. \n\nAnother way is to look at the email lists. Since most patches come \nthrough on the patch list, I did a quick count of distinct \"from\" \naddresses from that list. My archives only go back about a year: \nin that time, there were 1814 messages from 94 different emails. \nAbout 24 of those were one-shot wonders, but the usual suspects \ntopped the list:\n\n 73 | Neil Conway <neilc@samurai.com>\n 83 | Peter Eisentraut <peter_e@gmx.net>\n 129 | Joe Conway <mail@joeconway.com>\n 307 | Tom Lane <tgl@sss.pgh.pa.us>\n 599 | Bruce Momjian <pgman@candle.pha.pa.us>\n\nSo I think about 90 is probably a good ballpark to start from, \nas far as the number of people contributing to the code. Assuming \nthat almost all of the posters to patches are actually contributing \nsomething. (By way of comparison, the general list saw @28,000 posts \nfrom 3200 people!) This is for one year, so a total I would roughly \nguess to be about 2-3 times that. I'll go see what I can do with the \ncvs logs...\n\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200211132136\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niD8DBQE90wxvvJuQZxSWSsgRAi2PAKCYXCMQgrXnJzgk0ZpTNypGZ8jvdwCfbg58\n04vClcAkO7AXEgyhl+WSfVI=\n=qcKg\n-----END PGP SIGNATURE-----\n\n\n\n", "msg_date": "Thu, 14 Nov 2002 02:31:51 -0000", "msg_from": "greg@turnstep.com", "msg_from_op": true, "msg_subject": "Re: Press Release -- Numbers" }, { "msg_contents": "Folks,\n\nThank you, everyone, for you help with the press release. 
The only\nthing we're waiting on, I believe, is a quote from Tom Lane on the new\nrelease. Last-minute copy edits, please, people?\n\nhttp://techdocs.postgresql.org/guides/Pressrelease73\n\nAnd can anyone persuade Tom to cough up a quote in the next 36 hours?\n\n-Josh Berkus\n\n", "msg_date": "Sun, 17 Nov 2002 11:43:33 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Press Release -- Just Waiting for Tom" }, { "msg_contents": "FYI. I just tried to add a comment to the page, and I got <hr><b> rather\nthan the formatting of a hard-return and bolding. (i.e. \"<hr><b>2002/11/18\n10:22 EST (via web):</b><br>\" appeared rather than it formatted.\n\nI'll also ask what I asked there, here:\nAre there any new replication/backup features?\n\n-----Original Message-----\nFrom: pgsql-advocacy-owner@postgresql.org\n[mailto:pgsql-advocacy-owner@postgresql.org]On Behalf Of Josh Berkus\nSent: Sunday, November 17, 2002 2:44 PM\nTo: pgsql-advocacy@postgresql.org\nSubject: [pgsql-advocacy] Press Release -- Just Waiting for Tom\n\n\nFolks,\n\nThank you, everyone, for you help with the press release. The only\nthing we're waiting on, I believe, is a quote from Tom Lane on the new\nrelease. Last-minute copy edits, please, people?\n\nhttp://techdocs.postgresql.org/guides/Pressrelease73\n\nAnd can anyone persuade Tom to cough up a quote in the next 36 hours?\n\n-Josh Berkus\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Mon, 18 Nov 2002 11:39:57 -0500", "msg_from": "Jason Hihn <jhihn@paytimepayroll.com>", "msg_from_op": false, "msg_subject": "Re: Press Release -- Just Waiting for Tom" }, { "msg_contents": "I wasn't trying to use any formatting. The CGI did that all on it's own...\nIt put my comments into the PRE block of the annoucement. 
Maybe moving the\nclosing PRE tag is all it will need?\n\n\n-----Original Message-----\nFrom: Robert Treat [mailto:xzilla@users.sourceforge.net]\nSent: Monday, November 18, 2002 1:06 PM\nTo: Jason Hihn\nCc: pgsql-advocacy@postgresql.org\nSubject: Re: [pgsql-advocacy] Press Release -- Just Waiting for Tom\n\n\nOn Mon, 2002-11-18 at 11:39, Jason Hihn wrote:\n> FYI. I just tried to add a comment to the page, and I got <hr><b> rather\n> than the formatting of a hard-return and bolding. (i.e. \"<hr><b>2002/11/18\n> 10:22 EST (via web):</b><br>\" appeared rather than it formatted.\n>\n\nperhaps comments don't allow html formatting? Justin, can you confirm\nthis?\n\n> I'll also ask what I asked there, here:\n> Are there any new replication/backup features?\n>\n\nThere haven't been major changes in these areas. There are some\ncommercial companies that are offering replication solutions you can\nlook at. I also know Point In Time Recovery is planned for version 7.4\nas well.\n\nBTW - A complete list of changes is included in the source distributions\nif you want to check for something specific.\n\nRobert Treat\n\n> -----Original Message-----\n> From: pgsql-advocacy-owner@postgresql.org\n> [mailto:pgsql-advocacy-owner@postgresql.org]On Behalf Of Josh Berkus\n> Sent: Sunday, November 17, 2002 2:44 PM\n> To: pgsql-advocacy@postgresql.org\n> Subject: [pgsql-advocacy] Press Release -- Just Waiting for Tom\n>\n>\n> Folks,\n>\n> Thank you, everyone, for you help with the press release. The only\n> thing we're waiting on, I believe, is a quote from Tom Lane on the new\n> release. 
Last-minute copy edits, please, people?\n>\n> http://techdocs.postgresql.org/guides/Pressrelease73\n>\n> And can anyone persuade Tom to cough up a quote in the next 36 hours?\n>\n> -Josh Berkus\n>\n\n\n", "msg_date": "Mon, 18 Nov 2002 13:04:00 -0500", "msg_from": "Jason Hihn <jhihn@paytimepayroll.com>", "msg_from_op": false, "msg_subject": "Re: Press Release -- Just Waiting for Tom" }, { "msg_contents": "On Mon, 2002-11-18 at 11:39, Jason Hihn wrote:\n> FYI. I just tried to add a comment to the page, and I got <hr><b> rather\n> than the formatting of a hard-return and bolding. (i.e. \"<hr><b>2002/11/18\n> 10:22 EST (via web):</b><br>\" appeared rather than it formatted.\n> \n\nperhaps comments don't allow html formatting? Justin, can you confirm\nthis?\n\n> I'll also ask what I asked there, here:\n> Are there any new replication/backup features?\n> \n\nThere haven't been major changes in these areas. There are some\ncommercial companies that are offering replication solutions you can\nlook at. I also know Point In Time Recovery is planned for version 7.4\nas well. \n\nBTW - A complete list of changes is included in the source distributions\nif you want to check for something specific.\n\nRobert Treat\n\n> -----Original Message-----\n> From: pgsql-advocacy-owner@postgresql.org\n> [mailto:pgsql-advocacy-owner@postgresql.org]On Behalf Of Josh Berkus\n> Sent: Sunday, November 17, 2002 2:44 PM\n> To: pgsql-advocacy@postgresql.org\n> Subject: [pgsql-advocacy] Press Release -- Just Waiting for Tom\n> \n> \n> Folks,\n> \n> Thank you, everyone, for you help with the press release. The only\n> thing we're waiting on, I believe, is a quote from Tom Lane on the new\n> release. 
Last-minute copy edits, please, people?\n> \n> http://techdocs.postgresql.org/guides/Pressrelease73\n> \n> And can anyone persuade Tom to cough up a quote in the next 36 hours?\n> \n> -Josh Berkus\n> \n\n\n", "msg_date": "18 Nov 2002 13:05:39 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Press Release -- Just Waiting for Tom" }, { "msg_contents": "Robert Treat wrote:\n<snip>\n> perhaps comments don't allow html formatting? Justin, can you confirm\n> this?\n\nHi guys,\n\nIt's kind of a bit more tricky than that.\n\nThe Press Release for PG 7.3 page is being worked on as plain text, and\nis stored in Zwiki (the software we're using) that way. HTML inside it\ndoesn't work. :-/\n\nIt turns out that the \"Add Comments\" system automatically adds the\nneeded seperator, but does so in HTML without checking if that's\nappropriate. Have asked the Zwiki guys if they'd consider altering it,\nand think it's been added to the \"wishlist\" area of their bugs tracker,\nbut that's it.\n\nIf anyone here is good with Python, that could be fixed though.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> > I'll also ask what I asked there, here:\n> > Are there any new replication/backup features?\n> >\n> \n> There haven't been major changes in these areas. There are some\n> commercial companies that are offering replication solutions you can\n> look at. 
I also know Point In Time Recovery is planned for version 7.4\n> as well.\n> \n> BTW - A complete list of changes is included in the source distributions\n> if you want to check for something specific.\n> \n> Robert Treat\n> \n> > -----Original Message-----\n> > From: pgsql-advocacy-owner@postgresql.org\n> > [mailto:pgsql-advocacy-owner@postgresql.org]On Behalf Of Josh Berkus\n> > Sent: Sunday, November 17, 2002 2:44 PM\n> > To: pgsql-advocacy@postgresql.org\n> > Subject: [pgsql-advocacy] Press Release -- Just Waiting for Tom\n> >\n> >\n> > Folks,\n> >\n> > Thank you, everyone, for you help with the press release. The only\n> > thing we're waiting on, I believe, is a quote from Tom Lane on the new\n> > release. Last-minute copy edits, please, people?\n> >\n> > http://techdocs.postgresql.org/guides/Pressrelease73\n> >\n> > And can anyone persuade Tom to cough up a quote in the next 36 hours?\n> >\n> > -Josh Berkus\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 19 Nov 2002 16:41:15 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Press Release -- Just Waiting for Tom" } ]
[ { "msg_contents": "I have installed a 128k ISDN line to my home to service candle.pha.pa.us\n& momjian.postgresql.org. You should be seeing the same performance you\nsaw while I was in my old house pre-August.\n\nI hope ADSL will reach my house within the next year.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 13 Nov 2002 22:56:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "ISDN installed" } ]
[ { "msg_contents": "I just came across this article:\n\nhttp://newsforge.com/newsforge/02/11/11/1848223.shtml?tid=3\n\nIt's an ERP company, OpenMFG, that uses Linux, PostgreSQL and QT. In the\narticle they say they're active in Postgres development. Just wondering if\nthey wanted to say hi!\n\nChris\n\n\n", "msg_date": "Thu, 14 Nov 2002 13:03:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "An article mentioning PostgreSQL" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> I just came across this article:\n> \n> http://newsforge.com/newsforge/02/11/11/1848223.shtml?tid=3\n> \n> It's an ERP company, OpenMFG, that uses Linux, PostgreSQL and QT. In the\n> article they say they're active in Postgres development. Just wondering if\n> they wanted to say hi!\n\nThat is Ned Lilly, former Great Bridge employee. I talk to him\nregularly. I don't think they have submitted any patches recently, but\nhe does reply to email postings periodically.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 00:04:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: An article mentioning PostgreSQL" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> It's an ERP company, OpenMFG, that uses Linux, PostgreSQL and QT. In the\n> article they say they're active in Postgres development. 
Just wondering if\n> they wanted to say hi!\n\nWhile we're on the topic, it appears that Compiere (www.compiere.org)\nare in the process of porting their software to work with PostgreSQL:\n\n http://www.compiere.org/technology/pg/index.html\n\nGNUe has also supported PostgreSQL for a long time, although I'm not\nsure what state their ERP implementation is at:\n\n http://www.gnuenterprise.org/\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 00:24:06 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: An article mentioning PostgreSQL" }, { "msg_contents": "Yeah, we made an announcement to -general a few weeks ago, but didn't cross-post to -hackers.\n\nCheers to all. Obviously as a co-founder of Great Bridge, I'm a big believer in Postgres. OpenMFG makes extensive use of pl/pgsql for most of its ERP business logic.\n\nWe've also been beta testing PeerDirect's new Windows port, which I'd encourage more people to get involved with. As has been observed on this list before, as much as we all love Linux/BSD, if you don't have a Windows solution, you're missing an awfully big chunk of the market - and that's particularly true in our world of small manufacturing. That's why we chose Qt - one source tree, identical clients for Windows, Linux/Unix, Mac.\n\nRegards,\nNed\n\n\n----- Original Message ----- \nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Thursday, November 14, 2002 12:04 AM\nSubject: Re: [HACKERS] An article mentioning PostgreSQL\n\n\n> Christopher Kings-Lynne wrote:\n> > I just came across this article:\n> > \n> > http://newsforge.com/newsforge/02/11/11/1848223.shtml?tid=3\n> > \n> > It's an ERP company, OpenMFG, that uses Linux, PostgreSQL and QT. In the\n> > article they say they're active in Postgres development. 
Just wondering if\n> > they wanted to say hi!\n> \n> That is Ned Lilly, former Great Bridge employee. I talk to him\n> regularly. I don't think they have submitted any patches recently, but\n> he does reply to email postings periodically.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n", "msg_date": "Thu, 14 Nov 2002 06:53:53 -0500", "msg_from": "\"Ned Lilly\" <ned@nedscape.com>", "msg_from_op": false, "msg_subject": "Re: An article mentioning PostgreSQL" } ]
[ { "msg_contents": "Hi guys,\n\nWe received a query through the Advocacy site about whether we support\nAIX 5.1 or not, so am trying to find out.\n\nJust took a look at the Supported Platforms list, and the FAQ_AIX\ndocument (for 7.2.x) and it doesn't seem to mention specific versions of\nAIX that are supported.\n\nDoes PostgreSQL 7.2.x (and also 7.3 when it's released) support AIX 5.1?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 14 Nov 2002 17:51:48 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Does v7.2.x support AIX 5.1?" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> We received a query through the Advocacy site about whether we support\n> AIX 5.1 or not, so am trying to find out.\n\nIt should work. Andreas just submitted a port confirmation on AIX\n4.3.2 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 08:57:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does v7.2.x support AIX 5.1? " }, { "msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > We received a query through the Advocacy site about whether we support\n> > AIX 5.1 or not, so am trying to find out.\n> \n> It should work. Andreas just submitted a port confirmation on AIX\n> 4.3.2 ...\n\nThanks Tom. 
:)\n\nDo you feel there's anyone around that would be able to give a\ndefinitite yes or no?\n\nAm prepared to say \"it should work\" as an answer, but would prefer to\ngive something more definite if possible.\n\nJust being careful here.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n \n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 15 Nov 2002 01:13:29 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" }, { "msg_contents": "Justin Clift wrote:\n> Tom Lane wrote:\n> > \n> > Justin Clift <justin@postgresql.org> writes:\n> > > We received a query through the Advocacy site about whether we support\n> > > AIX 5.1 or not, so am trying to find out.\n> > \n> > It should work. Andreas just submitted a port confirmation on AIX\n> > 4.3.2 ...\n> \n> Thanks Tom. :)\n> \n> Do you feel there's anyone around that would be able to give a\n> definitite yes or no?\n> \n> Am prepared to say \"it should work\" as an answer, but would prefer to\n> give something more definite if possible.\n\nI checked the supported platforms list for 7.2 at:\n\n\thttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/supported-platforms.html\n\nand saw:\n\n\tAIX RS60007.\n\t22001-12-19, \n\tAndreas Zeugswetter (<ZeugswetterA@spardat.at>),\n\tTatsuo Ishii (<t-ishii@sra.co.jp>)\n\tsee also doc/FAQ_AIX\n\nFAQ_AIX has specific version information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 09:18:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" } ]
[ { "msg_contents": "Hi there!\n\nI want to propose the patch for adding the hierarchical queries posibility.\nIt allows to construct queries a la Oracle for ex:\nSELECT a,b FROM t CONNECT BY a PRIOR b START WITH cond;B\n\nI've seen this type of queries often made by adding a new type, which stores\nposition of row in the tree. But sorting such tree are very tricky (i think).\n\nPatch allows result tree to be sorted, i.e. subnodes of each node will be\nsorted by ORDER BY clause.\n\nwith regards, evgen\n\n---\n.evgen\n\n", "msg_date": "Thu, 14 Nov 2002 15:52:28 +0400 (SAMT)", "msg_from": "Evgen Potemkin <evgent@ns.terminal.ru>", "msg_from_op": true, "msg_subject": "Proposal of hierachical queries (a la Oracle)" }, { "msg_contents": "On Thu, 2002-11-14 at 06:52, Evgen Potemkin wrote:\n> Hi there!\n> \n> I want to propose the patch for adding the hierarchical queries posibility.\n> It allows to construct queries a la Oracle for ex:\n> SELECT a,b FROM t CONNECT BY a PRIOR b START WITH cond;B\n\nGreat addition. But please use the SQL 99 syntax for recursive queries\n(if you need the full segment, I can send them to you):\n\nSection 7.13 of Part 2:\n\n Format\n\n <search or cycle clause> ::=\n <search clause>\n | <cycle clause>\n | <search clause> <cycle clause>\n\n <search clause> ::=\n SEARCH <recursive search order> SET <sequence column>\n\n <recursive search order> ::=\n DEPTH FIRST BY <sort specification list>\n | BREADTH FIRST BY <sort specification list>\n\n <sequence column> ::= <column name>\n\n <cycle clause> ::=\n CYCLE <cycle column list>\n SET <cycle mark column> TO <cycle mark value>\n DEFAULT <non-cycle mark value>\n USING <path column>\n\n <cycle column list> ::=\n <cycle column> [ { <comma> <cycle column> }... 
]\n\n <cycle column> ::= <column name>\n\n <cycle mark column> ::= <column name>\n\n <path column> ::= <column name>\n\n <cycle mark value> ::= <value expression>\n\n <non-cycle mark value> ::= <value expression>\n\n-- \nRod Taylor <rbt@rbt.ca>\n", "msg_date": "15 Nov 2002 12:02:17 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Proposal of hierachical queries (a la Oracle)" }, { "msg_contents": "Ok, full section would be very helpful, i don't have it.\n\n---\n.evgen\n\nOn 15 Nov 2002, Rod Taylor wrote:\n\n> On Thu, 2002-11-14 at 06:52, Evgen Potemkin wrote:\n> > Hi there!\n> >\n> > I want to propose the patch for adding the hierarchical queries posibility.\n> > It allows to construct queries a la Oracle for ex:\n> > SELECT a,b FROM t CONNECT BY a PRIOR b START WITH cond;B\n>\n> Great addition. But please use the SQL 99 syntax for recursive queries\n> (if you need the full segment, I can send them to you):\n>\n> Section 7.13 of Part 2:\n>\n\n", "msg_date": "Sat, 16 Nov 2002 14:28:42 +0400 (SAMT)", "msg_from": "Evgen Potemkin <evgent@ns.terminal.ru>", "msg_from_op": true, "msg_subject": "Re: Proposal of hierachical queries (a la Oracle)" } ]
[ { "msg_contents": "Hello,\n When I change view and change number of column PostgreSQL return error :\n'cannot change number of column in view'\nIs it too hard set this command\nif view exits drop view\nand then change view\n\nIt is like with return type in function\n\nNow 'or replace' don't help too much\n\nregards\n", "msg_date": "Thu, 14 Nov 2002 13:41:18 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "create or replace view" }, { "msg_contents": "On Thu, Nov 14, 2002 at 13:41:18 +0000,\n snpe <snpe@snpe.co.yu> wrote:\n> Hello,\n> When I change view and change number of column PostgreSQL return error :\n> 'cannot change number of column in view'\n> Is it too hard set this command\n> if view exits drop view\n> and then change view\n> \n> It is like with return type in function\n> \n> Now 'or replace' don't help too much\n\nThe create or replace command exists so that you can modify a view in a\nway that allows other objects that refer to it to keep working (without\nhaving to recreate those objects). 
However if you can the number of\ncolumns (and probably any of their types), then these other objects\nor going to need to know that things have changed so that you can't\njust replace the view.\n", "msg_date": "Thu, 14 Nov 2002 08:41:41 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 02:41 pm, Bruno Wolff III wrote:\n> On Thu, Nov 14, 2002 at 13:41:18 +0000,\n>\n> snpe <snpe@snpe.co.yu> wrote:\n> > Hello,\n> > When I change view and change number of column PostgreSQL return error\n> > : 'cannot change number of column in view'\n> > Is it too hard set this command\n> > if view exits drop view\n> > and then change view\n> >\n> > It is like with return type in function\n> >\n> > Now 'or replace' don't help too much\n>\n> The create or replace command exists so that you can modify a view in a\n> way that allows other objects that refer to it to keep working (without\n> having to recreate those objects). 
However if you can the number of\n> columns (and probably any of their types), then these other objects\n> or going to need to know that things have changed so that you can't\n> just replace the view.\n\nI undestand that, but if I change number of column I want that\n'create or replace view' do 'drop view ..; create view ..;'\nWhy not ?\n\nregards\nHaris Peco\n", "msg_date": "Thu, 14 Nov 2002 16:17:15 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thu, 2002-11-14 at 11:17, snpe wrote:\n> On Thursday 14 November 2002 02:41 pm, Bruno Wolff III wrote:\n> > On Thu, Nov 14, 2002 at 13:41:18 +0000,\n> >\n> > snpe <snpe@snpe.co.yu> wrote:\n> > > Hello,\n> > > When I change view and change number of column PostgreSQL return error\n> > > : 'cannot change number of column in view'\n> > > Is it too hard set this command\n> > > if view exits drop view\n> > > and then change view\n> > >\n> > > It is like with return type in function\n> > >\n> > > Now 'or replace' don't help too much\n> >\n> > The create or replace command exists so that you can modify a view in a\n> > way that allows other objects that refer to it to keep working (without\n> > having to recreate those objects). 
However if you can the number of\n> > columns (and probably any of their types), then these other objects\n> > or going to need to know that things have changed so that you can't\n> > just replace the view.\n> \n> I undestand that, but if I change number of column I want that\n> 'create or replace view' do 'drop view ..; create view ..;'\n> Why not ?\n\nNow you've just broken all functions, views, rules, and triggers that\ndepend on that view to function.\n\n-- \n Rod Taylor\n\n", "msg_date": "14 Nov 2002 11:38:59 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 04:38 pm, Rod Taylor wrote:\n> On Thu, 2002-11-14 at 11:17, snpe wrote:\n> > On Thursday 14 November 2002 02:41 pm, Bruno Wolff III wrote:\n> > > On Thu, Nov 14, 2002 at 13:41:18 +0000,\n> > >\n> > > snpe <snpe@snpe.co.yu> wrote:\n> > > > Hello,\n> > > > When I change view and change number of column PostgreSQL return\n> > > > error\n> > > >\n> > > > : 'cannot change number of column in view'\n> > > >\n> > > > Is it too hard set this command\n> > > > if view exits drop view\n> > > > and then change view\n> > > >\n> > > > It is like with return type in function\n> > > >\n> > > > Now 'or replace' don't help too much\n> > >\n> > > The create or replace command exists so that you can modify a view in a\n> > > way that allows other objects that refer to it to keep working (without\n> > > having to recreate those objects). 
However if you change the number of\n> > > columns (and probably any of their types), then these other objects\n> > > are going to need to know that things have changed so that you can't\n> > > just replace the view.\n> >\n> > I understand that, but if I change the number of columns I want\n> > 'create or replace view' to do 'drop view ..; create view ..;'\n> > Why not?\n>\n> Now you've just broken all functions, views, rules, and triggers that\n> depend on that view to function.\nBut I can simply:\ndrop view view_name;\ncreate view view_name ...;\n\nI want that 'create or replace view' work drop-create if view exists else only \ncreate\n\nregards\nHaris Peco\n\n", "msg_date": "Thu, 14 Nov 2002 16:49:42 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 05:01 pm, Bruno Wolff III wrote:\n> On Thu, Nov 14, 2002 at 16:49:42 +0000,\n>\n> snpe <snpe@snpe.co.yu> wrote:\n> > I want that 'create or replace view' work drop-create if view exists else\n> > only create\n>\n> Why do you want this?\n>\n\nWhy 'create or replace' ?\n\n", "msg_date": "Thu, 14 Nov 2002 17:00:30 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thu, Nov 14, 2002 at 16:49:42 +0000,\n snpe <snpe@snpe.co.yu> wrote:\n> \n> I want that 'create or replace view' work drop-create if view exists else only \n> create\n\nWhy do you want this?\n", "msg_date": "Thu, 14 Nov 2002 11:01:36 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 05:22 pm, Bruno Wolff III wrote:\n> On Thu, Nov 14, 2002 at 17:00:30 +0000,\n>\n> snpe <snpe@snpe.co.yu> wrote:\n> > On Thursday 14 November 2002 05:01 pm, Bruno Wolff III wrote:\n> > > On Thu, Nov 14, 2002 at 16:49:42 +0000,\n> > >\n> > > snpe <snpe@snpe.co.yu> wrote:\n> > 
> > I want that 'create or replace view' work drop-create if view exists\n> > > > else only create\n> > >\n> > > Why do you want this?\n> >\n> > Why 'create or replace' ?\n>\n> Why do you want create or replace to do a drop, then a create if the view\n> exists but it is being changed in a way that will break any objects that\n> refer to the old view?\n>\n> Are you trying to save typing a few characters or what?\nYes, it is 'create or replace view', not ?\n\n", "msg_date": "Thu, 14 Nov 2002 17:20:18 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thu, Nov 14, 2002 at 17:00:30 +0000,\n snpe <snpe@snpe.co.yu> wrote:\n> On Thursday 14 November 2002 05:01 pm, Bruno Wolff III wrote:\n> > On Thu, Nov 14, 2002 at 16:49:42 +0000,\n> >\n> > snpe <snpe@snpe.co.yu> wrote:\n> > > I want that 'create or replace view' work drop-create if view exists else\n> > > only create\n> >\n> > Why do you want this?\n> >\n> \n> Why 'create or replace' ?\n\nWhy do you want create or replace to do a drop, then a create if the view\nexists but it is being changed in a way that will break any objects that\nrefer to the old view?\n\nAre you trying to save typing a few characters or what?\n", "msg_date": "Thu, 14 Nov 2002 11:22:59 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "snpe <snpe@snpe.co.yu> writes:\n> On Thursday 14 November 2002 05:22 pm, Bruno Wolff III wrote:\n>> Are you trying to save typing a few characters or what?\n\n> Yes, it is 'create or replace view', not ?\n\nThe statement was not invented to save a few characters of typing.\nIt was invented to allow people to make internal changes to view\ndefinitions without breaking other objects that refer to the view.\n\nIf we made it automatically drop and recreate the view then we'd\nbe defeating the purpose.\n\n\t\t\tregards, tom lane\n", "msg_date": 
"Thu, 14 Nov 2002 12:45:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create or replace view " }, { "msg_contents": "Tom Lane wrote:\n> snpe <snpe@snpe.co.yu> writes:\n> \n>>On Thursday 14 November 2002 05:22 pm, Bruno Wolff III wrote:\n>>\n>>>Are you trying to save typing a few characters or what?\n>>\n> \n>>Yes, it is 'create or replace view', not ?\n> \n> \n> The statement was not invented to save a few characters of typing.\n> It was invented to allow people to make internal changes to view\n> definitions without breaking other objects that refer to the view.\n> \n> If we made it automatically drop and recreate the view then we'd\n> be defeating the purpose.\n\nIt might just be me but it seems that this discussion is missing the \npoint if we believe this request is about saving some characters. I \ndon't think it is. I think it's about being able to write simple SQL \nscripts that don't produce errors when you use the syntax below in an \nadministration or development script and the object doesn't exist:\n\n\tdrop...\n\tcreate...\n\n\nThe accepted syntax in both PG and others for trying to avoid this \nissue is:\n\n\tcreate or replace....\n\nUsing this syntax the database script will run without errors, quietly \nadjusting the object definition as required. Perfect. That's what we want.\n\nNow I'm only interpreting here and haven't run into this problem myself \nin PG but it appears from some of the early posts on this subject that \nPG isn't consistent in whether it will allow the change to occur, at \nleast with respect to views. Instead, PG apparently tries to \"help\" by \nnot updating the view if the view's result schema would be different, \nhence the request (perhaps misguided by trying to specify \"how\" instead \nof \"what\") to drop/create.\n\n\nAssuming that's a correct assessment and summary of the problem then \nreviewing the following use cases seems in order:\n\n\n1. 
The view doesn't exist.\n\nAction: create the new view\n\n\n2. The view exists and the change can be determined to be benign, \npresumably because the virtual table schema of the view retains the same \ncolumn specifications (names and types match original specification).\n\nAction: replace the view \"in situ\" so that dependencies are ok\n\n\n3. The view exists but the change isn't benign and it's clear that other \nobjects referencing the view are going to have issues since column \nnames, types, number, etc. are being changed.\n\nAction 3: drop/create the view. Optionally we might consider doing a \nNOTIFY \"dependent object references\" which might also work nicely in \nother areas such as trigger functions etc.\n\n\nWhy drop/create? (or appropriate similar internal operation). A lot of \nreasons actually.\n\nFirst, this use case, by definition, says the new view's going to break \nother objects -- and that this will be true regardless of whether I use \ncreate-replace or drop/create. So not allowing create-replace to operate \nas sugar changes nothing in terms of the resulting schema issues upon \nstatement completion. It has a big impact on my SQL though, since \ndrop/create may throw errors that create-replace won't. So we haven't \nsolved a problem by ignoring case #3. Instead we've continued to require \ndevelopers to use a syntax guaranteed to throw errors. Cool.\n\nSecond, if there are other objects depending on the view to look a \ncertain way, and I'm knowingly changing the view what can you infer? One \nmight choose to infer \"The programmer's an idiot for wanting to break \nhis schema like this.\" I see far too much code written from this \nattitude...it's what I hate about most M$ code. I prefer to infer that \n\"The programmer's a human being who might just be 10x smarter than \nme...maybe I should let him do his job as he sees fit.\"\n\nAs an aside, this is the UNIX philosophy. 
Not only do we not try to \nprotect you from yourself by taking away all the guns (no command prompt \netc), we give you a fully loaded semi-automatic weapon (C, shell, etc) \nwith the safety off (root) and say \"Be careful\".\n\n<soapbox>\n\nSo, instead of assuming that we know more about what's right than the \nprogrammer, perhaps we should try assuming that the programmer's next \nSQL script lines will adapt to the new view definition and make the \nappropriate changes -- perhaps via a series of more create or replace \nstatements ;). A reasonable developer/DBA should know they're changing \nthe view in a way that isn't compatible with previously defined \ndependents, just as they should realize dependencies may exist when they \nalter schema in general. If not, then hey we told you to \"Be careful\".\n\nThe \"create or replace\" syntax, in my mind anyway, wasn't designed to \nsay \"If you can create, do so. If you can replace, do so. If you have to \ndrop, tell the programmer to bite you\" as implied by many of the posts \non this thread. It has a different goal, one of making the developer or \nDBA's life easier (which occasionally means saving characters BTW. I \nmean, if people weren't concerned about that how can you explain Unix or \nPerl? ;) ).\n\nIf we're concerned with this change from a consistency perspective, look \nat triggers. The programmer drops a function and the triggers relying on \nthat function go to hell. Sure, and if we said \"you can't drop the \nfunction because triggers might break\" then it'd parallel what we're \nsaying here -- in effect \"we know better than you do what you want\". Or \nto use M$ terminology \"we know where you want to go today\" ;).\n\nNow, if I've misunderstood the problem here I just spent a lot of time \non a non-issue and wasted a lot of time, for which I apologize. But I \nthink the overall philosophy is reusable in any event. 
I bring it up \nhere because I've gotten a distinct sense of disrepect in some of the \nreplies on this thread and it disturbs me. If we have any goals for the \nPostgres community they should include:\n\nA. We want the programmer/DBA to have an easier time getting their job \ndone and anything we do to that end that is compatible with existing and \nemerging standards is \"a good thing\". If PG is easier to use it'll get \nused more.\n\nB. We want to treat people who are interested in PostgreSQL with respect \nat all times, keeping in mind that we communicate with them not only \nthrough this forum, but through the code we write for them.\n\n\nAs a personal note, any time I see a response to my posts consisting of \n \"Why would you want to do that?\" I automatically assume the author \nsimply left off the implied suffix of \"you idiot\". It's not a question \nthat I feel treats me with respect. I'm sure I'm not alone.\n\n</soapbox>\n\n\nss\n\n\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n", "msg_date": "Thu, 14 Nov 2002 12:26:53 -0700", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thu, 14 Nov 2002, Scott Shattuck wrote:\n\n> It might just be me but it seems that this discussion is missing the\n> point if we believe this request is about saving some characters. I\n> don't think it is. I think it's about being able to write simple SQL\n> scripts that don't produce errors when you use the syntax below in an\n> adminstration or development script and the object doesn't exist:\n\nI think there are two groups of people who have different ideas of what\nthis functionality is supposed to do. 
From my understanding of the\ndiscussions on create or replace function, the point really was to do an\nin place modification to not need to drop and recreate dependent objects.\nNote that afaik you also can't change the return type of a function in a\ncreate or replace if it already exists with a different return type.\n\nThe other usage is useful, but I don't think it was the intended way to be\nused. I use it that way too, but if I get an error on a create or replace\nI do the more involved version (dump dependents if necessary, drop\ncascade, create, edit dump, restore).\n\n> If we're concerned with this change from a consistency perspective, look\n> at triggers. The programmer drops a function and the triggers relying on\n> that function go to hell. Sure, and if we said \"you can't drop the\n> function because triggers might break\" then it'd parallel what we're\n> saying here -- in effect \"we know better than you do what you want\". Or\n> to use M$ terminology \"we know where you want to go today\" ;).\n\nIn fact, afaict 7.3 does exactly this unless you use drop cascade.\nI don't think that the past way was particularly easier, with needing to\ndump/restore dependent objects in order to make them work again. I think\nof it like constraints, as much as you can you enforce the constraint.\nIt's possible that the next statement will make the sequence\nwork for the constraint, but you don't wait to find out.\n\n> B. We want to treat people who are interested in PostgreSQL with respect\n> at all times, keeping in mind that we communicate with them not only\n> through this forum, but through the code we write for them.\n\nThis is always true. Even if we forget sometimes. 
:)\n\n\n", "msg_date": "Thu, 14 Nov 2002 12:01:49 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 05:45 pm, Tom Lane wrote:\n> snpe <snpe@snpe.co.yu> writes:\n> > On Thursday 14 November 2002 05:22 pm, Bruno Wolff III wrote:\n> >> Are you trying to save typing a few characters or what?\n> >\n> > Yes, it is 'create or replace view', not ?\n>\n> The statement was not invented to save a few characters of typing.\n> It was invented to allow people to make internal changes to view\n> definitions without breaking other objects that refer to the view.\n>\n> If we made it automatically drop and recreate the view then we'd\n> be defeating the purpose.\n>\nDoes it mean that if I will change any object (view or function) I must\ndrop all dependent objects ?\nexample :\n I want change (number of columns) view viewa\nIf viewb depend of viewa, I must drop and create viewa and viewb ?\n\nDoes it possible that viewb stay temporary (or always) invalid ?\n recreate viewa will make viewb valid or pgsql return error for viewb ?\n\nregards\nHaris Peco\n", "msg_date": "Thu, 14 Nov 2002 21:32:49 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 08:01 pm, Stephan Szabo wrote:\n> On Thu, 14 Nov 2002, Scott Shattuck wrote:\n> > It might just be me but it seems that this discussion is missing the\n> > point if we believe this request is about saving some characters. I\n> > don't think it is. I think it's about being able to write simple SQL\n> > scripts that don't produce errors when you use the syntax below in an\n> > adminstration or development script and the object doesn't exist:\n>\n> I think there are two groups of people who have different ideas of what\n> this functionality is supposed to do. 
From my understanding of the\n> discussions on create or replace function, the point really was to do an\n> in place modification to not need to drop and recreate dependent objects.\n> Note that afaik you also can't change the return type of a function in a\n> create or replace if it already exists with a different return type.\n>\n> The other usage is useful, but I don't think it was the intended way to be\n> used. I use it that way too, but if I get an error on a create or replace\n> I do the more involved version (dump dependents if necessary, drop\n> cascade, create, edit dump, restore).\n>\n> > If we're concerned with this change from a consistency perspective, look\n> > at triggers. The programmer drops a function and the triggers relying on\n> > that function go to hell. Sure, and if we said \"you can't drop the\n> > function because triggers might break\" then it'd parallel what we're\n> > saying here -- in effect \"we know better than you do what you want\". Or\n> > to use M$ terminology \"we know where you want to go today\" ;).\n>\n> In fact, afaict 7.3 does exactly this unless you use drop cascade.\n> I don't think that the past way was particularly easier, with needing to\n> dump/restore dependent objects in order to make them work again. I think\n> of it like constraints, as much as you can you enforce the constraint.\n> It's possible that the next statement will make the sequence\n> work for the constraint, but you don't wait to find out.\n>\n> > B. We want to treat people who are interested in PostgreSQL with respect\n> > at all times, keeping in mind that we communicate with them not only\n> > through this forum, but through the code we write for them.\n>\n> This is always true. Even if we forget sometimes. 
:)\n>\n>\nProblem is when I want to change a view (or functions) with a lot of dependencies\nI must drop and recreate all dependent views (or functions) - I want to add only\none column in a view \nI don't know if solution hard for that.\n\nregards\nHaris Peco\n", "msg_date": "Thu, 14 Nov 2002 21:45:54 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "\nOn Thu, 14 Nov 2002, snpe wrote:\n\n> Problem is when I want to change a view (or functions) with a lot of dependencies\n> I must drop and recreate all dependent views (or functions) - I want to add only\n> one column in a view\n> I don't know if solution hard for that.\n\nWell, doing create or replace as a drop/create might very well do the same\nthing, and even if it got the same oid, we'd have to be really sure that\nnothing would misbehave upon receiving that extra column before allowing\nit for purposes of avoiding recreation of dependencies.\n\n\n", "msg_date": "Thu, 14 Nov 2002 14:36:34 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "> Problem is when I want to change a view (or functions) with a lot of\n> dependencies\n> I must drop and recreate all dependent views (or functions) - I\n> want to add only\n> one column in a view\n> I don't know if solution hard for that.\n\nThis is definitely something that will cause some anguish in 7.3. 
I think\n7.4 will need the concept of an \"invalid object\" that can be resurrected...\n\nChris\n\n", "msg_date": "Fri, 15 Nov 2002 09:45:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Thursday 14 November 2002 10:36 pm, Stephan Szabo wrote:\n> On Thu, 14 Nov 2002, snpe wrote:\n> > Problem is when I want to change a view (or functions) with a lot of\n> > dependencies I must drop and recreate all dependent views (or functions) -\n> > I want to add only one column in a view\n> > I don't know if solution hard for that.\n>\n> Well, doing create or replace as a drop/create might very well do the same\n> thing, and even if it got the same oid, we'd have to be really sure that\n> nothing would misbehave upon receiving that extra column before allowing\n> it for purposes of avoiding recreation of dependencies.\n>\n>\nCan PostgreSQL recreate dependencies automatically or say 'object is not valid'\n\nregards\nHaris Peco\n", "msg_date": "Fri, 15 Nov 2002 02:09:13 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "> > Well, doing create or replace as a drop/create might very well\n> do the same\n> > thing, and even if it got the same oid, we'd have to be really sure that\n> > nothing would misbehave upon receiving that extra column before allowing\n> > it for purposes of avoiding recreation of dependencies.\n> >\n> >\n> Can PostgreSQL recreate dependencies automatically or say 'object is\n> not valid'\n\n7.3 doesn't do 'object is not valid'\n\nChris\n\n", "msg_date": "Fri, 15 Nov 2002 10:14:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: create or replace view" } ]
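For readers landing on this thread from a search, the behaviour under discussion can be sketched in a short psql script. The table and view names below are made up for illustration, and the exact error wording varies across PostgreSQL versions (later releases relax this slightly, e.g. by allowing columns to be appended at the end of a view's select list):

```sql
-- Hypothetical schema: viewb depends on viewa, as in the thread.
CREATE TABLE t (id integer, name text);
CREATE VIEW viewa AS SELECT id FROM t;
CREATE VIEW viewb AS SELECT id FROM viewa;

-- An in-place replacement is fine while the output column list is
-- unchanged; viewb keeps working without being recreated.
CREATE OR REPLACE VIEW viewa AS SELECT id FROM t WHERE id IS NOT NULL;

-- Changing the column list is rejected rather than silently turned into a
-- drop/create, because viewb still references the view's old shape:
CREATE OR REPLACE VIEW viewa AS SELECT id, name FROM t;
-- ERROR: cannot change number of columns in view (approximate wording)

-- The explicit route makes the breakage visible: with 7.3's dependency
-- tracking a plain DROP is refused while dependents exist, and CASCADE
-- takes viewb down with it.
DROP VIEW viewa CASCADE;
```

That dependency-aware refusal is the point made above: the replace path of CREATE OR REPLACE VIEW is meant never to invalidate dependent objects, which is why a column-list change forces the drop/create route instead.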
[ { "msg_contents": "\n> > > > We received a query through the Advocacy site about whether we support\n> > > > AIX 5.1 or not, so am trying to find out.\n> > > \n> > > It should work. Andreas just submitted a port confirmation on AIX\n> > > 4.3.2 ...\n\n> > Do you feel there's anyone around that would be able to give a\n> > definite yes or no?\n\nI don't have AIX 5 here, so cannot test, sorry.\nBut yes, it should definitely work.\n\nThere is a known possible performance improvement for concurrent sessions on \nmultiprocessor AIX machines. The now deprecated \"cs(3)\" used for the AIX TAS \nimplementation should be replaced with fetch_and_or, compare_and_swap or the \nlike. I just haven't got round to doing a patch that I trust. \n\nMaybe Tatsuo can say more ?\n\nAndreas\n", "msg_date": "Thu, 14 Nov 2002 16:25:49 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> I don't have AIX 5 here, so cannot test, sorry.\n> But yes, it should definitely work.\n> \n> There is a known possible performance improvement for concurrent sessions on \n> multiprocessor AIX machines. The now deprecated \"cs(3)\" used for the AIX TAS \n> implementation should be replaced with fetch_and_or, compare_and_swap or the \n> like. I just haven't got round to doing a patch that I trust. \n> \n> Maybe Tatsuo can say more ?\n\ncs() is gone in 7.3beta because cs() was failing on SMP machines anyway.\nThe new code is:\n\n\t#define TAS(lock) _check_lock(lock, 0, 1)\n\t#define S_UNLOCK(lock) _clear_lock(lock, 0)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 11:03:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" } ]
[ { "msg_contents": "\n> > I don't have AIX 5 here, so cannot test, sorry.\n> > But yes, it should definitely work.\n> > \n> > There is a known possible performance improvement for concurrent sessions on \n> > multiprocessor AIX machines. The now deprecated \"cs(3)\" used for the AIX TAS \n> > implementation should be replaced with fetch_and_or, compare_and_swap or the \n> > like. I just haven't got round to doing a patch that I trust. \n> > \n> > Maybe Tatsuo can say more ?\n> \n> cs() is gone in 7.3beta because cs() was failing on SMP machines anyway.\n> The new code is:\n> \n> \t#define TAS(lock) _check_lock(lock, 0, 1)\n> \t#define S_UNLOCK(lock) _clear_lock(lock, 0)\n\nAh, great. That is perfect. Has slipped in without me noticing :-)\n\nThanks to whoever fixed this\nAndreas\n", "msg_date": "Thu, 14 Nov 2002 17:29:42 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > I don't have AIX 5 here, so cannot test, sorry.\n> > > But yes, it should definitely work.\n> > > \n> > > There is a known possible performance improvement for concurrent sessions on \n> > > multiprocessor AIX machines. The now deprecated \"cs(3)\" used for the AIX TAS \n> > > implementation should be replaced with fetch_and_or, compare_and_swap or the \n> > > like. I just haven't got round to doing a patch that I trust. \n> > > \n> > > Maybe Tatsuo can say more ?\n> > \n> > cs() is gone in 7.3beta because cs() was failing on SMP machines anyway.\n> > The new code is:\n> > \n> > \t#define TAS(lock) _check_lock(lock, 0, 1)\n> > \t#define S_UNLOCK(lock) _clear_lock(lock, 0)\n> \n> Ah, great. That is perfect. 
Has slipped in without me noticing :-)\n\nAccording to CVS, it was Tomoyuki Niijima:\n\n---------------------------------------------------------------------------\n\n\nI tried to build PostgreSQL with the following steps to see backends hang\nduring the regression test. The problem has been reproduced on two machines\nbut both of these are the same type of hardware and software. I also tried\nto recreate the problem on other machines, on an older version of AIX, but I\ncouldn't.\n\nAfter looking through the pgsql-hackers mailing list, I focused on the spin lock\nissue to solve the problem. The easiest, and maybe not the best, solution\nfor the problem is to give up HAS_TEST_AND_SET. This actually works.\n\nAnother and better solution for the problem is to use _check_lock() and\n_clear_lock() as the spin lock. The important thing here is to define S_UNLOCK()\nwith _clear_lock(). This will solve the so-called \"Compiler bug\" issue\nsomeone wrote about on the mailing list.\n\nWe have some other APIs such as cs(), compare_and_swap() and fetch_and_or()\nto do test and set on AIX, but none of these solved my problem. I\nwrote a tiny testing program to see if we have any bugs in these AIX APIs,\nbut I couldn't see any problem except for compare_and_swap(). It seems that\nyou can not use compare_and_swap() for the purpose, as it would not work as a\nspin lock on any SMP machines I tested. I don't know the reason why neither cs()\nnor fetch_and_or()/fetch_and_and() will work with PostgreSQL on p690.\nThese worked with my testing program on all machines I tested.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 11:34:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does v7.2.x support AIX 5.1?" } ]
[ { "msg_contents": "Believe it or not, I'm trying to compile today's cvs pgsql on a\nDebian 2.2.19 system. Compilation dies while compiling pg_dump with\n\n../../../src/interfaces/libpq/libpq.so: undefined reference to `atexit'\n\nIn the mail archives there is a mention of upgrading libc to\nlibc6-dev_2.2.5-3_i386.deb. As far as I can tell, that should read\nlibc6_2.2.5-3_i386.deb, and again AFAICT this system already has\nlibc6_2.2.5-6_i386.deb on it. I can see atexit is undefined in libpq, and it\nis defined in /usr/lib/libc.a. For some reason /lib/libc*.so are stripped,\nso it is hard to tell, but I assume it must be the same as for\n/usr/lib/libc.a.\n\nHave any of you managed to compile postgresql on an oldstable Debian system?\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 14 Nov 2002 20:55:22 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Debian build prob" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> Believe it or not, I'm trying to compile today's cvs pgsql on a\n> Debian 2.2.19 system. Compilation dies while compiling pg_dump with\n> ../../../src/interfaces/libpq/libpq.so: undefined reference to `atexit'\n\n<blink> Did you run configure? AFAICT that call only gets compiled if\nconfigure found atexit(), so this is more than a tad surprising ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 16:18:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Debian build prob " }, { "msg_contents": "On Thu, 14 Nov 2002, Patrick Welche wrote:\n\n> Believe it or not, I'm trying to compile today's cvs pgsql on a\n> Debian 2.2.19 system. Compilation dies while compiling pg_dump with\n> \n> ../../../src/interfaces/libpq/libpq.so: undefined reference to `atexit'\n> \n> In the mail archives there is a mention of upgrading libc to\n> libc6-dev_2.2.5-3_i386.deb. 
As far as I can tell, that should read\n> libc6_2.2.5-3_i386.deb, and again AFAICT this system already has\n> libc6_2.2.5-6_i386.deb on it. I can see atexit is undefined in libpq, and it\n> is defined in /usr/lib/libc.a. For some reason /lib/libc*.so are stripped,\n> so it is hard to tell, but I assume it must be the same as for\n> /usr/lib/libc.a.\n> \n> Have any of you managed to compile postgresql on an oldstable Debian system?\n> \n\nThe latest I've built was from somewhere like the beta 3 mark but yes, built it\non a Debian 2.2 installation with no library upgrades or anything. Now of\ncourse one would need a new bison.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Fri, 15 Nov 2002 10:42:45 +0000 (GMT)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Debian build prob" }, { "msg_contents": "On Thu, Nov 14, 2002 at 08:55:22PM +0000, Patrick Welche wrote:\n> Believe it or not, I'm trying to compile today's cvs pgsql on a\n> Debian 2.2.19 system. Compilation dies while compiling pg_dump with\n> \n> ../../../src/interfaces/libpq/libpq.so: undefined reference to `atexit'\n> \n> In the mail archives there is a mention of upgrading libc to\n> libc6-dev_2.2.5-3_i386.deb. As far as I can tell, that should read\n> libc6_2.2.5-3_i386.deb, and again AFAICT this system already has\n> libc6_2.2.5-6_i386.deb on it. I can see atexit is undefined in libpq, and it\n> is defined in /usr/lib/libc.a. For some reason /lib/libc*.so are stripped,\n> so it is hard to tell, but I assume it must be the same as for\n> /usr/lib/libc.a.\n> \n> Have any of you managed to compile postgresql on an oldstable Debian system?\n\nAdam Buraczewski tells me it's a linux i386 gcc<=2.95.3 problem. 
Upgrading gcc\nto\n\ngcc version 2.95.4 20011002 (Debian prerelease)\n\nyielded a working postgresql!\n\n PostgreSQL 7.4devel on i686-pc-linux-gnu, compiled by GCC 2.95.4\n\n\nCheers,\n\nPatrick\n", "msg_date": "Sun, 17 Nov 2002 16:57:21 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Debian build prob" } ]
[ { "msg_contents": "Hi Guys,\n\nI emailed a few people to try to get some platform reports. What was the\nsolution to this problem? It was AWK or something wasn't it? Will Martin\nhave to try RC1?\n\nChris\n\n> > We are trying to get our supported platforms list together, and\n> so far we\n> > have not had any platform reports for NetBSD, OpenBSD, Solaris\n> x86 or some\n> > of the rarer Linux architectures.\n>\n> I've compiled PostgreSQL 7.3b5 on Solaris X86 and I did a 'make check'\n> but it ran only 13 tests. I vaguely remember there were a lot more\n> last time. Let me know if there is something else I need to run.\n>\n> Platform: Solaris 8/X86\n> Compiler: GCC 2.95.3\n>\n> /bin/sh ./pg_regress --temp-install --top-builddir=../../..\n> --schedule=./parallel_schedule --multibyte=SQL_ASCII\n> ============== creating temporary installation ==============\n> ============== initializing database system ==============\n> ============== starting postmaster ==============\n> running on port 65432 with pid 21190\n> ============== creating database \"regression\" ==============\n> CREATE DATABASE\n> ALTER DATABASE\n> ============== dropping regression test user accounts ==============\n> ============== installing PL/pgSQL ==============\n> ============== running regression test queries ==============\n> parallel group (13 tests): name boolean float4 text char int2\n> int8 varchar int4 oid float8 bit numeric\n> boolean ... ok\n> char ... ok\n> name ... ok\n> varchar ... ok\n> text ... ok\n> int2 ... ok\n> int4 ... ok\n> int8 ... ok\n> oid ... ok\n> float4 ... ok\n> float8 ... ok\n> bit ... ok\n> numeric ... 
ok\n> ============== shutting down postmaster ==============\n>\n> ======================\n> All 13 tests passed.\n> ======================\n>\n> rm regress.o\n> make[2]: Leaving directory\n> `/devel/current/postgresql-7.3b5/src/test/regress'\n> make[1]: Leaving directory `/devel/current/postgresql-7.3b5/src/test'\n> parkcity$\n>\n\n", "msg_date": "Fri, 15 Nov 2002 09:47:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "FW: PostgreSQL 7.3 Platform Testing" }, { "msg_contents": "\nIt was Solaris awk. The fix will be in RC1.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hi Guys,\n> \n> I emailed a few people to try to get some platform reports. What was the\n> solution to this problem? It was AWK or something wasn't it? Will Martin\n> have to try RC1?\n> \n> Chris\n> \n> > > We are trying to get our supported platforms list together, and\n> > so far we\n> > > have not had any platform reports for NetBSD, OpenBSD, Solaris\n> > x86 or some\n> > > of the rarer Linux architectures.\n> >\n> > I've compiled PostgreSQL 7.3b5 on Solaris X86 and I did a 'make check'\n> > but it ran only 13 tests. I vaguely remember there were a lot more\n> > last time. 
Let me know if there is something else I need to run.\n> >\n> > Platform: Solaris 8/X86\n> > Compiler: GCC 2.95.3\n> >\n> > /bin/sh ./pg_regress --temp-install --top-builddir=../../..\n> > --schedule=./parallel_schedule --multibyte=SQL_ASCII\n> > ============== creating temporary installation ==============\n> > ============== initializing database system ==============\n> > ============== starting postmaster ==============\n> > running on port 65432 with pid 21190\n> > ============== creating database \"regression\" ==============\n> > CREATE DATABASE\n> > ALTER DATABASE\n> > ============== dropping regression test user accounts ==============\n> > ============== installing PL/pgSQL ==============\n> > ============== running regression test queries ==============\n> > parallel group (13 tests): name boolean float4 text char int2\n> > int8 varchar int4 oid float8 bit numeric\n> > boolean ... ok\n> > char ... ok\n> > name ... ok\n> > varchar ... ok\n> > text ... ok\n> > int2 ... ok\n> > int4 ... ok\n> > int8 ... ok\n> > oid ... ok\n> > float4 ... ok\n> > float8 ... ok\n> > bit ... ok\n> > numeric ... ok\n> > ============== shutting down postmaster ==============\n> >\n> > ======================\n> > All 13 tests passed.\n> > ======================\n> >\n> > rm regress.o\n> > make[2]: Leaving directory\n> > `/devel/current/postgresql-7.3b5/src/test/regress'\n> > make[1]: Leaving directory `/devel/current/postgresql-7.3b5/src/test'\n> > parkcity$\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 20:52:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: PostgreSQL 7.3 Platform Testing" } ]
[ { "msg_contents": "I have moved more variables into the log_* GUC category in an attempt to\nmake log control more understandable. Changes to postgresql.conf\nattached.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: postgresql.conf.sample\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.53\nretrieving revision 1.58\ndiff -c -c -r1.53 -r1.58\n*** postgresql.conf.sample\t8 Nov 2002 17:37:52 -0000\t1.53\n--- postgresql.conf.sample\t15 Nov 2002 01:57:28 -0000\t1.58\n***************\n*** 34,41 ****\n #superuser_reserved_connections = 2\n \n #port = 5432 \n- #hostname_lookup = false\n- #show_source_port = false\n \n #unix_socket_directory = ''\n #unix_socket_group = ''\n--- 34,39 ----\n***************\n*** 112,118 ****\n #\n #\tMessage display\n #\n! #server_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n \t\t\t\t# panic\n--- 110,116 ----\n #\n #\tMessage display\n #\n! #log_min_messages = notice\t# Values, in order of decreasing detail:\n \t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n \t\t\t\t# info, notice, warning, error, log, fatal,\n \t\t\t\t# panic\n***************\n*** 122,127 ****\n--- 120,127 ----\n #silent_mode = false\n \n #log_connections = false\n+ #log_hostname = false\n+ #log_source_port = false\n #log_pid = false\n #log_statement = false\n #log_duration = false\n***************\n*** 152,164 ****\n #\n #\tStatistics\n #\n! #show_parser_stats = false\n! #show_planner_stats = false\n! #show_executor_stats = false\n! #show_statement_stats = false\n \n # requires BTREE_BUILD_STATS\n! 
#show_btree_build_stats = false\n \n \n #\n--- 152,164 ----\n #\n #\tStatistics\n #\n! #log_parser_stats = false\n! #log_planner_stats = false\n! #log_executor_stats = false\n! #log_statement_stats = false\n \n # requires BTREE_BUILD_STATS\n! #log_btree_build_stats = false\n \n \n #", "msg_date": "Thu, 14 Nov 2002 21:26:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "log_* GUC variables" } ]
[ { "msg_contents": "I'd like to implement FOR EACH STATEMENT triggers. AFAICS it shouldn't\nbe too tricky -- so if there's some show-stopper that prevented it\nfrom being done earlier, let me know now, please :-)\n\nSome random notes on the implementation I'm thinking of:\n\n - in the function called by a per-statement trigger, no\n references to the 'OLD' or 'NEW' rows will be allowed\n\n - should we allow per-statement BEFORE triggers? DB2 doesn't,\n but I'm not sure that's because they just cut corners, or if\n there's some legitimate reason not to allow them. AFAICT SQL\n 200x doesn't specify that they *aren't* allowed, so I'm\n inclined to allow them...\n\n - if the statement effects zero rows, a per-statement trigger\n is still executed\n\n - COPY executes per-statement INSERT triggers, to stay\n consistent with the current behavior WRT per-row INSERT\n triggers\n\n - specifying 'FOR EACH xxx' in CREATE TRIGGER should now be\n optional; if neither is specified, FOR EACH STATEMENT is the\n default. This is per SQL spec (SQL 200x, 11.39, 8)\n\nComments?\n\nCheers,\n\nNeil\n \n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 21:56:17 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "FOR EACH STATEMENT triggers" }, { "msg_contents": "Looks pretty sweet, Neil.\n\nMaybe you could look at column triggers while you're at it, per comment on\nCompiere page ;)\n\nTriggers\n\nCompiere uses triggers to ensure data consistency. It seems that in general,\nOracle triggers are relatively easy to convert. In addition to the Function\nissues, a procedure needs to be crated per trigger. Oracle and PostgreSQL\nhave slightly different notation of the \"new\" and \"old\" references and\nINSERTING, etc. 
PostgreSQL Triggers do not support Column restrictions\n(AFTER UPDATE OF column, column ON table).\n\n> - if the statement effects zero rows, a per-statement trigger\n> is still executed\n\n\"affects\" - sorry couldn't help myself :)\n\nChris\n\n", "msg_date": "Fri, 15 Nov 2002 11:16:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: FOR EACH STATEMENT triggers" } ]
[ { "msg_contents": "Just a quick note to mention that I've resigned from the PostgreSQL \nsteering committee. It has been a lot of fun and very rewarding to \nparticipate in PostgreSQL development over the last six years, but it is \ntime to take a break and to move on to other projects.\n\nThanks to Marc, Bruce, and Vadim for welcoming me many years ago. It has \nbeen great working with the group and I'm looking forward to seeing \nPostgreSQL achieve greater and greater success in the coming years.\n\n - Thomas\n\n", "msg_date": "Fri, 15 Nov 2002 06:38:19 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Time to move on..." }, { "msg_contents": "> Just a quick note to mention that I've resigned from the PostgreSQL\n> steering committee. It has been a lot of fun and very rewarding to\n> participate in PostgreSQL development over the last six years, but it is\n> time to take a break and to move on to other projects.\n\nThanks for all your work, Thomas. I hope to one day achieve similarly -\ntime be willing!\n\nTo the rest of the hackers, is it normal practice to perhaps vote in a new\nmember of the steering committee?\n\nChris\n\n", "msg_date": "Fri, 15 Nov 2002 14:46:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> Just a quick note to mention that I've resigned from the PostgreSQL\n> steering committee.\n\nWow. That was totally unexpected.\n\nA sad day. 
:-/\n\n\n> It has been a lot of fun and very rewarding to\n> participate in PostgreSQL development over the last six years, but it is\n> time to take a break and to move on to other projects.\n\nGood luck Thomas.\n\nTruly hope you're going to have heaps of fun, enjoy yourself, and find\nthe new projects rewarding too.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Thanks to Marc, Bruce, and Vadim for welcoming me many years ago. It has\n> been great working with the group and I'm looking forward to seeing\n> PostgreSQL achieve greater and greater success in the coming years.\n> \n> - Thomas\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 15 Nov 2002 18:48:55 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "Hey Thomas,\n\nAlthough we have never corresponded, I just wanted to say thank you to \nyourself and all the other hackers who have devoted their time voluntarily to \nPostgreSQL. It really is appreciated.\n\nCheers\n\nMark Pritchard\n\nOn Fri, 15 Nov 2002 17:38, Thomas Lockhart wrote:\n> Just a quick note to mention that I've resigned from the PostgreSQL\n> steering committee. It has been a lot of fun and very rewarding to\n> participate in PostgreSQL development over the last six years, but it is\n> time to take a break and to move on to other projects.\n>\n> Thanks to Marc, Bruce, and Vadim for welcoming me many years ago. 
It has\n> been great working with the group and I'm looking forward to seeing\n> PostgreSQL achieve greater and greater success in the coming years.\n>\n> - Thomas\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Fri, 15 Nov 2002 18:53:10 +1100", "msg_from": "Mark Pritchard <mark.pritchard@modus.com.au>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> To the rest of the hackers, is it normal practice to perhaps vote in a new\n> member of the steering committee?\n\nUh ... it's never happened before ... so there is no \"normal practice\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 10:23:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to move on... " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > To the rest of the hackers, is it normal practice to perhaps vote in a new\n> > member of the steering committee?\n> \n> Uh ... it's never happened before ... so there is no \"normal practice\".\n\nThe logic usually has been to add people to core who are so involved in\nthe release process that we couldn't imagine scheduling a release\nwithout them.\n\nI am not saying all current core members are that involved, but at the\ntime they were added to core, they were.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 15 Nov 2002 12:03:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." 
}, { "msg_contents": "On Friday 15 November 2002 01:38, Thomas Lockhart wrote:\n> Just a quick note to mention that I've resigned from the PostgreSQL\n> steering committee. It has been a lot of fun and very rewarding to\n> participate in PostgreSQL development over the last six years, but it is\n> time to take a break and to move on to other projects.\n\nI'll echo the sad day response of earlier. You have done quite a bit for the \nproject, and you will be missed.\n\n> Thanks to Marc, Bruce, and Vadim for welcoming me many years ago. It has\n> been great working with the group and I'm looking forward to seeing\n> PostgreSQL achieve greater and greater success in the coming years.\n\nThomas, good luck.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 15 Nov 2002 12:37:57 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "On Friday 15 November 2002 10:23, Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > To the rest of the hackers, is it normal practice to perhaps vote in a\n> > new member of the steering committee?\n>\n> Uh ... it's never happened before ... so there is no \"normal practice\".\n\nIMHO, replacement of a core member should be treated the same as bringing in a \nnew core member, which, IIRC, is by invitation and vote of the balance of the \ncore members.\n\nIf a replacement is immediately necessary, that is. Having five core versus \nsix core isn't a great handicap, as the potential replacement pool consists \nof people who are already doing development now. Having an odd number of \ncore has its advantages.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 15 Nov 2002 12:40:11 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." 
}, { "msg_contents": "On Fri, 15 Nov 2002, Thomas Lockhart wrote:\n\n> Just a quick note to mention that I've resigned from the PostgreSQL\n> steering committee. It has been a lot of fun and very rewarding to\n> participate in PostgreSQL development over the last six years, but it is\n> time to take a break and to move on to other projects.\n\nI'm really sorry to hear that. Good luck in all of your future endeavors.\n\nVince.\n-- \n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n", "msg_date": "Fri, 15 Nov 2002 15:20:09 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "Lamar Owen wrote:\n> On Friday 15 November 2002 10:23, Tom Lane wrote:\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > To the rest of the hackers, is it normal practice to perhaps vote in a\n> > > new member of the steering committee?\n> >\n> > Uh ... it's never happened before ... so there is no \"normal practice\".\n> \n> IMHO, replacement of a core member should be treated the same as bringing in a \n> new core member, which, IIRC, is by invitation and vote of the balance of the \n> core members.\n> \n> If a replacement is immediately necessary, that is. Having five core versus \n> six core isn't a great handicap, as the potential replacement pool consists \n> of people who are already doing development now. Having an odd number of \n> core has its advantages.\n\nI will reiterate for the new folks that the core group doesn't do much\nmore than decide if the final release will be on a Friday or a Monday,\nand deal with private issues like discipline. I think we deal with such\nissues perhaps 2-4 times a year.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 15 Nov 2002 22:05:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "> I will reiterate for the new folks that the core group doesn't do much\n> more than decide if the final release will be on a Friday or a Monday,\n> and deal with private issues like discipline. I think we deal with such\n> issues perhaps 2-4 times a year.\n\nOK sorry - I was under the impression that core == commit bit...\n\nChris\n\n\n", "msg_date": "Sat, 16 Nov 2002 14:07:11 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "On Saturday 16 November 2002 01:07, Christopher Kings-Lynne wrote:\n> > I will reiterate for the new folks that the core group doesn't do much\n> > more than decide if the final release will be on a Friday or a Monday,\n> > and deal with private issues like discipline. I think we deal with such\n> > issues perhaps 2-4 times a year.\n\n> OK sorry - I was under the impression that core == commit bit...\n\ncommitters != core\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 16 Nov 2002 01:11:09 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Saturday 16 November 2002 01:07, Christopher Kings-Lynne wrote:\n> > OK sorry - I was under the impression that core == commit bit...\n> \n> committers != core\n\nIs there any reason for this distinction?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "16 Nov 2002 01:21:42 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." 
}, { "msg_contents": "Neil Conway wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > On Saturday 16 November 2002 01:07, Christopher Kings-Lynne wrote:\n> > > OK sorry - I was under the impression that core == commit bit...\n> > \n> > committers != core\n> \n> Is there any reason for this distinction?\n\nI assume you are asking why there is core? It is really just for\ngetting a quick vote among folks for final release date, and for\ndiscussing things with individuals and companies who require a private\nconversation among PostgreSQL representatives.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 16 Nov 2002 09:10:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time to move on..." } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Thomas Lockhart [mailto:lockhart@fourpalms.org] \n> Sent: 15 November 2002 06:38\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Time to move on...\n> \n> \n> Just a quick note to mention that I've resigned from the PostgreSQL \n> steering committee. It has been a lot of fun and very rewarding to \n> participate in PostgreSQL development over the last six \n> years, but it is \n> time to take a break and to move on to other projects.\n\nGood luck in whatever you decide to do next Thomas. Your effort and help\nover the last few years has certainly been appreciated by me, and I'm\nsure by many others as well.\n\nRegards, Dave.\n", "msg_date": "Fri, 15 Nov 2002 08:10:22 -0000", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Time to move on..." } ]
[ { "msg_contents": "\n> Problem is when I want change view (or functions) with a lot of dependecies\n> I must drop and recreate all dependent views (or functions) - \n> I want add only one column in view \n> I don't know if solution hard for that.\n\nI do not see how adding a column to a view would invalidate\ndependent objects. (Except an object that uses \"select *\", in which case\nthe writer of the object explicitly states that he can cope with changing \ncolumn count and order). \n\nThus I think \"create or replace\" should work in this case regardless of \nwhat definition for \"create or replace\" finds a consensus, no ?\n\nAndreas\n", "msg_date": "Fri, 15 Nov 2002 09:24:19 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: create or replace view" }, { "msg_contents": "On Fri, 15 Nov 2002, Zeugswetter Andreas SB SD wrote:\n\n>\n> > Problem is when I want change view (or functions) with a lot of dependecies\n> > I must drop and recreate all dependent views (or functions) -\n> > I want add only one column in view\n> > I don't know if solution hard for that.\n>\n> I do not see how adding a column to a view would invalidate\n> dependent objects. (Except an object that uses \"select *\", in which case\n> the writer of the object explicitly states that he can cope with changing\n> column count and order).\n\nI'm not sure, but can all the places that currently save a plan deal with\ngetting a longer rowtype than expected? I'd guess so due to inheritance,\nbut we'd have to be absolutely sure. It'd also change the return type\nfor functions that are defined to return the composite type the view\ndefines.\n\n", "msg_date": "Fri, 15 Nov 2002 07:54:59 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: create or replace view" } ]
[ { "msg_contents": "\n\nHi there!\n\nI want to propose the patch for adding the hierarchical queries possibility.\nIt allows to construct queries a la Oracle for ex:\nSELECT a,b FROM t CONNECT BY a PRIOR b START WITH cond;\n\nI've seen this type of queries often made by adding a new type, which stores\nposition of row in the tree. But sorting such tree are very tricky (i\nthink).\n\nPatch allows result tree to be sorted, i.e. subnodes of each node will be\nsorted by ORDER BY clause.\n\nwith regards, evgen\n\n---\n.evgen\n\n\n", "msg_date": "Fri, 15 Nov 2002 13:37:42 +0400 (SAMT)", "msg_from": "Evgen Potemkin <evgent@ns.terminal.ru>", "msg_from_op": true, "msg_subject": "Proposal of hierarchical queries, a la Oracle" }, { "msg_contents": "Was there supposed to be a patch attached to this email?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Evgen Potemkin\n> Sent: Friday, 15 November 2002 5:38 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Proposal of hierarchical queries, a la Oracle\n> \n> \n> \n> Hi there!\n> \n> I want to propose the patch for adding the hierarchical queries \n> possibility.\n> It allows to construct queries a la Oracle for ex:\n> SELECT a,b FROM t CONNECT BY a PRIOR b START WITH cond;\n> \n> I've seen this type of queries often made by adding a new type, \n> which stores\n> position of row in the tree. But sorting such tree are very tricky (i\n> think).\n> \n> Patch allows result tree to be sorted, i.e. 
subnodes of each node will be\n> sorted by ORDER BY clause.\n> \n> with regards, evgen\n> \n> ---\n> .evgen\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Mon, 18 Nov 2002 15:15:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Proposal of hierarchical queries, a la Oracle" } ]
[ { "msg_contents": "Hello,\n When I call DECLARE CURSOR out of transaction command success,\nbut cursor is not created\n Reference manual say that this get error :\nERROR: DECLARE CURSOR may only be used in begin/end transaction blocks \n I don't find this text in pgsql source code\nWhat is problem ?\nThanks\nHaris Peco\n", "msg_date": "Fri, 15 Nov 2002 17:05:32 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "DECLARE CURSOR" }, { "msg_contents": "On Fri, 15 Nov 2002, snpe wrote:\n\n> Hello,\n> When I call DECLARE CURSOR out of transaction command success,\n> but cursor is not created\n> Reference manual say that this get error :\n> ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks\n> I don't find this text in pgsql source code\n> What is problem ?\n\nI get that error text in 7.3b2. Don't have an earlier version available\nright at the moment to test.\n\nIt may very well be making the cursor, but IIRC the cursor would have gone\naway at the end of the implicit transaction wrapping the statement.\n\n\n", "msg_date": "Sat, 16 Nov 2002 12:14:17 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Fri, 15 Nov 2002, snpe wrote:\n\n> Hello,\n> When I call DECLARE CURSOR out of transaction command success,\n> but cursor is not created\n> Reference manual say that this get error :\n> ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks\n> I don't find this text in pgsql source code\n> What is problem ?\n\nAccording to the documentation for DECLARE CURSOR (v.7.2.x):\n\n\"Cursors are only available in transactions. Use to BEGIN, COMMIT and\n\tROLLBACK to define a transaction block.\"\n\nThis seems consistent with your error message. 
Please try\nwrapping your DECLARE inside a transaction using BEGIN,...\n\nHTH--\n\n\t-frank\n\n", "msg_date": "Sat, 16 Nov 2002 13:29:25 -0800 (PST)", "msg_from": "Frank Miles <fpm@u.washington.edu>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> On Fri, 15 Nov 2002, snpe wrote:\n> > Hello,\n> > When I call DECLARE CURSOR out of transaction command success,\n> > but cursor is not created\n> > Reference manual say that this get error :\n> > ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks\n> > I don't find this text in pgsql source code\n> > What is problem ?\n>\n> According to the documentation for DECLARE CURSOR (v.7.2.x):\n>\n> \"Cursors are only available in transactions. Use to BEGIN, COMMIT and\n> \tROLLBACK to define a transaction block.\"\n>\n> This seems consistent with your error message. Please try\n> wrapping your DECLARE inside a transaction using BEGIN,...\n>\n\nI understand it.\nI don't understand why 'DECLARE CURSOR' success out of a transaction\n- I expect error\n\nregards\nharis peco\n\n", "msg_date": "Sat, 16 Nov 2002 22:32:13 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Sat, 16 Nov 2002, snpe wrote:\n\n> On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> > On Fri, 15 Nov 2002, snpe wrote:\n> > > Hello,\n> > > When I call DECLARE CURSOR out of transaction command success,\n> > > but cursor is not created\n> > > Reference manual say that this get error :\n> > > ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks\n> > > I don't find this text in pgsql source code\n> > > What is problem ?\n> >\n> > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> >\n> > \"Cursors are only available in transactions. 
Use to BEGIN, COMMIT and\n> > \tROLLBACK to define a transaction block.\"\n> >\n> > This seems consistent with your error message. Please try\n> > wrapping your DECLARE inside a transaction using BEGIN,...\n> >\n>\n> I understand it.\n> I don't understand why 'DECLARE CURSOR' success out of a transaction\n> - I expect error\n\nWhat version are you using? At least with 7.2.x, there is an immediate\nerror at the DECLARE statement. Perhaps I am misunderstanding your\nquestion?\n\n\t-frank\n\n", "msg_date": "Sat, 16 Nov 2002 21:46:41 -0800 (PST)", "msg_from": "Frank Miles <fpm@u.washington.edu>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n> On Sat, 16 Nov 2002, snpe wrote:\n> > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> > > On Fri, 15 Nov 2002, snpe wrote:\n> > > > Hello,\n> > > > When I call DECLARE CURSOR out of transaction command success,\n> > > > but cursor is not created\n> > > > Reference manual say that this get error :\n> > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n> > > > blocks I don't find this text in pgsql source code\n> > > > What is problem ?\n> > >\n> > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> > >\n> > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT and\n> > > \tROLLBACK to define a transaction block.\"\n> > >\n> > > This seems consistent with your error message. Please try\n> > > wrapping your DECLARE inside a transaction using BEGIN,...\n> >\n> > I understand it.\n> > I don't understand why 'DECLARE CURSOR' success out of a transaction\n> > - I expect error\n>\n> What version are you using? At least with 7.2.x, there is an immediate\n> error at the DECLARE statement. 
Perhaps I am misunderstanding your\n> question?\n>\n\n7.3b5\nmaybe, it is prepare for cursor out of a transaction (I hope)\n\nregards\nHaris Peco\n", "msg_date": "Sun, 17 Nov 2002 12:08:18 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "snpe <snpe@snpe.co.yu> writes:\n> When I call DECLARE CURSOR out of transaction command success,\n> but cursor is not created\n> Reference manual say that this get error :\n> ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks \n\nOops. I removed that test on 21-Oct as part of this fix:\n\n2002-10-21 18:06 tgl\n\n\t* src/: backend/access/transam/xact.c, backend/catalog/heap.c,\n\tbackend/catalog/index.c, backend/commands/dbcommands.c,\n\tbackend/commands/indexcmds.c, backend/commands/tablecmds.c,\n\tbackend/commands/vacuum.c, backend/parser/analyze.c,\n\tinclude/access/xact.h: Fix places that were using\n\tIsTransactionBlock() as an (inadequate) check that they'd get to\n\tcommit immediately on finishing. There's now a centralized routine\n\tPreventTransactionChain() that implements the necessary tests.\n\nMy reasons for removing it were (a) it was in the wrong place (analyze.c\nis not the right place to test execution-time constraints), and (b) it\nwas the wrong test: the test as written was just IsTransactionBlock(),\nwhich is wrong in the case of autocommit-off, since a DECLARE CURSOR\nwill start a new transaction perfectly well. 
Another objection is that\ninside a function call, it ought to be legal to do DECLARE CURSOR even\nif we're not in a transaction block, since the function might intend to\nuse the cursor itself before returning.\n\nI think I had intended to put together an alternative test that only\ncomplained about interactive DECLARE CURSOR and understood about\nautocommit, but I forgot.\n\nAt this point we can either add the fixed-up error check (meaning RC1\nwon't be the release after all), or change the documentation.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 14:33:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR " }, { "msg_contents": "\nLet's just fix it and roll an RC2 with the fix. If not, we can just fix\nit in 7.3.1 but I see little problem in rolling an RC2.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> snpe <snpe@snpe.co.yu> writes:\n> > When I call DECLARE CURSOR out of transaction command success,\n> > but cursor is not created\n> > Reference manual say that this get error :\n> > ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks \n> \n> Oops. I removed that test on 21-Oct as part of this fix:\n> \n> 2002-10-21 18:06 tgl\n> \n> \t* src/: backend/access/transam/xact.c, backend/catalog/heap.c,\n> \tbackend/catalog/index.c, backend/commands/dbcommands.c,\n> \tbackend/commands/indexcmds.c, backend/commands/tablecmds.c,\n> \tbackend/commands/vacuum.c, backend/parser/analyze.c,\n> \tinclude/access/xact.h: Fix places that were using\n> \tIsTransactionBlock() as an (inadequate) check that they'd get to\n> \tcommit immediately on finishing. 
There's now a centralized routine\n> \tPreventTransactionChain() that implements the necessary tests.\n> \n> My reasons for removing it were (a) it was in the wrong place (analyze.c\n> is not the right place to test execution-time constraints), and (b) it\n> was the wrong test: the test as written was just IsTransactionBlock(),\n> which is wrong in the case of autocommit-off, since a DECLARE CURSOR\n> will start a new transaction perfectly well. Another objection is that\n> inside a function call, it ought to be legal to do DECLARE CURSOR even\n> if we're not in a transaction block, since the function might intend to\n> use the cursor itself before returning.\n> \n> I think I had intended to put together an alternative test that only\n> complained about interactive DECLARE CURSOR and understood about\n> autocommit, but I forgot.\n> \n> At this point we can either add the fixed-up error check (meaning RC1\n> won't be the release after all), or change the documentation.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 18:44:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Let's just fix it and roll an RC2 with the fix.
If not, we can just fix\n> it in 7.3.1 but I see little problem in rolling an RC2.\n\nSince Marc hasn't yet announced RC1, I think we could get away with just\na quick fix and re-roll of RC1 ...\n\n\t\t\tregards, tom lane\n\n> ---------------------------------------------------------------------------\n\n> Tom Lane wrote:\n>> snpe <snpe@snpe.co.yu> writes:\n> When I call DECLARE CURSOR out of transaction command success,\n> but cursor is not created\n> Reference manual say that this get error :\n> ERROR: DECLARE CURSOR may only be used in begin/end transaction blocks \n>> \n>> Oops. I removed that test on 21-Oct as part of this fix:\n>> \n>> 2002-10-21 18:06 tgl\n>> \n>> * src/: backend/access/transam/xact.c, backend/catalog/heap.c,\n>> backend/catalog/index.c, backend/commands/dbcommands.c,\n>> backend/commands/indexcmds.c, backend/commands/tablecmds.c,\n>> backend/commands/vacuum.c, backend/parser/analyze.c,\n>> include/access/xact.h: Fix places that were using\n>> IsTransactionBlock() as an (inadequate) check that they'd get to\n>> commit immediately on finishing. There's now a centralized routine\n>> PreventTransactionChain() that implements the necessary tests.\n>> \n>> My reasons for removing it were (a) it was in the wrong place (analyze.c\n>> is not the right place to test execution-time constraints), and (b) it\n>> was the wrong test: the test as written was just IsTransactionBlock(),\n>> which is wrong in the case of autocommit-off, since a DECLARE CURSOR\n>> will start a new transaction perfectly well.
Another objection is that\n>> inside a function call, it ought to be legal to do DECLARE CURSOR even\n>> if we're not in a transaction block, since the function might intend to\n>> use the cursor itself before returning.\n>> \n>> I think I had intended to put together an alternative test that only\n>> complained about interactive DECLARE CURSOR and understood about\n>> autocommit, but I forgot.\n>> \n>> At this point we can either add the fixed-up error check (meaning RC1\n>> won't be the release after all), or change the documentation.\n>> \n>> Comments?\n>> \n>> regards, tom lane\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>> \n\n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 18:58:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Let's just fix it and roll an RC2 with the fix.
If not, we can just fix\n> it in 7.3.1 but I see little problem in rolling an RC2.\n\nHere is the patch I am testing (in current sources; I don't think it\nneeds any adjustments for REL7_3, but haven't tried to apply it yet).\nBasically it moves the test that was originally done in parse/analyze.c\ninto the execution-time setup of a cursor, and enlarges the test to\nunderstand about autocommit-off and inside-a-function exceptions.\nAnyone see a problem?\n\n\t\t\tregards, tom lane\n\n*** src/backend/access/transam/xact.c.orig\tWed Nov 13 10:51:46 2002\n--- src/backend/access/transam/xact.c\tSun Nov 17 19:10:20 2002\n***************\n*** 1488,1493 ****\n--- 1488,1537 ----\n \t}\n }\n \n+ /* --------------------------------\n+ *\tRequireTransactionChain\n+ *\n+ *\tThis routine is to be called by statements that must run inside\n+ *\ta transaction block, because they have no effects that persist past\n+ *\ttransaction end (and so calling them outside a transaction block\n+ *\tis presumably an error). DECLARE CURSOR is an example.\n+ *\n+ *\tIf we appear to be running inside a user-defined function, we do not\n+ *\tissue an error, since the function could issue more commands that make\n+ *\tuse of the current statement's results. Thus this is an inverse for\n+ *\tPreventTransactionChain.\n+ *\n+ *\tstmtNode: pointer to parameter block for statement; this is used in\n+ *\ta very klugy way to determine whether we are inside a function.\n+ *\tstmtType: statement type name for error messages.\n+ * --------------------------------\n+ */\n+ void\n+ RequireTransactionChain(void *stmtNode, const char *stmtType)\n+ {\n+ \t/*\n+ \t * xact block already started?\n+ \t */\n+ \tif (IsTransactionBlock())\n+ \t\treturn;\n+ \t/*\n+ \t * Are we inside a function call?
If the statement's parameter block\n+ \t * was allocated in QueryContext, assume it is an interactive command.\n+ \t * Otherwise assume it is coming from a function.\n+ \t */\n+ \tif (!MemoryContextContains(QueryContext, stmtNode))\n+ \t\treturn;\n+ \t/*\n+ \t * If we are in autocommit-off mode then it's okay, because this\n+ \t * statement will itself start a transaction block.\n+ \t */\n+ \tif (!autocommit && !suppressChain)\n+ \t\treturn;\n+ \t/* translator: %s represents an SQL statement name */\n+ \telog(ERROR, \"%s may only be used in begin/end transaction blocks\",\n+ \t\t stmtType);\n+ }\n+ \n \n /* ----------------------------------------------------------------\n *\t\t\t\t\t transaction block support\n*** /home/postgres/pgsql/src/backend/tcop/pquery.c.orig\tWed Sep 4 17:30:43 2002\n--- /home/postgres/pgsql/src/backend/tcop/pquery.c\tSun Nov 17 19:10:26 2002\n***************\n*** 161,166 ****\n--- 161,168 ----\n \t\t\t/* If binary portal, switch to alternate output format */\n \t\t\tif (dest == Remote && parsetree->isBinary)\n \t\t\t\tdest = RemoteInternal;\n+ \t\t\t/* Check for invalid context (must be in transaction block) */\n+ \t\t\tRequireTransactionChain((void *) parsetree, \"DECLARE CURSOR\");\n \t\t}\n \t\telse if (parsetree->into != NULL)\n \t\t{\n*** /home/postgres/pgsql/src/include/access/xact.h.orig\tWed Nov 13 10:52:07 2002\n--- /home/postgres/pgsql/src/include/access/xact.h\tSun Nov 17 19:10:13 2002\n***************\n*** 115,120 ****\n--- 115,121 ----\n extern void UserAbortTransactionBlock(void);\n extern void AbortOutOfAnyTransaction(void);\n extern void PreventTransactionChain(void *stmtNode, const char *stmtType);\n+ extern void RequireTransactionChain(void *stmtNode, const char *stmtType);\n \n extern void RecordTransactionCommit(void);\n \n", "msg_date": "Sun, 17 Nov 2002 19:30:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR " }, { "msg_contents": "On Sun, 17 Nov
2002 06:06:05 -0600, snpe wrote:\n\n> On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n>> On Sat, 16 Nov 2002, snpe wrote:\n>> > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n>> > > On Fri, 15 Nov 2002, snpe wrote:\n>> > > > Hello,\n>> > > > When I call DECLARE CURSOR out of transaction command success,\n>> > > > but cursor is not created\n>> > > > Reference manual say that this get error :\n>> > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n>> > > > blocks I don't find this text in pgsql source code What is\n>> > > > problem ?\n>> > >\n>> > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n>> > >\n>> > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT\n>> > > and\n>> > > \tROLLBACK to define a transaction block.\"\n>> > >\n>> > > This seems consistent with your error message. Please try wrapping\n>> > > your DECLARE inside a transaction using BEGIN,...\n>> >\n>> > I understand it.\n>> > I don't understand why 'DECLARE CURSOR' success out of a transaction\n>> > - I expect error\n>>\n>> What version are you using? At least with 7.2.x, there is an immediate\n>> error at the DECLARE statement. Perhaps I am misunderstanding your\n>> question?\n>>\n>>\n> 7.3b5\n> maybe, it is prepare for cursor out of a transaction (I hope)\n> \n> \n\nI'm getting a little confused, here, reading this. I don't have a BEGIN,\nCOMMIT, or ROLLBACK in sight in my ESQL application, but my cursor works\njust fine. Under which circumstances are the BEGIN, COMMIT, and ROLLBACK\nrequired?
Is that something specific to the C interface?\n\n-- \nMatthew Vanecek\nperl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'\n********************************************************************************\nFor 93 million miles, there is nothing between the sun and my shadow except me.\nI'm always getting in the way of something...\n", "msg_date": "Mon, 18 Nov 2002 02:27:41 GMT", "msg_from": "\"Matthew V.\" < <deusmech@yahoo.com>>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let's just fix it and roll an RC2 with the fix. If not, we can just fix\n> > it in 7.3.1 but I see little problem in rolling an RC2.\n> \n> Since Marc hasn't yet announced RC1, I think we could get away with just\n> a quick fix and re-roll of RC1 ...\n\nOnce Marc puts it on FTP:\n\n\t-rw-r--r-- 1 70 70 1073151 Nov 16 20:01 postgresql-test-7.3rc1.tar.gz\n\nI think he likes to create a new release to avoid confusion.\n\nStamping RC2 now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 23:39:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Hello,\n is it planed cursor out of a transaction in 7.4 ?\nThanks\nHaris Peco\nOn Monday 18 November 2002 12:30 am, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let's just fix it and roll an RC2 with the fix.
If not, we can just fix\n> > it in 7.3.1 but I see little problem in rolling an RC2.\n>\n> Here is the patch I am testing (in current sources; I don't think it\n> needs any adjustments for REL7_3, but haven't tried to apply it yet).\n> Basically it moves the test that was originally done in parse/analyze.c\n> into the execution-time setup of a cursor, and enlarges the test to\n> understand about autocommit-off and inside-a-function exceptions.\n> Anyone see a problem?\n>\n> \t\t\tregards, tom lane\n>\n> *** src/backend/access/transam/xact.c.orig\tWed Nov 13 10:51:46 2002\n> --- src/backend/access/transam/xact.c\tSun Nov 17 19:10:20 2002\n> ***************\n> *** 1488,1493 ****\n> --- 1488,1537 ----\n> \t}\n> }\n>\n> + /* --------------------------------\n> + *\tRequireTransactionChain\n> + *\n> + *\tThis routine is to be called by statements that must run inside\n> + *\ta transaction block, because they have no effects that persist past\n> + *\ttransaction end (and so calling them outside a transaction block\n> + *\tis presumably an error). DECLARE CURSOR is an example.\n> + *\n> + *\tIf we appear to be running inside a user-defined function, we do not\n> + *\tissue an error, since the function could issue more commands that make\n> + *\tuse of the current statement's results. Thus this is an inverse for\n> + *\tPreventTransactionChain.\n> + *\n> + *\tstmtNode: pointer to parameter block for statement; this is used in\n> + *\ta very klugy way to determine whether we are inside a function.\n> + *\tstmtType: statement type name for error messages.\n> + * --------------------------------\n> + */\n> + void\n> + RequireTransactionChain(void *stmtNode, const char *stmtType)\n> + {\n> + \t/*\n> + \t * xact block already started?\n> + \t */\n> + \tif (IsTransactionBlock())\n> + \t\treturn;\n> + \t/*\n> + \t * Are we inside a function call?
If the statement's parameter block\n> + \t * was allocated in QueryContext, assume it is an interactive command.\n> + \t * Otherwise assume it is coming from a function.\n> + \t */\n> + \tif (!MemoryContextContains(QueryContext, stmtNode))\n> + \t\treturn;\n> + \t/*\n> + \t * If we are in autocommit-off mode then it's okay, because this\n> + \t * statement will itself start a transaction block.\n> + \t */\n> + \tif (!autocommit && !suppressChain)\n> + \t\treturn;\n> + \t/* translator: %s represents an SQL statement name */\n> + \telog(ERROR, \"%s may only be used in begin/end transaction blocks\",\n> + \t\t stmtType);\n> + }\n> +\n>\n> /* ----------------------------------------------------------------\n> *\t\t\t\t\t transaction block support\n> *** /home/postgres/pgsql/src/backend/tcop/pquery.c.orig\tWed Sep 4 17:30:43\n> 2002 --- /home/postgres/pgsql/src/backend/tcop/pquery.c\tSun Nov 17 19:10:26\n> 2002 ***************\n> *** 161,166 ****\n> --- 161,168 ----\n> \t\t\t/* If binary portal, switch to alternate output format */\n> \t\t\tif (dest == Remote && parsetree->isBinary)\n> \t\t\t\tdest = RemoteInternal;\n> + \t\t\t/* Check for invalid context (must be in transaction block) */\n> + \t\t\tRequireTransactionChain((void *) parsetree, \"DECLARE CURSOR\");\n> \t\t}\n> \t\telse if (parsetree->into != NULL)\n> \t\t{\n> *** /home/postgres/pgsql/src/include/access/xact.h.orig\tWed Nov 13 10:52:07\n> 2002 --- /home/postgres/pgsql/src/include/access/xact.h\tSun Nov 17 19:10:13\n> 2002 ***************\n> *** 115,120 ****\n> --- 115,121 ----\n> extern void UserAbortTransactionBlock(void);\n> extern void AbortOutOfAnyTransaction(void);\n> extern void PreventTransactionChain(void *stmtNode, const char\n> *stmtType); + extern void RequireTransactionChain(void *stmtNode, const\n> char *stmtType);\n>\n> extern void RecordTransactionCommit(void);\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to
majordomo@postgresql.org\n\n", "msg_date": "Mon, 18 Nov 2002 13:08:12 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "On Monday 18 November 2002 02:27 am, \\\"Matthew V.\\ wrote:\n> On Sun, 17 Nov 2002 06:06:05 -0600, snpe wrote:\n> > On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n> >> On Sat, 16 Nov 2002, snpe wrote:\n> >> > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> >> > > On Fri, 15 Nov 2002, snpe wrote:\n> >> > > > Hello,\n> >> > > > When I call DECLARE CURSOR out of transaction command success,\n> >> > > > but cursor is not created\n> >> > > > Reference manual say that this get error :\n> >> > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n> >> > > > blocks I don't find this text in pgsql source code What is\n> >> > > > problem ?\n> >> > >\n> >> > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> >> > >\n> >> > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT\n> >> > > and\n> >> > > \tROLLBACK to define a transaction block.\"\n> >> > >\n> >> > > This seems consistent with your error message. Please try wrapping\n> >> > > your DECLARE inside a transaction using BEGIN,...\n> >> >\n> >> > I understand it.\n> >> > I don't understand why 'DECLARE CURSOR' success out of a transaction\n> >> > - I expect error\n> >>\n> >> What version are you using? At least with 7.2.x, there is an immediate\n> >> error at the DECLARE statement.
Is that something specific to the C interface?\n\nYou don't use cursor, probably.\nFor PostgreSQL cursor is explicit with DECLARE CURSOR in sql command\nIt is like :\nBEGIN;;\n..\nDECLARE c1 CURSOR FOR SELECT ...;\n...\nFETCH 1 FROM c1\n...\nCOMMIT;\n\n\nregards\nHaris Peco\n", "msg_date": "Mon, 18 Nov 2002 14:31:34 +0000", "msg_from": "Haris Peco <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "snpe <snpe@snpe.co.yu> writes:\n> is it planed cursor out of a transaction in 7.4 ?\n\nI do not think we will allow cross-transaction cursors ever. What would\nit mean to have a cross-transaction cursor, anyway? Does it show a\nfrozen snapshot as of the time it was opened? The usefulness of that\nseems awfully low in comparison to the pain of implementing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 09:38:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR " }, { "msg_contents": "On Monday 18 November 2002 02:38 pm, Tom Lane wrote:\n> snpe <snpe@snpe.co.yu> writes:\n> > is it planed cursor out of a transaction in 7.4 ?\n>\n> I do not think we will allow cross-transaction cursors ever. What would\n> it mean to have a cross-transaction cursor, anyway? Does it show a\n> frozen snapshot as of the time it was opened? The usefulness of that\n> seems awfully low in comparison to the pain of implementing it.\n>\n> \t\t\tregards, tom lane\nIt is in TODO list. Can You implement this with savepoint ?\n\nregards\nHaris Peco\n", "msg_date": "Mon, 18 Nov 2002 14:46:29 +0000", "msg_from": "Haris Peco <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Haris Peco wrote:\n> On Monday 18 November 2002 02:38 pm, Tom Lane wrote:\n> > snpe <snpe@snpe.co.yu> writes:\n> > > is it planed cursor out of a transaction in 7.4 ?\n> >\n> > I do not think we will allow cross-transaction cursors ever. 
What would\n> > it mean to have a cross-transaction cursor, anyway? Does it show a\n> > frozen snapshot as of the time it was opened? The usefulness of that\n> > seems awfully low in comparison to the pain of implementing it.\n> >\n> > \t\t\tregards, tom lane\n> It is in TODO list. Can You implement this with savepoint ?\n\nI am planning on doing savepoints for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 10:45:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "On Sun, 17 Nov 2002, snpe wrote:\n\n> On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n> > On Sat, 16 Nov 2002, snpe wrote:\n> > > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> > > > On Fri, 15 Nov 2002, snpe wrote:\n> > > > > Hello,\n> > > > > When I call DECLARE CURSOR out of transaction command success,\n> > > > > but cursor is not created\n> > > > > Reference manual say that this get error :\n> > > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n> > > > > blocks I don't find this text in pgsql source code\n> > > > > What is problem ?\n> > > >\n> > > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> > > >\n> > > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT and\n> > > > \tROLLBACK to define a transaction block.\"\n> > > >\n> > > > This seems consistent with your error message. Please try\n> > > > wrapping your DECLARE inside a transaction using BEGIN,...\n> > >\n> > > I understand it.\n> > > I don't understand why 'DECLARE CURSOR' success out of a transaction\n> > > - I expect error\n> >\n> > What version are you using? At least with 7.2.x, there is an immediate\n> > error at the DECLARE statement. 
Perhaps I am misunderstanding your\n> > question?\n> >\n> \n> 7.3b5\n> maybe, it is prepare for cursor out of a transaction (I hope)\n\nNo, you just have autocommit turned off. Which means that the second you \nconnect and type a command, Postgresql does an invisible begin for you. \nI.e. you're ALWAYS in an uncommitted transaction. Note that you'll have \nto issue a commit to get your changes into the database.\n\n", "msg_date": "Mon, 18 Nov 2002 10:13:55 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Monday 18 November 2002 05:13 pm, scott.marlowe wrote:\n> On Sun, 17 Nov 2002, snpe wrote:\n> > On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n> > > On Sat, 16 Nov 2002, snpe wrote:\n> > > > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> > > > > On Fri, 15 Nov 2002, snpe wrote:\n> > > > > > Hello,\n> > > > > > When I call DECLARE CURSOR out of transaction command success,\n> > > > > > but cursor is not created\n> > > > > > Reference manual say that this get error :\n> > > > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n> > > > > > blocks I don't find this text in pgsql source code\n> > > > > > What is problem ?\n> > > > >\n> > > > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> > > > >\n> > > > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT\n> > > > > and ROLLBACK to define a transaction block.\"\n> > > > >\n> > > > > This seems consistent with your error message. Please try\n> > > > > wrapping your DECLARE inside a transaction using BEGIN,...\n> > > >\n> > > > I understand it.\n> > > > I don't understand why 'DECLARE CURSOR' success out of a transaction\n> > > > - I expect error\n> > >\n> > > What version are you using? At least with 7.2.x, there is an immediate\n> > > error at the DECLARE statement. 
Perhaps I am misunderstanding your\n> > question?\n> >\n> \n> 7.3b5\n> maybe, it is prepare for cursor out of a transaction (I hope)\n\nNo, you just have autocommit turned off. Which means that the second you \nconnect and type a command, Postgresql does an invisible begin for you. \nI.e. you're ALWAYS in an uncommitted transaction. Note that you'll have \nto issue a commit to get your changes into the database.\n\n", "msg_date": "Mon, 18 Nov 2002 10:13:55 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" }, { "msg_contents": "On Monday 18 November 2002 05:13 pm, scott.marlowe wrote:\n> On Sun, 17 Nov 2002, snpe wrote:\n> > On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n> > > On Sat, 16 Nov 2002, snpe wrote:\n> > > > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n> > > > > On Fri, 15 Nov 2002, snpe wrote:\n> > > > > > Hello,\n> > > > > > When I call DECLARE CURSOR out of transaction command success,\n> > > > > > but cursor is not created\n> > > > > > Reference manual say that this get error :\n> > > > > > ERROR: DECLARE CURSOR may only be used in begin/end transaction\n> > > > > > blocks I don't find this text in pgsql source code\n> > > > > > What is problem ?\n> > > > >\n> > > > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n> > > > >\n> > > > > \"Cursors are only available in transactions. Use to BEGIN, COMMIT\n> > > > > and ROLLBACK to define a transaction block.\"\n> > > > >\n> > > > > This seems consistent with your error message. Please try\n> > > > > wrapping your DECLARE inside a transaction using BEGIN,...\n> > > >\n> > > > I understand it.\n> > > > I don't understand why 'DECLARE CURSOR' success out of a transaction\n> > > > - I expect error\n> > >\n> > > What version are you using? At least with 7.2.x, there is an immediate\n> > > error at the DECLARE statement.
Can You implement this with savepoint ?\n>\n> I am planning on doing savepoints for 7.4.\n\ngreat.\nIs it possible with savepoints next :\nwhen am I in transaction and any command is error - only this command\nis lost and I continue normal ?\n\nThanks\nHaris Peco\n\n", "msg_date": "Mon, 18 Nov 2002 17:36:00 +0000", "msg_from": "Haris Peco <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Haris Peco wrote:\n> On Monday 18 November 2002 03:45 pm, Bruce Momjian wrote:\n> > Haris Peco wrote:\n> > > On Monday 18 November 2002 02:38 pm, Tom Lane wrote:\n> > > > snpe <snpe@snpe.co.yu> writes:\n> > > > > is it planed cursor out of a transaction in 7.4 ?\n> > > >\n> > > > I do not think we will allow cross-transaction cursors ever. What\n> > > > would it mean to have a cross-transaction cursor, anyway? Does it show\n> > > > a frozen snapshot as of the time it was opened? The usefulness of that\n> > > > seems awfully low in comparison to the pain of implementing it.\n> > > >\n> > > > \t\t\tregards, tom lane\n> > >\n> > > It is in TODO list. Can You implement this with savepoint ?\n> >\n> > I am planning on doing savepoints for 7.4.\n> \n> great.\n> Is it possible with savepoints next :\n> when am I in transaction and any command is error - only this command\n> is lost and I continue normal ?\n\nYes, that will be part of it. I am working on my proposal today.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 12:38:51 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Haris Peco wrote:\n> > > great.\n> > > Is it possible with savepoints next :\n> > > when am I in transaction and any command is error - only this command\n> > > is lost and I continue normal ?\n> >\n> > Yes, that will be part of it. I am working on my proposal today.\n> Fine.What about cursor out of a transaction ?\n\nThat is not part of my work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 12:46:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "On Monday 18 November 2002 05:38 pm, Bruce Momjian wrote:\n> Haris Peco wrote:\n> > On Monday 18 November 2002 03:45 pm, Bruce Momjian wrote:\n> > > Haris Peco wrote:\n> > > > On Monday 18 November 2002 02:38 pm, Tom Lane wrote:\n> > > > > snpe <snpe@snpe.co.yu> writes:\n> > > > > > is it planed cursor out of a transaction in 7.4 ?\n> > > > >\n> > > > > I do not think we will allow cross-transaction cursors ever. What\n> > > > > would it mean to have a cross-transaction cursor, anyway? Does it\n> > > > > show a frozen snapshot as of the time it was opened? The\n> > > > > usefulness of that seems awfully low in comparison to the pain of\n> > > > > implementing it.\n> > > > >\n> > > > > \t\t\tregards, tom lane\n> > > >\n> > > > It is in TODO list. 
Can You implement this with savepoint ?\n> > >\n> > > I am planning on doing savepoints for 7.4.\n> >\n> > great.\n> > Is it possible with savepoints next :\n> > when am I in transaction and any command is error - only this command\n> > is lost and I continue normal ?\n>\n> Yes, that will be part of it. I am working on my proposal today.\nFine.What about cursor out of a transaction ?\nThanks \nHaris Peco\n\n", "msg_date": "Mon, 18 Nov 2002 17:47:16 +0000", "msg_from": "Haris Peco <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "On Monday 18 November 2002 05:46 pm, Bruce Momjian wrote:\n> Haris Peco wrote:\n> > > > great.\n> > > > Is it possible with savepoints next :\n> > > > when am I in transaction and any command is error - only this command\n> > > > is lost and I continue normal ?\n> > >\n> > > Yes, that will be part of it. I am working on my proposal today.\n> >\n> > Fine.What about cursor out of a transaction ?\n>\n> That is not part of my work.\n\nIs it planned UNDO (WAL) ?\n\nThanks\nHaris Peco\n\n", "msg_date": "Mon, 18 Nov 2002 17:57:46 +0000", "msg_from": "Haris Peco <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "Haris Peco wrote:\n> On Monday 18 November 2002 05:46 pm, Bruce Momjian wrote:\n> > Haris Peco wrote:\n> > > > > great.\n> > > > > Is it possible with savepoints next :\n> > > > > when am I in transaction and any command is error - only this command\n> > > > > is lost and I continue normal ?\n> > > >\n> > > > Yes, that will be part of it. 
I am working on my proposal today.\n> > >\n> > > Fine.What about cursor out of a transaction ?\n> >\n> > That is not part of my work.\n> \n> Is it planned UNDO (WAL) ?\n\nNo, see TODO.detail/transactions for info, or wait for my posting later\ntoday.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 14:23:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DECLARE CURSOR" }, { "msg_contents": "On Mon, 18 Nov 2002 08:28:59 -0600, Haris Peco wrote:\n\n> On Monday 18 November 2002 02:27 am, \\\"Matthew V.\\ wrote:\n>> On Sun, 17 Nov 2002 06:06:05 -0600, snpe wrote:\n>> > On Sunday 17 November 2002 05:46 am, Frank Miles wrote:\n>> >> On Sat, 16 Nov 2002, snpe wrote:\n>> >> > On Saturday 16 November 2002 09:29 pm, Frank Miles wrote:\n>> >> > > On Fri, 15 Nov 2002, snpe wrote:\n>> >> > > > Hello,\n>> >> > > > When I call DECLARE CURSOR out of transaction command\n>> >> > > > success,\n>> >> > > > but cursor is not created\n>> >> > > > Reference manual say that this get error :\n>> >> > > > ERROR: DECLARE CURSOR may only be used in begin/end\n>> >> > > > transaction blocks I don't find this text in pgsql source code\n>> >> > > > What is problem ?\n>> >> > >\n>> >> > > According to the documentation for DECLARE CURSOR (v.7.2.x):\n>> >> > >\n>> >> > > \"Cursors are only available in transactions. Use to BEGIN,\n>> >> > > COMMIT and\n>> >> > > \tROLLBACK to define a transaction block.\"\n>> >> > >\n>> >> > > This seems consistent with your error message. Please try\n>> >> > > wrapping your DECLARE inside a transaction using BEGIN,...\n>> >> >\n>> >> > I understand it.\n>> >> > I don't understand why 'DECLARE CURSOR' success out of a\n>> >> > transaction - I expect error\n>> >>\n>> >> What version are you using? 
At least with 7.2.x, there is an\n>> >> immediate error at the DECLARE statement. Perhaps I am\n>> >> misunderstanding your question?\n>> >\n>> > 7.3b5\n>> > maybe, it is prepare for cursor out of a transaction (I hope)\n>>\n>> I'm getting a little confused, here, reading this. I don't have a\n>> BEGIN, COMMIT, or ROLLBACK in sight in my ESQL application, but my\n>> cursor works just fine. Under which circumstances are the BEGIN,\n>> COMMIT, and ROLLBACK required? Is that something specific to the C\n>> interface?\n> \n> You don't use cursor, probably.\n> For PostgreSQL cursor is explicit with DECLARE CURSOR in sql command It\n> is like :\n> BEGIN;;\n> ..\n> DECLARE c1 CURSOR FOR SELECT ...;\n> ...\n> FETCH 1 FROM c1\n> ...\n> COMMIT;\n> \n> \n> \nYes, I do use a cursor. The ESQL I mentioned means \"Embedded SQL\" (sorry,\nthought everyone knew). Cursors are a very big part of ESQL. But I don't\nhave any BEGINS, or COMMITS (why would I? I do SELECTs, not\nINSERTs/UPDATEs/DELETEs!). The cursor declaration, and the subsequent\nFETCHes, work just fine. That's why I was wondering if I was missing\nsomething. The cursor works perfectly the way I wrote it, yet people in\nthis thread keep talking like cursors are only declareable/useable inside\ntransactions (BEGIN-COMMIT blocks).. I personally don't see why you would\nwant to waste transaction overhead unless you are modifying the data\n(especially since Postgresql doesn't support updateable cursors).\n\nIn any case, whether or not it's the \"correct\" behavior, you don't need to\nspecify a BEGIN/COMMIT block to DECLARE a cursor. The documentation the\noriginal poster quoted appears to be in error, or outdated (I have the\nsame docs, and they don't match with actual behavior). I declare my\ncursor in an include file (so that it's global to the file), open the\ncursor, fetch the cursor until EOF or other error, and process the data. I\ndon't \"EXEC SQL BEGIN;\" or anything anywhere. 
Since I'm doing FETCHes,\nthere's no need for a COMMIT.\n\nI was just wondering what the hullabaloo was all about, because I don't\nget any of the errors described by previous posters, and thought maybe I\naccidentally fixed something, or broke something that was supposed to\nbreak my DECLARE...\n\n\nI tried mucking around with\nautocommit = off/on, but that affects neither the DECLARE nor the FETCH. \nIs there supposed to be a global autocommit setting? I couldn't find one\nin the docs for 7.2.1.\n\n\n-- \nMatthew Vanecek\nperl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'\n********************************************************************************\nFor 93 million miles, there is nothing between the sun and my shadow except me.\nI'm always getting in the way of something...\n", "msg_date": "Tue, 19 Nov 2002 18:20:28 GMT", "msg_from": "\"Matthew V.\" < <deusmech@yahoo.com>>", "msg_from_op": false, "msg_subject": "Re: DECLARE CURSOR" } ]
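The transaction-scoped cursor usage this thread keeps circling around can be written out explicitly. This is a minimal sketch following the 7.2-era documentation quoted above (the `films` table and column are hypothetical):

```sql
BEGIN;
DECLARE c1 CURSOR FOR SELECT title FROM films ORDER BY title;
FETCH FORWARD 5 FROM c1;   -- fetch the first five rows
FETCH 1 FROM c1;           -- then one more
CLOSE c1;
COMMIT;                    -- the cursor cannot outlive the transaction block
```

One possible explanation for the embedded-SQL behavior described above: ecpg historically defaults to autocommit off, so each statement implicitly opens a transaction, which would let a DECLARE succeed without any explicit `EXEC SQL BEGIN` in the application code. That is an inference from the discussion, not something the posters confirmed.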
[ { "msg_contents": "Seems like a result of Alvaro's cluster patch -- looks like the patch\ndidn't update the expected results for the regression tests\nfully. Diffs below.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n*** ./expected/cluster.out\tFri Nov 15 12:35:36 2002\n--- ./results/cluster.out\tFri Nov 15 12:39:33 2002\n***************\n*** 302,307 ****\n--- 302,310 ----\n INSERT INTO clstr_2 VALUES (1);\n INSERT INTO clstr_3 VALUES (2);\n INSERT INTO clstr_3 VALUES (1);\n+ -- \"CLUSTER <tablename>\" on a table that hasn't been clustered\n+ CLUSTER clstr_2;\n+ ERROR: CLUSTER: No previously clustered index found on table clstr_2\n CLUSTER clstr_1_pkey ON clstr_1;\n CLUSTER clstr_2_pkey ON clstr_2;\n SELECT * FROM clstr_1 UNION ALL\n***************\n*** 344,349 ****\n--- 347,364 ----\n 1\n (6 rows)\n \n+ -- cluster a single table using the indisclustered bit previously set\n+ DELETE FROM clstr_1;\n+ INSERT INTO clstr_1 VALUES (2);\n+ INSERT INTO clstr_1 VALUES (1);\n+ CLUSTER clstr_1;\n+ SELECT * FROM clstr_1;\n+ a \n+ ---\n+ 1\n+ 2\n+ (2 rows)\n+ \n -- clean up\n \\c -\n DROP TABLE clstr_1;\n\n", "msg_date": "15 Nov 2002 12:43:43 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "regression test failure (CVS HEAD)" }, { "msg_contents": "\nApplied. Sorry I missed this one. I did a clean compile and initdb for\ntesting, but forgot regression.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> Seems like a result of Alvaro's cluster patch -- looks like the patch\n> didn't update the expected results for the regression tests\n> fully. 
Diffs below.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n> \n> *** ./expected/cluster.out\tFri Nov 15 12:35:36 2002\n> --- ./results/cluster.out\tFri Nov 15 12:39:33 2002\n> ***************\n> *** 302,307 ****\n> --- 302,310 ----\n> INSERT INTO clstr_2 VALUES (1);\n> INSERT INTO clstr_3 VALUES (2);\n> INSERT INTO clstr_3 VALUES (1);\n> + -- \"CLUSTER <tablename>\" on a table that hasn't been clustered\n> + CLUSTER clstr_2;\n> + ERROR: CLUSTER: No previously clustered index found on table clstr_2\n> CLUSTER clstr_1_pkey ON clstr_1;\n> CLUSTER clstr_2_pkey ON clstr_2;\n> SELECT * FROM clstr_1 UNION ALL\n> ***************\n> *** 344,349 ****\n> --- 347,364 ----\n> 1\n> (6 rows)\n> \n> + -- cluster a single table using the indisclustered bit previously set\n> + DELETE FROM clstr_1;\n> + INSERT INTO clstr_1 VALUES (2);\n> + INSERT INTO clstr_1 VALUES (1);\n> + CLUSTER clstr_1;\n> + SELECT * FROM clstr_1;\n> + a \n> + ---\n> + 1\n> + 2\n> + (2 rows)\n> + \n> -- clean up\n> \\c -\n> DROP TABLE clstr_1;\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 15 Nov 2002 22:24:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression test failure (CVS HEAD)" } ]
[ { "msg_contents": "Ok. Transaction safe truncate is simply transaction safe cluster\nwithout the data copy and a slightly different set of permission checks.\n\nI'd like to split cluster_rel() in cluster.c into 2 functions at line\n174. The permission checks, locking, etc will remain in cluster_rel(). \nThe bottom half will be turned into a function called\nrebuild_rel(tableOid Oid, indexOid Oid, dataCopy bool).\n\nIf dataCopy is set to false, then indexOid may be null -- this will\ntruncate the table.\n\nCluster will set dataCopy to true which will maintain current\nexpectations for cluster.\n\nI'll also move TruncateRelation into cluster.c.\n\n \nPreCommit_on_commit_actions() -> ONCOMMIT_DELETE_ROWS is the only \nlocation using heap_truncate(). It may be possible to change this and\nremove heap_truncate() altogether.\n\n-- \nRod Taylor <rbt@rbt.ca>\n", "msg_date": "15 Nov 2002 14:11:15 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": true, "msg_subject": "Transaction safe Truncate" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> I'd like to split cluster_rel() in cluster.c into 2 functions at line\n> 174. The permission checks, locking, etc will remain in cluster_rel(). \n> The bottom half will be turned into a function called\n> rebuild_rel(tableOid Oid, indexOid Oid, dataCopy bool).\n\nI just finished fixing the division of labor between TruncateRelation\nand heap_truncate. Please don't break it merely to avoid rearranging\ncode in cluster.c. Actually, I'd argue that cluster should adopt\ntruncate's code layout, not vice versa.\n\n> I'll also move TruncateRelation into cluster.c.\n\nYou could leave it where it is and just move heap_truncate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 18:29:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Transaction safe Truncate " } ]
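The user-visible goal of the refactoring discussed above — a TRUNCATE that participates in transactions the way CLUSTER's rebuild does — would look roughly like this. A sketch of the intended behavior only, not a test of any particular release; the table name is hypothetical:

```sql
BEGIN;
TRUNCATE TABLE audit_log;          -- rebuilds the relation under a new relfilenode
SELECT count(*) FROM audit_log;    -- 0 within this transaction
ROLLBACK;                          -- the old heap is still intact on disk
SELECT count(*) FROM audit_log;    -- original row count restored
```

The non-transaction-safe heap_truncate, by contrast, destroys the data in place, which is why it could previously only be used where rollback was impossible.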
[ { "msg_contents": "All,\n\nI've just tried to build the Win32 components under Visual Studio's C++\ncompiler from the win32.mak CVS archive at\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot and found that the\nfollowing file was missing;\n\nsrc\\bin\\psql\\sql_help.h\n\nI've copied the file from the the source tree of version 7.2.3 and the\ncompile works with out any problems.\n\nShould the file be in CVS?\n\nAl.\n\n\n", "msg_date": "Fri, 15 Nov 2002 20:48:11 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": true, "msg_subject": "Missing file from CVS?" }, { "msg_contents": "\"Al Sutton\" <al@alsutton.com> writes:\n> src\\bin\\psql\\sql_help.h\n> Should the file be in CVS?\n\nNo. I'd suggest using a snapshot rather than CVS; or perhaps you can\nget Perl working in your environment so you can build the file yourself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Nov 2002 15:13:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing file from CVS? " }, { "msg_contents": "Al Sutton wrote:\n> All,\n> \n> I've just tried to build the Win32 components under Visual Studio's C++\n> compiler from the win32.mak CVS archive at\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot and found that the\n> following file was missing;\n> \n> src\\bin\\psql\\sql_help.h\n> \n> I've copied the file from the the source tree of version 7.2.3 and the\n> compile works with out any problems.\n> \n> Should the file be in CVS?\n> \n\nI'm not seeing a problem here with cvs tip and VS .Net's C++, although I am \nnow getting a few pedantic warnings that I wasn't seeing a few weeks ago.\n\nWhere exactly are you getting an error?\n\nJoe\n\np.s. here's my output:\n\nC:\\Documents and Settings\\jconway\\My Documents\\Visual Studio \nProjects\\pgsql\\src>nmake -f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. 
All rights reserved.\n\n cd include\n if not exist pg_config.h copy pg_config.h.win32 pg_config.h\n 1 file(s) copied.\n cd ..\n cd interfaces\\libpq\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. All rights reserved.\n\nBuilding the Win32 static library...\n\n if not exist \".\\Release/\" mkdir \".\\Release\"\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1A.tmp\ndllist.c\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1B.tmp\nmd5.c\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1C.tmp\nwchar.c\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1D.tmp\nencnames.c\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1E.tmp\nwin32.c\nfe-auth.c\nfe-connect.c\nfe-exec.c\nfe-lobj.c\nfe-misc.c\nfe-print.c\nfe-secure.c\npqexpbuffer.c\n link.exe -lib @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1F.tmp\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm20.tmp\nlibpqdll.c\n rc.exe /l 0x409 /fo\".\\Release\\libpq.res\" libpq.rc\n link.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm21.tmp\n Creating library .\\Release\\libpqdll.lib and object .\\Release\\libpqdll.exp\n cd ..\\..\\bin\\psql\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. 
All rights reserved.\n\n if not exist \".\\Release/\" mkdir \".\\Release\"\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm27.tmp\ngetopt.c\n cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm28.tmp\ncommand.c\ncommand.c(497) : warning C4244: 'function' : conversion from 'unsigned short' \nto 'bool', possible loss of data\ncommon.c\nhelp.c\nhelp.c(166) : warning C4244: 'function' : conversion from 'unsigned short' to \n'bool', possible loss of data\ninput.c\nstringutils.c\nmainloop.c\ncopy.c\nstartup.c\nprompt.c\nsprompt.c\nvariables.c\nlarge_obj.c\nprint.c\nprint.c(1009) : warning C4244: 'function' : conversion from 'const unsigned \nshort' to 'bool', possible loss of data\ndescribe.c\ntab-complete.c\nmbprint.c\n link.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm29.tmp\n cd ..\\..\n echo All Win32 parts have been built!\nAll Win32 parts have been built!\n\n\n", "msg_date": "Sat, 16 Nov 2002 20:17:46 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Missing file from CVS?" }, { "msg_contents": "Joe,\n\nI've been told by Tom Lane that the problem is related to having Perl\nworking, so I'm assuming theres a change that needs to go into the win32\nmakefile that builds this file using perl.\n\nI'm going to have a go at finding the relevant commands and create a patch.\n\nI've also attached the output of a CVS update and a compile so you can see\nwhere my problem comes from. 
The compile error is near the bottom of this\ntext and reads;\n\nhelp.c(31) : fatal error C1083: Cannot open include file: 'sql_help.h': No\nsuch\nfile or directory\n\n\nRegards,\n\nAl.\n\nC:\\Projects\\pgsql\\src>cvs update -dP\ncvs server: Updating .\ncvs server: Updating backend\ncvs server: Updating backend/access\ncvs server: Updating backend/access/common\ncvs server: Updating backend/access/gist\ncvs server: Updating backend/access/hash\ncvs server: Updating backend/access/heap\ncvs server: Updating backend/access/index\ncvs server: Updating backend/access/nbtree\ncvs server: Updating backend/access/rtree\ncvs server: Updating backend/access/transam\ncvs server: Updating backend/bootstrap\ncvs server: Updating backend/catalog\ncvs server: Updating backend/commands\ncvs server: Updating backend/commands/_deadcode\ncvs server: Updating backend/executor\ncvs server: Updating backend/executor/_deadcode\ncvs server: Updating backend/include\ncvs server: Updating backend/lib\ncvs server: Updating backend/libpq\ncvs server: Updating backend/main\ncvs server: Updating backend/nodes\ncvs server: Updating backend/optimizer\ncvs server: Updating backend/optimizer/geqo\ncvs server: Updating backend/optimizer/path\ncvs server: Updating backend/optimizer/path/_deadcode\ncvs server: Updating backend/optimizer/plan\ncvs server: Updating backend/optimizer/prep\ncvs server: Updating backend/optimizer/prep/_deadcode\ncvs server: Updating backend/optimizer/util\ncvs server: Updating backend/parser\ncvs server: Updating backend/po\ncvs server: Updating backend/port\ncvs server: Updating backend/port/BSD44_derived\ncvs server: Updating backend/port/aix\ncvs server: Updating backend/port/alpha\ncvs server: Updating backend/port/beos\ncvs server: Updating backend/port/bsdi\ncvs server: Updating backend/port/bsdi_2_1\ncvs server: Updating backend/port/common\ncvs server: Updating backend/port/darwin\ncvs server: Updating backend/port/dgux\ncvs server: Updating 
backend/port/dynloader\ncvs server: Updating backend/port/hpux\ncvs server: Updating backend/port/i386_solaris\ncvs server: Updating backend/port/irix5\ncvs server: Updating backend/port/linux\ncvs server: Updating backend/port/linux/asm\ncvs server: Updating backend/port/linux_alpha\ncvs server: Updating backend/port/linux_i386\ncvs server: Updating backend/port/linuxalpha\ncvs server: Updating backend/port/next\ncvs server: Updating backend/port/nextstep\ncvs server: Updating backend/port/qnx\ncvs server: Updating backend/port/qnx4\ncvs server: Updating backend/port/sco\ncvs server: Updating backend/port/sparc\ncvs server: Updating backend/port/sparc_solaris\ncvs server: Updating backend/port/sunos4\ncvs server: Updating backend/port/svr4\ncvs server: Updating backend/port/tas\ncvs server: Updating backend/port/ultrix4\ncvs server: Updating backend/port/univel\ncvs server: Updating backend/port/win32\ncvs server: Updating backend/port/win32/regex\ncvs server: Updating backend/port/win32/sys\ncvs server: Updating backend/postmaster\ncvs server: Updating backend/regex\ncvs server: Updating backend/rewrite\ncvs server: Updating backend/storage\ncvs server: Updating backend/storage/buffer\ncvs server: Updating backend/storage/file\ncvs server: Updating backend/storage/freespace\ncvs server: Updating backend/storage/ipc\ncvs server: Updating backend/storage/large_object\ncvs server: Updating backend/storage/lmgr\ncvs server: Updating backend/storage/page\ncvs server: Updating backend/storage/smgr\ncvs server: Updating backend/tcop\ncvs server: Updating backend/tioga\ncvs server: Updating backend/utils\ncvs server: Updating backend/utils/adt\ncvs server: Updating backend/utils/cache\ncvs server: Updating backend/utils/error\ncvs server: Updating backend/utils/fmgr\ncvs server: Updating backend/utils/hash\ncvs server: Updating backend/utils/init\ncvs server: Updating backend/utils/mb\ncvs server: Updating backend/utils/mb/Unicode\ncvs server: Updating 
backend/utils/mb/conversion_procs\ncvs server: Updating backend/utils/mb/conversion_procs/ascii_and_mic\ncvs server: Updating backend/utils/mb/conversion_procs/cyrillic_and_mic\ncvs server: Updating backend/utils/mb/conversion_procs/euc_cn_and_mic\ncvs server: Updating backend/utils/mb/conversion_procs/euc_jp_and_sjis\ncvs server: Updating backend/utils/mb/conversion_procs/euc_kr_and_mic\ncvs server: Updating backend/utils/mb/conversion_procs/euc_tw_and_big5\ncvs server: Updating backend/utils/mb/conversion_procs/latin2_and_win1250\ncvs server: Updating backend/utils/mb/conversion_procs/latin_and_mic\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_ascii\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_big5\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_cyrillic\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_euc_cn\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_euc_jp\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_euc_kr\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_euc_tw\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_gb18030\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_gbk\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_iso8859\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_iso8859_1\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_johab\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_sjis\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_tcvn\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_uhc\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_win1250\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_win1256\ncvs server: Updating backend/utils/mb/conversion_procs/utf8_and_win874\ncvs server: Updating backend/utils/misc\ncvs server: Updating backend/utils/mmgr\ncvs server: 
Updating backend/utils/sort\ncvs server: Updating backend/utils/time\ncvs server: Updating bin\ncvs server: Updating bin/cleardbdir\ncvs server: Updating bin/createdb\ncvs server: Updating bin/createlang\ncvs server: Updating bin/createuser\ncvs server: Updating bin/destroydb\ncvs server: Updating bin/destroylang\ncvs server: Updating bin/destroyuser\ncvs server: Updating bin/initdb\ncvs server: Updating bin/initlocation\ncvs server: Updating bin/ipcclean\ncvs server: Updating bin/monitor\ncvs server: Updating bin/pg-config\ncvs server: Updating bin/pg4_dump\ncvs server: Updating bin/pg_config\ncvs server: Updating bin/pg_controldata\ncvs server: Updating bin/pg_controldata/po\ncvs server: Updating bin/pg_ctl\ncvs server: Updating bin/pg_dump\ncvs server: Updating bin/pg_dump/po\ncvs server: Updating bin/pg_encoding\ncvs server: Updating bin/pg_id\ncvs server: Updating bin/pg_passwd\ncvs server: Updating bin/pg_resetxlog\ncvs server: Updating bin/pg_resetxlog/po\ncvs server: Updating bin/pg_version\ncvs server: Updating bin/pgaccess\ncvs server: Updating bin/pgaccess/demo\ncvs server: Updating bin/pgaccess/doc\ncvs server: Updating bin/pgaccess/doc/html\ncvs server: Updating bin/pgaccess/doc/html/tutorial\ncvs server: Updating bin/pgaccess/images\ncvs server: Updating bin/pgaccess/lib\ncvs server: Updating bin/pgaccess/lib/help\ncvs server: Updating bin/pgaccess/lib/languages\ncvs server: Updating bin/pgaccess/win32\ncvs server: Updating bin/pgaccess/win32/dll\ncvs server: Updating bin/pgtclsh\ncvs server: Updating bin/psql\ncvs server: Updating bin/psql/po\ncvs server: Updating bin/scripts\ncvs server: Updating bin/vacuumdb\ncvs server: Updating corba\ncvs server: Updating data\ncvs server: Updating extend\ncvs server: Updating extend/array\ncvs server: Updating extend/datetime\ncvs server: Updating extend/pginsert\ncvs server: Updating extend/soundex\ncvs server: Updating extend/string\ncvs server: Updating include\ncvs server: Updating include/access\ncvs 
server: Updating include/bootstrap\ncvs server: Updating include/catalog\ncvs server: Updating include/commands\ncvs server: Updating include/executor\ncvs server: Updating include/lib\ncvs server: Updating include/libpq\ncvs server: Updating include/mb\ncvs server: Updating include/nodes\ncvs server: Updating include/optimizer\ncvs server: Updating include/optimizer/_deadcode\ncvs server: Updating include/parser\ncvs server: Updating include/port\ncvs server: Updating include/port/darwin\ncvs server: Updating include/regex\ncvs server: Updating include/rewrite\ncvs server: Updating include/storage\ncvs server: Updating include/tcop\ncvs server: Updating include/utils\ncvs server: Updating interfaces\ncvs server: Updating interfaces/cli\ncvs server: Updating interfaces/ecpg\ncvs server: Updating interfaces/ecpg/doc\ncvs server: Updating interfaces/ecpg/include\ncvs server: Updating interfaces/ecpg/lib\ncvs server: Updating interfaces/ecpg/preproc\ncvs server: Updating interfaces/ecpg/src\ncvs server: Updating interfaces/ecpg/src/include\ncvs server: Updating interfaces/ecpg/src/lib\ncvs server: Updating interfaces/ecpg/src/preproc\ncvs server: Updating interfaces/ecpg/src/test\ncvs server: Updating interfaces/ecpg/test\ncvs server: Updating interfaces/jdbc\ncvs server: Updating interfaces/jdbc/example\ncvs server: Updating interfaces/jdbc/example/corba\ncvs server: Updating interfaces/jdbc/org\ncvs server: Updating interfaces/jdbc/org/postgresql\ncvs server: Updating interfaces/jdbc/org/postgresql/core\ncvs server: Updating interfaces/jdbc/org/postgresql/fastpath\ncvs server: Updating interfaces/jdbc/org/postgresql/geometric\ncvs server: Updating interfaces/jdbc/org/postgresql/jdbc1\ncvs server: Updating interfaces/jdbc/org/postgresql/jdbc2\ncvs server: Updating interfaces/jdbc/org/postgresql/jdbc2/optional\ncvs server: Updating interfaces/jdbc/org/postgresql/jdbc3\ncvs server: Updating interfaces/jdbc/org/postgresql/largeobject\ncvs server: Updating 
interfaces/jdbc/org/postgresql/test\ncvs server: Updating interfaces/jdbc/org/postgresql/test/jdbc2\ncvs server: Updating interfaces/jdbc/org/postgresql/test/jdbc2/optional\ncvs server: Updating interfaces/jdbc/org/postgresql/test/jdbc3\ncvs server: Updating interfaces/jdbc/org/postgresql/test/util\ncvs server: Updating interfaces/jdbc/org/postgresql/util\ncvs server: Updating interfaces/jdbc/org/postgresql/xa\ncvs server: Updating interfaces/jdbc/postgresql\ncvs server: Updating interfaces/jdbc/postgresql/fastpath\ncvs server: Updating interfaces/jdbc/postgresql/geometric\ncvs server: Updating interfaces/jdbc/postgresql/jdbc1\ncvs server: Updating interfaces/jdbc/postgresql/jdbc2\ncvs server: Updating interfaces/jdbc/postgresql/largeobject\ncvs server: Updating interfaces/jdbc/postgresql/util\ncvs server: Updating interfaces/jdbc/utils\ncvs server: Updating interfaces/libpgeasy\ncvs server: Updating interfaces/libpgeasy/examples\ncvs server: Updating interfaces/libpgtcl\ncvs server: Updating interfaces/libpq\ncvs server: Updating interfaces/libpq/po\ncvs server: Updating interfaces/libpq++\ncvs server: Updating interfaces/libpq++/examples\ncvs server: Updating interfaces/libpq++/man\ncvs server: Updating interfaces/odbc\ncvs server: Updating interfaces/odbc/windev\ncvs server: Updating interfaces/perl5\ncvs server: Updating interfaces/perl5/eg\ncvs server: Updating interfaces/perl5/examples\ncvs server: Updating interfaces/pgeasy\ncvs server: Updating interfaces/pgeasy/examples\ncvs server: Updating interfaces/python\ncvs server: Updating interfaces/python/tutorial\ncvs server: Updating interfaces/ssl\ncvs server: Updating lextest\ncvs server: Updating makefiles\ncvs server: Updating man\ncvs server: Updating mk\ncvs server: Updating mk/port\ncvs server: Updating pl\ncvs server: Updating pl/plperl\ncvs server: Updating pl/plpgsql\ncvs server: Updating pl/plpgsql/doc\ncvs server: Updating pl/plpgsql/src\ncvs server: Updating pl/plpgsql/test\ncvs server: Updating 
pl/plpgsql/test/expected\ncvs server: Updating pl/plpython\ncvs server: Updating pl/tcl\ncvs server: Updating pl/tcl/modules\ncvs server: Updating pl/tcl/test\ncvs server: Updating port\ncvs server: Updating scripts\ncvs server: Updating template\ncvs server: Updating test\ncvs server: Updating test/bench\ncvs server: Updating test/examples\ncvs server: Updating test/locale\ncvs server: Updating test/locale/de_DE.ISO8859-1\ncvs server: Updating test/locale/de_DE.ISO8859-1/expected\ncvs server: Updating test/locale/gr_GR.ISO8859-7\ncvs server: Updating test/locale/gr_GR.ISO8859-7/expected\ncvs server: Updating test/locale/koi8-r\ncvs server: Updating test/locale/koi8-r/expected\ncvs server: Updating test/locale/koi8-to-win1251\ncvs server: Updating test/locale/koi8-to-win1251/expected\ncvs server: Updating test/mb\ncvs server: Updating test/mb/expected\ncvs server: Updating test/mb/sql\ncvs server: Updating test/performance\ncvs server: Updating test/performance/results\ncvs server: Updating test/performance/sqls\ncvs server: Updating test/regress\ncvs server: Updating test/regress/data\ncvs server: Updating test/regress/expected\ncvs server: Updating test/regress/input\ncvs server: Updating test/regress/output\ncvs server: Updating test/regress/sql\ncvs server: Updating test/suite\ncvs server: Updating test/suite/results\ncvs server: Updating tools\ncvs server: Updating tools/backend\ncvs server: Updating tools/entab\ncvs server: Updating tools/make_diff\ncvs server: Updating tools/mkldexport\ncvs server: Updating tools/pginclude\ncvs server: Updating tools/pgindent\ncvs server: Updating tools/pgindent.dir\ncvs server: Updating tutorial\ncvs server: Updating tutorial/C-code\ncvs server: Updating utils\ncvs server: Updating win32\n\nC:\\Projects\\pgsql\\src>nmake -f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. 
All rights reserved.\n\n cd include\n if not exist pg_config.h copy pg_config.h.win32 pg_config.h\n 1 file(s) copied.\n cd ..\n cd interfaces\\libpq\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. All rights reserved.\n\nBuilding the Win32 static library...\n\n if not exist \".\\Release/\" mkdir \".\\Release\"\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C0.tmp\ndllist.c\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C1.tmp\nmd5.c\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C2.tmp\nwchar.c\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C3.tmp\nencnames.c\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C4.tmp\nwin32.c\nfe-auth.c\nfe-connect.c\nfe-exec.c\nfe-lobj.c\nfe-misc.c\nfe-print.c\nfe-secure.c\npqexpbuffer.c\n link.exe -lib @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C5.tmp\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C6.tmp\nlibpqdll.c\n rc.exe /l 0x409 /fo\".\\Release\\libpq.res\" libpq.rc\n link.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1C7.tmp\n Creating library .\\Release\\libpqdll.lib and object .\\Release\\libpqdll.exp\n cd ..\\..\\bin\\psql\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 7.00.9466\nCopyright (C) Microsoft Corporation. 
All rights reserved.\n\n if not exist \".\\Release/\" mkdir \".\\Release\"\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1CD.tmp\ngetopt.c\n cl.exe @C:\\DOCUME~1\\Al\\LOCALS~1\\Temp\\nm1CE.tmp\ncommand.c\ncommand.c(497) : warning C4244: 'function' : conversion from 'unsigned\nshort' to\n 'bool', possible loss of data\ncommon.c\nhelp.c\nhelp.c(31) : fatal error C1083: Cannot open include file: 'sql_help.h': No\nsuch\nfile or directory\ninput.c\nstringutils.c\nmainloop.c\ncopy.c\nstartup.c\nprompt.c\nsprompt.c\nvariables.c\nlarge_obj.c\nprint.c\nprint.c(1009) : warning C4244: 'function' : conversion from 'const unsigned\nshor\nt' to 'bool', possible loss of data\ndescribe.c\ntab-complete.c\nmbprint.c\nNMAKE : fatal error U1077: 'cl.exe' : return code '0x2'\nStop.\nNMAKE : fatal error U1077: '\"C:\\Program Files\\Microsoft Visual Studio\n.NET\\VC7\\B\nIN\\nmake.exe\"' : return code '0x2'\nStop.\n\nC:\\Projects\\pgsql\\src>\n\n\n\n\n----- Original Message -----\nFrom: \"Joe Conway\" <mail@joeconway.com>\nTo: \"Al Sutton\" <al@alsutton.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Sunday, November 17, 2002 4:17 AM\nSubject: Re: [HACKERS] Missing file from CVS?\n\n\n> Al Sutton wrote:\n> > All,\n> >\n> > I've just tried to build the Win32 components under Visual Studio's C++\n> > compiler from the win32.mak CVS archive at\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot and found that\nthe\n> > following file was missing;\n> >\n> > src\\bin\\psql\\sql_help.h\n> >\n> > I've copied the file from the the source tree of version 7.2.3 and the\n> > compile works with out any problems.\n> >\n> > Should the file be in CVS?\n> >\n>\n> I'm not seeing a problem here with cvs tip and VS .Net's C++, although I\nam\n> now getting a few pedantic warnings that I wasn't seeing a few weeks ago.\n>\n> Where exactly are you getting an error?\n>\n> Joe\n>\n> p.s. 
here's my output:\n>\n> C:\\Documents and Settings\\jconway\\My Documents\\Visual Studio\n> Projects\\pgsql\\src>nmake -f win32.mak\n>\n> Microsoft (R) Program Maintenance Utility Version 7.00.9466\n> Copyright (C) Microsoft Corporation. All rights reserved.\n>\n> cd include\n> if not exist pg_config.h copy pg_config.h.win32 pg_config.h\n> 1 file(s) copied.\n> cd ..\n> cd interfaces\\libpq\n> nmake /f win32.mak\n>\n> Microsoft (R) Program Maintenance Utility Version 7.00.9466\n> Copyright (C) Microsoft Corporation. All rights reserved.\n>\n> Building the Win32 static library...\n>\n> if not exist \".\\Release/\" mkdir \".\\Release\"\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1A.tmp\n> dllist.c\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1B.tmp\n> md5.c\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1C.tmp\n> wchar.c\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1D.tmp\n> encnames.c\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1E.tmp\n> win32.c\n> fe-auth.c\n> fe-connect.c\n> fe-exec.c\n> fe-lobj.c\n> fe-misc.c\n> fe-print.c\n> fe-secure.c\n> pqexpbuffer.c\n> link.exe -lib @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm1F.tmp\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm20.tmp\n> libpqdll.c\n> rc.exe /l 0x409 /fo\".\\Release\\libpq.res\" libpq.rc\n> link.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm21.tmp\n> Creating library .\\Release\\libpqdll.lib and object\n.\\Release\\libpqdll.exp\n> cd ..\\..\\bin\\psql\n> nmake /f win32.mak\n>\n> Microsoft (R) Program Maintenance Utility Version 7.00.9466\n> Copyright (C) Microsoft Corporation. 
All rights reserved.\n>\n> if not exist \".\\Release/\" mkdir \".\\Release\"\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm27.tmp\n> getopt.c\n> cl.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm28.tmp\n> command.c\n> command.c(497) : warning C4244: 'function' : conversion from 'unsigned\nshort'\n> to 'bool', possible loss of data\n> common.c\n> help.c\n> help.c(166) : warning C4244: 'function' : conversion from 'unsigned short'\nto\n> 'bool', possible loss of data\n> input.c\n> stringutils.c\n> mainloop.c\n> copy.c\n> startup.c\n> prompt.c\n> sprompt.c\n> variables.c\n> large_obj.c\n> print.c\n> print.c(1009) : warning C4244: 'function' : conversion from 'const\nunsigned\n> short' to 'bool', possible loss of data\n> describe.c\n> tab-complete.c\n> mbprint.c\n> link.exe @C:\\DOCUME~1\\jconway\\LOCALS~1\\Temp\\nm29.tmp\n> cd ..\\..\n> echo All Win32 parts have been built!\n> All Win32 parts have been built!\n>\n>\n>\n\n\n", "msg_date": "Sun, 17 Nov 2002 10:13:31 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": true, "msg_subject": "Re: Missing file from CVS?" } ]
[ { "msg_contents": "I've gotten really tired of explaining to newbies why stuff involving\nchar(n) fields doesn't work like they expect. Our current behavior is\nnot valid per SQL92 anyway, I believe.\n\nI think there is a pretty simple solution now that we have pg_cast:\nwe could stop treating char(n) as binary-equivalent to varchar/text,\nand instead define it as requiring a runtime conversion (which would\nbe essentially the rtrim() function). The cast in the other direction\nwould be assignment-only, so that any expression that involves mixed\nchar(n) and varchar/text operations would be evaluated in varchar\nrules after stripping char's insignificant trailing blanks.\n\nIf we did this, then operations like\n\t\tWHERE UPPER(charcolumn) = 'FOO'\nwould work as a newbie expects. I believe that we'd come a lot closer\nto spec compliance on the behavior of char(n), too.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 17:54:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "char(n) to varchar or text conversion should strip trailing spaces" } ]
[ { "msg_contents": "> > > void\n> > > heap_mark4fk_lock_acquire(Relation relation, HeapTuple tuple) {\n\nJust wonder how are you going to implement it - is it by using\nsome kind of \"read-locks\", ie FK transaction \"locks\" PK to prevent\ndelete (this is known as \"pessimistic\" approach)?\nAbout two years ago we discussed with Jan \"optimistic\" approach\nwith using \"dirty reads\", when PK/FK transactions do not check\nexistence of FK/PK untill constraint should be checked (after\nstatement processed for immediate mode, at the commit time/\nset constraint immediate for deferred constraints).\n\nSo, at the check time, FK transaction uses dirty reads to know\nabout existence/\"status\" of PK:\n1. No PK -> abort.\n2. PK (inserted?/)deleted/updated/selected for update by concurrent\ntransaction P -> wait for P commit/abort (just like transactions do\nfor concurrent same-row-update); go to 1.\n3. Else (PK exists and no one changing it right now) -> proceed.\n\nPK transaction does the same:\n1. No FK -> proceed.\n2. 
FK inserted/updated/selected for update by concurrent transaction\nF -> wait for F commit/abort; go to 1.\n\nThis would be more in MVCC style -:)\n\nVadim\n", "msg_date": "Fri, 15 Nov 2002 17:39:11 -0800", "msg_from": "\"Mikheev, Vadim\" <VMIKHEEV@sectordata.com>", "msg_from_op": true, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks parall" }, { "msg_contents": "On Fri, 15 Nov 2002, Mikheev, Vadim wrote:\n\n> Just wondering how you are going to implement it - is it by using\n> some kind of \"read-locks\", ie FK transaction \"locks\" PK to prevent\n> delete (this is known as \"pessimistic\" approach)?\n> About two years ago we discussed with Jan \"optimistic\" approach\n> with using \"dirty reads\", when PK/FK transactions do not check\n> existence of FK/PK until the constraint should be checked (after\n> statement processed for immediate mode, at the commit time/\n> set constraint immediate for deferred constraints).\n>\n> So, at the check time, FK transaction uses dirty reads to know\n> about existence/\"status\" of PK:\n> 1. No PK -> abort.\n> 2. PK (inserted?/)deleted/updated/selected for update by concurrent\n> transaction P -> wait for P commit/abort (just like transactions do\n> for concurrent same-row-update); go to 1.\n> 3. Else (PK exists and no one changing it right now) -> proceed.\n>\n> PK transaction does the same:\n> 1. No FK -> proceed.\n> 2. FK inserted/updated/selected for update by concurrent transaction\n> F -> wait for F commit/abort; go to 1.\n>\n> This would be more in MVCC style -:)\n\nRight now, it's similar to the above, but only one direction is doing\nthe dirty reads. I don't do the dirty reads on the fk\ntransactions yet. 
It'll still see delete/update/selected for\nupdate on a row that would have otherwise existed for the transaction,\nbut not see the new rows (I'd like to switch it to dirty both\ndirections, but I'm having enough trouble with deadlocks as it is).\nOr, at least that's the intention behind the code if not the actual\neffect. It gets rid of the concurrency issues of two fk transactions, but\nit doesn't get rid of deadlock cases.\n\nT1: insert into fk values (1);\nT2: delete from pk;\nT1: insert into fk values (1);\nshouldn't need to deadlock. The \"lock\" stuff is actually more like\nan un-lock to make it not wait on the second T1 statement. It's\nbroken, however, as I just thought of some more things it doesn't handle\ncorrectly. Oh well.\n\n", "msg_date": "Fri, 15 Nov 2002 18:03:13 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: RI_FKey_check: foreign key constraint blocks parall" } ]
[ { "msg_contents": "Where can I find a list of the new features appearing in Postgresql 7.3?\n", "msg_date": "16 Nov 2002 08:47:58 -0800", "msg_from": "mydejamail@yahoo.co.uk (My Deja)", "msg_from_op": true, "msg_subject": "Where can I find a list of the new features appearing in Postgresql\n\t7.3?" }, { "msg_contents": "\nFrom the /HISTORY file in the tarball.\n\n---------------------------------------------------------------------------\n\nMy Deja wrote:\n> Where can I find a list of the new features appearing in Postgresql 7.3?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 16 Nov 2002 15:36:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Where can I find a list of the new features appearing" } ]
[ { "msg_contents": "Hello\nI want to change DatabaseMetaData.getImportedKeys so that it returns the name of the\nconstraint in place of pg_trigger.tgargs (for FK_NAME).\nThe query for getImportedKeys looks like this:\n\nQuery 1:\n\nSELECT DISTINCT n.nspname as pnspname,n2.nspname as fnspname, c.relname as prelname,\n\tc2.relname as frelname, t.tgconstrname, a.attnum as keyseq, ic.relname as fkeyname,\n\tt.tgdeferrable, t.tginitdeferred, t.tgnargs,t.tgargs, p1.proname as updaterule,\n\tp2.proname as deleterule\nFROM pg_catalog.pg_namespace n, pg_catalog.pg_namespace n2,\n\tpg_catalog.pg_trigger t, pg_catalog.pg_trigger t1,\n\tpg_catalog.pg_class c, pg_catalog.pg_class c2,\n\tpg_catalog.pg_class ic, pg_catalog.pg_proc p1,\n\tpg_catalog.pg_proc p2, pg_catalog.pg_index i,\n\tpg_catalog.pg_attribute a\nWHERE (t.tgrelid=c.oid AND t.tgisconstraint\n\tAND t.tgconstrrelid=c2.oid AND t.tgfoid=p1.oid and p1.proname like 'RI\\\\_FKey\\\\_%\\\\_upd')\n\tand (t1.tgrelid=c.oid and t1.tgisconstraint and t1.tgconstrrelid=c2.oid\n\tAND t1.tgfoid=p2.oid and p2.proname like 'RI\\\\_FKey\\\\_%\\\\_del') AND i.indrelid=c.oid\n\tAND i.indexrelid=ic.oid AND ic.oid=a.attrelid AND i.indisprimary AND c.relnamespace = n.oid\n\tAND c2.relnamespace=n2.oid AND c2.relname='fin_nk'\nORDER BY prelname,keyseq\n\nI changed it to this:\n\nQuery 2:\n\nSELECT DISTINCT n.nspname as pnspname,n2.nspname as fnspname, c.relname as prelname,\n\tc2.relname as frelname, t.tgconstrname, a.attnum as keyseq, ic.relname as fkeyname,\n\tt.tgdeferrable, t.tginitdeferred, t.tgnargs,t.tgargs, p1.proname as updaterule,\n\tp2.proname as deleterule,con.conname as conname \nFROM pg_catalog.pg_namespace n, pg_catalog.pg_namespace n2,\n\tpg_catalog.pg_trigger t, pg_catalog.pg_trigger t1,\n\tpg_catalog.pg_class c, pg_catalog.pg_class c2,\n\tpg_catalog.pg_class ic, pg_catalog.pg_proc p1,\n\tpg_catalog.pg_proc p2, pg_catalog.pg_index i,\n\tpg_catalog.pg_attribute a,pg_catalog.pg_constraint con\nWHERE (t.tgrelid=c.oid AND t.tgisconstraint\n\tAND 
t.tgconstrrelid=c2.oid AND t.tgfoid=p1.oid and p1.proname like 'RI\\_FKey\\_%\\_upd')\n\tand (t1.tgrelid=c.oid and t1.tgisconstraint and t1.tgconstrrelid=c2.oid\n\tAND t1.tgfoid=p2.oid and p2.proname like 'RI\\\\_FKey\\\\_%\\\\_del') AND i.indrelid=c.oid\n\tAND i.indexrelid=ic.oid AND ic.oid=a.attrelid AND i.indisprimary AND c.relnamespace = n.oid\n\tAND c2.relnamespace=n2.oid AND c2.relname='fin_nk'\n\tAND (c2.oid =con.conrelid AND n.oid=con.connamespace AND con.contype='f' AND c.oid=con.confrelid)\nORDER BY prelname,keyseq\n\n\nQuery 2 is very slow (sometimes 10-20 minutes).\n\nI have run vacuumdb --all --full --analyze.\n\nI have 282 rows in pg_class, 1900 in pg_attribute, and 141 in both pg_constraint\nand pg_trigger.\nWhat is wrong?\n\nregards\nHaris Peco\n", "msg_date": "Sat, 16 Nov 2002 19:57:29 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": true, "msg_subject": "Query for DatabaseMetaData.getImportedKey" } ]
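One possible explanation for the slowdown — an assumption for illustration, not a diagnosis given in the thread — is that adding pg_constraint to the already twelve-relation comma-separated FROM list gives the planner one more relation to place in the join order, and a poor ordering leaves huge intermediate cross products to filter. Using only the row counts reported above:

```python
# Row counts reported in the message: pg_class, pg_attribute, pg_constraint
n_class, n_attribute, n_constraint = 282, 1900, 141

# If the clauses linking just these three catalogs are applied late in the
# join order, the intermediate result to be filtered is their cross product
# (the real planner usually does far better, but a bad ordering over a
# twelve-table FROM list can still degenerate toward this).
cross_product = n_class * n_attribute * n_constraint
assert cross_product == 75_547_800  # ~75 million candidate row combinations
```

Tens of millions of candidate combinations from three small catalogs alone would be consistent with a query that takes minutes rather than milliseconds.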
[ { "msg_contents": "\nIt looks okay from here ... I'll put out a notice later this evening if\nnobody sees anything wrong with it ...\n\n\n", "msg_date": "Sat, 16 Nov 2002 16:04:10 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "RC1 packaged and available ..." } ]
[ { "msg_contents": "Hello hackers,\n\nIn the pg_stat_activity view, the usesysid is shown as having type Oid.\nHowever pg_shadow says it's an integer. Is there a reason? Looks like\na bug.\n\nThis patch seems to correct this issue, but I don't know if there's\nsomething else involved.\n\nIndex: src/include/catalog/pg_proc.h\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/src/include/catalog/pg_proc.h,v\nretrieving revision 1.276\ndiff -c -r1.276 pg_proc.h\n*** src/include/catalog/pg_proc.h\t2002/11/08 17:27:03\t1.276\n--- src/include/catalog/pg_proc.h\t2002/11/16 23:18:44\n***************\n*** 2738,2744 ****\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n DESCR(\"Statistics: Database ID of backend\");\n! DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n DESCR(\"Statistics: User ID of backend\");\n DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n DESCR(\"Statistics: Current query of backend\");\n--- 2738,2744 ----\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n DESCR(\"Statistics: Database ID of backend\");\n! 
DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 23 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n DESCR(\"Statistics: User ID of backend\");\n DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n DESCR(\"Statistics: Current query of backend\");\nIndex: src/backend/utils/adt/pgstatfuncs.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/pgstatfuncs.c,v\nretrieving revision 1.8\ndiff -c -r1.8 pgstatfuncs.c\n*** src/backend/utils/adt/pgstatfuncs.c\t2002/08/20 04:47:52\t1.8\n--- src/backend/utils/adt/pgstatfuncs.c\t2002/11/16 23:18:44\n***************\n*** 272,278 ****\n \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n \t\tPG_RETURN_NULL();\n \n! \tPG_RETURN_OID(beentry->userid);\n }\n \n \n--- 272,278 ----\n \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n \t\tPG_RETURN_NULL();\n \n! \tPG_RETURN_INT32(beentry->userid);\n }\n \n \n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Early and provident fear is the mother of safety\" (E. Burke)\n", "msg_date": "Sun, 17 Nov 2002 11:07:41 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "pg_stat_database shows userid as OID" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> In the pg_stat_activity view, the usesysid is shown as having type Oid.\n> However pg_shadow says it's an integer. Is there a reason?\n\nThere's been disagreement for a long time over whether userids should be\nOIDs or ints. If you want to introduce consistency then it's going to\ntake a lot more than a one-line patch. 
(First you'll need to convince\nthe partisans involved which answer is the right one.)\n\n> Looks like a bug.\n\nNot as long as OID is 4 bytes.\n\nI'd recommend not making any piecemeal changes, especially not when\nthere's not yet a consensus which way to converge.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 13:16:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_database shows userid as OID " }, { "msg_contents": "On Sun, Nov 17, 2002 at 01:16:29PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> > In the pg_stat_activity view, the usesysid is shown as having type Oid.\n> > However pg_shadow says it's an integer. Is there a reason?\n> \n> There's been disagreement for a long time over whether userids should be\n> OIDs or ints. If you want to introduce consistency then it's going to\n> take a lot more than a one-line patch. (First you'll need to convince\n> the partisans involved which answer is the right one.)\n\nOh, I see. I wasn't aware of this. I don't really know which answer is\n\"the right one\". I don't care a lot about this thing either, but I'll\nkeep it on my list of amusements, and will probably even dig into\nthe archives sometime.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\nCryptography: Powerful algorithmic encoding technique employed in the\ncreation of computer manuals.\n", "msg_date": "Sun, 17 Nov 2002 17:13:52 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "Re: pg_stat_database shows userid as OID" }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> > In the pg_stat_activity view, the usesysid is shown as having type Oid.\n> > However pg_shadow says it's an integer. Is there a reason?\n> \n> There's been disagreement for a long time over whether userids should be\n> OIDs or ints. 
If you want to introduce consistency then it's going to\n> take a lot more than a one-line patch. (First you'll need to convince\n> the partisans involved which answer is the right one.)\n> \n> > Looks like a bug.\n> \n> Not as long as OID is 4 bytes.\n> \n> I'd recommend not making any piecemeal changes, especially not when\n> there's not yet a consensus which way to converge.\n\nWell, seems we should make it consistent at least. Let's decide and\nget it done. I think some wanted it to be an int so they could use the\nsame unix uid for pg_shadow, but I think we aren't using that idea much\nanymore. However, right now, it looks like the super user is '1', and\nother users start numbering from 100. That at least suggests int rather\nthan oid.\n\nI am not particular about what we choose, but I do think there is a good\nargument to make it consistent.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 16:24:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_database shows userid as OID" }, { "msg_contents": "In looking at the CLUSTER ALL patch I have applied, I am now wondering\nwhy the ALL keyword is used. When we do VACUUM, we don't use ALL. \nVACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. \nAnd what about REINDEX? That seems to have a different syntax from the\nother two. Seems there should be some consistency.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 16:33:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "CLUSTER ALL syntax" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I'd recommend not making any piecemeal changes, especially not when\n>> there's not yet a consensus which way to converge.\n\n> Well, seems we should make it consistent at least.\n\nI think the original argument stemmed from the idea that we ought to use\npg_shadow's OID column as the user identifier (eliminating usesysid per\nse). This seems like a good idea at first but I think it has a couple\nof fatal problems:\n * disappearance of pg_shadow.usesysid column will doubtless break some\n applications\n * if we use OID then it's much more difficult to support explicit\n assignment of userid\n\n> I think some wanted it to be an int so they could use the\n> same unix uid for pg_shadow, but I think we aren't using that idea much\n> anymore.\n\nI don't think anyone worries about making usesysid match /etc/passwd\nanymore, but nonetheless CREATE USER WITH SYSID is still an essential\ncapability. What if you drop a user accidentally while he still owns\nobjects? You *must* be able to recreate him with the same sysid as\nbefore. pg_depend cannot save us from this kind of mistake, either,\nsince users span databases.\n\nSo it seems to me that we must keep pg_shadow.usesysid as a separate\ncolumn and not try to make it the OID of pg_shadow.\n\nGiven that decision, the argument for making it be type OID seems very\nweak, so I'd lean to the \"use int4\" camp myself. But I'm not sure\neveryone agrees. 
I think Peter was strongly in favor of OID when he\nwas revising the session-authorization code (that's why it all uses OID\nfor user IDs...)\n\nAs far as the actual C code goes, I'd lean to creating new typedefs\nUserId and GroupId (or some such names) and making all the routine\nand variable declarations use those, and not either OID or int4.\nBut I'm not excited about promoting these typedefs into actual SQL\ntypes, as was done for TransactionId and CommandId; the payback seems\nmuch less than the effort needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 16:39:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_database shows userid as OID " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In looking at the CLUSTER ALL patch I have applied, I am now wondering\n> why the ALL keyword is used. When we do VACUUM, we don't use ALL. \n> VACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. \n\nI agree, lose the ALL.\n\n> And what about REINDEX? That seems to have a different syntax from the\n> other two. Seems there should be some consistency.\n\nWe don't have a REINDEX ALL, and I'm not in a hurry to invent one.\n(Especially, I'd not want to see Alvaro spending time on that instead\nof fixing the underlying btree-compaction problem ;-))\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 16:42:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax " }, { "msg_contents": "On Sun, Nov 17, 2002 at 04:42:01PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In looking at the CLUSTER ALL patch I have applied, I am now wondering\n> > why the ALL keyword is used. When we do VACUUM, we don't use ALL. \n> > VACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. 
\n> \n> I agree, lose the ALL.\n\nWell, in my original patch (the one submitted just when 7.3 was going\ninto beta) there was no ALL. I decided to put it in for subsequent\npatches for no good reason.\n\n> > And what about REINDEX? That seems to have a different syntax from the\n> > other two. Seems there should be some consistency.\n> \n> We don't have a REINDEX ALL, and I'm not in a hurry to invent one.\n> (Especially, I'd not want to see Alvaro spending time on that instead\n> of fixing the underlying btree-compaction problem ;-))\n\nActually, I'm planning to do the freelist thing, then the btree\ncompaction and then replace the current REINDEX code with the compaction\ncode, probably including some means to do REINDEX ALL.\n\nIt makes me really proud to hear such a note of confidence in my work.\nThank you very much.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n", "msg_date": "Sun, 17 Nov 2002 19:49:05 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> Actually, I'm planning to do the freelist thing, then the btree\n> compaction and then replace the current REINDEX code with the compaction\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> code, probably including some means to do REINDEX ALL.\n\nUh ... no. 
The primary purpose of REINDEX is to recover from corrupted\nindexes, so it has to be based on a rebuild strategy not a compaction\nstrategy.\n\nIf you want to add a REINDEX ALL for completeness, go ahead, but I think\nthe need for it will be vanishingly small once vacuum compacts btrees\nproperly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 17:58:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax " }, { "msg_contents": "\nI totally agree with what you have said. Peter, can you clarify your\nreasoning for OID for user/group id?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> I'd recommend not making any piecemeal changes, especially not when\n> >> there's not yet a consensus which way to converge.\n> \n> > Well, seems we should make it consistent at least.\n> \n> I think the original argument stemmed from the idea that we ought to use\n> pg_shadow's OID column as the user identifier (eliminating usesysid per\n> se). This seems like a good idea at first but I think it has a couple\n> of fatal problems:\n> * disappearance of pg_shadow.usesysid column will doubtless break some\n> applications\n> * if we use OID then it's much more difficult to support explicit\n> assignment of userid\n> \n> > I think some wanted it to be an int so they could use the\n> > same unix uid for pg_shadow, but I think we aren't using that idea much\n> > anymore.\n> \n> I don't think anyone worries about making usesysid match /etc/passwd\n> anymore, but nonetheless CREATE USER WITH SYSID is still an essential\n> capability. What if you drop a user accidentally while he still owns\n> objects? You *must* be able to recreate him with the same sysid as\n> before. 
pg_depend cannot save us from this kind of mistake, either,\n> since users span databases.\n> \n> So it seems to me that we must keep pg_shadow.usesysid as a separate\n> column and not try to make it the OID of pg_shadow.\n> \n> Given that decision, the argument for making it be type OID seems very\n> weak, so I'd lean to the \"use int4\" camp myself. But I'm not sure\n> everyone agrees. I think Peter was strongly in favor of OID when he\n> was revising the session-authorization code (that's why it all uses OID\n> for user IDs...)\n> \n> As far as the actual C code goes, I'd lean to creating new typedefs\n> UserId and GroupId (or some such names) and making all the routine\n> and variable declarations use those, and not either OID or int4.\n> But I'm not excited about promoting these typedefs into actual SQL\n> types, as was done for TransactionId and CommandId; the payback seems\n> much less than the effort needed.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 18:37:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_database shows userid as OID" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In looking at the CLUSTER ALL patch I have applied, I am now wondering\n> > why the ALL keyword is used. When we do VACUUM, we don't use ALL. \n> > VACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. \n> \n> I agree, lose the ALL.\n\nGood. I can take care of that or someone can submit a patch.\n\n> > And what about REINDEX? 
That seems to have a different syntax from the\n> > other two. Seems there should be some consistency.\n> \n> We don't have a REINDEX ALL, and I'm not in a hurry to invent one.\n> (Especially, I'd not want to see Alvaro spending time on that instead\n> of fixing the underlying btree-compaction problem ;-))\n\nMy point for REINDEX was a little different. The man page shows:\n\n\tREINDEX { DATABASE | TABLE | INDEX } <replaceable\n\t\tclass=\"PARAMETER\">name</replaceable> [ FORCE ]\n\nwhere we don't have ALL but we do have DATABASE. Do we need that\ntri-valued second field for reindex because you can reindex a table _or_\nan index, and hence DATABASE makes sense? I am just asking.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 17 Nov 2002 18:43:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "On Sun, Nov 17, 2002 at 06:43:38PM -0500, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > > And what about REINDEX? That seems to have a different syntax from the\n> > > other two. \n> > \n> > We don't have a REINDEX ALL, and I'm not in a hurry to invent one.\n> > (Especially, I'd not want to see Alvaro spending time on that instead\n> > of fixing the underlying btree-compaction problem ;-))\n> \n> My point for REINDEX was a little different. The man page shows:\n> \n> \tREINDEX { DATABASE | TABLE | INDEX } <replaceable\n> \t\tclass=\"PARAMETER\">name</replaceable> [ FORCE ]\n> \n> where we don't have ALL but we do have DATABASE. Do we need that\n> tri-valued second field for reindex because you can reindex a table _or_\n> an index, and hence DATABASE makes sense? 
I am just asking.\n\nREINDEX DATABASE is for system indexes only; it's not the same as what one\nwould think of for REINDEX alone (which is all indexes on all tables, isn't\nit?).\n\nWhat I don't understand is what are the parameters in the\nReindexDatabase function for. For example, the boolean all is always\nfalse in tcop/utility.c (and there are no other places that the function\nis called). Also, the database name is checked to be equal to a\n\"constant\" value, the database name that the standalone backend is\nconnected to. Why are those useful?\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Renounce nothing. Cling to nothing\"\n", "msg_date": "Sun, 17 Nov 2002 22:31:40 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> What I don't understand is what are the parameters in the\n> ReindexDatabase function for. For example, the boolean all is always\n> false in tcop/utility.c (and there are no other places that the function\n> is called). Also, the database name is checked to be equal to a\n> \"constant\" value, the database name that the standalone backend is\n> connected to. Why are those useful?\n\nWell, passing all=true would implement REINDEX ALL ...\n\nAs for the database name, we could perhaps change the syntax to just\nREINDEX DATABASE; not sure if it's worth the trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 20:36:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax " }, { "msg_contents": "> In looking at the CLUSTER ALL patch I have applied, I am now wondering\n> why the ALL keyword is used. When we do VACUUM, we don't use ALL. \n> VACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. \n> And what about REINDEX? That seems to have a different syntax from the\n> other two. 
Seems there should be some consistency.\n\nYeah - I agree!\n\nChris\n\n", "msg_date": "Mon, 18 Nov 2002 09:53:07 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "Alvaro Herrera wrote:\n> \n> On Sun, Nov 17, 2002 at 06:43:38PM -0500, Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > > > And what about REINDEX? That seems to have a different\n> > > > syntax from the other two. Seems there should be some\n> > > > consistency.\n> > >\n> > > We don't have a REINDEX ALL, and I'm not in a hurry to invent one.\n> > > (Especially, I'd not want to see Alvaro spending time on that\n> > > instead of fixing the underlying btree-compaction problem ;-))\n> >\n> > My point for REINDEX was a little different. The man page shows:\n> >\n> > REINDEX { DATABASE | TABLE | INDEX } <replaceable\n> > class=\"PARAMETER\">name</replaceable> [ FORCE ]\n> >\n> > where we don't have ALL but we do have DATABASE. Do we need that\n> > tri-valued second field for reindex because you can reindex a\n> > table _or_ an index, and hence DATABASE makes sense? I am just\n> > asking.\n> \n> REINDEX DATABASE is for system indexes only, it's not the same that one\n> would think of REINDEX alone (which is all indexes on all tables, isn't\n> it?).\n\nProbably you don't understand the initial purpose of REINDEX.\nIt isn't an SQL standard at all and was intended to recover\ncorrupted system indexes. It's essentially an unsafe operation\nand so the operation was inhibited other than under standalone\npostgres. I also made the command a little hard to use to avoid\nunexpected invocations e.g. REINDEX DATABASE requires an unnecessary\ndatabase name parameter or FORCE is still needed though it's a\nrequisite parameter now.\n\nREINDEX is also used to compact indexes now. 
It's good but\nthe purpose is different from the initial one and we would\nhave to reorganize the functionalities, e.g. the table data\nisn't needed to compact the indexes, etc. \n \n> What I don't understand is what are the parameters in the\n> ReindexDatabase function for. For example, the boolean all\n> is always false in tcop/utility.c (and there are no other\n> places that the function is called). \n\nI intended to implement the *true* case also then\nbut haven't done it yet, sorry.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n", "msg_date": "Mon, 18 Nov 2002 11:31:42 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "Alvaro Herrera wrote:\n> On Sun, Nov 17, 2002 at 04:42:01PM -0500, Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > In looking at the CLUSTER ALL patch I have applied, I am now wondering\n> > > why the ALL keyword is used. When we do VACUUM, we don't use ALL. \n> > > VACUUM vacuums all tables. Shouldn't CLUSTER alone do the same thing. \n> > \n> > I agree, lose the ALL.\n> \n> Well, in my original patch (the one submitted just when 7.3 was going\n> into beta) there was no ALL. I decided to put it in for subsequent\n> patches for no good reason.\n\nI have updated CVS to make the syntax CLUSTER rather than CLUSTER ALL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 12:17:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER ALL syntax" }, { "msg_contents": "\nDoes anyone want userid to be an OID? Peter? 
Anyone?\n\nIf not, I will add it to the TODO list or work on the patch myself.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> I'd recommend not making any piecemeal changes, especially not when\n> >> there's not yet a consensus which way to converge.\n> \n> > Well, seems we should make it consistent at least.\n> \n> I think the original argument stemmed from the idea that we ought to use\n> pg_shadow's OID column as the user identifier (eliminating usesysid per\n> se). This seems like a good idea at first but I think it has a couple\n> of fatal problems:\n> * disappearance of pg_shadow.usesysid column will doubtless break some\n> applications\n> * if we use OID then it's much more difficult to support explicit\n> assignment of userid\n> \n> > I think some wanted it to be an int so they could use the\n> > same unix uid for pg_shadow, but I think we aren't using that idea much\n> > anymore.\n> \n> I don't think anyone worries about making usesysid match /etc/passwd\n> anymore, but nonetheless CREATE USER WITH SYSID is still an essential\n> capability. What if you drop a user accidentally while he still owns\n> objects? You *must* be able to recreate him with the same sysid as\n> before. pg_depend cannot save us from this kind of mistake, either,\n> since users span databases.\n> \n> So it seems to me that we must keep pg_shadow.usesysid as a separate\n> column and not try to make it the OID of pg_shadow.\n> \n> Given that decision, the argument for making it be type OID seems very\n> weak, so I'd lean to the \"use int4\" camp myself. But I'm not sure\n> everyone agrees. 
I think Peter was strongly in favor of OID when he\n> was revising the session-authorization code (that's why it all uses OID\n> for user IDs...)\n> \n> As far as the actual C code goes, I'd lean to creating new typedefs\n> UserId and GroupId (or some such names) and making all the routine\n> and variable declarations use those, and not either OID or int4.\n> But I'm not excited about promoting these typedefs into actual SQL\n> types, as was done for TransactionId and CommandId; the payback seems\n> much less than the effort needed.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 22 Nov 2002 00:40:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_database shows userid as OID" }, { "msg_contents": "OK, with three people thinking we need to make usesysid more consistently\nint4, and no one objecting, I have applied this patch, and that attached\npatch which makes usesysid consistently int4, and not oid.\n\nCatalog version updated. initdb required.\n\nIf there are more places that think usesysid is oid, please let me know.\nAclId already existed for this purpose, so I used that rather than int32\ndirectly.\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> Hello hackers,\n> \n> In the pg_stat_activity view, the usesysid is shown as having type Oid.\n> However pg_shadow says it's an integer. Is there a reason? 
Looks like\n> a bug.\n> \n> This patch seems to corrects this issue, but I don't know if there's\n> something else involved.\n> \n> Index: src/include/catalog/pg_proc.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/include/catalog/pg_proc.h,v\n> retrieving revision 1.276\n> diff -c -r1.276 pg_proc.h\n> *** src/include/catalog/pg_proc.h\t2002/11/08 17:27:03\t1.276\n> --- src/include/catalog/pg_proc.h\t2002/11/16 23:18:44\n> ***************\n> *** 2738,2744 ****\n> DESCR(\"Statistics: PID of backend\");\n> DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n> DESCR(\"Statistics: Database ID of backend\");\n> ! DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n> DESCR(\"Statistics: User ID of backend\");\n> DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n> DESCR(\"Statistics: Current query of backend\");\n> --- 2738,2744 ----\n> DESCR(\"Statistics: PID of backend\");\n> DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n> DESCR(\"Statistics: Database ID of backend\");\n> ! 
DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 23 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n> DESCR(\"Statistics: User ID of backend\");\n> DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n> DESCR(\"Statistics: Current query of backend\");\n> Index: src/backend/utils/adt/pgstatfuncs.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/adt/pgstatfuncs.c,v\n> retrieving revision 1.8\n> diff -c -r1.8 pgstatfuncs.c\n> *** src/backend/utils/adt/pgstatfuncs.c\t2002/08/20 04:47:52\t1.8\n> --- src/backend/utils/adt/pgstatfuncs.c\t2002/11/16 23:18:44\n> ***************\n> *** 272,278 ****\n> \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n> \t\tPG_RETURN_NULL();\n> \n> ! \tPG_RETURN_OID(beentry->userid);\n> }\n> \n> \n> --- 272,278 ----\n> \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n> \t\tPG_RETURN_NULL();\n> \n> ! \tPG_RETURN_INT32(beentry->userid);\n> }\n> \n> \n> -- \n> Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n> \"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/catalog/aclchk.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/catalog/aclchk.c,v\nretrieving revision 1.78\ndiff -c -c -r1.78 aclchk.c\n*** src/backend/catalog/aclchk.c\t24 Sep 2002 23:14:25 -0000\t1.78\n--- src/backend/catalog/aclchk.c\t4 Dec 2002 05:16:29 -0000\n***************\n*** 893,899 ****\n * Exported routine for checking a user's access privileges to a table\n */\n AclResult\n! pg_class_aclcheck(Oid table_oid, Oid userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tbool\t\tusesuper,\n--- 893,899 ----\n * Exported routine for checking a user's access privileges to a table\n */\n AclResult\n! pg_class_aclcheck(Oid table_oid, AclId userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tbool\t\tusesuper,\n***************\n*** 991,997 ****\n * Exported routine for checking a user's access privileges to a database\n */\n AclResult\n! pg_database_aclcheck(Oid db_oid, Oid userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tRelation\tpg_database;\n--- 991,997 ----\n * Exported routine for checking a user's access privileges to a database\n */\n AclResult\n! pg_database_aclcheck(Oid db_oid, AclId userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tRelation\tpg_database;\n***************\n*** 1054,1060 ****\n * Exported routine for checking a user's access privileges to a function\n */\n AclResult\n! pg_proc_aclcheck(Oid proc_oid, Oid userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n--- 1054,1060 ----\n * Exported routine for checking a user's access privileges to a function\n */\n AclResult\n! pg_proc_aclcheck(Oid proc_oid, AclId userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n***************\n*** 1107,1113 ****\n * Exported routine for checking a user's access privileges to a language\n */\n AclResult\n! 
pg_language_aclcheck(Oid lang_oid, Oid userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n--- 1107,1113 ----\n * Exported routine for checking a user's access privileges to a language\n */\n AclResult\n! pg_language_aclcheck(Oid lang_oid, AclId userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n***************\n*** 1157,1163 ****\n * Exported routine for checking a user's access privileges to a namespace\n */\n AclResult\n! pg_namespace_aclcheck(Oid nsp_oid, Oid userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n--- 1157,1163 ----\n * Exported routine for checking a user's access privileges to a namespace\n */\n AclResult\n! pg_namespace_aclcheck(Oid nsp_oid, AclId userid, AclMode mode)\n {\n \tAclResult\tresult;\n \tHeapTuple\ttuple;\n***************\n*** 1218,1224 ****\n * Ownership check for a relation (specified by OID).\n */\n bool\n! pg_class_ownercheck(Oid class_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1218,1224 ----\n * Ownership check for a relation (specified by OID).\n */\n bool\n! pg_class_ownercheck(Oid class_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n***************\n*** 1244,1250 ****\n * Ownership check for a type (specified by OID).\n */\n bool\n! pg_type_ownercheck(Oid type_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1244,1250 ----\n * Ownership check for a type (specified by OID).\n */\n bool\n! pg_type_ownercheck(Oid type_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n***************\n*** 1270,1276 ****\n * Ownership check for an operator (specified by OID).\n */\n bool\n! pg_oper_ownercheck(Oid oper_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1270,1276 ----\n * Ownership check for an operator (specified by OID).\n */\n bool\n! 
pg_oper_ownercheck(Oid oper_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n***************\n*** 1296,1302 ****\n * Ownership check for a function (specified by OID).\n */\n bool\n! pg_proc_ownercheck(Oid proc_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1296,1302 ----\n * Ownership check for a function (specified by OID).\n */\n bool\n! pg_proc_ownercheck(Oid proc_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n***************\n*** 1322,1328 ****\n * Ownership check for a namespace (specified by OID).\n */\n bool\n! pg_namespace_ownercheck(Oid nsp_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1322,1328 ----\n * Ownership check for a namespace (specified by OID).\n */\n bool\n! pg_namespace_ownercheck(Oid nsp_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n***************\n*** 1349,1355 ****\n * Ownership check for an operator class (specified by OID).\n */\n bool\n! pg_opclass_ownercheck(Oid opc_oid, Oid userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\n--- 1349,1355 ----\n * Ownership check for an operator class (specified by OID).\n */\n bool\n! pg_opclass_ownercheck(Oid opc_oid, AclId userid)\n {\n \tHeapTuple\ttuple;\n \tAclId\t\towner_id;\nIndex: src/backend/catalog/namespace.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/catalog/namespace.c,v\nretrieving revision 1.40\ndiff -c -c -r1.40 namespace.c\n*** src/backend/catalog/namespace.c\t11 Nov 2002 22:19:21 -0000\t1.40\n--- src/backend/catalog/namespace.c\t4 Dec 2002 05:16:36 -0000\n***************\n*** 1365,1371 ****\n static void\n recomputeNamespacePath(void)\n {\n! \tOid\t\t\tuserId = GetUserId();\n \tchar\t *rawname;\n \tList\t *namelist;\n \tList\t *oidlist;\n--- 1365,1371 ----\n static void\n recomputeNamespacePath(void)\n {\n! 
\tAclId\t\tuserId = GetUserId();\n \tchar\t *rawname;\n \tList\t *namelist;\n \tList\t *oidlist;\nIndex: src/backend/catalog/pg_conversion.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/catalog/pg_conversion.c,v\nretrieving revision 1.8\ndiff -c -c -r1.8 pg_conversion.c\n*** src/backend/catalog/pg_conversion.c\t2 Nov 2002 18:41:21 -0000\t1.8\n--- src/backend/catalog/pg_conversion.c\t4 Dec 2002 05:16:37 -0000\n***************\n*** 37,43 ****\n */\n Oid\n ConversionCreate(const char *conname, Oid connamespace,\n! \t\t\t\t int32 conowner,\n \t\t\t\t int32 conforencoding, int32 contoencoding,\n \t\t\t\t Oid conproc, bool def)\n {\n--- 37,43 ----\n */\n Oid\n ConversionCreate(const char *conname, Oid connamespace,\n! \t\t\t\t AclId conowner,\n \t\t\t\t int32 conforencoding, int32 contoencoding,\n \t\t\t\t Oid conproc, bool def)\n {\nIndex: src/backend/commands/cluster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/commands/cluster.c,v\nretrieving revision 1.97\ndiff -c -c -r1.97 cluster.c\n*** src/backend/commands/cluster.c\t23 Nov 2002 18:26:45 -0000\t1.97\n--- src/backend/commands/cluster.c\t4 Dec 2002 05:16:37 -0000\n***************\n*** 804,814 ****\n \n /* Get a list of tables that the current user owns and\n * have indisclustered set. Return the list in a List * of rvsToCluster\n! * with the tableOid and the indexOid on which the table is already \n * clustered.\n */\n List *\n! get_tables_to_cluster(Oid owner)\n {\n \tRelation\t\tindRelation;\n \tHeapScanDesc\tscan;\n--- 804,814 ----\n \n /* Get a list of tables that the current user owns and\n * have indisclustered set. Return the list in a List * of rvsToCluster\n! * with the tableOid and the indexOid on which the table is already\n * clustered.\n */\n List *\n! 
get_tables_to_cluster(AclId owner)\n {\n \tRelation\t\tindRelation;\n \tHeapScanDesc\tscan;\nIndex: src/backend/utils/adt/pgstatfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/adt/pgstatfuncs.c,v\nretrieving revision 1.8\ndiff -c -c -r1.8 pgstatfuncs.c\n*** src/backend/utils/adt/pgstatfuncs.c\t20 Aug 2002 04:47:52 -0000\t1.8\n--- src/backend/utils/adt/pgstatfuncs.c\t4 Dec 2002 05:16:38 -0000\n***************\n*** 272,278 ****\n \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n \t\tPG_RETURN_NULL();\n \n! \tPG_RETURN_OID(beentry->userid);\n }\n \n \n--- 272,278 ----\n \tif ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL)\n \t\tPG_RETURN_NULL();\n \n! \tPG_RETURN_INT32(beentry->userid);\n }\n \n \nIndex: src/include/miscadmin.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/miscadmin.h,v\nretrieving revision 1.111\ndiff -c -c -r1.111 miscadmin.h\n*** src/include/miscadmin.h\t3 Oct 2002 17:07:53 -0000\t1.111\n--- src/include/miscadmin.h\t4 Dec 2002 05:16:39 -0000\n***************\n*** 202,208 ****\n \n extern char *GetUserNameFromId(Oid userid);\n \n! extern Oid\tGetUserId(void);\n extern void SetUserId(Oid userid);\n extern Oid\tGetSessionUserId(void);\n extern void SetSessionUserId(Oid userid);\n--- 202,214 ----\n \n extern char *GetUserNameFromId(Oid userid);\n \n! /*\n! * AclId\t\tsystem identifier for the user, group, etc.\n! *\t\t\t\tXXX Perhaps replace this type by OID?\n! */\n! typedef uint32 AclId;\n! \n! 
extern AclId GetUserId(void);\n extern void SetUserId(Oid userid);\n extern Oid\tGetSessionUserId(void);\n extern void SetSessionUserId(Oid userid);\nIndex: src/include/catalog/catversion.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/catalog/catversion.h,v\nretrieving revision 1.166\ndiff -c -c -r1.166 catversion.h\n*** src/include/catalog/catversion.h\t25 Nov 2002 18:12:11 -0000\t1.166\n--- src/include/catalog/catversion.h\t4 Dec 2002 05:16:39 -0000\n***************\n*** 53,58 ****\n */\n \n /*\t\t\t\t\t\t\tyyyymmddN */\n! #define CATALOG_VERSION_NO\t200211251\n \n #endif\n--- 53,58 ----\n */\n \n /*\t\t\t\t\t\t\tyyyymmddN */\n! #define CATALOG_VERSION_NO\t200212031\n \n #endif\nIndex: src/include/catalog/pg_conversion.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/catalog/pg_conversion.h,v\nretrieving revision 1.7\ndiff -c -c -r1.7 pg_conversion.h\n*** src/include/catalog/pg_conversion.h\t2 Nov 2002 02:33:03 -0000\t1.7\n--- src/include/catalog/pg_conversion.h\t4 Dec 2002 05:16:40 -0000\n***************\n*** 19,24 ****\n--- 19,26 ----\n #ifndef PG_CONVERSION_H\n #define PG_CONVERSION_H\n \n+ #include \"miscadmin.h\"\n+ \n /* ----------------\n *\t\tpostgres.h contains the system type definitions and the\n *\t\tCATALOG(), BOOTSTRAP and DATA() sugar words so this file\n***************\n*** 84,90 ****\n #include \"nodes/parsenodes.h\"\n \n extern Oid ConversionCreate(const char *conname, Oid connamespace,\n! \t\t\t\t int32 conowner,\n \t\t\t\t int32 conforencoding, int32 contoencoding,\n \t\t\t\t Oid conproc, bool def);\n extern void ConversionDrop(Oid conversionOid, DropBehavior behavior);\n--- 86,92 ----\n #include \"nodes/parsenodes.h\"\n \n extern Oid ConversionCreate(const char *conname, Oid connamespace,\n! 
\t\t\t\t AclId conowner,\n \t\t\t\t int32 conforencoding, int32 contoencoding,\n \t\t\t\t Oid conproc, bool def);\n extern void ConversionDrop(Oid conversionOid, DropBehavior behavior);\nIndex: src/include/catalog/pg_proc.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/catalog/pg_proc.h,v\nretrieving revision 1.276\ndiff -c -c -r1.276 pg_proc.h\n*** src/include/catalog/pg_proc.h\t8 Nov 2002 17:27:03 -0000\t1.276\n--- src/include/catalog/pg_proc.h\t4 Dec 2002 05:16:48 -0000\n***************\n*** 2738,2744 ****\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n DESCR(\"Statistics: Database ID of backend\");\n! DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n DESCR(\"Statistics: User ID of backend\");\n DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n DESCR(\"Statistics: Current query of backend\");\n--- 2738,2744 ----\n DESCR(\"Statistics: PID of backend\");\n DATA(insert OID = 1938 ( pg_stat_get_backend_dbid\t\tPGNSP PGUID 12 f f t f s 1 26 \"23\"\tpg_stat_get_backend_dbid - _null_ ));\n DESCR(\"Statistics: Database ID of backend\");\n! 
DATA(insert OID = 1939 ( pg_stat_get_backend_userid\tPGNSP PGUID 12 f f t f s 1 23 \"23\"\tpg_stat_get_backend_userid - _null_ ));\n DESCR(\"Statistics: User ID of backend\");\n DATA(insert OID = 1940 ( pg_stat_get_backend_activity\tPGNSP PGUID 12 f f t f s 1 25 \"23\"\tpg_stat_get_backend_activity - _null_ ));\n DESCR(\"Statistics: Current query of backend\");\nIndex: src/include/utils/acl.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/utils/acl.h,v\nretrieving revision 1.47\ndiff -c -c -r1.47 acl.h\n*** src/include/utils/acl.h\t4 Sep 2002 20:31:45 -0000\t1.47\n--- src/include/utils/acl.h\t4 Dec 2002 05:16:48 -0000\n***************\n*** 22,37 ****\n #ifndef ACL_H\n #define ACL_H\n \n #include \"nodes/parsenodes.h\"\n #include \"utils/array.h\"\n \n \n- /*\n- * AclId\t\tsystem identifier for the user, group, etc.\n- *\t\t\t\tXXX Perhaps replace this type by OID?\n- */\n- typedef uint32 AclId;\n- \n #define ACL_ID_WORLD\t0\t\t/* placeholder for id in a WORLD acl item */\n \n /*\n--- 22,32 ----\n #ifndef ACL_H\n #define ACL_H\n \n+ #include \"miscadmin.h\"\n #include \"nodes/parsenodes.h\"\n #include \"utils/array.h\"\n \n \n #define ACL_ID_WORLD\t0\t\t/* placeholder for id in a WORLD acl item */\n \n /*\n***************\n*** 204,214 ****\n extern void aclcheck_error(AclResult errcode, const char *objectname);\n \n /* ownercheck routines just return true (owner) or false (not) */\n! extern bool pg_class_ownercheck(Oid class_oid, Oid userid);\n! extern bool pg_type_ownercheck(Oid type_oid, Oid userid);\n! extern bool pg_oper_ownercheck(Oid oper_oid, Oid userid);\n! extern bool pg_proc_ownercheck(Oid proc_oid, Oid userid);\n! extern bool pg_namespace_ownercheck(Oid nsp_oid, Oid userid);\n! 
extern bool pg_opclass_ownercheck(Oid opc_oid, Oid userid);\n \n #endif /* ACL_H */\n--- 199,209 ----\n extern void aclcheck_error(AclResult errcode, const char *objectname);\n \n /* ownercheck routines just return true (owner) or false (not) */\n! extern bool pg_class_ownercheck(Oid class_oid, AclId userid);\n! extern bool pg_type_ownercheck(Oid type_oid, AclId userid);\n! extern bool pg_oper_ownercheck(Oid oper_oid, AclId userid);\n! extern bool pg_proc_ownercheck(Oid proc_oid, AclId userid);\n! extern bool pg_namespace_ownercheck(Oid nsp_oid, AclId userid);\n! extern bool pg_opclass_ownercheck(Oid opc_oid, AclId userid);\n \n #endif /* ACL_H */\nIndex: src/test/regress/expected/rules.out\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/expected/rules.out,v\nretrieving revision 1.68\ndiff -c -c -r1.68 rules.out\n*** src/test/regress/expected/rules.out\t21 Nov 2002 22:26:01 -0000\t1.68\n--- src/test/regress/expected/rules.out\t4 Dec 2002 05:16:50 -0000\n***************\n*** 1274,1280 ****\n pg_locks | SELECT l.relation, l.\"database\", l.\"transaction\", l.pid, l.\"mode\", l.granted FROM pg_lock_status() l(relation oid, \"database\" oid, \"transaction\" xid, pid integer, \"mode\" text, granted boolean);\n pg_rules | SELECT n.nspname AS schemaname, c.relname AS tablename, r.rulename, pg_get_ruledef(r.oid) AS definition FROM ((pg_rewrite r JOIN pg_class c ON ((c.oid = r.ev_class))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (r.rulename <> '_RETURN'::name);\n pg_settings | SELECT a.name, a.setting FROM pg_show_all_settings() a(name text, setting text);\n! 
pg_stat_activity | SELECT d.oid AS datid, d.datname, pg_stat_get_backend_pid(s.backendid) AS procpid, pg_stat_get_backend_userid(s.backendid) AS usesysid, u.usename, pg_stat_get_backend_activity(s.backendid) AS current_query FROM pg_database d, (SELECT pg_stat_get_backend_idset() AS backendid) s, pg_shadow u WHERE ((pg_stat_get_backend_dbid(s.backendid) = d.oid) AND (pg_stat_get_backend_userid(s.backendid) = (u.usesysid)::oid));\n pg_stat_all_indexes | SELECT c.oid AS relid, i.oid AS indexrelid, n.nspname AS schemaname, c.relname, i.relname AS indexrelname, pg_stat_get_numscans(i.oid) AS idx_scan, pg_stat_get_tuples_returned(i.oid) AS idx_tup_read, pg_stat_get_tuples_fetched(i.oid) AS idx_tup_fetch FROM (((pg_class c JOIN pg_index x ON ((c.oid = x.indrelid))) JOIN pg_class i ON ((i.oid = x.indexrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = 'r'::\"char\");\n pg_stat_all_tables | SELECT c.oid AS relid, n.nspname AS schemaname, c.relname, pg_stat_get_numscans(c.oid) AS seq_scan, pg_stat_get_tuples_returned(c.oid) AS seq_tup_read, sum(pg_stat_get_numscans(i.indexrelid)) AS idx_scan, sum(pg_stat_get_tuples_fetched(i.indexrelid)) AS idx_tup_fetch, pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins, pg_stat_get_tuples_updated(c.oid) AS n_tup_upd, pg_stat_get_tuples_deleted(c.oid) AS n_tup_del FROM ((pg_class c LEFT JOIN pg_index i ON ((c.oid = i.indrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = 'r'::\"char\") GROUP BY c.oid, n.nspname, c.relname;\n pg_stat_database | SELECT d.oid AS datid, d.datname, pg_stat_get_db_numbackends(d.oid) AS numbackends, pg_stat_get_db_xact_commit(d.oid) AS xact_commit, pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback, (pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid)) AS blks_read, pg_stat_get_db_blocks_hit(d.oid) AS blks_hit FROM pg_database d;\n--- 1274,1280 ----\n pg_locks | SELECT l.relation, l.\"database\", l.\"transaction\", l.pid, 
l.\"mode\", l.granted FROM pg_lock_status() l(relation oid, \"database\" oid, \"transaction\" xid, pid integer, \"mode\" text, granted boolean);\n pg_rules | SELECT n.nspname AS schemaname, c.relname AS tablename, r.rulename, pg_get_ruledef(r.oid) AS definition FROM ((pg_rewrite r JOIN pg_class c ON ((c.oid = r.ev_class))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (r.rulename <> '_RETURN'::name);\n pg_settings | SELECT a.name, a.setting FROM pg_show_all_settings() a(name text, setting text);\n! pg_stat_activity | SELECT d.oid AS datid, d.datname, pg_stat_get_backend_pid(s.backendid) AS procpid, pg_stat_get_backend_userid(s.backendid) AS usesysid, u.usename, pg_stat_get_backend_activity(s.backendid) AS current_query FROM pg_database d, (SELECT pg_stat_get_backend_idset() AS backendid) s, pg_shadow u WHERE ((pg_stat_get_backend_dbid(s.backendid) = d.oid) AND (pg_stat_get_backend_userid(s.backendid) = u.usesysid));\n pg_stat_all_indexes | SELECT c.oid AS relid, i.oid AS indexrelid, n.nspname AS schemaname, c.relname, i.relname AS indexrelname, pg_stat_get_numscans(i.oid) AS idx_scan, pg_stat_get_tuples_returned(i.oid) AS idx_tup_read, pg_stat_get_tuples_fetched(i.oid) AS idx_tup_fetch FROM (((pg_class c JOIN pg_index x ON ((c.oid = x.indrelid))) JOIN pg_class i ON ((i.oid = x.indexrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = 'r'::\"char\");\n pg_stat_all_tables | SELECT c.oid AS relid, n.nspname AS schemaname, c.relname, pg_stat_get_numscans(c.oid) AS seq_scan, pg_stat_get_tuples_returned(c.oid) AS seq_tup_read, sum(pg_stat_get_numscans(i.indexrelid)) AS idx_scan, sum(pg_stat_get_tuples_fetched(i.indexrelid)) AS idx_tup_fetch, pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins, pg_stat_get_tuples_updated(c.oid) AS n_tup_upd, pg_stat_get_tuples_deleted(c.oid) AS n_tup_del FROM ((pg_class c LEFT JOIN pg_index i ON ((c.oid = i.indrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.relkind = 
'r'::\"char\") GROUP BY c.oid, n.nspname, c.relname;\n pg_stat_database | SELECT d.oid AS datid, d.datname, pg_stat_get_db_numbackends(d.oid) AS numbackends, pg_stat_get_db_xact_commit(d.oid) AS xact_commit, pg_stat_get_db_xact_rollback(d.oid) AS xact_rollback, (pg_stat_get_db_blocks_fetched(d.oid) - pg_stat_get_db_blocks_hit(d.oid)) AS blks_read, pg_stat_get_db_blocks_hit(d.oid) AS blks_hit FROM pg_database d;", "msg_date": "Wed, 4 Dec 2002 00:20:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_stat_database shows userid as OID" } ]
[ { "msg_contents": "Hello,\n\nI'm just taking the btree shrinking problem, and saw in the README:\n\n+ Deletions are handled by getting a super-exclusive lock on the target\n page, so that no other backend has a pin on the page when the deletion\n starts. This means no scan is pointing at the page. This is OK for\n deleting leaf items, probably not OK for deleting internal nodes;\n will need to think harder when it's time to support index compaction.\n\n\nIn what cases is it not OK to delete an item from an internal node, holding\na super-exclusive lock? (I assume this means LockBufferForCleanup()).\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"The gods do not protect fools. Fools are protected by\nother, more capable fools\" (Luis Wu, Ringworld)\n", "msg_date": "Sun, 17 Nov 2002 13:31:31 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "btree shrinking again" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> + Deletions are handled by getting a super-exclusive lock on the target\n> page, so that no other backend has a pin on the page when the deletion\n> starts. This means no scan is pointing at the page. This is OK for\n> deleting leaf items, probably not OK for deleting internal nodes;\n> will need to think harder when it's time to support index compaction.\n\n> In what cases is it not OK to delete an item from an internal node, holding\n> a super-exclusive lock?\n\nI believe the thing I was worried about when I wrote that note was the\nstack of ancestor pointers maintained by an insert operation: the insert\nwill not have pins on those pages, but might try to return to them\nlater (to service a page split).\n\nA simple-minded solution might be to keep the pins until the insert is\ndone, but you'd have to think about possible deadlock conditions as well\nas loss of concurrency. 
I'd prefer to find a solution\nthat didn't require that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 13:25:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: btree shrinking again " }, { "msg_contents": "> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> > + Deletions are handled by getting a super-exclusive lock on the target\n> > page, so that no other backend has a pin on the page when the deletion\n> > starts. This means no scan is pointing at the page. This is OK for\n> > deleting leaf items, probably not OK for deleting internal nodes;\n> > will need to think harder when it's time to support index compaction.\n>\n> > In what cases is it not OK to delete an item from an internal node, holding\n> > a super-exclusive lock?\n>\n> I believe the thing I was worried about when I wrote that note was the\n> stack of ancestor pointers maintained by an insert operation: the insert\n> will not have pins on those pages, but might try to return to them\n> later (to service a page split).\n\ntom lane wrote:\n> A simple-minded solution might be to keep the pins until the insert is\n> done, but you'd have to think about possible deadlock conditions as well\n> as loss of concurrency. I'd prefer to find a solution that didn't\n> require that.\n\nThere is an algorithm that works pretty well for this and that doesn't require\nholding pins up the tree in most cases. The locking for deletes mirrors that\nfor inserts. In each case, either a delete or an insert could require a\nchange to the parent page recursively up to and including the root page.\n\nTraversals pin read-only starting at the root; after each page pin is\nacquired, pins on previous, more interior pages are released. Leaf pages are\npinned read/write for insert or delete cases. 
The strict ordering of lock\nacquisition, pinning from the root toward the more exterior/leaf nodes and\nnever from the leaf up, will prevent deadlocks.\n\nEach page keeps a version number that is incremented after each change (you\ndon't really have to worry about rollover on this since checks against the\nversion number are for equality and inequality only).\n\nIf an insert will cause a split, and hence an insert into the interior page,\nthen the parent interior page of the page that has been split is pinned for\nwrite without waiting (waiting could cause deadlocks since it would violate\nthe root-to-leaf acquisition ordering mentioned above). If the pin fails\nbecause another lock is held on that page, or the page is not there, or the\nversion number after pinning is different from the version number found when\nthe page was originally pinned on the way down to the leaf, the insert is\naborted.\n\nIn case of abort, the btree is searched again starting at the root, going down\nto the page into which the insert is needed, write-pinning each of the pages\non the way down.\n\nAnother variation simply retries the insert again in the case of aborts but\nacquires a write lock on the page one level up from the lowest level of the\nlast try. In the case of the first abort for a four-level tree this would acquire\na read lock on the root node (level 0) and level 1, and write locks on levels\n2 and 3, since level 3 failed the previous time.\n\nThis method provides better concurrency at the expense of slightly more work\nfor inserts or deletes that require major restructuring of the btrees.\n\nLeaf pages can be pinned across the bottom of the tree for index scans but\nnever up the tree. This means that leaf nodes require sibling pointers in\norder to support index scans. 
I don't know if PostgreSQL's btree has these\nalready.\n\nI'm sure I have missed some detail but this is the general idea.\n\n- Curtis\n\n", "msg_date": "Mon, 18 Nov 2002 12:16:56 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: btree shrinking again " } ]
[ { "msg_contents": "For 7.4 I would like to add a function for importing float8 values\ninto cube. But because the cube data type is variable length I am\nnot sure what a good approach would be.\n\nCurrently this can be poorly done using text as an intermediate type.\nAs far as I can tell functions can't take sets as arguments. Arrays\nseem to suffer from similar problems to the cube type, so that preloading\nan array with the output of a few calculations isn't particularly easy\nunless you use text as an intermediate type.\n\nOne possibility would be to have a function that adds one dimension\nonto an existing cube. This could be used recursively to build up a\ncube with the desired number of dimensions. It may not gain much in speed, but\nwould be more accurate without having to adjust extra_float_digits.\n\nIs there some better approach that I have overlooked?\n", "msg_date": "Sun, 17 Nov 2002 13:59:38 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Getting float8 data into cube?" }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> For 7.4 I would like to add a function for importing float8 values\n> into cube. But because the cube data type is variable length I am\n> not sure what a good approach would be.\n\nI'm not clear on what you want to accomplish. 
How are you expecting\n> the source data to be structured?\n> \n> \t\t\tregards, tom lane\n\nI would like to get the results of floating point calculations into a cube\nwithout losing precision because float output is truncated at 15 digits.\nIn 7.4 I will have the option to change extra_float_digits while loading\ndata into cubes, but that seems like a hack.\n\nMy particular case will be converting latitude and longitude to 3D\ncartesian coordinates. With the calculations truncated at 15 digits,\nI can sometimes see round off error after going back to latitude\nand longitude. The accuracy itself isn't a big deal, it is more seeing\nthe .0000001 or .999999 at the end of the numbers that is annoying. This\ncould also be handled by further rounding.\n\nIt also seems like there should be a way to get floating point data into\ncube without losing precision.\n", "msg_date": "Mon, 18 Nov 2002 12:08:49 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: Getting float8 data into cube?" 
}, { "msg_contents": "I have a specific proposal for building cube values from\nfloat8 values without building strings (which will typically lose information).\nI want to add the following 4 overloaded functions:\n\ncube(float8)\n cube(1) returns '(1),(1)'::cube\n\ncube(float8,float8)\n cube(1,2) returns '(1),(2)'::cube\n\ncube(cube,float8)\n cube('(1),(2)',3) returns '(1,3),(2,3)'::cube\n\ncube(cube,float8,float8)\n cube('(1),(2)',3,4) returns '(1,3),(2,4)'::cube\n\nThis is useful when the input needs to be transformed before being stored.\nFor example when the input is in polar coordinates.\n\nFor polar input of R and O, you then could do something like:\ncube(cube(R*sin(O)),R*cos(O))\ninstead of\ncube('('||R*sin(O)||','||R*cos(O)||')')\nThe latter will normally be less accurate than the former.\n\nIf the above is acceptable, I will come up with a patch versus the 7.4\nversion of the cube contrib package.\n", "msg_date": "Mon, 20 Jan 2003 00:09:09 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: Getting float8 data into cube?" } ]
[ { "msg_contents": "This is a successful report for OpenBSD 3.2 on sparc and i386\n\n> -----Original Message-----\n> From: bpalmer [mailto:bpalmer@crimelabs.net]\n> Sent: Monday, 18 November 2002 2:14 AM\n> To: Christopher Kings-Lynne\n> Subject: Re: PostgreSQL 7.3 Platform Testing\n>\n>\n> Sorry for taking so long to get back to you, getting everything working on\n> obsd took a while. My sun is a 60mhz deal and it's a bit slow.\n>\n> Anywho:\n>\n> Kernel tweaks are needed:\n>\n> edited:\n> /etc/login.conf\n>\n> Changed:\n>\n> default:\\\n> :maxproc-max=128:\\\n> :maxproc-cur=64:\\\n> :openfiles-cur=64:\\\n>\n> to:\n>\n> default:\\\n> :maxproc-max=256:\\\n> :maxproc-cur=256:\\\n> :openfiles-cur=256:\\\n>\n>\n> Kernel settings needed:\n>\n> option SEMMNI=256\n> option SEMMNS=2048\n>\n> option SEMMAXPGS=4096\n>\n>\n> Once that was done, however (and it's always been needed afaik)\n>\n> $ uname -an\n> OpenBSD incelous.crimelabs.net 3.2 incelous#0 i386\n>\n> ======================\n> All 89 tests passed.\n> ======================\n>\n> 356.52s real 18.22s user 15.92s system\n>\n>\n> $ uname -an\n> OpenBSD blackwidow.crimelabs.net 3.2 blackwidow#0 sparc\n>\n> ======================\n> All 89 tests passed.\n> ======================\n>\n> 1311.48s real 134.86s user 127.44s system\n>\n>\n>\n>\n> - Brandon\n>\n>\n> ------------------------------------------------------------------\n> ----------\n> c: 917-697-8665 h:\n> 201-798-4983\n> b. 
palmer, bpalmer@crimelabs.net\n> pgp:crimelabs.net/bpalmer.pgp5\n>\n>\n>\n\n", "msg_date": "Mon, 18 Nov 2002 09:50:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "FW: PostgreSQL 7.3 Platform Testing" }, { "msg_contents": "\nPorts list updated:\n\n http://candle.pha.pa.us/main/writings/pgsql/sgml/supported-platforms.html\n\n---------------------------------------------------------------------------\nChristopher Kings-Lynne wrote:\n> This is a successful report for OpenBSD 3.2 on sparc and i386\n> \n> > -----Original Message-----\n> > From: bpalmer [mailto:bpalmer@crimelabs.net]\n> > Sent: Monday, 18 November 2002 2:14 AM\n> > To: Christopher Kings-Lynne\n> > Subject: Re: PostgreSQL 7.3 Platform Testing\n> >\n> >\n> > Sorry for taking so long to get back to you, getting everything working on\n> > obsd took a while. My sun is a 60mhz deal and it's a bit slow.\n> >\n> > Anywho:\n> >\n> > Kernel tweaks are needed:\n> >\n> > edited:\n> > /etc/login.conf\n> >\n> > Changed:\n> >\n> > default:\\\n> > :maxproc-max=128:\\\n> > :maxproc-cur=64:\\\n> > :openfiles-cur=64:\\\n> >\n> > to:\n> >\n> > default:\\\n> > :maxproc-max=256:\\\n> > :maxproc-cur=256:\\\n> > :openfiles-cur=256:\\\n> >\n> >\n> > Kernel settings needed:\n> >\n> > option SEMMNI=256\n> > option SEMMNS=2048\n> >\n> > option SEMMAXPGS=4096\n> >\n> >\n> > Once that was done, however (and it's always been needed afaik)\n> >\n> > $ uname -an\n> > OpenBSD incelous.crimelabs.net 3.2 incelous#0 i386\n> >\n> > ======================\n> > All 89 tests passed.\n> > ======================\n> >\n> > 356.52s real 18.22s user 15.92s system\n> >\n> >\n> > $ uname -an\n> > OpenBSD blackwidow.crimelabs.net 3.2 blackwidow#0 sparc\n> >\n> > ======================\n> > All 89 tests passed.\n> > ======================\n> >\n> > 1311.48s real 134.86s user 127.44s system\n> >\n> >\n> >\n> >\n> > - Brandon\n> >\n> >\n> > 
------------------------------------------------------------------\n> > ----------\n> > c: 917-697-8665 h:\n> > 201-798-4983\n> > b. palmer, bpalmer@crimelabs.net\n> > pgp:crimelabs.net/bpalmer.pgp5\n> >\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 00:02:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: PostgreSQL 7.3 Platform Testing" }, { "msg_contents": "> Ports list updated:\n\n\nSure? Still says 7.2 for openbsd and has old submission date...\n\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/supported-platforms.html\n\n", "msg_date": "Mon, 18 Nov 2002 13:05:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: FW: PostgreSQL 7.3 Platform Testing" }, { "msg_contents": "\nIt updates every 15 minutes. You have to give it time. :-)\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > Ports list updated:\n> \n> \n> Sure? Still says 7.2 for openbsd and has old submission date...\n> \n> > http://candle.pha.pa.us/main/writings/pgsql/sgml/supported-platforms.html\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 18 Nov 2002 00:06:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: PostgreSQL 7.3 Platform Testing" } ]
[ { "msg_contents": "Dear Fellow DBI and PostgreSQL Hackers,\n\nApologies for cross-posting, but I figure that some of my questions can \nbe better answered by DBI folks, while others can be better answered by \nPostgreSQL interface folks.\n\nSince Tim pointed out that DBD::Pg hasn't been updated to use DBI's \nDriver.xst, I've taken it upon myself to try to update it to do so. \nHowever, since a) I've never programmed XS before; and b) I've never \nprogrammed C before; and c) I didn't want to just totally hork the \nDBD::Pg sources, I took it upon myself to try creating a new PostgreSQL \ndriver from scratch.\n\nThe good news is that I think I'm making pretty decent progress, and I \nmay well be able to get something workable in a few weeks. It's turning \nout that C isn't quite as tough to work with as my years-long mental \nblock has led me to believe. Of course, it's made easier by the nicely \ndone DBI::DBD document, as well as the great existing implementations \nfor MySQL, ODBC, and Oracle. So I've been cutting and pasting with glee \nfrom the DBD::mysql and DBD::Pg sources, and I think it could add up to \nsomething pretty good before long.\n\nAll that is a long-winded way of leading up to some questions I've been \nhaving as I've worked through the sources. The questions:\n\n* In DBD::Pg's dbdimp.c, the dbd_db_commit() function attempts a \ncommit, and if it's successful, it then starts another transaction. Is \nthis the proper behavior? The other DBDs I looked at don't appear to \nBEGIN a new transaction in the dbd_db_commit() function.\n\n* A similar question applies to dbd_db_rollback(). It does a rollback, \nand then BEGINs a new transaction. Should it be starting another \ntransaction there?\n\n* How is DBI's begin_work() method intended to influence commits and \nrollbacks?\n\n* Also in dbd_db_commit() and dbd_db_rollback(), I notice that the last \nreturn statement returns 0. 
Shouldn't these be returning true?\n\n* In DBD::Pg's dbdimp.c, the dbd_db_disconnect() function automatically \ndoes a rollback if AutoCommit is off. Should there not be some way to \ntell that, in addition to AutoCommit being off, a transaction is \nactually in progress? That is to say, since the last call to \ndbd_db_commit() that some statements have actually been executed? Or \ndoes this matter?\n\n* In dbd_db_destroy(), if I'm using Driver.xst, I don't actually need \nto execute this code, correct?\n\n if (DBIc_ACTIVE(imp_dbh)) {\n dbd_db_disconnect(dbh, imp_dbh);\n }\n\n* In dbd_db_STORE_attrib(), DBD::Pg is doing the necessary stuff when \nAutoCommit is set to COMMIT and BEGIN transactions. If the answers to \nthe above questions about dbd_db_commit() and dbd_db_rollback() \nindicate that they can stop BEGINing transactions, couldn't those \nfunctions be called inside dbd_db_STORE_attrib() instead of \ndbd_db_STORE_attrib() duplicating much of the same code?\n\n* Also in dbd_db_STORE_attrib(), I note that DBD::Pg's \nimp_dbh->init_commit attribute is checked and set. Isn't this \nredundant, since we already have AutoCommit? Or could this attribute \nactually be used to tell something about the *status* of a transaction? \n(AFAICT, it currently isn't used that way, and is simply redundant).\n\n* And finally, is dbd_preparse() totally necessary? I mean, doesn't \nPostgreSQL's PQexec() function do the necessary parsing? Jeffrey Baker \nmentioned to me that he was working on a new parser, and perhaps I'm \nmissing something (because of parameters?), but I'm just trying to \nfigure out why this is even necessary.\n\n* One more thing: I was looking at the PostgreSQL documents for the new \nsupport for prepared statements in version 7.3. 
They look like this:\n\nPREPARE q3(text, int, float, boolean, oid, smallint) AS\n\tSELECT * FROM tenk1 WHERE string4 = $1 AND (four = $2 OR\n\tten = $3::bigint OR true = $4 OR oid = $5 OR odd = $6::int);\n\n(BTW, I can see why preparsing would be necessary here!) Now, if I'm \nunderstanding this correctly, the PREPARE statement would need to have \nthe data types of each of the parameters specified. Is this something \nthat's done in other DBI drivers?\n\nOkay, sorry for all the questions. My motivation is to make a new \nPostgreSQL DBI driver that's one of the best DBI drivers around. Any \nhelp would go a long way toward helping me to reach my goal.\n\nTIA,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Sun, 17 Nov 2002 19:00:30 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "DBD::PostgreSQL" }, { "msg_contents": "This is great to hear ... possible name of PgXS? (not that the current\nversion isn't using XS), allows both Pg and the new Pg (along with PgSPI) to\nbe installed at once.\n\nOn Sun, Nov 17, 2002 at 07:00:30PM -0800, David Wheeler wrote:\n> programmed C before; and c) I didn't want to just totally hork the \n> DBD::Pg sources, I took it upon myself to try creating a new PostgreSQL \n> driver from scratch.\n\nLearning under fire, the best way!\n\n> * In DBD::Pg's dbdimp.c, the dbd_db_commit() function attempts a \n> commit, and if it's successful, it then starts another transaction. Is \n> this the proper behavior? The other DBDs I looked at don't appear to \n> BEGIN a new transaction in the dbd_db_commit() function.\n\nYes, when AutoCommit is on, each statement is committed after execution.\nDBD::ADO uses an ADO function that starts a new transaction after a successful\ncommit or rollback of the current. 
It's switching between the two states that\ngets difficult to handle (also supporting database types that do not support\ntransactions).\n\n> * A similar question applies to dbd_db_rollback(). It does a rollback, \n> and then BEGINs a new transaction. Should it be starting another \n> transaction there?\n\nYes.\n\n> * How is DBI's begin_work() method intended to influence commits and \n> rollbacks?\n\nInfo from the DBI doc:\n \"begin_work\" $rc = $dbh->begin_work or die $dbh->errstr;\n\n Enable transactions (by turning \"AutoCommit\" off) until the next call to\n \"commit\" or \"rollback\". After the next \"commit\" or \"rollback\",\n \"AutoCommit\" will automatically be turned on again.\n\n If \"AutoCommit\" is already off when \"begin_work\" is called then it does\n nothing except return an error. If the driver does not support\n transactions then when \"begin_work\" attempts to set \"AutoCommit\" off the\n driver will trigger a fatal error.\n\n See also \"Transactions\" in the \"FURTHER INFORMATION\" section below.\n\nIMHO: begin_work for Pg simply turns AutoCommit off. The AutoCommit handles\ncommitting the current transaction and starting the next.\n\n> * Also in dbd_db_commit() and dbd_db_rollback(), I notice that the last \n> return statement returns 0. Shouldn't these be returning true?\n\nSuccess is non-zero. However, $dbh->err is 0 or undefined.\n\nInfo from DBI doc:\n \"commit\"\n\t$rc = $dbh->commit or die $dbh->errstr;\n\n\n> * In DBD::Pg's dbdimp.c, the dbd_db_disconnect() function automatically \n> does a rollback if AutoCommit is off. Should there not be some way to \n> tell that, in addition to AutoCommit being off, a transaction is \n> actually in progress? That is to say, since the last call to \n> dbd_db_commit() that some statements have actually been executed? 
Or \n> does this matter?\n\n\nIMHO: It's much safer to rollback (unconditionally) on disconnect, than\nattempting to manage tracking the current action taken in the\ntransaction by the different statement handlers.\n\n> * And finally, is dbd_preparse() totally necessary? I mean, doesn't \n> PostgreSQL's PQexec() function do the necessary parsing? Jeffrey Baker \n> mentioned to me that he was working on a new parser, and perhaps I'm \n> missing something (because of parameters?), but I'm just trying to \n> figure out why this is even necessary.\n\nAFAIK: All the drivers support dbd_preparse.\n\n> * One more thing: I was looking at the PostgreSQL documents for the new \n> support for prepared statements in version 7.3. They look like this:\n> \n> PREPARE q3(text, int, float, boolean, oid, smallint) AS\n> SELECT * FROM tenk1 WHERE string4 = $1 AND (four = $2 OR\n> ten = $3::bigint OR true = $4 OR oid = $5 OR odd = $6::int);\n> \n> (BTW, I can see why preparsing would be necessary here!) Now, if I'm \n> understanding this correctly, the PREPARE statement would need to have \n> the data types of each of the parameters specified. Is this something \n> that's done in other DBI drivers?\n\nOuch ... that may make things ugly. \nIt'll give you fewer nightmares if you can pass the \"statement\" to\nthe back-end to prepare, having the back-end return the number of\nparameters, and data types. (I haven't looked at the 7.3 PostgreSQL\ndocumentation yet). If the back-end doesn't support this type of\nprepare, then you may need to pre-parse the statement to determine\nwhat placeholders are required and attempt to determine the correct\ndata types.\n\nTom\n\n-- \nThomas A. 
Lowery\" <tl-lists@stlowery.net>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Sun, 17 Nov 2002, David Wheeler wrote:\n\n> * In DBD::Pg's dbdimp.c, the dbd_db_commit() function attempts a \n> commit, and if it's successful, it then starts another transaction. Is \n> this the proper behavior? The other DBDs I looked at don't appear to \n> BEGIN a new transaction in the dbd_db_commit() function.\n> \n> * A similar question applies to dbd_db_rollback(). It does a rollback, \n> and then BEGINs a new transaction. Should it be starting another \n> transaction there?\n> \n\nCurrent behaviour sounds about right. Iff you are not in auto commit mode,\nyou have to tell pg to start a new transaction. IIRC, some DBs will\nautomatically start a new transaction when the commit/rollback is called;\nhowever, for pg, an explicit BEGIN is required to start the transaction.\n\n> * How is DBI's begin_work() method intended to influence commits and \n> rollbacks?\n> \n\nI would guess this is along the lines of std PostgreSQL behaviour; when you\nbegin_work you tell the db to start a transaction (BEGIN) up until the\nnext commit/rollback. So instead of turning autocommit off you can just\nbegin work around the blocks of code that need transactions. (cf. local\n($dbh->{AutoCommit}) = 0)\n\n\n> * Also in dbd_db_commit() and dbd_db_rollback(), I notice that the last \n> return statement returns 0. Shouldn't these be returning true?\n\ndbd_db_commit() returns zero when NULL == $imp_dbh->conn or on error. It \nreturns one when PGRES_COMMAND_OK == status.\n\nHmm, interesting... It looks like the data can be committed to the database &\ndbd_db_commit can still throw an error because the BEGIN failed. Ugg.\nThis could be non-pretty.\n\nall of the above also goes for rollback().\n\n\n> * In DBD::Pg's dbdimp.c, the dbd_db_disconnect() function automatically \n> does a rollback if AutoCommit is off. 
Should there not be some way to \n> tell that, in addition to AutoCommit being off, a transaction is \n> actually in progress? That is to say, since the last call to \n> dbd_db_commit() that some statements have actually been executed? Or \n> does this matter?\n> \n\nA transaction is already in progress because you have called BEGIN.\n\n\n> * In dbd_db_destroy(), if I'm using Driver.xst, I don't actually need \n> to execute this code, correct?\n> \n> if (DBIc_ACTIVE(imp_dbh)) {\n> dbd_db_disconnect(dbh, imp_dbh);\n> }\n> \n\nDon't know, but it looks like (cursory glance) that dbd_db_disconnect gets \ncalled already before dbd_db_destroy in DESTROY of Driver.xst. But hey \ncan't hurt, right :)\n\n\n> * And finally, is dbd_preparse() totally necessary? I mean, doesn't \n> PostgreSQL's PQexec() function do the necessary parsing? Jeffrey Baker \n> mentioned to me that he was working on a new parser, and perhaps I'm \n> missing something (because of parameters?), but I'm just trying to \n> figure out why this is even necessary.\n\n\ndbd_preparse scans and rewrites the query for placeholders, so if you\nwant to use placeholders with prepare, you will need to walk the string \nlooking for placeholders. How do you think DBD::Pg knows that when you \nsay $sth = $x->prepare(\"SELECT * FROM thing WHERE 1=? and 2 =?\") that $sth \nis going to need two placeholders when execute() is called?\n\n\n\n> * One more thing: I was looking at the PostgreSQL documents for the new \n> support for prepared statements in version 7.3. They look like this:\n> \n> PREPARE q3(text, int, float, boolean, oid, smallint) AS\n> \tSELECT * FROM tenk1 WHERE string4 = $1 AND (four = $2 OR\n> \tten = $3::bigint OR true = $4 OR oid = $5 OR odd = $6::int);\n> \n\n From my rough scanning of the docs a few weeks ago, I think that the \ntypes are optional (I hope that they are, in any event), & you are \nmissing the plan_name. 
\n\nTo get this to work automagically in DBD::Pg, you would have\ndbd_st_reparse rewrite the placeholders ?/p:1/&c. as $1, $2, $4, &c, then\nprepend a PREPARE plan_name, and then issue the query to the db\n(remembering the plan name that you created for the call to execute\nlater).\n\n> (BTW, I can see why preparsing would be necessary here!) Now, if I'm \n> understanding this correctly, the PREPARE statement would need to have \n> the data types of each of the parameters specified. Is this something \n> that's done in other DBI drivers?\n\nYou do not want to go there (trying to magically get the types for the \nplaceholders (unless PostgreSQL will give them to you)).\n\nLater,\n\n-r\n\n", "msg_date": "Sun, 17 Nov 2002 23:26:11 -0500 (EST)", "msg_from": "Rudy Lippan <rlippan@remotelinux.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> * In DBD::Pg's dbdimp.c, the dbd_db_commit() function attempts a \n> commit, and if it's successful, it then starts another transaction. Is \n> this the proper behavior? The other DBDs I looked at don't appear to \n> BEGIN a new transaction in the dbd_db_commit() function.\n> * A similar question applies to dbd_db_rollback(). It does a rollback, \n> and then BEGINs a new transaction. Should it be starting another \n> transaction there?\n\nBoth of these seem pretty bogus to me. Ideally the driver should not\nissue a \"begin\" until the application issues the first command of the\nnew transaction. Otherwise you get into scenarios where idle\nconnections are holding open transactions, and ain't nobody gonna be\nhappy with that.\n\n> (BTW, I can see why preparsing would be necessary here!) Now, if I'm \n> understanding this correctly, the PREPARE statement would need to have \n> the data types of each of the parameters specified. 
Is this something \nthat's done in other DBI drivers?\n\nProbably not --- the SQL spec seems to think that the server can intuit\nappropriate datatypes for each parameter symbol. (Which I suppose may\nbe true, in a datatype universe as impoverished as the spec's is;\nbut it won't work for Postgres. Thus we have a nonstandard syntax for\nPREPARE.) So you'll probably have to do some driver-specific coding here.\n\nNo ideas about your other questions, but I hope the DBI folk can answer.\n\n> Okay, sorry for all the questions. My motivation is to make a new \n> PostgreSQL DBI driver that's one of the best DBI drivers around. Any \n> help would go a long way toward helping me to reach my goal.\n\nGo to it ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 01:15:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Monday 18 November 2002 04:00, David Wheeler wrote:\n> Dear Fellow DBI and PostgreSQL Hackers,\n\n(...)\n> Okay, sorry for all the questions. My motivation is to make a new\n> PostgreSQL DBI driver that's one of the best DBI drivers around. Any\n> help would go a long way toward helping me to reach my goal.\n\nCount me in. I'm still in a slight state of shock after wandering over\nto CPAN to find out how DBD::Pg was coming along ;-). At the\nvery least I can do testing and documentation, and quite possibly\n\"grunt work\". Anything else will depend on how quickly I can\nacquaint myself with the internals of DBI. (Note to self: do not believe\nthis is impossible or anything). Perl is my main \ndevelopment language, and I used to work a lot with C.\n\nFor clarification: is DBD::Postgres intended to replace DBD::Pg, and are\nany maintenance releases of the latter planned (e.g. in conjunction with\nthe PostgreSQL 7.3. 
release)?\n\nIan Barwick\nbarwick@gmx.net\n\n", "msg_date": "Mon, 18 Nov 2002 09:32:11 +0100", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Sun, Nov 17, 2002 at 07:00:30PM -0800, David Wheeler wrote:\n> \n> * In DBD::Pg's dbdimp.c, the dbd_db_commit() function attempts a \n> commit, and if it's successful, it then starts another transaction. Is \n> this the proper behavior? The other DBDs I looked at don't appear to \n> BEGIN a new transaction in the dbd_db_commit() function.\n\nMany databases, like Oracle, automatically start a transaction at\nthe server as soon as it's needed. The application doesn't have to\ndo it explicitly. (DBD::Informix is probably a good example of a\ndriver that needs to start transactions explicitly.)\n\n> * A similar question applies to dbd_db_rollback(). It does a rollback, \n> and then BEGINs a new transaction. Should it be starting another \n> transaction there?\n\nDrivers are free to defer starting a new transaction until it's needed.\nOr they can start one right away, but that may cause problems on\nthe server if there are many 'idle transactions'. (Also beware that\nsome databases don't allow certain statements, like some 'alter\nsession ...', to be issued while a transaction is active. 
If that\napplies to Pg then you may have a problem.)\n\n> * How is DBI's begin_work() method intended to influence commits and \n> rollbacks?\n\n From the source:\n\n sub begin_work {\n my $dbh = shift;\n return $dbh->DBI::set_err(1, \"Already in a transaction\")\n unless $dbh->FETCH('AutoCommit');\n $dbh->STORE('AutoCommit', 0); # will croak if driver doesn't support it\n $dbh->STORE('BegunWork', 1); # trigger post commit/rollback action\n }\n\ndrivers do *not* need to define their own begin_work method.\n\nWhat they _should_ do is make their commit and rollback methods\ncheck for BegunWork being true (it's a bit flag in the com structure)\nand if true then turn AutoCommit back on instead of starting a new transaction.\n\n(If they don't do that then the DBI handles it but it's faster,\ncleaner, and safer for the driver to do it.)\n\n\n> * Also in dbd_db_commit() and dbd_db_rollback(), I notice that the last \n> return statement returns 0. Shouldn't these be returning true?\n\nYes, when using Driver.xst, if there's no error.\n\n> Okay, sorry for all the questions. My motivation is to make a new \n> PostgreSQL DBI driver that's one of the best DBI drivers around. Any \n> help would go a long way toward helping me to reach my goal.\n\nI'd really appreciate any feedback (ie patches :) you might have\nfor the DBI::DBD document. 
It's a bit thin and/or dated in places.\n\nTim.\n\n\n", "msg_date": "Mon, 18 Nov 2002 10:15:55 +0000", "msg_from": "Tim Bunce <Tim.Bunce@pobox.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Mon, Nov 18, 2002 at 10:15:55AM +0000, Tim Bunce wrote:\n> \n> What they _should_ do is make their commit and rollback methods\n> check for BegunWork being true (it's a bit flag in the com structure)\n> and if true then turn AutoCommit back on instead of starting a new transaction.\n\n(and turn BegunWork back off.)\n\nTim.\n", "msg_date": "Mon, 18 Nov 2002 10:26:20 +0000", "msg_from": "Tim Bunce <Tim.Bunce@pobox.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Mon, 18 Nov 2002, Ian Barwick wrote:\n\n> For clarification: is DBD::Postgres intended to replace DBD::Pg, and are\n> any maintenance releases of the latter planned (e.g. in conjunction with\n> the PostgreSQL 7.3. release)?\n\nI didn't see any indication that David's planning on giving a new name to\nhis rewritten PostgreSQL DBD driver, other than the subject of his email.\nIt would cause a lot of pain if the driver's name changed from DBD::Pg,\nsince every place people have DSNs in code or config files would have to\nbe updated ...\n\nWould anyone actually consider using a different name than DBD::Pg? It\nseems it would be a big pain with no benefit, and make it unclear which \ndriver users should use.\n\nJon\n\n", "msg_date": "Mon, 18 Nov 2002 14:47:56 +0000 (UTC)", "msg_from": "Jon Jensen <jon@endpoint.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Sunday, November 17, 2002, at 10:15 PM, Tom Lane wrote:\n\n> Both of these seem pretty bogus to me. Ideally the driver should not\n> issue a \"begin\" until the application issues the first command of the\n> new transaction. 
Otherwise you get into scenarios where idle\n> connections are holding open transactions, and ain't nobody gonna be\n> happy with that.\n\nOkay. I think I'll use a flag in the driver to track when it's in a \ntransaction, and do the right thing in the begin and rollback functions.\n\n> Probably not --- the SQL spec seems to think that the server can intuit\n> appropriate datatypes for each parameter symbol. (Which I suppose may\n> be true, in a datatype universe as impoverished as the spec's is;\n> but it won't work for Postgres. Thus we have a nonstandard syntax for\n> PREPARE.) So you'll probably have to do some driver-specific coding \n> here.\n\nSo, if I understand you correctly, PostgreSQL's PREPARE statement \n*requires* data typing in its syntax? If so, is there an \neasy/straight-forward way to ask the server what the data types for \neach column are before executing the PREPARE?\n\n> No ideas about your other questions, but I hope the DBI folk can \n> answer.\n\nThanks, yes, I'm getting some good responses.\n\n> Go to it ;-)\n\nThanks Tom!\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:16:03 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> On Sunday, November 17, 2002, at 10:15 PM, Tom Lane wrote:\n>> Both of these seem pretty bogus to me. Ideally the driver should not\n>> issue a \"begin\" until the application issues the first command of the\n>> new transaction. Otherwise you get into scenarios where idle\n>> connections are holding open transactions, and ain't nobody gonna be\n>> happy with that.\n\n> Okay. 
I think I'll use a flag in the driver to track when it's in a \n> transaction, and do the right thing in the begin and rollback functions.\n\nI think someone else said that the DBD framework already includes such a\nflag (\"BegunWork\"?) --- if so, you should surely use that one.\n\n> So, if I understand you correctly, PostgreSQL's PREPARE statement \n> *requires* data typing in its syntax?\n\nYup.\n\n> If so, is there an \n> easy/straight-forward way to ask the server what the data types for \n> each column are before executing the PREPARE?\n\nThere are various ways to retrieve the datatypes of the columns of a\ntable, but I'm not sure how that helps you to determine the parameter\ntypes for an arbitrary SQL command to be prepared. Are you assuming\na specific structure of the command you want to prepare?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 11:19:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Monday, November 18, 2002, at 08:19 AM, Tom Lane wrote:\n\n> I think someone else said that the DBD framework already includes such \n> a\n> flag (\"BegunWork\"?) --- if so, you should surely use that one.\n\nNo, I'm finding out that that flag is for a slightly different purpose \n-- using transactions even when AutoCommit = 1.\n\n>> So, if I understand you correctly, PostgreSQL's PREPARE statement\n>> *requires* data typing in its syntax?\n>\n> Yup.\n\nDamn.\n\n> There are various ways to retrieve the datatypes of the columns of a\n> table, but I'm not sure how that helps you to determine the parameter\n> types for an arbitrary SQL command to be prepared. Are you assuming\n> a specific structure of the command you want to prepare?\n\nOuch, good point. I don't want to go there. It's a shame, really, but \nin light of this requirement, I don't see how PostgreSQL prepared \nstatements can be supported by the DBI. 
Pity; I was really looking \nforward to the performance boost.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:27:33 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Mon, Nov 18, 2002 at 11:19:25AM -0500, Tom Lane wrote:\n> David Wheeler <david@wheeler.net> writes:\n> > On Sunday, November 17, 2002, at 10:15 PM, Tom Lane wrote:\n> >> Both of these seem pretty bogus to me. Ideally the driver should not\n> >> issue a \"begin\" until the application issues the first command of the\n> >> new transaction. Otherwise you get into scenarios where idle\n> >> connections are holding open transactions, and ain't nobody gonna be\n> >> happy with that.\n> \n> > Okay. I think I'll use a flag in the driver to track when it's in a \n> > transaction, and do the right thing in the begin and rollback functions.\n> \n> I think someone else said that the DBD framework already includes such a\n> flag (\"BegunWork\"?) --- if so, you should surely use that one.\n\nBegunWork _only_ relates to the begin_work method. It's not used unless\nthat method is used, so it's not appropriate for your use here.\n\nJust add a flag to the drivers private structure.\n\nTim.\n", "msg_date": "Mon, 18 Nov 2002 16:39:08 +0000", "msg_from": "Tim Bunce <Tim.Bunce@pobox.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Sunday, November 17, 2002, at 08:26 PM, Rudy Lippan wrote:\n\n> Current behaviour sounds about right. Iff you are not in auto commit \n> mode,\n> you have to tell pg to start a new transaction. 
IIRC, some DBs will\n> automatically start a new transaction when the commit/rollback is \n> called;\n> however, for pg, an explicit BEGIN is required to start the \n> transaction.\n\nWith feedback from Tom Lane, I think I'll add code to track when to \nBEGIN a transaction, and check it in execute() to see if it needs to be \nturned on before executing a statement.\n\n> I would guess this is along the lines of std PostgreSQL behaviour; when \n> you\n> begin_work you tell the db to start a transaction (BEGIN) up until the\n> next commit/rollback. So instead of turning autocommit off you can \n> just\n> begin work around the blocks of code that need transactions. (cf. \n> local\n> ($dbh->{AutoCommit}) = 0)\n\nOkay, so if I understand correctly, it's an alternative to AutoCommit \nfor handling transactions. That explains why they *both* need to be \nchecked.\n\n> dbd_db_commit() returns zero when NULL == $imp_dbh->conn or on error. \n> It\n> returns one when PGRES_COMMAND_OK == status.\n\nOkay.\n\n> Hmm, interesting... It looks like the data can be committed to the database &\n> dbd_db_commit can still throw an error because the BEGIN failed. \n> Ugg.\n> This could be non-pretty.\n\nYeah, that's another reason to set a flag and remove the BEGIN from \ndbd_db_commit() and dbd_db_rollback().\n\n> A transaction is already in progress because you have called BEGIN.\n\nYes, but if I set the flag as I've mentioned above, I may not have. It \nmakes sense to me to use the init_commit flag for this purpose.\n\n> Don't know, but it looks like (cursory glance) that dbd_db_disconnect \n> gets\n> called already before dbd_db_destroy in DESTROY of Driver.xst. But hey\n> can't hurt, right :)\n\nUm, yes, I guess that's true. 
I was thinking about redundant operations \nusing more time, but I guess that doesn't really matter in \ndbd_db_destroy() (and it takes next to no time, anyway).\n\n> dbd_preparse scans and rewrites the query for placeholders, so if you\n> want to use placeholders with prepare, you will need to walk the string\n> looking for placeholders. How do you think DBD::Pg knows that when you\n> say $sth = $x->prepare(\"SELECT * FROM thing WHERE 1=? and 2 =?\") that \n> $sth\n> is going to need two placeholders when execute() is called?\n\nRight, okay, that's *kind of* what I thought. It just seems a shame \nthat each query has to be parsed twice (once by the DBI driver, once by \nPostgreSQL). But I guess there's no other way about it. Perhaps our \npreparsed statement could be cached by prepare_cached(), so that, even \nthough we can't cache a statement prepared by PostgreSQL (see my \nexchange with Tom Lane), we could at least cache our own parsed \nstatement.\n\n>> * One more thing: I was looking at the PostgreSQL documents for the \n>> new\n>> support for prepared statements in version 7.3. They look like this:\n>>\n>> PREPARE q3(text, int, float, boolean, oid, smallint) AS\n>> \tSELECT * FROM tenk1 WHERE string4 = $1 AND (four = $2 OR\n>> \tten = $3::bigint OR true = $4 OR oid = $5 OR odd = $6::int);\n>>\n> From my rough scanning of the docs a few weeks ago, I think that the\n> types are optional (I hope that they are, in any event), & you are\n> missing the plan_name.\n\nUnfortunately, according to Tom Lane, the data types are required. :-(\nFWIW with the above example, I swiped it right out of PostgreSQL's \ntests. The plan_name is \"q3\".\n\n> You do not want to go there (trying to magically get the types for the\n> placeholders (unless PostgreSQL will give them to you)).\n\nNot easily, I think. 
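The dbd_preparse-style skim Rudy describes above needs no SQL grammar: a single pass over the string that skips quoted material is enough to find the placeholders. A rough illustration follows -- hypothetical Python standing in for the C code in dbdimp.c, and simplified to the `?` style only (a real scanner must also cope with the other placeholder styles and quoting rules the driver supports):

```python
def count_placeholders(sql):
    """Count ? placeholders in an SQL string, skipping string
    literals ('...') and double-quoted identifiers ("...")."""
    count = 0
    in_quote = None  # the current quote character, or None
    i = 0
    while i < len(sql):
        ch = sql[i]
        if in_quote:
            if ch == in_quote:
                # a doubled quote ('' or "") is an escape, not a close
                if i + 1 < len(sql) and sql[i + 1] == in_quote:
                    i += 1
                else:
                    in_quote = None
        elif ch in ("'", '"'):
            in_quote = ch
        elif ch == "?":
            count += 1
        i += 1
    return count
```

For the query in Rudy's example, `count_placeholders("SELECT * FROM thing WHERE 1=? and 2 =?")` yields 2, while a `?` buried inside a string literal is ignored. This is also the piece that prepare_cached() could usefully cache, so the skim happens once per distinct statement.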
A shame, really, that the data types are required, \nas it means that dynamic database clients like DBI (and, I expect, \nJDBC) won't really be able to take advantage of prepared statements. \nOnly custom code that uses the PostgreSQL API directly (that is, C \napplications) will be able to do it.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:42:01 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 08:39 AM, Tim Bunce wrote:\n\n> BegunWork _only_ relates to the begin_work method. It's not used unless\n> that method is used, so it's not appropriate for your use here.\n>\n> Just add a flag to the driver's private structure.\n\nRight, that's my plan.\n\nThanks Tim!\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:44:52 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 12:32 AM, Ian Barwick wrote:\n\n> Count me in. I'm still in a slight state of shock after wandering over\n> to CPAN to find out how DBD::Pg was coming along ;-). At the\n> very least I can do testing and documentation, and quite possibly\n> \"grunt work\". Anything else will depend on how quickly I can\n> acquaint myself with the internals of DBI. (Note to self: do not \n> believe\n> this is impossible or anything). 
Perl is my main\n> development language, and I used to work a lot with C.\n\nWell then, once I finish pasting together dbdimp.c and get everything \nto compile, I might ask for your help with a code review and writing \ntests.\n\n> For clarification: is DBD::Postgres intended to replace DBD::Pg, and \n> are\n> any maintenance releases of the latter planned (e.g. in conjunction \n> with\n> the PostgreSQL 7.3. release)?\n\nAnd on Monday, November 18, 2002, at 06:47 AM, Jon Jensen wrote:\n\n> I didn't see any indication that David's planning on giving a new name \n> to\n> his rewritten PostgreSQL DBD driver, other than the subject of his \n> email.\n> It would cause a lot of pain if the driver's name changed from DBD::Pg,\n> since every place people have DSNs in code or config files would have \n> to\n> be updated ...\n>\n> Would anyone actually consider using a different name than DBD::Pg? It\n> seems it would be a big pain with no benefit, and make it unclear which\n> driver users should use.\n\nI expect that the PostgreSQL developers will include whatever DBI \ndriver is the \"official\" DBI driver for PostgreSQL. At this point, I've \njust changed the name so I can feel free to hack it any way I like, \nincluding breaking backward compatibility where necessary (such as in \nthe escape() method).\n\nIf I finish something that actually works, then I'll request some help \nfrom others comparing it to the behavior of the DBD::Pg driver. If it \ndoesn't break backwards compatibility too much, then I would suggest \nthat it become DBD::Pg 1.20 or 2.0 or something. But if its behavior is \ndifferent enough (and it would need to be tried with a number of \ndifferent applications to see what breaks, I think), then it would \nprobably have to be released under a different name and people could \nswitch if/when they could. 
But we're a long ways from determining that \njust yet.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:49:17 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 02:15 AM, Tim Bunce wrote:\n\n> Many databases, like Oracle, automatically start a transaction at\n> the server as soon as it's needed. The application doesn't have to\n> do it explicitly. (DBD::Informix is probably a good example of a\n> driver that needs to start transactions explicitly.)\n\nI'm quite sure that in PostgreSQL, transactions have to be started \nexplicitly.\n\n> Drivers are free to defer starting a new transaction until it's needed.\n> Or they can start one right away, but that may cause problems on\n> the server if there are many 'idle transactions'. (Also beware that\n> some databases don't allow certain statements, like some 'alter\n> session ...', to be issued while a transaction is active. If that\n> applies to Pg then you may have a problem.)\n\nAccording to Tom Lane, idle transactions are problematic, so I think \nI'll code it up to start the transaction when it's needed -- presumably \nby checking and setting the relevant flags in execute().\n\n> drivers do *not* need to define their own begin_work method.\n>\n> What they _should_ do is make their commit and rollback methods\n> check for BegunWork being true (it's a bit flag in the com structure)\n> and if true then turn AutoCommit back on instead of starting a new \n> transaction.\n>\n> (If they don't do that then the DBI handles it but it's faster,\n> cleaner, and safer for the driver to do it.)\n\nOkay, then that's what I'll do. 
Do I check it like this?\n\n if (DBIc_has(imp_dbh, DBIcf_BegunWork)) {...}\n\n>> * Also in dbd_db_commit() and dbd_db_rollback(), I notice that the \n>> last\n>> return statement returns 0. Shouldn't these be returning true?\n>\n> Yes, when using Driver.xst, if there's no error.\n\nIt appears that they return false when imp_dbh->conn is NULL. That \nwould count as an error, I think. DBD::Pg doesn't report it as an \nerror, though -- it just returns false. Should I add an appropriate \ncall to do_error() in such a case?\n\n>> Okay, sorry for all the questions. My motivation is to make a new\n>> PostgreSQL DBI driver that's one of the best DBI drivers around. Any\n>> help would go a long way toward helping me to reach my goal.\n>\n> I'd really appreciate any feedback (ie patches :) you might have\n> for the DBI::DBD document. It's a bit thin and/or dated in places.\n\nYes, I've thought about that. You can at least expect a bit of clean up \n(grammar, etc.), but I might well add more. It'd probably be good to do \nso as a newbie who wants to help other newbies along...\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:55:20 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 02:26 AM, Tim Bunce wrote:\n\n>> What they _should_ do is make their commit and rollback methods\n>> check for BegunWork being true (it's a bit flag in the com structure)\n>> and if true then turn AutoCommit back on instead of starting a new \n>> transaction.\n>\n> (and turn BegunWork back off.)\n\nGotcha. 
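Putting the pieces of this exchange together -- lazy BEGIN in execute(), and commit/rollback restoring AutoCommit when BegunWork is set -- the bookkeeping might look like the following. This is a hypothetical Python mock-up of the C driver logic only; the field and method names are made up, and `_send` stands in for PQexec:

```python
class MockDriverHandle:
    """Sketch of lazy-BEGIN bookkeeping: no transaction is opened
    on the server until the first statement executes with
    AutoCommit off."""

    def __init__(self):
        self.AutoCommit = True
        self.BegunWork = False   # set by begin_work()
        self.in_txn = False      # have we sent BEGIN yet?
        self.sent = []           # commands "sent" to the server

    def _send(self, cmd):
        self.sent.append(cmd)

    def execute(self, sql):
        # start a transaction only when one is actually needed
        if not self.AutoCommit and not self.in_txn:
            self._send("BEGIN")
            self.in_txn = True
        self._send(sql)

    def begin_work(self):
        self.AutoCommit = False
        self.BegunWork = True

    def _finish(self, cmd):
        if self.in_txn:
            self._send(cmd)
            self.in_txn = False
        if self.BegunWork:       # restore AutoCommit, clear the flag
            self.AutoCommit = True
            self.BegunWork = False

    def commit(self):
        self._finish("COMMIT")

    def rollback(self):
        self._finish("ROLLBACK")

    def disconnect(self):
        # roll back only if a transaction is actually open
        if self.in_txn:
            self._send("ROLLBACK")
            self.in_txn = False
```

With this arrangement an idle handle with AutoCommit off holds no open transaction on the server until execute() is first called, and disconnect() only issues a ROLLBACK when something is actually pending.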
Thanks.\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 08:55:34 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> On Monday, November 18, 2002, at 08:19 AM, Tom Lane wrote:\n>> There are various ways to retrieve the datatypes of the columns of a\n>> table, but I'm not sure how that helps you to determine the parameter\n>> types for an arbitrary SQL command to be prepared. Are you assuming\n>> a specific structure of the command you want to prepare?\n\n> Ouch, good point. I don't want to go there. It's a shame, really, but \n> in light of this requirement, I don't see how PostgreSQL prepared \n> statements can be supported by the DBI. Pity; I was really looking \n> forward to the performance boost.\n\nThinking about this, it occurs to me that there's no good reason why\nwe couldn't allow parameter symbols ($n) to be considered type UNKNOWN\ninitially. The type interpretation algorithms would then treat them\njust like quoted literal constants. After parsing finishes, PREPARE\ncould scan the tree to see what type each symbol had been cast to.\n(You'd have to raise an error if multiple appearances of the same symbol\nhad been cast to different types, but that'd be an uncommon case.)\n\nThis form of PREPARE would presumably need some way of reporting back\nthe types it had determined for the symbols; anyone have a feeling for\nthe appropriate API for that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 11:58:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL)" }, { "msg_contents": "On Sunday, November 17, 2002, at 08:21 PM, Thomas A. Lowery wrote:\n\n> This is great to hear ... 
possible name of PgXS? (not that the current\n> version isn't using XS), allows both Pg and the new Pg (along with \n> PgSPI) to\n> be installed at once.\n\nWell, if the name needs to change, I was thinking of DBD::PgSQL. Is \nsomeone working on DBD::PgSPI? That might be even more valuable, since \nthat appears to be a much more robust API.\n\n> Learning under fire, the best way!\n\nYes...or I'm a crazy bastard. Take your pick.\n\n> Yes, when AutoCommit is on, each statement is committed after \n> execution.\n> DBD::ADO uses an ADO function that starts a new transaction after a \n> successful\n> commit or rollback of the current. It's switching between the two \n> states that\n> gets difficult to handle (also supporting database types that do not \n> support\n> transactions).\n\nSo in DBD::ADO, you're not actually deferring starting a new \ntransaction until it's actually needed? Are there no problems with idle \ntransactions?\n\n> IMHO: begin_work for Pg simply turns AutoCommit off. The AutoCommit \n> handles\n> committing the current transaction and starting the next.\n\nOkay.\n\n>> * Also in dbd_db_commit() and dbd_db_rollback(), I notice that the \n>> last\n>> return statement returns 0. Shouldn't these be returning true?\n>\n> Success is non-zero. However, $dbh->err is 0 or undefined.\n>\n> Info from DBI doc:\n> \"commit\"\n> \t$rc = $dbh->commit or die $dbh->errstr;\n\nYes. However, dbd_db_commit() and dbd_db_rollback() can return false \nwithout throwing an error. I think that's a mistake.\n\n> IMHO: It's much safer to rollback (unconditionally) on disconnect, than\n> attempting to manage tracking the current action taken in the\n> transaction by the different statement handlers.\n\nDon't all statements ultimately go through dbd_st_execute()? 
If so, then \nI think it'd be relatively easy to just start the transaction when it's \nneeded, and then dbd_db_disconnect() can check for a flag indicating \nwhether a transaction is actually in progress or not.\n\n> AFAIK: All the drivers support dbd_preparse.\n\nOkay, got it.\n\n> Ouch ... that may make things ugly.\n\nAmen.\n\n> It'll give you fewer nightmares if you can pass the \"statement\" to\n> the back-end to prepare, having the back-end return the number of\n> parameters, and data types. (I haven't looked at the 7.3 PostgreSQL\n> documentation yet). If the back-end doesn't support this type of\n> prepare, then you may need to pre-parse the statement to determine\n> what placeholders are required and attempt to determine the correct\n> data types.\n\nAFAIK, there currently is no API for this, but I think that this \nexchange might have tickled some ideas among the PostgreSQL \ndevelopers... :-)\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 09:06:30 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> I'm quite sure that in PostgreSQL, transactions have to be started \n> explicitly.\n\nAs of 7.3 that's not necessarily so anymore; you can \"SET autocommit TO\noff\" and get the behavior where any statement starts a transaction block\n(and so an explicit COMMIT is required to commit its effects). 
Not\nsure if this helps you or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 12:12:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Monday, November 18, 2002, at 08:58 AM, Tom Lane wrote:\n\n> Thinking about this, it occurs to me that there's no good reason why\n> we couldn't allow parameter symbols ($n) to be considered type UNKNOWN\n> initially. The type interpretation algorithms would then treat them\n> just like quoted literal constants. After parsing finishes, PREPARE\n> could scan the tree to see what type each symbol had been cast to.\n> (You'd have to raise an error if multiple appearances of the same \n> symbol\n> had been cast to different types, but that'd be an uncommon case.)\n>\n> This form of PREPARE would presumably need some way of reporting back\n> the types it had determined for the symbols; anyone have a feeling for\n> the appropriate API for that?\n\nIf I'm understanding you correctly this approach would make it much \neasier on dynamic drivers such as DBI and JDBC. 
Ideally, in DBI, I'd be \nable to do something like this:\n\nPREPARE my_stmt AS\n  SELECT foo, bar\n  FROM bat\n  WHERE foo = $1\n  AND bar = $2;\n\nEXECUTE my_stmt('foo_val', 'bar_val');\n\nIt would be the responsibility of the PostgreSQL PREPARE parser to \nhandle the data typing of $1 and $2, and the responsibility of the DBI \nclient to pass in data of the appropriate type.\n\nIs this along the lines of what you're thinking, Tom?\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 09:15:16 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL)" }, { "msg_contents": "On Monday, November 18, 2002, at 09:12 AM, Tom Lane wrote:\n\n> As of 7.3 that's not necessarily so anymore; you can \"SET autocommit TO\n> off\" and get the behavior where any statement starts a transaction \n> block\n> (and so an explicit COMMIT is required to commit its effects). Not\n> sure if this helps you or not.\n\nPostgreSQL gets better and better. Yay. However, although I might be \nable to use compile-time macros to determine the PostgreSQL version, I \nhave to support a minimum version of PostgreSQL in the driver. I was \nthinking 7.0 -- maybe it's time to leave the 6.x series behind.\n\nThoughts, DBD::Pg users?\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 09:17:05 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Mon, Nov 18, 2002 at 08:55:20AM -0800, David Wheeler wrote:\n> On Monday, November 18, 2002, at 02:15 AM, Tim Bunce wrote:\n> \n> Okay, then that's what I'll do. 
Do I check it like this?\n> \n> if (DBIc_has(imp_dbh, DBIcf_BegunWork)) {...}\n\nYeap.\n\n> >>* Also in dbd_db_commit() and dbd_db_rollback(), I notice that the \n> >>last\n> >>return statement returns 0. Shouldn't these be returning true?\n> >\n> >Yes, when using Driver.xst, if there's no error.\n> \n> It appears that they return false when imp_dbh->conn is NULL. That \n> would count as an error, I think. DBD::Pg doesn't report it as an \n> error, though -- it just returns false. Should I add an appropriate \n> call to do_error() in such a case?\n\nProbably. It's fairly important that a method doesn't return an error\nstatus without having recorded the error by at least doing\n\tsv_setiv(DBIc_ERR(imp_xxh), ...)\n\n> >I'd really appreciate any feedback (ie patches :) you might have\n> >for the DBI::DBD document. It's a bit thin and/or dated in places.\n> \n> Yes, I've thought about that. You can at least expect a bit of clean up \n> (grammar, etc.), but I might well add more. It'd probably be good to do \n> so as a newbie who wants to help other newbies along...\n\nGreat. Thanks.\n\nTim.\n", "msg_date": "Mon, 18 Nov 2002 17:33:56 +0000", "msg_from": "Tim Bunce <Tim.Bunce@pobox.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday 18 November 2002 18:17, David Wheeler wrote:\n\n> PostgreSQL gets better and better. Yay. However, although I might be\n> able to use compile-time macros to determine the PostgreSQL version, I\n> have to support a minimum version of PostsgreSQL in the driver. I was\n> thinking 7.0 -- maybe it's time to leave the 6.x series behind.\n>\n> Thoughts, DBD::Pg users?\n\nExisting versions of DBD::Pg aren't suddenly going to get broken ;-)\nThe README for DBD::Pg 1.13 says 6.5 is the minimum required version,\nwhich I believe was the last major release before 7.0. So anyone\nusing < 6.5 is out of the DBD::Pg upgrade cycle anyway. 
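A minimum-version policy like this also needs a comparable form of the server's version at runtime, since compile-time macros only describe the client-side libpq the driver was built against. One hypothetical way to get that is to parse the output of SELECT version() into a tuple -- sketched here in Python for illustration (the real driver would do this in C or Perl):

```python
import re

def parse_pg_version(version_string):
    """Extract a comparable (major, minor, patch) tuple from the
    output of SELECT version(), e.g.
    'PostgreSQL 7.2.3 on i686-pc-linux-gnu, compiled by GCC 2.96'.
    Beta tags like '7.3b5' yield a patch level of 0."""
    m = re.search(r"PostgreSQL (\d+)\.(\d+)(?:\.(\d+))?", version_string)
    if m is None:
        raise ValueError("unrecognized version string")
    return (int(m.group(1)), int(m.group(2)), int(m.group(3) or "0"))
```

A guard such as `parse_pg_version(v) >= (7, 2, 0)` then gates version-dependent behavior numerically rather than by comparing strings, which would sort "10.0" before "7.0".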
A README-\nnote along the lines of \"PostgreSQL 6.5.x users: you need DBD::Pg 1.13\"\nshould suffice.\n\nIan Barwick\nbarwick@gmx.net\n\n\n", "msg_date": "Mon, 18 Nov 2002 18:36:02 +0100", "msg_from": "Ian Barwick <barwick@akademie.de>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> PostgreSQL gets better and better. Yay. However, although I might be \n> able to use compile-time macros to determine the PostgreSQL version, I \n> have to support a minimum version of PostsgreSQL in the driver. I was \n> thinking 7.0 -- maybe it's time to leave the 6.x series behind.\n\nIt's way past time to forget 6.* ;-). Based on what we see in the\nmailing lists, hardly anyone is on 7.0.* either, and the people on\n7.1.* all know they need to upgrade.\n\nFor a newly coded DBD driver, I think you could get away with setting a\nbaseline requirement of a 7.2 server. Maybe even 7.3, if you wanted to\nbe a hard case (and you aren't planning to release for a few months).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 12:39:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Mon, Nov 18, 2002 at 08:42:01AM -0800, David Wheeler wrote:\n> On Sunday, November 17, 2002, at 08:26 PM, Rudy Lippan wrote:\n> \n> >I would guess this is along the lines of std PostgeSQL behaviour; when \n> >you\n> >begin_work you tell the db to start a transaction (BEGIN) up until the\n> >next commit/rollback. So instead of turning autocommit off you can \n> >just\n> >begin work around the blocks of code that need transactions. (cf. \n> >local\n> >($dbh->{AutoCommit}) = 0)\n> \n> Okay, so if I understand correctly, it's an alternative to AutoCommit \n> for handling transactions. 
That explains why they *both* need to be checked.\n\nJust to be sure this is clear, begin_work is *just* an alternative\nway to set AutoCommit off till the next commit or rollback. The\napplication is free to just set AutoCommit explicitly as needed.\n\nThe *only* time a driver needs to consider the BegunWork attribute\nis immediately after a commit or rollback.\n\n> >dbd_preparse scans and rewrites the query for placeholders, so if you\n> >want to use placeholders with prepare, you will need to walk the string\n> >looking for placeholders. How do you think DBD::Pg knows that when you\n> >say $sth = $x->prepare(\"SELECT * FROM thing WHERE 1=? and 2 =?\") that \n> >$sth\n> >is going to need two placeholders when execute() is called?\n> \n> Right, okay, that's *kind of* what I thought. It just seems a shame \n> that each query has to be parsed twice (once by the DBI driver, once by \n> PostgreSQL).\n\nShouldn't need to \"parse\" in the formal sense, no full grammar is\nneeded, it's just a very quick skim through the string. If done in\nC it should be too cheap to worry about. Especially as it's only\ndone at prepare() time, not execute().\n\n> >>* One more thing: I was looking at the PostgreSQL documents for the \n> >>new\n> >>support for prepared statements in version 7.3. They look like this:\n> >>\n> >>PREPARE q3(text, int, float, boolean, oid, smallint) AS\n> >>\tSELECT * FROM tenk1 WHERE string4 = $1 AND (four = $2 OR\n> >>\tten = $3::bigint OR true = $4 OR oid = $5 OR odd = $6::int);\n> >>\n> >From my rough scanning of the docs a few weeks ago, I think that the\n> >types are optional (I hope that they are, in any event), & you are\n> >missing the plan_name.\n> \n> Unfortunately, according to Tom Lane, the data types are required. :-(\n> FWIW with the above example, I swiped it right out of PostgreSQL's \n> tests. 
The plan_name is \"q3\".\n> \n> >You do not want to go there (trying to magically get the types for the\n> >placeholders (unless PostgreSQL will give them to you)).\n> \n> Not easily, I think. A shame, really, that the data types are required, \n> as it means that dynamic database clients like DBI (and, I expect, \n> JDBC) won't really be able to take advantage of prepared statements. \n> Only custom code that uses the PostgreSQL API directly (that is, C \n> applications) will be able to do it.\n\nWould binding a string type to an integer (etc) placeholder work?\nIf so, just ignore the types and bind everything as strings.\nThat's exactly what DBD::Oracle does.\n\nTim.\n", "msg_date": "Mon, 18 Nov 2002 17:44:08 +0000", "msg_from": "Tim Bunce <Tim.Bunce@pobox.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Mon, 18 Nov 2002, David Wheeler wrote:\n\n> With feedback from Tom Lane, I think I'll add code to track when to \n> BEGIN a transaction, and check it in execute() to see if it needs to be \n> turned on before executing a statement.\n> \n\nI thought about that, but I was thinking that it would add quite a bit to \nthe complexity of the code (simpler is better all things being equal). \n\nThe problem with waiting until the first stmt is that DBD::Pg also makes \nrequests to the DB behind the scenes, so now you have to keep track of tx \nstatus before querying the db and turn it off and turn it back on when done \niff you are not already in tx.... \n\nIf you are going to say that transactions are only BEGINed when the user\nexecutes a statement, then it would be a bug for the driver to just decide to\nstart one on the user.\n\nI don't know what would be worse: Always having a transaction open when I \nam not in autocommit mode, and knowing about it, or having to worry \nwhether I am in a transaction because of some stmt that I or some other \nmodule issued (Including DBD::Pg).\n\n\n> Um, yes, I guess that's true. 
I was thinking about redundant operations \n> using more time, but I guess that doesn't really matter in \n> dbd_db_destroy() (and it takes next to no time, anyway).\n\nI look at it this way... make it simple and bullet-proof and then optimize \nif needed.\n\n\n> > dbd_preparse scans and rewrites the query for placeholders, so if you\n> > want to use placeholders with prepare, you will need to walk the string\n> > looking for placeholders. How do you think DBD::Pg knows that when you\n> > say $sth = $x->prepare(\"SELECT * FROM thing WHERE 1=? and 2 =?\") that \n> > $sth\n> > is going to need two placeholders when execute() is called?\n> \n> Right, okay, that's *kind of* what I thought. It just seems a shame \n> that each query has to be parsed twice (once by the DBI driver, once by \n> PostgreSQL). But I guess there's no other way about it. Perhaps our \n> preparsed statement could be cached by prepare_cached(), so that, even \n> though we can't cache a statement prepared by PostgreSQL (see my \n> exchange with Tom Lane), we could at least cache our own parsed \n> statement.\n\nIs that not what prepare_cached is for? One should only be prepping \nthe statement once anyway, right?\n\nAnd the statement gets parsed twice by DBD::Pg. One time in prepare and \none time in execute (for the substitution of parameters).\n\n> \n> \n> Unfortunately, according to Tom Lane, the data types are required. :-(\n> FWIW with the above example, I swiped it right out of PostgreSQL's \n> tests. The plan_name is \"q3\".\n\nMissed the q3. \n\nPREPARE plan_name [ (datatype [, ...] ) ] AS query\n\nI guess I read that as (datatype) being optional... I guess it is only \noptional if there are no $1 &c. 
in the query, then.\n\n\nLater,\n\n-r\n\n", "msg_date": "Mon, 18 Nov 2002 12:55:08 -0500 (EST)", "msg_from": "Rudy Lippan <rlippan@remotelinux.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 09:44 AM, Tim Bunce wrote:\n\n> Just to be sure this is clear, begin_work is *just* an alternative\n> way to set AutoCommit off till the next commit or rollback. The\n> application is free to just set AutoCommit explicitly as needed.\n>\n> The *only* time a driver needs to consider the BegunWork attribute\n> is immediately after a commit or rollback.\n\nOkay, thanks for the clarification, Tim.\n\n> Should need to \"parse\" in the formal sense, no full grammar is\n> needed, it's just a very quick skim through the string. If done in\n> C it should be too cheap to worry about. Especially as it's only\n> done at prepare() time, not execute().\n\nOkay. I'm used to thinking about things in Perl time, so I'm not quite \nsure about the expense of the c code stuff. Thanks for the tip.\n\n> Would binding a string type to an integer (etc) placeholder work?\n> If so, just ignore the types and bind everything as strings.\n> That's exactly what DBD::Oracle does.\n\nWell, it wouldn't work for bytea columns, but that's true with \nnon-prepared statements already, anyway. 
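To make the bind-everything-as-strings idea concrete: since the driver interpolates values into the statement text at execute() time anyway, it can simply quote every value as a string literal and let the server coerce it -- the examples earlier in this thread show PostgreSQL happily comparing a numeric column against '0'. A naive sketch follows (hypothetical Python for illustration; real code must also handle backslash escapes, bytea, and placeholders inside literals, which this deliberately ignores):

```python
def quote_literal(value):
    """Render a scalar as a quoted SQL string literal, doubling
    embedded single quotes. None (undef) becomes NULL."""
    if value is None:
        return "NULL"
    return "'" + str(value).replace("'", "''") + "'"

def interpolate(sql, params):
    """Replace each ? placeholder with the next quoted parameter.
    Naive: assumes no ? occurs inside a string literal."""
    params = iter(params)
    out = []
    for ch in sql:
        out.append(quote_literal(next(params)) if ch == "?" else ch)
    return "".join(out)
```

So `interpolate("SELECT * FROM t WHERE a = ? AND b = ?", [1, "x"])` sends `a = '1'`, and the server's type machinery coerces the quoted literal to the column's type, which is exactly why the driver can get away with not knowing the parameter types.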
Interesting idea, and I think \nyou may well be right.\n\nTom, can you confirm?\n\nThanks,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 10:03:54 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 09:55 AM, Rudy Lippan wrote:\n\n> I don't know what would be worse: Always having a transaction open \n> when I\n> am not in automcommit mode, and knowing about it, or having to worry\n> whether I am in transaction because of some stmt that I or some other\n> module issued (Including DBD::Pg).\n\nI think that we could probably prevent the driver's statements from \ninterfering with that, particularly if they're performed in dbdimp.c \nrather than in Pg.pm.\n\n> I look at it this way... make it simple and bullet-proof and then \n> optimize\n> if needed.\n\nRight.\n\n> Is that not what prepare_cached is for? One should only be preping\n> the statement once anyway, right?\n\nRight. I wasn't sure if that was already happening or not -- I haven't \ngot that far in the code yet. :-)\n\n> And the statement gets parsed twice by DBD::Pg. One time in prepare and\n> One time in execute (for the substitution of parameters).\n\nRight, although if we can add support for true prepared statements, we \ncould eliminate the second parsing.\n\n> Missed the q3.\n>\n> PREPARE plan_name [ (datatype [, ...] ) ] AS query\n>\n> I guess I read that as (datatype) being optional... I guess it is only\n> optional if there are no $1 &c. 
in the query, then.\n\nRight, unfortunately true -- for now, anyway.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 10:07:06 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 09:33 AM, Tim Bunce wrote:\n\n>> Okay, then that's what I'll do. Do I check it like this?\n>>\n>> if (DBIc_has(imp_dbh, DBIcf_BegunWork)) {...}\n>\n> Yeap.\n\nGreat, thanks. I think I'll set the minimum DBI requirement to 1.20 in \norder to properly support this feature. Any objections, DBIers?\n\n>> would count as an error, I think. DBD::Pg doesn't report it as an\n>> error, though -- it just returns false. Should I add an appropriate\n>> call to do_error() in such a case?\n>\n> Probably. It's fairly important that a method doesn't return an error\n> status without having recorded the error by at least doing\n> \tsv_setiv(DBIc_ERR(imp_xxh), ...)\n\nI'll add the call, then.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 10:09:07 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Monday, November 18, 2002, at 09:39 AM, Tom Lane wrote:\n\n> For a newly coded DBD driver, I think you could get away with setting a\n> baseline requirement of a 7.2 server. 
Maybe even 7.3, if you wanted to\n> be a hard case (and you aren't planning to release for a few months).\n\nI think it'll be a couple of months at least, yes.\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Mon, 18 Nov 2002 10:09:37 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL " }, { "msg_contents": "On Mon, 18 Nov 2002 09:17:05 -0800, David Wheeler wrote:\n> On Monday, November 18, 2002, at 09:12 AM, Tom Lane wrote:\n> > As of 7.3 that's not necessarily so anymore; you can \"SET autocommit TO\n> > off\" and get the behavior where any statement starts a transaction block\n> > (and so an explicit COMMIT is required to commit its effects). Not sure\n> > if this helps you or not.\n>\n> PostgreSQL gets better and better. Yay. However, although I might be able\n> to use compile-time macros to determine the PostgreSQL version, I have to\n> support a minimum version of PostgreSQL in the driver.\n\nYou'd also need a runtime check that the server you connected to was of a\nsufficiently high version. I realise it would make things more complicated,\nbut would it be possible to keep the manual transaction starting behaviour,\nas well as adding the 7.3 behaviour, and deciding which to do based on the\nserver version?\n\nAlso, I'd like booleans to be returned as 't'/'f', rather than 1/0, to match\nthe behaviour of other drivers. Or at least have a driver-specific flag to\ncontrol which values get used.\n\n-- \n\tPeter Haworth\tpmh@edison.ioppublishing.com\n\"To vacillate or not to vacillate, that is the question ... 
or is it?\"\n", "msg_date": "Tue, 19 Nov 2002 12:19:48 +0000", "msg_from": "\"Peter Haworth\" <pmh@edison.ioppublishing.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Tuesday, November 19, 2002, at 04:19 AM, Peter Haworth wrote:\n\n> You'd also need a runtime check that the server you connected to was \n> of a\n> sufficiently high version. I realise it would make things more \n> complicated,\n> but would it be possible to keep the manual transaction starting \n> behaviour,\n> as well as adding the 7.3 behaviour, and deciding which to do based on \n> the\n> server version?\n\nI think I'd rather do it at compile-time, depending on the PostgreSQL \nlibraries available. Folks who compile against 7.3 and then connect to \n7.2 get what they ask for, IMO.\n\n> Also, I'd like booleans to be returned as 't'/'f', rather than 1/0, to \n> match\n> the behaviour of other drivers. Or at least have a driver-specific \n> flag to\n> control which values get used.\n\nI might add this as an option later -- it'd probably be fairly easy -- \nbut right now I want to get core functionality nailed down, and since \nthe 1/0 behavior is most Perlish, I'll stick to that as the default.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Tue, 19 Nov 2002 15:13:49 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Thinking about this, it occurs to me that there's no good reason why\n> we couldn't allow parameter symbols ($n) to be considered type UNKNOWN\n> initially.\n\nGood idea.\n\n> This form of PREPARE would presumably need some way of reporting back\n> the types it had determined for the symbols; anyone have a feeling for\n> the appropriate API for that?\n\nWhy would this be needed? 
Couldn't we rely on the client programmer to\nknow that '$n is of type foo', and then pass the appropriately-typed\ndata to EXECUTE?\n\nIf we *do* need an API for this, ISTM that by adding protocol-level\nsupport for PREPARE/EXECUTE, this shouldn't be too difficult to do\n(and analogous to the way we pass back type information for SELECT\nresults). It would also allow us to side-step the parser for EXECUTE\nparameters, which was something that a few people had requested\nearlier.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "19 Nov 2002 23:42:57 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL)" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> This form of PREPARE would presumably need some way of reporting back\n>> the types it had determined for the symbols; anyone have a feeling for\n>> the appropriate API for that?\n\n> Why would this be needed? Couldn't we rely on the client programmer to\n> know that '$n is of type foo', and then pass the appropriately-typed\n> data to EXECUTE?\n\nI don't think so. You may as well ask why we waste bandwidth on passing\nback info about the column names and types of a SELECT result ---\nshouldn't the client know that already? There are lots of middleware\nlayers that don't know it, or at least don't want to expend a lot of\ncode on trying to deduce it.\n\n> If we *do* need an API for this, ISTM that by adding protocol-level\n> support for PREPARE/EXECUTE, this shouldn't be too difficult to do\n> (and analogous to the way we pass back type information for SELECT\n> results).\n\nI'm not sure what you mean by protocol-level support, but the one idea\nI had was to return a table (equivalent to a SELECT result) listing\nthe parameter types. 
This would not break libpq, for example, so\narguably it's not a protocol change. But if you think the recent\nchanges in how EXPLAIN returns its results are a protocol change, then\nyeah it's a protocol change ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 00:24:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL) " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Neil Conway <neilc@samurai.com> writes:\n> > Why would this be needed? Couldn't we rely on the client programmer to\n> > know that '$n is of type foo', and then pass the appropriately-typed\n> > data to EXECUTE?\n> \n> I don't think so. You may as well ask why we waste bandwidth on passing\n> back info about the column names and types of a SELECT result ---\n> shouldn't the client know that already? There are lots of middleware\n> layers that don't know it, or at least don't want to expend a lot of\n> code on trying to deduce it.\n\nFair enough -- although there's a major difference between the\nmeta-data stored about tables (which are permanent database objects\nand are typically complex), and prepared statements (which (at\npresent) are only stored for the duration of the current connection,\nand are relatively simple: many statements will not have more than a\ncouple params). Arguably, the difference is enough to make it\nnonessential that we provide client programmers with that information.\n\n> > If we *do* need an API for this, ISTM that by adding protocol-level\n> > support for PREPARE/EXECUTE, this shouldn't be too difficult to do\n> > (and analogous to the way we pass back type information for SELECT\n> > results).\n> \n> I'm not sure what you mean by protocol-level support\n\nI was thinking something along the lines of making prepared statements\nactually part of the protocol itself -- i.e. 
have a client message for\n'PrepareStatement', a server message for 'StatementDescriptor', a\nclient message for 'ExecuteStatement', a server message for\n'ExecuteResults', and so on. The message that returns the statement\ndescriptor would provide the necessary type info, which the client's\nlanguage interface can make available to them in a convenient\nfashion. As I mentioned, this would also allow EXECUTE parameters to\nbypass the parser, which a couple people have remarked is slow when\ninputting megabytes of data in a query string.\n\n> the one idea I had was to return a table (equivalent to a SELECT\n> result) listing the parameter types. This would not break libpq,\n> for example, so arguably it's not a protocol change.\n\nHmmm... that would work, although it strikes me as being a bit messy\nto use tables to return data intended solely for machine use. As far\nas changing the protocol, I think there's justification for doing that\nin 7.4 anyway -- so ISTM that either solution would work pretty\nwell.\n\nIf anyone would prefer one or the other of these APIs, please speak\nup...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "20 Nov 2002 01:12:40 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL)" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Fair enough -- although there's a major difference between the\n> meta-data stored about tables (which are permanent database objects\n> and are typically complex), and prepared statements (which (at\n> present) are only stored for the duration of the current connection,\n> and are relatively simple: many statements will not have more than a\n> couple params). 
Arguably, the difference is enough to make it\n> nonessential that we provide client programmers with that information.\n\nI forgot to point out this: if the client programmer could conveniently\nprovide that info, we'd not be having this discussion, because he could\njust as well include the datatypes in the PREPARE command to meet our\nexisting syntax. The fact that we are getting complaints about the\nsyntax is sufficient evidence that it's not always reasonable to expect\nclient-side code to know the datatypes. (I think this comes mainly from\nthe fact that client-side code is not monolithic but tends to consist\nof multiple layers. Some of those layers may be expected to know\na-priori what datatypes a query involves, but others will not know.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 01:27:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PREPARE and parameter types (Re: [INTERFACES] DBD::PostgreSQL) " }, { "msg_contents": "On Tue, 19 Nov 2002 15:13:49 -0800, David Wheeler wrote:\n> On Tuesday, November 19, 2002, at 04:19 AM, Peter Haworth wrote:\n>\n> > You'd also need a runtime check that the server you connected to was of\n> > a sufficiently high version. I realise it would make things more\n> > complicated, but would it be possible to keep the manual transaction\n> > starting behaviour, as well as adding the 7.3 behaviour, and deciding\n> > which to do based on the server version?\n>\n> I think I'd rather do it at compile-time, depending on the PosgreSQL\n> libraries available. Folks who compile against 7.3 and then connect to\n> 7.2 get what they ask for, IMO.\n\nFair enough, but at least check the server version on connection, and bail\nif it's not high enough.\n\n-- \n\tPeter Haworth\tpmh@edison.ioppublishing.com\n\"I couldn't even find anything to read. 
The hotel shop \n only had two decent books, and I'd written both of them.\"\n\t\t-- Douglas Adams, the Salmon of Doubt\n", "msg_date": "Wed, 20 Nov 2002 15:56:46 +0000", "msg_from": "\"Peter Haworth\" <pmh@edison.ioppublishing.com>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Wednesday, November 20, 2002, at 07:56 AM, Peter Haworth wrote:\n\n> Fair enough, but at least check the server version on connection, and \n> bail\n> if it's not high enough.\n\nWhat's the easiest way to get the version on connection?\n\nThanks,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Wed, 20 Nov 2002 19:02:51 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Thu, 2002-11-21 at 03:02, David Wheeler wrote:\n> On Wednesday, November 20, 2002, at 07:56 AM, Peter Haworth wrote:\n> \n> > Fair enough, but at least check the server version on connection, and \n> > bail\n> > if it's not high enough.\n> \n> What's the easiest way to get the version on connection?\n\nSELECT version();\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"A new commandment I give unto you; That ye love one \n another. As I have loved you, so ye also must love one\n another. 
By this shall all men know that ye are my \n disciples, if ye have love one to another.\" \n John 13:34,35 \n\n", "msg_date": "21 Nov 2002 07:50:03 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Tue, Nov 19, 2002 at 03:13:49PM -0800, David Wheeler wrote:\n> On Tuesday, November 19, 2002, at 04:19 AM, Peter Haworth wrote:\n> \n> >as well as adding the 7.3 behavior, and deciding which to do based on \n> >the\n> >server version?\n> \n> I think I'd rather do it at compile-time, depending on the PosgreSQL \n> libraries available. Folks who compile against 7.3 and then connect to \n> 7.2 get what they ask for, IMO.\n\nAck! I hope this isn't true. Think about it: My development machine has\nthe latest and greatest PostgreSQL installed (along with\nPerl/DBI/dbish), I'm testing the difference between PostgreSQL 7.2 and\n7.3 in my application. Time to connect with the production server to\nresearch a difference (the 7.2 base). \n\nDo I need to maintain two copies of DBD::Postgres (or DBD::Pg ... not\nsure of the new name) one compiled for 7.2 and one for 7.3?\n\nI understand the pain of supporting different functions for different\nversions.\n\nTom\n\n-- \nThomas A. Lowery\nSee DBI/FAQ http://xmlproj.dyndns.org/cgi-bin/fom\n", "msg_date": "Thu, 21 Nov 2002 08:15:40 -0500", "msg_from": "\"Thomas A. Lowery\" <tl-lists@stlowery.net>", "msg_from_op": false, "msg_subject": "Re: DBD::PostgreSQL compile time /run time version" }, { "msg_contents": "On Wednesday, November 20, 2002, at 11:50 PM, Oliver Elphick wrote:\n\n> SELECT version();\n\nRight, I've used that before. Too bad it doesn't just return the \nversion number. 
But I can parse it out.\n\nThanks,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Thu, 21 Nov 2002 08:13:52 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL" }, { "msg_contents": "On Thursday, November 21, 2002, at 05:15 AM, Thomas A. Lowery wrote:\n\n> Do I need to maintain two copies of DBD::Postgres (or DBD::Pg ... not\n> sure of the new name) one compiled for 7.2 and one for 7.3?\n>\n> I understand the pain of supporting different functions for different\n> versions.\n\nThis would be the best reason to change the name of the driver. But I'm \nstill on the fence wrt all this, and it's likely to be months before I \nhave something workable and it's time to decide how to handle the issue \nof compatibility.\n\nRegards,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Thu, 21 Nov 2002 15:38:43 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: DBD::PostgreSQL compile time /run time version" } ]
[ { "msg_contents": "Richard Pais <chris_pais@yahoo.com> wrote:\n\n>\n>Just an explanation in the FAQ that the ipc-daemon is not running won't suffice. Because in my case I had ipc-daemon (version 1.11) running and it still hung (Jason's patch reported the IpcMemoryCreate error). Only when I downgraded to version 1.09 (office) and upgraded to 1.13 (home) did initdb succeed. So I'd suggest also covering this scenario.\n>Thanks,\n>Richard\n[...]\n\nIt will. But should be augmented with REAL tests about ipc-daemon's working (cygwin's ps is not the case):\n - either run cygipc test suite\n - or check event viewer for cygipc entries (or \\CYGWIN_SYSLOG.TXT and/or console in w9x case)\n\nThe fact that 1.09 worked in your case, could happen because 1.09 is (maybe) more relaxed about the \"/tmp\" file permissions than others.\n\n\nSLao\n\n\n__________________________________________________________________\nThe NEW Netscape 7.0 browser is now available. Upgrade now! http://channels.netscape.com/ns/browsers/download.jsp \n\nGet your own FREE, personal Netscape Mail account today at http://webmail.netscape.com/\n", "msg_date": "Mon, 18 Nov 2002 02:28:04 -0500", "msg_from": "s0lao@netscape.net (S. L.)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ipc-daemon" } ]
[ { "msg_contents": "\n> I've gotten really tired of explaining to newbies why stuff involving\n> char(n) fields doesn't work like they expect. Our current behavior is\n> not valid per SQL92 anyway, I believe.\n> \n> I think there is a pretty simple solution now that we have pg_cast:\n> we could stop treating char(n) as binary-equivalent to varchar/text,\n> and instead define it as requiring a runtime conversion (which would\n> be essentially the rtrim() function). The cast in the other direction\n> would be assignment-only, so that any expression that involves mixed\n> char(n) and varchar/text operations would be evaluated in varchar\n> rules after stripping char's insignificant trailing blanks.\n> \n> If we did this, then operations like\n> \t\tWHERE UPPER(charcolumn) = 'FOO'\n> would work as a newbie expects. I believe that we'd come a lot closer\n> to spec compliance on the behavior of char(n), too.\n\nI am all for it. That would much more closely match what I would expect.\n\nOne alternate possible approach would maybe be to change the on-disk\nrepresentation to really be binary compatible and change the input \noutput and operator functions ? IIRC fixed width optimizations do not gain as \nmuch as in earlier versions anyway. Then char(n) would have the benefit of \nbeing trailing blank insensitive and having the optimal storage format.\n\nAndreas\n", "msg_date": "Mon, 18 Nov 2002 11:42:07 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: char(n) to varchar or text conversion should strip trailing\n spaces" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> One alternate possible approach would maybe be to change the on-disk\n> representation to really be binary compatible and change the input \n> output and operator functions ?\n\nHmm ... now that's an interesting thought. 
So the input converter would\nactively strip trailing blanks, output would add them back, and only in\na few char(n)-specific functions would we need to pretend they are there.\n\nThis would mean that char(n)-to-text is binary compatible but the\nreverse direction is not (it must strip trailing blanks) --- but that\ncoercion could be folded into the existing length-limit checker for\nchar(n) columns.\n\n> IIRC fixed width optimizations do not gain as \n> much as in earlier versions anyway.\n\nThere are no fixed-width optimizations for char(n) at all, now that we\nhave multibyte enabled all the time.\n\nSeems like a great idea to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 09:35:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: char(n) to varchar or text conversion should strip trailing\n\tspaces" }, { "msg_contents": "Tom Lane writes:\n\n> Hmm ... now that's an interesting thought. So the input converter would\n> actively strip trailing blanks, output would add them back,\n\nBut how would the output know how many to put back?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 18 Nov 2002 19:14:48 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: char(n) to varchar or text conversion should strip" }, { "msg_contents": "I said:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> One alternate possible approach would maybe be to change the on-disk\n>> representation to really be binary compatible and change the input \n>> output and operator functions ?\n\n> Seems like a great idea to me.\n\nOn further thought I've got some reservations about it. 
The main\nproblem is that the physical contents of a char(n) Datum wouldn't\nprovide the complete semantic meaning: you must know the typmod as well\nto know the number of padding spaces that are supposed to be there.\nWe have special provisions to allow input and output functions to know\nthe typmod, but in more general cases functions will not receive a\ntypmod, and in any case they cannot deliver a typmod. So for any\noperation that wanted to behave as though the padding spaces are real,\nthere'd be a problem.\n\nWe could maybe play some games with having char(n) values be expanded\n(space-padded) during expression evaluation and then trimmed for\nstorage, but this strikes me as awfully messy, not to mention redundant\nwith the compression done by TOAST.\n\n\nAlso: I've been reading through the spec in more detail, and what I now\nrealize is that they expect trailing blanks to be ignored by default in\nboth char(n) and varchar(n) comparisons! In fact, it's not really the\ndatatype that determines this, but the collation attribute, and they\nspecify that the default collation must have the PAD SPACE attribute\n(which essentially means that trailing spaces are not significant).\nWhat's more, AFAICT padding spaces in char(n) are treated as real data\nby every operation except comparison --- for example, they are real\ndata in concatenation.\n\nI don't think we really want to meet the letter of the spec here :-(\nIt seems quite schizoid to treat pad spaces as real data for everything\nexcept comparison. 
Certainly I do not want to do that for varchar or\ntext datatypes.\n\nThe idea of having char(n)-to-text conversion strip trailing blanks\nstill appeals to me, but I have to withdraw the claim that it'd improve\nour spec compliance; it wouldn't.\n\n\nI'm now wondering whether it wouldn't be better to leave the data\nrepresentation as-is (padding spaces are stored), and still allow binary\ncompatibility both ways, but add a collection of duplicate pg_proc and\npg_operator entries so that char-ness is preserved where appropriate.\nFor example, we'd need both these pg_proc entries for UPPER():\n\tupper(text) returns text\n\tupper(character) returns character\nThey could point at the same C routine, but the parser would select the\nfirst when the input is text or varchar, and the second when the input\nis character. This would solve the original complaint about\n\tupper('foo '::char(6)) = 'FOO'\nneeding to yield TRUE. The extra pg_proc entries would be a tad\ntedious, but I think we'd only need a couple dozen to satisfy the spec's\nrequirements. 
Some (perhaps not all) of these functions would need to\nbe duplicated:\n\nbtrim(text)\nbtrim(text,text)\nconvert(text,name)\nconvert(text,name,name)\nconvert_using(text,text)\ninitcap(text)\nlower(text)\nlpad(text,integer)\nlpad(text,integer,text)\nltrim(text)\nltrim(text,text)\nmax(text)\nmin(text)\noverlay(text,text,integer)\noverlay(text,text,integer,integer)\nrepeat(text,integer)\nreplace(text,text,text)\nrpad(text,integer)\nrpad(text,integer,text)\nrtrim(text)\nrtrim(text,text)\nsplit_part(text,text,integer)\nsubstr(text,integer)\nsubstr(text,integer,integer)\nsubstring(text,integer)\nsubstring(text,integer,integer)\ntext_larger(text,text)\ntext_smaller(text,text)\ntextcat(text,text)\ntranslate(text,text,text)\nupper(text)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 13:42:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: char(n) to varchar or text conversion should strip trailing\n\tspaces" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Hmm ... now that's an interesting thought. So the input converter would\n>> actively strip trailing blanks, output would add them back,\n\n> But how would the output know how many to put back?\n\nThe output routine would need access to the column typmod. Which it\nwould have, in simple \"SELECT columnname\" cases, but this is a serious\nweakness of the scheme in general. See my followup post of a few\nminutes ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 13:54:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: char(n) to varchar or text conversion should strip " } ]
[ { "msg_contents": "Hi,\n\nI recently installed pg 7.2.3 on my linux box and discovered that\nthere are some problems with datatype serial and sequence.\n\n1.) If you create a table with a datatype serial, the corresponding\nsequence will be created, but if you drop the table the sequence is\nnot dropped.\n\n2.) If you create a sequence and grant it to public one can't use\ncurrval() until one used setval() or nextval().\n\"ERROR: midnr.currval is not yet defined in this session\"\n\n3.) Sometimes one gets 'not enough privileges' error when using\nnextval()\n\n\nAm I the first one who discovered that?\n\nThanks\nThomas\n", "msg_date": "18 Nov 2002 07:45:19 -0800", "msg_from": "taich@gmx.at (Thomas Aichinger)", "msg_from_op": true, "msg_subject": "Bug with sequence" }, { "msg_contents": "In article <92c0776e.0211180745.49911131@posting.google.com>, \nThomas Aichinger wrote:\n> Hi,\n> \n> I recently installed pg 7.2.3 on my linux box and discovered that\n> there are some problems with datatype serial and sequence.\n> \n> 1.) If you create a table with a datatype serial, the corresponding\n> sequence will be created, but if you drop the table the sequence is\n> not dropped.\n> \n\nI am pretty sure this has been fixed in version 7.3 (due to be\nreleased real soon now.)\n\n\n> 2.) If you create a sequence and grant it to public one can't use\n> currval() until one used setval() or nextval().\n> \"ERROR: midnr.currval is not yet defined in this session\"\n> \n\nThis is actually the way sequences work (ie, it's a feature, not\na bug, heh heh)\n\ncurrval() gives you the current value of the sequence\n*as seen in this session* ... so until you do a setval()\nor a nextval() there is no value seen by this session.\n\n\n> 3.) Sometimes one gets 'not enough privileges' error when using\n> nextval()\n> \n\nnextval()? hmm. well. 
If the user does not have privilege to\nupdate the sequence, then the call will fail.\n\n> \n> Am I the first one who discovered that?\n> \n\nNah, and probably not the first one to avoid the FAQ either.\nheh heh.\n\n", "msg_date": "Tue, 19 Nov 2002 01:22:24 +0000 (UTC)", "msg_from": "Lee Harr <missive@frontiernet.net>", "msg_from_op": false, "msg_subject": "Re: Bug with sequence" }, { "msg_contents": "On Mon, 2002-11-18 at 15:45, Thomas Aichinger wrote:\n> Hi,\n> \n> I recently installed pg 7.2.3 on my linux box and discovered that\n> there are some problems with datatype serial and sequence.\n> \n> 1.) If you create a table with a datatype serial, the corresponding\n> sequence will be created, but if you drop the table the sequence is\n> not dropped.\n\nThis is fixed in 7.3\n\n> 2.) If you create a sequence and grant it to public one can't use\n> currval() until one used setval() or nextval().\n> \"ERROR: midnr.currval is not yet defined in this session\"\n\nThis is how it is intended to work. Read the manual...\nAnd it is nothing to do with its being granted to public; it is\nfundamental to what currval() does, which is to provide the last value\ngiven by nextval() *in*the*current*session*. If there has been no use\nof nextval(), currval() cannot report anything.\n\nIf you want the last value given by anyone on the sequence, you could\nuse \"select last_value from <sequence_name>\", but that would not give\nyou anything done by uncompleted transactions. That's why currval()\nexists.\n\n> 3.) 
Sometimes one gets 'not enogh privileges' error when using\n> nexval()\n\nWhen the sequence is created, you need to grant access rights on it to\nother users who will need it.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If my people, which are called by my name, shall \n humble themselves, and pray, and seek my face, and \n turn from their wicked ways; then will I hear from \n heaven, and will forgive their sin, and will heal \n their land.\" II Chronicles 7:14 \n\n", "msg_date": "20 Nov 2002 08:53:11 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Bug with sequence" }, { "msg_contents": "On Wed, 2002-11-20 at 21:35, Robert Treat wrote:\n> On Wed, 2002-11-20 at 03:53, Oliver Elphick wrote:\n> > On Mon, 2002-11-18 at 15:45, Thomas Aichinger wrote:\n> > > Hi,\n> > > \n> > > I recently installed pg 7.2.3 on my linux box and discovered that\n> > > there are some problems with datatype serial and sequence.\n> > > \n> > > 1.) If you create a table with a datatype serial, the corrsponding\n> > > sequence will be created, but if you drop the table the sequence is\n> > > not dropped.\n> > \n> > This is fixed in 7.3\n> > \n> \n> out of curiosity, do you know the logic that implements this fix? I have\n> a couple of tables that use the same sequence; I'm wondering if dropping\n> one of the tables removes the sequence or if I have to drop all tables\n> before the sequence is removed\n\nI just tried it.\n\nI created a sequence using SERIAL when I created a table. I used the\nsame sequence for another table by setting a column default to\nnextval(sequence).\n\nI deleted the first table. 
The sequence was deleted too, leaving the\ndefault of the second table referring to a non-existent sequence.\n\n\nCould this be a TODO item in 7.4, to add a dependency check when a\nsequence is set as the default without being created at the same time?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If my people, which are called by my name, shall \n humble themselves, and pray, and seek my face, and \n turn from their wicked ways; then will I hear from \n heaven, and will forgive their sin, and will heal \n their land.\" II Chronicles 7:14 \n\n", "msg_date": "20 Nov 2002 22:12:58 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "This requires changing the nextval() function to be an attribute of the\nsequence.\n\nie. sequence.nextval and sequence.currval to deal with the sequence.\n\n\nIt should also be on the todo list.\n\nOn Wed, 2002-11-20 at 17:12, Oliver Elphick wrote:\n> On Wed, 2002-11-20 at 21:35, Robert Treat wrote:\n> > On Wed, 2002-11-20 at 03:53, Oliver Elphick wrote:\n> > > On Mon, 2002-11-18 at 15:45, Thomas Aichinger wrote:\n> > > > Hi,\n> > > > \n> > > > I recently installed pg 7.2.3 on my linux box and discovered that\n> > > > there are some problems with datatype serial and sequence.\n> > > > \n> > > > 1.) If you create a table with a datatype serial, the corrsponding\n> > > > sequence will be created, but if you drop the table the sequence is\n> > > > not dropped.\n> > > \n> > > This is fixed in 7.3\n> > > \n> > \n> > out of curiosity, do you know the logic that implements this fix? 
I have\n> > a couple of tables that use the same sequence; I'm wondering if dropping\n> > one of the tables removes the sequence or if I have to drop all tables\n> > before the sequence is removed\n> \n> I just tried it.\n> \n> I created a sequence using SERIAL when I created a table. I used the\n> same sequence for another table by setting a column default to\n> nextval(sequence).\n> \n> I deleted the first table. The sequence was deleted too, leaving the\n> default of the second table referring to a non-existent sequence.\n> \n> \n> Could this be a TODO item in 7.4, to add a dependency check when a\n> sequence is set as the default without being created at the same time?\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "20 Nov 2002 18:20:38 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> On Wed, 2002-11-20 at 21:35, Robert Treat wrote:\n> > On Wed, 2002-11-20 at 03:53, Oliver Elphick wrote:\n> > > On Mon, 2002-11-18 at 15:45, Thomas Aichinger wrote:\n> > > > Hi,\n> > > >\n> > > > I recently installed pg 7.2.3 on my linux box and discovered that\n> > > > there are some problems with datatype serial and sequence.\n> > > >\n> > > > 1.) If you create a table with a datatype serial, the corresponding\n> > > > sequence will be created, but if you drop the table the sequence is\n> > > > not dropped.\n> > >\n> > > This is fixed in 7.3\n> > >\n> >\n> > out of curiosity, do you know the logic that implements this fix? I have\n> > a couple of tables that use the same sequence; I'm wondering if dropping\n> > one of the tables removes the sequence or if I have to drop all tables\n> > before the sequence is removed\n> \n> I just tried it.\n> \n> I created a sequence using SERIAL when I created a table. I used the\n> same sequence for another table by setting a column default to\n> nextval(sequence).\n> \n> I deleted the first table. 
The sequence was deleted too, leaving the\n> default of the second table referring to a non-existent sequence.\n\nThis sounds like a serious bug in our behaviour, and not something we'd\nlike to release.\n\nSpecifically in relation to people's existing scripts, and also to\npeople who are doing dump/restore of specific tables (it'll kill the\nsequences that other tables depend on too!)\n\nNo real issue with the nicety for newbies, but am very concerned\nabout the lack of a dependency check here.\n\n:-/\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Could this be a TODO item in 7.4, to add a dependency check when a\n> sequence is set as the default without being created at the same time?\n> \n> --\n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight, UK\n> http://www.lfix.co.uk/oliver\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"If my people, which are called by my name, shall\n> humble themselves, and pray, and seek my face, and\n> turn from their wicked ways; then will I hear from\n> heaven, and will forgive their sin, and will heal\n> their land.\" II Chronicles 7:14\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 21 Nov 2002 13:40:41 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> This sounds like a serious bug in our behaviour, and not something\n> we'd like to release.\n\nIt's not ideal, I agree, but I *definitely* don't think this is\ngrounds for changing the release schedule.\n\n> No real issue with the nicety for newbies, but am very concerned\n> about the lack of a dependency check here.\n\nWell, how would you suggest we fix this? 
ISTM this is partially a\n> result of the fact that we don't produce dependency information for\n> function bodies. While it might be possible to do so (in 7.4) for\n> certain types of functions (e.g. for functions defined in SQL,\n> PL/PgSQL, etc.), I can't see a general solution (e.g. for functions\n> defined in C).\n\nAbsolutely *no* idea.\n\n \n> And adding random hacks to get specific functions (e.g. nextval()) to\n> work does not strike me as a very good idea.\n\nAgreed. Random hacks aren't always a good approach.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Cheers,\n> \n> Neil\n> \n> --\n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 21 Nov 2002 14:46:54 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Oliver Elphick wrote:\n>> I created a sequence using SERIAL when I created a table. I used the\n>> same sequence for another table by setting a column default to\n>> nextval(sequence).\n>> \n>> I deleted the first table. The sequence was deleted too, leaving the\n>> default of the second table referring to a non-existent sequence.\n\n> This sounds like a serious bug in our behaviour, and not something we'd\n> like to release.\n\nWe will be releasing it whether we like it or not, because\nnextval('foo') doesn't expose any visible dependency on sequence foo.\n\n(If you think it should, how about nextval('fo' || 'o')? 
If you think\nthat's improbable, consider nextval('table' || '_' || 'col' || '_seq').)\n\nThe long-term answer is to do what Rod alluded to: support the\nOracle-style syntax foo.nextval, so that the sequence reference is\nhonestly part of the parsetree and not buried inside a string\nexpression.\n\nIn the meantime, I consider that Oliver was misusing the SERIAL\nfeature. If you want multiple tables fed by the same sequence object,\nyou should create the sequence as a separate object and then create\nthe tables using explicit \"DEFAULT nextval('foo')\" clauses. Doing what\nhe did amounts to sticking his fingers under the hood of the SERIAL\nimplementation; if he gets his fingers burnt, it's his problem.\n\n> Specifically in relation to people's existing scripts, and also to\n> people who are doing dump/restore of specific tables (it'll kill the\n> sequences that other tables depend on too!)\n\n7.3 breaks no existing schemas, because older schemas will be dumped\nas separate CREATE SEQUENCE and CREATE TABLE ... DEFAULT nextval()\ncommands.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 23:11:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence " }, { "msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > Oliver Elphick wrote:\n> >> I created a sequence using SERIAL when I created a table. I used the\n> >> same sequence for another table by setting a column default to\n> >> nextval(sequence).\n> >>\n> >> I deleted the first table. 
The sequence was deleted too, leaving the\n> >> default of the second table referring to a non-existent sequence.\n> \n> > This sounds like a serious bug in our behaviour, and not something we'd\n> > like to release.\n> \n> We will be releasing it whether we like it or not, because\n> nextval('foo') doesn't expose any visible dependency on sequence foo.\n\nAwww rats.\n\n<snip>\n> 7.3 breaks no existing schemas, because older schemas will be dumped\n> as separate CREATE SEQUENCE and CREATE TABLE ... DEFAULT nextval()\n> commands.\n\nOk.\n\nThanks Tom. :)\n\nRegards and best wishes,\n\nJustin Clift\n\n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 21 Nov 2002 16:03:18 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "It seems worth pointing out, too, that some SQL purists propose not \nrelying on product-specific methods of auto-incrementing.\n\nI.e., it is possible to do something like:\n\ninsert into foo( col, ... )\nvalues( coalesce( ( select max( col ) from foo ), 0 ) + 1, ... );\n\nand this is easily placed in a trigger.\n\n-tfo\n\nIn article <7017.1037851915@sss.pgh.pa.us>,\n tgl@sss.pgh.pa.us (Tom Lane) wrote:\n\n> Justin Clift <justin@postgresql.org> writes:\n> > Oliver Elphick wrote:\n> >> I created a sequence using SERIAL when I created a table. I used the\n> >> same sequence for another table by setting a column default to\n> >> nextval(sequence).\n> >> \n> >> I deleted the first table. 
The sequence was deleted too, leaving the\n> >> default of the second table referring to a non-existent sequence.\n> \n> > This sounds like a serious bug in our behaviour, and not something we'd\n> > like to release.\n> \n> We will be releasing it whether we like it or not, because\n> nextval('foo') doesn't expose any visible dependency on sequence foo.\n> \n> (If you think it should, how about nextval('fo' || 'o')? If you think\n> that's improbable, consider nextval('table' || '_' || 'col' || '_seq').)\n> \n> The long-term answer is to do what Rod alluded to: support the\n> Oracle-style syntax foo.nextval, so that the sequence reference is\n> honestly part of the parsetree and not buried inside a string\n> expression.\n> \n> In the meantime, I consider that Oliver was misusing the SERIAL\n> feature. If you want multiple tables fed by the same sequence object,\n> you should create the sequence as a separate object and then create\n> the tables using explicit \"DEFAULT nextval('foo')\" clauses. Doing what\n> he did amounts to sticking his fingers under the hood of the SERIAL\n> implementation; if he gets his fingers burnt, it's his problem.\n> \n> > Specifically in relation to people's existing scripts, and also to\n> > people who are doing dump/restore of specific tables (it'll kill the\n> > sequences that other tables depend on too!)\n> \n> 7.3 breaks no existing schemas, because older schemas will be dumped\n> as separate CREATE SEQUENCE and CREATE TABLE ... DEFAULT nextval()\n> commands.\n> \n> regards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 12:53:46 -0600", "msg_from": "Thomas O'Connell <tfo@monsterlabs.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "\nOf course, those would be SQL purists who _don't_ understand\nconcurrency issues. 
;-)\n\n---------------------------------------------------------------------------\n\nThomas O'Connell wrote:\n> It seems worth pointing out, too, that some SQL purists propose not \n> relying on product-specific methods of auto-incrementing.\n> \n> I.e., it is possible to do something like:\n> \n> insert into foo( col, ... )\n> values( coalesce( ( select max( col ) from foo ), 0 ) + 1, ... );\n> \n> and this is easily placed in a trigger.\n> \n> -tfo\n> \n> In article <7017.1037851915@sss.pgh.pa.us>,\n> tgl@sss.pgh.pa.us (Tom Lane) wrote:\n> \n> > Justin Clift <justin@postgresql.org> writes:\n> > > Oliver Elphick wrote:\n> > >> I created a sequence using SERIAL when I created a table. I used the\n> > >> same sequence for another table by setting a column default to\n> > >> nextval(sequence).\n> > >> \n> > >> I deleted the first table. The sequence was deleted too, leaving the\n> > >> default of the second table referring to a non-existent sequence.\n> > \n> > > This sounds like a serious bug in our behaviour, and not something we'd\n> > > like to release.\n> > \n> > We will be releasing it whether we like it or not, because\n> > nextval('foo') doesn't expose any visible dependency on sequence foo.\n> > \n> > (If you think it should, how about nextval('fo' || 'o')? If you think\n> > that's improbable, consider nextval('table' || '_' || 'col' || '_seq').)\n> > \n> > The long-term answer is to do what Rod alluded to: support the\n> > Oracle-style syntax foo.nextval, so that the sequence reference is\n> > honestly part of the parsetree and not buried inside a string\n> > expression.\n> > \n> > In the meantime, I consider that Oliver was misusing the SERIAL\n> > feature. If you want multiple tables fed by the same sequence object,\n> > you should create the sequence as a separate object and then create\n> > the tables using explicit \"DEFAULT nextval('foo')\" clauses. 
Doing what\n> > he did amounts to sticking his fingers under the hood of the SERIAL\n> > implementation; if he gets his fingers burnt, it's his problem.\n> > \n> > > Specifically in relation to people's existing scripts, and also to\n> > > people who are doing dump/restore of specific tables (it'll kill the\n> > > sequences that other tables depend on too!)\n> > \n> > 7.3 breaks no existing schemas, because older schemas will be dumped\n> > as separate CREATE SEQUENCE and CREATE TABLE ... DEFAULT nextval()\n> > commands.\n> > \n> > regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 21 Nov 2002 14:11:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence" }, { "msg_contents": "Oliver Elphick wrote:\n> I deleted the first table. The sequence was deleted too, leaving the\n> default of the second table referring to a non-existent sequence.\n> \n> \n> Could this be a TODO item in 7.4, to add a dependency check when a\n> sequence is set as the default without being created at the same time?\n\nAdded to TODO:\n\n* Have sequence dependency track use of DEFAULT sequences, seqname.nextval \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 21 Nov 2002 14:14:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" }, { "msg_contents": "On Thu, 2002-11-21 at 14:11, Bruce Momjian wrote:\n> Of course, those would be SQL purists who _don't_ understand\n> concurrency issues. ;-)\n\nOr they're the kind that locks the entire table for any given insert.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "21 Nov 2002 14:22:46 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence" }, { "msg_contents": "\"Thomas O'Connell\" <tfo@monsterlabs.com> writes:\n> It seems worth pointing out, too, that some SQL purists propose not \n> relying on product-specific methods of auto-incrementing.\n> I.e., it is possible to do something like:\n> insert into foo( col, ... )\n> values( coalesce( ( select max( col ) from foo ), 0 ) + 1, ... );\n> and this is easily placed in a trigger.\n\n... but that approach is entirely unworkable if you want any concurrency\nof insert operations. (Triggers are a tad product-specific, too :-()\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 14:30:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence " }, { "msg_contents": "On 21 Nov 2002, Rod Taylor wrote:\n\n> On Thu, 2002-11-21 at 14:11, Bruce Momjian wrote:\n> > Of course, those would be SQL purists who _don't_ understand\n> > concurrency issues. ;-)\n> \n> Or they're the kind that locks the entire table for any given insert.\n\nIsn't that what Bruce just said? 
;^)\n\n", "msg_date": "Thu, 21 Nov 2002 13:09:50 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence" }, { "msg_contents": "On Thu, 2002-11-21 at 15:09, scott.marlowe wrote:\n> On 21 Nov 2002, Rod Taylor wrote:\n> \n> > On Thu, 2002-11-21 at 14:11, Bruce Momjian wrote:\n> > > Of course, those would be SQL purists who _don't_ understand\n> > > concurrency issues. ;-)\n> > \n> > Or they're the kind that locks the entire table for any given insert.\n> \n> Isn't that what Bruce just said? ;^)\n\nI suppose so. I took what Bruce said to be that multiple users could\nget the same ID.\n\nI keep having developers want to make their own table for a sequence,\nthen use id = id + 1 -- so they hold a lock on it for the duration of\nthe transaction.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "21 Nov 2002 15:23:50 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence" }, { "msg_contents": "On 21 Nov 2002, Rod Taylor wrote:\n\n> On Thu, 2002-11-21 at 15:09, scott.marlowe wrote:\n> > On 21 Nov 2002, Rod Taylor wrote:\n> > \n> > > On Thu, 2002-11-21 at 14:11, Bruce Momjian wrote:\n> > > > Of course, those would be SQL purists who _don't_ understand\n> > > > concurrency issues. ;-)\n> > > \n> > > Or they're the kind that locks the entire table for any given insert.\n> > \n> > Isn't that what Bruce just said? ;^)\n> \n> I suppose so. I took what Bruce said to be that multiple users could\n> get the same ID.\n> \n> I keep having developers want to make their own table for a sequence,\n> then use id = id + 1 -- so they hold a lock on it for the duration of\n> the transaction.\n\nI was just funnin' with ya, but the point behind it was that either way \n(with or without a lock) that using something other than a sequence is \nprobably a bad idea. 
Either way, under parallel load, you have data \nconsistency issues, or you have poor performance issues.\n\n", "msg_date": "Thu, 21 Nov 2002 14:52:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Bug with sequence" }, { "msg_contents": "scott.marlowe@ihs.com (\"scott.marlowe\") wrote in message news:<Pine.LNX.4.33.0211211450100.23804-100000@css120.ihs.com>...\n> On 21 Nov 2002, Rod Taylor wrote:\n> \n> > On Thu, 2002-11-21 at 15:09, scott.marlowe wrote:\n> > > On 21 Nov 2002, Rod Taylor wrote:\n> > > \n> > > > On Thu, 2002-11-21 at 14:11, Bruce Momjian wrote:\n> > > > > Of course, those would be SQL purists who _don't_ understand\n> > > > > concurrency issues. ;-)\n> > > > \n> > > > Or they're the kind that locks the entire table for any given insert.\n> > > \n> > > Isn't that what Bruce just said? ;^)\n> > \n> > I suppose so. I took what Bruce said to be that multiple users could\n> > get the same ID.\n> > \n> > I keep having developers want to make their own table for a sequence,\n> > then use id = id + 1 -- so they hold a lock on it for the duration of\n> > the transaction.\n> \n> I was just funnin' with ya, but the point behind it was that either way \n> (with or without a lock) that using something other than a sequence is \n> probably a bad idea. Either way, under parallel load, you have data \n> consistency issues, or you have poor performance issues.\n> \n> \nI'm not familiar with these \"SQL purists\" (perhaps the reference is to\nJ. Celko?) but the fact is that it's hard to call SEQUENCE\nproduct-specific now that it's in Oracle, DB2, and SQL:2003. 
The\nsyntaxes do differ a little, usually due to choice of abbreviation,\nbut as far as I can tell the internals are similar across\nimplementations.\n\nPeter Gulutzan\nAuthor of \"Sequences And Identity Columns\"\n(http://dbazine.com/gulutzan4.html)\n", "msg_date": "26 Nov 2002 07:30:12 -0800", "msg_from": "pgulutzan@ocelot.ca (Peter Gulutzan)", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Bug with sequence" } ]
[ { "msg_contents": "\n> > is it planned cursor out of a transaction in 7.4 ?\n> \n> I do not think we will allow cross-transaction cursors ever. \n> What would\n> it mean to have a cross-transaction cursor, anyway? Does it show a\n> frozen snapshot as of the time it was opened? The usefulness of that\n> seems awfully low in comparison to the pain of implementing it.\n\nIt is usually used with committed read isolation for an outer select\non one table and one transaction per row where the action usually involving \nadditional tables depends on the selected row. This is to keep transactions \nsmall and avoid locking out other activity.\n\nThe outer cursor is declared \"WITH HOLD\".\nI think it is a useful feature.\n\nOf course a workaround is to open two connections.\n\nAndreas\n", "msg_date": "Mon, 18 Nov 2002 17:32:25 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] DECLARE CURSOR " } ]
[ { "msg_contents": "I am trying to figure out which is the best way to store custom collation\ntables on a PostgreSQL server system, and what kind of interface to\nprovide to users to allow them to create their own.\n\nA collation table essentially consists of a mapping 'character code ->\nweight' for every character in the set and some additional considerations\nfor one-to-many and many-to-one mappings, plus a few feature flags.\n\nHow would a user go about creating such a table?\n\nCREATE COLLATION foo (\n ...\n <10000 lines of data>\n ...\n);\n\nor would it be preferable to store the table in some external file and\nthen have the call simply be, say,\n\nCREATE COLLATION foo SOURCE 'some file';\n\nThe latter has the disadvantage that we'd need some smarts so that pg_dump\nwould not repeat the mistakes that were made with dynamically loadable\nmodules (such as absolute file paths). The former has the disadvantage\nthat it is too unwieldy to be useful.\n\nWe also need to consider the following two problems:\n\nFirstly, if the collation data -- no matter how it is created -- is stored\nwithin the database (that is, in some table(s)), then it would be\nduplicated in every database. Depending on the storage format, a\ncollation table takes between around 100 kB and 800 kB. Multiply that by\na few dozen languages, for each database. That would make an external\nfile seem more attractive. (The external file would need to be a binary\nfile that is precomputed for efficient processing, unless we want to\nreparse and reprocess it every so often, like for every session.)\n\nSecondly, because each collation table depends on a particular character\nencoding (since it is indexed by character code), some sort of magic needs\nto happen when someone creates a database with a different encoding than\nthe template database. One option is to do some mangling on the\nregistered external file name (such as appending the encoding name to the\nfile name). 
Another option is to have the notional pg_collate system\ncatalog contain a column for the encoding, and then simply ignore all\nentries pertaining to encodings other than the database encoding.\n\nComments or better ideas?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 18 Nov 2002 19:10:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Ideas needed: How to create and store collation tables" }, { "msg_contents": "\nOn Mon, 18 Nov 2002, Peter Eisentraut wrote:\n\n> A collation table essentially consists of a mapping 'character code ->\n> weight' for every character in the set and some additional considerations\n> for one-to-many and many-to-one mappings, plus a few feature flags.\n>\n> How would a user go about creating such a table?\n>\n> CREATE COLLATION foo (\n> ...\n> <10000 lines of data>\n> ...\n> );\n>\n> or would it be preferable to store the table in some external file and\n> then have the call simply be, say,\n>\n> CREATE COLLATION foo SOURCE 'some file';\n\nI'd say the latter makes more sense, but would it be better to use\nCREATE COLLATION foo FROM EXTERNAL 'some file';\nwhere we say valid implementation defined collation names are references\nto files of the appropriate type?\n\n> Secondly, because each collation table depends on a particular character\n> encoding (since it is indexed by character code), some sort of magic needs\n> to happen when someone creates a database with a different encoding than\n> the template database. One option is to do some mangling on the\n> registered external file name (such as appending the encoding name to the\n> file name). 
Another option is to have the notional pg_collate system\n> catalog contain a column for the encoding, and then simply ignore all\n> entries pertaining to encodings other than the database encoding.\n\nThe SQL92 CREATE COLLATION seems to create a collation for a particular\ncharacter set, so the latter seems more appropriate to me, especially if\nwe plan to support the full range of SQL's character set/collation/padding\nattributes at some point.\n\n", "msg_date": "Mon, 18 Nov 2002 12:08:54 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Ideas needed: How to create and store collation tables" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I am trying to figure out which is the best way to store custom collation\n> tables on a PostgreSQL server system, and what kind of interface to\n> provide to users to allow them to create their own.\n\n> A collation table essentially consists of a mapping 'character code ->\n> weight' for every character in the set and some additional considerations\n> for one-to-many and many-to-one mappings, plus a few feature flags.\n\nI'd be inclined to handle it similarly to the way that Tatsuo did with\nconversion_procs: let collations be represented by comparison functions\nthat meet some suitable API. I think that trying to represent such a\ntable as an SQL table compactly will be a nightmare, and trying to\naccess it quickly enough for reasonable performance will be worse. Keep\nthe problem out of the API and let each comparison function do what it\nneeds to do internally.\n\n> Secondly, because each collation table depends on a particular character\n> encoding (since it is indexed by character code), some sort of magic needs\n> to happen when someone creates a database with a different encoding than\n> the template database. 
One option is to do some mangling on the\n> registered external file name (such as appending the encoding name to the\n> file name). Another option is to have the notional pg_collate system\n> catalog contain a column for the encoding, and then simply ignore all\n> entries pertaining to encodings other than the database encoding.\n\nSQL92 says that any particular collation is applicable to only one\ncharacter set (which is their term that matches our \"encoding\"s).\nSo I think we'd definitely want to associate a character set with each\npg_collation entry, and then ignore any entries that don't match the \nDB encoding. (Further down the road, \"the\" DB encoding might change\ninto just a \"default for tables in this DB\" encoding, meaning that we'd\nneed access to collations for multiple encodings anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 15:44:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ideas needed: How to create and store collation tables " }, { "msg_contents": "> I'd be inclined to handle it similarly to the way that Tatsuo did with\n> conversion_procs: let collations be represented by comparison functions\n> that meet some suitable API. I think that trying to represent such a\n> table as an SQL table compactly will be a nightmare, and trying to\n> access it quickly enough for reasonable performance will be worse. Keep\n> the problem out of the API and let each comparison function do what it\n> needs to do internally.\n\nAgreed. That was the way I have been thinking about collation/create\ncharacter stuff too.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 19 Nov 2002 10:21:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Ideas needed: How to create and store collation" } ]
[ { "msg_contents": "\nMorning all ...\n\n Well, we're finally coming to the end of yet another release cycle ...\nright now, barring any problems creeping up, we're planning on packaging up\nRC2 on Friday morning, with an announcement going out that evening for ppl\nto test and confirm everything is okay.\n\n If *no* commits to the code are made between then and Tuesday, Tuesday\nthe 26th of November will be the wrap-up date for the final package, with\nthe full release / press release going out first thing Wednesday morning\n...\n\n Currently we have one large, closed, list of email addresses we will be\nusing for this announcement ... but it is not necessarily a complete list\nof everyone we could send to ... if anyone would like to submit addresses\nto add, please send them directly to me with a subject header of:\n\n [PR Submit]\n\n Don't just send me the address though, tell me who the person is ... and\nyes, I know that there will be a load of duplicates coming my way, but I'll\nsort through that ... just let me know of any addresses you can think of\nthat we should submit to, and I'll worry about whether I might already\nhave them on our list :)\n\n Another week to go folks ... :)\n\n\n", "msg_date": "Mon, 18 Nov 2002 15:10:32 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Final Release Scheduale ..." } ]
[ { "msg_contents": "Hi everyone,\n\nWe know PostgreSQL will compile and run on the Sony Playstation 2, and\nwe now have the opportunity to include pre-compiled PostgreSQL RPM's\n(for the Playstation 2 Linux kit) on the Sony site, as part of the\n\"Compiled For Your Convenience\" collection.\n\nNow, we just need someone with the Linux for Playstation 2 kit to\ncompile the RPM's so they can be added.\n\nDoes anyone already have a \"Linux for Playstation 2\" kit and would be\nwilling to compile them?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 19 Nov 2002 08:51:17 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Looking for a \"Linux on Playstation 2\" person to compile PostgreSQL\n\tRPM's" }, { "msg_contents": "I don't have the kit, but I do have a playstation 2. If someone would be\nwilling to donate a kit I'd be willing to try and get it to compile. 
:-)\n\nRobert Treat\n\nOn Mon, 2002-11-18 at 16:51, Justin Clift wrote:\n> Hi everyone,\n> \n> We know PostgreSQL will compile and run on the Sony Playstation 2, and\n> we now have the opportunity to include pre-compiled PostgreSQL RPM's\n> (for the Playstation 2 Linux kit) on the Sony site, as part of the\n> \"Compiled For Your Convenience\" collection.\n> \n> Now, we just need someone with the Linux for Playstation 2 kit to\n> compile the RPM's so they can be added.\n> \n> Does anyone already have a \"Linux for Playstation 2\" kit and would be\n> willing to compile them?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n>\n\n\n\n", "msg_date": "18 Nov 2002 17:38:09 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Looking for a \"Linux on Playstation 2\" person" }, { "msg_contents": "> Now, we just need someone with the Linux for Playstation 2 kit to\n> compile the RPM's so they can be added.\n\nI ask this not because I'm trying to be trite but because I honestly do\nnot know.\nWhy would you want this? What can you do with pgSQL (or any other\nrdbms) on the Playstation 2?\n\nChris\n\n", "msg_date": "Tue, 19 Nov 2002 07:32:54 -0600", "msg_from": "\"Chris Boget\" <chris@wild.net>", "msg_from_op": false, "msg_subject": "Re: Looking for a \"Linux on Playstation 2\" person to compile\n\tPostgreSQL RPM's" }, { "msg_contents": "Chris Boget wrote:\n> \n> > Now, we just need someone with the Linux for Playstation 2 kit to\n> > compile the RPM's so they can be added.\n> \n> I ask this not because I'm trying to be trite but because I honestly do\n> not know.\n> Why would you want this? 
What can you do with pgSQL (or any other\n> rdbms) on the Playstation 2?\n\nThat's ok.\n\nThe short answer is to give people using Linux on the PS2 a database\noption, as there presently don't appear to be any alternatives for them.\n\nA side benefit is that we could possibly leverage it for\nadvocacy/marketing purposes, depending if that would be useful. :)\n\ni.e. \"PostgreSQL announces support for the Sony Playstation 2\" type of\npress release, or just for pointing out how scalable we can be. Same\nsoftware, same API's, same interfaces, etc for PS2 -> 100+ CPU mission\ncritical powerhouses, etc.\n\n:-)\n\nAs a humorous note, saw a guy on their mailing lists asking if Oracle\nwould work... ;-)\n\nRegards and best wishes,\n\nJustin Clift\n \n> Chris\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 20 Nov 2002 00:47:47 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Looking for a \"Linux on Playstation 2\" person to compile " } ]
[ { "msg_contents": "I'm pretty sure I saw a reference within the last 3 or 4 weeks on one of\nthe mailing lists to a script that would put in place, after an upgrade\nto 7.3, dependency information that would have been automatically\ncreated if the schema had been created ab initio in 7.3. However, I\ncan't find it in a mailing list search.\n\nCan anyone give me a URL for that, or have I sdreamed it?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"To show forth thy lovingkindness in the morning, and \n thy faithfulness every night.\" Psalms 92:2 \n\n", "msg_date": "19 Nov 2002 00:30:42 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "mislaid reference to update script for after 7.3 upgrade" }, { "msg_contents": "Oliver Elphick wrote:\n> I'm pretty sure I saw a reference within the last 3 or 4 weeks on one of\n> the mailing lists to a script that would put in place, after an upgrade\n> to 7.3, dependency information that would have been automatically\n> created if the schema had been created ab initio in 7.3. However, I\n> can't find it in a mailing list search.\n> \n> Can anyone give me a URL for that, or have I sdreamed it?\n> \n\nSee contrib/adddepend.\n\nJoe\n\n\n", "msg_date": "Mon, 18 Nov 2002 16:46:06 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: mislaid reference to update script for after 7.3 upgrade" } ]
[ { "msg_contents": "To whom it may concern,\n\nI am a java developer using postgres as a DB server. Me and my development team have a product comprised of about 50 tables, with about 10,000 records in the largest table. We have looked for concrete answers in books and the web for solutions to several problems that are plaguing us. Now we look to the source.\n\nIssue #1 Massive deletion of records.\n\nQ - If many (eg hundreds) records are deleted (purposely), those records get flagged for later removal. What is the best sequence of operations to optimize the database afterwards? Is it Vacuum, Re-index, then do a Vacuum Analyze.\n\nSome of what I have read suggests that doing a vacuum without a re-index, can cause a given index to be invalid (ie entries pointing to records that do not match the index criteria).\n\nThis would then suggest that doing a Vacuum Analyze would create an incorrect statistics table.\n\nAny help regarding the best maintenance policy, ramifications of mass deletions, vacuuming, and re-indexing would be most appreciated. Thanks\n\n\n\n---------------------------------\nDo you Yahoo!?\nYahoo! Web Hosting - Let the expert host your site\nTo whom it may concern,\nI am a java developer using postgres as a DB server.  Me and my development team have a product comprised of about 50 tables, with about 10,000 records in the largest table.  We have looked for concrete answers in books and the web for solutions to several problems that are plaguing us.  Now we look to the source.\nIssue #1 Massive deletion of records.\nQ - If many (eg hundreds) records are deleted (purposely), those records get flagged for later removal.  What is the best sequence of operations to optimize the database afterwards?  
Is it Vacuum, Re-index, then do a Vacuum Analyze.\nSome of what I have read suggests that doing a vacuum without a re-index, can cause a given index to be invalid (ie entries pointing to records that do not match the index criteria).\nThis would then suggest that doing a Vacuum Analyze would create an incorrect statistics table.\nAny help regarding the best maintenance policy, ramifications of mass deletions, vacuuming, and re-indexing would be most appreciated.  ThanksDo you Yahoo!?\nYahoo! Web Hosting - Let the expert host your site", "msg_date": "Mon, 18 Nov 2002 19:02:40 -0800 (PST)", "msg_from": "Adrian Calvin <acexec@yahoo.com>", "msg_from_op": true, "msg_subject": "Question regarding effects of Vacuum, Vacuum Analyze, and Reindex" }, { "msg_contents": "Moving thread to pgsql-performance.\n\nOn Mon, 2002-11-18 at 22:02, Adrian Calvin wrote:\n> Q - If many (eg hundreds) records are deleted (purposely), those\n> records get flagged for later removal. What is the best sequence of\n> operations to optimize the database afterwards? Is it Vacuum,\n> Re-index, then do a Vacuum Analyze.\n\nJust run a regular vacuum once for the above. If you modify 10%+ of the\ntable (via single or multiple updates, deletes or inserts) then a vacuum\nanalyze will be useful.\n\nRe-index when you change the tables contents a few times over. (Have\ndeleted or updated 30k entries in a table with 10k entries at any given\ntime).\n\n\nGeneral maintenance for a dataset of that size will probably simply be a\nnightly vacuum, weekly vacuum analyze, and annual reindex or dump /\nrestore (upgrades).\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "19 Nov 2002 08:46:39 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Question regarding effects of Vacuum, Vacuum" } ]
[ { "msg_contents": "\n> >> Hmm ... now that's an interesting thought. So the input converter would\n> >> actively strip trailing blanks, output would add them back,\n> \n> > But how would the output know how many to put back?\n> \n> The output routine would need access to the column typmod. Which it\n> would have, in simple \"SELECT columnname\" cases, but this is a serious\n> weakness of the scheme in general.\n\nI think it is not really a weakness of the scheme, but a weakness that typmod\nis not available in some places where it would actually be needed. \nOne effect of this is, that all the varlena datatypes have a redundant \nlength info per row, even for such types that have the same length for \nall rows of one table (e.g. numeric(8,2), and char(n)). In a lot of cases\nthat means 50-100% more disk space than actually needed.\n\nI can see, that making typmod availabe in more places would probably \nbe major work (or not feasible at all), but I think it would generally be \nof (great) value.\n\nTo the problem of concatenation:\nWould it be feasible to alter the concatenation method to concatenate \nthe results of the output functions of the relevant expressions ?\nImho that would actually return the expected results more often than it \ncurrently does, and it would fix the typmod issue for char(n) concatenation.\n\nAndreas\n", "msg_date": "Tue, 19 Nov 2002 10:15:49 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: char(n) to varchar or text conversion should strip" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I think it is not really a weakness of the scheme, but a weakness that typmod\n> is not available in some places where it would actually be needed. \n> One effect of this is, that all the varlena datatypes have a redundant \n> length info per row, even for such types that have the same length for \n> all rows of one table (e.g. 
numeric(8,2), and char(n)).\n\nThat argument doesn't hold any water, seeing that neither of your\nexamples are actually fixed-width... numeric doesn't store leading\nor trailing zeroes, and char(n) is not a fixed-width item when dealing\nwith multibyte encodings. And on top of that, there's TOAST to\nthink about. So I don't think there's any shot at getting rid of the\nvarlen word.\n\n> Would it be feasible to alter the concatenation method to concatenate \n> the results of the output functions of the relevant expressions ?\n> Imho that would actually return the expected results more often than it \n> currently does, and it would fix the typmod issue for char(n) concatenation.\n\nNo, it wouldn't really fix the typmod issue; you still have the problem\nof where is the output function going to get the typmod from? If the\nconcatenation argument is anything but a simple column reference, you've\nstill got trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Nov 2002 08:42:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: char(n) to varchar or text conversion should strip " } ]
[ { "msg_contents": "Folks,\n\nWe need a quote from a major code contributor to PostgreSQL about the\nupcoming 7.3 release -- something about how great the new release is,\nor some of the features in the release. We need this for the 7.3\npress release, which will be drafted in 2 days.\n\nIf you have something to say, please e-mail me, Marc (scrappy@hub.org)\nand Justin (justin@postgresql.org) off-list so we can quote you!\n\n-Josh Berkus\n", "msg_date": "Tue, 19 Nov 2002 08:53:35 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Need Quote for 7.3" }, { "msg_contents": "\nOn Tue, 19 Nov 2002, Josh Berkus wrote:\n\n> Folks,\n> \n> We need a quote from a major code contributor to PostgreSQL about the\n> upcoming 7.3 release -- something about how great the new release is,\n> or some of the features in the release. We need this for the 7.3\n> press release, which will be drafted in 2 days.\n> \n> If you have something to say, please e-mail me, Marc (scrappy@hub.org)\n> and Justin (justin@postgresql.org) off-list so we can quote you!\n\n\nI think it's great - but don't quote me on that. :)\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Tue, 19 Nov 2002 17:02:23 +0000 (GMT)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Need Quote for 7.3" }, { "msg_contents": "nandrews@investsystems.co.uk (\"Nigel J. Andrews\") writes:\n\n> On Tue, 19 Nov 2002, Josh Berkus wrote:\n> \n> > Folks,\n> > \n> > We need a quote from a major code contributor to PostgreSQL about the\n> > upcoming 7.3 release -- something about how great the new release is,\n> > or some of the features in the release. We need this for the 7.3\n> > press release, which will be drafted in 2 days.\n> > \n> > If you have something to say, please e-mail me, Marc (scrappy@hub.org)\n> > and Justin (justin@postgresql.org) off-list so we can quote you!\n> \n> \n> I think it's great - but don't quote me on that. 
:)\n> \n\nPostgreSQL. Because life's too short to learn Oracle.\n\n:)\n\nBilly O'Connor\n", "msg_date": "Tue, 19 Nov 2002 17:41:52 GMT", "msg_from": "Billy O'Connor <billyoc@linuxfromscratch.org>", "msg_from_op": false, "msg_subject": "Re: Need Quote for 7.3" }, { "msg_contents": "> > I think it's great - but don't quote me on that. :)\n> > \n> \n> PostgreSQL. Because life's too short to learn Oracle.\n\nPostgreSQL. For those with more to do than babysit a database.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "26 Nov 2002 10:05:09 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Need Quote for 7.3" }, { "msg_contents": "> > > I think it's great - but don't quote me on that. :)\n> > > \n> > \n> > PostgreSQL. Because life's too short to learn Oracle.\n> \n> PostgreSQL. For those with more to do than babysit a database.\n> \n\nAh, better. More orthogonal.\n\n", "msg_date": "26 Nov 2002 15:27:19 -0500", "msg_from": "Billy O'Connor <billyoc@linuxfromscratch.org>", "msg_from_op": false, "msg_subject": "Re: Need Quote for 7.3" }, { "msg_contents": "How about \"PostgreSQL, slightly less nightmares than the other databases\"\n\n\n", "msg_date": "Wed, 27 Nov 2002 10:27:35 -0500", "msg_from": "\"Merlin Moncure\" <merlin@rcsonline.com>", "msg_from_op": false, "msg_subject": "Re: Need Quote for 7.3" } ]
[ { "msg_contents": "Hi,\n\nIs there any reason why the grolist field in the table pg_group is \nimplemented as an array and not as a separate table?\n\nAccording to the documentation:\n\n<quote source=\"Postgresql 7.2 User Manual, chapter 6 near the end\">\nArrays are not sets; using arrays in the manner described in the previous \nparagraph is often a sign of database misdesign.\n</quote>\n\nI have trouble implementing a way to easily check whether a user is part \nof a group. (I use Apache::AuthDBI to implement authentication and wanted \nto make a view with columns username, userid , groupname. And installing \nthe contrib/array give's me a postgresql that is different from all the \nothers :-(\n\n\n-- \n__________________________________________________\n\"Nothing is as subjective as reality\"\nReinoud van Leeuwen reinoud.v@n.leeuwen.net\nhttp://www.xs4all.nl/~reinoud\n__________________________________________________\n", "msg_date": "Wed, 20 Nov 2002 13:03:18 +0100", "msg_from": "Reinoud van Leeuwen <reinoud.v@n.leeuwen.net>", "msg_from_op": true, "msg_subject": "Why an array in pg_group?" }, { "msg_contents": "Reinoud van Leeuwen kirjutas K, 20.11.2002 kell 17:03:\n> Hi,\n> \n> Is there any reason why the grolist field in the table pg_group is \n> implemented as an array and not as a separate table?\n\nmost likely for performance reasons.\n\n> According to the documentation:\n> \n> <quote source=\"Postgresql 7.2 User Manual, chapter 6 near the end\">\n> Arrays are not sets; using arrays in the manner described in the previous \n> paragraph is often a sign of database misdesign.\n> </quote>\n> \n> I have trouble implementing a way to easily check whether a user is part \n> of a group. (I use Apache::AuthDBI to implement authentication and wanted \n> to make a view with columns username, userid , groupname. 
And installing \n> the contrib/array give's me a postgresql that is different from all the \n> others :-(\n\nnot from those who also have installed contrib/array ;)\n\nbut you should actually be using contrib/intagg (and perhaps contrib\nintarray) for performance reasons ;)\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "27 Nov 2002 02:05:48 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Why an array in pg_group?" }, { "msg_contents": "On Tuesday 26 November 2002 09:05 pm, Hannu Krosing wrote:\n> Reinoud van Leeuwen kirjutas K, 20.11.2002 kell 17:03:\n> > Hi,\n> >\n> > Is there any reason why the grolist field in the table pg_group is\n> > implemented as an array and not as a separate table?\n>\n> most likely for performance reasons.\n>\n> > According to the documentation:\n> >\n> > <quote source=\"Postgresql 7.2 User Manual, chapter 6 near the end\">\n> > Arrays are not sets; using arrays in the manner described in the previous\n> > paragraph is often a sign of database misdesign.\n> > </quote>\n> >\n> > I have trouble implementing a way to easily check whether a user is part\n> > of a group. (I use Apache::AuthDBI to implement authentication and wanted\n> > to make a view with columns username, userid , groupname. And installing\n> > the contrib/array give's me a postgresql that is different from all the\n> > others :-(\n>\n> not from those who also have installed contrib/array ;)\n>\n> but you should actually be using contrib/intagg (and perhaps contrib\n> intarray) for performance reasons ;)\nCan You make syntax (operator or function) like :\n\nscalar integer IN array integers\nfor join scalar and array fileds ?\n\nin base PostgreSQL ?\n\nregards\nHaris Peco\n\n\n", "msg_date": "Wed, 27 Nov 2002 01:36:19 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Why an array in pg_group?" } ]
[ { "msg_contents": "May be I miss something, but seems there is a problem with float4\nin 7.2.3 and 7.3RC1 (6.53 works fine):\n\ntest=# create table t ( a float4);\nCREATE TABLE\ntest=# insert into t values (0.1);\nINSERT 32789 1\ntest=# select * from t where a=0.1;\n a\n---\n(0 rows)\n\ntest=# select * from t where a=0.1::float4;\n a\n-----\n 0.1\n(1 row)\n\nNo problem with float8\n\ntest=# create table t8 ( a float8);\nCREATE TABLE\ntest=# insert into t8 values (0.1);\nINSERT 32792 1\ntest=# select * from t8 where a=0.1;\n a\n-----\n 0.1\n(1 row)\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 20 Nov 2002 18:38:22 +0300 (MSK)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "float4 problem" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n\n> May be I miss something, but seems there is a problem with float4\n> in 7.2.3 and 7.3RC1 (6.53 works fine):\n> \n> test=# create table t ( a float4);\n> CREATE TABLE\n> test=# insert into t values (0.1);\n> INSERT 32789 1\n> test=# select * from t where a=0.1;\n> a\n> ---\n> (0 rows)\n\n\nI'm guessing this is because 0.1 is not directly representable as a\nbinary floating point number, and literal floating constants are\nfloat8 not float4, and 0.1::float4 != 0.1::float8. 
Same problem that\ncauses queries on int2 fields not to use an index unless you cast the\nconstants in the query...\n\n-Doug\n", "msg_date": "20 Nov 2002 11:00:01 -0500", "msg_from": "Doug McNaught <doug@mcnaught.org>", "msg_from_op": false, "msg_subject": "Re: float4 problem" }, { "msg_contents": "Doug McNaught <doug@mcnaught.org> writes:\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n>> May be I miss something, but seems there is a problem with float4\n>> in 7.2.3 and 7.3RC1 (6.53 works fine):\n>> \n>> test=# create table t ( a float4);\n>> CREATE TABLE\n>> test=# insert into t values (0.1);\n>> INSERT 32789 1\n>> test=# select * from t where a=0.1;\n>> a\n>> ---\n>> (0 rows)\n\n\n> I'm guessing this is because 0.1 is not directly representable as a\n> binary floating point number, and literal floating constants are\n> float8 not float4, and 0.1::float4 != 0.1::float8.\n\nRight.\n\nI think that this particular form of the problem will go away in 7.4.\nCurrently, \"a = 0.1\" is resolved as float4=float8, and there's no way\nfor the float4 approximation of 0.1 to exactly equal the float8\napproximation of it. However, if we eliminate cross-datatype\ncomparison operators as I've proposed, the comparison should be resolved\nas float4 = float4 and it would work.\n\nNonetheless, expecting exact equality tests to succeed with float values\nis generally folly. I really do not believe the claim that this worked\nin 6.5.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 12:42:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: float4 problem " } ]
[ { "msg_contents": "Heres a patch which will create the sql_help.h file if it doesn't already\nexist using an installed copy of perl. I've tested it using perl v5.6.1 from\nActiveState and all appears to work.\n\nCan someone commit this for me, or throw back some comments.\n\nThanks,\n\nAl.\n\n\n--- src/bin/psql/win32.mak 2002/10/29 04:23:30 1.11\n+++ src/bin/psql/win32.mak 2002/11/20 19:44:35\n@@ -7,14 +7,16 @@\n !ENDIF\n\n CPP=cl.exe\n+PERL=perl.exe\n\n OUTDIR=.\\Release\n INTDIR=.\\Release\n+REFDOCDIR= ../../../doc/src/sgml/ref\n # Begin Custom Macros\n OutDir=.\\Release\n # End Custom Macros\n\n-ALL : \"$(OUTDIR)\\psql.exe\"\n+ALL : sql_help.h \"$(OUTDIR)\\psql.exe\"\n\n CLEAN :\n -@erase \"$(INTDIR)\\command.obj\"\n@@ -91,3 +93,7 @@\n $(CPP) @<<\n $(CPP_PROJ) $<\n <<\n+\n+sql_help.h: create_help.pl\n+ $(PERL) create_help.pl $(REFDOCDIR) $@\n+\n\n\n\n\n----- Original Message -----\nFrom: \"Al Sutton\" <al@alsutton.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Friday, November 15, 2002 8:48 PM\nSubject: Missing file from CVS?\n\n\n> All,\n>\n> I've just tried to build the Win32 components under Visual Studio's C++\n> compiler from the win32.mak CVS archive at\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot and found that\nthe\n> following file was missing;\n>\n> src\\bin\\psql\\sql_help.h\n>\n> I've copied the file from the the source tree of version 7.2.3 and the\n> compile works with out any problems.\n>\n> Should the file be in CVS?\n>\n> Al.\n>\n\n\n", "msg_date": "Wed, 20 Nov 2002 19:47:07 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": true, "msg_subject": "Fw: Missing file from CVS?" 
}, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nAl Sutton wrote:\n> Heres a patch which will create the sql_help.h file if it doesn't already\n> exist using an installed copy of perl. I've tested it using perl v5.6.1 from\n> ActiveState and all appears to work.\n> \n> Can someone commit this for me, or throw back some comments.\n> \n> Thanks,\n> \n> Al.\n> \n> \n> --- src/bin/psql/win32.mak 2002/10/29 04:23:30 1.11\n> +++ src/bin/psql/win32.mak 2002/11/20 19:44:35\n> @@ -7,14 +7,16 @@\n> !ENDIF\n> \n> CPP=cl.exe\n> +PERL=perl.exe\n> \n> OUTDIR=.\\Release\n> INTDIR=.\\Release\n> +REFDOCDIR= ../../../doc/src/sgml/ref\n> # Begin Custom Macros\n> OutDir=.\\Release\n> # End Custom Macros\n> \n> -ALL : \"$(OUTDIR)\\psql.exe\"\n> +ALL : sql_help.h \"$(OUTDIR)\\psql.exe\"\n> \n> CLEAN :\n> -@erase \"$(INTDIR)\\command.obj\"\n> @@ -91,3 +93,7 @@\n> $(CPP) @<<\n> $(CPP_PROJ) $<\n> <<\n> +\n> +sql_help.h: create_help.pl\n> + $(PERL) create_help.pl $(REFDOCDIR) $@\n> +\n> \n> \n> \n> \n> ----- Original Message -----\n> From: \"Al Sutton\" <al@alsutton.com>\n> To: <pgsql-hackers@postgresql.org>\n> Sent: Friday, November 15, 2002 8:48 PM\n> Subject: Missing file from CVS?\n> \n> \n> > All,\n> >\n> > I've just tried to build the Win32 components under Visual Studio's C++\n> > compiler from the win32.mak CVS archive at\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot and found that\n> the\n> > following file was missing;\n> >\n> > src\\bin\\psql\\sql_help.h\n> >\n> > I've copied the file from the the source tree of version 7.2.3 and the\n> > compile works with out any problems.\n> >\n> > Should the file be in CVS?\n> >\n> > Al.\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you 
can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 22 Nov 2002 00:49:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fw: Missing file from CVS?" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nAl Sutton wrote:\n> Heres a patch which will create the sql_help.h file if it doesn't already\n> exist using an installed copy of perl. I've tested it using perl v5.6.1 from\n> ActiveState and all appears to work.\n> \n> Can someone commit this for me, or throw back some comments.\n> \n> Thanks,\n> \n> Al.\n> \n> \n> --- src/bin/psql/win32.mak 2002/10/29 04:23:30 1.11\n> +++ src/bin/psql/win32.mak 2002/11/20 19:44:35\n> @@ -7,14 +7,16 @@\n> !ENDIF\n> \n> CPP=cl.exe\n> +PERL=perl.exe\n> \n> OUTDIR=.\\Release\n> INTDIR=.\\Release\n> +REFDOCDIR= ../../../doc/src/sgml/ref\n> # Begin Custom Macros\n> OutDir=.\\Release\n> # End Custom Macros\n> \n> -ALL : \"$(OUTDIR)\\psql.exe\"\n> +ALL : sql_help.h \"$(OUTDIR)\\psql.exe\"\n> \n> CLEAN :\n> -@erase \"$(INTDIR)\\command.obj\"\n> @@ -91,3 +93,7 @@\n> $(CPP) @<<\n> $(CPP_PROJ) $<\n> <<\n> +\n> +sql_help.h: create_help.pl\n> + $(PERL) create_help.pl $(REFDOCDIR) $@\n> +\n> \n> \n> \n> \n> ----- Original Message -----\n> From: \"Al Sutton\" <al@alsutton.com>\n> To: <pgsql-hackers@postgresql.org>\n> Sent: Friday, November 15, 2002 8:48 PM\n> Subject: Missing file from CVS?\n> \n> \n> > All,\n> >\n> > I've just tried to build the Win32 components under Visual Studio's C++\n> > compiler from the win32.mak CVS archive at\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot 
and found that\n> the\n> > following file was missing;\n> >\n> > src\\bin\\psql\\sql_help.h\n> >\n> > I've copied the file from the the source tree of version 7.2.3 and the\n> > compile works with out any problems.\n> >\n> > Should the file be in CVS?\n> >\n> > Al.\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 22 Nov 2002 23:06:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fw: Missing file from CVS?" } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\ttgl@postgresql.org\t02/11/20 19:42:20\n\nModified files:\n\tdoc/src/sgml : runtime.sgml \n\tsrc/backend/optimizer/path: costsize.c \n\tsrc/backend/optimizer/plan: createplan.c planmain.c planner.c \n\tsrc/backend/utils/misc: guc.c postgresql.conf.sample \n\tsrc/bin/psql : tab-complete.c \n\tsrc/include/optimizer: cost.h planmain.h \n\tsrc/test/regress/expected: aggregates.out rangefuncs.out \n\t rules.out select_having.out \n\t select_having_1.out \n\t select_implicit.out \n\t select_implicit_1.out subselect.out \n\tsrc/test/regress/input: misc.source \n\tsrc/test/regress/output: misc.source \n\tsrc/test/regress/sql: aggregates.sql rules.sql select_having.sql \n\t select_implicit.sql subselect.sql \n\nLog message:\n\tFinish implementation of hashed aggregation. Add enable_hashagg GUC\n\tparameter to allow it to be forced off for comparison purposes.\n\tAdd ORDER BY clauses to a bunch of regression test queries that will\n\totherwise produce randomly-ordered output in the new regime.\n\n", "msg_date": "Wed, 20 Nov 2002 19:42:20 -0500 (EST)", "msg_from": "tgl@postgresql.org (Tom Lane)", "msg_from_op": true, "msg_subject": "pgsql-server/ oc/src/sgml/runtime.sgml rc/back ..." }, { "msg_contents": "> Log message:\n> \tFinish implementation of hashed aggregation. Add enable_hashagg GUC\n> \tparameter to allow it to be forced off for comparison purposes.\n> \tAdd ORDER BY clauses to a bunch of regression test queries that will\n> \totherwise produce randomly-ordered output in the new regime.\n\nOut of interest (since I was away while this was proposed I assume),\nwhat's the idea with hashed aggergation? I assume each group is now in a\nhash bucket? 
How did it work before?\n\nChris\n\n", "msg_date": "Thu, 21 Nov 2002 09:01:01 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ..." }, { "msg_contents": "Tom Lane wrote:\n> Log message:\n> \tFinish implementation of hashed aggregation. Add enable_hashagg GUC\n> \tparameter to allow it to be forced off for comparison purposes.\n> \tAdd ORDER BY clauses to a bunch of regression test queries that will\n> \totherwise produce randomly-ordered output in the new regime.\n\nTom, do we really want to add a GUC that is used just for comparison of\nperformance? I know we have the seqscan on/off, but there are valid\nreasons to do that. Do you think there will be cases where it will\nfaster to have this hash setting off?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 20 Nov 2002 20:04:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ oc/src/sgml/runtime.sgml rc/back ..." }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Log message:\n> > \tFinish implementation of hashed aggregation. Add enable_hashagg GUC\n> > \tparameter to allow it to be forced off for comparison purposes.\n> > \tAdd ORDER BY clauses to a bunch of regression test queries that will\n> > \totherwise produce randomly-ordered output in the new regime.\n> \n> Out of interest (since I was away while this was proposed I assume),\n> what's the idea with hashed aggergation? I assume each group is now in a\n> hash bucket? How did it work before?\n\nIt sequential scanned the group of possible matches. 
How it hashes the\nvalue and looks for matches that way --- much faster and the way most\ndb's do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 20 Nov 2002 20:05:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, do we really want to add a GUC that is used just for comparison of\n> performance? I know we have the seqscan on/off, but there are valid\n> reasons to do that. Do you think there will be cases where it will\n> faster to have this hash setting off?\n\nSure --- that's why the planner code is going to great lengths to try to\nchoose the faster one. Even if I didn't think that, it'll be at least\nas useful as, say, enable_indexscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 22:30:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ oc/src/sgml/runtime.sgml rc/back ... " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, do we really want to add a GUC that is used just for comparison of\n> > performance? I know we have the seqscan on/off, but there are valid\n> > reasons to do that. Do you think there will be cases where it will\n> > faster to have this hash setting off?\n> \n> Sure --- that's why the planner code is going to great lengths to try to\n> choose the faster one. Even if I didn't think that, it'll be at least\n> as useful as, say, enable_indexscan.\n\nOh, OK. Just checking. 
I was confused about your commit message\nbecause you seemed to be saying it was mostly for testing, and I thought\nyou meant testing to see if the hash code is an improvement over what we\nhad, rather than to see if the hash code is an improvement over the\nsequential scan GROUP BY path, which is still in the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 20 Nov 2002 22:33:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ oc/src/sgml/runtime.sgml rc/back" }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> Out of interest (since I was away while this was proposed I assume),\n> what's the idea with hashed aggergation?\n\nOld method: scan rows in order by the GROUP BY columns (requiring\na sort, or if you're lucky an indexscan), and execute one aggregation\nat a time.\n\nNew method: scan rows in any old order (typically a seqscan), and run\nall the per-group aggregates in parallel. It's a hash aggregation\nbecause we use an in-memory hashtable indexed by the values of the GROUP\nBY columns to keep track of the running state of each aggregate.\n\nThe hash method avoids a sort before aggregation, at the cost of a sort\nafterwards if you want the results in non-random order. But the\npost-sort is only sorting one row per group, which is usually a lot less\ndata than the input rows.\n\nOne case where the old method can still win is where you have\n\tSELECT ... GROUP BY foo ORDER BY foo LIMIT n;\nfor small n. 
The hash method does not produce any output till it's\nread all the input; the old method can produce a few rows very cheaply\nif foo is indexed.\n\nAlso, of course, the hash method fails if you have too many groups to\npermit the hashtable to fit in memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Nov 2002 22:37:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ... " } ]
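Tom's two strategies in the thread above are easy to model outside the database. The sketch below is illustrative Python, not PostgreSQL source: `hash_agg_sum` plays the role of the new hashed aggregation (per-group running state kept in a hash table, input scanned in any order), while `sort_agg_sum` plays the old sort-then-scan method. All data and function names are invented.

```python
# Model of hashed aggregation: compute SUM(amount) GROUP BY key in a
# single pass over rows in arbitrary order, keeping one entry of
# running aggregate state per group in a hash table.
def hash_agg_sum(rows):
    state = {}  # group key -> running sum (per-group aggregate state)
    for key, amount in rows:  # input order does not matter
        state[key] = state.get(key, 0) + amount
    return state

# Model of the old method: sort first, then aggregate each run of
# identical keys; the sort is the extra cost the hash method avoids.
def sort_agg_sum(rows):
    out = {}
    current, total = None, 0
    for key, amount in sorted(rows):
        if key != current and current is not None:
            out[current] = total  # a group just ended; emit it
            total = 0
        current = key
        total += amount
    if current is not None:
        out[current] = total
    return out

rows = [("b", 2), ("a", 1), ("b", 3), ("a", 4)]
print(hash_agg_sum(rows) == sort_agg_sum(rows) == {"a": 5, "b": 5})  # True
```

Note that, as Tom says, the hash version cannot emit anything until it has consumed every input row, while the sorted version finishes a group as soon as its run of keys ends, which is why the old method can still win for `ORDER BY foo LIMIT n` with small `n`.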
[ { "msg_contents": "The documentation on changing shared memory kernel settings on xBSD\n(namely FreeBSD, possibly others as well) isn't ideal, IMHO. It says:\n\n%%\n The options SYSVSHM and SYSVSEM need to be enabled when the\n kernel is compiled. (They are by default.) The maximum size of\n shared memory is determined by the option SHMMAXPGS (in\n pages). The following shows an example of how to set the various\n parameters:\n\noptions SYSVSHM\noptions SHMMAXPGS=4096\noptions SHMSEG=256\n\noptions SYSVSEM\noptions SEMMNI=256\noptions SEMMNS=512\noptions SEMMNU=256\noptions SEMMAP=256\n\n (On NetBSD and OpenBSD the key word is actually option singular.)\n\n You may also want to use the sysctl setting to lock shared memory\n into RAM and prevent it from being paged out to swap.\n%%\n\nHowever, it appears that shared memory & semaphore settings can also\nbe controlled via sysctls -- at least on a FreeBSD 4.7-RELEASE box, I\ncan see:\n\nkern.ipc.semmap: 30\nkern.ipc.semmni: 10\nkern.ipc.semmns: 60\nkern.ipc.semmnu: 30\nkern.ipc.semmsl: 60\nkern.ipc.semopm: 100\nkern.ipc.semume: 10\nkern.ipc.semusz: 92\nkern.ipc.semvmx: 32767\nkern.ipc.semaem: 16384\nkern.ipc.shmmax: 33554432\nkern.ipc.shmmin: 1\nkern.ipc.shmmni: 192\nkern.ipc.shmseg: 128\nkern.ipc.shmall: 8192\nkern.ipc.shm_use_phys: 0\n\nHowever, the FreeBSD box I'm playing with isn't mine, so I'm not too\nkeen to change sysctls (well, that and I don't have root :-) ). 
Would\na kind BSD user confirm that:\n\n (a) the sysctls above *can* be used to change kernel shared\n memory settings, and the default value of the sysctl is\n the kernel option referred to in the docs.\n\n (b) do the above sysctls work on NetBSD and OpenBSD as well?\n\n (c) the 'prevent shared memory paging' sysctl vaguely referred\n to in the docs is 'kern.ipc.shm_use_phys', right?\n\n (d) does the above sysctl also work on NetBSD and OpenBSD?\n\nThanks in advance,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "20 Nov 2002 23:48:47 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "xBSD shmem doc deficiency" }, { "msg_contents": "On 20 Nov 2002, Neil Conway wrote:\n\n> However, the FreeBSD box I'm playing with isn't mine, so I'm not too\n> keen to change sysctls (well, that and I don't have root :-) ). Would\n> a kind BSD user confirm that:\n> \n> (a) the sysctls above *can* be used to change kernel shared\n> memory settings, and the default value of the sysctl is\n> the kernel option referred to in the docs.\n> \n> (b) do the above sysctls work on NetBSD and OpenBSD as well?\n\nA quick look at OpenBSD 3.2 (man 3 sysctl) appears to show that all you\ncan do at runtime is enable/disable message queues, shared memory, and\nsemaphores, not adjust buffer counts or sizes. 
It seems the kernel must \nstill be recompiled with the desired new settings.\n\nJon\n\n", "msg_date": "Thu, 21 Nov 2002 05:14:25 +0000 (UTC)", "msg_from": "Jon Jensen <jon@endpoint.com>", "msg_from_op": false, "msg_subject": "Re: xBSD shmem doc deficiency" }, { "msg_contents": "\nApparently only some settings are adjustable.\n\nroot@dev:~# uname -smr\nFreeBSD 4.2-RELEASE i386\nroot@dev:~# sysctl -a | grep kern.ipc.semm\nkern.ipc.semmap: 30\nkern.ipc.semmni: 10\nkern.ipc.semmns: 60\nkern.ipc.semmnu: 30\nkern.ipc.semmsl: 60\nroot@dev:~# sysctl -w kern.ipc.semmap=50\nkern.ipc.semmap: 30 -> 50\nroot@dev:~# sysctl -w kern.ipc.semmni=50\nsysctl: oid 'kern.ipc.semmni' is read only\nroot@dev:~#\n\n\nOn 20 Nov 2002, Neil Conway wrote:\n\n> The documentation on changing shared memory kernel settings on xBSD\n> (namely FreeBSD, possibly others as well) isn't ideal, IMHO. It says:\n>\n> %%\n> The options SYSVSHM and SYSVSEM need to be enabled when the\n> kernel is compiled. (They are by default.) The maximum size of\n> shared memory is determined by the option SHMMAXPGS (in\n> pages). 
The following shows an example of how to set the various\n> parameters:\n>\n> options SYSVSHM\n> options SHMMAXPGS=4096\n> options SHMSEG=256\n>\n> options SYSVSEM\n> options SEMMNI=256\n> options SEMMNS=512\n> options SEMMNU=256\n> options SEMMAP=256\n>\n> (On NetBSD and OpenBSD the key word is actually option singular.)\n>\n> You may also want to use the sysctl setting to lock shared memory\n> into RAM and prevent it from being paged out to swap.\n> %%\n>\n> However, it appears that shared memory & semaphore settings can also\n> be controlled via sysctls -- at least on a FreeBSD 4.7-RELEASE box, I\n> can see:\n>\n> kern.ipc.semmap: 30\n> kern.ipc.semmni: 10\n> kern.ipc.semmns: 60\n> kern.ipc.semmnu: 30\n> kern.ipc.semmsl: 60\n> kern.ipc.semopm: 100\n> kern.ipc.semume: 10\n> kern.ipc.semusz: 92\n> kern.ipc.semvmx: 32767\n> kern.ipc.semaem: 16384\n> kern.ipc.shmmax: 33554432\n> kern.ipc.shmmin: 1\n> kern.ipc.shmmni: 192\n> kern.ipc.shmseg: 128\n> kern.ipc.shmall: 8192\n> kern.ipc.shm_use_phys: 0\n>\n> However, the FreeBSD box I'm playing with isn't mine, so I'm not too\n> keen to change sysctls (well, that and I don't have root :-) ). 
Would\n> a kind BSD user confirm that:\n>\n> (a) the sysctls above *can* be used to change kernel shared\n> memory settings, and the default value of the sysctl is\n> the kernel option referred to in the docs.\n>\n> (b) do the above sysctls work on NetBSD and OpenBSD as well?\n>\n> (c) the 'prevent shared memory paging' sysctl vaguely referred\n> to in the docs is 'kern.ipc.shm_use_phys', right?\n>\n> (d) does the above sysctl also work on NetBSD and OpenBSD?\n>\n> Thanks in advance,\n>\n> Neil\n>\n> --\n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Thu, 21 Nov 2002 02:04:34 -0500 (EST)", "msg_from": "Kris Jurka <books@ejurka.com>", "msg_from_op": false, "msg_subject": "Re: xBSD shmem doc deficiency" }, { "msg_contents": "Neil Conway wrote:\n> (c) the 'prevent shared memory paging' sysctl vaguely referred\n> to in the docs is 'kern.ipc.shm_use_phys', right?\n\nI have added a mention of this to the 7.4 docs:\n\n You might also want to use the <application>sysctl</> setting to\n lock shared memory into RAM and prevent it from being paged out\n to swap, e.g. <literal>kern.ipc.shm_use_phys</>.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 21 Nov 2002 13:20:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: xBSD shmem doc deficiency" }, { "msg_contents": "Hi Neil,\n\n> However, the FreeBSD box I'm playing with isn't mine, so I'm not too\n> keen to change sysctls (well, that and I don't have root :-) ). 
Would\n> a kind BSD user confirm that:\n>\n> (a) the sysctls above *can* be used to change kernel shared\n> memory settings, and the default value of the sysctl is\n> the kernel option referred to in the docs.\n\nUnless this has changed in 4.7, lots of those shm sysctls are\nread-only...ie. you cannot set the shared memory pool size at runtime. I'll\nlook at it again tho.\n\n> (c) the 'prevent shared memory paging' sysctl vaguely referred\n> to in the docs is 'kern.ipc.shm_use_phys', right?\n\nI'll have to investigate that one...\n\nChris\n\n\n", "msg_date": "Thu, 21 Nov 2002 11:18:20 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: xBSD shmem doc deficiency" } ]
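For readers cross-checking the figures in the thread above: the kernel option `SHMMAXPGS` is expressed in pages, while the `kern.ipc.shmmax` sysctl is reported in bytes, so the two are related by the platform page size (4 kB on i386, which these numbers assume). A small conversion sketch in Python; the helper names are invented:

```python
PAGE_SIZE = 4096  # bytes per page, the usual i386 FreeBSD page size

def shmmaxpgs_to_bytes(pages):
    """SHMMAXPGS kernel option (pages) -> kern.ipc.shmmax (bytes)."""
    return pages * PAGE_SIZE

def bytes_to_shmmaxpgs(nbytes):
    """kern.ipc.shmmax (bytes) -> smallest SHMMAXPGS that covers it."""
    return -(-nbytes // PAGE_SIZE)  # ceiling division

# kern.ipc.shmall above is 8192 (pages), consistent with
# kern.ipc.shmmax: 33554432 (bytes) on the same box:
print(shmmaxpgs_to_bytes(8192))   # 33554432
# The documentation example "options SHMMAXPGS=4096" therefore
# allows a 16 MB shared memory segment:
print(shmmaxpgs_to_bytes(4096))   # 16777216
```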
[ { "msg_contents": "I'm a new comer of PostgreSQL, could anyone tell me can I view the\ncontent of logs( updating a tuple, etc.) ? And if I can, how to do it?\n\n \n\nThx!", "msg_date": "Thu, 21 Nov 2002 15:45:31 +0800", "msg_from": "\"WangYuan\" <wangyuanzju@hotmail.com>", "msg_from_op": true, "msg_subject": "How can I view the content of Logs?" }, { "msg_contents": "WangYuan wrote:\n> I'm a new comer of PostgreSQL, could anyone tell me can I view the\n> content of logs( updating a tuple, etc.) ? And if I can, how to do it?\n\n[ posts like this should be in general or admin lists.]\n\nDepending on how you start your postmaster, there should be a log file\nsomewhere, and postgresql.conf file allows you to control how much\ndetail goes into that file.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 12:50:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How can I view the content of Logs?" } ]
[ { "msg_contents": "Hi,\n\nIs there any reason why the grolist field in the table pg_group is\nimplemented as an array and not as a separate table?\n\nAccording to the documentation:\n\n<quote source=\"Postgresql 7.2 User Manual, chapter 6 near the end\">\nArrays are not sets; using arrays in the manner described in the previous\nparagraph is often a sign of database misdesign.\n</quote>\n\nI have trouble implementing a way to easily check whether a user is part\nof a group. (I use Apache::AuthDBI to implement authentication and wanted\nto make a view with columns username, userid , groupname. And installing\nthe contrib/array give's me a postgresql that is different from all the\nothers :-(\n\n-- \n__________________________________________________\n\"Nothing is as subjective as reality\"\nReinoud van Leeuwen reinoud.v@n.leeuwen.net\nhttp://www.xs4all.nl/~reinoud\n__________________________________________________\n", "msg_date": "Thu, 21 Nov 2002 14:28:35 +0100", "msg_from": "Reinoud van Leeuwen <reinoud@xs4all.nl>", "msg_from_op": true, "msg_subject": "Why an array in pg_group?" }, { "msg_contents": "Reinoud van Leeuwen <reinoud@xs4all.nl> writes:\n> Is there any reason why the grolist field in the table pg_group is\n> implemented as an array and not as a separate table?\n\nIt's easier to cache a single entry per group in the GRONAME and GROSYSID\nsyscaches than a bunch of them. The design is optimized for the\nneeds of the system's internal permissions-checking code, not for\nthe convenience of people trying to interrogate pg_group in SQL.\n\n> I have trouble implementing a way to easily check whether a user is part\n> of a group.\n\nPerhaps you could create a table that has no purpose except to be a\npermissions-check target, and set it up to have permissions granted only\nto the group you care about. 
Then use has_table_privilege().\n\nIn the long run I'd have no objection to adding an is_group_member()\nfunction (need a better choice of name, perhaps) to cater to this sort\nof request. Too late for 7.3 though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 10:41:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why an array in pg_group? " }, { "msg_contents": "Tom Lane wrote:\n> > I have trouble implementing a way to easily check whether a user is part\n> > of a group.\n> \n> Perhaps you could create a table that has no purpose except to be a\n> permissions-check target, and set it up to have permissions granted only\n> to the group you care about. Then use has_table_privilege().\n> \n> In the long run I'd have no objection to adding an is_group_member()\n> function (need a better choice of name, perhaps) to cater to this sort\n> of request. Too late for 7.3 though.\n\nI believe Joe Conway already coded that, but it didn't make it into 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 21 Nov 2002 14:17:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why an array in pg_group?" } ]
[ { "msg_contents": "Hi!\n\nI'm new on this list, my name is David Pradier, and i'm french.\n\nI'm currently trying the new postgresql 7.3rc1, and i've noticed that if\ni compared an integer to an empty string, i ran in an error.\n\nExample :\n=# select nom_comm from operation where id_operation = '';\nERROR: pg_atoi: zero-length string\n\n\\d operation gives :\nid_operation | integer | not null default nextval('\"operation_type_id_operation_seq\"'::text)\n\nIs this a bug or a feature of the new 7.3 version ?\nIs there a purpose ?\n\nThanks for your help.\n\nBest regards,\nDavid.\n-- \ndpradier@apartia.fr\n", "msg_date": "Thu, 21 Nov 2002 17:30:10 +0100", "msg_from": "David Pradier <dpradier@apartia.fr>", "msg_from_op": true, "msg_subject": "Error when comparing an integer to an empty string." }, { "msg_contents": "On Thu, Nov 21, 2002 at 17:30:10 +0100,\n David Pradier <dpradier@apartia.fr> wrote:\n> Hi!\n> \n> I'm new on this list, my name is David Pradier, and i'm french.\n> \n> I'm currently trying the new postgresql 7.3rc1, and i've noticed that if\n> i compared an integer to an empty string, i ran in an error.\n> \n> Is this a bug or a feature of the new 7.3 version ?\n> Is there a purpose ?\n\nWhat number do you expect '' to represent?\n\nProbably you either want to use:\n= '0'\nor\nis null\ndepending on what you are really trying to do.\n", "msg_date": "Thu, 21 Nov 2002 11:07:55 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Error when comparing an integer to an empty string." 
}, { "msg_contents": "On Thu, Nov 21, 2002 at 11:07:55AM -0600, Bruno Wolff III wrote:\n> On Thu, Nov 21, 2002 at 17:30:10 +0100,\n> David Pradier <dpradier@apartia.fr> wrote:\n> > Hi!\n> > \n> > I'm new on this list, my name is David Pradier, and i'm french.\n> > \n> > I'm currently trying the new postgresql 7.3rc1, and i've noticed that if\n> > i compared an integer to an empty string, i ran in an error.\n> > \n> > Is this a bug or a feature of the new 7.3 version ?\n> > Is there a purpose ?\n> \n> What number do you expect '' to represent?\n> \n> Probably you either want to use:\n> = '0'\n> or\n> is null\n> depending on what you are really trying to do.\n\nThe point David was trying to make is: \n\nwith 7.2:\n\n\ttemplate1=# select 1 = '';\n\t ?column? \n\t----------\n\t f\n\t(1 row)\n\n\nwith 7.3rc1:\n\n\ttemplate1=# select 1 = '';\n\tERROR: pg_atoi: zero-length string\n\n\nIs this change of behavior intentional?\n\n-- \n HIPPOLYTE: Tr�z�ne m'ob�it. Les campagnes de Cr�te\n Offrent au fils de Ph�dre une riche retraite.\n (Ph�dre, J-B Racine, acte 2, sc�ne 2)\n", "msg_date": "Thu, 21 Nov 2002 18:43:44 +0100", "msg_from": "Louis-David Mitterrand <vindex@apartia.org>", "msg_from_op": false, "msg_subject": "Re: Error when comparing an integer to an empty string." }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> with 7.2:\n> \n> \ttemplate1=# select 1 = '';\n> \t ?column? 
\n> \t----------\n> \t f\n> \t(1 row)\n> \n> \n> with 7.3rc1:\n> \n> \ttemplate1=# select 1 = '';\n> \tERROR: pg_atoi: zero-length string\n> \n> \n> Is this change of behavior intentional?\n\nYes.\n\nI raised it as a possible point of backwards incompatibility at the\ntime the change was made, but the consensus was that this behavior was\nworth getting rid of.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "21 Nov 2002 13:16:50 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Error when comparing an integer to an empty string." }, { "msg_contents": "Louis-David Mitterrand <vindex@apartia.org> writes:\n> The point David was trying to make is: \n> with 7.2:\n> \ttemplate1=# select 1 = '';\n> \t ?column? \n> \t----------\n> \t f\n> \t(1 row)\n\n> with 7.3rc1:\n> \ttemplate1=# select 1 = '';\n> \tERROR: pg_atoi: zero-length string\n\n> Is this change of behavior intentional?\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 14:14:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error when comparing an integer to an empty string. 
" }, { "msg_contents": "> > i compared an integer to an empty string, i ran in an error.\n> > Is this a bug or a feature of the new 7.3 version ?\n> > Is there a purpose ?\n> \n> What number do you expect '' to represent?\n> Probably you either want to use:\n> = '0'\n> or\n> is null\n> depending on what you are really trying to do.\n\nIt's because it comes from a perl building of the request.\nTypically : '$youpee'\nWhen $youpee is undef, it's no problem in 7.2, and the request returns\nfalse.\nNow it raises an error.\n(Ok, i've understood it was on purpose ; i give these info only for the\nbackground :-)\n\nBest regards,\nDavid\n\n-- \ndpradier@apartia.fr\n", "msg_date": "Fri, 22 Nov 2002 10:02:42 +0100", "msg_from": "David Pradier <dpradier@apartia.fr>", "msg_from_op": true, "msg_subject": "Re: Error when comparing an integer to an empty string." } ]
[ { "msg_contents": "\nAre those two syntaxes eqivalent ?\n\nselect * from users where monitored;\nselect * from users where monitored=true;\n\nIf the answer is yes, the optimimer probably doesn't agree with you :)\n\nTested on RC1:\n\ntemplate1=# create table a (a boolean, b text);\nCREATE TABLE\n\n\n.... inserted ~18000 rows with just one true (just to make an index scan \n meaningful)....\n\ntemplate1=# vacuum analyze a;\nVACUUM\ntemplate1=# explain select * from a where a;\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on a (cost=0.00..802.64 rows=1 width=11)\n Filter: a\n(2 rows)\n\ntemplate1=# explain select * from a where a=true;\n QUERY PLAN\n--------------------------------------------------------------\n Index Scan using a_a on a (cost=0.00..2.01 rows=1 width=11)\n Index Cond: (a = true)\n(2 rows)\n\nBye!\n\n-- \n Daniele Orlandi\n Planet Srl\n\n", "msg_date": "Thu, 21 Nov 2002 19:39:19 +0100", "msg_from": "Daniele Orlandi <daniele@orlandi.com>", "msg_from_op": true, "msg_subject": "Optimizer & boolean syntax" }, { "msg_contents": "Using the famous WAG tech, in your first query the optimizer has to\nevaluate monitored for each record to determine its value.\n\nRobert Treat\n\nOn Thu, 2002-11-21 at 13:39, Daniele Orlandi wrote:\n> \n> Are those two syntaxes eqivalent ?\n> \n> select * from users where monitored;\n> select * from users where monitored=true;\n> \n> If the answer is yes, the optimimer probably doesn't agree with you :)\n> \n> Tested on RC1:\n> \n> template1=# create table a (a boolean, b text);\n> CREATE TABLE\n> \n> \n> .... 
inserted ~18000 rows with just one true (just to make an index scan \n> meaningful)....\n> \n> template1=# vacuum analyze a;\n> VACUUM\n> template1=# explain select * from a where a;\n> QUERY PLAN\n> ----------------------------------------------------\n> Seq Scan on a (cost=0.00..802.64 rows=1 width=11)\n> Filter: a\n> (2 rows)\n> \n> template1=# explain select * from a where a=true;\n> QUERY PLAN\n> --------------------------------------------------------------\n> Index Scan using a_a on a (cost=0.00..2.01 rows=1 width=11)\n> Index Cond: (a = true)\n> (2 rows)\n> \n> Bye!\n> \n> -- \n> Daniele Orlandi\n> Planet Srl\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n", "msg_date": "21 Nov 2002 16:13:54 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "On Thu, 21 Nov 2002, Daniele Orlandi wrote:\n\n> Are those two syntaxes eqivalent ?\n>\n> select * from users where monitored;\n> select * from users where monitored=true;\n>\n> If the answer is yes, the optimimer probably doesn't agree with you :)\n\nThat depends on the definition of equivalent. 
They presumably give the\nsame answer (I'm assuming monitored is a boolean), but the latter has\nsomething that's considered an indexable condition and I believe the\nformer does not (even with enable_seqscan=off the former syntax\nappears to give a sequence scan, usually a good sign it's not considered\nindexable).\n\n", "msg_date": "Thu, 21 Nov 2002 13:38:30 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "> > Are those two syntaxes eqivalent ?\n> >\n> > select * from users where monitored;\n> > select * from users where monitored=true;\n> >\n> > If the answer is yes, the optimimer probably doesn't agree with you :)\n>\n> That depends on the definition of equivalent. They presumably give the\n> same answer (I'm assuming monitored is a boolean), but the latter has\n> something that's considered an indexable condition and I believe the\n> former does not (even with enable_seqscan=off the former syntax\n> appears to give a sequence scan, usually a good sign it's not considered\n> indexable).\n\nI think his point is that they _should_ be equivalent. Surely there's\nsomething in the optimiser that discards '=true' stuff, like 'a=a' should be\ndiscarded?\n\nChris\n\n\n", "msg_date": "Thu, 21 Nov 2002 13:59:53 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "\nOn Thu, 21 Nov 2002, Christopher Kings-Lynne wrote:\n\n> > > Are those two syntaxes eqivalent ?\n> > >\n> > > select * from users where monitored;\n> > > select * from users where monitored=true;\n> > >\n> > > If the answer is yes, the optimimer probably doesn't agree with you :)\n> >\n> > That depends on the definition of equivalent. 
They presumably give the\n> > same answer (I'm assuming monitored is a boolean), but the latter has\n> > something that's considered an indexable condition and I believe the\n> > former does not (even with enable_seqscan=off the former syntax\n> > appears to give a sequence scan, usually a good sign it's not considered\n> > indexable).\n>\n> I think his point is that they _should_ be equivalent. Surely there's\n> something in the optimiser that discards '=true' stuff, like 'a=a' should be\n> discarded?\n\nI figure that's what he meant, but it isn't what was said. ;)\n\n\"col\" isn't of the general form \"indexkey op constant\" or \"constant op\nindexkey\" which I presume it's looking for given the comments in\nindxpath.c. I'm not sure what the best way to make it work would be given\nthat presumably we'd want to make col IS TRUE/FALSE use an index at the\nsame time (since that appears to not do so as well).\n\n\n", "msg_date": "Thu, 21 Nov 2002 14:33:26 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "> > I think his point is that they _should_ be equivalent. Surely there's\n> > something in the optimiser that discards '=true' stuff, like 'a=a'\nshould be\n> > discarded?\n>\n> I figure that's what he meant, but it isn't what was said. ;)\n>\n> \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> indexkey\" which I presume it's looking for given the comments in\n> indxpath.c. 
I'm not sure what the best way to make it work would be given\n> that presumably we'd want to make col IS TRUE/FALSE use an index at the\n> same time (since that appears to not do so as well).\n\nNot that I see the point of indexing booleans, but hey :)\n\nChris\n\n\n", "msg_date": "Thu, 21 Nov 2002 14:45:34 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "On Thu, Nov 21, 2002 at 02:45:34PM -0800, Christopher Kings-Lynne wrote:\n> > > I think his point is that they _should_ be equivalent. Surely there's\n> > > something in the optimiser that discards '=true' stuff, like 'a=a'\n> should be\n> > > discarded?\n\n> Not that I see the point of indexing booleans, but hey :)\n\nIf one of the values is much more infrequent than the other, you can\nprobably get a substantial win using a partial index, can't you?\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Everybody understands Mickey Mouse. Few understand Hermann Hesse.\nHardly anybody understands Einstein. And nobody understands Emperor Norton.\"\n", "msg_date": "Thu, 21 Nov 2002 19:55:54 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "On Thu, 21 Nov 2002, Christopher Kings-Lynne wrote:\n\n> > > I think his point is that they _should_ be equivalent. Surely there's\n> > > something in the optimiser that discards '=true' stuff, like 'a=a'\n> should be\n> > > discarded?\n> >\n> > I figure that's what he meant, but it isn't what was said. ;)\n> >\n> > \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> > indexkey\" which I presume it's looking for given the comments in\n> > indxpath.c. 
I'm not sure what the best way to make it work would be given\n> > that presumably we'd want to make col IS TRUE/FALSE use an index at the\n> > same time (since that appears to not do so as well).\n> \n> Not that I see the point of indexing booleans, but hey :)\n\nWhile full indexes do seem futile, partial indexes can be quite useful.\n\nselect articles from forum where approved is false\n\nif 99.9% of all articles are approved would be quite common.\n\n", "msg_date": "Thu, 21 Nov 2002 15:59:39 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "> > Not that I see the point of indexing booleans, but hey :)\n> \n> If one of the values is much more infrequent than the other, you can\n> probably get a substantial win using a partial index, can't you?\n\nYes, I thought of the partial index after I wrote that email :)\n\nChris\n\n\n", "msg_date": "Thu, 21 Nov 2002 15:01:10 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "On Thu, 21 Nov 2002, Christopher Kings-Lynne wrote:\n\n> > > I think his point is that they _should_ be equivalent. Surely there's\n> > > something in the optimiser that discards '=true' stuff, like 'a=a'\n> should be\n> > > discarded?\n> >\n> > I figure that's what he meant, but it isn't what was said. ;)\n> >\n> > \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> > indexkey\" which I presume it's looking for given the comments in\n> > indxpath.c. 
I'm not sure what the best way to make it work would be given\n> > that presumably we'd want to make col IS TRUE/FALSE use an index at the\n> > same time (since that appears to not do so as well).\n> \n> Not that I see the point of indexing booleans, but hey :)\n\nalso, in reference to my last message, even if the % was 50/50, if the \ntable was such that the bool was in a table next to a text field with 20k \nor text in it, an index on the bool would be much faster to go through \nthan to seq scan the table.\n\n", "msg_date": "Thu, 21 Nov 2002 16:01:28 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "> > > \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> > > indexkey\" which I presume it's looking for given the comments in\n> > > indxpath.c. I'm not sure what the best way to make it work would be\ngiven\n> > > that presumably we'd want to make col IS TRUE/FALSE use an index at\nthe\n> > > same time (since that appears to not do so as well).\n> >\n> > Not that I see the point of indexing booleans, but hey :)\n>\n> also, in reference to my last message, even if the % was 50/50, if the\n> table was such that the bool was in a table next to a text field with 20k\n> or text in it, an index on the bool would be much faster to go through\n> than to seq scan the table.\n\nHmmm...I'm not sure about that. 
Postgres's storage strategry with text will\nbe to keep it in a side table (or you can use ALTER TABLE/SET STORAGE) and\nit will only be retrieved if it's in the select parameters.\n\nChris\n\n", "msg_date": "Thu, 21 Nov 2002 15:02:41 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "\nOn Thu, 21 Nov 2002, Christopher Kings-Lynne wrote:\n\n> > > > \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> > > > indexkey\" which I presume it's looking for given the comments in\n> > > > indxpath.c. I'm not sure what the best way to make it work would be\n> given\n> > > > that presumably we'd want to make col IS TRUE/FALSE use an index at\n> the\n> > > > same time (since that appears to not do so as well).\n> > >\n> > > Not that I see the point of indexing booleans, but hey :)\n> >\n> > also, in reference to my last message, even if the % was 50/50, if the\n> > table was such that the bool was in a table next to a text field with 20k\n> > or text in it, an index on the bool would be much faster to go through\n> > than to seq scan the table.\n>\n> Hmmm...I'm not sure about that. Postgres's storage strategry with text will\n> be to keep it in a side table (or you can use ALTER TABLE/SET STORAGE) and\n> it will only be retrieved if it's in the select parameters.\n\nTrue, but replace that text with 1500 integers. 
:)\n\nThe only problem with the partial index solution is that it seems to still\nonly work for the same method of asking for the result, so if you make an\nindex where col=true, using col IS TRUE or col in a query doesn't seem to\nuse it.\n\n", "msg_date": "Thu, 21 Nov 2002 15:47:14 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "Stephan Szabo wrote:\n> On Thu, 21 Nov 2002, Daniele Orlandi wrote:\n> \n> \n>>Are those two syntaxes equivalent ?\n>>\n>>select * from users where monitored;\n>>select * from users where monitored=true;\n>>\n>>If the answer is yes, the optimizer probably doesn't agree with you :)\n> \n> \n> That depends on the definition of equivalent.\n\nBy equivalent I mean \"means the same thing, so behaves in the same way\".\n\nI consider the former syntax to be cleaner and I would tend to use it \nmost of the time.\n\nFor what concerns partial indexes, I agree, it's a better approach for \nthis kind of indexing and I did some tests:\n\n-------------------------\nctonet=# create index users_monitored on users (monitored) where monitored;\nCREATE\nctonet=# explain select * from users where monitored;\nNOTICE: QUERY PLAN:\n\nIndex Scan using users_monitored on users (cost=0.00..9.44 rows=6 \nwidth=186)\n\nEXPLAIN\n\nNice, it appears to use the index, but:\n\n\nctonet=# explain select * from users where monitored=true;\nNOTICE: QUERY PLAN:\n\nSeq Scan on users (cost=0.00..8298.84 rows=59 width=186)\n\nEXPLAIN\n-------------------------\n\nThe problem is the opposite... 
so, effectively, it seems that the optimizer \nconsiders \"monitored\" and \"monitored=true\" as two different expressions...\n\nThe vice versa is analogous, and we can also see that the syntax \"monitored \nis true\" is considered different from the other two syntaxes:\n\n-----------------------\nctonet=# drop index users_monitored;\nDROP\nctonet=# create index users_monitored on users (monitored) where \nmonitored=true;\nCREATE\nctonet=# explain select * from users where monitored=true;\nNOTICE: QUERY PLAN:\n\nIndex Scan using users_monitored on users (cost=0.00..9.45 rows=6 \nwidth=186)\n\nEXPLAIN\nctonet=# explain select * from users where monitored;\nNOTICE: QUERY PLAN:\n\nSeq Scan on users (cost=0.00..8077.07 rows=59 width=186)\n\nEXPLAIN\n\nctonet=# create index users_monitored on users (monitored) where \nmonitored=true;\nCREATE\nctonet=# explain select * from users where monitored is true;\nNOTICE: QUERY PLAN:\n\nSeq Scan on users (cost=0.00..8077.07 rows=59 width=186)\n\nEXPLAIN\n-------------------------\n\nWhat I propose is that all those syntaxes are made equivalent (by, for \nexample, rewriting boolean comparisons to a common form) in order to \nhave more consistent index usage.\n\nBye!\n\n-- \n Daniele Orlandi\n Planet Srl\n\n", "msg_date": "Fri, 22 Nov 2002 03:08:48 +0100", "msg_from": "Daniele Orlandi <daniele@orlandi.com>", "msg_from_op": true, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "Daniele Orlandi <daniele@orlandi.com> writes:\n> The problem is the opposite... so, effectively, it seems that the optimizer \n> considers \"monitored\" and \"monitored=true\" as two different expressions...\n\nCheck.\n\n> The vice versa is analogous, and we can also see that the syntax \"monitored \n> is true\" is considered different from the other two syntaxes:\n\nAs it should be.\n\n> What I propose is that all those syntaxes are made equivalent\n\nOnly two of them are logically equivalent. 
Consider NULL.\n\nEven for the first two, assuming equivalence requires hard-wiring an\nassumption about the behavior of the \"bool = bool\" operator; which is\na user-redefinable operator. I'm not totally comfortable with the idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 21:50:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax " }, { "msg_contents": "On Thu, 21 Nov 2002, Stephan Szabo wrote:\n\n> \n> On Thu, 21 Nov 2002, Christopher Kings-Lynne wrote:\n> \n> > > > > > \"col\" isn't of the general form \"indexkey op constant\" or \"constant op\n> > > > > > indexkey\" which I presume it's looking for given the comments in\n> > > > > > indxpath.c. I'm not sure what the best way to make it work would be\n> > given\n> > > > > > that presumably we'd want to make col IS TRUE/FALSE use an index at\n> > the\n> > > > > > same time (since that appears to not do so as well).\n> > > > >\n> > > > > Not that I see the point of indexing booleans, but hey :)\n> > > >\n> > > > also, in reference to my last message, even if the % was 50/50, if the\n> > > > table was such that the bool was in a table next to a text field with 20k\n> > > > of text in it, an index on the bool would be much faster to go through\n> > > > than to seq scan the table.\n> > >\n> > > Hmmm...I'm not sure about that. Postgres's storage strategy with text will\n> > > be to keep it in a side table (or you can use ALTER TABLE/SET STORAGE) and\n> > > it will only be retrieved if it's in the select parameters.\n> > \n> > True, but replace that text with 1500 integers. :)\n> > \n> > The only problem with the partial index solution is that it seems to still\n> > only work for the same method of asking for the result, so if you make an\n> > index where col=true, using col IS TRUE or col in a query doesn't seem to\n> > use it.\n\nTrue. 
I always use the syntax:\n\nselect * from table where field IS TRUE\n\nOR IS FALSE for consistency.\n\n", "msg_date": "Fri, 22 Nov 2002 08:59:27 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer & boolean syntax" }, { "msg_contents": "Tom Lane wrote:\n> \n> Only two of them are logically equivalent. Consider NULL.\n\nOhhh IS NOT TRUE or IS NOT FALSE also match NULL, I never knew this :)\n\n> Even for the first two, assuming equivalence requires hard-wiring an\n> assumption about the behavior of the \"bool = bool\" operator; which is\n> a user-redefinable operator.\n > I'm not totally comfortable with the idea.\n\nOk, I see your point and the problems that may arise, but I hope you \nwill agree with me that from the point of view of the user, both clauses \nhave the same meaning and the index usage should be consistent with it.\n\nUnfortunately I don't know PostgreSQL internals very well, so I may be \ntalking nonsense, but wouldn't it be possible to treat any \nevaluation of a bool expression as being in the form of bool=bool with true as \nthe second 'bool'[1] ? At least as a TODO item ?\n\n\nThanks!\nBye!\n\n[1] Possibly including the \"var IS TRUE\" and \"var IS FALSE\" (not var \nIS NOT ...) cases, which already are special syntax cases if I am not wrong.\n\n-- \n Daniele Orlandi\n Planet Srl\n\n", "msg_date": "Sun, 24 Nov 2002 04:27:14 +0100", "msg_from": "Daniele Orlandi <daniele@orlandi.com>", "msg_from_op": true, "msg_subject": "Re: Optimizer & boolean syntax" } ]
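Tom's "consider NULL" caveat is easy to demonstrate concretely. Below is a minimal sketch in Python against SQLite — used only because that driver ships with the standard library; the integers 1/0/NULL stand in for PostgreSQL's bool, and SQLite's null-safe `IS NOT` plays the role of an `IS [NOT] TRUE`-style predicate:

```python
import sqlite3

# Three "boolean" rows: true, false, and unknown (NULL).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (monitored INTEGER)")  # 1/0/NULL standing in for bool
db.executemany("INSERT INTO users VALUES (?)", [(1,), (0,), (None,)])

def count(where: str) -> int:
    return db.execute(f"SELECT count(*) FROM users WHERE {where}").fetchone()[0]

# "monitored = 1" and "monitored <> 1" both skip the NULL row,
# because NULL compared with = or <> yields unknown:
print(count("monitored = 1"))     # 1
print(count("monitored <> 1"))    # 1  -- the NULL row matches neither test

# A null-safe negation picks up the NULL row too, so it is NOT simply
# the complement of "= 1":
print(count("monitored IS NOT 1"))  # 2
```

With a NULL in play, the two `=`-style counts together still miss a row, while the null-safe form catches it — which is why only the bare-column and `= true` spellings are even candidates for rewriting to a common form, and `IS TRUE`/`IS NOT TRUE` must keep their own semantics.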
[ { "msg_contents": "There had been a great deal of discussion of how to improve the\nperformance of select/sorting on this list, what about\ninsert/delete/update?\n\nAre there any rules of thumb we need to follow? What are the parameters\nwe should tweak to whip the horse to go faster?\n\nThanks\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "21 Nov 2002 15:54:03 -0500", "msg_from": "Wei Weng <wweng@kencast.com>", "msg_from_op": true, "msg_subject": "performance of insert/delete/update" }, { "msg_contents": "Wei,\n\n> There had been a great deal of discussion of how to improve the\n> performance of select/sorting on this list, what about\n> insert/delete/update?\n> \n> Are there any rules of thumb we need to follow? What are the\n> parameters\n> we should tweak to whip the horse to go faster?\n\nyes, lots of rules. Wanna be more specific? You wondering about\nquery structure, hardware, memory config, what?\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 13:23:57 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On 21 Nov 2002, Wei Weng wrote:\n\n> On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:\n> > Wei,\n> > \n> > > There had been a great deal of discussion of how to improve the\n> > > performance of select/sorting on this list, what about\n> > > insert/delete/update?\n> > > \n> > > Are there any rules of thumb we need to follow? What are the\n> > > parameters\n> > > we should tweak to whip the horse to go faster?\n> > \n> > yes, lots of rules. Wanna be more specific? You wondering about\n> > query structure, hardware, memory config, what?\n> I am most concerned about the software side, that is query structures\n> and postgresql config.\n\nThe absolutely most important thing to do to speed up inserts and updates \nis to squeeze as many as you can into one transaction. Within reason, of \ncourse. 
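In client code, the batching being described looks roughly like the sketch below. It uses SQLite purely because that driver ships with Python's standard library; the BEGIN/COMMIT structure is the point, and against PostgreSQL that structure is what turns thousands of per-statement commits into a handful:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (id INTEGER, note TEXT)")

rows = [(i, f"note {i}") for i in range(10_000)]

# Anti-pattern: one transaction (and, on PostgreSQL, one WAL flush) per row:
#   for r in rows:
#       db.execute("INSERT INTO log VALUES (?, ?)", r)
#       db.commit()

# Batched: one BEGIN/COMMIT around each block of a thousand statements.
batch_size = 1000
for start in range(0, len(rows), batch_size):
    with db:  # the connection context manager opens a transaction, commits on exit
        db.executemany("INSERT INTO log VALUES (?, ?)",
                       rows[start:start + batch_size])

print(db.execute("SELECT count(*) FROM log").fetchone()[0])  # 10000
```

Each `with db:` block is one transaction, so ten thousand rows cost ten commits instead of ten thousand.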
There's no great gain in putting more than a few thousand \ntogether at a time. If your application is only doing one or two updates \nin a transaction, it's going to be slower in terms of records written per \nsecond than an application that is updating 100 rows in a transaction.\n\nReducing triggers and foreign keys on the inserted tables to a minimum \nhelps.\n\nInserting into temporary holding tables and then having a regular process \nthat migrates the data into the main tables is sometimes necessary if \nyou're putting a lot of smaller inserts into a very large dataset. \nThen using a unioned view to show the two tables as one.\n\nPutting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).\n\nPutting indexes that have to be updated during inserts onto their own \ndrive(s).\n\nPerforming regular vacuums on heavily updated tables.\n\nAlso, if your hardware is reliable, you can turn off fsync in \npostgresql.conf. That can increase performance by anywhere from 2 to 10 \ntimes, depending on your application.\n\n\n", "msg_date": "Thu, 21 Nov 2002 14:49:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:\n> Wei,\n> \n> > There had been a great deal of discussion of how to improve the\n> > performance of select/sorting on this list, what about\n> > insert/delete/update?\n> > \n> > Are there any rules of thumb we need to follow? What are the\n> > parameters\n> > we should tweak to whip the horse to go faster?\n> \n> yes, lots of rules. Wanna be more specific? 
You wondering about\n> query structure, hardware, memory config, what?\nI am most concerned about the software side, that is query structures\nand postgresql config.\n\nThanks\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "21 Nov 2002 17:23:57 -0500", "msg_from": "Wei Weng <wweng@kencast.com>", "msg_from_op": true, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> The absolutely most important thing to do to speed up inserts and\n> updates \n> is to squeeze as many as you can into one transaction. Within\n> reason, of \n> course. There's no great gain in putting more than a few thousand \n> together at a time. If your application is only doing one or two\n> updates \n> in a transaction, it's going to be slower in terms of records written\n> per \n> second than an application that is updating 100 rows in a\n> transaction.\n\nThis only works up to the limit of the memory you have available for\nPostgres. If the updates in one transaction exceed your available\nmemory, you'll see a lot of swaps to disk log that will slow things\ndown by a factor of 10-50 times.\n\n> Reducing triggers and foreign keys on the inserted tables to a\n> minimum \n> helps.\n\n... provided that this will not jeopardize your data integrity. If you\nhave indispensable triggers in PL/pgSQL, rewriting them in C will make\nthem, and thus updates on their tables, faster.\n\nAlso, for foreign keys, it speeds up inserts and updates on parent\ntables with many child records if the foreign key column in the child\ntable is indexed.\n\n> Putting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).\n> \n> Putting indexes that have to be updated during inserts onto their own\n> \n> drive(s).\n> \n> Performing regular vacuums on heavily updated tables.\n> \n> Also, if your hardware is reliable, you can turn off fsync in \n> postgresql.conf. 
That can increase performance by anywhere from 2 to\n> 10 \n> times, depending on your application.\n\nIt can be dangerous though ... in the event of a power outage, for\nexample, your database could be corrupted and difficult to recover. So\n... \"at your own risk\".\n\nI've found that switching from fsync to fdatasync on Linux yields\nmarginal performance gain ... about 10-20%.\n\nAlso, if you are doing large updates (many records at once) you may\nwant to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to\nallow for large transactions.\n\nFinally, you want to structure your queries so that you do the minimum\nnumber of update writes possible, or insert writes. For example, a\nprocedure that inserts a row, does some calculations, and then modifies\nseveral fields in that row is going to slow stuff down significantly\ncompared to doing the calculations as variables and only a single\ninsert. Certainly don't hit a table with 8 updates, each updating one\nfield instead of a single update statement.\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 14:26:40 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 21 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > The absolutely most important thing to do to speed up inserts and\n> > updates \n> > is to squeeze as many as you can into one transaction. Within\n> > reason, of \n> > course. There's no great gain in putting more than a few thousand \n> > together at a time. If your application is only doing one or two\n> > updates \n> > in a transaction, it's going to be slower in terms of records written\n> > per \n> > second than an application that is updating 100 rows in a\n> > transaction.\n> \n> This only works up to the limit of the memory you have available for\n> Postgres. 
If the updates in one transaction exceed your available\n> memory, you'll see a lot of swaps to disk log that will slow things\n> down by a factor of 10-50 times.\n\nSorry, but that isn't true. MVCC means we don't have to hold all the data \nin memory, we can have multiple versions of the same tuples on disk, and \nuse memory for what it's meant for, buffering.\n\nThe performance gain \ncomes from the fact that postgresql doesn't have to perform the data \nconsistency checks needed during an insert until after all the rows are \ninserted, and it can \"gang check\" them.\n\n> > Reducing triggers and foreign keys on the inserted tables to a\n> > minimum \n> > helps.\n> \n> ... provided that this will not jeopardize your data integrity. If you\n> have indispensable triggers in PL/pgSQL, rewriting them in C will make\n> them, and thus updates on their tables, faster.\n\nAgreed. But you've probably seen the occasional \"I wasn't sure if we \nneeded that check or not, so I threw it in just in case\" kind of database \ndesign. :-)\n\nI definitely don't advocate just tossing all your FKs to make it run \nfaster. \n\nAlso note that many folks have replaced foreign keys with triggers and \ngained in performance, as fks in pgsql still have some deadlock issues to \nbe worked out.\n\n> Also, for foreign keys, it speeds up inserts and updates on parent\n> tables with many child records if the foreign key column in the child\n> table is indexed.\n\nAbsolutely.\n\n> > Putting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).\n> > \n> > Putting indexes that have to be updated during inserts onto their own\n> > \n> > drive(s).\n> > \n> > Performing regular vacuums on heavily updated tables.\n> > \n> > Also, if your hardware is reliable, you can turn off fsync in \n> > postgresql.conf. 
in the event of a power outage, for\n> example, your database could be corrupted and difficult to recover. So\n> ... \"at your own risk\".\n\nNo, the database will not be corrupted, at least not in my experience. \nHowever, you MAY lose data from transactions that you thought were \ncommitted. I think Tom posted something about this a few days back.\n\n> I've found that switching from fsync to fdatasync on Linux yields\n> marginal performance gain ... about 10-20%.\n\nI'll have to try that.\n\n> Also, if you are doing large updates (many records at once) you may\n> want to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to\n> allow for large transactions.\n\nActually, postgresql will create more WAL files if it needs to in order to handle \nthe size of a transaction. BUT, it won't create extra ones for heavier \nparallel load without being told to. I've inserted 100,000 rows at a \ntime with no problem on a machine with only 1 WAL file specified, and it \ndidn't burp. It does run faster having multiple wal files when under \nparallel load.\n\n> Finally, you want to structure your queries so that you do the minimum\n> number of update writes possible, or insert writes. For example, a\n> procedure that inserts a row, does some calculations, and then modifies\n> several fields in that row is going to slow stuff down significantly\n> compared to doing the calculations as variables and only a single\n> insert. Certainly don't hit a table with 8 updates, each updating one\n> field instead of a single update statement.\n\nThis is critical, and bites many people coming from a row level locking \ndatabase to an MVCC database. In MVCC every update creates a new on disk \ntuple. 
I think someone on the list a while back was updating their \ndatabase something like this:\n\nupdate table set field1='abc' where id=1;\nupdate table set field2='def' where id=1;\nupdate table set field3='ghi' where id=1;\nupdate table set field4='jkl' where id=1;\nupdate table set field5='mno' where id=1;\nupdate table set field6='pqr' where id=1;\n\nand they had to vacuum something like every 5 minutes.\n\nAlso, things like:\n\nupdate table set field1=field1+1\n\nare killers in an MVCC database as well.\n\n", "msg_date": "Thu, 21 Nov 2002 15:54:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nScott,\n\n> > This only works up to the limit of the memory you have available for\n> > Postgres. If the updates in one transaction exceed your available\n> > memory, you'll see a lot of swaps to disk log that will slow things\n> > down by a factor of 10-50 times.\n> \n> Sorry, but that isn't true. MVCC means we don't have to hold all the data \n> in memory, we can have multiple versions of the same tuples on disk, and \n> use memory for what it's meant for, buffering.\n\nSorry, you're absolutely correct. I don't know what I was thinking of; that's the \nproblem with an off-the-cuff response.\n\nPlease disregard the previous quote. Instead:\n\nDoing several large updates in a single transaction can lower performance if \nthe number of updates is sufficient to affect index usability and a VACUUM is \nreally needed between them. For example, a series of large data \ntransformation statements on a single table or set of related tables should \nhave VACUUM statements between them, thus preventing you from putting them \nin a single transaction. \n\nExample, the series:\n1. INSERT 10,000 ROWS INTO table_a;\n2. UPDATE 100,000 ROWS IN table_a WHERE table_b;\n3. 
UPDATE 100,000 ROWS IN table_c WHERE table_a;\n\nWill almost certainly need a VACUUM or even VACUUM FULL table_a after 2), \nrequiring you to split the update series into 2 transactions. Otherwise, the \n\"where table_a\" condition in step 3) will be extremely slow.\n\n> Also note that many folks have replaced foreign keys with triggers and \n> gained in performance, as fks in pgsql still have some deadlock issues to \n> be worked out.\n\nYeah. I think Neil Conway is overhauling FKs, which everyone considers a bit \nof a hack in the current implementation, including Jan who wrote it.\n\n> > It can be dangerous though ... in the event of a power outage, for\n> > example, your database could be corrupted and difficult to recover. So\n> > ... \"at your own risk\".\n> \n> No, the database will not be corrupted, at least not in my experience. \n> However, you MAY lose data from transactions that you thought were \n> committed. I think Tom posted something about this a few days back.\n\nHmmm ... have you done this? I'd like the performance gain, but I don't want \nto risk my data integrity. I've seen some awful things in databases (such as \nduplicate primary keys) from yanking a power cord repeatedly.\n\n> update table set field1=field1+1\n> \n> are killers in an MVCC database as well.\n\nYeah -- don't I know it.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 21 Nov 2002 15:34:53 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 21 Nov 2002, Josh Berkus wrote:\n\n> Doing several large updates in a single transaction can lower performance if \n> the number of updates is sufficient to affect index usability and a VACUUM is \n> really needed between them. 
For example, a series of large data \n> transformation statements on a single table or set of related tables should \n> have VACUUM statements between them, thus preventing you from putting them \n> in a single transaction. \n> \n> Example, the series:\n> 1. INSERT 10,000 ROWS INTO table_a;\n> 2. UPDATE 100,000 ROWS IN table_a WHERE table_b;\n> 3. UPDATE 100,000 ROWS IN table_c WHERE table_a;\n> \n> Will almost certainly need a VACUUM or even VACUUM FULL table_a after 2), \n> requiring you to split the update series into 2 transactions. Otherwise, the \n> \"where table_a\" condition in step 3) will be extremely slow.\n\nVery good point. One that points out the different mindset one needs \nwhen dealing with pgsql.\n\n> > > It can be dangerous though ... in the event of a power outage, for\n> > > example, your database could be corrupted and difficult to recover. So\n> > > ... \"at your own risk\".\n> > \n> > No, the database will not be corrupted, at least not in my experience. \n> > However, you MAY lose data from transactions that you thought were \n> > committed. I think Tom posted something about this a few days back.\n> \n> Hmmm ... have you done this? I'd like the performance gain, but I don't want \n> to risk my data integrity. I've seen some awful things in databases (such as \n> duplicate primary keys) from yanking a power cord repeatedly.\n\nI have, with killall -9 postmaster, on several occasions during testing \nunder heavy parallel load. I've never had 7.2.x fail because of this.\n\n", "msg_date": "Fri, 22 Nov 2002 08:56:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> > > The absolutely most important thing to do to speed up inserts and\n> > > updates \n> > > is to squeeze as many as you can into one transaction. 
\n\nI was discussing this on IRC, and nobody could verify this assertion.\n Do you have an example of bundling multiple writes into a transaction\ngiving a performance gain?\n\n-Josh\n", "msg_date": "Fri, 22 Nov 2002 20:18:22 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Fri, 2002-11-22 at 22:18, Josh Berkus wrote:\n> Scott,\n> \n> > > > The absolutely most important thing to do to speed up inserts and\n> > > > updates \n> > > > is to squeeze as many as you can into one transaction. \n> \n> I was discussing this on IRC, and nobody could verify this assertion.\n> Do you have an example of bundling multiple writes into a transaction\n> giving a performance gain?\n\nUnfortunately, I missed the beginning of this thread, but I do\nknow that eliminating as many indexes as possible is the answer.\nIf I'm going to insert \"lots\" of rows in an off-line situation,\nthen I'll drop *all* of the indexes, load the data, then re-index.\nIf deleting \"lots\", then I'll drop all but the 1 relevant index,\nthen re-index afterwards.\n\nAs for bundling multiple statements into a transaction to increase\nperformance, I think the questions are:\n- how much disk IO does one BEGIN TRANSACTION do? If it *does*\n do disk IO, then \"bundling\" *will* be more efficient, since\n less disk IO will be performed.\n- are, for example, 500 COMMITs of small amounts of data more or \n less efficient than 1 COMMIT of a large chunk of data? On the\n proprietary database that I use at work, efficiency goes up,\n then levels off at ~100 inserts per transaction.\n\nRon\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "23 Nov 2002 09:06:00 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nRon,\n\n> As for bundling multiple statements into a transaction to increase\n> performance, I think the questions are:\n> - how much disk IO does one BEGIN TRANSACTION do? If it *does*\n> do disk IO, then \"bundling\" *will* be more efficient, since\n> less disk IO will be performed.\n> - are, for example, 500 COMMITs of small amounts of data more or \n> less efficient than 1 COMMIT of a large chunk of data? On the\n> proprietary database that I use at work, efficiency goes up,\n> then levels off at ~100 inserts per transaction.\n\nThat's because some commercial databases (MS SQL, Sybase) use an \"unwinding \ntransaction log\" method of updating. That is, during a transaction, changes \nare written only to the transaction log, and those changes are \"played\" to \nthe database only on a COMMIT. It's an approach that is more efficient for \nlarge transactions, but has the unfortunate side effect of *requiring* read \nand write row locks for the duration of the transaction.\n\nIn Postgres, with MVCC, changes are written to the database immediately with a \nnew transaction ID and the new rows are \"activated\" on COMMIT. So the \nchanges are written to the database as the statements are executed, \nregardless. 
This is less efficient for large transactions than the \n\"unwinding log\" method, but has the advantage of eliminating read locks \nentirely and most deadlock situations.\n\nUnder MVCC, then, I am not convinced that bundling a bunch of writes into one \ntransaction is faster until I see it demonstrated. I certainly see no \nperformance gain on my system.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 11:25:45 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Under MVCC, then, I am not convinced that bundling a bunch of writes into one\n> transaction is faster until I see it demonstrated. I certainly see no \n> performance gain on my system.\n\nAre you running with fsync off?\n\nThe main reason for bundling updates into larger transactions is that\neach transaction commit requires an fsync on the WAL log. If you have\nfsync enabled, it is physically impossible to commit transactions faster\nthan one per revolution of the WAL disk, no matter how small the\ntransactions. (*) So it pays to make the transactions larger, not smaller.\n\nOn my machine I see a sizable difference (more than 2x) in the rate at\nwhich simple INSERT statements are processed as separate transactions\nand as large batches --- if I have fsync on. With fsync off, nearly no\ndifference.\n\n\t\t\tregards, tom lane\n\n(*) See recent proposals from Curtis Faith in pgsql-hackers about how\nwe might circumvent that limit ... 
but it's there today.\n", "msg_date": "Sat, 23 Nov 2002 15:20:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> On my machine I see a sizable difference (more than 2x) in the rate at\n> which simple INSERT statements are processed as separate transactions\n> and as large batches --- if I have fsync on. With fsync off, nearly no\n> difference.\n\nI'm using fdatasync, which *does* perform faster than fsync on my system. \nCould this make the difference?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 12:31:50 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n>> On my machine I see a sizable difference (more than 2x) in the rate at\n>> which simple INSERT statements are processed as separate transactions\n>> and as large batches --- if I have fsync on. With fsync off, nearly no\n>> difference.\n\n> I'm using fdatasync, which *does* perform faster than fsync on my system. \n> Could this make the difference?\n\nNo; you still have to write the data and wait for the disk to spin.\n(FWIW, PG defaults to wal_sync_method = open_datasync on my system,\nand that's what I used in checking the speed just now. So I wasn't\nactually executing any fsync() calls either.)\n\nOn lots of PC hardware, the disks are configured to lie and report write\ncomplete as soon as they've accepted the data into their internal\nbuffers. 
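The one-commit-per-WAL-disk-revolution bound Tom describes is easy to turn into numbers — simple arithmetic, nothing PostgreSQL-specific:

```python
# Upper bound on single-statement transactions per minute when every
# commit must wait for the WAL platter to come around again:
def max_commits_per_minute(rpm: int) -> int:
    return rpm  # one revolution per commit -> at most RPM commits/minute

for rpm in (5400, 6000, 7200, 10_000, 15_000):
    print(f"{rpm} RPM -> {max_commits_per_minute(rpm)} commits/min "
          f"({max_commits_per_minute(rpm) / 60:.0f}/sec)")

# A measured 5700 commits/min on a 6000 RPM drive sits just under the
# bound; thousands of commits/sec from a single plain disk would mean
# the drive is acknowledging writes straight out of its cache.
```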
If you see very little difference between fsync on and fsync\noff, or if you are able to benchmark transaction rates in excess of your\ndisk's RPM, you should suspect that your disk drive is lying to you.\n\nAs an example: in testing INSERT speed on my old HP box just now,\nI got measured rates of about 16000 inserts/minute with fsync off, and\n5700/min with fsync on (for 1 INSERT per transaction). Knowing that my\ndisk drive is 6000 RPM, the latter number is credible. On my PC I get\nnumbers way higher than the disk rotation rate :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 15:41:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> As an example: in testing INSERT speed on my old HP box just now,\n> I got measured rates of about 16000 inserts/minute with fsync off, and\n> 5700/min with fsync on (for 1 INSERT per transaction). Knowing that my\n> disk drive is 6000 RPM, the latter number is credible. On my PC I get\n> numbers way higher than the disk rotation rate :-(\n\nThanks for the info. As long as I have your ear, what's your opinion on the \nrisk level of running with fsync off on a production system? I've seen a \nlot of posts on this list opining the lack of danger, but I'm a bit paranoid.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 12:48:14 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Thanks for the info. 
As long as I have your ear, what's your opinion on the \n> risk level of running with fsynch off on a production system?\n\nDepends on how much you trust your hardware, kernel, and power source.\n\nFsync off does not introduce any danger from Postgres crashes --- we\nalways write data out of userspace to the kernel before committing.\nThe question is whether writes can be relied on to get to disk once\nthe kernel has 'em.\n\nThere is a definite risk of data corruption (not just lost transactions,\nbut actively inconsistent database contents) if you suffer a\nsystem-level crash while running with fsync off. The theory of WAL\n(which remember means write *ahead* log) is that it protects you from\ndata corruption as long as WAL records always hit disk before the\nassociated changes in database data files do. Then after a crash you\ncan replay the WAL to make sure you have actually done all the changes\ndescribed by each readable WAL record, and presto you're consistent up\nto the end of the readable WAL. But if data file writes can get to disk\nin advance of their WAL record, you could have a situation where some\nbut not all changes described by a WAL record are in the database after\na system crash and recovery. This could mean incompletely applied\ntransactions, broken indexes, or who knows what.\n\nWhen you get right down to it, what we use fsync for is to force write\nordering --- Unix kernels do not guarantee write ordering any other way.\nWe use it to ensure WAL records hit disk before data file changes do.\n\nBottom line: I wouldn't run with fsync off in a mission-critical\ndatabase. 
If you're prepared to accept a risk of having to restore from\nyour last backup after a system crash, maybe it's okay.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 16:20:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> When you get right down to it, what we use fsync for is to force write\n> ordering --- Unix kernels do not guarantee write ordering any other way.\n> We use it to ensure WAL records hit disk before data file changes do.\n> \n> Bottom line: I wouldn't run with fsync off in a mission-critical\n> database. If you're prepared to accept a risk of having to restore from\n> your last backup after a system crash, maybe it's okay.\n\nThanks for that overview. Sadly, even with fsynch on, I was forced to restore \nfrom backup because the data needs to be 100% reliable and the crash was due \nto a disk lockup on a checkpoint ... beyond the ability of WAL to deal with, \nI think.\n\nOne last, last question: I was just asked a question on IRC, and I can't find \ndocs defining fsynch, fdatasynch, opensynch, and opendatasynch beyond section \n11.3 which just says that they are all synch methods. Are there docs?\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 13:29:20 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> One last, last question: I was just asked a question on IRC, and I\n> can't find docs defining fsynch, fdatasynch, opensynch, and\n> opendatasynch beyond section 11.3 which just says that they are all\n> synch methods. 
Are there docs?\n\nSection 11.3 of what?\n\nThe only mention of open_datasync that I see in the docs is in the\nAdmin Guide chapter 3:\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-WAL\n\nwhich saith\n\nWAL_SYNC_METHOD (string)\n\n Method used for forcing WAL updates out to disk. Possible values\n are FSYNC (call fsync() at each commit), FDATASYNC (call\n fdatasync() at each commit), OPEN_SYNC (write WAL files with open()\n option O_SYNC), or OPEN_DATASYNC (write WAL files with open()\n option O_DSYNC). Not all of these choices are available on all\n platforms. This option can only be set at server start or in the\n postgresql.conf file.\n\nThis may not help you much to decide which to use :-(, but it does tell\nyou what they are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 16:41:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "On Fri, 22 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > > > The absolutely most important thing to do to speed up inserts and\n> > > > updates \n> > > > is to squeeze as many as you can into one transaction. \n> \n> I was discussing this on IRC, and nobody could verify this assertion.\n> Do you have an example of bunlding multiple writes into a transaction\n> giving a performance gain?\n\nYes, my own experience.\n\nIt's quite easy to test if you have a database with a large table to play \nwith, use pg_dump to dump a table with the -d switch (makes the dump use \ninsert statements.) Then, make two versions of the dump, one which has a \nbegin;end; pair around all the inserts and one that doesn't, then use psql \n-e to restore both dumps. The difference is HUGE. Around 10 to 20 times \nfaster with the begin end pairs. 
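[Not part of the original mail.] A minimal sketch of setting up that comparison, with hypothetical file and table names — the actual timing comes from restoring each file with `psql -e` and comparing wall-clock time:

```python
# Build the two dump variants described above: one where every INSERT
# commits on its own, and one where all INSERTs share a single
# BEGIN/END pair.  Table and file names here are hypothetical.

def wrap_in_single_transaction(dump_sql: str) -> str:
    return "BEGIN;\n" + dump_sql.rstrip("\n") + "\nEND;\n"

inserts = "\n".join(
    f"INSERT INTO wu (wuid) VALUES ({i});" for i in range(1000)
)

with open("dump_autocommit.sql", "w") as f:
    f.write(inserts + "\n")
with open("dump_one_txn.sql", "w") as f:
    f.write(wrap_in_single_transaction(inserts))

# Then compare:
#   time psql -e -f dump_autocommit.sql
#   time psql -e -f dump_one_txn.sql
```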
\n\nI'd think that anyone who's used postgresql for more than a few months \ncould corroborate my experience.\n\n", "msg_date": "Mon, 25 Nov 2002 09:31:20 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> It's quite easy to test if you have a database with a large table to play \n> with, use pg_dump to dump a table with the -d switch (makes the dump use \n> insert statements.) Then, make two versions of the dump, one which has a \n> begin;end; pair around all the inserts and one that doesn't, then use psql \n> -e to restore both dumps. The difference is HUGE. Around 10 to 20 times \n> faster with the begin end pairs. \n> \n> I'd think that anyone who's used postgresql for more than a few months \n> could corroborate my experience.\n\nOuch! \n\nNo need to get testy about it. \n\nYour test works as you said; the way I tried testing it before was different. \nGood to know. However, this approach is only useful if you are doing \nrapidfire updates or inserts coming off a single connection. But then it is \n*very* useful.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 25 Nov 2002 14:33:07 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > It's quite easy to test if you have a database with a large table to play \n> > with, use pg_dump to dump a table with the -d switch (makes the dump use \n> > insert statements.) Then, make two versions of the dump, one which has a \n> > begin;end; pair around all the inserts and one that doesn't, then use psql \n> > -e to restore both dumps. The difference is HUGE. Around 10 to 20 times \n> > faster with the begin end pairs. 
\n> > \n> > I'd think that anyone who's used postgresql for more than a few months \n> > could corroborate my experience.\n> \n> Ouch! \n> \n> No need to get testy about it. \n> \n> Your test works as you said; the way I tried testing it before was different. \n> Good to know. However, this approach is only useful if you are doing \n> rapidfire updates or inserts coming off a single connection. But then it is \n> *very* useful.\n\nI didn't mean that in a testy way, it's just that after you've sat through \na fifteen minute wait while 1000 records are inserted, you pretty \nquickly switch to the method of inserting them all in one big \ntransaction. That's all.\n\nNote that the opposite is what really gets people in trouble. I've seen \nfolks inserting rather large amounts of data, say into ten or 15 tables, \nand their web servers were crawling under parallel load. Then, they put \nthem into a single transaction and they just flew.\n\nThe funny thing is, they've often avoided transactions because they \nfigured they'd be slower than just inserting the rows, and you kinda have \nto make them sit down first before you show them the performance increase \nfrom putting all those inserts into a single transaction.\n\nNo offense meant, really. It's just that you seemed to really doubt that \nputting things into one transaction helped, and putting things into one \nbig transaction is like the very first postgresql lesson a lot of \nnewcomers learn. 
:-)\n\n", "msg_date": "Mon, 25 Nov 2002 15:59:16 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": ">The funny thing it, they've often avoided transactions because they\n>figured they'd be slower than just inserting the rows, and you kinda have\n>to make them sit down first before you show them the performance increase\n>from putting all those inserts into a single transaction.\n>\n>No offense meant, really. It's just that you seemed to really doubt that\n>putting things into one transaction helped, and putting things into one\n>big transaction if like the very first postgresql lesson a lot of\n>newcomers learn. :-)\n\nScott,\n\nI'm new to postgresql, and as you suggested, this is \ncounter-intuitive to me. I would have thought that having to store \nall the inserts to be able to roll them back would take longer. Is \nmy thinking wrong or not relevant? Why is this not the case?\n\nThanks,\nTim\n", "msg_date": "Mon, 25 Nov 2002 18:41:53 -0500", "msg_from": "Tim Gardner <tgardner@codeHorse.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "> I'm new to postgresql, and as you suggested, this is \n> counter-intuitive to me. I would have thought that having to store \n> all the inserts to be able to roll them back would take longer. Is \n> my thinking wrong or not relevant? Why is this not the case?\n\nTypically that is the case. But Postgresql switches it around a little\nbit. Different trade-offs. 
No rollback log, but other processes are\nforced to go through your leftover garbage (hence 'vacuum').\n\nIt's still kinda slow with hundreds of connections (as compared to\nOracle) -- but close enough that a license fee -> hardware purchase\nfunds transfer more than makes up for it.\n\nGet yourself a 1GB battery backed ramdisk on its own scsi chain for WAL\nand it'll fly no matter what size of transaction you use ;)\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "25 Nov 2002 19:20:03 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, Tim Gardner wrote:\n\n> >The funny thing is, they've often avoided transactions because they\n> >figured they'd be slower than just inserting the rows, and you kinda have\n> >to make them sit down first before you show them the performance increase\n> >from putting all those inserts into a single transaction.\n> >\n> >No offense meant, really. It's just that you seemed to really doubt that\n> >putting things into one transaction helped, and putting things into one\n> >big transaction is like the very first postgresql lesson a lot of\n> >newcomers learn. :-)\n> \n> Scott,\n> \n> I'm new to postgresql, and as you suggested, this is \n> counter-intuitive to me. I would have thought that having to store \n> all the inserts to be able to roll them back would take longer. Is \n> my thinking wrong or not relevant? Why is this not the case?\n\nYour thinking on this is wrong, and it is counter-intuitive to think that \na transaction would speed things up. Postgresql is very different from \nother databases.\n\nPostgresql was designed from day one as a transactional database. Which \nis why it was so bothersome that an Oracle marketroid recently was telling \nthe .org folks why they shouldn't use Postgresql because it didn't have \ntransactions. 
Postgresql may have a few warts here and there, but not \nsupporting transactions has NEVER been a problem for it.\n\nThere are two factors that make Postgresql so weird in regards to \ntransactions. One is that everything happens in a transaction (we won't \nmention truncate for a while, it's the only exception I know of.)\n\nThe next factor that makes for fast inserts of large amounts of data in a \ntransaction is MVCC. With Oracle and many other databases, transactions \nare written into a separate log file, and when you commit, they are \ninserted into the database as one big group. This means you write your \ndata twice, once into the transaction log, and once into the database.\n\nWith Postgresql's implementation of MVCC, all your data are inserted in \nreal time, with a transaction date that makes the other clients ignore \nthem (mostly, other read committed transactions may or may not see them.)\n\nIf there are indexes to update, they are updated in the same \"invisible \nuntil committed\" way.\n\nAll this means that your inserts don't block anyone else's reads as well.\n\nThis means that when you commit, all postgresql does is make them visible.\n\nIn the event you roll back a transaction, the tuples are all just marked \nas dead and they get ignored.\n\nIt's interesting when you work with folks who came from other databases. \nMy coworker, who's been using Postgresql for about 2 years now, had an \ninteresting experience when he first started here. He was inserting \nsomething like 10,000 rows. He comes over and tells me there must be \nsomething wrong with the database, as his inserts have been running for 10 \nminutes, and he's not even halfway through. So I had him stop the \ninserts, clean out the rows (it was a new table for a new project) and \nwrap all 10,000 inserts into a transaction. What had been running for 10 \nminutes now ran in about 30 seconds.\n\nHe was floored. 
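[An illustrative aside, not part of the original mail.] The bookkeeping described above can be caricatured in a few lines; this is a toy model only — real PostgreSQL visibility also involves snapshots, xmax, hint bits, and much more:

```python
# Toy MVCC model: each tuple carries the xid that created it (xmin);
# nothing about the tuples changes at commit time.  Commit or abort is
# just a status flip in a clog-like table, which is why both are cheap.

clog = {}  # xid -> "committed" or "aborted"; in-progress xids are absent

def insert(table, xid, row):
    table.append({"xmin": xid, "row": row})

def commit(xid):
    clog[xid] = "committed"

def abort(xid):
    clog[xid] = "aborted"  # dead tuples linger until VACUUM reclaims them

def visible_rows(table):
    # Other sessions see only tuples whose creating transaction committed.
    return [t["row"] for t in table if clog.get(t["xmin"]) == "committed"]

t = []
for i in range(10000):
    insert(t, 100, i)          # one big batch inside transaction 100
print(len(visible_rows(t)))    # 0 -- invisible until commit
commit(100)                    # the "commit" is a single cheap flip
print(len(visible_rows(t)))    # 10000
```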
\n\nWell, good luck on using postgresql, and definitely keep in touch with the \nperformance and general mailing lists. They're a wealth of useful info.\n", "msg_date": "Mon, 25 Nov 2002 17:23:57 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On 25 Nov 2002, Rod Taylor wrote:\n\n> > I'm new to postgresql, and as you suggested, this is \n> > counter-intuitive to me. I would have thought that having to store \n> > all the inserts to be able to roll them back would take longer. Is \n> > my thinking wrong or not relevant? Why is this not the case?\n> \n> Typically that is the case. But Postgresql switches it around a little\n> bit. Different trade-offs. No rollback log, but other processes are\n> forced to go through your leftover garbage (hence 'vacuum').\n\nYeah, which means you always need to do a vacuum on a table after a lot of \nupdates/deletes. And analyze after a lot of inserts/updates/deletes.\n\n> It's still kinda slow with hundreds of connections (as compared to\n> Oracle) -- but close enough that a license fee -> hardware purchase\n> funds transfer more than makes up for it.\n\nAin't it amazing how much hardware a typical Oracle license can buy? ;^)\n\nHeck, even the license cost of MS-SQL server is enough to buy a nice quad \nXeon with all the trimmings nowadays. 
Then you can probably still have \nenough left over for one of the pgsql developers to fly out and train your \nfolks on it.\n\n", "msg_date": "Mon, 25 Nov 2002 17:30:00 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": ">With Postgresql's implementation of MVCC, all your data are inserted in\n>real time, with a transaction date that makes the other clients ignore\n>them (mostly, other read committed transactions may or may not see them.)\n>\n>If there are indexes to update, they are updated in the same \"invisible\n>until committed\" way.\n>\n>All this means that your inserts don't block anyone else's reads as well.\n>\n>This means that when you commit, all postgresql does is make them visible.\n\nscott,\n\nExactly the kind of explanation/understanding I was hoping for!\n\nThank you!\n\nTim\n", "msg_date": "Mon, 25 Nov 2002 19:40:43 -0500", "msg_from": "Tim Gardner <tgardner@codeHorse.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, scott.marlowe wrote:\n\n> On Mon, 25 Nov 2002, Tim Gardner wrote:\n> \n> > I'm new to postgresql, and as you suggested, this is \n> > counter-intuitive to me. I would have thought that having to store \n> > all the inserts to be able to roll them back would take longer. Is \n> > my thinking wrong or not relevant? Why is this not the case?\n> \n> Your thinking on this is wrong, and it is counter-intuitive to think that \n> a transaction would speed things up. Postgresql is very different from \n> other databases.\n\nSorry that came out like that, I meant to write:\n\nI meant to add in there that I thought the same way at first, and only \nafter a little trial and much error did I realize that I was thinking in \nterms of how other databases did things. I.e. 
most people make the same \nmistake when starting out with pgsql.\n\n", "msg_date": "Mon, 25 Nov 2002 17:41:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> On Mon, 25 Nov 2002, Tim Gardner wrote:\n> \n[snip]\n> \n> There are two factors that make Postgresql so weird in regards to \n> transactions. One it that everything happens in a transaction (we won't \n> mention truncate for a while, it's the only exception I know of.)\n\nWhy is this so weird? Do I use the /other/ weird RDBMS? (Rdb/VMS)\n\n> The next factor that makes for fast inserts of large amounts of data in a \n> transaction is MVCC. With Oracle and many other databases, transactions \n> are written into a seperate log file, and when you commit, they are \n> inserted into the database as one big group. This means you write your \n> data twice, once into the transaction log, and once into the database.\n\nYou are just deferring the pain. Whereas others must flush from log\nto \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n\n> With Postgresql's implementation of MVCC, all your data are inserted in \n> real time, with a transaction date that makes the other clients ignore \n> them (mostly, other read committed transactions may or may not see them.)\n\nIs this unusual? (Except that Rdb/VMS uses a 64-bit integer (a\nTransaction Sequence Number) instead of a timestamp, because Rdb,\ncominging from VAX/VMS is natively cluster-aware, and it's not\nguaranteed that all nodes have the exact same timestamp.\n\n[snip]\n> In the event you roll back a transaction, the tuples are all just marked \n> as dead and they get ignored.\n\nWhat if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\nreally slow down the nightly operations.\n\n> It's interesting when you work with folks who came from other databases. 
\n> My coworker, who's been using Postgresql for about 2 years now, had an \n> interesting experience when he first started here. He was inserting \n> something like 10,000 rows. He comes over and tells me there must be \n> something wrong with the database, as his inserts have been running for 10 \n> minutes, and he's not even halfway through. So I had him stop the \n> inserts, clean out the rows (it was a new table for a new project) and \n> wrap all 10,000 inserts into a transaction. What had been running for 10 \n> minutes now ran in about 30 seconds.\n\nAgain, why is this so unusual?????\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "25 Nov 2002 19:41:03 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> No offense meant, really. It's just that you seemed to really doubt that \n> putting things into one transaction helped, and putting things into one \n> big transaction if like the very first postgresql lesson a lot of \n> newcomers learn. :-)\n\nNot so odd, if you think about it. After all, this approach is only useful \nfor a series of small update/insert statements on a single connection. \nThinking about it, I frankly never do this except as part of a stored \nprocedure ... 
which, in Postgres, is automatically a transaction.\n\nI'm lucky enough that my data loads have all been adaptable to COPY \nstatements, which bypasses this issue completely.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 25 Nov 2002 17:41:42 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Ron Johnson <ron.l.johnson@cox.net> writes:\n> On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n>> The next factor that makes for fast inserts of large amounts of data in a \n>> transaction is MVCC. With Oracle and many other databases, transactions \n>> are written into a seperate log file, and when you commit, they are \n>> inserted into the database as one big group. This means you write your \n>> data twice, once into the transaction log, and once into the database.\n\n> You are just deferring the pain. Whereas others must flush from log\n> to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n\nSure, it's just shuffling the housekeeping work from one place to\nanother. The thing that I like about Postgres' approach is that we\nput the housekeeping in a background task (VACUUM) rather than in the\ncritical path of foreground transaction commit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Nov 2002 22:30:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "Tim Gardner <tgardner@codeHorse.com> writes:\n>> All this means that your inserts don't block anyone else's reads as well.\n>> This means that when you commit, all postgresql does is make them visible.\n\n> Exactly the kind of explanation/understanding I was hoping for!\n\nThere's another point worth making. 
What Scott was pointing out is that\nwhether you commit or roll back a transaction costs about the same, in\nPostgres, as far as tuple update processing is concerned. At the end of\na transaction, we have both new (inserted/updated) and old\n(deleted/replaced) tuples laying about in the database. Commit marks\nthe transaction committed in pg_clog; abort marks it aborted instead;\nneither one lifts a finger to touch the tuples. (Subsequent visitors\nto the tuples will mark them \"good\" or \"dead\" based on consulting\npg_clog, but we don't try to do that during transaction commit.)\n\nBut having said all that, transaction commit is more expensive than\ntransaction abort, because we have to flush the transaction commit\nWAL record to disk before we can report \"transaction successfully\ncommitted\". That means waiting for the disk to spin. Transaction abort\ndoesn't have to wait --- that's because if there's a crash and the abort\nrecord never makes it to disk, the default assumption on restart will be\nthat the transaction aborted, anyway.\n\nSo the basic reason that it's worth batching multiple updates into one\ntransaction is that you only wait for the commit record flush once,\nnot once per update. This makes no difference worth mentioning if your\nupdates are big, but on modern hardware you can update quite a few\nindividual rows in the time it takes the disk to spin once.\n\n(BTW, if you set fsync = off, then the performance difference goes away,\nbecause we don't wait for the commit record to flush to disk ... 
but\nthen you become vulnerable to problems after a system crash.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Nov 2002 22:44:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "On Mon, 2002-11-25 at 19:30, scott.marlowe wrote:\n> On 25 Nov 2002, Rod Taylor wrote:\n> \n> > > I'm new to postgresql, and as you suggested, this is \n> > > counter-intuitive to me. I would have thought that having to store \n> > > all the inserts to be able to roll them back would take longer. Is \n> > > my thinking wrong or not relevant? Why is this not the case?\n> > \n> > Typically that is the case. But Postgresql switches it around a little\n> > bit. Different trade-offs. No rollback log, but other processes are\n> > forced to go through you're left over garbage (hence 'vacuum').\n> \n> Yeah, which means you always need to do a vacuum on a table after a lot of \n> updates/deletes. And analyze after a lot of inserts/updates/deletes.\n\nA good auto-vacuum daemon will help that out :) Not really any\ndifferent than an OO dbs garbage collection process -- except PGs vacuum\nis several orders of magnitude faster.\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "25 Nov 2002 22:52:51 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 21:30, Tom Lane wrote:\n> Ron Johnson <ron.l.johnson@cox.net> writes:\n> > On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> >> The next factor that makes for fast inserts of large amounts of data in a \n> >> transaction is MVCC. With Oracle and many other databases, transactions \n> >> are written into a seperate log file, and when you commit, they are \n> >> inserted into the database as one big group. 
This means you write your \n> >> data twice, once into the transaction log, and once into the database.\n> \n> > You are just deferring the pain. Whereas others must flush from log\n> > to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n> \n> Sure, it's just shuffling the housekeeping work from one place to\n> another. The thing that I like about Postgres' approach is that we\n> put the housekeeping in a background task (VACUUM) rather than in the\n> critical path of foreground transaction commit.\n\nIf you have a quiescent point somewhere in the middle of the night...\n\nIt's all about differing philosophies, though, and there's no way\nthat Oracle will re-write Rdb/VMS (they bought it from DEC in 1997\nfor its high-volume OLTP technologies) and you all won't re-write\nPostgres...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "25 Nov 2002 22:27:41 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "tom lane wrote:\n> Sure, it's just shuffling the housekeeping work from one place to\n> another. The thing that I like about Postgres' approach is that we\n> put the housekeeping in a background task (VACUUM) rather than in the\n> critical path of foreground transaction commit.\n\nThinking with my marketing hat on, MVCC would be a much bigger win if VACUUM\nwas not required (or was done automagically). 
The need for periodic VACUUM\njust gives ammunition to the PostgreSQL opponents who can claim we are\ndeferring work but that it amounts to the same thing.\n\nA fully automatic background VACUUM will significantly reduce but will not\neliminate this perceived weakness.\n\nHowever, it always seemed to me there should be some way to reuse the space\nmore dynamically and quickly than a background VACUUM thereby reducing the\npercentage of tuples that are expired in heavy update cases. If only a very\ntiny number of tuples on the disk are expired this will reduce the aggregate\nperformance/space penalty of MVCC into insignificance for the majority of\nuses.\n\nCouldn't we reuse tuple and index space as soon as there are no transactions\nthat depend on the old tuple or index values. I have imagined that this was\nalways part of the long-term master plan.\n\nCouldn't we keep a list of dead tuples in shared memory and look in the list\nfirst when deciding where to place new values for inserts or updates so we\ndon't have to rely on VACUUM (even a background one)? If there are expired\ntuple slots in the list these would be used before allocating a new slot from\nthe tuple heap.\n\nThe only issue is determining the lowest transaction ID for in-process\ntransactions which seems relatively easy to do (if it's not already done\nsomewhere).\n\nIn the normal shutdown and startup case, a tuple VACUUM could be performed\nautomatically. This would normally be very fast since there would not be many\ntuples in the list.\n\nIndex slots would be handled differently since these cannot be substituted\none for another. However, these could be recovered as part of every index\npage update. Pages would be scanned before being written and any expired\nslots that had transaction ID's lower than the lowest active slot would be\nremoved. 
This could be done for non-leaf pages as well and would result in\nonly reorganizing a page that is already going to be written thereby not\nadding much to the overall work.\n\nI don't think that internal pages that contain pointers to values in nodes\nfurther down the tree that are no longer in the leaf nodes because of this\npartial expired entry elimination will cause a problem since searches and\nscans will still work fine.\n\nDoes VACUUM do something that could not be handled in this realtime manner?\n\n- Curtis\n\n\n", "msg_date": "Tue, 26 Nov 2002 11:32:28 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "[HACKERS] Realtime VACUUM, was: performance of insert/delete/update " }, { "msg_contents": "On Mon, Nov 25, 2002 at 05:30:00PM -0700, scott.marlowe wrote:\n> \n> Ain't it amazing how much hardware a typical Oracle license can buy? ;^)\n\nNot to mention the hardware budget for a typical Oracle installation.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 26 Nov 2002 11:49:18 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, Nov 25, 2002 at 07:41:03PM -0600, Ron Johnson wrote:\n> \n> What if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\n> really slow down the nightly operations.\n\nWhy? After upgrading to 7.2, we find it a good idea to do frequent\nvacuum analyse on frequently-changed tables. 
It doesn't block, and\nif you vacuum frequently enough, it goes real fast.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 26 Nov 2002 11:54:17 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, 26 Nov 2002, Andrew Sullivan wrote:\n\n> On Mon, Nov 25, 2002 at 07:41:03PM -0600, Ron Johnson wrote:\n> > \n> > What if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\n> > really slow down the nightly operations.\n> \n> Why? After upgrading to 7.2, we find it a good idea to do frequent\n> vacuum analyse on frequently-changed tables. It doesn't block, and\n> if you vacuum frequently enough, it goes real fast.\n\nFor example, I just ran pgbench -c 20 -t 200 (20 concurrent's) with a \nscript in the background that looked like this:\n\n#!/bin/bash\nfor ((a=0;a=1;a=0)) do {\n vacuumdb -z postgres\n}\ndone\n\n(i.e. 
run vacuumdb in analyze against the database continuously.)\n\nOutput of top:\n\n71 processes: 63 sleeping, 8 running, 0 zombie, 0 stopped\nCPU0 states: 66.2% user, 25.1% system, 0.0% nice, 8.1% idle\nCPU1 states: 79.4% user, 18.3% system, 0.0% nice, 1.2% idle\nMem: 254660K av, 249304K used, 5356K free, 26736K shrd, 21720K \nbuff\nSwap: 3084272K av, 1300K used, 3082972K free 142396K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n21381 postgres 11 0 1304 1304 868 S 10.8 0.5 0:00 pgbench\n21393 postgres 14 0 4832 4832 4116 R 8.4 1.8 0:00 postmaster\n21390 postgres 9 0 4880 4880 4164 S 7.8 1.9 0:00 postmaster\n21385 postgres 14 0 4884 4884 4168 R 6.7 1.9 0:00 postmaster\n21399 postgres 9 0 4768 4768 4076 S 6.3 1.8 0:00 postmaster\n21402 postgres 9 0 4776 4776 4076 S 6.1 1.8 0:00 postmaster\n21383 postgres 14 0 4828 4828 4112 R 5.9 1.8 0:00 postmaster\n21386 postgres 14 0 4872 4872 4156 R 5.9 1.9 0:00 postmaster\n21392 postgres 9 0 4820 4820 4104 S 5.9 1.8 0:00 postmaster\n21409 postgres 11 0 4600 4600 3544 R 5.8 1.8 0:00 postmaster\n21387 postgres 9 0 4824 4824 4108 S 5.4 1.8 0:00 postmaster\n21394 postgres 9 0 4808 4808 4092 S 5.4 1.8 0:00 postmaster\n21391 postgres 9 0 4816 4816 4100 S 5.0 1.8 0:00 postmaster\n21398 postgres 9 0 4796 4796 4088 S 5.0 1.8 0:00 postmaster\n21384 postgres 9 0 4756 4756 4040 R 4.8 1.8 0:00 postmaster\n21389 postgres 9 0 4788 4788 4072 S 4.8 1.8 0:00 postmaster\n21397 postgres 9 0 4772 4772 4056 S 4.6 1.8 0:00 postmaster\n21388 postgres 9 0 4780 4780 4064 S 4.4 1.8 0:00 postmaster\n21396 postgres 9 0 4756 4756 4040 S 4.3 1.8 0:00 postmaster\n21395 postgres 14 0 4760 4760 4044 S 4.1 1.8 0:00 postmaster\n21401 postgres 14 0 4736 4736 4036 R 4.1 1.8 0:00 postmaster\n21400 postgres 9 0 4732 4732 4028 S 2.9 1.8 0:00 postmaster\n21403 postgres 9 0 1000 1000 820 S 2.4 0.3 0:00 vacuumdb\n21036 postgres 9 0 1056 1056 828 R 2.0 0.4 0:27 top\n18615 postgres 9 0 1912 1912 1820 S 1.1 0.7 0:01 postmaster\n21408 postgres 9 0 
988 988 804 S 0.7 0.3 0:00 psql\n\nSo, pgbench is the big eater of CPU at 10%, each postmaster using about \n5%, and vacuumdb using 2.4%. Note that after a second, the vacuumdb use \ndrops off to 0% until it finishes and runs again. The output of the \npgbench without vacuumdb running, but with top, to be fair was:\n\nnumber of clients: 20\nnumber of transactions per client: 200\nnumber of transactions actually processed: 4000/4000\ntps = 54.428632 (including connections establishing)\ntps = 54.847276 (excluding connections establishing)\n\nWhile the output with the vacuumdb running continuously was:\n\nnumber of clients: 20\nnumber of transactions per client: 200\nnumber of transactions actually processed: 4000/4000\ntps = 52.114343 (including connections establishing)\ntps = 52.873435 (excluding connections establishing)\n\nSo, the difference in performance was around 4% slower.\n\nI'd hardly consider that a big hit against the database.\n\nNote that in every test I've made up and run, the difference is at most 5% \nwith vacuumdb -z running continuously in the background. Big text fields, \nlots of math, lots of fks, etc...\n\nYes, vacuum WAS a problem long ago, but since 7.2 came out it's only a \n\"problem\" in terms of remember to run it.\n\n", "msg_date": "Tue, 26 Nov 2002 11:06:47 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, Nov 26, 2002 at 11:06:47AM -0700, scott.marlowe wrote:\n> So, the difference in performance was around 4% slower.\n> \n> I'd hardly consider that a big hit against the database.\n> \n> Note that in every test I've made up and run, the difference is at most 5% \n> with vacuumdb -z running continuously in the background. 
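As a quick cross-check of the "around 4% slower" figure, the overhead can be recomputed from the two pgbench tps numbers quoted above — this is just arithmetic on the posted results, not a new measurement:

```python
# tps with and without the continuous vacuumdb loop, taken from the
# two pgbench runs above ("excluding connections establishing")
tps_without_vacuum = 54.847276
tps_with_vacuum = 52.873435

slowdown = (tps_without_vacuum - tps_with_vacuum) / tps_without_vacuum
print(f"slowdown with continuous vacuumdb: {slowdown:.1%}")  # about 3.6%
```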
Big text fields, \nlots of math, lots of fks, etc...\n\nYes, vacuum WAS a problem long ago, but since 7.2 came out it's only a \n\"problem\" in terms of remembering to run it.\n\n", "msg_date": "Tue, 26 Nov 2002 11:06:47 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, Nov 26, 2002 at 11:06:47AM -0700, scott.marlowe wrote:\n> So, the difference in performance was around 4% slower.\n> \n> I'd hardly consider that a big hit against the database.\n> \n> Note that in every test I've made up and run, the difference is at most 5% \n> with vacuumdb -z running continuously in the background. Big text fields, \n> lots of math, lots of fks, etc...\n\nAlso, it's important to remember that you may see a considerable\nimprovement in efficiency of some queries if you vacuum often (it's\npartly dependent on the turnover in your database -- if it never\nchanges, you don't need to vacuum often). So a 5% hit in regular\nperformance may be worth it over the long haul, if certain queries\nare way cheaper to run. (That is, while you may get 4% slower\nperformance overall, if the really slow queries are much faster, the\nfast queries running slower may well be worth it. 
In my case,\n> certainly, I think it is.)\n\nAgreed. We used to run vacuumdb at night only when we were running 7.1, \nand we had a script to detect if it had hung or anything. I.e. vacuuming \nwas still a semi-dangerous activity. I now have it set to run every hour \n(-z -a switches to vacuumdb). I'd run it more often but we just don't \nhave enough load to warrant it.\n\n", "msg_date": "Tue, 26 Nov 2002 11:46:27 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nGood ideas. I think the master solution is to hook the statistics\ndaemon information into an automatic vacuum that could _know_ which\ntables need attention.\n\n---------------------------------------------------------------------------\n\nCurtis Faith wrote:\n> tom lane wrote:\n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n> \n> Thinking with my marketing hat on, MVCC would be a much bigger win if VACUUM\n> was not required (or was done automagically). The need for periodic VACUUM\n> just gives ammunition to the PostgreSQL opponents who can claim we are\n> deferring work but that it amounts to the same thing.\n> \n> A fully automatic background VACUUM will significantly reduce but will not\n> eliminate this perceived weakness.\n> \n> However, it always seemed to me there should be some way to reuse the space\n> more dynamically and quickly than a background VACUUM thereby reducing the\n> percentage of tuples that are expired in heavy update cases. 
If only a very\n> tiny number of tuples on the disk are expired this will reduce the aggregate\n> performance/space penalty of MVCC into insignificance for the majority of\n> uses.\n> \n> Couldn't we reuse tuple and index space as soon as there are no transactions\n> that depend on the old tuple or index values. I have imagined that this was\n> always part of the long-term master plan.\n> \n> Couldn't we keep a list of dead tuples in shared memory and look in the list\n> first when deciding where to place new values for inserts or updates so we\n> don't have to rely on VACUUM (even a background one)? If there are expired\n> tuple slots in the list these would be used before allocating a new slot from\n> the tuple heap.\n> \n> The only issue is determining the lowest transaction ID for in-process\n> transactions which seems relatively easy to do (if it's not already done\n> somewhere).\n> \n> In the normal shutdown and startup case, a tuple VACUUM could be performed\n> automatically. This would normally be very fast since there would not be many\n> tuples in the list.\n> \n> Index slots would be handled differently since these cannot be substituted\n> one for another. However, these could be recovered as part of every index\n> page update. Pages would be scanned before being written and any expired\n> slots that had transaction ID's lower than the lowest active slot would be\n> removed. 
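The reuse scheme sketched in the quoted proposal — consult a list of expired slots before extending the heap — can be illustrated in a few lines of Python. This is purely an illustration of the idea as stated; the class, slot numbers, and xid handling are invented here and are not how PostgreSQL actually manages heap pages:

```python
class SlotAllocator:
    """Toy model of the proposal: prefer recycling an expired tuple
    slot over allocating a fresh one from the heap."""

    def __init__(self):
        self.heap_size = 0    # next fresh slot if nothing is reusable
        self.dead_slots = []  # (slot, deleting_xid) pairs awaiting reuse

    def release(self, slot, deleting_xid):
        # The tuple in `slot` was expired by transaction `deleting_xid`.
        self.dead_slots.append((slot, deleting_xid))

    def allocate(self, oldest_active_xid):
        # A dead slot is reusable only once no running transaction
        # could still need to see the old tuple version.
        for i, (slot, xmax) in enumerate(self.dead_slots):
            if xmax < oldest_active_xid:
                del self.dead_slots[i]
                return slot
        slot = self.heap_size  # nothing reusable: extend the heap
        self.heap_size += 1
        return slot
```

As the replies point out, the hard parts are exactly the two inputs this sketch takes for granted: a shared, bounded place to keep `dead_slots`, and cheap tracking of the oldest active transaction id.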
This could be done for non-leaf pages as well and would result in\n> only reorganizing a page that is already going to be written thereby not\n> adding much to the overall work.\n> \n> I don't think that internal pages that contain pointers to values in nodes\n> further down the tree that are no longer in the leaf nodes because of this\n> partial expired entry elimination will cause a problem since searches and\n> scans will still work fine.\n> \n> Does VACUUM do something that could not be handled in this realtime manner?\n> \n> - Curtis\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 14:09:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 23:27, Ron Johnson wrote:\n> On Mon, 2002-11-25 at 21:30, Tom Lane wrote:\n> > Ron Johnson <ron.l.johnson@cox.net> writes:\n> > > On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> > >> The next factor that makes for fast inserts of large amounts of data in a \n> > >> transaction is MVCC. With Oracle and many other databases, transactions \n> > >> are written into a seperate log file, and when you commit, they are \n> > >> inserted into the database as one big group. This means you write your \n> > >> data twice, once into the transaction log, and once into the database.\n> > \n> > > You are just deferring the pain. Whereas others must flush from log\n> > > to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n> > \n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. 
The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n> \n> If you have a quiescent point somewhere in the middle of the night...\n> \n\nYou seem to be implying that running vacuum analyze causes some large\nperformance issues, but it's just not the case. I run a 24x7 operation,\nand I have a few tables that \"turn over\" within 15 minutes. On these\ntables I run vacuum analyze every 5 - 10 minutes and really there is\nlittle/no performance penalty. \n\nRobert Treat\n\n\n", "msg_date": "26 Nov 2002 14:25:46 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> tom lane wrote:\n>> Sure, it's just shuffling the housekeeping work from one place to\n>> another. The thing that I like about Postgres' approach is that we\n>> put the housekeeping in a background task (VACUUM) rather than in the\n>> critical path of foreground transaction commit.\n\n> Couldn't we reuse tuple and index space as soon as there are no transactions\n> that depend on the old tuple or index values. I have imagined that this was\n> always part of the long-term master plan.\n> Couldn't we keep a list of dead tuples in shared memory and look in the list\n> first when deciding where to place new values for inserts or updates so we\n> don't have to rely on VACUUM (even a background one)?\n\nISTM that either of these ideas would lead to pushing VACUUM overhead\ninto the foreground transactions, which is exactly what we don't want to\ndo. Keep in mind also that shared memory is finite ... 
*very* finite.\nIt's bad enough trying to keep per-page status in there (cf FSM) ---\nper-tuple status is right out.\n\nI agree that automatic background VACUUMing would go a long way towards\nreducing operational problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Nov 2002 22:13:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "I always wondered if VACUUM is the right name for the process. Now, when\nPostgreSQL is actively challenging in Enterprise space, it might be a good\nidea to give it a more enterprise-like name. Think how it looks to an\noutside person to see us, database professionals, holding lengthy\ndiscussions about the ways we vacuum a database. Why should you need to\nvacuum a database? Is it dirty? In my personal opinion, something like\n\"space reclaiming daemon\", \"free-list organizer\", \"tuple recycle job\" or\n\"segment coalesce process\" would sound more business-like.\n\nRegards,\nNick\n\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Curtis Faith\" <curtis@galtair.com>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; \"Ron Johnson\" <ron.l.johnson@cox.net>;\n\"PgSQL Performance ML\" <pgsql-performance@postgresql.org>;\n<pgsql-hackers@postgresql.org>\nSent: Tuesday, November 26, 2002 9:09 PM\nSubject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of\ninsert/delete/update\n\n\n>\n> Good ideas. I think the master solution is to hook the statistics\n> daemon information into an automatic vacuum that could _know_ which\n> tables need attention.\n>\n> ---------------------------------------------------------------------------\n>\n> Curtis Faith wrote:\n> > tom lane wrote:\n> > > Sure, it's just shuffling the housekeeping work from one place to\n> > > another. 
The thing that I like about Postgres' approach is that we\n> > > put the housekeeping in a background task (VACUUM) rather than in the\n> > > critical path of foreground transaction commit.\n> >\n> > [...]\n> >\n> > - Curtis\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Wed, 27 Nov 2002 16:02:22 +0200", "msg_from": "\"Nicolai Tufar\" <ntufar@apb.com.tr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "Just for the humor of it, as well as to confirm Nick's perspective, \nyears ago on our in-house developed Burroughs mainframe dbms, we had a \nprocess called \"garbage collect\".\n\nNicolai Tufar wrote:\n\n>I always wondered if VACUUM is the right name for the process. Now, when\n>PostgreSQL is actively challenging in Enterprise space, it might be a good\n>idea to give it a more enterprise-like name. Think how it looks to an\n>outside person to see us, database professionals, holding lengthy\n>discussions about the ways we vacuum a database. Why should you need to\n>vacuum a database? Is it dirty? In my personal opinion, something like\n>\"space reclaiming daemon\", \"free-list organizer\", \"tuple recycle job\" or\n>\"segment coalesce process\" would sound more business-like.\n>\n>Regards,\n>Nick\n>\n>\n>----- Original Message -----\n>From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n>To: \"Curtis Faith\" <curtis@galtair.com>\n>Cc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; \"Ron Johnson\" <ron.l.johnson@cox.net>;\n>\"PgSQL Performance ML\" <pgsql-performance@postgresql.org>;\n><pgsql-hackers@postgresql.org>\n>Sent: Tuesday, November 26, 2002 9:09 PM\n>Subject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of\n>insert/delete/update\n>\n>>Good ideas. 
I think the master solution is to hook the statistics\n>>daemon information into an automatic vacuum that could _know_ which\n>>tables need attention.\n>>\n>>[...]\n\n", "msg_date": "Wed, 27 Nov 2002 09:43:01 -0500", "msg_from": "Jim Beckstrom <jrbeckstrom@sbcglobal.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "Or just reorg.\n\nAm Mittwoch, 27. November 2002 15:02 schrieb Nicolai Tufar:\n> I always wandered if VACUUM is the right name for the porcess. 
Now, when\n> PostgreSQL is actively challenging in Enterprise space, it might be a good\n> idea to give it a more enterprise-like name.\n>\n> [...]\n\n-- \nDr. Eckhardt + Partner GmbH\nhttp://www.epgmbh.de\n", "msg_date": "Wed, 27 Nov 2002 16:34:04 +0100", "msg_from": "Tommi Maekitalo <t.maekitalo@epgmbh.de>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "How about OPTIMIZE?\n\neg. 
optimize customers\n\ninstead of analyze, could be paired with aggressive\n\nso, OPTIMIZE AGGRESSIVE\n\nvery much a glass half empty, half full type thing. vacuum is not a\nproblem, it's a solution.\n\nMerlin\n\n\n\"\"Curtis Faith\"\" <curtis@galtair.com> wrote in message\nnews:DMEEJMCDOJAKPPFACMPMIEIDCFAA.curtis@galtair.com...\n> tom lane wrote:\n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n>\n> [...]\n>\n> - Curtis\n\n\n", "msg_date": "Wed, 27 Nov 2002 11:26:30 -0500", "msg_from": "\"Merlin Moncure\" <merlin@rcsonline.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "In a similar vein, setting the way back machine to the mid 80s when I was \nin the USAF and teaching the computer subsystem of the A-10 INS test \nstation, we had old reclaimed Sperry 1650 computers (the precursor to the \n1750) that had come out 
of the 1960 era fire control systems on \nbattleships like the Missouri and what not.\n\nWhen the OS went south, it would put up a message that said \"System Crash \nat address XXXXXXX\" or something very similar. A colonel saw that and \ninsisted that the folks who wrote the OS change the word crash, since in \nthe Air Force crash (as in plane crash) had such bad connotations. So, it \ngot changed to \"System Fault at address xxxxxxxxx\". For the first month or \ntwo that happened, folks would ask what a system fault was and what to do \nwith it. They knew that a crash would need the machine to be power cycled \nbut didn't know what to do with a system fault. Shortly after that, the \nmanual for the test station had a little section added to it that \nbasically said a system fault was a crash. :-)\n\nOn Wed, 27 Nov 2002, Jim Beckstrom wrote:\n\n> Just for the humor of it, as well as to confirm Nick's perspective, \n> years ago on our inhouse developed Burroughs mainframe dbms, we had a \n> process called \"garbage collect\".\n> \n> Nicolai Tufar wrote:\n> \n> >I always wondered if VACUUM is the right name for the process. Now, when\n> >PostgreSQL\n> >is actively challenging in Enterprise space, it might be a good idea to give\n> >it a more\n> >enterprise-like name. Try to think how it looks to an outside person\n> >to see\n> >us, database professionals, hold lengthy discussions about the ways we\n> >vacuum a database. Why should you need to vacuum a database? Is it\n> >dirty? In my personal opinion, something like \"space reclaiming daemon\",\n> >\"free-list organizer\", \"tuple recycle job\" or \"segment coalesce process\"\n> >would\n> >sound more business-like.\n> >\n\n", "msg_date": "Wed, 27 Nov 2002 10:18:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM, was: performance of" }
]
[ { "msg_contents": "Do this:\n\ncreate database \"adsf asdf\";\n\nThen do a pg_dumpall and you get this:\n\n\\connect \"adsf asdf\"\npg_dump: too many command line options (first is 'asdf')\nTry 'pg_dump --help' for more information.\npg_dumpall: pg_dump failed on adsf asdf, exiting\nLOG: pq_recvbuf: unexpected EOF on client connection\n\nChris\n\n", "msg_date": "Thu, 21 Nov 2002 18:26:35 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "bug in pg_dumpall 7.3" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Do this:\n> create database \"adsf asdf\";\n\n> Then do a pg_dumpall and you get this:\n\n> \\connect \"adsf asdf\"\n> pg_dump: too many command line options (first is 'asdf')\n\nGood catch --- fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 22:12:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in pg_dumpall 7.3 " } ]
[ { "msg_contents": "\nsystem = powerpc-ibm-aix4.2.1.0 \n\n\nconfigure command \n\nenv CC=gcc ./configure --with-maxbackends=1024 --with-openssl=/usr/local/ssl --enable-syslog --enable-odbc --disable-nls\n\ngmake check output file\n\n\nregression.out\n--------------\n\nparallel group (13 tests): text varchar oid int2 char boolean float4 int4 name int8 float8 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... ok\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): lseg date path circle polygon box point time timetz tinterval abstime interval reltime comments inet timestamptz timestamp type_sanity opr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... ok\ntest insert ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers vacuum inherit constraints create_misc\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n inherit ... ok\n vacuum ... ok\nparallel group (2 tests): create_view create_index\n create_index ... ok\n create_view ... ok\ntest sanity_check ... ok\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): select_distinct_on select_into select_having transactions select_distinct random subselect portals arrays union select_implicit case aggregates hash_index join btree_index\n select_into ... ok\n select_distinct ... 
ok\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... ok\n random ... ok\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest privileges ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 cluster rules select_views foreign_key\n select_views ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\n cluster ... ok\nparallel group (11 tests): limit truncate temp copy2 domain rangefuncs conversion prepare without_oid plpgsql alter_table\n limit ... ok\n plpgsql ... ok\n copy2 ... ok\n temp ... ok\n domain ... ok\n rangefuncs ... ok\n prepare ... ok\n without_oid ... ok\n conversion ... ok\n truncate ... ok\n alter_table ... ok\n\n\nregression.diffs\n-----------------\n\n\n*** ./expected/geometry-powerpc-aix4.out\tTue Sep 12 17:07:16 2000\n--- ./results/geometry.out\tThu Nov 21 21:46:01 2002\n***************\n*** 114,120 ****\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n--- 114,120 ----\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (-0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 448,454 ****\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | 
((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((-0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999\n999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 448,454 ----\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.9999\n99998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 461,467 ****\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((-0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 461,467 ----\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! | ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n\n======================================================================\n\n\npostmaster.log\n---------------\n\nLOG: database system was shut down at 2002-11-21 21:44:05 EST\nLOG: checkpoint record is at 0/8484B8\nLOG: redo record is at 0/8484B8; undo record is at 0/0; shutdown TRUE\nLOG: next transaction id: 480; next oid: 16976\nLOG: database system is ready\nERROR: DROP GROUP: group \"regressgroup1\" does not exist\nERROR: value too long for type character(1)\nERROR: pg_atoi: error in \"34.5\": can't parse \".5\"\nERROR: oidin: error in \"asdfasd\": can't parse \"asdfasd\"\nERROR: value too long for type character varying(1)\nERROR: oidin: error in \"99asdfasd\": can't parse \"asdfasd\"\nERROR: pg_atoi: error reading \"1000000000000\": Numerical result out of range\nERROR: pg_atoi: 
error in \"asdf\": can't parse \"asdf\"\nERROR: pg_atoi: error in \"34.5\": can't parse \".5\"\nERROR: pg_atoi: error reading \"100000\": Numerical result out of range\nERROR: pg_atoi: error in \"asdf\": can't parse \"asdf\"\nERROR: Bad boolean external representation 'XXX'\nERROR: Bad float4 input format -- overflow\nERROR: Bad float4 input format -- overflow\nERROR: Bad float4 input format -- underflow\nERROR: Bad float4 input format -- underflow\nERROR: Bit string length 2 does not match type BIT(11)\nERROR: value too long for type character varying(4)\nERROR: Bit string length 12 does not match type BIT(11)\nERROR: value too long for type character(4)\nERROR: float4div: divide by zero error\nERROR: Bit string too long for type BIT VARYING(11)\nERROR: Bad float8 input format -- overflow\nERROR: pow() result is out of range\nERROR: can't take log of zero\nERROR: can't take log of a negative number\nERROR: exp() result is out of range\nERROR: float8div: divide by zero error\nERROR: Input '10e400' is out of range for float8\nERROR: Input '-10e400' is out of range for float8\nERROR: Input '10e-400' is out of range for float8\nERROR: Input '-10e-400' is out of range for float8\nERROR: Cannot AND bit strings of different sizes\nERROR: Cannot OR bit strings of different sizes\nERROR: Cannot XOR bit strings of different sizes\nERROR: overflow on numeric ABS(value) >= 10^0 for field with precision 4 scale 4\nERROR: overflow on numeric ABS(value) >= 10^0 for field with precision 4 scale 4\nERROR: parser: parse error at or near \"' - third line'\" at character 75\nERROR: negative substring length not allowed\nERROR: negative substring length not allowed\nERROR: field position must be > 0\nERROR: Bad lseg external representation '(3asdf,2 ,3,4r2)'\nERROR: Bad lseg external representation '[1,2,3, 4'\nERROR: Bad lseg external representation '[(,2),(3,4)]'\nERROR: Bad lseg external representation '[(1,2),(3,4)'\nERROR: Bad point external representation 'asdfasdf'\nERROR: Bad 
point external representation '(10.0 10.0)'\nERROR: Bad point external representation '(10.0,10.0'\nERROR: Bad box external representation '(2.3, 4.5)'\nERROR: Bad box external representation 'asdfasdf(ad'\nERROR: Bad circle external representation '<(-100,0),-100>'\nERROR: Bad circle external representation '1abc,3,5'\nERROR: Bad circle external representation '(3,(1,2),3)'\nERROR: Bad date external representation '1997-02-29'\nERROR: Bad polygon external representation '0.0'\nERROR: Bad polygon external representation '(0.0 0.0'\nERROR: Bad polygon external representation '(0,1,2)'\nERROR: Bad polygon external representation '(0,1,2,3'\nERROR: Bad polygon external representation 'asdf'\nERROR: Bad path external representation '[(,2),(3,4)]'\nERROR: Bad path external representation '[(1,2),(3,4)'\nERROR: Bad time external representation '07:07 PST'\nERROR: Bad time external representation '08:08 EDT'\nERROR: Unable to identify an operator '+' for types 'time without time zone' and 'time without time zone'\n\tYou will have to retype this query using an explicit cast\nERROR: Bad timestamp external representation 'current'\nERROR: Bad timestamp external representation 'current'\nERROR: table \"inet_tbl\" does not exist\nERROR: Bad abstime external representation 'bad time specifications'\nERROR: Bad abstime external representation ''\nERROR: TIMESTAMP WITH TIME ZONE 'invalid' no longer supported\nERROR: Bad timestamp external representation 'Invalid Abstime'\nERROR: Bad reltime external representation 'badly formatted reltime'\nERROR: Bad interval external representation 'badly formatted interval'\nERROR: Bad interval external representation '@ 30 eons ago'\nERROR: Bad timestamp external representation 'Undefined Abstime'\nERROR: Bad reltime external representation '@ 30 eons ago'\nERROR: Bad abstime external representation 'Feb 35, 1946 10:00:00'\nERROR: Bad abstime external representation 'Feb 28, 1984 25:08:10'\nERROR: Bad abstime external representation 'bad date 
format'\nERROR: Unable to identify an operator '+' for types 'time with time zone' and 'time with time zone'\n\tYou will have to retype this query using an explicit cast\nERROR: invalid CIDR value '192.168.1.2/24': has bits set to right of mask\nERROR: invalid CIDR value '192.168.1.2/24': has bits set to right of mask\nERROR: TIMESTAMP 'invalid' no longer supported\nERROR: Bad timestamp external representation 'Invalid Abstime'\nERROR: Bad timestamp external representation 'Undefined Abstime'\nERROR: Bad timestamp external representation 'Feb 29 17:32:01 1997'\nERROR: Bad timestamp external representation 'Feb 16 17:32:01 -0097'\nERROR: TIMESTAMP WITH TIME ZONE out of range 'Feb 16 17:32:01 5097 BC'\nERROR: Bad timestamp external representation 'Feb 29 17:32:01 1997'\nERROR: Bad timestamp external representation 'Feb 16 17:32:01 -0097'\nERROR: TIMESTAMP out of range 'Feb 16 17:32:01 5097 BC'\nERROR: to_timestamp(): bad value for MON/Mon/mon\nERROR: to_timestamp(): bad value for MON/Mon/mon\nERROR: Unable to identify an operator '#' for types 'lseg' and 'point'\n\tYou will have to retype this query using an explicit cast\nERROR: Bad time external representation '040506.789+08'\nERROR: Bad time external representation '040506.789-08'\nERROR: Bad time external representation 'T040506.789+08'\nERROR: Bad time external representation 'T040506.789-08'\nERROR: Unable to identify an operator '-' for types 'date' and 'time with time zone'\n\tYou will have to retype this query using an explicit cast\nERROR: Cannot cast type time with time zone to interval\nERROR: Cannot cast type interval to time with time zone\nERROR: Unable to convert abstime 'invalid' to timestamp\nERROR: ExecInsert: Fail to add null value in not null attribute col2\nERROR: INSERT has more target columns than expressions\nERROR: INSERT has more target columns than expressions\nERROR: INSERT has more expressions than target columns\nERROR: INSERT has more expressions than target columns\nNOTICE: 
ProcedureCreate: type widget is not yet defined\nNOTICE: Argument type \"widget\" is only a shell\nNOTICE: ProcedureCreate: type city_budget is not yet defined\nNOTICE: Argument type \"city_budget\" is only a shell\nERROR: return type mismatch in function: declared to return integer, returns \"unknown\"\nERROR: parser: parse error at or near \"not\" at character 1\nERROR: function declared to return integer returns multiple columns in final SELECT\nERROR: Parameter '$2' is out of range\nERROR: CREATE FUNCTION: only one AS item needed for sql language\nERROR: stat failed on file 'nosuchfile': No such file or directory\nERROR: Can't find function nosuchsymbol in file /audio/kemper/SAH/postgresql-7.3rc1/src/test/regress/regress.so\nERROR: there is no built-in function named \"nosuch\"\nNOTICE: ProcedureCreate: type int42 is not yet defined\nNOTICE: Argument type \"int42\" is only a shell\nNOTICE: ProcedureCreate: type text_w_default is not yet defined\nNOTICE: Argument type \"text_w_default\" is only a shell\nNOTICE: Drop cascades to function get_default_test()\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"name\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"age\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"location\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"class\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"a\"\nNOTICE: hobbies_r.person%TYPE converted to text\nNOTICE: hobbies_r.name%TYPE converted to text\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"aa\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"aa\"\nERROR: parser: parse error at or near \",\" at character 43\nERROR: parser: parse error at or near \"IN\" at character 43\nERROR: ExecInsert: rejected due to CHECK constraint \"check_con\" on \"check_tbl\"\nERROR: ExecInsert: rejected due to 
CHECK constraint \"check_con\" on \"check_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"check_con\" on \"check_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"sequence_con\" on \"check2_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"sequence_con\" on \"check2_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"sequence_con\" on \"check2_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"sequence_con\" on \"check2_tbl\"\nERROR: check_fkeys2_pkey_exist: tuple references non-existing key in pkeys\nERROR: check_fkeys_pkey_exist: tuple references non-existing key in pkeys\nERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\nERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\nERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\nNOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"$1\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"$1\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_child_cy\" on \"insert_child\"\nERROR: ExecInsert: rejected due to CHECK constraint \"$1\" on 
\"insert_child\"\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_child\"\nERROR: ttdummy (tttest): you can't change price_on and/or price_off columns (use set_ttdummy)\nERROR: ExecInsert: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: ExecUpdate: rejected due to CHECK constraint \"insert_con\" on \"insert_tbl\"\nERROR: copy: line 2, CopyFrom: rejected due to CHECK constraint \"copy_con\" on \"copy_tbl\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'primary_tbl_pkey' for table 'primary_tbl'\nERROR: Cannot insert a duplicate key into unique index primary_tbl_pkey\nERROR: ExecInsert: Fail to add null value in not null attribute i\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'primary_tbl_pkey' for table 'primary_tbl'\nERROR: ExecInsert: Fail to add null value in not null attribute i\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'unique_tbl_i_key' for table 'unique_tbl'\nERROR: Cannot insert a duplicate key into unique index unique_tbl_i_key\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'unique_tbl_i_key' for table 'unique_tbl'\nERROR: Cannot insert a duplicate key into unique index unique_tbl_i_key\nNOTICE: CREATE TABLE will create implicit sequence 'serialtest_f2_seq' for SERIAL column 'serialtest.f2'\nERROR: ExecInsert: Fail to add null value in not null attribute f2\nERROR: Cannot change number of columns in view\nERROR: Cannot change number of columns in view\nERROR: Cannot change datatype of view column \"b\"\nERROR: Cannot insert a duplicate key into unique index func_index_index\nERROR: parser: parse error at or near \";\" at character 7\nERROR: Relation \"nonesuch\" does not exist\nERROR: parser: parse error at or near \"from\" at character 8\nERROR: Attribute \"nonesuch\" not found\nERROR: Attribute \"nonesuch\" not found\nERROR: Attribute \"nonesuch\" not found\nERROR: parser: parse error at or near \"from\" at character 29\nERROR: Attribute 
\"foobar\" not found\nERROR: parser: parse error at or near \";\" at character 12\nERROR: Relation \"nonesuch\" does not exist\nERROR: parser: parse error at or near \";\" at character 11\nERROR: table \"nonesuch\" does not exist\nERROR: parser: parse error at or near \";\" at character 19\nERROR: Relation \"nonesuch\" does not exist\nERROR: Relation \"nonesuch\" does not exist\nERROR: renamerel: relation \"aggtest\" exists\nERROR: renamerel: relation \"stud_emp\" exists\nERROR: Relation \"nonesuchrel\" does not exist\nERROR: renameatt: attribute \"nonesuchatt\" does not exist\nERROR: renameatt: attribute \"manager\" exists\nERROR: renameatt: attribute \"oid\" exists\nWARNING: ROLLBACK: no transaction in progress\nWARNING: COMMIT: no transaction in progress\nERROR: AggregateCreate: function int2um(integer) does not exist\nERROR: Define: \"basetype\" unspecified\nERROR: parser: parse error at or near \";\" at character 11\nERROR: parser: parse error at or near \"314159\" at character 12\nERROR: index \"nonesuch\" does not exist\nERROR: parser: parse error at or near \";\" at character 15\nERROR: parser: parse error at or near \";\" at character 23\nERROR: parser: parse error at or near \"314159\" at character 16\nERROR: Type \"nonesuch\" does not exist\nERROR: RemoveAggregate: aggregate nonesuch(integer) does not exist\nERROR: RemoveAggregate: aggregate newcnt(real) does not exist\nERROR: parser: parse error at or near \"(\" at character 15\nERROR: parser: parse error at or near \"314159\" at character 15\nERROR: RemoveFunction: function nonesuch() does not exist\nERROR: parser: parse error at or near \";\" at character 10\nERROR: parser: parse error at or near \"314159\" at character 11\nERROR: Type \"nonesuch\" does not exist\nERROR: parser: parse error at or near \";\" at character 14\nERROR: parser: parse error at or near \";\" at character 21\nERROR: parser: parse error at or near \";\" at character 18\nERROR: parser: parse error at or near \",\" at character 
19\nERROR: parser: parse error at or near \"(\" at character 15\nERROR: parser: parse error at or near \")\" at character 20\nERROR: parser: argument type missing (use NONE for unary operators)\nERROR: RemoveOperator: Operator '===' for types 'int4' and 'int4' does not exist\nERROR: parser: argument type missing (use NONE for unary operators)\nERROR: parser: parse error at or near \",\" at character 19\nERROR: Type \"nonesuch\" does not exist\nERROR: Type \"nonesuch\" does not exist\nERROR: parser: parse error at or near \")\" at character 24\nERROR: parser: parse error at or near \";\" at character 10\nERROR: parser: parse error at or near \"314159\" at character 11\nERROR: Relation \"noplace\" does not exist\nERROR: parser: parse error at or near \"tuple\" at character 6\nERROR: parser: parse error at or near \"instance\" at character 6\nERROR: parser: parse error at or near \"rewrite\" at character 6\nERROR: SELECT DISTINCT ON expressions must match initial ORDER BY expressions\nERROR: Attribute test_missing_target.b must be GROUPed or used in an aggregate function\nERROR: GROUP BY position 3 is not in target list\nERROR: Column reference \"b\" is ambiguous\nERROR: value too long for type character(5)\nERROR: Attribute test_missing_target.b must be GROUPed or used in an aggregate function\nERROR: Column reference \"i\" is ambiguous\nERROR: Column reference \"b\" is ambiguous\nERROR: Column reference \"b\" is ambiguous\nERROR: Attribute \"q2\" not found\nERROR: UNION JOIN is not implemented yet\nERROR: CREATE USER: user name \"regressuser4\" already exists\nWARNING: ALTER GROUP: user \"regressuser2\" is already in group \"regressgroup2\"\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest1: must be owner\nERROR: atest2: permission denied\nERROR: atest1: permission denied\nERROR: atest2: permission 
denied\nERROR: atest1: permission denied\nERROR: atest1: permission denied\nERROR: atest2: permission denied\nERROR: atest1: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest2: permission denied\nERROR: atest3: permission denied\nERROR: atest2: permission denied\nERROR: atestv2: permission denied\nERROR: atestv3: permission denied\nERROR: atest2: permission denied\nERROR: language \"c\" is not trusted\nERROR: permission denied\nERROR: invalid privilege type USAGE for function object\nERROR: GRANT: function testfunc_nosuch(integer) does not exist\nERROR: sql: permission denied\nERROR: testfunc1: permission denied\nERROR: atest2: permission denied\nERROR: testfunc1: must be owner\nERROR: Relation \"pg_shad\" does not exist\nERROR: user \"nosuchuser\" does not exist\nERROR: has_table_privilege: invalid privilege type sel\nERROR: pg_class_aclcheck: invalid user id 4293967297\nERROR: pg_class_aclcheck: relation 1 not found\nNOTICE: Drop cascades to rule _RETURN on view atestv4\nNOTICE: Drop cascades to view atestv4\nERROR: view \"atestv4\" does not exist\nNOTICE: ALTER TABLE: merging definition of column \"a\" for child d_star\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE will create implicit sequence 'clstr_tst_s_rf_a_seq' for SERIAL column 'clstr_tst_s.rf_a'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'clstr_tst_s_pkey' for table 'clstr_tst_s'\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE will create implicit sequence 'clstr_tst_a_seq' for SERIAL column 'clstr_tst.a'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'clstr_tst_pkey' for table 'clstr_tst'\nNOTICE: CREATE TABLE will create implicit 
trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname referential integrity violation - key referenced from fktable not found in pktable\nERROR: constrname referential integrity violation - key referenced from fktable not found in pktable\nERROR: constrname referential integrity violation - MATCH FULL doesn't allow mixing of NULL and NON-NULL key values\nERROR: constrname referential integrity violation - MATCH FULL doesn't allow mixing of NULL and NON-NULL key values\nNOTICE: Drop cascades to constraint constrname on table fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname2 referential integrity violation - key referenced from fktable not found in pktable\nERROR: constrname2 referential integrity violation - key referenced from fktable not found in pktable\nERROR: constrname2 referential integrity violation - MATCH FULL doesn't allow mixing of NULL and NON-NULL key values\nERROR: constrname2 referential integrity violation - MATCH FULL doesn't allow mixing of NULL and NON-NULL key values\nNOTICE: constraint constrname2 on table fktable depends on table pktable\nERROR: Cannot drop table pktable because other objects depend on it\n\tUse DROP ... 
CASCADE to drop the dependent objects too\nNOTICE: Drop cascades to constraint constrname2 on table fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname3 referential integrity violation - key referenced from fktable not found in pktable\nERROR: constrname3 referential integrity violation - key in pktable still referenced from fktable\nERROR: constrname3 referential integrity violation - key in pktable still referenced from fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nERROR: clstr_tst_con referential integrity violation - key referenced from clstr_tst not found in clstr_tst_s\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname3 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname3 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: constrname3 referential integrity violation - key referenced from fktable not found in pktable\nERROR: 
constrname3 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: CREATE TABLE: column \"ftest2\" referenced in foreign key constraint does not exist\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: CREATE TABLE: column \"ptest2\" referenced in foreign key constraint does not exist\nERROR: table \"fktable_fail1\" does not exist\nERROR: table \"fktable_fail2\" does not exist\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'pktable_ptest1_key' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: UNIQUE constraint matching given keys for referenced table \"pktable\" not found\nERROR: table \"fktable_fail1\" does not exist\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for 
FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'integer' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Attribute \"f1\" not found\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'integer' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / 
PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'pktable_base1_key' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nERROR: $1 referential integrity violation - key in pktable still referenced from fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from pktable not found in pktable\nERROR: $1 referential integrity violation - key in pktable still referenced from pktable\nERROR: $1 referential integrity violation - key referenced from pktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit 
trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'integer' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet[]' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'integer' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 
'integer'\n\tYou will have to retype this query using an explicit cast\nERROR: table \"pktable\" does not exist\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'fktable_pkey' for table 'fktable'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: $1 referential integrity violation - key referenced from fktable not found in pktable\nERROR: current transaction is aborted, queries ignored until end of transaction block\nNOTICE: CREATE TABLE will create implicit sequence 'x_a_seq' for SERIAL column 'x.a'\nERROR: DefineDomain: domaindroptest is not a basetype\nNOTICE: Adding missing FROM-clause entry for table \"foo2\"\nERROR: FROM function expression may not refer to other relations of same query level\nERROR: Prepared statement with name \"q1\" already exists\nERROR: Type \"domaindroptest\" does not exist\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'truncate_a_pkey' for table 'truncate_a'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'\nERROR: conversion name \"myconv\" already exists\nERROR: default conversion for LATIN1 to UNICODE 
already exists\nERROR: Relation \"x\" has no column \"xyz\"\nERROR: Attribute \"d\" specified more than once\nERROR: copy: line 1, pg_atoi: zero-length string\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type \\ unknown\nERROR: copy: line 1, Missing data for column \"e\"\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type \\ unknown\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: copy: line 1, Missing data for column \"e\"\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type \\ unknown\nERROR: value too long for type character varying(5)\nERROR: copy: line 1, value too long for type character varying(5)\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type \\ unknown\nERROR: copy: line 1, Extra data after last expected column\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type 7 unknown\nERROR: TRUNCATE cannot be used as table truncate_b references this one via foreign key constraint $1\nERROR: Relation \"temptest\" does not exist\nERROR: COPY: table \"no_oids\" does not have OIDs\nERROR: COPY: table \"no_oids\" does not have OIDs\nERROR: Domain dnotnull does not allow NULL values\nERROR: Domain dnotnull does not allow NULL values\nERROR: Domain dnotnull does not allow NULL values\nERROR: ExecInsert: Fail to add null value in not null attribute col3\nERROR: copy: line 1, CopyFrom: Fail to add null value in not null attribute col3\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type \\ unknown\nERROR: Domain dnotnull does not allow NULL values\nERROR: Domain dnotnull does not allow NULL values\nERROR: Domain dnotnull does not allow NULL values\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'defaulttest_pkey' for table 'defaulttest'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'foorescan_pkey' for table 'foorescan'\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 
'barrescan_pkey' for table 'barrescan'\nERROR: Wrong number of parameters, expected 6 but got 1\nERROR: Wrong number of parameters, expected 6 but got 7\nERROR: Parameter $3 of type boolean cannot be coerced into the expected type double precision\n\tYou will need to rewrite or cast the expression\nERROR: Type \"nonexistenttype\" does not exist\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'tmp2_pkey' for table 'tmp2'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'tmp4_a_key' for table 'tmp4'\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"c\" referenced in foreign key constraint does not exist\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"b\" referenced in foreign key constraint does not exist\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: tmpconstr referential integrity violation - key referenced from tmp3 not found in tmp2\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: UNIQUE constraint matching given keys for referenced table \"tmp4\" not found\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nNOTICE: Drop cascades to constraint $2 on table 
fktable\nNOTICE: Drop cascades to constraint $1 on table fktable\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'pktable_pkey' for table 'pktable'\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'cidr' and 'integer'\n\tYou will have to retype this query using an explicit cast\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'integer' and 'inet'\n\tYou will have to retype this query using an explicit cast\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: Unable to identify an operator '=' for types 'inet' and 'integer'\n\tYou will have to retype this query using an explicit cast\nERROR: ExecInsert: rejected due to CHECK constraint \"atacc_test1\" on \"atacc1\"\nERROR: AlterTableAddConstraint: rejected due to CHECK constraint atacc_test1\nERROR: Cannot insert a duplicate key into unique index pfield_name\nERROR: WS.not.there does not exist\nERROR: illegal backlink beginning with XX\nERROR: PS.not.there does not exist\nERROR: illegal slotlink beginning with XX\nERROR: Cannot insert a duplicate key into unique index hslot_name\nERROR: no manual manipulation of HSlot\nERROR: no manual manipulation of HSlot\nERROR: system \"notthere\" does not exist\nERROR: IFace slotname \"IF.orion.ethernet_interface_name_too_long\" too long (20 char max)\nERROR: Attribute \"test1\" not found\nERROR: ExecInsert: rejected due to CHECK constraint \"atacc_test1\" on \"atacc1\"\nERROR: ExecInsert: rejected due to CHECK constraint \"$1\" on \"atacc1\"\nERROR: ExecInsert: rejected due to CHECK constraint \"foo\" on \"atacc2\"\nERROR: ExecInsert: rejected due to CHECK 
constraint \"foo\" on \"atacc3\"\nERROR: ExecInsert: rejected due to CHECK constraint \"foo\" on \"atacc2\"\nNOTICE: ALTER TABLE / ADD UNIQUE will create implicit index 'atacc_test1' for table 'atacc1'\nERROR: Cannot insert a duplicate key into unique index atacc_test1\nNOTICE: ALTER TABLE / ADD UNIQUE will create implicit index 'atacc_oid1' for table 'atacc1'\nNOTICE: ALTER TABLE / ADD UNIQUE will create implicit index 'atacc_test1' for table 'atacc1'\nERROR: Cannot create unique index. Table contains non-unique values\nERROR: ALTER TABLE: column \"test1\" named in key does not exist\nNOTICE: ALTER TABLE / ADD UNIQUE will create implicit index 'atacc_test1' for table 'atacc1'\nERROR: Cannot insert a duplicate key into unique index atacc_test1\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'atacc1_test_key' for table 'atacc1'\nNOTICE: ALTER TABLE / ADD UNIQUE will create implicit index 'atacc1_test2_key' for table 'atacc1'\nERROR: Cannot insert a duplicate key into unique index atacc1_test_key\nERROR: Existing attribute \"test\" cannot be a PRIMARY KEY because it is not marked NOT NULL\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index 'atacc_oid1' for table 'atacc1'\nERROR: Existing attribute \"test\" cannot be a PRIMARY KEY because it is not marked NOT NULL\nERROR: Existing attribute \"test\" cannot be a PRIMARY KEY because it is not marked NOT NULL\nERROR: ALTER TABLE: column \"test1\" named in key does not exist\nERROR: Existing attribute \"test\" cannot be a PRIMARY KEY because it is not marked NOT NULL\nERROR: Existing attribute \"test\" cannot be a PRIMARY KEY because it is not marked NOT NULL\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'atacc1_pkey' for table 'atacc1'\nERROR: Cannot insert a duplicate key into unique index atacc1_pkey\nERROR: ExecInsert: Fail to add null value in not null attribute test\nERROR: ALTER TABLE: relation \"pg_class\" is a system catalog\nERROR: ALTER TABLE: relation \"pg_class\" is a 
system catalog\nERROR: Relation \"non_existent\" does not exist\nERROR: Relation \"non_existent\" does not exist\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index 'atacc1_pkey' for table 'atacc1'\nERROR: ALTER TABLE: Attribute \"test\" is in a primary key\nERROR: ALTER TABLE: Attribute \"test\" contains NULL values\nERROR: Relation \"atacc1\" has no column \"bar\"\nERROR: Relation \"atacc1\" has no column \"bar\"\nERROR: ALTER TABLE: Cannot alter system attribute \"oid\"\nERROR: ALTER TABLE: Cannot alter system attribute \"oid\"\nERROR: ALTER TABLE: relation \"myview\" is not a table\nERROR: ALTER TABLE: relation \"myview\" is not a table\nERROR: ExecInsert: Fail to add null value in not null attribute a\nERROR: ExecInsert: Fail to add null value in not null attribute a\nERROR: ALTER TABLE: Attribute \"a\" contains NULL values\nERROR: ALTER TABLE: Attribute \"a\" contains NULL values\nERROR: ExecInsert: Fail to add null value in not null attribute a\nERROR: ExecInsert: Fail to add null value in not null attribute a\nERROR: ExecInsert: Fail to add null value in not null attribute a\nERROR: pg_atoi: error in \"wrong_datatype\": can't parse \"wrong_datatype\"\nERROR: Relation \"def_test\" has no column \"c3\"\nERROR: ALTER TABLE: relation \"pg_class\" is a system catalog\nERROR: Relation \"foo\" does not exist\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Attribute \"a\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: Attribute \"a\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: Attribute \"a\" not found\nERROR: No such attribute atacc1.a\nERROR: Attribute \"a\" not found\nERROR: Attribute \"a\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: No such attribute atacc1.........pg.dropped.1........\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: Relation \"atacc1\" has no 
column \"a\"\nERROR: Attribute \"a\" not found\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: INSERT has more expressions than target columns\nERROR: INSERT has more expressions than target columns\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Attribute \"a\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nERROR: Relation \"atacc1\" has no column \"bar\"\nERROR: ALTER TABLE: Cannot drop system attribute \"oid\"\nERROR: ALTER TABLE: relation \"myview\" is not a table\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: ALTER TABLE: relation \"atacc1\" has no column \"a\"\nERROR: ALTER TABLE: relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: ALTER TABLE: relation \"atacc1\" has no column \"a\"\nERROR: ALTER TABLE: relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no 
column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: Relation \"atacc1\" has no column \"a\"\nERROR: Relation \"atacc1\" has no column \"........pg.dropped.1........\"\nERROR: renameatt: attribute \"a\" does not exist\nERROR: renameatt: attribute \"........pg.dropped.1........\" does not exist\nERROR: ALTER TABLE: column \"a\" named in key does not exist\nERROR: ALTER TABLE: column \"........pg.dropped.1........\" named in key does not exist\nERROR: ALTER TABLE: column \"a\" named in key does not exist\nERROR: ALTER TABLE: column \"........pg.dropped.1........\" named in key does not exist\nERROR: Attribute \"a\" not found\nERROR: Attribute \"........pg.dropped.1........\" not found\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'atacc2_id_key' for table 'atacc2'\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"a\" referenced in foreign key constraint does not exist\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"........pg.dropped.1........\" referenced in foreign key constraint does not exist\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"a\" referenced in foreign key constraint does not exist\nNOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: ALTER TABLE: column \"........pg.dropped.1........\" referenced in foreign key constraint does not exist\nERROR: DefineIndex: attribute \"a\" not found\nERROR: DefineIndex: attribute \"........pg.dropped.1........\" not found\nERROR: Relation \"test\" has no column \"a\"\nERROR: Relation \"test\" has no column \"........pg.dropped.1........\"\nERROR: copy: line 1, Extra data after last expected column\nERROR: Relation \"test\" has no column \"a\"\nLOG: pq_flush: send() failed: Broken pipe\nFATAL: Socket command type 1 unknown\nERROR: Relation \"test\" has no 
column \"........pg.dropped.1........\"\nERROR: ALTER TABLE: Cannot drop inherited column \"a\"\nERROR: ALTER TABLE: Cannot drop inherited column \"b\"\nERROR: renameatt: inherited attribute \"a\" may not be renamed\nERROR: Inherited attribute \"a\" must be renamed in child tables too\nERROR: Inherited attribute \"a\" must be renamed in child tables too\nERROR: Attribute must be added to child tables too\nNOTICE: CREATE TABLE: merging attribute \"f1\" with inherited definition\nERROR: ALTER TABLE: Cannot drop inherited column \"f1\"\nERROR: Attribute \"f1\" not found\nNOTICE: Drop cascades to table c1\nERROR: ALTER TABLE: Cannot drop inherited column \"f1\"\nERROR: Attribute \"f1\" not found\nNOTICE: Drop cascades to table c1\nERROR: ALTER TABLE: Cannot drop inherited column \"f1\"\nNOTICE: Drop cascades to table c1\nNOTICE: CREATE TABLE: merging attribute \"f1\" with inherited definition\nERROR: ALTER TABLE: Cannot drop inherited column \"f1\"\nNOTICE: Drop cascades to table c1\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"name\"\nERROR: ALTER TABLE: Cannot drop inherited column \"name\"\nERROR: Relation \"gc1\" has no column \"name\"\nNOTICE: Drop cascades to table c1\nNOTICE: Drop cascades to table gc1\nLOG: smart shutdown request\nLOG: shutting down\nLOG: recycled transaction log file 0000000000000000\nLOG: recycled transaction log file 0000000000000001\nLOG: database system is shut down\n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n", "msg_date": "Thu, 21 Nov 2002 21:59:04 -0500 (EST)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": true, "msg_subject": "re [ANNOUNCE] RC1 Packaged for Testing ... AIX 4.2.1 result" } ]
[ { "msg_contents": "I am going to work on nested transactions for 7.4.\n\nMy goal is to first implement nested transactions:\n\n\tBEGIN;\n\tSELECT ...\n\tBEGIN;\n\tUPDATE;\n\tCOMMIT;\n\tDELETE;\n\tCOMMIT;\n\nand later savepoints (Oracle):\n\n\n\tBEGIN;\n\tSELECT ...\n\tSAVEPOINT t1;\n\tUPDATE;\n\tSAVEPOINT t2;\n\tDELETE;\n\tROLLBACK TO SAVEPOINT t2;\n\tCOMMIT;\n\nI assume people want both.\n\nAs an implementation, I will hack xact.c to create a transaction status\nstack so when you do a BEGIN inside a transaction, it saves the\ntransaction status, the transaction block status, and perhaps the\ncommand counter. A COMMIT restores these values.\n\nI also plan to modify the on commit/abort actions. On a subtransaction\ncommit, little has to be done, but on an ABORT, you must execute any\nabort actions required by that subtransaction _and_ remove any on commit\nactions for the subtransaction. There will need to be some code\nreorganization because some on commit/abort activity assumes only one\ntransaction can be in process. A stack will need to be added in those\ncases.\n\n\nAnd finally, I must abort tuple changes made by the aborted\nsubtransaction. One way of doing that is to keep all relation id's\nmodified by the transaction, and do a sequential scan of the tables on\nabort, changing the transaction id's to a fixed aborted transaction id. \nHowever, this could be slow. (We could store tids if only a few rows\nare updated by a subtransaction. That would speed it up considerably.)\n\nAnother idea is to use new transaction id's for the subtransactions, and\nupdate the transaction id status in pg_clog for the subtransactions, so\nthat there is no transaction id renumbering required. One problem with\nthis is the requirement of updating all the clog transaction id statuses\natomically. One way to do that would be to do parent/child dependency\nin clog so that if a child is looked up and it is marked as \"in\nprocess\", a check could be done against the parent. 
Once the outer\ntransaction is committed/aborted, those subtransactions could be updated\nso there would be no need to reference the parent any longer. This\nwould increase the clog size per transaction from 2 bits to 4 bytes \n(two bits for status, 30 bits for offset to parent).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 22 Nov 2002 00:32:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am going to work on nested transactions for 7.4.\n> [some details]\n\nThis is, of course, barely scratching the surface of what will need to\nbe done.\n\nI assume you've abandoned the notion of a fast release cycle for 7.4?\n'Cause if you start on this, we ain't releasing any time soon ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 00:43:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am going to work on nested transactions for 7.4.\n> > [some details]\n> \n> This is, of course, barely scratching the surface of what will need to\n> be done.\n> \n> I assume you've abandoned the notion of a fast release cycle for 7.4?\n> 'Cause if you start on this, we ain't releasing any time soon ...\n\nAbandoned because of the delay in Win32 (end of Dec), PITR (not being\nworked on), and mostly because very few wanted a short release cycle.\n\nI will keep the transaction changes private to my tree, so if I can't\nget it done, I will just keep it for the next release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a 
hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 22 Nov 2002 00:45:14 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian wrote:\n> I am going to work on nested transactions for 7.4.\n\nIf you're going to do a lot of reworking of how transactions are \nhandled, maybe this is a good time to beg for cursors that stay open \nacross commits. It looks like the JDBC driver is moving to using cursors \nwith ResultSet.CLOSE_CURSORS_AT_COMMIT, for the advantage of not having \nto fetch the entire result immediately and hold it in memory. If this \nwere implemented, the same could be done for \nResultSet.HOLD_CURSORS_OVER_COMMIT, which I think a lot of JDBC code needs.\n\nThanks,\nScott\n\n", "msg_date": "Fri, 22 Nov 2002 10:36:11 -0600", "msg_from": "Scott Lamb <slamb@slamb.org>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "On Friday 22 November 2002 04:36 pm, Scott Lamb wrote:\n> Bruce Momjian wrote:\n> > I am going to work on nested transactions for 7.4.\n>\n> If you're going to do a lot of reworking of how transactions are\n> handled, maybe this is a good time to beg for cursors that stay open\n> across commits. It looks like the JDBC driver is moving to using cursors\n> with ResultSet.CLOSE_CURSORS_AT_COMMIT, for the advantage of not having\n> to fetch the entire result immediately and hold it in memory. If this\n> were implemented, the same could be done for\n> ResultSet.HOLD_CURSORS_OVER_COMMIT, which I think a lot of JDBC code needs.\n>\n\nI agree. It is my favorite feature - and if you set savepoint I think that stay first solution\n(begin; ...
; begin; ...; begin; ...;commit; ...;commit;...; commit;\n\nThanks \nHaris Peco\n", "msg_date": "Fri, 22 Nov 2002 17:47:29 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "On Fri, 22 Nov 2002 00:32:46 -0500 (EST), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>I am going to work on nested transactions for 7.4.\n> [...]\n>And finally, I must abort tuple changes made by the aborted\n>subtransaction. One way of doing that is to keep all relation id's\n>modified by the transaction, and do a sequential scan of the tables on\n>abort, changing the transaction id's to a fixed aborted transaction id. \n>However, this could be slow. (We could store tids if only a few rows\n>are updated by a subtransaction. That would speed it up considerably.)\n\nDepends on your definition of "few". I don't expect problems for up\nto several thousand tids. If there are more modified tuples, we could\nfirst reduce the list to page numbers, before finally falling back to\ntable scans.\n\n>Another idea is to use new transaction id's for the subtransactions, and\n>[...]\n>would increase the clog size per transaction from 2 bits to 4 bytes \n>(two bits for status, 30 bits for offset to parent).\n\nNice idea, this 30 bit offset. But one could argue that increased\nclog size even hurts users who don't use nested transactions at all.\nIf parent/child dependency is kept separate from status bits (in\npg_subtransxxxx files), additional I/O cost is only paid if\nsubtransactions are actually used. New status bits (XMIN_IS_SUB,\nXMAX_IS_SUB) in tuple headers can avoid unnecessary parent xid\nlookups.\n\nI also thought of subtransaction xids in tuple headers as short lived\ninformation. Under certain conditions they can be replaced with the\nparent xid as soon as the parent transaction has finished.
I proposed\nthis to be done on the next tuple access just like we set\ncommitted/aborted flags now, though I'm not sure anymore that it is\nsafe to do this.\n\nOld pg_subtrans files can be removed by VACUUM.\n\nOne more difference between the two proposals: The former (locally\nremember modified tuples) can be used for recovery after a failed\ncommand. The latter (subtrans tree) can only help, if we give a new\nxid to each command, which I'm sure we don't want to do.\n\nServus\n Manfred\n", "msg_date": "Wed, 27 Nov 2002 16:11:55 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> And finally, I must abort tuple changes made by the aborted\n> subtransaction. One way of doing that is to keep all relation id's\n> modified by the transaction, and do a sequential scan of the tables on\n> abort, changing the transaction id's to a fixed aborted transaction id.\n> However, this could be slow. (We could store tids if only a few rows\n> are updated by a subtransaction. That would speed it up considerably.)\n\nAre you sure you don't want to use the log for this? It does mean that the\nlog can grow without bound for long-lived transactions, but it's very\nstraightforward and fast.\n\nKen Hirsch\n\n", "msg_date": "Wed, 27 Nov 2002 14:22:55 -0500", "msg_from": "\"Ken Hirsch\" <kahirsch@bellsouth.net>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar wrote:\n> On Fri, 22 Nov 2002 00:32:46 -0500 (EST), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >I am going to work on nested transactions for 7.4.\n> > [...]\n> >And finally, I must abort tuple changes made by the aborted\n> >subtransaction. 
One way of doing that is to keep all relation id's\n> >modified by the transaction, and do a sequential scan of the tables on\n> >abort, changing the transaction id's to a fixed aborted transaction id. \n> >However, this could be slow. (We could store tids if only a few rows\n> >are updated by a subtransaction. That would speed it up considerably.)\n> \n> Depends on your definition of \"few\". I don't expect problems for up\n> to several thousand tids. If there are more modified tuples, we could\n> first reduce the list to page numbers, before finally falling back to\n> table scans.\n\nYes, and the key point is that those are kept only in the backend local\nmemory, so clearly thousands are possible. The outer transaction takes\ncare of all the ACID issues.\n\n> >Another idea is to use new transaction id's for the subtransactions, and\n> >[...]\n> >would increase the clog size per transaction from 2 bits to 4 bytes \n> >(two bits for status, 30 bits for offset to parent).\n> \n> Nice idea, this 30 bit offset. But one could argue that increased\n> clog size even hurts users who don't use nested transactions at all.\n> If parent/child dependency is kept separate from status bits (in\n> pg_subtransxxxx files), additional I/O cost is only paid if\n> subtransactions are actually used. New status bits (XMIN_IS_SUB,\n> XMAX_IS_SUB) in tuple headers can avoid unnecessary parent xid\n> lookups.\n> \n> I also thought of subtransaction xids in tuple headers as short lived\n> information. Under certain conditions they can be replaced with the\n> parent xid as soon as the parent transaction has finished. I proposed\n> this to be done on the next tuple access just like we set\n> committed/aborted flags now, though I'm not sure anymore that it is\n> safe to do this.\n> \n> Old pg_subtrans files can be removed by VACUUM.\n> \n> One more difference between the two proposals: The former (locally\n> remember modified tuples) can be used for recovery after a failed\n> command. 
The latter (subtrans tree) can only help, if we give a new\n> xid to each command, which I'm sure we don't want to do.\n\nThe interesting issue is that if we could set the commit/abort bits all\nat the same time, we could have the parent/child dependency local to the\nbackend --- other backends don't need to know the parent, only the\nstatus of the (subtransaction's) xid, and they need to see all those\nxid's committed at the same time.\n\nYou could store the backend slot id in pg_clog rather than the parent\nxid and look up the status of the outer xid for that backend slot. That\nwould allow you to use 2 bytes, with a max of 16k backends. The problem\nis that on a crash, the pg_clog points to invalid slots --- it would\nprobably have to be cleaned up on startup.\n\nBut still, you have an interesting idea of just setting the bit to be \"I\nam a child\". The trick is allowing backends to figure out who's child\nyou are. We could store this somehow in shared memory, but that is\nfinite and there can be lots of xid's for a backend using\nsubtransactions.\n\nI still think there must be a clean way, but I haven't figured it out yet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 27 Nov 2002 22:47:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Ken Hirsch wrote:\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > And finally, I must abort tuple changes made by the aborted\n> > subtransaction. One way of doing that is to keep all relation id's\n> > modified by the transaction, and do a sequential scan of the tables on\n> > abort, changing the transaction id's to a fixed aborted transaction id.\n> > However, this could be slow. 
(We could store tids if only a few rows\n> > are updated by a subtransaction. That would speed it up considerably.)\n> \n> Are you sure you don't want to use the log for this? It does mean that the\n> log can grow without bound for long-lived transactions, but it's very\n> straightforward and fast.\n\nI don't think we want to have unlimited log file growth for long running\ntransactions/subtransactions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 27 Nov 2002 22:48:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Is there going to be a way to use transactions inside transactions of \ntransactions?\nIn other words:\n\n BEGIN;\n BEGIN;\n BEGIN;\n BEGIN;\n\n COMMIT;\n COMMIT;\n COMMIT;\n COMMIT;\n\nIs there a way to have some sort of recursive solution with every \ntransaction but the first one being a child transaction?\nIs there a way to implement that without too much extra effort?\nI just curious how that could be done.\n\n Hans\n\n\n\n", "msg_date": "Thu, 28 Nov 2002 09:22:09 +0100", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "On Wed, 27 Nov 2002 22:47:33 -0500 (EST), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>The interesting issue is that if we could set the commit/abort bits all\n>at the same time, we could have the parent/child dependency local to the\n>backend --- other backends don't need to know the parent, only the\n>status of the (subtransaction's) xid, and they need to see all those\n>xid's committed at the same time.\n\nYou mean the commit/abort bit in the tuple headers? 
Yes, this would\nbe interesting, but I see no way how this could be done. If it could,\nthere would be no need for pg_clog.\n\nReading your paragraph above one more time I think you mean the bits\nin pg_clog: Each subtransaction gets its own xid. On ROLLBACK the\nabort bits of the aborted (sub)transaction and all its children are\nset in pg_clog immediately. This operation does not have to be\natomic. On subtransaction COMMIT nothing happens to pg_clog, the\nstatus is only changed locally, the subtransaction still looks "in\nprogress" to other backends. Only when the main transaction commits,\nwe set the commit bits of the main transaction and all its non-aborted\nchildren in pg_clog. This action has to be atomic. Right?\n\nAFAICS the problem lies in updating several pg_clog bits at once. How\ncan this be done without holding a potentially long lasting lock?\n\n>You could store the backend slot id in pg_clog rather than the parent\n>xid and look up the status of the outer xid for that backend slot. That\n>would allow you to use 2 bytes, with a max of 16k backends. The problem\n>is that on a crash, the pg_clog points to invalid slots --- it would\n>probably have to be cleaned up on startup.\n\nAgain I would try to keep pg_clog compact and store the backend slots\nin another file, thus not slowing down instances where subtransactions\nare not used. Apart from this minor detail I don't see, how this is\nsupposed to work. Could you elaborate?\n\n>But still, you have an interesting idea of just setting the bit to be "I\n>am a child".\n\nThe idea was to set subtransaction bits in the tuple header. Here is\nyet another different idea: Let the currently unused fourth state in\npg_clog indicate a committed subtransaction.
There are two bits per\ntransaction, commit and abort, with the following meaning:\n\n a c\n 0 0 transaction in progress, the owning backend knows whether it is\n a main- or a sub-transaction, other backends don't care\n 1 0 aborted, nobody cares whether main- or sub-transaction\n 0 1 committed main-transaction (*)\n 1 1 committed sub-transaction, have to look for parent in\n pg_subtrans\n\nIf we allow the 1/1 state to be replaced with 0/1 or 1/0 (on the fly\nas a side effect of a visibility check, or by vacuum, or by\nCOMMIT/ROLLBACK), this could save a lot of parent lookups without\nhaving to touch the xids in the tuple headers.\n\nSo (*) should read: committed main-transaction or committed\nsub-transaction having a committed parent.\n\n>The trick is allowing backends to figure out who's child\n>you are. We could store this somehow in shared memory, but that is\n>finite and there can be lots of xid's for a backend using\n>subtransactions.\n\nThe subtrans dependencies have to be visible to all backends. Store\nthem to disk just like pg_clog. In older proposals I spoke of a\npg_subtrans \"table\" containing (parent, child) pairs. This was only\nmeant as a concept, not as a real SQL table subject to MVCC. An\nefficient(?) implementation could be an array of parent xids, indexed\nby child xid. 
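A minimal sketch of that array (plain C, invented names, everything in memory --- a real version would be paged to disk much like the clog):

```c
#include <stdint.h>

/* pg_subtrans as a flat array of parent xids indexed by child xid;
 * 0 means "no parent recorded", i.e. a main transaction. */
#define DEMO_MAX_XID 1024

static uint32_t subtrans_parent[DEMO_MAX_XID];

static void
subtrans_set_parent(uint32_t child, uint32_t parent)
{
    subtrans_parent[child] = parent;
}

/* Walk up until we reach the topmost (main) transaction. */
static uint32_t
subtrans_top_parent(uint32_t xid)
{
    while (subtrans_parent[xid] != 0)
        xid = subtrans_parent[xid];
    return xid;
}

/* demo: xid 2 is the main transaction, 5 its child, 7 a grandchild */
static uint32_t
demo_top_of_7(void)
{
    subtrans_set_parent(5, 2);
    subtrans_set_parent(7, 5);
    return subtrans_top_parent(7);
}
```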
Most of it can be stolen from the clog code.\n\nOne more argument for pg_subtrans being visible to all backends: If\nan UPDATE is about to change a tuple touched by another active\ntransaction, it waits for the other transaction to commit or abort.\nWe must always wait for the main transaction, not the subtrans.\n\n>I still think there must be a clean way,\nI hope so ...\n\n> but I haven't figured it out yet.\nAre we getting nearer?\n\nServus\n Manfred\n", "msg_date": "Thu, 28 Nov 2002 15:51:53 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Hans-Jürgen Schönig wrote:\n> Is there going to be a way to use transactions inside transactions of \n> transactions?\n> In other words:\n> \n> BEGIN;\n> BEGIN;\n> BEGIN;\n> BEGIN;\n> \n> COMMIT;\n> COMMIT;\n> COMMIT;\n> COMMIT;\n> \n> Is there a way to have some sort of recursive solution with every \n> transaction but the first one being a child transaction?\n> Is there a way to implement that without too much extra effort?\n> I just curious how that could be done.\n\nSure, nesting will be unlimited.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 28 Nov 2002 11:27:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar wrote:\n> On Wed, 27 Nov 2002 22:47:33 -0500 (EST), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >The interesting issue is that if we could set the commit/abort bits all\n> >at the same time, we could have the parent/child dependency local to the\n> >backend --- other backends don't need to know the parent, only the\n> >status of the (subtransaction's) xid, and they need to see all those\n> >xid's committed at the same time.\n> \n> You mean the commit/abort bit in the tuple headers? Yes, this would\n> be interesting, but I see no way how this could be done. If it could,\n> there would be no need for pg_clog.\n> \n> Reading your paragraph above one more time I think you mean the bits\n> in pg_clog: Each subtransaction gets its own xid. On ROLLBACK the\n\nRight.\n\n> abort bits of the aborted (sub)transaction and all its children are\n> set in pg_clog immediately. This operation does not have to be\n> atomic. On subtransaction COMMIT nothing happens to pg_clog, the\n\nRight, going from RUNNING to ABORTED doesn't have to be atomic because\nboth tuples are invisible.\n\n> status is only changed locally, the subtransaction still looks \"in\n> progress\" to other backends. Only when the main transaction commits,\n> we set the commit bits of the main transaction and all its non-aborted\n> children in pg_clog. This action has to be atomic. Right?\n\nRight. We can't have some backends looking at part of the transaction\nas committed while at the same time other backends see the transaction\nas in process.\n\n> AFAICS the problem lies in updating several pg_clog bits at once. How\n> can this be done without holding a potentially long lasting lock?\n\nYes, locking is one possible solution, but no one likes that. 
One hack\nlock idea would be to create a subtransaction-only lock, so if you see\nthe special 4-th xact state (about to be committed as part of a\nsubtransaction) you have to wait on that lock (held by the backend\ntwiddling the xact bits), then look again. That basically would\nserialize all the bit-twiddling for subtransactions. I am sure I am\ngoing to get a \"yuck\" from the audience on that one, but I am not sure\nhow long that bit twiddling could take. Does xact twiddle every cause\nI/O? I think it could, which would be a pretty big performance problem.\nIt would serialize the subtransaction commits _and_ block anyone trying\nto get the status of those subtransactions. We would not use the the\n4th xid status during the transaction, only while we were twiddling the\nbits on commit.\n\n> >You could store the backend slot id in pg_clog rather than the parent\n> >xid and look up the status of the outer xid for that backend slot. That\n> >would allow you to use 2 bytes, with a max of 16k backends. The problem\n> >is that on a crash, the pg_clog points to invalid slots --- it would\n> >probably have to be cleaned up on startup.\n> \n> Again I would try to keep pg_clog compact and store the backend slots\n> in another file, thus not slowing down instances where subtransactions\n> are nor used. Apart from this minor detail I don't see, how this is\n> supposed to work. Could you elaborate?\n\nThe trick is that when that 4th status is set, backends looking up the\nstatus all need to point to a central location that can be set for all\nof them at once, hence the original idea of putting the parent xid in\nthe clog file. We don't _need_ to do that, but we do need a way to\n_point_ to a central location where the status can be looked up.\n\n> >But still, you have an interesting idea of just setting the bit to be \"I\n> >am a child\".\n> \n> The idea was to set subtransaction bits in the tuple header. 
Here is\n> yet another different idea: Let the currently unused fourth state in\n> pg_clog indicate a committed subtransaction. There are two bits per\n> transaction, commit and abort, with the following meaning:\n> \n> a c\n> 0 0 transaction in progress, the owning backend knows whether it is\n> a main- or a sub-transaction, other backends don't care\n> 1 0 aborted, nobody cares whether main- or sub-transaction\n> 0 1 committed main-transaction (*)\n> 1 1 committed sub-transaction, have to look for parent in\n> pg_subtrans\n> \n> If we allow the 1/1 state to be replaced with 0/1 or 1/0 (on the fly\n> as a side effect of a visibility check, or by vacuum, or by\n> COMMIT/ROLLBACK), this could save a lot of parent lookups without\n> having to touch the xids in the tuple headers.\n\nYes, you could do that, but we can easily just set the clog bits\natomically, and it will not be needed --- the tuple bits really don't\nhelp us, I think.\n\n> So (*) should read: committed main-transaction or committed\n> sub-transaction having a committed parent.\n> \n> >The trick is allowing backends to figure out who's child\n> >you are. We could store this somehow in shared memory, but that is\n> >finite and there can be lots of xid's for a backend using\n> >subtransactions.\n> \n> The subtrans dependencies have to be visible to all backends. Store\n> them to disk just like pg_clog. In older proposals I spoke of a\n> pg_subtrans \"table\" containing (parent, child) pairs. This was only\n> meant as a concept, not as a real SQL table subject to MVCC. An\n> efficient(?) implementation could be an array of parent xids, indexed\n> by child xid. Most of it can be stolen from the clog code.\n\nOK, we put it in a file. And how do we efficiently clean it up?\nRemember, it is only to be used for a _brief_ period of time. I think a\nfile system solution is doable if we can figure out a way not to create\na file for every xid. 
Do we spin through the files (one per outer\ntransaction) looking for a matching xid when we see that 4th xact state?\n\nMaybe we write the xid's to a file in a special directory in sorted\norder, and backends can do a btree search of each file in that directory\nlooking for the xid, and then knowing the master xid, look up that\nstatus, and once all the children xid's are updated, you delete the\nfile.\n\n> One more argument for pg_subtrans being visible to all backends: If\n> an UPDATE is about to change a tuple touched by another active\n> transaction, it waits for the other transaction to commit or abort.\n> We must always wait for the main transaction, not the subtrans.\n\nYes, but again, the xid status of subtransactions is only updated just\nbefore commit of the main transaction, so there is little value to\nhaving those visible.\n\nLet's keep going!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 28 Nov 2002 12:59:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, locking is one possible solution, but no one likes that. One hack\n> lock idea would be to create a subtransaction-only lock, so if you see\n> the special 4-th xact state (about to be committed as part of a\n> subtransaction) you have to wait on that lock (held by the backend\n> twiddling the xact bits), then look again. That basically would\n> serialize all the bit-twiddling for subtransactions. I am sure I am\n> going to get a "yuck" from the audience on that one,\n\nYou sure are.\n\n> but I am not sure\n> how long that bit twiddling could take.
Does xact twiddle ever cause\n> I/O?\n\nYes, if the page of pg_clog you need to touch is not currently in a\nbuffer. With a large transaction you might have hundreds of\nsubtransactions, which could take an unpleasantly long time to mark\nall committed.\n\nWhat's worse, I think the above proposal requires a *single* lock for\nthis purpose (if there's more than one, how shall the requestor know\nwhich one to block on?) --- so you are serializing all transaction\ncommits that have subtransactions, with only one able to go through at\na time. That will really, really not do; the performance will be way\nworse than the chaining idea we discussed before.\n\n> You could store the backend slot id in pg_clog rather than the parent\n> xid and look up the status of the outer xid for that backend slot. That\n> would allow you to use 2 bytes, with a max of 16k backends.\n\nThis is also a bad idea, because backend slot ids are not stable (by the\ntime you look in PG_PROC, the slot may be occupied by a new, unrelated\nbackend process).\n\n> But still, you have an interesting idea of just setting the bit to be "I\n> am a child".\n\nThat bit alone doesn't help; you need to know *whose* child.\n\nAFAICS, the objection to putting parent xact IDs into pg_clog is\nbasically a performance issue: bigger clog means more I/O. This is\nsurely true; but the alternatives proposed so far are worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Nov 2002 17:54:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "\nI should add that I am not prepared to overhaul the pg_clog file format\nas part of adding subtransactions for 7.4.
I can do the tid/sequential scan\nmethod for abort, or the single-lock method described.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, locking is one possible solution, but no one likes that. One hack\n> > lock idea would be to create a subtransaction-only lock, so if you see\n> > the special 4-th xact state (about to be committed as part of a\n> > subtransaction) you have to wait on that lock (held by the backend\n> > twiddling the xact bits), then look again. That basically would\n> > serialize all the bit-twiddling for subtransactions. I am sure I am\n> > going to get a "yuck" from the audience on that one,\n> \n> You sure are.\n> \n> > but I am not sure\n> > how long that bit twiddling could take. Does xact twiddle ever cause\n> > I/O?\n> \n> Yes, if the page of pg_clog you need to touch is not currently in a\n> buffer. With a large transaction you might have hundreds of\n> subtransactions, which could take an unpleasantly long time to mark\n> all committed.\n> \n> What's worse, I think the above proposal requires a *single* lock for\n> this purpose (if there's more than one, how shall the requestor know\n> which one to block on?) --- so you are serializing all transaction\n> commits that have subtransactions, with only one able to go through at\n> a time. That will really, really not do; the performance will be way\n> worse than the chaining idea we discussed before.\n> \n> > You could store the backend slot id in pg_clog rather than the parent\n> > xid and look up the status of the outer xid for that backend slot. That
That\n> > would allow you to use 2 bytes, with a max of 16k backends.\n> \n> This is also a bad idea, because backend slot ids are not stable (by the\n> time you look in PG_PROC, the slot may be occupied by a new, unrelated\n> backend process).\n> \n> > But still, you have an interesting idea of just setting the bit to be \"I\n> > am a child\".\n> \n> That bit alone doesn't help; you need to know *whose* child.\n> \n> AFAICS, the objection to putting parent xact IDs into pg_clog is\n> basically a performance issue: bigger clog means more I/O. This is\n> surely true; but the alternatives proposed so far are worse.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 28 Nov 2002 21:05:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I should add that I am not prepared to overhaul the pg_clog file format\n> as part of adding subtransactions for 7.4. I can do the tid/sequential scan\n> method for abort, or the single-lock method described.\n\nIf you think that changing the pg_clog file format would be harder than\neither of those other ideas, I think you're very badly mistaken.\npg_clog is touched only by one rather simple module.\n\nI think the other methods will be completely unacceptable from a\nperformance point of view. They could maybe work if subtransactions\nwere a seldom-used feature; but the people who want to use 'em are\nmostly talking about a subtransaction for *every* command. 
If you\ndesign your implementation on the assumption that subtransactions are\ninfrequent, it will be unusably slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Nov 2002 21:29:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I should add that I am not prepared to overhaul the pg_clog file format\n> > as part of adding subtransactions for 7.4. I can do the tid/sequential scan\n> > method for abort, or the single-lock method described.\n> \n> If you think that changing the pg_clog file format would be harder than\n> either of those other ideas, I think you're very badly mistaken.\n> pg_clog is touched only by one rather simple module.\n\nAgreed, the clog changes would be the simple solution. However, I am\nnot sure I can make that level of changes. In fact, the complexity of\nhaving multiple transactions per backend is going to be tough for me\ntoo.\n\nAlso, I should point out that balooning pg_clog by 16x is going to mean\nwe are perhaps 4-8x more likely to need extra pages to mark all\nsubtransactions.\n\nIsn't there some other way we can link these subtransactions together\nrather than mucking with pg_clog, as we only need the linkage while we\nmark them all committed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 28 Nov 2002 21:35:13 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, I should point out that balooning pg_clog by 16x is going to mean\n> we are perhaps 4-8x more likely to need extra pages to mark all\n> subtransactions.\n\nSo? 
The critical point is that we don't need to serialize the pg_clog\noperations if we do it that way. Also, we can certainly expand the\nnumber of pg_clog pages held in memory by some amount. Right now it's\nonly 4, IIRC. We could make it 64 and probably no one would even\nnotice.\n\n> Isn't there some other way we can link these subtransactions together\n> rather than mucking with pg_clog, as we only need the linkage while we\n> mark them all committed?\n\nYou *cannot* expect to do it all in shared memory; you will be blown out\nof the water by the first long transaction that comes along, if you try.\nSo the question is not whether we put the status into a file, it is only\nwhat representation we choose.\n\nManfred suggested a separate log file (\"pg_subclog\" or some such) but\nI really don't see any operational advantage to that. You still end up\nwith 4 bytes per transaction, you're just assuming that putting them\nin a different file makes it better. I don't see how.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Nov 2002 21:46:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Tom Lane wrote:\n> > Isn't there some other way we can link these subtransactions together\n> > rather than mucking with pg_clog, as we only need the linkage while we\n> > mark them all committed?\n> \n> You *cannot* expect to do it all in shared memory; you will be blown out\n> of the water by the first long transaction that comes along, if you try.\n> So the question is not whether we put the status into a file, it is only\n> what representation we choose.\n> \n> Manfred suggested a separate log file (\"pg_subclog\" or some such) but\n> I really don't see any operational advantage to that. You still end up\n> with 4 bytes per transaction, you're just assuming that putting them\n> in a different file makes it better. 
I don't see how.\n\nIt only becomes better if we can throw away that file (or contents) when\nthe transaction completes and we have marked all the subtransactions as\ncompleted. We can't compress pg_clog if we store the parent info in\nthere.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 28 Nov 2002 22:10:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It only becomes better if we can throw away that file (or contents) when\n> the transaction completes and we have marked all the subtransactions as\n> completed. We can't compress pg_clog if we store the parent info in\n> there.\n\nBut we already have a recycling mechanism for pg_clog. AFAICS,\ncreating a parallel log file with a separate recycling mechanism is\na study in wasted effort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Nov 2002 22:27:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It only becomes better if we can throw away that file (or contents) when\n> > the transaction completes and we have marked all the subtransactions as\n> > completed. We can't compress pg_clog if we store the parent info in\n> > there.\n> \n> But we already have a recycling mechanism for pg_clog. AFAICS,\n> creating a parallel log file with a separate recycling mechanism is\n> a study in wasted effort.\n\nBut that recycling requires the vacuum of every database in the system. 
\nDo people do that frequently enough?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Nov 2002 00:53:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> But we already have a recycling mechanism for pg_clog. AFAICS,\n>> creating a parallel log file with a separate recycling mechanism is\n>> a study in wasted effort.\n\n> But that recycling requires the vacuum of every database in the system. \n> Do people do that frequently enough?\n\nOnce the auto vacuum code is in there, they won't have any choice ;-)\n\nIn any case, I saw no part of your proposal that removed the need for\nvacuum, so what's your point?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Nov 2002 00:56:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "On Thu, 28 Nov 2002 21:46:09 -0500, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Manfred suggested a separate log file (\"pg_subclog\" or some such) but\n>I really don't see any operational advantage to that. You still end up\n>with 4 bytes per transaction, you're just assuming that putting them\n>in a different file makes it better. I don't see how.\n\nThere are two points:\n\n1) If your site/instance/application/whatever... 
does not use nested\ntransactions or does use them only occasionally, you don't have to pay\nthe additional I/O cost.\n\n2) If we update a subtransaction's pg_clog bits as soon as the status\nof the main transaction is known, pg_subtrans is only visited once per\nsubtransaction, while pg_clog has to be looked up once per tuple.\n\nThings might look different however, if we wrap every command into a\nsubtransaction...\n\nServus\n Manfred\n", "msg_date": "Fri, 29 Nov 2002 12:23:26 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "On Friday 29 November 2002 00:56, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> But we already have a recycling mechanism for pg_clog.  AFAICS,\n> >> creating a parallel log file with a separate recycling mechanism is\n> >> a study in wasted effort.\n> >\n> > But that recycling requires the vacuum of every database in the system.\n> > Do people do that frequently enough?\n>\n> Once the auto vacuum code is in there, they won't have any choice ;-)\n\nOK, I know postgres needs to be vacuumed every so often (I think it's to \nguarantee safe XID wraparound?) I think the AVD should do something to \nguarantee this is happening. Since I am working on AVD, what are the criteria \nfor this? From the above I assume it also pertains to pg_clog recycling \n(which is related to XID wraparound?), but I know nothing about that.\n\nRight now AVD only performs vacuum analyze on specific tables as it deems they \nneed it, it does not perform vacuum on entire databases at any point yet.\n\n", "msg_date": "Fri, 29 Nov 2002 08:05:41 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> 1) If your site/instance/application/whatever... 
does not use nested\n> transactions or does use them only occasionally, you don't have to pay\n> the additional I/O cost.\n\nAs I already said to Bruce, designing this facility on the assumption\nthat it will be seldom-used is a recipe for failure. Everybody and\nhis brother wants commands that don't abort the whole transaction.\nAs soon as this facility exists, you can bet that the standard mode\nof operation will become \"one subtransaction per interactive command\".\nIf you don't design it to support that load, you may as well not bother\nto build it at all.\n\n> 2) If we update a subtransaction's pg_clog bits as soon as the status\n> of the main transaction is known, pg_subtrans is only visited once per\n> subtransaction, while pg_clog has to be looked up once per tuple.\n\nHow you figure that? It seems to me the visit rate is exactly the same,\nyou've just divided it into two files. Having to touch two files\ninstead of one seems if anything worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Nov 2002 10:53:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "\"Matthew T. 
O'Connor\" <matthew@zeut.net> writes:\n> Right now AVD only performs vacuum analyze on specific tables as it deems they \n> need it, it does not perform vacuum on entire databases at any point yet.\n\nSee\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/routine-vacuuming.html\n\nHowever I think that only talks about XID wraparound and datfrozenxid.\npg_clog recycling is driven off the oldest datvacuumxid in pg_database;\nthe AVD should think about launching a database-wide vacuum whenever\nage(datvacuumxid) exceeds a million or two.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Nov 2002 11:01:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Manfred Koizar wrote:\n> One more argument for pg_subtrans being visible to all backends: If\n> an UPDATE is about to change a tuple touched by another active\n> transaction, it waits for the other transaction to commit or abort.\n> We must always wait for the main transaction, not the subtrans.\n\nThis issue kills the idea that we can get away with providing lookup to\nthe other backends _only_ while we are twiddling the clog bits. Other\ntransactions are going to need to know if the XID they see on the tuple\nis owned by an active backend. This means we have to provide\nchild/master xid lookup during the transaction, meaning we may as well\nuse pg_clog or separate file, especially if we can get autovacuum for\n7.4. It kills the idea that somehow locking would work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Nov 2002 11:24:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "On Thu, 28 Nov 2002 12:59:21 -0500 (EST), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>Yes, locking is one possible solution, but no one likes that. One hack\n>lock idea would be to create a subtransaction-only lock, [...]\n>\n>> [...] without\n>> having to touch the xids in the tuple headers.\n>\n>Yes, you could do that, but we can easily just set the clog bits\n>atomically,\n\n From what I read above I don't think we can *easily* set more than one\ntransaction's bits atomically.\n\n> and it will not be needed --- the tuple bits really don't\n>help us, I think.\n\nYes, this is what I said, or at least tried to say. I just wanted to\nmake clear how this new approach (use the fourth status) differs from\nolder proposals (replace subtransaction ids in tuple headers).\n\n>OK, we put it in a file. And how do we efficiently clean it up?\n>Remember, it is only to be used for a _brief_ period of time. I think a\n>file system solution is doable if we can figure out a way not to create\n>a file for every xid.\n\nI don't want to create one file for every transaction, but rather a\nhuge (sparse) array of parent xids. This array is divided into\nmanageable chunks, represented by files, \"pg_subtrans_NNNN\". These\nfiles are only created when necessary. At any time only a tiny part\nof the whole array is kept in shared buffers. 
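The pg_clog-style paging just described can be sketched with some simple addressing arithmetic; the page and segment sizes below are illustrative constants of mine, not PostgreSQL's actual values:

```python
# Hypothetical pg_subtrans layout: one 4-byte parent xid per transaction,
# packed into fixed-size pages, many pages per "pg_subtrans_NNNN" segment file.
BLCKSZ = 8192                        # bytes per page (illustrative)
XIDS_PER_PAGE = BLCKSZ // 4          # 2048 parent xids fit on one page
PAGES_PER_SEGMENT = 1024             # pages per physical file (illustrative)
XIDS_PER_SEGMENT = XIDS_PER_PAGE * PAGES_PER_SEGMENT

def subtrans_locate(xid):
    """Map a transaction id to (segment file number, page in file, byte offset)."""
    segno = xid // XIDS_PER_SEGMENT
    pageno = (xid % XIDS_PER_SEGMENT) // XIDS_PER_PAGE
    offset = (xid % XIDS_PER_PAGE) * 4
    return segno, pageno, offset
```

Under this scheme segments would be created lazily, so a site that never opens a subtransaction never pays any pg_subtrans I/O.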
This concept is similar\nor almost equal to pg_clog, which is an array of doublebits.\n\n>Maybe we write the xid's to a file in a special directory in sorted\n>order, and backends can do a btree search of each file in that directory\n>looking for the xid, and then knowing the master xid, look up that\n>status, and once all the children xid's are updated, you delete the\n>file.\n\nYes, dense arrays or btrees are other possible implementations. But\nfor simplicity I'd do it pg_clog style.\n\n>Yes, but again, the xid status of subtransactions is only update just\n>before commit of the main transaction, so there is little value to\n>having those visible.\n\nHaving them visible solves the atomicity problem without requiring\nlong locks. Updating the status of a single (main or sub) transaction\nis atomic, just like it is now.\n\nHere is what is to be done for some operations:\n\nBEGIN main transaction:\n\tGet a new xid (no change to current behaviour).\n\tpg_clog[xid] is still 00, meaning active.\n\tpg_subtrans[xid] is still 0, meaning no parent.\n\nBEGIN subtransaction:\n\tPush current transaction info onto local stack.\n\tGet a new xid.\n\tRecord parent xid in pg_subtrans[xid].\n\tpg_clog[xid] is still 00.\n\nROLLBACK subtransaction:\n\tSet pg_clog[xid] to 10 (aborted).\n\tOptionally set clog bits for subsubtransactions to 10.\n\tPop transaction info from stack.\n\nCOMMIT subtransaction:\n\tSet pg_clog[xid] to 11 (committed subtrans).\n\tDon't touch clog bits for subsubtransactions!\n\tPop transaction info from stack.\n\nROLLBACK main transaction:\n\tSet pg_clog[xid] to 10 (aborted).\n\tOptionally set clog bits for subtransactions to 10.\n\t\nCOMMIT main transaction:\n\tSet pg_clog[xid] to 01 (committed).\n\tOptionally set clog bits for subtransactions from 11 to 01.\n\tDon't touch clog bits for aborted subtransactions!\n\nVisibility check by other transactions: If a tuple is visited and its\nXMIN/XMAX_IS_COMMITTED/ABORTED flags are not yet set, pg_clog has to\nbe 
consulted to find out the status of the inserting/deleting\ntransaction xid. If pg_clog[xid] is ...\n\n\t00: transaction still active\n\n\t10: aborted\n\n\t01: committed\n\n\t11: committed subtransaction, have to check parent\n\nOnly in this last case do we have to get parentxid from pg_subtrans.\nNow we look at pg_clog[parentxid]. If we find ...\n\n\t00: parent still active, so xid is considered active, too\n\n\t10: parent aborted, so xid is considered aborted,\n\t optionally set pg_clog[xid] = 10\n\n\t01: parent committed, so xid is considered committed,\n\t optionally set pg_clog[xid] = 01\n\n\t11: recursively check grandparent(s) ...\n\nFor brevity the following operations are not covered in detail:\n. Visibility checks for tuples inserted/deleted by a (sub)transaction\nbelonging to the current transaction tree (have to check local\ntransaction stack whenever we look at a xid or switch to a parent xid)\n. HeapTupleSatisfiesUpdate (sometimes has to wait for parent\ntransaction)\n\nThe trick here is, that subtransaction status is immediately updated\nin pg_clog on commit/abort. Main transaction commit is atomic (just\nset its commit bit). Status 11 is short-lived, it is replaced with\nthe final status by one or more of\n\n\t- COMMIT/ROLLBACK of the main transaction\n\t- a later visibility check (as a side effect)\n\t- VACUUM\n\npg_subtrans cleanup: A pg_subtrans_NNNN file covers a known range of\ntransaction ids. As soon as none of these transactions has a pg_clog\nstatus of 11, the pg_subtrans_NNNN file can be removed. 
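The status transitions and the recursive parent lookup described above can be simulated in a few lines; this is a toy model with dicts standing in for pg_clog and pg_subtrans, and the names and encoding are mine:

```python
# Two-bit transaction statuses, as in the proposal above.
ACTIVE, COMMITTED, ABORTED, SUBCOMMITTED = 0b00, 0b01, 0b10, 0b11

clog = {}       # xid -> two status bits
subtrans = {}   # xid -> parent xid (absent = top-level transaction)
next_xid = [1]

def begin(parent=None):
    xid = next_xid[0]
    next_xid[0] += 1
    clog[xid] = ACTIVE
    if parent is not None:
        subtrans[xid] = parent       # record parent at subtransaction start
    return xid

def commit(xid):
    # A subtransaction commits only tentatively (status 11); a main
    # transaction's commit (status 01) is the atomic commit point.
    clog[xid] = SUBCOMMITTED if xid in subtrans else COMMITTED

def rollback(xid):
    clog[xid] = ABORTED              # immediately final for sub or main

def effective_status(xid):
    status = clog[xid]
    while status == SUBCOMMITTED:    # 11: have to check the parent(s)
        xid = subtrans[xid]
        status = clog[xid]
    return status
```

For example, a committed subtransaction still reads as ACTIVE to other backends until its main transaction commits, and reads as ABORTED if the main transaction rolls back.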
VACUUM can do\nthis, and it won't even have to check the heap.\n\nServus\n Manfred\n", "msg_date": "Fri, 29 Nov 2002 18:03:56 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Visibility check by other transactions: If a tuple is visited and its\n> XMIN/XMAX_IS_COMMITTED/ABORTED flags are not yet set, pg_clog has to\n> be consulted to find out the status of the inserting/deleting\n> transaction xid. If pg_clog[xid] is ...\n\n> \t00: transaction still active\n\n> \t10: aborted\n\n> \t01: committed\n\n> \t11: committed subtransaction, have to check parent\n\n> Only in this last case do we have to get parentxid from pg_subtrans.\n\nUnfortunately this discussion is wrong. User-level visibility checks\nwill usually have to fetch the parentxid in case 01 as well, because\neven if the parent is committed, it might not be visible in our\nsnapshot. Snapshots will record only topmost-parent XIDs (because\nthat's what we can find in the PG_PROC array, and anything else would\ncreate atomicity problems anyway). So we must chase to the topmost\nparent before testing visibility.\n\nThis means that the parentxid will need to be fetched in enough cases\nthat it's quite dubious that pushing it to a different file saves I/O.\n\nAlso, using a 11 state doubles the amount of pg_clog I/O needed to\ncommit a collection of subtransactions. You have to write 11 as the\nstate of each commitable subtransaction, then commit the parent (write\n01 as its state), then go back and change the state of each\nsubtransaction to 01. (Whether this last bit is done as part of parent\ntransaction commit, or during later inspections of the state of the\nsubtransaction, doesn't change the argument.)\n\nI think it would be preferable to use only three states: active,\naborted, committed. 
The parent commit protocol is (1) write 10 as state\nof each aborted subtransaction (this should be done as soon as the\nsubtransaction is known aborted, rather than delaying to parent commit);\n(2) write 01 as state of parent (this is the atomic commit); (3) write\n01 as state of each committed subtransaction. Readers who see 00 must\ncheck the parent state; if the parent is committed then they have to go\nback and recheck the child state (to see if it became \"aborted\" after\nthey looked). This halves the write traffic during a commit, at the\ncost of additional read traffic when subtransaction state is checked in\na narrow window after the time of parent transaction commit. I believe\nit nets out to be faster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Nov 2002 13:33:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "On Fri, 29 Nov 2002 13:33:28 -0500, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Unfortunately this discussion is wrong. User-level visibility checks\n>will usually have to fetch the parentxid in case 01 as well, because\n>even if the parent is committed, it might not be visible in our\n>snapshot.\n\nOr we don't allow a subtransaction's status to be updated from 11 to\n01 until we know, that the main transaction is visible to all active\ntransactions. Didn't check whether this is expensive to find out. At\nleast it should be doable by VACCUM.\n\n>Snapshots will record only topmost-parent XIDs (because\n>that's what we can find in the PG_PROC array, and anything else would\n>create atomicity problems anyway). So we must chase to the topmost\n>parent before testing visibility.\n\nBTW, I think this *forces* us to replace the sub xid with the\nrespective main xid in a tuple header, when we set\nXMIN/MAX_IS_COMMITTED. 
Otherwise we'd have to look for the main xid,\nwhenever a tuple is touched.\n\n>Also, using a 11 state doubles the amount of pg_clog I/O needed to\n>commit a collection of subtransactions.\n\nIs a pg_clog page written out to disk each time a bit is changed? I'd\nexpect some locality.\n\n>I think it would be preferable to use only three states: active,\n>aborted, committed. The parent commit protocol is (1) write 10 as state\n>of each aborted subtransaction (this should be done as soon as the\n>subtransaction is known aborted, rather than delaying to parent commit);\n>(2) write 01 as state of parent (this is the atomic commit); (3) write\n>01 as state of each committed subtransaction. Readers who see 00 must\n>check the parent state; if the parent is committed then they have to go\n>back and recheck the child state (to see if it became \"aborted\" after\n>they looked).\n\nNice idea! This saves the fourth status for future uses (for example,\nFirebird uses it for two phase commit). OTOH for reasons you\nmentioned above there's no chance to save parent xid lookups, if we go\nthis way.\n\n>This halves the write traffic during a commit, at the\n>cost of additional read traffic when subtransaction state is checked in\n>a narrow window after the time of parent transaction commit. I believe\n>it nets out to be faster.\n\nMaybe. The whole point of my approach is: If we can limit the active\nrange of transactions requiring parent xid lookups to a small fraction\nof the range needing pg_clog lookups, then it makes sense to store\nstatus bits and parent xids in different files. 
Otherwise keeping\nthem together in one file clearly is faster.\n\nServus\n Manfred\n", "msg_date": "Fri, 29 Nov 2002 21:10:42 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "Manfred Koizar wrote:\n> On Fri, 29 Nov 2002 13:33:28 -0500, Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >Unfortunately this discussion is wrong. User-level visibility checks\n> >will usually have to fetch the parentxid in case 01 as well, because\n> >even if the parent is committed, it might not be visible in our\n> >snapshot.\n> \n> Or we don't allow a subtransaction's status to be updated from 11 to\n> 01 until we know, that the main transaction is visible to all active\n> transactions. Didn't check whether this is expensive to find out. At\n> least it should be doable by VACCUM.\n> \n> >Snapshots will record only topmost-parent XIDs (because\n> >that's what we can find in the PG_PROC array, and anything else would\n> >create atomicity problems anyway). So we must chase to the topmost\n> >parent before testing visibility.\n> \n> BTW, I think this *forces* us to replace the sub xid with the\n> respective main xid in a tuple header, when we set\n> XMIN/MAX_IS_COMMITTED. Otherwise we'd have to look for the main xid,\n> whenever a tuple is touched.\n\nSorry, I don't follow this. As far as I know, we will set the subxid on\nthe tuple so we can independently mark the xact as aborted without\nrevisiting all the tuples. Once it is committed/rolled back, I see no\nneed to lookup the parent, and in fact we could clear the clog parent\nxid offset so there is no way to access the parent anymore.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Nov 2002 15:57:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Maybe. The whole point of my approach is: If we can limit the active\n> range of transactions requiring parent xid lookups to a small fraction\n> of the range needing pg_clog lookups, then it makes sense to store\n> status bits and parent xids in different files. Otherwise keeping\n> them together in one file clearly is faster.\n\nHmm ... I'm not sure that that's possible.\n\nBut wait a moment. The child xid is by definition always greater than\n(newer than) its parent. So if we consult pg_clog and find the\ntransaction marked committed, *and* the xid is before the window of XIDs\nin our snapshot, then even if it's not a top-level xid, the parent must\nbe before our window too. Therefore we can conclude the transaction is\nvisible in our snapshot. So indeed there is a good-size range of xids\nfor which we'll never need to chase the parent link: everything before\nthe RecentGlobalXmin computed by GetSnapshotData. (We do have to set\nsubtransactions to committed during parent commit to make this true;\nwe can't update them lazily. But I think that's okay.)\n\nMaybe you're right --- we could probably truncate pg_subtrans faster\nthan pg_clog, and we could definitely expect to keep less of it in\nmemory than pg_clog.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Nov 2002 16:01:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: nested transactions " }, { "msg_contents": "\nI am concerned this is getting beyond my capabilities for 7.4 --- anyone\nwant to help?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > Maybe. 
The whole point of my approach is: If we can limit the active\n> > range of transactions requiring parent xid lookups to a small fraction\n> > of the range needing pg_clog lookups, then it makes sense to store\n> > status bits and parent xids in different files. Otherwise keeping\n> > them together in one file clearly is faster.\n> \n> Hmm ... I'm not sure that that's possible.\n> \n> But wait a moment. The child xid is by definition always greater than\n> (newer than) its parent. So if we consult pg_clog and find the\n> transaction marked committed, *and* the xid is before the window of XIDs\n> in our snapshot, then even if it's not a top-level xid, the parent must\n> be before our window too. Therefore we can conclude the transaction is\n> visible in our snapshot. So indeed there is a good-size range of xids\n> for which we'll never need to chase the parent link: everything before\n> the RecentGlobalXmin computed by GetSnapshotData. (We do have to set\n> subtransactions to committed during parent commit to make this true;\n> we can't update them lazily. But I think that's okay.)\n> \n> Maybe you're right --- we could probably truncate pg_subtrans faster\n> than pg_clog, and we could definitely expect to keep less of it in\n> memory than pg_clog.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Nov 2002 16:04:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "[Sorry for the delay. I'm a bit busy these days.]\n\nOn Fri, 29 Nov 2002 15:57:17 -0500 (EST), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>> BTW, I think this *forces* us to replace the sub xid with the\n>> respective main xid in a tuple header, when we set\n>> XMIN/MAX_IS_COMMITTED. Otherwise we'd have to look for the main xid,\n>> whenever a tuple is touched.\n>\n>Sorry, I don't follow this.\n\nProbably because we've mixed up several proposals. I'll try to pick\nthem apart below.\n\n>As far as I know, we will set the subxid on\n>the tuple so we can independently mark the xact as aborted without\n>revisiting all the tuples.\n\nYes.\n\n>Once it is committed/rolled back,\n\nThese cases are completely different. If a (main or sub-) transaction\nis rolled back, its effects are invisible to all transactions; this\nstatus is immediately effective and final. OTOH a subtransaction\ncommit is only tentative. It becomes effective when the main\ntransaction commits. (And the subtransaction's status turns to\naborted, when the main transaction aborts.)\n\n>I see no\n>need to lookup the parent, and in fact we could clear the clog parent\n>xid offset so there is no way to access the parent anymore.\n\nWhile a subtransaction is seen as \"tentatively committed\" other\ntransactions have to look up its parent to find out its effective\nstatus.\n\nProposal A was: Never show \"tentatively committed\" to outside\ntransactions. This would require neither any new flags in tuple\nheaders or in pg_clog nor a globally visible pg_subtrans structure.\nBut it only works, if we can commit a main transaction and all its\nsubtransactions atomically, which is only possible if we hold a long\nlasting lock. 
Did we agree that we don't want this?\n\nAll other solutions require a parent xid lookup at least during the\ntime span while a subtransaction is marked \"tentatively committed\" and\nnot yet known to be \"finally committed\". IIRC we have three proposals\nhow the \"tentatively committed\" status can be shown to outside\ntransactions:\n\n(B) Two flags in the tuple header (one for xmin, one for xmax) telling\nus \"the xid is a subtransaction\". I don't like this very much,\nbecause it's not in Normal Form: \"is a subtransaction\" is NOT a\nproperty of a tuple. OTOH we can declare it a denormalization for\nperformance reasons (we don't have to look up the parend xid, if the\nflag is not set.)\n\n(C) Explicitly use the fourth possible status in pg_clog for\n\"tentatively committed\". (Performance hack: replace with \"finally\ncommitted\" as soon as the xid is visible to all active transactions.)\n\n(D) Only one kind of \"committed\" in pg_clog; always look for a parent\nin pg_subtrans; for performance reasons integrate pg_subtrans into\npc_clog.\n\nTom brought up the snapshot visibility problem which applies to B, C,\nand D.\n\nWhile each of these proposals can be implemented (relatively) straight\nforward, the Black Art is: When and how can we modify the stored\nstate to avoid repeated parent xid lookups? We'll find out ...\n\nServus\n Manfred\n", "msg_date": "Wed, 04 Dec 2002 17:54:30 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Manfred Koizar wrote:\n> [Sorry for the delay. I'm a bit busy these days.]\n> \n> On Fri, 29 Nov 2002 15:57:17 -0500 (EST), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >> BTW, I think this *forces* us to replace the sub xid with the\n> >> respective main xid in a tuple header, when we set\n> >> XMIN/MAX_IS_COMMITTED. 
Otherwise we'd have to look for the main xid,\n> >> whenever a tuple is touched.\n> >\n> >Sorry, I don't follow this.\n> \n> Probably because we've mixed up several proposals. I'll try to pick\n> them apart below.\n\nOK.\n\n> These cases are completely different. If a (main or sub-) transaction\n> is rolled back, its effects are invisible to all transactions; this\n> status is immediately effective and final. OTOH a subtransaction\n> commit is only tentative. It becomes effective when the main\n> transaction commits. (And the subtransaction's status turns to\n> aborted, when the main transaction aborts.)\n\nRight.\n\n> >I see no\n> >need to lookup the parent, and in fact we could clear the clog parent\n> >xid offset so there is no way to access the parent anymore.\n> \n> While a subtransaction is seen as \"tentatively committed\" other\n> transactions have to look up its parent to find out its effective\n> status.\n\nRight. And we need those lookups to parent from the start of the\nsubtransaction until the commit/abort of the main transaction. If it\naborts, we can shorten that, but if they are all commit, we have to\nwait, and they have to be visible because other backends have to know if\nthe \"Running\" status of the transaction is still associated with an\nactive transaction, and we can only stamp one xid on a backend because\nshared memory is limited.\n\n> Proposal A was: Never show \"tentatively committed\" to outside\n> transactions. This would require neither any new flags in tuple\n> headers or in pg_clog nor a globally visible pg_subtrans structure.\n> But it only works, if we can commit a main transaction and all its\n> subtransactions atomically, which is only possible if we hold a long\n> lasting lock. 
Did we agree that we don't want this?\n\nAgain, we still need the lookup to main transaction for other backend\nlookups, so this idea is dead, and we don't want locking.\n\n> All other solutions require a parent xid lookup at least during the\n> time span while a subtransaction is marked \"tentatively committed\" and\n> not yet known to be \"finally committed\". IIRC we have three proposals\n> how the \"tentatively committed\" status can be shown to outside\n> transactions:\n\nYes.\n\n> (B) Two flags in the tuple header (one for xmin, one for xmax) telling\n> us \"the xid is a subtransaction\". I don't like this very much,\n> because it's not in Normal Form: \"is a subtransaction\" is NOT a\n> property of a tuple. OTOH we can declare it a denormalization for\n> performance reasons (we don't have to look up the parend xid, if the\n> flag is not set.)\n\nI see no reason to do that when we have that 4th state available in\npg_clog. They are going to lookup the xid status anyway, so why not\ncheck that \"is subtransaction\" status at that point too. Of course, we\ncan't mark \"IS COMMITTED\" on the tuple until the main transaction\ncommits, but that is simple logic.\n\n\n> (C) Explicitly use the fourth possible status in pg_clog for\n> \"tentatively committed\". (Performance hack: replace with \"finally\n> committed\" as soon as the xid is visible to all active transactions.)\n\nYes, I think this is the only way to go. 
If we need that 4th state\nlater, we can refactor the code, but for our purposes now, it is useful.\n\n> (D) Only one kind of \"committed\" in pg_clog; always look for a parent\n> in pg_subtrans; for performance reasons integrate pg_subtrans into\n> pc_clog.\n\nSeems that 4th state makes this an easy optimization, causing zero\noverhead for backends that _don't_ use subtransactions, except for\nbackends looking up the status of other backends with subtransactions.\n\n> Tom brought up the snapshot visibility problem which applies to B, C,\n> and D.\n> \n> While each of these proposals can be implemented (relatively) straight\n> forward, the Black Art is: When and how can we modify the stored\n> state to avoid repeated parent xid lookups? We'll find out ...\n\nI think there is now general agreement that we want a separate table to\nstore parent xids for subtransactions that is only looked up when that\n4th clog state is set, and once the main transaction commits, all those\n4th state clog entries can be cleaned up to simple commit. We can also\nexpire the pg_subtrans table for any xids less than the lowest running\nbackend xid, which is pretty significant optimization.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Dec 2002 13:49:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Bruce Momjian wrote:\n> I am going to work on nested transactions for 7.4.\n> \n> My goal is to first implement nested transactions:\n> \n> \tBEGIN;\n> \tSELECT ...\n> \tBEGIN;\n> \tUPDATE;\n> \tCOMMIT;\n> \tDELETE;\n> \tCOMMIT;\n> \n> and later savepoints (Oracle):\n> \n> \n> \tBEGIN;\n> \tSELECT ...\n> \tSAVEPOINT t1;\n> \tUPDATE;\n> \tSAVEPOINT t2;\n> \tDELETE;\n> \tROLLBACK TO SAVEPOINT t2;\n> \tCOMMIT;\n> \n> I assume people want both.\n\nYep.\n\nMy question is: how do you see cursors working with nested\ntransactions?\n\nRight now you can't do cursors outside of transactions.\nSubtransactions would complicate things a bit:\n\nBEGIN;\nDECLARE CURSOR x ...\nBEGIN\n(is cursor x visible here? What are the implications of using it if\nit is?)\n...\nCOMMIT;\n...\nCOMMIT;\n\n\nWould we only allow cursors within the innermost transactions? If we\nallow them anywhere else, why retain the requirement that they be used\nwithin transactions at all?\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Mon, 9 Dec 2002 19:20:29 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: nested transactions" }, { "msg_contents": "Kevin Brown wrote:\n> My question is: how do you see cursors working with nested\n> transactions?\n> \n> Right now you can't do cursors outside of transactions.\n> Subtransactions would complicate things a bit:\n> \n> BEGIN;\n> DECLARE CURSOR x ...\n> BEGIN\n> (is cursor x visible here? What are the implications of using it if\n> it is?)\n> ...\n> COMMIT;\n> ...\n> COMMIT;\n> \n> \n> Would we only allow cursors within the innermost transactions? 
If we\n> allow them anywhere else, why retain the requirement that they be used\n> within transactions at all?\n\nI talked to Tom and he feels it will be too hard to rollback a\nsubtransaction that affects cursors so we will disable use of cursors in\nsubtransactions, at least in the first implementation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 9 Dec 2002 22:23:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: nested transactions" } ]
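A note on the mechanism this thread converges on: a fourth "subcommitted" state in pg_clog plus a side table (pg_subtrans) mapping a subtransaction's xid to its parent. The lookup logic can be pictured with a toy model. This is a hypothetical Python sketch for illustration only, not PostgreSQL code; the constant names, the dict-based clog/subtrans maps, and `resolve_status` are all invented here:

```python
# Toy model: clog maps xid -> status, subtrans maps subtransaction xid
# -> parent xid. Only the extra "subcommitted" state triggers a parent
# lookup, so backends that never use subtransactions pay no overhead.
IN_PROGRESS, COMMITTED, ABORTED, SUBCOMMITTED = range(4)

def resolve_status(xid, clog, subtrans):
    """Follow parent links until a final (non-subcommitted) state appears."""
    status = clog[xid]
    while status == SUBCOMMITTED:
        xid = subtrans[xid]      # pg_subtrans-style parent lookup
        status = clog[xid]
    return status
```

Once the top-level transaction commits, every subcommitted entry for its children can be rewritten to a plain commit, after which the loop body never runs; entries below the oldest running xid can then be expired from the side table, which is the cleanup step described above.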
[ { "msg_contents": "Hi there!\n\nPatch is posted to pgsql-patches. docs inside.\nSQL 99 version will be later.\n\nregards,\n\n---\n.evgen\n\n", "msg_date": "Fri, 22 Nov 2002 14:57:40 +0400 (SAMT)", "msg_from": "Evgen Potemkin <evgent@ns.terminal.ru>", "msg_from_op": true, "msg_subject": "Hirarchical queries a la Oracle. Patch." }, { "msg_contents": "Evgen Potemkin kirjutas R, 22.11.2002 kell 15:57:\n> Hi there!\n> \n> Patch is posted to pgsql-patches. docs inside.\n\nIt would of course be nice to support both Oracle and ISO/ANSI syntaxes,\nbut I'm afraid that the (+) may clash with our overloadable operators\nfeature.\n\n> SQL 99 version will be later.\n\nI attach a railroad diagram of SQL99 \"WITH RECURSIVE\" and a diff against\nmid-summer gram.y which implements half of SQL99 _syntax_ (just the WITH\n{RECURSIVE} part, SEARCH (tree search order order) and CYCLE (recursion\ncontrol) clauses are missing).\n\nWITH clause seems to be quite useful in its own right as well, not just\nfor recursive queries, so I guess that someone with good knwledge of\npostgresql internals could get plain WITH working quite fast -\n\nThe main difference between subqueries defined in WITH clause and in\nFROM clause is that while subqueries in FROM don't see each other in\ntheir namespaces, the ones in WITH either see all preceeding ones (plain\nwith) or just all in WITH clause (WITH RECURSIVE)\n\n--------------\nHannu", "msg_date": "27 Nov 2002 01:59:53 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Hirarchical queries a la Oracle. Patch." 
}, { "msg_contents": "thanks, it's VERY helpful.\nunderstanding SQL99 draft is a bit more difficult than i thought :)\n\nregards,\n\n---\n.evgen\n\nOn 27 Nov 2002, Hannu Krosing wrote:\n\n> I attach a railroad diagram of SQL99 \"WITH RECURSIVE\" and a diff against\n> mid-summer gram.y which implements half of SQL99 _syntax_ (just the WITH\n> {RECURSIVE} part, SEARCH (tree search order order) and CYCLE (recursion\n> control) clauses are missing).\n>\n\n", "msg_date": "Thu, 28 Nov 2002 21:34:14 +0400 (SAMT)", "msg_from": "Evgen Potemkin <evgent@ns.terminal.ru>", "msg_from_op": true, "msg_subject": "Re: Hirarchical queries a la Oracle. Patch." }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-11-28 at 17:34, Evgen Potemkin wrote:\n>> understanding SQL99 draft is a bit more difficult than i thought :)\n\n> You might also try to get DB2 installed somewhere (IIRC IBM gives out \n> limited time developer copies).\n\nEven without DB2 installed, you can read the documentation for it on the\nweb. There's quite a lot of useful info in IBM's docs. For example\nhttp://nscpcw.physics.upenn.edu/db2_docs/db2s0/withsel.htm\nand the example starting at\nhttp://nscpcw.physics.upenn.edu/db2_docs/db2s0/db2s0446.htm\n\n(The docs are probably also readable directly from IBM, but this is the\nfirst copy I found by googling...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Nov 2002 18:06:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hirarchical queries a la Oracle. Patch. 
" }, { "msg_contents": "On Thu, 2002-11-28 at 17:34, Evgen Potemkin wrote:\n> thanks, it's VERY helpful.\n> understanding SQL99 draft is a bit more difficult than i thought :)\n\nYou might also try to get DB2 installed somewhere (IIRC IBM gives out \nlimited time developer copies).\n\nIt implements at least the basic recursive query (without requiring the word RECURSIVE :)\n\n> \n> --------------\n> Hannu\n", "msg_date": "28 Nov 2002 23:45:27 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Hirarchical queries a la Oracle. Patch." } ]
[ { "msg_contents": "Hi, all\n\nWhile testing RC1, I found CONNECTBY had another problem. \nIt seems to me that SCHEMA can't be used in CONNECTBY.\nIs it just in time for 7.3 to be added to TODO items ?\n\n\n\nCREATE TABLE test (id int4, parent_id int4, t text);\nINSERT INTO test VALUES(11, null, 'aaa');\nINSERT INTO test VALUES(101, 11, 'bbb');\nINSERT INTO test VALUES(110, 11, 'ccc');\nINSERT INTO test VALUES(111, 110, 'ddd');\nSELECT *\n FROM connectby('test', 'id', 'parent_id', '11', 0, '.')\n as t(id int4, parent_id int4, level int, branch text);\n\n id | parent_id | level | branch \n-----+-----------+-------+------------\n 11 | | 0 | 11\n 101 | 11 | 1 | 11.101\n 110 | 11 | 1 | 11.110\n 111 | 110 | 2 | 11.110.111\n(4 rows)\n\n\n\nCREATE SCHEMA ms;\nCREATE TABLE ms.test (id int4, parent_id int4, t text);\nINSERT INTO ms.test VALUES(11, null, 'aaa');\nINSERT INTO ms.test VALUES(101, 11, 'bbb');\nINSERT INTO ms.test VALUES(110, 11, 'ccc');\nINSERT INTO ms.test VALUES(111, 110, 'ddd');\nSELECT *\n FROM connectby('ms.test', 'id', 'parent_id', '101', 0, '.')\n as t(id int4, parent_id int4, level int, branch text);\n\nERROR: Relation \"ms.test\" does not exist\n\n\n\nRegards,\nMasaru Sugawara\n\n\n\n", "msg_date": "Sat, 23 Nov 2002 00:50:50 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "connectby with schema" }, { "msg_contents": "Masaru Sugawara wrote:\n> CREATE SCHEMA ms;\n> CREATE TABLE ms.test (id int4, parent_id int4, t text);\n> INSERT INTO ms.test VALUES(11, null, 'aaa');\n> INSERT INTO ms.test VALUES(101, 11, 'bbb');\n> INSERT INTO ms.test VALUES(110, 11, 'ccc');\n> INSERT INTO ms.test VALUES(111, 110, 'ddd');\n> SELECT *\n> FROM connectby('ms.test', 'id', 'parent_id', '101', 0, '.')\n> as t(id int4, parent_id int4, level int, branch text);\n> \n> ERROR: Relation \"ms.test\" does not exist\n> \n\nI've tracked this down to the fact that connectby does a quote_ident on the \nprovided relname, and in 
quote_ident, (quote_ident_required(t)) ends up being \ntrue. The problem will occur even with a simple query:\n\ntest=# SELECT id, parent_id FROM ms.test WHERE parent_id = '101' AND id IS NOT \nNULL;\n id | parent_id\n----+-----------\n(0 rows)\ntest=# SELECT id, parent_id FROM \"ms.test\" WHERE parent_id = '101' AND id IS \nNOT NULL;\nERROR: Relation \"ms.test\" does not exist\n\nBut this is not the behavior for unqualified table names:\n\ntest=# select * from foo;\n f1\n----\n 1\n(1 row)\ntest=# select * from \"foo\";\n f1\n----\n 1\n(1 row)\n\nIs quote_ident_required incorrectly dealing with schemas?\n\nThanks,\n\nJoe\n\n", "msg_date": "Fri, 22 Nov 2002 10:19:40 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "quote_ident and schemas (was Re: connectby with schema)" }, { "msg_contents": "Joe Conway wrote:\n> \n> Is quote_ident_required incorrectly dealing with schemas?\n> \n\nSorry to reply to myself, but another related question; shouldn't the \nfollowing produce \"Ms\".\"Test\"?\n\ntest=# select quote_ident('Ms.Test');\n quote_ident\n-------------\n \"Ms.Test\"\n(1 row)\n\nJoe\n\n", "msg_date": "Fri, 22 Nov 2002 10:33:50 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: connectby with schema)" }, { "msg_contents": "On Fri, 22 Nov 2002, Joe Conway wrote:\n\n> Masaru Sugawara wrote:\n> > CREATE SCHEMA ms;\n> > CREATE TABLE ms.test (id int4, parent_id int4, t text);\n> > INSERT INTO ms.test VALUES(11, null, 'aaa');\n> > INSERT INTO ms.test VALUES(101, 11, 'bbb');\n> > INSERT INTO ms.test VALUES(110, 11, 'ccc');\n> > INSERT INTO ms.test VALUES(111, 110, 'ddd');\n> > SELECT *\n> > FROM connectby('ms.test', 'id', 'parent_id', '101', 0, '.')\n> > as t(id int4, parent_id int4, level int, branch text);\n> >\n> > ERROR: Relation \"ms.test\" does not exist\n> >\n>\n> I've tracked this down to the fact that connectby does a quote_ident on the\n> 
provided relname, and in quote_ident, (quote_ident_required(t)) ends up being\n> true. The problem will occur even with a simple query:\n>\n> test=# SELECT id, parent_id FROM ms.test WHERE parent_id = '101' AND id IS NOT\n> NULL;\n> id | parent_id\n> ----+-----------\n> (0 rows)\n> test=# SELECT id, parent_id FROM \"ms.test\" WHERE parent_id = '101' AND id IS\n> NOT NULL;\n> ERROR: Relation \"ms.test\" does not exist\n\nI think the query result here is correct behavior since in the second the\nperiod shouldn't be a separator for schema and table but instead be part\nof the identifier.\n\nDropping some bits that probably aren't important and merging some states\n\n<table name> -> <qualified name>\n<qualified name> -> [<schema name> <period>] <identifier>\n<identifer> -> <regular identifier> |\n <delimited identifier>\n<delimited identifier> -> <double quote> <delimited identifier body>\n\t\t\t <double quote>\n\nI'd think that they'd parse like:\n ms.test -> <identifier> . <identifier>\n\"ms.test\" -> <delimited identifier>\n\nThe first would match <schema name> <period> <identifier>, but the second\nwould not.\n\n\n\n", "msg_date": "Fri, 22 Nov 2002 11:10:24 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: connectby with schema)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Joe Conway wrote:\n>> Is quote_ident_required incorrectly dealing with schemas?\n\n> Sorry to reply to myself, but another related question; shouldn't the \n> following produce \"Ms\".\"Test\"?\n\n> test=# select quote_ident('Ms.Test');\n> quote_ident\n> -------------\n> \"Ms.Test\"\n> (1 row)\n\nNo, it should not. If it did, it would fail to cope with tablenames\ncontaining dots.\n\nSince connectby takes a string parameter (correct?) 
for the table name,\nmy advice would be to have it not do quote_ident, but instead expect the\nuser to include double quotes in the string value if dealing with\nmixed-case names. Compare the behavior of nextval() for example:\n\nregression=# select nextval('Foo.Bar');\nERROR: Namespace \"foo\" does not exist\nregression=# select nextval('\"Foo\".\"Bar\"');\nERROR: Namespace \"Foo\" does not exist\nregression=# select nextval('\"Foo.Bar\"');\nERROR: Relation \"Foo.Bar\" does not exist\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 14:15:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: connectby with schema) " }, { "msg_contents": "Tom Lane wrote:\n> Since connectby takes a string parameter (correct?) for the table name,\n> my advice would be to have it not do quote_ident, but instead expect the\n> user to include double quotes in the string value if dealing with\n> mixed-case names. Compare the behavior of nextval() for example:\n> \n> regression=# select nextval('Foo.Bar');\n> ERROR: Namespace \"foo\" does not exist\n> regression=# select nextval('\"Foo\".\"Bar\"');\n> ERROR: Namespace \"Foo\" does not exist\n> regression=# select nextval('\"Foo.Bar\"');\n> ERROR: Relation \"Foo.Bar\" does not exist\n> \n\nOK. Attached patch removes calls within the function to quote_ident, requiring \nthe user to appropriately quote their own identifiers. I also tweaked the \nregression test to deal with \"value\" becoming a reserved word.\n\nIf it's not too late, I'd like this to get into 7.3, but in any case, please \napply to HEAD.\n\nThanks,\n\nJoe\n\np.s. There are similar issues in dblink, but they appear a bit more difficult \nto address. 
I'll attempt to get them resolved this weekend, again in hopes to \nget them applied before 7.3 is released.", "msg_date": "Fri, 22 Nov 2002 15:21:48 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: [HACKERS] connectby with schema)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> OK. Attached patch removes calls within the function to quote_ident, requiring \n> the user to appropriately quote their own identifiers. I also tweaked the \n> regression test to deal with \"value\" becoming a reserved word.\n\nApplied. I also threw in a quick note in the README to mention the need\nfor quoting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 20:55:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: [HACKERS] connectby with schema)" }, { "msg_contents": "Joe Conway wrote:\n> p.s. There are similar issues in dblink, but they appear a bit more \n> difficult to address. I'll attempt to get them resolved this weekend, \n> again in hopes to get them applied before 7.3 is released.\n> \n\nAttached patch removes most (hopefully just the appropriate ones) calls in \ndblink to quote_ident, requiring the user to quote their own identifiers. 
I \n> also added to the regression test a case for a quoted, schema qualified table\n> name.\n\n> If it's not too late, I'd like this to get into 7.3, but in any case,\n> please apply to HEAD.\n\nApplied --- in 7.3 also, since Marc hasn't rolled RC2 yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 14:02:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: quote_ident and schemas (was Re: [HACKERS] connectby with schema)" }, { "msg_contents": "On Fri, 22 Nov 2002 15:21:48 -0800\nJoe Conway <mail@joeconway.com> wrote:\n\n> OK. Attached patch removes calls within the function to quote_ident, requiring \n> the user to appropriately quote their own identifiers. I also tweaked the \n> regression test to deal with \"value\" becoming a reserved word.\n> \n> If it's not too late, I'd like this to get into 7.3, but in any case, please \n> apply to HEAD.\n> \n\nThank you for your quick job.\n\n\nRegards,\nMasaru Sugawara\n\n-------------------------------------\nCREATE SCHEMA ms;\nCREATE TABLE ms.test (id int4, parent_id int4, t text);\nINSERT INTO ms.test VALUES(11, null, 'aaa');\nINSERT INTO ms.test VALUES(101, 11, 'bbb');\nINSERT INTO ms.test VALUES(110, 11, 'ccc');\nINSERT INTO ms.test VALUES(111, 110, 'ddd');\nSELECT *\n FROM connectby('ms.test', 'id', 'parent_id', '11', 0, '.')\n as t(id int, parent_id int, level int, branch text);\n\n id | parent_id | level | branch \n-----+-----------+-------+------------\n 11 | | 0 | 11\n 101 | 11 | 1 | 11.101\n 110 | 11 | 1 | 11.110\n 111 | 110 | 2 | 11.110.111\n(4 rows)\n\n\n\n------------------------------------\nCREATE SCHEMA \"MS\";\ndrop table \"MS\".\"Test\";\nCREATE TABLE \"MS\".\"Test\" (id int4, parent_id int4, t text);\nINSERT INTO \"MS\".\"Test\" VALUES(22, null, 'aaa');\nINSERT INTO \"MS\".\"Test\" VALUES(202, 22, 'bbb');\nINSERT INTO \"MS\".\"Test\" VALUES(220, 22, 'ccc');\nINSERT INTO \"MS\".\"Test\" VALUES(222, 220, 'ddd');\nSELECT *\n FROM 
connectby('\"MS\".\"Test\"', 'id', 'parent_id', '22', 0, '.')\n as t(id int, parent_id int, level int, branch text);\n\n\n id | parent_id | level | branch \n-----+-----------+-------+------------\n 22 | | 0 | 22\n 202 | 22 | 1 | 22.202\n 220 | 22 | 1 | 22.220\n 222 | 220 | 2 | 22.220.222\n(4 rows)\n\n\n\n\n\n", "msg_date": "Sun, 24 Nov 2002 13:26:32 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Re: quote_ident and schemas (was Re: connectby with schema)" } ]
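The convention this thread settles on, with connectby and dblink no longer quoting identifiers themselves, means a caller writes 'ms.test' for a lower-case schema-qualified name and '"Ms"."Test"' to preserve case, while a dot inside double quotes stays part of the identifier. That splitting rule can be sketched as follows. This is a hypothetical Python illustration, not the backend's actual lexer; the function name is invented, and case folding is simplified to plain lowercasing:

```python
# Toy model of splitting a possibly schema-qualified name the way the
# thread describes: unquoted parts are case-folded, a dot outside
# double quotes separates schema from table, a doubled quote inside a
# quoted part is an escaped quote character.
def split_qualified_name(name):
    parts, buf, in_quotes = [], [], False
    i = 0
    while i < len(name):
        c = name[i]
        if c == '"':
            if in_quotes and i + 1 < len(name) and name[i + 1] == '"':
                buf.append('"')          # escaped embedded quote
                i += 1
            else:
                in_quotes = not in_quotes
        elif c == '.' and not in_quotes:
            parts.append(''.join(buf))   # separator only outside quotes
            buf = []
        else:
            buf.append(c if in_quotes else c.lower())
        i += 1
    parts.append(''.join(buf))
    return parts
```

Under this rule '"Ms.Test"' yields a single identifier containing a dot, which is exactly why the earlier quote_ident-based version could not find the schema-qualified table.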
[ { "msg_contents": "Hi everyone,\n\nWas just reading an article regarding Sun technologies on TheRegister:\n\nhttp://www.theregister.co.uk/content/53/28259.html\n\n*******\n\nThe real problem with databases is administrative, he argued, where the\nDBA must do index rebuilds.\n\n\"Clustra had eliminated that problem because it was doing constant\nindexing. So the GUI has gone, along with the Rebuild button.\"\n\n*******\n\nIs \"Constant indexing\" something that sounds interesting for us to look\nat?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 23 Nov 2002 07:02:44 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Interesting thought from an article about Sun technologies" } ]
[ { "msg_contents": "I had read on one of the newsgroups that there is a planned native port to\nthe win32 platform, is this true? I read most of the win32 thread off of\nthe dev site and it was not clear if this was true.\n\nIn either case, I would like to advocate such a port to be done, and soon,\nnot for any altruistic reasons, but simply on behalf of myself (a windows\napplications developer) and the many others who are like me.\n\nI personally believe that Postgres has a great deal of potention in the\napplications market, with the database server packaged along with the\napplication. There is a great deal of need for this for medium to high end\nwindows applications, because there as of yet no existing Microsoft package\nthat can handle it.\n\nPostgres is ideally suited for this need because of its rich server side\nprogramming interfaces, liberal licensing, and high performance. Mysql,\ndespite their sucking up to the windows crowd, fails on all three of those\ncounts. However they have realized the need for a database embedded\napplication by allowing the mysql server to be linked directly with a\nwindows app (at least, on a technical level), and have talked about\nproviding a single user database .dll.\n\nI believe that mysql is not well suited for these types of applications\nthough for stability and performance reasons. For all the talk of speed, I\nthink postgres is the fastest database ever written for pc hardware, with\nthe one possible exception of Microsoft Foxpro (note: not written by\nMicrosoft). Sql server costs to much to ship with an app, and, quite\nfrankly, is rather slow.\n\nPostgres could easily springboard into a very strong niche market in\nembedded applicaions. From there, with increased awareness and developer\nsupport on the windows side, it could start pecking at more traditional data\nservices currently dominated by sql server, and (yuck!) 
access, and their\nevil fraternal twin, visual basic.\n\nSide note: good strategic positioning in this regard would be an XML shell\nfor postgres (pgxml) and a data provider for .net.\n\nThat's my .02$. Many great thanks to the dev team. Please don't let postgres\ncontinue to be the software world's best kept secret.\n\nMerlin\n\n\n\n", "msg_date": "Fri, 22 Nov 2002 16:24:42 -0500", "msg_from": "\"Merlin Moncure\" <merlin@rcsonline.com>", "msg_from_op": true, "msg_subject": "PostGres and WIN32, a plea!" }, { "msg_contents": "> Side note: good strategic positioning in this regard would be an XML shell\n> for postgres (pgxml) and a data provider for .net.\n\nJust so you know, there is already a .NET provider for PostgreSQL. You can\nfind it on:\n\nhttp://gborg.postgresql.org/\n\nChris\n\n", "msg_date": "Tue, 26 Nov 2002 10:51:04 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: PostGres and WIN32, a plea!" } ]
[ { "msg_contents": "I see we just recently made the word \"value\" reserved:\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/parser/keywords.c.diff?r1=1.130&r2=1.131\n\nI noticed it because it breaks the contrib/tablefunc regression test. ISTM \nlike this will break quite a few applications.\n\nJoe\n\n\n\n", "msg_date": "Fri, 22 Nov 2002 14:34:19 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "\"value\" a reserved word" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I see we just recently made the word \"value\" reserved:\n> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/parser/keywords.c.diff?r1=1.130&r2=1.131\n> I noticed it because it breaks the contrib/tablefunc regression test. ISTM \n> like this will break quite a few applications.\n\nI'm not thrilled about it either. I wonder whether we could hack up\nsomething so that domain check constraints parse VALUE as a variable\nname instead of a reserved keyword? Without some such technique I\nthink we're kinda stuck, because the spec is perfectly clear about\nhow to write domain check constraints.\n\n(And, to be fair, SQL92 is also perfectly clear that VALUE is a reserved\nword; people griping about this won't have a lot of ground to stand on.\nBut I agree it'd be worth trying to find an alternative implementation\nthat doesn't reserve the keyword.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 17:43:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"value\" a reserved word " }, { "msg_contents": "Tom Lane kirjutas L, 23.11.2002 kell 03:43:\n> Joe Conway <mail@joeconway.com> writes:\n> > I see we just recently made the word \"value\" reserved:\n> > http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/parser/keywords.c.diff?r1=1.130&r2=1.131\n> > I noticed it because it breaks the contrib/tablefunc regression test. 
ISTM \n> > like this will break quite a few applications.\n> \n> I'm not thrilled about it either. I wonder whether we could hack up\n> something so that domain check constraints parse VALUE as a variable\n> name instead of a reserved keyword? Without some such technique I\n> think we're kinda stuck, because the spec is perfectly clear about\n> how to write domain check constraints.\n> \n> (And, to be fair, SQL92 is also perfectly clear that VALUE is a reserved\n> word; people griping about this won't have a lot of ground to stand on.\n> But I agree it'd be worth trying to find an alternative implementation\n> that doesn't reserve the keyword.)\n\nI've been playing around just a little in gram.y and I think that we are\npaying too high price for keeping some keywords \"somewhat reserved\".\n\nIn light of trying to become fully ISO/ANSI compliant (or even savvy ;)\ncould we not make a jump at say 7.4 to having the same set of reserved\nkeywords as SQL92/SQL99 and be done with it?\n\nThere is an Estonian proverb about futility of \"cutting off a dogs tail\nin a small piece at a time\" which seems to apply well to postgreSQL\nsyntax.\n\n---------------\nHannu\n\n\n", "msg_date": "23 Nov 2002 03:56:40 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: \"value\" a reserved word" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> In light of trying to become fully ISO/ANSI compliant (or even savvy ;)\n> could we not make a jump at say 7.4 to having the same set of reserved\n> keywords as SQL92/SQL99 and be done with it?\n\nI disagree ... 
especially for SQL99 keywords that we're not even using.\n\nAlso, SQL99 keywords that are actually only function names would be\noutright more difficult to reserve than not to reserve...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 18:05:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"value\" a reserved word " }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I see we just recently made the word \"value\" reserved:\n\nFYI, it's not reserved any more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Dec 2002 15:37:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"value\" a reserved word " } ]
[ { "msg_contents": "I'm getting lots of regression failures:\n\n========================\n 25 of 89 tests failed.\n========================\n\nall pretty much looking like:\n\n SELECT '' AS one, o.* FROM OID_TBL o WHERE o.f1 = 1234;\n! ERROR: Relation \"pg_constraint_contypid_index\" does not exist\n SELECT '' AS five, o.* FROM OID_TBL o WHERE o.f1 <> '1234';\n\nI guess a recent change requires an initdb but no change was forced?\n(...an initdb does in fact fix the problem -- now only the \"misc\" test fails)\n\nJoe\n\n", "msg_date": "Fri, 22 Nov 2002 22:04:14 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "regression failures" }, { "msg_contents": "On Sat, 2002-11-23 at 01:04, Joe Conway wrote:\n> I'm getting lots of regression failures:\n> \n> ========================\n> 25 of 89 tests failed.\n> ========================\n> \n> all pretty much looking like:\n> \n> SELECT '' AS one, o.* FROM OID_TBL o WHERE o.f1 = 1234;\n> ! ERROR: Relation \"pg_constraint_contypid_index\" does not exist\n> SELECT '' AS five, o.* FROM OID_TBL o WHERE o.f1 <> '1234';\n> \n> I guess a recent change requires an initdb but no change was forced?\n> (...an initdb does in fact fix the problem -- now only the \"misc\" test fails)\n\nYou're right. I added an index to pg_constraint and had completely\nforgotten about it.\n\nSorry. Could someone bump CATALOG_VERSION_NO?\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "23 Nov 2002 09:09:35 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: regression failures" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I guess a recent change requires an initdb but no change was forced?\n\nThere should have been a catversion bump for the domain-constraints\npatch, but there wasn't. 
When was your previous CVS pull?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 12:13:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression failures " }, { "msg_contents": "Tom Lane wrote:\n> There should have been a catversion bump for the domain-constraints\n> patch, but there wasn't. When was your previous CVS pull?\n> \n\nSeveral days ago at least. I use cvsup every few days to sync up. Is there a \nway I can tell exactly when?\n\nJoe\n\n", "msg_date": "Sat, 23 Nov 2002 09:28:29 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: regression failures" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> There should have been a catversion bump for the domain-constraints\n>> patch, but there wasn't. When was your previous CVS pull?\n\n> Several days ago at least.\n\nThen you probably got bit by that patch --- it was applied a couple days\nago.\n\n> I use cvsup every few days to sync up. Is there a \n> way I can tell exactly when?\n\nDunno, I don't use cvsup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 12:36:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression failures " }, { "msg_contents": "\nOK, regression tests adjusted and cat version updated.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > Tom Lane wrote:\n> >> There should have been a catversion bump for the domain-constraints\n> >> patch, but there wasn't. When was your previous CVS pull?\n> \n> > Several days ago at least.\n> \n> Then you probably got bit by that patch --- it was applied a couple days\n> ago.\n> \n> > I use cvsup every few days to sync up. 
Is there a \n> > way I can tell exactly when?\n> \n> Dunno, I don't use cvsup.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 23 Nov 2002 13:13:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression failures" } ]
[ { "msg_contents": "Hello!\n\nI hope that someone here could help.\n\nI'm using PostgreSQL7.1.3\n\nI have 3 tables in my DB: the tables are defined in the following way:\n\n\nCREATE TABLE category(\nid SERIAL NOT NULL PRIMARY KEY,\n// etc etc\n\n)\n;\n\nCREATE TABLE subcategory(\nid SERIAL NOT NULL PRIMARY KEY,\ncategoryid int CONSTRAINT subcategory__ref_category\n REFERENCES category (id)\n // etc etc\n)\n;\n\nCREATE TABLE entry(\nentryid SERIAL NOT NULL PRIMARY KEY,\nisapproved CHAR(1) NOT NULL DEFAULT 'n',\nsubcategoryid int CONSTRAINT entry__ref_subcategory\n REFERENCES subcategory (id)\n // atd\n,\n)\n;\n\n\nI have the following SQL query :\n\n \"SELECT * FROM entry where isapproved='y' AND subcategoryid IN (SELECT id\nFROM subcategory WHERE\ncategoryid='\"+catID+\"') ORDER BY subcategoryid DESC\";\n\n\nFor a given categoryid( catID), the query will return all entries in the\n\"entry\" table\nhaving a corresponding subcategoryid(s)[returned by the inned subquery].\n\nBut I want to return only a limited number of entries of each\nsubcategory..... let's say that I want to return at most 5 entries of each\nsubcategory type ( for instance if the inner subquery returns 3 results,\nthus I will be having in total at most 15 entries as relust)....\n\nHow can this be achieved?\n\nI'm aware of postgreSQL \"LIMIT\" and \"GROUP BY\" clause..... but so far, I'm\nnot able to put all this together...\n\nThanks in advance.\n\nArcadius.\n\n\n\n\n\n", "msg_date": "Sat, 23 Nov 2002 23:09:37 +0100", "msg_from": "\"Arcadius A.\" <ahouans@sh.cvut.cz>", "msg_from_op": true, "msg_subject": "SQL query help!" }, { "msg_contents": "Tell me what did you try with limit and group by.\nWhere's IN, why don't you use EXISTS instead. It runs much master !\n\nRegards,\nLuis Sousa\n\nArcadius A. 
wrote:\n\n>Hello!\n>\n>I hope that someone here could help.\n>\n>I'm using PostgreSQL 7.1.3\n>\n>I have 3 tables in my DB: the tables are defined in the following way:\n>\n>\n>CREATE TABLE category(\n>id SERIAL NOT NULL PRIMARY KEY,\n>-- etc etc\n>\n>)\n>;\n>\n>CREATE TABLE subcategory(\n>id SERIAL NOT NULL PRIMARY KEY,\n>categoryid int CONSTRAINT subcategory__ref_category\n> REFERENCES category (id)\n> -- etc etc\n>)\n>;\n>\n>CREATE TABLE entry(\n>entryid SERIAL NOT NULL PRIMARY KEY,\n>isapproved CHAR(1) NOT NULL DEFAULT 'n',\n>subcategoryid int CONSTRAINT entry__ref_subcategory\n> REFERENCES subcategory (id)\n> -- etc\n>)\n>;\n>\n>\n>I have the following SQL query:\n>\n> \"SELECT * FROM entry where isapproved='y' AND subcategoryid IN (SELECT id\n>FROM subcategory WHERE\n>categoryid='\"+catID+\"') ORDER BY subcategoryid DESC\";\n>\n>\n>For a given categoryid (catID), the query will return all entries in the\n>\"entry\" table\n>having a corresponding subcategoryid(s) [returned by the inner subquery].\n>\n>But I want to return only a limited number of entries of each\n>subcategory..... let's say that I want to return at most 5 entries of each\n>subcategory type (for instance, if the inner subquery returns 3 results,\n>I will have in total at most 15 entries as a result)....\n>\n>How can this be achieved?\n>\n>I'm aware of PostgreSQL's \"LIMIT\" and \"GROUP BY\" clauses..... but so far, I'm\n>not able to put all this together...\n>\n>Thanks in advance.\n>\n>Arcadius.\n>\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n>\n> \n>", "msg_date": "Wed, 27 Nov 2002 10:05:24 +0000", "msg_from": "Luis Sousa <llsousa@ualg.pt>", "msg_from_op": false, "msg_subject": "Re: SQL query help!"
}, { "msg_contents": "\nHi,\n\ni run 2 queries on 2 similar boxes (one running Linux 2.4.7, redhat 7.1\nand the other running FreeBSD 4.7-RELEASE-p2)\n\nThe 2 boxes run postgresql 7.2.3.\n\nI get some performance results that are not obvious (at least to me)\n\ni have one table named \"noon\" with 108095 rows.\n\nThe 2 queries are:\nq1: SELECT count(*) from noon;\nq2: SELECT * from noon;\n\nLinux q1\n========\ndynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=20508.19..20508.19 rows=1 width=0) (actual\ntime=338.17..338.17\nrows=1 loops=1)\n -> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=0) (actual\ntime=0.01..225.73 rows=108095 loops=1)\nTotal runtime: 338.25 msec\n\nLinux q2\n========\ndynacom=# EXPLAIN ANALYZE SELECT * from noon;\nNOTICE: QUERY PLAN:\n\nSeq Scan on noon (cost=0.00..20237.95 rows=108095 width=1960) (actual\ntime=1.22..67909.31 rows=108095 loops=1)\nTotal runtime: 68005.96 msec\n\nFreeBSD q1\n==========\ndynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=20508.19..20508.19 rows=1 width=0) (actual\ntime=888.93..888.94\nrows=1 loops=1)\n -> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=0) (actual\ntime=0.02..501.09 rows=108095 loops=1)\nTotal runtime: 889.06 msec\n\nFreeBSD q2\n==========\ndynacom=# EXPLAIN ANALYZE SELECT * from noon;\nNOTICE: QUERY PLAN:\n\nSeq Scan on noon (cost=0.00..20237.95 rows=108095 width=1975) (actual\ntime=1.08..53470.93 rows=108095 loops=1)\nTotal runtime: 53827.37 msec\n\nThe pgsql configuration for both systems is identical\n(the FreeBSD system has less memory but vmstat dont show\nany paging activity so i assume this is not an issue here).\n\nThe interesting part is that FreeBSD does better in select *,\nwhereas Linux seem to do much better in select count(*).\n\nPaging and disk IO activity for both systems is near 0.\n\nWhen i run the select count(*) in Linux i notice a small\nincrease (15%) in Context Switches 
per sec, whereas in FreeBSD\ni notice a big increase in Context Switches (300%) and\na huge increase in system calls per second (from normally\n9-10 to 110,000).\n(Linux vmstat gives no syscall info).\n\nThe same results come out for every count(*) i try.\nIs it just the reporting from explain analyze??\n\nHas any hacker some light to shed??\n\nThanx.\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n\n", "msg_date": "Wed, 27 Nov 2002 14:23:12 +0200 (EET)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "FreeBSD, Linux: select, select count(*) performance" }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> Linux q1\n> ========\n> dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=20508.19..20508.19 rows=1 width=0) (actual\n> time=338.17..338.17\n> rows=1 loops=1)\n> -> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=0) (actual\n> time=0.01..225.73 rows=108095 loops=1)\n> Total runtime: 338.25 msec\n\n> Linux q2\n> ========\n> dynacom=# EXPLAIN ANALYZE SELECT * from noon;\n> NOTICE: QUERY PLAN:\n\n> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=1960) (actual\n> time=1.22..67909.31 rows=108095 loops=1)\n> Total runtime: 68005.96 msec\n\nYou didn't say what was *in* the table, exactly ... 
but I'm betting\nthere are a lot of toasted columns, and that the extra runtime\nrepresents the time to fetch (and perhaps decompress) the TOAST entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Nov 2002 11:15:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FreeBSD, Linux: select, select count(*) performance " }, { "msg_contents": "On Wed, 27 Nov 2002, Tom Lane wrote:\n\n> Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> > Linux q1\n> > ========\n> > dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;\n> > NOTICE: QUERY PLAN:\n>\n> > Aggregate (cost=20508.19..20508.19 rows=1 width=0) (actual\n> > time=338.17..338.17\n> > rows=1 loops=1)\n> > -> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=0) (actual\n> > time=0.01..225.73 rows=108095 loops=1)\n> > Total runtime: 338.25 msec\n>\n> > Linux q2\n> > ========\n> > dynacom=# EXPLAIN ANALYZE SELECT * from noon;\n> > NOTICE: QUERY PLAN:\n>\n> > Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=1960) (actual\n> > time=1.22..67909.31 rows=108095 loops=1)\n> > Total runtime: 68005.96 msec\n>\n> You didn't say what was *in* the table, exactly ... 
but I'm betting\n> there are a lot of toasted columns, and that the extra runtime\n> represents the time to fetch (and perhaps decompress) the TOAST entries.\n\nAre there any reason to \"fetch (and perhaps decompress) the TOAST entries\"\njust to count(*) without any WHERE clause ?\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 27 Nov 2002 19:29:43 +0300 (MSK)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] FreeBSD, Linux: select, select count(*) performance " }, { "msg_contents": "On Wed, 27 Nov 2002, Tom Lane wrote:\n\n> Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> > Linux q1\n> > ========\n> > dynacom=# EXPLAIN ANALYZE SELECT count(*) from noon;\n> > NOTICE: QUERY PLAN:\n>\n> > Aggregate (cost=20508.19..20508.19 rows=1 width=0) (actual\n> > time=338.17..338.17\n> > rows=1 loops=1)\n> > -> Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=0) (actual\n> > time=0.01..225.73 rows=108095 loops=1)\n> > Total runtime: 338.25 msec\n>\n> > Linux q2\n> > ========\n> > dynacom=# EXPLAIN ANALYZE SELECT * from noon;\n> > NOTICE: QUERY PLAN:\n>\n> > Seq Scan on noon (cost=0.00..20237.95 rows=108095 width=1960) (actual\n> > time=1.22..67909.31 rows=108095 loops=1)\n> > Total runtime: 68005.96 msec\n>\n> You didn't say what was *in* the table, exactly ... 
but I'm betting\n> there are a lot of toasted columns, and that the extra runtime\n> represents the time to fetch (and perhaps decompress) the TOAST entries.\n\n278 columns of various types.\nnamely,\n\n Table \"noon\"\n Column | Type | Modifiers\n------------------------+------------------------+-----------\n v_code | character varying(4) |\n log_no | bigint |\n report_date | date |\n report_time | time without time zone |\n voyage_no | integer |\n charterer | character varying(12) |\n port | character varying(24) |\n duration | character varying(4) |\n rotation | character varying(9) |\n me_do_cons | double precision |\n reason | character varying(12) |\n ancorage_date | date |\n ancorage_time | time without time zone |\n exp_berth_date | date |\n exp_berth_time | time without time zone |\n berth_date | date |\n berth_time | time without time zone |\n exp_sail_date | date |\n exp_sail_time | time without time zone |\n draft_fw | double precision |\n draft_aft | double precision |\n etc_date | date |\n etc_time | time without time zone |\n completion_date | date |\n completion_time | time without time zone |\n load_quantity | double precision |\n discharging_quantity | double precision |\n delivery_date | date |\n delivery_place | character varying(12) |\n redelivery_date | date |\n redelivery_time | time without time zone |\n redelivery_place | character varying(12) |\n rob_ifo | double precision |\n rob_mdo | double precision |\n log_ifo | double precision |\n log_mdo | double precision |\n rcv_ifo | double precision |\n rcv_mdo | double precision |\n rcv_me | double precision |\n rcv_cyl | double precision |\n rcv_gen | double precision |\n rob_me | double precision |\n rob_cyl | double precision |\n rob_gen | double precision |\n voyage_sub_no | integer |\n voyage_activity | character varying(3) |\n remarks | character varying(60) |\n latitude | character varying(6) |\n longitude | character varying(6) |\n speed | double precision |\n wind_direction | 
character varying(1) |\n rpm | double precision |\n fuelconsumption | double precision |\n me_bearing_oil_presure | double precision |\n me_bearing_amber | double precision |\n ambere | character varying(8) |\n remarks2 | character varying(12) |\n steam_hours | double precision |\n ifoconsboilerheat | double precision |\n ae_mdo_consumption | double precision |\n cyl_me_exh_temp01 | double precision |\n cyl_me_exh_temp02 | double precision |\n cyl_me_exh_temp03 | double precision |\n cyl_me_exh_temp04 | double precision |\n cyl_me_exh_temp05 | double precision |\n cyl_me_exh_temp06 | double precision |\n cyl_me_exh_temp07 | double precision |\n cyl_me_exh_temp08 | double precision |\n cyl_me_exh_temp09 | double precision |\n cyl_me_exh_temp10 | double precision |\n cyl_me_exh_temp11 | double precision |\n cyl_me_exh_temp12 | double precision |\n cyl_me_exh_temp13 | double precision |\n cyl_me_exh_temp14 | double precision |\n gen1_ae_exh_temp01 | double precision |\n gen1_ae_exh_temp02 | double precision |\n gen1_ae_exh_temp03 | double precision |\n gen1_ae_exh_temp04 | double precision |\n gen1_ae_exh_temp05 | double precision |\n gen1_ae_exh_temp06 | double precision |\n gen1_ae_exh_temp07 | double precision |\n gen1_ae_exh_temp08 | double precision |\n gen2_ae_exh_temp01 | double precision |\n gen2_ae_exh_temp02 | double precision |\n gen2_ae_exh_temp03 | double precision |\n gen2_ae_exh_temp04 | double precision |\n gen2_ae_exh_temp05 | double precision |\n gen2_ae_exh_temp06 | double precision |\n gen2_ae_exh_temp07 | double precision |\n gen2_ae_exh_temp08 | double precision |\n gen3_ae_exh_temp01 | double precision |\n gen3_ae_exh_temp02 | double precision |\n gen3_ae_exh_temp03 | double precision |\n gen3_ae_exh_temp04 | double precision |\n gen3_ae_exh_temp05 | double precision |\n gen3_ae_exh_temp06 | double precision |\n gen3_ae_exh_temp07 | double precision |\n gen3_ae_exh_temp08 | double precision |\n dont_know | character varying(14) |\n 
voyage_confirmation | character varying(1) |\n ldin | double precision |\n dist_to_go | integer |\n dom_fw_rob | double precision |\n fw_produced | double precision |\n fw_salinity | double precision |\n fw_cons_dom | double precision |\n fw_cons_boil | double precision |\n ifo_ballast | double precision |\n ifo_deballast | double precision |\n ifo_load | double precision |\n ifo_disc | double precision |\n ifo_blr_heat | double precision |\n foofield | double precision |\n sc_air_pr | double precision |\n sc_air_temp | integer |\n ae_oil_pr1 | double precision |\n ae_oil_pr2 | double precision |\n ae_oil_pr3 | double precision |\n ae_oil_pr4 | double precision |\n ae_oil_pr5 | double precision |\n gen1_ex_9 | integer |\n gen1_ex_10 | integer |\n gen1_ex_11 | integer |\n gen1_ex_12 | integer |\n gen1_ex_13 | integer |\n gen1_ex_14 | integer |\n gen1_ex_15 | integer |\n gen1_ex_16 | integer |\n gen1_ex_17 | integer |\n gen1_ex_18 | integer |\n gen1_ex_19 | integer |\n gen1_ex_20 | integer |\n gen2_ex_9 | integer |\n gen2_ex_10 | integer |\n gen2_ex_11 | integer |\n gen2_ex_12 | integer |\n gen2_ex_13 | integer |\n gen2_ex_14 | integer |\n gen2_ex_15 | integer |\n gen2_ex_16 | integer |\n gen2_ex_17 | integer |\n gen2_ex_18 | integer |\n gen2_ex_19 | integer |\n gen2_ex_20 | integer |\n gen3_ex_9 | integer |\n gen3_ex_10 | integer |\n gen3_ex_11 | integer |\n gen3_ex_12 | integer |\n gen3_ex_13 | integer |\n gen3_ex_14 | integer |\n gen3_ex_15 | integer |\n gen3_ex_16 | integer |\n gen3_ex_17 | integer |\n gen3_ex_18 | integer |\n gen3_ex_19 | integer |\n gen3_ex_20 | integer |\n gen4_ex_1 | integer |\n gen4_ex_2 | integer |\n gen4_ex_3 | integer |\n gen4_ex_4 | integer |\n gen4_ex_5 | integer |\n gen4_ex_6 | integer |\n gen4_ex_7 | integer |\n gen4_ex_8 | integer |\n gen4_ex_9 | integer |\n gen4_ex_10 | integer |\n gen4_ex_11 | integer |\n gen4_ex_12 | integer |\n gen4_ex_13 | integer |\n gen4_ex_14 | integer |\n gen4_ex_15 | integer |\n gen4_ex_16 | integer |\n 
gen4_ex_17 | integer |\n gen4_ex_18 | integer |\n gen4_ex_19 | integer |\n gen4_ex_20 | integer |\n gen5_ex_1 | integer |\n gen5_ex_2 | integer |\n gen5_ex_3 | integer |\n gen5_ex_4 | integer |\n gen5_ex_5 | integer |\n gen5_ex_6 | integer |\n gen5_ex_7 | integer |\n gen5_ex_8 | integer |\n gen5_ex_9 | integer |\n gen5_ex_10 | integer |\n gen5_ex_11 | integer |\n gen5_ex_12 | integer |\n gen5_ex_13 | integer |\n gen5_ex_14 | integer |\n gen5_ex_15 | integer |\n gen5_ex_16 | integer |\n gen5_ex_17 | integer |\n gen5_ex_18 | integer |\n gen5_ex_19 | integer |\n gen5_ex_20 | integer |\n ae_kw1 | integer |\n ae_kw2 | integer |\n ae_kw3 | integer |\n ae_kw4 | integer |\n ae_kw5 | integer |\n filler | integer |\n me_tc_rpm1 | integer |\n me_tc_rpm2 | integer |\n me_tc_rpm3 | integer |\n me_tc_rpm4 | integer |\n me_tc_rpm5 | integer |\n me_tc_ex1 | integer |\n me_tc_ex2 | integer |\n me_tc_ex3 | integer |\n me_tc_ex4 | integer |\n me_tc_ex5 | integer |\n me_air_cool1 | integer |\n me_air_cool2 | integer |\n heat_c1 | double precision |\n heat_c2 | double precision |\n heat_c3 | double precision |\n heat_c4 | double precision |\n heat_c5 | double precision |\n heat_c6 | double precision |\n heat_p1 | double precision |\n heat_p2 | double precision |\n heat_p3 | double precision |\n heat_p4 | double precision |\n heat_p5 | double precision |\n heat_p6 | double precision |\n heat_s1 | double precision |\n heat_s2 | double precision |\n heat_s3 | double precision |\n heat_s4 | double precision |\n heat_s5 | double precision |\n heat_s6 | double precision |\n igs_c1 | double precision |\n igs_c2 | double precision |\n igs_c3 | double precision |\n igs_c4 | double precision |\n igs_c5 | double precision |\n igs_c6 | double precision |\n igs_p1 | double precision |\n igs_p2 | double precision |\n igs_p3 | double precision |\n igs_p4 | double precision |\n igs_p5 | double precision |\n igs_p6 | double precision |\n igs_s1 | double precision |\n igs_s2 | double precision |\n 
igs_s3 | double precision |\n igs_s4 | double precision |\n igs_s5 | double precision |\n igs_s6 | double precision |\n slip | double precision |\n foofloat | double precision |\n fohandle | double precision |\n wind_dir | integer |\n intensity | integer |\n state_sea | character varying(12) |\n soundings | character varying(12) |\n ecyl15 | integer |\n ecyl16 | integer |\n ecyl17 | integer |\n ecyl18 | integer |\n ecyl19 | integer |\n ecyl20 | integer |\n rem7 | character varying(12) |\n rem8 | character varying(12) |\n rem9 | character varying(12) |\n rem10 | character varying(12) |\n rem11 | character varying(12) |\n rem12 | character varying(12) |\n rem13 | character varying(12) |\n rem14 | character varying(12) |\n rem15 | character varying(12) |\n mesumplevel | double precision |\n oilwat2 | double precision |\n tot_steam_time | double precision |\n sea_temp | integer |\n air_temp | integer |\n tg_kw | character varying(4) |\nIndexes: noonf_date,\n noonf_logno,\n noonf_rotation,\n noonf_vcode,\n noonf_voyageno\n\n\n\nThe data as i told you are the same db dumped from the production system.\nThis same dump file was used to populate both (Linux,FBSD) databases.\n\nHow is it possible one to have toasted columns whereas the other not??\nHow can someone identify toasted columns??\n\nThanx,\n\nAchilleus\n\n", "msg_date": "Wed, 27 Nov 2002 18:35:20 +0200 (EET)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: FreeBSD, Linux: select, select count(*) performance " }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> On Wed, 27 Nov 2002, Tom Lane wrote:\n>> You didn't say what was *in* the table, exactly ... 
but I'm betting\n>> there are a lot of toasted columns, and that the extra runtime\n>> represents the time to fetch (and perhaps decompress) the TOAST entries.\n\n> 278 columns of various types.\n> namely,\n> [snip]\n\nHmm, no particularly wide columns there --- but 278 columns is a lot.\nI think the extra time might just be the time involved in fetching all\nthose column values out of the table row?\n\nIf you're interested in pursuing it, I'd suggest rebuilding the backend\nwith profiling enabled so you can see exactly where the time goes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Nov 2002 11:56:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FreeBSD, Linux: select, select count(*) performance " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Are there any reason to \"fetch (and perhaps decompress) the TOAST entries\"\n> just to count(*) without any WHERE clause ?\n\nIt doesn't. That was my point...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Nov 2002 11:57:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] FreeBSD, Linux: select, select count(*) performance " }, { "msg_contents": "Hello!\n\n\"Luis Sousa\" <llsousa@ualg.pt> wrote in message\nnews:3DE498E4.2050002@ualg.pt...\n> This is a cryptographically signed message in MIME format.\n>\n> --------------ms080209060900030807050408\n> Content-Type: text/plain; charset=us-ascii; format=flowed\n> Content-Transfer-Encoding: 7bit\n>\n> Tell me what did you try with limit and group by.\n> Where's IN, why don't you use EXISTS instead. It runs much master !\n>\n\n\n\nThanks for the reply!\nAlright, I'll use EXISTS instead of IN .... 
I didn't know that EXISTS is\nfaster.....\n\nAbout my query, I have tried:\n\n \"\nSELECT * FROM entry where isapproved='y' AND EXISTS (SELECT id\nFROM subcategory WHERE catid='2') ORDER BY subcatid DESC LIMIT 5;\n\";\nThis will return only 5 rows....\n\nBut when I add the GROUP BY, I get an error:\n\"\nSELECT * FROM entry where isapproved='y' AND EXISTS (SELECT id\nFROM subcategory WHERE catid='2') ORDER BY subcatid DESC LIMIT 5 GROUP BY\nsubcatid;\n\"\n\n: ERROR: parser: parse error at or near \"GROUP\"\n\n\nThanks.....\n\nArcadius.\n\n\n\n", "msg_date": "Wed, 27 Nov 2002 20:34:32 +0100", "msg_from": "\"Arcadius A.\" <ahouans@sh.cvut.cz>", "msg_from_op": true, "msg_subject": "Re: SQL query help!" } ]
[ { "msg_contents": "Hi Guys,\n\nI'm starting work on ADD COLUMN. I'm going to allow:\n\n* SERIAL, SERIAL8\n* DEFAULT\n* NOT NULL\n\netc...\n\nThe one big programming difficulty I see is the process of running through\nall the existing tuples in the relation the column was added to and\nevaluating the default for each row.\n\nI assume that's the correct behaviour? If they specify a default, the\ncolumn should be auto-filled with that default, right?\n\nIf someone could give me a really quick example look on how to do this, it'd\nbe really appreciated and would save me heaps of time...\n\nThe trick is that the default clause needs to be actually evaluated, not\njust set - eg. nextval('\"my_seq\"') sort of thing.\n\nI guess the other tricky bit is checking that the default value satisfies\nthe check constraint...?\n\nChris\n\n", "msg_date": "Sat, 23 Nov 2002 15:48:25 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Help with ADD COLUMN" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> The one big programming difficulty I see is the process of running through\n> all the existing tuples in the relation the column was added to and\n> evaluating the default for each row.\n\nBasically you wanna do an \"UPDATE tab SET col = <default expr>\". I'd\nsuggest letting the existing machinery handle as much of that as\npossible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Nov 2002 01:28:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN " }, { "msg_contents": "At 03:48 PM 23/11/2002 -0800, Christopher Kings-Lynne wrote:\n>I assume that's the correct behaviour? If they specify a default, the\n>column should be auto-filled with that default, right?\n\nGood question. 
We might want some input from other DBs; Dec RDB default \nexisting rows to NULL irrespective of the 'DEFAULT' clause.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 03 5330 3172 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Sun, 24 Nov 2002 17:34:22 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN" }, { "msg_contents": "On Sun, 2002-11-24 at 11:14, Hannu Krosing wrote:\n> On Sun, 2002-11-24 at 08:34, Philip Warner wrote:\n> > At 03:48 PM 23/11/2002 -0800, Christopher Kings-Lynne wrote:\n> > >I assume that's the correct behaviour? If they specify a default, the\n> > >column should be auto-filled with that default, right?\n> > \n> > Good question. We might want some input from other DBs; Dec RDB default \n> > existing rows to NULL irrespective of the 'DEFAULT' clause.\n> \n> Also, how would I express a new column with default for which I _want_ \n> that column in old records to be NULL ?\n\nSame way as you do now. Add the column, then alter in the default.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\n", "msg_date": "24 Nov 2002 09:24:45 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN" }, { "msg_contents": "On Sun, 2002-11-24 at 08:34, Philip Warner wrote:\n> At 03:48 PM 23/11/2002 -0800, Christopher Kings-Lynne wrote:\n> >I assume that's the correct behaviour? If they specify a default, the\n> >column should be auto-filled with that default, right?\n> \n> Good question. 
We might want some input from other DBs; Dec RDB default \n> existing rows to NULL irrespective of the 'DEFAULT' clause.\n\nAlso, how would I express a new column with default for which I _want_ \nthat column in old records to be NULL ?\n\n----------------\nHannu\n\n\n\n", "msg_date": "24 Nov 2002 18:14:01 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 03:48 PM 23/11/2002 -0800, Christopher Kings-Lynne wrote:\n>> I assume that's the correct behaviour? If they specify a default, the\n>> column should be auto-filled with that default, right?\n\n> Good question.\n\nNo, it's perfectly clear in the spec:\n\n 1) The column defined by the <column definition> is added to T.\n\n 2) Let C be the column added to T. Every value in C is the default\n value for C.\n\nThe reason we currently reject DEFAULT in an ADD COLUMN is precisely\nthat the spec requires the semantics we don't have implemented.\n(On the other hand, ALTER COLUMN SET DEFAULT is easy because it's not\nsupposed to affect existing table rows.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Nov 2002 12:35:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN " }, { "msg_contents": "\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > The one big programming difficulty I see is the process of running\nthrough\n> > all the existing tuples in the relation the column was added to and\n> > evaluating the default for each row.\n>\n> Basically you wanna do an \"UPDATE tab SET col = <default expr>\". 
I'd\n> suggest letting the existing machinery handle as much of that as\n> possible.\n\nUsing SPI calls?\n\nChris\n\n", "msg_date": "Sun, 24 Nov 2002 12:35:46 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Help with ADD COLUMN " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Basically you wanna do an \"UPDATE tab SET col = <default expr>\". I'd\n>> suggest letting the existing machinery handle as much of that as\n>> possible.\n\n> Using SPI calls?\n\nI wouldn't use SPI; it introduces way too many variables --- besides\nwhich, you already have the default in internal form, why would you want\nto deparse and reparse it?\n\nI'd look into building a parsetree for an UPDATE statement and\nfeeding that to the executor.\n\nAn interesting question: should the rewriter be allowed to get its hands\non the thing, or not? I'm not sure it'd be a good idea to fire rules\nfor such an operation. For that matter, perhaps we don't want to fire\ntriggers either --- just how close should this come to being like a regular\nUPDATE?\n\nIt would probably net out to not a lot of code to do a heapscan,\nheap_modify_tuple, etc if we decide that not firing rules/triggers\nis more appropriate behavior. I'm not sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Nov 2002 16:16:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Help with ADD COLUMN " } ]
[ { "msg_contents": "Marc, would you package RC2 please? CVS is ready to go and I would like\nto still hit the final release date of Wednesday.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 24 Nov 2002 15:10:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "7.3RC2 please" }, { "msg_contents": "\nSorry, got hit by both a migraine and power outages this weekend ... RC2\nis packaged up now, if ppl want to take a test run through her, and I'll\nput an announce up tomorrow morning when I get up, after the mirrors have\nhad a chance to update ...\n\n\nOn Sun, 24 Nov 2002, Bruce Momjian wrote:\n\n> Marc, would you package RC2 please? CVS is ready to go and I would like\n> to still hit the final release date of Wednesday.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n", "msg_date": "Sun, 24 Nov 2002 19:44:55 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.3RC2 please" } ]
[ { "msg_contents": "> I know, it's late, and I assume this is intentional, but wanted to check:\n\n<snip>\n\n> This drives phpgroupware nuts...\n>\n> (their code needs help, but).\n\nIt is in the release notes. I'm not sure why the behaviour was changed\n(other than it really is bad behaviour).\n\nChris\n\n", "msg_date": "Sun, 24 Nov 2002 14:55:52 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_atoi: zero length string" }, { "msg_contents": "I know, it's late, and I assume this is intentional, but wanted to check:\n\non 7.2.3:\n$ psql\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\\\copyright for distribution terms\n \\\\h for help with SQL commands\n \\\\? for help on internal slash commands\n \\\\g or terminate with semicolon to execute query\n \\\\q to quit\n\nler=# \\\\d\n List of relations\n Name | Type | Owner\n-------------+-------+-------\n nat_dal_est | table | ler\n(1 row)\n\nler=# create table z_test(t int);\nCREATE\nler=# insert into z_test(t) values('');\nINSERT 40606 1\nler=# select * from z_test;\n t\n---\n 0\n(1 row)\n\nler=# \\\\q\n$\nConnection to ler-freebie.iadfw.net closed.\n$\n\n\non 7.3rc1:\n$ psql\nWelcome to psql 7.3rc1, the PostgreSQL interactive terminal.\n\nType: \\\\copyright for distribution terms\n \\\\h for help with SQL commands\n \\\\? 
for help on internal slash commands\n \\\\g or terminate with semicolon to execute query\n \\\\q to quit\n\nler=# create table z_test(t int);\nCREATE TABLE\nler=# insert into z_test(t) values('');\nERROR: pg_atoi: zero-length string\nler=# \\\\q\n$\n\n\nThis drives phpgroupware nuts...\n\n(their code needs help, but).\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "Sun, 24 Nov 2002 16:58:10 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "pg_atoi: zero length string" }, { "msg_contents": "Bug #1797 filed with the PHPgroupware side of savannah.\n\nLER\n\n\n--On Sunday, November 24, 2002 14:55:52 -0800 Christopher Kings-Lynne \n<chriskl@familyhealth.com.au> wrote:\n\n>> I know, it's late, and I assume this is intentional, but wanted to check:\n>\n> <snip>\n>\n>> This drives phpgroupware nuts...\n>>\n>> (their code needs help, but).\n>\n> It is in the release notes. I'm not sure why the behaviour was changed\n> (other than it really is bad behaviour).\n>\n> Chris\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "Sun, 24 Nov 2002 17:33:48 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: pg_atoi: zero length string" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> It is in the release notes. I'm not sure why the behaviour was changed\n> (other than it really is bad behaviour).\n\nI think someone exhibited a case where COPY behavior was really\nconfusing because it was silently accepting an empty string for an int\nfield, instead of giving an error. 
Since the behavior seemed obviously\nbogus anyway, we were finally motivated to change it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Nov 2002 18:59:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_atoi: zero length string " } ]
[ { "msg_contents": "I just wrote a script to automate my post-patch application testing. \nThis should make for few problems after patch application:\n\n\tpgstop\n\trm -r /u/pg/data\n\tpgmakeall -c\t# make clean, make\n\tinitdb\n\tpgstart\n\tcd /pg/test/regress\n\tgmake clean\n\tgmake all runtest\n_\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 24 Nov 2002 21:11:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "patch testing" } ]
[ { "msg_contents": "-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org", "msg_date": "Sun, 24 Nov 2002 18:31:49 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "7.3rc2 Test Failures" }, { "msg_contents": "David Wheeler <david@wheeler.net> writes:\n> COPY onek FROM '/usr/local/src/postgresql-7.3rc2/src/test/regress/results/onek.data';\n> + ERROR: COPY command, running in backend with effective uid 77, could not open file '/usr/local/src/postgresql-7.3rc2/src/test/regress/results/onek.data' for reading. Errno = No such file or directory (2).\n\nLooks like a problem on your end...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Nov 2002 10:12:19 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: 7.3rc2 Test Failures" }, { "msg_contents": "On Tuesday, November 26, 2002, at 07:12 AM, Neil Conway wrote:\n\n> Looks like a problem on your end...\n\nOh, the message finally got through, did it? I chatted with Bruce \nyesterday and ran the tests again and they all passed.\n\nThanks,\n\nDavid\n\n-- \nDavid Wheeler AIM: dwTheory\ndavid@wheeler.net ICQ: 15726394\nhttp://david.wheeler.net/ Yahoo!: dew7e\n Jabber: Theory@jabber.org\n\n", "msg_date": "Tue, 26 Nov 2002 07:18:22 -0800", "msg_from": "David Wheeler <david@wheeler.net>", "msg_from_op": true, "msg_subject": "Re: 7.3rc2 Test Failures" } ]
[ { "msg_contents": "Hi, I have been playing around with contrib/dbmirror with RC2 and\nfaced with following errors:\n\n perl DBMirror.pl slaveDatabase.conf\nGlobal symbol \"$setResult\" requires explicit package name at DBMirror.pl line 131.\nGlobal symbol \"$setResult\" requires explicit package name at DBMirror.pl line 132.\nGlobal symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 140.\nGlobal symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 141.\nExecution of DBMirror.pl aborted due to compilation errors.\n\nThis Linux and perl 5.6.1.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 25 Nov 2002 17:12:12 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "dbmirror" }, { "msg_contents": "\nYes, I get the same failure. with perl 5.005_03. Steven, can you\ncomment on this?\n\n---------------------------------------------------------------------------\n\nTatsuo Ishii wrote:\n> Hi, I have been playing around with contrib/dbmirror with RC2 and\n> faced with following errors:\n> \n> perl DBMirror.pl slaveDatabase.conf\n> Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 131.\n> Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 132.\n> Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 140.\n> Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 141.\n> Execution of DBMirror.pl aborted due to compilation errors.\n> \n> This Linux and perl 5.6.1.\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Dec 2002 00:24:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: dbmirror" }, { "msg_contents": "On Thu, 5 Dec 2002, Bruce Momjian wrote:\n\nIt looks like the problem was introduced when the \"SET autocommit\"\nand \"SET search_path\" commands were added to the beginning of the script.\n\nThe attached patch should fix the problem. It probably should be applied \nagainst the 7.3 and 7.4 branches.\n\n\n\n> \n> Yes, I get the same failure. with perl 5.005_03. Steven, can you\n> comment on this?\n> \n> ---------------------------------------------------------------------------\n> \n> Tatsuo Ishii wrote:\n> > Hi, I have been playing around with contrib/dbmirror with RC2 and\n> > faced with following errors:\n> > \n> > perl DBMirror.pl slaveDatabase.conf\n> > Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 131.\n> > Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 132.\n> > Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 140.\n> > Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 141.\n> > Execution of DBMirror.pl aborted due to compilation errors.\n> > \n> > This Linux and perl 5.6.1.\n> > --\n> > Tatsuo Ishii\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> \n\n-- \nSteven Singer ssinger@navtechinc.com\nAircraft Performance Systems Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc. AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario ARINC: YKFNSCR", "msg_date": "Thu, 5 Dec 2002 20:54:10 +0000 (GMT)", "msg_from": "Steven Singer <ssinger@navtechinc.com>", "msg_from_op": false, "msg_subject": "Re: dbmirror" }, { "msg_contents": "\nThanks. Applied to 7.3 and CVS HEAD. 
It was me who added those\ncommands to set the environment, and I didn't realize it was the first\nuse of those variables, hence the need for 'my'.\n\nThanks. Fix will be in 7.3.1.\n\n---------------------------------------------------------------------------\n\nSteven Singer wrote:\n> On Thu, 5 Dec 2002, Bruce Momjian wrote:\n> \n> It looks like the problem was introduced when the \"SET autocommit\"\n> and \"SET search_path\" commands were added to the beginning of the script.\n> \n> The attached patch should fix the problem. It probably should be applied \n> against the 7.3 and 7.4 branches.\n> \n> \n> \n> > \n> > Yes, I get the same failure. with perl 5.005_03. Steven, can you\n> > comment on this?\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Tatsuo Ishii wrote:\n> > > Hi, I have been playing around with contrib/dbmirror with RC2 and\n> > > faced with following errors:\n> > > \n> > > perl DBMirror.pl slaveDatabase.conf\n> > > Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 131.\n> > > Global symbol \"$setResult\" requires explicit package name at DBMirror.pl line 132.\n> > > Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 140.\n> > > Global symbol \"$setResult2\" requires explicit package name at DBMirror.pl line 141.\n> > > Execution of DBMirror.pl aborted due to compilation errors.\n> > > \n> > > This Linux and perl 5.6.1.\n> > > --\n> > > Tatsuo Ishii\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > > \n> > \n> > \n> \n> -- \n> Steven Singer ssinger@navtechinc.com\n> Aircraft Performance Systems Phone: 519-747-1170 ext 282\n> Navtech Systems Support Inc. AFTN: CYYZXNSX SITA: YYZNSCR\n> Waterloo, Ontario ARINC: YKFNSCR\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Dec 2002 16:04:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: dbmirror" } ]
[ { "msg_contents": "\tAnyone knows right install location and name for language .mo files ?\n\n\tI created new .po file, configure and make was OK.\nTranslation is also checked as explained on PostgreSQL site.\nWhen I install, .mo file is copied to:\n/usr/local/pgsql/share/locale/hr_HR/LC_MESSAGES/postgres.mo (RedHat).\n\tIn postgresql.conf is already line that looks like this:\nLC_MESSAGES = 'hr_HR'.\n\nSo why I do not see the translated messages ?\nCan I set some sort of debug level to see which lang is postgres actually \nusing ?\n\nI am ready to contribute this translation file (it is for Croatian language). \nMore than 60% of file is translated, but I will work some more on it.\n\nRegards !\n", "msg_date": "Mon, 25 Nov 2002 09:43:25 +0000", "msg_from": "Darko Prenosil <darko.prenosil@finteh.hr>", "msg_from_op": true, "msg_subject": "Location of language .mo files or\n\t=?iso-8859-2?q?'Za=B9to=20postgres=20ne=20govori=20Hrvatski'?= ???" }, { "msg_contents": "Zdravo,\n\nDarko Prenosil writes:\n\n> When I install, .mo file is copied to:\n> /usr/local/pgsql/share/locale/hr_HR/LC_MESSAGES/postgres.mo (RedHat).\n> \tIn postgresql.conf is already line that looks like this:\n> LC_MESSAGES = 'hr_HR'.\n>\n> So why I do not see the translated messages ?\n\nHard to tell. Try making a PO file for a frontend application (such as\npsql or pg_dump), set the environment variables, and then see what\nhappens. 
Possibly, there are less disturbing factors involved that way\nthan on the server side.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 26 Nov 2002 19:42:42 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Location of language .mo files or" }, { "msg_contents": "On Tuesday 26 November 2002 18:42, Peter Eisentraut wrote:\n> Zdravo,\n>\n> Darko Prenosil writes:\n> > When I install, .mo file is copied to:\n> > /usr/local/pgsql/share/locale/hr_HR/LC_MESSAGES/postgres.mo (RedHat).\n> > \tIn postgresql.conf is already line that looks like this:\n> > LC_MESSAGES = 'hr_HR'.\n> >\n> > So why I do not see the translated messages ?\n>\n> Hard to tell. Try making a PO file for a frontend application (such as\n> psql or pg_dump), set the environment variables, and then see what\n> happens. Possibly, there are less disturbing factors involved that way\n> than on the server side.\n\n\tI found out what was wrong. I did not \"make clean\", and postgres backend was \nalready compiled and linked without --enable-nls. So make only made my \nhr_HR.mo, but did not recompile the backend. After \"make clean\" and \"make \ninstall\" everything is working just fine.\nThank You anyway for Your effort !!!\n\nZdravo i tebi !\n\n\n", "msg_date": "Tue, 26 Nov 2002 20:26:40 +0000", "msg_from": "Darko Prenosil <darko.prenosil@finteh.hr>", "msg_from_op": true, "msg_subject": "Re: Location of language .mo files or\n\t=?iso-8859-2?q?'Za=B9to=20postgres=20ne=20govori=20Hrvatski'?= ???" } ]
[ { "msg_contents": "system = powerpc-ibm-aix4.2.1.0 \n\n\nconfigure command \n\nenv CC=gcc ./configure --with-maxbackends=1024 --with-openssl=/usr/local/ssl --enable-syslog --enable-odbc --disable-nls\n\ngmake check output file\n\n\nregression.out\n--------------\n\nparallel group (13 tests): text varchar oid int2 char boolean float4 int4 name int8 float8 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... ok\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): lseg date path circle polygon box point time timetz tinterval abstime interval reltime comments inet timestamptz timestamp type_sanity opr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... ok\ntest insert ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers vacuum inherit constraints create_misc\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n inherit ... ok\n vacuum ... ok\nparallel group (2 tests): create_view create_index\n create_index ... ok\n create_view ... ok\ntest sanity_check ... ok\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): select_distinct_on select_into select_having transactions select_distinct random subselect portals arrays union select_implicit case aggregates hash_index join btree_index\n select_into ... ok\n select_distinct ... 
ok\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... ok\n random ... ok\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest privileges ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 cluster rules select_views foreign_key\n select_views ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\n cluster ... ok\nparallel group (11 tests): limit truncate temp copy2 domain rangefuncs conversion prepare without_oid plpgsql alter_table\n limit ... ok\n plpgsql ... ok\n copy2 ... ok\n temp ... ok\n domain ... ok\n rangefuncs ... ok\n prepare ... ok\n without_oid ... ok\n conversion ... ok\n truncate ... ok\n alter_table ... ok\n\n\nregression.diffs\n-----------------\n\n\n*** ./expected/geometry-powerpc-aix4.out\tTue Sep 12 17:07:16 2000\n--- ./results/geometry.out\tThu Nov 21 21:46:01 2002\n***************\n*** 114,120 ****\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n--- 114,120 ----\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (-0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 448,454 ****\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | 
((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((-0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999\n999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 448,454 ----\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.9999\n99998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 461,467 ****\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((-0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 461,467 ----\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! | ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n\n======================================================================\n", "msg_date": "Mon, 25 Nov 2002 08:45:46 -0500 (EST)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": true, "msg_subject": "Re: RC1? AIX 4.2.1.0 " } ]
[ { "msg_contents": "Sorry forgot to include that I had to add -lssl and -lcrypto ti the libpq \nline in Makefile.global.in in the src directory to get ecpg to link\n\nas follows\n\n286c286\n< libpq = -L$(libpq_builddir) -lpq\n---\n> libpq = -L$(libpq_builddir) -lpq -lssl -lcrypto\n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n---------- Forwarded message ----------\nDate: Mon, 25 Nov 2002 08:45:46 -0500 (EST)\nFrom: Samuel A Horwitz <horwitz@argoscomp.com>\nTo: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] RC1? AIX 4.2.1.0 \n\nsystem = powerpc-ibm-aix4.2.1.0 \n\n\nconfigure command \n\nenv CC=gcc ./configure --with-maxbackends=1024 --with-openssl=/usr/local/ssl --enable-syslog --enable-odbc --disable-nls\n\ngmake check output file\n\n\nregression.out\n--------------\n\nparallel group (13 tests): text varchar oid int2 char boolean float4 int4 name int8 float8 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... ok\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): lseg date path circle polygon box point time timetz tinterval abstime interval reltime comments inet timestamptz timestamp type_sanity opr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... ok\ntest insert ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers vacuum inherit constraints create_misc\n constraints ... ok\n triggers ... 
ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n inherit ... ok\n vacuum ... ok\nparallel group (2 tests): create_view create_index\n create_index ... ok\n create_view ... ok\ntest sanity_check ... ok\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): select_distinct_on select_into select_having transactions select_distinct random subselect portals arrays union select_implicit case aggregates hash_index join btree_index\n select_into ... ok\n select_distinct ... ok\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... ok\n random ... ok\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest privileges ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 cluster rules select_views foreign_key\n select_views ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\n cluster ... ok\nparallel group (11 tests): limit truncate temp copy2 domain rangefuncs conversion prepare without_oid plpgsql alter_table\n limit ... ok\n plpgsql ... ok\n copy2 ... ok\n temp ... ok\n domain ... ok\n rangefuncs ... ok\n prepare ... ok\n without_oid ... ok\n conversion ... ok\n truncate ... ok\n alter_table ... ok\n\n\nregression.diffs\n-----------------\n\n\n*** ./expected/geometry-powerpc-aix4.out\tTue Sep 12 17:07:16 2000\n--- ./results/geometry.out\tThu Nov 21 21:46:01 2002\n***************\n*** 114,120 ****\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n--- 114,120 ----\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! 
| (0,0) | [(0,0),(6,6)] | (-0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 448,454 ****\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | 
((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((-0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999\n999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 448,454 ----\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | 
((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.9999\n99998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 461,467 ****\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((-0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 461,467 ----\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! | ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n\n======================================================================\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Mon, 25 Nov 2002 09:12:55 -0500 (EST)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": true, "msg_subject": "Re: RC1? AIX 4.2.1.0 (fwd)" } ]
[ { "msg_contents": "\nMorning all ...\n\n On Sunday this weekend, we packaged up PostgreSQL v7.3rc2 for testing\n... this release, if all goes well, will become the \"Final Release\" on\nWednesday, unless anyone comes up with any outstanding issues.\n\n At this point, we need as many ppl as possible to try and break it, so\nthat when we do release, it's as solid as we can possibly make it.\n\n If all goes well, v7.3 will be released by December 1st.\n\n Downloads are available at all mirrors, or the main site:\n\n\tftp://ftp.postgresql.org/pub/beta\n\n Bugs should be reported to pgsql-bugs@postgresql.org\n\nThanks ...\n\n\n\n\n", "msg_date": "Mon, 25 Nov 2002 12:19:23 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "RC2 Packaged in Preparation for a Wednesday Release ..." }, { "msg_contents": "SuSE 7.3 (2.4.10-4GB)\n\nCompiles and passes regression fine:\nAll 89 tests passed. \n\nInstalling to dev server next.\n\nCheers,\nSteve\n\n\n\nOn Monday 25 November 2002 8:19 am, you wrote:\n> Morning all ...\n>\n> On Sunday this weekend, we packaged up PostgreSQL v7.3rc2 for testing\n> ... 
this release, if all goes well, will become the \"Final Release\" on\n> Wednesday, unless anyone comes up with any outstanding issues.\n>\n> At this point, we need as many ppl as possible to try and break it, so\n> that when we do release, its as solid as we can possibly make it.\n>\n> If all goes well, v7.3 will be released by December 1st.\n>\n> Downloads are available at all mirrors, or the main site:\n>\n> \tftp://ftp.postgresql.org/pub/beta\n>\n> Bugs should be reported to pgsql-bugs@postgresql.org\n>\n> Thanks ...\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Tue, 26 Nov 2002 10:00:38 -0800", "msg_from": "Steve Crawford <scrawford@pinpointresearch.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RC2 Packaged in Preparation for a Wednesday Release ..." } ]
[ { "msg_contents": "Hello,\n\ni've read that there are 2 different native ports for Windows\nsomewhere.\n\nI've searched for them but didn't found them. Is there anyone who can\npoint me to a link or send me a copy of the sources?\n\nThanks\n\nUlrich\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n", "msg_date": "Mon, 25 Nov 2002 18:37:38 +0100", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "Native Win32 sources" }, { "msg_contents": "\nThey are in process for 7.4. There are a few non-source Win32\ncommercial distributions.\n\n---------------------------------------------------------------------------\n\nUlrich Neumann wrote:\n> Hello,\n> \n> i've read that there are 2 different native ports for Windows\n> somewhere.\n> \n> I've searched for them but didn't found them. Is there anyone who can\n> point me to a link or send me a copy of the sources?\n> \n> Thanks\n> \n> Ulrich\n> ----------------------------------\n> This e-mail is virus scanned\n> Diese e-mail ist virusgeprueft\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 25 Nov 2002 12:51:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Win32 sources" }, { "msg_contents": "Ulrich Neumann wrote:\n> Hello,\n> \n> i've read that there are 2 different native ports for Windows\n> somewhere.\n> \n> I've searched for them but didn't found them. 
Is there anyone who can\n> point me to a link or send me a copy of the sources?\n\nOh, you are probably asking about the sources. They are not publically\navailable yet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 25 Nov 2002 12:51:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Native Win32 sources" }, { "msg_contents": "Is there a rough date for when they'll be available?\n\nI have a development team at work who currently have an M$-Windows box and a\nLinux box each in order to allow them to read M$-Office documents sent to us\nand develop against PostgreSQL (which we use in production).\n\nI know I could have a shared Linux box with multiple databases and have them\nbind to that, but one of the important aspects of our application is\nresponse time, and you can't accurately measure response times for code\nchanges on a shared system.\n\nHaving a Win32 native version would save a lot of hassles for me.\n\nAl.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Ulrich Neumann\" <U_Neumann@gne.de>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, November 25, 2002 5:51 PM\nSubject: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> Ulrich Neumann wrote:\n> > Hello,\n> >\n> > i've read that there are 2 different native ports for Windows\n> > somewhere.\n> >\n> > I've searched for them but didn't found them. Is there anyone who can\n> > point me to a link or send me a copy of the sources?\n>\n> Oh, you are probably asking about the sources. They are not publically\n> available yet.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania\n19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n", "msg_date": "Tue, 26 Nov 2002 08:24:03 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "Al, to be honest I don't think the Windows native would save hassle,\nrather it'd probably cause more! No disrespect to those doing the\nversion, read on for reasoning...\n\nYes, you get a beta of a Windows native version just now, yes it\nprobably will not be that long till the source is available... But\nhow long till it's part of a kosher PostgreSQL release? Version\n7.4... Could be up to six months... Do you want to run pre-release\nversions in the meantime? Don't think so, not in a production\nenvironment!\n\nSo, the real way to save hassle is probably a cheap commodity PC with\nLinux installed... Or settle for the existing, non-native, Windows\nversion.\n\nBy the way, just to open Office documents? 
Have you tried OpenOffice?\n\nRegards, Lee Kindness.\n\nAl Sutton writes:\n > Is there a rough date for when they'll be available?\n > \n > I have a development team at work who currently have an M$-Windows box and a\n > Linux box each in order to allow them to read M$-Office documents sent to us\n > and develop against PostgreSQL (which we use in production).\n > \n > I know I could have a shared Linux box with multiple databases and have them\n > bind to that, but one of the important aspects of our application is\n > response time, and you can't accurately measure response times for code\n > changes on a shared system.\n > \n > Having a Win32 native version would save a lot of hassles for me.\n > \n > Al.\n > \n > ----- Original Message -----\n > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n > \n > > Ulrich Neumann wrote:\n > > > Hello,\n > > >\n > > > i've read that there are 2 different native ports for Windows\n > > > somewhere.\n > > >\n > > > I've searched for them but didn't found them. Is there anyone who can\n > > > point me to a link or send me a copy of the sources?\n > >\n > > Oh, you are probably asking about the sources. They are not publically\n > > available yet.\n", "msg_date": "Tue, 26 Nov 2002 09:08:58 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "Lee,\n\nI wouldn't go for 7.4 in production until after it's gone gold, but being\nable to cut the number of boxes per developer by giving them a Win32 native\nversion would save on everything from the overhead of getting the developers\nfamiliar enough with Linux to be able to admin their own systems, to cutting\nthe network usage by having the DB and app on the same system, through to\ncutting the cost of electricity by only having one box per developer. 
It\nwould also be a good way of testing 7.4 against our app so we can plan for\nan upgrade when it's released ;).\n\nI've tried open office 1.0.1 and had to ditch it. It had problems with font\nrendering and tables that meant many of the forms that people sent as word\ndocuments had chunks that weren't displayed or printed. We did try it on a\nbox with MS-Word on it to ensure that the setup of the machine wasn't the\nissue, Word had no problems, OO failed horribly.\n\nThanks for the ideas,\n\nAl.\n\n----- Original Message -----\nFrom: \"Lee Kindness\" <lkindness@csl.co.uk>\nTo: \"Al Sutton\" <al@alsutton.com>\nCc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"Ulrich Neumann\"\n<U_Neumann@gne.de>; <pgsql-hackers@postgresql.org>; \"Lee Kindness\"\n<lkindness@csl.co.uk>\nSent: Tuesday, November 26, 2002 9:08 AM\nSubject: Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> Al, to be honest I don't think the Windows native would save hassle,\n> rather it'd probably cause more! No disrespect to those doing the\n> version, read on for reasoning...\n>\n> Yes, you get a beta of a Windows native version just now, yes it\n> probably will not be that long till the source is a available... But\n> how long till it's part of a cosha PostgreSQL release? Version\n> 7.4... Could be up to six months... Do you want to run pre-release\n> versions in the meantime? Don't think so, not in a production\n> environment!\n>\n> So, the real way to save hassle is probably a cheap commodity PC with\n> Linux installed... Or settle for the existing, non-native, Windows\n> version.\n>\n> By the way, just to open Office documents? 
Have you tried OpenOffice?\n>\n> Regards, Lee Kindness.\n>\n> Al Sutton writes:\n> > Is there a rough date for when they'll be available?\n> >\n> > I have a development team at work who currently have an M$-Windows box\nand a\n> > Linux box each in order to allow them to read M$-Office documents sent\nto us\n> > and develop against PostgreSQL (which we use in production).\n> >\n> > I know I could have a shared Linux box with multiple databases and have\nthem\n> > bind to that, but one of the important aspects of our application is\n> > response time, and you can't accurately measure response times for code\n> > changes on a shared system.\n> >\n> > Having a Win32 native version would save a lot of hassles for me.\n> >\n> > Al.\n> >\n> > ----- Original Message -----\n> > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> >\n> > > Ulrich Neumann wrote:\n> > > > Hello,\n> > > >\n> > > > i've read that there are 2 different native ports for Windows\n> > > > somewhere.\n> > > >\n> > > > I've searched for them but didn't found them. Is there anyone who\ncan\n> > > > point me to a link or send me a copy of the sources?\n> > >\n> > > Oh, you are probably asking about the sources. They are not\npublically\n> > > available yet.\n>\n\n\n", "msg_date": "Tue, 26 Nov 2002 11:33:42 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On November 26, 2002 06:33 am, Al Sutton wrote:\n> I wouldn't go for 7.4 in production until after it's gone gold, but being\n> able to cut the number of boxes per developer by giving them a Win32 native\n> version would save on everything from the overhead of getting the\n> developers familiar enough with Linux to be able to admin their own\n> systems, to cutting the network usage by having the DB and app on the same\n> system, through to cutting the cost of electricity by only having one box\n> per developer. 
It would also be a good way of testing 7.4 against our app\n> so we can plan for an upgrade when it's released ;).\n\nIf your database is of any significant size you probably want a separate \ndatabase machine anyway. We run NetBSD everywhere and could easily put the \napps on the database machine but choose not to. We have 6 production servers \nrunning various apps and web servers and they all talk to a central database \nmachine which has lots of RAM. Forget about bandwidth. Just get a 100MBit \nswitch and plug everything into it. Network bandwidth won't normally be your \nbottleneck. Memory and CPU will be.\n\nWe actually have 4 database machines, 3 running transaction databases and 1 \nwith an rsynced copy for reporting purposes. We use 3 networks, 1 for the \napp servers to talk to the Internet, 1 for the app servers to talk to the \ndatabases and one for the databases to talk amongst themselves.\n\nEven for development we keep a separate database machine that developers all \nuse. They run whatever they want - we have people using NetBSD, Linux and \nWindows - but they work on one database which is tuned for the purpose. They \ncan even create their own databases on that system if they want for local \ntesting.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 26 Nov 2002 06:59:13 -0500", "msg_from": "\"D'Arcy J.M. 
Cain\" <darcy@druid.net>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "D'Arcy,\n\nIn production the database servers are separate multi-processor machines\nwith mirrored disks linked via Gigabit ethernet to the app server.\n\nIn development I have people extremely familiar with MS, but not very hot\nwith Unix in any flavour, who are developing Java and PHP code which is then\npassed into the QA phase where it's run on a replica of the production\nenvironment.\n\nMy goal is to allow my developers to work on the platform they know (MS),\nusing as many of the aspects of the production environment as possible (JVM\nversion, PHP version, and hopefully database version), without needing to\nbuy each new developer two machines, and incur the overhead of them\nfamiliarising themselves with a flavour of Unix.\n\nHope this helps you understand where I'm coming from,\n\nAl.\n\n----- Original Message -----\nFrom: \"D'Arcy J.M. Cain\" <darcy@druid.net>\nTo: \"Al Sutton\" <al@alsutton.com>; \"Lee Kindness\" <lkindness@csl.co.uk>\nCc: <pgsql-hackers@PostgreSQL.org>\nSent: Tuesday, November 26, 2002 11:59 AM\nSubject: Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> On November 26, 2002 06:33 am, Al Sutton wrote:\n> > I wouldn't go for 7.4 in production until after it's gone gold, but\nbeing\n> > able to cut the number of boxes per developer by giving them a Win32\nnative\n> > version would save on everything from the overhead of getting the\n> > developers familiar enough with Linux to be able to admin their own\n> > systems, to cutting the network usage by having the DB and app on the\nsame\n> > system, through to cutting the cost of electricity by only having one\nbox\n> > per developer. It would also be a good way of testing 7.4 against our\napp\n> > so we can plan for an upgrade when it's released ;).\n>\n> If your database is of any significant size you probably want a separate\n> database machine anyway. 
We run NetBSD everywhere and could easily put\nthe\n> apps on the database machine but choose not to. We have 6 production\nservers\n> running various apps and web servers and they all talk to a central\ndatabase\n> machine which has lots of RAM. Forget about bandwidth. Just get a\n100MBit\n> switch and plug everything into it. Network bandwidth won't normally be\nyour\n> bottleneck. Memory and CPU will be.\n>\n> We actually have 4 database machines, 3 running transaction databases and\n1\n> with an rsynced copy for reporting purposes. We use 3 networks, 1 for the\n> app servers to talk to the Internet, 1 for the app servers to talk to the\n> databases and one for the databases to talk amongst themselves.\n>\n> Even for development we keep a separate database machine that developers\nall\n> use. They run whatever they want - we have people using NetBSD, Linux and\n> Windows - but they work on one database which is tuned for the purpose.\nThey\n> can even create their own databases on that system if they want for local\n> testing.\n>\n> --\n> D'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Tue, 26 Nov 2002 15:37:13 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On Tue, 26 Nov 2002, Al Sutton wrote:\n\n> D'Arcy,\n> \n> In production the database servers are seperate multi-processor machines\n> with mirrored disks linked via Gigabit ethernet to the app server.\n> \n> In development I have people extremely familiar with MS, but not very hot\n> with Unix in any flavour, who are developing Java and PHP code which is then\n> passed into the QA phase where it's run on a replica of the production\n> environment.\n> \n> My goal is to allow my developers to work on the platform they know (MS),\n> using as many of the aspects of the production environment as possible (JVM\n> version, PHP version, and hopefully database version), without needing to\n> buy each new developer two machines, and incur the overhead of them\n> familiarising themselves with a flavour of Unix.\n> \n> Hope this helps you understand where I'm comming from,\n\nI know it's not windows native but using Cygwin would at least get you \nout of the \"two boxes on everybody's desktop\" business. And for developers \nthe difference in performance isn't all that great, as the only real \nperformance issue is that creating / dropping backend connections is \nkinda slow. Since they'd be running on their own boxes for testing, you \ncould probably just use persistent connections and get pretty good \nperformance. 
What web server are they using? If it's apache, just set \nthe number of max children down to something like 20 or so and they should \nbe fine.\n\n", "msg_date": "Tue, 26 Nov 2002 09:34:27 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "Al Sutton wrote:\n> Lee,\n> \n> I wouldn't go for 7.4 in production until after it's gone gold, but being\n> able to cut the number of boxes per developer by giving them a Win32 native\n> version would save on everything from the overhead of getting the developers\n> familiar enough with Linux to be able to admin their own systems, to cutting\n> the network usage by having the DB and app on the same system, through to\n> cutting the cost of electricity by only having one box per developer. It\n> would also be a good way of testing 7.4 against our app so we can plan for\n> an upgrade when it's released ;).\n> \n> I've tried open office 1.0.1 and had to ditch it. It had problems with font\n> rendering and tables that ment many of the forms that people sent as word\n> documents had chunks that weren't displayed or printed. We did try it on a\n> box with MS-Word on it to ensure that the setup of the machine wasn't the\n> issue, Word had no problems, OO failed horribly.\n\nwww.peerdirect.com has a native PostgreSQL 7.2 release that should work\nfine for you until 7.4. It is in beta, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 13:52:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "Al Sutton kirjutas T, 26.11.2002 kell 20:37:\n> D'Arcy,\n> \n> In production the database servers are seperate multi-processor machines\n> with mirrored disks linked via Gigabit ethernet to the app server.\n> \n> In development I have people extremely familiar with MS, but not very hot\n> with Unix in any flavour, who are developing Java and PHP code which is then\n> passed into the QA phase where it's run on a replica of the production\n> environment.\n> \n> My goal is to allow my developers to work on the platform they know (MS),\n> using as many of the aspects of the production environment as possible (JVM\n> version, PHP version, and hopefully database version), without needing to\n> buy each new developer two machines, and incur the overhead of them\n> familiarising themselves with a flavour of Unix.\n\nYou could try out VMWare and run a linux virtual machine under Windows.\nYou could set it up once with all necessary servers and then copy the\nfiles to each new developer's machine.\n\nVMWare is not free, but should be significantly cheaper than buying a\nwhole computer.\n\n-------------\nHannu\n\n\n", "msg_date": "27 Nov 2002 01:12:15 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "> > D'Arcy,\n> >\n> > In production the database servers are seperate multi-processor machines\n> > with mirrored disks linked via Gigabit ethernet to the app server.\n> >\n> > In development I have people extremely familiar with MS, but not very hot\n> > with Unix in any flavour, who are developing Java and PHP code which is then\n> > passed into the QA phase where it's run on a replica of the production\n> > environment.\n> >\n> > My goal is 
to allow my developers to work on the platform they know (MS),\n> > using as many of the aspects of the production environment as possible (JVM\n> > version, PHP version, and hopefully database version), without needing to\n> > buy each new developer two machines, and incur the overhead of them\n> > familiarising themselves with a flavour of Unix.\n\n(from experience in a large .com web site)\n\nCan you have a central DB server? Do all the dev DB servers need to be\nindependent? You could even have a machine w/ ip*(# developers) and bind\na postgresql to each ip for each developer (assuming you had enough\nmemory, etc).\n\nWe used oracle once upon a time at my .com and used separate schemas for\nthe separate developers. This may be tricky for your environment\nbecause the developers would need to know what schema they would connect\nto if all schemas were under the same pgsql instance.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 917-697-8665 h: 201-798-4983\n b. 
palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Tue, 26 Nov 2002 15:35:37 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On 27 Nov 2002, Hannu Krosing wrote:\n\n> Al Sutton kirjutas T, 26.11.2002 kell 20:37:\n> > D'Arcy,\n> > \n> > In production the database servers are seperate multi-processor machines\n> > with mirrored disks linked via Gigabit ethernet to the app server.\n> > \n> > In development I have people extremely familiar with MS, but not very hot\n> > with Unix in any flavour, who are developing Java and PHP code which is then\n> > passed into the QA phase where it's run on a replica of the production\n> > environment.\n> > \n> > My goal is to allow my developers to work on the platform they know (MS),\n> > using as many of the aspects of the production environment as possible (JVM\n> > version, PHP version, and hopefully database version), without needing to\n> > buy each new developer two machines, and incur the overhead of them\n> > familiarising themselves with a flavour of Unix.\n> \n> You could try out VMWare and run a linux virtual machine under Windows,\n> You could set it up once with all necessary servers and then copy the\n> files to each new developers machine.\n> \n> VMWare is not free, but should be significantly cheaper than buying a\n> whole computer.\n\nIf you're gonna go that far, look at reversing that situation, i.e. run a \nlinux box for each person with windows in vmware. It's a much more stable \nsituation than the other way around. 
\n\nEither way, you can then run multiple Windows instances, of different \nversions of windows if need be, which means you can test and develop for \nmultiple windows environments on one box, no rebooting, not even having to \nturn your chair around.\n\nVMWare likes memory, so get plenty if you go that way.\n\nAnd don't worry about the problems getting familiar with most newer \nflavors of linux, they're pretty easy to grok for most developers.\n\nP.S. a note on windows and vmware: It's not uncommon for companies now to \nbuild a large linux box, put vmware gsx on it, and run dozens of windows \ninstances. That way the spare cycles for one server can be used by \nanother, you can consolidate your windows servers onto a couple of boxen, \nand you get much more reliable operation from windows when the hardware is \nabstracted away from underneath it.\n\n", "msg_date": "Tue, 26 Nov 2002 13:40:54 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On Tue, 26 Nov 2002, bpalmer wrote:\n\n> > > D'Arcy,\n> > >\n> > > In production the database servers are seperate multi-processor machines\n> > > with mirrored disks linked via Gigabit ethernet to the app server.\n> > >\n> > > In development I have people extremely familiar with MS, but not very hot\n> > > with Unix in any flavour, who are developing Java and PHP code which is then\n> > > passed into the QA phase where it's run on a replica of the production\n> > > environment.\n> > >\n> > > My goal is to allow my developers to work on the platform they know (MS),\n> > > using as many of the aspects of the production environment as possible (JVM\n> > > version, PHP version, and hopefully database version), without needing to\n> > > buy each new developer two machines, and incur the overhead of them\n> > > familiarising themselves with a flavour of Unix.\n> \n> (from experience in a large .com web site)\n> \n> 
Can you have a central DB server? Do all the dev DB servers need to be\n> independent? You could even have a machine w/ ip*(# developers) and bind\n> a postgresql to each ip for each developer (assuming you had enough\n> memory, etc).\n> \n> We used oracle once upon a time at my .com and used seperate schemas for\n> the seperate developers. This may be tricky for your environment\n> because the developers would need to know what schema they would connect\n> to if all schemas were under the same pgsql instance.\n\n From what the original post was saying, it looks more like they're working \non a smaller semi-embedded type thing, like a home database of cds or \nsomething like that. OR at least something small like one or two people \nwould use like maybe a small inventory system or something.\n\nHigh speed under heavy parallel access wasn't as important as good speed \nfor one or two users for this application.\n\n", "msg_date": "Tue, 26 Nov 2002 14:13:34 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "scott.marlowe kirjutas K, 27.11.2002 kell 01:40:\n> On 27 Nov 2002, Hannu Krosing wrote:\n>\n> > You could try out VMWare and run a linux virtual machine under Windows,\n> > You could set it up once with all necessary servers and then copy the\n> > files to each new developers machine.\n> > \n> > VMWare is not free, but should be significantly cheaper than buying a\n> > whole computer.\n> \n> If you're gonna go that far, look at reversing that situation, i.e. run a \n> linux box for each person with windows in vmware. It's a much more stable \n> situation than the other way around. 
\n\nThat's how I use it.\n\nIt's also nice way to try out new win software - install it, check it\nout and if you don't like it just say no to \"save changes?\" when closing\nthe vmware session ;)\n\n> Either way, you can then run multiple Windows instances, of different \n> versions of windows if need be, which means you can test and develop for \n> multiple windows environments on one box, no rebooting, not even having to \n> turn your chair around.\n> \n> VMWare likes memory, so get plenty if you go that way.\n> \n> And don't worry about the problems getting familiar with most newer \n> flavors of linux, they're pretty easy to grok for most developers.\n> \n> P.S. a note on windows and vmware: It's not uncommon for companies now to \n> build a large linux box, put vmware gsx on it, and run dozens of windows \n> instances. That way the spare cycles for one server can be used by \n> another, you can consolidate your windows servers onto a couple of boxen, \n> and you get much more reliable operation from windows when the hardware is \n> abstracted away from underneath it.\n\nI guess this would be good for win _servers_, but how would you use this\nsetup for developers - will they all sit around a single box ?\n\n---------------\nHannu\n\n\n", "msg_date": "27 Nov 2002 02:14:22 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On 27 Nov 2002, Hannu Krosing wrote:\n\n> scott.marlowe kirjutas K, 27.11.2002 kell 01:40:\n> > On 27 Nov 2002, Hannu Krosing wrote:\n> >\n> > > You could try out VMWare and run a linux virtual machine under Windows,\n> > > You could set it up once with all necessary servers and then copy the\n> > > files to each new developers machine.\n> > > \n> > > VMWare is not free, but should be significantly cheaper than buying a\n> > > whole computer.\n> > \n> > If you're gonna go that far, look at reversing that situation, i.e. 
run a \n> > linux box for each person with windows in vmware. It's a much more stable \n> > situation than the other way around. \n> \n> That's how I use it.\n> \n> It's also nice way to try out new win software - install it, check it\n> out and if you don't like it just say no to \"save changes?\" when closing\n> the vmware session ;)\n\nPlus, it's real easy to back up your windows servers. just shut them \ndown, backup their image, and start them back up.\n\n> > P.S. a note on windows and vmware: It's not uncommon for companies now to \n> > build a large linux box, put vmware gsx on it, and run dozens of windows \n> > instances. That way the spare cycles for one server can be used by \n> > another, you can consolidate your windows servers onto a couple of boxen, \n> > and you get much more reliable operation from windows when the hardware is \n> > abstracted away from underneath it.\n> \n> I guess this would be good for win _servers_, but how would you use this\n> setup for developers - will they all sit around a single box ?\n\nYou could probably use xwindows remote sessions for something like that, \nbut yeah, I was strictly thinking servers at that point. :-)\n\nThere is some work being done to put mutiple video cards and keyboard/mice \nonto a single large box and share it though. I don't think I like taking \nsharing quite that far though. :-0 \n\n", "msg_date": "Tue, 26 Nov 2002 15:11:15 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "The problem I have with VMWare is that for the cost of a licence plus the\nadditional hardware on the box running it (CPU power, RAM, etc.) I can buy a\nsecond cheap machine, using VMWare doesn't appear to save me my biggest\noverheads of training staff on Unix and cost of equipment (software and\nhardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to\ninstall RedHat, PostgreSQL, etc. 
reliably.\n\nThe database in question holds order information for over 2000 other\ncompanies, and is growing daily. There is also a requirement to keep the\ndata indefinitely.\n\nThe developers are developing two things:\n\n1- Providing an interface for the company's employees to update customer\ninformation and answer customer queries.\n\n2- Providing an area for merchants to log into that allows them to generate\nsome standardised reports over the order data, change passwords, setup\nrepeated payment system, etc.\n\nDeveloping these solutions does include the possibility of modifying the\ndatabase schema, the configuration of the database, and the datatypes used\nto represent the data (e.g. representing encrypted data as a Base64 string\nor blob), and therefore the developers may need to make fundamental changes\nto the database and perform metrics on how they have affected performance.\n\nHope this helps,\n\nAl.\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <scott.marlowe@ihs.com>\nTo: \"bpalmer\" <bpalmer@crimelabs.net>\nCc: \"D'Arcy J.M. 
Cain\" <darcy@druid.net>; <pgsql-hackers@postgresql.org>\nSent: Tuesday, November 26, 2002 9:13 PM\nSubject: Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> On Tue, 26 Nov 2002, bpalmer wrote:\n>\n> > > > D'Arcy,\n> > > >\n> > > > In production the database servers are seperate multi-processor\nmachines\n> > > > with mirrored disks linked via Gigabit ethernet to the app server.\n> > > >\n> > > > In development I have people extremely familiar with MS, but not\nvery hot\n> > > > with Unix in any flavour, who are developing Java and PHP code which\nis then\n> > > > passed into the QA phase where it's run on a replica of the\nproduction\n> > > > environment.\n> > > >\n> > > > My goal is to allow my developers to work on the platform they know\n(MS),\n> > > > using as many of the aspects of the production environment as\npossible (JVM\n> > > > version, PHP version, and hopefully database version), without\nneeding to\n> > > > buy each new developer two machines, and incur the overhead of them\n> > > > familiarising themselves with a flavour of Unix.\n> >\n> > (from experience in a large .com web site)\n> >\n> > Can you have a central DB server? Do all the dev DB servers need to be\n> > independent? You could even have a machine w/ ip*(# developers) and\nbind\n> > a postgresql to each ip for each developer (assuming you had enough\n> > memory, etc).\n> >\n> > We used oracle once upon a time at my .com and used seperate schemas for\n> > the seperate developers. This may be tricky for your environment\n> > because the developers would need to know what schema they would connect\n> > to if all schemas were under the same pgsql instance.\n>\n> >From what the original post was saying, it looks more like they're\nworking\n> on a smaller semi-embedded type thing, like a home database of cds or\n> something like that. 
OR at least something small like one or two people\n> would use like maybe a small inventory system or something.\n>\n> High speed under heavy parallel access wasn't as important as good speed\n> for one or two users for this application.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n", "msg_date": "Wed, 27 Nov 2002 08:21:51 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On 27 Nov 2002 at 8:21, Al Sutton wrote:\n\n> The problem I have with VMWare is that for the cost of a licence plus the\n> additional hardware on the box running it (CPU power, RAM, etc.) I can buy a\n> second cheap machine, using VMWare doesn't appear to save me my biggest\n> overheads of training staff on Unix and cost of equipment (software and\n> hardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to\n> install RedHat, PostgreSQL, etc. reliably.\n\nI have been reading this thread all along and I have some suggestions. They are \nnot any different than already made but just summarising them.\n\n1) Move to linux.\n\nYou can put a second linux box with postgresql on it. Anyway your app. is on \nwindows so it does not make much of a difference because developers will be \naccessing the database from their machines.\n\nSecondly if you buy a good enough mid-range machine, say with 40GB SCSI with 2G \nof RAM, each developer can develop on his/her own database. In case of \nperformance testing, you can schedule it just like any other shared resource.\n\nIt is very easy to run multiple isolated postgresql instances on a linux \nmachine. Just change the port number and use a separate data directory. That's \nit..\n\nGetting people familiarized with unix/linux up to a point where they can use \ntheir own database is a matter of half a day. 
\n\n2) Do not bank too much on windows port yet.\n\nWith all respect to people developing native windows port of postgresql, unless \nyou know the correct/stable behaviour of postgresql on unix, you might end up \nin a situation where you don't know whether a bug/problem is in postgresql or \nwith postgresql/windows. I would not recommend getting into such a situation.\n\nYour contribution is always welcome in any branch but IMO it is not worth at \nthe risk of slipping your own product development.\n\nBelieve me, moving to linux might seem scary at first but it is no more than \na couple of days' matter to get a box to play around. Until you need a good \nmachine for performance tests, a simple 512MB machine with enough disk would be \nsufficient for any development among the group..\n\n HTH\n\nBye\n Shridhar\n\n--\nMy father taught me three things:\t(1) Never mix whiskey with anything but \nwater.\t(2) Never try to draw to an inside straight.\t(3) Never discuss business \nwith anyone who refuses to give his name.\n\n", "msg_date": "Wed, 27 Nov 2002 14:11:49 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On Wed, 2002-11-27 at 08:21, Al Sutton wrote:\n> The problem I have with VMWare is that for the cost of a licence plus the\n> additional hardware on the box running it (CPU power, RAM, etc.) I can buy a\n> second cheap machine, using VMWare doesn't appear to save me my biggest\n> overheads of training staff on Unix and cost of equipment (software and\n> hardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to\n> install RedHat, PostgreSQL, etc. 
There is also a requirement to keep the\n> data indefinatley.\n> \n> The developers are developing two things;\n> \n> 1- Providing an interface for the companies employees to update customer\n> information and answer customer queries.\n> \n> 2- Providing an area for merchants to log into that allows them to generate\n> some standardised reports over the order data, change passwords, setup\n> repeated payment system, etc.\n> \n> Developing these solutions does include the possibilities of modify the\n> database schema, the configuration of the database, and the datatypes used\n> to represent the data (e.g. representing encyrpted data as a Base64 string\n> or blob), and therefore the developers may need to make fundamental changes\n> to the database and perform metrics on how they have affected performance.\n\nIf you need metrics and the production runs on some kind of unix, you\nshould definitely do the measuring on unix as well. A developers machine\nwith different os and other db tuning parameters may give you _very_\ndifferent results from the real deployment system.\n\nAlso, porting postgres to win32 wont magically make it into MS Access -\nmost DB management tasks will be exactly the same. If your developer are\nafraid of command line, give them some graphical or web tool for\nmanaging the db. \n\nIf they dont want to manage linux, then just set it up once and don't\ngive them the root pwd ;)\n\n--------------\nHannu\n\n\n\n", "msg_date": "27 Nov 2002 10:54:13 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "Hannu,\n\nUsing a Win32 platform will allow them to perform relative metrics. 
I'm not\nlooking for a statement saying things are x per cent faster than production,\nI'm looking for reproducable evidence that an improvement offers y per cent\nfaster performance than another configuration on the same platform.\n\nThe QA environment is designed to do final testing and compiling definitive\nmetrics against production systems, what I'm looking for is an easy method\nof allowing developers to see the relative change on performance for a given\nchange on the code base.\n\nI'm fully aware that they'll still have to use the config files of\nPostgreSQL on a Win32 port, but the ability to edit the config files, modify\nsql dumps to load data into new schema, transfer files between themselves,\nand perform day to day tasks such as reading Email and MS-Word formatted\ndocuments sent to us using tools that they are currently familiar with is a\nbig plus for me.\n\nThe bottom line is I can spend money training my developers on Linux and\npush project deadlines back until they become familiar with it, or I can\nobtain a free database on their native platform and reduce the number of\nmachines needed per developer as well as making the current DB machines\navailable as the main machine for new staff. The latter makes the most sense\nin the profit based business environment which I'm in.\n\nAl.\n\n----- Original Message -----\nFrom: \"Hannu Krosing\" <hannu@tm.ee>\nTo: \"Al Sutton\" <al@alsutton.com>\nCc: \"scott.marlowe\" <scott.marlowe@ihs.com>; \"bpalmer\"\n<bpalmer@crimelabs.net>; <pgsql-hackers@postgresql.org>\nSent: Wednesday, November 27, 2002 10:54 AM\nSubject: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> On Wed, 2002-11-27 at 08:21, Al Sutton wrote:\n> > The problem I have with VMWare is that for the cost of a licence plus\nthe\n> > additional hardware on the box running it (CPU power, RAM, etc.) 
I can\nbuy a\n> > second cheap machine, using VMWare doesn't appear to save me my biggest\n> > overheads of training staff on Unix and cost of equipment (software and\n> > hardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to\n> > install RedHat, PostgreSQL, etc. reliably.\n> >\n> > The database in question holds order information for over 2000 other\n> > companies, and is growing daily. There is also a requirement to keep the\n> > data indefinatley.\n> >\n> > The developers are developing two things;\n> >\n> > 1- Providing an interface for the companies employees to update customer\n> > information and answer customer queries.\n> >\n> > 2- Providing an area for merchants to log into that allows them to\ngenerate\n> > some standardised reports over the order data, change passwords, setup\n> > repeated payment system, etc.\n> >\n> > Developing these solutions does include the possibilities of modify the\n> > database schema, the configuration of the database, and the datatypes\nused\n> > to represent the data (e.g. representing encyrpted data as a Base64\nstring\n> > or blob), and therefore the developers may need to make fundamental\nchanges\n> > to the database and perform metrics on how they have affected\nperformance.\n>\n> If you need metrics and the production runs on some kind of unix, you\n> should definitely do the measuring on unix as well. A developers machine\n> with different os and other db tuning parameters may give you _very_\n> different results from the real deployment system.\n>\n> Also, porting postgres to win32 wont magically make it into MS Access -\n> most DB management tasks will be exactly the same. 
If your developer are\n> afraid of command line, give them some graphical or web tool for\n> managing the db.\n>\n> If they dont want to manage linux, then just set it up once and don't\n> give them the root pwd ;)\n>\n> --------------\n> Hannu\n>\n>\n>\n>\n\n\n", "msg_date": "Wed, 27 Nov 2002 21:48:59 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [spam] Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "I've posted an Email to the list as to why I'm avoiding a move to linux\n(cost of training -v- cost of database (free) + money saved from recycling\ncurrent DB machines).\n\nMy experience with PostgreSQL has always been good, and I beleive that we\ncan test any potential bugs that we may beleive are in the database by\nrunning our app in our the QA environment against the Linux version of the\ndatabase (to test platform specifics), and then the database version in\nproduction (to test version specifics).\n\nI'm quite happy to spend the time doing this to gain the cost benefit of\nfreeing up the extra machines my developers currently have.\n\nAl.\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>\nTo: <pgsql-hackers@postgresql.org>\nSent: Wednesday, November 27, 2002 8:41 AM\nSubject: Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> On 27 Nov 2002 at 8:21, Al Sutton wrote:\n>\n> > The problem I have with VMWare is that for the cost of a licence plus\nthe\n> > additional hardware on the box running it (CPU power, RAM, etc.) I can\nbuy a\n> > second cheap machine, using VMWare doesn't appear to save me my biggest\n> > overheads of training staff on Unix and cost of equipment (software and\n> > hardware). I've been looking at Bochs, but 1.4.1 wasn't stable enough to\n> > install RedHat, PostgreSQL, etc. 
reliably.\n>\n> I have been reading this thread all along and I have some suggestions.\nThey are\n> not any different than already made but just summerising them.\n>\n> 1) Move to linux.\n>\n> You can put a second linux box with postgresql on it. Anyway your app. is\non\n> windows so it does not make much of a difference because developers will\nbe\n> accessing database from their machines.\n>\n> Secondly if you buy a good enough mid-range machine, say with 40GB SCSI\nwith 2G\n> of RAM, each developer can develop on his/her own database. In case of\n> performance testing, you can schedule it just like any other shared\nresource.\n>\n> It is very easy to run multiple isolated postgresql instances on a linux\n> machine. Just change the port number and use a separate data directory.\nThat's\n> it..\n>\n> Getting people familiarized with unix/.linux upto a point where they can\nuse\n> their own database is matter of half a day.\n>\n> 2) Do not bank too much on windows port yet.\n>\n> Will all respect to people developing native windows port of postgresql,\nunless\n> you know the correct/stable behaviour of postgresql on unix, you might end\nup\n> in a situation where you don't know whether a bug/problem is in postgresql\nor\n> with postgresql/windows. I would not recommend getting into such a\nsituation.\n>\n> Your contribution is always welcome in any branch but IMO it is not worth\nat\n> the risk of slipping your own product development.\n>\n> Believe me, moving to linux might seem scary at first but it is no more\nthan\n> couple of days matter to get a box to play around. Untill you need a good\n> machine for performance tests, a simple 512MB machie with enough disk\nwould be\n> sufficient for any development among the group..\n>\n> HTH\n>\n> Bye\n> Shridhar\n>\n> --\n> My father taught me three things: (1) Never mix whiskey with anything but\n> water. (2) Never try to draw to an inside straight. 
(3) Never discuss\nbusiness\n> with anyone who refuses to give his name.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n", "msg_date": "Wed, 27 Nov 2002 21:54:10 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "On Wed, 27 Nov 2002, Al Sutton wrote:\n\n> Hannu,\n> \n> Using a Win32 platform will allow them to perform relative metrics. I'm not\n> looking for a statement saying things are x per cent faster than production,\n> I'm looking for reproducable evidence that an improvement offers y per cent\n> faster performance than another configuration on the same platform.\n\nSo, does cygwin offer any win? I know it's still \"unix on windows\" but \nit's the bare minimum of unix, and it is easy to create one image of an \ninstall and copy it around onto other boxes in a semi-ready to go format.\n\n", "msg_date": "Wed, 27 Nov 2002 16:08:49 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [spam] Re: [mail] Re: Native Win32 sources" }, { "msg_contents": "It's an option, but I can see it being a bit of an H-Bomb to kill an ant if\nthe Win32 source appears within the next 6 weeks.\n\nI've used cygwin before and I've always been uncomfortable with the\nway it's integrated with Windows. It always came across as something that\nisn't really for the windows masses, but more for techies who want Unix on\nan MS platform. My main dislikes about it are:\n\n- Changing paths. If my developers install something in c:\\temp they expect\nto find it under /temp on cygwin.\n\n- Duplicating home directories. The users already have a home directory\nunder MS, why does cygwin need to use a different location?\n\nMy current plan is to use the Win32 native port myself when it first appears\nand thrash our app against it. 
Once I'm happy that the major functionality\nof our app works against the Win32 port, I'll introduce it to a limited\nnumber of developers who enjoy hacking code if it goes wrong and get them to\nnote and log any problems they come across.\n\nIf nothing else it should mean a few more bodies testing the Win32 port\n(although I expect you'll find they'll be a large number of those as soon as\nit hits CVS).\n\nAl.\n\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <scott.marlowe@ihs.com>\nTo: \"Al Sutton\" <al@alsutton.com>\nCc: \"Hannu Krosing\" <hannu@tm.ee>; \"bpalmer\" <bpalmer@crimelabs.net>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, November 27, 2002 11:08 PM\nSubject: Re: [spam] Re: [mail] Re: [HACKERS] Native Win32 sources\n\n\n> On Wed, 27 Nov 2002, Al Sutton wrote:\n>\n> > Hannu,\n> >\n> > Using a Win32 platform will allow them to perform relative metrics. I'm\nnot\n> > looking for a statement saying things are x per cent faster than\nproduction,\n> > I'm looking for reproducable evidence that an improvement offers y per\ncent\n> > faster performance than another configuration on the same platform.\n>\n> So, does cygwin offer any win? I know it's still \"unix on windows\" but\n> it's the bare minimum of unix, and it is easy to create one image of an\n> install and copy it around onto other boxes in a semi-ready to go format.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Thu, 28 Nov 2002 07:23:20 -0000", "msg_from": "\"Al Sutton\" <al@alsutton.com>", "msg_from_op": false, "msg_subject": "Re: [spam] Re: [mail] Re: Native Win32 sources" } ]
[ { "msg_contents": "Hi,\n\nJust trying to confirm my understanding of how PG manages transactions \nwith respect to stored procedures, in particular, stored procedures \nwhich invoke other procedures and their attendant SQL statements.\n\nAssuming the following description of a set of procedures:\n\n\tprocA consists of calls to procB, procC, and procD.\n\n\tprocB, procC, and procD invoke procE and procF.\n\n\tprocs B,C,D,E, and F invoke INSERT/UPDATE/SELECT's\n\nMy understanding is that since A) PG doesn't currently support nested \ntransactions, B) procedures can't currently define transactional \nelements within their body, and C) there's at least an implicit \ntransaction of single statement granularity at the outermost level via:\n\n\tselect procA();\n\nthat all INSERT/UPDATE/SELECT invocations within all nested procedures \noperate within a single transactional context, that being the context in \nwhich the procA() call is made.\n\nIs that correct?\n\nIf so, what is the lifetime of any locks which are acquired by the \nINSERT/UPDATE/SELECT statements within the transaction? Is it, as I \nbelieve, the lifetime of the procA invocation?\n\nI'm currently working with a system that makes extremely heavy use of \nnested pl/pgsql procedures to encode application logic and I'm concerned \nthat certain design patterns may dramatically degrade concurrency if \nthis transactional analysis is correct. Any insight into patterns of \ndevelopment that would avoid locking or concurrency issues would be \nhelpful.\n\nThanks in advance!\n\n\nss\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n", "msg_date": "Mon, 25 Nov 2002 11:59:33 -0700", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": true, "msg_subject": "transactions and stored procedures" } ]
[ { "msg_contents": "I have an IRC report, confirmed, that in RC2 initdb -W (set super-user\npassword) fails:\n\n\t$ initdb -W\n\tThe files belonging to this database system will be owned by user\n\t\"postgres\".\n\tThis user must also own the server process.\n\t\n\tThe database cluster will be initialized with locale C.\n\t\n\tcreating directory /u/pg/data... ok\n\tcreating directory /u/pg/data/base... ok\n\tcreating directory /u/pg/data/global... ok\n\tcreating directory /u/pg/data/pg_xlog... ok\n\tcreating directory /u/pg/data/pg_clog... ok\n\tcreating template1 database in /u/pg/data/base/1... ok\n\tcreating configuration files... ok\n\tinitializing pg_shadow... ok\n\tEnter new superuser password: \n\tEnter it again: \n\tsetting password... \n\tThe group file wasn't generated. Please report this problem.\n\t\n\tinitdb failed.\n\tRemoving /u/pg/data.\n\nOf course, the obvious solution is not to use -W. ;-)\n\nI will research this but I wanted to report it right away. My only\nguess is that the standalone backend to alter the super-user password is\nmessing up things:\n\n \"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\n ALTER USER \"$POSTGRES_SUPERUSERNAME\" WITH PASSWORD '$FirstPw';\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 25 Nov 2002 16:30:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Problem with initdb -W" }, { "msg_contents": "\n> I have an IRC report, confirmed, that in RC2 initdb -W (set super-user\n> password) fails:\n>\n> Enter new superuser password:\n> Enter it again:\n> setting password...\n> The group file wasn't generated. 
Please report this problem.\n>\n> initdb failed.\n> Removing /u/pg/data.\n\nI had actually experienced this problem as well on a one-off test machine.\nHowever I just put it down to weirdness and didn't follow it up...sorry...\n\nChris\n\n", "msg_date": "Mon, 25 Nov 2002 13:38:17 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Problem with initdb -W" }, { "msg_contents": "\nIt was easier than I thought. As I now remember, it is not a problem if\npg_pwd or pg_group don't exist. Initdb should check for pg_pwd because\nit just added a password, so it better exist, but there is no reason for\npg_group to exist at this point.\n\nI will patch 7.3 and current CVS. I don't think this warrants another\nRC candidate. This is the type of cleanup we will be doing to the 7.3\nbranch for the next few months anyway as part of minor releases.\n\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> I have an IRC report, confirmed, that in RC2 initdb -W (set super-user\n> password) fails:\n> \n> \t$ initdb -W\n> \tThe files belonging to this database system will be owned by user\n> \t\"postgres\".\n> \tThis user must also own the server process.\n> \t\n> \tThe database cluster will be initialized with locale C.\n> \t\n> \tcreating directory /u/pg/data... ok\n> \tcreating directory /u/pg/data/base... ok\n> \tcreating directory /u/pg/data/global... ok\n> \tcreating directory /u/pg/data/pg_xlog... ok\n> \tcreating directory /u/pg/data/pg_clog... ok\n> \tcreating template1 database in /u/pg/data/base/1... ok\n> \tcreating configuration files... ok\n> \tinitializing pg_shadow... ok\n> \tEnter new superuser password: \n> \tEnter it again: \n> \tsetting password... \n> \tThe group file wasn't generated. Please report this problem.\n> \t\n> \tinitdb failed.\n> \tRemoving /u/pg/data.\n> \n> Of course, the obvious solution is not to use -W. 
;-)\n> \n> I will research this but I wanted to report it right away. My only\n> guess is that the standalone backend to alter the super-user password is\n> messing up things:\n> \n> \"$PGPATH\"/postgres $PGSQL_OPT template1 >/dev/null <<EOF\n> ALTER USER \"$POSTGRES_SUPERUSERNAME\" WITH PASSWORD '$FirstPw';\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 25 Nov 2002 16:38:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Problem with initdb -W" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It was easier than I thought. As I now remember, it is not a problem if\n> pg_pwd or pg_group don't exist. 
Initdb should check for pg_pwd because\n> it just added a password, so it better exist, but there is no reason for\n> pg_group to exist at this point.\n\nI think I may have induced this problem during this change:\n\n\tRevision 1.113 / (download) - annotate - [select for diffs] , Mon Oct 21 19:46:45 2002 UTC (5 weeks ago) by tgl \n\tCVS Tags: REL7_3_STABLE, HEAD \n\tChanges since 1.112: +144 -67 lines\n\tDiff to previous 1.112 \n\t\n\tMake CREATE/ALTER/DROP USER/GROUP transaction-safe, or at least pretty\n\tnearly so, by postponing write of flat password file until transaction\n\tcommit.\n\nI modified user.c to keep track separately of pg_shadow and pg_group\nchanges, so that it would write only the file it needed to. Before\nthat, it probably *was* true that the initial assignment of a password\nto the superuser would cause both pg_pwd and an empty pg_group to be\ncreated. Too bad it didn't occur to me to test initdb -W :-(\n\n> I will patch 7.3 and current CVS. I don't think this warrants another\n> RC candidate.\n\nAgreed, removal of an incorrect error check seems pretty safe ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Nov 2002 18:21:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with initdb -W " } ]
[ { "msg_contents": "From another Solaris tester. Is anyone actually checking the web\nsubmissions?\n\nChris\n\n----- Original Message -----\nFrom: \"Martin Renters\" <martin@datafax.com>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nSent: Monday, November 25, 2002 2:27 PM\nSubject: Re: PostgreSQL 7.3 Platform Testing\n\n\n> On Mon, Nov 25, 2002 at 02:09:46PM -0800, Christopher Kings-Lynne wrote:\n> > Hi Martin,\n> >\n> > RC2 is out now - want to give that a try?\n> >\n> > http://developer.postgresql.org/\n> >\n> > Another Solaris user has indicated that they're getting lots of\nregression\n> > errors so we'd really like to see how it goes for you!\n>\n> I will try this soon. I did submit a regression report for RC1 (via\n> the website and it only failed the geometry test there - and only because\n> of the least significant decimal digit was off by one in a couple of\n> places.\n>\n> Martin\n>\n\n", "msg_date": "Mon, 25 Nov 2002 15:11:31 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Fw: PostgreSQL 7.3 Platform Testing" } ]
[ { "msg_contents": "Does anyone know who the Postgres security patcher mentioned in this article\nis:\n\nhttp://kernel.sysdoor.com/eng/\n\nIs it that guy who found all the buffer overflows?\n\nChris\n\n", "msg_date": "Mon, 25 Nov 2002 17:24:28 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Postgres Security Expert???" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> Does anyone know who the Postgres security patcher mentioned in this article\n> is:\n> \n> http://kernel.sysdoor.com/eng/\n> \n> Is it that guy who found all the buffer overflows?\n\nDon't know. Just asked them through their online form.\n\nLet's hope they get back to us.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Chris\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 26 Nov 2002 13:06:42 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Postgres Security Expert???" }, { "msg_contents": "Hi Chris,\n\nJust received this from them. Look like he was trying to claim stuff\nthat wasn't true.\n\n:-/\n\nThanks for pointing this out Chris. 
:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n***********\n\n-------- Original Message --------\nSubject: Re: Demande de renseignements Defi SYSDOOR\nDate: Tue, 26 Nov 2002 11:04:47 +0100\nFrom: \"Vergoz Michael (SYSDOOR)\" <mvergoz@sysdoor.com>\nTo: <justin@postgresql.org>\nReferences: <200211260205.gAQ25GTK009595@jenna>\n\nDear Clift,\n\n>\n> Justin Clift PostgreSQL Global Development Group demande des informations\n> son adresse :\n>\n> Son e-mail : justin@postgresql.org\n> Son téléphone : +61.393631313\n> Son message : Hi,\n>\n> Just noticed your website mentioning that \"Michael Vergoz\" is well known\nto created security patches for PostgreSQL:\n>\n> http://kernel.sysdoor.com/eng/\n>\n> Can you please point us in their direction, as we don't know him by name.\n\nRight, it's true that i never make \"_security_ patches\" for\nPostGreSQL...\n\n>\n> As a side thought, would you please be able to correct the spelling of\nPostgreSQL on the same page. Presently it's spelt \"PostGreSQL\", which\nis\nincorrect.\n\nBetter way, i'v remove postgresql name in the site, as i think you want.\n\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> --------------------------------------------------------------------------\n-----------------------------\n> Source IP : 203.173.161.124 (p378-tnt1.mel.ihug.com.au)\n> Secure ID : 54DADE92526600767C167680368AFBBB@sysdoor.com\n> --------------------------------------------------------------------------\n-----------------------------\n>\n\nBest Regards,\nVergoz Michael\nSYSDOOR\nFounder\n\n***********\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 26 Nov 2002 21:40:07 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Postgres Security Expert???" 
}, { "msg_contents": "\n\nFWIW, a search on Google gives some hits for the name on the lists this\nyear.\n\nFirst impressions are that it's not Sir Mondred (or whatever the spelling was).\n\n\nOn Tue, 26 Nov 2002, Justin Clift wrote:\n\n> Hi Chris,\n> \n> Just received this from them. Look like he was trying to claim stuff\n> that wasn't true.\n> \n> :-/\n> \n> Thanks for pointing this out Chris. :)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> ***********\n> \n> -------- Original Message --------\n> Subject: Re: Demande de renseignements Defi SYSDOOR\n> Date: Tue, 26 Nov 2002 11:04:47 +0100\n> From: \"Vergoz Michael (SYSDOOR)\" <mvergoz@sysdoor.com>\n> To: <justin@postgresql.org>\n> References: <200211260205.gAQ25GTK009595@jenna>\n> \n> Dear Clift,\n> \n> >\n> > Justin Clift PostgreSQL Global Development Group demande des informations\n> > son adresse :\n> >\n> > Son e-mail : justin@postgresql.org\n> > Son téléphone : +61.393631313\n> > Son message : Hi,\n> >\n> > Just noticed your website mentioning that \"Michael Vergoz\" is well known\n> to created security patches for PostgreSQL:\n> >\n> > http://kernel.sysdoor.com/eng/\n> >\n> > Can you please point us in their direction, as we don't know him by name.\n> \n> Right, it's true that i never make \"_security_ patches\" for\n> PostGreSQL...\n> \n> >\n> > As a side thought, would you please be able to correct the spelling of\n> PostgreSQL on the same page. 
Presently it's spelt \"PostGreSQL\", which\n> is\n> incorrect.\n> \n> Better way, i'v remove postgresql name in the site, as i think you want.\n> \n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > --------------------------------------------------------------------------\n> -----------------------------\n> > Source IP : 203.173.161.124 (p378-tnt1.mel.ihug.com.au)\n> > Secure ID : 54DADE92526600767C167680368AFBBB@sysdoor.com\n> > --------------------------------------------------------------------------\n> -----------------------------\n> >\n> \n> Best Regards,\n> Vergoz Michael\n> SYSDOOR\n> Founder\n> \n> ***********\n> \n\n", "msg_date": "Tue, 26 Nov 2002 10:54:48 +0000 (GMT)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Postgres Security Expert???" }, { "msg_contents": "On Tue, 26 Nov 2002, Justin Clift wrote:\n\n> Dear Clift,\n> \n> > As a side thought, would you please be able to correct the spelling of\n> PostgreSQL on the same page. Presently it's spelt \"PostGreSQL\", which\n> is\n> incorrect.\n> \n> Better way, i'v remove postgresql name in the site, as i think you want.\n> \n\nWell, they still have PostGreSQL still on their front page, which is \ndynamic, as it lists new instrusion attempts every time you refresh it.\n\n", "msg_date": "Tue, 26 Nov 2002 09:29:19 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Security Expert???" } ]
[ { "msg_contents": "I was playing with the Japanese win32 7.2.1 port and I noticed that \"select\n0 / 0\" caused the server to crash and restart. I understand that it is a\ntotally unsupported version, but it should be easy enough to check vs. the\ncurrent version. Note that select 0.0/0.0 worked fine!\n\n\nMerlin\n\n\n", "msg_date": "Mon, 25 Nov 2002 22:00:26 -0500", "msg_from": "\"Merlin Moncure\" <wizard_32141@yahoo.com>", "msg_from_op": true, "msg_subject": "possible obvious bug?" }, { "msg_contents": "Merlin Moncure kirjutas T, 26.11.2002 kell 08:00:\n> I was playing with the Japanese win32 7.2.1 port and I noticed that \"select\n> 0 / 0\" caused the server to crash and restart. I understand that it is a\n> totally unsupported version, but it should be easy enough to check vs. the\n> current version. Note that select 0.0/0.0 worked fine!\n\nSo what is the right answer ?\n\n----------\nHannu\n\n\n", "msg_date": "27 Nov 2002 02:17:14 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: possible obvious bug?" }, { "msg_contents": "On 27 Nov 2002, Hannu Krosing wrote:\n\n> Merlin Moncure kirjutas T, 26.11.2002 kell 08:00:\n> > I was playing with the Japanese win32 7.2.1 port and I noticed that \"select\n> > 0 / 0\" caused the server to crash and restart. I understand that it is a\n> > totally unsupported version, but it should be easy enough to check vs. the\n> > current version. Note that select 0.0/0.0 worked fine!\n> \n> So what is the right answer ?\n\nMaybe it's a locale oriented thing?\n\n", "msg_date": "Tue, 26 Nov 2002 15:11:40 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: possible obvious bug?" }, { "msg_contents": "\n> Merlin Moncure kirjutas T, 26.11.2002 kell 08:00:\n> > I was playing with the Japanese win32 7.2.1 port and I noticed that\n\"select\n> > 0 / 0\" caused the server to crash and restart. 
I understand that it is\na\n> > totally unsupported version, but it should be easy enough to check vs.\nthe\n> > current version. Note that select 0.0/0.0 worked fine!\n>\n> So what is the right answer ?\n\nNaN :)\n\nChris\n\n", "msg_date": "Tue, 26 Nov 2002 14:12:17 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible obvious bug?" }, { "msg_contents": "> > Merlin Moncure kirjutas T, 26.11.2002 kell 08:00:\n> > > I was playing with the Japanese win32 7.2.1 port and I noticed that\n\"select\n> > > 0 / 0\" caused the server to crash and restart. I understand that it\nis a\n> > > totally unsupported version, but it should be easy enough to check vs.\nthe\n> > > current version. Note that select 0.0/0.0 worked fine!\n> >\n> > So what is the right answer ?\n>\n> Maybe it's a locale oriented thing?\n\nIn 7.2.3 there seem to be two different messages:\n\nusa=# select 0/0;\nERROR: floating point exception! The last floating point operation either\nexceeded legal ranges or was a divide by zero\nusa=# select 0/0.0;\nERROR: float8div: divide by zero error\nusa=# select 0.0/0.0;\nERROR: float8div: divide by zero error\nusa=# select 0.0/0;\nERROR: float8div: divide by zero error\nusa=# select 1/0;\nERROR: floating point exception! The last floating point operation either\nexceeded legal ranges or was a divide by zero\nusa=# select 1/0.0;\nERROR: float8div: divide by zero error\n\n\nChris\n\n", "msg_date": "Tue, 26 Nov 2002 14:13:48 -0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible obvious bug?" } ]
[ { "msg_contents": "I have just applied a patch to 7.3 and current CVS to properly allocate\nan sprintf string in dbmirror:\n\n fullyqualtblname = SPI_palloc(strlen(tblname) + \n strlen(schemaname) + 6);\n sprintf(fullyqualtblname,\"\\\"%s\\\".\\\"%s\\\"\",\n schemaname,tblname);\n\nOld code had 4 instead of 6. Tatsuo found this bug.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 25 Nov 2002 22:09:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "dbmirror had sprintf string too short" } ]
[ { "msg_contents": "Bruce,\n\nyou're right, i'm asking about the sources because I want to help.\n\nIs it possible to help in this case or not?\n\nUlrich\n\n>>> Bruce Momjian <pgman@candle.pha.pa.us> 25.11.2002 18:51:39 >>>\nUlrich Neumann wrote:\n> Hello,\n> \n> i've read that there are 2 different native ports for Windows\n> somewhere.\n> \n> I've searched for them but didn't found them. Is there anyone who\ncan\n> point me to a link or send me a copy of the sources?\n\nOh, you are probably asking about the sources. They are not\npublically\navailable yet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us \n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania\n19073\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n", "msg_date": "Tue, 26 Nov 2002 10:21:00 +0100", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "Antw: Re: Native Win32 sources" }, { "msg_contents": "\nI am told by PeerDirect that they will release the Win32 source as a\npatch against current CVS by the end of December. At that point, we\nwill make adjustments then apply the patch and start making any other\nchanges required.\n\nI don't think there is much we can do until they supply that patch. 
I\nthought about starting work on it but most were willing to wait for a\n100% functional patch.\n\nI am CC'ing Katie Ward <kward@peerdirect.com>, our PeerDirect contact on\nthis, and Jan, who also works for them.\n\n---------------------------------------------------------------------------\n\nUlrich Neumann wrote:\n> Bruce,\n> \n> you're right, i'm asking about the sources because I want to help.\n> \n> Is it possible to help in this case or not?\n> \n> Ulrich\n> \n> >>> Bruce Momjian <pgman@candle.pha.pa.us> 25.11.2002 18:51:39 >>>\n> Ulrich Neumann wrote:\n> > Hello,\n> > \n> > i've read that there are 2 different native ports for Windows\n> > somewhere.\n> > \n> > I've searched for them but didn't found them. Is there anyone who\n> can\n> > point me to a link or send me a copy of the sources?\n> \n> Oh, you are probably asking about the sources. They are not\n> publically\n> available yet.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us \n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania\n> 19073\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\n> majordomo@postgresql.org\n> ----------------------------------\n> This e-mail is virus scanned\n> Diese e-mail ist virusgeprueft\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 13:58:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Antw: Re: Native Win32 sources" } ]
[ { "msg_contents": "Problem still exists in RC2\n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n---------- Forwarded message ----------\nDate: Mon, 25 Nov 2002 09:12:55 -0500 (EST)\nFrom: Samuel A Horwitz <horwitz@argoscomp.com>\nTo: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] RC1? AIX 4.2.1.0 (fwd)\n\nSorry forgot to include that I had to add -lssl and -lcrypto t0 the libpq \nline in Makefile.global.in in the src directory to get ecpg to link\n\nas follows\n\n286c286\n< libpq = -L$(libpq_builddir) -lpq\n---\n> libpq = -L$(libpq_builddir) -lpq -lssl -lcrypto\n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n---------- Forwarded message ----------\nDate: Mon, 25 Nov 2002 08:45:46 -0500 (EST)\nFrom: Samuel A Horwitz <horwitz@argoscomp.com>\nTo: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] RC1? AIX 4.2.1.0 \n\nsystem = powerpc-ibm-aix4.2.1.0 \n\n\nconfigure command \n\nenv CC=gcc ./configure --with-maxbackends=1024 --with-openssl=/usr/local/ssl --enable-syslog --enable-odbc --disable-nls\n\ngmake check output file\n\n\nregression.out\n--------------\n\nparallel group (13 tests): text varchar oid int2 char boolean float4 int4 name int8 float8 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... ok\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): lseg date path circle polygon box point time timetz tinterval abstime interval reltime comments inet timestamptz timestamp type_sanity opr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... 
ok\ntest insert ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers vacuum inherit constraints create_misc\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n inherit ... ok\n vacuum ... ok\nparallel group (2 tests): create_view create_index\n create_index ... ok\n create_view ... ok\ntest sanity_check ... ok\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): select_distinct_on select_into select_having transactions select_distinct random subselect portals arrays union select_implicit case aggregates hash_index join btree_index\n select_into ... ok\n select_distinct ... ok\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... ok\n random ... ok\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest privileges ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 cluster rules select_views foreign_key\n select_views ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\n cluster ... ok\nparallel group (11 tests): limit truncate temp copy2 domain rangefuncs conversion prepare without_oid plpgsql alter_table\n limit ... ok\n plpgsql ... ok\n copy2 ... ok\n temp ... ok\n domain ... ok\n rangefuncs ... ok\n prepare ... ok\n without_oid ... ok\n conversion ... ok\n truncate ... ok\n alter_table ... ok\n\n\nregression.diffs\n-----------------\n\n\n*** ./expected/geometry-powerpc-aix4.out\tTue Sep 12 17:07:16 2000\n--- ./results/geometry.out\tThu Nov 21 21:46:01 2002\n***************\n*** 114,120 ****\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! 
| (0,0) | [(0,0),(6,6)] | (0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n--- 114,120 ----\n | (5.1,34.5) | [(1,2),(3,4)] | (3,4)\n | (-5,-12) | [(1,2),(3,4)] | (1,2)\n | (10,10) | [(1,2),(3,4)] | (3,4)\n! | (0,0) | [(0,0),(6,6)] | (-0,0)\n | (-10,0) | [(0,0),(6,6)] | (0,0)\n | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)\n | (5.1,34.5) | [(0,0),(6,6)] | (6,6)\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 448,454 ****\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((-0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999\n999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 448,454 ----\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994\n107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081027))\n | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.4999999999\n6464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999\n999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.9999\n99998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 461,467 ****\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((-0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 461,467 ----\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! | ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n\n======================================================================\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Tue, 26 Nov 2002 08:46:32 -0500 (EST)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": true, "msg_subject": "Re: RC2? AIX 4.2.1.0 (fwd)" } ]
[ { "msg_contents": ">I was playing with the Japanese win32 7.2.1 port and I noticed that \"select\n>0 / 0\" caused the server to crash and restart. I understand that it is a\n>totally unsupported version, but it should be easy enough to check vs. the\n>current version. Note that select 0.0/0.0 worked fine!\n\nSeems to work fine on my system.\n\npostgres=# SELECT version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\npostgres=# SELECT 0/0;\nERROR: floating point exception! The last floating point operation either\nexceeded legal ranges or was a divide by zero\npostgres=#\n\n\n", "msg_date": "Tue, 26 Nov 2002 15:41:19 +0100", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: possible obvious bug?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Merlin Moncure [mailto:merlin@rcsonline.com] \n> Sent: 22 November 2002 21:25\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] PostGres and WIN32, a plea!\n> \n> \n> I had read on one of the newsgroups that there is a planned \n> native port to the win32 platform, is this true? I read most \n> of the win32 thread off of the dev site and it was not clear \n> if this was true.\n\nHi Merlin,\n\nThis is true - the port is being actively worked on (see recent posts\nfrom Bruce Momjian). A number of us have also been involved with the\nclosed source beta testing recently.\n\n> I think postgres is the fastest \n> database ever written for pc hardware, with the one possible \n> exception of Microsoft Foxpro (note: not written by \n> Microsoft). \n\nHmm, ever tried using a large multiuser database such as a finance\nsystem using a Foxpro database? Network managers have been known to\nmurder for less... :-)\n\nRegards, Dave.\n", "msg_date": "Tue, 26 Nov 2002 15:43:20 -0000", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: PostGres and WIN32, a plea!" }, { "msg_contents": "Dave Page wrote:\n> \n> \n> > -----Original Message-----\n> > From: Merlin Moncure [mailto:merlin@rcsonline.com] \n> > Sent: 22 November 2002 21:25\n> > To: pgsql-hackers@postgresql.org\n> > Subject: [HACKERS] PostGres and WIN32, a plea!\n> > \n> > \n> > I had read on one of the newsgroups that there is a planned \n> > native port to the win32 platform, is this true? I read most \n> > of the win32 thread off of the dev site and it was not clear \n> > if this was true.\n> \n> Hi Merlin,\n> \n> This is true - the port is being actively worked on (see recent posts\n> from Bruce Momjian). 
A number of us have also been involved with the\n> closed source beta testing recently.\n\nYes, I expect the 7.4 release, due in mid-2003, to have a native Win32\nport.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 13:27:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostGres and WIN32, a plea!" }, { "msg_contents": ">\n> Hmm, ever tried using a large multiuser database such as a finance\n> system using a Foxpro database? Network managers have been known to\n> murder for less... :-)\n>\nHmm, I have, and you could imagine the result :)\nIt was a small system, really and everything was fine until I added my 10th\nuser. Then my data left me like the parting of the Red Sea :).\n\nBuilding a database system on lousy tehnology, only to rewrite it is\nsomething all database admins have to go through. I think its kind of like\ncoming of age. On the unix side of things, you have mysql catching people\nthe same way.\n\nFP did have a very nice query optimizer. Also, FP views optimized the where\ncondition through the query, and have for quite some time (does PG do this\nyet?). I think the FP team was really on to something, till M hamstrung the\nproject.\n\nFP also had the ability to write user defined functions into the query,\nsomething I thought I would have to give up forever, until I stumbled across\nPG (from the mysql docs, go figure!)\n\n\nMerlin\n\n\n", "msg_date": "Wed, 27 Nov 2002 09:04:49 -0500", "msg_from": "\"Merlin Moncure\" <merlin@rcsonline.com>", "msg_from_op": false, "msg_subject": "Re: PostGres and WIN32, a plea!" } ]
[ { "msg_contents": "Here is a good article on linking/loading:\n\n\thttp://www.linuxjournal.com/article.php?sid=6463\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 12:33:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Article on Linkers/Loaders" } ]