[ { "msg_contents": "Just discovered, by mistyping the www address...\n\nInteresting, whom does this one belong to?\nhttp://www.postgresql.ca.org\n\n--\nSerguei A. Mokhov\n \n\n", "msg_date": "Wed, 19 Sep 2001 08:57:02 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "[OT] http://www.postgresql.ca.org" }, { "msg_contents": "http://anything.ca.org goes to the same IP address. It has nothing to do\nwith postgres\n\n----- Original Message -----\nFrom: \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>\nTo: \"PostgreSQL Hackers\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, September 19, 2001 8:57 AM\nSubject: [HACKERS] [OT] http://www.postgresql.ca.org\n\n\n> Just discovered, by mistyping the www address...\n>\n> Interesting, whom does this one belong to?\n> http://www.postgresql.ca.org\n>\n> --\n> Serguei A. Mokhov\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "Thu, 20 Sep 2001 06:58:20 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: [OT] http://www.postgresql.ca.org" }, { "msg_contents": "On Wed, 19 Sep 2001, Serguei Mokhov wrote:\n\n> Just discovered, by mistyping the www address...\n>\n> Interesting, whom does this one belong to?\n> http://www.postgresql.ca.org\n\nProbably http://www.ca.org/ Cocaine Anonymous\n\nCAWSO, INC (CA5-DOM)\n 3740 OVERLAND AVE SUITE C\n LOS ANGELES, CA 90034\n US\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 20 Sep 2001 07:08:23 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [OT] http://www.postgresql.ca.org" } ]
[ { "msg_contents": "Hi,\n\nI am trying to store some large object in database. \n\nI print the number of bytes returned by lo_write. It's\ngreater than zero. However sometime later, I start\ngetting errors for lo_closelike invalid object\ndescriptor fd=0 where df is variable I used for large\nobject fd.\n\nActually after lo_write is done lo_close is called. I\nwonder why it takes time between these two.\n\nI also found that the error values for many functions\nare not properly documented. e.g. lo_open.\n\nOne more thing I am not sure of. Am I suppose to\ncommit after I call lo_creat but before I can call\nlo_open?\n\nI am experimenting. I will let you know the results.\n\n Shridhar\n\n__________________________________________________\nTerrorist Attacks on U.S. - How can you help?\nDonate cash, emergency relief information\nhttp://dailynews.yahoo.com/fc/US/Emergency_Information/\n", "msg_date": "Wed, 19 Sep 2001 06:32:05 -0700 (PDT)", "msg_from": "Shridhar Daithankar <chamanya@yahoo.com>", "msg_from_op": true, "msg_subject": "Large object help requierd" } ]
[ { "msg_contents": "Please note that as of 07:24PST, 09-19\nthe cvsup server is old and needs to be\nupgraded, the following is what I now\nget when trying to sync:\n\nParsing supfile \"cvsup_config\"\nConnecting to postgresql.org\nConnected to postgresql.org\nServer software version: REL_16_1\nServer postgresql.org has the S1G bug\nSee http://www.polstra.com/projects/freeware/CVSup/s1g/ for details\nPlease notify the maintainer of postgresql.org\nRefusing update from server with S1G bug\n\nWhen might the server be upgraded?\n\nThe above page gives a good reason:\n\nOutdated clients create a heavy load on the servers,\nso upgrading to SNAP_16_1e is strongly recommended.\n\nBest regards,\n\n.. Otto\n\nOtto Hirr\nOLAB Inc\n503.617.6595\notto.hirr@olabinc.com\n", "msg_date": "Wed, 19 Sep 2001 07:20:32 -0700", "msg_from": "\"Otto Hirr\" <otto.hirr@olabinc.com>", "msg_from_op": true, "msg_subject": "CVSup Server has the time bug, needs to be upgraded" } ]
[ { "msg_contents": "I am advised by the (few remaining) Great Bridge folk that they will be\nshutting down their webservers very shortly, like tomorrow. This will\ntake the project pages at greatbridge.org offline.\n\nAll the data at greatbridge.org has been transferred to a hub.org server\nat http://gborg.postgresql.org/. Project URLs will be the same except\nfor the change of site name. Please note that while the data is there,\nnot all of the site functionality is up yet. Chris Ryan is working\nhard to make the necessary adjustments, and the glitches should be\nsmoothed out over the next few days.\n\nOn behalf of the PG core committee, I'd like to thank the Great Bridge\npeople and particularly Chris Ryan for their efforts to continue support\nfor the projects that were hosted at greatbridge.org. Also, of course,\nthanks to Marc and hub.org for picking up the service.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 20:37:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Status of greatbridge.org" } ]
[ { "msg_contents": "I am working on date/time stuff, and in the spirit of cleaning up\ninteresting but marginally useful features I've dropped support for\n\"invalid\" for the timestamp and timestamptz types. \n\nTo help with upgrading, I thought that I'd map it to a NULL value, and\nsee the following in the regression tests:\n\n-- Obsolete special values\nINSERT INTO TIMESTAMP_TBL VALUES ('invalid');\nERROR: OidFunctionCall3: function 1150 returned NULL\n\nIs this error message a feature of all returns of NULL, particular to\ninput routines, or can I somehow mark routines as being allowed to\nreturn NULL values?\n\nComments?\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 02:01:51 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Returning NULL from functions" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ERROR: OidFunctionCall3: function 1150 returned NULL\n\n> Is this error message a feature of all returns of NULL, particular to\n> input routines, or can I somehow mark routines as being allowed to\n> return NULL values?\n\nIt's a \"feature\" for all places that invoke SQL functions via the\nFooFunctionCallN routines. The API for those routines offers no way\nto handle passing or returning NULL, so they have little choice but to\nraise elog(ERROR) if they see the called function return NULL.\n\nThose routines are intended for use in places where a NULL result is\nnonsensical anyway, and so extending their API to allow NULL would just\ncreate useless notational clutter. If you want to cope with NULLs then\nyou have to set up and inspect a FunctionCallInfoData structure for\nyourself. See the comments in backend/utils/fmgr/fmgr.c.\n\nOffhand I see no good reason why a datatype input routine should return\nNULL. Either return a valid value of the type, or elog(ERROR). IMHO,\nNULL is (and should be) an out-of-band value handled by\ndatatype-independent logic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 23:11:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Returning NULL from functions " } ]
[ { "msg_contents": "I have split the timestamp data type into two types to better match\nSQL9x specs. I've implemented them as \"timestamp\" and \"timestamptz\" (the\nlatter corresponding to the implementation in recent releases), and have\nimplemented conversion routines between the two types. However, I expect\nto be able to cast one to the other, but am crashing the server deep\ndown in the executor when phrasing a query using the CAST() syntax,\nwhereas an explicit call to the conversion routine seems to work fine.\n\nI thought that I understood the mechanisms for this casting (I'd done\nthe original implementation, after all ;) but the code has evolved since\nthen, presumably for the better. Any hints on what could be happening?\n\n - Thomas\n\nthomas=# select timestamp without time zone 'now',\n timestamptz(timestamp without time zone 'now');\n------------------------+---------------------------\n 2001-09-19 19:05:07.81 | 2001-09-19 19:05:07.81-07\n(1 row)\n\nthomas=# select cast(timestamp without time zone 'now' as timestamptz);\nserver closed the connection unexpectedly\n\nbackend> select cast(timestamp without time zone 'now' as timestamptz);\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x401286f6 in strncpy () from /lib/libc.so.6\n(gdb) where\n#0 0x401286f6 in strncpy () from /lib/libc.so.6\n#1 0x819fc77 in namestrcpy (name=0x82d6340, str=0x1f8 <Address 0x1f8\nout of bounds>) at name.c:175\n#2 0x806cb34 in TupleDescInitEntry (desc=0x82d6304, attributeNumber=1, \n attributeName=0x1f8 <Address 0x1f8 out of bounds>, oidtypeid=1184,\ntypmod=-1, attdim=0, \n attisset=0 '\\000') at tupdesc.c:365\n#3 0x810d309 in ExecTypeFromTL (targetList=0x82d60a8) at\nexecTuples.c:594\n#4 0x810d63d in ExecAssignResultTypeFromTL (node=0x82d60c4,\ncommonstate=0x82d628c) at execUtils.c:288\n#5 0x8115072 in ExecInitResult (node=0x82d60c4, estate=0x82d616c,\nparent=0x0) at nodeResult.c:227\n#6 0x8109334 in ExecInitNode (node=0x82d60c4, estate=0x82d616c,\nparent=0x0) at execProcnode.c:140\n#7 0x8107570 in InitPlan (operation=CMD_SELECT, parseTree=0x82d23cc,\nplan=0x82d60c4, estate=0x82d616c)\n at execMain.c:628\n#8 0x8106c5e in ExecutorStart (queryDesc=0x82d6150, estate=0x82d616c)\nat execMain.c:135\n#9 0x817e719 in ProcessQuery (parsetree=0x82d23cc, plan=0x82d60c4,\ndest=Debug) at pquery.c:257\n#10 0x817cbf2 in pg_exec_query_string (\n query_string=0x82d2008 \"select cast(timestamp without time zone\n'now' as timestamptz);\\n\", dest=Debug, \n parse_context=0x82a5e94) at postgres.c:812\n#11 0x817e0c1 in PostgresMain (argc=2, argv=0xbffff6f4, real_argc=2,\nreal_argv=0xbffff6f4, \n username=0x828f6d8 \"thomas\") at postgres.c:1963\n#12 0x81236e5 in main (argc=2, argv=0xbffff6f4) at main.c:203\n#13 0x400e9cbe in __libc_start_main () from /lib/libc.so.6\n", "msg_date": "Thu, 20 Sep 2001 02:12:34 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "type casting troubles" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> #2 0x806cb34 in TupleDescInitEntry (desc=0x82d6304, attributeNumber=1, \n> attributeName=0x1f8 <Address 0x1f8 out of bounds>, oidtypeid=1184,\n> typmod=-1, attdim=0, \n> attisset=0 '\\000') at tupdesc.c:365\n\nThis appears to indicate that you have a Resdom node with an invalid\nresname field. Seems like that wouldn't be a datatype-specific issue\nat all. Have you changed the handling of cast() nodes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 23:19:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: type casting troubles " }, { "msg_contents": "> > #2 0x806cb34 in TupleDescInitEntry (desc=0x82d6304, attributeNumber=1,\n> > attributeName=0x1f8 <Address 0x1f8 out of bounds>, oidtypeid=1184,\n> > typmod=-1, attdim=0,\n> > attisset=0 '\\000') at tupdesc.c:365\n> This appears to indicate that you have a Resdom node with an invalid\n> resname field. Seems like that wouldn't be a datatype-specific issue\n> at all. Have you changed the handling of cast() nodes?\n\nNothing changed afaik, though I *have* accumulated a few changes over\nthe last couple of months which I have not committed back to cvs. I'll\nlook at it, but can't think of why I'd be messing with TypeCast nodes at\nall.\n\nI'd have expected that all of the nodes would have references to\ntimestamp and timestamptz by the time I'm that far into the executor,\nand they seem to be data types in good standing for other purposes at\nleast.\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 05:20:45 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: type casting troubles" }, { "msg_contents": "OK, I see a patch from 2001-09-10 for parse_target.c which is a smoking\ngun. The patch tries to force a column name for the TypeCast node, and\ndoesn't check to see if one was actually specified :(\n\nSo that probably explains why, on my system,\n\n select cast(int4 '1' as float8);\n\nfails, while\n\n select cast(int4 '1' as float8) as \"foobar\";\n\nsucceeds.\n\nOuch. I've wasted a bunch of time on this.\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 14:54:43 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: type casting troubles" }, { "msg_contents": "> Nope, these variants all work for me. But I know where the problem is\n> now: you have a broken version of FigureColname() in parse_target.c.\n> Somebody submitted a bogus patch last week; I've fixed it in current\n> CVS, but evidently your sources are from last week.\n\nSure. cvsup needed to be upgraded on hub.org (and I wasn't certain it\nwould work anyway given the recent changes), so I stopped updating my\ntree while I was in the middle of the big push to get the date/time\nstuff written.\n\nDarn.\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 14:57:28 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: type casting troubles" }, { "msg_contents": "When will CVSup be upgraded? I continue to see\nthat you have the time bug. I just ran cvsup and got:\n\nParsing supfile \"cvsup_config\"\nConnecting to postgresql.org\nConnected to postgresql.org\nServer software version: REL_16_1\nServer postgresql.org has the S1G bug\nSee http://www.polstra.com/projects/freeware/CVSup/s1g/ for details\nPlease notify the maintainer of postgresql.org\nRefusing update from server with S1G bug\n\nRegards,\n\n.. Otto\n\nOtto Hirr\nOLAB Inc\n503.617.6595\notto.hirr@olabinc.com\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of \n> Thomas Lockhart\n> Sent: Thursday, September 20, 2001 7:57 AM\n> To: Tom Lane\n> Cc: Hackers List\n> Subject: Re: type casting troubles\n> \n> \n> > Nope, these variants all work for me. But I know where the \n> problem is\n> > now: you have a broken version of FigureColname() in parse_target.c.\n> > Somebody submitted a bogus patch last week; I've fixed it in current\n> > CVS, but evidently your sources are from last week.\n> \n> Sure. cvsup needed to be upgraded on hub.org (and I wasn't certain it\n> would work anyway given the recent changes), so I stopped updating my\n> tree while I was in the middle of the big push to get the date/time\n> stuff written.\n> \n> Darn.\n> \n> - Thomas\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n", "msg_date": "Thu, 20 Sep 2001 09:56:25 -0700", "msg_from": "\"Otto Hirr\" <otto.hirr@olabinc.com>", "msg_from_op": false, "msg_subject": "Re: type casting troubles" } ]
[ { "msg_contents": "\nI sent a message a while back on this list on why an insert onto a table A\nwhich has a foreign key constraint to table B obtains a write (exclusive)\nlock on that row on table B (basically does a select for update). The answer\nwas there is no SQL construct to obtain read (shared) locks on a particular\nrow, therefore it took a write lock.\n\nI was just wondering, isn't the fact that FOREIGN KEY takes a write lock on\nits parent a bug? I was just wondering whether this is being worked on, and\nif anyone has any ideas where to start in case I want to work on it, or can\nI create my own function / constraint which will just emulate a shared lock\nbehavior for a FOREIGN KEY constrant. This is making it tough to sanely\nhandle concurrent long-running transactions, even if I use the INITIALLY\nDEFERRED for the foreign key constrant.\n\nThanx a lot, and thanx for this wonderful DB.\n\n-rchit\n\n", "msg_date": "Wed, 19 Sep 2001 20:51:19 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "FOREIGN KEY taking write locks on parent." }, { "msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n> I was just wondering, isn't the fact that FOREIGN KEY takes a write lock on\n> its parent a bug?\n\nYes, I think so. Fixing it is not trivial (else we'd have done it right\nto start with) ... but if you want to step up to the plate to fix it,\nwe're all ears ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 01:12:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FOREIGN KEY taking write locks on parent. " } ]
[ { "msg_contents": "Now that GreatBridge is gone. (I'm pretty sad about that, they looked like they\nwere working on some cool stuff.)\n\nHas this changed, in any way, the development path of PostgreSQL?\n", "msg_date": "Thu, 20 Sep 2001 08:58:37 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "PostgreSQL funding/organization" }, { "msg_contents": "On Thursday 20 September 2001 08:58 am, mlw wrote:\n> Now that GreatBridge is gone. (I'm pretty sad about that, they looked like\n> they were working on some cool stuff.)\n\n> Has this changed, in any way, the development path of PostgreSQL?\n\nJust my personal opinion:\n\nWhile PostgreSQL was developed prior to _any_ substantial funding through the \nefforts of the PostgreSQL Global Development Group on volunteer basis, the \nfact that Bruce, Tom, and Jan will have to make a living elsewhere might very \nwell impact the speed of development. \n\nAlthough, Tom was doing an incredible amount of bugfixing long before Great \nBridge. Bruce was doing the same things before Great Bridge as he did while \nemployed at GB. Jan added PL/Tcl and PL/pgsql (and foreign keys, and many \nother things) long before Great Bridge was on the map at all. \n\nAnd the other three members of the core group have never been employed by \nGreat Bridge. And look at Vadim's, Thomas', and Marc's work.... I don't \nthink the lackof GB funding will impact the direction of development, even if \nit does impact the speed of development.\n\nAnd RedHat is still in the mix, as is PostgreSQL, Inc, and others. Great \nBridge wasn't the only game in town for PostgreSQL development.\n\nJust one additional comment: PostgreSQL, being open source, isn't going to go \naway because one company went away. If Oracle, IBM, Microsoft, or other \nproprietary database vendors 'went away' the likelihood of their RDBMS \nproducts being orphaned is high -- but PostgreSQL cannot be orphaned in that \nsense due to its open source nature.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 20 Sep 2001 10:39:35 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "> ... but PostgreSQL cannot be orphaned in that\n> sense due to its open source nature.\n\nI'll second that. It isn't just \"the open source nature\" of PostgreSQL\nwhich will keep it viable, it is the active developer and user community\nwhich has grown up around it which makes it unlikely that it will become\nirrelevant. Other db efforts may try to get the open source gloss with\nlicensing and tarballs, but PostgreSQL actually has the process and\n*people* which makes it successful.\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 15:04:13 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "On Thursday 20 September 2001 11:04 am, Thomas Lockhart wrote:\n> > ... but PostgreSQL cannot be orphaned in that\n> > sense due to its open source nature.\n\n> I'll second that. It isn't just \"the open source nature\" of PostgreSQL\n> which will keep it viable, it is the active developer and user community\n\nSorry. My semantics are 'open source status'!='open source nature' -- having \nan open source nature includes the developer community's openness. Many \nprojects have open source status -- few have an open source nature. I should \nhave clarified what I meant.\n\nSee my letter to Linux Weekly News from a few weeks ago to find out what I \nsee in 'open source nature.'\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 20 Sep 2001 12:11:04 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "On Thu, 20 Sep 2001, Lamar Owen wrote:\n\n> On Thursday 20 September 2001 08:58 am, mlw wrote:\n> > Now that GreatBridge is gone. (I'm pretty sad about that, they looked like\n> > they were working on some cool stuff.)\n>\n> > Has this changed, in any way, the development path of PostgreSQL?\n>\n> Just my personal opinion:\n>\n> While PostgreSQL was developed prior to _any_ substantial funding through the\n> efforts of the PostgreSQL Global Development Group on volunteer basis, the\n> fact that Bruce, Tom, and Jan will have to make a living elsewhere might very\n> well impact the speed of development.\n>\n> Although, Tom was doing an incredible amount of bugfixing long before Great\n> Bridge. Bruce was doing the same things before Great Bridge as he did while\n> employed at GB. Jan added PL/Tcl and PL/pgsql (and foreign keys, and many\n> other things) long before Great Bridge was on the map at all.\n\nAnd, in fact, since GB came on the map, Jan has been so busy getting\nacross from Germany that he hasn't had much time to work on PgSQL, which\nmeans that GB slowed down development to an extent ... balanced off with\nTom's increased development, but that is neither here nor there ...\n\nThree things that GB provided for their $25million:\n\n1. Tom's ability to focus on programming more\n2. Bruce's ability to travel and evangelize(sp?) more\n3. www.greatbridge.org\n\nThree things that are going to change now that GB is gone:\n\n1. tom's wife will see more of him\n2. bruce's wife and kids will see more of him\n3. www.greatbridge.org lives on as gborg.postgresql.org\n\nTwo things that won't change much:\n\n1. Tom's visibility in the groups and in the cvs commit logs\n2. Bruce's visibility in the groups and in the cvs commit logs\n\nIMHO ...\n\n\n", "msg_date": "Thu, 20 Sep 2001 12:47:08 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "On Thursday 20 September 2001 12:47 pm, Marc G. Fournier wrote:\n> On Thu, 20 Sep 2001, Lamar Owen wrote:\n> > On Thursday 20 September 2001 08:58 am, mlw wrote:\n> > > Has this changed, in any way, the development path of PostgreSQL?\n> > Just my personal opinion:\n\n> > long before Great Bridge was on the map at\n> > all.\n\n> Three things that GB provided for their $25million:\n...\n> Three things that are going to change now that GB is gone:\n...\n> Two things that won't change much:\n...\n\nGood summary, Marc.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 20 Sep 2001 13:23:26 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "On Thu, Sep 20, 2001 at 12:47:08PM -0400, Marc G. Fournier wrote:\n> \n> Three things that are going to change now that GB is gone:\n> \n> 1. tom's wife will see more of him\n> 2. bruce's wife and kids will see more of him\n\n It seems that GB finish is their women conspiracy :-)\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 21 Sep 2001 10:00:30 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "> Three things that GB provided for their $25million:\n> \n> 1. Tom's ability to focus on programming more\n> 2. Bruce's ability to travel and evangelize(sp?) more\n> 3. www.greatbridge.org\n> \n> Three things that are going to change now that GB is gone:\n> \n> 1. tom's wife will see more of him\n> 2. bruce's wife and kids will see more of him\n> 3. www.greatbridge.org lives on as gborg.postgresql.org\n> \n> Two things that won't change much:\n> \n> 1. Tom's visibility in the groups and in the cvs commit logs\n> 2. Bruce's visibility in the groups and in the cvs commit logs\n\nDon't count us out yet. I belive I will find a job continuing to work\non PostgreSQL full-time and hope the others can do the same.\n\nAs for my wife, she actually saw more of me while I was at GB because\nbefore I had a full-time job _and_ worked on PostgreSQL several hours a\nday. With GB, I only worked on PostgreSQL, which freed up lots of time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 00:56:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "On Sat, 22 Sep 2001, Bruce Momjian wrote:\n\n> > Three things that GB provided for their $25million:\n> >\n> > 1. Tom's ability to focus on programming more\n> > 2. Bruce's ability to travel and evangelize(sp?) more\n> > 3. www.greatbridge.org\n> >\n> > Three things that are going to change now that GB is gone:\n> >\n> > 1. tom's wife will see more of him\n> > 2. bruce's wife and kids will see more of him\n> > 3. www.greatbridge.org lives on as gborg.postgresql.org\n> >\n> > Two things that won't change much:\n> >\n> > 1. Tom's visibility in the groups and in the cvs commit logs\n> > 2. Bruce's visibility in the groups and in the cvs commit logs\n>\n> Don't count us out yet. I belive I will find a job continuing to work\n> on PostgreSQL full-time and hope the others can do the same.\n\nWasn't counting any of you out in the above *scratch head*\n\n\n", "msg_date": "Sat, 22 Sep 2001 19:48:36 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" }, { "msg_contents": "> On Sat, 22 Sep 2001, Bruce Momjian wrote:\n> \n> > > Three things that GB provided for their $25million:\n> > >\n> > > 1. Tom's ability to focus on programming more\n> > > 2. Bruce's ability to travel and evangelize(sp?) more\n> > > 3. www.greatbridge.org\n> > >\n> > > Three things that are going to change now that GB is gone:\n> > >\n> > > 1. tom's wife will see more of him\n> > > 2. bruce's wife and kids will see more of him\n> > > 3. www.greatbridge.org lives on as gborg.postgresql.org\n> > >\n> > > Two things that won't change much:\n> > >\n> > > 1. Tom's visibility in the groups and in the cvs commit logs\n> > > 2. Bruce's visibility in the groups and in the cvs commit logs\n> >\n> > Don't count us out yet. I belive I will find a job continuing to work\n> > on PostgreSQL full-time and hope the others can do the same.\n> \n> Wasn't counting any of you out in the above *scratch head*\n\nSorry, it was only a saying. I thank you for the kind words.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 19:50:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL funding/organization" } ]
[ { "msg_contents": "Hi,\n\nThis isn't working for me either (no existing checkout) :\n\n[justin@justinspc cvs]$ cvs -d\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n(Logging in to anoncvs@anoncvs.postgresql.org)\nCVS password: <<just pressed enter>>\n[justin@justinspc cvs]$ cvs -d\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\ncvs server: Updating pgsql\ncvs server: failed to create lock directory for\n`/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock):\nPermission denied\ncvs server: failed to obtain dir lock in repository\n`/projects/cvsroot/pgsql'\ncvs [server aborted]: read lock failed - giving up\n[justin@justinspc cvs]$ cvs -d\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot logout\n(Logging out of anoncvs@anoncvs.postgresql.org)\n[justin@justinspc cvs]$\n\nAlso tried with using the password of 'postgresql'.\n\nI'm get the feeling the problem is not on my end.\n\n:(\n\nRegards and best wishes,\n\nJustin Clift\n", "msg_date": "Fri, 21 Sep 2001 00:06:27 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Further CVS errors" }, { "msg_contents": "\nOokay, this it now:\n\n> cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n(Logging in to anoncvs@anoncvs.postgresql.org)\nCVS password:\n> cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\ncvs server: Updating pgsql\nU pgsql/COPYRIGHT\nU pgsql/GNUmakefile.in\nU pgsql/HISTORY\nU pgsql/INSTALL\nU pgsql/Makefile\nU pgsql/README\nU pgsql/aclocal.m4\n^Ccvs [checkout aborted]: received interrupt signal\n\n\nOn Fri, 21 Sep 2001, Justin Clift wrote:\n\n> Hi,\n>\n> This isn't working for me either (no existing checkout) :\n>\n> [justin@justinspc cvs]$ cvs -d\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> (Logging in to anoncvs@anoncvs.postgresql.org)\n> CVS password: <<just pressed enter>>\n> [justin@justinspc cvs]$ cvs -d\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> cvs server: Updating pgsql\n> cvs server: failed to create lock directory for\n> `/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock):\n> Permission denied\n> cvs server: failed to obtain dir lock in repository\n> `/projects/cvsroot/pgsql'\n> cvs [server aborted]: read lock failed - giving up\n> [justin@justinspc cvs]$ cvs -d\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot logout\n> (Logging out of anoncvs@anoncvs.postgresql.org)\n> [justin@justinspc cvs]$\n>\n> Also tried with using the password of 'postgresql'.\n>\n> I'm get the feeling the problem is not on my end.\n>\n> :(\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Fri, 21 Sep 2001 08:11:23 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "Marc,\n\nI'm back from vacation and also can't do cvs:\n\npg@zen:~/cvs$ cvs -z3 checkout pgsql\ncannot create_adm_p /tmp/cvs-serv67095/pgsql\nPermission denied\n\n\tOleg\nOn Fri, 21 Sep 2001, Marc G. Fournier wrote:\n\n>\n> Ookay, this it now:\n>\n> > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> (Logging in to anoncvs@anoncvs.postgresql.org)\n> CVS password:\n> > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> cvs server: Updating pgsql\n> U pgsql/COPYRIGHT\n> U pgsql/GNUmakefile.in\n> U pgsql/HISTORY\n> U pgsql/INSTALL\n> U pgsql/Makefile\n> U pgsql/README\n> U pgsql/aclocal.m4\n> ^Ccvs [checkout aborted]: received interrupt signal\n>\n>\n> On Fri, 21 Sep 2001, Justin Clift wrote:\n>\n> > Hi,\n> >\n> > This isn't working for me either (no existing checkout) :\n> >\n> > [justin@justinspc cvs]$ cvs -d\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> > (Logging in to anoncvs@anoncvs.postgresql.org)\n> > CVS password: <<just pressed enter>>\n> > [justin@justinspc cvs]$ cvs -d\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> > cvs server: Updating pgsql\n> > cvs server: failed to create lock directory for\n> > `/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock):\n> > Permission denied\n> > cvs server: failed to obtain dir lock in repository\n> > `/projects/cvsroot/pgsql'\n> > cvs [server aborted]: read lock failed - giving up\n> > [justin@justinspc cvs]$ cvs -d\n> > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot logout\n> > (Logging out of anoncvs@anoncvs.postgresql.org)\n> > [justin@justinspc cvs]$\n> >\n> > Also tried with using the password of 'postgresql'.\n> >\n> > I'm get the feeling the problem is not on my end.\n> >\n> > :(\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 21 Sep 2001 20:00:19 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "\ntry that ... you might have to remove the pgsql directory first, but I've\njust tried from a remote machine and ran into same problems (testing from\nsame machine isn't necessarily good *groan*) ... it looks like its working\nfor me now ...\n\nOn Fri, 21 Sep 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> I'm back from vacation and also can't do cvs:\n>\n> pg@zen:~/cvs$ cvs -z3 checkout pgsql\n> cannot create_adm_p /tmp/cvs-serv67095/pgsql\n> Permission denied\n>\n> \tOleg\n> On Fri, 21 Sep 2001, Marc G. Fournier wrote:\n>\n> >\n> > Ookay, this it now:\n> >\n> > > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> > (Logging in to anoncvs@anoncvs.postgresql.org)\n> > CVS password:\n> > > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> > cvs server: Updating pgsql\n> > U pgsql/COPYRIGHT\n> > U pgsql/GNUmakefile.in\n> > U pgsql/HISTORY\n> > U pgsql/INSTALL\n> > U pgsql/Makefile\n> > U pgsql/README\n> > U pgsql/aclocal.m4\n> > ^Ccvs [checkout aborted]: received interrupt signal\n> >\n> >\n> > On Fri, 21 Sep 2001, Justin Clift wrote:\n> >\n> > > Hi,\n> > >\n> > > This isn't working for me either (no existing checkout) :\n> > >\n> > > [justin@justinspc cvs]$ cvs -d\n> > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> > > (Logging in to anoncvs@anoncvs.postgresql.org)\n> > > CVS password: <<just pressed enter>>\n> > > [justin@justinspc cvs]$ cvs -d\n> > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> > > cvs server: Updating pgsql\n> > > cvs server: failed to create lock directory for\n> > > `/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock):\n> > > Permission denied\n> > > cvs server: failed to obtain dir lock in repository\n> > > `/projects/cvsroot/pgsql'\n> > > cvs [server aborted]: read lock failed - giving up\n> > > [justin@justinspc cvs]$ cvs -d\n> > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot logout\n> > > (Logging out of anoncvs@anoncvs.postgresql.org)\n> > > [justin@justinspc cvs]$\n> > >\n> > > Also tried with using the password of 'postgresql'.\n> > >\n> > > I'm get the feeling the problem is not on my end.\n> > >\n> > > :(\n> > >\n> > > Regards and best wishes,\n> > >\n> > > Justin Clift\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > >\n> >\n> >\n> > ---------------------------(end of 
broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n", "msg_date": "Fri, 21 Sep 2001 14:36:59 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "Thanks.\nIt works now for me\n\n\tRegards,\n\n\t\tOleg\nOn Fri, 21 Sep 2001, Marc G. Fournier wrote:\n\n>\n> try that ... you might have to remove the pgsql directory first, but I've\n> just tried from a remote machine and ran into same problems (testing from\n> same machine isn't necessarily good *groan*) ... it looks like its working\n> for me now ...\n>\n> On Fri, 21 Sep 2001, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > I'm back from vacation and also can't do cvs:\n> >\n> > pg@zen:~/cvs$ cvs -z3 checkout pgsql\n> > cannot create_adm_p /tmp/cvs-serv67095/pgsql\n> > Permission denied\n> >\n> > \tOleg\n> > On Fri, 21 Sep 2001, Marc G. 
Fournier wrote:\n> >\n> > >\n> > > Ookay, this it now:\n> > >\n> > > > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> > > (Logging in to anoncvs@anoncvs.postgresql.org)\n> > > CVS password:\n> > > > cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> > > cvs server: Updating pgsql\n> > > U pgsql/COPYRIGHT\n> > > U pgsql/GNUmakefile.in\n> > > U pgsql/HISTORY\n> > > U pgsql/INSTALL\n> > > U pgsql/Makefile\n> > > U pgsql/README\n> > > U pgsql/aclocal.m4\n> > > ^Ccvs [checkout aborted]: received interrupt signal\n> > >\n> > >\n> > > On Fri, 21 Sep 2001, Justin Clift wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > This isn't working for me either (no existing checkout) :\n> > > >\n> > > > [justin@justinspc cvs]$ cvs -d\n> > > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> > > > (Logging in to anoncvs@anoncvs.postgresql.org)\n> > > > CVS password: <<just pressed enter>>\n> > > > [justin@justinspc cvs]$ cvs -d\n> > > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n> > > > cvs server: Updating pgsql\n> > > > cvs server: failed to create lock directory for\n> > > > `/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock):\n> > > > Permission denied\n> > > > cvs server: failed to obtain dir lock in repository\n> > > > `/projects/cvsroot/pgsql'\n> > > > cvs [server aborted]: read lock failed - giving up\n> > > > [justin@justinspc cvs]$ cvs -d\n> > > > :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot logout\n> > > > (Logging out of anoncvs@anoncvs.postgresql.org)\n> > > > [justin@justinspc cvs]$\n> > > >\n> > > > Also tried with using the password of 'postgresql'.\n> > > >\n> > > > I'm get the feeling the problem is not on my end.\n> > > >\n> > > > :(\n> > > >\n> > > > Regards and best wishes,\n> > > >\n> > > > Justin Clift\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 1: subscribe and unsubscribe commands go to 
majordomo@postgresql.org\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 21 Sep 2001 22:38:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" } ]
[ { "msg_contents": "\nI just tried to get PostgreSQL from CVS , but it rejected the password\n'postgresql' for user 'anoncvs':\n\n $ export CVSROOT=:pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot\n $ cvs login\n (Logging in to anoncvs@postgresql.org)\n CVS password:\n cvs login: authorization failed: server postgresql.org rejected access to /home/projects/pgsql/cvsroot for user anoncvs\n\n\nThen, I tried to post this to pgsql-hackers, but my scubscrption\nfailed, too!\n\n From: majordomo-owner@postgresql.org\n To: \"noel@burton-krahn.com\" <noel@burton-krahn.com>\n Subject: Majordomo results\n Date: Thu, 20 Sep 2001 11:55:29 -0400 (EDT)\n\n >>>> subscribe\n **** Illegal command!\n\n **** Skipped 1 line of trailing unparseable text.\n\n No valid commands processed.\n\nIs majordomo and CVS broken, or do I need different instructions?\n\n--Noel\n\n\n", "msg_date": "Thu, 20 Sep 2001 09:04:04 -0700 (Pacific Daylight Time)", "msg_from": "\"noel@burton-krahn.com\" <noel@burton-krahn.com>", "msg_from_op": true, "msg_subject": "Can't subscribe or get CVS" }, { "msg_contents": "Hi noel,\n\nThe correct CVSROOT is now:\n\nexport CVSROOT=:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\nAnd the password is blank or 'postgresql'\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of\n> noel@burton-krahn.com\n> Sent: Friday, 21 September 2001 12:04 AM\n> To: webmaster@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Can't subscribe or get CVS\n> \n> \n> \n> I just tried to get PostgreSQL from CVS , but it rejected the password\n> 'postgresql' for user 'anoncvs':\n> \n> $ export \n> CVSROOT=:pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot\n> $ cvs login\n> (Logging in to anoncvs@postgresql.org)\n> CVS password:\n> cvs login: authorization failed: server postgresql.org \n> rejected access to /home/projects/pgsql/cvsroot for user anoncvs\n> 
\n> \n> Then, I tried to post this to pgsql-hackers, but my scubscrption\n> failed, too!\n> \n> From: majordomo-owner@postgresql.org\n> To: \"noel@burton-krahn.com\" <noel@burton-krahn.com>\n> Subject: Majordomo results\n> Date: Thu, 20 Sep 2001 11:55:29 -0400 (EDT)\n> \n> >>>> subscribe\n> **** Illegal command!\n> \n> **** Skipped 1 line of trailing unparseable text.\n> \n> No valid commands processed.\n> \n> Is majordomo and CVS broken, or do I need different instructions?\n> \n> --Noel\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Fri, 28 Sep 2001 10:34:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Can't subscribe or get CVS" } ]
[ { "msg_contents": "When the server starts I see something like\n\nDEBUG: redo record is at 0/146ED4; undo record is at 0/0; shutdown TRUE\n\nbut what does \"shutdown TRUE\" mean? It doesn't mean \"I'm shutting down\"\nnor \"the last shutdown was successful\", so it's not very obvious.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 20 Sep 2001 18:26:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Shutdown TRUE?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> When the server starts I see something like\n> DEBUG: redo record is at 0/146ED4; undo record is at 0/0; shutdown TRUE\n\n> but what does \"shutdown TRUE\" mean? It doesn't mean \"I'm shutting down\"\n> nor \"the last shutdown was successful\", so it's not very obvious.\n\nIt means the latest checkpoint record in the WAL log has the shutdown bit\nset, implying that there was an intentional shutdown. This should be\nredundant with the control-file-state-field info that's reported a line\nor two earlier --- but I suppose it would be interesting for debugging\nif it didn't agree.\n\nFeel free to change the message wording...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 15:52:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown TRUE? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> It means the latest checkpoint record in the WAL log has the shutdown bit\n>> set, implying that there was an intentional shutdown.\n\n> I'd hardly consider a kill -9 an intentional shutdown. ???\n\nHuh? If you zap the postmaster with kill -9, the last checkpoint record\nin the WAL log will not have the shutdown bit set.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 16:30:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown TRUE? 
" }, { "msg_contents": "Tom Lane writes:\n\n> It means the latest checkpoint record in the WAL log has the shutdown bit\n> set, implying that there was an intentional shutdown.\n\nI'd hardly consider a kill -9 an intentional shutdown. ???\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 20 Sep 2001 22:31:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shutdown TRUE? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Huh? If you zap the postmaster with kill -9, the last checkpoint record\n>> in the WAL log will not have the shutdown bit set.\n\n> It does here:\n\nHad you actually done anything to the database between postmaster\nstartup and kill? If there's no reason to do a checkpoint then it\nwon't checkpoint, so the last checkpoint would be from the previous\nincarnation. But under normal circumstances I see \"shutdown FALSE\"\nhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 19:26:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown TRUE? " }, { "msg_contents": "Tom Lane writes:\n\n> Huh? 
If you zap the postmaster with kill -9, the last checkpoint record\n> in the WAL log will not have the shutdown bit set.\n\nIt does here:\n\npeter ~$ pg-install/bin/postmaster -D pg-install/var/data\nDEBUG: database system was shut down at 2001-09-20 18:52:58 CEST\nDEBUG: checkpoint record is at 0/146F54\nDEBUG: redo record is at 0/146F54; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 136; next oid: 16579\nDEBUG: database system is ready\nKilled\npeter ~$ pg-install/bin/postmaster -D pg-install/var/data\nDEBUG: database system was interrupted at 2001-09-20 22:27:00 CEST\nDEBUG: checkpoint record is at 0/146F54\nDEBUG: redo record is at 0/146F54; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 136; next oid: 16579\nDEBUG: database system was not properly shut down; automatic recovery in progress\nDEBUG: ReadRecord: record with zero length at 0/146F94\nDEBUG: redo is not required\nDEBUG: database system is ready\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 21 Sep 2001 01:28:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shutdown TRUE? " }, { "msg_contents": "Tom Lane writes:\n\n> Had you actually done anything to the database between postmaster\n> startup and kill?\n\nI've run the regression tests several times (parallel and serial) and\nkilled the postmaster at different places, even killed a few backends in\nbetween, actual redo happened, yet shutdown was invariably TRUE.\n\nAnother question I have is what is the significance of\n\n ReadRecord: record with zero length at 0/D65C00\n\n? It seems to occur at the end of every redo run, perhaps it simply means\nend of records. 
At least it's not clear to the user whether this is\ndebug, info, warning, or error.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 21 Sep 2001 02:00:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Shutdown TRUE? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Had you actually done anything to the database between postmaster\n>> startup and kill?\n\n> I've run the regression tests several times (parallel and serial) and\n> killed the postmaster at different places, even killed a few backends in\n> between, actual redo happened, yet shutdown was invariably TRUE.\n\nHm. Do you have an especially long intra-checkpoint interval set in\npostgresql.conf? I'd expect that if you'd done anything to the db and\nthen waited at least a checkpoint interval (or done a manual CHECKPOINT)\nbefore killing the postmaster, you'd find a non-shutdown checkpoint\nrecord. That's what I get anyway.\n\n> Another question I have is what is the significance of\n> ReadRecord: record with zero length at 0/D65C00\n> ? It seems to occur at the end of every redo run, perhaps it simply means\n> end of records.\n\nYeah, that would be the normal symptom of reaching the end of the log.\n\n> At least it's not clear to the user whether this is debug, info,\n> warning, or error. \n\nIt's debug, and so labeled.\n\nPossibly we need more elog levels than we have --- the stuff that comes\nout at startup is not all of the same urgency, but DEBUG is the only\nelog level we can use for it, really...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 20:05:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown TRUE? " } ]
[ { "msg_contents": "I'm trying to update my cvs tree and am currently seeing the following:\n\nParsing supfile \"postgres.cvsup\"\nConnecting to cvsup.postgresql.org\nCannot connect to cvsup.postgresql.org: Connection refused\nWill retry at 20:15:50\n\nIs this expected? Should cvsup.postgresql.org be answering connection\nrequests yet? My previous connections had been through postgresql.org\n(and working for the last few years ;) but currently show\n\nmyst$ ./repsync\nParsing supfile \"postgres.cvsup\"\nConnecting to postgresql.org\nConnected to postgresql.org\nServer software version: REL_16_1\nNegotiating file attribute support\nExchanging collection information\nServer message: Collection \"pgsql\" release \"cvs\" is not available here\nEstablishing multiplexed-mode data connection\nRunning\nSkipping collection pgsql/cvs\nShutting down connection to server\nFinished successfully\n\nSo two problems on this one (which might be moot if the new machine\nshould be doing this instead): the CVSup version is wrong and the\ncvs-pulling configuration no longer exists.\n\n - Thomas\n", "msg_date": "Thu, 20 Sep 2001 20:15:26 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "cvsup trouble" }, { "msg_contents": "\n\nOkay, that is fixed ... for some reason, it didn't restart on last reboot,\nhave to watch that one ...\n\n\n\nOn Thu, 20 Sep 2001, Thomas Lockhart wrote:\n\n> I'm trying to update my cvs tree and am currently seeing the following:\n>\n> Parsing supfile \"postgres.cvsup\"\n> Connecting to cvsup.postgresql.org\n> Cannot connect to cvsup.postgresql.org: Connection refused\n> Will retry at 20:15:50\n>\n> Is this expected? Should cvsup.postgresql.org be answering connection\n> requests yet? 
My previous connections had been through postgresql.org\n> (and working for the last few years ;) but currently show\n>\n> myst$ ./repsync\n> Parsing supfile \"postgres.cvsup\"\n> Connecting to postgresql.org\n> Connected to postgresql.org\n> Server software version: REL_16_1\n> Negotiating file attribute support\n> Exchanging collection information\n> Server message: Collection \"pgsql\" release \"cvs\" is not available here\n> Establishing multiplexed-mode data connection\n> Running\n> Skipping collection pgsql/cvs\n> Shutting down connection to server\n> Finished successfully\n>\n> So two problems on this one (which might be moot if the new machine\n> should be doing this instead): the CVSup version is wrong and the\n> cvs-pulling configuration no longer exists.\n>\n> - Thomas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Fri, 21 Sep 2001 08:08:32 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble" }, { "msg_contents": "> Okay, that is fixed ... for some reason, it didn't restart on last reboot,\n> have to watch that one ...\n\nThanks for fixing it. Now of course the machine does not seem to be\nvisible. That was the case yesterday too; are these planned outages, is\nit still bouncing up and down as it is configured, or is it flakey? I\ncan see postgresql.org pretty consistantly, but cvsup.postgresql.org is\nnot visible at the same time.\n\n - Thomas\n", "msg_date": "Fri, 21 Sep 2001 13:26:36 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble" }, { "msg_contents": "\njust checked, and the machine has been up 14days now ... 
I can ssh into it\nfrom cvs.postgresql.org, and the last cvsup connection was about 5 minutes\nago:\n\nSep 21 09:10:42 server1 cvsupd[50149]: +3 otto@host115.olabinc.com (fs1.olabinc.com) [SNAP_16_1e/16.1]\nSep 21 09:11:53 server1 cvsupd[50149]: -3 [165Kin+275Kout] Finished successfully\n\nnslookup for it should return:\n\nName: rs.postgresql.org\nAddress: 64.39.15.238\nAliases: cvsup.postgresql.org\n\nOn Fri, 21 Sep 2001, Thomas Lockhart wrote:\n\n> > Okay, that is fixed ... for some reason, it didn't restart on last reboot,\n> > have to watch that one ...\n>\n> Thanks for fixing it. Now of course the machine does not seem to be\n> visible. That was the case yesterday too; are these planned outages, is\n> it still bouncing up and down as it is configured, or is it flakey? I\n> can see postgresql.org pretty consistantly, but cvsup.postgresql.org is\n> not visible at the same time.\n>\n> - Thomas\n>\n\n", "msg_date": "Fri, 21 Sep 2001 10:21:55 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble" }, { "msg_contents": "Two points\n\n1) I think that the problem with restarting at boot has occurred before.\n2) I just sync'ed via cvsup at cvsup.postgresql.org and it deleted all\n under pgsql/src/interfaces/odbc/*. It also does not seem to appear\n anywhere else. What's up?\n\nThis is my client version:\n(193)-> cvsup -v\nCVSup client, GUI version\nCopyright 1996-2001 John D. 
Polstra\nSoftware version: SNAP_16_1e\nProtocol version: 17.0\nOperating system: FreeBSD4\nhttp://www.polstra.com/projects/freeware/CVSup/\nReport problems to cvsup-bugs@polstra.com\n\nThis is the start of the output of the session:\n(195)-> head -20 !$\nhead -20 20010921.out\nParsing supfile \"cvsup_config\"\nConnecting to cvsup.postgresql.org\nConnected to cvsup.postgresql.org\nServer software version: REL_16_1p3\nFalling back to protocol version 16.1\nNegotiating file attribute support\nExchanging collection information\nEstablishing multiplexed-mode data connection\nRunning\nUpdating collection pgsql/cvs\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of \n> Thomas Lockhart\n> Sent: Friday, September 21, 2001 6:27 AM\n> To: Marc G. Fournier\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: cvsup trouble\n> \n> \n> > Okay, that is fixed ... for some reason, it didn't restart \n> on last reboot,\n> > have to watch that one ...\n> \n> Thanks for fixing it. Now of course the machine does not seem to be\n> visible. That was the case yesterday too; are these planned \n> outages, is\n> it still bouncing up and down as it is configured, or is it flakey? I\n> can see postgresql.org pretty consistantly, but \n> cvsup.postgresql.org is\n> not visible at the same time.\n> \n> - Thomas\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> majordomo@postgresql.org)\n> \n> \n", "msg_date": "Fri, 21 Sep 2001 07:23:09 -0700", "msg_from": "\"Otto Hirr\" <otto.hirr@olabinc.com>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Thanks for fixing it. Now of course the machine does not seem to be\n> visible. 
That was the case yesterday too; are these planned outages, is\n> it still bouncing up and down as it is configured, or is it flakey?\n\nLooks fine from here:\n\n$ ping cvsup.postgresql.org\nPING rs.postgresql.org: 64 byte packets\n64 bytes from 64.39.15.238: icmp_seq=0. time=57. ms\n64 bytes from 64.39.15.238: icmp_seq=1. time=70. ms\n\nPerhaps there is a routing problem somewhere between you and 64.39.15.238?\nThat machine is not physically at hub (looks like it's a Rackspace site)\nso there might be connectivity issues that are different from hub's.\nWhat do you get from tracerouting to cvsup.postgresql.org?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 10:41:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble " }, { "msg_contents": "> $ ping cvsup.postgresql.org\n> PING rs.postgresql.org: 64 byte packets\n> 64 bytes from 64.39.15.238: icmp_seq=0. time=57. ms\n> 64 bytes from 64.39.15.238: icmp_seq=1. time=70. ms\n> Perhaps there is a routing problem somewhere between you and 64.39.15.238?\n> That machine is not physically at hub (looks like it's a Rackspace site)\n> so there might be connectivity issues that are different from hub's.\n> What do you get from tracerouting to cvsup.postgresql.org?\n\n*slaps forehead*\n\nI didn't catch on to the different network, and 64.x has always been a\nproblem on my firewall/masquerading box since I'm also on a 64.x subnet\nand it keeps wanting to put in default routes for a class A network. 
So\nit is all a problem on my end.\n\nI had done some testing at another location yesterday, when I was\nfinding that cvsup connections were being rejected.\n\nAny hints on how to supress this default route when the network is\nconfigured?\n\nSorry Marc for the false alarm (today anyway ;)\n\n - Thomas\n", "msg_date": "Fri, 21 Sep 2001 15:49:03 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble" }, { "msg_contents": "On Fri, 21 Sep 2001, Thomas Lockhart wrote:\n\n> > $ ping cvsup.postgresql.org\n> > PING rs.postgresql.org: 64 byte packets\n> > 64 bytes from 64.39.15.238: icmp_seq=0. time=57. ms\n> > 64 bytes from 64.39.15.238: icmp_seq=1. time=70. ms\n> > Perhaps there is a routing problem somewhere between you and 64.39.15.238?\n> > That machine is not physically at hub (looks like it's a Rackspace site)\n> > so there might be connectivity issues that are different from hub's.\n> > What do you get from tracerouting to cvsup.postgresql.org?\n> \n> *slaps forehead*\n> \n> I didn't catch on to the different network, and 64.x has always been a\n> problem on my firewall/masquerading box since I'm also on a 64.x subnet\n> and it keeps wanting to put in default routes for a class A network. 
So\n> it is all a problem on my end.\n> \n> I had done some testing at another location yesterday, when I was\n> finding that cvsup connections were being rejected.\n> \n> Any hints on how to supress this default route when the network is\n> configured?\n\nBest suggestion: Renumber your internal network into one of private\nnetworks (10.*, 192.168.*, 172.16.*).\n\nAlternate suggestion: configure correct netmask for your internal network\n(such as, if you choose to be 64.1.1.1 with netmask 255.255.255.0, you\nwill only lose connectivity to 64.1.1.*, not entire 64.*)\n\n -alex\n\n", "msg_date": "Fri, 21 Sep 2001 13:10:50 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble" }, { "msg_contents": "> 2) I just sync'ed via cvsup at cvsup.postgresql.org and it deleted all\n> under pgsql/src/interfaces/odbc/*. It also does not seem to appear\n> anywhere else. What's up?\n\nI see this too. I blew away my repository and populated it from scratch,\nand still see the problem. In fact, the odbc directory doesn't even\nappear in (my replicated) repository at all, let alone moving to the\nattic.\n\nMarc, can we verify that the ODBC directory actually exists in the\nreplicated cvs repository? Can we please get a site map to help us\nnavigate around to help diagnose problems?\n\nmyst$ cvsup -v\nCVSup client, non-GUI version\nCopyright 1996-2001 John D. Polstra\nSoftware version: SNAP_16_1d\nProtocol version: 16.1\nOperating system: LINUXLIBC6\nhttp://www.polstra.com/projects/freeware/CVSup/\nReport problems to cvsup-bugs@polstra.com\n\nand the server is running \n\nServer software version: REL_16_1p3\n\nThis does *not* seem to be fresh enough. JDP recommends installing\nREL_16_1e, and REL_16_1d was the first with bug fixes for the time tag\nproblem. 
Not sure what REL_16_1p3 is, but it does not seem to be in the\nsame line of fixed code (maybe a preliminary or patch release from\nsometime in the past??).\n\nMarc, help!! Check http://people.freebsd.org/~jdp/s1g/ for details on\nthe bug and FreeBSD package files which fix it.\n\n - Thomas\n", "msg_date": "Sat, 22 Sep 2001 02:14:36 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "\nFixed ... my exclude file had a rule in it that prevented odbc from being\nrsync'd down ... all should be downloaded now ...\n\nas for a 'sitemap', the server that all of this is on is meant to be\npurelya 'mirror' of the central one on mail.postgresql.org, except there\nare no accounts on the machine for you to go looking around it with ...\n\nas for a sitemap of the central site ... all the web related stuff is in\n/usr/local/www, and cvsroot is /cvsroot ...\n\n\nOn Sat, 22 Sep 2001, Thomas Lockhart wrote:\n\n> > 2) I just sync'ed via cvsup at cvsup.postgresql.org and it deleted all\n> > under pgsql/src/interfaces/odbc/*. It also does not seem to appear\n> > anywhere else. What's up?\n>\n> I see this too. I blew away my repository and populated it from scratch,\n> and still see the problem. In fact, the odbc directory doesn't even\n> appear in (my replicated) repository at all, let alone moving to the\n> attic.\n>\n> Marc, can we verify that the ODBC directory actually exists in the\n> replicated cvs repository? Can we please get a site map to help us\n> navigate around to help diagnose problems?\n>\n> myst$ cvsup -v\n> CVSup client, non-GUI version\n> Copyright 1996-2001 John D. 
Polstra\n> Software version: SNAP_16_1d\n> Protocol version: 16.1\n> Operating system: LINUXLIBC6\n> http://www.polstra.com/projects/freeware/CVSup/\n> Report problems to cvsup-bugs@polstra.com\n>\n> and the server is running\n>\n> Server software version: REL_16_1p3\n>\n> This does *not* seem to be fresh enough. JDP recommends installing\n> REL_16_1e, and REL_16_1d was the first with bug fixes for the time tag\n> problem. Not sure what REL_16_1p3 is, but it does not seem to be in the\n> same line of fixed code (maybe a preliminary or patch release from\n> sometime in the past??).\n>\n> Marc, help!! Check http://people.freebsd.org/~jdp/s1g/ for details on\n> the bug and FreeBSD package files which fix it.\n>\n> - Thomas\n>\n\n", "msg_date": "Fri, 21 Sep 2001 22:28:25 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "Thanks Marc !\n\nI just cvsup'ed and odbc seems to be all back.\n\nThanks !\n\nBest regards,\n\n.. Otto\n\nOtto Hirr\nOLAB Inc\n503.617.6595\notto.hirr@olabinc.com\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc \n> G. Fournier\n> Sent: Friday, September 21, 2001 7:28 PM\n> To: Thomas Lockhart\n> Cc: otto.hirr@olabinc.com; thomas@pgsql.com;\n> pgsql-hackers@postgresql.org\n> Subject: Re: cvsup trouble - ODBC blown away !?!?\n> \n> \n> \n> Fixed ... my exclude file had a rule in it that prevented \n> odbc from being\n> rsync'd down ... all should be downloaded now ...\n> \n> as for a 'sitemap', the server that all of this is on is meant to be\n> purelya 'mirror' of the central one on mail.postgresql.org, \n> except there\n> are no accounts on the machine for you to go looking around \n> it with ...\n> \n> as for a sitemap of the central site ... 
all the web related \n> stuff is in\n> /usr/local/www, and cvsroot is /cvsroot ...\n> \n> \n> On Sat, 22 Sep 2001, Thomas Lockhart wrote:\n> \n> > > 2) I just sync'ed via cvsup at cvsup.postgresql.org and \n> it deleted all\n> > > under pgsql/src/interfaces/odbc/*. It also does not \n> seem to appear\n> > > anywhere else. What's up?\n> >\n> > I see this too. I blew away my repository and populated it \n> from scratch,\n> > and still see the problem. In fact, the odbc directory doesn't even\n> > appear in (my replicated) repository at all, let alone moving to the\n> > attic.\n> >\n> > Marc, can we verify that the ODBC directory actually exists in the\n> > replicated cvs repository? Can we please get a site map to help us\n> > navigate around to help diagnose problems?\n> >\n> > myst$ cvsup -v\n> > CVSup client, non-GUI version\n> > Copyright 1996-2001 John D. Polstra\n> > Software version: SNAP_16_1d\n> > Protocol version: 16.1\n> > Operating system: LINUXLIBC6\n> > http://www.polstra.com/projects/freeware/CVSup/\n> > Report problems to cvsup-bugs@polstra.com\n> >\n> > and the server is running\n> >\n> > Server software version: REL_16_1p3\n> >\n> > This does *not* seem to be fresh enough. JDP recommends installing\n> > REL_16_1e, and REL_16_1d was the first with bug fixes for \n> the time tag\n> > problem. Not sure what REL_16_1p3 is, but it does not seem \n> to be in the\n> > same line of fixed code (maybe a preliminary or patch release from\n> > sometime in the past??).\n> >\n> > Marc, help!! 
Check http://people.freebsd.org/~jdp/s1g/ for \n> details on\n> > the bug and FreeBSD package files which fix it.\n> >\n> > - Thomas\n> >\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n", "msg_date": "Fri, 21 Sep 2001 19:39:25 -0700", "msg_from": "\"Otto Hirr\" <otto.hirr@olabinc.com>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "> Fixed ... my exclude file had a rule in it that prevented odbc from being\n> rsync'd down ... all should be downloaded now ...\n\nGreat! Haven't tested it yet, but Otto is happy so I'm sure I'll be\ntoo...\n\n> > Server software version: REL_16_1p3\n> > This does *not* seem to be fresh enough. JDP recommends installing\n> > REL_16_1e, and REL_16_1d was the first with bug fixes for the time tag\n> > problem. Not sure what REL_16_1p3 is, but it does not seem to be in the\n> > same line of fixed code (maybe a preliminary or patch release from\n> > sometime in the past??).\n\nWhat about the server versioning issue? I see no mention of\nREL_16_1p<anything> as being safe to use. Please confirm that this is in\nfact the 16_1d or 16_1e release!!\n\n - Thomas\n", "msg_date": "Sat, 22 Sep 2001 03:30:25 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "On Sat, 22 Sep 2001, Thomas Lockhart wrote:\n\n> > Fixed ... my exclude file had a rule in it that prevented odbc from being\n> > rsync'd down ... all should be downloaded now ...\n>\n> Great! Haven't tested it yet, but Otto is happy so I'm sure I'll be\n> too...\n>\n> > > Server software version: REL_16_1p3\n> > > This does *not* seem to be fresh enough. JDP recommends installing\n> > > REL_16_1e, and REL_16_1d was the first with bug fixes for the time tag\n> > > problem. 
Not sure what REL_16_1p3 is, but it does not seem to be in the\n> > > same line of fixed code (maybe a preliminary or patch release from\n> > > sometime in the past??).\n>\n> What about the server versioning issue? I see no mention of\n> REL_16_1p<anything> as being safe to use. Please confirm that this is in\n> fact the 16_1d or 16_1e release!!\n\nThis is what is/was latest in ports after you mentioned the problem ...\nI'll check ports again over the next day or so to see if John has oploaded\nsomething even newer ....\n\n\n", "msg_date": "Fri, 21 Sep 2001 23:33:03 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "> > What about the server versioning issue? I see no mention of\n> > REL_16_1p<anything> as being safe to use. Please confirm that this is in\n> > fact the 16_1d or 16_1e release!!\n> This is what is/was latest in ports after you mentioned the problem ...\n> I'll check ports again over the next day or so to see if John has oploaded\n> something even newer ....\n\nAck! That's the whole point! This is a critical bug fix for which John\nPolstra has posted the required binaries, including for FreeBSD. If it\nisn't the right version, we get time-corrupted CVS repositories. The URL\nis http://people.freebsd.org/~jdp/s1g/ and for all I know that is the\nonly source of packages which contain the fixes. If it isn't the right\npackage, it doesn't fix the problem.\n\nI'd really like this so that I can finish the date/time improvements for\nthe beta freeze.\n\n - Thomas\n", "msg_date": "Sat, 22 Sep 2001 05:52:58 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" }, { "msg_contents": "> > > Please confirm that this is in\n> > > fact the 16_1d or 16_1e release!!\n\nThank you thank you thank you!!! 
I see that 16_1e is now installed on\nthe server, and the world is good again.\n\nI'm about done with extensive patches on date/time support, though I\nthink I won't be ready to commit them before I leave town for work. I\n*should* be able to commit things remotely, and expect to do so Thursday\nor Friday after merging with the current cvs tree.\n\nThanks again for updating the server.\n\n - Thomas\n\nps. Thank you\n", "msg_date": "Wed, 26 Sep 2001 06:56:53 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: cvsup trouble - ODBC blown away !?!?" } ]
[ { "msg_contents": "Not sure how to do this, but it shouldn't be too ugly. I want to drop\ncolumns from a table as well as be able to change the data type of a column.\nWhere should I be looking in the source for altering tables? For removing I\ndon't imagine it could be too ugly as all it should be is a reverse add.\nNot so certain about changing the data type though. Can this be done simply\nby adjusting something in pg_attribute or am I missing something? Also, for\nthe future, MS Sql already has this and it'd be very helpful if pgsql did:\nmodifying data between databases using a select, insert, update, or delete\n(excuse me if I'm wrong in my syntax, but in MS Sql I believe it's\ndbname..tablename.column).\n\nGeoff\n", "msg_date": "Thu, 20 Sep 2001 16:42:44 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Changing columns" } ]
[ { "msg_contents": "Do the multibyte regression tests in src/test/mb currently pass for\nother people? I'm getting failures on most of them, and what it looks\nlike to me is that the latest commits of the \"expected\" files contain\nwrong results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2001 21:55:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Multibyte regression tests broken?" }, { "msg_contents": "On Thu, Sep 20, 2001 at 09:55:56PM -0400, Tom Lane wrote:\n> Do the multibyte regression tests in src/test/mb currently pass for\n> other people? I'm getting failures on most of them, and what it looks\n> like to me is that the latest commits of the \"expected\" files contain\n> wrong results.\n\n$ ./mbregress.sh\nDROP DATABASE\nCREATE DATABASE\neuc_jp .. ok\nsjis .. ok\neuc_kr .. failed\neuc_cn .. failed\neuc_tw .. failed\nbig5 .. failed\nunicode .. failed\nmule_internal .. failed\n\n In the 7.2 is possible use queries that in the old version of \n\"expected results\" finish with error. The \"expected\" files are \nout of date.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 21 Sep 2001 10:35:40 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Multibyte regression tests broken?" }, { "msg_contents": "> Do the multibyte regression tests in src/test/mb currently pass for\n> other people? I'm getting failures on most of them, and what it looks\n> like to me is that the latest commits of the \"expected\" files contain\n> wrong results.\n\nYou are correct. Will fix.\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 21 Sep 2001 19:40:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Multibyte regression tests broken?" } ]
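A note on how these per-encoding pass/fail verdicts arise: mbregress.sh runs each encoding's SQL script through psql and compares the output against a checked-in "expected" file, which is why out-of-date expected files make valid 7.2 behaviour report as "failed". A boiled-down sketch of that comparison in Python (file names and layout here are illustrative, not the script's real structure):

```python
import os
import tempfile

def compare_result(actual_path: str, expected_path: str) -> str:
    """Boiled-down per-encoding check as in mbregress.sh: the psql
    output of a test script is compared against the checked-in
    expected file; any difference reports 'failed'.
    (Paths are illustrative, not the script's real layout.)"""
    with open(actual_path) as actual, open(expected_path) as expected:
        return "ok" if actual.read() == expected.read() else "failed"

# Demo with scratch files standing in for the results/expected dirs.
scratch = tempfile.mkdtemp()
actual = os.path.join(scratch, "euc_kr.out")
expected = os.path.join(scratch, "euc_kr.expected")
with open(actual, "w") as f:
    f.write("SELECT 1;\n 1\n")
with open(expected, "w") as f:
    f.write("SELECT 1;\n 1\n")
verdict_ok = compare_result(actual, expected)      # files match
with open(expected, "w") as f:
    f.write("SELECT 1;\nERROR\n")                  # stale expected file
verdict_stale = compare_result(actual, expected)
```

This is why updating the expected files (as Tatsuo does above) fixes the reported failures without any backend change.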
[ { "msg_contents": "Hi, \nplease run the recursion function below in 7.1.3\n-----------------------------------------------------------\ndrop function listTest(int4);\ncreate function listTest(int4)\nreturns text\nas 'declare\n mtype alias for $1;\n begin\n if mtype=0 then\n return ''0'';\n else\n return mtype||listTest(mtype-1);\n end if;\n end;'\nlanguage 'plpgsql';\nselect listTest(5);\n--------------------------------------\nresponse:\n\n111110 (in 7.1.3, incorrect!)\n\n543210 (in 7.0.3, correct!)\n\nPlease tell me how to solve this problem if I don't want to go back to 7.0.3.\n\nThanks\nXiaoming \n\n\n\n*******************Internet Email Confidentiality Footer*******************\nPrivileged/Confidential Information may be contained in this message. If you\nare not the addressee indicated in this message (or responsible for delivery of\nthe message to such person), you may not copy or deliver this message to anyone.\nIn such case, you should destroy this message and kindly notify the sender by\nreply email. Please advise immediately if you or your employer does not consent\nto Internet email for messages of this kind.
Opinions, conclusions and other\ninformation in this message that do not relate to the official business of my\nfirm shall be understood as neither given nor endorsed by it.", "msg_date": "Fri, 21 Sep 2001 17:35:47 +0800", "msg_from": "\"Zhang Xiaoming\" <xiaoming.zhang@ebridgex.com>", "msg_from_op": true, "msg_subject": "bug! in 7.1.3 (for recursion function)" } ]
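For reference, the intended semantics of the listTest() function reported above can be sketched outside plpgsql. This is a hypothetical Python re-implementation for illustration only (not backend code); each recursion level contributes its own counter value, which is why 7.0.3's "543210" is the correct answer for listTest(5):

```python
def list_test(mtype: int) -> str:
    """Hypothetical Python rendering of the plpgsql listTest():
    concatenate the counter values from mtype down to 0.
    7.0.3 returned the expected '543210' for listTest(5);
    7.1.3 wrongly returned '111110'."""
    if mtype == 0:
        return "0"
    return str(mtype) + list_test(mtype - 1)
```

Each call frame must see its own value of mtype; the 7.1.3 output suggests the plpgsql evaluation state was being shared across recursive calls, though the report does not pin down the mechanism.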
[ { "msg_contents": "Hi,\n\nThis is not a real security issue but it seems not very appropreate\nbehavior for me.\n\n$ psql -U foo test\nPassword: XXX\n\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=> \\c - postgres\nYou are now connected as new user postgres\n\nAs you can see, psql reconnect as any user if the password is same as\nfoo. Of course this is due to the careless password setting, but I\nthink it's better to prompt ANY TIME the user tries to switch to\nanother user. Comments?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 21 Sep 2001 19:56:27 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "psql and security" }, { "msg_contents": "Tatsuo Ishii:\n\n> As you can see, psql reconnect as any user if the password is same as\n> foo. Of course this is due to the careless password setting, but I\n> think it's better to prompt ANY TIME the user tries to switch to\n> another user. Comments?\n\nDoes postgres have a concept of a 'root' user? Then the password should\nonly be prompted when one isn't root; ie. adopt Unix semantics.\n\n\nCheers,\n\nColin\n\n\n", "msg_date": "Fri, 21 Sep 2001 14:08:44 +0200", "msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and security" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> As you can see, psql reconnect as any user if the password is same as\n> foo. Of course this is due to the careless password setting, but I\n> think it's better to prompt ANY TIME the user tries to switch to\n> another user.\n\nI'm not sure. A few users have voiced concerns about this before, but we\nhave no count of the users that might enjoy this convenience. 
;-)\n\nBasically, the attack scenario here is that if you have a psql running and\nleave your terminal, someone else can come in and get access to any other\ndatabase that you might have access to, without knowing your password.\nBut given a running psql, figuring out the password isn't so hard (running\na debugger or inducing a core dump would be likely options), and\nconcluding that this password is valid for all databases is trivial since\nthat's the default setup.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 21 Sep 2001 15:16:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql and security" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> As you can see, psql reconnect as any user if the password is same as\n> foo. Of course this is due to the careless password setting, but I\n> think it's better to prompt ANY TIME the user tries to switch to\n> another user. Comments?\n\nYeah, I agree. Looks like a simple change in dbconnect():\n\n /*\n * Use old password if no new one given (if you didn't have an old\n * one, fine)\n */\n if (!pwparam && oldconn)\n pwparam = PQpass(oldconn);\n\nto\n\n /*\n * Use old password (if any) if no new one given and we are\n * reconnecting as same user\n */\n if (!pwparam && oldconn && PQuser(oldconn) && userparam &&\n strcmp(PQuser(oldconn), userparam) == 0)\n pwparam = PQpass(oldconn);\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 10:29:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql and security " }, { "msg_contents": "\"Colin 't Hart\" <cthart@yahoo.com> writes:\n> Does postgres have a concept of a 'root' user? Then the password should\n> only be prompted when one isn't root; ie. 
adopt Unix semantics.\n\nCan't really do that in psql's \\c, since it's establishing a whole new\nconnection; there is no possibility for superuserness on the old\nconnection to provide any relaxation of the check.\n\nHowever, see SET SESSION AUTHORIZATION, which does what you're thinking\nof within the context of a single connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 10:32:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and security " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> concluding that this password is valid for all databases is trivial since\n> that's the default setup.\n\nNo, I think you're missing the point --- we're concerned about\nreconnecting as a different user, not reconnecting to a different\ndatabase. The issue is that psql will silently try to use user A's\npassword to authenticate as user B. While one would hope that this\nfails, it doesn't seem like a good idea even to try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 10:36:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql and security " }, { "msg_contents": "Tom Lane writes:\n\n> No, I think you're missing the point --- we're concerned about\n> reconnecting as a different user, not reconnecting to a different\n> database.\n\nOh, of course. I agree, in that case the password shouldn't be reused.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 21 Sep 2001 20:08:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql and security " }, { "msg_contents": "\nPatch applied. Thanks Tatsuo and Tom.\n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > As you can see, psql reconnect as any user if the password is same as\n> > foo. 
Of course this is due to the careless password setting, but I\n> > think it's better to prompt ANY TIME the user tries to switch to\n> > another user. Comments?\n> \n> Yeah, I agree. Looks like a simple change in dbconnect():\n> \n> /*\n> * Use old password if no new one given (if you didn't have an old\n> * one, fine)\n> */\n> if (!pwparam && oldconn)\n> pwparam = PQpass(oldconn);\n> \n> to\n> \n> /*\n> * Use old password (if any) if no new one given and we are\n> * reconnecting as same user\n> */\n> if (!pwparam && oldconn && PQuser(oldconn) && userparam &&\n> strcmp(PQuser(oldconn), userparam) == 0)\n> pwparam = PQpass(oldconn);\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 12:54:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql and security" } ]
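The effect of Tom's dbconnect() change can be restated as a small decision function. This is a hypothetical Python sketch of the guard (names invented; psql itself is C using PQuser()/PQpass()): the cached password is offered only when no new password was supplied and the user name is unchanged, otherwise the caller must prompt.

```python
def password_for_reconnect(new_password, new_user, old_user, old_password):
    """Decide which password to offer libpq on \\c.

    Before the patch any cached password was reused, so psql would
    silently try user A's password for user B. After the patch it is
    reused only when no new password was given AND the user name is
    unchanged; otherwise None is returned so the caller prompts."""
    if new_password is not None:
        return new_password
    if old_user is not None and new_user == old_user:
        return old_password
    return None  # force a fresh password prompt

# Same user: cached password reused.
reused = password_for_reconnect(None, "foo", "foo", "secret")
# Different user: must prompt, even if the passwords happen to match.
prompted = password_for_reconnect(None, "postgres", "foo", "secret")
```

The second case is exactly Tatsuo's scenario: `\c - postgres` from a session opened as foo no longer silently tries foo's password for postgres.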
[ { "msg_contents": "\nI still have problem when doing cvs update on a fresh source tree. I did\ncvs checkout after the changes in cvs servers.\n\n$ cvs update\ncannot create_adm_p /tmp/cvs-serv24877/ChangeLogs\nPermission denied\n\nI think it is a problem on the CVS server. My client side is Cygwin\n1.3.3 and cvs 1.11\n\n\n\t\t\tDan\n", "msg_date": "Fri, 21 Sep 2001 13:05:41 +0200", "msg_from": "=?us-ascii?Q?Horak_Daniel?= <horak@sit.plzen-city.cz>", "msg_from_op": true, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "\nTry it:\n\n> cd pgsql\n> cvs -q update -APd .\n? .new.configure\nU configure\nU configure.in\nU register.txt\nU ChangeLogs/ChangeLog-7.1-7.1.1\nU ChangeLogs/ChangeLog-7.1RC1-to-7.1RC2\nU ChangeLogs/ChangeLog-7.1RC2-to-7.1RC3\nU ChangeLogs/ChangeLog-7.1RC3-to-7.1rc4\nU ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3\nU ChangeLogs/ChangeLog-7.1beta3-to-7.1beta4\n^Ccvs [update aborted]: received interrupt signal\n\n\nOn Fri, 21 Sep 2001, Horak Daniel wrote:\n\n>\n> I still have problem when doing cvs update on a fresh source tree. I did\n> cvs checkout after the changes in cvs servers.\n>\n> $ cvs update\n> cannot create_adm_p /tmp/cvs-serv24877/ChangeLogs\n> Permission denied\n>\n> I think it is a problem on the CVS server. My client side is Cygwin\n> 1.3.3 and cvs 1.11\n>\n>\n> \t\t\tDan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Fri, 21 Sep 2001 08:12:06 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" } ]
[ { "msg_contents": "\n> > As you can see, psql reconnect as any user if the password is same\nas\n> > foo. Of course this is due to the careless password setting, but I\n> > think it's better to prompt ANY TIME the user tries to switch to\n> > another user.\n> \n> I'm not sure. A few users have voiced concerns about this before, but\nwe\n> have no count of the users that might enjoy this convenience. ;-)\n> \n> Basically, the attack scenario here is that if you have a psql running\nand\n> leave your terminal, someone else can come in and get access to any\nother\n> database that you might have access to, without knowing your password.\n> But given a running psql, figuring out the password isn't so hard\n(running\n> a debugger or inducing a core dump would be likely options), and\n> concluding that this password is valid for all databases is trivial\nsince\n> that's the default setup.\n\nThis feature was added to conveniently let an already connected user\nswitch to another database. Imho you could distinguish the exact case at\nhand,\nwhere a new user was specified and prompt for a new password.\n\nAndreas\n", "msg_date": "Fri, 21 Sep 2001 16:21:31 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: psql and security" } ]
[ { "msg_contents": "Hello friends,\n\nint4eq (xid, int4) seems to be needed for proper support of MS Access2K:\nCREATE FUNCTION \"int4eq\" (xid, int4)\nRETURNS bool\nAS 'int4eq'\nLANGUAGE 'internal'\n\nIs int4eq function included in PostgreSQL 7.2?\n\nRegards,\nJean-Michel POURE\n\n", "msg_date": "Fri, 21 Sep 2001 16:40:12 +0200", "msg_from": "Jean-Michel POURE <jmpoure@axitrad.com>", "msg_from_op": true, "msg_subject": "int4eq (xid, int4)" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> Hello friends,\n> \n> int4eq (xid, int4) seems to be needed for proper support of MS Access2K:\n> CREATE FUNCTION \"int4eq\" (xid, int4)\n> RETURNS bool\n> AS 'int4eq'\n> LANGUAGE 'internal'\n> \n> Is int4eq function included in PostgreSQL 7.2?\n\nI added a '=' operator between xid and int in 7.2.\nThe registration of int4eq and =(xid,int) for\nrow versioning is no longer needed in 7.2.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 01 Oct 2001 09:11:36 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: int4eq (xid, int4)" }, { "msg_contents": "I'm reposting it to pgsql-odbc list because I've not\nseen it on pgsql-odbc.\n\nJean-Michel POURE wrote:\n> \n> Hello friends,\n> \n> int4eq (xid, int4) seems to be needed for proper support of MS Access2K:\n> CREATE FUNCTION \"int4eq\" (xid, int4)\n> RETURNS bool\n> AS 'int4eq'\n> LANGUAGE 'internal'\n> \n> Is int4eq function included in PostgreSQL 7.2?\n\nI added a '=' operator between xid and int in 7.2.\nThe registration of int4eq and =(xid,int) for\nrow versioning is no longer needed in 7.2.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Thu, 04 Oct 2001 13:08:22 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] int4eq (xid, int4)" } ]
[ { "msg_contents": "While looking through the code I found an already existing alter table drop\ncolumn in src/backend/parser/gram.y. However, when I try to run it in psql\nit comes back with a not implemented. Was the left hand not talking to the\nright hand when this was coded or is there more to this?\n\nGeoff\n", "msg_date": "Fri, 21 Sep 2001 12:32:16 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "an already existing alter table drop column ?!?!?!" }, { "msg_contents": "\nOn Fri, 21 Sep 2001, Gowey, Geoffrey wrote:\n\n> While looking through the code I found an already existing alter table drop\n> column in src/backend/parser/gram.y. However, when I try to run it in psql\n> it comes back with a not implemented. Was the left hand not talking to the\n> right hand when this was coded or is there more to this?\n\nIIRC, it was not enabled pending further discussion of the behavior.\n\n", "msg_date": "Fri, 21 Sep 2001 10:15:09 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" }, { "msg_contents": "Stephan Szabo wrote:\n> \n> On Fri, 21 Sep 2001, Gowey, Geoffrey wrote:\n> \n> > While looking through the code I found an already existing alter table drop\n> > column in src/backend/parser/gram.y. However, when I try to run it in psql\n> > it comes back with a not implemented. Was the left hand not talking to the\n> > right hand when this was coded or is there more to this?\n> \n> IIRC, it was not enabled pending further discussion of the behavior.\n\nAs to 'DROP COLUMN', I neglected to remove _DROP_COLUMN_HACK__\nstuff which has no meaning now, sorry.
I would remove it after\nthe 7.2 release.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 24 Sep 2001 12:01:54 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" } ]
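With DROP COLUMN disabled in this era, the usual workaround was to rebuild the table without the unwanted column. A sketch that generates that SQL follows (hypothetical helper for illustration; identifier quoting and re-creating indexes, constraints, and defaults are deliberately omitted):

```python
def drop_column_sql(table: str, keep_columns: list) -> list:
    """Emit the classic pre-DROP COLUMN recipe: copy the surviving
    columns into a scratch table, drop the original, rename back.
    Indexes, constraints, and defaults must be re-created by hand;
    identifier quoting is omitted for brevity."""
    cols = ", ".join(keep_columns)
    return [
        "BEGIN;",
        f"CREATE TABLE {table}_new AS SELECT {cols} FROM {table};",
        f"DROP TABLE {table};",
        f"ALTER TABLE {table}_new RENAME TO {table};",
        "COMMIT;",
    ]

# e.g. drop every column of "emp" except id and name:
stmts = drop_column_sql("emp", ["id", "name"])
```

Because PostgreSQL DDL is transactional, wrapping the rebuild in BEGIN/COMMIT keeps the swap atomic.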
[ { "msg_contents": "When modifying postgresql.conf (or, now, pg_hba.conf) one must send a\nSIGHUP to the postmaster to get it to pay attention. Seems like it'd\nbe nice if pg_ctl had an option to do that, rather than having to muck\nabout with looking in ps output. Any objections? What should the\noption be called? \"pg_ctl hup\" is short but maybe too Unix-sysadminy;\nperhaps something like \"pg_ctl reconfig\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 12:49:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_ctl needs a SIGHUP option" }, { "msg_contents": "Tom Lane writes:\n\n> Any objections?\n\nNo. Definitely needed.\n\n> What should the option be called? \"pg_ctl hup\" is short but maybe too\n> Unix-sysadminy; perhaps something like \"pg_ctl reconfig\"?\n\nIf you accept the Linux Standards Base as a precedent for the other\noptions, it should be \"reload\". That will most easily map the the init.d\nscripts.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 21 Sep 2001 20:07:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_ctl needs a SIGHUP option" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> What should the option be called?\n\n> If you accept the Linux Standards Base as a precedent for the other\n> options, it should be \"reload\".\n\nWorks for me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 14:08:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_ctl needs a SIGHUP option " } ]
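What the proposed "pg_ctl reload" boils down to is reading the postmaster's PID and sending it SIGHUP, sparing the admin the "mucking about in ps output". A rough Python equivalent, assuming the PID is the first line of postmaster.pid in the data directory (the demo at the end signals the current process instead of a real postmaster):

```python
import os
import signal
import tempfile

def reload_postmaster(datadir: str) -> int:
    """Rough equivalent of the discussed 'pg_ctl reload': read the
    postmaster's PID from the first line of postmaster.pid in the
    data directory and send it SIGHUP, which makes it re-read
    postgresql.conf (and now pg_hba.conf)."""
    pidfile = os.path.join(datadir, "postmaster.pid")
    with open(pidfile) as f:
        pid = int(f.readline().strip())
    os.kill(pid, signal.SIGHUP)
    return pid

# Self-contained demo: pretend this process is the postmaster.
received = []
signal.signal(signal.SIGHUP, lambda signum, frame: received.append(signum))
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "postmaster.pid"), "w") as f:
    f.write("%d\n" % os.getpid())
reloaded_pid = reload_postmaster(demo_dir)
```

The "reload" name matching the Linux Standards Base init.d action, as Peter suggests, means distribution scripts can map straight onto it.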
[ { "msg_contents": "I am running the command as you have written:\n\n> cd pgsql\n> cvs -q update -APd .\n\nbut still I am getting\n\n> cannot create_adm_p /tmp/cvs-serv24877/ChangeLogs\n> Permission denied\n\non both Cygwin and Linux\n\nmy CVS/Repository for the top-level directory\n/projects/cvsroot/pgsql\n\nand I have tries only relative path in CVS/Repository (pgsql)\n\nmy CVS/Root\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n\n\n\t\tDan", "msg_date": "Fri, 21 Sep 2001 19:27:10 +0200", "msg_from": "=?windows-1250?Q?Hor=E1k_Daniel?= <horak@sit.plzen-city.cz>", "msg_from_op": true, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "On Fri, Sep 21, 2001 at 07:27:10PM +0200, Hor�k Daniel wrote:\n...\n> but still I am getting\n> \n> > cannot create_adm_p /tmp/cvs-serv24877/ChangeLogs\n> > Permission denied\n\n<aol>\nMe Too!\n</aol>\n\n", "msg_date": "Fri, 21 Sep 2001 18:51:57 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Further CVS errors" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nFolks-\n\n Has the password for anoncvs been changed? I can't seem to get into it\nanymore.\n\n\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7q8XViysnOdCML0URArjBAJ9xkuFik8VWCALEMOFG4SUDDG3nlACfVa05\n98136MftKgl9sp9pPXz7SlM=\n=zSvL\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 21 Sep 2001 15:57:26 -0700 (MST)", "msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>", "msg_from_op": false, "msg_subject": "anoncvs failure..." }, { "msg_contents": "\nwhat sort of error?\n\nOn Fri, 21 Sep 2001, Ned Wolpert wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Folks-\n>\n> Has the password for anoncvs been changed? 
I can't seem to get into it\n> anymore.\n>\n>\n>\n>\n> Virtually,\n> Ned Wolpert <ned.wolpert@knowledgenet.com>\n>\n> D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.6 (GNU/Linux)\n> Comment: For info see http://www.gnupg.org\n>\n> iD8DBQE7q8XViysnOdCML0URArjBAJ9xkuFik8VWCALEMOFG4SUDDG3nlACfVa05\n> 98136MftKgl9sp9pPXz7SlM=\n> =zSvL\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Fri, 21 Sep 2001 21:44:23 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "Marc,\n\nI'm getting errors too, it looks like you HATE me ;-(\n\nKeith.\n\n\n% truss -rall -wall cvs -d :pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot login\nexecve(\"/usr/local/bin/cvs\", 0xEFFFF414, 0xEFFFF428) argc = 4\nopen(\"/dev/zero\", O_RDONLY) = 3\nmmap(0x00000000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0xEF7C0000\n.\n.\n.\n.\nconnect(3, 0xEFFFF2A0, 16, 1) = 0\nsend(3, 0x00082940, 27, 0) = 27\n B E G I N V E R I F I C A T I O N R E Q U E S T\\n\nsend(3, 0x000AEEA8, 28, 0) = 28\n / h o m e / p r o j e c t s / p g s q l / c v s r o o t\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, \" a n o n c v s\", 7, 0) = 7\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, \" A : 0 Z , I d Z\", 9, 0) = 9\nsend(3, \"\\n\", 1, 0) = 1\nsend(3, 0x00082960, 25, 0) = 25\n E N D V E R I F I C A T I O N R E Q U E S T\\n\nrecv(3, \" I\", 1, 0) = 1\nrecv(3, \" \", 1, 0) = 1\nrecv(3, \" H\", 1, 0) = 1\nrecv(3, \" A\", 1, 0) = 1\nrecv(3, \" T\", 1, 0) = 1\nrecv(3, \" E\", 1, 0) = 1\nrecv(3, \" \", 1, 0) = 1\nrecv(3, \" Y\", 1, 0) = 1\nrecv(3, \" O\", 1, 0) = 1\nrecv(3, \" U\", 1, 0) = 1\nrecv(3, \"\\n\", 1, 0) = 1\nshutdown(3, 2, 1) = 0\ncvs [login aborted]: authorization failed: server 
postgresql.org rejected access\nwrite(2, 0x000B10A8, 81) = 81\n c v s [ l o g i n a b o r t e d ] : a u t h o r i z a t i\n o n f a i l e d : s e r v e r p o s t g r e s q l . o r g\n r e j e c t e d a c c e s s\\n\nllseek(0, 0, SEEK_CUR) = 307706\n_exit(1)\n\n\nOn Fri, 21 Sep 2001 21:44:23 -0400 (EDT)\n\"Marc G. Fournier\" <scrappy@hub.org> wrote:\n\n> \n> what sort of error?\n> \n> On Fri, 21 Sep 2001, Ned Wolpert wrote:\n> \n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > Hash: SHA1\n> >\n> > Folks-\n> >\n> > Has the password for anoncvs been changed? I can't seem to get into it\n> > anymore.\n> >\n> >\n> >\n> >\n> > Virtually,\n> > Ned Wolpert <ned.wolpert@knowledgenet.com>\n> >\n> > D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45\n> > -----BEGIN PGP SIGNATURE-----\n> > Version: GnuPG v1.0.6 (GNU/Linux)\n> > Comment: For info see http://www.gnupg.org\n> >\n> > iD8DBQE7q8XViysnOdCML0URArjBAJ9xkuFik8VWCALEMOFG4SUDDG3nlACfVa05\n> > 98136MftKgl9sp9pPXz7SlM=\n> > =zSvL\n> > -----END PGP SIGNATURE-----\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Sat, 22 Sep 2001 13:25:28 +0100", "msg_from": "Keith Parks <emkxp01@middleton-top.co.uk>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "\nas was announced several times so far ... 
try:\n\n-d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot :)\n\n\nOn Sat, 22 Sep 2001, Keith Parks wrote:\n\n\n> Marc,\n>\n> I'm getting errors too, it looks like you HATE me ;-(\n>\n> Keith.\n>\n>\n> % truss -rall -wall cvs -d :pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot login\n> execve(\"/usr/local/bin/cvs\", 0xEFFFF414, 0xEFFFF428) argc = 4\n> open(\"/dev/zero\", O_RDONLY) = 3\n> mmap(0x00000000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0xEF7C0000\n> .\n> .\n> .\n> .\n> connect(3, 0xEFFFF2A0, 16, 1) = 0\n> send(3, 0x00082940, 27, 0) = 27\n> B E G I N V E R I F I C A T I O N R E Q U E S T\\n\n> send(3, 0x000AEEA8, 28, 0) = 28\n> / h o m e / p r o j e c t s / p g s q l / c v s r o o t\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, \" a n o n c v s\", 7, 0) = 7\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, \" A : 0 Z , I d Z\", 9, 0) = 9\n> send(3, \"\\n\", 1, 0) = 1\n> send(3, 0x00082960, 25, 0) = 25\n> E N D V E R I F I C A T I O N R E Q U E S T\\n\n> recv(3, \" I\", 1, 0) = 1\n> recv(3, \" \", 1, 0) = 1\n> recv(3, \" H\", 1, 0) = 1\n> recv(3, \" A\", 1, 0) = 1\n> recv(3, \" T\", 1, 0) = 1\n> recv(3, \" E\", 1, 0) = 1\n> recv(3, \" \", 1, 0) = 1\n> recv(3, \" Y\", 1, 0) = 1\n> recv(3, \" O\", 1, 0) = 1\n> recv(3, \" U\", 1, 0) = 1\n> recv(3, \"\\n\", 1, 0) = 1\n> shutdown(3, 2, 1) = 0\n> cvs [login aborted]: authorization failed: server postgresql.org rejected access\n> write(2, 0x000B10A8, 81) = 81\n> c v s [ l o g i n a b o r t e d ] : a u t h o r i z a t i\n> o n f a i l e d : s e r v e r p o s t g r e s q l . o r g\n> r e j e c t e d a c c e s s\\n\n> llseek(0, 0, SEEK_CUR) = 307706\n> _exit(1)\n>\n>\n> On Fri, 21 Sep 2001 21:44:23 -0400 (EDT)\n> \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n>\n> >\n> > what sort of error?\n> >\n> > On Fri, 21 Sep 2001, Ned Wolpert wrote:\n> >\n> > > -----BEGIN PGP SIGNED MESSAGE-----\n> > > Hash: SHA1\n> > >\n> > > Folks-\n> > >\n> > > Has the password for anoncvs been changed? 
I can't seem to get into it\n> > > anymore.\n> > >\n> > >\n> > >\n> > >\n> > > Virtually,\n> > > Ned Wolpert <ned.wolpert@knowledgenet.com>\n> > >\n> > > D08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45\n> > > -----BEGIN PGP SIGNATURE-----\n> > > Version: GnuPG v1.0.6 (GNU/Linux)\n> > > Comment: For info see http://www.gnupg.org\n> > >\n> > > iD8DBQE7q8XViysnOdCML0URArjBAJ9xkuFik8VWCALEMOFG4SUDDG3nlACfVa05\n> > > 98136MftKgl9sp9pPXz7SlM=\n> > > =zSvL\n> > > -----END PGP SIGNATURE-----\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Sat, 22 Sep 2001 19:49:52 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "> \n> as was announced several times so far ... try:\n> \n> -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot :)\n\nI am confused. Is the path /projects/cvsroot or just /cvsroot, and is\nthe anoncvs password blank or 'postgresql'.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 19:55:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." 
}, { "msg_contents": "\nanoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\t- passwd is blank, but postgresql should work just as well\n\n cvs: :ext:<userid>@cvs.postgresql.org:/cvsroot\n\t\tCVS_RSH set to ssh\n\n\n\nOn Sat, 22 Sep 2001, Bruce Momjian wrote:\n\n> >\n> > as was announced several times so far ... try:\n> >\n> > -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot :)\n>\n> I am confused. Is the path /projects/cvsroot or just /cvsroot, and is\n> the anoncvs password blank or 'postgresql'.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Sat, 22 Sep 2001 21:07:26 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "> \n> anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> \t- passwd is blank, but postgresql should work just as well\n> \n> cvs: :ext:<userid>@cvs.postgresql.org:/cvsroot\n> \t\tCVS_RSH set to ssh\n> \n\nOK, one more question. With non-anon cvs, I use /cvsroot, not\n/projects/cvsroot. Is that correct? I need to update cvs.sgml too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 21:09:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." 
}, { "msg_contents": "On Sat, 22 Sep 2001, Bruce Momjian wrote:\n\n> >\n> > anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > \t- passwd is blank, but postgresql should work just as well\n> >\n> > cvs: :ext:<userid>@cvs.postgresql.org:/cvsroot\n> > \t\tCVS_RSH set to ssh\n> >\n>\n> OK, one more question. With non-anon cvs, I use /cvsroot, not\n> /projects/cvsroot. Is that correct? I need to update cvs.sgml too.\n\n*scratch head* ummm ... ya, that's it ...\n\n\n", "msg_date": "Sat, 22 Sep 2001 21:14:06 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "On Sat, 22 Sep 2001, Marc G. Fournier wrote:\n\n> anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> \t- passwd is blank, but postgresql should work just as well\n\nI can confirm that this works.\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Sat, 22 Sep 2001 20:15:11 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "> On Sat, 22 Sep 2001, Bruce Momjian wrote:\n> \n> > >\n> > > anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > > \t- passwd is blank, but postgresql should work just as well\n> > >\n> > > cvs: :ext:<userid>@cvs.postgresql.org:/cvsroot\n> > > \t\tCVS_RSH set to ssh\n> > >\n> >\n> > OK, one more question. With non-anon cvs, I use /cvsroot, not\n> > /projects/cvsroot. Is that correct? I need to update cvs.sgml too.\n> \n> *scratch head* ummm ... 
ya, that's it ...\n\nOK, cvs.sgml updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 21:16:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "On Sat, Sep 22, 2001 at 08:15:11PM -0500, Dominic J. Eidson wrote:\n> On Sat, 22 Sep 2001, Marc G. Fournier wrote:\n> \n> > anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > \t- passwd is blank, but postgresql should work just as well\n> \n> I can confirm that this works.\n\nStill no good for me:\n\nprotocol error: directory '/home/projects/pgsql/cvsroot/pgsql/src/backend/access/heap' not within root '/projects/cvsroot'\n\nChecking:\n% pwd\n/usr/src/local/pgsql/src/backend/access/heap\n% cat CVS/Root\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n\n?\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 24 Sep 2001 15:12:39 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "On Mon, 24 Sep 2001, Patrick Welche wrote:\n\n> On Sat, Sep 22, 2001 at 08:15:11PM -0500, Dominic J. Eidson wrote:\n> > On Sat, 22 Sep 2001, Marc G. 
Fournier wrote:\n> >\n> > > anoncvs: :pserer:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> > > \t- passwd is blank, but postgresql should work just as well\n> >\n> > I can confirm that this works.\n>\n> Still no good for me:\n>\n> protocol error: directory\n> '/home/projects/pgsql/cvsroot/pgsql/src/backend/access/heap' not\n> within root '/projects/cvsroot'\n\nokay, somehow you have two different CVSROOT's configured?\n/home/projects/pgsql/cvsroot was the old server, /projects/cvsroot is the\nnew one ....\n\n\n", "msg_date": "Mon, 24 Sep 2001 10:22:28 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "On Mon, Sep 24, 2001 at 10:22:28AM -0400, Marc G. Fournier wrote:\n> \n> okay, somehow you have two different CVSROOT's configured?\n> /home/projects/pgsql/cvsroot was the old server, /projects/cvsroot is the\n> new one ....\n\nAny hints? I had done a (csh)\ncd /usr/src/local/pgsql\nfind . -name Root -print > allroots\ngrep -v CVS allroots\nforeach i ( `cat allroots`)\n echo \":pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\" > $i\nend\n\nand CVSROOT is not set as an environment variable... Also odd that it\nappears there and there is no sign of \"home\" anywhere..\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 24 Sep 2001 18:04:17 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." }, { "msg_contents": "On Mon, Sep 24, 2001 at 06:04:17PM +0100, Patrick Welche wrote:\n... \n> and CVSROOT is not set as an environment variable... Also odd that it\n> appears there and there is no sign of \"home\" anywhere..\n\nGot it: had /home/... in pgsql/src/backend/access/heap/CVS/Repository (!)\nAll OK now..\n\nCheers,\n\nPatrick\n", "msg_date": "Tue, 25 Sep 2001 14:07:29 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." 
}, { "msg_contents": "> On Mon, Sep 24, 2001 at 10:22:28AM -0400, Marc G. Fournier wrote:\n> > \n> > okay, somehow you have two different CVSROOT's configured?\n> > /home/projects/pgsql/cvsroot was the old server, /projects/cvsroot is the\n> > new one ....\n> \n> Any hints? I had done a (csh)\n> cd /usr/src/local/pgsql\n> find . -name Root -print > allroots\n> grep -v CVS allroots\n> foreach i ( `cat allroots`)\n> echo \":pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\" > $i\n> end\n> \n> and CVSROOT is not set as an environment variable... Also odd that it\n> appears there and there is no sign of \"home\" anywhere..\n\nI would just delete the old CVS tree and download a new one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 26 Sep 2001 03:43:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs failure..." } ]
[ { "msg_contents": "I tried to subscribe to a few more lists (docs, patches, and committers) and\ngot this for them all:\n\n>>>> subscribe\n**** Illegal command!\n\nNo valid commands processed.\n\n\n\n\nGeoff\n", "msg_date": "Fri, 21 Sep 2001 14:01:52 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "majordomo broken?" } ]
[ { "msg_contents": "I am back from one week vacation. I will work through the mailing lists\ntomorrow and leave for OSDN conference on Sunday afternoon, EST.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 21 Sep 2001 21:28:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "I have returned" } ]
[ { "msg_contents": "Hi,\n\nI have written a small function that show how many tuples are dead\netc. in a specified table. Example output is:\n\ntest=# select pgstattuple('tellers');\nNOTICE: physical length: 0.02MB live tuples: 200 (0.01MB, 58.59%) dead tuples: 100 (0.00MB, 29.30%) overhead: 12.11%\n pgstattuple \n-------------\n 29.296875\n(1 row)\n\nShall I add this function into contrib directory?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 22 Sep 2001 13:36:47 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Tupple statistics function" }, { "msg_contents": "> Hi,\n> \n> I have written a small function that show how many tuples are dead\n> etc. in a specified table. Example output is:\n> \n> test=# select pgstattuple('tellers');\n> NOTICE: physical length: 0.02MB live tuples: 200 (0.01MB, 58.59%) dead tuples: 100 (0.00MB, 29.30%) overhead: 12.11%\n> pgstattuple \n> -------------\n> 29.296875\n> (1 row)\n> \n> Shall I add this function into contrib directory?\n\nI have been wanting this for a long time. In fact, I wanted it linked\nto VACUUM so you could vacuum a table only if it had >X% dead tuples. \nSeems we can find a place for this in the existing commands. Not sure\nwhere, though. Ideas?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 00:58:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tupple statistics function" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have written a small function that show how many tuples are dead\n> etc. in a specified table.\n\nDead according to whose viewpoint? 
Under MVCC this seems to be\nin the eye of the beholder...\n\n> Shall I add this function into contrib directory?\n\nNo real objection, but you should carefully document exactly what\nthe results mean.\n\nBTW, I'd suggest accounting for free, reusable space separately from\n\"overhead\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Sep 2001 01:06:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tupple statistics function " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have written a small function that show how many tuples are dead\n> > etc. in a specified table.\n> \n> Dead according to whose viewpoint? Under MVCC this seems to be\n> in the eye of the beholder...\n\nYou can know if the tuple is visible to other backends, or at least take\na good guess like VACUUM does. Maybe he has coded that in there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 01:26:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tupple statistics function" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... Maybe he has coded that in there.\n\nMaybe so, but he didn't say. That's why I was asking for exact\ndocumentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Sep 2001 01:29:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tupple statistics function " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have written a small function that show how many tuples are dead\n> > etc. in a specified table.\n> \n> Dead according to whose viewpoint? 
Under MVCC this seems to be\n> in the eye of the beholder...\n> \n> > Shall I add this function into contrib directory?\n> \n> No real objection, but you should carefully document exactly what\n> the results mean.\n> \n> BTW, I'd suggest accounting for free, reusable space separately from\n> \"overhead\".\n> \n> \t\t\tregards, tom lane\n\nOk, here are the source code...\n\n/*\n * $Header: /home/t-ishii/repository/pgstattuple/pgstattuple.c,v 1.2 2001/08/30 06:21:48 t-ishii Exp $\n *\n * Copyright (c) 2001 Tatsuo Ishii\n *\n * Permission to use, copy, modify, and distribute this software and\n * its documentation for any purpose, without fee, and without a\n * written agreement is hereby granted, provided that the above\n * copyright notice and this paragraph and the following two\n * paragraphs appear in all copies.\n */\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"access/heapam.h\"\n#include \"access/transam.h\"\n\nPG_FUNCTION_INFO_V1(pgstattuple);\n\nextern Datum pgstattuple(PG_FUNCTION_ARGS);\n\n/* ----------\n * pgstattuple:\n * returns the percentage of dead tuples\n *\n * C FUNCTION definition\n * pgstattuple(NAME) returns FLOAT8\n * ----------\n */\nDatum\npgstattuple(PG_FUNCTION_ARGS)\n{\n Name\tp = PG_GETARG_NAME(0);\n\n Relation\trel;\n HeapScanDesc\tscan;\n HeapTuple\ttuple;\n BlockNumber nblocks;\n double\ttable_len;\n uint64\ttuple_len = 0;\n uint64\tdead_tuple_len = 0;\n uint32\ttuple_count = 0;\n uint32\tdead_tuple_count = 0;\n double\ttuple_percent;\n double\tdead_tuple_percent;\n\n rel = heap_openr(NameStr(*p), NoLock);\n nblocks = RelationGetNumberOfBlocks(rel);\n scan = heap_beginscan(rel, false, SnapshotAny, 0, NULL);\n\n while ((tuple = heap_getnext(scan,0)))\n {\n\tif (HeapTupleSatisfiesNow(tuple->t_data))\n\t{\n\t tuple_len += tuple->t_len;\n\t tuple_count++;\n\t}\n\telse\n\t{\n\t dead_tuple_len += tuple->t_len;\n\t dead_tuple_count++;\n\t}\n }\n heap_endscan(scan);\n heap_close(rel, NoLock);\n\n table_len = 
(double)nblocks*BLCKSZ;\n\n if (nblocks == 0)\n {\n\ttuple_percent = 0.0;\n\tdead_tuple_percent = 0.0;\n }\n else\n {\n\ttuple_percent = (double)tuple_len*100.0/table_len;\n\tdead_tuple_percent = (double)dead_tuple_len*100.0/table_len;\n }\n\n elog(NOTICE,\"physical length: %.2fMB live tuples: %u (%.2fMB, %.2f%%) dead tuples: %u (%.2fMB, %.2f%%) overhead: %.2f%%\",\n\n\t table_len/1024/1024,\n\n\t tuple_count,\n\t (double)tuple_len/1024/1024,\n\t tuple_percent,\n\n\t dead_tuple_count,\n\t (double)dead_tuple_len/1024/1024,\n\t dead_tuple_percent,\n\n\t (nblocks == 0)?0.0: 100.0 - tuple_percent - dead_tuple_percent);\n\n PG_RETURN_FLOAT8(dead_tuple_percent);\n}\n", "msg_date": "Sat, 22 Sep 2001 14:35:50 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Tupple statistics function " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... Maybe he has coded that in there.\n> \n> Maybe so, but he didn't say. That's why I was asking for exact\n> documentation.\n\nAs you can see from the source code, it just use\nHeapTupleSatisfiesNow(). I wrote this function for the admin\nuse. He/she should know if active transactions are touching the table,\nI think. But more precise guess might be interesting for some cases.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 22 Sep 2001 15:03:33 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Tupple statistics function " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Hi,\n> >\n> > I have written a small function that show how many tuples are dead\n> > etc. in a specified table. 
Example output is:\n> >\n> > test=# select pgstattuple('tellers');\n> > NOTICE: physical length: 0.02MB live tuples: 200 (0.01MB, 58.59%) dead tuples: 100 (0.00MB, 29.30%) overhead: 12.11%\n> > pgstattuple\n> > -------------\n> > 29.296875\n> > (1 row)\n> >\n> > Shall I add this function into contrib directory?\n> \n> I have been wanting this for a long time. In fact, I wanted it linked\n> to VACUUM so you could vacuum a table only if it had >X% dead tuples.\n> Seems we can find a place for this in the existing commands. Not sure\n> where, though. Ideas?\n\nIf you mean the reporting of stats how about EXPLAIN VACUMN (with other\ninfo as well?) or EXPLAIN [VERBOSE] TABLE (see below).\n\nIn general EXPLAIN could be expanded to be a command to return an\nexplanation and stats of many items.\nThere could also be EXPLAIN that only shows fields and EXPLAIN VERBOSE\nthat also shows more detail such as stats (as that tends to take more\ntime to collect).\n\nExamples:\nEXPLAIN TABLE ttt\t\tshow table fields and indexes/rules\n\t\t\t\tVERBOSE:stats (inc tuple stats)\nEXPLAIN INDEX iii\t\tshow index description and stats\nEXPLAIN USER/GROUP uuu\t\tshow user name (and the users groups)\n\t\t\t\tVERBOSE:list GRANTs\nEXPLAIN FUNCTION/AGGREGATE/OPERATOR fff\n\t\t\t\tshow arguments of user functions\n\t\t\t\tVERBOSE:show source code\n\nThese might be useful, easier to remember, unchanging between versions\nalternatives to the SELECT * from pg_ttt methods used at present.\nIt it probably worth checking the security options for these (not every\nuser should have function source code access in some business apps).\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nThis is the identity that I use for NewsGroups. Email to \nthis will just sit there. If you wish to email me replace\nthe domain with knightpiesold . co . uk (no spaces).\n", "msg_date": "Mon, 24 Sep 2001 11:40:37 +0100", "msg_from": "\"Thurstan R. McDougle\" <trmcdougle@my-deja.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tupple statistics function" }, { "msg_contents": ">>>>> \"Thurstan\" == Thurstan R McDougle <trmcdougle@my-deja.com> writes:\n\nThurstan> In general EXPLAIN could be expanded to be a command to\nThurstan> return an explanation and stats of many items. There could\nThurstan> also be EXPLAIN that only shows fields and EXPLAIN VERBOSE\nThurstan> that also shows more detail such as stats (as that tends to\nThurstan> take more time to collect).\n\nIt would also be interesting to take everything that psql does and put\neach of them into a view so that it could be queried directly.\nThere's no reason that the magic should reside in client-side code.\nIt'd also make psql much simpler. :) I mean, why is \"\\d\" anything\nother than \"select * from pg_table_view;\", with all the logic to\ncompute that table in the view code?\n\nUnless having a view on the server is expensive. Is a server view\nexpensive if nobody calls it? I mean, it's not maintained like an\nindex, is it?\n\n-- \nRandal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095\n<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>\nPerl/Unix/security consulting, Technical writing, Comedy, etc. etc.\nSee PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!\n", "msg_date": "24 Sep 2001 08:06:59 -0700", "msg_from": "merlyn@stonehenge.com (Randal L. 
Schwartz)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tupple statistics function" }, { "msg_contents": "> >>>>> \"Thurstan\" == Thurstan R McDougle <trmcdougle@my-deja.com> writes:\n> \n> Thurstan> In general EXPLAIN could be expanded to be a command to\n> Thurstan> return an explanation and stats of many items. There could\n> Thurstan> also be EXPLAIN that only shows fields and EXPLAIN VERBOSE\n> Thurstan> that also shows more detail such as stats (as that tends to\n> Thurstan> take more time to collect).\n> \n> It would also be interesting to take everything that psql does and put\n> each of them into a view so that it could be queried directly.\n> There's no reason that the magic should reside in client-side code.\n> It'd also make psql much simpler. :) I mean, why is \"\\d\" anything\n> other than \"select * from pg_table_view;\", with all the logic to\n> compute that table in the view code?\n> \n> Unless having a view on the server is expensive. Is a server view\n> expensive if nobody calls it? I mean, it's not maintained like an\n> index, is it?\n\nAdded to TODO:\n\n\t* Move psql backslash information into views\n\nMakes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 16:40:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tupple statistics function" } ]
[ { "msg_contents": "Tom,\n\nplease apply attached patch to current CVS.\n\nChanges:\n\n 1. Added support for boolean queries (indexable operator @@, looks like\n a @@ '1|(2&3)'\n 2. Some code cleanup and optimization\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Sat, 22 Sep 2001 16:22:07 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "patch for contrib/intarray (CVS)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Tom,\n> \n> please apply attached patch to current CVS.\n> \n> Changes:\n> \n> 1. Added support for boolean queries (indexable operator @@, looks like\n> a @@ '1|(2&3)'\n> 2. Some code cleanup and optimization\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 22 Sep 2001 10:34:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: patch for contrib/intarray (CVS)" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> Tom,\n> \n> please apply attached patch to current CVS.\n> \n> Changes:\n> \n> 1. Added support for boolean queries (indexable operator @@, looks like\n> a @@ '1|(2&3)'\n> 2. Some code cleanup and optimization\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 00:16:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: patch for contrib/intarray (CVS)" } ]
[ { "msg_contents": "Hi!\n\nFYI: a nasty mistake... :)\n\nI am writing a java application, which uses postgreSQL thru JDBC2.\n\nI have two connections to database;\na batch connection, which is initialized as\n realConnection.setAutoCommit( false );\n realConnection.setTransactionIsolation(\nrealConnection.TRANSACTION_READ_COMMITTED );\n\na query connection, which was initialized as\n realConnection.setAutoCommit( false );\n realConnection.setTransactionIsolation(\nrealConnection.TRANSACTION_READ_COMMITTED );\n realConnection.setReadOnly( true );\n\nWell everything was okay, until I started to drop tables and indexes... :(\nDrop statements did hangup for forever when backend did try to take\nexclusive accss lock of index or table.\n\nI used these connections as\nbatch: created table&indexes. Initialized table with some data\nquery: make several queries\nbatch: cleanup and drop indexes&table\n\nAfter couple of days, I did finally found the *real* reason for hangup...\nThe query connection is not set to be an autocommitted, however, I didn't\nuse any connection.commit(). The rest you can imagine... ;-)\n\nNow everything works fine, when I put autocommit on for query connection.\n\n\nMatti \"a bit smarter\" Lehtonen\n--\nMatti Lehtonen Software designer, Stonesoft Corp. Networks R&D\nAddr: It�lahdenkatu 22 A, FIN-00210 Helsinki\nMobile: +358 40 750 4969 Fax: +358 9 4767 1345\nE-mail: matti.lehtonen@stonesoft.com\nInternet: http://www.stonesoft.com/\n\n\n", "msg_date": "Sat, 22 Sep 2001 22:35:32 +0300", "msg_from": "Matti.Lehtonen@stonesoft.com", "msg_from_op": true, "msg_subject": "DROP TABLE and DROP INDEX hangs up in version 7.1.3, when..." } ]
[ { "msg_contents": "Hi all,\n\nDid anyone ever figure out why Solaris boxes had those random failures\nduring the regression tests?\n\nIf no, then should we ensure the regression test on Solaris run's over\nTCP instead (like BeOS and QNX do)? To do this is a one-line fix in the\nshell script.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 23 Sep 2001 15:11:09 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Should we disable Solaris using Unix Domain Sockets in the regression\n\ttest?" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n\n> Did anyone ever figure out why Solaris boxes had those random failures\n> during the regression tests?\n\nIt should be better now that listen() is being called with a larger\nnumber. The basic problem is that Solaris actually honors the\nlisten() backlog argument, and if more than that many Unix socket\nclients attempt to connect simultaneously, some are rejected\nimmediately.\n\nIan\n", "msg_date": "22 Sep 2001 22:48:53 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Should we disable Solaris using Unix Domain Sockets in the\n\tregression test?" }, { "msg_contents": "Cool.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nIan Lance Taylor wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> \n> > Did anyone ever figure out why Solaris boxes had those random failures\n> > during the regression tests?\n> \n> It should be better now that listen() is being called with a larger\n> number. 
The basic problem is that Solaris actually honors the\n> listen() backlog argument, and if more than that many Unix socket\n> clients attempt to connect simultaneously, some are rejected\n> immediately.\n> \n> Ian\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 23 Sep 2001 16:00:12 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Should we disable Solaris using Unix Domain Sockets in the " } ]
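Ian's explanation can be modeled with a tiny accept-queue simulation: if the kernel honors the listen() backlog strictly (as he says Solaris does), a burst of simultaneous Unix-socket connects larger than the backlog gets some attempts rejected immediately, while a larger backlog absorbs the whole burst. This is a hypothetical model of that behavior only, not kernel or PostgreSQL code:

```python
# Toy model of a strictly-honored listen() backlog.  Once the accept queue
# holds `backlog` pending connections (and nothing drains it), further
# simultaneous attempts are refused immediately instead of being queued.

def simulate_connect_burst(backlog, attempts):
    """Return (queued, rejected) for `attempts` simultaneous connects
    against a listener whose accept queue is never drained."""
    queue = []
    rejected = 0
    for _ in range(attempts):
        if len(queue) < backlog:
            queue.append("pending")
        else:
            rejected += 1  # strict-backlog kernel: immediate refusal
    return len(queue), rejected

# The regression test fires many psql clients at once; with a small backlog
# some were refused at random.
queued, refused = simulate_connect_burst(backlog=5, attempts=20)
# Raising the value passed to listen() -- the fix mentioned above -- lets
# the whole burst fit in the queue.
queued_big, refused_big = simulate_connect_burst(backlog=128, attempts=20)
```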
[ { "msg_contents": "I've hit some really evil nastiness that is either a Postgres 7.1.3 bug,\nor signs of early-onset senility for me. I was having trouble with my\ndatabase dying while inserting some values, and running some PL/pgSQL.\n\nThe schema is as listed below, and I'm getting \npsql:fuck.sql:175: ERROR: ExecReplace: rejected due to CHECK constraint users_logged_in\nwhile inserting values into the uservote table. If I had a few columns to\nthe users table, postgres crashes instead of giving this (nonsensical)\nerror.\n\nI'd greatly appreciate any insight, even if it involves a 2x4.\n\nBelow is a significantly simplified version of my schema, which exhibits\nthe above problem.\n\nDROP RULE uservote_update_item_mod;\nDROP RULE uservote_delete_item_dec;\nDROP RULE uservote_insert_item_inc;\n\nDROP RULE itemvote_update_item_mod;\nDROP RULE itemvote_delete_item_dec;\nDROP RULE itemvote_insert_item_inc;\n\nDROP FUNCTION mod_node_vote_count(INT4, INT2, INT2);\n\nDROP TABLE uservote;\nDROP TABLE itemvote;\n\nDROP TABLE item;\nDROP TABLE users;\nDROP TABLE node;\n\nDROP SEQUENCE node_id_seq;\n\nCREATE SEQUENCE node_id_seq;\n\nCREATE TABLE node (\n node_id INT4 UNIQUE NOT NULL DEFAULT nextval('node_id_seq'),\n name TEXT NOT NULL,\n nays INT4 NOT NULL DEFAULT 0\n CHECK ( nays >= 0 ),\n yays INT4 NOT NULL DEFAULT 0,\n CHECK ( yays >= 0 ),\n rating INT2 NOT NULL DEFAULT 50\n CHECK ( rating >= 0 AND rating <= 100 ),\n PRIMARY KEY (node_id)\n);\n \nCREATE TABLE users (\n node_id INT4 UNIQUE NOT NULL,\n\temail TEXT NOT NULL,\n\trealname\tTEXT NOT NULL,\n\tpass_hash\tVARCHAR(32) NOT NULL,\n logged_in\tINT2 NOT NULL DEFAULT 0 \n CHECK (logged_in = 0 OR logged_in = 1)\n) INHERITS (node);\n\nCREATE TABLE item (\n node_id INT4 UNIQUE NOT NULL,\n\tcreator_id\tINT4 NOT NULL\n REFERENCES users (node_id)\n ON DELETE CASCADE\n ON UPDATE CASCADE,\n reason \tTEXT NOT NULL\n) INHERITS (node);\t\n\nCREATE TABLE itemvote (\n\tvote_date\tTIMESTAMP NOT NULL DEFAULT 
CURRENT_TIMESTAMP,\n\ttarget_id INT4 NOT NULL\n REFERENCES item (node_id) \n ON DELETE CASCADE\n ON UPDATE CASCADE,\n user_id INT4 NOT NULL\n\t\t\t REFERENCES users (node_id)\n\t\t\t ON DELETE CASCADE\n \t\t\t ON UPDATE CASCADE,\n\tnays\t\tINT2 NOT NULL\n CHECK (nays = 0 OR nays = 1),\n\n\tPRIMARY KEY (user_id, target_id)\n);\n\nCREATE TABLE uservote (\n\tvote_date\tTIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\ttarget_id INT4 NOT NULL\n REFERENCES users (node_id) \n ON DELETE CASCADE\n ON UPDATE CASCADE,\n user_id INT4 NOT NULL\n\t\t\t REFERENCES users (node_id)\n\t\t\t ON DELETE CASCADE\n \t\t\t ON UPDATE CASCADE,\n\tnays\t\tINT2 NOT NULL\n CHECK (nays = 0 OR nays = 1),\n\n\tPRIMARY KEY (user_id, target_id)\n);\n\n-- modifies an items nays/yays count totals as appropriate\n-- first arg: item number\n-- second arg: 1 or 0, nays or yays.\n-- third arg: 1 or 0, add a vote, or remove a vote\nCREATE FUNCTION mod_node_vote_count (INT4, INT2, INT2) RETURNS INT2 AS '\n DECLARE\n node_num ALIAS for $1;\n nay_status ALIAS for $2;\n add ALIAS for $3;\n\n nay_tot INT4 NOT NULL DEFAULT 0;\n yay_tot INT4 NOT NULL DEFAULT 0;\n BEGIN\n IF add = 1\n THEN\n IF nay_status = 1\n THEN\n UPDATE node SET nays = nays + 1 WHERE node_id = node_num;\n ELSE\n UPDATE node SET yays = yays + 1 WHERE node_id = node_num;\n END IF;\n ELSE\n IF nay_status = 1\n THEN\n UPDATE node SET nays = nays - 1 WHERE node_id = node_num;\n ELSE\n UPDATE node SET yays = yays - 1 WHERE node_id = node_num;\n END IF;\n END IF;\n SELECT nays INTO nay_tot FROM node WHERE node_id = node_num;\n SELECT yays INTO yay_tot FROM node WHERE node_id = node_num;\n\n IF nay_tot + yay_tot != 0\n THEN\n UPDATE node SET rating = CEIL( (yay_tot * 100)/(yay_tot + nay_tot) ) WHERE node_id = node_num;\n ELSE\n UPDATE node SET rating = 50 WHERE node_id = node_num;\n END IF;\n\n RETURN 1;\n END;\n' LANGUAGE 'plpgsql';\n\n------------------------------------------------------------------------\n-- vote totalling rules\n\n-- vote 
insertion\nCREATE RULE itemvote_insert_item_inc AS\n ON INSERT TO itemvote DO\n SELECT mod_node_vote_count(NEW.target_id, NEW.nays, 1);\n\nCREATE RULE uservote_insert_item_inc AS\n ON INSERT TO uservote DO\n SELECT mod_node_vote_count(NEW.target_id, NEW.nays, 1);\n\n-- vote deletion\nCREATE RULE itemvote_delete_item_dec AS\n ON DELETE TO itemvote DO\n SELECT mod_node_vote_count(OLD.target_id, OLD.nays, 0);\n\nCREATE RULE uservote_delete_item_dec AS\n ON DELETE TO uservote DO\n SELECT mod_node_vote_count(OLD.target_id, OLD.nays, 0);\n\n-- vote updates\nCREATE RULE itemvote_update_item_mod AS\n ON UPDATE TO itemvote WHERE OLD.nays != NEW.nays DO\n (SELECT mod_node_vote_count(OLD.target_id, OLD.nays, 1);\n SELECT mod_node_vote_count(NEW.target_id, NEW.nays, 0););\n\nCREATE RULE uservote_update_item_mod AS\n ON UPDATE TO uservote WHERE OLD.nays != NEW.nays DO\n (SELECT mod_node_vote_count(OLD.target_id, OLD.nays, 1);\n SELECT mod_node_vote_count(NEW.target_id, NEW.nays, 0););\n\n-- users\nINSERT INTO users (name, pass_hash, realname, email) VALUES ('mosch', 'dafe001b7733b0f3236aa95e00f8ed88', 'Kevin', 'monica@whitehouse.gov');\nINSERT INTO users (name, pass_hash, realname, email) VALUES ('Wakko', 'c6ef90fcf92bf703c3cc79a679c192a3', 'Alex', 'wakko@bitey.net');\n\n-- items\nINSERT INTO item (name, creator_id, reason) VALUES ('slashdot.org', 2, 'Because it\\'s a pile of turd.');\nINSERT INTO item (name, creator_id, reason) VALUES ('Yahoo!', 2, 'Because it\\'s ugly.');\nINSERT INTO item (name, creator_id, reason) VALUES ('memepool', 1, 'Because it\\'s phat phat phat phat phat.');\nINSERT INTO item (name, creator_id, reason) VALUES ('blow!!??!!', 1, 'this record nays nays nays');\n\n-- item votes\nINSERT INTO itemvote (target_id, user_id, nays) VALUES (3, 1, 1);\nINSERT INTO itemvote (target_id, user_id, nays) VALUES (4, 1, 0);\nINSERT INTO itemvote (target_id, user_id, nays) VALUES (5, 2, 1);\n\n-- user votes\nINSERT INTO uservote (target_id, user_id, nays) VALUES (2, 1, 
0);\nINSERT INTO uservote (target_id, user_id, nays) VALUES (1, 2, 1);\n", "msg_date": "Sun, 23 Sep 2001 07:09:46 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": true, "msg_subject": "confounding, incorrect constraint error" }, { "msg_contents": "> > Below is a significantly simplified version of my schema, which\n> > exhibits\n> > the above problem.\n> \n> Unfortunately, even a simplified version of your schema would take me\n> some hours to understand. As your rule-setting is quite complex, my\n> first instinct would be to hunt for circular procedural logic in your\n> rules. Try to pursue, step by step, everything that happens from the\n> moment you send the insert command to uservotes. You may find that the\n> logic cascades back to the beginning. I've done this to myself on\n> occasion, causing the DB to hang on a seemingly simple request.\n\nI'm fairly certain that there's no circular procedural logic.\n\nThe errors can be turned on/off by turning on/off the uservote_ series\nof rules, which are attached to the uservote table. These rules call\nmod_node_vote_count which only touches the node table. 
There are no\nrules or triggers associated with the node table, so there is no circular\nlogic there.\n\nAdditional strangeness is that the itemvote_ series of rules works perfectly\ndespite the fact that the only difference between uservote_ and itemvote_\nrules is the table that triggers them, they both call the same procedure on\nthe nodes table.\n\nMy current thinking is that something is stomping on some memory, because \nyou can vary the effect of the error from being an incorrectly failed CHECK\nconstraint, to crashing the database, by varying the number of columns in\nthe tables in question.\n\nI'm unemployed at the moment and this is a pet project, so I can't offer\nmuch in the way of financial compensation, but I'll start the bidding at\n$50 donation in your name to your choice of the EFF, the Red Cross, or the\nAmerican Cancer Society, in return for a fix. (If none of these charities\nare acceptable, surely one can be found later that is acceptable to both\nparties).\n\nAgain, I greatly appreciate any help, and I apologize that my test case is\nstill fairly sizeable, despite being about 10% the size of the original\ncode.\n\n-Kevin Way", "msg_date": "Sun, 23 Sep 2001 20:54:01 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": true, "msg_subject": "Re: confounding, incorrect constraint error" }, { "msg_contents": "Kevin Way wrote:\n> I'm unemployed at the moment and this is a pet project, so I can't offer\n> much in the way of financial compensation, but I'll start the bidding at\n> $50 donation in your name to your choice of the EFF, the Red Cross, or the\n> American Cancer Society, in return for a fix. (If none of these charities\n> are acceptable, surely one can be found later that is acceptable to both\n> parties).\n\n Sorry, I missed the original post of the problem. 
If you can\n send it to me again and change your offer into donating blood\n at the Red Cross, I would take a look at it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 24 Sep 2001 14:12:03 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: confounding, incorrect constraint error" }, { "msg_contents": "Hi everybody!\n\nI tried, and it works: the current CVS version really runs\nhappily the query what sent to heaven our 7.1 version of the\nbackend.\n\nKevin: your original complex schema also runs smoothly.\n\nThanks for our mindful developers!\n\nRegards,\nBaldvin\n\nI think Jan wrote:\n> Sorry, I missed the original post of the problem. If you can\n> send it to me again and change your offer into donating blood\n> at the Red Cross, I would take a look at it.\n\nProbably the chances for a prize went... :-(( However, is there\nstill a shortage of blod???\n\nBaldvin\n\n\n", "msg_date": "Mon, 24 Sep 2001 20:38:20 +0200 (MEST)", "msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>", "msg_from_op": false, "msg_subject": "CHECK problem really OK now..." }, { "msg_contents": "Baldvin,\n\n> Probably the chances for a prize went... :-(( However, is there\n> still a shortage of blod???\n\nLast I checked, the American Red Cross does not want your blood right\nnow -- they want it two weeks from now. Currently blood stores are\nfull, but they get depleted pretty fast. 
Of course, what they really\nwant is for you to make a commitment to donate twice a year, every year.\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n", "msg_date": "Mon, 24 Sep 2001 12:18:04 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: CHECK problem really OK now..." }, { "msg_contents": "Kovacs Baldvin wrote:\n> Hi everybody!\n>\n> I tried, and it works: the current CVS version really runs\n> happily the query what sent to heaven our 7.1 version of the\n> backend.\n>\n> Kevin: your original complex schema also runs smoothly.\n>\n> Thanks for our mindful developers!\n>\n> Regards,\n> Baldvin\n>\n> I think Jan wrote:\n> > Sorry, I missed the original post of the problem. If you can\n> > send it to me again and change your offer into donating blood\n> > at the Red Cross, I would take a look at it.\n>\n> Probably the chances for a prize went... :-(( However, is there\n> still a shortage of blod???\n\n Well, for the NY disaster they probably have more than enough\n - not that many injured people there - sad enough though. But\n what's wrong with using the current wave of patriotism to get\n as much as they can get?\n\n It help's saving life! Using the victims for that purpose\n isn't abuse. It is turning grief, anger and sadness into\n help and hope.\n\n Let blood become \"Open Source\". Give it for free and you'll\n get plenty of it when you need some.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 24 Sep 2001 15:23:13 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: CHECK problem really OK now..." }, { "msg_contents": "I tend to follow the mailing list through news.postgresql.org, and it seems\nlike all the -hackers messages are ending up in the .general group rather\nthan .hackers.\n\nDid I somehow blow my configuration or are other people experiencing the\nsame thing?\n\nAZ\n\n\n", "msg_date": "Mon, 24 Sep 2001 13:38:27 -0700", "msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>", "msg_from_op": false, "msg_subject": "[HACKERS] not on .hackers" }, { "msg_contents": "August Zajonc:\n\n> I tend to follow the mailing list through news.postgresql.org, and it\nseems\n> like all the -hackers messages are ending up in the .general group rather\n> than .hackers.\n\nI also follow the mailing list(s) through news.postgresql.org and now that\nyou mention it ...\n\nIt hasn't always been like this.\n\n\nCheers,\n\nColin\n\n\n", "msg_date": "Mon, 24 Sep 2001 22:51:14 +0200", "msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] not on .hackers" }, { "msg_contents": "On Mon, Sep 24, 2001 at 03:23:13PM -0400, Jan Wieck wrote:\n> \n> It help's saving life! Using the victims for that purpose\n> isn't abuse. It is turning grief, anger and sadness into\n> help and hope.\n> \n> Let blood become \"Open Source\". 
Give it for free and you'll\n> get plenty of it when you need some.\n\n\tI couldn't agree more!\n\n\t-Roberto\n-- \n+------------| Roberto Mello - http://www.brasileiro.net |------------+\nComputer Science, Utah State University - http://www.usu.edu\nUSU Free Software & GNU/Linux Club - http://fslc.usu.edu\nSpace Dynamics Lab, Developer - http://www.sdl.usu.edu\nOpenACS - Enterprise free web toolkit - http://openacs.org\nTAFB -> Text Above Fullquote Below \n", "msg_date": "Mon, 24 Sep 2001 22:49:57 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: CHECK problem really OK now..." }, { "msg_contents": "\"Colin 't Hart\" <cthart@yahoo.com> wrote in message news:<9oo6en$qr8$1@news.tht.net>...\n> August Zajonc:\n> \n> > I tend to follow the mailing list through news.postgresql.org, and it\n> seems\n> > like all the -hackers messages are ending up in the .general group rather\n> > than .hackers.\n> \n> I also follow the mailing list(s) through news.postgresql.org and now that\n> you mention it ...\n> \n> It hasn't always been like this.\n\nI'm getting the same thing. Almost nothing is going to the Hackers\nlist on Google groups (old Dejanews). 
They all seem to be sent to\nother lists with the [HACKERS] id in the subject.\n\n-Tony\n", "msg_date": "25 Sep 2001 11:59:20 -0700", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] not on .hackers" }, { "msg_contents": "Kovacs Baldvin <kb136@hszk.bme.hu> writes:\n> I tried, and it works: the current CVS version really runs\n> happily the query what sent to heaven our 7.1 version of the\n> backend.\n\nI believe this traces to a fix I made in May:\n\n2001-05-27 16:48 tgl\n\n\t* src/: backend/executor/execJunk.c, backend/executor/execMain.c,\n\tinclude/executor/executor.h, include/nodes/execnodes.h: When using\n\ta junkfilter, the output tuple should NOT be stored back into the\n\tsame tuple slot that the raw tuple came from, because that slot has\n\tthe wrong tuple descriptor. Store it into its own slot with the\n\tcorrect descriptor, instead. This repairs problems with SPI\n\tfunctions seeing inappropriate tuple descriptors --- for example,\n\tplpgsql code failing to cope with SELECT FOR UPDATE.\n\nI didn't realize at the time that the error would also affect updates of\nchild tables, but tracing through your example with 7.1 shows clearly\nthat the CHECK is being applied to a slot that contains a four-column\ntuple and only a three-column descriptor. Ooops.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 14:36:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] CHECK problem really OK now... " } ]
[ { "msg_contents": "I have made lpad/rpad/btrim/ltrim/rtirm/translate functions multibyte\naware. I think we could mark following item as \"done\".\n\n* Make functions more multi-byte aware, e.g. trim()\n\nAnything I forgot to make multibyte aware?\n--\nTatsuo Ishii\n", "msg_date": "Sun, 23 Sep 2001 20:11:08 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "mutibyte aware functions" }, { "msg_contents": "> I have made lpad/rpad/btrim/ltrim/rtirm/translate functions multibyte\n> aware. I think we could mark following item as \"done\".\n> \n> * Make functions more multi-byte aware, e.g. trim()\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 09:55:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: mutibyte aware functions" }, { "msg_contents": "Hello,\n\nI have set up a UNICODE database in PostgreSQL 7.1.2 and use psql for \nquerying (\\ENCODING > UNICODE).\nTo perform tests, I downloaded code charts from http://www.unicode.org/charts/\n\n1) UTF-8\nhttp://www.postgresql.org/idocs/index.php?app-psql.html explains\n\"Anything contained in single quotes is furthermore subject to C-like \nsubstitutions for \\n (new line), \\t (tab), \\digits, \\0digits, and \\0xdigits \n(the character with the given decimal, octal, or hexadecimal code).\"\n\nTo start, I would like to store/display a simple 'A' letter in psql, number \n0041, with the following queries:\n\n > 'INSERT INTO TABLE table_name VALUES (column-name) VALUES ( ' \\0041' );\n and then SELECT * FROM table_name. 
It does not work.\n > Or simply SELECT '\\0041'; which does not return 'A'.\n\nDo I miss something?\n\n2) Japanese coding\nDo you recommend EUC_JP or UNICODE for storing Japanese text in PostgreSQL?\nThis is for use in PHP (both for input and display, no recode needed).\n\n3) Is there a way to query available encodings in PostgreSQL for display in \npgAdmin.\nIs it a planned feature in PostgreSQL 7.2? This would be nice if it existed.\nExample: function pg_available_encodings -> SQL-ASCII;UNICODE;EUC-JP etc...\n\nThank you in advance,\nJean-Michel POURE\npgAdmin Team\nhttp://pgadmin.postgresql.org\n", "msg_date": "Sun, 23 Sep 2001 16:18:36 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "UTF-8 support" }, { "msg_contents": "> 1) UTF-8\n> http://www.postgresql.org/idocs/index.php?app-psql.html explains\n> \"Anything contained in single quotes is furthermore subject to C-like \n> substitutions for \\n (new line), \\t (tab), \\digits, \\0digits, and \\0xdigits \n> (the character with the given decimal, octal, or hexadecimal code).\"\n> \n> To start, I would like to store/display a simple 'A' letter in psql, number \n> 0041, with the following queries:\n> \n> > 'INSERT INTO TABLE table_name VALUES (column-name) VALUES ( ' \\0041' );\n> and then SELECT * FROM table_name. It does not work.\n> > Or simply SELECT '\\0041'; which does not return 'A'.\n\nTry:\n\nINSERT INTO TABLE table_name VALUES (column-name) VALUES ( ' \\101' );\n\nI don't know why the docs claim so, '\\OCTAL_NUMBER' seems to work\nanyway.\n\nBTW, 'A' is not 041 in octal, it is 101.\n\n> 2) Japanese coding\n> Do you recommend EUC_JP or UNICODE for storing Japanese text in PostgreSQL?\n> This is for use in PHP (both for input and display, no recode needed).\n\nIf you are going to use Japanese only, EUC_JP will take less storage\nspace. 
So, in general EUC_JP is recommended.\n\n> 3) Is there a way to query available encodings in PostgreSQL for display in \n> pgAdmin.\n> Is it a planned feature in PostgreSQL 7.2? This would be nice if it existed.\n> Example: function pg_available_encodings -> SQL-ASCII;UNICODE;EUC-JP etc...\n\nCurrently no. But it would be easy to implement such a function. What\ncomes in mind is:\n\npg_available_encodings([INTEGER how]) RETURNS setof TEXT\n\nwhere how is\n\n 0(or omitted): returns all available encodings\n 1: returns encodings in backend\n 2: returns encodings in frontend\n\nComments?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 24 Sep 2001 08:58:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: UTF-8 support" }, { "msg_contents": "----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Sunday, September 23, 2001 7:58 PM\n\n> > 3) Is there a way to query available encodings in PostgreSQL for display in \n> > pgAdmin.\n> > Is it a planned feature in PostgreSQL 7.2? This would be nice if it existed.\n> > Example: function pg_available_encodings -> SQL-ASCII;UNICODE;EUC-JP etc...\n> \n> Currently no. But it would be easy to implement such a function. What\n> comes in mind is:\n> \n> pg_available_encodings([INTEGER how]) RETURNS setof TEXT\n> \n> where how is\n> \n> 0(or omitted): returns all available encodings\n> 1: returns encodings in backend\n> 2: returns encodings in frontend\n> \n> Comments?\n\n 3: returns encodings of both backend and frontend\n\nWhy both? 
To compare and match upon the need.\nIf by 0 (ALL) you meant the same, then please ignore my comment.\n\nMy question is now how many BE's/FE's would you return encodings for?\n\nS.\n\n", "msg_date": "Sun, 23 Sep 2001 22:15:26 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: UTF-8 support" }, { "msg_contents": "> > pg_available_encodings([INTEGER how]) RETURNS setof TEXT\n> > \n> > where how is\n> > \n> > 0(or omitted): returns all available encodings\n> > 1: returns encodings in backend\n> > 2: returns encodings in frontend\n> > \n> > Comments?\n> \n> 3: returns encodings of both backend and frontend\n> \n> Why both? To compare and match upon the need.\n> If by 0 (ALL) you meant the same, then please ignore my comment.\n\nYou are correct. We don't need how=3.\n\n> My question is now how many BE's/FE's would you return encodings for?\n\nI don't quite understand your question. What I thought were something\nlike this:\n\nSELECT pg_available_encodings();\npg_available_encodings\n----------------------\nSQL_ASCII\nEUC_JP\nEUC_CN\nEUC_KR\nEUC_TW\nUNICODE\nMULE_INTERNAL\nLATIN1\nLATIN2\nLATIN3\nLATIN4\nLATIN5\nKOI8\nWIN\nALT\nSJIS\nBIG5\nWIN1250\n\nBTW, another question comes to my mind. Why don't we make available\nthis kind of information by ordinaly tables or views, rather than by\nfunctions? It would be more flexible and easy to use.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 24 Sep 2001 11:47:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: UTF-8 support" }, { "msg_contents": "----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Sunday, September 23, 2001 10:47 PM\n\n> > My question is now how many BE's/FE's would you return encodings for?\n> \n> I don't quite understand your question. 
What I thought were something\n> like this:\n> \n> SELECT pg_available_encodings();\n> pg_available_encodings\n> ----------------------\n> SQL_ASCII\n> EUC_JP\n> EUC_CN\n> EUC_KR\n> EUC_TW\n> UNICODE\n> MULE_INTERNAL\n> LATIN1\n> LATIN2\n> LATIN3\n> LATIN4\n> LATIN5\n> KOI8\n> WIN\n> ALT\n> SJIS\n> BIG5\n> WIN1250\n\nWhich ones belong to the backend and which ones to the frontend?\nOr even more: which ones belong to the backend, which ones\nto the frontend #1, which ones to the frontend #2, etc...\n\nFor examle, I have two fronends:\n\nFE1: UNICODE, WIN1251\nFE2: KOI8, UNICODE\nBE: UNICODE, LATIN1, ALT\n\nWhich ones SELECT pg_available_encodings(); will show?\nThe ones of the BE and the FE making the request?\n\nIn case I need to communicate with BE using one common\nencoding between the two if it is available.\n\n> BTW, another question comes to my mind. Why don't we make available\n> this kind of information by ordinaly tables or views, rather than by\n> functions? It would be more flexible and easy to use.\n\nSounds like a good idea, make another system table for encodings\nand NLS stuff...\n\nS.\n\n", "msg_date": "Sun, 23 Sep 2001 23:05:17 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: UTF-8 support" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> 3) Is there a way to query available encodings in PostgreSQL for display in\n> pgAdmin.\n\nCould pgAdmin display multibyte chars in the first place ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 24 Sep 2001 14:25:49 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [ODBC] UTF-8 support" }, { "msg_contents": "> Which ones belong to the backend and which ones to the frontend?\n> Or even more: which ones belong to the backend, which ones\n> to the frontend #1, which ones to the frontend #2, etc...\n> \n> For examle, I have two fronends:\n> \n> FE1: UNICODE, WIN1251\n> FE2: KOI8, UNICODE\n> BE: 
UNICODE, LATIN1, ALT\n> \n> Which ones SELECT pg_available_encodings(); will show?\n> The ones of the BE and the FE making the request?\n> \n> In case I need to communicate with BE using one common\n> encoding between the two if it is available.\n\nI'm confused.\n\nWhat do you mean by BE? BE's encoding is determined by the database\nthat FE chooses. If you just want to know what kind encodings are\nthere in the database, why not use:\n\nSELECT DISTINCT ON (encoding) pg_encoding_to_char(encoding) AS\nencoding FROM pg_database;\n\nAlso, FE's encoding could be any valid encoding that FE chooses,\ni.e. it' not BE's choice.\n\nCan you show me more concrete examples showing what you actually want\nto do?\n\n>> 3) Is there a way to query available encodings in PostgreSQL for display in\n>> pgAdmin.\n>\n> Could pgAdmin display multibyte chars in the first place ?\n\nWao. If pgAdmin could not display multibyte chars, all discussions\nhere are meaningless:-<\n--\nTatsuo Ishii\n", "msg_date": "Mon, 24 Sep 2001 16:12:59 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: UTF-8 support" }, { "msg_contents": "Hello Tatsuo & all,\n\nFirst of all, I would like to thank all the ODBC team for their work.\nI have been using various databases quite extensively and PostgreSQL ranks \namong the best for ODBC support.\n\n>Try:\n>INSERT INTO TABLE table_name VALUES (column-name) VALUES ( ' \\101' );\n>I don't know why the docs claim so, '\\OCTAL_NUMBER' seems to work\n>anyway.BTW, 'A' is not 041 in octal, it is 101.\n\nWould it be possible to use the hexadecimal \\u000 notation in 7.2? Is there \na built-in function to convert hexadecimal into octal?\nThis would be an interesting feature as it seems to be a standard notation \nfor UNICODE values (http://www.unicode.org/charts/).\nAlso, Java has the ability to display UNICODE using \\u000 notation. 
I think \nit is the same with PHP although I am not sure.\n\n> > 2) Japanese coding\n> > Do you recommend EUC_JP or UNICODE for storing Japanese text in PostgreSQL?\n> > This is for use in PHP (both for input and display, no recode needed).\n>\n>If you are going to use Japanese only, EUC_JP will take less storage\n>space. So, in general EUC_JP is recommended.\n\nAre some Japanese fonts designed to work only for EUC-JP or UNICODE?\nCan all Japanese fonts be mapped from EUC-JP to UNICODE and conversly?\n\n> > Can you show me more concrete examples showing what you actually want \n> to do?\nYes, I am very interested in testing double-byte display in pgAdmin II.\npgAdmin I and II are developed in Visual Basic which supports double-byte \nUnicode forms since SP4.\n\nHow about ODBC double-byte support?\nIs any translation performed between back-end and front-end?\nAre there special ODBC settings to display Japanese?\nAre there limitations due to Windows (95/98/NT)?\nI heard 95 Japanese support was catastrophic.\n\nPresently, I have to store Japanese text in PostgreSQL 7.1 with PHP.\nI will do my best to display Japanese text in pgAdmin II.\n\nGreetings from Paris,\nJean-Michel POURE\n", "msg_date": "Mon, 24 Sep 2001 10:53:31 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UTF-8 support" }, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> Hello Tatsuo & all,\n> \n> \n> How about ODBC double-byte support?\n\nIf you use 'EUC_xx' or 'UTF-8' as the client encoding\nyou don't need the multibyte version of psqlodbc driver\nbut if you use 'SJIS' or 'BIG5' you need the multibyte\nversion introduced by Eiji Tokuya.\n\n> Is any translation performed between back-end and front-end?\n\nYou have to set the connect settings Datasource option like\n set client_encoding to 'SJIS'\nif you use the different client encoding from that of\nserver. 
psqlodbc driver issues above command immediately\nafter the connection was successful and could use conversion\nfunctionality of PostgreSQL backends.\n\n> Are there special ODBC settings to display Japanese?\n\npgAdmin.exe doesn't display Japanese characters here\nbut I can see Japanese characters using pgAdmin source.\nThere seems to be .. e.g. font problem ...\n\n> Are there limitations due to Windows (95/98/NT)?\n> I heard 95 Japanese support was catastrophic.\n\nI've ever used win9X little and don't know the\ndetails.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 25 Sep 2001 09:41:27 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UTF-8 support" }, { "msg_contents": "Hello,\n\nAre there built-in functions to convert UTF-8 string values into \nhexadecimal \\uxxxx and octal values and conversely?\nIf yes, can I parse any UTF-8 string safely with PL/pgSQL to return \\uxxxx \nand octal values?\n\nBest regards,\nJean-Michel POURE\n\n", "msg_date": "Tue, 25 Sep 2001 09:25:14 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: [ODBC] UTF-8 support" }, { "msg_contents": "----- Original Message ----- \nFrom: Tatsuo Ishii <t-ishii@sra.co.jp>\nSent: Monday, September 24, 2001 3:12 AM\n\n> > Which ones belong to the backend and which ones to the frontend?\n> > Or even more: which ones belong to the backend, which ones\n> > to the frontend #1, which ones to the frontend #2, etc...\n> > \n> > For examle, I have two fronends:\n> > \n> > FE1: UNICODE, WIN1251\n> > FE2: KOI8, UNICODE\n> > BE: UNICODE, LATIN1, ALT\n> > \n> > Which ones SELECT pg_available_encodings(); will show?\n> > The ones of the BE and the FE making the request?\n> > \n> > In case I need to communicate with BE using one common\n> > encoding between the two if it is available.\n> \n> I'm confused.\n\nSorry, I was confused myself about how the mechanics of\nof it works and confused you. 
:)\n\n> What do you mean by BE? BE's encoding is determined by the database\n> that FE chooses. If you just want to know what kind encodings are\n> there in the database, why not use:\n> \n> SELECT DISTINCT ON (encoding) pg_encoding_to_char(encoding) AS\n> encoding FROM pg_database;\n> \n> Also, FE's encoding could be any valid encoding that FE chooses,\n> i.e. it' not BE's choice.\n\nTrue. I gotta look at that.\n\n> Can you show me more concrete examples showing what you actually want\n> to do?\n\nOnce I have them completed and when per-columnt encoding support\nwill be available. Basically, it's gonna be\none BE supporting various encodings, and different kinds of clients\nconnecting to it, including Windows client as well as Linux. The database\nwill have text desriptions of some things in various languages,\n(for now only languages I can communicate on: English, Spanish, French and\nRussian, but they gonna be more in the future), and it would be nice to\nknow in advance what encoding is used in so appropriate conversion\nis done before messages are retunred to clients, so I don't loose accents\nlike in French or Spanish or they don't get converted to some cyrillic\ncharacters at the FE side (French accented characters tend to do that).\n\nAnyway, when it gets to more concrete details and the project\nbecomes more tangible, I might come back with my questions :)\n\n> >> 3) Is there a way to query available encodings in PostgreSQL for display in\n> >> pgAdmin.\n> >\n> > Could pgAdmin display multibyte chars in the first place ?\n> \n> Wao. If pgAdmin could not display multibyte chars, all discussions\n> here are meaningless:-<\n\nThe discussion aren't meaningless here,\nand the pgAdmin team is working now on pgAdmin II,\nwhich I hope will support multibyte characters.\n\n--\nS.\n\n", "msg_date": "Sun, 7 Oct 2001 14:41:37 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: UTF-8 support" } ]
[ { "msg_contents": "Hi guys,\n\nI've been looking at CLUSTER today. I've put together a patch which\nrecreates all the indices which current CLUSTER drops and copies relacl\nfrom the old pg_class tuple and puts it in the new one. \n\nI was working on updating pg_inherits to handle the new OID when it\noccured to me that pg_inherits is not the only system table\ncorrupted. pg_trigger, pg_rewrite (and therefore views) and pg_description\nneed to be updated as well. It seems like the easiest thing to do would be\nto update the new relation to have to OID of the old relation. Is there\nany reason why we would not want to do this?\n\nGavin\n\n", "msg_date": "Sun, 23 Sep 2001 22:59:42 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "CLUSTER TODO item" }, { "msg_contents": "> Hi guys,\n> \n> I've been looking at CLUSTER today. I've put together a patch which\n> recreates all the indices which current CLUSTER drops and copies relacl\n> from the old pg_class tuple and puts it in the new one. \n> \n> I was working on updating pg_inherits to handle the new OID when it\n> occured to me that pg_inherits is not the only system table\n> corrupted. pg_trigger, pg_rewrite (and therefore views) and pg_description\n> need to be updated as well. It seems like the easiest thing to do would be\n> to update the new relation to have to OID of the old relation. Is there\n> any reason why we would not want to do this?\n\nWe did strange things with this in the past because we didn't have\npg_class.relfilenode. Now that we do, you can just recreate the heap\ntable using a different oid filename and update relfilenode when you are\ndone. Keep the same oid. Of course, as you noted, the old indexes have\nto be recreated because the heap has changed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 09:58:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": ">> I've been looking at CLUSTER today. I've put together a patch which\n>> recreates all the indices which current CLUSTER drops and copies relacl\n>> from the old pg_class tuple and puts it in the new one. \n\nThis is entirely the wrong way to go at it.\n\n> We did strange things with this in the past because we didn't have\n> pg_class.relfilenode. Now that we do, you can just recreate the heap\n> table using a different oid filename and update relfilenode when you are\n> done.\n\nYes. CLUSTER should not do one single thing to the system catalogs,\nexcept heap_update the pg_class rows for the table and indexes with\nnew relfilenode values. That is the facility that a rewrite of CLUSTER\nwas waiting for, so now that we have it, it's pointless to put more\nwork into the old CLUSTER implementation.\n\nNote: I'm not convinced that relfilenode and pg_class.oid are each\nused in exactly the right spots. Once we have cases where they can\ndiffer, we may well find some bugs to flush out. But that needs to\nhappen anyway, so don't let it dissuade you from doing CLUSTER the\nright way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Sep 2001 17:04:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item " }, { "msg_contents": "On Sun, 23 Sep 2001, Tom Lane wrote:\n> \n> Note: I'm not convinced that relfilenode and pg_class.oid are each\n> used in exactly the right spots. Once we have cases where they can\n> differ, we may well find some bugs to flush out. But that needs to\n> happen anyway, so don't let it dissuade you from doing CLUSTER the\n> right way.\n\nI think I may have broken stuff. I'm not sure. I've fiddled a fair bit but\nI'm still segfaulting in the storage manager. 
It might be because I'm\nheap_creating and then just stealing relfilenode - I don't know.\n\nAnyway, a patch is attached which clusters and then recreates the indexes\n- but then segfaults.\n\nAm I going about this all wrong?\n\nThanks\n\nGavin", "msg_date": "Mon, 24 Sep 2001 15:19:10 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: CLUSTER TODO item " }, { "msg_contents": "\nCan I get a status on this?\n\n\n> On Sun, 23 Sep 2001, Tom Lane wrote:\n> > \n> > Note: I'm not convinced that relfilenode and pg_class.oid are each\n> > used in exactly the right spots. Once we have cases where they can\n> > differ, we may well find some bugs to flush out. But that needs to\n> > happen anyway, so don't let it dissuade you from doing CLUSTER the\n> > right way.\n> \n> I think I may have broken stuff. I'm not sure. I've fiddled a fair bit but\n> I'm still segfaulting in the storage manager. It might be because I'm\n> heap_creating and then just stealing relfilenode - I don't know.\n> \n> Anyway, a patch is attached which clusters and then recreates the indexes\n> - but then segfaults.\n> \n> Am I going about this all wrong?\n> \n> Thanks\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 16:37:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can I get a status on this?\n\nIt's not gonna happen for 7.2, I think ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 20:10:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item " }, { "msg_contents": "On Thu, 11 Oct 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can I get a status on this?\n> \n> It's not gonna happen for 7.2, I think ...\n> \n> \t\t\tregards, tom lane\n\nI'd love it to go out with 7.2 but I've had no time to work on this patch\nlately. The reason I need time is that, after having fiddled a fair bit,\nI've decided that there really needs to be support for the creation of a\nnew relfilenode in the storage manager.\n\nThe current patch works like this:\n\nCreate new heap (heap_create())\nCopy tuples from old heap to new heap via index scan\nForm a new pg_class tuple\nsimple_heap_update()\nupdate catalogue indices\nrebuild existing indices\n\nThis causes an overflow in the localbuf.c so I guess this is wrong\n(included in patch on 24/sep) =). I've looked at various combinations of\nthis:\n\nmemcpy() the old Relation into a new Relation, update smgrunlink() the old\nRelation and heap_storage_create() the new relation. 
This dies because\nsmgrunlink only schedules the drop, where as heap_storage_create() actually \ncreates a file on the file system (open() returns with EEXIST).\n\nI've also tried just copying the structure but heap_open() relies on OID\nnot relfilenode.\n\nI'm probably going about it the wrong way, but it strikes me that there\nneeds to be a way to abstract the relfilenode from OID in the heap access \ncode so that one can easily manipulate the heap on disk without having to \nplay with OIDs. \n\nI would have included code examples/clearer description but the box I\ndon't have access to the box I created the patches on atm =(.\n\nGavin\n\n", "msg_date": "Fri, 12 Oct 2001 13:35:30 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: CLUSTER TODO item " }, { "msg_contents": "\nIs there an updated version of this patch for 7.3?\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Thu, 11 Oct 2001, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Can I get a status on this?\n> > \n> > It's not gonna happen for 7.2, I think ...\n> > \n> > \t\t\tregards, tom lane\n> \n> I'd love it to go out with 7.2 but I've had no time to work on this patch\n> lately. The reason I need time is that, after having fiddled a fair bit,\n> I've decided that there really needs to be support for the creation of a\n> new relfilenode in the storage manager.\n> \n> The current patch works like this:\n> \n> Create new heap (heap_create())\n> Copy tuples from old heap to new heap via index scan\n> Form a new pg_class tuple\n> simple_heap_update()\n> update catalogue indices\n> rebuild existing indices\n> \n> This causes an overflow in the localbuf.c so I guess this is wrong\n> (included in patch on 24/sep) =). 
I've looked at various combinations of\n> this:\n> \n> memcpy() the old Relation into a new Relation, update smgrunlink() the old\n> Relation and heap_storage_create() the new relation. This dies because\n> smgrunlink only schedules the drop, where as heap_storage_create() actually \n> creates a file on the file system (open() returns with EEXIST).\n> \n> I've also tried just copying the structure but heap_open() relies on OID\n> not relfilenode.\n> \n> I'm probably going about it the wrong way, but it strikes me that there\n> needs to be a way to abstract the relfilenode from OID in the heap access \n> code so that one can easily manipulate the heap on disk without having to \n> play with OIDs. \n> \n> I would have included code examples/clearer description but the box I\n> don't have access to the box I created the patches on atm =(.\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 11:15:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": "\nGavin, do you have an updated patch you want applied to 7.3?\n\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Thu, 11 Oct 2001, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Can I get a status on this?\n> > \n> > It's not gonna happen for 7.2, I think ...\n> > \n> > \t\t\tregards, tom lane\n> \n> I'd love it to go out with 7.2 but I've had no time to work on this patch\n> lately. 
The reason I need time is that, after having fiddled a fair bit,\n> I've decided that there really needs to be support for the creation of a\n> new relfilenode in the storage manager.\n> \n> The current patch works like this:\n> \n> Create new heap (heap_create())\n> Copy tuples from old heap to new heap via index scan\n> Form a new pg_class tuple\n> simple_heap_update()\n> update catalogue indices\n> rebuild existing indices\n> \n> This causes an overflow in the localbuf.c so I guess this is wrong\n> (included in patch on 24/sep) =). I've looked at various combinations of\n> this:\n> \n> memcpy() the old Relation into a new Relation, update smgrunlink() the old\n> Relation and heap_storage_create() the new relation. This dies because\n> smgrunlink only schedules the drop, where as heap_storage_create() actually \n> creates a file on the file system (open() returns with EEXIST).\n> \n> I've also tried just copying the structure but heap_open() relies on OID\n> not relfilenode.\n> \n> I'm probably going about it the wrong way, but it strikes me that there\n> needs to be a way to abstract the relfilenode from OID in the heap access \n> code so that one can easily manipulate the heap on disk without having to \n> play with OIDs. \n> \n> I would have included code examples/clearer description but the box I\n> don't have access to the box I created the patches on atm =(.\n> \n> Gavin\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 13:06:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": "Hi Bruce,\n\nI put this on hold since I sent that email. This is basically because the\nAPI doesn't lend itself well to attaching new relfilenodes to existing\nheaps. 
I definately want to put this patch in but just need to find time\nto modify the API to support this correctly.\n\nGavin\n\n\nOn Fri, 22 Feb 2002, Bruce Momjian wrote:\n\n> \n> Is there an updated version of this patch for 7.3?\n> \n> ---------------------------------------------------------------------------\n> \n> Gavin Sherry wrote:\n> > On Thu, 11 Oct 2001, Tom Lane wrote:\n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Can I get a status on this?\n> > > \n> > > It's not gonna happen for 7.2, I think ...\n> > > \n> > > \t\t\tregards, tom lane\n> > \n> > I'd love it to go out with 7.2 but I've had no time to work on this patch\n> > lately. The reason I need time is that, after having fiddled a fair bit,\n> > I've decided that there really needs to be support for the creation of a\n> > new relfilenode in the storage manager.\n> > \n> > The current patch works like this:\n> > \n> > Create new heap (heap_create())\n> > Copy tuples from old heap to new heap via index scan\n> > Form a new pg_class tuple\n> > simple_heap_update()\n> > update catalogue indices\n> > rebuild existing indices\n> > \n> > This causes an overflow in the localbuf.c so I guess this is wrong\n> > (included in patch on 24/sep) =). I've looked at various combinations of\n> > this:\n> > \n> > memcpy() the old Relation into a new Relation, update smgrunlink() the old\n> > Relation and heap_storage_create() the new relation. This dies because\n> > smgrunlink only schedules the drop, where as heap_storage_create() actually \n> > creates a file on the file system (open() returns with EEXIST).\n> > \n> > I've also tried just copying the structure but heap_open() relies on OID\n> > not relfilenode.\n> > \n> > I'm probably going about it the wrong way, but it strikes me that there\n> > needs to be a way to abstract the relfilenode from OID in the heap access \n> > code so that one can easily manipulate the heap on disk without having to \n> > play with OIDs. 
\n> > \n> > I would have included code examples/clearer description but the box I\n> > don't have access to the box I created the patches on atm =(.\n> > \n> > Gavin\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> \n\n", "msg_date": "Sat, 23 Feb 2002 12:26:12 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": "\nGavin, please continue working on this feature and continue discussion\nthe hackers list.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Thu, 11 Oct 2001, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Can I get a status on this?\n> > \n> > It's not gonna happen for 7.2, I think ...\n> > \n> > \t\t\tregards, tom lane\n> \n> I'd love it to go out with 7.2 but I've had no time to work on this patch\n> lately. The reason I need time is that, after having fiddled a fair bit,\n> I've decided that there really needs to be support for the creation of a\n> new relfilenode in the storage manager.\n> \n> The current patch works like this:\n> \n> Create new heap (heap_create())\n> Copy tuples from old heap to new heap via index scan\n> Form a new pg_class tuple\n> simple_heap_update()\n> update catalogue indices\n> rebuild existing indices\n> \n> This causes an overflow in the localbuf.c so I guess this is wrong\n> (included in patch on 24/sep) =). I've looked at various combinations of\n> this:\n> \n> memcpy() the old Relation into a new Relation, update smgrunlink() the old\n> Relation and heap_storage_create() the new relation. 
This dies because\n> smgrunlink only schedules the drop, where as heap_storage_create() actually \n> creates a file on the file system (open() returns with EEXIST).\n> \n> I've also tried just copying the structure but heap_open() relies on OID\n> not relfilenode.\n> \n> I'm probably going about it the wrong way, but it strikes me that there\n> needs to be a way to abstract the relfilenode from OID in the heap access \n> code so that one can easily manipulate the heap on disk without having to \n> play with OIDs. \n> \n> I would have included code examples/clearer description but the box I\n> don't have access to the box I created the patches on atm =(.\n> \n> Gavin\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Feb 2002 22:54:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CLUSTER TODO item" }, { "msg_contents": "I put some work i to this last night. Here's how it works.\n\n1) sanity checks\n2) save information (IndexInfo *) about indices on the relation being\nclustered.\n3) build an uncatalogued heap so that the API can be used without too much\nfuss and get a new relfilenode \n(heap_create_with_catalog(...,RELKIND_UNCATALOGED,...)\n4) scan the index, inserting tuples in index order into the uncatalogued\nrelation (ie, insert data into new relfilenode).\n5) update relcache/system with new relfilenode for clustered relation\n6) drop/create indices, given (2)\n7) smgrunlink() old relfilenode\n8) Remove uncatalogued relation from relation from relcache\n\nThere are two issues here. 
One is minor: At step three I should create a\nnew toast table entry, and then attach it to the clustered relation at\nstep five.\n\nThe second point is that postgres doesn't seem to like me creating\nuncataloged relations and then doing RelationForgetRelation(). I will look\na little harder though.\n\nGavin\n\n\nOn Sun, 24 Feb 2002, Bruce Momjian wrote:\n\n> \n> Gavin, please continue working on this feature and continue discussion\n> the hackers list.\n> \n> ---------------------------------------------------------------------------\n> \n> Gavin Sherry wrote:\n> > On Thu, 11 Oct 2001, Tom Lane wrote:\n> > \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Can I get a status on this?\n> > > \n> > > It's not gonna happen for 7.2, I think ...\n> > > \n> > > \t\t\tregards, tom lane\n> > \n> > I'd love it to go out with 7.2 but I've had no time to work on this patch\n> > lately. The reason I need time is that, after having fiddled a fair bit,\n> > I've decided that there really needs to be support for the creation of a\n> > new relfilenode in the storage manager.\n> > \n> > The current patch works like this:\n> > \n> > Create new heap (heap_create())\n> > Copy tuples from old heap to new heap via index scan\n> > Form a new pg_class tuple\n> > simple_heap_update()\n> > update catalogue indices\n> > rebuild existing indices\n> > \n> > This causes an overflow in the localbuf.c so I guess this is wrong\n> > (included in patch on 24/sep) =). I've looked at various combinations of\n> > this:\n> > \n> > memcpy() the old Relation into a new Relation, update smgrunlink() the old\n> > Relation and heap_storage_create() the new relation. 
This dies because\n> > smgrunlink only schedules the drop, where as heap_storage_create() actually \n> > creates a file on the file system (open() returns with EEXIST).\n> > \n> > I've also tried just copying the structure but heap_open() relies on OID\n> > not relfilenode.\n> > \n> > I'm probably going about it the wrong way, but it strikes me that there\n> > needs to be a way to abstract the relfilenode from OID in the heap access \n> > code so that one can easily manipulate the heap on disk without having to \n> > play with OIDs. \n> > \n> > I would have included code examples/clearer description but the box I\n> > don't have access to the box I created the patches on atm =(.\n> > \n> > Gavin\n> > \n> > \n> \n> \n\n", "msg_date": "Mon, 25 Feb 2002 15:14:53 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: CLUSTER TODO item" } ]
[ { "msg_contents": "Since there have been drastic CVS changes, the web page doc should REALLY\nbe updated...\n\nhttp://www.ca.postgresql.org/devel-corner/docs/postgres/cvs.html\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Sun, 23 Sep 2001 13:36:46 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "CVS changes" }, { "msg_contents": "> Since there have been drastic CVS changes, the web page doc should REALLY\n> be updated...\n\nWithdrawn, I notive Bruce made the changes, but I guess they havn't hit\nyet. My bad.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Sun, 23 Sep 2001 13:52:59 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> Since there have been drastic CVS changes, the web page doc should REALLY\n> be updated...\n\nI updated the SGML but they are not building nightly, I think. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 13:57:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "----- Original Message ----- \nFrom: bpalmer <bpalmer@crimelabs.net>\nSent: Sunday, September 23, 2001 1:36 PM\n\n> Since there have been drastic CVS changes, the web page doc should REALLY\n> be updated...\n> \n> http://www.ca.postgresql.org/devel-corner/docs/postgres/cvs.html\n\n... 
and CVSweb put back on-line too :)\n\nS.\n\n\n", "msg_date": "Sun, 23 Sep 2001 14:02:44 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: CVS changes" } ]
[ { "msg_contents": "I know we had expected beta last Friday, but I think the needed CVS\nchanges have delayed that. I know a few people are working on final\nadditions. Perhaps this Friday would be a good beta target.\n\nTom and I will be at the OSDN conference until Wednesday.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 13:46:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta time" }, { "msg_contents": "\nwith all the changes going on, we're most likely looking at Oct 1st,\nearliest ... things are startin to stabilize, but until that 18gig is\ninstalled next week, we still have th eproblems with updating ftp, unless\nPeter can clear out th e400+Meg in his acount? :)\n\nOn Sun, 23 Sep 2001, Bruce Momjian wrote:\n\n> I know we had expected beta last Friday, but I think the needed CVS\n> changes have delayed that. I know a few people are working on final\n> additions. Perhaps this Friday would be a good beta target.\n>\n> Tom and I will be at the OSDN conference until Wednesday.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Sun, 23 Sep 2001 22:39:32 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Beta time" }, { "msg_contents": "Marc G. 
Fournier writes:\n\n> with all the changes going on, we're most likely looking at Oct 1st,\n> earliest ... things are startin to stabilize, but until that 18gig is\n> installed next week, we still have th eproblems with updating ftp, unless\n> Peter can clear out th e400+Meg in his acount? :)\n\nUh, I only saw 180 MB. I reduced it to 113 now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\n", "msg_date": "Mon, 24 Sep 2001 18:41:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Beta time" }, { "msg_contents": "> Marc G. Fournier writes:\n> \n> > with all the changes going on, we're most likely looking at Oct 1st,\n> > earliest ... things are startin to stabilize, but until that 18gig is\n> > installed next week, we still have th eproblems with updating ftp, unless\n> > Peter can clear out th e400+Meg in his acount? :)\n> \n> Uh, I only saw 180 MB. I reduced it to 113 now.\n\nI got rid of my CVS copy, freeing 33MB.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 25 Sep 2001 21:40:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta time" } ]
[ { "msg_contents": "\nI take that back. I just updated this last night with the proper path. \nShould be OK tomorrow.\n\n> > Since there have been drastic CVS changes, the web page doc should REALLY\n> > be updated...\n> \n> I updated the SGML but they are not building nightly, I think. \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 23 Sep 2001 14:00:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "Bruce Momjian writes:\n\n> I take that back. I just updated this last night with the proper path.\n> Should be OK tomorrow.\n\nThe docs currently aren't building on *.postgresql.org because the\nsoftware is not installed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 23 Sep 2001 20:35:06 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\nSomeone want to refresh for me which software is required, as I'll get it\ninstalled ASAP ...\n\nOn Sun, 23 Sep 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n>\n> > I take that back. 
I just updated this last night with the proper path.\n> > Should be OK tomorrow.\n>\n> The docs currently aren't building on *.postgresql.org because the\n> software is not installed.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Sun, 23 Sep 2001 19:02:00 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "Marc G. Fournier writes:\n\n> Someone want to refresh for me which software is required, as I'll get it\n> installed ASAP ...\n\nThese would get us started:\n\ndevel/bison\ntextproc/docproj\ntextproc/openjade\n\nI tried building OpenJade on that server the other day by hand and got Bus\nerrors when I ran it. -- So no guarantees. ;-)\n\nBtw., this may be unrelated but I also notice that the last\npostgresql-{opt,test}-snapshot.tar.gz files on the ftp mirrors are from\nJuly 5. The others seem to be up to date.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 24 Sep 2001 01:53:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\n\ntry that ... all three are now installed ...\n\n\nOn Mon, 24 Sep 2001, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Someone want to refresh for me which software is required, as I'll get it\n> > installed ASAP ...\n>\n> These would get us started:\n>\n> devel/bison\n> textproc/docproj\n> textproc/openjade\n>\n> I tried building OpenJade on that server the other day by hand and got Bus\n> errors when I ran it. -- So no guarantees. ;-)\n>\n> Btw., this may be unrelated but I also notice that the last\n> postgresql-{opt,test}-snapshot.tar.gz files on the ftp mirrors are from\n> July 5. 
The others seem to be up to date.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\n", "msg_date": "Sun, 23 Sep 2001 22:38:08 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "I've built the docs on cvs.postgresql.org, but where are the ftp and www\nareas these days and how does one get stuff onto them?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 30 Sep 2001 18:03:33 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\n\njust about to be moved to the new server, now that the new 18gi drive has\nbeen installed ... plan on getting that done this afternoon ...\n\nOn Sun, 30 Sep 2001, Peter Eisentraut wrote:\n\n> I've built the docs on cvs.postgresql.org, but where are the ftp and www\n> areas these days and how does one get stuff onto them?\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n\n", "msg_date": "Sun, 30 Sep 2001 12:57:11 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> \n> \n> just about to be moved to the new server, now that the new 18gi drive has\n> been installed ... plan on getting that done this afternoon ...\n\nDon't rush. I am setting up my system to check the SGML docs every 15\nminutes and rebuild if necessary. Overnight builds are not frequent\nenough for people modifying the SGML files. 
This way they can commit\nchanges and check in fifteen minutes to see the changes in HTML.\n\nThey are now at:\n\n\thttp://candle.pha.pa.us/main/writings/pgsql/sgml\n\nI will add a link to the bottom of the developer's page.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 15:57:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\nCan someone help me get the 1/2 hours SGML build link on to the\ndeveloper's page. I can't find where that is located. I modified\nindex.html in the developers corner on www.ca.postgresql.org but it is\nnot showing:\n\n\thttp://developer.postgresql.org/index.php \n\n> \n> just about to be moved to the new server, now that the new 18gi drive has\n> been installed ... plan on getting that done this afternoon ...\n> \n> On Sun, 30 Sep 2001, Peter Eisentraut wrote:\n> \n> > I've built the docs on cvs.postgresql.org, but where are the ftp and www\n> > areas these days and how does one get stuff onto them?\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 16:42:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "\ndone already, ftp should be accessible as it was before ...\n\nOn Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> >\n> >\n> > just about to be moved to the new server, now that the new 18gi drive has\n> > been installed ... plan on getting that done this afternoon ...\n>\n> Don't rush. I am setting up my system to check the SGML docs every 15\n> minutes and rebuild if necessary. Overnight builds are not frequent\n> enough for people modifying the SGML files. This way they can commit\n> changes and check in fifteen minutes to see the changes in HTML.\n>\n> They are now at:\n>\n> \thttp://candle.pha.pa.us/main/writings/pgsql/sgml\n>\n> I will add a link to the bottom of the developer's page.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Sun, 30 Sep 2001 16:49:56 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n>\n> Can someone help me get the 1/2 hours SGML build link on to the\n> developer's page. I can't find where that is located. I modified\n> index.html in the developers corner on www.ca.postgresql.org but it is\n> not showing:\n>\n> \thttp://developer.postgresql.org/index.php\n\nWhat do you want added?\n\n>\n> >\n> > just about to be moved to the new server, now that the new 18gi drive has\n> > been installed ... 
plan on getting that done this afternoon ...\n> >\n> > On Sun, 30 Sep 2001, Peter Eisentraut wrote:\n> >\n> > > I've built the docs on cvs.postgresql.org, but where are the ftp and www\n> > > areas these days and how does one get stuff onto them?\n> > >\n> > > --\n> > > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> > >\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 17:31:12 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> \n> >\n> > Can someone help me get the 1/2 hours SGML build link on to the\n> > developer's page. I can't find where that is located. 
I modified\n> > index.html in the developers corner on www.ca.postgresql.org but it is\n> > not showing:\n> >\n> > \thttp://developer.postgresql.org/index.php\n> \n> What do you want added?\n\nI wanted this added at the bottom:\n\n<li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\nupdated every half-hour</a>\n\nI updated the www cvs copy and put it on the web site but it seems that\npage is now generated via PHP if I am seeing the URL correctly:\n\n\thttp://developer.postgresql.org/index.php\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 18:14:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> >\n> > >\n> > > Can someone help me get the 1/2 hours SGML build link on to the\n> > > developer's page. I can't find where that is located. 
I modified\n> > > index.html in the developers corner on www.ca.postgresql.org but it is\n> > > not showing:\n> > >\n> > > \thttp://developer.postgresql.org/index.php\n> >\n> > What do you want added?\n>\n> I wanted this added at the bottom:\n>\n> <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> updated every half-hour</a>\n\nIt's there.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 18:21:29 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n\n> >\n> >\n> > just about to be moved to the new server, now that the new 18gi drive has\n> > been installed ... plan on getting that done this afternoon ...\n>\n> Don't rush. I am setting up my system to check the SGML docs every 15\n> minutes and rebuild if necessary. Overnight builds are not frequent\n>\nWould it not be better to provide a means for developers to cause the rebuild on demand? A 15-minute wait doesn't seem convenient to me.\n\n\n>\n\n", "msg_date": "Mon, 1 Oct 2001 06:46:27 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> \n> > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > >\n> > > >\n> > > > Can someone help me get the 1/2 hours SGML build link on to the\n> > > > developer's page. I can't find where that is located. 
I modified\n> > > > index.html in the developers corner on www.ca.postgresql.org but it is\n> > > > not showing:\n> > > >\n> > > > \thttp://developer.postgresql.org/index.php\n> > >\n> > > What do you want added?\n> >\n> > I wanted this added at the bottom:\n> >\n> > <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> > updated every half-hour</a>\n> \n> It's there.\n\nI also need:\n\n<li><a \nhref=\"http://candle.pha.pa.us/main/writings/pgsql/sgml/build.log\">SGML\nbuild log</a>\n\nAgain, I got it into the cvs www but it doesn't appear on\nwww.ca.postgresql.org. Is there something I am doing wrong?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 19:04:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> \n> \n> > >\n> > >\n> > > just about to be moved to the new server, now that the new 18gi drive has\n> > > been installed ... plan on getting that done this afternoon ...\n> >\n> > Don't rush. I am setting up my system to check the SGML docs every 15\n> > minutes and rebuild if necessary. Overnight builds are not frequent\n> >\n> Would it not be better to provide a means for developers to cause the rebuild on demand? A 15-minute wait doesn't seem convenient to me.\n\nYep, but it takes 15 minutes to build anyway, so I figured I would check\nevery 15 minutes and adding another 15, that makes 1/2 hour. We don't\nhave a mechanism to build only a few html files. You have to do the\nwhole thing.\n\nSuggestions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 19:06:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> >\n> > > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > > >\n> > > > >\n> > > > > Can someone help me get the 1/2 hours SGML build link on to the\n> > > > > developer's page. I can't find where that is located. I modified\n> > > > > index.html in the developers corner on www.ca.postgresql.org but it is\n> > > > > not showing:\n> > > > >\n> > > > > \thttp://developer.postgresql.org/index.php\n> > > >\n> > > > What do you want added?\n> > >\n> > > I wanted this added at the bottom:\n> > >\n> > > <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> > > updated every half-hour</a>\n> >\n> > It's there.\n>\n> I also need:\n>\n> <li><a\n> href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml/build.log\">SGML\n> build log</a>\n\nAdded.\n\n> Again, I got it into the cvs www but it doesn't appear on\n> www.ca.postgresql.org. Is there something I am doing wrong?\n\nThat's because developers corner is gone. 
users lounge and the\nmain site are next to go.\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 19:21:56 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> > I also need:\n> >\n> > <li><a\n> > href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml/build.log\">SGML\n> > build log</a>\n> \n> Added.\n> \n> > Again, I got it into the cvs www but it doesn't appear on\n> > www.ca.postgresql.org. Is there something I am doing wrong?\n> \n> That's because developers corner is gone. users lounge and the\n> main site are next to go.\n\nOK, exactly how are changes made? Can only you make them?\n\nAlso, any idea why the link isn't working? It doesn't work here either.\nI thought you could put in a text file in any directory and it would\nwork but now I am unsure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 19:37:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > > I also need:\n> > >\n> > > <li><a\n> > > href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml/build.log\">SGML\n> > > build log</a>\n> >\n> > Added.\n> >\n> > > Again, I got it into the cvs www but it doesn't appear on\n> > > www.ca.postgresql.org. 
Is there something I am doing wrong?\n> >\n> > That's because developers corner is gone. users lounge and the\n> > main site are next to go.\n>\n> OK, exactly how are changes made? Can only you make them?\n\nFor now yes. There's too much going on to be letting more than me and\nMarc in there.\n\n> Also, any idea why the link isn't working? It doesn't work here either.\n> I thought you could put in a text file in any directory and it would\n> work but now I am unsure.\n\nWhat link?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 19:41:23 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> > > That's because developers corner is gone. users lounge and the\n> > > main site are next to go.\n> >\n> > OK, exactly how are changes made? Can only you make them?\n> \n> For now yes. There's too much going on to be letting more than me and\n> Marc in there.\n\nNo problem.\n\n> > Also, any idea why the link isn't working? It doesn't work here either.\n> > I thought you could put in a text file in any directory and it would\n> > work but now I am unsure.\n> \n> What link?\n\nThe new SGML log build output. I fixed it by renaming it to build.html\nand putting <HTML> and <PRE> tags. Would you rename the link from\nbuild.log to build.html. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 19:42:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > > > That's because developers corner is gone. users lounge and the\n> > > > main site are next to go.\n> > >\n> > > OK, exactly how are changes made? Can only you make them?\n> >\n> > For now yes. There's too much going on to be letting more than me and\n> > Marc in there.\n>\n> No problem.\n>\n> > > Also, any idea why the link isn't working? It doesn't work here either.\n> > > I thought you could put in a text file in any directory and it would\n> > > work but now I am unsure.\n> >\n> > What link?\n>\n> The new SGML log build output. I fixed it by renaming it to build.html\n> and putting <HTML> and <PRE> tags. Would you rename the link from\n> build.log to build.html. Thanks.\n\nAll set.\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 19:45:15 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Bruce Momjian writes:\n\n> Don't rush. I am setting up my system to check the SGML docs every 15\n> minutes and rebuild if necessary. Overnight builds are not frequent\n> enough for people modifying the SGML files.\n\nThis is news to me... 
Do you mean you commit stuff and then wait 15\nminutes to see how it looks?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 1 Oct 2001 01:52:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Mon, 1 Oct 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n>\n> > Don't rush. I am setting up my system to check the SGML docs every 15\n> > minutes and rebuild if necessary. Overnight builds are not frequent\n> > enough for people modifying the SGML files.\n>\n> This is news to me... Do you mean you commit stuff and then wait 15\n> minutes to see how it looks?\n\nYou mean you commit stuff before testing it locally?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 20:05:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Don't rush. I am setting up my system to check the SGML docs every 15\n> > minutes and rebuild if necessary. Overnight builds are not frequent\n> > enough for people modifying the SGML files.\n> \n> This is news to me... Do you mean you commit stuff and then wait 15\n> minutes to see how it looks?\n\nI have a cron job that does cvs update every 15 minutes and recreates\nthe HTML if needed.\n\nI know you can manually run it on individual SGML files but I don't see\na way to automate that. 
Do you?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 20:31:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> On Mon, 1 Oct 2001, Peter Eisentraut wrote:\n> \n> > Bruce Momjian writes:\n> >\n> > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > enough for people modifying the SGML files.\n> >\n> > This is news to me... Do you mean you commit stuff and then wait 15\n> > minutes to see how it looks?\n> \n> You mean you commit stuff before testing it locally?\n\nNot everyone has sgml tools on their local machine, do they?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 20:32:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > On Mon, 1 Oct 2001, Peter Eisentraut wrote:\n> >\n> > > Bruce Momjian writes:\n> > >\n> > > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > > enough for people modifying the SGML files.\n> > >\n> > > This is news to me... 
Do you mean you commit stuff and then wait 15\n> > > minutes to see how it looks?\n> >\n> > You mean you commit stuff before testing it locally?\n>\n> Not everyone has sgml tools on their local machine, do they?\n\nI understood it to be a requirement for doc development.\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 20:52:45 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> \n> > > On Mon, 1 Oct 2001, Peter Eisentraut wrote:\n> > >\n> > > > Bruce Momjian writes:\n> > > >\n> > > > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > > > enough for people modifying the SGML files.\n> > > >\n> > > > This is news to me... Do you mean you commit stuff and then wait 15\n> > > > minutes to see how it looks?\n> > >\n> > > You mean you commit stuff before testing it locally?\n> >\n> > Not everyone has sgml tools on their local machine, do they?\n> \n> I understood it to be a requirement for doc development.\n\nWe have lots of people who submit SGML changes with their code patches\nand they certainly don't have it installed. I only installed it in the\npast six months.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 20:53:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "Something is broken with anon cvs:\n\ncvs server: Updating pgsql/contrib/pgstattuple\ncvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgstattuple' (/projects/cvsroot/pgsql/contrib/pgstattuple/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgstattuple'\ncvs [server aborted]: read lock failed - giving up\n\n\tOleg\nOn Sun, 30 Sep 2001, Marc G. Fournier wrote:\n\n>\n> done already, ftp should be accessible as it was before ...\n>\n> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n>\n> > >\n> > >\n> > > just about to be moved to the new server, now that the new 18gi drive has\n> > > been installed ... plan on getting that done this afternoon ...\n> >\n> > Don't rush. I am setting up my system to check the SGML docs every 15\n> > minutes and rebuild if necessary. Overnight builds are not frequent\n> > enough for people modifying the SGML files. This way they can commit\n> > changes and check in fifteen minutes to see the changes in HTML.\n> >\n> > They are now at:\n> >\n> > \thttp://candle.pha.pa.us/main/writings/pgsql/sgml\n> >\n> > I will add a link to the bottom of the developer's page.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 1 Oct 2001 09:34:53 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "...\n> Not everyone has sgml tools on their local machine, do they?\n...\n\nBut everyone committing changes to docs does, or should have, sgml tools\non *their* machines. Right??\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 18:46:03 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> updated every half-hour</a>\n\nYa know, if I'm required to wait 12 hours for *any* source code build to\nbe available to my CVSup'd repository, why should we slap in a bunch of\nshort circuits for the docs? 
And if this is a valuable thing to do, why\nshort circuit the docs building procedure I've already got on the main\nmachines by pointing the web site off to some other machine?\n\nfwiw, one docs building procedure, either ~thomas's, or ~petere's new\none, or ~bruce's should be sufficient imho.\n\nSince we all seem to be heading off in different directions, we should\ntalk about why that is necessary and try to get headed somewhere\nproductively imho.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 18:54:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> ...\n> > Not everyone has sgml tools on their local machine, do they?\n> ...\n> \n> But everyone committing changes to docs does, or should have, sgml tools\n> on *their* machines. Right??\n\nFor a long time I didn't. I would make the edits and check them the\nnext day. I tried to make sure I had stuff balanced and most of the\nchanges were minor.\n\nSGML is not tough to download but it is tough to get all the pieces\nworking together.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 14:55:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> > <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> > updated every half-hour</a>\n> \n> Ya know, if I'm required to wait 12 hours for *any* source code build to\n> be available to my CVSup'd repository, why should we slap in a bunch of\n> short circuits for the docs? 
And if this is a valuable thing to do, why\n> short circuit the docs building procedure I've already got on the main\n> machines by pointing the web site off to some other machine?\n> \n> fwiw, one docs building procedure, either ~thomas's, or ~petere's new\n> one, or ~bruce's should be sufficient imho.\n> \n> Since we all seem to be heading off in different directions, we should\n> talk about why that is necessary and try to get headed somewhere\n> productively imho.\n\nI did it because the doc build was down and it was supposedly holding up\nbeta. I do think we need more frequent doc builds than 24 hours though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 14:58:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Not everyone has sgml tools on their local machine, do they?\n\n> But everyone committing changes to docs does, or should have, sgml tools\n> on *their* machines. Right??\n\nI think it's a bad idea to create such a policy; anything we do that\nraises the bar for docs contributions will just mean that we have less\nand worse documentation. 
Hasn't our past policy always been \"submit\nwhatever you want, even plain text, and we'll fix the markup later\"?\n\nIf you want to see the effects of such a policy, you need look no\nfurther than Jan, who still has not committed one line of documentation\nabout the pgstats views because he doesn't want to be bothered with\nSGML.\n\nIf Bruce wants to expend a lot of cycles on his machine to provide fast\nturnaround of docs changes, that's fine with me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 15:02:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS changes " }, { "msg_contents": "> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> >> Not everyone has sgml tools on their local machine, do they?\n> \n> > But everyone committing changes to docs does, or should have, sgml tools\n> > on *their* machines. Right??\n> \n> I think it's a bad idea to create such a policy; anything we do that\n> raises the bar for docs contributions will just mean that we have less\n> and worse documentation. Hasn't our past policy always been \"submit\n> whatever you want, even plain text, and we'll fix the markup later\"?\n> \n> If you want to see the effects of such a policy, you need look no\n> further than Jan, who still has not committed one line of documentation\n> about the pgstats views because he doesn't want to be bothered with\n> SGML.\n> \n> If Bruce wants to expend a lot of cycles on his machine to provide fast\n> turnaround of docs changes, that's fine with me.\n\nI also have the SGML build log with errors in _red_! Nifty feature.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 15:04:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\nVince, can I have write permission to /usr/local/www/www/html/docs on the\nnew web server machine so I can create symbolic links to the files in my\nhome directory?\n\n> On Mon, 1 Oct 2001, Peter Eisentraut wrote:\n> \n> > Bruce Momjian writes:\n> >\n> > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > enough for people modifying the SGML files.\n> >\n> > This is news to me... Do you mean you commit stuff and then wait 15\n> > minutes to see how it looks?\n> \n> You mean you commit stuff before testing it locally?\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 15:41:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> > >> Not everyone has sgml tools on their local machine, do they?\n> > > But everyone committing changes to docs does, or should have, sgml tools\n> > > on *their* machines. 
Right??\n> > I think it's a bad idea to create such a policy; anything we do that\n> > raises the bar for docs contributions will just mean that we have less\n> > and worse documentation. Hasn't our past policy always been \"submit\n> > whatever you want, even plain text, and we'll fix the markup later\"?\n\nCertainly! I've never said otherwise, didn't suggest a change in policy,\nand have always been committed to *doing* the markup to make the former\n\"submit anything\" happen. If the docs are broken for a while, or if we\nhave to work on the submitted patches before committing them, then we\nshould do that.\n\nBut docs don't magically happen without someone babysitting them at\nleast a little; I know that from experience and I know that Peter and\nothers have done a lot of work to make this happen too.\n\nRight now, folks are seeing that docs are not getting built, and are\nheading off in several directions. I *thought* I've been paying\nattention (but have been very busy so could have missed it) and I *know*\nthat lots of work is being done on the web site, things are being\nrearranged etc etc but for those of us who are peripherally involved\nwith pieces of the web site it would be *really* helpful to know what is\nplanned, where things are going to go, when it might happen, etc etc.\n\nI would like to resolve the 12 hour lag for CVSup and anoncvs service,\nand frankly until that is resolved we should accept that we have a 12\nhour turnaround on most changes. Coincidentally, that is the rate at\nwhich the docs are (or would be, if I could figure out where they should\ngo) currently built.\n\nLet's stop trying to each build a new wheel; we have plenty enough\nalready. 
There are fundamental breakages in the current servers, and\nuntil those are resolved I'm not happy that we are changing web site info\netc to suggest workarounds as permanent solutions.\n\nall imho of course ;)\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 20:10:50 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> Vince, can I have write permission to /usr/local/www/www/html/docs on the\n> new web server machine so I can create symbolic links to the files in my\n> home directory?\n\n??!!\n\nBruce, what is actually happening here? If we are changing procedures\nfor building docs, please try to have a discussion about it first. 
What\n> are you trying to accomplish that is anything but what has already been\n> implemented?\n> \n> If we are taking the role of \"central doc build process\" away from me,\n> istm that Peter E would be an appropriate choice for the replacement.\n> \n> The fundamental problem is not that there is no docs building procedure.\n> It is that the docs destination area disappeared several weeks ago and\n> is nowhere to be seen.\n\nNo, this is for the HTML version of my book and some of the articles I\nhave written. Doesn't relate to SGML at all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 16:57:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n\n> > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> >\n> >\n> > > >\n> > > >\n> > > > just about to be moved to the new server, now that the new 18gi drive has\n> > > > been installed ... plan on getting that done this afternoon ...\n> > >\n> > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > >\n> > Would it not be better to provide a means for developers to cause the rebuild on demand? A 15-minute wait doesn't seem convenient to me.\n>\n> Yep, but it takes 15 minutes to build anyway, so I figured I would check\n> very 15 minutes and adding another 15, that makes 1/2 hour. We don't\n> have a mechanism to build only a few html files. 
You have to do the\n> whole thing.\n>\n> Suggestions?\n\nI don't know enough about how it works (or doesn't), but the delay looks worse.\n\nAdd the delay for \"missing the bus\" and you're out to a 45-minute delay.\n\nThe need for on-demand is even greater, even something done crudely:\n\nIf a build's in process, flag the need.\n\nWhen the build completes, check if it has to be done again.\n\nI assume that updates aren't so frequent that you'd be constantly rebuilding, or so infrequently a missed rebuild would cause serious problems.\n\nPerhaps a way to check if a rebuild's in process so that if it's slower than usual a developer can see it's not forgotten (or who else is doing one).\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 07:00:31 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Mon, 1 Oct 2001, Thomas Lockhart wrote:\n\n> > Vince, can I have write permision to /usr/local/www/www/html/docs on the\n> > new web server machine so I can create symbolic links to the files in my\n> > home directory?\n>\n> ??!!\n>\n> Bruce, what is actually happening here? If we are changing procedures\n> for building docs, please try to have a discussion about it first. What\n> are you trying to accomplish that is anything but what has already been\n> implemented?\n>\n> If we are taking the role of \"central doc build process\" away from me,\n> istm that Peter E would be an appropriate choice for the replacement.\n>\n> The fundamental problem is not that there is no docs building procedure.\n> It is that the docs destination area disappeared several weeks ago and\n> is nowhere to be seen.\n\nUntil the dust settles the only two people with write access to either\nthe developer or regular website will be Marc and myself. I have enough\nto do trying to get mirroring straight, keep up with changes to the sites,\netc. that I really don't need anyone messing around. 
So Bruce, your\nrequest is going to be denied.\n\nTom, what's missing or out of place? If I can I'll make sure it's there.\nThis morning I noticed a number of things that I thought were there really\nweren't.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 20:38:52 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "...\n> Tom, what's missing or out of place? If I can I'll make sure it's there.\n> This morning I noticed a number of things that I thought were there really\n> weren't.\n\nWhat I was looking for was the place where development docs (built from\ndaily snapshots) would go. They used to be underneath devel-corner in a\ndoc(s ??) directory.\n\nI guess I would also expect to find, somewhere, the as-released docs for\ncurrent (and previous?) versions.\n\nI was looking the other day and didn't see any of the signs I would have\nexpected like html files from the docs build process.\n\nSince I don't feel like I understand what the new layout actually is, it\nwould be nice to hear where I would *expect* to find these things.\n\"devel-corner is history\", \"users-lounge is history\" lead me to believe\nthat structure is changing, but I don't know to what.\n\nThere are only a few places that I care about (the online docs area is\none of those). 
I'm happy to wait until new locations are available, but\na quick explanation of where that might be would give me a warm fuzzy\nthat I'm not falling behind and that I can predict whether the new\nlayout requires changes to my doc building scripts (as an example).\n\n - Thomas\n", "msg_date": "Tue, 02 Oct 2001 01:14:00 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> Until the dust settles the only two people with write access to either\n> the developer or regular website will be Marc and myself. I have enough\n> to do trying to get mirroring straight, keep up with changes to the sites,\n> etc. that I really don't need anyone messing around. So Bruce, your\n> request is going to be denied.\n\nAll my links are broken until it is fixed. Here are the links I need,\nexcept it is /home/momjian now:\n\nlrwxrwxrwx 1 momjian pgsql 54 May 22 2000 aw_pgsql_book -> /home/projects/pgsql/developers/momjian/aw_pgsql_book/\nlrwxrwxrwx 1 momjian pgsql 48 May 22 2000 booktips -> /home/projects/pgsql/developers/momjian/booktips\nlrwxrwxr-x 1 momjian pgsql 54 Jun 18 14:38 hw_performance -> /home/projects/pgsql/developers/momjian/hw_performance\nlrwxrwxr-x 1 momjian pgsql 56 Jun 18 14:55 internalpics.pdf -> /home/projects/pgsql/developers/momjian/internalpics.pdf\nlrwxrwxr-x 1 momjian pgsql 52 Jun 18 14:38 writing_apps -> /home/projects/pgsql/developers/momjian/writing_apps\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 21:31:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Thomas Lockhart wrote:\n\n> ...\n> > Tom, what's missing or out of place? 
If I can I'll make sure it's there.\n> > This morning I noticed a number of things that I thought were there really\n> > weren't.\n>\n> What I was looking for was the place where development docs (built from\n> daily snapshots) would go. They used to be underneath devel-corner in a\n> doc(s ??) directory.\n\ndevel-corner is now developer.postgresql.org These were the nitely\nbuilds, right? For a place to point, look to /docs in the developers\ntree.\n\n> I guess I would also expect to find, somewhere, the as-released docs for\n> current (and previous?) versions.\n\nHow wonderful, those are missing too. They're still in regular cvs,\nright? I'll get them tomorrow.\n\n> I was looking the other day and didn't see any of the signs I would have\n> expected like html files from the docs build process.\n>\n> Since I don't feel like I understand what the new layout actually is, it\n> would be nice to hear where I would *expect* to find these things.\n> \"devel-corner is history\", \"users-lounge is history\" lead me to believe\n> that structure is changing, but I don't know to what.\n\nyep. developers.postgresql.org will give you an idea as to how.\n\n> There are only a few places that I care about (the online docs area is\n> one of those). I'm happy to wait until new locations are available, but\n> a quick explanation of where that might be would give me a warm fuzzy\n> that I'm not falling behind and that I can predict whether the new\n> layout requires changes to my doc building scripts (as an example).\n\n/docs is created. 
whatever scripts are needed we can put them in place.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 22:38:09 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > > <li><a href=\"http://candle.pha.pa.us/main/writings/pgsql/sgml\">SGML docs\n> > > updated every half-hour</a>\n> >\n> > Ya know, if I'm required to wait 12 hours for *any* source code build to\n> > be available to my CVSup'd repository, why should we slap in a bunch of\n> > short circuits for the docs? And if this is a valuable thing to do, why\n> > short circuit the docs building procedure I've already got on the main\n> > machines by pointing the web site off to some other machine?\n> >\n> > fwiw, one docs building procedure, either ~thomas's, or ~petere's new\n> > one, or ~bruce's should be sufficient imho.\n> >\n> > Since we all seem to be heading off in different directions, we should\n> > talk about why that is necessary and try to get headed somewhere\n> > productively imho.\n>\n> I did it because the doc build was down and it was supposedly holding up\n> beta. I do think we need more frequent doc builds than 24 hours though.\n\nSince when does a doc build, or docs of any kind, hold up beta? docs are\n'fluid' up until a week before release, as they have always been ...\n\nlast I heard from petere, there were tools that need to be installed, and\nwere installed, on the server for the builds ... 
Peter, did I miss\nsomething in that, or was it just waiting for ftp.postgresql.org to be on\nthe new server?\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 23:40:09 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "...\n> Since when doe sa doc build, or docs of any kind, hold up beta? docs are\n> 'fluid' up until a week before release,as they have always been ...\n\nSure. I think that the consensus was that a docs snapshot *should* be\navailable at the beginning of beta, and there are other issues which\nseem to be showstoppers. All of which will be resolved soon I'm sure.\n\nafaik we would like to be in beta RSN, but I have a few more catalog\nchanges to make so if everything else were in place we would still want\nto wait a day or two.\n\nLots that would be uncovered in beta is being addressed now, and I sense\nan increase in the number of things being fixed and improved (a common\nsign that people are stepping up their efforts to support the release)\nso there is no need for a quick fix on any one thing.\n\n> last I heard from petere, there were tools that need to be installed, and\n> were installed, ont eh server for the builds ... Peter, did I miss\n> something in that, or was it just waiting for ftp.postgresql.org to be on\n> the new server?\n\nNot sure about that. I started to refresh the doc build procedure but\nstopped when I couldn't figure out where the docs would go afterwards. I\n*think* I now know where that would be, but we are not to the point\nwhere I can actually write files there. And if developer.postgresql.org\ndoesn't share disks with my home directory and the cvs repository, I'm\nlooking at having to do a few more tweaks to move these things around\nfrom whatever the best \"build machine\" would be.\n\nbtw, in the machine info I've had before today,\n\"developer.postgresql.org\" didn't show up. 
So I'll still claim to be\nconfused over what machines, aliases, and virtual machines are coming\nonline, how they relate to each other, what disks they will see, etc\netc. To quote an old advertisement down here, \"inquiring minds want to\nknow...\". ;)\n\n - Thomas\n", "msg_date": "Tue, 02 Oct 2001 04:04:17 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> > > Since we all seem to be heading off in different directions, we should\n> > > talk about why that is necessary and try to get headed somewhere\n> > > productively imho.\n> >\n> > I did it because the doc build was down and it was supposedly holding up\n> > beta. I do think we need more frequent doc builds than 24 hours though.\n> \n> Since when does a doc build, or docs of any kind, hold up beta? docs are\n> 'fluid' up until a week before release, as they have always been ...\n> \n> last I heard from petere, there were tools that need to be installed, and\n> were installed, on the server for the builds ... Peter, did I miss\n> something in that, or was it just waiting for ftp.postgresql.org to be on\n> the new server?\n\nTom said we shouldn't go beta without the docs building.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 00:59:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Since when does a doc build, or docs of any kind, hold up beta? 
docs are\n>> 'fluid' up until a week before release,as they have always been ...\n\n> Tom said we shouldn't go beta without the docs building.\n\nI'm not sure whether you're referring to me or Lockhart, but if the\nformer: I don't say that we must have a working automatic build\nprocess (though it's surely a high priority). What I do say is that\nwe must have a reasonably up-to-date version of the development docs\nposted at the canonical place for same. Else, beta testers will not\nknow what to test, and so the whole notion of a beta test period is\nrendered meaningless.\n\nIf the docs situation were the only thing holding up beta, I'd say\ndo a manual build and install and get on with it. But it seems we\nhave some other problems too, so Marc and Vince have some breathing\nroom to finish fixing the server configuration issues...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 01:55:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes " }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > Until the dust settles the only two people with write access to either\n> > the developer or regular website will be Marc and myself. I have enough\n> > to do trying to get mirroring straight, keep up with changes to the sites,\n> > etc. that I really don't need anyone messing around. So Bruce, your\n> > request is going to be denied.\n>\n> All my links are broken until it is fixed. 
Here are the links I need,\n> except it is /home/momjian now:\n>\n> lrwxrwxrwx 1 momjian pgsql 54 May 22 2000 aw_pgsql_book -> /home/projects\n> /pgsql/developers/momjian/aw_pgsql_book/\n> lrwxrwxrwx 1 momjian pgsql 48 May 22 2000 booktips -> /home/projects/pgsq\n> l/developers/momjian/booktips\n> lrwxrwxr-x 1 momjian pgsql 54 Jun 18 14:38 hw_performance -> /home/project\n> s/pgsql/developers/momjian/hw_performance\n> lrwxrwxr-x 1 momjian pgsql 56 Jun 18 14:55 internalpics.pdf -> /home/proje\n> cts/pgsql/developers/momjian/internalpics.pdf\n> lrwxrwxr-x 1 momjian pgsql 52 Jun 18 14:38 writing_apps -> /home/projects/\n> pgsql/developers/momjian/writing_apps\n\nMove these into a subdir under momjian and I'll point to that. Is the\ninternalpics.pdf different than the one on the developers site?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 05:24:41 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Mon, 1 Oct 2001, Thomas Lockhart wrote:\n\n> I would like to resolve the 12 hour lag for CVSup and anoncvs service,\n> and frankly until that is resolved we should accept that we have a 12\n> hour turnaround on most changes. Coincidentally, that is the rate at\n> which the docs are (or would be, if I could figure out where they\n> should go) currently built.\n\nIt should never have been 12hrs ... I had it set for 4 ... but, I have\ndrop'd it down to hourly effective this morning ...\n\n> Let's stop trying to each build a new wheel; we have plenty enough\n> already. 
There are fundamental breakages in the current servers, and\n> until those are resolve I'm not happy that we are changing web site\n> info etc to suggest workarounds as permanent solutions.\n\nAgreed ...\n\n\n", "msg_date": "Tue, 2 Oct 2001 08:36:15 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Mon, 1 Oct 2001, Vince Vielhaber wrote:\n\n> On Tue, 2 Oct 2001, Thomas Lockhart wrote:\n>\n> > ...\n> > > Tom, what's missing or out of place? If I can I'll make sure it's there.\n> > > This morning I noticed a number of things that I thought were there really\n> > > weren't.\n> >\n> > What I was looking for was the place where development docs (built from\n> > daily snapshots) would go. They used to be underneath devel-corner in a\n> > doc(s ??) directory.\n>\n> devel-corner is now developer.postgresql.org These were the nitely\n> builds, right? For a place to point, look to /docs in the developers\n> tree.\n\ndevelopers tree => /usr/local/www/developers\n\n\n", "msg_date": "Tue, 2 Oct 2001 08:38:44 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Mon, 1 Oct 2001, Thomas Lockhart wrote:\n>> I would like to resolve the 12 hour lag for CVSup and anoncvs service,\n\n> It should never have been 12hrs ... I had it set for 4 ... but, I have\n> drop'd it down to hourly effective this morning ...\n\nThat sounds like a good setting for anoncvs service. But I think Thomas\n(and any other committers who use cvsup) still need a cvsup server\nrunning on the master cvs machine. 
Even a 1-hour lag is too much when\nyou are trying to commit changes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 09:31:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS changes " }, { "msg_contents": "\nwill set that one up next ...\n\nOn Tue, 2 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > On Mon, 1 Oct 2001, Thomas Lockhart wrote:\n> >> I would like to resolve the 12 hour lag for CVSup and anoncvs service,\n>\n> > It should never have been 12hrs ... I had it set for 4 ... but, I have\n> > drop'd it down to hourly effective this morning ...\n>\n> That sounds like a good setting for anoncvs service. But I think Thomas\n> (and any other committers who use cvsup) still need a cvsup server\n> running on the master cvs machine. Even a 1-hour lag is too much when\n> you are trying to commit changes.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 2 Oct 2001 10:18:16 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes " }, { "msg_contents": "> On Mon, 1 Oct 2001, Bruce Momjian wrote:\n> \n> > > Until the dust settles the only two people with write access to either\n> > > the developer or regular website will be Marc and myself. I have enough\n> > > to do trying to get mirroring straight, keep up with changes to the sites,\n> > > etc. that I really don't need anyone messing around. So Bruce, your\n> > > request is going to be denied.\n> >\n> > All my links are broken until it is fixed. 
Here are the links I need,\n> > except it is /home/momjian now:\n> >\n> > lrwxrwxrwx 1 momjian pgsql 54 May 22 2000 aw_pgsql_book -> /home/projects\n> > /pgsql/developers/momjian/aw_pgsql_book/\n> > lrwxrwxrwx 1 momjian pgsql 48 May 22 2000 booktips -> /home/projects/pgsq\n> > l/developers/momjian/booktips\n> > lrwxrwxr-x 1 momjian pgsql 54 Jun 18 14:38 hw_performance -> /home/project\n> > s/pgsql/developers/momjian/hw_performance\n> > lrwxrwxr-x 1 momjian pgsql 56 Jun 18 14:55 internalpics.pdf -> /home/proje\n> > cts/pgsql/developers/momjian/internalpics.pdf\n> > lrwxrwxr-x 1 momjian pgsql 52 Jun 18 14:38 writing_apps -> /home/projects/\n> > pgsql/developers/momjian/writing_apps\n> \n> Move these into a subdir under momjian and I'll point to that. Is the\n> internalpics.pdf different than the one on the developers site?\n\nOK, new directory is /home/momjian/docs and links need to be created in\n/usr/local/www/www/html/docs.\n\ninternalpics.pdf is the same as the one you have on the developers site\nbut it should be a link to the copy in my home directory so I can update\nit regularly. I know it is kind of strange to have a symlink for the\nsame file from the main web site and the developers site to my home\ndirectory but the old URL for that is still used in other places so I\nneed both.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 11:53:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n> > On Mon, 1 Oct 2001, Bruce Momjian wrote:\n> >\n> > > > Until the dust settles the only two people with write access to either\n> > > > the developer or regular website will be Marc and myself. 
I have enough\n> > > > to do trying to get mirroring straight, keep up with changes to the sites,\n> > > > etc. that I really don't need anyone messing around. So Bruce, your\n> > > > request is going to be denied.\n> > >\n> > > All my links are broken until it is fixed. Here are the links I need,\n> > > except it is /home/momjian now:\n> > >\n> > > lrwxrwxrwx 1 momjian pgsql 54 May 22 2000 aw_pgsql_book -> /home/projects\n> > > /pgsql/developers/momjian/aw_pgsql_book/\n> > > lrwxrwxrwx 1 momjian pgsql 48 May 22 2000 booktips -> /home/projects/pgsq\n> > > l/developers/momjian/booktips\n> > > lrwxrwxr-x 1 momjian pgsql 54 Jun 18 14:38 hw_performance -> /home/project\n> > > s/pgsql/developers/momjian/hw_performance\n> > > lrwxrwxr-x 1 momjian pgsql 56 Jun 18 14:55 internalpics.pdf -> /home/proje\n> > > cts/pgsql/developers/momjian/internalpics.pdf\n> > > lrwxrwxr-x 1 momjian pgsql 52 Jun 18 14:38 writing_apps -> /home/projects/\n> > > pgsql/developers/momjian/writing_apps\n> >\n> > Move these into a subdir under momjian and I'll point to that. Is the\n> > internalpics.pdf different than the one on the developers site?\n>\n> OK, new directory is /home/momjian/docs and links need to be created in\n> /usr/local/www/www/html/docs.\n>\n> internalpics.pdf is the same as the one you have on the developers site\n> but it should be a link to the copy in my home directory so I can update\n> it regularly. I know it is kind of strange to have a symlink for the\n> same file from the main web site and the developers site to my home\n> directory but the old URL for that is still used in other places so I\n> need both.\n\nAll done. 
Let me know if anything's missing.\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 12:04:07 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> > OK, new directory is /home/momjian/docs and links need to be created in\n> > /usr/local/www/www/html/docs.\n> >\n> > internalpics.pdf is the same as the one you have on the developers site\n> > but it should be a link to the copy in my home directory so I can update\n> > it regularly. I know it is kind of strange to have a symlink for the\n> > same file from the main web site and the developers site to my home\n> > directory but the old URL for that is still used in other places so I\n> > need both.\n> \n> All done. Let me know if anything's missing.\n\nBeautiful.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 12:07:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> \n> \n> > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > >\n> > >\n> > > > >\n> > > > >\n> > > > > just about to be moved to the new server, now that the new 18gi drive has\n> > > > > been installed ... plan on getting that done this afternoon ...\n> > > >\n> > > > Don't rush. 
I am setting up my system to check the SGML docs every 15\n> > > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > >\n> > > Would it not be better to provide a means for developers to cause the rebuild on demand? A 15-minute wait doesn't seem convenient to me.\n> >\n> > Yep, but it takes 15 minutes to build anyway, so I figured I would check\n> > very 15 minutes and adding another 15, that makes 1/2 hour. We don't\n> > have a mechanism to build only a few html files. You have to do the\n> > whole thing.\n> >\n> > Suggestions?\n> \n> I don't know enough about how it works (or doesn't), but the\n> delay looks worse.\n> \n> Add the delay for \"missing the bus\" and you're out to a 45-minute\n> delay.\n\nTrue.\n\n> The need for on-demand is even greater, even something done\n> crudely:\n> \n> If a build's in process, flag the need.\n\nAdded. I realized that I could have two running at the same time, which\nwould be a disaster.\n\n> When the build completes, check if it has to be done again.\n\nGreat idea! Added.\n\n> I assume that updates aren't so frequent that you'd be constantly\n> rebuilding, or so infrequently a missed rebuild would cause\n> serious problems.\n\nYep.\n\n> Perhaps a way to check if a rebuild's in process so that if it's\n> slower than usual a developer can see it's not forgotten (or\n> who else is doing one).\n\nScript attached. I could poll cvs more frequently but it seems rude to\nhit the cvs server more frequently than every 15 minutes. If people\nwant it polled more frequently, and Marc doesn't mind, I can change the\npolling interval here.\n\nAlso, I added something that will show the files modified in the current\nbuild.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n:\ntrap \"rm -f /tmp/$$ /tmp/$$a /tmp/pgsql_sgml\" 0 1 2 3 15\n\n[ -f /tmp/pgsql_sgml ] && exit\ntouch /tmp/pgsql_sgml\n\ncd /u/src/gen/pgsql/sgml/pgsql/doc/src\n\nwhile pgcvs update . 2>&1 | grep -v '^?' >/tmp/$$a\ndo\n\techo \"Build: `date`\" >>build.dates\n\tcat /tmp/$$a >>build.dates\n\techo \"PostgreSQL CVS Documentation Build\" >/tmp/$$\n\techo \"==================================\\n\" >>/tmp/$$\n\techo \"Build started: `date`\\n\" >>/tmp/$$\n\tgmake 2>&1 | grep -v DTDDECL >> /tmp/$$\n\techo \"Build completed: `date`\\n\" >>/tmp/$$\n\techo \"Changes in this build:\" >>/tmp/$$\n\tcat /tmp/$$a >>/tmp/$$\n\techo \"\\nErrors appear in red.\\n\" >>/tmp/$$\n\tpipe sed 's;HTML.manifest:;HTML.manifest :;g' /tmp/$$\n\ttxt2html -m -s 100 -p 100 --title \"PostgreSQL CVS Docs built `date`\" \\\n\t--link /u/txt2html/txt2html.dict \\\n\t--append_head /u/txt2html/BODY /tmp/$$ >build.html\n\tpipe sed 's;^.*error.*$;<FONT COLOR=\"RED\">&</FONT>;' build.html\n\tpipe sed 's;^.*Error.*$;<FONT COLOR=\"RED\">&</FONT>;' build.html\n\tpipe sed 's;^.*:E:.*$;<FONT COLOR=\"RED\">&</FONT>;' build.html\n\trm -f /var/www/docs/main/writings/pgsql/sgml/*\n\tmv sgml/*.html build.html /var/www/docs/main/writings/pgsql/sgml\n\tcp sgml/*.css /var/www/docs/main/writings/pgsql/sgml\ndone", "msg_date": "Tue, 2 Oct 2001 12:14:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Thomas Lockhart wrote:\n\n> Not sure about that. I started to refresh the doc build procedure but\n> stopped when I couldn't figure out where the docs would go afterwards.\n> I *think* I now know where that would be, but we are not to the point\n> where I can actually write files there. 
And if\n> developer.postgresql.org doesn't share disks with my home directory\n> and the cvs repository, I'm looking at having to do a few more tweaks\n> to move these things around from whatever the best \"build machine\"\n> would be.\n\nIt's all on the same hard drive, on the same machine ...\n\n> btw, in the machine info I've had before today,\n> \"developer.postgresql.org\" didn't show up. So I'll still claim to be\n> confused over what machines, aliases, and virtual machines are coming\n> online, how they relate to each other, what disks they will see, etc\n> etc. To quote an old advertisement down here, \"inquiring minds want to\n> know...\". ;)\n\nokay, let's try and do a summary here that is bound to miss something,\nbut ...\n\nseveral months back, we moved to doing \"virtual machine hosting\", where\nour clients were provided an isolated environment for their domains ... it\nmeant that we could provide very specialized services as a client required\n(jakarta-tomcat for some, openacs for others, etc) without\nconflicts/sharing between them ...\n\na 'domain machine' doesn't have any partitions per se, so if you do a 'df\n.' from anywhere on cvs.postgresql.org, it will always be the same 'drive'\n...\n\nnow, if you look at http://www.postgresql.org, it is just a \"portal\" now to\nthe mirrors, and the mirrors have been tightened up so that any that is\nout of date by 48hrs is automatically drop'd ... Vince and gang are\nworking on making it into even more of a portal site than what it is now,\nwith links to articles about PgSQL, news about it, etc ... 
work in\nprogress ...\n\nThe 'physical sites' for PgSQL are moving to:\n\n\thttp://www.<cc>.postgresql.org\n\tftp://ftp.<cc>.postgresql.org\n\nOn the main server, where everything is pulled from,\nhttp://www.ca.postgresql.org points to /usr/local/www/www/html ...\n\nhttp://www.postgresql.org itself is on a different machine, as it doesn't\ncontain any \"real\" information, it's all going to be dynamically generated\nout of the database ...\n\nhttp://archives.<cc>.postgresql.org is the mailing list archives which is\nmhonarc based, and uses UDMSearch to search through them ... again,\neverything in there is automagically generated, but is pulled from the\nmain server ...\n\nhttp://fts.postgresql.org is still the OpenFTS search of the archives, so\nthere are two ways of searching the archives ...\n\nhttp://gborg.postgresql.org, still being worked on, is in its own 'virtual\nmachine', since I didn't want to worry about overlaps with CVS ...\n\nthen there is techdocs and developer ...\n\narchives was split from the main site, mainly to reduce the size of the\nmirror, to make it optional instead of required ... www.<cc> is going to\nbe, as far as I understand it, the 'released stuff', while developer.<cc>\nis going to be stuff like the TODO lists and views into the CVS repository\nand whatnot ...\n\nA lot of it is still a work in progress, and once we get the portal done,\nwe should have better 'directions' to give to ppl as to where they want to\ngo, but the 'central site' has just gotten too un-manageable from an\nadmin, as well as mirror, point of view ...\n\nAnything that anyone is going to need to modify will always be on the\ncentral server, which, once we get the new mirror stuff in place, will be\njust plain 'postgresql.org', but for now is 'cvs.postgresql.org' ... ftp\ndirectories are under ~ftp and web directories are all off of\n/usr/local/www ... 
those will not change ...\n\ndoes that help any, in the short term?\n\n", "msg_date": "Tue, 2 Oct 2001 12:36:05 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Tuesday 02 October 2001 12:36 pm, Marc G. Fournier wrote:\n> does that help any, in the short term?\nThanks for the roadmap. Now we know a cliff isn't around the next blind \nhairpin curve.... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 2 Oct 2001 12:42:25 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "\nUmmmm ... I thought we weren't going to do this, but were going to fix the\nproper build process?\n\nOn Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n> > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> >\n> >\n> > > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > > >\n> > > >\n> > > > > >\n> > > > > >\n> > > > > > just about to be moved to the new server, now that the new 18gi drive has\n> > > > > > been installed ... plan on getting that done this afternoon ...\n> > > > >\n> > > > > Don't rush. I am setting up my system to check the SGML docs every 15\n> > > > > minutes and rebuild if necessary. Overnight builds are not frequent\n> > > > >\n> > > > Would it not be better to provide a means for developers to cause the rebuild on demand? A 15-minute wait doesn't seem convenient to me.\n> > >\n> > > Yep, but it takes 15 minutes to build anyway, so I figured I would check\n> > > very 15 minutes and adding another 15, that makes 1/2 hour. We don't\n> > > have a mechanism to build only a few html files. 
You have to do the\n> > > whole thing.\n> > >\n> > > Suggestions?\n> >\n> > I don't know enough about how it works (or doesn't), but the\n> > delay looks worse.\n> >\n> > Add the delay for \"missing the bus\" and you're out to a 45-minute\n> > delay.\n>\n> True.\n>\n> > The need for on-demand is even greater, even something done\n> > crudely:\n> >\n> > If a build's in process, flag the need.\n>\n> Added. I realized that I could have two running at the same time, which\n> would be a disaster.\n>\n> > When the build completes, check if it has to be done again.\n>\n> Great idea! Added.\n>\n> > I assume that updates aren't so frequent that you'd be constantly\n> > rebuilding, or so infrequently a missed rebuild would cause\n> > serious problems.\n>\n> Yep.\n>\n> > Perhaps a way to check if a rebuild's in process so that if it's\n> > slower than usual a developer can see it's not forgotten (or\n> > who else is doing one).\n>\n> Script attached. I could poll cvs more frequently but it seems rude to\n> hit the cvs server more frequently than every 15 minutes. If people\n> want it polled more frequently, and Marc doesn't mind, I can change the\n> polling interval here.\n>\n> Also, I added something that will show the files modified in the current\n> build.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Tue, 2 Oct 2001 12:55:26 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "> \n> Ummmm ... I thought we weren't going to do this, but were going to fix the\n> proper build process?\n\nWell, until it works I can fiddle with it here. What will the future\nbuild interval be? 
Are people OK with that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 12:57:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Lamar Owen wrote:\n\n> On Tuesday 02 October 2001 12:36 pm, Marc G. Fournier wrote:\n> > does that help any, in the short term?\n> Thanks for the roadmap. Now we know a cliff isn't around the next blind\n> hairpin curve.... :-)\n\nShhhhh! :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 12:59:45 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Vince, can you update:\n\n\t/usr/local/www/developer/TODO/docs/cvs.html\n\nto use the right CVS locations or give me permission to modify it.\n\n\n> On Tue, 2 Oct 2001, Lamar Owen wrote:\n> \n> > On Tuesday 02 October 2001 12:36 pm, Marc G. Fournier wrote:\n> > > does that help any, in the short term?\n> > Thanks for the roadmap. Now we know a cliff isn't around the next blind\n> > hairpin curve.... :-)\n> \n> Shhhhh! 
:)\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 13:33:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Bruce Momjian writes:\n\n> I know you can manually run it on individual SGML files but I don't see\n> a way to automate that. Do you?\n\nI don't because there isn't. For one, if a link points to some other file\nyou lose.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 2 Oct 2001 21:03:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n> Vince, can you update:\n>\n> \t/usr/local/www/developer/TODO/docs/cvs.html\n>\n> to use the right CVS locations or give me permission to modify it.\n\nWhere is the new one hiding?\n\n>\n>\n> > On Tue, 2 Oct 2001, Lamar Owen wrote:\n> >\n> > > On Tuesday 02 October 2001 12:36 pm, Marc G. Fournier wrote:\n> > > > does that help any, in the short term?\n> > > Thanks for the roadmap. 
Now we know a cliff isn't around the next blind\n> > > hairpin curve.... :-)\n> >\n> > Shhhhh! :)\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 15:29:37 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> On Tue, 2 Oct 2001, Bruce Momjian wrote:\n> \n> > Vince, can you update:\n> >\n> > \t/usr/local/www/developer/TODO/docs/cvs.html\n> >\n> > to use the right CVS locations or give me permission to modify it.\n> \n> Where is the new one hiding?\n\nUh, we are not building SGML yet. Here is the HTML file that is current\nfrom my machine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nThe CVS RepositoryPostgreSQL 7.2 DocumentationPrevNextAppendix DG1. 
The CVS Repository\n\nTable of Contents\nDG1.1. Getting The Source Via Anonymous CVS\nDG1.2. CVS Tree Organization\nDG1.3. Getting The Source Via CVSup\n\n The Postgres source code is stored and managed using the\n CVS code management system.\n At least two methods,\n anonymous CVS and CVSup, \n are available to pull the CVS code tree from the \n Postgres server to your local machine.\n\nDG1.1. Getting The Source Via Anonymous CVS\n\n If you would like to keep up with the current sources on a regular\n basis, you can fetch them from our CVS server \n and then use CVS to\n retrieve updates from time to time.\n Anonymous CVS You will need a local copy of CVS \n (Concurrent Version Control System), which you can get from\n http://www.cyclic.com/ or\n any GNU software archive site. \n We currently recommend version 1.10 (the most recent at the time\n of writing). Many systems have a recent version of \n cvs installed by default.\n Do an initial login to the CVS server:\n\n $ cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n \n\n You will be prompted for a password; just press ENTER.\n You should only need to do this once, since the password will be\n saved in .cvspass in your home directory.\n Fetch the Postgres sources:\n cvs -z3 -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co -P pgsql\n \n\n which installs the Postgres sources into a \n subdirectory pgsql\n of the directory you are currently in.\n\n Note: If you have a fast link to the Internet, you may not need\n -z3, which instructs\n CVS to use gzip compression for transferred data. But\n on a modem-speed link, it's a very substantial win.\n \n This initial checkout is a little slower than simply downloading\n a tar.gz file; expect it to take 40 minutes or so if you\n have a 28.8K modem. The advantage of\n CVS\n doesn't show up until you want to update the file set later on.\n Whenever you want to update to the latest CVS sources,\n cd into\n the pgsql subdirectory, and issue\n $ cvs -z3 update -d -P\n \n\n This will fetch only the changes since the last time you updated.\n You can update in just a couple of minutes, typically, even over\n a modem-speed line.\n You can save yourself some typing by making a file .cvsrc\n in your home directory that contains\n\n cvs -z3\nupdate -d -P\n \n\n This supplies the -z3 option to all cvs commands, and the\n -d and -P options to cvs update. Then you just have\n to say\n $ cvs update\n \n\n to update your files.\n Caution Some older versions of CVS have a bug that\n causes all checked-out files to be stored world-writable in your\n directory. If you see that this has happened, you can do something like\n $ chmod -R go-w pgsql\n \n to set the permissions properly.\n This bug is fixed as of \n CVS version 1.9.28.\n CVS can do a lot of other things,\n such as fetching prior revisions\n of the Postgres sources\n rather than the latest development version.\n For more info consult the manual that comes with\n CVS, or see the online\n documentation at\n http://www.cyclic.com/.\n\nPrev: For the Programmer | Home | Up | Next: CVS Tree Organization", "msg_date": "Tue, 2 Oct 2001 15:34:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n> > On Tue, 2 Oct 2001, Bruce Momjian wrote:\n> >\n> > > Vince, can you update:\n> > >\n> > > \t/usr/local/www/developer/TODO/docs/cvs.html\n> > >\n> > > to use the right CVS locations or give me permission to modify it.\n> >\n> > Where is the new one hiding?\n>\n> Uh, we are not building SGML yet. 
Here is the HTML file that is current\n> from my machine.\n\nGot it and it's installed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 17:18:27 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "Marc G. Fournier writes:\n\n> last I heard from petere, there were tools that need to be installed, and\n> were installed, ont eh server for the builds ... Peter, did I miss\n> something in that, or was it just waiting for ftp.postgresql.org to be on\n> the new server?\n\nDocs build fine. There just wasn't a place to put them until a few days\nago, plus I didn't want to put them there because I'm not the one in\ncharge of this. AFAICT, Thomas Lockhart should be able to use his old\nscripted setup with a path change or three as already discussed.\n\nThomas, you will probably want to get some newer DSSSL stylesheets rather\nthan the ones installed under /usr/local/...; you can find tarballs for a\nrecent release or two at or near ~petere/PGDOC/{share|src} (forget where).\nUse DOCBOOKSTYLE = <path> in Makefile.custom, or in the environment before\nconfigure, as usual.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 3 Oct 2001 00:14:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS changes" }, { "msg_contents": "> will set that one up next ...\n\nGreat! Thanks...\n\n - Thomas\n\n> > That sounds like a good setting for anoncvs service. 
But I think Thomas\n> (and any other committers who use cvsup) still need a cvsup server\n> running on the master cvs machine. Even a 1-hour lag is too much when\n> you are trying to commit changes.\n", "msg_date": "Wed, 03 Oct 2001 01:54:15 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Tue, 2 Oct 2001, Bruce Momjian wrote:\n\n\n> > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> >\n> >\n> > > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> Script attached. I could poll cvs more frequently but it seems rude to\n> hit the cvs server more frequently than every 15 minutes. If people\n> want it polled more frequently, and Marc doesn't mind, I can change the\n> polling interval here.\n>\n> Also, I added something that will show the files modified in the current\n> build.\n\n\nI'm not very familiar with the administration of CVS - if it has a means to\nrun a script when something's updated, maybe you could flag the need to rebuild\nautomatically.\n\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 07:36:15 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: CVS changes" }, { "msg_contents": "On Thu, 4 Oct 2001, John Summerfield wrote:\n\n> On Tue, 2 Oct 2001, Bruce Momjian wrote:\n>\n>\n> > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > >\n> > >\n> > > > > On Sun, 30 Sep 2001, Bruce Momjian wrote:\n> > Script attached. I could poll cvs more frequently but it seems rude to\n> > hit the cvs server more frequently than every 15 minutes. 
If people\n> > want it polled more frequently, and Marc doesn't mind, I can change the\n> > polling interval here.\n> >\n> > Also, I added something that will show the files modified in the current\n> > build.\n>\n>\n> I'm not very familiar with the administration of CVS - if it has a means to\n> run a script when something's updated, maybe you could flag the need to rebuild\n> automatically.\n\nIt has that ability, but only on the server itself ... bruce has taken it\nupon himself to do this on his own machine, while the base problem is\nworked on, which is moving everything over to the new server ...\n\nIt's had its hiccups, but considering we really haven't moved anything\nanywhere in >5 years now, and considering we are not in beta yet, we've\ndone a decent job of getting problems fixed as soon as they've been\nreported ...\n\n", "msg_date": "Thu, 4 Oct 2001 12:08:07 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS changes" } ]
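The rebuild strategy the thread above converges on — if a build is in progress, flag the need; when the build completes, check whether it has to be done again — can be sketched in portable shell. This is an illustrative sketch only, not Bruce's actual script: the `do_build` placeholder and the `/tmp/docbuild.*` file names are made up for the example.

```shell
#!/bin/sh
# Sketch of the rebuild-coalescing pattern discussed above:
# - any caller may request a build at any time;
# - only one build runs at once (the lock directory);
# - requests arriving mid-build are not lost: they set the flag,
#   and the running build loops until the flag stays clear.
LOCK=/tmp/docbuild.lock
FLAG=/tmp/docbuild.needed

do_build() {
    # Placeholder for the real work (Bruce's script runs gmake over
    # the SGML sources and post-processes the output with txt2html).
    echo "building docs at $(date)"
}

request_build() {
    touch "$FLAG"                       # record that a rebuild is wanted
    if mkdir "$LOCK" 2>/dev/null; then  # we are the only builder
        while [ -e "$FLAG" ]; do
            rm -f "$FLAG"               # consume this request...
            do_build                    # ...any new ones re-set the flag
        done
        rmdir "$LOCK"
    fi                                  # else: the running build sees the flag
}
```

Called from cron every N minutes, or from a CVS commit hook as John suggests, this coalesces a burst of commits into a single rebuild instead of queueing one build per change.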
[ { "msg_contents": "Hello (mainly developer) folks!\n\nProbably Kevin really found a bug.\n\nWhen I saw his words in $50, I immediately started to look around his\nproblem... You probably don't think that as a student here, in Hungary I\nlive half a month for $50 :-))))\n\nSo I simplified his given schema as much as I needed, and found out\nwhat exactly the problem is.\n\n(Kevin, if you badly need a workaround, I can give you one,\nbut quite an ugly one.)\n\nThe problem is: when updating a row in an ancestor table,\nwhich really belongs to a child, there's something wrong\nwith the CHECK system.\n\nHere's the simple schema, producing the error:\n\n----------------\n\ndrop table child;\ndrop table ancestor;\n\ncreate table ancestor (\n node_id int4,\n a int4\n);\n\ncreate table child (\n b int4 NOT NULL DEFAULT 0\n CHECK ( b = 0 OR b = 1)\n) inherits (ancestor);\n\ninsert into ancestor values (3,4);\ninsert into child values (5,6,1);\n\nupdate ancestor set a=8 where node_id=5;\n\n----------------\n\nIf one leaves out the CHECK condition, the UPDATE\nworks just fine, and __the final result meets the\ncheck condition__.\n\nSo it seems to me that the system\n1. either tries to check the CHECK condition of the child on the\n ancestor\n2. or simply checks it at the wrong time.\n\n\n\n\nBest regards,\nBaldvin\n\n\n", "msg_date": "Mon, 24 Sep 2001 01:55:38 +0200 (MEST)", "msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>", "msg_from_op": true, "msg_subject": "Bug?: Update on ancestor for a row of a child" }, { "msg_contents": "> The problem is: when updating a row in an ancestor table,\n> which really belongs to a child, there's something wrong\n> with the CHECK system.\n\nWell, I believe you found one minor problem. The bigger one is still\nlurking in the shadows though. 
To duplicate it, take my previous schema,\nand add \n\n lastlog TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, \n\nto the users table, between pass_hash and logged_in.\n\nAfter doing so, you'll find that postgres actually crashes when you try\nto insert a vote into the uservote table. That's the one that has me\nlooking at the costs involved with migrating to Oracle.\n\n-Kevin\n", "msg_date": "Mon, 24 Sep 2001 02:51:50 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": false, "msg_subject": "Re: Bug?: Update on ancestor for a row of a child" }, { "msg_contents": "Kevin,\n\n> After doing so, you'll find that postgres actually crashes when you\n> try\n> to insert a vote into the uservote table. That's the one that has me\n> looking at the costs involved with migrating to Oracle.\n\nAnd you think that Oracle is entirely free of bugs? ;-)\n\nAt least here, you can get a message to the actual database developers.\n\nStill, I understand your frustration. \n\n-Josh\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco", "msg_date": "Mon, 24 Sep 2001 08:11:59 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Bug?: Update on ancestor for a row of a child" }, { "msg_contents": "> And you think that Oracle is entirely free of bugs? ;-)\n\nYes, but they'd be exciting technology-oriented e-business enabled bugs!\n\n> Still, I understand your frustration. \n\nThanks... It's just frustrating that the bug is on something so basic,\nwhich makes it both hard to code around and hard for me to delve into the\npostgres source and fix. 
I spent a few hours tracing source before finally\nconceding the point that it takes more than a few hours to understand\npostgres internals well enough to fix a major bug.\n\nFor development purposes I've just removed all the CHECK constraints \nfrom my child tables, and I'm hoping some genius will solve the problem\nby the time I'm looking to deploy.\n\n-Kevin Way\n", "msg_date": "Mon, 24 Sep 2001 16:11:48 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": false, "msg_subject": "Re: Bug?: Update on ancestor for a row of a child" } ]
[ { "msg_contents": "It may be just me, or I am grossly misunderstanding syntax of outer joins,\nbut I see that plans for my queries are different depending on how I place\njoin conditions and sometimes even on order of the tables.\n\nBasically, if I mix ANSI-syntax outer joins (a left outer join b on\na.id=b.id) and \"where-syntax\" joins (from a,b where a.id=b.id) in the same\nquery, things get strange.\n\nExample:\n1:\nexplain select * from customers c,orders o left outer join adsl_orders ao\non ao.order_id=o.order_id\nwhere c.cust_id=o.cust_id\nand c.cust_id=152\n\n\nNested Loop (cost=94.23..577.47 rows=2 width=290)\n -> Index Scan using customers_pkey on customers c (cost=0.00..2.02\nrows=1 width=125)\n -> Materialize (cost=501.65..501.65 rows=5904 width=165)\n -> Hash Join (cost=94.23..501.65 rows=5904 width=165)\n -> Seq Scan on orders o (cost=0.00..131.04 rows=5904\nwidth=58)\n -> Hash (cost=86.18..86.18 rows=3218 width=107)\n -> Seq Scan on adsl_orders ao (cost=0.00..86.18\nrows=3218 width=107)\n\nQuery 2:\n\nexplain select * from customers c join orders o on c.cust_id=o.cust_id\nleft outer join adsl_orders ao on ao.order_id=o.order_id\nwhere c.cust_id=152\n\nNested Loop (cost=0.00..9.30 rows=2 width=290)\n -> Nested Loop (cost=0.00..5.06 rows=2 width=183)\n -> Index Scan using customers_pkey on customers c\n(cost=0.00..2.02 rows=1 width=125)\n -> Index Scan using orders_idx1 on orders o (cost=0.00..3.03\nrows=1 width=58)\n -> Index Scan using adsl_orders_pkey on adsl_orders ao\n(cost=0.00..2.02 rows=1 width=107)\n\nTo me, both queries seem exactly identical in meaning, and should generate\nthe same plans. However, in my experience, if I use outer join anywhere in\nthe query, I must use \"JOIN\" syntax to join all other tables as well,\notherwise, my query plans are _extremely_ slow.\n\nany hints? 
Or am I grossly misunderstanding outer join semantics?\n\n-alex\n\n", "msg_date": "Sun, 23 Sep 2001 21:29:11 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "outer joins strangeness" }, { "msg_contents": "On Sun, 23 Sep 2001, Alex Pilosov wrote:\n\n> It may be just me, or I am grossly misunderstanding syntax of outer joins,\n> but I see that plans for my queries are different depending on how I place\n> join conditions and sometimes even on order of the tables.\n> \n> Example:\n> 1:\n> explain select * from customers c,orders o left outer join adsl_orders ao\n> on ao.order_id=o.order_id\n> where c.cust_id=o.cust_id\n> and c.cust_id=152\n> \n> \n> Nested Loop (cost=94.23..577.47 rows=2 width=290)\n> -> Index Scan using customers_pkey on customers c (cost=0.00..2.02\n> rows=1 width=125)\n> -> Materialize (cost=501.65..501.65 rows=5904 width=165)\n> -> Hash Join (cost=94.23..501.65 rows=5904 width=165)\n> -> Seq Scan on orders o (cost=0.00..131.04 rows=5904\n> width=58)\n> -> Hash (cost=86.18..86.18 rows=3218 width=107)\n> -> Seq Scan on adsl_orders ao (cost=0.00..86.18\n> rows=3218 width=107)\n> \n> Query 2:\n> \n> explain select * from customers c join orders o on c.cust_id=o.cust_id\n> left outer join adsl_orders ao on ao.order_id=o.order_id\n> where c.cust_id=152\n> \n> Nested Loop (cost=0.00..9.30 rows=2 width=290)\n> -> Nested Loop (cost=0.00..5.06 rows=2 width=183)\n> -> Index Scan using customers_pkey on customers c\n> (cost=0.00..2.02 rows=1 width=125)\n> -> Index Scan using orders_idx1 on orders o (cost=0.00..3.03\n> rows=1 width=58)\n> -> Index Scan using adsl_orders_pkey on adsl_orders ao\n> (cost=0.00..2.02 rows=1 width=107)\n> \n> To me, both queries seem exactly identical in meaning, and should generate\n> the same plans. 
However, in my experience, if I use outer join anywhere in\n> the query, I must use \"JOIN\" syntax to join all other tables as well,\n> otherwise, my query plans are _extremely_ slow.\n\nPostgres treats join syntax as an explicit definition of what order to\njoins in. So, I'd guess it sees the first as: do the LOJ and then join\nthat to the separate table. \n\nAnd for right outer join (for example), those two queries would not\nbe equivalent if I read the ordering correctly. The former syntax\nwould mean outer first and then the inner, whereas the second would\nbe inner first then the outer, and that could have different results.\n\n", "msg_date": "Sun, 23 Sep 2001 21:57:11 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: outer joins strangeness" }, { "msg_contents": "On Sun, 23 Sep 2001, Stephan Szabo wrote:\n\n> On Sun, 23 Sep 2001, Alex Pilosov wrote:\n> \n> > It may be just me, or I am grossly misunderstanding syntax of outer joins,\n> > but I see that plans for my queries are different depending on how I place\n> > join conditions and sometimes even on order of the tables.\n> > \n> > Example:\n> > 1:\n> > explain select * from customers c,orders o left outer join adsl_orders ao\n> > on ao.order_id=o.order_id\n> > where c.cust_id=o.cust_id\n> > and c.cust_id=152\n<snip>\n> > \n> > explain select * from customers c join orders o on c.cust_id=o.cust_id\n> > left outer join adsl_orders ao on ao.order_id=o.order_id\n> > where c.cust_id=152\n\n> Postgres treats join syntax as an explicit definition of what order to\n> joins in. So, I'd guess it sees the first as: do the LOJ and then join\n> that to the separate table. 
\nYeah, I figure that's how it sees it, but that's pretty stupid from\nperformance reasons :P)\n\nIt _should_ realize that left outer join only constricts join order\nbetween two tables in outer join, and joins to all other tables should\nstill be treated normally.\n\nI'm going to CC this to -hackers, maybe someone will shed a light on the\ninternals of this. \n\n> And for right outer join (for example), those two queries would not\n> be equivalent if I read the ordering correctly. The former syntax\n> would mean outer first and then the inner, whereas the second would\n> be inner first then the outer, and that could have different results.\nTrue. But this is not right outer join, its a left outer join...:)\n\nPostgres should understand that left outer join does not constrict join\norder...\n\n-alex\n\n", "msg_date": "Mon, 24 Sep 2001 01:09:31 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [SQL] outer joins strangeness" }, { "msg_contents": "On Mon, 24 Sep 2001, Alex Pilosov wrote:\n\n> On Sun, 23 Sep 2001, Stephan Szabo wrote:\n> \n> > On Sun, 23 Sep 2001, Alex Pilosov wrote:\n> > \n> > Postgres treats join syntax as an explicit definition of what order to\n> > joins in. So, I'd guess it sees the first as: do the LOJ and then join\n> > that to the separate table. \n> Yeah, I figure that's how it sees it, but that's pretty stupid from\n> performance reasons :P)\n>\n> It _should_ realize that left outer join only constricts join order\n> between two tables in outer join, and joins to all other tables should\n> still be treated normally.\n(see below)\n> \n> I'm going to CC this to -hackers, maybe someone will shed a light on the\n> internals of this. \n> \n> > And for right outer join (for example), those two queries would not\n> > be equivalent if I read the ordering correctly. 
The former syntax\n> > would mean outer first and then the inner, whereas the second would\n> > be inner first then the outer, and that could have different results.\n> True. But this is not right outer join, its a left outer join...:)\n> \n> Postgres should understand that left outer join does not constrict join\n> order...\n\nBut it can. If your condition was a joining between the other table\nand the right side of the left outer join, you'd have the same condition\nas a right outer join and the left side. The real condition I think\nis that you can join a non-explicitly joined table to the <x> side of an\n<x> outer join before the outer join but not to the other side.\n\n", "msg_date": "Mon, 24 Sep 2001 09:07:31 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] outer joins strangeness" }, { "msg_contents": "[moved to hackers]\n\nOn Mon, 24 Sep 2001, Stephan Szabo wrote:\n\n> > Postgres should understand that left outer join does not constrict join\n> > order...\n> \n> But it can. If your condition was a joining between the other table\n> and the right side of the left outer join, you'd have the same condition\n> as a right outer join and the left side. The real condition I think\n> is that you can join a non-explicitly joined table to the <x> side of an\n> <x> outer join before the outer join but not to the other side.\nYes yes. Maybe I was imprecise. Right join and left join are the same,\nonly the ordering is different. Lets call the table that will always be\nincluded in a join a \"complete\" table. \n\nThen, joins should not impose join order on \"complete\" table. Of course,\njoins against 'incomplete' table must be done only after outer join is\ndone.\n\nAnyone who can actually fix it? 
:)\n-alex\n\n\n", "msg_date": "Mon, 24 Sep 2001 13:00:23 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [SQL] outer joins strangeness" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I'm going to CC this to -hackers, maybe someone will shed a light on the\n> internals of this. \n\nIt's not unintentional. See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Sep 2001 00:19:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] outer joins strangeness " } ]
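Stephan's "real condition" above — a third table may be joined to the preserved side of a LEFT JOIN before the outer join, but not to the nullable side — can be checked with any SQL engine. A minimal sketch using Python's sqlite3 with toy tables (a/b/c stand in for the roles of customers/orders/adsl_orders; this is not Alex's actual schema):

```python
import sqlite3

def join_order_demo():
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE a(id INTEGER);  -- preserved side of the LEFT JOIN
        CREATE TABLE b(id INTEGER);  -- nullable side
        CREATE TABLE c(id INTEGER);  -- third table, joined on b's column
        INSERT INTO a VALUES (1), (2);
        INSERT INTO b VALUES (1);
        INSERT INTO c VALUES (1);
    """)
    # Outer join first, then an inner join against the nullable side:
    # the b.id = c.id predicate rejects the row where b came up NULL.
    outer_first = [r[0] for r in con.execute(
        "SELECT a.id FROM a LEFT JOIN b ON a.id = b.id"
        " JOIN c ON b.id = c.id ORDER BY a.id")]
    # Inner join on the nullable side first, then the outer join:
    # every row of a survives, NULL-extended where b JOIN c found nothing.
    inner_first = [r[0] for r in con.execute(
        "SELECT a.id FROM a LEFT JOIN"
        " (SELECT b.id FROM b JOIN c ON b.id = c.id) bc"
        " ON a.id = bc.id ORDER BY a.id")]
    return outer_first, inner_first

print(join_order_demo())  # ([1], [1, 2]) -- the two orderings disagree
```

Joining the extra table to the preserved side, as in Alex's customers/orders case, does commute with the outer join — which is why the reordering he asks for is valid in principle; 7.1's planner simply takes explicit JOIN syntax as a fixed order, as Tom's documentation link explains.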
[ { "msg_contents": "For OpenBSD to work, we need a change from LOCAL_CREDS to SCM_CREDS.\nBruce, I think you are familure with this one. Care to make the change?\n(I have no idea where to make it!).\n\nThanks all,\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Mon, 24 Sep 2001 00:18:49 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "LOCAL_CREDS -> SCM_CREDS in src/backend/libpq/auth.c:535" }, { "msg_contents": "> For OpenBSD to work, we need a change from LOCAL_CREDS to SCM_CREDS.\n> Bruce, I think you are familure with this one. Care to make the change?\n> (I have no idea where to make it!).\n\nOK, I have applied the following patch that fixes the problem on\nOpenBSD. In my reading of the OpenBSD kernel, it has 'struct sockcred'\nbut has no code in the kernel to deal with SCM_CREDS or LOCAL_CREDS. \nThe patch tests for both HAVE_STRUCT_SOCKCRED and LOCAL_CREDS before it\nwill try local socket credential authentication. This means we have\nlocal creds on Linux, NetBSD, FreeBSD, and BSD/OS. I will document this\nin pg_hba.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/libpq/auth.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/libpq/auth.c,v\nretrieving revision 1.67\ndiff -c -r1.67 auth.c\n*** src/backend/libpq/auth.c\t2001/09/21 20:31:45\t1.67\n--- src/backend/libpq/auth.c\t2001/09/26 19:30:30\n***************\n*** 520,526 ****\n \t\t\tbreak;\n \n \t\tcase uaIdent:\n! 
#if !defined(SO_PEERCRED) && (defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || defined(HAVE_STRUCT_SOCKCRED))\n \t\t\t/*\n \t\t\t *\tIf we are doing ident on unix-domain sockets,\n \t\t\t *\tuse SCM_CREDS only if it is defined and SO_PEERCRED isn't.\n--- 520,526 ----\n \t\t\tbreak;\n \n \t\tcase uaIdent:\n! #if !defined(SO_PEERCRED) && (defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || (defined(HAVE_STRUCT_SOCKCRED) && defined(LOCAL_CREDS)))\n \t\t\t/*\n \t\t\t *\tIf we are doing ident on unix-domain sockets,\n \t\t\t *\tuse SCM_CREDS only if it is defined and SO_PEERCRED isn't.\nIndex: src/backend/libpq/hba.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/libpq/hba.c,v\nretrieving revision 1.72\ndiff -c -r1.72 hba.c\n*** src/backend/libpq/hba.c\t2001/09/21 20:31:46\t1.72\n--- src/backend/libpq/hba.c\t2001/09/26 19:30:30\n***************\n*** 904,910 ****\n \n \treturn true;\n \n! #elif defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || defined(HAVE_STRUCT_SOCKCRED)\n \tstruct msghdr msg;\n \n /* Credentials structure */\n--- 904,910 ----\n \n \treturn true;\n \n! 
#elif defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || (defined(HAVE_STRUCT_SOCKCRED) && defined(LOCAL_CREDS))\n \tstruct msghdr msg;\n \n /* Credentials structure */\nIndex: src/interfaces/libpq/fe-auth.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq/fe-auth.c,v\nretrieving revision 1.60\ndiff -c -r1.60 fe-auth.c\n*** src/interfaces/libpq/fe-auth.c\t2001/09/21 20:31:49\t1.60\n--- src/interfaces/libpq/fe-auth.c\t2001/09/26 19:30:53\n***************\n*** 435,444 ****\n \n #endif\t /* KRB5 */\n \n- #if defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || defined(HAVE_STRUCT_SOCKCRED)\n static int\n pg_local_sendauth(char *PQerrormsg, PGconn *conn)\n {\n \tchar buf;\n \tstruct iovec iov;\n \tstruct msghdr msg;\n--- 435,444 ----\n \n #endif\t /* KRB5 */\n \n static int\n pg_local_sendauth(char *PQerrormsg, PGconn *conn)\n {\n+ #if defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || (defined(HAVE_STRUCT_SOCKCRED) && defined(LOCAL_CREDS))\n \tchar buf;\n \tstruct iovec iov;\n \tstruct msghdr msg;\n***************\n*** 485,492 ****\n \t\treturn STATUS_ERROR;\n \t}\n \treturn STATUS_OK;\n! }\n #endif\n \n static int\n pg_password_sendauth(PGconn *conn, const char *password, AuthRequest areq)\n--- 485,496 ----\n \t\treturn STATUS_ERROR;\n \t}\n \treturn STATUS_OK;\n! #else\n! \tsnprintf(PQerrormsg, PQERRORMSG_LENGTH,\n! \t\t\t libpq_gettext(\"SCM_CRED authentication method not supported\\n\"));\n! 
\treturn STATUS_ERROR;\n #endif\n+ }\n \n static int\n pg_password_sendauth(PGconn *conn, const char *password, AuthRequest areq)\n***************\n*** 614,627 ****\n \t\t\tbreak;\n \n \t\tcase AUTH_REQ_SCM_CREDS:\n- #if defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || defined(HAVE_STRUCT_SOCKCRED)\n \t\t\tif (pg_local_sendauth(PQerrormsg, conn) != STATUS_OK)\n \t\t\t\treturn STATUS_ERROR;\n- #else\n- \t\t\tsnprintf(PQerrormsg, PQERRORMSG_LENGTH,\n- \t\t\t\t\t libpq_gettext(\"SCM_CRED authentication method not supported\\n\"));\n- \t\t\treturn STATUS_ERROR;\n- #endif\n \t\t\tbreak;\n \n \t\tdefault:\n--- 618,625 ----", "msg_date": "Wed, 26 Sep 2001 15:53:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: LOCAL_CREDS -> SCM_CREDS in src/backend/libpq/auth.c:535" } ]
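The patch prefers SO_PEERCRED wherever it exists and falls back to the message-based credential schemes otherwise. On Linux the SO_PEERCRED branch can be exercised directly from Python; this is a Linux-only sketch (the `"3i"` layout assumes struct ucred's three-int pid/uid/gid shape), not the backend's code:

```python
import os
import socket
import struct

def peer_creds(sock):
    """Return (pid, uid, gid) of the peer of a connected AF_UNIX socket.

    Linux-only: SO_PEERCRED yields a struct ucred of three ints. The
    BSD-family alternatives discussed in the thread (SCM_CREDS /
    LOCAL_CREDS) instead deliver credentials as a control message and
    need the sendmsg/recvmsg dance the patch guards with #ifdefs.
    """
    data = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize("3i"))
    return struct.unpack("3i", data)

# A socketpair's peer is this very process, so the kernel-reported
# credentials must match our own pid and uid.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_creds(a)
print(pid == os.getpid(), uid == os.getuid())
a.close(); b.close()
```

The point of the kernel-supplied route, whichever flavor a platform offers, is that the credentials cannot be forged by the client — unlike a name the client merely claims.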
[ { "msg_contents": "Hello--\n\nI'm looking for anecdotes describing debugging experiences with\ndatabase systems. In particular, I want to hear about how you've\nsolved particularly difficult bugs that were a real headache in a real\nsystem. The bugs could have occurred in any aspect of a database:\nproblems with the user interface, the SQL, processing efficiency, data\nmodel, stored procedures/triggers/constraints, etc. I'm interested in\nboth solved and unsolved bugs.\n\nI'd like to hear about how you solved the problem--did you use any\nsupporting tools, did you have a systematic approach for homing in on\nthe bug, did a solution suddenly come to you out of nowhere? What made\nthis problem particularly memorable? Indeed, was a solution ever\nfound?\n\nA brief, stream of consciousness style response (right now!) would be\nwonderful, and much better than a carefully worked out story (in a few\ndays). I can then get back to you with further questions if necessary.\n\nI'm collecting these database bug war stories as part of a research\nproject that is examining the types of difficult database problems\nthat database professionals (at all levels) experience. These war\nstories will be used for analysis only, and no names or other\nidentifying characteristics of your tales will be stated in the final\n(or any other) report.\n\nThank you for your help!\nSally Jo Cunningham\ndb_debugging@hotmail.com\n-- \n\nDept. of Computer Science\nUniversity of Waikato \nHamilton, New Zealand\n", "msg_date": "23 Sep 2001 22:58:29 -0700", "msg_from": "db_debugging@hotmail.com (Sally Jo Cunningham)", "msg_from_op": true, "msg_subject": "request for database bug 'war stories'" } ]
[ { "msg_contents": "Hello,\n\nwell at first I could not believe what I was seeing ...\n\nLook at the following code (ecpg/lib/execute.c):\n\n const char *locale=setlocale(LC_NUMERIC, NULL);\n setlocale(LC_NUMERIC, \"C\");\n[....]\n setlocale(LC_NUMERIC, locale);\n\n\nWell at least on glibc-2.2 it seems that setlocale retuns a pointer to\nmalloced memory, and frees this pointer on subsequent calls to\nsetlocale. This is standard conformant and has good reasons. But used as\nabove it is lethal (but not lethal enough to be easily recognized). So\nthe content locale points to is freed by the second call to setlocale.\n\nThe remedy is easy (given that _no other_ call to setlocale happens\ninbetween ...)\n\n const char *locale=setlocale(LC_NUMERIC, \"C\");\n [...]\n setlocale(LC_NUMERIC, locale);\n\n\nSo I would kindly ask you to take a second look at every invokation of\nsetlocale. And to apply the following patch.\n\nYours\n Christof", "msg_date": "Mon, 24 Sep 2001 09:18:42 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Problem with setlocale (found in libecpg) [accessing a memory\n\tlocation after freeing it]" }, { "msg_contents": "On Mon, Sep 24, 2001 at 09:18:42AM +0200, Christof Petig wrote:\n> well at first I could not believe what I was seeing ...\n\n:-)\n\n> Look at the following code (ecpg/lib/execute.c):\n> \n> const char *locale=setlocale(LC_NUMERIC, NULL);\n> setlocale(LC_NUMERIC, \"C\");\n> [....]\n> setlocale(LC_NUMERIC, locale);\n> \n> \n> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> malloced memory, and frees this pointer on subsequent calls to\n\nDoesn't look that way on my system. The following programs simply dumps core\nin free().\n\n#include <locale.h> \n#include <stdio.h>\n\nmain()\n{\n\tconst char *locale=setlocale(LC_NUMERIC, NULL); \n\t\n\tprintf(\"%c\\n\", locale);\n\tfree(locale);\n}\n\n> setlocale. This is standard conformant and has good reasons. 
But used as\n\nYou're partially right. Standard says \"This string may be allocated in\nstatic storage.\" So, yes, with your patch we are on the safe side. I just\ncommitted the changes.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 25 Sep 2001 20:15:06 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Problem with setlocale (found in libecpg) [accessing a memory\n\tlocation after freeing it]" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hello,\n> \n> well at first I could not believe what I was seeing ...\n> \n> Look at the following code (ecpg/lib/execute.c):\n> \n> const char *locale=setlocale(LC_NUMERIC, NULL);\n> setlocale(LC_NUMERIC, \"C\");\n> [....]\n> setlocale(LC_NUMERIC, locale);\n> \n> \n> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> malloced memory, and frees this pointer on subsequent calls to\n> setlocale. This is standard conformant and has good reasons. But used as\n> above it is lethal (but not lethal enough to be easily recognized). So\n> the content locale points to is freed by the second call to setlocale.\n> \n> The remedy is easy (given that _no other_ call to setlocale happens\n> inbetween ...)\n> \n> const char *locale=setlocale(LC_NUMERIC, \"C\");\n> [...]\n> setlocale(LC_NUMERIC, locale);\n> \n> \n> So I would kindly ask you to take a second look at every invokation of\n> setlocale. And to apply the following patch.\n> \n> Yours\n> Christof\n> \n\n[ application/x-gzip is not supported, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 26 Sep 2001 16:22:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with setlocale (found in libecpg) [accessing a" }, { "msg_contents": ">> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n>> malloced memory, and frees this pointer on subsequent calls to\n>> setlocale.\n>> So I would kindly ask you to take a second look at every invokation of\n>> setlocale.\n\nI looked around, and am worried about the behavior of PGLC_current()\nin src/backend/utils/adt/pg_locale.c. It doesn't change locale but\ndoes retrieve several successive setlocale() results. Does that work\nin glibc?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2001 00:08:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with setlocale (found in libecpg) [accessing a " }, { "msg_contents": "Tom Lane wrote:\n\n> >> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> >> malloced memory, and frees this pointer on subsequent calls to\n> >> setlocale.\n> >> So I would kindly ask you to take a second look at every invokation of\n> >> setlocale.\n>\n> I looked around, and am worried about the behavior of PGLC_current()\n> in src/backend/utils/adt/pg_locale.c. It doesn't change locale but\n> does retrieve several successive setlocale() results. Does that work\n> in glibc?\n\nWell actually I did not check glibc's source code. 
But I tried to run my\nprogram with efence and it aborted in execute.c\n\n[ locale=setlocale(LC_NUMERIC,NULL);\n setlocale(LC_NUMERIC,\"C\");\n ...\n setlocale(LC_NUMERIC,locale); // access to already freed memory\n(locale)\n]\n\nSo my best guess is that setlocale\n- uses a malloced memory for return (which copes best with variable length\nstrings)\n- frees this on a subsequent calls and allocates a new one.\n\nYes, I'm worried about PGLC_current(), too.\nIMHO we should definitely copy the result to a malloced area.\nDoes the current solution work with static storage (old libcs?)? The last\ncall would overwrite the first result, wouldn't it?\n\nChristof\n\n\n", "msg_date": "Thu, 27 Sep 2001 09:26:12 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Problem with setlocale (found in libecpg) [accessing a" }, { "msg_contents": "On Thu, Sep 27, 2001 at 09:26:12AM +0200, Christof Petig wrote:\n> Tom Lane wrote:\n> \n> > >> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> > >> malloced memory, and frees this pointer on subsequent calls to\n> > >> setlocale.\n> > >> So I would kindly ask you to take a second look at every invokation of\n> > >> setlocale.\n> >\n> > I looked around, and am worried about the behavior of PGLC_current()\n> > in src/backend/utils/adt/pg_locale.c. It doesn't change locale but\n> > does retrieve several successive setlocale() results. Does that work\n> > in glibc?\n> \n> Well actually I did not check glibc's source code. But I tried to run my\n> program with efence and it aborted in execute.c\n\n I see locale/setlocale.c in glibc (I'm very like that PG hasn't same\ncoding style as glibc developers:-). 
You are right with strdup()/free()\nin the setlocale().\n\n> \n> [ locale=setlocale(LC_NUMERIC,NULL);\n> setlocale(LC_NUMERIC,\"C\");\n> ...\n> setlocale(LC_NUMERIC,locale); // access to already freed memory\n> (locale)\n> ]\n> \n> So my best guess is that setlocale\n> - uses a malloced memory for return (which copes best with variable length\n> strings)\n> - frees this on a subsequent calls and allocates a new one.\n> \n> Yes, I'm worried about PGLC_current(), too.\n\n For example to_char() calls PGLC_localeconv(). In the PGLC_localeconv() \nis used:\n\nPGLC_current(&lc);\nsetlocale(LC_ALL, \"\");\nPGLC_setlocale(&lc);\t\t/* <-- access to free memory ? */\n\n\n I see now it in detail and Christof probably found really pretty bug.\nSome users already notice something like:\n\ntest=# select to_char(45123.4, 'L99G999D9');\nNOTICE: pg_setlocale(): 'LC_MONETARY=p�-@�' cannot be honored.\n to_char\n-------------\n K� 45�123,4\n(1 row)\n\n (I use Czech locales)\n\n We don't see this bug often, because PGLC_localeconv() result is cached\nand pg_setlocale is called only once. \n\n\n It must be fixed for 7.2. May be allocate it in PG, because we need\nkeep data in PG_LocaleCategories independent on glibc's strdup/free.\nMay be:\n\nPGLC_current(PG_LocaleCategories * lc)\n{\n lc->lang = getenv(\"LANG\");\n\n\tPGLC_free_caltegories(lc);\n\n \tlc->lc_ctype = pstrdup(setlocale(LC_CTYPE, NULL));\n\n\t... etc.\n} \n\nvoid\nPGLC_free_caltegories(PG_LocaleCategories * lc)\n{\n\tif (lc->lc_ctype)\n\t\tpfree(lc->lc_ctype);\n\n\t...etc.\n}\n\n Comments? 
I right now work on patch for this.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 27 Sep 2001 10:49:24 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Problem with setlocale (found in libecpg) [accessing a" }, { "msg_contents": "On Thu, Sep 27, 2001 at 12:08:29AM -0400, Tom Lane wrote:\n> >> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> >> malloced memory, and frees this pointer on subsequent calls to\n> >> setlocale.\n> >> So I would kindly ask you to take a second look at every invokation of\n> >> setlocale.\n> \n> I looked around, and am worried about the behavior of PGLC_current()\n> in src/backend/utils/adt/pg_locale.c. It doesn't change locale but\n> does retrieve several successive setlocale() results. Does that work\n> in glibc?\n\n The patch is attached. Now it's independent on glibc's game of setlocale()\nresults and free/strdup. It works for me...\n\n Thanks to Christof!\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz", "msg_date": "Thu, 27 Sep 2001 12:11:15 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "pg_locale (Was: Re: Problem with setlocale (found in libecpg)...)" }, { "msg_contents": "Michael Meskes wrote:\n\n> On Thu, Sep 27, 2001 at 12:08:29AM -0400, Tom Lane wrote:\n> > I looked around, and am worried about the behavior of PGLC_current()\n> > in src/backend/utils/adt/pg_locale.c. It doesn't change locale but\n> > does retrieve several successive setlocale() results. Does that work\n> > in glibc?\n>\n> I haven't experienced any problem so far, but then I wasn't able to\n> reproduce Christof's either on my glibc2.2 system.\n\nYou have to link with efence to see it (see below). 
(BTW the bug is in\nlibecpg)\n\nOtherwise the bug is hidden (setting an illegal locale simply does not do\nanything if we ignore it's return value (setlocale returns NULL on\nerror)). Perhaps outputting a notice to the debug stream if setlocale\nfails is a good choice (I don't like to raise a SQL error).\n\nChristof\n\n[More detailed: if the former value is freed, the pointer still points to\na valid memory region (without efence), further processing inside ecpg\nwill reuse that region for just another string (an input variable's value\nin SQL notation).\nSo setting locale '0' or 'ISO' or 'some string' silently fails.]\n\n\n", "msg_date": "Thu, 27 Sep 2001 12:17:31 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Problem with setlocale (found in libecpg) [accessing a" }, { "msg_contents": "On Tue, Sep 25, 2001 at 08:15:06PM +0200, Michael Meskes wrote:\n> > \n> > Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> > malloced memory, and frees this pointer on subsequent calls to\n> \n> Doesn't look that way on my system. 
The following programs simply dumps core\n> in free().\n> \n> #include <locale.h> \n> #include <stdio.h>\n> \n> main()\n> {\n> \tconst char *locale=setlocale(LC_NUMERIC, NULL); \n> \t\n> \tprintf(\"%c\\n\", locale);\n> \tfree(locale);\n> }\n\n Because you bad use setlocale().\n\n The setlocale(LC_NUMERIC, NULL) returns actual LC_NUMERIC setting, but \nyour program hasn't some setting, because you don't call:\n\nsetlocale(LC_NUMERIC, \"\") or setlocale(LC_NUMERIC, \"some_locales\")\n\n before setlocale(LC_NUMERIC, NULL), try this program:\n\n\n#include <stdio.h>\n#include <locale.h>\n#include <stdlib.h>\n\nint\nmain()\n{\n char *locale;\n\n /* create array with locales names */\n setlocale(LC_NUMERIC, \"\");\n\n /* returns data from actual setting */\n locale = setlocale(LC_NUMERIC, NULL);\n\n printf(\"%s\\n\", locale);\n free((void *) locale);\n exit(1);\n}\n\n and don't forget set LC_ALL before program runnig. With default locales \"C\" \nit is same as with NULL. \n\nPrevious code:\n\n$ export LC_ALL=\"cs_CZ\"\n$ ./loc\n cs_CZ\n$ export LC_ALL=\"C\"\n$ ./loc\n C\n Segmentation fault\t<-- in free()\n\n\n .... 
and see locale/setlocale.c in glibc sources :-)\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 27 Sep 2001 17:26:56 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Problem with setlocale (found in libecpg) [accessing a memory\n\tlocation after freeing it]" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> On Thu, Sep 27, 2001 at 12:08:29AM -0400, Tom Lane wrote:\n> > >> Well at least on glibc-2.2 it seems that setlocale retuns a pointer to\n> > >> malloced memory, and frees this pointer on subsequent calls to\n> > >> setlocale.\n> > >> So I would kindly ask you to take a second look at every invokation of\n> > >> setlocale.\n> > \n> > I looked around, and am worried about the behavior of PGLC_current()\n> > in src/backend/utils/adt/pg_locale.c. It doesn't change locale but\n> > does retrieve several successive setlocale() results. Does that work\n> > in glibc?\n> \n> The patch is attached. Now it's independent on glibc's game of setlocale()\n> results and free/strdup. It works for me...\n> \n> Thanks to Christof!\n> \n> \tKarel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 15:37:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_locale (Was: Re: Problem with setlocale (found in libecpg)...)" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> You're partially right. Standard says \"This string may be allocated in\n> static storage.\" So, yes, with your patch we are on the safe side. I just\n> committed the changes.\n\nThis patch wasn't right: setlocale(LC_NUMERIC, \"C\") returns a string\ncorresponding to the *new* locale setting, not the old one. Therefore,\nwhile the patched code failed to dump core, it also failed to restore\nthe previous locale setting as intended. I have committed an updated\nversion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 16:15:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with setlocale (found in libecpg) [accessing a memory\n\tlocation after freeing it]" } ]
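The pattern this thread converges on — copy the current setting, switch, restore from the private copy — can be sketched outside the backend. Here it is in Python, whose locale wrapper makes the string-copy step implicit (a sketch of the pattern, not the committed pg_locale.c code):

```python
import locale

def with_c_numeric(fn):
    """Call fn() with LC_NUMERIC forced to "C", restoring the caller's
    setting afterwards -- the save/switch/restore ecpg's execute.c needs.

    Two traps from the thread, restated:
      * In C, the string setlocale() returns may live in storage that the
        NEXT setlocale() call frees or overwrites, so it must be copied
        (strdup) before switching.  Python hands back a fresh str, so the
        plain assignment below is already a private copy.
      * setlocale(LC_NUMERIC, "C") returns the NEW setting ("C"), not the
        previous one; saving that value, as the first patch did, turns
        the restore into a silent no-op.
    """
    saved = locale.setlocale(locale.LC_NUMERIC)   # query-only form
    locale.setlocale(locale.LC_NUMERIC, "C")
    try:
        return fn()
    finally:
        locale.setlocale(locale.LC_NUMERIC, saved)

before = locale.setlocale(locale.LC_NUMERIC)
inside = with_c_numeric(lambda: locale.setlocale(locale.LC_NUMERIC))
after = locale.setlocale(locale.LC_NUMERIC)
print(inside, after == before)
```

Wrapping the restore in try/finally mirrors what the backend must guarantee by hand: the locale comes back even if the work in between errors out.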
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp] \n> Sent: 24 September 2001 06:26\n> To: Jean-Michel POURE\n> Cc: pgsql-odbc@postgresql.org; pgsql-hackers@postgresql.org; \n> Tatsuo Ishii\n> Subject: Re: [ODBC] UTF-8 support\n> \n> \n> Jean-Michel POURE wrote:\n> > \n> > 3) Is there a way to query available encodings in PostgreSQL for \n> > display in pgAdmin.\n> \n> Could pgAdmin display multibyte chars in the first place ?\n\nDunno, never tried it - perhaps some could let us know?\n\npgAdmin does now (as of last night) support encoding names (SQL_ASCII,\nEUC_JP, KOI8 etc), however the list of options is hardcoded from the list in\nmultibyte.html (though you can overtype if you know better). If there is a\nmore up-to-date list than this I'd be glad to know.\n\nRegards, Dave.\n", "msg_date": "Mon, 24 Sep 2001 08:37:20 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [ODBC] UTF-8 support" } ]
[ { "msg_contents": "-- Hi Kevin, and everyone!\n-- \n-- I don't think that I only found a minor bug compared to\n-- the other you wrote in your last letter: the backend crash\n-- is caused by the same CHECK constraint in the child table.\n-- \n-- However, for you without time to analyzing Kevin's huge\n-- scheme, here is the very simplified, crash-causing script.\n-- \n------------------------------------\n\ndrop table child;\ndrop table ancestor;\n\ncreate table ancestor (\n node_id int4,\n a int4\n);\n\ncreate table child (\n b int4 NOT NULL DEFAULT 0 ,\n c int4 not null default 3,\n CHECK ( child.b = 0 OR child.b = 1 )\n) inherits (ancestor);\n\ninsert into ancestor values (3,4);\ninsert into child (node_id, a, b) values (5,6,1);\n\nupdate ancestor set a=8 where node_id=5;\n\n---------------------------------\n-- \n-- I am hunting it, but I have to learn all what this query-executing\n-- about, so probably it takes uncomparable longer for me than for\n-- a developer.\n-- \n-- Regards,\n-- Baldvin\n-- \n\n\n", "msg_date": "Mon, 24 Sep 2001 10:01:43 +0200 (MEST)", "msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>", "msg_from_op": true, "msg_subject": "Server crash caused by CHECK on child" }, { "msg_contents": "> -- I don't think that I only found a minor bug compared to\n> -- the other you wrote in your last letter: the backend crash\n> -- is caused by the same CHECK constraint in the child table.\n\nOooh, my bad. I should run your scripts before assuming I know how\nthey fail.\n\n> -- However, for you without time to analyzing Kevin's huge\n> -- scheme, here is the very simplified, crash-causing script.\n\nThank you so much for finding this simplified method of crashing\nPostgres. 
Hopefully somebody can find a fix now.\n\n> -- I am hunting it, but I have to learn all what this query-executing\n> -- about, so probably it takes uncomparable longer for me than for\n> -- a developer.\n\nThat's my problem as well, though your example is vastly easier to\ntrace than mine. \n\n-Kevin Way\n\n", "msg_date": "Mon, 24 Sep 2001 11:42:33 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": false, "msg_subject": "Re: Server crash caused by CHECK on child" }, { "msg_contents": "\nWhat version are you trying this script on? I'm not\nseeing a crash on my 7.2 devel system (and the update occurs).\n\nOn Mon, 24 Sep 2001, Kovacs Baldvin wrote:\n\n> -- Hi Kevin, and everyone!\n> -- \n> -- I don't think that I only found a minor bug compared to\n> -- the other you wrote in your last letter: the backend crash\n> -- is caused by the same CHECK constraint in the child table.\n> -- \n> -- However, for you without time to analyzing Kevin's huge\n> -- scheme, here is the very simplified, crash-causing script.\n> -- \n> ------------------------------------\n> \n> drop table child;\n> drop table ancestor;\n> \n> create table ancestor (\n> node_id int4,\n> a int4\n> );\n> \n> create table child (\n> b int4 NOT NULL DEFAULT 0 ,\n> c int4 not null default 3,\n> CHECK ( child.b = 0 OR child.b = 1 )\n> ) inherits (ancestor);\n> \n> insert into ancestor values (3,4);\n> insert into child (node_id, a, b) values (5,6,1);\n> \n> update ancestor set a=8 where node_id=5;\n\n", "msg_date": "Mon, 24 Sep 2001 09:09:48 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Server crash caused by CHECK on child" }, { "msg_contents": "> What version are you trying this script on? I'm not\n> seeing a crash on my 7.2 devel system (and the update occurs).\n\n7.1.3. 
I'll sup a copy of the 7.2 sources and see if that fixes the test\ncase, and my actual bug.\n\n-Kevin Way\n", "msg_date": "Mon, 24 Sep 2001 16:18:10 +0000", "msg_from": "Kevin Way <kevin.way@overtone.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Server crash caused by CHECK on child" }, { "msg_contents": "\nI can confirm this now works fine in current sources. No crash.\n\n> -- Hi Kevin, and everyone!\n> -- \n> -- I don't think that I only found a minor bug compared to\n> -- the other you wrote in your last letter: the backend crash\n> -- is caused by the same CHECK constraint in the child table.\n> -- \n> -- However, for you without time to analyzing Kevin's huge\n> -- scheme, here is the very simplified, crash-causing script.\n> -- \n> ------------------------------------\n> \n> drop table child;\n> drop table ancestor;\n> \n> create table ancestor (\n> node_id int4,\n> a int4\n> );\n> \n> create table child (\n> b int4 NOT NULL DEFAULT 0 ,\n> c int4 not null default 3,\n> CHECK ( child.b = 0 OR child.b = 1 )\n> ) inherits (ancestor);\n> \n> insert into ancestor values (3,4);\n> insert into child (node_id, a, b) values (5,6,1);\n> \n> update ancestor set a=8 where node_id=5;\n> \n> ---------------------------------\n> -- \n> -- I am hunting it, but I have to learn all what this query-executing\n> -- about, so probably it takes uncomparable longer for me than for\n> -- a developer.\n> -- \n> -- Regards,\n> -- Baldvin\n> -- \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 16:38:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server crash caused by CHECK on child" } ]
[ { "msg_contents": "I posted this in my last message, but have not heard anything yet so I'm\nwondering if it was overlooked. I need to know how to change a column from\nbeing say a varchar(9) to an integer. Does anyone know how to change the\ndata type?\n\nGeoff\n", "msg_date": "Mon, 24 Sep 2001 10:18:34 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Changing data types" }, { "msg_contents": "This is not for -hackers. \n\nAnd the answer is \"no, you can't\". Recreate the table with correct types\nand insert the old values into it.\n\nOn Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n> I posted this in my last message, but have not heard anything yet so I'm\n> wondering if it was overlooked. I need to know how to change a column from\n> being say a varchar(9) to an integer. Does anyone know how to change the\n> data type?\n> \n> Geoff\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n", "msg_date": "Mon, 24 Sep 2001 10:40:23 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Changing data types" }, { "msg_contents": "\"Gowey, Geoffrey\" wrote:\n> \n> I posted this in my last message, but have not heard anything yet so I'm\n> wondering if it was overlooked. I need to know how to change a column from\n> being say a varchar(9) to an integer. Does anyone know how to change the\n> data type?\n\ncreate temptable\n as select col_a, col_b, varchar9col_c::int, col_d from originaltable\n;\n\ndrop table originaltable;\n\nalter table temptable rename to originaltable;\n\n\n\nand then create all indexes and constraints.\n\n---------------\nHannu\n", "msg_date": "Mon, 24 Sep 2001 17:03:38 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Changing data types" } ]
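Hannu's recreate-and-cast recipe above can be exercised end-to-end. The sketch below uses Python's sqlite3 module purely so it runs standalone (the table name, column name, and data are invented for illustration); as Hannu notes, indexes and constraints have to be recreated afterwards, which the sketch omits.

```python
# Hannu's recipe: build a copy of the table with the column cast to
# the new type, drop the original, rename the copy into place.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE originaltable (id INTEGER, code TEXT)")
con.execute("INSERT INTO originaltable VALUES (1, '42'), (2, '7')")

# "create temptable as select ..." with the varchar-ish column cast to int
con.execute("CREATE TABLE temptable AS "
            "SELECT id, CAST(code AS INTEGER) AS code FROM originaltable")
con.execute("DROP TABLE originaltable")
con.execute("ALTER TABLE temptable RENAME TO originaltable")

rows = con.execute("SELECT id, code FROM originaltable ORDER BY id").fetchall()
print(rows)  # [(1, 42), (2, 7)] -- the column now holds integers
```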
[ { "msg_contents": "\n>This is not for -hackers. \n\nHow so?\n\n>And the answer is \"no, you can't\". Recreate the table with correct types\n>and insert the old values into it.\n\nYou're kidding me, right? *prepares to gargle* MS Sql server can. Surely\nwe can implement this feature or aren't we aiming to go head to head with\ncommercial rdbms'?\n\n>>On Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n>> I posted this in my last message, but have not heard anything yet so I'm\n>> wondering if it was overlooked. I need to know how to change a column\nfrom\n>> being say a varchar(9) to an integer. Does anyone know how to change the\n>> data type?\n", "msg_date": "Mon, 24 Sep 2001 10:53:02 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: Changing data types" }, { "msg_contents": "Hello all,\n\n> >And the answer is \"no, you can't\". Recreate the table with correct types\n> >and insert the old values into it.\n>\n>You're kidding me, right? *prepares to gargle* MS Sql server can. Surely\n>we can implement this feature or aren't we aiming to go head to head with\n>commercial rdbms'?\n\nThe other day, I spent 3 hours dropping old_1, old_2 and old_n fields in a DB.\nBut what if your table if it has triggers or foreign keys.\n\nThere is a very similar problem with DROP FUNCTION / CREATE FUNCTION.\nIf function A is based on function B and you drop function B, function A is \nbroken.\nSame as for views: if view A incorporates function A and you drop function \nA, view A is broken.\n\nOK: what's the point then?\n\nTHE POINT IS THAT WHEN YOU HAVE NESTED OBJECTS, YOU NEED TO DROP THEM ALL \nAND RECREATE THEM ALL.\nSO IF YOU WANT TO MODIFY ONE LINE OF CODE, YOU WILL PROBABLY NEED TO \nREBUILD ANYTHING.\nNORMAL HUMANS CANNOT DO THIS. 
MY CODE IS COMPLETE POSTGRESQL SERVER-SIDE.\nIN THESE CONDITIONS, THE CODE CANNOT BE OPTIMIZED ALSO BECAUSE OIDs CHANGE \nALL THE TIME.\n\nThe way we do it in pgAdmin I \nhttp://cvs.social-housing.org/viewcvs.cgi/pgadmin1\nis that we maintain a dependency table based on STRING NAMES and not OIDs.\nWhen altering an object (view, function, trigger) we rebuild all dependent \nobjects.\n\nIs this the way we should proceed with pgAdmin II?\nIs anyone planning a real dependency table based on object STRING NAMES?\n\nWe need some advice:\n1) Client solution: should we add the rebuilding feature to pgAdmin II?\n2) Server solution: should we wait until the ALTER OBJECT project is complete?\n\nPlease advice. Help needed.\nVote for (1) or (2).\n\nRegards,\nJean-Michel POURE\npgAdmin Team\nhttp://pgadmin.postgresql.org\n\n\n\n", "msg_date": "Mon, 24 Sep 2001 21:11:04 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Alter project: client or server side?" }, { "msg_contents": "\"Gowey, Geoffrey\" wrote:\n> \n> >This is not for -hackers.\n> \n> How so?\n> \n> >And the answer is \"no, you can't\". Recreate the table with correct types\n> >and insert the old values into it.\n> \n> You're kidding me, right? *prepares to gargle* MS Sql server can. Surely\n> we can implement this feature or aren't we aiming to go head to head with\n> commercial rdbms'?\n\nTo be honest I am very surprised that MS SQL supports that, but then again\nMicrosoft is so used to doing everything so utterly wrong, they have to design\nall their products with the ability to support fundamental design error\ncorrections on the fly.\n\nI would be surprised if Oracle, DB2, or other \"industrial grade\" databases\ncould do this. 
Needing to change a column from a varchar to an integer is a\nhuge change and a major error in design.\n\nAdding a column, updating a column with a conversion routine, dropping the old\ncolumn, and renaming the new column to the old column name is probably\nsupported, but, geez, I have been dealing with SQL for almost 8 years and I\nhave never needed to do that.\n", "msg_date": "Mon, 24 Sep 2001 21:25:10 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Changing data types" }, { "msg_contents": "On Mon, 24 Sep 2001, mlw wrote:\n\n> To be honest I am very surprised that MS SQL supports that, but then\n> again Microsoft is so used to doing everything so utterly wrong, they\n> have to design all their products with the ability to support\n> fundamental design error corrections on the fly.\n> \n> I would be surprised if Oracle, DB2, or other \"industrial grade\"\n> databases could do this. Needing to change a column from a varchar to\n> an integer is a huge change and a major error in design.\nActually they do. Its not a such a great deal, same as adding a column and\ndropping a column. If you can do that, you can do modification of type. \n\nThe sticky thing is dropping a column. There are two options, and\npostgresql developers just can't make up their mind :P)\n\na) keep old column data in database (wasted space, but fast)\nb) immediately 'compress' table, removing old data (slow, needs a lot of\nspace for compression)\n\nOption a) was implemented once, but kludgy, and had a few kinks, and it\nwas removed. Option b) plain sucks :P)\n\n-alex\n\n\n", "msg_date": "Mon, 24 Sep 2001 21:41:44 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Changing data types" }, { "msg_contents": "> The sticky thing is dropping a column. 
There are two options, and\n> postgresql developers just can't make up their mind :P)\n>\n> a) keep old column data in database (wasted space, but fast)\n> b) immediately 'compress' table, removing old data (slow, needs a\nlot of\n> space for compression)\n>\n> Option a) was implemented once, but kludgy, and had a few kinks, and\nit\n> was removed. Option b) plain sucks :P)\n\nOut of curiosity how was option a) implemented? I could envision\nsupporting multiple versions of a tuple style to be found within a\ntable (each described in pg_attribute). Gradually these would be\nupgraded through normal use.\n\nI'm personally not fond of the option b) due to the time involved in\ncompleting the action. Not only is space an issue, but locking the\ndatabase up for a day while removing a column isn't the nicest thing\nto do -- rename, make nullable, drop all constraints and try to ignore\nit right?\n\nOne would expect that keeping multiple versions of a tuple structure\ninside a single table to be slower than normal for selects, but I\ndon't think it would require marking the rows themselves -- just base\nit on the max and min transactions in the table at that time. Vacuum\nwould have to push the issue (5k tuples at a time?) of upgrading some\nof the tuples each time it's run in order to enfore that they were all\ngone before XID wrap. Background vacuum is ideal for that (if\nimplemented). Drop all constraints, indexes and the name (change to\n$1 or something) of the column immediatly. Vacuum can determine when\nXID Min in a table is > XID Max of another version and drop the\ninformation from pg_attribute.\n\nObviously affected:\n- pg_attribute, and anything dealing with it (add XID Max, XID Min\nwraps for known ranges)\n- storage machanism. 
On read of a tuple attempt to make it fit latest\nversion (XID Max is NULL) by ignoring select fields.\n\nI'll have to leave it up to the pros as to whether it can be done,\nshould be done, and what else it'll affect.\n\nI suppose this was option a) that was removed due to it's kludgyness\n:)\n\n\n", "msg_date": "Mon, 24 Sep 2001 22:22:09 -0400", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": false, "msg_subject": "Re: Changing data types" }, { "msg_contents": "On Mon, 24 Sep 2001, Rod Taylor wrote:\n\n> Out of curiosity how was option a) implemented? I could envision\n> supporting multiple versions of a tuple style to be found within a\n> table (each described in pg_attribute). Gradually these would be\n> upgraded through normal use.\nCheck the archives (look for \"DROP COLUMN\" and \"Hiroshi Inoue\", author of\noriginal patch).\n\n> One would expect that keeping multiple versions of a tuple structure\n> inside a single table to be slower than normal for selects, but I\n> don't think it would require marking the rows themselves -- just base\n> it on the max and min transactions in the table at that time. Vacuum\n> would have to push the issue (5k tuples at a time?) of upgrading some\n> of the tuples each time it's run in order to enfore that they were all\n> gone before XID wrap. Background vacuum is ideal for that (if\n> implemented). Drop all constraints, indexes and the name (change to\n> $1 or something) of the column immediatly. Vacuum can determine when\n> XID Min in a table is > XID Max of another version and drop the\n> information from pg_attribute.\nI think it was done by setting attribute_id to negative, essentially\nhiding it from most code, instead of having two tuple versions, but I\nreally am not very familiar. Check archives :)\n\n> Obviously affected:\n> - pg_attribute, and anything dealing with it (add XID Max, XID Min\n> wraps for known ranges)\n> - storage machanism. 
On read of a tuple attempt to make it fit latest\n> version (XID Max is NULL) by ignoring select fields.\n> \n> I'll have to leave it up to the pros as to whether it can be done,\n> should be done, and what else it'll affect.\n> \n> I suppose this was option a) that was removed due to it's kludgyness\n> :)\n\n\n", "msg_date": "Mon, 24 Sep 2001 22:31:36 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Changing data types" } ]
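Rod's multiple-tuple-versions idea can be modelled as pure bookkeeping: every stored tuple records the schema version it was written under, and reads project older tuples into the newest attribute list. This is only a toy model with invented names, not PostgreSQL internals, and it ignores the XID ranges a real implementation would track.

```python
# Two schema versions of one table: version 2 logically dropped "oldcol".
SCHEMAS = {
    1: ["node_id", "a", "oldcol"],
    2: ["node_id", "a"],
}
LATEST = 2

rows = [
    (1, (3, 4, "junk")),  # written under schema version 1
    (2, (5, 8)),          # written under schema version 2
]

def read_row(version, values):
    """Project a stored tuple into the latest schema version."""
    named = dict(zip(SCHEMAS[version], values))
    return tuple(named[col] for col in SCHEMAS[LATEST])

print([read_row(v, vals) for v, vals in rows])  # [(3, 4), (5, 8)]
```

Once no live tuple carries version 1 any more (something a background vacuum could guarantee, as suggested above), the version-1 entry can be discarded.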
[ { "msg_contents": "One thought did just occur to me. It is at least theoretically possible to\nsimplisticly migrate on column type to another by reading in the data and\noid of the row into a struct, drop the column, create a new column with the\ncorrect data type, and populate. This is ugly, but it is better than saying\n\"no, you can't\".\n\nGeoff\n\n-----Original Message-----\nFrom: Gowey, Geoffrey \nSent: Monday, September 24, 2001 10:53 AM\nTo: 'Alex Pilosov'; Gowey, Geoffrey\nCc: pgsql-hackers@postgresql.org\nSubject: RE: [HACKERS] Changing data types\n\n\n\n>This is not for -hackers. \n\nHow so?\n\n>And the answer is \"no, you can't\". Recreate the table with correct types\n>and insert the old values into it.\n\nYou're kidding me, right? *prepares to gargle* MS Sql server can. Surely\nwe can implement this feature or aren't we aiming to go head to head with\ncommercial rdbms'?\n\n>>On Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n>> I posted this in my last message, but have not heard anything yet so I'm\n>> wondering if it was overlooked. I need to know how to change a column\nfrom\n>> being say a varchar(9) to an integer. Does anyone know how to change the\n>> data type?\n", "msg_date": "Mon, 24 Sep 2001 10:55:54 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: Changing data types" }, { "msg_contents": "\"Gowey, Geoffrey\" wrote:\n> \n> One thought did just occur to me. It is at least theoretically possible to\n> simplisticly migrate on column type to another by reading in the data and\n> oid of the row into a struct, drop the column, create a new column with the\n> correct data type, and populate. This is ugly, but it is better than saying\n> \"no, you can't\".\n\nThe DROP COLUMN part is the one that is what's really hard. 
\n\nIt is not currently supported in postgreSQL\n\nSupporting it comes up now and then, but as the solution (changing\nsystem tables \nand then rewriting the whole table) is considered ugly in current\nimplementation\nit has always windled down to not doing it.\n\nThe way to manually change column type is something like:\n\nalter table mytable add column newcolumn int;\nupdate table set newcolumn = oldcolumn;\nalter table rename oldcolumn to __del__001;\nalter table rename newcolumn to oldcolumn;\n\n\nbut you can't DROP COLUMN without recreating the TABLE\n\n------------\nHannu\n", "msg_date": "Mon, 24 Sep 2001 17:26:37 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Changing data types" } ]
[ { "msg_contents": "Does this mean that the code is or isn't usable?\n\nGeoff\n\n-----Original Message-----\nFrom: Hiroshi Inoue [mailto:Inoue@tpf.co.jp]\nSent: Sunday, September 23, 2001 11:02 PM\nTo: Stephan Szabo\nCc: Gowey, Geoffrey; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] an already existing alter table drop column ?!?!?!\n\n\nStephan Szabo wrote:\n> \n> On Fri, 21 Sep 2001, Gowey, Geoffrey wrote:\n> \n> > While looking through the code I found an already existing alter table\ndrop\n> > column in src/backend/parser/gram.y. However, when I try to run it in\npsql\n> > it comes back with a not implemented. Was the left hand not talking to\nthe\n> > right hand when this was coded or is there more to this?\n> \n> IIRC, it was not enabled pending further discussion of the behavior.\n\nAs to 'DROP COLUMN', I neglected to remove _DROP_COLUMN_HACK__\nstuff which has no meaing now sorry. I would remove it after\nthe 7.2 release.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 24 Sep 2001 12:30:33 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" }, { "msg_contents": "\nisn't\n\nOn Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n> Does this mean that the code is or isn't usable?\n>\n> Geoff\n>\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp]\n> Sent: Sunday, September 23, 2001 11:02 PM\n> To: Stephan Szabo\n> Cc: Gowey, Geoffrey; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] an already existing alter table drop column ?!?!?!\n>\n>\n> Stephan Szabo wrote:\n> >\n> > On Fri, 21 Sep 2001, Gowey, Geoffrey wrote:\n> >\n> > > While looking through the code I found an already existing alter table\n> drop\n> > > column in src/backend/parser/gram.y. However, when I try to run it in\n> psql\n> > > it comes back with a not implemented. 
Was the left hand not talking to\n> the\n> > > right hand when this was coded or is there more to this?\n> >\n> > IIRC, it was not enabled pending further discussion of the behavior.\n>\n> As to 'DROP COLUMN', I neglected to remove _DROP_COLUMN_HACK__\n> stuff which has no meaing now sorry. I would remove it after\n> the 7.2 release.\n>\n> regards,\n> Hiroshi Inoue\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Mon, 24 Sep 2001 13:13:39 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" } ]
[ { "msg_contents": "damn. Is there a usable version being worked around? If so, how long before\nit is available?\n\nGeoff\n\n-----Original Message-----\nFrom: Marc G. Fournier [mailto:scrappy@hub.org]\nSent: Monday, September 24, 2001 1:14 PM\nTo: Gowey, Geoffrey\nCc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] an already existing alter table drop column\n?!?!?!\n\n\n\nisn't\n", "msg_date": "Mon, 24 Sep 2001 13:18:21 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" }, { "msg_contents": "On Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n> damn. Is there a usable version being worked around? If so, how long before\n> it is available?\n\ntoo many dissending opinions on how it should (and shouldn't) be done ...\nthe one that was put in pre-maturely needed 2x the disk space to do ... so\nif you had a multi-gig table, you had to have enough free space to store a\nsecond copy in order to remove a column ...\n\nthere was a bunch of us that felt that it could be done better using a\n'flag' on the column so that it was hidden ... and some that thought ...\netc, etc ...\n\n\n> > Geoff\n>\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> Sent: Monday, September 24, 2001 1:14 PM\n> To: Gowey, Geoffrey\n> Cc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] an already existing alter table drop column\n> ?!?!?!\n>\n>\n>\n> isn't\n>\n\n", "msg_date": "Mon, 24 Sep 2001 13:28:16 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" } ]
[ { "msg_contents": "I do like the flag idea as a temporary patch until a fully workable solution\nis available. Just run a delete on the column and then flag it so it takes\nup no space and is hidden. By doing this now a more perfect solution can be\nfound later. \n\nGeoff\n\n-----Original Message-----\nFrom: Marc G. Fournier [mailto:scrappy@hub.org]\nSent: Monday, September 24, 2001 1:28 PM\nTo: Gowey, Geoffrey\nCc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: RE: [HACKERS] an already existing alter table drop column\n?!?!?!\n\n\nOn Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n> damn. Is there a usable version being worked around? If so, how long\nbefore\n> it is available?\n\ntoo many dissending opinions on how it should (and shouldn't) be done ...\nthe one that was put in pre-maturely needed 2x the disk space to do ... so\nif you had a multi-gig table, you had to have enough free space to store a\nsecond copy in order to remove a column ...\n\nthere was a bunch of us that felt that it could be done better using a\n'flag' on the column so that it was hidden ... and some that thought ...\netc, etc ...\n\n\n> > Geoff\n>\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> Sent: Monday, September 24, 2001 1:14 PM\n> To: Gowey, Geoffrey\n> Cc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] an already existing alter table drop column\n> ?!?!?!\n>\n>\n>\n> isn't\n>\n", "msg_date": "Mon, 24 Sep 2001 13:35:05 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" } ]
[ { "msg_contents": "Also, rename the column to something else so a new one can be created in\nit's place with the same name. Do columns have their own oid's? If so this\nplus some random chars could be the new name.\n\nGeoff\n\n-----Original Message-----\nFrom: Gowey, Geoffrey \nSent: Monday, September 24, 2001 1:35 PM\nTo: 'Marc G. Fournier'; Gowey, Geoffrey\nCc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: RE: [HACKERS] an already existing alter table drop column\n?!?!?!\n\n\nI do like the flag idea as a temporary patch until a fully workable solution\nis available. Just run a delete on the column and then flag it so it takes\nup no space and is hidden. By doing this now a more perfect solution can be\nfound later. \n\nGeoff\n\n-----Original Message-----\nFrom: Marc G. Fournier [mailto:scrappy@hub.org]\nSent: Monday, September 24, 2001 1:28 PM\nTo: Gowey, Geoffrey\nCc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\nSubject: RE: [HACKERS] an already existing alter table drop column\n?!?!?!\n\n\nOn Mon, 24 Sep 2001, Gowey, Geoffrey wrote:\n\n> damn. Is there a usable version being worked around? If so, how long\nbefore\n> it is available?\n\ntoo many dissending opinions on how it should (and shouldn't) be done ...\nthe one that was put in pre-maturely needed 2x the disk space to do ... so\nif you had a multi-gig table, you had to have enough free space to store a\nsecond copy in order to remove a column ...\n\nthere was a bunch of us that felt that it could be done better using a\n'flag' on the column so that it was hidden ... and some that thought ...\netc, etc ...\n\n\n> > Geoff\n>\n> -----Original Message-----\n> From: Marc G. 
Fournier [mailto:scrappy@hub.org]\n> Sent: Monday, September 24, 2001 1:14 PM\n> To: Gowey, Geoffrey\n> Cc: 'Hiroshi Inoue'; Stephan Szabo; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] an already existing alter table drop column\n> ?!?!?!\n>\n>\n>\n> isn't\n>\n", "msg_date": "Mon, 24 Sep 2001 13:48:59 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: an already existing alter table drop column ?!?!?!" } ]
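The "flag" approach discussed in this thread amounts to catalog bookkeeping: keep the column's physical data, mark its catalog entry as dropped, rename it out of the way so the name can be reused, and have projection skip it. A toy sketch, with invented field and placeholder names:

```python
# Logical DROP COLUMN: no table rewrite, just a hidden catalog entry.
catalog = [
    {"name": "id",   "num": 1, "isdropped": False},
    {"name": "code", "num": 2, "isdropped": False},
]

def drop_column(catalog, name):
    for att in catalog:
        if att["name"] == name and not att["isdropped"]:
            att["isdropped"] = True
            # rename to a position-based placeholder so "code" can be
            # reused by a future ADD COLUMN
            att["name"] = "__dropped_%d__" % att["num"]
            return
    raise KeyError(name)

def visible_columns(catalog):
    """What SELECT * would expand to."""
    return [att["name"] for att in catalog if not att["isdropped"]]

drop_column(catalog, "code")
print(visible_columns(catalog))  # ['id']
```

PostgreSQL 7.3 later took essentially this route: pg_attribute gained an attisdropped flag, and dropped columns are hidden rather than physically removed, with their storage reclaimed lazily as rows are rewritten.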
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 24 September 2001 20:11\n> To: pgadmin-hackers@postgresql.org\n> Subject: [pgadmin-hackers] Alter project: client or server side?\n\n< snipped long discussion about object dependencies and gotchas with regard\nto dropping/recreating (== editting) of objects such as functions - e.g.\nview frog uses function tadpole. Editting tadpole will break frog so we need\nto rebuild frog as well >\n\n> The way we do it in pgAdmin I \n> http://cvs.social-housing.org/viewcvs.cgi/pgadmin1\n> is that we maintain a dependency table based on STRING NAMES \n> and not OIDs. When altering an object (view, function, \n> trigger) we rebuild all dependent \n> objects.\n> \n> Is this the way we should proceed with pgAdmin II?\n> Is anyone planning a real dependency table based on object \n> STRING NAMES?\n> \n> We need some advice:\n> 1) Client solution: should we add the rebuilding feature to \n> pgAdmin II?\n> 2) Server solution: should we wait until the ALTER OBJECT \n> project is complete?\n\nI've CC'd this to pgsql-hackers in hope of some guidence from the developers\nthere.\n\nMy current view is that we need to implement these facilities (object\ndependency tracking/rebuilding) client side. I believe we are just coming up\nto the 7.2 beta and the required features do not exist to my knowledge,\ntherefore we either wait and hope they get written for 7.3 (or 8.0) or do it\nourselves client side.\n\nRegards, Dave.\n", "msg_date": "Mon, 24 Sep 2001 20:43:01 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Alter project: client or server side?" } ]
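The name-based dependency table described above reduces to a reverse-dependency walk: when an object (function, view, trigger) is dropped and recreated, everything that depends on it, directly or transitively, must be rebuilt afterwards, dependents last. A minimal sketch with invented object names; a real implementation would need a true topological sort to handle diamond-shaped dependency graphs.

```python
# depends_on maps each object to the set of objects it is built from,
# keyed by string name rather than OID (OIDs change on every CREATE).
depends_on = {
    "tadpole_fn": set(),
    "frog_view": {"tadpole_fn"},
    "pond_report_view": {"frog_view"},
}

def rebuild_order(changed, depends_on):
    """Objects to recreate after `changed` is recreated, in order."""
    order = []
    def visit(name):
        for obj, deps in depends_on.items():
            if name in deps and obj not in order:
                order.append(obj)
                visit(obj)
    visit(changed)
    return order

# Recreating the function forces both dependent views to be rebuilt:
print(rebuild_order("tadpole_fn", depends_on))  # ['frog_view', 'pond_report_view']
```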
[ { "msg_contents": "Hi all,\n\nwhile working on a new project involving PostgreSQL and making some\ntests, I have come up with the following output from psql :\n\n lang | length | length | text | text\n------+--------+--------+-----------+-----------\n isl | 7 | 6 | álíta | áleit\n isl | 7 | 7 | álíta | álitum\n isl | 7 | 7 | álíta | álitið\n isl | 5 | 4 | maður | mann\n isl | 5 | 7 | maður | mönnum\n isl | 5 | 5 | maður | manna\n isl | 5 | 4 | óska | -aði\n\n[the misalignment is what I got, it's not a copy-paste error]\n\nThis is pasted from a UTF-8 xterm running psql under a UTF-8 locale,\nquerying a database created with -E UNICODE (by the way, these are\nicelandic words :) ).\n\nWhat you see above is misleading, since it's not possible to see that\n'á', 'í', 'ö' and 'ó' are using combining marks, while 'ð' is not.\n\nAs a reminder, a combining mark in Unicode is that á is actually\nencoded as a + ' (where ' is the acute combining mark).\n\nEncoded in UTF-8, it's then <61 cc 81> [UTF16: 0061 0301],\ninstead of <c3 a1> [UTF16: 00E1].\n\nThe \"length\" fields are what is returned by length(a.text) and\nlength(b.text).\n\nSo, this shows two problems :\n\n- length() on the server side doesn't handle correctly Unicode [I have\n the same result with char_length()], and returns the number of chars\n (as it is however advertised to do), rather the length of the\n string.\n\n- the psql frontend makes the same mistake.\n\nI am using version 7.1.3 (debian sid), so it may have been corrected\nin the meantime (in this case, I apologise, but I have only recently\nstarted again to use PostgreSQL and I haven't followed -hackers long\nenough).\n\n\n=> I think fixing psql shouldn't be too complicated, as the glibc\nshould be providing the locale, and return the right values (is this\nthe case ? and what happens for combined latin + chinese characters\nfor example ? I'll have to try that later). If it's not fixed already,\ndo you want me to look at this ?
[it will take some time, as I haven't\nset up any development environment for postgres yet, and I'm away for\none week from thursday].\n\n=> regarding the backend, it may be more complex, as the underlaying\nsystem may not provide any UTF-8 locale to use (!= from being UTF-8\naware : an administrator may have decided that UTF-8 locales are\nuseless on a server, as only root connections are made, and he wants\nonly the C locale on the console - I've seen that quite often ;) ).\n\n\nThis brings me to another subject : I will need to support the full\nUnicode collation algorithm (UCA, as described in TR#10 [1] of the\nUnicode consortium), and I will need to be able to sort according to\nlocales which may not be installed on the backend server (some of\nwhich may not even be recognised by GNU libc, which supports already\nmore than 140 locales -- artificial languages would be an example). I\nwill also need to be able to normalise the unicode strings (TR#15 [2])\nso that I don't have some characters in legacy codepoints [as 00E1\nabove], and others with combining marks.\n\nThere is today an implementation in perl of the needed functionality,\nin Unicode::Collate and Unicode::Normalize (which I haven't tried yet\n:( ). But as they are Perl modules, the untrusted version of perl,\nplperlu, will be needed, and it's a pity for what I consider a core\nfunctionality in the future (not that plperlu isn't a good thing - I\ncan't wait for it ! - but that an untrusted pl language is needed to\nsupport normalisation and collation).\n\nNote also that there are a lot of data associated with these\nalgorithms, as you could expect.\n\nI was wondering if some people have already thought about this, or\nalready done something, or if some of you are interested in this. 
If\nnobody does anything, I'll do something eventually, probably before\nChristmas (I don't have much time for this, and I don't need the\nfunctionality right now), but if there is an interest, I could team\nwith others and develop it faster :)\n\nAnyway, I'm open to suggestions :\n\n- implement it in C, in the core,\n\n- implement it in C, as contributed custom functions,\n\n- implement it in perl (by reusing Unicode:: work), in a trusted plperl,\n\n- implement it in perl, calling Unicode:: modules, in an untrusted\n plperl.\n\nand then :\n\n- provide the data in tables (system and/or user) - which should be\n available across databases,\n\n- load the data from the original text files provided in Unicode (and\n other as needed), if the functionality is compiled into the server.\n\n- I believe the basic unicode information should be standard, and the\n locales should be provided as contrib/ files to be plugged in as\n needed.\n\nI can't really accept a solution which would rely on the underlaying\nlibc, as it may not provide the necessary locales (or maybe, then,\nhave a way to override the collating tables by user tables - actually,\nthis would be certainly the best solution if it's in the core, as the\ntables will put an extra burden on the distribution and the\ninstallation footprint, especially if the tables are already there,\nfor glibc, for perl5.6+, for other software dealing with Unicode).\n\nThe main functions I foresee are :\n\n- provide a normalisation function to all 4 forms,\n\n- provide a collation_key(text, language) function, as the calculation\n of the key may be expensive, some may want to index on the result (I\n would :) ),\n\n- provide a collation algorithm, using the two previous facilities,\n which can do primary to tertiary collation (cf TR#10 for a detailed\n explanation).\n\nI haven't looked at PostgreSQL code yet (shame !), so I may be\ncompletely off-track, in which case I'll retract myself and won't\nbother you again (on that subject, 
that is ;) )...\n\nComments ?\n\n\nPatrice.\n\n[1] http://www.unicode.org/unicode/reports/tr10/\n\n[2] http://www.unicode.org/unicode/reports/tr15/\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande.org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----\n", "msg_date": "Mon, 24 Sep 2001 23:52:04 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Unicode combining characters" }, { "msg_contents": "> So, this shows two problems :\n> \n> - length() on the server side doesn't handle correctly Unicode [I have\n> the same result with char_length()], and returns the number of chars\n> (as it is however advertised to do), rather than the length of the\n> string.\n\nThis is a known limitation.\n\n> - the psql frontend makes the same mistake.\n>\n> I am using version 7.1.3 (debian sid), so it may have been corrected\n> in the meantime (in this case, I apologise, but I have only recently\n> started again to use PostgreSQL and I haven't followed -hackers long\n> enough).\n> \n> \n> => I think fixing psql shouldn't be too complicated, as the glibc\n> should be providing the locale, and return the right values (is this\n> the case ? and what happens for combined latin + chinese characters\n> for example ? I'll have to try that later). If it's not fixed already,\n> do you want me to look at this ? 
[it will take some time, as I haven't\n> set up any development environment for postgres yet, and I'm away for\n> one week from thursday].\n\nSounds great.\n\n> I was wondering if some people have already thought about this, or\n> already done something, or if some of you are interested in this. If\n> nobody does anything, I'll do something eventually, probably before\n> Christmas (I don't have much time for this, and I don't need the\n> functionality right now), but if there is an interest, I could team\n> with others and develop it faster :)\n\nI'm very interested in your point. I will start studying [1][2] after\nthe beta freeze.\n\n> Anyway, I'm open to suggestions :\n> \n> - implement it in C, in the core,\n> \n> - implement it in C, as contributed custom functions,\n\nThis may be a good starting point.\n\n> I can't really accept a solution which would rely on the underlaying\n> libc, as it may not provide the necessary locales (or maybe, then,\n\nI totally agree here.\n\n> The main functions I foresee are :\n> \n> - provide a normalisation function to all 4 forms,\n> \n> - provide a collation_key(text, language) function, as the calculation\n> of the key may be expensive, some may want to index on the result (I\n> would :) ),\n> \n> - provide a collation algorithm, using the two previous facilities,\n> which can do primary to tertiary collation (cf TR#10 for a detailed\n> explanation).\n> \n> I haven't looked at PostgreSQL code yet (shame !), so I may be\n> completely off-track, in which case I'll retract myself and won't\n> bother you again (on that subject, that is ;) )...\n> \n> Comments ?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 25 Sep 2001 09:56:36 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Looks like a good project for 7.3\nProbably the best starting point would be to develope contrib/unicode\nwith smooth transition to core.\n\n\tOleg\nOn Mon, 24 
Sep 2001, Patrice [iso-8859-15] Hédé wrote:\n\n> Hi all,\n>\n> while working on a new project involving PostgreSQL and making some\n> tests, I have come up with the following output from psql :\n>\n> lang | length | length | text | text\n> ------+--------+--------+-----------+-----------\n> isl | 7 | 6 | álíta | áleit\n> isl | 7 | 7 | álíta | álitum\n> isl | 7 | 7 | álíta | álitið\n> isl | 5 | 4 | maður | mann\n> isl | 5 | 7 | maður | mönnum\n> isl | 5 | 5 | maður | manna\n> isl | 5 | 4 | óska | -aði\n>\n> [the misalignment is what I got, it's not a copy-paste error]\n>\n> This is pasted from a UTF-8 xterm running psql under a UTF-8 locale,\n> querying a database created with -E UNICODE (by the way, these are\n> icelandic words :) ).\n>\n> What you see above is misleading, since it's not possible to see that\n> 'á', 'í', 'ö' and 'ó' are using combining marks, while 'ð' is not.\n>\n> As a reminder, a combining mark in Unicode is that á is actually\n> encoded as a + ' (where ' is the acute combining mark).\n>\n> Encoded in UTF-8, it's then <61 cc 81> [UTF16: 0061 0301],\n> instead of <c3 a1> [UTF16: 00E1].\n>\n> The \"length\" fields are what is returned by length(a.text) and\n> length(b.text).\n>\n> So, this shows two problems :\n>\n> - length() on the server side doesn't handle correctly Unicode [I have\n> the same result with char_length()], and returns the number of chars\n> (as it is however advertised to do), rather than the length of the\n> string.\n>\n> - the psql frontend makes the same mistake.\n>\n> I am using version 7.1.3 (debian sid), so it may have been corrected\n> in the meantime (in this case, I apologise, but I have only recently\n> started again to use PostgreSQL and I haven't followed -hackers long\n> enough).\n>\n>\n> => I think fixing psql shouldn't be too complicated, as the glibc\n> should be providing the locale, and return the right values (is this\n> the case ? 
and what happens for combined latin + chinese characters\n> for example ? I'll have to try that later). If it's not fixed already,\n> do you want me to look at this ? [it will take some time, as I haven't\n> set up any development environment for postgres yet, and I'm away for\n> one week from thursday].\n>\n> => regarding the backend, it may be more complex, as the underlaying\n> system may not provide any UTF-8 locale to use (!= from being UTF-8\n> aware : an administrator may have decided that UTF-8 locales are\n> useless on a server, as only root connections are made, and he wants\n> only the C locale on the console - I've seen that quite often ;) ).\n>\n>\n> This brings me to another subject : I will need to support the full\n> Unicode collation algorithm (UCA, as described in TR#10 [1] of the\n> Unicode consortium), and I will need to be able to sort according to\n> locales which may not be installed on the backend server (some of\n> which may not even be recognised by GNU libc, which supports already\n> more than 140 locales -- artificial languages would be an example). I\n> will also need to be able to normalise the unicode strings (TR#15 [2])\n> so that I don't have some characters in legacy codepoints [as 00E1\n> above], and others with combining marks.\n>\n> There is today an implementation in perl of the needed functionality,\n> in Unicode::Collate and Unicode::Normalize (which I haven't tried yet\n> :( ). But as they are Perl modules, the untrusted version of perl,\n> plperlu, will be needed, and it's a pity for what I consider a core\n> functionality in the future (not that plperlu isn't a good thing - I\n> can't wait for it ! 
- but that an untrusted pl language is needed to\n> support normalisation and collation).\n>\n> Note also that there are a lot of data associated with these\n> algorithms, as you could expect.\n>\n> I was wondering if some people have already thought about this, or\n> already done something, or if some of you are interested in this. If\n> nobody does anything, I'll do something eventually, probably before\n> Christmas (I don't have much time for this, and I don't need the\n> functionality right now), but if there is an interest, I could team\n> with others and develop it faster :)\n>\n> Anyway, I'm open to suggestions :\n>\n> - implement it in C, in the core,\n>\n> - implement it in C, as contributed custom functions,\n>\n> - implement it in perl (by reusing Unicode:: work), in a trusted plperl,\n>\n> - implement it in perl, calling Unicode:: modules, in an untrusted\n> plperl.\n>\n> and then :\n>\n> - provide the data in tables (system and/or user) - which should be\n> available across databases,\n>\n> - load the data from the original text files provided in Unicode (and\n> other as needed), if the functionality is compiled into the server.\n>\n> - I believe the basic unicode information should be standard, and the\n> locales should be provided as contrib/ files to be plugged in as\n> needed.\n>\n> I can't really accept a solution which would rely on the underlaying\n> libc, as it may not provide the necessary locales (or maybe, then,\n> have a way to override the collating tables by user tables - actually,\n> this would be certainly the best solution if it's in the core, as the\n> tables will put an extra burden on the distribution and the\n> installation footprint, especially if the tables are already there,\n> for glibc, for perl5.6+, for other software dealing with Unicode).\n>\n> The main functions I foresee are :\n>\n> - provide a normalisation function to all 4 forms,\n>\n> - provide a collation_key(text, language) function, as the calculation\n> of the key 
may be expensive, some may want to index on the result (I\n> would :) ),\n>\n> - provide a collation algorithm, using the two previous facilities,\n> which can do primary to tertiary collation (cf TR#10 for a detailed\n> explanation).\n>\n> I haven't looked at PostgreSQL code yet (shame !), so I may be\n> completely off-track, in which case I'll retract myself and won't\n> bother you again (on that subject, that is ;) )...\n>\n> Comments ?\n>\n>\n> Patrice.\n>\n> [1] http://www.unicode.org/unicode/reports/tr10/\n>\n> [2] http://www.unicode.org/unicode/reports/tr15/\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 25 Sep 2001 12:50:51 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Hi,\n\n* Tatsuo Ishii <t-ishii@sra.co.jp> [010925 18:18]:\n> > So, this shows two problems :\n> > \n> > - length() on the server side doesn't handle correctly Unicode [I\n> > have the same result with char_length()], and returns the number\n> > of chars (as it is however advertised to do), rather the length\n> > of the string.\n> \n> This is a known limitation.\n\nTo solve this, we could use wcwidth() (there is a custom\nimplementation for the systems which don't have it in the glibc). I'll\nhave a look at it later.\n\n> > - the psql frontend makes the same mistake.\n\nSame thing here.\n\nI have just installed the CVS and downloaded the development version\n(thanks Baldvin), tested that the stock version compiles fine, and\nI'll now have a look at how to make this work. 
:) I'll send a patch\nwhen I have this working here.\n\n> Sounds great.\n\n[Unicode normalisation and collation in the backend]\n\n> I'm very interested in your point. I will start studying [1][2] after\n> the beta freeze.\n> \n> > Anyway, I'm open to suggestions :\n> > \n> > - implement it in C, in the core,\n> > \n> > - implement it in C, as contributed custom functions,\n> \n> This may be a good starting point.\n> \n> > I can't really accept a solution which would rely on the underlaying\n> > libc, as it may not provide the necessary locales (or maybe, then,\n> \n> I totally agree here.\n\nAs Oleg suggested, I will try to aim for 7.3, first with a version in\ncontrib, and later, if the implementation is fine, it could be moved\nto the core (or not ? Though it would be nice to make sure every\nPostgreSQL installation which supports unicode has it, so that users\nwon't need to have administrative rights to use the functionality).\n\nI think I will go for a C version, and probably the collation and\nnormalisation data in tables, with some way to override the defaults\nwith secondary tables... 
I'll report as soon as I have something +/-\nworking.\n\n> --\n> Tatsuo Ishii\n\nPatrice.\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----\n\n", "msg_date": "Tue, 25 Sep 2001 20:14:20 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> > > - length() on the server side doesn't handle correctly Unicode [I\n> > > have the same result with char_length()], and returns the number\n> > > of chars (as it is however advertised to do), rather than the length\n> > > of the string.\n> > \n> > This is a known limitation.\n> \n> To solve this, we could use wcwidth() (there is a custom\n> implementation for the systems which don't have it in the glibc). I'll\n> have a look at it later.\n\nAnd wcwidth() depends on the locale. That is another reason we\ncould not use it.\n\n> As Oleg suggested, I will try to aim for 7.3, first with a version in\n> contrib, and later, if the implementation is fine, it could be moved\n> to the core (or not ? Though it would be nice to make sure every\n> PostgreSQL installation which supports unicode has it, so that users\n> won't need to have administrative rights to use the functionality).\n\nI would like to see SQL99's charset, collate functionality for 7.3 (or\nlater). If this happens, current multibyte implementation would be\ndramatically changed. 
That would be a good timing to merge your\nUnicode stuffs into the main source tree.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 26 Sep 2001 10:03:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> I would like to see SQL99's charset, collate functionality for 7.3 (or\n> later). If this happens, current multibyte implementation would be\n> dramatically changed...\n\nI'm *still* interested in working on this (an old story I know). I'm\nworking on date/time stuff for 7.2, but hopefully 7.3 will see some\nadvances in the SQL99 direction on charset etc.\n\n - Thomas\n", "msg_date": "Wed, 26 Sep 2001 06:17:36 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> > I would like to see SQL99's charset, collate functionality for 7.3 (or\n> > later). If this happens, current multibyte implementation would be\n> > dramatically changed...\n> \n> I'm *still* interested in working on this (an old story I know). I'm\n> working on date/time stuff for 7.2, but hopefully 7.3 will see some\n> advances in the SQL99 direction on charset etc.\n\nBTW, I see \"CHARACTER SET\" in gram.y. Does current already support\nthat syntax?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 26 Sep 2001 16:26:40 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> BTW, I see \"CHARACTER SET\" in gram.y. Does current already support\n> that syntax?\n\nYes and no. gram.y knows about CHARACTER SET, but only for the long\nform, the clause is in the wrong position (it preceeds the length\nspecification) and it does not do much useful (generates a data type\nbased on the character set name which does not get recognized farther\nback). 
Examples:\n\nthomas=# create table t1 (c varchar(20) character set sql_ascii);\nERROR: parser: parse error at or near \"character\"\nthomas=# create table t1 (c character varying character set sql_ascii\n(20));\nERROR: Unable to locate type name 'varsql_ascii' in catalog\n\nI'm pretty sure I'll get shift/reduce troubles when trying to move that\nclause to *after* the length specifier. I'll try to do something with\nthe syntax for 7.2 once I've finished the date/time stuff.\n\n - Thomas\n", "msg_date": "Wed, 26 Sep 2001 12:43:22 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "\nCan someone give me TODO items for this discussion?\n\n> > So, this shows two problems :\n> > \n> > - length() on the server side doesn't handle correctly Unicode [I have\n> > the same result with char_length()], and returns the number of chars\n> > (as it is however advertised to do), rather the length of the\n> > string.\n> \n> This is a known limitation.\n> \n> > - the psql frontend makes the same mistake.\n> >\n> > I am using version 7.1.3 (debian sid), so it may have been corrected\n> > in the meantime (in this case, I apologise, but I have only recently\n> > started again to use PostgreSQL and I haven't followed -hackers long\n> > enough).\n> > \n> > \n> > => I think fixing psql shouldn't be too complicated, as the glibc\n> > should be providing the locale, and return the right values (is this\n> > the case ? and what happens for combined latin + chinese characters\n> > for example ? I'll have to try that later). If it's not fixed already,\n> > do you want me to look at this ? 
[it will take some time, as I haven't\n> > set up any development environment for postgres yet, and I'm away for\n> > one week from thursday].\n> \n> Sounds great.\n> \n> > I was wondering if some people have already thought about this, or\n> > already done something, or if some of you are interested in this. If\n> > nobody does anything, I'll do something eventually, probably before\n> > Christmas (I don't have much time for this, and I don't need the\n> > functionality right now), but if there is an interest, I could team\n> > with others and develop it faster :)\n> \n> I'm very interested in your point. I will start studying [1][2] after\n> the beta freeze.\n> \n> > Anyway, I'm open to suggestions :\n> > \n> > - implement it in C, in the core,\n> > \n> > - implement it in C, as contributed custom functions,\n> \n> This may be a good starting point.\n> \n> > I can't really accept a solution which would rely on the underlaying\n> > libc, as it may not provide the necessary locales (or maybe, then,\n> \n> I totally agree here.\n> \n> > The main functions I foresee are :\n> > \n> > - provide a normalisation function to all 4 forms,\n> > \n> > - provide a collation_key(text, language) function, as the calculation\n> > of the key may be expensive, some may want to index on the result (I\n> > would :) ),\n> > \n> > - provide a collation algorithm, using the two previous facilities,\n> > which can do primary to tertiary collation (cf TR#10 for a detailed\n> > explanation).\n> > \n> > I haven't looked at PostgreSQL code yet (shame !), so I may be\n> > completely off-track, in which case I'll retract myself and won't\n> > bother you again (on that subject, that is ;) )...\n> > \n> > Comments ?\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | 
http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 13:55:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> Can someone give me TODO items for this discussion?\n\nWhat about:\n\nImprove Unicode combined character handling\n--\nTatsuo Ishii\n\n> > > So, this shows two problems :\n> > > \n> > > - length() on the server side doesn't handle correctly Unicode [I have\n> > > the same result with char_length()], and returns the number of chars\n> > > (as it is however advertised to do), rather the length of the\n> > > string.\n> > \n> > This is a known limitation.\n> > \n> > > - the psql frontend makes the same mistake.\n> > >\n> > > I am using version 7.1.3 (debian sid), so it may have been corrected\n> > > in the meantime (in this case, I apologise, but I have only recently\n> > > started again to use PostgreSQL and I haven't followed -hackers long\n> > > enough).\n> > > \n> > > \n> > > => I think fixing psql shouldn't be too complicated, as the glibc\n> > > should be providing the locale, and return the right values (is this\n> > > the case ? and what happens for combined latin + chinese characters\n> > > for example ? I'll have to try that later). If it's not fixed already,\n> > > do you want me to look at this ? [it will take some time, as I haven't\n> > > set up any development environment for postgres yet, and I'm away for\n> > > one week from thursday].\n> > \n> > Sounds great.\n> > \n> > > I was wondering if some people have already thought about this, or\n> > > already done something, or if some of you are interested in this. 
If\n> > > nobody does anything, I'll do something eventually, probably before\n> > > Christmas (I don't have much time for this, and I don't need the\n> > > functionality right now), but if there is an interest, I could team\n> > > with others and develop it faster :)\n> > \n> > I'm very interested in your point. I will start studying [1][2] after\n> > the beta freeze.\n> > \n> > > Anyway, I'm open to suggestions :\n> > > \n> > > - implement it in C, in the core,\n> > > \n> > > - implement it in C, as contributed custom functions,\n> > \n> > This may be a good starting point.\n> > \n> > > I can't really accept a solution which would rely on the underlaying\n> > > libc, as it may not provide the necessary locales (or maybe, then,\n> > \n> > I totally agree here.\n> > \n> > > The main functions I foresee are :\n> > > \n> > > - provide a normalisation function to all 4 forms,\n> > > \n> > > - provide a collation_key(text, language) function, as the calculation\n> > > of the key may be expensive, some may want to index on the result (I\n> > > would :) ),\n> > > \n> > > - provide a collation algorithm, using the two previous facilities,\n> > > which can do primary to tertiary collation (cf TR#10 for a detailed\n> > > explanation).\n> > > \n> > > I haven't looked at PostgreSQL code yet (shame !), so I may be\n> > > completely off-track, in which case I'll retract myself and won't\n> > > bother you again (on that subject, that is ;) )...\n> > > \n> > > Comments ?\n> > --\n> > Tatsuo Ishii\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Tue, 02 Oct 2001 10:14:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> > Can someone give me TODO items for this discussion?\n> \n> What about:\n> \n> Improve Unicode combined character handling\n\nDone. I can't update the web version because I don't have permission.\n\nAlso, have we decided if multibyte should be the configure default now?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 21:23:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> Also, have we decided if multibyte should be the configure default now?\n\nNot sure.\n\nAnyway I have run a LIKE/REGEX query test using current. The query\nexecuted is:\n\nexplain analyze select '0000000 5089 474e...( 16475\nbytes long text containing only 0-9a-z chars) like 'aaa';\n\nand\n\nexplain analyze select '0000000 5089 474e...( 16475\nbytes long text containing only 0-9a-z chars) ~ 'aaa';\n\nHere is the result:\n\n\tno MB\t\twith MB\nLIKE\t0.09 msec\t0.08 msec\nREGEX\t0.09 msec\t0.10 msec\n\nLIKE with MB seemed to be reasonably fast, but REGEX with MB seemed a\nlittle bit slow. 
Probably this is due to the wide character conversion\noverhead.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 02 Oct 2001 15:49:52 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "\nIf no one can find a case where multibyte is slower, I think we should\nenable it by default. Comments?\n\n\n> > Also, have we decided if multibyte should be the configure default now?\n> \n> Not sure.\n> \n> Anyway I have run a LIKE/REGEX query test using current. The query\n> executed is:\n> \n> explain analyze select '0000000 5089 474e...( 16475\n> bytes long text containing only 0-9a-z chars) like 'aaa';\n> \n> and\n> \n> explain analyze select '0000000 5089 474e...( 16475\n> bytes long text containing only 0-9a-z chars) ~ 'aaa';\n> \n> Here is the result:\n> \n> \tno MB\t\twith MB\n> LIKE\t0.09 msec\t0.08 msec\n> REGEX\t0.09 msec\t0.10 msec\n> \n> LIKE with MB seemed to be reasonably fast, but REGEX with MB seemed a\n> little bit slow. Probably this is due to the wide character conversion\n> overhead.\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 12:31:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If no one can find a case where multibyte is slower, I think we should\n> enable it by default. 
Comments?\n\nWell, he just did point out such a case:\n\n>> no MB\t\twith MB\n>> LIKE \t0.09 msec\t0.08 msec\n>> REGEX\t0.09 msec\t0.10 msec\n\nBut I agree with your conclusion. If the worst penalty we can find is\nthat a regex comparison operator is 10% slower, we may as well turn it\non by default. Most people will never notice the difference, and anyone\nwho really cares can always turn it off again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 13:48:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If no one can find a case where multibyte is slower, I think we should\n> > enable it by default. Comments?\n> \n> Well, he just did point out such a case:\n> \n> >> no MB\t\twith MB\n> >> LIKE \t0.09 msec\t0.08 msec\n> >> REGEX\t0.09 msec\t0.10 msec\n> \n> But I agree with your conclusion. If the worst penalty we can find is\n> that a regex comparison operator is 10% slower, we may as well turn it\n> on by default. Most people will never notice the difference, and anyone\n> who really cares can always turn it off again.\n\nBut the strange thing is that LIKE is faster, perhaps meaning his\nmeasurements can't even see the difference, or is it because the LIKE\noptimization is off for multibyte.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 2 Oct 2001 13:55:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But the strange thing is that LIKE is faster, perhaps meaning his\n> measurements can't even see the difference,\n\nYeah, I suspect there's 10% or more noise in these numbers. But then\none could read the results as saying we can't reliably measure any\ndifference at all ...\n\nI'd feel more confident if the measurements were done using operators\nrepeated enough times to yield multiple-second runtimes. I don't\ntrust fractional-second time measurements on Unix boxen; too much chance\nof bogus results due to activity of other processes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Oct 2001 14:08:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> LIKE with MB seemed to be resonably fast, but REGEX with MB seemed a\n> little bit slow. Probably this is due the wide character conversion\n> overhead.\n\nCould this conversion be optimized to recognize when it's dealing with a\nsingle-byte character encoding?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 3 Oct 2001 00:14:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> Yeah, I suspect there's 10% or more noise in these numbers. But then\n> one could read the results as saying we can't reliably measure any\n> difference at all ...\n> \n> I'd feel more confident if the measurements were done using operators\n> repeated enough times to yield multiple-second runtimes. 
I don't\n> trust fractional-second time measurements on Unix boxen; too much chance\n> of bogus results due to activity of other processes.\n\nAny idea to do that? I tried to do a measurements using something like\n\"SELECT * FROM t1 WHERE very-long-string-column LIKE 'aaa'\", but I'm\nafraid the I/O time masks the difference...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 03 Oct 2001 10:01:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> > LIKE with MB seemed to be resonably fast, but REGEX with MB seemed a\n> > little bit slow. Probably this is due the wide character conversion\n> > overhead.\n> \n> Could this conversion be optimized to recognize when it's dealing with a\n> single-byte character encoding?\n\nNot sure, will look into...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 03 Oct 2001 10:11:03 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> I'd feel more confident if the measurements were done using operators\n>> repeated enough times to yield multiple-second runtimes.\n\n> Any idea to do that?\n\nMaybe something like this: declare a plpgsql function that takes two\ntext parameters and has a body like\n\n\tfor (i = 0 to a million)\n\t\tboolvar := $1 like $2;\n\nThen call it with strings of different lengths and see how the runtime\nvaries. 
You need to apply the LIKE to function parameters, else the\nsystem will probably collapse the LIKE operation to a constant...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 00:14:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "> Maybe something like this: declare a plpgsql function that takes two\n> text parameters and has a body like\n> \n> \tfor (i = 0 to a million)\n> \t\tboolvar := $1 like $2;\n> \n> Then call it with strings of different lengths and see how the runtime\n> varies. You need to apply the LIKE to function parameters, else the\n> system will probably collapse the LIKE operation to a constant...\n\nGood idea. I did tests for both LIKE and REGEX using PL/pgsql\nfunctions (see source code below). Here are the results. What I did was\ncalling the functions with changing target strings from 32 bytes to\n8192. Times are all in msec.\n\n(1) LIKE\n\nbytes Without MB\tWith MB\n\n32\t 8121.94\t8094.73\n64\t 8167.98\t8105.24\n128\t 8151.30\t8108.61\n256\t 8090.12\t8098.20\n512\t 8111.05\t8101.07\n1024\t 8110.49\t8099.61\n2048\t 8095.32\t8106.00\n4096\t 8094.88\t8091.19\n8192\t 8123.02\t8121.63\n\n(2) REGEX\n\nbytes Without MB\tWith MB\n\n32\t117.93\t\t119.47\n64\t126.41\t\t127.61\n128\t143.97\t\t146.55\n256\t180.49\t\t183.69\n512\t255.53\t\t256.16\n1024\t410.59\t\t409.22\n2048\t5176.38\t\t5181.99\n4096\t6000.82\t\t5627.84\n8192\t6529.15\t\t6547.10\n\n------------- shell script -------------------\nfor i in 32 64 128 256 512 1024 2048 4096 8192\ndo\npsql -c \"explain analyze select liketest(a,'aaa') from (select substring('very_long_text' from 0 for $i) as a) as a\" test\ndone\n------------- shell script -------------------\n\n------------- functions -----------------\ndrop function liketest(text,text);\ncreate function liketest(text,text) returns bool as '\ndeclare\n\ti int;\n\trtn boolean;\nbegin\n\ti := 1000000;\n\twhile i > 0 loop\n\t rtn := $1 
like $2;\n\t i := i - 1;\n\tend loop;\n\treturn rtn;\nend;\n' language 'plpgsql';\n\ndrop function regextest(text,text);\ncreate function regextest(text,text) returns bool as '\ndeclare\n\ti int;\n\trtn boolean;\nbegin\n\ti := 10000;\n\twhile i > 0 loop\n\t rtn := $1 ~ $2;\n\t i := i - 1;\n\tend loop;\n\treturn rtn;\nend;\n' language 'plpgsql';\n------------- functions -----------------\n", "msg_date": "Wed, 03 Oct 2001 16:12:57 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters " }, { "msg_contents": "\nCan I ask about the status of this?\n\n\n> Hi all,\n> \n> while working on a new project involving PostgreSQL and making some\n> tests, I have come up with the following output from psql :\n> \n> lang | length | length | text | text\n> ------+--------+--------+-----------+-----------\n> isl | 7 | 6 | _l_ta | _leit\n> isl | 7 | 7 | _l_ta | _litum\n> isl | 7 | 7 | _l_ta | _liti_\n> isl | 5 | 4 | ma_ur | mann\n> isl | 5 | 7 | ma_ur | m_nnum\n> isl | 5 | 5 | ma_ur | manna\n> isl | 5 | 4 | _ska | -a_i\n> \n> [the misalignment is what I got, it's not a copy-paste error]\n> \n> This is pasted from a UTF-8 xterm running psql under a UTF-8 locale,\n> querying a database created with -E UNICODE (by the way, these are\n> icelandic words :) ).\n> \n> What you see above is misleading, since it's not possible to see that\n> '_', '_', '_' and '_' are using combining marks, while '_' is not.\n> \n> As a reminder, a combining mark in Unicode is that _ is actually\n> encoded as a + ' (where ' is the acute combining mark).\n> \n> Encoded in UTF-8, it's then <61 cc 81> [UTF16: 0061 0301],\n> instead of <c3 a1> [UTF16: 00E1].\n> \n> The \"length\" fields are what is returned by length(a.text) and\n> length(b.text).\n> \n> So, this shows two problems :\n> \n> - length() on the server side doesn't handle correctly Unicode [I have\n> the same result with char_length()], and returns the number of chars\n> (as it is 
however advertised to do), rather the length of the\n> string.\n> \n> - the psql frontend makes the same mistake.\n> \n> I am using version 7.1.3 (debian sid), so it may have been corrected\n> in the meantime (in this case, I apologise, but I have only recently\n> started again to use PostgreSQL and I haven't followed -hackers long\n> enough).\n> \n> \n> => I think fixing psql shouldn't be too complicated, as the glibc\n> should be providing the locale, and return the right values (is this\n> the case ? and what happens for combined latin + chinese characters\n> for example ? I'll have to try that later). If it's not fixed already,\n> do you want me to look at this ? [it will take some time, as I haven't\n> set up any development environment for postgres yet, and I'm away for\n> one week from thursday].\n> \n> => regarding the backend, it may be more complex, as the underlaying\n> system may not provide any UTF-8 locale to use (!= from being UTF-8\n> aware : an administrator may have decided that UTF-8 locales are\n> useless on a server, as only root connections are made, and he wants\n> only the C locale on the console - I've seen that quite often ;) ).\n> \n> \n> This brings me to another subject : I will need to support the full\n> Unicode collation algorithm (UCA, as described in TR#10 [1] of the\n> Unicode consortium), and I will need to be able to sort according to\n> locales which may not be installed on the backend server (some of\n> which may not even be recognised by GNU libc, which supports already\n> more than 140 locales -- artificial languages would be an example). I\n> will also need to be able to normalise the unicode strings (TR#15 [2])\n> so that I don't have some characters in legacy codepoints [as 00E1\n> above], and others with combining marks.\n> \n> There is today an implementation in perl of the needed functionality,\n> in Unicode::Collate and Unicode::Normalize (which I haven't tried yet\n> :( ). 
But as they are Perl modules, the untrusted version of perl,\n> plperlu, will be needed, and it's a pity for what I consider a core\n> functionality in the future (not that plperlu isn't a good thing - I\n> can't wait for it ! - but that an untrusted pl language is needed to\n> support normalisation and collation).\n> \n> Note also that there are a lot of data associated with these\n> algorithms, as you could expect.\n> \n> I was wondering if some people have already thought about this, or\n> already done something, or if some of you are interested in this. If\n> nobody does anything, I'll do something eventually, probably before\n> Christmas (I don't have much time for this, and I don't need the\n> functionality right now), but if there is an interest, I could team\n> with others and develop it faster :)\n> \n> Anyway, I'm open to suggestions :\n> \n> - implement it in C, in the core,\n> \n> - implement it in C, as contributed custom functions,\n> \n> - implement it in perl (by reusing Unicode:: work), in a trusted plperl,\n> \n> - implement it in perl, calling Unicode:: modules, in an untrusted\n> plperl.\n> \n> and then :\n> \n> - provide the data in tables (system and/or user) - which should be\n> available across databases,\n> \n> - load the data from the original text files provided in Unicode (and\n> other as needed), if the functionality is compiled into the server.\n> \n> - I believe the basic unicode information should be standard, and the\n> locales should be provided as contrib/ files to be plugged in as\n> needed.\n> \n> I can't really accept a solution which would rely on the underlaying\n> libc, as it may not provide the necessary locales (or maybe, then,\n> have a way to override the collating tables by user tables - actually,\n> this would be certainly the best solution if it's in the core, as the\n> tables will put an extra burden on the distribution and the\n> installation footprint, especially if the tables are already there,\n> for glibc, for 
perl5.6+, for other software dealing with Unicode).\n> \n> The main functions I foresee are :\n> \n> - provide a normalisation function to all 4 forms,\n> \n> - provide a collation_key(text, language) function, as the calculation\n> of the key may be expensive, some may want to index on the result (I\n> would :) ),\n> \n> - provide a collation algorithm, using the two previous facilities,\n> which can do primary to tertiary collation (cf TR#10 for a detailed\n> explanation).\n> \n> I haven't looked at PostgreSQL code yet (shame !), so I may be\n> completely off-track, in which case I'll retract myself and won't\n> bother you again (on that subject, that is ;) )...\n> \n> Comments ?\n> \n> \n> Patrice.\n> \n> [1] http://www.unicode.org/unicode/reports/tr10/\n> \n> [2] http://www.unicode.org/unicode/reports/tr15/\n> \n> -- \n> Patrice H_D_ ------------------------------- patrice _ islande.org -----\n> -- Isn't it weird how scientists can imagine all the matter of the\n> universe exploding out of a dot smaller than the head of a pin, but they\n> can't come up with a more evocative name for it than \"The Big Bang\" ?\n> -- What would _you_ call the creation of the universe ?\n> -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n> ------------------------------------------ http://www.islande.org/ -----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 16:43:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [011011 22:49]:\n> \n> Can I ask about the status of this?\n\nI have sent a patch a few days ago solving the client-side issue (on\nthe pgsql-patches mailing list) for review. I think Tatsuo said it\nlooked OK, however he should confirm/infirm this.\n\nThere is still the issue about unicode characters which have code\npoints above U00FFFF, which probably should be rejected on the server\nside. I have yet to update my patch for that. I'll probably do that\ntomorrow, as I don't have more time tonight, but I think this will be\ntrivial, so maybe Tatsuo can do it, if he has some time before that :)\n\nIf there are other issues, I'd like to know :)\n\nRegarding the implementation of Unicode functionality (normalisation,\ncollation, Unicode-aware regexes, uc/lc/tc (title-case) functions,...)\non the server side, it's definitely something for 7.3 (though it might\nbe available sooner). It will probably be just a contributed extension\nfirst. 
I'm currently making an alpha version of the project I'm\nworking on in order to have sufficient \"real-life\" Unicode data to\nwork with, and make sure the design choices make sense :)\n\nPatrice.\n\nBTW, I tried to find web-accessible archives of pgsql-patches, are\nthere some, or should each and every discussion be followed-up on\npgsql-hackers (even though the description for pgsql-patches includes\ndiscussions on patches) ?\n\n> > Hi all,\n> > \n> > while working on a new project involving PostgreSQL and making some\n> > tests, I have come up with the following output from psql :\n> > \n> > lang | length | length | text | text\n> > ------+--------+--------+-----------+-----------\n> > isl | 7 | 6 | _l_ta | _leit\n> > isl | 7 | 7 | _l_ta | _litum\n> > isl | 7 | 7 | _l_ta | _liti_\n> > isl | 5 | 4 | ma_ur | mann\n> > isl | 5 | 7 | ma_ur | m_nnum\n> > isl | 5 | 5 | ma_ur | manna\n> > isl | 5 | 4 | _ska | -a_i\n> > \n> > [the misalignment is what I got, it's not a copy-paste error]\n> > \n> > This is pasted from a UTF-8 xterm running psql under a UTF-8 locale,\n> > querying a database created with -E UNICODE (by the way, these are\n> > icelandic words :) ).\n> > \n> > What you see above is misleading, since it's not possible to see that\n> > '_', '_', '_' and '_' are using combining marks, while '_' is not.\n> > \n> > As a reminder, a combining mark in Unicode is that _ is actually\n> > encoded as a + ' (where ' is the acute combining mark).\n> > \n> > Encoded in UTF-8, it's then <61 cc 81> [UTF16: 0061 0301],\n> > instead of <c3 a1> [UTF16: 00E1].\n> > \n> > The \"length\" fields are what is returned by length(a.text) and\n> > length(b.text).\n> > \n> > So, this shows two problems :\n> > \n> > - length() on the server side doesn't handle correctly Unicode [I have\n> > the same result with char_length()], and returns the number of chars\n> > (as it is however advertised to do), rather the length of the\n> > string.\n> > \n> > - the psql frontend makes the same 
mistake.\n> > \n> > I am using version 7.1.3 (debian sid), so it may have been corrected\n> > in the meantime (in this case, I apologise, but I have only recently\n> > started again to use PostgreSQL and I haven't followed -hackers long\n> > enough).\n> > \n> > \n> > => I think fixing psql shouldn't be too complicated, as the glibc\n> > should be providing the locale, and return the right values (is this\n> > the case ? and what happens for combined latin + chinese characters\n> > for example ? I'll have to try that later). If it's not fixed already,\n> > do you want me to look at this ? [it will take some time, as I haven't\n> > set up any development environment for postgres yet, and I'm away for\n> > one week from thursday].\n> > \n> > => regarding the backend, it may be more complex, as the underlaying\n> > system may not provide any UTF-8 locale to use (!= from being UTF-8\n> > aware : an administrator may have decided that UTF-8 locales are\n> > useless on a server, as only root connections are made, and he wants\n> > only the C locale on the console - I've seen that quite often ;) ).\n> > \n> > \n> > This brings me to another subject : I will need to support the full\n> > Unicode collation algorithm (UCA, as described in TR#10 [1] of the\n> > Unicode consortium), and I will need to be able to sort according to\n> > locales which may not be installed on the backend server (some of\n> > which may not even be recognised by GNU libc, which supports already\n> > more than 140 locales -- artificial languages would be an example). I\n> > will also need to be able to normalise the unicode strings (TR#15 [2])\n> > so that I don't have some characters in legacy codepoints [as 00E1\n> > above], and others with combining marks.\n> > \n> > There is today an implementation in perl of the needed functionality,\n> > in Unicode::Collate and Unicode::Normalize (which I haven't tried yet\n> > :( ). 
But as they are Perl modules, the untrusted version of perl,\n> > plperlu, will be needed, and it's a pity for what I consider a core\n> > functionality in the future (not that plperlu isn't a good thing - I\n> > can't wait for it ! - but that an untrusted pl language is needed to\n> > support normalisation and collation).\n> > \n> > Note also that there are a lot of data associated with these\n> > algorithms, as you could expect.\n> > \n> > I was wondering if some people have already thought about this, or\n> > already done something, or if some of you are interested in this. If\n> > nobody does anything, I'll do something eventually, probably before\n> > Christmas (I don't have much time for this, and I don't need the\n> > functionality right now), but if there is an interest, I could team\n> > with others and develop it faster :)\n> > \n> > Anyway, I'm open to suggestions :\n> > \n> > - implement it in C, in the core,\n> > \n> > - implement it in C, as contributed custom functions,\n> > \n> > - implement it in perl (by reusing Unicode:: work), in a trusted plperl,\n> > \n> > - implement it in perl, calling Unicode:: modules, in an untrusted\n> > plperl.\n> > \n> > and then :\n> > \n> > - provide the data in tables (system and/or user) - which should be\n> > available across databases,\n> > \n> > - load the data from the original text files provided in Unicode (and\n> > other as needed), if the functionality is compiled into the server.\n> > \n> > - I believe the basic unicode information should be standard, and the\n> > locales should be provided as contrib/ files to be plugged in as\n> > needed.\n> > \n> > I can't really accept a solution which would rely on the underlaying\n> > libc, as it may not provide the necessary locales (or maybe, then,\n> > have a way to override the collating tables by user tables - actually,\n> > this would be certainly the best solution if it's in the core, as the\n> > tables will put an extra burden on the distribution and the\n> > 
installation footprint, especially if the tables are already there,\n> > for glibc, for perl5.6+, for other software dealing with Unicode).\n> > \n> > The main functions I foresee are :\n> > \n> > - provide a normalisation function to all 4 forms,\n> > \n> > - provide a collation_key(text, language) function, as the calculation\n> > of the key may be expensive, some may want to index on the result (I\n> > would :) ),\n> > \n> > - provide a collation algorithm, using the two previous facilities,\n> > which can do primary to tertiary collation (cf TR#10 for a detailed\n> > explanation).\n> > \n> > I haven't looked at PostgreSQL code yet (shame !), so I may be\n> > completely off-track, in which case I'll retract myself and won't\n> > bother you again (on that subject, that is ;) )...\n> > \n> > Comments ?\n> > \n> > \n> > Patrice.\n> > \n> > [1] http://www.unicode.org/unicode/reports/tr10/\n> > \n> > [2] http://www.unicode.org/unicode/reports/tr15/\n\n-- \nPatrice Hᅵdᅵ\nemail: patrice hede ᅵ islande org\nwww : http://www.islande.org/\n", "msg_date": "Thu, 11 Oct 2001 23:23:36 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [011011 22:49]:\n> > \n> > Can I ask about the status of this?\n> \n> I have sent a patch a few days ago solving the client-side issue (on\n> the pgsql-patches mailing list) for review. I think Tatsuo said it\n> looked OK, however he should confirm/infirm this.\n\nI've been waiting for Peter's opnion. His understanding of psql is\nmuch better than me.\n\n> There is still the issue about unicode characters which have code\n> points above U00FFFF, which probably should be rejected on the server\n> side. I have yet to update my patch for that. 
I'll probably do that\n> tomorrow, as I don't have more time tonight, but I think this will be\n> trivial, so maybe Tatsuo can do it, if he has some time before that :)\n\nRejecting over U00FFFF is considered a bug fix, which means we could\nfix it after the beta test begins :-)\n\nBTW, have you tried my updates for supporting ISO 8859 characters?\nPlease let me know if you have troubles especially with \"euro\". I\nunderstand none of these charsets...\n--\nTatsuo Ishii\n", "msg_date": "Fri, 12 Oct 2001 15:07:08 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" }, { "msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [011011 22:49]:\n> > \n> > Can I ask about the status of this?\n> \n> I have sent a patch a few days ago solving the client-side issue (on\n> the pgsql-patches mailing list) for review. I think Tatsuo said it\n> looked OK, however he should confirm/infirm this.\n\nOK, I saw the client encoding function appear today. I assume Tatsuo\nand friends will finish this up.\n\n> Regarding the implementation of Unicode functionality (normalisation,\n> collation, Unicode-aware regexes, uc/lc/tc (title-case) functions,...)\n> on the server side, it's definitely something for 7.3 (though it might\n> be available sooner). It will probably be just a contributed extension\n> first. I'm currently making an alpha version of the project I'm\n> working on in order to have sufficient \"real-life\" Unicode data to\n> work with, and make sure the design choices make sense :)\n\nIf you would like to add some TODO items, please let me know. 
Good to\ndocument them even if you can't get to them for a while.\n\n> BTW, I tried to find web-accessible archives of pgsql-patches, are\n> there some, or should each and every discussion be followed-up on\n> pgsql-hackers (even though the description for pgsql-patches includes\n> discussions on patches) ?\n\nI use:\n\n\thttp://fts.postgresql.org/db/mw/\n\nYou can discuss on patches or hackers. I usually do patch discussion on\npatches unless I need a larger audience for comments.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 11:59:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode combining characters" } ]
[ { "msg_contents": "Two problems (I've been holding back reporting until I ran into this\nsecond):\n\nBackground: I'm trying to migrate an existing db from MS Sql 2K to pgsql by\ncreating a DTS job.\n\nDB system info: NT 4 sp6a, MS SQL2K Regular, Pgsql ODBC driver 7.01.00.06.\n\nFirst problem: The ODBC driver reports a column of type money in pgsql (and\nin ms sql) as float4 and barfs when I try executing the job because the data\nis not in the correct format (this is one of, but not the only. reasons why\nI am looking for how to change the column datatype).\n\nSecond problem: I create a table with a serial column. This table is not in\nthe existing db and no values are being set. The job bombs when executed\nwith a \"you don't have permissions to set sequence ..._seq\" (why this is\nbeing modified at all I do not know).\n\nIn regards to the second problem, if I create a serial column *after* the\ndamn is imported will all the records be updated?\n\n\nGeoff\n", "msg_date": "Mon, 24 Sep 2001 18:41:57 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "ODBC driver flakieness" }, { "msg_contents": "Hi Gowey,\nPlease post to pgsql-odbc list if the problem is ODBC\nspecific.\n\n\"Gowey, Geoffrey\" wrote:\n> \n> Two problems (I've been holding back reporting until I ran into this\n> second):\n> \n> Background: I'm trying to migrate an existing db from MS Sql 2K to pgsql by\n> creating a DTS job.\n> \n> DB system info: NT 4 sp6a, MS SQL2K Regular, Pgsql ODBC driver 7.01.00.06.\n> \n> First problem: The ODBC driver reports a column of type money in pgsql (and\n> in ms sql) as float4 and barfs when I try executing the job because the data\n\nAn example please.\n\n> \n> Second problem: I create a table with a serial column. This table is not in\n> the existing db and no values are being set. 
The job bombs when executed\n> with a \"you don't have permissions to set sequence ..._seq\" (why this is\n> being modified at all I do not know).\n\nIs this an ODBC problem ?\nIf the e.g. inserting user is different from the one who created\nthe table, UPDATE permission on the sequence should be granted\nas well.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 25 Sep 2001 10:42:04 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: ODBC driver flakieness" } ]
[ { "msg_contents": "I see the following in src/backend/optimizer/plan/planner.c\n\n\tif (child && IsA(child, FromExpr))\n\t{\n\n\t\t/*\n\t\t * Yes, so do we want to merge it into parent?\tAlways do\n\t\t * so if child has just one element (since that doesn't\n\t\t * make the parent's list any longer). Otherwise we have\n\t\t * to be careful about the increase in planning time\n\t\t * caused by combining the two join search spaces into\n\t\t * one. Our heuristic is to merge if the merge will\n\t\t * produce a join list no longer than GEQO_RELS/2.\n\t\t * (Perhaps need an additional user parameter?)\n\t\t */\n\nThis is really annoying since:\n\no these code fragments actually controls the optimization efforts for\n subqueries and views, not related to GEQO at all. So using GEQO\n parameters for this kind of purpose seems abuse for me.\n\no Even if geqo = false in postgresql.conf, the code looks into the GEQO\n value. This is really confusing for users.\n\nSo I propose a new GUC parameter called \"subquery_merge_threshold\"\nsolely for this purpose.\n\nComments?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 25 Sep 2001 11:42:28 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Proposal: new GUC paramter" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> This is really annoying since:\n> o these code fragments actually controls the optimization efforts for\n> subqueries and views, not related to GEQO at all. So using GEQO\n> parameters for this kind of purpose seems abuse for me.\n\nBut GEQO_RELS is directly related to the maximum number of FROM-clause\nentries that we want to try to handle by exhaustive search. 
So I think\nit's not completely unreasonable to use it for this additional purpose.\n\nStill, if you want to do the work to create another GUC parameter,\nI won't object.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 01:09:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: new GUC paramter " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > This is really annoying since:\n> > o these code fragments actually controls the optimization efforts for\n> > subqueries and views, not related to GEQO at all. So using GEQO\n> > parameters for this kind of purpose seems abuse for me.\n> \n> But GEQO_RELS is directly related to the maximum number of FROM-clause\n> entries that we want to try to handle by exhaustive search. So I think\n> it's not completely unreasonable to use it for this additional purpose.\n> \n> Still, if you want to do the work to create another GUC parameter,\n> I won't object.\n\nThis is a tough call. The GEQO value is used here to indicate a table\nlist that is very long and needs GEQO processing, so there is some\nrelationship. If we get to a point where the number of tables is too\nlarge, we do have problems.\n\nHowever, the GEQO setting is set to the point where we want GEQO to take\nover from the standard optimizer. If GEQO was to be improved, this\nvalue would be decreased but the point at which you would want to stop\nincreasing the target list probably would be the same.\n\nThe GEQO/2 is clearly just a ballpark estimate. I can see the value as\na separate config parameter, but I can also see it as something that may\nbe confusing to users and <1% of people will want to change it. 
In\nfact, interestingly, even if GEQO is off, GEQO_THRESHHOLD can be changed\nby users wishing to pull more of their subqueries into their target\nlist.\n\nI started thinking of some more complex comparison we could do, such as\ndetermining if:\n\n\t2 * (factorial(rels_in_upper_query) + factorial(rels_in_subquery)) <\n\tfactorial(rels_in_upper_query + factorial(rels_in_subquery)\n\nbut this doesn't seem to generate good decisions.\n\nI have applied the following documentation patch to at least document\nthe current behavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.89\ndiff -c -r1.89 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t2001/10/09 18:46:00\t1.89\n--- doc/src/sgml/runtime.sgml\t2001/10/11 21:08:59\n***************\n*** 719,725 ****\n this many FROM items involved. (Note that a JOIN construct\n \tcounts as only one FROM item.) The default is 11. For simpler\n \tqueries it is usually best to use the\n! deterministic, exhaustive planner.\n </para>\n </listitem>\n </varlistentry>\n--- 719,727 ----\n this many FROM items involved. (Note that a JOIN construct\n \tcounts as only one FROM item.) The default is 11. For simpler\n \tqueries it is usually best to use the\n! deterministic, exhaustive planner. This parameter also controls\n! how hard the optimizer will try to merge subquery\n! <literal>FROM</literal> clauses into the upper query.\n </para>\n </listitem>\n </varlistentry>", "msg_date": "Thu, 11 Oct 2001 17:15:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: new GUC paramter" } ]
[ { "msg_contents": "Has anyone successfully built docs recently? I got:\n\ncd sgml && /bin/tar -c -f ../admin.tar -T HTML.manifest *.gif *.css\n/bin/tar: *.gif: Cannot stat: No such file or directory\n--\nTatsuo Ishii\n", "msg_date": "Tue, 25 Sep 2001 12:56:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "doc build error" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Has anyone successfully built docs recently? I got:\n>\n> cd sgml && /bin/tar -c -f ../admin.tar -T HTML.manifest *.gif *.css\n> /bin/tar: *.gif: Cannot stat: No such file or directory\n\nI'll fix it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 30 Sep 2001 00:55:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: doc build error" } ]
[ { "msg_contents": "Hello all,\n\n> >And the answer is \"no, you can't\". Recreate the table with correct types\n> >and insert the old values into it.\n>\n>You're kidding me, right? *prepares to gargle* MS Sql server can. Surely\n>we can implement this feature or aren't we aiming to go head to head with\n>commercial rdbms'?\n\nThe other day, I spent 3 hours dropping old_1, old_2 and old_n fields in a \ntest DB.\nBut what if your table has triggers or foreign keys?\n\nThere is a very similar problem with DROP FUNCTION / CREATE FUNCTION.\nIf function A is based on function B and you drop function B, function A is \nbroken.\nSame as for views: if view A incorporates function A and you drop function \nA, view A is broken.\n\nOK: what's the point then?\n\nTHE POINT IS THAT WHEN YOU HAVE NESTED OBJECTS, YOU NEED TO DROP THEM ALL \nAND RECREATE THEM ALL.\nSO IF YOU WANT TO MODIFY ONE LINE OF CODE, YOU WILL PROBABLY NEED TO \nREBUILD EVERYTHING.\nNORMAL HUMANS CANNOT DO THIS. MY CODE IS COMPLETE POSTGRESQL SERVER-SIDE.\nIN THESE CONDITIONS, THE CODE CANNOT BE OPTIMIZED ALSO BECAUSE OIDs CHANGE \nALL THE TIME.\n\nThe way we do it in pgAdmin I \nhttp://cvs.social-housing.org/viewcvs.cgi/pgadmin1\nis that we maintain a dependency table based on STRING NAMES and not OIDs.\nWhen altering an object (view, function, trigger) we rebuild all dependent \nobjects.\n\nIs this the way we should proceed with pgAdmin II?\nIs anyone planning a real dependency table based on object STRING NAMES?\n\nWe need some advice:\n1) Client solution: should we add the rebuilding feature to pgAdmin II?\n2) Server solution: should we wait until the ALTER OBJECT project is complete?\n\nPlease advise. Help needed.\nVote for (1) or (2).\n\nRegards,\nJean-Michel POURE\npgAdmin Team\nhttp://pgadmin.postgresql.org\n\n\n\n", "msg_date": "Tue, 25 Sep 2001 08:21:18 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Alter project: client or server side?" } ]
[ { "msg_contents": "Unfortunately, some of the head aches I have been encountering require me to\nbe able to do such oddities (example: my money column type not working with\nthe pgsql odbc driver). It's not just limited to a varchar to int\nconversion that was just an example. There's a bunch of things that I need\nto be able to do (and I would gladly help with the coding if I knew where to\nstart).\n\nGeoff\n\n-----Original Message-----\nFrom: mlw [mailto:markw@mohawksoft.com]\nSent: Monday, September 24, 2001 9:25 PM\nTo: Gowey, Geoffrey\nCc: 'Alex Pilosov'; pgsql-hackers@postgresql.org\nSubject: Re: Changing data types\n\n\n\"Gowey, Geoffrey\" wrote:\n> \n> >This is not for -hackers.\n> \n> How so?\n> \n> >And the answer is \"no, you can't\". Recreate the table with correct types\n> >and insert the old values into it.\n> \n> You're kidding me, right? *prepares to gargle* MS Sql server can. Surely\n> we can implement this feature or aren't we aiming to go head to head with\n> commercial rdbms'?\n\nTo be honest I am very surprised that MS SQL supports that, but then again\nMicrosoft is so used to doing everything so utterly wrong, they have to\ndesign\nall their products with the ability to support fundamental design error\ncorrections on the fly.\n\nI would be surprised if Oracle, DB2, or other \"industrial grade\" databases\ncould do this. Needing to change a column from a varchar to an integer is a\nhuge change and a major error in design.\n\nAdding a column, updating a column with a conversion routine, dropping the\nold\ncolumn, and renaming the new column to the old column name is probably\nsupported, but, geez, I have been dealing with SQL for almost 8 years and I\nhave never needed to do that.\n", "msg_date": "Tue, 25 Sep 2001 09:19:33 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "Re: Changing data types" } ]
[ { "msg_contents": "Hello, \n\nI have a problem with queries containing accents in\nPostgreSQL version 6.5.3 : \n\nVia psql, the following query does not work : \n\nSELECT id_dico, name \n FROM dico_fr \n WHERE name ~* '^bé' \n ORDER BY name; \n\n\nThe tables are coded in UNICODE.\n\nAny idea? \n\nThank you. \n-- \n==============================================\n| FREDERIC MASSOT |\n| http://www.juliana-multimedia.com |\n| mailto:frederic@juliana-multimedia.com |\n===========================Debian=GNU/Linux===\n", "msg_date": "Tue, 25 Sep 2001 15:43:14 +0200", "msg_from": "frederic massot <frederic@juliana-multimedia.com>", "msg_from_op": true, "msg_subject": "Problem with the accents" }, { "msg_contents": "> I have a problem with queries containing accents in\n> PostgreSQL version 6.5.3 : \n> \n> Via psql, the following query does not work : \n> \n> SELECT id_dico, name \n> FROM dico_fr \n> WHERE name ~* '^bé' \n> ORDER BY name; \n> \n> \n> The tables are coded in UNICODE.\n\nAre you sure that you put the query in UTF-8 encoding?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 28 Sep 2001 09:56:39 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem with the accents" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> > I have a problem with queries containing accents in\n> > PostgreSQL version 6.5.3 :\n> >\n> > Via psql, the following query does not work :\n> >\n> > SELECT id_dico, name\n> > FROM dico_fr\n> > WHERE name ~* '^bé'\n> > ORDER BY name;\n> >\n> > The tables are coded in UNICODE.\n> \n> Are you sure that you put the query in UTF-8 encoding?\n\nYes :\n\nessai=> SHOW CLIENT_ENCODING;\nNOTICE: Current client encoding is UNICODE\nSHOW VARIABLE\n\n\nessai=> \\l\ndatname |datdba|encoding|datpath\n------------------+------+--------+------------\ntemplate1 | 31| 5|template1\nessai | 1000| 5|essai\n\n\nEncoding 5 corresponds to UNICODE.\n\nHere is the sequence of 
commands:\n\nessai=> SELECT * FROM dico_fr WHERE nom ~* '^bé';\nessai'>\nessai'> '\nessai-> ;\nid_dico|nom\n-------+---\n(0 rows)\n\n\nI am obliged to add a quote and a semicolon to finish the request. :-(\n\nI have the same problem when the requests are sent via PHP, even when\npreceding them with: \npg_exec($db_conn, \"SET NAMES 'UNICODE'\"); \n\n\nThank you for your assistance.\n-- \n==============================================\n| FREDERIC MASSOT |\n| http://www.juliana-multimedia.com |\n| mailto:frederic@juliana-multimedia.com |\n===========================Debian=GNU/Linux===\n", "msg_date": "Fri, 28 Sep 2001 18:13:36 +0200", "msg_from": "frederic massot <frederic@juliana-multimedia.com>", "msg_from_op": true, "msg_subject": "Re: Problem with the accents" }, { "msg_contents": "> > Are you sure that you put the query in UTF-8 encoding?\n> \n> Yes :\n> \n> essai=> SHOW CLIENT_ENCODING;\n> NOTICE: Current client encoding is UNICODE\n> SHOW VARIABLE\n\nMy point is whether you put the character in UTF-8 or not. Try:\n\n$ echo 'e' > e.txt (e is actually e + accent)\n$ od -x e.txt\n\nand show me the result of \"cat e.txt\"\n--\nTatsuo Ishii\n", "msg_date": "Sat, 29 Sep 2001 09:21:45 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem with the accents" }, { "msg_contents": "You are definitely inputting ISO 8859-1 characters, not UTF-8. That's\nthe source of your problem.\n\n> Tatsuo Ishii wrote:\n> > \n> > > > Are you sure that you put the query in UTF-8 encoding?\n> > >\n> > > Yes :\n> > >\n> > > essai=> SHOW CLIENT_ENCODING;\n> > > NOTICE: Current client encoding is UNICODE\n> > > SHOW VARIABLE\n> > \n> > My point is whether you put the character in UTF-8 or not. 
Try:\n> > \n> > $ echo 'e' > e.txt (e is actually e + accent)\n> > $ od -x e.txt\n> > \n> > and show me the result of \"cat e.txt\"\n> \n> \n> $ echo 'é' > e.txt\n> \n> $ cat e.txt\n> é\n> \n> $ od -x e.txt\n> 0000000 0ae9\n> 0000002\n", "msg_date": "Tue, 02 Oct 2001 14:47:59 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Problem with the accents" }, { "msg_contents": "> Tatsuo Ishii wrote:\n> > \n> > You are definitely inputting ISO 8859-1 characters, not UTF-8. That's\n> > the source of your problem.\n> > \n> \n> Hello, In fact, we create and lodge Web sites and we use\n> PostgreSQL/Apache/PHP. \n> \n> I parameterized the encoding in \"UNICODE\" thinking that it was most\n> flexible.\n> \n> We are French, but we have also Brazilian customers.\n\nSo you need to have an Unicode database but your client apps does not\nhave the capability to input Unicode, right?\n\nThen the only solution would be upgrading to 7.1 and turning on\n--enable-unicode-conversion. 7.1 would do the conversion between ISO\n8859 and Unicode on the server side.\n\n> According to you, by using PostgreSQL 6.5.3 (the passage to version 7.1\n> is planned for the end of the year) can encoding MULE_INTERNAL solve the\n> problem ? \n\nYes. create a MULE_INTERNAL database and use set client_encoding to\nwhatever...\n\n> Thank you. \n> \n> PS: Sorry for my English: -) \n\nMe too:-)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 03 Oct 2001 10:01:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Problem with the accents" } ]
[ { "msg_contents": "I see that\n\n SELECT (time '00:00', interval '1 hour')\n OVERLAPS (time '01:30', interval '1 day') AS \"False\";\n\n(a case in the regression tests) returns false (currently labeled as\nreturning true, but that is another issue ;).\n\nistm that this *should* return true, since times do wrap around the day\nboundary. That is, this should be evaluated as though the second time\nperiod is really a full day long, rather than evaluate to a time period\nof zero length.\n\nSQL99 *seems* to ask for the current behavior, rather than a (to me)\nmore intuitive wrapping behavior. Could someone check their\ninterpretation of the standard to see if there could be support for the\n\"wrapping interpretation\"? Does anyone else think the first (current)\ninterpretation is a bit screwy??\n\n - Thomas\n", "msg_date": "Tue, 25 Sep 2001 15:47:43 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "overlaps operator for time type(s)" } ]
[ { "msg_contents": "In a C application I want to run several \ninsert commands within a chained transaction \n(for faster execution). \nFrom time to time there will be an insert command \ncausing an \nERROR: Cannot insert a duplicate key into a unique index\n\nAs a result, the whole transaction is aborted and all \nthe previous inserts are lost. \nIs there any way to preserve the data \nexcept working with \"autocommit\" ? \nWhat I have in mind particularly is something like \n\"Do not abort on duplicate key error\".\n\nRegards, Christoph \n", "msg_date": "Tue, 25 Sep 2001 16:27:38 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": true, "msg_subject": "Transaction in chained mode " } ]
[ { "msg_contents": "Well, O_DIRECT has finally made it into the Linux kernel. It lets you \nopen a file in such a way that reads and writes don't go to the buffer \ncache but straight to the disk. Accesses must be aligned on\nfilesystem block boundaries.\n\nIs there any case where PG would benefit from this? I can see it\nreducing memory pressure and duplication of data between PG shared\nbuffers and the block buffer cache. OTOH, it does require that writes \nbe batched up for decent performance, since each write has an implicit \nfsync() involved (just as with O_SYNC).\n\nAnyone played with this on systems that already support it (Solaris?)\n\n-Doug\n-- \nIn a world of steel-eyed death, and men who are fighting to be warm,\nCome in, she said, I'll give you shelter from the storm. -Dylan\n", "msg_date": "25 Sep 2001 14:56:59 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": true, "msg_subject": "O_DIRECT and performance" }, { "msg_contents": "> Well, O_DIRECT has finally made it into the Linux kernel. It lets you \n> open a file in such a way that reads and writes don't go to the buffer \n> cache but straight to the disk. Accesses must be aligned on\n> filesystem block boundaries.\n> \n> Is there any case where PG would benefit from this? I can see it\n> reducing memory pressure and duplication of data between PG shared\n> buffers and the block buffer cache. OTOH, it does require that writes \n> be batched up for decent performance, since each write has an implicit \n> fsync() involved (just as with O_SYNC).\n> \n> Anyone played with this on systems that already support it (Solaris?)\n\nI have heard there are many cases where O_DIRECT on Solaris is slower\nfor databases than normal I/O. I think bulk copy was faster but not\nnormal operation. 
Probably not something we are going to get into soon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Sep 2001 20:40:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT and performance" }, { "msg_contents": "Red Hat Linux 7.2, postgresql 7.2b2 configured like this:\n\n./configure --enable-syslog --prefix=/usr --sysconfdir=/etc --with-pam\n--with-openssl --with-unixodbc --with-tcl --with-perl --with-python\n--enable-locale --enable-recode --enable-multibyte --enable-nls\n\n\"make check\" fails on initdb:\n\n============== creating temporary installation ==============\n============== initializing database system ==============\n\npg_regress: initdb failed\nExamine ./log/initdb.log for the reason.\n\n\nteg@halden postgresql-7.2b2]$ cat src/test/regress/log/initdb.log \nRunning with noclean mode on. 
Mistakes will not be cleaned up.\n/home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: relocation error: /home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: undefined symbol: pg_valid_server_encoding\ninitdb: pg_encoding failed\n\nPerhaps you did not configure PostgreSQL for multibyte support or\nthe program was not successfully installed.\n[teg@halden postgresql-7.2b2]$\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "16 Nov 2001 12:17:06 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsr�d) writes:\n\n> Red Hat Linux 7.2, postgresql 7.2b2 configured like this:\n> \n> ./configure --enable-syslog --prefix=/usr --sysconfdir=/etc --with-pam\n> --with-openssl --with-unixodbc --with-tcl --with-perl --with-python\n> --enable-locale --enable-recode --enable-multibyte --enable-nls\n\nWhen just using \"configure\" it works - apart from a false failure in\nthe regression tests (geometry), it seems to work (a couple cases of\n\"-0.121320343559642\" vs. 
\"-0.121320343559643\" - a better way than raw\ndiff for comparing would be useful).\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "16 Nov 2001 12:47:27 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> /home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: relocation error: /home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: undefined symbol: pg_valid_server_encoding\n> initdb: pg_encoding failed\n\npg_encoding relies on libpq to supply the pg_valid_server_encoding()\nsubroutine, but that subroutine is only compiled into libpq in a\nMULTIBYTE build. I speculate that your executable was picking up\na non-MULTIBYTE libpq shared library from someplace. Check ldconfig\nand all that stuff...\n\nFWIW, both multibyte and non-multibyte builds are working OK for me\nin current sources.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Nov 2001 14:26:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2 " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > /home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: relocation error: /home/devel/teg/postgresql-7.2b2/src/test/regress/./tmp_check/install//usr/bin/pg_encoding: undefined symbol: pg_valid_server_encoding\n> > initdb: pg_encoding failed\n> \n> pg_encoding relies on libpq to supply the pg_valid_server_encoding()\n> subroutine, but that subroutine is only compiled into libpq in a\n> MULTIBYTE build. I speculate that your executable was picking up\n> a non-MULTIBYTE libpq shared library from someplace. 
Check ldconfig\n> and all that stuff...\n\nI have an existing installation of 7.1 on the system, that's why I did\n\"make check\" in the build directory.\n\n\"--prefix=/usr\" seems to be the \"culprit\" - without it, it regression\ntests run just fine. \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "16 Nov 2001 16:37:53 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> I speculate that your executable was picking up\n>> a non-MULTIBYTE libpq shared library from someplace. Check ldconfig\n>> and all that stuff...\n\n> I have an existing installation of 7.1 on the system, that's why I did\n> \"make check\" in the build directory.\n\n> \"--prefix=/usr\" seems to be the \"culprit\" - without it, it regression\n> tests run just fine. \n\nThe pg_regress script sets LD_LIBRARY_PATH to try to cause libpq and\nthe other shlibs to be picked up from the temp installation tree.\nPerhaps this is wrong or insufficient on your platform. It certainly\nsounds like the dynamic linker is choosing the installed libpq over\nthe one that we want it to use. Any thoughts on fixing that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Nov 2001 16:46:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2 " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >> I speculate that your executable was picking up\n> >> a non-MULTIBYTE libpq shared library from someplace. 
Check ldconfig\n> >> and all that stuff...\n> \n> > I have an existing installation of 7.1 on the system, that's why I did\n> > \"make check\" in the build directory.\n> \n> > \"--prefix=/usr\" seems to be the \"culprit\" - without it, it regression\n> > tests run just fine. \n> \n> The pg_regress script sets LD_LIBRARY_PATH to try to cause libpq and\n> the other shlibs to be picked up from the temp installation tree.\n> Perhaps this is wrong or insufficient on your platform. It certainly\n> sounds like the dynamic linker is choosing the installed libpq over\n> the one that we want it to use. Any thoughts on fixing that?\n\nSince it works when prefix isn't /usr, I'd guess that the build\nprocess sets rpath. This takes precedence over LD_LIBRARY_PATH.\n\nFix? Don't use rpath - it's evil, and should be avoided anyway.\n\nld(1) contains some info on how libraries are resolved.\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "16 Nov 2001 16:50:49 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Since it works when prefix isn't /usr, I'd guess that the build\n> process sets rpath. This takes precedence over LD_LIBRARY_PATH.\n\n> Fix? Don't use rpath - it's evil, and should be avoided anyway.\n\nPeter, any thoughts on that? 
It's not clear to me that removing\nrpath settings would be an improvement overall, even if it makes\n\"make check\" more reliable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Nov 2001 17:39:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2 " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [011116 16:56]:\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > Since it works when prefix isn't /usr, I'd guess that the build\n> > process sets rpath. This takes precedence over LD_LIBRARY_PATH.\n> \n> > Fix? Don't use rpath - it's evil, and should be avoided anyway.\n> \n> Peter, any thoughts on that? It's not clear to me that removing\n> rpath settings would be an improvement overall, even if it makes\n> \"make check\" more reliable.\nMy general take is rpath is a *GOOD* thing, as it prevents USERS from\nnot being able to find REQUIRED shared libs. \n\nI vote for it staying.\n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 16 Nov 2001 16:58:19 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "Trond Eivind Glomsr�d writes:\n\n> Fix? 
Don't use rpath - it's evil, and should be avoided anyway.\n\nconfigure --disable-rpath is your friend.\n\nOverall, rpath gives us much more benefit than the occasional trouble it\ncreates.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 18 Nov 2001 17:42:37 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "On Sun, 18 Nov 2001, Peter Eisentraut wrote:\n\n> Trond Eivind Glomsr�d writes:\n> \n> > Fix? Don't use rpath - it's evil, and should be avoided anyway.\n> \n> configure --disable-rpath is your friend.\n\nDidn't know about that one - it will certainly be put to use :)\n\n> Overall, rpath gives us much more benefit than the occasional trouble it\n> creates.\n\nWhen you install (not configure for, just install) into a separate tree \n(for easier packaging), it's a hole which can be exploited - some packages \nwill rpath into /var/tmp/<foo>, for instance. Hackers can then put their \nown library there. One big offender here is perl's automatic module \ncreation script which will change the rpaths from what you told it when \nyou built it to what you do when you install it.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n\n", "msg_date": "Sun, 18 Nov 2001 11:45:24 -0500 (EST)", "msg_from": "=?ISO-8859-1?Q?Trond_Eivind_Glomsr=F8d?= <teg@redhat.com>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "Trond Eivind Glomsr�d writes:\n\n> When you install (not configure for, just install) into a separate tree\n> (for easier packaging), it's a hole which can be exploited - some packages\n> will rpath into /var/tmp/<foo>, for instance. Hackers can then put their\n> own library there.\n\n\"Some packages\"... 
;-)\n\n> One big offender here is perl's automatic module\n> creation script which will change the rpaths from what you told it when\n> you built it to what you do when you install it.\n\nThis should be fixed now, although the perl module will actually not obey\nthe --disable-rpath switch. Can't have everything, I guess...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 18 Nov 2001 21:40:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n\n> * Tom Lane <tgl@sss.pgh.pa.us> [011116 16:56]:\n> > teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > > Since it works when prefix isn't /usr, I'd guess that the build\n> > > process sets rpath. This takes precedence over LD_LIBRARY_PATH.\n> > \n> > > Fix? Don't use rpath - it's evil, and should be avoided anyway.\n> > \n> > Peter, any thoughts on that? It's not clear to me that removing\n> > rpath settings would be an improvement overall, even if it makes\n> > \"make check\" more reliable.\n> My general take is rpath is a *GOOD* thing, as it prevents USERS from\n> not being able to find REQUIRED shared libs. \n\nLib-directories should be listed in /etc/ld.so.conf, and no rpath used\n- at least on linux.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "19 Nov 2001 11:47:45 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" }, { "msg_contents": "* Trond Eivind Glomsr?d <teg@redhat.com> [011119 10:47]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> \n> > * Tom Lane <tgl@sss.pgh.pa.us> [011116 16:56]:\n> > > teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > > > Since it works when prefix isn't /usr, I'd guess that the build\n> > > > process sets rpath. 
This takes precedence over LD_LIBRARY_PATH.\n> > > \n> > > > Fix? Don't use rpath - it's evil, and should be avoided anyway.\n> > > \n> > > Peter, any thoughts on that? It's not clear to me that removing\n> > > rpath settings would be an improvement overall, even if it makes\n> > > \"make check\" more reliable.\n> > My general take is rpath is a *GOOD* thing, as it prevents USERS from\n> > not being able to find REQUIRED shared libs. \n> \n> Lib-directories should be listed in /etc/ld.so.conf, and no rpath used\n> - at least on linux.\nno such animal on OpenUNIX 8 (In native \"unix\" mode).\n\n\n> \n> -- \n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Nov 2001 10:53:37 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2b2 \"make check\" failure on Red Hat Linux 7.2" } ]
[ { "msg_contents": "Hello all,\n\n> >And the answer is \"no, you can't\". Recreate the table with correct types\n> >and insert the old values into it.\n>\n>You're kidding me, right? *prepares to gargle* MS Sql server can. Surely\n>we can implement this feature or aren't we aiming to go head to head with\n>commercial rdbms'?\n\nThe other day, I spent 3 hours dropping old_1, old_2 and old_n fields in a \ntest DB.\nBut what if your table if it has triggers or foreign keys.\n\nThere is a very similar problem with DROP FUNCTION / CREATE FUNCTION.\nIf function A is based on function B and you drop function B, function A is \nbroken.\nSame as for views: if view A incorporates function A and you drop function \nA, view A is broken.\n\nOK: what's the point then?\n\nTHE POINT IS THAT WHEN YOU HAVE NESTED OBJECTS, YOU NEED TO DROP THEM ALL \nAND RECREATE THEM ALL.\nSO IF YOU WANT TO MODIFY ONE LINE OF CODE, YOU WILL PROBABLY NEED TO \nREBUILD ANYTHING.\nNORMAL HUMANS CANNOT DO THIS. MY CODE IS COMPLETE POSTGRESQL SERVER-SIDE.\nIN THESE CONDITIONS, THE CODE CANNOT BE OPTIMIZED ALSO BECAUSE OIDs CHANGE \nALL THE TIME.\n\nThe way we do it in pgAdmin I \nhttp://cvs.social-housing.org/viewcvs.cgi/pgadmin1\nis that we maintain a dependency table based on STRING NAMES and not OIDs.\nWhen altering an object (view, function, trigger) we rebuild all dependent \nobjects.\n\nIs this the way we should proceed with pgAdmin II?\nIs anyone planning a real dependency table based on object STRING NAMES?\n\nWe need some advice:\n1) Client solution: should we add the rebuilding feature to pgAdmin II?\n2) Server solution: should we wait until the ALTER OBJECT project is complete?\n\nPlease advice. Help needed.\nVote for (1) or (2).\n\nRegards,\nJean-Michel POURE\npgAdmin Team\nhttp://pgadmin.postgresql.org\n\n\n\n", "msg_date": "Tue, 25 Sep 2001 21:04:40 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Alter project: client or server side?" 
}, { "msg_contents": "Jean-Michel POURE wrote:\n> \n> Is this the way we should proceed with pgAdmin II?\n> Is anyone planning a real dependency table based on object STRING NAMES?\n> \n> We need some advice:\n> 1) Client solution: should we add the rebuilding feature to pgAdmin II?\n> 2) Server solution: should we wait until the ALTER OBJECT project is complete?\n> \n> Please advice. Help needed.\n> Vote for (1) or (2).\n\nI vote for (2).\nIt's the server's responsibility to ensure the db's consistency.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 26 Sep 2001 11:12:50 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Alter project: client or server side?" } ]
[ { "msg_contents": "The O_DIRECT flag has been added in FreeBSD 4.4 (386 & Alpha) also. From\nthe release notes:\n\nKernel Changes\n\nThe O_DIRECT flag has been added to open(2) and fcntl(2). Specifying this\nflag for open files will attempt to minimize the cache effects of reading\nand writing.\n\n\nSee: http://www.freebsd.org/releases/4.4R/relnotes-i386.html\n\n", "msg_date": "Wed, 26 Sep 2001 10:30:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "O_DIRECT" }, { "msg_contents": "> The O_DIRECT flag has been added in FreeBSD 4.4 (386 & Alpha) also. From\n> the release notes:\n> \n> Kernel Changes\n> \n> The O_DIRECT flag has been added to open(2) and fcntl(2). Specifying this\n> flag for open files will attempt to minimize the cache effects of reading\n> and writing.\n\nI wonder if using this for WAL would be good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Sep 2001 21:13:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT" } ]
[ { "msg_contents": "Here is a revised version of pg_stattuple, which shows how many tuples\nare \"dead\" etc. Per Tom's suggestion, a statistic of free/resuable\nspace is now printed.\n\ntest=# select pgstattuple('accounts');\nNOTICE: physical length: 39.06MB live tuples: 100000 (12.59MB, 32.23%) dead tuples: 200000 (25.18MB, 64.45%) free/reusable space: 0.04MB (0.10%) overhead: 3.22%\n pgstattuple \n-------------\n 64.453125\n\nWhat I'm not sure is:\n\no Should I place any kind of lock after reading buffer?\n\no Should I use similar algorithm to the one used in vacuum to determin\n whether the tuple is \"dead\" or not?\n\nSuggestions?\n--\nTatsuo Ishii\n\n-------------------------------------------------------------------------\n/*\n * $Header: /home/t-ishii/repository/pgstattuple/pgstattuple.c,v 1.2 2001/08/30 06:21:48 t-ishii Exp $\n *\n * Copyright (c) 2001 Tatsuo Ishii\n *\n * Permission to use, copy, modify, and distribute this software and\n * its documentation for any purpose, without fee, and without a\n * written agreement is hereby granted, provided that the above\n * copyright notice and this paragraph and the following two\n * paragraphs appear in all copies.\n *\n * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,\n * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED\n * OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT\n * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n * A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS ON AN \"AS\n * IS\" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,\n * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n */\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"access/heapam.h\"\n#include \"access/transam.h\"\n\nPG_FUNCTION_INFO_V1(pgstattuple);\n\nextern Datum pgstattuple(PG_FUNCTION_ARGS);\n\n/* ----------\n * pgstattuple:\n * returns the percentage of dead tuples\n *\n * C FUNCTION definition\n * pgstattuple(NAME) returns FLOAT8\n * ----------\n */\nDatum\npgstattuple(PG_FUNCTION_ARGS)\n{\n Name\tp = PG_GETARG_NAME(0);\n\n Relation\trel;\n HeapScanDesc\tscan;\n HeapTuple\ttuple;\n BlockNumber nblocks;\n BlockNumber block = InvalidBlockNumber;\n double\ttable_len;\n uint64\ttuple_len = 0;\n uint64\tdead_tuple_len = 0;\n uint32\ttuple_count = 0;\n uint32\tdead_tuple_count = 0;\n double\ttuple_percent;\n double\tdead_tuple_percent;\n\n Buffer\tbuffer = InvalidBuffer;\n uint64\tfree_space = 0;\t/* free/reusable space in bytes */\n double\tfree_percent;\t\t/* free/reusable space in % */\n\n rel = heap_openr(NameStr(*p), NoLock);\n nblocks = RelationGetNumberOfBlocks(rel);\n scan = heap_beginscan(rel, false, SnapshotAny, 0, NULL);\n\n while ((tuple = heap_getnext(scan,0)))\n {\n\tif (HeapTupleSatisfiesNow(tuple->t_data))\n\t{\n\t tuple_len += tuple->t_len;\n\t tuple_count++;\n\t}\n\telse\n\t{\n\t dead_tuple_len += tuple->t_len;\n\t dead_tuple_count++;\n\t}\n\n\tif (!BlockNumberIsValid(block) ||\n\t block != BlockIdGetBlockNumber(&tuple->t_self.ip_blkid))\n\t{\n\t block = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);\n\t buffer = ReadBuffer(rel, block);\n\t free_space += PageGetFreeSpace((Page)BufferGetPage(buffer));\n\t ReleaseBuffer(buffer);\n\t}\n }\n heap_endscan(scan);\n heap_close(rel, NoLock);\n\n table_len = (double)nblocks*BLCKSZ;\n\n if (nblocks == 0)\n {\n\ttuple_percent = 0.0;\n\tdead_tuple_percent = 0.0;\n\tfree_percent = 0.0;\n }\n else\n {\n\ttuple_percent = 
(double)tuple_len*100.0/table_len;\n\tdead_tuple_percent = (double)dead_tuple_len*100.0/table_len;\n\tfree_percent = (double)free_space*100.0/table_len;\n }\n\n elog(NOTICE,\"physical length: %.2fMB live tuples: %u (%.2fMB, %.2f%%) dead tuples: %u (%.2fMB, %.2f%%) free/reusable space: %.2fMB (%.2f%%) overhead: %.2f%%\",\n\n\t table_len/1024/1024,\t /* phsical length in MB */\n\n\t tuple_count,\t/* number of live tuples */\n\t (double)tuple_len/1024/1024,\t/* live tuples in MB */\n\t tuple_percent,\t/* live tuples in % */\n\n\t dead_tuple_count,\t/* number of dead tuples */\n\t (double)dead_tuple_len/1024/1024,\t/* dead tuples in MB */\n\t dead_tuple_percent,\t/* dead tuples in % */\n\n\t (double)free_space/1024/1024,\t/* free/available space in MB */\n\n\t free_percent,\t/* free/available space in % */\n\n\t /* overhead in % */\n\t (nblocks == 0)?0.0: 100.0\n\t - tuple_percent\n\t - dead_tuple_percent\n\t- free_percent);\n\n PG_RETURN_FLOAT8(dead_tuple_percent);\n}\n", "msg_date": "Wed, 26 Sep 2001 13:39:38 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "tuple statistics function" }, { "msg_contents": "Tatsuo Ishii wrote:\n> Here is a revised version of pg_stattuple, which shows how many tuples\n> are \"dead\" etc. Per Tom's suggestion, a statistic of free/resuable\n> space is now printed.\n>\n> test=# select pgstattuple('accounts');\n> NOTICE: physical length: 39.06MB live tuples: 100000 (12.59MB, 32.23%) dead tuples: 200000 (25.18MB, 64.45%) free/reusable space: 0.04MB (0.10%) overhead: 3.22%\n> pgstattuple\n> -------------\n> 64.453125\n>\n> What I'm not sure is:\n>\n> o Should I place any kind of lock after reading buffer?\n>\n> o Should I use similar algorithm to the one used in vacuum to determin\n> whether the tuple is \"dead\" or not?\n>\n> Suggestions?\n\n A little unrelated to your question, but what about returning\n an array of all the values and adding another argument to\n suppress the NOTICE? 
That would IMHO make the function very\n useful in administrative tools.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 26 Sep 2001 08:10:08 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: tuple statistics function" } ]
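The figures in the NOTICE line quoted above are straight ratios against the physical table length, matching the arithmetic in the C function (bytes-of-category * 100 / table_len). As a quick cross-check, the numbers from the `accounts` example can be recomputed. This is a standalone Python sketch, not part of the patch; small tolerances are needed because the MB figures in the NOTICE are themselves rounded:

```python
# Cross-check of the percentages reported in the example NOTICE above.
# Mirrors pgstattuple(): each percentage is category bytes * 100 / table length.
table_mb = 39.06   # physical length reported for "accounts"
live_mb = 12.59    # live tuples
dead_mb = 25.18    # dead tuples
free_mb = 0.04     # free/reusable space

live_pct = live_mb * 100.0 / table_mb
dead_pct = dead_mb * 100.0 / table_mb
free_pct = free_mb * 100.0 / table_mb
# Overhead is whatever is left over, as in the elog() call.
overhead_pct = 100.0 - live_pct - dead_pct - free_pct

print(f"live {live_pct:.2f}%  dead {dead_pct:.2f}%  "
      f"free {free_pct:.2f}%  overhead {overhead_pct:.2f}%")
```

The recomputed values land within a few hundredths of the reported 32.23%, 64.45%, 0.10%, and 3.22%.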
[ { "msg_contents": "In 7.1.3, you can create a column as \"time without time zone\", but it\ndoesn't seem to show as such in psql...\n\neg:\n\ntest=# alter table chat_meetings add column timeofday time without time\nzone;\n\nALTER\ntest=# \\d chat_meetings\n Table \"chat_meetings\"\n Attribute | Type | Modifier\n\n------------+------------------------+--------------------------------------\n------\n--------------------\n meeting_id | integer | not null default\nnextval('chat_meetings_mee\nting_id_seq'::text)\n host_id | integer | not null\n title | character varying(255) | not null\n abstract | text | not null\n time | integer | not null\n dayofweek | smallint |\n timeofday | time |\nIndices: chat_meetings_pkey,\n host_id_chat_meetings_key\n\nThere's no easy way of seeing what exact type the column is it seems.\n\nChris\n\n", "msg_date": "Wed, 26 Sep 2001 16:57:35 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "time without time zone" } ]
[ { "msg_contents": "Gah. Ignore my previous email - I read the docs further and it turns out\nthat \"time\" and \"time without time zone\" are synonyms.\n\nChris\n\n", "msg_date": "Wed, 26 Sep 2001 16:58:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "time without time zone" } ]
[ { "msg_contents": "Greetings.\nI don't know if this is the right place to ask this, sorry if this don't\nbelong here.\nI begun this week working in a new firm. They use linux and PostgreSQL as\nthe database for the Intranet site and for the management of CV's and\nKnowledge Management (they have an on-line system to manage and search\ninformation about workers of the firm and projects they have).\n\nThey now want to convert from the current linux/postgresql platform to\nWindows and SQLServer. I've never worked before with linux or PostgreSQL so\ni know nothing about the capabilities of this combo. \n\nMy question are:\n\n- the PostgreSQL database has a lot of information. Is it possible to\nmigrate the data of the PostgreSQL to SQLServer in Windows? What do i need\nto do?\n\n- is it possible to migrate the tables relationships (the relational schema)\nof the DB to SQLServer or do i have to build the DB from scratch?\n\nThanks for reading,\nArmindo Dias\n\n", "msg_date": "Wed, 26 Sep 2001 12:04:27 +0100", "msg_from": "armindo.dias@dhvmc.pt", "msg_from_op": true, "msg_subject": "Converting from pgsql to sqlserver?" }, { "msg_contents": "armindo.dias@dhvmc.pt wrote:\n> \n> Greetings.\n> I don't know if this is the right place to ask this, sorry if this don't\n> belong here.\n> I begun this week working in a new firm. They use linux and PostgreSQL as\n> the database for the Intranet site and for the management of CV's and\n> Knowledge Management (they have an on-line system to manage and search\n> information about workers of the firm and projects they have).\n> \n> They now want to convert from the current linux/postgresql platform to\n> Windows and SQLServer. 
I've never worked before with linux or PostgreSQL so\n> i know nothing about the capabilities of this combo.\n\nI have no idea under what circumstance one would wish to go from PostgreSQL to\nMicrosoftSQL, it just seems like a mistake, but hey, you'll be back when you\nrealize how much it costs and how much you lose. \n\n> \n> My question are:\n> \n> - the PostgreSQL database has a lot of information. Is it possible to\n> migrate the data of the PostgreSQL to SQLServer in Windows? What do i need\n> to do?\n\nPostgreSQL has a lot of export utilities. I'm not sure what MSSQL currently\nsupports. Find a match, and do the export/import.\n\nYou may be able to use \"pg_dump\" to dump out the database as a series of SQL\ninserts. It will be slow, but it should be the more portable way to do it. Just\nbe carfull of date formats. You will probably need a few tries.\n\n> \n> - is it possible to migrate the tables relationships (the relational schema)\n> of the DB to SQLServer or do i have to build the DB from scratch?\n\nYou will need to dump out the schema for the postgres database by executing\n\"pg_dump -s database\" and that will dump out a SQL script which will create the\nschema. no doubt, you will have to edit it to make it work with MSSQL but, it\nis a good start.\n\nMay I ask why you are going from PostgreSQL to MSSQL?\n", "msg_date": "Wed, 26 Sep 2001 07:50:25 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Converting from pgsql to sqlserver?" }, { "msg_contents": "Hi Armindo,\n\nIan Harding has written a guide for converting from MS SQL Server to\nPostgreSQL. 
I know this is the opposite of what you want, but it might\nbe useful as it highlights some of the areas of difference between these\nproducts :\n\nhttp://techdocs.postgresql.org/techdocs/sqlserver2pgsql.php\n\nHope that's useful.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\narmindo.dias@dhvmc.pt wrote:\n> \n> Greetings.\n> I don�t know if this is the right place to ask this, sorry if this don't\n> belong here.\n> I begun this week working in a new firm. They use linux and PostgreSQL as\n> the database for the Intranet site and for the management of CV's and\n> Knowledge Management (they have an on-line system to manage and search\n> information about workers of the firm and projects they have).\n> \n> They now want to convert from the current linux/postgresql platform to\n> Windows and SQLServer. I've never worked before with linux or PostgreSQL so\n> i know nothing about the capabilities of this combo.\n> \n> My question are:\n> \n> - the PostgreSQL database has a lot of information. Is it possible to\n> migrate the data of the PostgreSQL to SQLServer in Windows? What do i need\n> to do?\n> \n> - is it possible to migrate the tables relationships (the relational schema)\n> of the DB to SQLServer or do i have to build the DB from scratch?\n> \n> Thanks for reading,\n> Armindo Dias\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 27 Sep 2001 00:34:07 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Converting from pgsql to sqlserver?" 
}, { "msg_contents": "> armindo.dias@dhvmc.pt wrote:\n> >\n> > Greetings.\n> > I don't know if this is the right place to ask this, sorry if this don't\n> > belong here.\n> > I begun this week working in a new firm. They use linux and PostgreSQL as\n> > the database for the Intranet site and for the management of CV's and\n> > Knowledge Management (they have an on-line system to manage and search\n> > information about workers of the firm and projects they have).\n> >\n> > They now want to convert from the current linux/postgresql platform to\n> > Windows and SQLServer.\n\nIf you stay away from IIS (and Internet as a whole) you should be quite \nsafe on Windows platforms ;)\n\nsee: http://news.cnet.com/news/0-1003-200-7294516.html?tag=mainstry\n\n> > I've never worked before with linux or PostgreSQL so\n> > i know nothing about the capabilities of this combo.\n\nShould be capable enough ;)\n\nPG dump produces mostly ANSI-SQL compatible script that will recreate\nthe \ndatabase. (Older versions dump some stuff, like foreign keys as system \ntable modifications, so you have to modify at least that ).\n\nYou may also have to change type names, as pg_dump uses often original \npostgreSQL type names and not their ANSI equivalents. (It is meant to be \nprimarily a backup tool, not porting tool)\n\nThe PL/PGSQL and Transact-SQL (or whatever MS calls it) are quite \ndifferent though, so any stored procs have to be rewritten.\n\n> > My question are:\n> >\n> > - the PostgreSQL database has a lot of information. Is it possible to\n> > migrate the data of the PostgreSQL to SQLServer in Windows? 
What do i need\n> > to do?\n\npg_dump -D -a dbname\n\n> >\n> > - is it possible to migrate the tables relationships (the relational schema)\n> > of the DB to SQLServer or do i have to build the DB from scratch?\n\npg_dump -s dbname\n\n-------------------\nHannu\n", "msg_date": "Thu, 27 Sep 2001 14:48:47 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Converting from pgsql to sqlserver?" } ]
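Hannu's point about type names is often the fiddliest part of such a move: `pg_dump -s` emits PostgreSQL's native type names, which the target engine may not accept. The kind of post-processing involved can be sketched as follows. The type map here is illustrative only and deliberately incomplete, a guess at plausible SQL Server counterparts rather than a verified correspondence; a real migration still needs the hand editing described above (foreign keys, stored procedures, date formats):

```python
import re

# Illustrative sketch of rewriting PostgreSQL type names in a pg_dump -s
# script. The mapping is an ASSUMPTION for demonstration, not a verified
# PostgreSQL-to-SQL-Server correspondence.
TYPE_MAP = {
    "int4": "int",
    "int8": "bigint",
    "float8": "float",
    "bool": "bit",
    "text": "varchar(8000)",              # assumed varchar cap on the target
    "timestamp with time zone": "datetime",
}

# Longest names first so "timestamp with time zone" wins over shorter keys;
# \b keeps "text" from matching inside words like "context".
_PATTERN = re.compile(
    r"\b(?:"
    + "|".join(re.escape(t) for t in sorted(TYPE_MAP, key=len, reverse=True))
    + r")\b"
)

def translate_types(schema_sql):
    """Rewrite bare PostgreSQL type names in a dumped schema script."""
    return _PATTERN.sub(lambda m: TYPE_MAP[m.group(0)], schema_sql)

ddl = 'CREATE TABLE "chat" (id int4 NOT NULL, body text, at timestamp with time zone);'
print(translate_types(ddl))
```

Run over the output of `pg_dump -s dbname`, this handles only the mechanical renames; defaults, sequences, and procedural code still need review by hand.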
[ { "msg_contents": "Hi all,\n By mapping the WAL files by each backend in to its address\nspace using \"mmap\" system call , there will be big\n improvements in performance from the following point of view:\n 1. Each backend directly writes in to the address\nspace which is obtained by maping the WAL files.\n this saves the write system call at the end of\nevery transaction which transfres 8k of\n data from user space to kernel.\n 2. since every transaction does not modify all the 8k\ncontent of WAL page , so by issuing the\n fsync , the kernel only transfers only the\nkernel pages which are modified , which is 4k for\n linux so fsync time is saved by twice.\nAny comments ?.\n\n\nRegards\njana\n", "msg_date": "Wed, 26 Sep 2001 19:23:27 +0800", "msg_from": "Janardhana Reddy <jana-reddy@mediaring.com.sg>", "msg_from_op": true, "msg_subject": "PERFORMANCE IMPROVEMENT by mapping WAL FILES " }, { "msg_contents": "> Hi all,\n> By mapping the WAL files by each backend in to its address\n> space using \"mmap\" system call , there will be big\n> improvements in performance from the following point of view:\n> 1. Each backend directly writes in to the address\n> space which is obtained by maping the WAL files.\n> this saves the write system call at the end of\n> every transaction which transfres 8k of\n> data from user space to kernel.\n> 2. since every transaction does not modify all the 8k\n> content of WAL page , so by issuing the\n> fsync , the kernel only transfers only the\n> kernel pages which are modified , which is 4k for\n> linux so fsync time is saved by twice.\n> Any comments ?.\n\nThis is interesting. We are concerned about using mmap() for all I/O\nbecause we could eat up quite a bit of address space for big tables, but\nWAL seems like an ideal use for mmap().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 26 Sep 2001 10:54:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES" }, { "msg_contents": "Janardhana Reddy <jana-reddy@mediaring.com.sg> writes:\n> By mapping the WAL files by each backend in to its address\n> space using \"mmap\" system call ,\n\nThere are a lot of problems with trying to use mmap for Postgres. One\nis portability: not all platforms have mmap, so we'd still have to\nsupport the non-mmap case; and it's not at all clear that fsync/msync\nsemantics are consistent across platforms, either. A bigger objection\nis that mmap'ing a file in one backend does not cause it to become\navailable to other backends, thus the entire concept of shared buffers\nbreaks down.\n\nIf you think you can make it work, feel free to try it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 11:15:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES " } ]
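The mechanism Jana proposes can be illustrated with an ordinary memory-mapped scratch file: a record is written directly into the mapped region (no write() copy from user space to kernel), and only the touched page is flushed, which is the msync-granularity saving described in point 2. This is a Python illustration of the mmap/flush pattern on a throwaway file, not PostgreSQL code:

```python
import mmap
import os
import tempfile

# Stand-in for a WAL segment: a 4-page scratch file mapped into the
# process address space.
PAGE = mmap.PAGESIZE
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4 * PAGE)

with mmap.mmap(fd, 4 * PAGE) as wal:
    record = b"xact-commit-record"
    # Write the "log record" straight into the mapping: no write() syscall
    # copying 8k from user space to kernel.
    wal[100:100 + len(record)] = record
    # msync()-style flush of only the page actually dirtied.
    wal.flush(0, PAGE)

# The data is durable in the underlying file.
with open(path, "rb") as f:
    data = f.read()
os.close(fd)
os.unlink(path)
print(record in data)  # True
```

Tom's portability and shared-buffer objections above still apply to the real thing; this only shows the write-through-mapping idea in isolation.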
[ { "msg_contents": "Hi, \n\nI have a table that contains almost 8 milion rows. The primary key is a \nsequence, so the index should have a good distribution. Why does the \noptimizer refuse to use the index for getting the maximum value?\n(even after a vacuum analyze of the table)\n\nradius=# explain select max(radiuspk) from radius ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=257484.70..257484.70 rows=1 width=8)\n -> Seq Scan on radius (cost=0.00..237616.76 rows=7947176 width=8)\n\n\nTable and key info:\n\nDid not find any relation named \"radius_pk\".\nradius=# \\d radius\n Table \"radius\"\n Attribute | Type | Modifier \n---------------------+--------------------------+---------------------------\n sessionid | character varying(30) | not null\n username | character varying(30) | not null\n nas_ip | character varying(50) | not null\n logfileid | integer |\n login_ip_host | character varying(50) | not null\n framed_ip_address | character varying(50) |\n file_timestamp | timestamp with time zone | not null\n corrected_timestamp | timestamp with time zone | not null\n acct_status_type | smallint | not null\n bytesin | bigint |\n bytesout | bigint |\n handled | boolean | not null default 'f'\n sessionhandled | boolean | not null default 'f'\n radiuspk | bigint | not null default nextval\n('radiuspk_seq'::text)\nIndices: pk_radius,\n radius_us\n\nradius=# \\d pk_radius\n Index \"pk_radius\"\n Attribute | Type\n-----------+--------\n radiuspk | bigint\nunique btree (primary key)\n\n\n", "msg_date": "Wed, 26 Sep 2001 14:53:14 +0200 (CEST)", "msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>", "msg_from_op": true, "msg_subject": "optimizer question" }, { "msg_contents": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> I have a table that contains almost 8 milion rows. The primary key is a \n> sequence, so the index should have a good distribution. 
Why does the \n> optimizer refuse to use the index for getting the maximum value?\n\nThe optimizer has no idea that max() has anything to do with indexes.\nYou could try something like\n\n\tselect * from tab order by foo desc limit 1;\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 11:19:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimizer question " }, { "msg_contents": "> \"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> > I have a table that contains almost 8 milion rows. The primary key is a \n> > sequence, so the index should have a good distribution. Why does the \n> > optimizer refuse to use the index for getting the maximum value?\n> \n> The optimizer has no idea that max() has anything to do with indexes.\n> You could try something like\n> \n> \tselect * from tab order by foo desc limit 1;\n\nCan we consider doing this optimization automatically?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 17:17:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > \"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> > > I have a table that contains almost 8 milion rows. The primary key is a\n> > > sequence, so the index should have a good distribution. 
Why does the\n> > > optimizer refuse to use the index for getting the maximum value?\n> >\n> > The optimizer has no idea that max() has anything to do with indexes.\n> > You could try something like\n> >\n> > select * from tab order by foo desc limit 1;\n> \n> Can we consider doing this optimization automatically?\n\nOnly if we assume that people do not define their own max() that does\nsomething \nthat can't be calculated using the above formula like calculating AVG().\n\n---------------\nHannu\n", "msg_date": "Fri, 12 Oct 2001 08:28:20 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > \"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> > > I have a table that contains almost 8 milion rows. The primary key is a\n> > > sequence, so the index should have a good distribution. Why does the\n> > > optimizer refuse to use the index for getting the maximum value?\n> >\n> > The optimizer has no idea that max() has anything to do with indexes.\n> > You could try something like\n> >\n> > select * from tab order by foo desc limit 1;\n> \n> Can we consider doing this optimization automatically?\n\nThat would be real cool. I don't know of any database that does that. (I do\nknow that our Oracle 8i does not)\n\n\nOn a side note (can of worms?) is there the notion of a \"rule based optimizer\"\nvs the cost based optimizer in Postgres?\n", "msg_date": "Fri, 12 Oct 2001 06:24:45 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > \"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> > > > > I have a table that contains almost 8 milion rows. The primary key is a\n> > > > > sequence, so the index should have a good distribution. 
Why does the\n> > > > > optimizer refuse to use the index for getting the maximum value?\n> > > >\n> > > > The optimizer has no idea that max() has anything to do with indexes.\n> > > > You could try something like\n> > > >\n> > > > select * from tab order by foo desc limit 1;\n> > >\n> > > Can we consider doing this optimization automatically?\n> >\n> > Only if we assume that people do not define their own max() that does\n> > something\n> > that can't be calculated using the above formula like calculating AVG().\n> \n> I hadn't thought of that one. I can't imagine a max() that doesn't\n> match the ORDER BY collating.\n\nBut suppose you could have different indexes on the same column. For\nexample \nfor IP address you can theoretically define one index that indexes by\nmask \nlength and other that indexes by numeric value of IP and yet another\nthat \nindexes by some combination of both.\n\nwhen doing an ORDER BY you can specify 'USING operator'\n\n> Updated TODO item:\n> \n> * Use indexes for min() and max() or convert to SELECT col FROM tab\n> ORDER BY col DESC LIMIT 1;\n\nMaybe rather\n\n* Use indexes for min() and max() or convert to \"SELECT col FROM tab\n ORDER BY col DESC USING max_index_op LIMIT 1\" if there is an index \n on tab that uses btree(col max_index_op)\n\nit seems that in most other cases the rewrite would be either a \nmisoptimisation or plain wrong.\n\n----------------\nHannu\n", "msg_date": "Fri, 12 Oct 2001 19:09:59 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > > \"Reinoud van Leeuwen\" <reinoud@xs4all.nl> writes:\n> > > > I have a table that contains almost 8 milion rows. The primary key is a\n> > > > sequence, so the index should have a good distribution. 
Why does the\n> > > > optimizer refuse to use the index for getting the maximum value?\n> > >\n> > > The optimizer has no idea that max() has anything to do with indexes.\n> > > You could try something like\n> > >\n> > > select * from tab order by foo desc limit 1;\n> > \n> > Can we consider doing this optimization automatically?\n> \n> Only if we assume that people do not define their own max() that does\n> something \n> that can't be calculated using the above formula like calculating AVG().\n\nI hadn't thought of that one. I can't imagine a max() that doesn't\nmatch the ORDER BY collating.\n\nUpdated TODO item:\n\n* Use indexes for min() and max() or convert to SELECT col FROM tab\n ORDER BY col DESC LIMIT 1; \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 12:13:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Maybe rather\n\n> * Use indexes for min() and max() or convert to \"SELECT col FROM tab\n> ORDER BY col DESC USING max_index_op LIMIT 1\" if there is an index \n> on tab that uses btree(col max_index_op)\n\n> it seems that in most other cases the rewrite would be either a \n> misoptimisation or plain wrong.\n\nWe would clearly need to add information to the system catalogs to allow\nthe planner to determine whether a given aggregate matches up to a given\nindex opclass. This has been discussed before.\n\nA more interesting question is how to determine whether such a rewrite\nwould be a win. That is NOT a foregone conclusion. 
Consider\n\n\tSELECT max(col1) FROM tab WHERE col2 BETWEEN 12 AND 42;\n\nDepending on the selectivity of the WHERE condition, we might be far\nbetter off to scan on a col2 index and use our traditional max()\ncode than to scan on a col1 index until we find a row passing the\nWHERE condition. I'm not sure whether the planner currently has\nstatistics appropriate for such estimates or not ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2001 13:14:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimizer question " }, { "msg_contents": "> Hannu Krosing <hannu@tm.ee> writes:\n> > Maybe rather\n> \n> > * Use indexes for min() and max() or convert to \"SELECT col FROM tab\n> > ORDER BY col DESC USING max_index_op LIMIT 1\" if there is an index \n> > on tab that uses btree(col max_index_op)\n> \n> > it seems that in most other cases the rewrite would be either a \n> > misoptimisation or plain wrong.\n> \n> We would clearly need to add information to the system catalogs to allow\n> the planner to determine whether a given aggregate matches up to a given\n> index opclass. This has been discussed before.\n> \n> A more interesting question is how to determine whether such a rewrite\n> would be a win. That is NOT a foregone conclusion. Consider\n> \n> \tSELECT max(col1) FROM tab WHERE col2 BETWEEN 12 AND 42;\n> \n> Depending on the selectivity of the WHERE condition, we might be far\n> better off to scan on a col2 index and use our traditional max()\n> code than to scan on a col1 index until we find a row passing the\n> WHERE condition. I'm not sure whether the planner currently has\n> statistics appropriate for such estimates or not ...\n\nYes, agreed. This would be just for limited cases. 
Updated to:\n\n* Use indexes for min() and max() or convert to SELECT col FROM tab ORDER\n BY col DESC LIMIT 1 if appropriate index exists and WHERE clause acceptible\n ^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 13:22:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Hannu Krosing <hannu@tm.ee> writes:\n> > > Maybe rather\n> >\n> > > * Use indexes for min() and max() or convert to \"SELECT col FROM tab\n> > > ORDER BY col DESC USING max_index_op LIMIT 1\" if there is an index\n> > > on tab that uses btree(col max_index_op)\n> >\n> > > it seems that in most other cases the rewrite would be either a\n> > > misoptimisation or plain wrong.\n> >\n> > We would clearly need to add information to the system catalogs to allow\n> > the planner to determine whether a given aggregate matches up to a given\n> > index opclass. This has been discussed before.\n> >\n> > A more interesting question is how to determine whether such a rewrite\n> > would be a win. That is NOT a foregone conclusion. Consider\n> >\n> > SELECT max(col1) FROM tab WHERE col2 BETWEEN 12 AND 42;\n> >\n> > Depending on the selectivity of the WHERE condition, we might be far\n> > better off to scan on a col2 index and use our traditional max()\n> > code than to scan on a col1 index until we find a row passing the\n> > WHERE condition. I'm not sure whether the planner currently has\n> > statistics appropriate for such estimates or not ...\n> \n> Yes, agreed. This would be just for limited cases. 
Updated to:\n> \n> * Use indexes for min() and max() or convert to SELECT col FROM tab ORDER\n> BY col DESC LIMIT 1 if appropriate index exists and WHERE clause acceptible\n> ^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^\nIt would be probably a win if only exact match of\n\n SELECT MAX(*) FROM TAB ;\n\nwould be rewritten if appropriate index exists.\n\nThe appropriateness should be explicitly declared in aggregate\ndefinition.\n\n-----------------\nHannu\n", "msg_date": "Sat, 13 Oct 2001 08:52:31 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: optimizer question" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > Hannu Krosing <hannu@tm.ee> writes:\n> > > > Maybe rather\n> > >\n> > > > * Use indexes for min() and max() or convert to \"SELECT col FROM tab\n> > > > ORDER BY col DESC USING max_index_op LIMIT 1\" if there is an index\n> > > > on tab that uses btree(col max_index_op)\n> > >\n> > > > it seems that in most other cases the rewrite would be either a\n> > > > misoptimisation or plain wrong.\n> > >\n> > > We would clearly need to add information to the system catalogs to allow\n> > > the planner to determine whether a given aggregate matches up to a given\n> > > index opclass. This has been discussed before.\n> > >\n> > > A more interesting question is how to determine whether such a rewrite\n> > > would be a win. That is NOT a foregone conclusion. Consider\n> > >\n> > > SELECT max(col1) FROM tab WHERE col2 BETWEEN 12 AND 42;\n> > >\n> > > Depending on the selectivity of the WHERE condition, we might be far\n> > > better off to scan on a col2 index and use our traditional max()\n> > > code than to scan on a col1 index until we find a row passing the\n> > > WHERE condition. I'm not sure whether the planner currently has\n> > > statistics appropriate for such estimates or not ...\n> >\n> > Yes, agreed. This would be just for limited cases. 
Updated to:\n> >\n> > * Use indexes for min() and max() or convert to SELECT col FROM tab ORDER\n> > BY col DESC LIMIT 1 if appropriate index exists and WHERE clause acceptible\n> > ^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^\n> It would be probably a win if only exact match of\n> \n> SELECT MAX(*) FROM TAB ;\n> \n> would be rewritten if appropriate index exists.\n> \n> The appropriateness should be explicitly declared in aggregate\n> definition.\n\nI want to chime in here. If the ability exists to evaluate that max() or min()\nis appropriate, and that using the equivilent of \"select select col1 from tab\ndesc limit 1\" for \"select max(col1) from tab\" would be a huge gain for\nPostgres. I know our Oracle8i can't do it, and it would be a very usefull\noptimization. \n\nAt issue is the the \"limit\" clause is very very cool and not available in\nOracle, and since it isn't available, one does not think to use it, and in\nqueries where they my execute on both Postgres AND oracle, you can't use it.\n", "msg_date": "Sat, 13 Oct 2001 10:10:07 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: optimizer question" } ]
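The rewrite discussed in this thread is easy to check for result-equivalence outside PostgreSQL. The sketch below uses Python's bundled sqlite3, chosen only because it needs no server (SQLite's planner happens to apply a min/max-with-index optimization of its own); with an index on the column, the aggregate and the ORDER BY ... LIMIT 1 form must agree:

```python
import sqlite3

# Demonstrates that SELECT max(col) and SELECT col ORDER BY col DESC LIMIT 1
# return the same value when col is indexed. Table/column names echo the
# original question; this is an illustration, not a PostgreSQL test.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE radius (radiuspk INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO radius VALUES (?)",
                [(i,) for i in range(1, 1001)])

(max_agg,) = con.execute("SELECT max(radiuspk) FROM radius").fetchone()
(max_lim,) = con.execute(
    "SELECT radiuspk FROM radius ORDER BY radiuspk DESC LIMIT 1"
).fetchone()
print(max_agg, max_lim)  # 1000 1000
```

The equivalence holds only for the default btree-ordered max(); as Hannu and Tom note above, user-defined aggregates and selective WHERE clauses are exactly where the automatic rewrite would go wrong or lose.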
[ { "msg_contents": "I'm looking at pg_proc.h to adjust the cacheable attribute for date/time\nfunctions. Can anyone recall why the interval data type would have been\nconsidered non-cacheable? I didn't make internal changes to that type,\nbut istm that it should be cacheable already.\n\nFor timestamp and timestamptz, I've eliminated the \"current\" special\nvalue which afaicr is the only reason timestamp had not been cacheable\nin the past. Are there any functions which should *not* be considered\ncacheable for those types? Apparently the _in() and _out() functions\nshould not be? Everything else is deterministic so would seem to be a\ncandidate.\n\nComments?\n\n - Thomas\n", "msg_date": "Wed, 26 Sep 2001 13:08:48 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "iscacheable for date/time?" }, { "msg_contents": "How about iscacheable for the to_char() functions? Can we recall why\nthose are not cacheable, even for non-date/time types?\n\n - Thomas\n", "msg_date": "Wed, 26 Sep 2001 13:17:21 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: iscacheable for date/time?" }, { "msg_contents": "... and how about the istrusted attribute for various routines? Should\nit be always false or always true for C builtin functions? How about for\nbuiltin SQL functions which are built on top of trusted C functions? Are\nwe guarding against catalog changes on the underlying C routines?\n\n - Thomas\n", "msg_date": "Wed, 26 Sep 2001 13:24:19 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: iscacheable for date/time?" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> ... and how about the istrusted attribute for various routines? Should\n> it be always false or always true for C builtin functions? How about for\n> builtin SQL functions which are built on top of trusted C functions? 
Are\n> we guarding against catalog changes on the underlying C routines?\n\nI have always had trouble with the \"iscacheable\" flag, there needs to be a\nnumber of \"cache\" levels:\n\n(1) cache per transaction, so you can use a function in a where statement\nand it does not force a table scan. IMHO this should be the default for all\nfunctions, but is not supported in PostgreSQL.\n\n(2) nocache, which would mean it forces a tables scan. This is the current\ndefault.\n\n(3) global cache, which means the results can be stored in perpetuity, this\nis the current intended meaning of iscacheable.\n\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 10:01:26 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: iscacheable for date/time?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Can anyone recall why the interval data type would have been\n> considered non-cacheable?\n\nI believe I made all functions for all datetime-related types\nnoncacheable, simply because I wasn't sure which of them had the\n\"current\" behavior.\n\n> For timestamp and timestamptz, I've eliminated the \"current\" special\n> value which afaicr is the only reason timestamp had not been cacheable\n> in the past. Are there any functions which should *not* be considered\n> cacheable for those types? Apparently the _in() and _out() functions\n> should not be?\n\nin() should not be, since its result for the strings \"now\", \"today\",\n\"tomorrow\", etc is variable. But AFAICS there's no reason to mark out()\nas noncacheable anymore.\n\nThe general rule is: if there are any fixed input values for which the\noutput might vary over time, then it should be noncachable.\n\nDunno why to_char is marked noncachable; does it perhaps have\nformat-string entries that pick up current time somehow? 
I might just\nhave been worried about its response to \"current\", though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 10:57:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: iscacheable for date/time? " }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... and how about the istrusted attribute for various routines? Should\n> it be always false or always true for C builtin functions?\n\nAt the moment it seems to be true for every pg_proc entry in template1.\nAFAIK the attribute is not actually being looked at, anyway. I think\nit used to be used to determine which functions needed to be executed in\na separate subprocess for safety reasons (ie, coredump of the function\nwouldn't kill the backend) ... but that code's been gone for a long while.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 11:01:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: iscacheable for date/time? " } ]
[ { "msg_contents": "I have just upgraded to the new PostgreSQL 7.1.3 (from 7.0.3) and have been\nexperiencing a pretty serious problem:\n On one particular page, in what seems to be completely random instances,\nI get buffer overruns and either 0-rows or a crashed apache child. Turning\non PHP's --enable-debug, I receive the following:\n\n\n[Wed Sep 26 06:21:12 2001] Script: '/path/to/script.php'\n---------------------------------------\npgsql.c(167) : Block 0x086A6DF8 status:\nBeginning: Overrun (magic=0x00000000, expected=0x7312F8DC)\n End: Unknown\n---------------------------------------\n\nSometimes it will actually crash mid-way (probably overwrote some valuable\ncode):\n---------------------------------------\npgsql.c(167) : Block 0x08684290 status:\nBeginning: Overrun (magic=0x0000111A, expected=0x7312F8DC)\n[Wed Sep 26 09:22:46 2001] [notice] child pid 8710 exit signal Segmentation\nfault (11)\n\nThis problem is of great concern to me and I have been working for days\ntrying to debug it myself and find other reports, with little success. The\nline it claims to be failing on is PHP's ext/pgsql/pgsql.c on line 167 (by\nwhat this claims) which is the following function [the\nefree(PGG(last_notice)) line].\n\nstatic void\n_notice_handler(void *arg, const char *message)\n{\n PGLS_FETCH();\n\n if (! PGG(ignore_notices)) {\n php_log_err((char *) message);\n if (PGG(last_notice) != NULL) {\n efree(PGG(last_notice));\n }\n PGG(last_notice) = estrdup(message);\n }\n}\n\n\nCan anyone provide further input as to why this is causing problems? The\nPHP code works sometimes and not others, and it seems to be only that one\nscript, so I do not believe it to be a hardware issue.\n\nAny thoughts? I can provide any further system information if needed. 
I\nhave tried recompiling pgsql, php and apache with different optimizations\n[including none at all and debug mode on as i have now] with little change\nin the result.\n\nThanks in advance;\n--\nMike\n\ncc: pgsql-hackers; pgsql-php; pgsql_bugs\n", "msg_date": "Wed, 26 Sep 2001 13:13:17 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "PostgreSQL / PHP Overrun Error" }, { "msg_contents": "\"Mike Rogers\" <temp6453@hotmail.com> writes:\n> This problem is of great concern to me and I have been working for days\n> trying to debug it myself and find other reports, with little success. The\n> line it claims to be failing on is PHP's ext/pgsql/pgsql.c on line 167 (by\n> what this claims) which is the following function [the\n> efree(PGG(last_notice)) line].\n\nThis isn't our code, so you'd likely have better luck complaining on\nsome PHP-related list. But it looks to me like this code is simply\ntrying to free any previous notice message before it stores the new\none into PGG(last_notice) (whatever the heck that is). 
I'm guessing\nthat that pointer is uninitialized or has been clobbered somehow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 12:23:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL / PHP Overrun Error " }, { "msg_contents": "Well it really isn't your code (true), but the only thing that is changed is\nthe 7.0-7.1- Was a data length changed on the return or something that\ncould affect this?\n--\nMike\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-php@postgresql.org>;\n<pgsql-bugs@postgresql.org>\nSent: Wednesday, September 26, 2001 1:23 PM\nSubject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n\n\n> \"Mike Rogers\" <temp6453@hotmail.com> writes:\n> > This problem is of great concern to me and I have been working for days\n> > trying to debug it myself and find other reports, with little success.\nThe\n> > line it claims to be failing on is PHP's ext/pgsql/pgsql.c on line 167\n(by\n> > what this claims) which is the following function [the\n> > efree(PGG(last_notice)) line].\n>\n> This isn't our code, so you'd likely have better luck complaining on\n> some PHP-related list. But it looks to me like this code is simply\n> trying to free any previous notice message before it stores the new\n> one into PGG(last_notice) (whatever the heck that is). I'm guessing\n> that that pointer is uninitialized or has been clobbered somehow.\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 26 Sep 2001 13:28:49 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "--- Mike Rogers <temp6453@hotmail.com> wrote:\n> Can anyone provide further input as to why this is causing\n> problems? 
The\n> PHP code works sometimes and not others, and it seems to be\n> only that one\n> script, so I do not believe it to be a hardware issue.\n> \n> Any thoughts? I can provide any further system information if\n> needed. I\n> have tried recompiling pgsql, php and apache with different\n> optimizations\n> [including none at all and debug mode on as i have now] with\n> little change\n> in the result.\n\nI performed a search on http://bugs.php.net/ but I did not find\nyour problem. If it persists then I would suggest submitting a\nnew bug report.\n\nPHP 4.0.6 is the first version of PHP to have the\npg_last_notice() function which apparently the problem is tied\nin with. Is that the version that you are running? What PHP\npg_* commands are you using on this PHP script that you are not\nusing on other pages? \n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nGet email alerts & NEW webcam video instant messaging with Yahoo! Messenger. http://im.yahoo.com\n", "msg_date": "Wed, 26 Sep 2001 09:55:19 -0700 (PDT)", "msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Mike Rogers wrote:\n\n> Well it really isn't your code (true), but the only thing that is changed is\n> the 7.0-7.1- Was a data length changed on the return or something that\n> could affect this?\n\nWhat version of PHP are you using?\n\n\n", "msg_date": "Wed, 26 Sep 2001 12:55:57 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "--- Mike Rogers <temp6453@hotmail.com> wrote:\n> Well it really isn't your code (true), but the only thing that\n> is changed is\n> the 7.0-7.1- Was a data length changed on the return or\n> something that\n> could affect this?\n\nI believe that it is unlikely that the problem is with Postgres.\n I have been running PostgreSQL 7.1.3 with PHP 4.0.4pl1 for\nmonths now with no problems. I believe this bug was introduced\nin PHP 4.0.6.\n\nBrent\n\n\n__________________________________________________\nDo You Yahoo!?\nGet email alerts & NEW webcam video instant messaging with Yahoo! Messenger. http://im.yahoo.com\n", "msg_date": "Wed, 26 Sep 2001 09:59:33 -0700 (PDT)", "msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Sorry:\n PHP 4.0.6 (with memory leak patch [download listed right below\nphp-4.0.6.tar.gz download- It was a problem])\n PostgreSQL 7.1.3\n Apache 1.3.20 (with mod_ssl- but it does the same thing without mod_ssl)\n--\nMike\n\n----- Original Message -----\nFrom: \"mlw\" <markw@mohawksoft.com>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n<pgsql-php@postgresql.org>; <pgsql-bugs@postgresql.org>\nSent: Wednesday, September 26, 2001 1:55 PM\nSubject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n\n\n> Mike Rogers wrote:\n>\n> > Well it really isn't your code (true), but the only thing that is\nchanged is\n> > the 7.0-7.1- Was a data length changed on the return or something that\n> > could affect this?\n>\n> What version of PHP are you using?\n>\n>\n>\n", "msg_date": "Wed, 26 Sep 2001 14:07:08 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Interesting. I am using that same configuration. We are using the same thing on\nour website as well. I have never seen this problem. Weird.\n\nMy guess is that you are getting an error. The PHP code is somehow mucking\nthis up. But I would try executing the query in psql and see what comes up.\n\nThe PHP code that handles the error may have a fixed length buffer for speed,\nand it is too short for a longer 7.1 error message. Again, I am guessing.\n\nMy bet is that the query is failing with an error, so you really have two\nproblems.
A problem in your SQL which is causing you to see a bug in PHP.\n\n\n\nMike Rogers wrote:\n\n> Sorry:\n> PHP 4.0.6 (with memory leak patch [download listed right below\n> php-4.0.6.tar.gz download- It was a problem])\n> PostgreSQL 7.1.3\n> Apache 1.3.20 (with mod_ssl- but it does the same thing without mod_ssl)\n> --\n> Mike\n>\n> ----- Original Message -----\n> From: \"mlw\" <markw@mohawksoft.com>\n> To: \"Mike Rogers\" <temp6453@hotmail.com>\n> Cc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n> <pgsql-php@postgresql.org>; <pgsql-bugs@postgresql.org>\n> Sent: Wednesday, September 26, 2001 1:55 PM\n> Subject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n>\n> > Mike Rogers wrote:\n> >\n> > > Well it really isn't your code (true), but the only thing that is\n> changed is\n> > > the 7.0-7.1- Was a data length changed on the return or something that\n> > > could affect this?\n> >\n> > What version of PHP are you using?\n> >\n> >\n> >\n\n", "msg_date": "Wed, 26 Sep 2001 13:15:14 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "I am currently running PHP-4.0.6 (with the memory leak patch as posted to\ncorrect a problem in this release [it is on the download page]).\nThe script that causes them mostly \"includes\" other scripts to do its job.\nThe home page uses some of the scripts that it uses and I've seen the errors\nthere too. The errors seem infrequent, and just occur randomly... Maybe\nevery 2-10 minutes on a reasonably high-volume server. The commands that\nseem to be getting executed by the script [it's not my script]:\n pg_exec\n pg_errormessage\n pg_fetch_array\n pg_errormessage\n pg_freeresult\n pg_fieldname\n pg_fieldtype\n pg_fieldsize\n pg_cmdtuples\n pg_numrows\n pg_numfields\n\nI have been trying minor modifications to the code to ensure that there are\nno minor errors (such as setting things to null in case functions don't) but\nam trying to track it down further.\n I truly wish there was a way to get a status report from the script as\nit goes [see what line it stopped on]- Unfortunately, I don't think that's\nreally possible.\n\nThanks;\n--\nMike\n\n\n----- Original Message -----\nFrom: \"Brent R. Matzelle\" <bmatzelle@yahoo.com>\nTo: <pgsql-php@postgresql.org>\nSent: Wednesday, September 26, 2001 1:55 PM\nSubject: Re: [PHP] PostgreSQL / PHP Overrun Error\n\n\n> --- Mike Rogers <temp6453@hotmail.com> wrote:\n> > Can anyone provide further input as to why this is causing\n> > problems? The\n> > PHP code works sometimes and not others, and it seems to be\n> > only that one\n> > script, so I do not believe it to be a hardware issue.\n> >\n> > Any thoughts? I can provide any further system information if\n> > needed. I\n> > have tried recompiling pgsql, php and apache with different\n> > optimizations\n> > [including none at all and debug mode on as i have now] with\n> > little change\n> > in the result.\n>\n> I performed a search on http://bugs.php.net/ but I did not find\n> your problem. If it persists then I would suggest submitting a\n> new bug report.\n>\n> PHP 4.0.6 is the first version of PHP to have the\n> pg_last_notice() function which apparently the problem is tied\n> in with. Is that the version that you are running? What PHP\n> pg_* commands are you using on this PHP script that you are not\n> using on other pages?\n>\n> Brent\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Get email alerts & NEW webcam video instant messaging with Yahoo!\nMessenger. http://im.yahoo.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n", "msg_date": "Wed, 26 Sep 2001 16:46:16 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "--- Mike Rogers <temp6453@hotmail.com> wrote:\n> I am currently running PHP-4.0.6 (with the memory leak patch\n> as posted to\n> correct a problem in this release [it is on the download\n> page]).\n> The script that causes them mostly \"includes\" other scripts to\n> do its job.\n> The home page uses some of the scripts that it uses and I've\n> seen the errors\n> there too. The errors seem infrequent, and just occur\n> randomly... Maybe\n> every 2-10 minutes on a reasonably high-volume server. The\n> commands that\n> seem to be getting executed by the script [it's not my\n> script]:\n> pg_exec\n> pg_errormessage\n> pg_fetch_array\n> pg_errormessage\n> pg_freeresult\n> pg_fieldname\n> pg_fieldtype\n> pg_fieldsize\n> pg_cmdtuples\n> pg_numrows\n> pg_numfields\n\nAll of those functions are okay to use except that I would\ncomment out the calls to pg_freeresult(). I have seen various\nolder bug reports that have noted that this function can be \ntroublesome. Plus, PHP frees this memory automatically whether\nyou call this function or not. I rarely use this function,\nunless I am making queries that return enormous amounts of data,\nand even then I really do not require it.\n\nLast note, the PHP debug information that you gathered seemed to\nbe cleaning up the pg_notice memory if I was not mistaken. I\nwould not be surprised if this function was called by\npg_freeresult. \n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nGet email alerts & NEW webcam video instant messaging with Yahoo! Messenger.
http://im.yahoo.com\n", "msg_date": "Wed, 26 Sep 2001 13:05:10 -0700 (PDT)", "msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "As you can see here, I commented out the efree line and instead just set it\nto none. This means that PHP is simply freeing it in it's own memory-leak\ncleanup. It means the logs gets:\n pgsql.c(170) : Freeing 0x085D3AAC (62 bytes),\nscript=/path/to/index.html\n Last leak repeated 7 times\ninstead of segmentation faulting.\n\nIf you efree() something which was not emalloc()'ed nor estrdup()'ed you\nwill probably get a segmentation fault. That's my understanding of efree()-\nI'd say that's probably be what's going on... So by me just NULL'ing\nPGG(last_notice) instead of efree'ing it, it means that should it not have\nbeen emalloc'd or estrdup'd then it won't crash miserably... And it's not.\n\nstatic void\n_notice_handler(void *arg, const char *message)\n{\n PGLS_FETCH();\n\n if (! PGG(ignore_notices)) {\n php_log_err((char *) message);\n if (PGG(last_notice) != NULL) {\n /* efree(PGG(last_notice)); */\n PGG(last_notice) = NULL;\n }\n PGG(last_notice) = estrdup(message);\n }\n}\n\nSo yes- I wish there was a real solution rather than relying on php's\ninternal cleanup mechanism... But I will happily start from here as a point\nto move from... I will send patches once i get the problem completely\nfixed- in a better solution than this trial.\n\n--\nMike\n\n----- Original Message -----\nFrom: \"Brent R. 
Matzelle\" <bmatzelle@yahoo.com>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>; <pgsql-php@postgresql.org>\nSent: Wednesday, September 26, 2001 5:05 PM\nSubject: Re: [PHP] PostgreSQL / PHP Overrun Error\n\n\n> --- Mike Rogers <temp6453@hotmail.com> wrote:\n> > I am currently running PHP-4.0.6 (with the memory leak patch\n> > as posted to\n> > correct a problem in this release [it is on the download\n> > page]).\n> > The script that causes them mostly \"includes\" other scripts to\n> > do it's job.\n> > The home page uses some of the scripts that it uses and I've\n> > seen the errors\n> > there too. There errors seem infrequent, and just occur\n> > randomly... Maybe\n> > every 2-10 minutes on a reasonably high-volume server. The\n> > commands that\n> > seem to be getting executed by the script [it's not my\n> > script]:\n> > pg_exec\n> > pg_errormessage\n> > pg_fetch_array\n> > pg_errormessage\n> > pg_freeresult\n> > pg_fieldname\n> > pg_fieldtype\n> > pg_fieldsize\n> > pg_cmdtuples\n> > pg_numrows\n> > pg_numfields\n>\n> All of those functions are okay to use except that I would\n> commment out the calls to pg_freeresult(). I have seen various\n> older bug reports that have noted that this function can be\n> troublesome. Plus, PHP frees this memory automatically whether\n> you call this function or not. I rarely use this function,\n> unless I am making queries that return enormous amounts of data,\n> and even then I really do not require it.\n>\n> Last note, the PHP debug information that you gathered seemed to\n> be cleaning up the pg_notice memory if I was not mistaken. I\n> would not be surprised if this function was called by\n> pg_freeresult.\n>\n> Brent\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Get email alerts & NEW webcam video instant messaging with Yahoo!\nMessenger. 
http://im.yahoo.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Wed, 26 Sep 2001 17:42:45 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Have you recompiled PHP to link against the new postgres libraries?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Mike Rogers\n> Sent: Thursday, 27 September 2001 1:07 AM\n> To: mlw\n> Cc: pgsql-hackers@postgresql.org; pgsql-php@postgresql.org;\n> pgsql-bugs@postgresql.org\n> Subject: Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error\n> \n> \n> Sorry:\n> PHP 4.0.6 (with memory leak patch [download listed right below\n> php-4.0.6.tar.gz download- It was a problem])\n> PostgreSQL 7.1.3\n> Apache 1.3.20 (with mod_ssl- but it does the same thing \n> without mod_ssl)\n> --\n> Mike\n> \n> ----- Original Message -----\n> From: \"mlw\" <markw@mohawksoft.com>\n> To: \"Mike Rogers\" <temp6453@hotmail.com>\n> Cc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n> <pgsql-php@postgresql.org>; <pgsql-bugs@postgresql.org>\n> Sent: Wednesday, September 26, 2001 1:55 PM\n> Subject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n> \n> \n> > Mike Rogers wrote:\n> >\n> > > Well it really isn't your code (true), but the only thing that is\n> changed is\n> > > the 7.0-7.1- Was a data length changed on the return or \n> something that\n> > > could affect this?\n> >\n> > What version of PHP are you using?\n> >\n> >\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 27 Sep 2001 09:31:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, 
"msg_subject": "Re: [HACKERS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "There is a problem in PHP-4.0.6. Please use PHP4.0.7 or 4.0.8 and the\nproblem will be solved. This can be obtained from CVS\n--\nMike\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>; \"mlw\" <markw@mohawksoft.com>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-php@postgresql.org>;\n<pgsql-bugs@postgresql.org>\nSent: Wednesday, September 26, 2001 10:31 PM\nSubject: RE: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error\n\n\n> Have you recompiled PHP to link against the new postgres libraries?\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Mike Rogers\n> > Sent: Thursday, 27 September 2001 1:07 AM\n> > To: mlw\n> > Cc: pgsql-hackers@postgresql.org; pgsql-php@postgresql.org;\n> > pgsql-bugs@postgresql.org\n> > Subject: Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error\n> >\n> >\n> > Sorry:\n> > PHP 4.0.6 (with memory leak patch [download listed right below\n> > php-4.0.6.tar.gz download- It was a problem])\n> > PostgreSQL 7.1.3\n> > Apache 1.3.20 (with mod_ssl- but it does the same thing\n> > without mod_ssl)\n> > --\n> > Mike\n> >\n> > ----- Original Message -----\n> > From: \"mlw\" <markw@mohawksoft.com>\n> > To: \"Mike Rogers\" <temp6453@hotmail.com>\n> > Cc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n> > <pgsql-php@postgresql.org>; <pgsql-bugs@postgresql.org>\n> > Sent: Wednesday, September 26, 2001 1:55 PM\n> > Subject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n> >\n> >\n> > > Mike Rogers wrote:\n> > >\n> > > > Well it really isn't your code (true), but the only thing that is\n> > changed is\n> > > > the 7.0-7.1- Was a data length changed on the return or\n> > something that\n> > > > could affect this?\n> > >\n> > > What version of PHP are you using?\n> > >\n> > 
>\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n", "msg_date": "Wed, 26 Sep 2001 22:51:18 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "On Mi� 26 Sep 2001 22:51, Mike Rogers wrote:\n> There is a problem in PHP-4.0.6. Please use PHP4.0.7 or 4.0.8 and the\n> problem will be solved. This can be obtained from CVS\n\nSorry, but 4.0.6 is the last version out (there may be some RC of 4.0.7), but \nhow can we get those, and how much can we trust a RC version?\n\nSaludos... :-)\n\n-- \nPorqu� usar una base de datos relacional cualquiera,\nsi pod�s usar PostgreSQL?\n-----------------------------------------------------------------\nMart�n Marqu�s | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Thu, 27 Sep 2001 17:55:36 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: [PHP] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "I'm using the current CVS (4.0.8-dev)- It's spectacular. Lower memory\nusage, more descriptive debug, better control over it. Tons more options,\nsmaller code, much much faster.\n\n Can you trust it- sure. It isn't a release candidate. It is the\ncurrent development version. As they find problems, they get fixed. As\nlong as you keep a bit on top you are fine. If anything it is _MORE_ secure\nthan the current version, as any security problems were fixed earlier as\nsoon as they get found, and new bugs aren't known yet. I'm seriously\nimpressed with it and feel like I will be using the CVS code quite a bit\nmore. 
Any bugs Zend memory manager cleans up anyway.\n\n Note: if you are not having the problem of 0 rows or buffer overruns,\ndon't bother upgrading as it will not benefit you. Clearly I was and the\nnew code fixed the flaws in the existing code for my usage.\n--\nMike\n\n----- Original Message -----\nFrom: \"Mart�n Marqu�s\" <martin@bugs.unl.edu.ar>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-php@postgresql.org>\nSent: Thursday, September 27, 2001 5:55 PM\nSubject: Re: [PHP] [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error\n\n\n> On Mi� 26 Sep 2001 22:51, Mike Rogers wrote:\n> > There is a problem in PHP-4.0.6. Please use PHP4.0.7 or 4.0.8 and the\n> > problem will be solved. This can be obtained from CVS\n>\n> Sorry, but 4.0.6 is the last version out (there may be some RC of 4.0.7),\nbut\n> how can we get those, and how much can we trust a RC version?\n>\n> Saludos... :-)\n>\n> --\n> Porqu� usar una base de datos relacional cualquiera,\n> si pod�s usar PostgreSQL?\n> -----------------------------------------------------------------\n> Mart�n Marqu�s | mmarques@unl.edu.ar\n> Programador, Administrador, DBA | Centro de Telematica\n> Universidad Nacional\n> del Litoral\n> -----------------------------------------------------------------\n>\n", "msg_date": "Thu, 27 Sep 2001 18:22:14 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: [PHP] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "> I'm using the current CVS (4.0.8-dev)- It's spectacular. Lower memory\n> usage, more descriptive debug, better control over it. Tons more options,\n> smaller code, much much faster.\n> \n> Can you trust it- sure. It isn't a release candidate. It is the\n> current development version. As they find problems, they get fixed. As\n> long as you keep a bit on top you are fine. 
If anything it is _MORE_ secure\n> than the current version, as any security problems were fixed earlier as\n> soon as they get found, and new bugs aren't known yet. I'm seriously\n> impressed with it and feel like I will be using the CVS code quite a bit\n> more. Any bugs Zend memory manager cleans up anyway.\n> \n> Note: if you are not having the problem of 0 rows or buffer overruns,\n> don't bother upgrading as it will not benefit you. Clearly I was and the\n> new code fixed the flaws in the existing code for my usage.\n\nKeep in mind we can change the on-disk structure or system tables\nanytime so you may have trouble moving between different CVS versions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 15:50:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PHP] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "On Fri, Sep 28, 2001 at 03:50:12PM -0400, Bruce Momjian wrote:\n> > I'm using the current CVS (4.0.8-dev)- It's spectacular. Lower memory\n> > usage, more descriptive debug, better control over it. Tons more options,\n> > smaller code, much much faster.\n<snip>\n> > Note: if you are not having the problem of 0 rows or buffer overruns,\n> > don't bother upgrading as it will not benefit you. Clearly I was and the\n> > new code fixed the flaws in the existing code for my usage.\n> \n> Keep in mind we can change the on-disk structure or system tables\n> anytime so you may have trouble moving between different CVS versions.\n\nAlso note that Bruce is talking about using PostgreSQL CVS version, rather\nthan the PHP CVS version (as one can tell from the version number 4.0.8)\nthat everyone else is discussing. ;-)\n\nRoss\n", "msg_date": "Mon, 1 Oct 2001 10:02:15 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "\n--- \"Ross J. Reedstrom\" <reedstrm@rice.edu> wrote:\n> On Fri, Sep 28, 2001 at 03:50:12PM -0400, Bruce Momjian wrote:\n> > > I'm using the current CVS (4.0.8-dev)- It's spectacular. \n> Lower memory\n> > > usage, more descriptive debug, better control over it. \n> Tons more options,\n> > > smaller code, much much faster.\n> <snip>\n> > > Note: if you are not having the problem of 0 rows or\n> buffer overruns,\n> > > don't bother upgrading as it will not benefit you. \n> Clearly I was and the\n> > > new code fixed the flaws in the existing code for my\n> usage.\n> > \n> > Keep in mind we can change the on-disk structure or system\n> tables\n> > anytime so you may have trouble moving between different CVS\n> versions.\n> \n> Also note that Bruce is talking about using PostgreSQL CVS\n> version, rather\n> than the PHP CVS version (as one can tell from the version\n> number 4.0.8)\n> that everyone else is discussing. ;-)\n\nI just tried to access the CVS version but the CVS web and CVS\npages for PostgreSQL are missing:\n\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql and \nhttp://www.postgresql.org/devel-corner/docs/postgres/cvs.html\n\nHow can I get this new version? Has this new code been\nsubmitted to the PHP guys yet? \n\nHave any of you been successful in building a pgsql PHP\nextension (pgsql.so)? I have not been able to locate the\ndocument that describes how to compile one. \n\nThanks.\n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nListen to your Yahoo! Mail messages from any phone.\nhttp://phone.yahoo.com\n", "msg_date": "Mon, 1 Oct 2001 10:46:06 -0700 (PDT)", "msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "It is not the CVS version of Postgres ((*shudders* The actual release\nversion is buggy enough)). The CVS version of PHP\n\nhttp://www.php.net/cvsup.php\n\n--\nMike\n\n\n----- Original Message -----\nFrom: \"Brent R. Matzelle\" <bmatzelle@yahoo.com>\nTo: <pgsql-php@postgresql.org>\nSent: Monday, October 01, 2001 2:46 PM\nSubject: Re: [PHP] [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error\n\n\n>\n> --- \"Ross J. Reedstrom\" <reedstrm@rice.edu> wrote:\n> > On Fri, Sep 28, 2001 at 03:50:12PM -0400, Bruce Momjian wrote:\n> > > > I'm using the current CVS (4.0.8-dev)- It's spectacular.\n> > Lower memory\n> > > > usage, more descriptive debug, better control over it.\n> > Tons more options,\n> > > > smaller code, much much faster.\n> > <snip>\n> > > > Note: if you are not having the problem of 0 rows or\n> > buffer overruns,\n> > > > don't bother upgrading as it will not benefit you.\n> > Clearly I was and the\n> > > > new code fixed the flaws in the existing code for my\n> > usage.\n> > >\n> > > Keep in mind we can change the on-disk structure or system\n> > tables\n> > > anytime so you may have trouble moving between different CVS\n> > versions.\n> >\n> > Also note that Bruce is talking about using PostgreSQL CVS\n> > version, rather\n> > than the PHP CVS version (as one can tell from the version\n> > number 4.0.8)\n> > that everyone else is discussing. ;-)\n>\n> I just tried to access the CVS version but the CVS web and CVS\n> pages for PostgreSQL are missing:\n>\n> http://www.postgresql.org/cgi/cvsweb.cgi/pgsql and\n> http://www.postgresql.org/devel-corner/docs/postgres/cvs.html\n>\n> How can I get this new version? Has this new code been\n> submitted to the PHP guys yet?\n>\n> Have any of you been successful in building a pgsql PHP\n> extension (pgsql.so)?
I have not been able to locate the\n> document that describes how to compile one.\n>\n> Thanks.\n>\n> Brent\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Listen to your Yahoo! Mail messages from any phone.\n> http://phone.yahoo.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Mon, 1 Oct 2001 16:25:41 -0300", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Oh come on now, let's not start that kind of thread-- they don't serve any\npurpose but to piss people off :-)\n\nHave a good one..\n\n-Mitch\n\n> It is not the CVS version of Postgres ((*shudders* The actual release\n> version is buggy enough)). The CVS version of PHP\n>\n> http://www.php.net/cvsup.php\n>\n> --\n> Mike\n\n", "msg_date": "Mon, 1 Oct 2001 16:17:58 -0400", "msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "--- Mike Rogers <temp6453@hotmail.com> wrote:\n> It is not the CVS version of Postgres ((*shudders* The actual\n> release\n> version is buggy enough)). The CVS version of PHP\n> \n> http://www.php.net/cvsup.php\n\nMy mistake, that last message made it seem like PG had their own\nbranch of PHP. Regardless, has anyone compiled the pgsql\nextension? I do not want to build PG support directly into PHP.\n \n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nListen to your Yahoo! Mail messages from any phone.\nhttp://phone.yahoo.com\n", "msg_date": "Mon, 1 Oct 2001 13:39:02 -0700 (PDT)", "msg_from": "\"Brent R. 
Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "On Lun 01 Oct 2001 17:39, you wrote:\n> --- Mike Rogers <temp6453@hotmail.com> wrote:\n> > It is not the CVS version of Postgres ((*shudders* The actual\n> > release\n> > version is buggy enough)). The CVS version of PHP\n> >\n> > http://www.php.net/cvsup.php\n>\n> My mistake, that last message made it seem like PG had their own\n> branch of PHP. Regardless, has anyone compiled the pgsql\n> extension? I do not want to build PG support directly into PHP.\n\nIn all my compilations, I used the extensions. What do yoy mean by \"build PG \nsupport directly into PHP\"?\n\nSaludos... :-)\n\n-- \nPorqu� usar una base de datos relacional cualquiera,\nsi pod�s usar PostgreSQL?\n-----------------------------------------------------------------\nMart�n Marqu�s | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Mon, 1 Oct 2001 17:49:34 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "--- Mart���n Marqu���s <martin@bugs.unl.edu.ar> wrote:\n> On Lun 01 Oct 2001 17:39, you wrote:\n> > --- Mike Rogers <temp6453@hotmail.com> wrote:\n> > > It is not the CVS version of Postgres ((*shudders* The\n> actual\n> > > release\n> > > version is buggy enough)). The CVS version of PHP\n> > >\n> > > http://www.php.net/cvsup.php\n> >\n> > My mistake, that last message made it seem like PG had their\n> own\n> > branch of PHP. Regardless, has anyone compiled the pgsql\n> > extension? I do not want to build PG support directly into\n> PHP.\n> \n> In all my compilations, I used the extensions. 
What do yoy\n> mean by \"build PG \n> support directly into PHP\"?\n\nI use the php-4.0.4pl1 and php-pgsql RPMs from RedHat. I just\nwant to re-build the php-pgsql extension\n(/usr/lib/php4/pgsql.so) for php-4.0.4pl1 rather than with\nphp-4.0.6. I do not want to re-compile PHP --with-pgsql. \n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nListen to your Yahoo! Mail messages from any phone.\nhttp://phone.yahoo.com\n", "msg_date": "Mon, 1 Oct 2001 14:10:16 -0700 (PDT)", "msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "This is a known problem with PHP 4.0.6. You might want to upgrade to\n4.1.0RC2, or try patches made by one of the PHP developers:\n\nftp://ftp.sra.co.jp/pub/cmd/postgres/php/php-4.0.6-patches.tar.gz\n\n> Mike Rogers wrote:\n> \n> > Sorry:\n> > PHP 4.0.6 (with memory leak patch [download listed right below\n> > php-4.0.6.tar.gz download- It was a problem])\n> > PostgreSQL 7.1.3\n> > Apache 1.3.20 (with mod_ssl- but it does the same thing without mod_ssl)\n> > --\n> > Mike\n> >\n> > ----- Original Message -----\n> > From: \"mlw\" <markw@mohawksoft.com>\n> > To: \"Mike Rogers\" <temp6453@hotmail.com>\n> > Cc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>;\n> > <pgsql-php@postgresql.org>; <pgsql-bugs@postgresql.org>\n> > Sent: Wednesday, September 26, 2001 1:55 PM\n> > Subject: Re: [BUGS] PostgreSQL / PHP Overrun Error\n> >\n> > > Mike Rogers wrote:\n> > >\n> > > > Well it really isn't your code (true), but the only thing that is\n> > changed is\n> > > > the 7.0-7.1- Was a data length changed on the return or something that\n> > > > could affect this?\n> > >\n> > > What version of PHP are you using?\n> > >\n> > >\n> > >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send 
\"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n", "msg_date": "Fri, 16 Nov 2001 10:08:18 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PHP] PostgreSQL / PHP Overrun Error" }, { "msg_contents": "Why did it just send out tons of mail since September of this year- every\nmessage?\n--\nMike\n\n", "msg_date": "Fri, 16 Nov 2001 14:29:43 -0400", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "[PG MAIL LISTS] SEND OUT ALL????" }, { "msg_contents": "\nsomeone, either intentially or accidentally, sent out a load to the lists\n...\n\n\nOn Fri, 16 Nov 2001, Mike Rogers wrote:\n\n> Why did it just send out tons of mail since September of this year- every\n> message?\n> --\n> Mike\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 16 Nov 2001 14:26:20 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [PG MAIL LISTS] SEND OUT ALL????" }, { "msg_contents": "if you look at the originating time for all of the messages that got sent\nout of when it was sent from the host machine (with HELO host). Clearly it\nwas done on an admin side.\n--\nMike\n\n----- Original Message -----\nFrom: \"Marc G. 
Fournier\" <scrappy@hub.org>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-php@postgresql.org>;\n<pgsql-bugs@postgresql.org>\nSent: Friday, November 16, 2001 3:26 PM\nSubject: Re: [BUGS] [HACKERS] [PG MAIL LISTS] SEND OUT ALL????\n\n\n>\n> someone, either intentially or accidentally, sent out a load to the lists\n> ...\n>\n>\n> On Fri, 16 Nov 2001, Mike Rogers wrote:\n>\n> > Why did it just send out tons of mail since September of this year-\nevery\n> > message?\n> > --\n> > Mike\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Sat, 17 Nov 2001 00:22:56 -0400", "msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [PG MAIL LISTS] SEND OUT ALL????" } ]
[ { "msg_contents": "Hi,\n\nI have been working a bit at a patch for that problem in psql. The\npatch is far from being ready for inclusion or whatever, it's just for\ncomments...\n\nBy the way, can someone tell me how to generate nice patches showing\nthe difference between one's version and the cvs code that has been\ndownloaded ? I'm new to this (I've only used cvs for personal projects\nso far, and I don't need to send patches to myself ;) ).\n\nThe good things in this patch :\n\n- it works for me :)\n\n- I've used Markus Kuhn's implementation of wcwidth.c : it is locale\n independent, and is in the public domain. :) [if we keep it, I'll\n have to tell him, though !]\n\n- No dependency on the local libc's UTF-8-awareness ;) [I've seen that\n psql has no such dependency, at least in print.c, so I haven't added\n any]. Actually, the change is completely self-contained.\n\n- I've made my own utf-8 -> ucs converter, since I haven't found any\n without a copyright notice yesterday. It checks invalid and\n non-optimal UTF-8 sequences, as requested per Unicode 3.0.1 (or 3.1,\n I don't remember).\n\n- it works for Japanese (and I believe other \"full-width\" characters).\n\n- if MULTIBYTE is not defined, the code doesn't change from the\n committed version.\n\nThe not so good things :\n\n- I've made my own utf-8 -> ucs converter... It seems to work fine,\n but it's not tested well enough, it may not be so robust.\n\n- The printf( \"%*s\", width, utfstr) doesn't work as expected, so I had\n to fix by doing printf( \"%*s%s\", width - utfstrwidth, \"\", utfstr);\n\n- everything in #ifdef MULTIBYTE/#endif . 
Since there is no\n dependency on anything else (including the rest of the multibyte\n implementation - which I haven't had the time to look at in detail),\n it doesn't depend on it.\n\n- I get this (for each call to pg_mb_utfs_width) and I don't know why :\n\n print.c:265: warning: passing arg 1 of `pg_mb_utfs_width' discards\n qualifiers from pointer target type\n\n- If pg_mb_utfs_width finds an invalid UTF-8 string, it truncates it.\n I suppose that's what we want to do, but that's probably not the\n best place to do it.\n\nThe bad things :\n\n- If MULTIBYTE is defined, the strings must be in UTF-8, it doesn't\n check any encoding.\n\n- it is not integrated at all with the rest of the MB code.\n\n- it doesn't respect the indentation policy ;)\n\n\nTo do :\n\n- integrate better with the rest of the MB (client-side encoding), and\n with the rest of the code of print.c .\n\n- verify utf8-to-ucs robustness seriously.\n\n- make a visually nicer code :)\n\n- find better function names.\n\nAnd possibly :\n\n- consolidate the code, in order to remove the need for the #ifdef's\n in many places.\n\n- make it working with some others multiwidth-encoding (but then, I\n don't know anything about these encodings myself !).\n\n- check also utf-8 stream at input time, so that no invalid utf-8 is\n sent to the backend (at least from psql - the backend will need also\n a strict checking for UTF-8).\n\n- add nice UTF-8 borders as an option :)\n\n- add a command-line parameter to consider Unicode Ambiguous\n characters (characters which can be narrow or wide, depending on the\n terminal) wide characters, as it seems to be the case for CJK\n terminals (as per TR#11).\n\n- What else ?\n\n\nBTW, here is the table I had in the first mail. 
I would have shown the\none with all the weird Unicode characters, but my mutt is configured\nwith iso-8859-15, and I doubt many of you have utf-8 as a default yet\n:)\n\n+------+-------+--------+\n| lang | text | text |\n+------+-------+--------+\n| isl | álíta | áleit |\n| isl | álíta | álitum |\n| isl | álíta | álitið |\n| isl | maður | mann |\n| isl | maður | mönnum |\n| isl | maður | manna |\n| isl | óska | -aði |\n+------+-------+--------+\n\n\nThe files in attachment :\n- a diff for pgsql/src/bin/psql/print.c\n- a diff for pgsql/src/bin/psql/Makefile\n- two new files :\n pgsql/src/bin/psql/pg_mb_utf8.c\n pgsql/src/bin/psql/pg_mb_utf8.h\n\nHave fun !\n\nPatrice\n\n-- \nPatrice HÉDÉ ------------------------------- patrice à islande org -----\n -- Isn't it weird how scientists can imagine all the matter of the\nuniverse exploding out of a dot smaller than the head of a pin, but they\ncan't come up with a more evocative name for it than \"The Big Bang\" ?\n -- What would _you_ call the creation of the universe ?\n -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n------------------------------------------ http://www.islande.org/ -----", "msg_date": "Wed, 26 Sep 2001 20:23:33 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": true, "msg_subject": "Combining chars in psql (pre-patch)" }, { "msg_contents": "\nPatrice, do you have an updated patch you want applied to 7.3?\n\n---------------------------------------------------------------------------\n\nPatrice Hédé wrote:\n> Hi,\n> \n> I have been working a bit at a patch for that problem in psql. The\n> patch is far from being ready for inclusion or whatever, it's just for\n> comments...\n> \n> By the way, someone can tell me how to generate nice patches showing\n> the difference between one's version and the cvs code that has been\n> downloaded ? 
I'm new to this (I've only used cvs for personal projects\n> so far, and I don't need to send patches to myself ;) ).\n> \n> The good things in this patch :\n> \n> - it works for me :)\n> \n> - I've used Markus Kuhn's implementation of wcwidth.c : it is locale\n> independant, and is in the public domain. :) [if we keep it, I'll\n> have to tell him, though !]\n> \n> - No dependency on the local libc's UTF-8-awareness ;) [I've seen that\n> psql has no such dependancy, at least in print.c, so I haven't added\n> any]. Actually, the change is completely self-contained.\n> \n> - I've made my own utf-8 -> ucs converter, since I haven't found any\n> without a copyright notice yesterday. It checks invalid and\n> non-optimal UTF-8 sequences, as requested per Unicode 3.0.1 (or 3.1,\n> I don't remember).\n> \n> - it works for japanese (and I believe other \"full-width\" characters).\n> \n> - if MULTIBYTE is not defined, the code doesn't change from the\n> commited version.\n> \n> The not so good things :\n> \n> - I've made my own utf-8 -> ucs converter... It seems to work fine,\n> but it's not tested well enough, it may not be so robust.\n> \n> - The printf( \"%*s\", width, utfstr) doesn't work as expected, so I had\n> to fix by doing printf( \"%*s%s\", width - utfstrwidth, \"\", utfstr);\n> \n> - everything in #ifdef MULTIBYTE/#endif . 
Since they're is no\n> dependancy on anything else (including the rest of the multibyte\n> implementation - which I haven't had the time to look at in detail),\n> it doesn't depend on it.\n> \n> - I get this (for each call to pg_mb_utfs_width) and I don't know why :\n> \n> print.c:265: warning: passing arg 1 of `pg_mb_utfs_width' discards\n> qualifiers from pointer target type\n> \n> - If pg_mb_utfs_width finds an invalid UTF-8 string, it truncates it.\n> I suppose that's what we want to do, but that's probably not the\n> best place to do it.\n> \n> The bad things :\n> \n> - If MULTIBYTE is defined, the strings must be in UTF-8, it doesn't\n> check any encoding.\n> \n> - it is not integrated at all with the rest of the MB code.\n> \n> - it doesn't respect the indentation policy ;)\n> \n> \n> To do :\n> \n> - integrate better with the rest of the MB (client-side encoding), and\n> with the rest of the code of print.c .\n> \n> - verify utf8-to-ucs robustness seriously.\n> \n> - make a visually nicer code :)\n> \n> - find better function names.\n> \n> And possibly :\n> \n> - consolidate the code, in order to remove the need for the #ifdef's\n> in many places.\n> \n> - make it working with some others multiwidth-encoding (but then, I\n> don't know anything about these encodings myself !).\n> \n> - check also utf-8 stream at input time, so that no invalid utf-8 is\n> sent to the backend (at least from psql - the backend will need also\n> a strict checking for UTF-8).\n> \n> - add nice UTF-8 borders as an option :)\n> \n> - add a command-line parameter to consider Unicode Ambiguous\n> characters (characters which can be narrow or wide, depending on the\n> terminal) wide characters, as it seems to be the case for CJK\n> terminals (as per TR#11).\n> \n> - What else ?\n> \n> \n> BTW, here is the table I had in the first mail. 
I would have shown the\n> one with all the weird Unicode characters, but my mutt is configured\n> with iso-8859-15, and I doubt many of you have utf-8 as a default yet\n> :)\n> \n> +------+-------+--------+\n> | lang | text | text |\n> +------+-------+--------+\n> | isl | ?l?ta | ?leit |\n> | isl | ?l?ta | ?litum |\n> | isl | ?l?ta | ?liti? |\n> | isl | ma?ur | mann |\n> | isl | ma?ur | m?nnum |\n> | isl | ma?ur | manna |\n> | isl | ?ska | -a?i |\n> +------+-------+--------+\n> \n> \n> The files in attachment :\n> - a diff for pgsql/src/bin/psql/print.c\n> - a diff for pgsql/src/bin/psql/Makefile\n> - two new files :\n> pgsql/src/bin/psql/pg_mb_utf8.c\n> pgsql/src/bin/psql/pg_mb_utf8.h\n> \n> Have fun !\n> \n> Patrice\n> \n> -- \n> Patrice H?D? ------------------------------- patrice ? islande org -----\n> -- Isn't it weird how scientists can imagine all the matter of the\n> universe exploding out of a dot smaller than the head of a pin, but they\n> can't come up with a more evocative name for it than \"The Big Bang\" ?\n> -- What would _you_ call the creation of the universe ?\n> -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n> ------------------------------------------ http://www.islande.org/ -----\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 13:07:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Combining chars in psql (pre-patch)" }, { "msg_contents": "\n\"cvs diff -c\" shows the differences from your source and cvs.\n\nI am a little confused. 
What functionality does this add?\n\n---------------------------------------------------------------------------\n\nPatrice H�d� wrote:\n> Hi,\n> \n> I have been working a bit at a patch for that problem in psql. The\n> patch is far from being ready for inclusion or whatever, it's just for\n> comments...\n> \n> By the way, someone can tell me how to generate nice patches showing\n> the difference between one's version and the cvs code that has been\n> downloaded ? I'm new to this (I've only used cvs for personal projects\n> so far, and I don't need to send patches to myself ;) ).\n> \n> The good things in this patch :\n> \n> - it works for me :)\n> \n> - I've used Markus Kuhn's implementation of wcwidth.c : it is locale\n> independant, and is in the public domain. :) [if we keep it, I'll\n> have to tell him, though !]\n> \n> - No dependency on the local libc's UTF-8-awareness ;) [I've seen that\n> psql has no such dependancy, at least in print.c, so I haven't added\n> any]. Actually, the change is completely self-contained.\n> \n> - I've made my own utf-8 -> ucs converter, since I haven't found any\n> without a copyright notice yesterday. It checks invalid and\n> non-optimal UTF-8 sequences, as requested per Unicode 3.0.1 (or 3.1,\n> I don't remember).\n> \n> - it works for japanese (and I believe other \"full-width\" characters).\n> \n> - if MULTIBYTE is not defined, the code doesn't change from the\n> commited version.\n> \n> The not so good things :\n> \n> - I've made my own utf-8 -> ucs converter... It seems to work fine,\n> but it's not tested well enough, it may not be so robust.\n> \n> - The printf( \"%*s\", width, utfstr) doesn't work as expected, so I had\n> to fix by doing printf( \"%*s%s\", width - utfstrwidth, \"\", utfstr);\n> \n> - everything in #ifdef MULTIBYTE/#endif . 
Since they're is no\n> dependancy on anything else (including the rest of the multibyte\n> implementation - which I haven't had the time to look at in detail),\n> it doesn't depend on it.\n> \n> - I get this (for each call to pg_mb_utfs_width) and I don't know why :\n> \n> print.c:265: warning: passing arg 1 of `pg_mb_utfs_width' discards\n> qualifiers from pointer target type\n> \n> - If pg_mb_utfs_width finds an invalid UTF-8 string, it truncates it.\n> I suppose that's what we want to do, but that's probably not the\n> best place to do it.\n> \n> The bad things :\n> \n> - If MULTIBYTE is defined, the strings must be in UTF-8, it doesn't\n> check any encoding.\n> \n> - it is not integrated at all with the rest of the MB code.\n> \n> - it doesn't respect the indentation policy ;)\n> \n> \n> To do :\n> \n> - integrate better with the rest of the MB (client-side encoding), and\n> with the rest of the code of print.c .\n> \n> - verify utf8-to-ucs robustness seriously.\n> \n> - make a visually nicer code :)\n> \n> - find better function names.\n> \n> And possibly :\n> \n> - consolidate the code, in order to remove the need for the #ifdef's\n> in many places.\n> \n> - make it working with some others multiwidth-encoding (but then, I\n> don't know anything about these encodings myself !).\n> \n> - check also utf-8 stream at input time, so that no invalid utf-8 is\n> sent to the backend (at least from psql - the backend will need also\n> a strict checking for UTF-8).\n> \n> - add nice UTF-8 borders as an option :)\n> \n> - add a command-line parameter to consider Unicode Ambiguous\n> characters (characters which can be narrow or wide, depending on the\n> terminal) wide characters, as it seems to be the case for CJK\n> terminals (as per TR#11).\n> \n> - What else ?\n> \n> \n> BTW, here is the table I had in the first mail. 
I would have shown the\n> one with all the weird Unicode characters, but my mutt is configured\n> with iso-8859-15, and I doubt many of you have utf-8 as a default yet\n> :)\n> \n> +------+-------+--------+\n> | lang | text | text |\n> +------+-------+--------+\n> | isl | ?l?ta | ?leit |\n> | isl | ?l?ta | ?litum |\n> | isl | ?l?ta | ?liti? |\n> | isl | ma?ur | mann |\n> | isl | ma?ur | m?nnum |\n> | isl | ma?ur | manna |\n> | isl | ?ska | -a?i |\n> +------+-------+--------+\n> \n> \n> The files in attachment :\n> - a diff for pgsql/src/bin/psql/print.c\n> - a diff for pgsql/src/bin/psql/Makefile\n> - two new files :\n> pgsql/src/bin/psql/pg_mb_utf8.c\n> pgsql/src/bin/psql/pg_mb_utf8.h\n> \n> Have fun !\n> \n> Patrice\n> \n> -- \n> Patrice H?D? ------------------------------- patrice ? islande org -----\n> -- Isn't it weird how scientists can imagine all the matter of the\n> universe exploding out of a dot smaller than the head of a pin, but they\n> can't come up with a more evocative name for it than \"The Big Bang\" ?\n> -- What would _you_ call the creation of the universe ?\n> -- \"The HORRENDOUS SPACE KABLOOIE !\" - Calvin and Hobbes\n> ------------------------------------------ http://www.islande.org/ -----\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Mar 2002 16:16:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Combining chars in psql (pre-patch)" } ]
[ { "msg_contents": "\nI'm trying to use an integer from a table to add/subtract time in months.\nIOW:\n\ncreate table foo(nummonths int);\n\nselect now() - nummonths months;\n\nSo far nothing I've tried will work - short of a function. Is there a\nway to do this?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 16:30:52 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "casting for dates" }, { "msg_contents": "Will\n\nSELECT now() - 'nummonths months'::interval ;\n\nwork?\n\n\n----- Original Message -----\nFrom: \"Vince Vielhaber\" <vev@michvhf.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Wednesday, September 26, 2001 4:30 PM\nSubject: [HACKERS] casting for dates\n\n\n>\n> I'm trying to use an integer from a table to add/subtract time in months.\n> IOW:\n>\n> create table foo(nummonths int);\n>\n> select now() - nummonths months;\n>\n> So far nothing I've tried will work - short of a function. 
Is there a\n> way to do this?\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 26 Sep 2001 16:52:16 -0400", "msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>", "msg_from_op": false, "msg_subject": "Re: casting for dates" }, { "msg_contents": "On Thu, 2001-09-27 at 08:30, Vince Vielhaber wrote:\n> \n> I'm trying to use an integer from a table to add/subtract time in months.\n> IOW:\n> \n> create table foo(nummonths int);\n> \n> select now() - nummonths months;\n\nnewsroom=# select now() - interval( text(3) || ' months');\n ?column? 
\n------------------------\n 2001-06-27 08:56:27+12\n(1 row)\n\n\nCrude, but hey: it works :-)\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7217 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n\n", "msg_date": "27 Sep 2001 08:57:19 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: casting for dates" }, { "msg_contents": "On 27 Sep 2001, Andrew McMillan wrote:\n\n> On Thu, 2001-09-27 at 08:30, Vince Vielhaber wrote:\n> >\n> > I'm trying to use an integer from a table to add/subtract time in months.\n> > IOW:\n> >\n> > create table foo(nummonths int);\n> >\n> > select now() - nummonths months;\n>\n> newsroom=# select now() - interval( text(3) || ' months');\n> ?column?\n> ------------------------\n> 2001-06-27 08:56:27+12\n> (1 row)\n>\n>\n> Crude, but hey: it works :-)\n\nIt certainly does! 
Thanks!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 17:19:27 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: casting for dates" }, { "msg_contents": "On Wed, 26 Sep 2001, Mitch Vincent wrote:\n\n> Will\n>\n> SELECT now() - 'nummonths months'::interval ;\n>\n> work?\n\nUnfortunately no.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 26 Sep 2001 21:27:49 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: casting for dates" } ]
[ { "msg_contents": "Haven't tried yet, but perhaps casting nummonths to an interval datatype \nwould do the trick.\n\n-r\n\nAt 04:30 PM 9/26/01 -0400, Vince Vielhaber wrote:\n\n\n>I'm trying to use an integer from a table to add/subtract time in months.\n>IOW:\n>\n>create table foo(nummonths int);\n>\n>select now() - nummonths months;\n>\n>So far nothing I've tried will work - short of a function. Is there a\n>way to do this?\n>\n>Vince.\n>--\n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>==========================================================================\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Wed, 26 Sep 2001 17:14:19 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: casting for dates" } ]
[ { "msg_contents": "Short! :-)\n\nPostgreSQL version: 7.1.3\n\nI do a dump of a database which has some views, rules, and different \npermissions on each view.\n\nThe dump puts first the permissions and after that the view creation, so when \nI import the dump back to the server (or another server) I get lots of errors, \nand have to change the permissions by hand.\n\nHas this already been reported?\n\nsaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Wed, 26 Sep 2001 19:43:44 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "pg_dump bug" }, { "msg_contents": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar> writes:\n> PostgreSQL version: 7.1.3\n\n> The dump puts first the permissions and after that the view creation,\n\nAre you certain you are using the 7.1.3 version of pg_dump, and not\nsomething older? This was fixed in 7.1.3 according to the CVS logs...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2001 20:26:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug " } ]
[ { "msg_contents": "Hi,\nAnyone tried fragmenting tables into multiple sub tables \ntransparently through Postgres rewrite rules ? I'm having \na table with 200,000 rows with varchar columns and noticed \nthat updates,inserts take a lot longer time compared to a \nfew rows in the same table. I have a lot of memory in my \nmachine like 2Gig and 600,000 buffers. \n\nI really appreciate any pointers.\n\nKarthik Guruswamy\n", "msg_date": "26 Sep 2001 17:56:14 -0700", "msg_from": "karthikg@yahoo.com (Karthik Guruswamy)", "msg_from_op": true, "msg_subject": "Fragmenting tables in postgres" }, { "msg_contents": "karthikg@yahoo.com (Karthik Guruswamy) writes:\n> Anyone tried fragmenting tables into multiple sub tables \n> transparently through Postgres rewrite rules ? I'm having \n> a table with 200,000 rows with varchar columns and noticed \n> that updates,inserts take a lot longer time compared to a \n> few rows in the same table.\n\nThat's not a very big table ... there's no reason for inserts to\ntake a long time, and not much reason for updates to take long either\nif you have appropriate indexes to help find the rows to be updated.\nHave you VACUUM ANALYZEd this table recently (or ever?) Have you\ntried EXPLAINing the queries to see if they use indexes?\n\n> I have a lot of memory in my \n> machine like 2Gig and 600,000 buffers. \n\nYou mean you set -B to 600000? That's not a bright idea. A few\nthousand will be plenty, and will probably perform lots better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2001 13:37:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fragmenting tables in postgres " }, { "msg_contents": "> karthikg@yahoo.com (Karthik Guruswamy) writes:\n> > Anyone tried fragmenting tables into multiple sub tables \n> > transparently through Postgres rewrite rules ? 
I'm having \n> > a table with 200,000 rows with varchar columns and noticed \n> > that updates,inserts take a lot longer time compared to a \n> > few rows in the same table.\n> \n> That's not a very big table ... there's no reason for inserts to\n> take a long time, and not much reason for updates to take long either\n> if you have appropriate indexes to help find the rows to be updated.\n> Have you VACUUM ANALYZEd this table recently (or ever?) Have you\n> tried EXPLAINing the queries to see if they use indexes?\n> \n> > I have a lot of memory in my \n> > machine like 2Gig and 600,000 buffers. \n> \n> You mean you set -B to 600000? That's not a bright idea. A few\n> thousand will be plenty, and will probably perform lots better.\n\nThis is a good question. When does too many buffers become a\nperformance problem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 15:33:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fragmenting tables in postgres" } ]
[ { "msg_contents": "Itried everywhere.... can't get JDBC to update. I can't build 7.13 J2EE \nJDBC, I get build errors.\n\nThe driver I have bellow, I can't get to update using bellow code....\n\nSuggestions please?\nVic\n\n-------- Original Message --------\nSubject: Re: JDBC update wont, plz help.\nDate: Wed, 26 Sep 2001 10:24:30 -0700\nFrom: Vic Cekvneich <vic@proj.com>\nOrganization: Hub.Org Networking Services (http://www.hub.org)\nNewsgroups: comp.databases.postgresql.general\nReferences: <3BB1FD68.6000700@proJ.com>\n\nI downloaded the 7.13 src and did a JDBC build using J2EE 13, to get the\nsqlX package.\nThe build failed with errors.\n\nIs there someone who can make a build with sqlX java (must have J2EE SDK\nenviroment) ?\nHelp...... please....\nVic\n\nVic Cekvenich wrote:\n > I am dead in the water....\n >\n > > I used a driver from http://jdbc.fastcrypt.com/ , the latest driver I\n >\n >>could find w/ JDK1.3 and a simple CachedRowSet from SUN Devlopers\n >>connection.\n >>\n >>It will not update via AcceptChages.\n >>Error is This methos is not yet implementd.\n >>\n >>Can someone help me update a tabe using JDBC plz? Please cc vic@proj.com.\n >>\n >>TIA,\n >>Vic\n >>\n >>PS:\n >>My source code:\n >>\n >>import sun.jdbc.rowset.*;\n >>// get it from Java Developer's Connection\n >>// or look at Appendix Download\n >>import java.sql.*;\n >>import org.postgresql.*;\n >>\n >>public class pimRDB {\n >>public static void main (String args[])\n >>{\n >>try\n >>{\n >>\n >>CachedRowSet crs = new CachedRowSet();\n >>// row set is better than ResultSet,\n >>// configure driver, read more in book JDBC Database Access :\n >>Class.forName(\"org.postgresql.Driver\");\n >>crs.setUrl(\"jdbc:postgresql://localhost/template1\");\n >>// yours should not be localhost, but\n >>//an IP address of DBS server. 
The 5432 is the IP port\n >>crs.setUsername(\"sysdba\");\n >>\n >>crs.setPassword(\"sysdba\");\n >>\n >>//select\n >>crs.setCommand(\"select NAM from NAM where PK = ?\");\n >>crs.setTableName(\"NAM\");\n >>// use your field names\n >>crs.setInt(1, 8);\n >>\n >>// pass the first argument to the select command to\n >>// retrieve PK id of Name #8, the 8th name entered\n >>crs.execute();\n >>crs.next();\n >>\n >>//get the field value\n >>String nam = crs.getString(\"NAM\");\n >>System.out.println(nam);\n >>// repeat\n >>crs.updateString(\"NAM\",\"Vic\");\ncrs.updateRow();\n >>crs.acceptChanges();\n >>//select\n >>crs.setCommand(\"select NAM from NAM where PK = ?\");\n >>// use your field names\n >>crs.setInt(1, 8);\n >>// pass the first argument to the select command to\n >>// retrieve PK id of Name #8, the 8th name entered\n >>crs.execute();\n >>crs.next();\n >>\n >>//get the field value\n >>nam = crs.getString(\"NAM\");\n >>System.out.println(nam);\n >>\n >>} // try\n >>catch (Exception e) {System.out.println(e);} // catch\n >>}// main\n >>}// class\n >>//Optional: Make the program take an argument\n >>//of the PK for the record it should retrieve\n >>\n >>\n >>\n >\n >\n >\n > ---------------\n >\n >\n >\n >\n >\n > ---------------------------(end of broadcast)---------------------------\n > TIP 2: you can get off all lists at once with the unregister command\n > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n >\n\n\n", "msg_date": "Wed, 26 Sep 2001 19:23:26 -0700", "msg_from": "Vic Cekvneich <vic@proj.com>", "msg_from_op": true, "msg_subject": "[Fwd: Re: JDBC update wont, plz help.]" }, { "msg_contents": "On Wed, 26 Sep 2001 19:23:26 -0700, you wrote:\n>Itried everywhere.... can't get JDBC to update.\n\nThe right place for this question is the\npgsql-jdbc@postgresql.org mailing list.\n\nI think you have a better chance of getting help when you post a\nbrief description of the problem, instead of the full history of\npostings elsewhere. 
After briefly reading through your posting I\nhaven't a clue what the problem is. You'd better separate your\nproblem with the functionality of the driver from build\nproblems.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Thu, 27 Sep 2001 18:17:06 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: JDBC update wont, plz help.]" } ]
[ { "msg_contents": "I did some benchmarking with/without multibyte support using current.\n\n(1) regression test\n\nWith multibyte support:\n9.52user 3.38system 0:59.27elapsed 21%CPU (0avgtext+0avgdata 0maxresident)k\n\nWithout multibyte support:\n8.97user 4.84system 1:00.85elapsed 22%CPU (0avgtext+0avgdata 0maxresident)k\n\n(2) pgbench\n\nWith multibyte support(first column is the concurrent user, second is\nthe TPS):\n\n1 46.004932\n2 70.848123\n4 88.147471\n8 90.472970\n16 96.620166\n32 95.947363\n64 92.718780\n128 61.725883\n\nWitout multibyte support:\n1 52.668169\n2 68.132654\n4 79.956663\n8 81.133516\n16 96.618124\n32 92.283645\n64 86.936559\n128 87.584099\n\nfor your convenience, a graph is attached(bench.png).\n\n(3) testing environment\n\nLinux kernel 2.2.17\nPIII 750MHz, 256MB RAM, IDE disk\nconfigure option: configure --enable-multibyte=EUC_JP or configure\npostgresql.conf settings(other than default):\n\t\tmax_connections = 128\n\t\tshared_buffers = 1024\n\t\twal_sync_method = open_sync\n\t\tdeadlock_timeout = 100000\npgbench options:\n\t-s 2 (initialization)\n\t-t 10 (benchmarking)\n--\nTatsuo Ishii", "msg_date": "Thu, 27 Sep 2001 14:22:07 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "multibyte performance" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> I did some benchmarking with/without multibyte support using current.\n\n...\n \n> (2) pgbench\n> \n> With multibyte support(first column is the concurrent user, second is\n> the TPS):\n...\n> 32 95.947363\n> 64 92.718780\n> 128 61.725883\n> \n> Witout multibyte support:\n...\n> 32 92.283645\n> 64 86.936559\n> 128 87.584099\n\nDo you have any theory why multibyte passes non-mb at 128 ?\n\nSome subtle timing thing perhaps (or just bad luck for non-mb at 128)?\n\n\n-----------\nHannu\n", "msg_date": "Thu, 27 Sep 2001 13:00:41 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: multibyte performance" }, {
"msg_contents": "On Thu, Sep 27, 2001 at 02:22:07PM +0900, Tatsuo Ishii wrote:\n> I did some benchmarking with/without multibyte support using current.\n> \n> (1) regression test\n> \n> With multibyte support:\n> 9.52user 3.38system 0:59.27elapsed 21%CPU (0avgtext+0avgdata 0maxresident)k\n> \n> Without multibyte support:\n> 8.97user 4.84system 1:00.85elapsed 22%CPU (0avgtext+0avgdata 0maxresident)k\n\n It's nice.\n\n Can you try it for old 7.1? I should like see some improvement between\nrelease:-)\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 27 Sep 2001 10:56:04 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: multibyte performance" }, { "msg_contents": "> It's nice.\n\n:-)\n\n> Can you try it for old 7.1? I should like see some improvement between\n> release:-)\n\nNot sure if it's meaningfull since new regression test cases might be\nadded for current?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 27 Sep 2001 18:19:32 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte performance" }, { "msg_contents": "> > With multibyte support(first column is the concurrent user, second is\n> > the TPS):\n> ...\n> > 32 95.947363\n> > 64 92.718780\n> > 128 61.725883\n> > \n> > Witout multibyte support:\n> ...\n> > 32 92.283645\n> > 64 86.936559\n> > 128 87.584099\n> \n> Do you have any theory why multibyte passes non-mb at 128 ?\n> \n> Some subtle timing thing perhaps (or just bad luck for non-mb at 128)?\n\nMay be or may not be. I was anxious about the load module size and\nthought it might stress the memory. So while running pgbench I checked\nthe memory usage using vmstat. 
However it showed no excess page\nin/page out...\n--\nTatsuo Ishii\n", "msg_date": "Thu, 27 Sep 2001 21:30:38 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte performance" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I did some benchmarking with/without multibyte support using current.\n> (1) regression test\n> (2) pgbench\n\npgbench unfortunately seems quite irrelevant to this issue, since it\nperforms no textual operations whatsoever. It'd be interesting to\nmodify pgbench so that it updates the \"filler\" column somehow on each\nupdate (perhaps store a text copy of the new balance there), and then\nrepeat the tests.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2001 10:09:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multibyte performance " }, { "msg_contents": "> pgbench unfortunately seems quite irrelevant to this issue, since it\n> performs no textual operations whatsoever.\n\nYup.\n\n> It'd be interesting to\n> modify pgbench so that it updates the \"filler\" column somehow on each\n> update (perhaps store a text copy of the new balance there), and then\n> repeat the tests.\n\nMaybe. I'm not sure if it would show significant differences though.\n\nAnyway, what I'm interested in include:\n\no regexp/like/ilike operations\no very long text handling\n\nI'll come up with more testings..\n--\nTatsuo Ishii\n", "msg_date": "Fri, 28 Sep 2001 10:05:24 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte performance " }, { "msg_contents": "> > I did some benchmarking with/without multibyte support using current.\n> > (1) regression test\n> > (2) pgbench\n> \n> pgbench unfortunately seems quite irrelevant to this issue, since it\n> performs no textual operations whatsoever. 
It'd be interesting to\n> modify pgbench so that it updates the \"filler\" column somehow on each\n> update (perhaps store a text copy of the new balance there), and then\n> repeat the tests.\n\nOk. Here is the result:\n\nWithout multibyte:\n1 50.190473\n2 65.943052\n4 74.908752\n8 62.589973\n16 87.546988\n32 94.448773\n64 88.019452\n128 64.107839\n\nWith multibyte:\n1 47.473237\n2 61.435628\n4 83.047684\n8 95.556846\n16 92.157352\n32 95.879001\n64 91.486652\n128 66.926568\n\na graph is attached.", "msg_date": "Fri, 28 Sep 2001 15:27:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: multibyte performance " } ]
[ { "msg_contents": "Haller Christoph wrote:\n\n> My first message:\n> In a C application I want to run several\n> insert commands within a chained transaction\n> (for faster execution).\n> >From time to time there will be an insert command\n> causing an\n> ERROR: Cannot insert a duplicate key into a unique index\n>\n> As a result, the whole transaction is aborted and all\n> the previous inserts are lost.\n> Is there any way to preserve the data\n> except working with \"autocommit\" ?\n> What I have in mind particularly is something like\n> \"Do not abort on duplicate key error\".\n\nSimply select by the key you want to enter. If you get 100 an insert is ok,\notherwise do an update. Oracle has a feature called 'insert or update' which\nfollows this strategy. There also was some talk on this list about\nimplementing this, but I don't remember the conclusion.\n\nBTW: I strongly recommend staying away from autocommit. You cannot\ncontrol/know whether/when you started a new transaction.\n\nChristof\n\nPS: I would love to have nested transactions, too. But no time to spare ...\nPerhaps somebody does this for 7.3?\n\n\n", "msg_date": "Thu, 27 Sep 2001 12:47:58 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Re: Abort transaction on duplicate key error" }, { "msg_contents": "Hi all, \nSorry for bothering you with my stuff for the second time \nbut I haven't got any answer within two days and the problem \nappears fundamental, at least to me. \nI have a C application running to deal with meteorological data \nlike temperature, precipitation, wind speed, wind direction, ... \nAnd I mean loads of data like several thousand sets within every \nten minutes. \n>From time to time it happens the transmitters have delivered wrong data, \nso they send the sets again to be taken as correction. 
\nThe idea is to create a unique index on the timestamp, the location id \nand the measurement id, then when receiving a duplicate key error \nmove on to an update command on that specific row. \nBut, within PostgreSQL this strategy does not work any longer within \na chained transaction, because the duplicate key error leads to \n'abort the whole transaction'. \nWhat I can do is change from chained transaction to unchained transaction, \nbut what I have read in the mailing list so far, the commit operation \nrequires loads of cpu time, and I do not have time for this when \nprocessing thousands of sets. \nI am wondering now whether there is a fundamental design error in \nmy strategy. \nAny ideas, suggestions highly appreciated and thanks for reading so far. \nRegards, Christoph \n\nMy first message:\nIn a C application I want to run several \ninsert commands within a chained transaction \n(for faster execution). \n>From time to time there will be an insert command \ncausing an \nERROR: Cannot insert a duplicate key into a unique index\n\nAs a result, the whole transaction is aborted and all \nthe previous inserts are lost. \nIs there any way to preserve the data \nexcept working with \"autocommit\" ? \nWhat I have in mind particularly is something like \n\"Do not abort on duplicate key error\".\n", "msg_date": "Thu, 27 Sep 2001 11:27:09 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Abort transaction on duplicate key error " }, { "msg_contents": "Thanks a lot. Now that I've read your message, \nI wonder why I was asking something trivial. 
\nChristoph\n\n> > In a C application I want to run several\n> > insert commands within a chained transaction\n> > (for faster execution).\n> > >From time to time there will be an insert command\n> > causing an\n> > ERROR: Cannot insert a duplicate key into a unique index\n> >\n> > As a result, the whole transaction is aborted and all\n> > the previous inserts are lost.\n> > Is there any way to preserve the data\n> > except working with \"autocommit\" ?\n> > What I have in mind particularly is something like\n> > \"Do not abort on duplicate key error\".\n> \n> Simply select by the key you want to enter. If you get 100 an insert is ok,\n> otherwise do an update. Oracle has a feature called 'insert or update' which\n> follows this strategy. There also was some talk on this list about\n> implementing this, but I don't remember the conclusion.\n> \n> BTW: I strongly recommend staying away from autocommit. You cannot\n> control/know whether/when you started a new transaction.\n> \n> Christof\n> \n> PS: I would love to have nested transactions, too. 
But no time to spare ...\n> Perhaps somebody does this for 7.3?\n> \n\n", "msg_date": "Thu, 27 Sep 2001 12:59:47 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: Abort transaction on duplicate key error" }, { "msg_contents": "Haller,\n\nThe way I have handled this in the past is to attempt the following \ninsert, followed by an update if the insert doesn't insert any rows:\n\ninsert into foo (fooPK, foo2)\nselect 'valuePK', 'value2'\nwhere not exists\n (select 'x' from foo\n where fooPK = 'valuePK')\n\nif number of rows inserted = 0, then the row already exists so do an update\n\nupdate foo set foo2 = 'value2'\nwhere fooPK = 'valuePK'\n\nSince I don't know what client interface you are using (java, perl, C), \nI can't give you exact code for this, but the above should be easily \nimplemented in any language.\n\nthanks,\n--Barry\n\n\n\nHaller Christoph wrote:\n\n> Hi all, \n> Sorry for bothering you with my stuff for the second time \n> but I haven't got any answer within two days and the problem \n> appears fundamental, at least to me. \n> I have a C application running to deal with meteorological data \n> like temperature, precipitation, wind speed, wind direction, ... \n> And I mean loads of data like several thousand sets within every \n> ten minutes. \n>>From time to time it happens the transmitters have delivered wrong data, \n> so they send the sets again to be taken as correction. \n> The idea is to create a unique index on the timestamp, the location id \n> and the measurement id, then when receiving a duplicate key error \n> move on to an update command on that specific row. \n> But, within PostgreSQL this strategy does not work any longer within \n> a chained transaction, because the duplicate key error leads to \n> 'abort the whole transaction'. 
\n> What I can do is change from chained transaction to unchained transaction, \n> but what I have read in the mailing list so far, the commit operation \n> requires loads of cpu time, and I do not have time for this when \n> processing thousands of sets. \n> I am wondering now whether there is a fundamental design error in \n> my strategy. \n> Any ideas, suggestions highly appreciated and thanks for reading so far. \n> Regards, Christoph \n> \n> My first message:\n> In a C application I want to run several \n> insert commands within a chained transaction \n> (for faster execution). \n>>From time to time there will be an insert command \n> causing an \n> ERROR: Cannot insert a duplicate key into a unique index\n> \n> As a result, the whole transaction is aborted and all \n> the previous inserts are lost. \n> Is there any way to preserve the data \n> except working with \"autocommit\" ? \n> What I have in mind particularly is something like \n> \"Do not abort on duplicate key error\".\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n", "msg_date": "Thu, 27 Sep 2001 10:04:07 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Abort transaction on duplicate key error" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 24 September 2001 00:58\n> To: jm.poure@freesurf.fr\n> Cc: pgsql-odbc@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: Re: [ODBC] [HACKERS] UTF-8 support\n> \n> Currently no. But it would be easy to implement such a \n> function. What comes in mind is:\n> \n> pg_available_encodings([INTEGER how]) RETURNS setof TEXT\n> \n> where how is\n> \n> 0(or omitted): returns all available encodings\n> 1: returns encodings in backend\n> 2: returns encodings in frontend\n> \n> Comments?\n\nWould certainly be useful to pgAdmin if someone could implement this...\n\nRegards, Dave.\n", "msg_date": "Thu, 27 Sep 2001 16:00:41 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UTF-8 support" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp] \n> Sent: 24 September 2001 08:13\n> To: sa_mokho@alcor.concordia.ca\n> Cc: jm.poure@freesurf.fr; pgsql-odbc@postgresql.org; \n> pgsql-hackers@postgresql.org\n> Subject: Re: [ODBC] [HACKERS] UTF-8 support\n> \n> \n> > Which ones belong to the backend and which ones to the frontend? Or \n> > even more: which ones belong to the backend, which ones to the \n> > frontend #1, which ones to the frontend #2, etc...\n> > \n> > For examle, I have two fronends:\n> > \n> > FE1: UNICODE, WIN1251\n> > FE2: KOI8, UNICODE\n> > BE: UNICODE, LATIN1, ALT\n> > \n> > Which ones SELECT pg_available_encodings(); will show?\n> > The ones of the BE and the FE making the request?\n> > \n> > In case I need to communicate with BE using one common encoding \n> > between the two if it is available.\n> \n> I'm confused.\n> \n> What do you mean by BE? BE's encoding is determined by the \n> database that FE chooses. If you just want to know what kind \n> encodings are there in the database, why not use:\n> \n> SELECT DISTINCT ON (encoding) pg_encoding_to_char(encoding) \n> AS encoding FROM pg_database;\n> \n> Also, FE's encoding could be any valid encoding that FE \n> chooses, i.e. it' not BE's choice.\n> \n> Can you show me more concrete examples showing what you \n> actually want to do?\n> \n> >> 3) Is there a way to query available encodings in PostgreSQL for \n> >> display in pgAdmin.\n> >\n> > Could pgAdmin display multibyte chars in the first place ?\n> \n> Wao. If pgAdmin could not display multibyte chars, all \n> discussions here are meaningless:-<\n\nApparently it can, I just don't know how to do it yet! 
From what\nJean-Michel's said, it's just a case of kicking VB6 in the right part of the\nanatomy...\n\nRegards, Dave.\n", "msg_date": "Thu, 27 Sep 2001 17:18:13 +0100", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UTF-8 support" } ]
[ { "msg_contents": "I'm considering moving s_lock.c from backend/storage/buffer, where it\nseems to make no sense, into backend/storage/lmgr which seems like a\nmore logical place for it. However, the only way to do it that I know\nof is to \"cvs remove\" in the one directory and then \"cvs add\" a new copy\nin the other. That would lose the CVS log history of the file, or at\nleast make it a lot harder to find. Is there a way to attach the past\ncommit history to the file in its new location? Should I just do it and\nnot worry about the history? Should I leave well enough alone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2001 12:43:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Moving CVS files around?" }, { "msg_contents": "> I'm considering moving s_lock.c from backend/storage/buffer, where it\n> seems to make no sense, into backend/storage/lmgr which seems like a\n> more logical place for it. However, the only way to do it that I know\n> of is to \"cvs remove\" in the one directory and then \"cvs add\" a new copy\n> in the other. That would lose the CVS log history of the file, or at\n> least make it a lot harder to find. Is there a way to attach the past\n> commit history to the file in its new location? Should I just do it and\n> not worry about the history? Should I leave well enough alone?\n\nI vote you just move it. It never made sense in /buffer to me either. \nI always looked for it in lmgr first.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Sep 2001 14:47:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Moving CVS files around?" 
}, { "msg_contents": "When moving files in CVS I usually use the cvs add/cvs remove in the same\ncommit with -m something like 'Changed location/name for file xxx to yyy'.\n\nThat way you have trace in the log about what happened to a file as both\nold/new name/location.\n\nMaybe not the nicest way but it usually works fine and I haven't found a\nbetter way yet.\n\nIMHO you should just do it and not worry about the history. If someone wants\nto read it they will have to issue a few more commands and as time\nprogresses there are usually less and less interest in the old history. It's\nbetter than start fiddling around with CVS-files.\n\n/Stefan\n\nTom Lane wrote:\n\n> I'm considering moving s_lock.c from backend/storage/buffer, where it\n> seems to make no sense, into backend/storage/lmgr which seems like a\n> more logical place for it. However, the only way to do it that I know\n> of is to \"cvs remove\" in the one directory and then \"cvs add\" a new copy\n> in the other. That would lose the CVS log history of the file, or at\n> least make it a lot harder to find. Is there a way to attach the past\n> commit history to the file in its new location? Should I just do it and\n> not worry about the history? Should I leave well enough alone?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Mon, 01 Oct 2001 14:12:29 +0200", "msg_from": "Stefan Rindeskar <sr@globecom.net>", "msg_from_op": false, "msg_subject": "Re: Moving CVS files around?" }, { "msg_contents": "\nI can move it manually on the backend ... 
let me know when/if you want it\ndone ...\n\n\n\nOn Mon, 1 Oct 2001, Stefan Rindeskar wrote:\n\n> When moving files in CVS I usually use the cvs add/cvs remove in the same\n> commit with -m something like 'Changed location/name for file xxx to yyy'.\n>\n> That way you have trace in the log about what happened to a file as both\n> old/new name/location.\n>\n> Maybe not the nicest way but it usually works fine and I haven't found a\n> better way yet.\n>\n> IMHO you should just do it and not worry about the history. If someone wants\n> to read it they will have to issue a few more commands and as time\n> progresses there are usually less and less interest in the old history. It's\n> better than start fiddling around with CVS-files.\n>\n> /Stefan\n>\n> Tom Lane wrote:\n>\n> > I'm considering moving s_lock.c from backend/storage/buffer, where it\n> > seems to make no sense, into backend/storage/lmgr which seems like a\n> > more logical place for it. However, the only way to do it that I know\n> > of is to \"cvs remove\" in the one directory and then \"cvs add\" a new copy\n> > in the other. That would lose the CVS log history of the file, or at\n> > least make it a lot harder to find. Is there a way to attach the past\n> > commit history to the file in its new location? Should I just do it and\n> > not worry about the history? Should I leave well enough alone?\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 1 Oct 2001 10:12:09 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Moving CVS files around?" } ]
[ { "msg_contents": "I've committed (most of?) the changes to implement a timestamp without\ntime zone type. To help with backward compatibility and upgrading, I've\nmade the *default* timestamp the one with time zone. That is, if you\nwant a timestamp without time zone you will have to fully specify it\nverbatim. SQL99 calls for \"without time zone\" to be the default, and we\ncan make it so in the next release.\n\nI have not made any changes to pg_dump or psql or any other utility or\nclient-side driver to recognize the new types or to do conversions to\n\"SQL99 names\", but these should be done before we exit beta.\n\nI can not actually *check* my changes to the cvs tree, since\ncvsup.postgresql.org apparently does not yet see the changes to\ncvs.postgresql.org. This is pretty annoying from my pov; hopefully the\ngain in distributing the load offsets the problems in syncing from a\nslave machine rather than \"truth\".\n\nDetails from the cvs log follow.\n\n - Thomas\n\nMeasure the current transaction time to milliseconds.\nDefine a new function, GetCurrentTransactionStartTimeUsec() to get the\ntime\n to this precision.\nAllow now() and timestamp 'now' to use this higher precision result so\n we now have fractional seconds in this \"constant\".\nAdd timestamp without time zone type.\nMove previous timestamp type to timestamp with time zone.\nAccept another ISO variant for date/time values: yyyy-mm-ddThh:mm:ss\n (note the \"T\" separating the day from hours information).\nRemove 'current' from date/time types; convert to 'now' in input.\nSeparate time and timetz regression tests.\nSeparate timestamp and timestamptz regression test.\n", "msg_date": "Fri, 28 Sep 2001 08:24:53 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Big update for timestamp etc." } ]
[ { "msg_contents": "Please apply attached patch to current CVS tree.\n\nChanges:\n\n 1. gist__int_ops is now without lossy\n 2. added sort entry in picksplit\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Fri, 28 Sep 2001 14:10:05 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "path for contrib/intarray (current CVS)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Please apply attached patch to current CVS tree.\n> \n> Changes:\n> \n> 1. gist__int_ops is now without lossy\n> 2. added sort entry in picksplit\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 14:27:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: path for contrib/intarray (current CVS)" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n> Please apply attached patch to current CVS tree.\n> \n> Changes:\n> \n> 1. gist__int_ops is now without lossy\n> 2. added sort entry in picksplit\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 14:26:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: path for contrib/intarray (current CVS)" } ]
[ { "msg_contents": "Can someone else run this and confirm the results against the tip\nof the CVS repository?\n\nI'm trying to trace this bug (help welcome too).\n\n(it was hidden in a trigger and a pain to narrow to this point)\n-Brad\n\n-----\n\ndrop function mul1(int4,int4);\ndrop function mul2(int4,int4);\ndrop function callmul1();\ndrop function callmul2a();\ndrop function callmul2b();\ncreate function mul1(int4,int4) returns int8 as 'select int8($1) * int8($2)' language 'sql';\ncreate function mul2(int4,int4) returns int8 as 'select int8($1) * 4294967296::int8 + int8($2)' language 'sql';\ncreate function callmul1() returns int8 as 'return plpy.execute(\"select mul1(6,7) as x\")[0][\"x\"]' language 'plpython';\ncreate function callmul2a() returns int8 as 'select mul2(7,8)' language 'sql';\ncreate function callmul2b() returns int8 as 'return plpy.execute(\"select mul2(7,8) as x\")[0][\"x\"]' language 'plpython';\nselect mul1(3,4);\nselect callmul1();\nselect mul2(5,6);\nselect callmul2a();\nselect callmul2b();\n\nResults:\n\n...\n callmul1 \n----------\n 42\n(1 row)\n\n mul2 \n-------------\n 21474836486\n(1 row)\n\n callmul2a \n-------------\n 30064771080\n(1 row)\n\npsql:bug:14: pqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\npsql:bug:14: connection to server was lost\n", "msg_date": "Fri, 28 Sep 2001 12:05:20 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": true, "msg_subject": "Plpython bug with int8?" 
}, { "msg_contents": "Replying to my own method.\n\nIn src/pl/plpython/plpython.c around line 1381, PLy_input_datum_func2\nerroneously assigns PLyInt_FromString to handle int8 types.\n\nBecause PLyInt_FromString uses strtol at line 1461, it fails as\nsoon as you pass it a parameter exceed the bounds of an int4.\n\nThere are two problems to solve here:\n\n1) All of the conversion functions that return NULL ( line 1463 as\nan example, page up and down from there) will cause the backend to terminate\nabnormally. I'm not sure if this is considered a correct behavior,\nor if elog(ERROR, ...) is a better approach. Comments?\n\n2) I need an input conversion from string to int8, like perhaps\nPLyLongLong_FromString() as an proposal. src/backend/utils\n/adt/int8.c has one, but it's not really well designed for access\nfrom C. Should I duplicate the functionality, figure out how to\nwrap it for the call from plpython.c, or is there an existing\nstandard for factoring the body of the int8in function into a\nwidely available utility location.\n\nHope those questions are clear. 
I'd like to get a patch in\nbefore Beta.\n\n-Brad\n\n* Bradley McLean (brad@bradm.net) [010929 01:55]:\n> Can someone else run this and confirm the results against the tip\n> of the CVS repository?\n> \n> I'm trying to trace this bug (help welcome too).\n> \n> (it was hidden in a trigger and a pain to narrow to this point)\n> -Brad\n> \n> -----\n> \n> drop function mul1(int4,int4);\n> drop function mul2(int4,int4);\n> drop function callmul1();\n> drop function callmul2a();\n> drop function callmul2b();\n> create function mul1(int4,int4) returns int8 as 'select int8($1) * int8($2)' language 'sql';\n> create function mul2(int4,int4) returns int8 as 'select int8($1) * 4294967296::int8 + int8($2)' language 'sql';\n> create function callmul1() returns int8 as 'return plpy.execute(\"select mul1(6,7) as x\")[0][\"x\"]' language 'plpython';\n> create function callmul2a() returns int8 as 'select mul2(7,8)' language 'sql';\n> create function callmul2b() returns int8 as 'return plpy.execute(\"select mul2(7,8) as x\")[0][\"x\"]' language 'plpython';\n> select mul1(3,4);\n> select callmul1();\n> select mul2(5,6);\n> select callmul2a();\n> select callmul2b();\n> \n> Results:\n> \n> ...\n> callmul1 \n> ----------\n> 42\n> (1 row)\n> \n> mul2 \n> -------------\n> 21474836486\n> (1 row)\n> \n> callmul2a \n> -------------\n> 30064771080\n> (1 row)\n> \n> psql:bug:14: pqReadData() -- backend closed the channel unexpectedly.\n> \tThis probably means the backend terminated abnormally\n> \tbefore or while processing the request.\n> psql:bug:14: connection to server was lost\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Mon, 1 Oct 2001 13:41:01 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": true, "msg_subject": "Re: Plpython bug with int8 - Found, need advice" }, { 
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> 1) All of the conversion functions that return NULL ( line 1463 as\n> an example, page up and down from there) will cause the backend to terminate\n> abnormally. I'm not sure if this is considered a correct behavior,\n> or if elog(ERROR, ...) is a better approach. Comments?\n\nBackend coredump is never a correct behavior. I'd recommend elog ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2001 18:27:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Plpython bug with int8 - Found, need advice " } ]
[ { "msg_contents": "OK, I think we are on track for Monday beta. Marc, will you be\npackaging a beta1 tarball on Monday or waiting a few days? I need to\nrun pgindent and pgjindent either right before or after beta starts.\n\nAlso, what are we doing with the toplevel /ChangeLogs. I never\nunderstood the purpose of it, and I know others have similar questions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 17:28:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Preparation for Beta" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I think we are on track for Monday beta.\n\nOne thing that I think absolutely *must* happen before we can go beta\nis to fix the documentation build process at hub.org. Until the online\ndeveloper docs are up-to-date, how are beta testers going to know what\nto look at? (For that matter, what are the odds that the doc tarball\ngeneration process will succeed?)\n\nPersonally I'd really like to see CVSWeb working again too...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 15:08:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta " }, { "msg_contents": "On Sat, 29 Sep 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I think we are on track for Monday beta.\n>\n> One thing that I think absolutely *must* happen before we can go beta\n> is to fix the documentation build process at hub.org. Until the online\n> developer docs are up-to-date, how are beta testers going to know what\n> to look at? 
(For that matter, what are the odds that the doc tarball\n> generation process will succeed?)\n>\n> Personally I'd really like to see CVSWeb working again too...\n\nSo would I but it can't happen till Marc finishes some configuring\nI requested on the developers site.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 29 Sep 2001 16:14:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I think we are on track for Monday beta.\n> \n> One thing that I think absolutely *must* happen before we can go beta\n> is to fix the documentation build process at hub.org. Until the online\n> developer docs are up-to-date, how are beta testers going to know what\n> to look at? (For that matter, what are the odds that the doc tarball\n> generation process will succeed?)\n> \n> Personally I'd really like to see CVSWeb working again too...\n\nOK, are we ready for beta tomorrow, Monday? Are we building a tarball\ntomorrow? I need to pgindent before that. Also, what about /ChangeLog?\nCan I remove it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 19:07:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, are we ready for beta tomorrow, Monday? Are we building a tarball\n> tomorrow? I need to pgindent before that.\n\nPlease hold off on the pgindent until I've done something about this\nhashtable alignment issue ... I'll try to get it done tonight.\n\n> Also, what about /ChangeLog?\n> Can I remove it?\n\nI vote for removing it; I don't see why we keep such files in CVS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 20:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta " }, { "msg_contents": "\nwe arent' ready to go beta ... I'll let you know when ...\n\nOn Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > OK, I think we are on track for Monday beta.\n> >\n> > One thing that I think absolutely *must* happen before we can go beta\n> > is to fix the documentation build process at hub.org. Until the online\n> > developer docs are up-to-date, how are beta testers going to know what\n> > to look at? (For that matter, what are the odds that the doc tarball\n> > generation process will succeed?)\n> >\n> > Personally I'd really like to see CVSWeb working again too...\n>\n> OK, are we ready for beta tomorrow, Monday? Are we building a tarball\n> tomorrow? I need to pgindent before that. Also, what about /ChangeLog?\n> Can I remove it?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 1 Oct 2001 10:40:46 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> we arent' ready to go beta ... I'll let you know when ...\n\nI agree --- for one thing, we definitely have some bugs in the datetime\npg_proc entries. Unless we want to go into beta with a known initdb\nyet to do, we've got to wait for Thomas to deal with those.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 11:48:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta " }, { "msg_contents": "> I agree --- for one thing, we definitely have some bugs in the datetime\n> pg_proc entries. Unless we want to go into beta with a known initdb\n> yet to do, we've got to wait for Thomas to deal with those.\n\nI had a chance to work on timestamp features this weekend, and would\nlike to apply patches which give us another step toward SQL99 compliance\nwith support for specifying precision. As in\n\n select timestamp(2) with time zone '2001-09-30 04:06:06.78';\n\nThis uses typmod to adjust precision when date/times are read or written\nto columns, as is done for other data types already. 
I did *not* alter\nthe on-disk format for the types, but I think that the new support may\nbe sufficient for compliance and has less overhead than trying to carry\nthe precision with every instance of the type.\n\nI will also look at the remaining catalog issues (didn't get to email\nthis weekend), but will not likely have time until this evening (Hawaii\ntime :)\n\nI haven't figured out *where* the docs are supposed to go in the new\nscheme; none of the disk areas known to me on the new machines seem to\nhave a documentation directory at all! Where is this stuff supposed to\ngo? If things are going in different places, perhaps someone will have\ntime to mention, in one message, where those places are and what the\nsubdirectory scheme will look like?\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 18:37:26 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "> I haven't figured out *where* the docs are supposed to go in the new\n> scheme; none of the disk areas known to me on the new machines seem to\n> have a documentation directory at all! Where is this stuff supposed to\n> go? If things are going in different places, perhaps someone will have\n> time to mention, in one message, where those places are and what the\n> subdirectory scheme will look like?\n\nNot sure. Use my SGML/HTML copy at the bottom of the developers page.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 14:54:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "On Mon, 1 Oct 2001, Thomas Lockhart wrote:\n\n> > I agree --- for one thing, we definitely have some bugs in the datetime\n> > pg_proc entries. 
Unless we want to go into beta with a known initdb\n> > yet to do, we've got to wait for Thomas to deal with those.\n>\n> I had a chance to work on timestamp features this weekend, and would\n> like to apply patches which give us another step toward SQL99 compliance\n> with support for specifying precision. As in\n>\n> select timestamp(2) with time zone '2001-09-30 04:06:06.78';\n>\n> This uses typmod to adjust precision when date/times are read or written\n> to columns, as is done for other data types already. I did *not* alter\n> the on-disk format for the types, but I think that the new support may\n> be sufficient for compliance and has less overhead than trying to carry\n> the precision with every instance of the type.\n>\n> I will also look at the remaining catalog issues (didn't get to email\n> this weekend), but will not likely have time until this evening (Hawaii\n> time :)\n>\n> I haven't figured out *where* the docs are supposed to go in the new\n> scheme; none of the disk areas known to me on the new machines seem to\n> have a documentation directory at all! Where is this stuff supposed to\n> go? If things are going in different places, perhaps someone will have\n> time to mention, in one message, where those places are and what the\n> subdirectory scheme will look like?\n\nthe only changes to directory schemes is:\n\n\t/var/spool/ftp/* for anon ftp\n\t/usr/local/www/www for web stuff\n\ndirectory structures under each should still be as they always were ...\n\n\n", "msg_date": "Mon, 1 Oct 2001 14:59:11 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "...\n> the only changes to directory schemes is:\n> /var/spool/ftp/* for anon ftp\n> /usr/local/www/www for web stuff\n> directory structures under each should still be as they always were ...\n\nBut they aren't (or didn't seem to be). 
\"find\" couldn't find any\ninstance of the docs, unpacked or otherwise.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 20:13:54 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "On Monday 01 October 2001 04:13 pm, Thomas Lockhart wrote:\n\n> > the only changes to directory schemes is:\n> > /var/spool/ftp/* for anon ftp\n> > /usr/local/www/www for web stuff\n> > directory structures under each should still be as they always were ...\n\n> But they aren't (or didn't seem to be). \"find\" couldn't find any\n> instance of the docs, unpacked or otherwise.\n\nOk, apparently there's some confusion as to what host does what.\n\nMarc, can you provide a list of host with their functions? For instance, in \nmy case of uploading RPMset's tothe master ftp site, to where do I ssh, to \nwhat directory to I go, etc. The same thing is true for CVS, web, etc.\n\nIe, do I still upload to hub.org:/home/projects/pgsql/ftp/pub/binary for the \nRPM's? I'm a fairly experienced sysadmin, and I'm a little confused at the \nnew setup -- although the files are still on hub.org where they have always \nbeen, is it still the master for the ftp site, or?\n\nA map of the developer _sites_ would be a good addition to the developer's \ncorner, IMHO, unless people have security concerns that would prevent that.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 1 Oct 2001 16:28:10 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "On Mon, 1 Oct 2001, Lamar Owen wrote:\n\n> On Monday 01 October 2001 04:13 pm, Thomas Lockhart wrote:\n>\n> > > the only changes to directory schemes is:\n> > > /var/spool/ftp/* for anon ftp\n> > > /usr/local/www/www for web stuff\n> > > directory structures under each should still be as they always were ...\n>\n> > But they aren't (or didn't seem to be). 
\"find\" couldn't find any\n> > instance of the docs, unpacked or otherwise.\n>\n> Ok, apparently there's some confusion as to what host does what.\n>\n> Marc, can you provide a list of host with their functions? For instance, in\n> my case of uploading RPMset's tothe master ftp site, to where do I ssh, to\n> what directory to I go, etc. The same thing is true for CVS, web, etc.\n\nftp.postgresql.org:/var/spool/ftp/pub/binary\n(216.126.85.28)\n\nIgnore everything on hub.org from hence forth ... the only reason there\nare still files over there is so that we don't get a flood of email's from\nthe various mirrors concerning brokenness while Vince finishes off a few\nadministrative issues he has ...\n\nThe only site that a developer now needs to deal with is:\n\n\tftp.postgresql.org (which, once Vince gives me the go ahead, will\nbe the same as postgresql.org, which still points at the old machine)\n\nThe only changes that *should* be is that:\n\n/home/projects/pgsql/ftp => /var/spool/ftp\n/home/projects/pgsql/ftp/www => /usr/local/www/www\n\nif there are tools missing, let me know, as Peter did, and I'll get those\ninstalled ... if you can't find something, tell me what hte old directory\nis that you are expecting, and I'll see what went missing ...\n\n", "msg_date": "Tue, 2 Oct 2001 08:32:05 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" }, { "msg_contents": "On Tuesday 02 October 2001 08:32 am, Marc G. Fournier wrote:\n> ftp.postgresql.org:/var/spool/ftp/pub/binary\n> (216.126.85.28)\n\nSo far so good. Login successful, group membership correct. I'll let you \nknow if I stumble across a roadblock.\n\nOh, and BTW: having done server splits and moves in the past myself, you and \nVince have my gratitude and, well, sympathy, over this, as a dynamic website \nand ftpsite move by itself is never trivial. 
And there are many more details \nhere than a typical site move....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 2 Oct 2001 10:43:33 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Preparation for Beta" } ]
[ { "msg_contents": "> > Hi all,\n> > By mapping the WAL files by each backend in to its address\n> > space using \"mmap\" system call , there will be big\n> > improvements in performance from the following point of view:\n> > 1. Each backend directly writes in to the address\n> > space which is obtained by maping the WAL files.\n> > this saves the write system call at the end of\n> > every transaction which transfres 8k of\n> > data from user space to kernel.\n> > 2. since every transaction does not modify all the 8k\n> > content of WAL page , so by issuing the\n> > fsync , the kernel only transfers only the\n> > kernel pages which are modified , which is 4k for\n> > linux so fsync time is saved by twice.\n> > Any comments ?.\n> \n> This is interesting. We are concerned about using mmap() for all I/O\n> because we could eat up quite a bit of address space for big tables, but\n> WAL seems like an ideal use for mmap().\n\nOK, I have talked to Tom Lane about this on the phone and we have a few\nideas.\n\nHistorically, we have avoided mmap() because of portability problems,\nand because using mmap() to write to large tables could consume lots of\naddress space with little benefit. However, I perhaps can see WAL as\nbeing a good use of mmap.\n\nFirst, there is the issue of using mmap(). For OS's that have the\nmmap() MAP_SHARED flag, different backends could mmap the same file and\neach see the changes. However, keep in mind we still have to fsync()\nWAL, so we need to use msync().\n\nSo, looking at the benefits of using mmap(), we have overhead of\ndifferent backends having to mmap something that now sits quite easily\nin shared memory. Now, I can see mmap reducing the copy from user to\nkernel, but there are other ways to fix that. We could modify the\nwrite() routines to write() 8k on first WAL page write and later write\nonly the modified part of the page to the kernel buffers. 
The old\nkernel buffer is probably still around so it is unlikely to require a\nread from the file system to read in the rest of the page. This reduces\nthe write from 8k to something probably less than 4k which is better\nthan we can do with mmap.\n\nI will add a TODO item to this effect.\n\nAs far as reducing the write to disk from 8k to 4k, if we have to\nfsync/msync, we have to wait for the disk to spin to the proper location\nand at that point writing 4k or 8k doesn't seem like much of a win.\n\nIn summary, I think it would be nice to reduce the 8k transfer from user\nto kernel on secondary page writes to only the modified part of the\npage. I am uncertain if mmap() or anything else will help the physical\nwrite to the disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Sep 2001 17:37:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES" }, { "msg_contents": " I have just completed the functional testing the WAL using mmap , it is\n\n working fine, I have tested by commenting out the \"CreateCheckPoint \"\nfunctionality so that\n when i kill the postgres and restart it will redo all the records from the\nWAL log file which\n is updated using mmap.\n Just i need to clean code and to do some stress testing.\n By the end of this week i should able to complete the stress test and\ngenerate the patch file .\n As Tom Lane mentioned i see the problem in portability to all platforms,\n\n what i propose is to use mmap for only WAL for some platforms like\n linux,freebsd etc . 
For other platforms we can use the existing method by\nslightly modifying the\n write() routine to write only the modified part of the page.\n\nRegards\njana\n\n>\n>\n> OK, I have talked to Tom Lane about this on the phone and we have a few\n> ideas.\n>\n> Historically, we have avoided mmap() because of portability problems,\n> and because using mmap() to write to large tables could consume lots of\n> address space with little benefit. However, I perhaps can see WAL as\n> being a good use of mmap.\n>\n> First, there is the issue of using mmap(). For OS's that have the\n> mmap() MAP_SHARED flag, different backends could mmap the same file and\n> each see the changes. However, keep in mind we still have to fsync()\n> WAL, so we need to use msync().\n>\n> So, looking at the benefits of using mmap(), we have overhead of\n> different backends having to mmap something that now sits quite easily\n> in shared memory. Now, I can see mmap reducing the copy from user to\n> kernel, but there are other ways to fix that. We could modify the\n> write() routines to write() 8k on first WAL page write and later write\n> only the modified part of the page to the kernel buffers. The old\n> kernel buffer is probably still around so it is unlikely to require a\n> read from the file system to read in the rest of the page. This reduces\n> the write from 8k to something probably less than 4k which is better\n> than we can do with mmap.\n>\n> I will add a TODO item to this effect.\n>\n> As far as reducing the write to disk from 8k to 4k, if we have to\n> fsync/msync, we have to wait for the disk to spin to the proper location\n> and at that point writing 4k or 8k doesn't seem like much of a win.\n>\n> In summary, I think it would be nice to reduce the 8k transfer from user\n> to kernel on secondary page writes to only the modified part of the\n> page. 
I am uncertain if mmap() or anything else will help the physical\n> write to the disk.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 01 Oct 2001 17:57:04 +0800", "msg_from": "Janardhana Reddy <jana-reddy@mediaring.com.sg>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES" }, { "msg_contents": "\nSounds good. Keep us posted. This will probably not make it into 7.2\nbut can be added to 7.3. We can perhaps conditionally use your code in\nplace of what is there. I have also looked at reducing the write() size\nfor WAL secondary writes. That will have to wait for 7.3 too because we\nare so near beta.\n\n\n> I have just completed the functional testing the WAL using mmap , it is\n> \n> working fine, I have tested by commenting out the \"CreateCheckPoint \"\n> functionality so that\n> when i kill the postgres and restart it will redo all the records from the\n> WAL log file which\n> is updated using mmap.\n> Just i need to clean code and to do some stress testing.\n> By the end of this week i should able to complete the stress test and\n> generate the patch file .\n> As Tom Lane mentioned i see the problem in portability to all platforms,\n> \n> what i propose is to use mmap for only WAL for some platforms like\n> linux,freebsd etc . For other platforms we can use the existing method by\n> slightly modifying the\n> write() routine to write only the modified part of the page.\n> \n> Regards\n> jana\n> \n> >\n> >\n> > OK, I have talked to Tom Lane about this on the phone and we have a few\n> > ideas.\n> >\n> > Historically, we have avoided mmap() because of portability problems,\n> > and because using mmap() to write to large tables could consume lots of\n> > address space with little benefit. 
However, I perhaps can see WAL as\n> > being a good use of mmap.\n> >\n> > First, there is the issue of using mmap(). For OS's that have the\n> > mmap() MAP_SHARED flag, different backends could mmap the same file and\n> > each see the changes. However, keep in mind we still have to fsync()\n> > WAL, so we need to use msync().\n> >\n> > So, looking at the benefits of using mmap(), we have overhead of\n> > different backends having to mmap something that now sits quite easily\n> > in shared memory. Now, I can see mmap reducing the copy from user to\n> > kernel, but there are other ways to fix that. We could modify the\n> > write() routines to write() 8k on first WAL page write and later write\n> > only the modified part of the page to the kernel buffers. The old\n> > kernel buffer is probably still around so it is unlikely to require a\n> > read from the file system to read in the rest of the page. This reduces\n> > the write from 8k to something probably less than 4k which is better\n> > than we can do with mmap.\n> >\n> > I will add a TODO item to this effect.\n> >\n> > As far as reducing the write to disk from 8k to 4k, if we have to\n> > fsync/msync, we have to wait for the disk to spin to the proper location\n> > and at that point writing 4k or 8k doesn't seem like much of a win.\n> >\n> > In summary, I think it would be nice to reduce the 8k transfer from user\n> > to kernel on secondary page writes to only the modified part of the\n> > page. I am uncertain if mmap() or anything else will help the physical\n> > write to the disk.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 11:44:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES" }, { "msg_contents": "\nI have added this to TODO.detail/mmap.\n\n> I have just completed the functional testing the WAL using mmap , it is\n> \n> working fine, I have tested by commenting out the \"CreateCheckPoint \"\n> functionality so that\n> when i kill the postgres and restart it will redo all the records from the\n> WAL log file which\n> is updated using mmap.\n> Just i need to clean code and to do some stress testing.\n> By the end of this week i should able to complete the stress test and\n> generate the patch file .\n> As Tom Lane mentioned i see the problem in portability to all platforms,\n> \n> what i propose is to use mmap for only WAL for some platforms like\n> linux,freebsd etc . For other platforms we can use the existing method by\n> slightly modifying the\n> write() routine to write only the modified part of the page.\n> \n> Regards\n> jana\n> \n> >\n> >\n> > OK, I have talked to Tom Lane about this on the phone and we have a few\n> > ideas.\n> >\n> > Historically, we have avoided mmap() because of portability problems,\n> > and because using mmap() to write to large tables could consume lots of\n> > address space with little benefit. However, I perhaps can see WAL as\n> > being a good use of mmap.\n> >\n> > First, there is the issue of using mmap(). 
For OS's that have the\n> > mmap() MAP_SHARED flag, different backends could mmap the same file and\n> > each see the changes. However, keep in mind we still have to fsync()\n> > WAL, so we need to use msync().\n> >\n> > So, looking at the benefits of using mmap(), we have overhead of\n> > different backends having to mmap something that now sits quite easily\n> > in shared memory. Now, I can see mmap reducing the copy from user to\n> > kernel, but there are other ways to fix that. We could modify the\n> > write() routines to write() 8k on first WAL page write and later write\n> > only the modified part of the page to the kernel buffers. The old\n> > kernel buffer is probably still around so it is unlikely to require a\n> > read from the file system to read in the rest of the page. This reduces\n> > the write from 8k to something probably less than 4k which is better\n> > than we can do with mmap.\n> >\n> > I will add a TODO item to this effect.\n> >\n> > As far as reducing the write to disk from 8k to 4k, if we have to\n> > fsync/msync, we have to wait for the disk to spin to the proper location\n> > and at that point writing 4k or 8k doesn't seem like much of a win.\n> >\n> > In summary, I think it would be nice to reduce the 8k transfer from user\n> > to kernel on secondary page writes to only the modified part of the\n> > page. I am uncertain if mmap() or anything else will help the physical\n> > write to the disk.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 13:35:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE IMPROVEMENT by mapping WAL FILES" } ]
[ { "msg_contents": "I have just noticed a flaw in the handling of \"-o backend-options\"\npostmaster parameters. To wit: although these options will be passed\nto all backends launched by the postmaster, they aren't passed to\ncheckpoint, xlog startup, and xlog shutdown subprocesses (everything\nthat goes through BootstrapMain). Since BootstrapMain doesn't\nrecognize the same set of options that PostgresMain does, this is\na necessary restriction. Unfortunately it means that checkpoint etc.\ndon't necessarily run with the same options as normal backends.\n\nThe particular case that I ran into is that I've been in the habit\nof running test postmasters with \"-o -F\" to suppress fsync. Kernel\ntracing showed that checkpoint processes were issuing fsyncs anyway,\nand I eventually realized why: they're not seeing the command line\noption.\n\nWhile that's not a fatal problem, I could imagine *much* more serious\nmisbehavior from inconsistent settings of some GUC parameters. Since\nbackends believe that these parameters have PGC_POSTMASTER priority,\nthey'll accept changes that they probably oughtn't. For example,\n\tpostmaster -o --shared_buffers=N\nwill cause things to blow up very nicely indeed: backends will have\na value of NBuffers that doesn't agree with what the postmaster has.\n\nI wonder whether we should retire -o. Or change it so that the\npostmaster parses the given options for itself (consequently adjusting\nits copies of GUC variables) instead of passing them on to backends\nfor parsing at backend start time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Sep 2001 22:37:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Glitch in handling of postmaster -o options" }, { "msg_contents": "Tom Lane writes:\n\n> The particular case that I ran into is that I've been in the habit\n> of running test postmasters with \"-o -F\" to suppress fsync. 
Kernel\n> tracing showed that checkpoint processes were issuing fsyncs anyway,\n> and I eventually realized why: they're not seeing the command line\n> option.\n\npostmaster -F should be used in this particular case. I'm not sure about\na general solution. I don't even know what the set of parameters for\ncheckpoint processes is or why they could usefully differ from the\npostmaster's global settings.\n\n> While that's not a fatal problem, I could imagine *much* more serious\n> misbehavior from inconsistent settings of some GUC parameters. Since\n> backends believe that these parameters have PGC_POSTMASTER priority,\n> they'll accept changes that they probably oughtn't. For example,\n> \tpostmaster -o --shared_buffers=N\n> will cause things to blow up very nicely indeed: backends will have\n> a value of NBuffers that doesn't agree with what the postmaster has.\n\nThis is a bug. PG 7.1 wouldn't let this thing go through but with all the\nchanges made for the RESET ALL functionality (I think) this has snuck in.\n\nMy best guess is that this change was made to allow using\nSetConfigOption() in PostgresMain() with options that are really\npostmaster-global and are passed down to the backends. But AFAICS there\naren't any such options anymore.\n\n> I wonder whether we should retire -o.\n\nThe only two postgres options which I would consider user-space are -F and\n-S. The former also exists as a postmaster option, but that is newer and\npeople are used to the older form. The second one doesn't for obvious\nreasons. With the config file this isn't so much of an issue any longer,\nbut people do use it.\n\n(It's always been a goal of mine to merge the options that any of the\nbackend processes accept. 
The -S option really is the killer for that.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 30 Sep 2001 00:54:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "Tom Lane wrote:\n> \n<snip>\n> \n> I wonder whether we should retire -o. Or change it so that the\n> postmaster parses the given options for itself (consequently adjusting\n> its copies of GUC variables) instead of passing them on to backends\n> for parsing at backend start time.\n\nRetiring -o would seem like a good idea. Just about every person I bump\ninto that's new to PostgreSQL doesn't get -o right for some time. It's\nsimple in concept, but different from how every other package works, so\nit confuses newcomers who don't know the difference between the\ndifferent parts of PostgreSQL.\n\nIt would be good if we could just having options that replace each -o\noption (i.e. -F instead of -o '-F', -x -y instead of -o '-x -y') so it's\nsimilar to how other programs command line arguments work.\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 30 Sep 2001 12:46:21 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Retiring -o would seem like a good idea.\n\nThat was what I was thinking too. I can think of ways to reimplement\n-o options so that they work safely ... but is it worth the trouble?\nAFAICS, -o options confuse both people and machines, and have no\nredeeming value beyond supporting old startup scripts. Which could\neasily be updated.\n\nSome judicious feature removal may be the best path here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 22:56:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Glitch in handling of postmaster -o options " }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n>> I wonder whether we should retire -o.\n\n> How about putting -o stuff after -p? That way only postmaster\n> code can set PGC_POSTMASTER options for a backend, no way for\n> user to mess up. ATM this would break -o -F tho'.\n\nIndeed. If we're going to force people to change their scripts anyway,\nIMHO we might as well go all the way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 11:37:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Glitch in handling of postmaster -o options " }, { "msg_contents": "On Sun, Sep 30, 2001 at 12:54:25AM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> > While that's not a fatal problem, I could imagine *much* more serious\n> > misbehavior from inconsistent settings of some GUC parameters. Since\n> > backends believe that these parameters have PGC_POSTMASTER priority,\n> > they'll accept changes that they probably oughtn't. 
For example,\n> > \tpostmaster -o --shared_buffers=N\n> > will cause things to blow up very nicely indeed: backends will have\n> > a value of NBuffers that doesn't agree with what the postmaster has.\n> \n> This is a bug. PG 7.1 wouldn't let this thing go through but with all the\n> changes made for the RESET ALL functionality (I think) this has snuck in.\n> \n> My best guess is that this change was made to allow using\n> SetConfigOption() in PostgresMain() with options that are really\n> postmaster-global and are passed down to the backends. But AFAICS there\n> aren't any such options anymore.\n> \n> > I wonder whether we should retire -o.\n\nHow about putting -o stuff after -p? That way only postmaster\ncode can set PGC_POSTMASTER options for a backend, no way for\nuser to mess up. ATM this would break -o -F tho'.\n\n-- \nmarko\n\n", "msg_date": "Sun, 30 Sep 2001 17:40:20 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "> Marko Kreen <marko@l-t.ee> writes:\n> >> I wonder whether we should retire -o.\n> \n> > How about putting -o stuff after -p? That way only postmaster\n> > code can set PGC_POSTMASTER options for a backend, no way for\n> > user to mess up. ATM this would break -o -F tho'.\n\nNot sure what you are suggesting here. Should we keep -o but say all\noptions after -o are passed to postgres backends:\n\n\tpostmaster -a -b -c -o -f -g -h\n\nIn this case, -abc goes to postmaster and -fgh goes to postgres.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 14:13:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "On Sun, Sep 30, 2001 at 02:13:34PM -0400, Bruce Momjian wrote:\n> > Marko Kreen <marko@l-t.ee> writes:\n> > >> I wonder whether we should retire -o.\n> > \n> > > How about putting -o stuff after -p? That way only postmaster\n> > > code can set PGC_POSTMASTER options for a backend, no way for\n> > > user to mess up. ATM this would break -o -F tho'.\n> \n> Not sure what you are suggesting here. Should we keep -o but say all\n> options after -o are passed to postgres backends:\n\nI am suggesting this.\n\nLike previosly discussed, postmaster -F should be used instead\nof postmaster '-o -F'. Other options with PGC_BACKEND, like -S\nkeep on working.\n\n-- \nmarko\n\n\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /opt/cvs/pgsql/pgsql/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.243\ndiff -u -c -r1.243 postmaster.c\n*** src/backend/postmaster/postmaster.c\t21 Sep 2001 20:31:48 -0000\t1.243\n--- src/backend/postmaster/postmaster.c\t30 Sep 2001 15:35:44 -0000\n***************\n*** 2035,2048 ****\n \t\tav[ac++] = debugbuf;\n \t}\n \n- \t/*\n- \t * Pass any backend switches specified with -o in the postmaster's own\n- \t * command line. We assume these are secure. (It's OK to mangle\n- \t * ExtraOptions since we are now in the child process; this won't\n- \t * change the postmaster's copy.)\n- \t */\n- \tsplit_opts(av, &ac, ExtraOptions);\n- \n \t/* Tell the backend what protocol the frontend is using. 
*/\n \tsprintf(protobuf, \"-v%u\", port->proto);\n \tav[ac++] = protobuf;\n--- 2035,2040 ----\n***************\n*** 2055,2060 ****\n--- 2047,2062 ----\n \n \tStrNCpy(dbbuf, port->database, ARGV_SIZE);\n \tav[ac++] = dbbuf;\n+ \n+ \t/*\n+ \t * Pass any backend switches specified with -o in the postmaster's own\n+ \t * command line. (It's OK to mangle ExtraOptions since we are now in the\n+ \t * child process; this won't change the postmaster's copy.)\n+ \t *\n+ \t * We dont assume anymore they are secure, now only PGC_BACKEND\n+ \t * options can be specified that way.\n+ \t */\n+ \tsplit_opts(av, &ac, ExtraOptions);\n \n \t/*\n \t * Pass the (insecure) option switches from the connection request.\n", "msg_date": "Sun, 30 Sep 2001 21:05:00 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> I am suggesting this.\n> [ code snipped ]\n\nOkay, that would mean that \"-o '-S nnn'\" still works, but \"-o -F\"\ndoesn't.\n\nBut ... the thing is, there is no reason for -o to exist anymore other\nthan backwards compatibility with existing startup scripts. -o doesn't\ndo anything you can't do more cleanly and sanely with GUC options\n(--sort_mem, etc). So, I don't really see much value in keeping it\nif you're going to break one of the more common usages --- which I'm\nsure -o -F is.\n\nSince the problem I identified is not likely to bite very many people,\nmy vote is not to try to apply a code solution now. I think we should\nleave the code alone, and instead document in 7.2 that -o is deprecated\n(and explain what to do instead), with the intention of removing it in\n7.3. 
Giving people a release cycle's worth of notice seems sufficient.\n\nPossibly we could also take this opportunity to deprecate -S and the\nother options that are standing in the way of unified command line\noptions for postmasters and backends.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 17:13:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Glitch in handling of postmaster -o options " }, { "msg_contents": "Hi all,\n\nThere seem to be a few namespace conflicts for the options of postgres\nand postmaster. The ones I could identify from the man pages are:\n\n-i -N -o -p -S -s\n\nIf we are going to deprecate -o, then we'll need to make sure we also\nintroduce replacement names where these conflicts are. This way, in the\nfuture -o can be treated like a 'no-option' and everything would work.\n\nIf we notify of the impending deprecation now, to actually occur in 7.3,\nwould we be best introducing alternative option names somewhere in the\n7.2 beta cycle so people writing scripts for 7.2 can use the new names\nand know their scripts will work into the future?\n\n???\n\nRegards and best wishes,\n\nJustin Clift\n\n\nTom Lane wrote:\n> \n> Marko Kreen <marko@l-t.ee> writes:\n> > I am suggesting this.\n> > [ code snipped ]\n> \n> Okay, that would mean that \"-o '-S nnn'\" still works, but \"-o -F\"\n> doesn't.\n> \n> But ... the thing is, there is no reason for -o to exist anymore other\n> than backwards compatibility with existing startup scripts. -o doesn't\n> do anything you can't do more cleanly and sanely with GUC options\n> (--sort_mem, etc). So, I don't really see much value in keeping it\n> if you're going to break one of the more common usages --- which I'm\n> sure -o -F is.\n> \n> Since the problem I identified is not likely to bite very many people,\n> my vote is not to try to apply a code solution now.
I think we should\n> leave the code alone, and instead document in 7.2 that -o is deprecated\n> (and explain what to do instead), with the intention of removing it in\n> 7.3. Giving people a release cycle's worth of notice seems sufficient.\n> \n> Possibly we could also take this opportunity to deprecate -S and the\n> other options that are standing in the way of unified command line\n> options for postmasters and backends.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 01 Oct 2001 20:57:52 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "\nWould someone give me a status on this?\n\n---------------------------------------------------------------------------\n\n> Hi all,\n> \n> There seem to be a few namespace conflicts for the options of postgres\n> and postmaster. The one's I could identify from the man pages are :\n> \n> -i -N -o -p -S -s\n> \n> If we are going to deprecate -o, then we'll need to make sure we also\n> introduce replacement names where these conflicts are. 
This way, in the\n> future -o can be treated like a 'no-option' and everything would work.\n> \n> If we notify of the impending deprecation now, to actually occur in 7.3,\n> would we be best intoducing alternative option names somewhere in the\n> 7.2 beta cycle so people writing scripts for 7.2 can use the new names\n> and know their scripts will work into the future?\n> \n> ???\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> Tom Lane wrote:\n> > \n> > Marko Kreen <marko@l-t.ee> writes:\n> > > I am suggesting this.\n> > > [ code snipped ]\n> > \n> > Okay, that would mean that \"-o '-S nnn'\" still works, but \"-o -F\"\n> > doesn't.\n> > \n> > But ... the thing is, there is no reason for -o to exist anymore other\n> > than backwards compatibility with existing startup scripts. -o doesn't\n> > do anything you can't do more cleanly and sanely with GUC options\n> > (--sort_mem, etc). So, I don't really see much value in keeping it\n> > if you're going to break one of the more common usages --- which I'm\n> > sure -o -F is.\n> > \n> > Since the problem I identified is not likely to bite very many people,\n> > my vote is not to try to apply a code solution now. I think we should\n> > leave the code alone, and instead document in 7.2 that -o is deprecated\n> > (and explain what to do instead), with the intention of removing it in\n> > 7.3. Giving people a release cycle's worth of notice seems sufficient.\n> > \n> > Possibly we could also take this opportunity to deprecate -S and the\n> > other options that are standing in the way of unified command line\n> > options for postmasters and backends.\n> > \n> > regards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 17:32:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Would someone give me a status on this?\n\nI don't think we need any code changes. If we decide to deprecate -o\n(or anything else), it's just a documentation change. So we can argue\nabout it during beta ...\n\n>> If we notify of the impending deprecation now, to actually occur in 7.3,\n>> would we be best intoducing alternative option names somewhere in the\n>> 7.2 beta cycle so people writing scripts for 7.2 can use the new names\n>> and know their scripts will work into the future?\n\nThe alternative option names already exist, in the form of GUC\nvariables. For example, \"--sort-mem=NNN\" could replace -S NNN.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 20:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Glitch in handling of postmaster -o options " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Would someone give me a status on this?\n> \n> I don't think we need any code changes. If we decide to deprecate -o\n> (or anything else), it's just a documentation change. 
So we can argue\n> about it during beta ...\n> \n> >> If we notify of the impending deprecation now, to actually occur in 7.3,\n> >> would we be best intoducing alternative option names somewhere in the\n> >> 7.2 beta cycle so people writing scripts for 7.2 can use the new names\n> >> and know their scripts will work into the future?\n> \n> The alternative option names already exist, in the form of GUC\n> variables. For example, \"--sort-mem=NNN\" could replace -S NNN.\n\nI don't think we can remove -o behavior during beta because it will\naffect people using -S in startup scripts. I just wanted to know if I\nshould record this on the TODO list. Added to TODO:\n\n * Remove behavior of postmaster -o after making\n postmaster/postgres flags unique\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 22:45:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't think we can remove -o behavior during beta because it will\n> affect people using -S in startup scripts.\n\nThat was *not* the proposal under discussion. 
The proposal was to\nwarn people in the 7.2 documentation that we plan to remove -o in 7.3.\n\nAFAICS there is no backwards-compatible way to clean up these switches,\nand so the best bet is to make an incompatible change --- after suitable\nwarning.\n\t\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2001 23:25:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Glitch in handling of postmaster -o options " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't think we can remove -o behavior during beta because it will\n> > affect people using -S in startup scripts.\n> \n> That was *not* the proposal under discussion. The proposal was to\n> warn people in the 7.2 documentation that we plan to remove -o in 7.3.\n> \n> AFAICS there is no backwards-compatible way to clean up these switches,\n> and so the best bet is to make an incompatible change --- after suitable\n> warning.\n\nOK, it is on the TODO list so we can do it when we want to. So you are\nconsidering only a documentation warning? I was thinking we should have\nsingle-letter replacements installed and print a warning to the user\nthat the -o '-a -b -c' thing will go away in 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 01:17:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Would someone give me a status on this?\n> \n> I don't think we need any code changes. If we decide to deprecate -o\n> (or anything else), it's just a documentation change. 
So we can argue\n> about it during beta ...\n> \n> >> If we notify of the impending deprecation now, to actually occur in 7.3,\n> >> would we be best intoducing alternative option names somewhere in the\n> >> 7.2 beta cycle so people writing scripts for 7.2 can use the new names\n> >> and know their scripts will work into the future?\n> \n> The alternative option names already exist, in the form of GUC\n> variables. For example, \"--sort-mem=NNN\" could replace -S NNN.\n\nOK, the long options already exist and people can use those in 7.2\nwithout the -o, right?\n\nDo you have to have long option support in your OS to use them? Do we\nwant to have options that _don't_ have single-letter versions? \nCertainly we can't have single-letter versions of all the GUC options\nbut do we remove ones that were already there? I guess we could.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 01:33:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Glitch in handling of postmaster -o options" } ]
[ { "msg_contents": "Just a bit of HUMOR, no offense and take it easy!!\n=================================\n\nIf someone asks you how you compare the features and capabilities of\nthese three popular SQL servers:\nOracle, PostgreSQL and MySQL ??\n\nThen comes the answer in layman's terms:\n\n\"An Elephant, Powerful White Horse, Fast Hare (Rabbit). Please tell me\nwhich animal you want ??\"\n\nOracle = Elephant, Big and hefty but very bulky\nPostgreSQL = Powerful White Horse, top breed horse (I love riding\nhorses)\nMySQL = Fast running Rabbit (Hare), I like rabbits as pets, very decent\nanimals!!\n\nOracle and PostgreSQL have been in development for the last 22 years. Both\nhave origins at the University of California, Berkeley.\nBoth are very mature, ACID compliant (Atomicity, Consistency, Isolation,\nDurability) and robust.\n\nMySQL is like a rabbit, it runs fast and can be a good pet.\nBut see, the speed is NOT at all important when it comes to a SQL server -\nit is the ACID compliance, Data Integrity,\nrobustness, features and language interfaces which are\nimportant.\n\nYou must not compare an Elephant with a Hare (Oracle with MySQL)\n\nBut the best among these three animals is - White Powerful Horse!!\n(PostgreSQL !!)\n\nWhy do you think every person is waiting in a long queue to get a ride\non \"White Powerful Horse\" (PostgreSQL) ???\n\nEvery person on the planet wants to ride on PostgreSQL !!\n", "msg_date": "Sat, 29 Sep 2001 16:19:01 GMT", "msg_from": "peace_flower <\"alavoor[AT]\"@yahoo.com>", "msg_from_op": true, "msg_subject": "Elephant, Horse and Hare (Rabbit) : Oracle, PostgreSQL and MySQL !" } ]
[ { "msg_contents": "How hard would it be to pre-fork an extra backend for the database a\nuser just requested so if they next user asks for the same database, the\nbackend would already be started?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 14:22:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Pre-forking backend" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How hard would it be to pre-fork an extra backend\n\nHow are you going to pass the connection socket to an already-forked\nchild process? AFAIK there's no remotely portable way ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 14:36:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How hard would it be to pre-fork an extra backend\n> \n> How are you going to pass the connection socket to an already-forked\n> child process? AFAIK there's no remotely portable way ...\n\nNo idea but it seemed like a nice optimization if we could do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 14:38:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> How hard would it be to pre-fork an extra backend for the database a\n> user just requested so if they next user asks for the same database, the\n> backend would already be started?\n\nThe only problem I could see is the socket. The pre-forked() back-end would\nhave to do the accept() for the new connection, but you could always have a\nforked process waiting to go in the accept() routine. When it accepts a new\nsocket, it sends a signal off to the parent back-end to fork() over (couldn't\nresist) a new backend.\n\nThat way there would be no fork() over head for new connections.\n", "msg_date": "Sat, 29 Sep 2001 14:50:25 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > How hard would it be to pre-fork an extra backend\n> >\n> > How are you going to pass the connection socket to an already-forked\n> > child process? AFAIK there's no remotely portable way ...\n>\n> No idea but it seemed like a nice optimization if we could do it.\n\nWhat can be done is to have the parent process open and listen() on the\nsocket, then have each child do an accept() on the socket. That way you\ndon't have to pass the socket. The function of the parent process would then\nbe only to decide when to start new children.\n\nOn some operating systems, only one child at a time can accept() on the\nsocket. 
On these, you have to lock around the call to accept().\n\n\n\n\n\n", "msg_date": "Sat, 29 Sep 2001 16:28:00 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > How hard would it be to pre-fork an extra backend\n> > >\n> > > How are you going to pass the connection socket to an already-forked\n> > > child process? AFAIK there's no remotely portable way ...\n> >\n> > No idea but it seemed like a nice optimization if we could do it.\n> \n> What can be done is to have the parent process open and listen() on the\n> socket, then have each child do an accept() on the socket. That way you\n> don't have to pass the socket. The function of the parent process would then\n> be only to decide when to start new children.\n> \n> On some operating systems, only one child at a time can accept() on the\n> socket. On these, you have to lock around the call to accept().\n\nBut how do you know the client wants the database you have forked? They\ncould want a different one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 29 Sep 2001 16:29:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> On some operating systems, only one child at a time can accept() on the\n>> socket. On these, you have to lock around the call to accept().\n\n> But how do you know the client wants the database you have forked? They\n> could want a different one.\n\nThis approach would only work as far as saving the fork() call itself,\nnot the backend setup time. 
Not sure it's worth the trouble. I doubt\nthat the fork itself is a huge component of our start time; it's setting\nup all the catalog caches and so forth that's expensive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 16:50:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "Tom Lane wrote:\n>\n> This approach would only work as far as saving the fork() call itself,\n> not the backend setup time. Not sure it's worth the trouble. I doubt\n> that the fork itself is a huge component of our start time; it's setting\n> up all the catalog caches and so forth that's expensive.\n\nOn Unix, yeah, but on Windows, VMS, MPE/iX, possibly others, forking is\nexpensive. Even on Unix, you're not losing anything by this architecture.\n\nThe simple solution is to wait on separate sockets and add a redirect\ncapability to the protocol. The program would be:\n\nIf the client wants the database I have open,\n   great, we're in business\nelse if the client supports redirect,\n   do redirect\nelse if I can pass file descriptor on this OS,\n   pass file descriptor to the right process\nelse\n   throw away what we've done and open the right database.\n\nSimple! It's just a small matter of programming.\n\n\n\n\n", "msg_date": "Sat, 29 Sep 2001 17:06:13 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "At 04:50 PM 9/29/01 -0400, Tom Lane wrote:\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>> On some operating systems, only one child at a time can accept() on the\n>>> socket. On these, you have to lock around the call to accept().\n>\n>> But how do you know the client wants the database you have forked? They\n>> could want a different one.\n>\n>This approach would only work as far as saving the fork() call itself,\n>not the backend setup time. Not sure it's worth the trouble. 
I doubt\n>that the fork itself is a huge component of our start time; it's setting\n>up all the catalog caches and so forth that's expensive.\n\nI don't think there's much benefit either.\n\nFor most cases where preforking would help, you could just simply not\ndisconnect. Get the app to connect to the correct DB on startup and then\njust wait, do stuff, then don't disconnect; either rollback or commit. Or\nhave a DB connection pool.\n\nWhat would be good is a DB that can handle lots of connections well. That\nwould help almost any case.\n\nPreforking is good for web servers but for DB servers it doesn't seem as\nuseful.\n\nCheerio,\nLink.\n\n", "msg_date": "Sun, 30 Sep 2001 09:54:44 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How hard would it be to pre-fork an extra backend\n> \n> How are you going to pass the connection socket to an already-forked\n> child process? AFAIK there's no remotely portable way ...\n\nUmm... Apache? They use a preforking model and it works quite well for \nevery *NIX that Apache runs on. ;) Maybe RSE can comment on this \nfurther... -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 29 Sep 2001 19:28:01 -0700", "msg_from": "sean-pgsql-hackers@chittenden.org", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How hard would it be to pre-fork an extra backend\n>\n> How are you going to pass the connection socket to an already-forked\n> child process? AFAIK there's no remotely portable way ...\n\n    One of the mechanisms I've seen was that the master process\n    just does the socket(), bind(), listen(), then forks off and\n    the children coordinate via a semaphore that at most one of\n    them executes a blocking accept(). 
I think it was in some\n    older apache release.\n\n    But in contrast to apache, we currently do most of the\n    initialization after we authenticated the user and know what\n    database to connect to. I'm not sure how much of the backend\n    startup could be done before accepting the connection.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sun, 30 Sep 2001 00:40:49 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "On Sat, 29 Sep 2001 sean-pgsql-hackers@chittenden.org wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > How hard would it be to pre-fork an extra backend\n> > \n> > How are you going to pass the connection socket to an already-forked\n> > child process? AFAIK there's no remotely portable way ...\n> \n> Umm... Apache? They use a preforking model and it works quite well for \n> every *NIX that Apache runs on. ;) Maybe RSE can comment on this \n> further... -sc\n\nIt works very well for what Apache requires. Namely, to have a queue of\nprocesses ready to serve pages. It's not that simple with PostgreSQL - as\nthe discussion so far has drawn out - since there is no simple way to\nguarantee that the 'right' child gets the socket. The reason why there\nneeds to be a 'right' child is that a socket needs to be passed to a child\nwhich has started up for a given database. 
Otherwise, there's no benefit.\n\nThis aside, isn't it possible to just copy the socket and some\ndata about the database required into shared memory and have the preforked\nchildren pick the socket up from there? Combined with a framework which\ntests that there are still idle pre-forked children waiting for this\ndatabase and some configuration options to allow users to specify a number\nof waiting backends for a given database, this would work pretty well.\n\nGavin\n\n", "msg_date": "Sun, 30 Sep 2001 20:12:20 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n\n> This aside, isn't it possible to just copy the socket and some\n> data about the database required into shared memory and have the preforked\n> children pick the socket up from there.\n\nUmmm.... No. There's no Unix API for doing so.\n\nYou can pass open file descriptors across Unix domain sockets on most\nsystems, which is a possible way to address the problem, but probably\nnot worth it for the reasons discussed earlier.\n\n-Doug\n-- \nIn a world of steel-eyed death, and men who are fighting to be warm,\nCome in, she said, I'll give you shelter from the storm.    -Dylan\n", "msg_date": "30 Sep 2001 11:11:43 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "* Gavin Sherry (swm@linuxworld.com.au) [010930 06:13]:\n> On Sat, 29 Sep 2001 sean-pgsql-hackers@chittenden.org wrote:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > How hard would it be to pre-fork an extra backend\n> > > \n> > > How are you going to pass the connection socket to an already-forked\n> > > child process? AFAIK there's no remotely portable way ...\n> > \n> > Umm... Apache? They use a preforking model and it works quite well for \n> > every *NIX that Apache runs on. 
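Doug's point about descriptor passing is easiest to see with the sendmsg()/recvmsg() SCM_RIGHTS mechanism. A minimal single-process Python sketch of that mechanism follows; the real thing in a postmaster would be the same ancillary-data dance in C, and on some older systems it would be the STREAMS I_SENDFD ioctl instead, which is the portability caveat Doug alludes to.

```python
import array
import os
import socket

def send_fd(sock, fd):
    # Ship one open descriptor as SCM_RIGHTS ancillary data
    # alongside a single dummy payload byte.
    sock.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]))])

def recv_fd(sock):
    # Receive one descriptor; the kernel installs a fresh fd number
    # in this process that refers to the same open file.
    fds = array.array("i")
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
    level, ctype, data = ancdata[0]
    fds.frombytes(data[:fds.itemsize])
    return fds[0]

# Demo: pass the read end of a pipe across a socketpair, then read
# the pipe's contents through the received descriptor.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()
os.write(w, b"hello via a passed descriptor")
send_fd(a, r)
passed = recv_fd(b)
payload = os.read(passed, 64)
for fd in (r, w, passed):
    os.close(fd)
a.close()
b.close()
print(payload.decode())
```

In the pre-forking scenario discussed here, the postmaster would accept() and then hand the connected socket to whichever idle child was started for the requested database.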
;) Maybe RSE can comment on this \n> > further... -sc\n> \n> It works very good for what Apache requires. Namely, to have a queue of\n> processes ready to serve pages. Its not that simple with PostgreSQL - as\n> the discussion so far has drawn out - since there is no simple way to\n> guarantee that the 'right' child gets the socket. The reason why there\n> needs to be a 'right' child is that a socket needs to be passed to a child\n> which has started up for a given database. Otherwise, there's no benefit.\n\nInteresting: So as the number of databases served by a given system\napproaches one, the efficiency of this increases.\n\nIs it useful if it only works for one database within a server? I can\nenvision applications for this.\n\n-Brad\n", "msg_date": "Sun, 30 Sep 2001 11:56:53 -0400", "msg_from": "Bradley McLean <brad@bradm.net>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Doug McNaught wrote:\n>\n> You can pass open file descriptors across Unix domain sockets on most\n> systems, which is a possible way to address the problem, but probably\n> not worth it for the reasons discussed earlier.\n\nI think that it does solve the problem. The only drawback is that it's not\nportable. Almost all systems do support one of two methods, though.\n\n\n", "msg_date": "Sun, 30 Sep 2001 12:45:57 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> Is it useful if it only works for one database within a server?\n\nOnce we have schemas (7.3, I hope), I think a lot of installations will\nhave only one production database. However, if we were going to do this\nwhat we'd probably do is allow the DBA to configure the postmaster to\nstart N pre-forked backends per database, where N can depend on the\ndatabase. 
There's no reason to limit it to just one database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 13:35:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "\n> \n> Once we have schemas (7.3, I hope), I think a lot of installations will\n> have only one production database. However, if we were going to do this\n> what we'd probably do is allow the DBA to configure the postmaster to\n> start N pre-forked backends per database, where N can depend on the\n> database. There's no reason to limit it to just one database.\n\nThe optimized version of Postgres-R uses pre-forked backends for \nhandling remote\nwrite sets. It currently uses one user/database, so I'm all for having \na configurable\nparameter for starting a pool of backends for each database. We'll have \nto make sure\nthat number * the number of databases is lower than the max number of \nbackends at\nstart up.\n\nDarren\n\n> \n> \n\n", "msg_date": "Sun, 30 Sep 2001 19:51:06 -0400", "msg_from": "Darren Johnson <darren.johnson@home.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> >\n> > How hard would it be to pre-fork an extra backend for the database a\n> > user just requested so if they next user asks for the same database, the\n> > backend would already be started?\n\n  Perhaps I'm missing something, but it seems to me that the cost of forking\na new backend would be pretty trivial compared to the expense of processing\nanything but the most simple query. 
Am I wrong in that?\n\nsteve\n\n\n", "msg_date": "Sun, 30 Sep 2001 20:16:27 -0600", "msg_from": "\"Steve Wolfe\" <steve@iboats.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> > >\n> > > How hard would it be to pre-fork an extra backend for the database a\n> > > user just requested so if they next user asks for the same database, the\n> > > backend would already be started?\n> \n> Perhaps I'm missing something, but it seems to me that the cost of forking\n> a new backend would be pretty trivial compared to the expense of processing\n> anything but the most simple query. Am I wrong in that?\n\nTrue on most OS's, but on Solaris, fork is pretty expensive, or at least\nwe are told.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 22:50:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "At 08:16 PM 30-09-2001 -0600, Steve Wolfe wrote:\n>> >\n>> > How hard would it be to pre-fork an extra backend for the database a\n>> > user just requested so if they next user asks for the same database, the\n>> > backend would already be started?\n>\n> Perhaps I'm missing something, but it seems to me that the cost of forking\n>a new backend would be pretty trivial compared to the expense of processing\n>anything but the most simple query. Am I wrong in that?\n\nI think forking costs a lot on Solaris. That's why Sun promotes threads :).\n\nI still don't see many advantages of doing the preforking in postgresql.\nWhat would the benefits be? Able to open and close db connections many\ntimes a second? Any other advantages?\n\nCan't the apps do their own preforking? All they do is preopen their own db\nconnections. 
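Lincoln's alternative above, preconnect once and reuse, is ordinary client-side connection pooling. A generic sketch follows; the `connect` callable is a stand-in for whatever really opens a database session (a libpq connection, say), and nothing in it is PostgreSQL-specific.

```python
import queue

class ConnectionPool:
    """Open `size` connections up front and hand them out on demand,
    so the connect/authenticate cost is paid only once at startup."""
    def __init__(self, connect, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self):
        return self._idle.get()       # blocks if the pool is exhausted

    def release(self, conn):
        self._idle.put(conn)

# Demo with a counting stub in place of a real database connect.
opened = []
def fake_connect():
    opened.append(object())
    return opened[-1]

pool = ConnectionPool(fake_connect, size=2)
used = 0
for _ in range(10):                   # ten "queries", only two real connects
    conn = pool.acquire()
    used += 1
    pool.release(conn)
print("connects:", len(opened), "checkouts:", used)
```

This is the application-side answer to expensive fork() or connection setup: the server never sees a reconnect at all, which is exactly why several posters question whether pre-forking in the backend buys much.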
Then they can take care of whatever initialization and details\nthey want.\n\nIt seems that opening and closing db connections over the network will\nalways be slower than just leaving a prepared connection open, looking at\njust the network connection setup time alone.\n\nI suppose it is helpful for plain cgi scripts, but those don't scale, do they?\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 01 Oct 2001 11:27:25 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > How hard would it be to pre-fork an extra backend\n> >\n> > How are you going to pass the connection socket to an already-forked\n> > child process? AFAIK there's no remotely portable way ...\n> \n> One of the mechanisms I've seen was that the master process\n> just does the socket(), bind(), listen(), than forks off and\n> the children coordinate via a semaphore that at most one of\n> them executes a blocking accept(). I think it was in some\n> older apache release.\n> \n> But in contrast to apache, we currently do most of the\n> initialization after we authenticated the user and know what\n> database to connect to. I'm not sure how much of the backend\n> startup could be done before accepting the connection.\n\nI agree this may not be a big win on most platforms, but for platforms\nlike Solaris and NT, it could be a big win. Added to TODO:\n\n\t* Do listen() in postmaster and accept() in pre-forked backend\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 12:19:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> Gavin Sherry <swm@linuxworld.com.au> writes:\n> \n> > This aside, isn't it possible to just copy the socket and some\n> > data about the database required into shared memory and have the preforked\n> > children pick the socket up from there.\n> \n> Ummm.... No. There's no Unix API for doing so.\n> \n> You can pass open file descriptors across Unix domain sockets on most\n> systems, which is a possible way to address the problem, but probably\n> not worth it for the reasons discussed earlier.\n\nOK, let's assume we have pre-forked backends that do the accept(). One\nenhancement would be for the child to connect to the last requested\ndatabase. If the accept() user wants the same database, it is already\nconnected, or at least its cache is loaded. If they want another one,\nwe can disconnect and connect to the database they request. This would\nbe portable for all OS's because there is no file descriptor passing.\n\nAdded to TODO:\n\n* Have pre-forked backend pre-connect to last requested database or pass\n  file descriptor to backend pre-forked for matching database\n \n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 12 Oct 2001 12:29:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, let's assume we have pre-forked backends that do the accept(). One\n> enhancement would be for the child to connect to the last requested\n> database. 
If the accept() user wants the same database, it is already\n> connected, or at least its cache is loaded. If they want another one,\n> we can disconnect and connect to the database they request. This would\n> be portable for all OS's because there is no file descriptor passing.\n\nThis is bad because you have hidden "connection pooling" that cannot be\ncircumvented, and I guarantee that it will become a problem because "new\nconnection" will no longer equal "new connection". Additionally, you're\nassuming a setup where any new connection will connect to a random (from\nthe server's point of view) database. I claim these setups are not the\nmajority. In fact, any one client application would usually only connect\nto exactly one database, so it might as well keep that connection open.\nFor systems where this is not possible for some reason or where different\ndatabases or connection parameters are really required, there are already\nplenty of solutions available that are tuned or tunable to the situation\nat hand, so your solution would just get in the way. In short, you're\nadding a level of complexity where there is no problem.\n\n> Added to TODO:\n\nI haven't seen a consensus yet.\n\n-- \nPeter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 20:14:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, let's assume we have pre-forked backends that do the accept(). One\n> > enhancement would be for the child to connect to the last requested\n> > database. If the accept() user wants the same database, it is already\n> > connected, or at least its cache is loaded. If they want another one,\n> > we can disconnect and connect to the database they request. 
This would\n> > be portable for all OS's because there is no file descriptor passing.\n> \n> This is bad because you have hidden "connection pooling" that cannot be\n> circumvented, and I guarantee that it will become a problem because "new\n> connection" will no longer equal "new connection". Additionally, you're\n> assuming a setup were any new connection will connect to a random (from\n> the server's point of view) database. I claim these setups are not the\n> majority. In fact, any one client application would usually only connect\n> to exactly one database, so it might as well keep that connection open.\n> For systems were this is not possible for some reason or where different\n> databases or connection parameters are really required, there are already\n> plenty of solutions available that are tuned or tunable to the situation\n> at hand, so your solution would just get in the way. In short, you're\n> adding a level of complexity where there is no problem.\n\nOf course, there needs to be more work on the item. My assumption is that GUC\nwould control this and that perhaps X requests for the same database\nwould have to occur before such pre-loading would start. Another idea\nis to somehow pass the requested database name before the accept() so\nyou could have multiple databases ready to go and have the proper backend\ndo the accept().\n\nI realize this is all pie-in-the-sky but I think we need some connection\npooling capability in the backend someday. We are fine with Apache and\nPHP because they can pool themselves but at some point we have too many\nclients reinventing the wheel rather than having our backend do it.\n\nAlso, this relates to pre-forking backends and does not relate to\nre-using backends, which is another nice feature we should have someday.\n\n> > Added to TODO:\n> \n> I haven't seen a consensus yet.\n\nTrue. I can remove it or improve it. 
It is actually:\n\n* Have pre-forked backend pre-connect to last requested database or pass\n  file descriptor to backend pre-forked for matching database\n\nwhich mentions passing file descriptors to backends, which we have\ndiscussed and should be recorded for posterity.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 13 Oct 2001 15:55:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "How would authentication and access control be done with a preforking\nbackend? I personally find a preforking backend undesirable, but that's just me.\n\nBut if people really want preforking, how about not doing it in the backend?\n\nCreate a small program that makes a few connections to postgresql, does\nsome initialization, preconnects to various DBs (or maybe limited to one DB\nspecified on startup), and listens on one port/socket. It might not even\nprefork, just cache connections so first connection is slow, subsequent\nones are cached along with the user-pass for faster authentication. \n\nThen your apps can connect to that small program, authenticate, and get the\nrelevant connection. Call it a "Listener" if you want ;).\n\nIt does mean double the number of processes. But if done decently it is\nlikely to mean two less complex and less buggy processes, compared to one\nmore complex process. \n\nWould the performance be that much lower using this method? There are other\nconfigurations possible with this approach e.g.:\n\napp--unixsocket--"listener"--SSL--backend on another host.\n\nThis configuration should reduce the TCP and SSL connection set up times\nover a network.\n\nCould have different types of preforkers. 
Then if a certain mode gets very\npopular and performance is insufficient then it could be appropriate to\nmove that mode to the backend.\n\nCheerio,\nLink.\n\nAt 03:55 PM 13-10-2001 -0400, Bruce Momjian wrote:\n>\n>I realize this is all pie-in-the-sky but I think we need some connection\n>pooling capability in the backend someday. We are fine with Apache and\n>PHP becuase they can pool themselves but at some point we have too many\n>clients reinventing the wheel rather than having our backend do it.\n>\n>Also, this relates to pre-forking backends and does not related to\n>re-using backends, which is another nice feature we should have someday.\n>\n>> > Added to TODO:\n>> \n>> I haven't seen a consensus yet.\n>\n>True. I can remove it or improve it. It is actually:\n>\n>* Have pre-forked backend pre-connect to last requested database or pass\n>  file descriptor to backend pre-forked for matching database\n>\n>which mentions passing file descriptors to backends, which we have\n>discussed and should be recorded for posterity.\n\n\n", "msg_date": "Mon, 15 Oct 2001 10:32:51 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Create a small program that makes a few connections to postgresql, does\n> some initialization, preconnects to various DBs (or maybe limited to one DB\n> specified on startup), and listens on one port/socket. It might not even\n> prefork, just cache connections so first connection is slow, subsequent\n> ones are cached along with the user-pass for faster authentication. \n\n> Then your apps can connect to that small program, authenticate, and get the\n> relevant connection. 
Call it a "Listener" if you want ;).\n\nCouple of problems...\n\n(a) where is this outside program going to get authentication\ninformation from?\n\n(b) it seems that not only the authentication exchange, but also all\nsubsequent data exchange of each connection would have to go through\nthis additional program. That middleman is going to become a\nbottleneck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Oct 2001 10:18:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "I was just combing through an Oracle control file, and it occurred to me that a\npre-forking Postgres could be designed to work similarly to Oracle.\n\nOracle has a listener keyed to an IP port. The instance of the listener is\nconfigured to listen for a particular database instance. This may be a bit\nflakey, but hear me out here. This would be similar to having Postgres listen on\ndifferent ports.\n\nA single Postgres instance can start and run as it does now, but it can also be\nconfigured to prefork "listeners" for specified databases on different ports.\nThat way you do not need the main postgres instance to be able to pass the socket\nto the forked child. Also, in the configuration files for the "listeners" you\ncould also specify Postgres settings for each database.\n\n", "msg_date": "Mon, 15 Oct 2001 12:37:50 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend, an idea" }, { "msg_contents": "(I'm having trouble with e-mail, so if you get this twice, sorry)\n\nI was looking at some Oracle configuration files today, and it occurred to me\nhow Postgres can be made to pre-fork, similarly to Oracle.\n\nOracle has "listener" processes that listen on a port for Oracle clients. The\nlisteners are configured for a database. Postgres could work the same way. 
It\ncould start up on port 5432 and work as it always has, and, in addition, it\ncould read a configuration script which directs it to "pre-fork" listeners on\nother ports, one port per database. This would work because they already know\nthe database that they should be ready to use. The back-end does not need to be\ninvolved.\n\nOnce you connect to the pre-forked back end, it will already be ready to\nperform a query because it has already loaded the database. The file which\nconfigures the "pre-forked" database could also contain run-time changeable\ntuning options for each "pre-forked" instance, presumably, because you would\ntune it for each database on which it would operate.\n", "msg_date": "Mon, 15 Oct 2001 17:05:05 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend - new idea" }, { "msg_contents": "At 10:18 AM 15-10-2001 -0400, Tom Lane wrote:\n>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n>> Create a small program that makes a few connections to postgresql, does\n>> some initialization, preconnects to various DBs (or maybe limited to one DB\n>> specified on startup), and listens on one port/socket. It might not even\n>> prefork, just cache connections so first connection is slow, subsequent\n>> ones are cached along with the user-pass for faster authentication. \n>\n>> Then your apps can connect to that small program, authenticate, and get the\n>> relevant connection. 
Call it a "Listener" if you want ;).\n>\n>Couple of problems...\n>\n>(a) where is this outside program going to get authentication\n>information from?\n\nVarious options:\n1) No authentication required by client - authentication supplied on\nstartup/config.\n2) Local authentication - runs as postgres user, reads from postgres files.\n3) Local authentication - from config file, mapped to actual remote\nauthentication\n4) Authentication from remote server, then cached in memory.\n\n>(b) it seems that not only the authentication exchange, but also all\n>subsequent data exchange of each connection would have to go through\n>this additional program. That middleman is going to become a\n>bottleneck.\n\nThe authentication exchange doesn't happen that often, since the DB\nconnections are reused - no reconnection.\n\nTrue, it might be a bottleneck. But in certain setups the middleman is not\nrunning on the DB server and thus not using the DB server resources.\n\n---\nAre there really compelling reasons for having a preforking backend? What\nwould the benefits be? Faster connection setup times? Connecting and\ndisconnecting quickly is important for a webserver because of the HTTP\nprotocol, but for a DB server? Would it really be fast in cases where\nthere's authentication and access control to various databases? \n\nPerhaps it's undesirable for people to roll their own DB connection\npooling. But my worry is that there's such a great diversity that most\npeople may still have to roll their own DB connection pooling, then a\npreforking backend just adds complexity and sucks up a bit more resources\nfor little gain. \n\nFor example, in my case, if connection setup times are a problem, I'd just\npreconnect and reuse the connections for many transactions. Wouldn't that\nstill be much faster than a preforking backend? 
How fast would a preforking\nbackend be?\n\nRegards,\nLink.\n\n", "msg_date": "Tue, 16 Oct 2001 17:52:19 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend " }, { "msg_contents": "On Mon 15 Oct 2001 04:32, you wrote:\n\n\nDBBalancer (http://www.sourceforge.net/projects/dbbalancer/) does something \nlike that.\n\n\n>\n> Create a small program that makes a few connections to postgresql, does\n> some initialization, preconnects to various DBs (or maybe limited to one DB\n> specified on startup), and listens on one port/socket. It might not even\n> prefork, just cache connections so first connection is slow, subsequent\n> ones are cached along with the user-pass for faster authentication.\n>\n\n\n> Then your apps can connect to that small program, authenticate, and get the\n> relevant connection. Call it a "Listener" if you want ;).\n\n\n-- \n\n----------------------------------\nRegards from Spain. Daniel Varela\n----------------------------------\n\nIf you think education is expensive, try ignorance.\n          -Derek Bok (Former Harvard President)\n", "msg_date": "Wed, 17 Oct 2001 19:21:51 +0200", "msg_from": "Daniel Varela Santoalla <dvs@arrakis.es>", "msg_from_op": false, "msg_subject": "Re: Pre-forking backend" } ]
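mlw's port-per-database listener idea from the thread above amounts to tagging one listening socket per database, so whichever process accepts a connection already knows which database the client wants and no descriptor passing is needed. A minimal sketch follows; the database names and the use of ephemeral localhost ports are invented for the demo.

```python
import selectors
import socket

def make_listeners(dbnames, host="127.0.0.1"):
    # One listening socket per database name; the selector key's data
    # field remembers which database a given listener serves.
    sel = selectors.DefaultSelector()
    ports = {}
    for dbname in dbnames:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, 0))             # ephemeral port for the demo
        s.listen(5)
        sel.register(s, selectors.EVENT_READ, data=dbname)
        ports[dbname] = s.getsockname()[1]
    return sel, ports

sel, ports = make_listeners(["alpha", "beta"])

# A client that dials the "beta" port has implicitly named its database.
client = socket.create_connection(("127.0.0.1", ports["beta"]))
key, _ = sel.select(timeout=5)[0]
conn, _ = key.fileobj.accept()
accepted_db = key.data
conn.close()
client.close()
for k in list(sel.get_map().values()):
    k.fileobj.close()
sel.close()
print("connection routed to database:", accepted_db)
```

In a real server each listener would be a pool of children pre-initialized for its database; the sketch only shows how the port itself carries the database choice, which is the part the thread debates.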
[ { "msg_contents": "I've been looking at the iscachable changes you committed recently,\nand I think a lot of them need to be adjusted still.\n\nOne consideration I hadn't thought of recently (though I think we did\ntake it into account for the 7.0 release) is that any function whose\noutput varies depending on the TimeZone variable has to be marked\nnoncachable. This certainly means that some (all?) of the datetime\noutput functions need to be noncachable. I am wondering also if any\nof the type conversion functions depend on TimeZone --- for example,\nwhat are the rules for conversion between timestamptz and timestamp?\n\nThe functions that convert between type TEXT and the datetime types\nneed to be treated the same as the corresponding I/O conversion\nfunctions. For example, text_date is currently marked cachable\nwhich is wrong --- as evidenced by the fact that CURRENT_DATE is\nfolded prematurely:\n\nregression=# create table foo (f1 date default current_date);\nCREATE\nregression=# \\d foo\n          Table \"foo\"\n Column | Type |         Modifiers\n--------+------+----------------------------\n f1     | date | default '2001-09-29'::date\n\nThe two single-parameter age() functions need to be noncachable since\nthey depend on today's date. I also suspect that their implementation\nshould be revised: writing 'today' with no qualifier exposes you to\npremature constant folding. Probably\n\tselect age(current_date::timestamp, $1)\n(or ::timestamptz respectively) would work better.\n\nWhy are only some of the date_part functions cachable? Is this a\ntimezone dependency issue, or just an oversight?\n\nSurely the abstime comparison functions must be cachable (if they can't\nbe, then indexes on abstime are nonsensical...). 
Ditto for all other\nwithin-type comparison functions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Sep 2001 14:54:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "iscachable settings for datetime functions" }, { "msg_contents": "> I've been looking at the iscachable changes you committed recently,\n> and I think a lot of them need to be adjusted still.\n\nSure. Me too ;)\n\nI changed some for the areas within which I was working, and it did\noccur to me that (as you mention below) anything affected as a side\neffect of some other system setting such as default time zone will need\nto be non-cachable.\n\n> One consideration I hadn't thought of recently (though I think we did\n> take it into account for the 7.0 release) is that any function whose\n> output varies depending on the TimeZone variable has to be marked\n> noncachable. This certainly means that some (all?) of the datetime\n> output functions need to be noncachable. I am wondering also if any\n> of the type conversion functions depend on TimeZone --- for example,\n> what are the rules for conversion between timestamptz and timestamp?\n\nRight. Some can be cachable (e.g. timestamp, date, and time do not have\nassociated time zones). I'll look at the other ones asap.\n\n> The functions that convert between type TEXT and the datetime types\n> need to be treated the same as the corresponding I/O conversion\n> functions. For example, text_date is currently marked cachable\n> which is wrong --- as evidenced by the fact that CURRENT_DATE is\n> folded prematurely:\n> \n> regression=# create table foo (f1 date default current_date);\n> CREATE\n> regression=# \\d foo\n> Table \"foo\"\n> Column | Type | Modifiers\n> --------+------+----------------------------\n> f1 | date | default '2001-09-29'::date\n\nHmm. Perhaps the definition for CURRENT_DATE should be recast as a call\nto now() (which happens to return timestamp) or perhaps I should have\nanother function call.
In any case, I agree that text_date() needs to be\nnoncachable.\n\n> The two single-parameter age() functions need to be noncachable since\n> they depend on today's date. I also suspect that their implementation\n> should be revised: writing 'today' with no qualifier exposes you to\n> premature constant folding. Probably\n> select age(current_date::timestamp, $1)\n> (or ::timestamptz respectively) would work better.\n> \n> Why are only some of the date_part functions cachable? Is this a\n> timezone dependency issue, or just an oversight?\n> \n> Surely the abstime comparison functions must be cachable (if they can't\n> be, then indexes on abstime are nonsensical...). Ditto for all other\n> within-type comparison functions.\n\nI stayed away from changes to abstime, since I wasn't working with that\ntype and wanted to limit collateral damage to the other big changes I\nhad made.\n\nI'll propose that we postpone beta until after Marc, Vince, and\n*everyone* agree that the servers are running smoothly (a step already\nsuggested by others). And I'll also ask that we allow my latest\ndate/time changes and the above catalog fixups, which may come about\nbefore the servers settle down.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 22:07:57 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: iscachable settings for datetime functions" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Hmm.
Perhaps the definition for CURRENT_DATE should be recast as a call\n> to now() (which happens to return timestamp)\n\nYes, something like date(now()) might be less subject to breakage as we\nmonkey around with the semantics of literals and implicit coercions.\n\nBTW, although date(now()) seems okay, I got a rather surprising result\nfrom\n\ntemplate1=# select time(now());\nERROR: Bad time external representation '2001-10-01 18:21:53.49-04'\n\nI haven't dug into this, but surmise that it's trying to do something\ninvolving a conversion to text and back. The less ambiguous form\nfails completely:\n\ntemplate1=# select now()::time;\nERROR: Cannot cast type 'timestamp with time zone' to 'time'\n\n> I stayed away from changes to abstime, since I wasn't working with that\n> type and wanted to limit collateral damage to the other big changes I\n> had made.\n\nWell, I'll take responsibility for fixing that, if you want to spread\nthe blame ;-). It's my fault that those routines are marked cachable\nto begin with --- I hadn't dug into which datetime types had \"current\"\nand which didn't, just marked 'em all noncachable on sight.\n\n> I'll propose that we postpone beta until after Marc, Vince, and\n> *everyone* agree that the servers are running smoothly (a step already\n> suggested by others). And I'll also ask that we allow my latest\n> date/time changes and the above catalog fixups, which may come about\n> before the servers settle down.\n\nWe clearly have got to fix the cachability issue, so I have no objection\nto you applying your additional changes if you think they're ready.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 18:27:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: iscachable settings for datetime functions " }, { "msg_contents": "...\n> Well, I'll take responsibility for fixing that, if you want to spread\n> the blame ;-).
It's my fault that those routines are marked cachable\n> to begin with --- I hadn't dug into which datetime types had \"current\"\n> and which didn't, just marked 'em all noncachable on sight.\n\nI'm happy to do it; how about saving your cycles to look at the results.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 22:59:30 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: iscachable settings for datetime functions" } ]
[ { "msg_contents": "In the attached script, second call to test1() causes error:\n\nselect test1();\n test1 \n-------\n 0\n(1 row)\n\nselect test1();\npsql:/home/t-ishii/tmp/aaa.sql:13: NOTICE: Error occurred while executing PL/pgSQL function test1\npsql:/home/t-ishii/tmp/aaa.sql:13: NOTICE: line 5 at select into variables\npsql:/home/t-ishii/tmp/aaa.sql:13: ERROR: Relation 422543 does not exist\n\nMaybe PL/pgSQL cache problem?\n--\nTatsuo Ishii\n----------------------------------------------------------------------\ndrop function test1();\ncreate function test1() returns int as '\ndeclare\nrec RECORD;\nbegin\ncreate temp table temp_aaa (i int);\nselect into rec * from temp_aaa;\ndrop table temp_aaa;\nreturn 0;\nend;\n'language 'plpgsql';\nselect test1();\nselect test1();\n", "msg_date": "Sun, 30 Sep 2001 21:13:02 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "PL/pgSQL bug?" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Maybe PL/pgSQL cache problem?\n\nThis is a well-known problem: plpgsql caches a query plan that refers\nto the first version of the temp table, and it doesn't know it needs\nto rebuild the plan. AFAIK the only workaround at present is to use\nEXECUTE for queries referencing the temp table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 11:35:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL bug? " }, { "msg_contents": "Tatsuo Ishii wrote:\n> In the attached script, second call to test1() causes error:\n\n Well known error.\n\n PL/pgSQL creates saved execution plans for almost every\n expression and query using SPI_prepare(), SPI_saveplan().
If\n any of the objects, referenced from such a plan get's\n dropped, they become invalid and for now, only reconnecting\n to the database can heal.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sun, 30 Sep 2001 12:18:08 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL bug?" }, { "msg_contents": "> This is a well-known problem: plpgsql caches a query plan that refers\n> to the first version of the temp table, and it doesn't know it needs\n> to rebuild the plan. AFAIK the only workaround at present is to use\n> EXECUTE for queries referencing the temp table.\n\nBut EXECUTE does not support select into, does it?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 01 Oct 2001 09:55:15 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL bug? " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> AFAIK the only workaround at present is to use\n>> EXECUTE for queries referencing the temp table.\n\n> But EXECUTE does not support select into, does it?\n\nYou could probably get the result you want using\n\tFOR rec IN EXECUTE text_expression LOOP ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 12:58:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL bug? " } ]
[ { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > Marc,\n> >\n> > it worked, but now I'm again getting:\n> >\n> > cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\n> > cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> > cvs [server aborted]: read lock failed - giving up\n> >\n> > Seems, again wrong permissions\n>\n> Those are directories I just created. They have the same permission as\n> all the other files here. Maybe there is a problem with CVS server\n> creating stuff with the wrong permission.\n\nPlease also check things on the anoncvs server (if it's different).\n\nTake care,\n\nBill\n\n", "msg_date": "Sun, 30 Sep 2001 08:19:20 -0700 (PDT)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": true, "msg_subject": "Re: cvs problem" }, { "msg_contents": "Marc,\n\nwhat's happens with cvs ? I still can't update:\n\ncvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgstattuple' (/projects/cvsroot/pgsql/contrib/pgstattuple/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgstattuple'\ncvs [server aborted]: read lock failed - giving up\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 1 Oct 2001 17:37:41 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "cvs problem" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> what's happens with cvs ?
I still can't update:\n\n> cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgstattuple' (/projects/cvsroot/pgsql/contrib/pgstattuple/#cvs.lock): Permission denied\n> cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgstattuple'\n> cvs [server aborted]: read lock failed - giving up\n\nHmm. That's a new directory that Tatsuo just created. I bet the\numask or group ID that the cvs server is running under is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 12:03:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs problem " }, { "msg_contents": "\nshould be fixed now ...\n\n\nOn Mon, 1 Oct 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> what's happens with cvs ? I still can't update:\n>\n> cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgstattuple' (/projects/cvsroot/pgsql/contrib/pgstattuple/#cvs.lock): Permission denied\n> cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgstattuple'\n> cvs [server aborted]: read lock failed - giving up\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n", "msg_date": "Mon, 1 Oct 2001 12:10:17 -0400 (EDT)", "msg_from": "\"Marc G.
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "Marc,\n\nit worked, but now I'm again getting:\n\ncvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\ncvs [server aborted]: read lock failed - giving up\n\nSeems, again wrong permissions\n\n\tRegards,\n\n\t\tOleg\nOn Mon, 1 Oct 2001, Marc G. Fournier wrote:\n\n>\n> should be fixed now ...\n>\n>\n> On Mon, 1 Oct 2001, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > what's happens with cvs ? I still can't update:\n> >\n> > cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgstattuple' (/projects/cvsroot/pgsql/contrib/pgstattuple/#cvs.lock): Permission denied\n> > cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgstattuple'\n> > cvs [server aborted]: read lock failed - giving up\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 1 Oct 2001 22:11:52 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op":
false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "> Marc,\n> \n> it worked, but now I'm again getting:\n> \n> cvs server: failed to create lock directory for /projects/cvsroot/pgsql/contrib/pgcrypto/expected' (/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock): Permission denied\n> cvs server: failed to obtain dir lock in repository /projects/cvsroot/pgsql/contrib/pgcrypto/expected'\n> cvs [server aborted]: read lock failed - giving up\n> \n> Seems, again wrong permissions\n\nThose are directories I just created. They have the same permission as\nall the other files here. Maybe there is a problem with CVS server\ncreating stuff with the wrong permission.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 15:31:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "\n> Those are directories I just created. They have the same permission as\n> all the other files here. Maybe there is a problem with CVS server\n> creating stuff with the wrong permission.\n\n From my experience, it is likely that you have a umask problem. Or there\nis some other read/write permissions problem for group and other.\n\nPlease dive into the cvs repository and check the files (and directory,\nwhich seems to be lacking group write permission) Bruce.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 21:05:22 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "> \n> > Those are directories I just created. They have the same permission as\n> > all the other files here.
Maybe there is a problem with CVS server\n> > creating stuff with the wrong permission.\n> \n> >From my experience, it is likely that you have a umask problem. Or there\n> is some other read/write permissions problem for group and other.\n> \n> Please dive into the cvs repository and check the files (and directory,\n> which seems to be lacking group write permission) Bruce.\n\nOK, I see:\n\ndrwxrwxr-x 2 momjian pgsql 512 Oct 1 15:53 sql\n\nand in the SQL directory I see:\n\n-r--r--r-- 1 momjian pgsql 1843 Oct 1 12:12 blowfish.sql,v\n-r--r--r-- 1 momjian pgsql 600 Oct 1 12:12 crypt-blowfish.sql,v\n-r--r--r-- 1 momjian pgsql 547 Oct 1 12:12 crypt-des.sql,v\n-r--r--r-- 1 momjian pgsql 559 Oct 1 12:12 crypt-md5.sql,v\n-r--r--r-- 1 momjian pgsql 571 Oct 1 12:12 crypt-xdes.sql,v\n-r--r--r-- 1 momjian pgsql 1529 Oct 1 12:12 hmac-md5.sql,v\n-r--r--r-- 1 momjian pgsql 1536 Oct 1 12:12 hmac-sha1.sql,v\n-r--r--r-- 1 momjian pgsql 350 Oct 1 12:12 init.sql,v\n-r--r--r-- 1 momjian pgsql 694 Oct 1 12:12 md5.sql,v\n-r--r--r-- 1 momjian pgsql 1371 Oct 1 12:12 rijndael.sql,v\n-r--r--r-- 1 momjian pgsql 702 Oct 1 12:12 sha1.sql,v\n\nI don't know what else to look for. Ideas?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 17:10:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "> \n> > Those are directories I just created. They have the same permission as\n> > all the other files here. Maybe there is a problem with CVS server\n> > creating stuff with the wrong permission.\n> \n> >From my experience, it is likely that you have a umask problem.
Or there\n> is some other read/write permissions problem for group and other.\n> \n> Please dive into the cvs repository and check the files (and directory,\n> which seems to be lacking group write permission) Bruce.\n\nThe one thing I can't check is the anoncvs directory. Not sure if that\nis the same as the CVS directory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 17:22:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The one thing I can't check is the anoncvs directory. Not sure if that\n> is the same as the CVS directory.\n\nIt is the anoncvs server that's broken. The committers don't seem to be\nhaving any problem with accesses to the primary server.
Maybe there is a problem with CVS server\n> creating stuff with the wrong permission.\n\n\nDid you change versions of cvs (the software)? I had a little fiddle with it some time ago, and there was a change whereby the newer version didn't do what I wanted.\n\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 07:11:21 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Mon, 1 Oct 2001, Tom Lane wrote:\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The one thing I can't check is the anoncvs directory. Not sure if that\n> > is the same as the CVS directory.\n>\n> It is the anoncvs server that's broken. The committers don't seem to be\n> having any problem with accesses to the primary server. I suspect that\n> there's a umask or group-membership issue on the anoncvs machine only.\n\n\nIt seems you don't have to be new here to be a bit peeved about things;-(\n\n\nI'm peeved because I found instructions on how to checkout with anonymous CVS and did so a few times.\n\nI had a find old time finding and reporting problems (with the software).\n\nThen CVS stopped working because someone thought it a fine idea to reorganise the directory structure, to change the CVSROOT. No matter the user who had the old one stored on their computers.\n\n\nI've report it twice, pointing out that what I did before worked, and that I was doing coincided with what the web pages said.\n\nThere was discussion that the web pages were wrong and who's job it was to fix. As an invited guest, I reckon it's the CVS repository that is wrong. It's wrong because it's different from what worked before.\n\n\nTime to get your act together fellas.\n\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 07:33:52 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem " }, { "msg_contents": "> > The one thing I can't check is the anoncvs directory. 
Not sure if that\n> > is the same as the CVS directory.\n> \n> It is the anoncvs server that's broken. The committers don't seem to be\n> having any problem with accesses to the primary server. I suspect that\n> there's a umask or group-membership issue on the anoncvs machine only.\n\nAck! Before everyone under the sun claims to have found a problem: the \nuser that the anon cvs pserver is running under can't create lock files \nin the src dirs inline w/ the code). A few things can happen:\n\n1) Add the cvs pserver user to the pgsql group and add write privs to \nthe dirs for the user (chmod -R g+w *)\n\n2) Add a LockDir in the cvs config.\n\n cvs co CVSROOT\n cd CVSROOT\n echo \"LockDir /tmp/cvsroot\" >> config\n cvs ci -m \"Adding a central lock dir to the anoncvs server\" config\n mkdir /tmp/cvsroot\n chmod 1777 /tmp/cvsroot\n chown root.wheel /tmp/cvsroot\n\n I think that should do it... -sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 1 Oct 2001 22:07:27 -0700", "msg_from": "Sean Chittenden <sean-pgsql-hackers@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Tue, 2 Oct 2001, John Summerfield wrote:\n\n> On Mon, 1 Oct 2001, Tom Lane wrote:\n>\n>\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The one thing I can't check is the anoncvs directory. Not sure if that\n> > > is the same as the CVS directory.\n> >\n> > It is the anoncvs server that's broken. The committers don't seem to be\n> > having any problem with accesses to the primary server. 
I suspect that\n> > there's a umask or group-membership issue on the anoncvs machine only.\n>\n>\n> It seems you don't have to be new here to be a bit peeved about things;-(\n>\n>\n> I'm peeved because I found instructions on how to checkout with anonymous CVS and did so a few times.\n>\n> I had a find old time finding and reporting problems (with the software).\n>\n> Then CVS stopped working because someone thought it a fine idea to reorganise the directory structure, to change the CVSROOT. No matter the user who had the old one stored on their computers.\n\nGee, I didn't realize we were doing it just cuze \"someone thought it a\nfine idea\"\n\n> I've report it twice, pointing out that what I did before worked, and that I was doing coincided with what the web pages said.\n>\n> There was discussion that the web pages were wrong and who's job it was to fix. As an invited guest, I reckon it's the CVS repository that is wrong. It's wrong because it's different from what worked before.\n\nI just looked in cvs and it looks fine there.\n\n> Time to get your act together fellas.\n\nGee, I didn't realize we were screwing off.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 2 Oct 2001 11:47:59 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: cvs problem " }, { "msg_contents": "On Monday 01 October 2001 07:33 pm, John Summerfield wrote:\n> It seems you don't have to be new here to be a bit peeved about things;-(\n[snip]\n> Time to get your act together fellas.\n\nThis is open source John, not rocket science. 
(pun intended)\n\nLighten up. The release will happen, regardless of minor server issues (that \nare being worked out right now, even as I write, by highly capable \nprofessionals, who, BTW, are doing this on a volunteer basis).\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 2 Oct 2001 12:35:45 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Tue, 2 Oct 2001, Vince Vielhaber wrote:\n\n\n> On Tue, 2 Oct 2001, John Summerfield wrote:\n>\n> > On Mon, 1 Oct 2001, Tom Lane wrote:\n> >\n> >\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > The one thing I can't check is the anoncvs directory. Not sure if that\n> > > > is the same as the CVS directory.\n> > >\n> > > It is the anoncvs server that's broken. The committers don't seem to be\n> > > having any problem with accesses to the primary server. I suspect that\n> > > there's a umask or group-membership issue on the anoncvs machine only.\n> >\n> >\n> > It seems you don't have to be new here to be a bit peeved about things;-(\n> >\n> >\n> > I'm peeved because I found instructions on how to checkout with anonymous CVS and did so a few times.\n> >\n> > I had a find old time finding and reporting problems (with the software).\n> >\n> > Then CVS stopped working because someone thought it a fine idea to reorganise the directory structure, to change the CVSROOT. No matter the user who had the old one stored on their computers.\n>\n> Gee, I didn't realize we were doing it just cuze \"someone thought it a\n> fine idea\"\n\nThen why didn't you put it back the way it had been when it worked? 
When someone does a cvs login, cvs records the value for CVSROOT.\n\nYou made everyone's cvs login fail.\n\nI don't see the sense in that.\n\n\nAnd then you diddle round endlessly instead of telling me what I had to do to fix YOUR mistake.\n\n\nI hope you're in better shape by beta time.\n\n\nAnd instead of componding the problem with rudeness, listen to it. You don't have to like the message or the language, but there are reasons for both.\n\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 07:29:32 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem " }, { "msg_contents": "On Tue, 2 Oct 2001, Lamar Owen wrote:\n\n\n> On Monday 01 October 2001 07:33 pm, John Summerfield wrote:\n> > It seems you don't have to be new here to be a bit peeved about things;-(\n> [snip]\n> > Time to get your act together fellas.\n>\n> This is open source John, not rocket science. (pun intended)\n\nHmm. Kids I was at school with were building rockets in their backyards.\nOSS is similarly a backyard affair. Awhere's the difference;-)\n\n\n>\n> Lighten up. The release will happen, regardless of minor server issues (that\n> are being worked out right now, even as I write, by highly capable\n> professionals, who, BTW, are doing this on a volunteer basis).\n\nI appreciate the volunteer point. However, a project in disarray is a project in disarray\nwhether volunteer or not.\n\nAnd the discussion that followed my original report of the CVS problem looked to me\n more like finger-pointing than a real effort to locate and fix the problem, or to help me on my way.\n\nI know I'm not well-known here, but I've made my contributions in other arenas where I'm stronger.\n\nPG isn't perfect - we all know that. Nor is the project administration. 
When there's a problem identified, someone has to take responsibility for fixing it, and someone has to ensure the person reporting the problem has a way forward.\n\n\nChanging the CVS repository so it doesn't work the same way any more isn't smart. Having wrong documentation isn't smart. Taking two weeks and NOT fixing a simple problem isn't smart. Giving wrong advice isn't smart. Test your advice - where possible I do.\n\n\"Lighten up\" isn't the right response. Examine your project. See what points\nI make have merit. Welcome criticism. You don't have to like the message you know;-)\n\n\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 07:53:43 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "> Changing the CVS repository so it doesn't work the same way any\n> more isn't smart. Having wrong documentation isn't smart. Taking\n> two weeks and NOT fixing a simple problem isn't smart. Giving\n> wrong advice isn't smart. Test your advice - where possible I\n> do.\n\nI will consider this point valid. When we changed CVS download steps,\nwe should have updated the CVS web page right away. I did update the\nSGML version, but for other reasons, the web version didn't update and\nthat caused great confusion.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 4 Oct 2001 11:32:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "\nOn Thu, 4 Oct 2001, John Summerfield wrote:\n\n> On Tue, 2 Oct 2001, Lamar Owen wrote:\n> \n> \n> > On Monday 01 October 2001 07:33 pm, John Summerfield wrote:\n> > > It seems you don't have to be new here to be a bit peeved about things;-(\n> > [snip]\n> > > Time to get your act together fellas.\n> >\n> > This is open source John, not rocket science. (pun intended)\n> >\n> > Lighten up. The release will happen, regardless of minor server issues (that\n> > are being worked out right now, even as I write, by highly capable\n> > professionals, who, BTW, are doing this on a volunteer basis).\n> \n> Changing the CVS repository so it doesn't work the same way any more\n> isn't smart. Having wrong documentation isn't smart. Taking two weeks\n> and NOT fixing a simple problem isn't smart. Giving wrong advice isn't\n> smart. Test your advice - where possible I do.\n> \n> \"Lighten up\" isn't the right response. Examine your project. See what points\n> I make have merit. Welcome criticism. You don't have to like the message you know;-)\n\nActually, I think \"Lighten up\" was a reasonable response, given the tone\nof the message this was in response to which appears to be what Lamar was\nresponding to. Besides, there's a far cry from a message of constructive\ncriticism and the message this was in response to. The point that the\ndocumentation and reality need to match up is a good one, but saying\nthat \"It's wrong because it's different from what worked before\" isn't\nreasonable. Saying, \"This change is unfortunate and did it really have\nto happen and why? And the documentation and the server realities really\nhave to match up. 
Perhaps changing the page first with a note of both\nconfigurations with an estimated time change for the server would have\nbeen better/the right way to do this\" is reasonable.\n\n", "msg_date": "Thu, 4 Oct 2001 08:33:30 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Thu, 4 Oct 2001, John Summerfield wrote:\n\n> I know I'm not well-known here, but I've made my contributions in other arenas where I'm stronger.\n>\n> PG isn't perfect - we all know that. Nor is the project administration. When there's a problem identified, someone has to take responsibility for fixing it, and someone has to ensure the person reporting the problem has a way forward.\n>\n>\n> Changing the CVS repository so it doesn't work the same way any more isn't smart. Having wrong documentation isn't smart. Taking two weeks and NOT fixing a simple problem isn't smart. Giving wrong advice isn't smart. Test your advice - where possible I do.\n>\n> \"Lighten up\" isn't the right response. Examine your project. See what points\n> I make have merit. Welcome criticism. You don't have to like the message you know;-)\n\nDo you know why you're not well known?
With the pissy attitude you're\nshowing here more and more folks tend to just drop your email in the\nbit bucket - like this...\n\n *** PLONK ***\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 4 Oct 2001 11:33:37 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "...\n> I hope you're in better shape by beta time.\n\nHi John. I would highly recommend taking a deep breath and coming back\nin a couple of weeks. Things should have settled down by then and we\nwill have returned to your high standards of service and support. In the\nmeantime, folks are busy getting us there...\n\nRegards.\n\n - Thomas\n", "msg_date": "Thu, 04 Oct 2001 17:51:58 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Wednesday 03 October 2001 07:53 pm, John Summerfield wrote:\n> On Tue, 2 Oct 2001, Lamar Owen wrote:\n> > On Monday 01 October 2001 07:33 pm, John Summerfield wrote:\n> > > Time to get your act together fellas.\n\n> > This is open source John, not rocket science. (pun intended)\n\n> Hmm. Kids I was at school with were building rockets in their backyards.\n> OSS is similarly a backyard affair. Where's the difference;-)\n\nBut that's a _hobby_, not 'rocket science' -- and the pun was that there _is_ \na rocket scientist among us.... Lots of us here are doing this as a hobby.\n\n> > Lighten up.
The release will happen, regardless of minor server issues\n> > (that are being worked out right now, even as I write, by highly capable\n> > professionals, who, BTW, are doing this on a volunteer basis).\n\n> I appreciate the volunteer point. However, a project in disarray is a\n> project in disarray whether volunteer or not.\n\nTwo weeks of disarray versus 5 years of solid performance. You'd think a \ncouple of weeks of temporary pain wouldn't be a big deal.\n\n> PG isn't perfect - we all know that. Nor is the project administration.\n> When there's a problem identified, someone has to take responsibility for\n> fixing it, and someone has to ensure the person reporting the problem has a\n> way forward.\n\nAnd the problem is being addressed. Patience is a good watchword.\n\n> \"Lighten up\" isn't the right response. Examine your project. See what\n> points I make have merit. Welcome criticism. You don't have to like the\n> message you know;-)\n\nNo, I don't have to like the message. But the message can be phrased in a \nmore polite way, as has been pointed out. You were just being a little too \n_serious_ about it, that's all. Give it a week or two, and things will be \nOK. The issues are in Vince and Marc's very capable hands -- but, as Marc \nsaid, this stuff has lived in the same place for >5 years -- and lots of \ninterdependencies had to be addressed. And they _are_ being addressed.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 4 Oct 2001 17:18:29 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Thu, 4 Oct 2001, Lamar Owen wrote:\n\n\n> On Wednesday 03 October 2001 07:53 pm, John Summerfield wrote:\n> > On Tue, 2 Oct 2001, Lamar Owen wrote:\n> > > On Monday 01 October 2001 07:33 pm, John Summerfield wrote:\n> > > > Time to get your act together fellas.\n>\n> > > This is open source John, not rocket science. (pun intended)\n>\n> > Hmm.
Kids I was at school with were building rockets in their backyards.\n> > OSS is similarly a backyard affair. Where's the difference;-)\n>\n> But that's a _hobby_, not 'rocket science' -- and the pun was that there _is_\n> a rocket scientist among us.... Lots of us here are doing this as a hobby.\n\nWell, I don't know the backgrounds of the folk here, any more than you know mine;-)\n\nAnd as far as I can tell, most open-source workers can properly be described as hobbyists.\nI know some get paid for their efforts, but not a lot.\n\n\n\n> > > Lighten up. The release will happen, regardless of minor server issues\n> > > (that are being worked out right now, even as I write, by highly capable\n> > > professionals, who, BTW, are doing this on a volunteer basis).\n>\n> > I appreciate the volunteer point. However, a project in disarray is a\n> > project in disarray whether volunteer or not.\n\nWell, remember I only arrived here in the past two weeks. What I've seen has not been\nreassuring.\n\n>\n> Two weeks of disarray versus 5 years of solid performance. You'd think a\n> couple of weeks of temporary pain wouldn't be a big deal.\n>\n> > PG isn't perfect - we all know that. Nor is the project administration.\n> > When there's a problem identified, someone has to take responsibility for\n> > fixing it, and someone has to ensure the person reporting the problem has a\n> > way forward.\n>\n> And the problem is being addressed. Patience is a good watchword.\n\nIt's hard to believe there's a serious effort being made to fix a problem when all the effort\nI can see has no apparent relationship to the problem.\n\n> > \"Lighten up\" isn't the right response. Examine your project. See what\n> > points I make have merit. Welcome criticism. You don't have to like the\n> > message you know;-)\n>\n> No, I don't have to like the message. But the message can be phrased in a\n> more polite way, as has been pointed out. You were just being a little too\n> _serious_ about it, that's all.
Give it a week or two, and things will be\n\nI don't think I was being rude. It's true I'm no diplomat. I've criticised actions (and,\nI think, with considerable justice), but I've not actually criticised people.\n\nWe all make mistakes, we should all be ready for them to be pointed out.\n\n\n\n> OK. The issues are in Vince and Marc's very capable hands -- but, as Marc\n> said, this stuff has lived in the same place for >5 years -- and lots of\n> interdependencies had to be addressed. And they _are_ being addressed.\n\n\nAnd my point is that something moved. Something that many people (I don't know how many,\nbut thousands wouldn't surprise me) depended on.\n\nI have over thirty years' experience in computing, many of them supporting users. That experience\ntells me that making a change that inconveniences users is a mistake. If the change really must\nbe made, do it so as to reduce the inconvenience as far as possible.\n\n From my other readings I see that the PG team controls the entire disk layout. Given that, I can\nsee no reason that the CVS tree needed to be changed in the way it was.\n\nI still think it should be made to work the old way; both ways, now, as there are people\nwho depend on both structures.\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 5 Oct 2001 09:40:22 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Thu, 4 Oct 2001, Stephan Szabo wrote:\n\n\n> of the message this was in response to which appears to be what Lamar was\n> responding to. Besides, there's a far cry between a message of constructive\n> criticism and the message this was in response to. The point that the\n> documentation and reality need to match up is a good one, but saying\n> that \"It's wrong because it's different from what worked before\" isn't\n> reasonable. Saying, \"This change is unfortunate and did it really have\n> to happen and why?
And the documentation and the server realities really\n> have to match up. Perhaps changing the page first with a note of both\n> configurations with an estimated time change for the server would have\n> been better/the right way to do this\" is reasonable.\n>\n\nHow many people use anonymous CVS? Hundreds? Thousands? Tens of thousands?\n\nYou must have been fixing a pretty serious problem to justify inconveniencing them all. And\nif it really is that serious, add a word of explanation (to forestall problem reports)\nand apology.\n\n\nThat is the point of my complaint. If it was just me, I'd shrug my shoulders and\n(once I figured out how) get on with it.\n\n\n\n\n", "msg_date": "Fri, 5 Oct 2001 10:03:14 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Fri, 5 Oct 2001, John Summerfield wrote:\n\n> I don't think I was being rude. It's true I'm no diplomat. I've\n> criticised actions (and, I think, with considerable justice), but I've\n> not actually criticised people.\n\nHow can your criticisms be 'with considerable justice' (justification?)\nwhen you have no more than 2 weeks' knowledge of the project and/or the ppl\ninvolved in it? For starters, this 'move' has been going on for more than\n2 weeks now, and this last phase has been the one with the most hiccups, as\nit's also been the one with the most ppl affected ...\n\n> And my point is that something moved. Something that many people (I\n> don't know how many, but thousands wouldn't surprise me) depended on.\n\nI'd be surprised if most of what moved affected hundreds ...\n\n> I have over thirty years' experience in computing, many of them\n> supporting users. That experience tells me that making a change that\n> inconveniences users is a mistake.
If the change really must be made,\n> do it so as to reduce the inconvenience as far as possible.\n\nAs you have only been here 2 weeks, what do you know about what did (or\ndidn't) need to be done? Or about the work that went into making the\nwhole change as painless as possible? CVS was the most painful, and there\nwere forewarnings about it, to the extent that we made sure that those\nwith changes to commit weren't affected before they changed ...\n\nTo \"end users\", IMHO, it should have been totally irrelevant too ... remove\nand re-check out the tree ... but, then again, I believe it was PeterE\nthat even posted a 'how to change your CVS to point to the new server'\nscript, so that those users weren't inconvenienced as well ...\n\nif you are going to play with CVS, then deal with problems that *will*\narise as a result of it ... else, download the .tar.gz files like everyone\nelse ...\n\n> From my other readings I see that the PG team controls the entire disk\n> layout. Given that, I can see no reason that the CVS tree needed to be\n> changed in the way it was.\n\nwell, for one, it's easier to remember:\n\n\t/cvsroot\n\nthan\n\n\t/home/projects/pgsql/cvsroot\n\n\n", "msg_date": "Fri, 5 Oct 2001 08:22:41 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Fri, 5 Oct 2001, John Summerfield wrote:\n\n> On Thu, 4 Oct 2001, Stephan Szabo wrote:\n>\n>\n> > of the message this was in response to which appears to be what Lamar was\n> > responding to. Besides, there's a far cry between a message of constructive\n> > criticism and the message this was in response to. The point that the\n> > documentation and reality need to match up is a good one, but saying\n> > that \"It's wrong because it's different from what worked before\" isn't\n> > reasonable. Saying, \"This change is unfortunate and did it really have\n> > to happen and why?
And the documentation and the server realities really\n> > have to match up. Perhaps changing the page first with a note of both\n> > configurations with an estimated time change for the server would have\n> > been better/the right way to do this\" is reasonable.\n> >\n>\n> How many people use anonymous CVS? Hundreds? Thousands? Tens of thousands?\n\nHundreds, maybe ...\n\n> You must have been fixing a pretty serious problem to justify\n> inconveniencing them all. And if it really is that serious, add a word\n> of explanation (to forestall problem reports) and apology.\n\nUmmm, all of these changes and moves were forewarned ... obviously that was\nbefore you came on the scene with your \"in a perfect world\" assessments\n...\n\n> That is the point of my complaint. If it was just me, I'd shrug my\n> shoulders and (once I figured out how) get on with it.\n\nObviously it is/was only you, 'cause nobody else appears to give two shakes\n...\n\n\n", "msg_date": "Fri, 5 Oct 2001 08:24:11 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Thursday 04 October 2001 09:40 pm, John Summerfield wrote:\n> Well, remember I only arrived here in the past two weeks. What I've seen\n> has not been reassuring.\n\nMay I be so bold as to suggest your taking a few hours of your time and \nreading the last six months to a year of the list's archives? :-)\n\nI did just that a little over two years ago when I (somewhat rudely, I may \nadd) grabbed the RPM bull by the specfile horns -- I read the previous two \nyears of archives to get a feel for the culture of the group first. As I \ncan read 1200-1600 words per minute, I'm able to process 10-15k messages per \nday easily. (My daily load is right now around 1k messages per day, along \nwith all my other broadcast engineering related work).
1k messages takes a \nlittle over half an hour.\n\n> It's hard to believe there's a serious effort being made to fix a problem\n> when all the effort I can see has no apparent relationship to the problem.\n\nThe interdependencies run deep into areas that appear unrelated to the \nproblem at hand.\n\n> I don't think I was being rude. It's true I'm no diplomat. I've criticised\n> actions (and, I think, with considerable justice), but I've not actually\n> criticised people.\n\n'You'd better get your act together fellas' was just a tad too critical for \nmy taste, sorry. I'm not known for being a diplomat either.... I've seen \nreal flamewars, too, and your message was certainly not a flame, but just a \ntad too critical. Had I considered it a flame I wouldn't have bothered \nreplying.\n\n> And my point is that something moved. Something that many people (I don't\n> know how many, but thousands wouldn't surprise me) depended on.\n\n> That experience tells me that making a change that inconveniences\n> users is a mistake.\n\nToo much backwards compatibility is also a mistake. If a person is tracking \nthe bleeding-edge CVS, they are already having to watch for initdb-forcing \nchanges -- a CVSROOT change is small potatoes compared to an initdb-force -- \nwhich happens _often_ during the development cycle. There are other major \nchanges that happen -- to the point where a fresh checkout might even be \nnecessary, so that leftover files from previous local builds aren't present. \nThat's just a part of using CVS, in my experience.\n\nAnd this is not the only open source project that has changed CVSROOT on \npeople -- in fact, this one has changed less than any of the other projects \nthat I track with CVS.\n\nJust a momentary pain, really.\n\n> From my other readings I see that the PG team controls the entire disk\n> layout.
Given that, I can see no reason that the CVS tree needed to be\n> changed in the way it was.\n\nCorrection: Marc Fournier controls the entire disk layout, as it's his \nserver. It was his decision to change the layout.\n\n> I still think it should be made to work the old way; both ways, now, as\n> there are people who depend on both structures.\n\nAll you have to do to use the new layout is to blow away your checkout, login \nagain with the new layout, and run another checkout. It doesn't take long to \ndo. And it is now documented again, AFAIK.\n\nIt's a little more complicated for CVSup users, though. And that's where \nThomas (our resident rocket scientist) had his beef, along with the \ndocumentation build process, etc. When docs build automatically, and you \nchange the source document to reflect a change, but the build fails due to \nthat very layout change, it can get pretty ugly. And that's exactly what \nhappened, from my perspective.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 5 Oct 2001 11:05:42 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Fri, 5 Oct 2001, Lamar Owen wrote:\n\n\n> On Thursday 04 October 2001 09:40 pm, John Summerfield wrote:\n> > Well, remember I only arrived here in the past two weeks. What I've seen\n> > has not been reassuring.\n>\n> May I be so bold as to suggest your taking a few hours of your time and\n> reading the last six months to a year of the list's archives? :-)\n\nI'm on several PG lists (and many others). I do entirely enough reading already.\nAnd I don't read as quickly as you do.\n\n\n>\n> > It's hard to believe there's a serious effort being made to fix a problem\n> > when all the effort I can see has no apparent relationship to the problem.\n>\n> The interdependencies run deep into areas that appear unrelated to the\n> problem at hand.\n>\n> > I don't think I was being rude.
It's true I'm no diplomat. I've criticised\n> > actions (and, I think, with considerable justice), but I've not actually\n> > criticised people.\n>\n> 'You'd better get your act together fellas' was just a tad too critical for\n> my taste, sorry. I'm not known for being a diplomat either.... I've seen\n> real flamewars, too, and your message was certainly not a flame, but just a\n> tad too critical. Had I considered it a flame I wouldn't have bothered\n> replying.\n\nI made it clear I was getting frustrated. Read in that context, I think it was\npretty mild;-)\n\n\nEspecially considering the point I saw a project in disarray (and I don't think anyone's\ndisputed that).\n\n\n\n\n> > And my point is that something moved. Something that many people (I don't\n> > know how many, but thousands wouldn't surprise me) depended on.\n>\n> > That experience tells me that making a change that inconveniences\n> > users is a mistake.\n>\n> Too much backwards compatibility is also a mistake. If a person is tracking\n> the bleeding-edge CVS, they are already having to watch for initdb-forcing\n> changes -- a CVSROOT change is small potatoes compared to an initdb-force --\n> which happens _often_ during the development cycle. There are other major\n> changes that happen -- to the point where a fresh checkout might even be\n> necessary, so that leftover files from previous local builds aren't present.\n> That's just a part of using CVS, in my experience.\n\nI'm prepared for the project itself breaking my data - that I expect is likely to happen.\n\nEach time I do a CVS update, I rebuild my binaries and database.\n\n\n>\n> And this is not the only open source project that has changed CVSROOT on\n> people -- in fact, this one has changed less than any of the other projects\n\nI'm ignorant. I don't see why CVSROOT would need to be changed.
I thought\nUnix filesystems were flexible enough to allow CVS to see things as they really aren't,\nperhaps using symlinks.\n\nAnd if one project does change it, I still don't see how that justifies another.\n\n> that I track with CVS.\n>\n> Just a momentary pain, really.\n\n From what I can tell, completely unnecessary. And until someone explains why it\n was necessary, then I won't understand it.\n\n\n>\n> > From my other readings I see that the PG team controls the entire disk\n> > layout. Given that, I can see no reason that the CVS tree needed to be\n> > changed in the way it was.\n>\n> Correction: Marc Fournier controls the entire disk layout, as it's his\n> server. It was his decision to change the layout.\n\nIs Marc part of the team?\n\n\n>\n> > I still think it should be made to work the old way; both ways, now, as\n> > there are people who depend on both structures.\n>\n> All you have to do to use the new layout is to blow away your checkout, login\n> again with the new layout, and run another checkout. It doesn't take long to\n> do. And it is now documented again, AFAIK.\n\n\nAnd this is the first time someone has bothered to tell me, and it's weeks since the problem\n arose and I reported it (I had figured it out and got a new checkout).\n\nInstead of telling me how to go on with my affairs, there ensued a discussion about the documentation being\nwrong, about how the developers corner shouldn't really be there and so on.\n\nAnd replies to my problems enrolling in these lists were no more helpful. I wanted to get on the\nlist because I felt I wasn't seeing all the relevant discussion of other problems, and in particular\nwhat was broken wrt CVS.\n\nNow, I've always thought that the first thing to do when someone has a problem is to give them the solution\nto their problem if the solution is easily had.\n\nIn those two cases it would have taken someone a minute or two to say, 'Sorry 'bout that.
Do this.'\n\nTo be sure, if PostgreSQL itself is broken, that may take longer, but that wasn't the case. The procedures\n(from my point of view) or documentation (from yours) was wrong, and what I needed most was the knowledge\nof what actually works.\n\nAnd referring me to the incorrect documentation didn't serve to persuade me the project's\nrunning well.\n\n\n", "msg_date": "Sat, 6 Oct 2001 07:48:43 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Fri, 5 Oct 2001, Marc G. Fournier wrote:\n\n\n> On Fri, 5 Oct 2001, John Summerfield wrote:\n>\n> > I don't think I was being rude. It's true I'm no diplomat. I've\n> > criticised actions (and, I think, with considerable justice), but I've\n> > not actually criticised people.\n>\n> How can your criticisms be 'with considerable justice' (justification?)\n\nI think my comments to another go to address this. The short form\nis that the particular problems I had required all of two minutes to fix\nby telling me the solution.\n\nWhen I get pointed to the same wrong documents I've already read, then\nI think I'm well justified in concluding that there's something wrong\nand saying so.\n\n>\n> > And my point is that something moved. Something that many people (I\n> > don't know how many, but thousands wouldn't surprise me) depended on.\n>\n> I'd be surprised if most of what moved affected hundreds ...\n\nWhat I can surmise of the scale of your operation suggests it's likely.\n\n\n> > I have over thirty years' experience in computing, many of them\n> > supporting users. That experience tells me that making a change that\n> > inconveniences users is a mistake. If the change really must be made,\n> > do it so as to reduce the inconvenience as far as possible.\n>\n> As you have only been here 2 weeks, what do you know about what did (or\n> didn't) need to be done?
Or about the work that went into making the\n> whole change as painless as possible? CVS was the most painful, and there\n> were forewarnings about it, to the extent that we made sure that those\n> with changes to commit weren't affected before they changed ...\n\n\nForewarning didn't help those arriving at the wrong time. Getting on to the lists\ndidn't work as described, so don't use that as an excuse.\n\n\nMy major concern is not so much the changes that were made, but that the problems\nthat arose were not addressed properly.\n\n>\n> To \"end users\", IMHO, it should have been totally irrelevant too ... remove\n> and re-check out the tree ... but, then again, I believe it was PeterE\n> that even posted a 'how to change your CVS to point to the new server'\n> script, so that those users weren't inconvenienced as well ...\n\nHe might have, but not anywhere that I could find it.\n\n>\n> if you are going to play with CVS, then deal with problems that *will*\n> arise as a result of it ... else, download the .tar.gz files like everyone\n> else ...\n>\n> > From my other readings I see that the PG team controls the entire disk\n> > layout. Given that, I can see no reason that the CVS tree needed to be\n> > changed in the way it was.\n>\n> well, for one, it's easier to remember:\n>\n> \t/cvsroot\n>\n> than\n>\n> \t/home/projects/pgsql/cvsroot\n\nln -s /home/projects/pgsql/cvsroot /cvsroot\n\n\nNobody even took the time to tell me what it had changed to.
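The 'repoint your checkout' script referred to in this thread isn't reproduced here, but the client-side fix it describes can be sketched in a few lines of shell. The repository paths are the ones under discussion; the server name `cvs.example.org` and the demo tree are purely illustrative, and the find/sed approach is an assumption about how such a script would work, not Peter's actual code:

```shell
#!/bin/sh
# Sketch only: repoint an existing CVS checkout at a repository whose
# path moved. A real checkout carries a CVS/Root file in every
# directory; the demo tree below stands in for one.

OLD=/home/projects/pgsql/cvsroot
NEW=/cvsroot

# Build a stand-in checkout with two CVS/Root files (hypothetical host).
mkdir -p demo/pgsql/CVS demo/pgsql/src/CVS
echo ":pserver:anoncvs@cvs.example.org:$OLD" > demo/pgsql/CVS/Root
echo ":pserver:anoncvs@cvs.example.org:$OLD" > demo/pgsql/src/CVS/Root

# Rewrite every CVS/Root file in place ('|' as the sed delimiter,
# since the paths contain slashes).
find demo -path '*/CVS/Root' | while read -r f; do
    sed "s|$OLD|$NEW|" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

cat demo/pgsql/src/CVS/Root   # :pserver:anoncvs@cvs.example.org:/cvsroot
```

The server-side alternative John suggests just above is the symlink, so both the old and the new path keep working.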
I can live with the change if I have to,\nbut first it's essential to have current information, and nobody took (in over two weeks) the time\nto ensure I had that information.\n\n\nNow I don't know why individuals get involved in OSS projects or how they get their rewards,\nbut I'd guess that having happy users is part of it.\n\nAnd users need support, and those who use the CVS repositories you should especially welcome as they're\n(or so I imagine) prospective new team members.\n\nWhat I'm complaining about is neither helpful nor welcoming.\n\n\nNow there have been exceptions - I've also had some very good responses to other problems.\n\n\n\n\n\n\n", "msg_date": "Sat, 6 Oct 2001 08:20:11 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Friday 05 October 2001 07:48 pm, John Summerfield wrote:\n> On Fri, 5 Oct 2001, Lamar Owen wrote:\n> I made it clear I was getting frustrated. Read in that context, I think it\n> was pretty mild;-)\n\nBut it was done in an uninformed way. Had you read the previous two-three \nweeks of the archives, your questions mostly would have been answered. And I \nthink that's the thing that irritated me -- at least try to understand what \nthe culture of the project is before criticizing it. I am reminded of Tom \nLane's first post....(it's in the archives....)\n\n> Especially considering the point I saw a project in disarray (and I don't\n> think anyone's disputed that).\n\nTemporary disarray only. But this project is a truly open project, where no \none person has final say. People will occasionally get upset and disagree \n-- this is normal life in a true open source project. People here tend to \ndisagree in a much more mild way than with some other projects, though.\n\n> I'm ignorant. I don't see why CVSROOT would need to be changed.
I thought\n> Unix filesystems were flexible enough to allow CVS to see things as they\n> really aren't, perhaps using symlinks.\n>\n> Why introduce additional complexity into an already complex system? The next \ntime things need to be shuffled (in a few weeks, months, or years), having to \nremember to un-dangle a symlink might cause another issue. The new CVSROOT \nis shorter and simpler, and it is useless to mung in an unnecessary symlink. \nPeterE had just posted how to get around it without a new checkout -- a \ncursory half-hour browse through the last month's archives would have found \nit.\n\n> > Correction: Marc Fournier controls the entire disk layout, as it's his\n> > server. It was his decision to change the layout.\n\n> Is Marc part of the team?\n\nReference the developers listing on developer.postgresql.org.\n\n> Instead of telling me how to go on with my affairs, there ensued a\n> discussion about the documentation being wrong, about how the developers corner\n> shouldn't really be there and so on.\n\nBecause your report was a symptom of a larger problem -- that of the \nautomatically generated pages not generating properly. Fix the cause, not \nthe symptom.\n\n> Now, I've always thought that the first thing to do when someone has a\n> problem is to give them the solution to their problem if the solution is\n> easily had.\n\nThe first thing that has to be done is to find the real problem. You may not \neven know what the real problem is -- if CVS hadn't broken, something else \nwould have pointed out that the docs weren't being autogenerated any more. And \na temporary fix for a symptom is not nearly as useful as is a cure for the \nreal problem. Once the problem has been found and fixed, the symptom will go \naway.\n\n> To be sure, if PostgreSQL itself is broken, that may take longer, but\n> that wasn't the case.
The procedures (from my point of view) or\n> documentation (from yours) was wrong, and what I needed most was the\n> knowledge of what actually works.\n\n> And referring me to the incorrect documentation didn't serve to persuade me\n> the project's running well.\n\nIf the documentation build procedure was broken (which it was), telling you \nthe correct incantation would have only fixed your individual problem. \nAttempting to fix the doc build process, with your help in saying 'Yes, that \nfixed it' or 'no that didn't' helps the whole userbase, not just you. \n\nPostgreSQL's online documentation is derived from the very same SGML source \ndocumentation that ships with the tarball -- to prevent duplication of \neffort. As the autogen system has been in place for awhile, when a change in \nthe SGML source didn't propagate to the web pages, that problem had to be \nfound before it could be fixed. In the process a few feathers got ruffled, \nbut things are ok as of now, AFAIK.\n\nAgain, it's been a kind of rocky two-three weeks -- but this is momentary. But \nyou have to have context to see its momentary nature -- and only by reading \nthe archives can you obtain proper context.\n\nAs to why things had to change, well, Marc is the person to answer that. \nBut, really, he is under no obligation to answer that, as he's providing his \nservers and bandwidth to this project. They're his, and he can run them as \nhe sees fit. If he wants to justify his actions, then that's ok. If he \ndoesn't want to justify those actions, well, I can't really have a problem \nwith that either -- I'm not the one forking out the money for his bandwidth. \nI'm too busy being thankful that he is taking the time and pouring out the \neffort to keep things running. And things typically have run very well over \nthe last five-plus years that Marc has hosted the project. After all, he \nbasically rescued the whole thing.
But that's context.....\n\nAnd speaking of context, even in my extra-busy state of the last three \nmonths, I was able to follow what was going on, and prepared for it here.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 8 Oct 2001 09:49:46 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Temprary issue gripes(was:Re: cvs problem)" }, { "msg_contents": "\nOn Mon, 8 Oct 2001, Lamar Owen wrote:\n\n\n> On Friday 05 October 2001 07:48 pm, John Summerfield wrote:\n> > On Fri, 5 Oct 2001, Lamar Owen wrote:\n> > I made it clear I was getting frustrated. Read in that context, I think it\n> > was pretty mild;-)\n>\n> But it was done in an uninformed way. Had you read the previous two-three\n> weeks of the archives, your questions mostly would have been answered. And I\n> think that's the thing that irritated me -- at least try to understand what\n> the culture of the project is before criticizing it. I am reminded of Tom\n> Lane's first post....(it's in the archives....)\n\n\nI've spent years helping people (for free, just like you people here), first with OS/2 and then with Linux, specifically Red Hat.\n\nI understand and accept that many questions are asked over and over.\n\nWhere possible I have referred people to existing documentation, often\nwith a shortened explanation.\n\nOften I illustrate answers with clippings from a terminal window.\n\n\nIf I can't test an answer, I give a clue that it's untested.\n>\n> > Especially considering the point I saw a project in disarray (and I don't\n> > think anyone's disputed that).\n>\n> Temporary disarray only. But this project is a truly open project, where no\n> one person has final say. People will occasionally get upset and disagree\n> -- this is normal life in a true open source project. People here tend to\n> disagree in a much more mild way than with some other projects, though.\n>\n> > I'm ignorant. I don't see why CVSROOT would need to be changed.
I thought\n> > Unix filesystems were flexible enough to allow CVS to see things as they\n> > really aren't, perhaps using symlinks.\n>\n> Why introduce additional complexity into an already complex system? The next\n\nSomeone said it had been the old way for five years. It caused me no\ninconveniece - I simply copied and pasted the lengthy name from the web page.\n\nI don't see for whom the long name would be a problem; certainly if\nit has been that way for five years, it couldn't have been a serious\nproblem.\n\n\n\n\n> time things need to be shuffled (in a few weeks, months, or years), having to\n> remember to un-dangle a symlink might cause another issue. The new CVSROOT\n> is shorter and simpler, and it is useless to mung in an unnecessary symlink.\n> PeterE had just posted how to get around it without a new checkout -- a\n> cursory half-hour browse through the last month's archives would have found\n> it.\n\nThe symlink could be used the other way - leave CVSROOT alone so it works\nthe way it 'always has,' create the symlink as a convenience for whoever\nwants it.\n\n\n\n>\n> > > Correction: Marc Fournier controls the entire disk layout, as it's his\n> > > server. It was his decision to change the layout.\n>\n> > Is Marc part of the team?\n>\n> Reference the developers listing on developer.postgresql.org.\n\nyes or no? I don't have web access at present. He contributes to discussions,\nso I guess in at least some sense he is.\n\n>\n> > Instead of telling me how to go on with my affairs, there ensured a\n> > discussion about the documentation being wrong, about the devlopers corner\n> > shouldn't really be there and so on.\n>\n> Because your report was a symptom of a larger problem -- that of the\n> automatically generated pages not generating properly. 
Fix the cause, not\n> the symptom.\n>\n> > Now, I've always thought that the first thing to do when someone has a\n> > problem is to give them the solution to their problem if the solution is\n> > easily had.\n>\n> The first thing that has to be done is to find the real problem. You may not\n> even know what the real problem is -- if CVS hadn't broken, something else\n> would have pointed out that the docs we're being autogenerated any more. And\n> a temporary fix for a symptom is not nearly as useful as is a cure for the\n> real problem. Once the problem has been found and fixed, the symptom will go\n> away.\n\nNo reason at all to make people wait for thich incantation. Someone had the\ncorrect information. Probably a minute to find it and incorproate it\nin a response.\n\n\n>\n> > To be sure if the Postgresql itself is broken, that may take longer, but\n> > that wasn't the case. The procedures (from my point of view) or\n> > documentation (from yours) was wrong, and what I needed most was the\n> > knowledge of what actually works.\n>\n> > And referring me to the incorrect documentation didn't serve to persuade me\n> > the project's running well.\n>\n> If the documentation build procedure was broken (which it was), telling you\n> the correct incantation would have only fixed your individual problem.\n\nFix my problem so I can go on looking for more. Then fix the real problem.\n\nI went to the CVS version because I had problems with the most recent\nrelease. I try not to report fixed problems.\n\nHowever, I do need reasonable support from the developers, and I was only\nseeking a a modest amount of support.\n\n\n> Attempting to fix the doc build process, with your help in saying 'Yes, that\n> fixed it' or 'no that didn't' helps the whole userbase, not just you.\n\nAnyone can verify that the page I mentioned contains correct (or incorrect)\ninformation. 
Doesn't need a say-so from me.\n\n\n\n>\n> PostgreSQL's online documentation is derived from the very same SGML source\n> documentation that ships with the tarball -- to prevent duplication of\n> effort. As the autogen system has been in place for awhile, when a change in\n> the SGML source didn't propagate to the web pages, that problem had to be\n> found before it could be fixed. In the process a few feathers got ruffled,\n> but things are ok as of now, AFAIK.\n>\n> Again, it's been a kindof rocky two-three weeks --but this is momentary. But\n> you have to have context to see its momentary nature -- and only by reading\n> the archives can you obtain proper context.\n\nI don't ordinarily have web access. Archives are inconvenient. And, in\nmy experience, somewhat hard to use. It can be hard to find a specific topic\n- too many synomyms - and often they're too voluminous, and unless you have\na high-bandwidth service (I don't) slow.\n\n\n\n", "msg_date": "Tue, 9 Oct 2001 09:37:30 +0800 (WST)", "msg_from": "John Summerfield <pgtest@os2.ami.com.au>", "msg_from_op": false, "msg_subject": "Re: Temprary issue gripes(was:Re: cvs problem)" }, { "msg_contents": "On Monday 08 October 2001 09:37 pm, John Summerfield wrote:\n> I don't see for whom the long name would be a problem; certainly if\n> it has been that way for five years, it couldn't have been a serious\n> problem.\n\nAsk Marc why he changed it.\n\n> > > > Correction: Marc Fournier controls the entire disk layout, as it's\n> > > > his server. It was his decision to change the layout.\n\n> > > Is Marc part of the team?\n\n> > Reference the developers listing on developer.postgresql.org.\n\n> yes or no? I don't have web access at present. 
He contributes to\n> discussions, so I guess in at least some sense he is.\n\nHe is one of the six core developers; maintains the postgresql.org server \n(which he donates); maintains the network bandwidth which we all enjoy; \ncoordinates releases; runs the mailing list, ftp, web, CVS, CVSup, and \nnesgroup services; is President of PostgreSQL, Inc, who provides first-rate \ncommercial support for PostgreSQL; is chief cook and bottlewasher; and \nanything else I may have left out. He is one of the first four who took the \nalso-ran Postgres95 and turned it into the real database known as PostgreSQL.\n\nSo, yes, I guess you could say he's part of the team... :-)\n\n> > > Instead of telling me how to go on with my affairs, there ensured a\n> > > discussion about the documentation being wrong, about the devlopers\n> > > corner shouldn't really be there and so on.\n\n> > Because your report was a symptom of a larger problem -- that of the\n> > automatically generated pages not generating properly. Fix the cause,\n> > not the symptom.\n\n> No reason at all to make people wait for thich incantation. Someone had the\n> correct information. Probably a minute to find it and incorproate it\n> in a response.\n\nDid you or did you not post the question to pgsql-hackers? This list isn't \nfor just telling people how to solve problems -- that is what admin, general, \nports, etc are for. The hackers list is the developers list, where the \ndevelopers talk through development problems. So, directly answering your \nquestion wasn't the top priority -- fixing the larger problem was.\n\n> However, I do need reasonable support from the developers, and I was only\n> seeking a a modest amount of support.\n\nIf you're going to run CVS or even beta versions, you had better be ready to \ndo alot of your own support. 
I'm not trying to be mean or arrogant, either -- \nif a change in CVSROOT and a lack of docs is too upsetting, wait on the final \nrelease, or a release candidate, where things are much more polished. \nBleeding edge sometimes cuts -- and I've been there.\n\n> I don't ordinarily have web access. Archives are inconvenient. And, in\n> my experience, somewhat hard to use. It can be hard to find a specific\n> topic - too many synomyms - and often they're too voluminous, and unless\n> you have a high-bandwidth service (I don't) slow.\n\nI can sympathize to an extent with that, but I again have to go back to what \nirked me -- you made an uninformed critical remark that had nothing to do \nwith your question. Don't make critical remarks about a process or project \nof which workings you are ignorant. That isn't meant to be demeaning -- I \ntry to follow that very same advice, as it was given to me long ago by none \nother than Jonathan Kamens. And it takes more than just a couple of weeks \nreading the list to get familiar with the workingsof a project this size.\n\nThat's really all I have to say about that on-list.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 10 Oct 2001 17:31:53 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Temprary issue gripes(was:Re: cvs problem)" } ]
[ { "msg_contents": "\nnow that the 18gig is in place and I have space once more to work,\nftp.postgresql.org now points at the new server (~ftp for those that\nlogin) ... let me know if there are any problems ...\n\n\n\n", "msg_date": "Sun, 30 Sep 2001 16:49:15 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "ftp.postgresql.org points to new server ..." }, { "msg_contents": "> \n> now that the 18gig is in place and I have space once more to work,\n> ftp.postgresql.org now points at the new server (~ftp for those that\n> login) ... let me know if there are any problems ...\n\nI wonder if we should be allowing people to use non-anonymous ftp. There is\nno password protection there. I use scp and like it a lot. It is less\ninteractive than ftp but is easier to automate in scripts.\n\nI believe Peter's issue is that the web server is on a different machine\nfrom our accounts and there is no way to get the SGML output into the\nweb server. I have created an HTML copy of the SGML here so that should\nwork for people.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 30 Sep 2001 18:12:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ftp.postgresql.org points to new server ..." }, { "msg_contents": "\nweb server and ftp server are now on the same machine, actually .....\neverything is now merged together once more ...\n\nOn Sun, 30 Sep 2001, Bruce Momjian wrote:\n\n> >\n> > now that the 18gig is in place and I have space once more to work,\n> > ftp.postgresql.org now points at the new server (~ftp for those that\n> > login) ... let me know if there are any problems ...\n>\n> I wonder if we should be allowing people to use non-anonymous ftp. 
There is\n> no password protection there. I use scp and like it a lot. It is less\n> interactive than ftp but is easier to automate in scripts.\n>\n> I believe Peter's issue is that the web server is on a different machine\n> from our accounts and there is no way to get the SGML output into the\n> web server. I have created an HTML copy of the SGML here so that should\n> work for people.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Sun, 30 Sep 2001 22:16:28 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: ftp.postgresql.org points to new server ..." }, { "msg_contents": "Marc,\n\nwill hub.org::postgresql-ftp works\n\n\tOleg\nOn Sun, 30 Sep 2001, Marc G. Fournier wrote:\n\n>\n> now that the 18gig is in place and I have space once more to work,\n> ftp.postgresql.org now points at the new server (~ftp for those that\n> login) ... let me know if there are any problems ...\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 1 Oct 2001 09:36:32 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: ftp.postgresql.org points to new server ..." 
}, { "msg_contents": "\nits stale right now, but there have been no changes to ftp yet anyway ...\nvince has a bit of work to do before we break all the mirrors and force\nthem to point to the new server ... almost done, so close, so close ...\n\n\n On Mon, 1 Oct 2001, Oleg Bartunov\nwrote:\n\n> Marc,\n>\n> will hub.org::postgresql-ftp works\n>\n> \tOleg\n> On Sun, 30 Sep 2001, Marc G. Fournier wrote:\n>\n> >\n> > now that the 18gig is in place and I have space once more to work,\n> > ftp.postgresql.org now points at the new server (~ftp for those that\n> > login) ... let me know if there are any problems ...\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n", "msg_date": "Mon, 1 Oct 2001 08:44:06 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: ftp.postgresql.org points to new server ..." }, { "msg_contents": "> \n> web server and ftp server are now on the same machine, actually .....\n> everything is now merged together once more ...\n> \n\nBut are our accounts/CVS on the same machines as the web server now?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 13:54:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ftp.postgresql.org points to new server ..." 
}, { "msg_contents": "\nyes\n\nOn Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> >\n> > web server and ftp server are now on the same machine, actually .....\n> > everything is now merged together once more ...\n> >\n>\n> But are our accounts/CVS on the same machines as the web server now?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Mon, 1 Oct 2001 14:57:59 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: ftp.postgresql.org points to new server ..." } ]
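Bruce's point above — that scp is "less interactive than ftp but is easier to automate in scripts" — can be sketched as a tiny non-interactive wrapper. The host, destination path, and file names below are hypothetical illustrations, not the actual postgresql.org layout:

```python
import subprocess

def scp_upload(files, dest="user@ftp.postgresql.org:/pub/incoming/", dry_run=True):
    """One scp invocation, no ftp dialogue; dest is a made-up example."""
    cmd = ["scp", *files, dest]
    if dry_run:
        # Show what would run, without needing network access or keys.
        return " ".join(cmd)
    subprocess.run(cmd, check=True)
    return None

print(scp_upload(["postgresql-7.2devel.tar.gz"]))
```

With a key agent loaded, dropping `dry_run` makes this a one-line cron-able upload, which is the automation advantage over scripted ftp dialogues.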
[ { "msg_contents": "I have just found a nasty portability problem in the new statistics\nstuff. It assumes that it can store int64 values in hashtable entries\n... but the dynahash module hands back pointers that are only aligned\nto sizeof(long). On a machine where int64 has to be 8-byte-aligned,\ninstant coredump.\n\nI hadn't noticed this before, even though HPPA is just such an\narchitecture, because gcc emits pairs of single-word load and store\ninstructions for int64, not doubleword load and store. As soon as\nI recompiled with HP's compiler, pgstat didn't work anymore.\n\nI think the cleanest answer is to fix dynahash so that it gives back\nMAXALIGN'd pointers. It looks like this will not waste a significant\namount of space, and it'll probably be a necessary change in the long\nrun anyway.\n\nWhile I'm messing with this, I think I will also change the API for\nhash_create so that one specifies the key size and the total entry\nsize, not the key size and data size to be added together. It's\ntoo easy to get the alignment considerations wrong with the present\nAPI --- there are a number of places that are presently silently\nassuming that there's no padding in their hashtable entry structs,\neg ri_triggers.c. And I observe that pgstat itself thinks that that's\nwhat the API is anyway...\n\nComments, objections? I think this is a must-fix-before-beta item.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Sep 2001 20:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgstat dumps core if alignof(int64) > alignof(long)" } ]
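The fix Tom describes — having dynahash hand back MAXALIGN'd pointers — comes down to rounding allocation offsets up to the strictest alignment any stored type needs. A sketch of that rounding arithmetic (in Python rather than the C macro, assuming an 8-byte worst case as for int64 on strict-alignment machines like HPPA):

```python
MAX_ALIGN = 8  # assumed worst case: 8 bytes, as int64 requires on HPPA

def maxalign(offset):
    """Round offset up to the next MAX_ALIGN boundary -- the same
    arithmetic PostgreSQL's MAXALIGN macro applies to offsets/pointers."""
    return (offset + MAX_ALIGN - 1) & ~(MAX_ALIGN - 1)

# A pointer aligned only to sizeof(long) == 4 can land at an offset like 12;
# storing an int64 there traps on strict-alignment hardware.
print(maxalign(12))  # -> 16, now safe for an 8-byte int64
print(maxalign(16))  # -> 16, already aligned, so no space is wasted
```

As the second call shows, already-aligned offsets pass through unchanged, which is why rounding every hashtable entry this way costs little space.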
[ { "msg_contents": "I just grabbed cvs tip this afternoon and ran into two issues:\n\n- First one is that the regression fails on \"geometry\" on what appears to be\na difference in the 13th decimal place of the output value. See the attached\nregression diff.\n- Second was on reloading my data, I got the following error message (4\ntimes):\n ERROR: operator class \"timestamp_ops\" does not accept data type\ntimestamp with time zone\n\nI've not seen either of these problems before, and I've gone through this\nroutine at least a dozen times in the past month or so. The second problem\nis caused by the following statements (auto generated by pg_dumpall):\n\nCREATE UNIQUE INDEX \"lda_idx_1\" on \"laser_diagnostic_activity\" using btree\n( \"laser_sn\" \"varchar_ops\", \"diag_run_date\" \"timestamp_ops\" );\nCREATE INDEX \"ldi_idx_1\" on \"laser_diagnostic_items\" using btree (\n\"laser_sn\" \"varchar_ops\", \"diag_run_date\" \"timestamp_ops\", \"diag_type\"\n\"bpchar_ops\", \"diag_item\" \"int4_ops\" );\nCREATE INDEX \"lv_processed_hdr_idx4\" on \"lv_processed_hdr\" using btree (\n\"lv_dt\" \"timestamp_ops\" );\nCREATE INDEX \"lv_processed_hdr_idx3\" on \"lv_processed_hdr\" using btree (\n\"laser_type\" \"varchar_ops\", \"lv_dt\" \"timestamp_ops\" );\n\nAnd the related tables look like:\n\nCREATE TABLE \"laser_diagnostic_activity\" (\n \"lda_id\" integer DEFAULT\nnextval('\"laser_diagnostic_act_lda_id_seq\"'::text) NOT NULL,\n \"laser_sn\" character varying(15) NOT NULL,\n \"diag_run_date\" timestamp with time zone NOT NULL,\n \"diag_ul_date\" timestamp with time zone NOT NULL,\n Constraint \"laser_diagnostic_activity_pkey\" Primary Key (\"lda_id\")\n);\n\nCREATE TABLE \"laser_diagnostic_items\" (\n \"ldi_id\" integer DEFAULT\nnextval('\"laser_diagnostic_ite_ldi_id_seq\"'::text) NOT NULL,\n \"laser_sn\" character varying(15) NOT NULL,\n \"diag_run_date\" timestamp with time zone NOT NULL,\n \"diag_type\" character(1) NOT NULL,\n \"diag_item\" integer NOT NULL,\n 
\"diag_description\" character varying(20) NOT NULL,\n \"diag_value\" character varying(24),\n \"diag_units\" character varying(10),\n \"lda_id\" integer,\n Constraint \"laser_diagnostic_items_pkey\" Primary Key (\"ldi_id\")\n);\n\nCREATE TABLE \"lv_processed_hdr\" (\n \"hdr_id\" integer NOT NULL,\n \"laser_sn\" character varying(15) NOT NULL,\n \"config_id\" integer,\n \"file_path\" character varying(255) NOT NULL,\n \"file_name\" character varying(255) NOT NULL,\n \"lv_dt\" timestamp with time zone NOT NULL,\n \"orig_lv_file\" character varying(255) NOT NULL,\n \"app_ver\" character varying(50) NOT NULL,\n \"lte_name\" character varying(50) NOT NULL,\n \"lte_login\" character varying(50) NOT NULL,\n \"integrator\" character varying(25) NOT NULL,\n \"laser_type\" character varying(50) NOT NULL,\n \"test_name\" character varying(50) NOT NULL,\n \"data_set_name\" character varying(50) NOT NULL,\n \"data_set_id\" character varying(50) NOT NULL\n);\n\nAny suggestions?\n\nThanks,\n\nJoe", "msg_date": "Sun, 30 Sep 2001 19:36:24 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "cvs tip problems" }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> I just grabbed cvs tip this afternoon and ran into two issues:\n\n> - First one is that the regression fails on \"geometry\" on what appears to be\n> a difference in the 13th decimal place of the output value. See the attached\n> regression diff.\n\nLooks like it was not you that changed, but Thomas' reference machine.\nWhat platform are you on, anyway?\n\n> - Second was on reloading my data, I got the following error message (4\n> times):\n> ERROR: operator class \"timestamp_ops\" does not accept data type\n> timestamp with time zone\n\nOh, ye olde change-of-opclass-name problem. I've stuck a hack into\ngram.y as we've done in the past, but I'm starting to think that we\nneed a better answer going forward. 
Maybe pg_dump could be tweaked to\nnot dump opclass names if they are default opclasses?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 01:29:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems " }, { "msg_contents": "> > - First one is that the regression fails on \"geometry\" on what appears\nto be\n> > a difference in the 13th decimal place of the output value. See the\nattached\n> > regression diff.\n>\n> Looks like it was not you that changed, but Thomas' reference machine.\n> What platform are you on, anyway?\n>\nNot sure if this question was for Thomas or me, but for the record:\n\ni686 arch (AMD Athlon CPU), Red Hat 7.0 with lots of updates and a 2.4.2\nkernel compiled from source.\ntest=# select version();\n version\n----------------------------------------------------------------\n PostgreSQL 7.2devel on i686-pc-linux-gnu, compiled by GCC 2.96\n\n\n> > - Second was on reloading my data, I got the following error message (4\n> > times):\n> > ERROR: operator class \"timestamp_ops\" does not accept data type\n> > timestamp with time zone\n>\n> Oh, ye olde change-of-opclass-name problem. I've stuck a hack into\n> gram.y as we've done in the past, but I'm starting to think that we\n> need a better answer going forward. Maybe pg_dump could be tweaked to\n> not dump opclass names if they are default opclasses?\n>\n\nThat sounds like a good plan to me. 
I was able to rebuild the indexes last\nnight by changing \"timestamp_ops\" to \"timestamptz_ops\", but it sure wasn't\nintuitive.\n\nThanks,\n\nJoe\n\n\n\n", "msg_date": "Mon, 1 Oct 2001 08:50:50 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: cvs tip problems " }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I would propose that the reference machine be one that one of the\n> \"committers\" owns, and be one whose owner is willing to *always* go\n> through the effort to resolve regression test changes and differences.\n\nEr ... wasn't that *you*?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 15:05:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems " }, { "msg_contents": "> > - First one is that the regression fails on \"geometry\" on what appears to be\n> > a difference in the 13th decimal place of the output value. See the attached\n> > regression diff.\n> Looks like it was not you that changed, but Thomas' reference machine.\n> What platform are you on, anyway?\n\nfwiw, the former regression test results had column alignment problems,\nseemingly from psql! Not sure why those were there.\n\nI'm running Mandrake 7.2 which has glibc-2.1.3, and I've had regression\ntest failures on *my* machine for quite some time now (which went away,\nof course, after the last updates). 
\n\nI would propose that the reference machine be one that one of the\n\"committers\" owns, and be one whose owner is willing to *always* go\nthrough the effort to resolve regression test changes and differences.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 19:06:17 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems" }, { "msg_contents": "> > I would propose that the reference machine be one that one of the\n> > \"committers\" owns, and be one whose owner is willing to *always* go\n> > through the effort to resolve regression test changes and differences.\n> Er ... wasn't that *you*?\n\nYes. At the moment my toes are bright red and sore from being stepped\non, and I'm trying to get out of the way or figure out what the problems\nare. I'm happy to continue to contribute things like this, but don't\nlike being held up then bypassed then ignored (cf current docs building\ntroubles). I'm frustrated.\n\nI'm trying to keep an open mind about all this to help move in a\ndifferent direction if that is desired and/or necessary. That's all.\n\n - Thomas\n", "msg_date": "Mon, 01 Oct 2001 21:01:10 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems" }, { "msg_contents": "> > > I would propose that the reference machine be one that one of the\n> > > \"committers\" owns, and be one whose owner is willing to *always* go\n> > > through the effort to resolve regression test changes and differences.\n> > Er ... wasn't that *you*?\n> \n> Yes. At the moment my toes are bright red and sore from being stepped\n> on, and I'm trying to get out of the way or figure out what the problems\n> are. I'm happy to continue to contribute things like this, but don't\n> like being held up then bypassed then ignored (cf current docs building\n> troubles). 
I'm frustrated.\n\nI think we are all just scrambing to get beta ready while the server\nreconfigures itself. :-) I don't see any fundamental changes being\nproposed. We are trying to plug leaks and are stepping on toes, or at\nleast it looks that way sometimes. :-) I can yank my CVS build if it\ncauses confusion once we get the main one working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 17:07:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems" }, { "msg_contents": "On Monday 01 October 2001 05:07 pm, Bruce Momjian wrote:\n> I think we are all just scrambing to get beta ready while the server\n> reconfigures itself. :-) I don't see any fundamental changes being\n> proposed. We are trying to plug leaks and are stepping on toes, or at\n> least it looks that way sometimes. :-) I can yank my CVS build if it\n> causes confusion once we get the main one working.\n\nIMHO, we should have the server configuration working before going beta. \nEspecially little details like documentation. :-O \n\nBut I tend to believe that's EXACTLY what Marc meant when he said we weren't \nready to go beta.\n\nSo, let's all please take a deep breath, count ten, and see what the server \nsituation develops into. After all, we all want the beta to go smoothly -- \nand differences in the present and past server configs are not making a \nsmooth beta practical right now. 
Once we work through the differences, and \nget things smooth again, Marc's hard work in upgrading the server situation \nis sure to reap benefits if we'll be patient as he works out the kinks.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 1 Oct 2001 17:19:24 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems" }, { "msg_contents": ">> ERROR: operator class \"timestamp_ops\" does not accept data type\n>> timestamp with time zone\n\n> Oh, ye olde change-of-opclass-name problem. I've stuck a hack into\n> gram.y as we've done in the past, but I'm starting to think that we\n> need a better answer going forward. Maybe pg_dump could be tweaked to\n> not dump opclass names if they are default opclasses?\n\nNot having heard any objections, I have done this. We're still stuck\nwith needing the hack for 7.2, but perhaps future rearrangements of\nindex opclasses will be less painful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 17:41:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems " }, { "msg_contents": "On Mon, 1 Oct 2001, Bruce Momjian wrote:\n\n> > > > I would propose that the reference machine be one that one of the\n> > > > \"committers\" owns, and be one whose owner is willing to *always* go\n> > > > through the effort to resolve regression test changes and differences.\n> > > Er ... wasn't that *you*?\n> >\n> > Yes. At the moment my toes are bright red and sore from being stepped\n> > on, and I'm trying to get out of the way or figure out what the problems\n> > are. I'm happy to continue to contribute things like this, but don't\n> > like being held up then bypassed then ignored (cf current docs building\n> > troubles). I'm frustrated.\n>\n> I think we are all just scrambing to get beta ready while the server\n> reconfigures itself. :-) I don't see any fundamental changes being\n> proposed. 
We are trying to plug leaks and are stepping on toes, or at\n> least it looks that way sometimes. :-) I can yank my CVS build if it\n> causes confusion once we get the main one working.\n\nthe reason what we have not gone beta as of yet, and will not for a little\nwhlie yet, is the disruption caused by re-merging all of the functionality\non the new server ...\n\ndocs have never been a hold up for beta in the past, not sure why you\nconsider them to be now, but they definitely shouldn't be something that\nis 'rushed to go beta' ... tangents to bandaid a problem that needs to be\nfixed, like the docs build, just detract from everything else that has to\nget done, I think ...\n\n\n", "msg_date": "Tue, 2 Oct 2001 08:22:40 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvs tip problems" } ]
[ { "msg_contents": "Per Tom's request (1000 concurrent backends), I tried current on IBM\nAIX 5L and found that make check hangs:\n\nparallel group (13 tests): float4 oid varchar\n\npgbench hangs too if more than 4 or so concurrent backends are\ninvolved. Unfortunately gdb does not work well on AIX, so I'm stuck.\nMaybe the new locking code?\n\nBTW PostgreSQL 7.1.3 works fine.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 01 Oct 2001 13:15:12 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Problem on AIX with current" } ]
[ { "msg_contents": "> [HACKERS] What executes faster? \n> Now that I've found the solution for my duplicate key problem, \n> I'm wondering what executes faster when I have to check for \n> duplicates. \n> 1. try to update \n> if no row affected -> do the insert \n> else done \n> 2. do a select \n> if row not found -> do the insert \n> else do the update \n> Another idea I'm thinking about: \n> I'm doing the check for duplicate key by myself now. \n> Aren't insert commands running faster if I replace \n> a unique index by a not-unique index. \n\nI have solved an almost identical problem.\nI have a large table (about 8 million rows) called radius and a table with \nupdates and new lines called radiusupdate.\nThe first thing I tried was 2 queries:\nupdate radius \n from radiusupdate \n where radius.pk = radiusupdate.pk\n\ninsert into radius \nselect * \n from radiusupdate RU\n where RU.pk not in (select pk from radius)\n\nBut the second one is obviously not very fast. A \"not in\" never is... So I \nnow do things just a little bit differently. I added a field to the table \nradiusupdate called \"newline\". It defaults to true. Then I replace \nthe second query by these two:\n\nupdate radiusupdate\n set newline = false\n from radius R\n where radiusupdate.pk = radius.pk\n\ninsert into radius\nselect *\n from radiusupdate RU\n where newline = true\n\nThis is a lot faster in my case....\n\nReinoud\n\n", "msg_date": "Mon, 1 Oct 2001 11:39:45 +0200 (CEST)", "msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: What executes faster?" }, { "msg_contents": "[HACKERS] What executes faster? \nNow that I've found the solution for my duplicate key problem, \nI'm wondering what executes faster when I have to check for \nduplicates. \n 1. try to update \n if no row affected -> do the insert \n else done \n 2. do a select \n if row not found -> do the insert \n else do the update \nAnother idea I'm thinking about: \nI'm doing the check for duplicate key by myself now. \nAren't insert commands running faster if I replace \na unique index by a not-unique index. \n\nRegards, Christoph \n", "msg_date": "Mon, 01 Oct 2001 11:24:11 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "What executes faster? " } ]
[ { "msg_contents": "OS: FreeBSD4.3\n\nDiagnostic:\ngcc -g -Wall -Wmissing-prototypes -Wmissing-declarations \n-I../../../../src/include -c -o spin.o spin.c\nIn file included from /usr/include/sys/sem.h:13,\n from spin.c:26:\n/usr/include/sys/ipc.h:54: syntax error before `ushort'\n/usr/include/sys/ipc.h:95: syntax error before `ftok'\n/usr/include/sys/ipc.h:95: warning: data definition has no type or storage class\nIn file included from spin.c:26:\n/usr/include/sys/sem.h:20: syntax error before `u_short'\n/usr/include/sys/sem.h:23: syntax error before `time_t'\n/usr/include/sys/sem.h:34: syntax error before `u_short'\n/usr/include/sys/sem.h:48: syntax error before `u_short'\n/usr/include/sys/sem.h:103: syntax error before `int'\n\nPatch:\n*** src/backend/storage/lmgr/spin.c.orig Mon Oct 1 14:13:01 2001\n--- src/backend/storage/lmgr/spin.c Mon Oct 1 14:16:15 2001\n***************\n*** 23,28 ****\n--- 23,29 ----\n\n #include <errno.h>\n #ifdef HAVE_SYS_SEM_H\n+ #include <sys/types.h>\n #include <sys/sem.h>\n #endif\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Mon, 01 Oct 2001 14:20:33 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Current CVS: compilation error" }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n> OS: FreeBSD4.3\n> \n> Diagnostic:\n> gcc -g -Wall -Wmissing-prototypes -Wmissing-declarations \n> -I../../../../src/include -c -o spin.o spin.c\n> In file included from /usr/include/sys/sem.h:13,\n> from spin.c:26:\n> /usr/include/sys/ipc.h:54: syntax error before `ushort'\n> /usr/include/sys/ipc.h:95: syntax error before `ftok'\n> /usr/include/sys/ipc.h:95: warning: data definition has no type or storage class\n> In file included from spin.c:26:\n> /usr/include/sys/sem.h:20: syntax error before `u_short'\n> /usr/include/sys/sem.h:23: syntax error before `time_t'\n> /usr/include/sys/sem.h:34: syntax error before `u_short'\n> /usr/include/sys/sem.h:48: syntax error before `u_short'\n> /usr/include/sys/sem.h:103: syntax error before `int'\n> \n> Patch:\n> *** src/backend/storage/lmgr/spin.c.orig Mon Oct 1 14:13:01 2001\n> --- src/backend/storage/lmgr/spin.c Mon Oct 1 14:16:15 2001\n> ***************\n> *** 23,28 ****\n> --- 23,29 ----\n> \n> #include <errno.h>\n> #ifdef HAVE_SYS_SEM_H\n> + #include <sys/types.h>\n> #include <sys/sem.h>\n> #endif\n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 13:52:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS: compilation error" } ]
[ { "msg_contents": "\n Maybe it's cosmetic, but:\n\n - \"Migration to 7.1\" should be \"Migration to 7.2\"\n \n Missing:\n \n - information that some parts of the interface are translated to \n Swedish, Russian, French, German and Czech.\n\n (IMHO the right place for information about Peter's NLS support is \n the \"Major changes in this release\" paragraph :-)\n\n - support for \"standard\" encoding names and multibyte code cleanup\n and performance improvement (Tatsuo's tests look interesting:-)\n\n - new to_char(interval, text) function \n\n - locale bugfix (if glibc-2.2 is used) in backend and ODBC(?)\n\n\t\tKarel\n\nPS. A lot of people search the HISTORY file for inspiration for improving\ntheir applications, and also try to find here the answer to the question\n\"why doesn't it work after the update?\". An answer found here is better than\none in the mailing lists:-)\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 1 Oct 2001 12:26:12 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "HISTORY file" }, { "msg_contents": "> \n> Maybe it's cosmetic, but:\n> \n> - \"Migration to 7.1\" should be \"Migration to 7.2\"\n\n\nFixed.\n\n> \n> Missing:\n> \n> - information that some parts of the interface are translated to \n> Swedish, Russian, French, German and Czech.\n\n> (IMHO the right place for information about Peter's NLS support is \n> the \"Major changes in this release\" paragraph :-)\n\nAdded to list of major features.\n\n> - support for \"standard\" encoding names and multibyte code cleanup\n> and performance improvement (Tatsuo's tests look interesting:-)\n\nIs that major?\n\n> - new to_char(interval, text) function \n\nAdded to list.\n\n> \n> - locale bugfix (if glibc-2.2 is used) in backend and ODBC(?)\n\nToo specific for list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 17:38:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY file" } ]
[ { "msg_contents": "Hello,\n\nI'm in the process of porting a large application from Ingres to\nPostgreSQL. We make heavy use of bulkloading using the 'COPY'\nstatement in ESQL/C. Consider the SQL statements below (in a psql\nsession on an arbitrary database):\n\n CREATE TABLE copytest(f1 INTEGER, f2 INTEGER);\n CREATE UNIQUE INDEX copytest_idx ON copytest USING BTREE(f1, f2);\n COPY copytest FROM '/tmp/copytest';\n\nGiven the file /tmp/copytest:\n\n 1\t1\n 2\t2\n 3\t3\n 4\t4\n 4\t4\n 5\t5\n 6\t6\n\nwill result in the following output:\n\n ERROR: copy: line 5, Cannot insert a duplicate key into unique index copytest_idx\n\nHowever my application code is assuming that duplicate rows will\nsimply be ignored (this is the case in Ingres, and I believe Oracle's\nbulkloader too). I propose modifying _bt_check_unique() in\n/backend/access/nbtree/nbtinsert.c to emit a NOTICE (rather than\nERROR) elog() and return NULL (or appropriate) to the calling function\nif a duplicate key is detected and a 'COPY FROM' is in progress (add\nnew parameter to flag this).\n\nWould this seem a reasonable thing to do? Does anyone rely on COPY\nFROM causing an ERROR on duplicate input? Would:\n\n WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n\nneed to be added to the COPY command (I hope not)?\n\nThanks,\n\n-- \n Lee Kindness, Senior Software Engineer\n Concept Systems Limited.\n", "msg_date": "Mon, 1 Oct 2001 12:04:41 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Bulkloading using COPY - ignore duplicates?" 
}, { "msg_contents": "Lee Kindness wrote:\n> \n<snip>\n> \n> WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n\nI would suggest :\n\nWITH ON_DUPLICATE = IGNORE|TERMINATE\n\nOr maybe IGNORE_DUPLICATE\n\npurely for easier understanding, given there is no present standard nor\nother databases' syntax to conform to.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> need to be added to the COPY command (I hope not)?\n> \n> Thanks,\n> \n> --\n> Lee Kindness, Senior Software Engineer\n> Concept Systems Limited.\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 01 Oct 2001 21:40:41 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> Would this seem a reasonable thing to do? Does anyone rely on COPY\n> FROM causing an ERROR on duplicate input?\n\nYes. This change will not be acceptable unless it's made an optional\n(and not default, IMHO, though perhaps that's negotiable) feature of\nCOPY.\n\nThe implementation might be rather messy too. I don't much care for the\nnotion of a routine as low-level as bt_check_unique knowing that the\ncontext is or is not COPY. We might have to do some restructuring.\n\n> Would:\n> WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n> need to be added to the COPY command (I hope not)?\n\nIt occurs to me that skip-the-insert might be a useful option for\nINSERTs that detect a unique-key conflict, not only for COPY. (Cf.\nthe regular discussions we see on whether to do INSERT first or\nUPDATE first when the key might already exist.) 
Maybe a SET variable\nthat applies to all forms of insertion would be appropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 09:36:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Justin Clift writes:\n > Lee Kindness wrote:\n > > WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n > I would suggest :\n > WITH ON_DUPLICATE = IGNORE|TERMINATE\n > purely for easier understanding, given there is no present standard nor\n > other databases' syntax to conform to.\n\nPersonally I don't see the need, and think that 'COPY FROM' could well\njust go with the new semantics...\n\nOnto an implementation issue - _bt_check_unique() returns a\nTransactionId, my plans were to return NullTransactionId on a\nduplicate key but naturally this is used in the success\nscenario. Looking in backend/transam/transam.c I see:\n\n TransactionId NullTransactionId = (TransactionId) 0;\n TransactionId AmiTransactionId = (TransactionId) 512;\n TransactionId FirstTransactionId = (TransactionId) 514;\n\n From this I'd gather <514 can be used as magic-values/constants, So\nwould I be safe doing:\n\n TransactionId XXXXTransactionId = (TransactionId) 1;\n\nand return XXXXTransactionId from _bt_check_unique() back to\n_bt_do_insert()? Naturally XXXX is something meaningful. I presume all\nI need to know is if 'xwait' in _bt_check_unique() is ever '1'...\n\nThanks,\n\n--\n Lee Kindness, Senior Software Engineer\n Concept Systems Limited.\n", "msg_date": "Mon, 1 Oct 2001 14:40:07 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Tom Lane writes:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > > Would this seem a reasonable thing to do? Does anyone rely on COPY\n > > FROM causing an ERROR on duplicate input?\n > Yes. 
This change will not be acceptable unless it's made an optional\n > (and not default, IMHO, though perhaps that's negotiable) feature of\n > COPY.\n\nI see where you're coming from, but seriously what's the use/point of\nCOPY aborting and doing a rollback if one duplicate key is found? I\nthink it's quite reasonable to presume the input to COPY has had as\nlittle processing done on it as possible. I could loop through the\ninput file before sending it to COPY but that's just wasting cycles\nand effort - Postgres has btree lookup built in, I don't want to roll\nmy own before giving Postgres my input file!\n\n > The implementation might be rather messy too. I don't much care\n > for the notion of a routine as low-level as bt_check_unique knowing\n > that the context is or is not COPY. We might have to do some\n > restructuring.\n\nWell in reality it wouldn't be \"you're getting run from copy\" but\nrather \"notice on duplicate, rather than error & exit\". There is a\ntelling comment in nbtinsert.c just before _bt_check_unique() is\ncalled:\n\n\t/*\n\t * If we're not allowing duplicates, make sure the key isn't already\n\t * in the index. XXX this belongs somewhere else, likely\n\t */\n\nSo perhaps dupes should be searched for before _bt_doinsert is called,\nor somewhere more appropriate?\n\n > > Would:\n > > WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n > > need to be added to the COPY command (I hope not)?\n > It occurs to me that skip-the-insert might be a useful option for\n > INSERTs that detect a unique-key conflict, not only for COPY. (Cf.\n > the regular discussions we see on whether to do INSERT first or\n > UPDATE first when the key might already exist.) 
Maybe a SET variable\n > that applies to all forms of insertion would be appropriate.\n\nThat makes quite a bit of sense.\n\n-- \n Lee Kindness, Senior Software Engineer\n Concept Systems Limited.\n", "msg_date": "Mon, 1 Oct 2001 14:54:25 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> I see where you're coming from, but seriously what's the use/point of\n> COPY aborting and doing a rollback if one duplicate key is found?\n\nError detection. If I'm loading what I think is valid data, having the\nsystem silently ignore certain types of errors is not acceptable ---\nI'm especially not pleased at the notion of removing an error check\nthat's always been there because someone else thinks that would make it\nmore convenient for his application.\n\n> I think it's quite reasonable to presume the input to COPY has had as\n> little processing done on it as possible.\n\nThe primary and traditional use of COPY has always been to reload dumped\ndata. That's why it doesn't do any fancy processing like DEFAULT\ninsertion, and that's why it should be quite strict about error\nconditions. In a reload scenario, any sort of problem deserves\ncareful investigation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Oct 2001 10:02:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Tom Lane writes:\n > I'm especially not pleased at the notion of removing an error check\n > that's always been there because someone else thinks that would make it\n > more convenient for his application.\n\nPlease, don't get me wrong - I don't want to come across arrogant. I'm\nsimply trying to improve the 'COPY FROM' command in a situation where\nspeed is a critical issue and the data is dirty... 
And that must be a\nrelatively common scenario in industry.\n\nAnd I never said the duplicate should be silently ignored - an\nelog(NOTICE) should still be output.\n\nLee.\n", "msg_date": "Mon, 1 Oct 2001 15:17:43 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness wrote:\n\n>Tom Lane writes:\n> > Lee Kindness <lkindness@csl.co.uk> writes:\n> > > Would this seem a reasonable thing to do? Does anyone rely on COPY\n> > > FROM causing an ERROR on duplicate input?\n> > Yes. This change will not be acceptable unless it's made an optional\n> > (and not default, IMHO, though perhaps that's negotiable) feature of\n> > COPY.\n>\n>I see where you're coming from, but seriously what's the use/point of\n>COPY aborting and doing a rollback if one duplicate key is found? I\n>think it's quite reasonable to presume the input to COPY has had as\n>little processing done on it as possible. I could loop through the\n>input file before sending it to COPY but that's just wasting cycles\n>and effort - Postgres has btree lookup built in, I don't want to roll\n>my own before giving Postgres my input file!\n>\n> > The implementation might be rather messy too. I don't much care\n> > for the notion of a routine as low-level as bt_check_unique knowing\n> > that the context is or is not COPY. We might have to do some\n> > restructuring.\n>\n>Well in reality it wouldn't be \"you're getting run from copy\" but\n>rather \"notice on duplicate, rather than error & exit\". There is a\n>telling comment in nbtinsert.c just before _bt_check_unique() is\n>called:\n>\n>\t/*\n>\t * If we're not allowing duplicates, make sure the key isn't already\n>\t * in the index. 
XXX this belongs somewhere else, likely\n>\t */\n>\n>So perhaps dupes should be searched for before _bt_doinsert is called,\n>or somewhere more appropriate?\n>\n> > > Would:\n> > > WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n> > > need to be added to the COPY command (I hope not)?\n> > It occurs to me that skip-the-insert might be a useful option for\n> > INSERTs that detect a unique-key conflict, not only for COPY. (Cf.\n> > the regular discussions we see on whether to do INSERT first or\n> > UPDATE first when the key might already exist.) Maybe a SET variable\n> > that applies to all forms of insertion would be appropriate.\n>\n>That makes quite a bit of sense.\n>\nThis is tring to avoid one step.\n\nIMHO, you should copy into a temporary table and the do a select \ndistinct from it into the table that you want.\n\nA. You can validate your data before you put it into your permanent table.\nB. This doesn't cost you much.\n\nDon't make the assumption that bulk copies have not been checked or \nvalidated. The assumption should be correct data or you shouldn't be \nusing COPY.\n\n\n\n\n\n>\n\n", "msg_date": "Mon, 01 Oct 2001 09:21:22 -0500", "msg_from": "Thomas Swan <tswan@olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "> However my application code is assuming that duplicate rows will\n> simply be ignored (this is the case in Ingres, and I believe Oracle's\n> bulkloader too). I propose modifying _bt_check_unique() in\n> /backend/access/nbtree/nbtinsert.c to emit a NOTICE (rather than\n> ERROR) elog() and return NULL (or appropriate) to the calling function\n> if a duplicate key is detected and a 'COPY FROM' is in progress (add\n> new parameter to flag this).\n\nIf you have a UNIQUE index on the table, just throwing away duplicates\nseems really bad to me. I know Ingres had that heapsort structure that\nwould remove duplicates. 
That may be an interesting feature to add as\nan operation that can be performed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 1 Oct 2001 12:03:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Tom Lane writes:\n\n> It occurs to me that skip-the-insert might be a useful option for\n> INSERTs that detect a unique-key conflict, not only for COPY. (Cf.\n> the regular discussions we see on whether to do INSERT first or\n> UPDATE first when the key might already exist.) Maybe a SET variable\n> that applies to all forms of insertion would be appropriate.\n\nWhat we need is:\n\n1. Make errors not abort the transaction.\n\n2. Error codes\n\nThen you can make your client deal with this in which ever way you want,\nat least for single-value inserts.\n\nHowever, it seems to me that COPY ignoring duplicates can easily be done\nby preprocessing the input file.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 3 Oct 2001 00:13:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? 
" }, { "msg_contents": "Peter Eisentraut writes:\n > However, it seems to me that COPY ignoring duplicates can easily be\n > done by preprocessing the input file.\n\nOr by post-processing, like (error checking cut):\n\n void import_shots(char *impfile, int lineshoot_id)\n {\n char tab_name[128];\n char tab_temp[128];\n\n frig_file(impfile); /* add the postgres header */\n sprintf(tab_name, \"shot_%d\", lineshoot_id);\n sprintf(tab_temp, \"shot_%d_tmp\", lineshoot_id);\n\n sprintf(cmd, \"CREATE TEMPORARY TABLE %s AS SELECT * FROM shot\",\n\t tab_temp);\n EXEC SQL EXECUTE IMMEDIATE :cmd;\n EXEC SQL COMMIT WORK; /* will not work without comit here! */\n\n sprintf(cmd, \"COPY BINARY %s FROM '%s'\", tab_temp, impfile);\n append_page_alloc(cmd, tab_name, impfile, 1);\n EXEC SQL EXECUTE IMMEDIATE :cmd;\n \n sprintf(cmd, \"INSERT INTO %s SELECT DISTINCT ON(shot_time) * FROM %s\",\n\t tab_name, tab_temp);\n \n EXEC SQL EXECUTE IMMEDIATE :cmd;\n\n sprintf(cmd, \"DROP TABLE %s\", tab_temp);\n EXEC SQL EXECUTE IMMEDIATE :cmd;\n\n EXEC SQL COMMIT WORK ;\n remove(impfile);\n }\n\nHowever this is adding significant time to the import\noperation. Likewise I could loop round the input file first and hunt\nfor duplicates, again with a performance hit.\n\nMy main point is that Postgres can easily and quickly check for\nduplicates during the COPY (as it does currently) and it adds zero\nexecution time to simply ignore these duplicate rows. Obviously this\nis a useful feature otherwise Oracle, Ingres and other commercial\nrelational databases wouldn't feature similiar functionality.\n\nYes, in an ideal world the input to COPY should be clean and\nconsistent with defined indexes. However this is only really the case\nwhen COPY is used for database/table backup and restore. 
It misses the\npoint that a major use of COPY is in speed optimisation on bulk\ninserts...\n\nLee.\n", "msg_date": "Wed, 3 Oct 2001 11:01:30 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Guys,\n\nI've made some inroads towards adding 'ignore duplicates'\nfunctionality to PostgreSQL's COPY command. I've updated the parser\ngrammar for COPY FROM to now accept:\n\n COPY [ BINARY ] table [ WITH OIDS ]\n FROM { 'filename' | stdin }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH [NULL AS 'null string']\n [IGNORE DUPLICATES] ]\n\nand added code to propagate this setting down to the CopyFrom function\nin backend/commands/copy.c.\n\nI also played around with _bt_check_unique, _bt_do_insert and btinsert\nto return NULL on duplicate rather than elog(ERROR). Likewise\nExecInsertIndexTuples and index_insert were passed the\nignore_duplicate flag and index_insert changed to elog(ERROR) if the\nreturn from the insert function was NULL and ignore_duplicate flag was\nfalse.\n\nThese changes worked and gave the desired result for the COPY FROM\ncommand, however as many mentioned these changes are far too low\nlevel... 
After assessing the situation more fully, I believe the\nfollowing change in CopyFrom would be more suitable:\n\n\t\t/* BEFORE ROW INSERT Triggers */\n\t\tif (resultRelInfo->ri_TrigDesc &&\n\t\t\tresultRelInfo->ri_TrigDesc->n_before_row[TRIGGER_EVENT_INSERT] > 0)\n\t\t{\n\t\t\tHeapTuple\tnewtuple;\n\t\t\tnewtuple = ExecBRInsertTriggers(estate, resultRelInfo, tuple);\n\n\t\t\tif (newtuple == NULL)\t\t/* \"do nothing\" */\n\t\t\t\tskip_tuple = true;\n\t\t\telse if (newtuple != tuple) /* modified by Trigger(s) */\n\t\t\t{\n\t\t\t\theap_freetuple(tuple);\n\t\t\t\ttuple = newtuple;\n\t\t\t}\n\t\t}\n\n\t\t/* new code */\n\t\tif( ignore_duplicates == true )\n\t\t{\n\t\t\t\tif( duplicate index value )\n\t\t\t\t\tskip_tuple = true;\n\t\t}\n\n\t\tif (!skip_tuple)\n\t\t{\n\n\nNow I imagine 'duplicate index value' would be functionally similar to\n_bt_check_unique but obviously higher level. Is there any existing\ncode with the functionality I desire? Can anyone point me in the right\nway...\n\nThanks,\n\nLee Kindness.\n\nLee Kindness writes:\n > I'm in the process of porting a large application from Ingres to\n > PostgreSQL. We make heavy use of bulkloading using the 'COPY'\n > statement in ESQL/C. Consider the SQL statements below (in a psql\n > session on an arbitrary database):\n > \n > CREATE TABLE copytest(f1 INTEGER, f2 INTEGER);\n > CREATE UNIQUE INDEX copytest_idx ON copytest USING BTREE(f1, f2);\n > COPY copytest FROM '/tmp/copytest';\n > \n > Given the file /tmp/copytest:\n > \n > 1\t1\n > 2\t2\n > 3\t3\n > 4\t4\n > 4\t4\n > 5\t5\n > 6\t6\n > \n > will result in the following output:\n > \n > ERROR: copy: line 5, Cannot insert a duplicate key into unique index copytest_idx\n > \n > However my application code is assuming that duplicate rows will\n > simply be ignored (this is the case in Ingres, and I believe Oracle's\n > bulkloader too). 
I propose modifying _bt_check_unique() in\n > /backend/access/nbtree/nbtinsert.c to emit a NOTICE (rather than\n > ERROR) elog() and return NULL (or appropriate) to the calling function\n > if a duplicate key is detected and a 'COPY FROM' is in progress (add\n > new parameter to flag this).\n > \n > Would this seem a reasonable thing to do? Does anyone rely on COPY\n > FROM causing an ERROR on duplicate input? Would:\n > \n > WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)\n > \n > need to be added to the COPY command (I hope not)?\n > \n > Thanks,\n > \n > -- \n > Lee Kindness, Senior Software Engineer\n > Concept Systems Limited.\n", "msg_date": "Mon, 8 Oct 2001 13:41:27 +0100 (BST)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Gents,\n\nI started quite a long thread about this back in September. To\nsummarise I was proposing that COPY FROM would not abort the\ntransaction when it encountered data which would cause a uniqueness\nviolation on the table index(s).\n\nGenerally I think this was seen as a 'Good Thing'TM for a number of\nreasons:\n\n 1. Performance enhancements when doing doing bulk inserts - pre or\npost processing the data to remove duplicates is very time\nconsuming. Likewise the best tool should always be used for the job at\nand, and for searching/removing things it's a database.\n\n2. Feature parity with other database systems. For example Oracle's\nSQLOADER has a feature to not insert duplicates and rather move\nthem to another file for later investigation.\n\nNaturally the default behaviour would be the current one of assuming\nvalid data. 
Also the duplicate check would not add anything to the\ncurrent code path for COPY FROM - it would not take any longer.\n\nI attempted to add this functionality to PostgreSQL myself but got as\nfar as an updated parser and a COPY FROM which resulted in a database\nrecovery!\n\nSo (here's the question finally) is it worthwhile adding this\nenhancement to the TODO list?\n\nThanks, Lee.\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Tue, 11 Dec 2001 16:05:34 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "On Mon, Oct 01, 2001 at 03:17:43PM +0100, Lee Kindness wrote:\n> Tom Lane writes:\n> > I'm especially not pleased at the notion of removing an error check\n> > that's always been there because someone else thinks that would make it\n> > more convenient for his application.\n> \n> Please, don't get me wrong - I don't want to come across arrogant. I'm\n> simply trying to improve the 'COPY FROM' command in a situation where\n> speed is a critical issue and the data is dirty... And that must be a\n> relatively common scenario in industry.\n\nIsn't that when you do your bulk copy into into a holding table, then\nclean it up, and then insert into your live system?\n\nPatrick\n", "msg_date": "Thu, 13 Dec 2001 12:31:14 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Patrick Welche writes:\n > On Mon, Oct 01, 2001 at 03:17:43PM +0100, Lee Kindness wrote:\n > > Please, don't get me wrong - I don't want to come across arrogant. I'm\n > > simply trying to improve the 'COPY FROM' command in a situation where\n > > speed is a critical issue and the data is dirty... 
And that must be a\n > > relatively common scenario.\n > Isn't that when you do your bulk copy into into a holding table, then\n > clean it up, and then insert into your live system?\n\nThat's what I'm currently doing as a workaround - a SELECT DISTINCT\nfrom a temporary table into the real table with the unique index on\nit. However this takes absolute ages - say 5 seconds for the copy\n(which is the ballpark figure I aiming toward and can achieve with\nIngres) plus another 30ish seconds for the SELECT DISTINCT.\n\nThe majority of database systems out there handle this situation in\none manner or another (MySQL ignores or replaces; Ingres ignores;\nOracle ignores or logs; others...). Indeed PostgreSQL currently checks\nfor duplicates in the COPY code but throws an elog(ERROR) rather than\nignoring the row, or passing the error back up the call chain.\n\nMy use of PostgreSQL is very time critical, and sadly this issue alone\nmay force an evaluation of Oracle's performance in this respect!\n\nBest regards, Lee Kindness.\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Thu, 13 Dec 2001 13:25:11 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Lee Kindness wrote:\n> \n> Patrick Welche writes:\n> > On Mon, Oct 01, 2001 at 03:17:43PM +0100, Lee Kindness wrote:\n> > > Please, don't get me wrong - I don't want to come across arrogant. I'm\n> > > simply trying to improve the 'COPY FROM' command in a situation where\n> > > speed is a critical issue and the data is dirty... 
And that must be a\n> > > relatively common scenario.\n> > Isn't that when you do your bulk copy into into a holding table, then\n> > clean it up, and then insert into your live system?\n> \n> That's what I'm currently doing as a workaround - a SELECT DISTINCT\n> from a temporary table into the real table with the unique index on\n> it. However this takes absolute ages - say 5 seconds for the copy\n> (which is the ballpark figure I aiming toward and can achieve with\n> Ingres) plus another 30ish seconds for the SELECT DISTINCT.\n> \n> The majority of database systems out there handle this situation in\n> one manner or another (MySQL ignores or replaces; Ingres ignores;\n> Oracle ignores or logs; others...). Indeed PostgreSQL currently checks\n> for duplicates in the COPY code but throws an elog(ERROR) rather than\n> ignoring the row, or passing the error back up the call chain.\n\nI guess postgresql will be able to do it once savepoints get\nimplemented.\n\n> My use of PostgreSQL is very time critical, and sadly this issue alone\n> may force an evaluation of Oracle's performance in this respect!\n\nCan't you clean the duplicates _outside_ postgresql, say\n\ncat dumpfile | sort | uniq | psql db -c 'copy mytable from stdin'\n\nwith your version of uniq.\n\nor perhaps\n\npsql db -c 'copy mytable to stdout' >> dumpfile\nsort dumpfile | uniq | psql db -c 'copy mytable from stdin'\n\nif you already have something in mytable.\n\n------------\nHannu\n", "msg_date": "Thu, 13 Dec 2001 15:59:33 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Hannu Krosing writes:\n > Lee Kindness wrote:\n > > The majority of database systems out there handle this situation in\n > > one manner or another (MySQL ignores or replaces; Ingres ignores;\n > > Oracle ignores or logs; others...). 
Indeed PostgreSQL currently checks\n > > for duplicates in the COPY code but throws an elog(ERROR) rather than\n > > ignoring the row, or passing the error back up the call chain.\n > I guess postgresql will be able to do it once savepoints get\n > implemented.\n\nThis is encouraging to hear. I can see how this would make the code\nchanges relatively minimal and more manageable - the changes to the\ncurrent code are simply over my head!\n\nAre savepoints relatively high up on the TODO list, once 7.2 is out the\ndoor?\n\n > > My use of PostgreSQL is very time critical, and sadly this issue alone\n > > may force an evaluation of Oracle's performance in this respect!\n > Can't you clean the duplicates _outside_ postgresql, say\n > cat dumpfile | sort | uniq | psql db -c 'copy mytable from stdin'\n\nThis is certainly a possibility; however, it's just really moving the\nprocessing elsewhere. The combined time is still around the same.\n\nI've/we've done a lot of investigation with approaches like this and\nalso with techniques assuming the locality of the duplicates (which is\na no-goer). None improve the situation.\n\nI'm not going to compare the time of just using INSERTs rather than\nCOPY...\n\nThanks for your response, Lee Kindness.\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Thu, 13 Dec 2001 14:56:52 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Lee Kindness writes:\n > I'm not going to compare the time of just using INSERTs rather than\n > COPY...\n\nOoops, I'm NOW going to... 
Obviously my subconscious is telling me\notherwise - bring on the Christmas party!\n\nLee.\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Thu, 13 Dec 2001 15:00:53 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "On Thu, Dec 13, 2001 at 01:25:11PM +0000, Lee Kindness wrote:\n> That's what I'm currently doing as a workaround - a SELECT DISTINCT\n> from a temporary table into the real table with the unique index on\n> it. However this takes absolute ages - say 5 seconds for the copy\n> (which is the ballpark figure I'm aiming toward and can achieve with\n> Ingres) plus another 30ish seconds for the SELECT DISTINCT.\n\nThen your column really isn't unique, so how about dropping the unique index,\nimporting the data, fixing the duplicates, and recreating the unique index - just as\nanother possible work around ;)\n\nPatrick\n", "msg_date": "Thu, 13 Dec 2001 15:29:57 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Patrick Welche writes:\n > On Thu, Dec 13, 2001 at 01:25:11PM +0000, Lee Kindness wrote:\n > > That's what I'm currently doing as a workaround - a SELECT DISTINCT\n > > from a temporary table into the real table with the unique index on\n > > it. However this takes absolute ages - say 5 seconds for the copy\n > > (which is the ballpark figure I'm aiming toward and can achieve with\n > > Ingres) plus another 30ish seconds for the SELECT DISTINCT.\n > Then your column really isn't unique,\n\nThat's another discussion entirely ;) - it's spat out by a real-time\nsystem which doesn't have the time or resources to check this. 
Further\nprecision loss later in the data's life adds more duplicates...\n\n > so how about dropping the unique index, import the data, fix the\n > duplicates, recreate the unique index - just as another possible\n > work around ;) \n\nThis is just going to be the same(ish) time, no?\n\n CREATE TABLE tab (p1 INT, p2 INT, other1 INT, other2 INT);\n COPY tab FROM 'file';\n DELETE FROM tab WHERE (p1, p2) NOT IN (SELECT DISTINCT p1, p2\n FROM tab);\n CREATE UNIQUE INDEX tab_idx ON tab USING BTREE(p1, p2);\n\nor am I missing something?\n\nThanks, Lee.\n\n-- \n Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Thu, 13 Dec 2001 15:44:31 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "On Thu, Dec 13, 2001 at 03:44:31PM +0000, Lee Kindness wrote:\n> Patrick Welche writes:\n> > On Thu, Dec 13, 2001 at 01:25:11PM +0000, Lee Kindness wrote:\n> > > That's what I'm currently doing as a workaround - a SELECT DISTINCT\n> > > from a temporary table into the real table with the unique index on\n> > > it. However this takes absolute ages - say 5 seconds for the copy\n> > > (which is the ballpark figure I'm aiming toward and can achieve with\n> > > Ingres) plus another 30ish seconds for the SELECT DISTINCT.\n> > Then your column really isn't unique,\n> \n> That's another discussion entirely ;) - it's spat out by a real-time\n> system which doesn't have the time or resources to check this. Further\n> precision loss later in the data's life adds more duplicates...\n\nHmm, the data has a later life - sounds like you'll need to remove dups\nthen, anyway, so can you get away with just letting the dups in? Remove\nthe UNIQUE requirement, and let the real time system just dump away.\nHow critical is it to later steps that there be no dups? 
And how many\n(potential) dups is your RTS producing, anyway?\n\nYour later processing (which apparently can _generate_ dups) might be\nthe place, outside the critical time path, to worry about removing dups.\n\nRoss\n\nP.S. This falls into the class of problem solving characterized by\n\"if you can't solve the problem as stated, restate the problem to be\none you _can_ solve\" ;-)\n\n> \n> > so how about dropping the unique index, import the data, fix the\n> > duplicates, recreate the unique index - just as another possible\n> > work around ;) \n> \n> This is just going to be the same(ish) time, no?\n> \n> CREATE TABLE tab (p1 INT, p2 INT, other1 INT, other2 INT);\n> COPY tab FROM 'file';\n> DELETE FROM tab WHERE (p1, p2) NOT IN (SELECT DISTINCT p1, p2\n> FROM tab);\n> CREATE UNIQUE INDEX tab_idx ON tab USING BTREE(p1, p2);\n> \n> or am I missing something?\n> \n> Thanks, Lee.\n> \n> -- \n> Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n> http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Thu, 13 Dec 2001 10:22:59 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Lee Kindness writes:\n\n> 1. Performance enhancements when doing bulk inserts - pre or\n> post processing the data to remove duplicates is very time\n> consuming. Likewise the best tool should always be used for the job at\n> hand, and for searching/removing things it's a database.\n\nArguably, a better tool for this is sort(1). 
For instance, if you have a\ntypical copy input file with tab-separated fields and the primary key is\nin columns 1 and 2, you can remove duplicates with\n\nsort -k 1,2 -u INFILE > OUTFILE\n\nTo get a record of what duplicates were removed, use diff.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 13 Dec 2001 19:20:18 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "Lee Kindness writes:\n\n> Yes, in an ideal world the input to COPY should be clean and\n> consistent with defined indexes. However this is only really the case\n> when COPY is used for database/table backup and restore. It misses the\n> point that a major use of COPY is in speed optimisation on bulk\n> inserts...\n\nI think allowing this feature would open up a world of new dangerous\nideas, such as ignoring check constraints or foreign keys or magically\nmassaging other tables so that the foreign keys are satisfied, or ignoring\ndefault values, or whatever. The next step would then be allowing the\nsame optimizations in INSERT. I feel COPY should load the data and that's\nit. If you don't like the data you have then you have to fix it first.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 13 Dec 2001 19:20:59 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " }, { "msg_contents": "O.K., time to start looking into the _nature_ of the dups in your\ndata, to see if there's anything specific to take advantage of, since\nthe general solution (tell the DBMS to ignore dups) isn't available,\nand isn't likely to get there real soon. \n\nSo what does your data look like, and how do the dups occur?\n\nAny chance it's in a really simple format, and the dups are also really\nsimple, like 'one record per line, dups occur as identical adjacent\nlines?' 
If so, 'uniq' will solve the problem with little to no speed\npenalty. (it's the sort that kills ...)\n\nOr are you only getting a dup'ed field, and the rule is 'ignore later\nrecords'? I could see this happen if the data is timestamped at a\ngranularity that doesn't _exactly_ match the repetition rate: e.g.\nstamp to the second, record once a second.\n\nSo, what's it look like? Since it's one format, I bet a small, simple\npipe filter could handle dup elimination on the fly.\n\nRoss\n\nOn Thu, Dec 13, 2001 at 05:02:15PM +0000, Lee Kindness wrote:\n> \n> The RTS outputs to a file which is then subsequently used as input to\n> other packages, one of which is the application I'm concerned\n> with. While fixing at source is the ideal solution there are terabytes\n> of legacy data around (this is raw seismic navigational data). Also\n> there is more than one competing package...\n> \n> Our package post-processes (we're still very concerned about speed as\n> this is normally done while 'shooting' the seismic data) this data to\n> produce the final seismic navigational data, which is then later used\n> by other products...\n> \n> The problem at hand is importing the initial data - no duplicates are\n> produced by the program itself later (nor in its output data).\n> \n> Sadly a large number of later SQL queries assume no duplicates and\n> would result in incorrect processing calculations, amongst other\n> things. The sheer number of these queries makes changing them\n> impractical.\n> \n> > P.S. 
This falls into the class of problem solving characterized by\n> > \"if you can't solve the problem as stated, restate the problem to be\n> > one you _can_ solve\" ;-)\n> \n> Which is what I've been knocking my head against for the last few\n> weeks ;) The real problem is that a move away from our current RDBMS\n> (Ingres) to PostgreSQL will not happen if the performance of the\n> product significantly decreases (which it currently has for the import\n> stage) and since Ingres already just ignores the duplicates...\n> \n> I really want to move to PostgreSQL...\n> \n> Thanks for your input,\n> \n> -- \n> Lee Kindness, Senior Software Engineer, Concept Systems Limited.\n> http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595\n", "msg_date": "Thu, 13 Dec 2001 12:36:27 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates?" }, { "msg_contents": "Peter Eisentraut writes:\n > I think allowing this feature would open up a world of new\n > dangerous ideas, such as ignoring check constraints or foreign keys\n > or magically massaging other tables so that the foreign keys are\n > satisfied, or ignoring default values, or whatever. The next step\n > would then be allowing the same optimizations in INSERT. I feel\n > COPY should load the data and that's it. If you don't like the\n > data you have then you have to fix it first.\n\nI agree that PostgreSQL's checks during COPY are a bonus and I\nwouldn't dream of not having them. 
Many database systems provide a\nfast bulkload by ignoring these constraints and cross-references -\nthat's a tricky/horrid situation.\n\nHowever I suppose the question is whether such 'invalid data' should abort the\ntransaction; it seems a bit drastic...\n\nI suppose I'm not really after an IGNORE DUPLICATES option, but rather\na CONTINUE ON ERROR kind of thing.\n\nRegards, Lee.\n", "msg_date": "Fri, 14 Dec 2001 11:30:17 +0000 (GMT)", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "\n> > The O_DIRECT flag has been added to open(2) and fcntl(2). Specifying\n> > this flag for open files will attempt to minimize the cache effects of\n> > reading and writing.\n> \n> I wonder if using this for WAL would be good.\n\nNot before the code is optimized to write more than the current 8k\nto the WAL at a time.\n(The killer currently is larger transactions that produce approx more\nthan 64k WAL; try the open_datasync setting with \"copy from\".)\n\nAndreas\n", "msg_date": "Mon, 1 Oct 2001 14:15:16 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT" } ]