[
{
"msg_contents": "\nI packaged up an RC2 over the weekend, and pretty much as soon as I had it\npackaged and in place, before I could announce it, there were several\npatches thrown in ... so, I left it there, let anyone who happened to see\nit pick it up, but didn't announce it ...\n\nEverything has been quiet, as far as patches are concerned, for the past\n24+hrs ... I'd like to roll (and actually announce) a solid RC3 tonight,\nwith the announcement first thing tomorrow morning, unless anyone has anything\nthey are sitting on?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 4 Apr 2001 08:53:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "All's quiet ... RC3 packaging ..."
},
{
"msg_contents": "I just had to set up strong logging of users' postgres activity using a php\nscript.\n\nIt works well, and my disk is filling up quickly.\nI have a file like this:\n\n{ TARGETENTRY\n :resdom\n { RESDOM\n :resno 2\n :restype 1043\n :restypmod 24\n :resname \"abrev_formation\"\n :reskey 0\n :reskeyop 0\n :resgroupref 0\n :resjunk false\n }\n\nIt's too hard to parse this file to extract the user and the machine they\nwork from.\n\nAny help with managing logging would be appreciated.\n\n\nFouad Fezzi\n\n",
"msg_date": "Wed, 4 Apr 2001 14:46:06 +0200",
"msg_from": "\"Fouad Fezzi\" <fezzi@iup.univ-avignon.fr>",
"msg_from_op": false,
"msg_subject": "logging is funny..."
},
{
"msg_contents": "There is an ELOG_TIMESTAMPS option, and some others, in include/config.h or\ninclude/config.h.in. I think that is where you get the pid of the\nbackend and stuff. I agree it needs more detail, at least the process\nid.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> I just have to setup a trong logging of users postgres activity using php\n> script.\n> \n> It's work well and my disk is going full quickly.\n> i have a file like this\n> \n> { TARGETENTRY\n> :resdom\n> { RESDOM\n> :resno 2\n> :restype 1043\n> :restypmod 24\n> :resname \"abrev_formation\"\n> :reskey 0\n> :reskeyop 0\n> :resgroupref 0\n> :resjunk false\n> }\n> \n> it's to hard to exploit this file to extract a user and machine from where\n> they work.\n> \n> Some help for managing logging\n> \n> \n> Fouad Fezzi\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 4 Apr 2001 09:58:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logging is funny..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Everything has been quiet, as far as patches are concerned, for the past\n> 24+hrs ... I'd like to roll (and actually announce) an solid RC3 tonight,\n> with announce first thing tomorrow morning, unless anyone has anythign\n> they aer sitting on?\n\nI think we've got to remove that failing horology test before we wrap RC3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 09:59:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: All's quiet ... RC3 packaging ... "
},
{
"msg_contents": "On Wed, 4 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Everything has been quiet, as far as patches are concerned, for the past\n> > 24+hrs ... I'd like to roll (and actually announce) an solid RC3 tonight,\n> > with announce first thing tomorrow morning, unless anyone has anythign\n> > they aer sitting on?\n>\n> I think we've got to remove that failing horology test before we wrap RC3.\n\nCan we comment out the test for now, so that it's still in there, but not\ntested? Or is there absolutely no way that we can fix that in the long\nterm?\n\n\n",
"msg_date": "Wed, 4 Apr 2001 11:53:22 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: All's quiet ... RC3 packaging ... "
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> On Wed, 4 Apr 2001, Tom Lane wrote:\n>> I think we've got to remove that failing horology test before we wrap RC3.\n\n> can we comment out the test for now, so that its still in there, but not\n> tested? or is there absolutely non way that we can fix that in the long\n> term?\n\nCommenting it out was the only idea that I had. Maybe Thomas has a\nbetter idea, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 10:56:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: All's quiet ... RC3 packaging ... "
},
{
"msg_contents": "On Wed, Apr 04, 2001 at 10:56:27AM -0400, Tom Lane allegedly wrote:\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > On Wed, 4 Apr 2001, Tom Lane wrote:\n> >> I think we've got to remove that failing horology test before we wrap RC3.\n> \n> > can we comment out the test for now, so that its still in there, but not\n> > tested? or is there absolutely non way that we can fix that in the long\n> > term?\n> \n> Commenting it out was the only idea that I had. Maybe Thomas has a\n> better idea, though.\n\nWhy not work with a maximum error in the regression tests? For instance,\nallow a small difference after the 8th digit? That would pick out the\nreal bugs and let the round-off errors pass, right?\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n",
"msg_date": "Thu, 5 Apr 2001 01:20:35 +0200",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": false,
"msg_subject": "Re: All's quiet ... RC3 packaging ..."
}
]
[
{
"msg_contents": "Hi,\n\n I have a string that is getting encoded using blowfish in postgres \n(using my own function written in C for PG) and stored in the\ndatabase. \nThe problem is that with some strings, after encoding is done, I end up \nwith a special character in the middle of the encoded string that is \ninterpreted as the end of that string, so basically the result is \ntruncated because of that. \nNow the weird part is that if I do length(col), where col is the field that \nholds my encoded string, it returns the real size, but if from C in my \ndecode function I do VARSIZE(string), it gives me the wrong (short) size.\n\nMy question is: how do I get, from C, the real size of the string even if\nit has a special character in it? length(string) from postgres manages it \nsomehow, but VARSIZE(string) fails.\n\n\n\nAny feedback would be appreciated,\nBoulat.\n\n\n-- \nNothing Like the Sun\n",
"msg_date": "Wed, 04 Apr 2001 11:17:55 -0400",
"msg_from": "Boulat Khakimov <boulat@inet-interactif.com>",
"msg_from_op": true,
"msg_subject": "Special char problem, need help."
}
]
[
{
"msg_contents": "On Wednesday 04 April 2001 22:42, Ciaran Johnston wrote:\n> Hi,\n>\n> Sorry to bother you's but I am currently doing a database comparison and\n> have been trying to get postgresql installed. I'm running Solaris 2.7. I\n> downloaded pgsql 7.03 and ran ./configure in the src/ directory. This\n> was fine until the very end when this error appeared:\n\nWhy are you running configure inside src/? I'm not sure whether 7.0.x has \nconfigure in the src/ dir or the root.\n\nYou could take a look at 7.1RC[2-3], which looks pretty stable; I have \nRC1 compiled and working on a Solaris 8 SPARC.\n\nSaludos... :-)\n\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 4 Apr 2001 18:46:12 +0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: Configure problems on Solaris 2.7, pgsql 7.02 and 7.03"
},
{
"msg_contents": "Hi,\n\nSorry to bother you's but I am currently doing a database comparison and\nhave been trying to get postgresql installed. I'm running Solaris 2.7. I\ndownloaded pgsql 7.03 and ran ./configure in the src/ directory. This\nwas fine until the very end when this error appeared:\n\ncreating ./config.status\ncreating GNUmakefile\nsed: command garbled: s%@CC_VERSION@%Version: CSE C Compiler CAA 139\n1065/11 R14A\ncreating Makefile.global\nsed: command garbled: s%@CC_VERSION@%Version: CSE C Compiler CAA 139\n1065/11 R14A\ncreating backend/port/Makefile\nsed: command garbled: s%@CC_VERSION@%Version: CSE C Compiler CAA 139\n1065/11 R14A\ncreating backend/catalog/genbki.sh\n...\nsed: command garbled: s%@CC_VERSION@%Version: CSE C Compiler CAA 139\n1065/11 R14A\ncreating include/config.h\n\nI am using GNU make (obviously - I got it this far :-). As the error was\nsed-related I thought at first that I had to use GNU sed - so I d/loaded\nthat - no help. My sed knowledge is limited to knowing the name, but a\nnewsgroup user informed me that:\n\n#######################\n\nThe substitute command requires 3 delimiters like this:\ns%<find>%<replace_with>%<flags>\n ^\n |\n \\_________> Is this missing\nin your configure file?\n\nThe error message you pasted implies that only 2 delimiters were\npresent.\nIf that is the case, just throw another one of those percent signs at\nthe\nend of the command. \n\n#######################\n\nIs this the case? Is this a problem you have seen before? Am I doing\nsomething wrong? I would dearly like to know as I have heard good\nreports about pg and its relative performance over MySQL and\nproprietary systems.\n\nThanking you for your time,\n\nCiaran.\n\n\n-- \nCiaran Johnston \nEricsson Systems Expertise Ltd.,\nAthlone\nCo. Westmeath\nEire\n\nemail: Ciaran.Johnston@eei.ericsson.se\nPhone: +353 902 31274\n",
"msg_date": "Wed, 04 Apr 2001 20:42:10 +0100",
"msg_from": "Ciaran Johnston <Ciaran.Johnston@eei.ericsson.se>",
"msg_from_op": false,
"msg_subject": "Configure problems on Solaris 2.7, pgsql 7.02 and 7.03"
},
{
"msg_contents": "Ciaran Johnston <Ciaran.Johnston@eei.ericsson.se> writes:\n> Sorry to bother you's but I am currently doing a database comparison and\n> have been trying to get postgresql installed. I'm running Solaris 2.7. I\n> downloaded pgsql 7.03 and ran ./configure in the src/ directory. This\n> was fine until the very end when this error appeared:\n\n> creating ./config.status\n> creating GNUmakefile\n> sed: command garbled: s%@CC_VERSION@%Version: CSE C Compiler CAA 139\n> 1065/11 R14A\n\nHm. I'm guessing that cc --version on your machine puts out multiple\nlines, or something like that. Possibly you could simply ignore this\nerror and push on, but since you are just doing evaluation and not\nproduction work, I'd suggest you forget 7.0.3 and try 7.1RC2 or later\ninstead. It looks like the newer version is more wary about this sort\nof thing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 19:27:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure problems on Solaris 2.7, pgsql 7.02 and 7.03 "
},
{
"msg_contents": "On Wed, Apr 04, 2001 at 06:46:12PM +0300, Martín Marqués allegedly wrote:\n> Why are you running configure inside src/? I'm not sure if the 7.0.x had the \n> configure on the src/ dir or the root.\n\nIt's in the src dir with 7.0.x alright.\n\n> You could take a look at 7.1RC[2-3], which looks pretty stable, and I have \n> (RC1) compiled and working on a Solaris 8 SPARC.\n\nI'm using pgsql 7.0.3 on Solaris 7 Sparc and Solaris 8 Intel for a\nwebsite that gets about 600,000 pageviews daily. pgsql 7.0.x works\nwithout problems for me and I'm connecting to it via JDBC. No crashes\nor major problems so far.\n\nAnyway, 7.0.x should work problem free on Solaris (there are some\nissues with 7.1 at the moment). To make this a bit easier to diagnose,\ncould you send me the output of the following commands?\n\n| jumpstart:~$ uname -a\n| SunOS jumpstart 5.7 Generic_106541-08 sun4u sparc SUNW,Ultra-Enterprise\n| jumpstart:~$ echo $PATH\n| /sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/ccs/bin\n| jumpstart:~$ which sed\n| /bin/sed\n| jumpstart:~$ which gcc\n| /usr/local/bin/gcc\n| jumpstart:~$ gcc -v\n| Reading specs from /usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/specs\n| gcc version 2.95.2 19991024 (release)\n| jumpstart:~$ which make\n| /usr/local/bin/make\n| jumpstart:~$ make -v\n| GNU Make version 3.78.1, by Richard Stallman and Roland McGrath.\n| Built for sparc-sun-solaris2.7\n| Copyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99\n| Free Software Foundation, Inc.\n| This is free software; see the source for copying conditions.\n| There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A\n| PARTICULAR PURPOSE.\n| \n| Report bugs to <bug-make@gnu.org>.\n\nIt is of course possible that the output on your system is quite\ndifferent.\n\nRegards,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n",
"msg_date": "Thu, 5 Apr 2001 01:44:06 +0200",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": false,
"msg_subject": "Re: Configure problems on Solaris 2.7, pgsql 7.02 and 7.03"
},
{
"msg_contents": "Mathijs Brands wrote:\n> \n> On Wed, Apr 04, 2001 at 06:46:12PM +0300, Martín Marqués allegedly wrote:\n> > Why are you running configure inside src/? I'm not sure if the 7.0.x had the\n> > configure on the src/ dir or the root.\n> \n> It's in the src dir with 7.0.x alright.\n> \n> > You could take a look at 7.1RC[2-3], which looks pretty stable, and I have\n> > (RC1) compiled and working on a Solaris 8 SPARC.\n> \n> I'm using pgsql 7.0.3 on Solaris 7 Sparc and Solaris 8 Intel for a\n> website that get's about 600,000 pageviews daily. pgsql 7.0.x works\n> without problems for me and I'm connecting to it via JDBC. No crashes\n> or major problems so far.\n> \n\nThis is pretty much what I want to do - it is an internal management\nserver but the principle is the same and I intend to use JDBC to connect\nto it. I'm glad to hear you are getting good results. Anyway, I checked\nout 7.1RC2 and it configured OK, but gave me an error on make, in an\ninclude file. We are planning the first release of the management system\nfor August, so it would be preferable to get a stable system up and\nrunning ASAP rather than a test one. \n \n\n> Anyway, 7.0.x should work problem free on Solaris (there are some\n> issues with 7.1 at the moment). To make this a bit easier to diagnose,\n> could you send me the output of the following commands?\n> \n\nHere they are, with a couple extra for luck. The obese path is the\ncreation of the sysadmins here, and I have had to install my own\nversions of make and sed:\n\n##################################################################\n\neeiatuc282 eeicjon 101> uname -a\nSunOS eeiatuc282 5.7 Generic_106541-11 sun4u sparc SUNW,Ultra-5_10\neeiatuc282 eeicjon 102> echo $PATH\n/client_local/db/ant/bin:/client_local/mysql-3.22.32-sun-solaris2.7-sparc/bin:/home/eeicjon/R7B/bin:/apps/Java/jdk1.2.2_05/bin:/apps/workshop/SUNWspro/bin/:/usr/ccs/bin:/home/atus07B/apps3/emacs-20.5/bin:/usr/atria/bin:/usr/dt/bin:/usr/openwin/bin:/bin:/usr/bin:/usr/ucb:/usr/lib/X11:/usr/local/SUNWspro:/usr/openwin/lib/X11:/usr/bin:/bin:/usr/etc:/usr/5bin:/usr/sbin:/usr/local/bin:/usr/ucb:/usr/local/flextool/bin/sun4:/usr/openwin/bin:/usr/openwin/bin/xview:/home/atus07C/apps4/apssystem_v3.2/bin:/apps/localtools/filediff:/apps/scripts:/apps/localtools:/usr/local/WWW/bin:/home/atus02A/publish/ioffice/bin:/home/atus11B/SUNWspro/bin:/apps/Java/jdk1.1/bin:/apps/Java/jdk1.2beta3:/apps/Java/jit:/apps/eliza-R6/sunos5/bin:.\neeiatuc282 eeicjon 103> which sed\nsed: aliased to /client_local/exec/sed/bin/sed\neeiatuc282 eeicjon 104> sed -V\nGNU sed version 3.02\n\nCopyright (C) 1998 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is\nNO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR\nPURPOSE,\nto the extent permitted by law.\neeiatuc282 eeicjon 105> which gcc\n/home/atus07C/apps4/apssystem_v3.2/bin/gcc\neeiatuc282 eeicjon 106> gcc -v\n/home/atus07C/apps4/apssystem_v3.2/lib/cse/c_compiler/bin/gcc\n-B/home/atus07C/apps4/apssystem_v3.2/lib/cse/c_compiler/lib/gcc-lib/sun4/ericsson/\n-v\nReading specs from\n/home/atus07C/apps4/apssystem_v3.2/lib/cse/c_compiler/lib/gcc-lib/sun4/ericsson/specs\ngcc version 2.9-gnupro-98r2\neeiatuc282 eeicjon 107> which make\nmake: aliased to /client_local/exec/make/bin/make\neeiatuc282 eeicjon 108> make -v\nGNU Make version 3.79, by Richard Stallman and Roland McGrath.\nBuilt for sparc-sun-solaris2.7\nCopyright (C) 1988, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99\n Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions.\nThere is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A\nPARTICULAR PURPOSE.\n\nReport bugs to <bug-make@gnu.org>.\n\neeiatuc282 eeicjon 109>\n\n##################################################################\n\n\nThanks in advance for any help you can give me on this one.\n\n-- \nCiaran Johnston \nEricsson Systems Expertise Ltd.,\nAthlone\nCo. Westmeath\nEire\n\nemail: Ciaran.Johnston@eei.ericsson.se\nPhone: +353 902 31274\n",
"msg_date": "Thu, 05 Apr 2001 13:14:03 +0100",
"msg_from": "Ciaran Johnston <Ciaran.Johnston@eei.ericsson.se>",
"msg_from_op": false,
"msg_subject": "Re: Configure problems on Solaris 2.7, pgsql 7.02 and 7.03"
},
{
"msg_contents": "Mathijs Brands wrote:\n\n> <SNIP>\n>\n> If you want to start running your production machine in august, it would\n> be a very good idea to start using 7.1 now, since the stable release will\n> most likely be out before then (probably april or early may).\n>\n> You seem to be using the Cygnus version of GCC. I'm not sure that will\n> work ok, although it most likely does. The version of make and sed you're\n> using are ok, so they shouldn't be causing your problems.\n>\n> Since it took me a couple of days to respond, it's possible that you've\n> already resolved this problem. Have you? If not, you might try a binary\n> distribution of pgsql instead. Or you could mail me the errors you're\n> receiving. Maybe I can figure it out. No promises though.\n\nThanks, and thanks to all who responded. I experimented with a couple of configs and\nfinally went back to 7.0.3 and edited the configure script so that it didn't pass 'cc\n--version' to sed (cheers to Tom Lane for pointing out the multiple-line output).\nThis worked and compiled, but postmaster gave me a nasty memory error which the FAQ\nwas able to provide a workaround for - shmget failed (invalid argument) was the\nerror. Seems my kernel isn't properly configured for postGres (along with all the\nother things that are wrong with my system). I'm currently passing -N 16 -B 32 to the\npostmaster and it is running - haven't tested it yet tho'. Are there optimum\nparameters for these numbers, before I get around to getting my sysadmins to fix my\nkernel for me? The testing should really be finished by next week and there's no way\nthey'll get it right before then. Also how much difference will this make? I'm\nnoticing slower queries but better scaleability to bigger tables than MySQL at the\nminute, although my tests were far from optimised, and I have only just come back to\nthis.\n\nI could probably have got 7.1RC2 working as well, I was just hoping the above error\nwas specific to that version and not to my system :-). When we set up the proper test\nenvironment (which won't be for a few weeks), we'll probably use that (assuming\nno-one here decides to spend 50 grand on Oracle, or MySQL doesn't suddenly pip the\npost in the tests :-).\n\nThanks again.\n\nCiaran.\n\n--\nCiaran Johnston\nEricsson Systems Expertise Ltd.,\nAthlone\nCo. Westmeath\nEire\n\nemail: Ciaran.Johnston@eei.ericsson.se\nPhone: +353 902 31274\n\n\n\n",
"msg_date": "Mon, 09 Apr 2001 18:57:00 +0100",
"msg_from": "Ciaran Johnston <Ciaran.Johnston@eei.ericsson.se>",
"msg_from_op": false,
"msg_subject": "Re: Configure problems on Solaris 2.7, pgsql 7.02 and 7.03"
}
]
[
{
"msg_contents": "> Everything has been quiet, as far as patches are concerned, \n> for the past 24+hrs ... I'd like to roll (and actually announce)\n> an solid RC3 tonight, with announce first thing tomorrow morning,\n> unless anyone has anythign they aer sitting on?\n\nWe still have an open report about losing files after backend\ncrash... No new input from Konstantin -:(\nI'll run some tests...\n\nVadim\n",
"msg_date": "Wed, 4 Apr 2001 08:52:02 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: All's quiet ... RC3 packaging ..."
},
{
"msg_contents": "On Wed, 4 Apr 2001, Mikheev, Vadim wrote:\n\n> > Everything has been quiet, as far as patches are concerned,\n> > for the past 24+hrs ... I'd like to roll (and actually announce)\n> > an solid RC3 tonight, with announce first thing tomorrow morning,\n> > unless anyone has anythign they aer sitting on?\n>\n> We still have opened report about losing files after backend\n> crash... No new input from Konstantin -:(\n\nIf not easily recreatable, we can leave that one as something for v7.1.1\n...\n\n> I'll run some tests...\n\nthen again, if it is easily recreatable ... :)\n\n\n",
"msg_date": "Wed, 4 Apr 2001 13:59:02 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "RE: All's quiet ... RC3 packaging ..."
},
{
"msg_contents": "On Thursday 05 April 2001 00:41, Thomas Lockhart wrote:\n> I've got patches for the regression tests to work around the \"time with\n> time zone\" DST problem. Will apply to the tree asap, and will post a\n> message when that is done.\n\nIs RC3 going out or should I think about RC2?\n\nSaludos... ;-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Wed, 4 Apr 2001 20:07:11 +0300",
"msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>",
"msg_from_op": false,
"msg_subject": "Re: Re: All's quiet ... RC3 packaging ..."
},
{
"msg_contents": "I've got patches for the regression tests to work around the \"time with\ntime zone\" DST problem. Will apply to the tree asap, and will post a\nmessage when that is done.\n\n - Thomas\n",
"msg_date": "Wed, 04 Apr 2001 21:41:49 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: All's quiet ... RC3 packaging ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> On Wed, 4 Apr 2001, Mikheev, Vadim wrote:\n> Everything has been quiet, as far as patches are concerned,\n> for the past 24+hrs ... I'd like to roll (and actually announce)\n> an solid RC3 tonight, with announce first thing tomorrow morning,\n> unless anyone has anythign they aer sitting on?\n>> \n>> We still have opened report about losing files after backend\n>> crash... No new input from Konstantin -:(\n\nI'd suggest we go ahead and roll RC3. There's no way to tell how long\nit might take to diagnose Konstantin's report, and the other issues we\nhad seem to be closed out at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 19:00:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: All's quiet ... RC3 packaging ... "
},
{
"msg_contents": "On Wed, 4 Apr 2001, Thomas Lockhart wrote:\n\n> I've got patches for the regression tests to work around the \"time with\n> time zone\" DST problem. Will apply to the tree asap, and will post a\n> message when that is done.\n\nSounds cool ... I'll schedule an RC3 then, around that bug being fixed\n...\n\n\n",
"msg_date": "Wed, 4 Apr 2001 20:11:41 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: All's quiet ... RC3 packaging ..."
}
]
[
{
"msg_contents": "(cc'd the -hackers mailing list)\n\nThanks for the reports, Matthew. There is a single failure in the\nNetBSD/sparc64 test due to a problem in the reltime test (or in starting\nthe reltime test). There is a different failure in your NetBSD/sparc\ntest, but since you are not confident about your installation we'll wait\nto diagnose that (unless this rings a bell with someone).\n\nAnyone have suggestions for Matthew?\n\n - Thomas\n\n> for postgresql-7.1RC2.tar.gz, here is my `make check' for NetBSD/sparc64:\n>\n> [ ... ]\n> reltime ... FAILED\n> [ ... ]\n> test horology ... FAILED\n> [ ... ]\n> inherit ... FAILED\n> [ ... ]\n> test misc ... FAILED\n> [ ... ]\n> \n> =======================\n> 4 of 76 tests failed.\n> =======================\n> \n> digging into the regression.diffs, i can see that:\n> - reltime failed because it just had:\n> ! psql: Backend startup failed\n\nHmm. That one is a problem. Perhaps someone will have a suggestion?\n\n> - horology failed because of off-by-one errors somewhere:\n\nNot a problem; I have an unintended dependency on daylight savings time,\nwhich now causes this test to fail for everyone. The test itself should\nbe fixed for the release.\n\n> for several cases. another failure here was due to:\n> ! ERROR: Relation 'reltime_tbl' does not exist\n> which i guess is caused by the first failure.\n\nYes, I think you are right.\n\n> - inherit fails because the ordering is invalid, eg:\n> \n> - a | aaa\n> a | aaaa\n> a | aaaaa\n> a | aaaaaa\n> a | aaaaaaa\n> a | aaaaaaaa\n> b | bbb\n> b | bbbb\n> b | bbbbb\n> b | bbbbbb\n> b | bbbbbbb\n> - b | bbbbbbbb\n> c | ccc\n> \n> vs\n> \n> a | aaaa\n> a | aaaaa\n> a | aaaaaa\n> a | aaaaaaa\n> a | aaaaaaaa\n> + a | aaa\n> + b | bbbbbbbb\n> b | bbb\n> b | bbbb\n> b | bbbbb\n> b | bbbbbb\n> b | bbbbbbb\n> \n> there are dozens of these failures in the inherit test.\n> \n> - misc fails because of the reltime failure, i guess:\n> \n> - reltime_tbl\n> \n> and:\n> \n> ! (90 rows)\n> --\n> ! (89 rows)\n> \n> i don't know anything about postgresql (i am merely testing at the\n> suggestion of a friend) so i'm not very well equiped to debug these\n> failures without some help.\n> \n> and for NetBSD/sparc:\n> \n> [ ... ]\n> test horology ... FAILED\n> [ ... ]\n> create_index ... FAILED\n> [ ... ]\n> test sanity_check ... FAILED\n> [ ... ]\n> \n> - horology fails for similar reasons as sparc64, but only 2\n> failures instead of about 15.\n> \n> - create_index failed because of some weird error that may\n> have more to do with the quick-n-dirty installation i have\n> on the SS20 i'm doing the test on:\n> \n> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n> CREATE INDEX hash_name_index ON hash_name_heap USING hash (random name_ops);\n> + ERROR: cannot read block 3 of hash_name_index: Bad address\n> CREATE INDEX hash_txt_index ON hash_txt_heap USING hash (random text_ops);\n> + ERROR: cannot read block 3 of hash_txt_index: Bad address\n> CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops);\n> + ERROR: cannot read block 3 of hash_f8_index: Bad address\n> \n> - sanity_check fails because of the create_index failure:\n> \n> - hash_f8_heap | t\n> - hash_i4_heap | t\n> - hash_name_heap | t\n> - hash_txt_heap | t\n> \n> ! (45 rows)\n> vs\n> ! (41 rows)\n> \n> i will be reinstalling this SS20 with a full installation sometime in\n> the next few days. i will re-run the testsuite after this to see if\n> that is causing any of the lossage. none of the sparc64 lossage should\n> be related, and that was run on an Ultra1/140 FWIW. both of these were\n> run under NetBSD 1.5S (-current from a few weeks ago.)\n> \n> .mrg.\n",
"msg_date": "Wed, 04 Apr 2001 16:59:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform\n testing]"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Anyone have suggestions for Mathew?\n\n>> for postgresql-7.1RC2.tar.gz, here is my `make check' for NetBSD/sparc64:\n\n>> digging into the regression.diffs, i can see that:\n>> - reltime failed because it just had:\n>> ! psql: Backend startup failed\n\nThe postmaster log file should have more info, but a first thought is\nthat you ran up against process or swap-space limitations. The parallel\ncheck has fifty-odd processes going at its peak, which is more than the\ndefault per-user process limit on many Unixen.\n\n>> - inherit fails because the ordering is invalid, eg:\n\nOrdering issues are not really bugs (cf documentation about interpreting\nregression results), although it'd be interesting to know if these diffs\nstill occur after you resolve the other failures.\n\n>> - create_index failed because of some weird error that may\n>> have more to do with the quick-n-dirty installation i have\n>> on the SS20 i'm doing the test on:\n>> \n>> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n>> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n\n\"Bad address\"? That seems pretty bizarre.\n\n>> i will be reinstalling this SS20 with a full installation sometime in\n>> the next few days. i will re-run the testsuite after this to see if\n>> that is causing any of the lossage.\n\nPlease let us know.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 19:20:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> >> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> >> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n> \n> \"Bad address\"? That seems pretty bizarre.\n\nThis is obviously something that shows up on _some_ NetBSD platforms.\nThe above was on sparc64, but that same problem is the only one I see\nin the regression testing on NetBSD/vax that isn't just different\nfloating point (the VAX doesn't have IEEE), different ordering of\n(unordered) collections or different wording of strerror() output.\n\nNetBSD/i386 doesn't have the \"Bad address\" problem.\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n",
"msg_date": "05 Apr 2001 09:52:34 +0200",
"msg_from": "Tom Ivar Helbekkmo <tih@kpnQwest.no>",
"msg_from_op": false,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform testing]"
},
{
"msg_contents": " \n > >> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n > >> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n > \n > \"Bad address\"? That seems pretty bizarre.\n \n This is obviously something that shows up on _some_ NetBSD platforms.\n The above was on sparc64, but that same problem is the only one I see\n\nthat Bad address message was actually from sparc.\n",
"msg_date": "Thu, 05 Apr 2001 21:32:20 +1000",
"msg_from": "matthew green <mrg@eterna.com.au>",
"msg_from_op": false,
"msg_subject": "re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": " \n >> digging into the regression.diffs, i can see that:\n >> - reltime failed because it just had:\n >> ! psql: Backend startup failed\n \n The postmaster log file should have more info, but a first thought is\n that you ran up against process or swap-space limitations. The parallel\n check has fifty-odd processes going at its peak, which is more than the\n default per-user process limit on many Unixen.\n\nhmm, maxproc=80 on this system currently and i wasn't really doing anything\nelse. it has 256MB ram and 280MB swap (unused). exactly what am i looking\nfor in the postmaster.log file? it is 65kb long...\n",
"msg_date": "Thu, 05 Apr 2001 21:45:41 +1000",
"msg_from": "matthew green <mrg@eterna.com.au>",
"msg_from_op": false,
"msg_subject": "re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": "matthew green <mrg@eterna.com.au> writes:\n> digging into the regression.diffs, i can see that:\n> - reltime failed because it just had:\n> ! psql: Backend startup failed\n \n> The postmaster log file should have more info, but a first thought is\n> that you ran up against process or swap-space limitations. The parallel\n> check has fifty-odd processes going at its peak, which is more than the\n> default per-user process limit on many Unixen.\n\n> hmm, maxproc=80 on this system currently and i wasn't really doing anything\n> else. it has 256MB ram and 280MB swap (unused). exactly what am i looking\n> for in the postmaster.log file? it is 65kb long...\n\nLook for messages about \"fork failed\". They should give a kernel error\nmessage too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 10:08:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": "Tom Ivar Helbekkmo <tih@kpnQwest.no> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n>> \n>> \"Bad address\"? That seems pretty bizarre.\n\n> This is obviously something that shows up on _some_ NetBSD platforms.\n\nIf it's reproducible on more than one box then we should look into it.\nAm I right to guess that \"Bad address\" means a bogus pointer handed to\na kernel call? If so, it'll probably take some digging with gdb to find\nout the cause. I'd be happy to do the digging if anyone can give me an\naccount reachable via telnet or ssh on one of these machines.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 10:17:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": " \n >> i will be reinstalling this SS20 with a full installation sometime in\n >> the next few days. i will re-run the testsuite after this to see if\n >> that is causing any of the lossage.\n \n Please let us know.\n\n\nactually, i had a classic i could test with -- all except horology passed,\nso if there were two expected failures there, all is fine on NetBSD/sparc.\n",
"msg_date": "Fri, 06 Apr 2001 02:26:09 +1000",
"msg_from": "matthew green <mrg@eterna.com.au>",
"msg_from_op": false,
"msg_subject": "re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": "\n matthew green <mrg@eterna.com.au> writes:\n > digging into the regression.diffs, i can see that:\n > - reltime failed because it just had:\n > ! psql: Backend startup failed\n \n > The postmaster log file should have more info, but a first thought is\n > that you ran up against process or swap-space limitations. The parallel\n > check has fifty-odd processes going at its peak, which is more than the\n > default per-user process limit on many Unixen.\n \n > hmm, maxproc=80 on this system currently and i wasn't really doing anything\n > else. it has 256MB ram and 280MB swap (unused). exactly what am i looking\n > for in the postmaster.log file? it is 65kb long...\n \n Look for messages about \"fork failed\". They should give a kernel error\n message too.\n\n\nafter running `unlimit' (tcsh) before `make check', the only failures i have\nare the horology (expected) and the inherit sorted failures, on NetBSD/sparc64.\n\n\ni also believe the `Bad address' errors were caused when the test was run in\nan NFS mounted directory.\n\n\n.mrg.\n",
"msg_date": "Fri, 06 Apr 2001 04:06:36 +1000",
"msg_from": "matthew green <mrg@eterna.com.au>",
"msg_from_op": false,
"msg_subject": "re: [lockhart@alumni.caltech.edu: Third call for platform testing] "
},
{
"msg_contents": "> after running `unlimit' (tcsh) before `make check', the only failures i have\n> are the horology (expected) and the inherit sorted failures, on NetBSD/sparc64.\n\nI'll mark both NetBSD/sparc as supported, for both 32 and 64-bit builds.\nThanks!\n\n - Thomas\n",
"msg_date": "Fri, 06 Apr 2001 05:14:53 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform\n testing]"
},
{
"msg_contents": "matthew green <mrg@eterna.com.au> writes:\n\n> i also believe the `Bad address' errors were caused when the test\n> was run in an NFS mounted directory.\n\nYou may have something, there. My test run on the VAX was over NFS.\nI set up NetBSD on a VAX specifically to test PostgreSQL 7.1, but I\ndidn't have any disk available that it could use, so I went for NFS.\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n",
"msg_date": "08 Apr 2001 20:20:47 +0200",
"msg_from": "Tom Ivar Helbekkmo <tih@kpnQwest.no>",
"msg_from_op": false,
"msg_subject": "Re: [lockhart@alumni.caltech.edu: Third call for platform testing]"
},
{
"msg_contents": "Tom Ivar Helbekkmo <tih@kpnQwest.no> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n> + ERROR: cannot read block 3 of hash_i4_index: Bad address\n>> \n>> \"Bad address\"? That seems pretty bizarre.\n\n> This is obviously something that shows up on _some_ NetBSD platforms.\n> The above was on sparc64, but that same problem is the only one I see\n> in the regression testing on NetBSD/vax that isn't just different\n> floating point (the VAX doesn't have IEEE), different ordering of\n> (unordered) collections or different wording of strerror() output.\n\n> NetBSD/i386 doesn't have the \"Bad address\" problem.\n\nAfter looking into it, I find that the problem is this: Postgres, or at\nleast the hash-index part of it, expects to be able to lseek() to a\nposition past the end of a file and then get a non-failure return from\nread(). (This happens indirectly because it uses ReadBuffer for blocks\nthat it has never yet written.) Given the attached test program, I get\nthis result on my own machine:\n\n$ touch z\t\t\t-- create an empty file\n$ ./a.out z 0\t\t\t-- read at offset 0\nRead 0 bytes\n$ ./a.out z 1\t\t\t-- read at offset 8K\nRead 0 bytes\n\nPresumably, the same result appears everywhere else that the regress\ntests pass. But NetBSD 1.5T gives\n\n$ touch z\n$ ./a.out z 0\nRead 0 bytes\n$ ./a.out z 1\nread: Bad address\n$ uname -a\nNetBSD varg.i.eunet.no 1.5T NetBSD 1.5T (VARG) #4: Thu Apr 5 23:38:04 CEST 2001 root@varg.i.eunet.no:/usr/src/sys/arch/vax/compile/VARG vax\n\nI think this is indisputably a bug in (some versions of) NetBSD. If I\ncan seek past the end of file, read() shouldn't consider it a hard error\nto read there --- and in any case, EFAULT isn't a very reasonable error\ncode to return. 
Since it seems not to be a widespread problem, I'm not\neager to change the hash code to try to avoid it.\n\n\t\t\tregards, tom lane\n\n\n#include <stdio.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <unistd.h>\n\nint main (int argc, char** argv)\n{\n\tchar *fname = argv[1];\n\tint fd, readres;\n\tlong seekres;\n\tchar buf[8192];\n\n\tfd = open(fname, O_RDONLY, 0);\n\tif (fd < 0)\n\t{\n\t\tperror(fname);\n\t\texit(1);\n\t}\n\tseekres = lseek(fd, atoi(argv[2]) * 8192, SEEK_SET);\n\tif (seekres < 0)\n\t{\n\t\tperror(\"seek\");\n\t\texit(1);\n\t}\n\treadres = read(fd, buf, sizeof(buf));\n\tif (readres < 0)\n\t{\n\t\tperror(\"read\");\n\t\texit(1);\n\t}\n\tprintf(\"Read %d bytes\\n\", readres);\n\n\texit(0);\n}\n",
"msg_date": "Fri, 13 Apr 2001 21:16:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "NetBSD \"Bad address\" failure (was Re: Third call for platform\n testing)"
},
{
"msg_contents": "I wrote:\n> I think this is indisputably a bug in (some versions of) NetBSD. If I\n> can seek past the end of file, read() shouldn't consider it a hard error\n> to read there --- and in any case, EFAULT isn't a very reasonable error\n> code to return. Since it seems not to be a widespread problem, I'm not\n> eager to change the hash code to try to avoid it.\n\nI forgot to mention a possible contributing factor: the files involved\nwere NFS-mounted, in the case I was looking at. So this may be an NFS\nproblem more than a NetBSD problem. Anyone want to try the given test\ncase on NFS-mounted files on other systems?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 22:39:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD \"Bad address\" failure (was Re: Third call for platform\n\ttesting)"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > I think this is indisputably a bug in (some versions of) NetBSD.\n> \n> I forgot to mention a possible contributing factor: the files involved\n> were NFS-mounted, in the case I was looking at. So this may be an NFS\n> problem more than a NetBSD problem. Anyone want to try the given test\n> case on NFS-mounted files on other systems?\n\nI can verify, that with NetBSD-current on sparc, your test code works\nthe way you want it to on local disk, but fails (in the way you've\nobserved), if the target file is on an NFS-mounted file system.\n\n-tih\n-- \nThe basic difference is this: hackers build things, crackers break them.\n",
"msg_date": "14 Apr 2001 23:24:55 +0200",
"msg_from": "Tom Ivar Helbekkmo <tih@kpnQwest.no>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD \"Bad address\" failure (was Re: Third call for platform\n\ttesting)"
},
{
"msg_contents": "Tom Ivar Helbekkmo <tih@kpnQwest.no> writes:\n> I can verify, that with NetBSD-current on sparc, your test code works\n> the way you want it to on local disk, but fails (in the way you've\n> observed), if the target file is on an NFS-mounted file system.\n\nFWIW, the test program succeeds (no error) using HPUX 10.20 and a couple\ndifferent Linux flavors as either client or server. So I'm still\nthinking that it's NetBSD-specific. It would be useful to try it on\nsome other BSD derivatives though ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 19:09:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD \"Bad address\" failure (was Re: Third call for platform\n\ttesting)"
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010414 18:15]:\n> Tom Ivar Helbekkmo <tih@kpnQwest.no> writes:\n> > I can verify, that with NetBSD-current on sparc, your test code works\n> > the way you want it to on local disk, but fails (in the way you've\n> > observed), if the target file is on an NFS-mounted file system.\n> \n> FWIW, the test program succeeds (no error) using HPUX 10.20 and a couple\n> different Linux flavors as either client or server. So I'm still\n> thinking that it's NetBSD-specific. It would be useful to try it on\n> some other BSD derivatives though ...\nI can arrange a test on FreeBSD 4.3... I'll try it tomorrow...\n\n(Or I can give access....)\n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 14 Apr 2001 21:40:23 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD \"Bad address\" failure (was Re: Third call for platform\n\ttesting)"
},
{
"msg_contents": "\n\nyes, this is a bug in netbsd-current that was introduced with about 5 month\nago with the new unified buffer cache system. it has been fixed.\n\n\nthanks.\n\n\n\nFrom: Chuck Silvers <chs@netbsd.org>\nTo: source-changes@netbsd.org\nSubject: CVS commit: syssrc\nDate: Mon, 16 Apr 2001 17:37:44 +0300 (EEST)\n\n\nModule Name:\tsyssrc\nCommitted By:\tchs\nDate:\t\tMon Apr 16 14:37:44 UTC 2001\n\nModified Files:\n\tsyssrc/sys/nfs: nfs_bio.c\n\nLog Message:\nreads at or after EOF should \"succeed\".\n\n\nTo generate a diff of this commit:\ncvs rdiff -r1.65 -r1.66 syssrc/sys/nfs/nfs_bio.c\n\nPlease note that diffs are not public domain; they are subject to the\ncopyright notices on the relevant files.\n\n",
"msg_date": "Tue, 17 Apr 2001 02:15:56 +1000",
"msg_from": "matthew green <mrg@eterna.com.au>",
"msg_from_op": false,
"msg_subject": "re: NetBSD \"Bad address\" failure (was Re: Third call for platform\n\ttesting)"
}
] |
[
{
"msg_contents": "I am trying to build Mac OSX 10.0 from the current cvs.\n\n./configure --with-perl --with-openssl --enable-syslog |& tee configure.logfile\n\n<snip>\n\nmake |& tee Makefile.logfile\n\n<snip>\n\nRunning Mkbootstrap for plperl ()\nchmod 644 plperl.bs\nLD_RUN_PATH=\"\" cc -o blib/arch/auto/plperl/plperl.bundle -bundle \n-undefined suppress plperl.o eloglvl.o SPI.o \n/System/Library/Perl/darwin/auto/Opcode/Opcode.bundle \n-L/System/Library/Perl/darwin/CORE -lperl\n/usr/bin/ld: /System/Library/Perl/darwin/auto/Opcode/Opcode.bundle is \ninput for the dynamic link editor, is not relocatable by the static \nlink editor again\nmake[4]: *** [blib/arch/auto/plperl/plperl.bundle] Error 1\nmake[3]: *** [all] Error 2\nmake[2]: *** [all] Error 2\nmake[1]: *** [all] Error 2\nmake: *** [all] Error 2\n\nany ideas?\n\nNeil\n",
"msg_date": "Wed, 4 Apr 2001 12:56:53 -0500",
"msg_from": "Neil Tiffin <ntiffin@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Problem Building on Mac OSX"
}
] |
[
{
"msg_contents": "> \n> Bruce,\n> \n> Two changes for the TODO list.\n> \n> 1. Under \"RELIABILITY/MISC\", add:\n> \n> Write out a CRC with each data block, and verify it on reading.\n> \n> 2. Under SOURCE CODE, I believe Tom has already implemented:\n> \n> Correct CRC WAL code to be a real CRC64 algorithm \n\nTODO updated. I know we did number 2, but did we agree on #1 and is it\ndone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 4 Apr 2001 16:59:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TODO list"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Two changes for the TODO list.\n>> \n>> 1. Under \"RELIABILITY/MISC\", add:\n>> \n>> Write out a CRC with each data block, and verify it on reading.\n>> \n>> 2. Under SOURCE CODE, I believe Tom has already implemented:\n>> \n>> Correct CRC WAL code to be a real CRC64 algorithm \n\n> TODO updated. I know we did number 2, but did we agree on #1 and is it\n> done?\n\n#2 is indeed done. #1 is not done, and possibly not agreed to ---\nI think Vadim had doubts about its usefulness, though personally I'd\nlike to see it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 17:28:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
},
{
"msg_contents": "> > TODO updated. I know we did number 2, but did we agree on #1 and is it\n> > done?\n> \n> #2 is indeed done. #1 is not done, and possibly not agreed to ---\n> I think Vadim had doubts about its usefulness, though personally I'd\n> like to see it.\n\nThat was my recollection too. This was the discussion about testing the\ndisk hardware. #1 removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 4 Apr 2001 17:31:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: TODO list"
},
{
"msg_contents": "> > > TODO updated. I know we did number 2, but did we agree on #1 and is\nit\n> > > done?\n> >\n> > #2 is indeed done. #1 is not done, and possibly not agreed to ---\n> > I think Vadim had doubts about its usefulness, though personally I'd\n> > like to see it.\n>\n> That was my recollection too. This was the discussion about testing the\n> disk hardware. #1 removed.\n\nWhat is recommended in the bible (Gray and Reuter), especially for larger\ndisk block sizes that may not be written atomically, is to have a word at\nthe end of the that must match a word at the beginning of the block. It\ngets changed each time you write the block.\n\nKen Hirsch\nAll your database are belong to us.\n\n",
"msg_date": "Thu, 5 Apr 2001 16:25:42 -0400",
"msg_from": "\"Ken Hirsch\" <kahirsch@bellsouth.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 04:25:42PM -0400, Ken Hirsch wrote:\n> > > > TODO updated. I know we did number 2, but did we agree on #1 and is\n> it\n> > > > done?\n> > >\n> > > #2 is indeed done. #1 is not done, and possibly not agreed to ---\n> > > I think Vadim had doubts about its usefulness, though personally I'd\n> > > like to see it.\n> >\n> > That was my recollection too. This was the discussion about testing the\n> > disk hardware. #1 removed.\n> \n> What is recommended in the bible (Gray and Reuter), especially for larger\n> disk block sizes that may not be written atomically, is to have a word at\n> the end of the that must match a word at the beginning of the block. It\n> gets changed each time you write the block.\n\nThat only works if your blocks are atomic. Even SCSI disks reorder\nsector writes, and they are free to write the first and last sectors\nof an 8k-32k block, and not have written the intermediate blocks \nbefore the power goes out. On IDE disks it is of course far worse.\n\n(On many (most?) IDE drives, even when they have been told to report \nwrite completion only after data is physically on the platter, they will \n\"forget\" if they see activity that looks like benchmarking. Others just \nignore the command, and in any case they all default to unsafe mode.)\n\nIf the reason that a block CRC isn't on the TODO list is that Vadim\nobjects, maybe we should hear some reasons why he objects? Maybe \nthe objections could be dealt with, and everyone satisfied.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 5 Apr 2001 14:01:10 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
}
] |
[
{
"msg_contents": "Hi,\n\nI am writing a C program that accesses a 7.0.3 database using libpq. I would like to\nbe able to do a fprintf through some file pointer or pipe TO the database via a PQexec call of COPY. \nI have been unable to figure out if this is possible. I see where I can do this via stdin,\nbut I don't want the user to make the entry, I want the executable to do it and still enjoy\nthe performance of the COPY command.\n\nI see that the COPY docs mention using a pipe instead of stdin:\n\n\"stdin Specifies that input comes from a pipe or terminal\"\n\nHowever, I can find no other info regarding doing a printf through a pipe directly to the db using COPY.\n\n\nIs there a way to tell the COPY command to accept a file pointer or pipe or whatever other than stdin as it's input\nwhen copying TO the db?\n\nThanks!\n-- \nJohn Coers Intrinsity, Inc. \ncoers@intrinsity.com Austin, Texas\n",
"msg_date": "Wed, 04 Apr 2001 16:13:35 -0500",
"msg_from": "John Coers <coers@intrinsity.com>",
"msg_from_op": true,
"msg_subject": "COPY Question"
},
{
"msg_contents": "In article <3ACB8E7F.C12A50CA@intrinsity.com>, \"John Coers\"\n<coers@intrinsity.com> wrote:\n\n> Hi,\n> \n> I am writing a C program that accesses a 7.0.3 database using libpq. I\n> would like to be able to do a fprintf through some file pointer or pipe\n> TO the database via a PQexec call of COPY. I have been unable to figure\n> out if this is possible. I see where I can do this via stdin, but I\n> don't want the user to make the entry, I want the executable to do it\n> and still enjoy the performance of the COPY command.\n\nIf you're on UNIX, look at the popen(3) call.\n\nGordon.\n-- \nIt doesn't get any easier, you just go faster.\n -- Greg LeMond\n",
"msg_date": "Wed, 04 Apr 2001 18:11:13 -0400",
"msg_from": "\"Gordon A. Runkle\" <gar@no-spam-integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY Question"
},
{
"msg_contents": "Hi,\n\nMy generic problem is performance when copying very large amounts of data to a db from multiple clients.\n\nI am writing a C program on Linux Redhat6.2 that accesses a 7.0.3 database using libpq. I\nwould like to be able to do a printf through STDOUT (or another file pointer) TO the database via a \nPQexec call of COPY. Something like this would be ideal:\n\nPQexec(conn, \"COPY moncoverage from STDOUT\");\n\nI have been unable to figure out if this is possible. I understand that I can do this via stdin, \nbut I don't want the user to make the entry, I want the executable to do it and still enjoy the \nperformance of the COPY command.\n\nI also understand that I can open a pipe to psql and do it through there like this:\n\nFILE *ofp = popen(\"psql -a -c 'COPY tablename from stdin' -d dbname -h host\",\"w\");\n \nfor(i=0;i<15000;i++)\n fprintf(ofp,\"%s\\n\",row[i]);\n \nfprintf(ofp,\"\\\\.\\n\");\npclose(ofp);\n\n\nHowever, performance is extremely important to me in this application. I already have a connection\nopen to the db in the C executable to do 10 inserts, so it I would like to go ahead and use that\nconnection to perform my PQexec(conn,'COPY...') to reduce connection overhead. Also, I am assuming that\nthe PQexec call makes a more efficient connection that the popen to psql and COPY scheme. During trial\nruns my server maintains 34 postmaster processes -- there has to be a better way to do this.\n\nAny help would be greatly appreciated. Please let me know if this is the wrong place to post this.\n",
"msg_date": "Mon, 09 Apr 2001 15:58:39 -0500",
"msg_from": "John Coers <coers@intrinsity.com>",
"msg_from_op": true,
"msg_subject": "libpq PQexec call of COPY"
},
{
"msg_contents": "John Coers writes:\n\n> Hi,\n>\n> My generic problem is performance when copying very large amounts of data to a db from multiple clients.\n>\n> I am writing a C program on Linux Redhat6.2 that accesses a 7.0.3 database using libpq. I\n> would like to be able to do a printf through STDOUT (or another file pointer) TO the database via a\n> PQexec call of COPY. Something like this would be ideal:\n>\n> PQexec(conn, \"COPY moncoverage from STDOUT\");\n>\n> I have been unable to figure out if this is possible. I understand that I can do this via stdin,\n> but I don't want the user to make the entry, I want the executable to do it and still enjoy the\n> performance of the COPY command.\n\nRead the libpq chapter in the Programmer's Guide and look into\nsrc/bin/psql/copy.c for information and examples of using COPY through\nlibpq. Yes, it's possible, but you need to use special API calls.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 10 Apr 2001 19:08:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: libpq PQexec call of COPY"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Mikheev, Vadim [mailto:vmikheev@SECTORBASE.COM]\n> Sent: Wednesday, April 04, 2001 3:37 AM\n> To: 'Tom Lane'\n> Subject: RE: [BUGS] Loosing files after backend crash \n> \n> \n> 1. Indices could be recreated with REINDEX or pg_class could \n> be queried\n> with seq scan (something like where relname like \n> '%seq_i___data_buffer%')...\n> Konstantin?\n\nWell, bad news. After a few more tries to crash the backend, the whole\npostmaster crashed and didn't rise back.\nIt fails to start up reporting \"Apr 4 18:53:05 wale postgres[71618]: [9]\nFATAL 2: XLogWrite: write request is past end of log\" to syslog.\nAnd the last line of errlog sounds like \"/usr/local/pgsql/bin/postmaster:\nStartup proc 72905 exited with status 512 - abort\"\nI wanted to ask, if I need to re-initdb or there are some other ways to fix\nthe problem?\nIf I need to re-init, can I preserve the database in it's current state, to\ncontinue my investigation from the point I was interrupted?\n\nI hope I will be able to answer your questions after I heal the postmaster.\n\n> 3. Could you help us reproduce this bug, Konstantin?\n> What exactly did you do after sequence creation?\n\nHere's the script which creates the sequence and the temp table:\n\n---------------------8<---------------------\nbegin transaction;\ndrop sequence _seq_i___data_buffer;\ncreate sequence _seq_i___data_buffer;\n\nCREATE TEMPORARY TABLE __data_buffer (\n buff_id int4 UNIQUE NOT NULL default NEXTVAL( '_seq_i___data_buffer' ),\n rule_id int4 NOT NULL,\n _value decimal( 18, 0) NOT NULL,\n _count decimal( 18, 0) NOT NULL,\n value_time timestamp NOT NULL\n );\n\ninsert into __data_buffer\n(buff_id , rule_id , _value , _count\n, value_time )\n(\n... [689 UNION'd selects]\n);\ncommit;\n---------------------8<---------------------\nAfter that I run my function.\n\nShall I send you it's code? 
(It's 23 Kbytes big).\n\n> Does your function reference temp table you've mentioned?\nYes. And it also creates three more temporary tables (Let's call them A, B\nand C).\nActually, temp table 'A' is populated with the values from the external\ntemptable ('__data_buffer').\nAnd the 'B' temp table is populated with a query from temptable A.\nThen table C is populated with a huge query, which joins many tables and\ntable 'B' among them.\nBut tables 'A' and the external one are not referenced in this huge query.\nWell, this very query crashes the postmaster.\nOur team is playing with this query to locate the reasons for this failure.\nWe will report you the results of our investigation.\n\nIf you want to have a look at the query, here it is.\n'__vars_info' is what I referenced as temptable C.\n'__rule_data_with_tis' is what I referenced as temptable 'B'. \nAll other tables are not temporary.\nint1 is my self-written type. To prevent from blaming my type I can say,\nthat replacing it with, say, float gives the same results.\ndatediff() is my own function (MS SQL analog, works well alone).\n\n---------------------8<---------------------\n INSERT INTO __vars_info ( var_id, min_old_ti, max_old_ti, lifetime,\ntimeinterval )\n ( SELECT variable.var_id,\n CASE\n WHEN year = 1 THEN MIN( datediff(''year'', basetime, timebegin))\n WHEN month = 1 THEN MIN( datediff(''month'', basetime, timebegin))\n WHEN week = 1 THEN MIN( datediff(''week'', basetime, timebegin))\n WHEN day = 1 THEN MIN( datediff(''day'', basetime, timebegin))\n WHEN hour = 1 THEN MIN( datediff(''hour'', basetime, timebegin))\n WHEN five_minute = 1 THEN MIN( datediff(''minute'', basetime,\ntimebegin) / 5)\n END as min_old_ti,\n CASE\n WHEN year = 1 THEN MAX( datediff(''year'', basetime, timebegin))\n WHEN month = 1 THEN MAX( datediff(''month'', basetime, timebegin))\n WHEN week = 1 THEN MAX( datediff(''week'', basetime, timebegin))\n WHEN day = 1 THEN MAX( datediff(''day'', basetime, timebegin))\n WHEN hour = 1 
THEN MAX( datediff(''hour'', basetime, timebegin))\n WHEN five_minute = 1 THEN MAX( datediff(''minute'', basetime,\ntimebegin) / 5)\n END as max_old_ti,\n lifetime, timeinterval\n FROM\n variable\n LEFT JOIN\n ( SELECT var_id, timebegin FROM var_value WHERE var_id IN\n ( SELECT DISTINCT var_id FROM variable WHERE vset_id IN\n ( SELECT DISTINCT vset_id FROM vset_to_rule WHERE rule_id IN\n ( SELECT DISTINCT rule_id FROM __rule_data_with_tis ) ) ) )\n AS sel_var_value\n ON variable.var_id = sel_var_value.var_id,\n ( SELECT ti_id, (year)::int1 AS year, (month)::int1 AS month,\n (week)::int1 AS week, (day)::int1 AS day,\n (hour)::int1 AS hour, (five_minute)::int1 AS five_minute\n FROM timeinterval )\n AS timeinterval\n WHERE\n variable.ti_id = timeinterval.ti_id AND\n variable.isactive = 1 AND variable.vset_id IN\n ( SELECT DISTINCT vset_id FROM vset_to_rule\n WHERE vset_id IN ( SELECT DISTINCT vset_id FROM variable_set where\nisactive = 1 ) AND\n rule_id IN ( SELECT DISTINCT rule_id FROM\n__rule_data_with_tis ) )\n GROUP BY variable.var_id, lifetime, timeinterval, year::int4,\nmonth::int4, week::int4, day::int4, hour::int4, five_minute::int4)\n ;\n---------------------8<---------------------\n\n> What cause crash? Maybe crash is related somehow...\nSee above.\n\n> Could you try to reproduce failure with wal_debug = 1 and\n> post me postmaster' log?\nI'll do it after I succeed in bringing the postmaster up.\n\nRegards, \nKonstantin Solodovnikov.\n\nP.S. 
Almost forgot:\nHere's what psql tells about our sequence:\n\n---------------------8<---------------------\nNetflow_Test=# \\d seq_i___data_buffer\n Table \"seq_i___data_buffer\"\n Attribute | Type | Modifier\n------------+--------------------------+------------------------------------\n-------------------\n buff_id | integer | not null default\nnextval('seq_i___data_buffer'::text)\n ^^^^^^^\n buff_id | integer |\n ^^^^^^^\n rule_id | integer | not null\n _value | numeric(18,0) | not null\n _count | numeric(18,0) | not null\n value_time | timestamp with time zone | not null\nIndex: pg_temp.96430.1\n ^^^^^^^^^^^^^\n---------------------8<---------------------\n\nIt shows almost the same structure as __data_buffer should have.\nExcept for the fact, that it has 'buff_id' doubled :)\n\nK.S.\n",
"msg_date": "Thu, 5 Apr 2001 02:25:29 +0400 ",
"msg_from": "KS <ks@tcnet.ru>",
"msg_from_op": true,
"msg_subject": "RE: [BUGS] Loosing files after backend crash "
},
{
"msg_contents": "KS <ks@tcnet.ru> writes:\n> Well, bad news. After a few more tries to crash the backend, the whole\n> postmaster crashed and didn't rise back.\n> It fails to start up reporting \"Apr 4 18:53:05 wale postgres[71618]: [9]\n> FATAL 2: XLogWrite: write request is past end of log\" to syslog.\n\nUgh.\n\n> And the last line of errlog sounds like \"/usr/local/pgsql/bin/postmaster:\n> Startup proc 72905 exited with status 512 - abort\"\n> I wanted to ask, if I need to re-initdb or there are some other ways to fix\n> the problem?\n\nYou can use contrib/pg_resetxlog to remove the damaged WAL log, which\nwill allow the postmaster to start up. If you have the space, please\nfirst save the contents of $PGDATA (the whole tree if possible, else at\nleast pg_xlog directory) for later analysis. Note that the database\nmight not be completely consistent after you zap the WAL log --- safest\nbet would be to dump the data, look it over for problems, then initdb\nand restore.\n\n> Shall I send you it's code? (It's 23 Kbytes big).\n\nPlease. 23K is no problem.\n\nBTW, exactly which version of Postgres are you working with --- is it a\nCVS snapshot, or a beta or RC release, and if so which one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 Apr 2001 18:38:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Loosing files after backend crash "
}
] |
[
{
"msg_contents": "Using current cvs version on Mac OS X 10.0 built with\n\n./configure\nmake\nmade check\n\ntest horology ... FAILED\n\nfrom the regression.diffs file\n\nIs this a problem or not?\n\nNeil Tiffin\nChicago\n\n\n[localhost:src/test/regress] ntiffin% cat regression.diffs\n*** ./expected/horology.out Sun Dec 3 08:51:11 2000\n--- ./results/horology.out Wed Apr 4 13:31:02 2001\n***************\n*** 122,128 ****\n SELECT time with time zone '01:30' + interval '02:01' AS \"03:31:00-08\";\n 03:31:00-08\n -------------\n! 03:31:00-08\n (1 row)\n\n SELECT time with time zone '01:30-08' - interval '02:01' AS \"23:29:00-08\";\n--- 122,128 ----\n SELECT time with time zone '01:30' + interval '02:01' AS \"03:31:00-08\";\n 03:31:00-08\n -------------\n! 03:31:00-07\n (1 row)\n\n SELECT time with time zone '01:30-08' - interval '02:01' AS \"23:29:00-08\";\n***************\n*** 140,146 ****\n SELECT time with time zone '03:30' + interval '1 month 04:01' AS \n\"07:31:00-08\";\n 07:31:00-08\n -------------\n! 07:31:00-08\n (1 row)\n\n SELECT interval '04:30' - time with time zone '01:02' AS \"+03:28\";\n--- 140,146 ----\n SELECT time with time zone '03:30' + interval '1 month 04:01' AS \n\"07:31:00-08\";\n 07:31:00-08\n -------------\n! 07:31:00-07\n (1 row)\n\n SELECT interval '04:30' - time with time zone '01:02' AS \"+03:28\";\n\n======================================================================\n\n",
"msg_date": "Wed, 4 Apr 2001 17:47:30 -0500",
"msg_from": "Neil Tiffin <ntiffin@earthlink.net>",
"msg_from_op": true,
"msg_subject": "Regression failed Mac OSX"
},
{
"msg_contents": "At 5:47 PM -0500 4/4/01, Neil Tiffin wrote:\n>Using current cvs version on Mac OS X 10.0\n>\n>test horology ... FAILED\n\n\nI can't reproduce this on a 10.0 (4K78) system. I just ran the regression tests from cvs HEAD downloaded @ 16:10 PDT on a G3/350. It also passed several dozen iterations on a G4/400 and a G4/450x2 running 10.0.1.\n\nLooping (make clean; make runcheck) in a shell while loop seems to eat all available disk space, but I don't think that's a PG bug.\n\nOooh, while waiting for another check to pass, I read the pgsql-hackers archives, and I see that the horology test problem was because of daylight savings time (on all platforms). It appears to have been resolved, because current sources don't have the problem.\n\n-pmb\n\n\n",
"msg_date": "Wed, 4 Apr 2001 16:30:16 -0700",
"msg_from": "Peter Bierman <bierman@apple.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression failed Mac OSX"
}
] |
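The horology failure in the thread above (expected `-08`, got `-07`) is the signature of a daylight-saving shift. As a sketch of the effect (using Python's `zoneinfo` with `America/Los_Angeles` as a stand-in for the regression tests' PST8PDT assumption), the same wall-clock time prints different offsets before and after the April 2001 DST switch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")  # stand-in for the tests' PST8PDT zone

winter = datetime(2001, 1, 15, 3, 31, tzinfo=pacific)  # standard time (PST)
spring = datetime(2001, 4, 15, 3, 31, tzinfo=pacific)  # after DST began April 1, 2001

print(winter.strftime("%H:%M:%S%z"))  # 03:31:00-0800
print(spring.strftime("%H:%M:%S%z"))  # 03:31:00-0700
```

Expected regression output captured in winter will therefore disagree by one hour of offset with runs made in April, which matches Peter Bierman's observation that current sources had resolved the problem.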
[
{
"msg_contents": "> Well, bad news. After a few more tries to crash the backend, the whole\n> postmaster crashed and didn't rise back.\n> It fails to start up reporting \"Apr 4 18:53:05 wale \n> postgres[71618]: [9]\n> FATAL 2: XLogWrite: write request is past end of log\" to syslog.\n\nHmmm, the only XLogWrite startup process should be doing is\nwhen writing final checkpoint after successful REDO.\n\n> > Could you try to reproduce failure with wal_debug = 1 and\n> > post me postmaster' log?\n> I'll do it after I succeed in bringing the postmaster up.\n\nPlease try to startup with wal_debug = 1 and post us log.\n\nVadim\n \n",
"msg_date": "Wed, 4 Apr 2001 16:05:32 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: [BUGS] Loosing files after backend crash "
}
] |
[
{
"msg_contents": "> Well, bad news. After a few more tries to crash the backend, the whole\n> postmaster crashed and didn't rise back.\n> It fails to start up reporting \"Apr 4 18:53:05 wale \n> postgres[71618]: [9]\n> FATAL 2: XLogWrite: write request is past end of log\" to syslog.\n\nOk, this one is easy to fix. From Konstantin startup log:\n\n...\n> REDO @ 0/220716996; LSN 0/220717056: ...\n ^^^^^^^^^\nEnd of 8K page!\n\n...\n> INSERT @ 0/220717064: prev 0/220716996; ... checkpoint ...\n> XLogFlush: rqst 0/220717128; wrt 0/220717056; flsh 0/220717056\n\nCheckpoint is the first record on new page. To satisfy\n\n if (!XLByteLT(LogwrtResult.Write, XLogCtl->xlblocks[Write->curridx]))\n elog(STOP, \"XLogWrite: write request is past end of log\");\n\nin XLogWrite() we have to initialize XLogCtl->xlblocks[0] to the next page\n(where checkpoint will go) in StartupXLOG().\n\nStill not related to original problem. But this is second bug discovered\nsince issue was rised -:)\n\nVadim\n",
"msg_date": "Wed, 4 Apr 2001 18:31:21 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: [BUGS] Loosing files after backend crash "
},
{
"msg_contents": "> > FATAL 2: XLogWrite: write request is past end of log\" to syslog.\n\nOk, I hope this one is fixed. Tom, please review changes.\nKonstantin, are you able to compile PG from CVS?\nTo restart postmaster with current version...\n\nVadim\n\n\n",
"msg_date": "Thu, 5 Apr 2001 02:41:27 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: RE: [BUGS] Loosing files after backend crash "
}
] |
[
{
"msg_contents": "\n> > Could you please try to just remove the cpp flag? Also I wonder why you are\n> > using \"long long int\" instead of just \"long int\" in your C program. Well\n> > that is the people who complained to you.\n> \n> Yes, dropping the CPP flags solves the problem for us. I assume all\n> platforms have long long now?\n> \n> We used long long as this seems to be pretty consistently 64 bits on\n> different platforms, and our code runs on Tru64, PC linux and openBSD.\n\nI think the people did the perfectly correct thing to use long long int,\nsince that makes their code more portable.\nCan someone try to help me understand this please ?\nMy understanding so far is:\n\t1. long int is the same as long (32 or more bits)\n\t2. long long int is at least 64 bits (I have so far not seen more that 64 bits)\n\t\t(my original understanding was that it is 64bits, but Tom corrected me)\n\nSo my conclusion would be, that ecpg should understand \"long long int\" since\nthat is preferable over a \"long int\" that is 64bits by chance.\n\nI do agree with the statement, that HAVE_LONG_LONG_INT_64 shoud be\ndefined on all platforms where the compiler understands it to be 64bits.\nIt would imho be the responsibility of backend code, to only do one of\nthe two if both are defined.\nOtherwise the defines should have a different name like USE_....\n\nAndreas\n",
"msg_date": "Thu, 5 Apr 2001 10:01:53 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: ecpg long int problem on alpha + fix"
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 10:01:53AM +0200, Zeugswetter Andreas SB wrote:\n> I do agree with the statement, that HAVE_LONG_LONG_INT_64 shoud be\n> defined on all platforms where the compiler understands it to be 64bits.\n> It would imho be the responsibility of backend code, to only do one of\n> the two if both are defined.\n\nI just committed some changes so that ecpg does acceptt \"long long\"\nvariables all the time, but repleces them with type \"long\" if\nHAVE_LONG_LONG_INT_64 is not defined. This appears to be a strategy similar\nto the one used by the backend.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 5 Apr 2001 10:40:03 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg long int problem on alpha + fix"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> I just committed some changes so that ecpg does acceptt \"long long\"\n> variables all the time, but repleces them with type \"long\" if\n> HAVE_LONG_LONG_INT_64 is not defined.\n\nThis looks like a workable strategy for now. Ten years from now, when\n\"long\" means 64 bits everywhere and people are starting to use \"long long\"\nto mean 128 bits, we'll have to revisit it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 13:31:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg long int problem on alpha + fix "
}
] |
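Andreas's width summary above is easy to confirm on any given platform. A minimal illustration (using Python's ctypes to report the local C ABI, purely as a sketch of the point, not anything ecpg does):

```python
import ctypes

# "long" is 32 bits on most 32-bit ABIs but 64 bits on e.g. Alpha/Tru64
# or x86-64, while "long long" is at least 64 bits wherever the compiler
# accepts it -- which is why "long long" is the more portable choice for
# code that needs a 64-bit integer.
long_bits = ctypes.sizeof(ctypes.c_long) * 8
long_long_bits = ctypes.sizeof(ctypes.c_longlong) * 8
print(long_bits, long_long_bits)  # e.g. "64 64" on x86-64 Linux
```

This is the same distinction behind HAVE_LONG_LONG_INT_64: the configure test keys on the actual width, not on the type's name.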
[
{
"msg_contents": "We are just (as per other queries recently) building a new system\nusing postgresql as the backend database. 7.1 seems like it is going\ngive us a number of essential fixes and useful features that make it\nworth waiting a while.\n\nAs I have not seen announcements of the beta and RC cuts on\npgsql-announce, I would assume the development schedule is a more\nclosed thing than general chatter here, BUT is there an target\ntimetable for 7.1 (more that \"when its ready\" and subsequent 7.1.1\ntype releases ?\n\nThis information would help us decide whether to use an RC as a\ndevelopment platform, moving to release later when we are ready to\ntest final work.\n\nAlso, would it be possible to announce alpha/beta/RC releases to\npgsql-announce ?\n\nrgds,\n-- \nPeter Galbavy\nKnowledge Matters Ltd\nhttp://www.knowledge.com/\n",
"msg_date": "Thu, 5 Apr 2001 10:06:08 +0100",
"msg_from": "Peter Galbavy <peter.galbavy@knowledge.com>",
"msg_from_op": true,
"msg_subject": "release dates and announcements ?"
},
{
"msg_contents": "\nHi Peter ...\n\n\tThe problem this cycle has been that as soon as a package is ready\nfor announce, ppl have been cropping up with bugs that need to be fixed,\nso we don't bother announcing it ... except to -hackers ...\n\n\tWe are currently at Release Candidate 3, with an RC4 most likely\ngoing out tomorrow evening, which will also be announced to -announce as\nthe 'final release before release' ...\n\n\t... and, if all goes well, the full relesae will be out on Friday\nof this coming week ...\n\nOn Thu, 5 Apr 2001, Peter Galbavy wrote:\n\n> We are just (as per other queries recently) building a new system\n> using postgresql as the backend database. 7.1 seems like it is going\n> give us a number of essential fixes and useful features that make it\n> worth waiting a while.\n>\n> As I have not seen announcements of the beta and RC cuts on\n> pgsql-announce, I would assume the development schedule is a more\n> closed thing than general chatter here, BUT is there an target\n> timetable for 7.1 (more that \"when its ready\" and subsequent 7.1.1\n> type releases ?\n>\n> This information would help us decide whether to use an RC as a\n> development platform, moving to release later when we are ready to\n> test final work.\n>\n> Also, would it be possible to announce alpha/beta/RC releases to\n> pgsql-announce ?\n>\n> rgds,\n> --\n> Peter Galbavy\n> Knowledge Matters Ltd\n> http://www.knowledge.com/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 8 Apr 2001 02:07:24 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: release dates and announcements ?"
},
{
"msg_contents": "In article <20010405100604.B12401@office.knowledge.com>, \"Peter Galbavy\"\n<peter.galbavy@knowledge.com> wrote:\n\n> This information would help us decide whether to use an RC as a\n> development platform, moving to release later when we are ready to test\n> final work.\n\nPeter,\n\nFor what it's worth, I've been using 7.1beta? and RC? in\ndevelopment for some time. beta5 -> beta6 required an\ninitdb, but the since then I've just compiled the new\nversion and installed it. No problems, and the feature\nset is worth it for me.\n\nGordon Runkle\nIntegrated Dynamics, Inc.\n-- \nIt doesn't get any easier, you just go faster.\n -- Greg LeMond\n",
"msg_date": "Mon, 09 Apr 2001 20:39:32 -0400",
"msg_from": "\"Gordon Runkle\" <gar@integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: release dates and announcements ?"
}
] |
[
{
"msg_contents": "I will be on the road for the next two weeks. If something need to be done\nwith ecpg please go ahead and make the changes.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 5 Apr 2001 14:19:51 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "On the road"
}
] |
[
{
"msg_contents": "\n> > 1. Under \"RELIABILITY/MISC\", add:\n> > \n> > Write out a CRC with each data block, and verify it on reading.\n\n> TODO updated. I know we did number 2, but did we agree on #1 and is it\n> done?\n\nHas anybody done performance and reliability tests with CRC64 ? \nI think it must be a CPU eater. It looks a lot more complex than a CRC32.\n\nSince we need to guard a maximum of 32k bytes for pg pages I would - if at all -\nconsider to use a 32bit adler instead of a CRC, since that is a lot cheaper\nto calculate. \n\nAndreas\n",
"msg_date": "Thu, 5 Apr 2001 17:45:46 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: TODO list"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Has anybody done performance and reliability tests with CRC64 ? \n> I think it must be a CPU eater. It looks a lot more complex than a CRC32.\n\nOn my box (PA-RISC) the inner loop is about 14 cycles/byte, vs. about\n7 cycles/byte for CRC32. On almost any machine, either one will be\nnegligible in comparison to the cost of disk I/O.\n\n> Since we need to guard a maximum of 32k bytes for pg pages I would -\n> if at all - consider to use a 32bit adler instead of a CRC, since that\n> is a lot cheaper to calculate.\n\nYou are several months too late to re-open that argument. It's done and\nit's not changing for 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 13:05:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: TODO list "
}
] |
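Both checksums Andreas compares are available in zlib, so their behavior over a worst-case 32K page is easy to poke at. This sketch uses CRC-32 rather than the CRC-64 actually committed (Python's stdlib has no CRC-64), and also shows the incremental form, since WAL-style code typically checksums data as it is assembled:

```python
import zlib

page = bytes(range(256)) * 128   # 32 KB, the maximum page size mentioned above

crc = zlib.crc32(page)           # CRC-32: table-driven, a few cycles per byte
adler = zlib.adler32(page)       # Adler-32: two running sums, cheaper per byte

# Both are incremental: feeding the page in two halves, passing the running
# value back in, yields the same result as one pass over the whole page.
crc_inc = zlib.crc32(page[16384:], zlib.crc32(page[:16384]))
print(crc == crc_inc)  # True
```

Tom's measured figures (roughly 7 vs 14 cycles/byte for CRC32 vs CRC64 on PA-RISC) are consistent with the table-driven inner loop doing fixed per-byte work either way; as he notes, both vanish next to disk I/O cost.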
[
{
"msg_contents": "A friend of mine (Matthew Green) mentioned that 7.1RC2 had NetBSD/powerpc\ndown as unttested, and asked me to test it. So here are the results:\n\nOn my NetBSD/macppc system running NetBSD 1.5, gmake check reported that 2\nof 62 tests failed. I've attached regression.diff to this message.\n\nThe two that failed were geometry and horology.\n\n From looking at the output, the horology test seemed to get confused about\ndaylight savings time.\n\nThe geometry test seems to be a difference in the last digit in two of the\nnumbers:\n\nExpect:\t\t\tGot:\n-1.33012701887967\t-1.33012701887966\n0.500000000081028\t0.500000000081027\n\nTake care,\n\nBill",
"msg_date": "Thu, 5 Apr 2001 10:33:39 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": true,
"msg_subject": "Test results for postgresql-7.1RC2 on NetBSD/macppc 1.5"
}
] |
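The geometry differences Bill reports are the classic platform-libm rounding story: the two printed values disagree only in the 15th significant digit, a relative difference down at double-precision rounding noise. A quick check, using the pair quoted in the report:

```python
expected = 0.500000000081028   # value from the expected regression output
got = 0.500000000081027        # value produced on NetBSD/macppc 1.5

rel_diff = abs(expected - got) / expected
print(rel_diff < 1e-14)  # True: within a few ulps of a 64-bit double,
# i.e. the kind of discrepancy different libm implementations produce
# for the same trig/sqrt computation.
```

Differences of this magnitude are why the regression suite ships platform-specific "expected" variants for the geometry test rather than demanding bit-identical output.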
[
{
"msg_contents": "> If the reason that a block CRC isn't on the TODO list is that Vadim\n> objects, maybe we should hear some reasons why he objects? Maybe \n> the objections could be dealt with, and everyone satisfied.\n\nUnordered disk writes are covered by backing up modified blocks\nin log. It allows not only catch such writes, as would CRC do,\nbut *avoid* them.\n\nSo, for what CRC could be used? To catch disk damages?\nDisk has its own CRC for this.\n\nVadim\n",
"msg_date": "Thu, 5 Apr 2001 14:27:48 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: TODO list"
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 02:27:48PM -0700, Mikheev, Vadim wrote:\n> > If the reason that a block CRC isn't on the TODO list is that Vadim\n> > objects, maybe we should hear some reasons why he objects? Maybe \n> > the objections could be dealt with, and everyone satisfied.\n> \n> Unordered disk writes are covered by backing up modified blocks\n> in log. It allows not only catch such writes, as would CRC do,\n> but *avoid* them.\n> \n> So, for what CRC could be used? To catch disk damages?\n> Disk has its own CRC for this.\n\nOK, this was already discussed, maybe while Vadim was absent. \nShould I re-post the previous text?\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 5 Apr 2001 14:38:34 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> If the reason that a block CRC isn't on the TODO list is that Vadim\n>> objects, maybe we should hear some reasons why he objects? Maybe \n>> the objections could be dealt with, and everyone satisfied.\n\n> Unordered disk writes are covered by backing up modified blocks\n> in log. It allows not only catch such writes, as would CRC do,\n> but *avoid* them.\n\n> So, for what CRC could be used? To catch disk damages?\n> Disk has its own CRC for this.\n\nOh, I see. For anyone else who has trouble reading between the lines:\n\nBlocks that have recently been written, but failed to make it down to\nthe disk platter intact, should be restorable from the WAL log. So we\ndo not need a block-level CRC to guard against partial writes.\n\nA block-level CRC might be useful to guard against long-term data\nlossage, but Vadim thinks that the disk's own CRCs ought to be\nsufficient for that (and I can't say I disagree).\n\nSo the only real benefit of a block-level CRC would be to guard against\nbits dropped in transit from the disk surface to someplace else, ie,\nduring read or during a \"cp -r\" type copy of the database to another\nlocation. That's not a totally negligible risk, but is it worth the\noverhead of updating and checking block CRCs? Seems dubious at best.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 18:25:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
},
{
"msg_contents": "> > So, for what CRC could be used? To catch disk damages?\n> > Disk has its own CRC for this.\n> \n> Oh, I see. For anyone else who has trouble reading between the lines:\n> \n> Blocks that have recently been written, but failed to make it down to\n> the disk platter intact, should be restorable from the WAL log. So we\n> do not need a block-level CRC to guard against partial writes.\n> \n> A block-level CRC might be useful to guard against long-term data\n> lossage, but Vadim thinks that the disk's own CRCs ought to be\n> sufficient for that (and I can't say I disagree).\n> \n> So the only real benefit of a block-level CRC would be to guard against\n> bits dropped in transit from the disk surface to someplace else, ie,\n> during read or during a \"cp -r\" type copy of the database to another\n> location. That's not a totally negligible risk, but is it worth the\n> overhead of updating and checking block CRCs? Seems dubious at best.\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 5 Apr 2001 20:27:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 06:25:17PM -0400, Tom Lane wrote:\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> If the reason that a block CRC isn't on the TODO list is that Vadim\n> >> objects, maybe we should hear some reasons why he objects? Maybe \n> >> the objections could be dealt with, and everyone satisfied.\n> \n> > Unordered disk writes are covered by backing up modified blocks\n> > in log. It allows not only catch such writes, as would CRC do,\n> > but *avoid* them.\n> \n> > So, for what CRC could be used? To catch disk damages?\n> > Disk has its own CRC for this.\n> \n> Blocks that have recently been written, but failed to make it down to\n> the disk platter intact, should be restorable from the WAL log. So we\n> do not need a block-level CRC to guard against partial writes.\n\nIf a block is missing some sectors in the middle, how would you know\nto reconstruct it from the WAL, without a block CRC telling you that\nthe block is corrupt?\n\n \n> A block-level CRC might be useful to guard against long-term data\n> lossage, but Vadim thinks that the disk's own CRCs ought to be\n> sufficient for that (and I can't say I disagree).\n\nThe people who make the disks don't agree. \n\nThey publish the error rate they guarantee, and they meet it, more \nor less. They publish a rate that is _just_ low enough to satisfy \nnoncritical requirements (on the correct assumption that they can't \nsatisfy critical requirements in any case) and high enough not to \ninterfere with benchmarks. 
They assume that if you need better \nreliability you can and will provide it yourself, and rely on their \nCRC only as a performance optimization.\n\nAt the raw sector level, they get (and correct) errors very frequently; \nwhen they are not getting \"enough\" errors, they pack the bits more \ndensely until they do, and sell a higher-density drive.\n\n> So the only real benefit of a block-level CRC would be to guard against\n> bits dropped in transit from the disk surface to someplace else, ie,\n> during read or during a \"cp -r\" type copy of the database to another\n> location. That's not a totally negligible risk, but is it worth the\n> overhead of updating and checking block CRCs? Seems dubious at best.\n\nVadim didn't want to re-open this discussion until after 7.1 is out\nthe door, but that \"dubious at best\" demands an answer. See the archive \nposting:\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2001-01/msg00473.html\n\n...\n\nIncidentally, is the page at \n\n http://www.postgresql.org/mhonarc/pgsql-hackers/2001-01/\n\nthe best place to find old messages? It's never worked right for me.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 5 Apr 2001 18:39:15 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
},
{
"msg_contents": "At 18:25 5/04/01 -0400, Tom Lane wrote:\n>\n>A block-level CRC might be useful to guard against long-term data\n>lossage, but Vadim thinks that the disk's own CRCs ought to be\n>sufficient for that (and I can't say I disagree).\n>\n>So the only real benefit of a block-level CRC would be to guard against\n>bits dropped in transit from the disk surface to someplace else\n\nWhat about guarding against file system problems, like blocks of one\n(non-PG) file erroneously writing to blocks of another (PG table) file?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 06 Apr 2001 12:07:07 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> So the only real benefit of a block-level CRC would be to guard against\n>> bits dropped in transit from the disk surface to someplace else\n\n> What about guarding against file system problems, like blocks of one\n> (non-PG) file erroneously writing to blocks of another (PG table) file?\n\nWell, what about it? Can you offer numbers demonstrating that this risk\nis probable enough to justify the effort and runtime cost of a block\nCRC?\n\nIf we're in the business of expending cycles to guard against\nnil-probability risks, let's checksum our executables every time we\nstart up, to make sure they're not overwritten. Actually, we'd better\nre-checksum program text memory every few seconds, in case RAM dropped\na bit since we looked last. And let's follow every memcpy by a memcmp\nto make sure that didn't drop a bit. Heck, let's keep a CRC on every\npalloc'd memory block. And so on and so forth. Sooner or later you've\ngot to draw the line at diminishing returns, both for runtime costs\nand for the programming effort you spent on this stuff (instead of on\nfinding/fixing bugs that might bite you with far greater frequency than\nanything a CRC might catch for you).\n\nTo be perfectly clear: I have actually seen bug reports trace to\nproblems that I think a block-level CRC might have detected (not\ncorrected, of course, but at least the user might have realized he had\nflaky hardware a little sooner). So I do not say that the upside to\na block CRC is nil. But I am unconvinced that it exceeds the downside,\nin development effort, runtime, false failure reports (is that CRC error\nreally due to hardware trouble, or a software bug that failed to update\nthe CRC? and how do you get around the CRC error to get at your data??)\netc etc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 22:52:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
},
{
"msg_contents": "> If we're in the business of expending cycles to guard against\n> nil-probability risks, let's checksum our executables every time we\n> start up, to make sure they're not overwritten. Actually, we'd\nbetter\n> re-checksum program text memory every few seconds, in case RAM\ndropped\n> a bit since we looked last. And let's follow every memcpy by a\nmemcmp\n> to make sure that didn't drop a bit. Heck, let's keep a CRC on\nevery\n\nWhy does it sound like you have problems with radiation eating away at\nyour live memory for satellite operations?\n\n",
"msg_date": "Thu, 5 Apr 2001 23:03:24 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
},
{
"msg_contents": "At 22:52 5/04/01 -0400, Tom Lane wrote:\n>\n>> What about guarding against file system problems, like blocks of one\n>> (non-PG) file erroneously writing to blocks of another (PG table) file?\n>\n>Well, what about it? Can you offer numbers demonstrating that this risk\n>is probable enough to justify the effort and runtime cost of a block\n>CRC?\n\nRhetorical crap aside, I've had more file system falures (including badly\nmapped file data) than I have had disk hardware failures. So, if you are\nconsidering 'bits dropped in transit', you should also be considering data\ncorruption not related to the hardware.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 06 Apr 2001 14:09:08 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
}
] |
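Nathan's point about sectors lost in the middle of a block is easy to make concrete. The toy page format below is entirely hypothetical (a 4-byte CRC-32 header over an 8K page, nothing like PostgreSQL's real page header), but it shows how a block-level checksum flags a torn write even though every individual sector would still pass the drive's own per-sector CRC:

```python
import zlib

SECTOR = 512   # typical disk sector size
BLCKSZ = 8192  # PostgreSQL's default block size

def make_page(payload: bytes) -> bytes:
    # Hypothetical layout: CRC-32 of the payload, then the payload itself.
    assert len(payload) == BLCKSZ - 4
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def page_ok(page: bytes) -> bool:
    # Recompute the checksum and compare against the stored header value.
    return zlib.crc32(page[4:]) == int.from_bytes(page[:4], "big")

good = make_page(b"\x00" * (BLCKSZ - 4))
# Simulate a torn write: one mid-page sector keeps its old contents. Each
# sector is internally consistent, so only a whole-block check notices.
torn = good[:4 * SECTOR] + b"\xff" * SECTOR + good[5 * SECTOR:]
print(page_ok(good), page_ok(torn))  # True False
```

This is orthogonal to Vadim's point that WAL full-page backups let you *repair* such a block; the checksum only answers the prior question of whether the block needs repairing.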
[
{
"msg_contents": "> > So, for what CRC could be used? To catch disk damages?\n> > Disk has its own CRC for this.\n> \n> OK, this was already discussed, maybe while Vadim was absent. \n> Should I re-post the previous text?\n\nLet's return to this discussion *after* 7.1 release.\nMy main objection was (and is) - no time to deal with\nthis issue for 7.1\n\nVadim\n",
"msg_date": "Thu, 5 Apr 2001 14:47:41 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: TODO list"
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 02:47:41PM -0700, Mikheev, Vadim wrote:\n> > > So, for what CRC could be used? To catch disk damages?\n> > > Disk has its own CRC for this.\n> > \n> > OK, this was already discussed, maybe while Vadim was absent. \n> > Should I re-post the previous text?\n> \n> Let's return to this discussion *after* 7.1 release.\n> My main objection was (and is) - no time to deal with\n> this issue for 7.1.\n\nOK, everybody agreed on that before. \n\nThis doesn't read like an objection to having it on the TODO list for\nsome future release. \n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 5 Apr 2001 15:06:31 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list"
}
] |
[
{
"msg_contents": "Bottom line: 7.1RC1 passes most of the regression tests on \nNetBSD/macppc. It's probably good enough for normal use since the \ndifferences are not extensive, but someone would need to look at the \ndiff's for longer than the 10 seconds or so I've spent so far, and \nsomeone should actually set it up for real use to check that.\n\nI used the vanilla tarball from postgresql.org, not the NetBSD package system.\n\nDetails: It's clearly less clean than the OS X build. Also my G4 \ndesktop runs rings around both a Sun Ultra 1 and the 8500 I have \nNetBSD on for stuff like this build.\n\nI'm actually reevaluating how much I want to keep running real open \nsource OS's vs the partly open MacOS X when both the OS and this \napplication runs so much cleaner on the most recent (fastest) \nhardware. I like Apple stuff, but I never thought I would be this \nimpressed with OS X this quickly. I guess I should shut up now or \nrisk a flame war since the point is the PG port quality and not how \ngood the target platform is.\n\n>% gmake check\n>gmake -C ../../../contrib/spi REFINT_VERBOSE=1 refint.so autoinc.so\n>gmake[1]: Entering directory `/usr/local/dist/postgresql-7.1RC1/contrib/spi'\n>gmake[1]: `refint.so' is up to date.\n>gmake[1]: `autoinc.so' is up to date.\n>gmake[1]: Leaving directory `/usr/local/dist/postgresql-7.1RC1/contrib/spi'\n>/bin/sh ./pg_regress --temp-install --top-builddir=../../.. 
\n>--schedule=./parallel_schedule --multibyte=\n>============== removing existing temp installation ==============\n>============== creating temporary installation ==============\n>============== initializing database system ==============\n>============== starting postmaster ==============\n>running on port 65432 with pid 21643\n>============== creating database \"regression\" ==============\n>CREATE DATABASE\n>============== installing PL/pgSQL ==============\n>============== running regression test queries ==============\n>parallel group (13 tests): text float4 varchar oid int2 char \n>boolean int4 int8 name float8 bit numeric\n> boolean ... ok\n> char ... ok\n> name ... ok\n> varchar ... ok\n> text ... ok\n> int2 ... ok\n> int4 ... ok\n> int8 ... ok\n> oid ... ok\n> float4 ... ok\n> float8 ... ok\n> bit ... ok\n> numeric ... ok\n>test strings ... ok\n>test numerology ... ok\n>parallel group (18 tests): path interval date time circle reltime \n>box lseg abstime inet point comments tinterval polygon timestamp \n>type_sanity oidjoins opr_sanity\n> point ... ok\n> lseg ... ok\n> box ... ok\n> path ... ok\n> polygon ... ok\n> circle ... ok\n> date ... ok\n> time ... ok\n> timestamp ... ok\n> interval ... ok\n> abstime ... ok\n> reltime ... ok\n> tinterval ... ok\n> inet ... ok\n> comments ... ok\n> oidjoins ... ok\n> type_sanity ... ok\n> opr_sanity ... ok\n>test geometry ... FAILED\n>test horology ... FAILED\n>test create_function_1 ... ok\n>test create_type ... ok\n>test create_table ... ok\n>test create_function_2 ... ok\n>test copy ... ok\n>parallel group (7 tests): create_operator create_aggregate inherit \n>triggers constraints create_misc create_index\n> constraints ... ok\n> triggers ... ok\n> create_misc ... ok\n> create_aggregate ... ok\n> create_operator ... ok\n> create_index ... ok\n> inherit ... ok\n>test create_view ... ok\n>test sanity_check ... ok\n>test errors ... ok\n>test select ... 
ok\n>parallel group (16 tests): arrays select_having select_distinct \n>transactions random portals select_into union select_distinct_on \n>select_implicit case subselect aggregates btree_index join hash_index\n> select_into ... ok\n> select_distinct ... ok\n> select_distinct_on ... ok\n> select_implicit ... ok\n> select_having ... ok\n> subselect ... ok\n> union ... ok\n> case ... ok\n> join ... ok\n> aggregates ... ok\n> transactions ... ok\n> random ... ok\n> portals ... ok\n> arrays ... ok\n> btree_index ... ok\n> hash_index ... ok\n>test misc ... ok\n>parallel group (5 tests): portals_p2 alter_table foreign_key \n>select_views rules\n> select_views ... ok\n> alter_table ... ok\n> portals_p2 ... ok\n> rules ... ok\n> foreign_key ... ok\n>parallel group (3 tests): temp limit plpgsql\n> limit ... ok\n> plpgsql ... ok\n> temp ... ok\n>============== shutting down postmaster ==============\n>\n>=======================\n> 2 of 76 tests failed.\n>=======================\n>\n>The differences that caused some tests to fail can be viewed in the\n>file `./regression.diffs'. 
A copy of the test summary that you see\n>above is saved in the file `./regression.out'.\n\nThe diff is:\n>*** ./expected/geometry-positive-zeros.out Tue Sep 12 14:07:16 2000\n>--- ./results/geometry.out Thu Apr 5 08:29:58 2001\n>***************\n>*** 445,451 ****\n> \n>-----+---------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>-------------------------------------\n> | \n>((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.5980 \n>7621135842),(1.53102359017709e-11,3),(1.50000000001768,2.5980762113431 \n>1),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.598 \n>07621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(- \n>4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807 \n>621138138,-1.49999999995138))\n> | \n>((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.60 \n>2540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036) \n>,(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.602540 \n>3778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999 \n>999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.602540379 \n>3795,-47.9999999983795))\n>! 
| \n>((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301 \n>270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5 \n>.33012701894346,5.49999999996317),(6,2.99999999994897),(5.330127018892 \n>42,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.99999999 \n>9923449,-2),(-1.50000000007366,-1.33012701887967),(-3.33012701896897,0 \n>.500000000081028))\n> | \n>((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.598 \n>07621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311), \n>(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133 \n>545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999 \n>999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.598076211381 \n>38,0.500000000048616))\n> | \n>((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.66 \n>0254037861),(100.000000000051,210),(105.000000000059,208.66025403781), \n>(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254 \n>037785,194.999999999897),(104.999999999882,191.339745962088),(99.99999 \n>99998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,19 \n>5.000000000162))\n> | \n>((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540 \n>378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186 \n>.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.60254 \n>0377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.99 \n>9999998469,-100),(49.9999999985268,-86.6025403775933),(13.397459620620 \n>5,-49.9999999983795))\n>--- 445,451 ----\n> \n>-----+---------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- \n>---------------------------------------------------------------------- 
\n>-------------------------------------\n> | \n>((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.5980 \n>7621135842),(1.53102359017709e-11,3),(1.50000000001768,2.5980762113431 \n>1),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.598 \n>07621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(- \n>4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807 \n>621138138,-1.49999999995138))\n> | \n>((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.60 \n>2540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036) \n>,(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.602540 \n>3778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999 \n>999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.602540379 \n>3795,-47.9999999983795))\n>! | \n>((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301 \n>270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5 \n>.33012701894346,5.49999999996317),(6,2.99999999994897),(5.330127018892 \n>42,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.99999999 \n>9923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0 \n>.500000000081027))\n> | \n>((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.598 \n>07621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311), \n>(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133 \n>545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999 \n>999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.598076211381 \n>38,0.500000000048616))\n> | \n>((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.66 \n>0254037861),(100.000000000051,210),(105.000000000059,208.66025403781), \n>(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254 \n>037785,194.999999999897),(104.999999999882,191.339745962088),(99.99999 
\n>99998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,19 \n>5.000000000162))\n> | \n>((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540 \n>378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186 \n>.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.60254 \n>0377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.99 \n>9999998469,-100),(49.9999999985268,-86.6025403775933),(13.397459620620 \n>5,-49.9999999983795))\n>\n>======================================================================\n>\n>*** ./expected/horology.out Sun Dec 3 06:51:11 2000\n>--- ./results/horology.out Thu Apr 5 08:30:01 2001\n>***************\n>*** 122,128 ****\n> SELECT time with time zone '01:30' + interval '02:01' AS \"03:31:00-08\";\n> 03:31:00-08\n> -------------\n>! 03:31:00-08\n> (1 row)\n>\n> SELECT time with time zone '01:30-08' - interval '02:01' AS \"23:29:00-08\";\n>--- 122,128 ----\n> SELECT time with time zone '01:30' + interval '02:01' AS \"03:31:00-08\";\n> 03:31:00-08\n> -------------\n>! 03:31:00-07\n> (1 row)\n>\n> SELECT time with time zone '01:30-08' - interval '02:01' AS \"23:29:00-08\";\n>***************\n>*** 140,146 ****\n> SELECT time with time zone '03:30' + interval '1 month 04:01' AS \n>\"07:31:00-08\";\n> 07:31:00-08\n> -------------\n>! 07:31:00-08\n> (1 row)\n>\n> SELECT interval '04:30' - time with time zone '01:02' AS \"+03:28\";\n>--- 140,146 ----\n> SELECT time with time zone '03:30' + interval '1 month 04:01' AS \n>\"07:31:00-08\";\n> 07:31:00-08\n> -------------\n>! 07:31:00-07\n> (1 row)\n>\n> SELECT interval '04:30' - time with time zone '01:02' AS \"+03:28\";\n>\n>======================================================================\n>\n\n\nDuring the build I got these ugly, but presumably harmless complaints:\n\n>gmake[4]: Entering directory \n>`/usr/local/dist/postgresql-7.1RC1/src/pl/plpgsql/src'\n>gcc -c -I. 
-I../../../../src/include -O2 -pipe -Wall \n>-Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -o \n>pl_parse.o pl_gram.c\n>lex.plpgsql_yy.c: In function `plpgsql_yylex':\n>lex.plpgsql_yy.c:971: warning: label `find_rule' defined but not used\n>lex.plpgsql_yy.c: At top level:\n>lex.plpgsql_yy.c:2223: warning: `plpgsql_yy_flex_realloc' defined but not used\n\nin a few places, and\n\n>gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations \n>-I../../../src/interfaces/libpq -I../../../src/include -c -o \n>tab-complete.o tab-complete.c\n>tab-complete.c: In function `initialize_readline':\n>tab-complete.c:103: warning: assignment from incompatible pointer type\n>tab-complete.c: In function `psql_completion':\n>tab-complete.c:292: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:296: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:301: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:309: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:320: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:325: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:332: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:337: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:342: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:347: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:350: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:366: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:371: warning: 
passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:378: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:381: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:392: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:400: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:406: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:410: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:413: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:420: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:423: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:429: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:435: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:440: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:448: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:455: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:460: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:465: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:473: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:478: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:490: warning: passing arg 2 of `completion_matches' 
\n>from incompatible pointer type\n>tab-complete.c:493: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:496: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:506: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:514: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:521: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:532: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:541: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:545: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:553: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:556: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:559: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:569: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:572: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:578: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:582: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:587: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:592: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:599: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:604: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer 
type\n>tab-complete.c:606: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:608: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:619: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:622: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:626: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:634: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:640: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:646: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:651: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:660: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:666: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:672: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:678: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:682: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:687: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:690: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:698: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:702: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:704: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:709: warning: 
passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:714: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:716: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:718: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:725: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:749: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>tab-complete.c:763: warning: passing arg 2 of `completion_matches' \n>from incompatible pointer type\n>gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations \n>command.o common.o help.o input.o stringutils.o mainloop.o copy.o \n>startup.o prompt.o variables.o large_obj.o print.o describe.o \n>tab-complete.o -L../../../src/interfaces/libpq -lpq \n>-Wl,-R/usr/local/lib -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit \n>-ltermcap -o psql\n>gmake[3]: Leaving directory `/usr/local/dist/postgresql-7.1RC1/src/bin/psql'\n\nGreat work as always. It's been nice to see the progressive \nimprovement in the portability and quality of the DBMS over the years.\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\nh.b.hotz@jpl.nasa.gov, or hbhotz@oxy.edu\n",
"msg_date": "Thu, 5 Apr 2001 16:15:20 -0700",
"msg_from": "\"Henry B. Hotz\" <hotz@jpl.nasa.gov>",
"msg_from_op": true,
"msg_subject": "Re: Call for platforms"
},
{
"msg_contents": "> Bottom line: 7.1RC1 passes most of the regression tests on\n> NetBSD/macppc. It's probably good enough for normal use since the\n> differences are not extensive, but someone would need to look at the\n> diff's for longer than the 10 seconds or so I've spent so far, and\n> someone should actually set it up for real use to check that.\n\nI'll mark it as supported; the horology diffs are not significant and\ngeometry is known to be a bit different on some platforms.\n\nIncluding the not-tested-for-7.1 NetBSD/m68k, we are supported on 30\nplatforms for the upcoming release, with definite potential for a couple\nmore (QNX and Ultrix).\n\n*That* is some sort of milestone! :))\n\n - Thomas\n",
"msg_date": "Fri, 06 Apr 2001 05:29:28 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Call for platforms"
},
{
"msg_contents": "\"Henry B. Hotz\" <hotz@jpl.nasa.gov> writes:\n> Bottom line: 7.1RC1 passes most of the regression tests on \n> NetBSD/macppc.\n\nThe only thing that surprised me here was all of the warnings from\nlibreadline calls:\n\n>> tab-complete.c: In function `initialize_readline':\n>> tab-complete.c:103: warning: assignment from incompatible pointer type\n>> tab-complete.c: In function `psql_completion':\n>> tab-complete.c:292: warning: passing arg 2 of `completion_matches' \n>> from incompatible pointer type\n>> tab-complete.c:296: warning: passing arg 2 of `completion_matches' \n>> from incompatible pointer type\n\nWhat version of libreadline do you have installed, and how does it\ndeclare completion_matches()?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 01:50:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Call for platforms "
},
{
"msg_contents": "\nIf something happens this weekend, I *MAY* have an HP9000/433s (M68K) \nrunning NetBSD to play with....\n\nNot enough to hold up the release, but...\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 4/6/01, 12:29:28 AM, Thomas Lockhart <lockhart@alumni.caltech.edu> wrote \nregarding [HACKERS] Re: Call for platforms:\n\n\n> > Bottom line: 7.1RC1 passes most of the regression tests on\n> > NetBSD/macppc. It's probably good enough for normal use since the\n> > differences are not extensive, but someone would need to look at the\n> > diff's for longer than the 10 seconds or so I've spent so far, and\n> > someone should actually set it up for real use to check that.\n\n> I'll mark it as supported; the horology diffs are not significant and\n> geometry is known to be a bit different on some platforms.\n\n> Including the not-tested-for-7.1 NetBSD/m68k, we are supported on 30\n> platforms for the upcoming release, with definite potential for a couple\n> more (QNX and Ultrix).\n\n> *That* is some sort of milestone! :))\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Fri, 06 Apr 2001 19:10:21 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: Call for platforms"
},
{
"msg_contents": "> If somethings happen this weekend, I *MAY* have a HP9000/433s (M68K)\n> running NetBSD to play with....\n\nThat would be great. I *know* that there are some m68k machines around\nsomewhere on this planet, and it would be a shame to not have NetBSD\ntested for the release...\n\n - Thomas\n",
"msg_date": "Fri, 06 Apr 2001 20:37:06 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: Call for platforms"
},
{
"msg_contents": "At 1:50 AM -0400 4/6/01, Tom Lane wrote:\n>\"Henry B. Hotz\" <hotz@jpl.nasa.gov> writes:\n> > Bottom line: 7.1RC1 passes most of the regression tests on\n> > NetBSD/macppc.\n>\n>The only thing that surprised me here was all of the warnings from\n>libreadline calls:\n>\n> >> tab-complete.c: In function `initialize_readline':\n> >> tab-complete.c:103: warning: assignment from incompatible pointer type\n> >> tab-complete.c: In function `psql_completion':\n> >> tab-complete.c:292: warning: passing arg 2 of `completion_matches'\n> >> from incompatible pointer type\n> >> tab-complete.c:296: warning: passing arg 2 of `completion_matches'\n> >> from incompatible pointer type\n>\n>What version of libreadline do you have installed, and how does it\n>declare completion_matches()?\n\nI have whatever is standard on NetBSD 1.5. I noticed that configure \nfound a readline.h include file, but NetBSD doesn't integrate the \ncurrent GNU implementation. I did not do a test of psql to see if \nthe feature worked.\n\nI'm sure you could \"fix\" this problem if you installed GNU readline \nand referenced it in the build. Since Solaris had even worse issues \nwith needing GNU support utilities installed this didn't seem like a \nbig deal to me. OTOH it could confuse a new user.\n\n\nSignature held pending an ISO 9000 compliant\nsignature design and approval process.\nh.b.hotz@jpl.nasa.gov, or hbhotz@oxy.edu\n",
"msg_date": "Mon, 9 Apr 2001 11:41:55 -0700",
"msg_from": "\"Henry B. Hotz\" <hotz@jpl.nasa.gov>",
"msg_from_op": true,
"msg_subject": "Re: Re: Call for platforms"
},
{
"msg_contents": "\n> At 1:50 AM -0400 4/6/01, Tom Lane wrote:\n> >\"Henry B. Hotz\" <hotz@jpl.nasa.gov> writes:\n> > > Bottom line: 7.1RC1 passes most of the regression tests on\n> > > NetBSD/macppc.\n> >\n> >The only thing that surprised me here was all of the warnings from\n> >libreadline calls:\n> >\n> > >> tab-complete.c: In function `initialize_readline':\n> > >> tab-complete.c:103: warning: assignment from incompatible pointer type\n> > >> tab-complete.c: In function `psql_completion':\n> > >> tab-complete.c:292: warning: passing arg 2 of `completion_matches'\n> > >> from incompatible pointer type\n> > >> tab-complete.c:296: warning: passing arg 2 of `completion_matches'\n> > >> from incompatible pointer type\n> >\n> >What version of libreadline do you have installed, and how does it\n> >declare completion_matches()?\n\n$ uname -srm\nNetBSD 1.5 i386\n$ grep CPFunction /usr/include/readline.h\ntypedef char *CPFunction __P((const char *, int));\nextern CPFunction *rl_completion_entry_function;\nchar **completion_matches __P((const char *, CPFunction *));\n\nPutting the 'const' in the relevant PostgreSQL functions (diff against\n7.1RC3 below) removes these warnings. I don't know what that does on\na machine using GNU readline ... I can check that in a day or two if\nanyone's interested.\n\nThe NetBSD libedit-emulating-readline works just fine with psql even\nwithout the warnings fixed -- they're harmless in this case.\n\nRegards,\n\nGiles\n\n*** src/bin/psql/tab-complete.c-orig\tMon Apr 2 05:17:32 2001\n--- src/bin/psql/tab-complete.c\tTue Apr 10 19:51:21 2001\n***************\n*** 70,80 ****\n \n \n /* Forward declaration of functions */\n! static char **psql_completion(char *text, int start, int end);\n! static char *create_command_generator(char *text, int state);\n! static char *complete_from_query(char *text, int state);\n! static char *complete_from_const(char *text, int state);\n! 
static char *complete_from_list(char *text, int state);\n \n static PGresult *exec_query(char *query);\n char\t *quote_file_name(char *text, int match_type, char *quote_pointer);\n--- 70,80 ----\n \n \n /* Forward declaration of functions */\n! static char **psql_completion(const char *text, int start, int end);\n! static char *create_command_generator(const char *text, int state);\n! static char *complete_from_query(const char *text, int state);\n! static char *complete_from_const(const char *text, int state);\n! static char *complete_from_list(char const *text, int state);\n \n static PGresult *exec_query(char *query);\n char\t *quote_file_name(char *text, int match_type, char *quote_pointer);\n***************\n*** 177,183 ****\n libraries completion_matches() function, so we don't have to worry about it.\n */\n static char **\n! psql_completion(char *text, int start, int end)\n {\n \t/* This is the variable we'll return. */\n \tchar\t **matches = NULL;\n--- 177,183 ----\n libraries completion_matches() function, so we don't have to worry about it.\n */\n static char **\n! psql_completion(const char *text, int start, int end)\n {\n \t/* This is the variable we'll return. */\n \tchar\t **matches = NULL;\n***************\n*** 796,802 ****\n as defined above.\n */\n static char *\n! create_command_generator(char *text, int state)\n {\n \tstatic int\tlist_index,\n \t\t\t\tstring_length;\n--- 796,802 ----\n as defined above.\n */\n static char *\n! create_command_generator(const char *text, int state)\n {\n \tstatic int\tlist_index,\n \t\t\t\tstring_length;\n***************\n*** 829,835 ****\n etc.\n */\n static char *\n! complete_from_query(char *text, int state)\n {\n \tstatic int\tlist_index,\n \t\t\t\tstring_length;\n--- 829,835 ----\n etc.\n */\n static char *\n! 
complete_from_query(const char *text, int state)\n {\n \tstatic int\tlist_index,\n \t\t\t\tstring_length;\n***************\n*** 877,883 ****\n SQL words that can appear at certain spot.\n */\n static char *\n! complete_from_list(char *text, int state)\n {\n \tstatic int\tstring_length,\n \t\t\t\tlist_index;\n--- 877,883 ----\n SQL words that can appear at certain spot.\n */\n static char *\n! complete_from_list(const char *text, int state)\n {\n \tstatic int\tstring_length,\n \t\t\t\tlist_index;\n***************\n*** 911,917 ****\n The string to be passed must be in completion_charp.\n */\n static char *\n! complete_from_const(char *text, int state)\n {\n \t(void) text;\t\t\t\t/* We don't care about what was entered\n \t\t\t\t\t\t\t\t * already. */\n--- 911,917 ----\n The string to be passed must be in completion_charp.\n */\n static char *\n! complete_from_const(const char *text, int state)\n {\n \t(void) text;\t\t\t\t/* We don't care about what was entered\n \t\t\t\t\t\t\t\t * already. */\n\n\n",
"msg_date": "Tue, 10 Apr 2001 19:52:57 +1000",
"msg_from": "Giles Lean <giles@nemeton.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: Call for platforms "
},
{
"msg_contents": "On Mon, Apr 09, 2001 at 11:41:55AM -0700, Henry B. Hotz wrote:\n> At 1:50 AM -0400 4/6/01, Tom Lane wrote:\n...\n> >What version of libreadline do you have installed, and how does it\n> >declare completion_matches()?\n> \n> I have whatever is standard on NetBSD 1.5. I noticed that configure \n> found a readline.h include file, but NetBSD doesn't integrate the \n> current GNU implementation. I did not do a test of psql to see if \n> the feature worked.\n> \n> I'm sure you could \"fix\" this problem if you installed GNU readline \n> and referenced it in the build. Since Solaris had even worse issues \n> with needing GNU support utilities installed this didn't seem like a \n> big deal to me. OTOH it could confuse a new user.\n\nOdd: I am using the standard NetBSD readline found in -ledit and it is\nfine.. Can it be a -1.5 vs -current difference?\n\n\nI have just stumbled across something which is broken though:\n\nNetBSD-1.5S/arm32:\n% ldd `which psql`\n/usr/local/pgsql/bin/psql:\n -lpq.2 => /usr/local/pgsql/lib/libpq.so.2.1 (0x2003b000)\n -lz.0 => /usr/lib/libz.so.0.2 (0x20048000)\n -lcrypt.0 => /usr/lib/libcrypt.so.0.0 (0x20056000)\n -lresolv.1 => /usr/lib/libresolv.so.1.0 (0x2005c000)\n -lm.0 => /usr/lib/libm.so.0.1 (0x20065000)\n -lutil.5 => /usr/lib/libutil.so.5.5 (0x2008b000)\n -ledit.2 => /usr/lib/libedit.so.2.5 (0x20096000)\n -lc.12 => /usr/lib/libc.so.12.74 (0x200ae000)\n\nNetBSD-1.5U/i386:\n% ldd `which psql`\n/usr/local/pgsql/bin/psql:\n -lcrypt.0 => /usr/lib/libcrypt.so.0\n -lresolv.1 => /usr/lib/libresolv.so.1\n -lpq.2 => /usr/local/pgsql/lib/libpq.so.2\n -lz.0 => /usr/lib/libz.so.0\n -lm.0 => /usr/lib/libm387.so.0\n -lm.0 => /usr/lib/libm.so.0\n -lutil.5 => /usr/lib/libutil.so.5\n -ledit.2 => /usr/lib/libedit.so.2\n -ltermcap.0 => /usr/lib/libtermcap.so.0\n -lc.12 => /usr/lib/libc.so.12\n\n-ltermcap is missing from arm32 - it's necessary if libedit is going to\nfind _tgetent..\n\nInvestigating now..\n\nCheers,\n\nPatrick\n",
"msg_date": "Sun, 22 Apr 2001 19:17:28 +0100",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: Re: Call for platforms"
}
],
[
{
"msg_contents": "CREATE TABLE junk (\n col SERIAL PRIMARY KEY\n);\n\nINSERT INTO junk (col) DEFAULT VALUES;\n\nINSERT INTO junk DEFAULT VALUES;\n\n\nSecond insert works, first one fails.\n\nINSERT INTO table [ ( column [, ...] ) ]\n { DEFAULT VALUES | VALUES ( expression [, ...] ) | SELECT query }\n\n\nThe column list should just be ignored, correct?\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.",
"msg_date": "Thu, 5 Apr 2001 19:16:49 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": true,
"msg_subject": "INSERT Issues"
},
{
"msg_contents": "\"Rod Taylor\" <rod.taylor@inquent.com> writes:\n> INSERT INTO table [ ( column [, ...] ) ]\n> { DEFAULT VALUES | VALUES ( expression [, ...] ) | SELECT query }\n\nThe documentation is wrong here, not the code. SQL92 defines the syntax\nas\n\n <insert statement> ::=\n INSERT INTO <table name> <insert columns and source>\n\n <insert columns and source> ::=\n [ <left paren> <insert column list> <right paren> ] <query expression>\n | DEFAULT VALUES\n\n <insert column list> ::= <column name list>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 10:53:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: INSERT Issues "
},
{
"msg_contents": "On Thu, Apr 05, 2001 at 07:16:49PM -0400, Rod Taylor wrote:\n> CREATE TABLE junk (\n> col SERIAL PRIMARY KEY\n> );\n> \n> INSERT INTO junk (col) DEFAULT VALUES;\n> \n> INSERT INTO junk DEFAULT VALUES:\n> \n> \n> Second insert works, first one fails.\n> \n> INSERT INTO table [ ( column [, ...] ) ]\n> { DEFAULT VALUES | VALUES ( expression [, ...] ) | SELECT query }\n> \n> \n> The column list should just be ignored correct?\n> \n\nHmm, the BNF from SQL1992 actually is:\n\n\n <insert statement> ::=\n INSERT INTO <table name>\n <insert columns and source>\n \n \n <insert columns and source> ::=\n [ <left paren> <insert column list> <right paren> ]\n <query expression>\n | DEFAULT VALUES\n \n <insert column list> ::= <column name list>\n\nSo the grammar is right to reject your first example.\n\nAccording to the rules for <insert statement>:\n\n 2) An <insert columns and source> that specifies DEFAULT VALUES is\n equivalent to an <insert columns and source> that specifies a\n <query expression> of the form\n\n VALUES (DEFAULT, . . . )\n\n where the number of \"DEFAULT\" entries is equal to the number of\n columns of T.\n\nSo the proper spelling of your first version is:\n\n INSERT INTO junk (col) VALUES (DEFAULT);\n\nDoes that work for you?\n\nRoss\n",
"msg_date": "Fri, 6 Apr 2001 09:54:25 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: INSERT Issues"
},
{
"msg_contents": "create table junk (col SERIAL);\n\nINSERT INTO junk (col) VALUES (DEFAULT);\nERROR: parser: parse error at or near \"DEFAULT\";\n\n> INSERT INTO junk (col) VALUES (DEFAULT);\n> \n> Does that work for you?\n> \n> Ross\n> \n\n",
"msg_date": "Fri, 6 Apr 2001 11:17:56 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": true,
"msg_subject": "Re: INSERT Issues"
}
],
[
{
"msg_contents": "Found the issue. Try out the attached SQL in a fresh database.\n\nI had honestly expected the second delete to work properly as nothing\nhad to be removed from that table.\n\nThe rule was added as a temporary measure to protect the data\ncurrently in the table -- without the intent of otherwise impeding the\nother information's use. I suppose I forgot that the table wouldn't be\nlooked at as the rule is checked quite early.\n\n\n\nCREATE TABLE junk_parent (\n col SERIAL PRIMARY KEY\n);\n\nINSERT INTO junk_parent DEFAULT VALUES;\nINSERT INTO junk_parent DEFAULT VALUES;\nINSERT INTO junk_parent DEFAULT VALUES;\n\nCREATE TABLE junk (\n col int4 NOT NULL REFERENCES junk_parent(col) ON UPDATE CASCADE ON\nDELETE CASCADE\n);\n\nINSERT INTO junk VALUES ('1');\n\nDELETE FROM junk_parent WHERE col = 1;\nDELETE FROM junk_parent WHERE col = 2;\n\n\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.",
"msg_date": "Thu, 5 Apr 2001 19:27:45 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Foreign Key & Rule confusion WAS: Lost Trigger(s)?"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Found the issue. Try out the attached SQL in a fresh database.\n\nAnd? AFAICT it behaves as expected, in either 7.0.2 or current ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 01:54:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key & Rule confusion WAS: Lost Trigger(s)? "
},
{
"msg_contents": "Not quite as expected. I didn't expect deleting the 2 from the\nprimary table to fail because the CASCADE DELETE wasn't able to run on\nthe second (even though no values existed in that table). I suppose\nit does run properly (blocks all delete attempts) -- but I just didn't\nexpect it to error out on values which didn't exist in the second\ntable -- thereby blocking the deletion from the primary or referred\ntable..\n\nTried against 7.1beta3 and 7.1beta5.\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Friday, April 06, 2001 1:54 AM\nSubject: Re: [HACKERS] Foreign Key & Rule confusion WAS: Lost\nTrigger(s)?\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Found the issue. Try out the attached SQL in a fresh database.\n>\n> And? AFAICT it behaves as expected, in either 7.0.2 or current ...\n>\n> regards, tom lane\n>\n\n\n",
"msg_date": "Fri, 6 Apr 2001 08:53:37 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key & Rule confusion WAS: Lost Trigger(s)? "
},
{
"msg_contents": "\"Rod Taylor\" <rod.taylor@inquent.com> writes:\n> Not quite as expected. I didn't expect deleting the 2 from the\n> primary table to fail because the CASCADE DELETE wasn't able to run on\n> the second (even though no values existed in that table).\n\nBut it *doesn't* fail. At least not in the versions I tried.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 10:09:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key & Rule confusion WAS: Lost Trigger(s)? "
},
{
"msg_contents": "\"Rod Taylor\" <rod.taylor@inquent.com> writes:\n> I must apologize, I was copying from one screen to another due to\n> network outage and gave a bad example -- missed the most important\n> part.\n\n> There should have been an AS ON DELETE TO junk DO INSTEAD NOTHING;\n> rule.\n\nAh so. With that in place, I see what you are talking about:\n\nregression=# DELETE FROM junk_parent WHERE col = 1;\nERROR: SPI_execp() failed in RI_FKey_cascade_del()\nregression=# DELETE FROM junk_parent WHERE col = 2;\nERROR: SPI_execp() failed in RI_FKey_cascade_del()\n\n\n> The RI_FKey_cascade_del() trigger fails on the second delete attempt.\n> To me it should ignore the error if there wasn't anything to delete in\n> the first place.\n\nWell, I think the issue is something different. Right now, referential\nintegrity triggers are implemented as issuing actual queries --- which\nare subject to rule rewrites. It strikes me that perhaps this is wrong,\nand a referential integrity operation should proceed without regard to\nrules.\n\nIf you think that rules indeed should be able to affect referential\nintegrity updates, then it would probably be better that neither of\nthese examples fail (ie, the RI triggers should not complain about their\nqueries having been rewritten to nothing).\n\nI don't see a good argument for raising an error on the first delete and\nnot the second. Either ref integrity is subject to rules, or it's not.\n\nNext question: should a trigger be able to defeat an RI update? That\ncan happen now, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 11:20:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key & Rule confusion WAS: Lost Trigger(s)? "
},
{
"msg_contents": "Ack...\n\nAll my current history keeping methods are done via triggers on tables\n(generally set off by various RI_ triggers). Not real good if it\ndidn't set off those triggers for me.\n\nI'm sure rules are a ditto in that case for others.\n\nI was hoping for a way to prevent the RI trigger from failing if there\nwasn't anything to do anyway -- SELECT FOR DELETE -- if no results\nignore, if there were results delete the results. Delete does a\nsearch anyway, this would lock the rows and later get rid of them. A\nhack, and I have no idea how it would pan out -- but that's would\nproduce what I expected to happen.\n\nOtherwise I change all the ON DELETE DO INSTEAD NOTHING rules to\ntriggers which see if the parent still exists (and doesn't allow\ndeletion if it does) otherwise it cancels the delete. Not a nice\nsolution.\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rod.taylor@inquent.com>\nCc: <pgsql-hackers@postgreSQL.org>\nSent: Friday, April 06, 2001 11:20 AM\nSubject: Re: [HACKERS] Foreign Key & Rule confusion WAS: Lost\nTrigger(s)?\n\n\n> \"Rod Taylor\" <rod.taylor@inquent.com> writes:\n> > I must apologize, I was copying from one screen to another due to\n> > network outage and gave a bad example -- missed the most important\n> > part.\n>\n> > There should have been an AS ON DELETE TO junk DO INSTEAD NOTHING;\n> > rule.\n>\n> Ah so. 
With that in place, I see what you are talking about:\n>\n> regression=# DELETE FROM junk_parent WHERE col = 1;\n> ERROR: SPI_execp() failed in RI_FKey_cascade_del()\n> regression=# DELETE FROM junk_parent WHERE col = 2;\n> ERROR: SPI_execp() failed in RI_FKey_cascade_del()\n>\n>\n> > The RI_FKey_cascade_del() trigger fails on the second delete\nattempt.\n> > To me it should ignore the error if there wasn't anything to\ndelete in\n> > the first place.\n>\n> Well, I think the issue is something different. Right now,\nreferential\n> integrity triggers are implemented as issuing actual queries ---\nwhich\n> are subject to rule rewrites. It strikes me that perhaps this is\nwrong,\n> and a referential integrity operation should proceed without regard\nto\n> rules.\n>\n> If you think that rules indeed should be able to affect referential\n> integrity updates, then it would probably be better that neither of\n> these examples fail (ie, the RI triggers should not complain about\ntheir\n> queries having been rewritten to nothing).\n>\n> I don't see a good argument for raising an error on the first delete\nand\n> not the second. Either ref integrity is subject to rules, or it's\nnot.\n>\n> Next question: should a trigger be able to defeat an RI update?\nThat\n> can happen now, too.\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Fri, 6 Apr 2001 11:51:57 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key & Rule confusion WAS: Lost Trigger(s)? "
}
] |
[
{
"msg_contents": "Found the issue. Try out the included SQL.\n\nI had honestly expected the second delete to work properly as nothing\nhad to be removed that table.\n\nThe rule was added as a temporary measure to protect the data\ncurrently in the table -- without the intent of otherwise impeding the\nother information's use. I suppose I forgot that the table wouldn't be\nlooked at as the rule is checked quite early.\n\n\n\nCREATE TABLE junk_parent (\n col SERIAL PRIMARY KEY\n);\n\nINSERT INTO junk_parent DEFAULT VALUES;\nINSERT INTO junk_parent DEFAULT VALUES;\nINSERT INTO junk_parent DEFAULT VALUES;\n\nCREATE TABLE junk (\n col int4 NOT NULL REFERENCES junk_parent(col) ON UPDATE CASCADE ON\nDELETE CASCADE\n);\n\nINSERT INTO junk VALUES ('1');\n\nDELETE FROM junk_parent WHERE col = 1;\nDELETE FROM junk_parent WHERE col = 2;\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.",
"msg_date": "Thu, 5 Apr 2001 19:29:56 -0400",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": true,
"msg_subject": "Foreign Key & Rule confusion RE: Lost Trigger(s)?"
}
] |
[
{
"msg_contents": "> > Blocks that have recently been written, but failed to make\n> > it down to the disk platter intact, should be restorable from\n> > the WAL log. So we do not need a block-level CRC to guard\n> > against partial writes.\n> \n> If a block is missing some sectors in the middle, how would you know\n> to reconstruct it from the WAL, without a block CRC telling you that\n> the block is corrupt?\n\nOn recovery we unconditionally copy *entire* block content from the log\nfor each block modified since last checkpoint. And we do not write new\ncheckpoint record (ie do not advance recovery start point) untill we know\nthat all data blocks are flushed on disk (including blocks modified before\ncheckpointer started).\n\nVadim\n",
"msg_date": "Thu, 5 Apr 2001 19:36:54 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: TODO list"
}
] |
[
{
"msg_contents": "\nThomas? Did I miss your patch for the 'WITH TIMEZONE' regression test?\n\nDoes anyone else have anything left outstanding that should hold me off\nfrom doing an RC3 tomorrow?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 5 Apr 2001 23:59:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "RC3 ... anyone have anything left outstanding?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Thomas? Did I miss your patch for the 'WITH TIMEZONE' regression test?\n\nStill not there in CVS ...\n\n> Does anyone else have anything left outstanding that should hold me off\n> from doing an RC3 tomorrow?\n\nOther than a better answer for the horology test, I think we are good\nto go. The main thing that was still bothering me was Konstantin\nSolodovnikov's report of database corruption. I just committed a fix\nfor the primary cause of that problem: turns out he was triggering a\nrandom transfer of control inside plpgsql. (Calling through a\npreviously freed function pointer is uncool...) I'm guessing that the\nensuing corruption of the database can be blamed on whatever bit of code\nmanaged to misexecute before the backend crashed completely. This is\nplausible because he reports that he only saw corruption in perhaps one\nout of every several hundred repetitions of the crash --- it makes sense\nthat you'd need to mistransfer just so to result in writing junk XLOG\nentries or whatever was the direct cause of the data corruption.\n\nVadim is still poking at the test case Konstantin sent, but I'll bet\nhe won't be able to reproduce any corruption. The effects of jumping\nthrough an overwritten function pointer would be exceedingly\nsystem-specific.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Apr 2001 23:25:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RC3 ... anyone have anything left outstanding? "
},
{
"msg_contents": "\nOkay, unless I hear different from anyone out there, I'm goin to roll RC3\nwhen I get to work tomorrow, and announce it before I leave (to give it\nsome time to propogate to the mirrors) ...\n\nOn Thu, 5 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Thomas? Did I miss your patch for the 'WITH TIMEZONE' regression test?\n>\n> Still not there in CVS ...\n>\n> > Does anyone else have anything left outstanding that should hold me off\n> > from doing an RC3 tomorrow?\n>\n> Other than a better answer for the horology test, I think we are good\n> to go. The main thing that was still bothering me was Konstantin\n> Solodovnikov's report of database corruption. I just committed a fix\n> for the primary cause of that problem: turns out he was triggering a\n> random transfer of control inside plpgsql. (Calling through a\n> previously freed function pointer is uncool...) I'm guessing that the\n> ensuing corruption of the database can be blamed on whatever bit of code\n> managed to misexecute before the backend crashed completely. This is\n> plausible because he reports that he only saw corruption in perhaps one\n> out of every several hundred repetitions of the crash --- it makes sense\n> that you'd need to mistransfer just so to result in writing junk XLOG\n> entries or whatever was the direct cause of the data corruption.\n>\n> Vadim is still poking at the test case Konstantin sent, but I'll bet\n> he won't be able to reproduce any corruption. The effects of jumping\n> through an overwritten function pointer would be exceedingly\n> system-specific.\n>\n> \t\t\tregards, tom lane\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 6 Apr 2001 00:44:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: RC3 ... anyone have anything left outstanding? "
},
{
"msg_contents": "> Okay, unless I hear different from anyone out there, I'm goin to roll RC3\n> when I get to work tomorrow, and announce it before I leave (to give it\n> some time to propogate to the mirrors) ...\n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > Thomas? Did I miss your patch for the 'WITH TIMEZONE' regression test?\n> > Still not there in CVS ...\n\nI've committed a fix to the horology regression test which keeps *some*\nkind of test for the \"time with time zone\" type with an implicit time\nzone. Not ideal, but we can work on it later.\n\nbtw, I've applied the patch for the expected/ files to all variants of\nhorology.out, so all platforms should pass that test now.\n\nI've also committed the up to date platform list, which has 30 distinct\nplatforms supported!! Thanks to Henry Hotz for getting us to that magic\nnumber with NetBSD/ppc.\n\n - Thomas\n",
"msg_date": "Fri, 06 Apr 2001 05:56:55 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: RC3 ... anyone have anything left outstanding?"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> btw, I've applied the patch for the expected/ files to all variants of\n> horology.out, so all platforms should pass that test now.\n\nFWIW, I confirm that horology-no-DST-before-1970 is good; it passes on\nHPUX. Can anyone confirm horology-solaris-1947?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 02:49:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RC3 ... anyone have anything left outstanding? "
}
] |
[
{
"msg_contents": "\n\nEinar Karttunen wrote:\n\n>\n> >\n> > integer (float_expression) or int (float_expression) DO work on\n> RedHat6.2/PostgreSQL6.5 and DO NOT work on Mandrake/PostgreSQL7.0.2\n\n> Try using int2()/int4()/int8() instead of integer(). The intn()\nfunctions\n> convert the float to a integer n bytes long, in normal cases you\nprobably\n> want to use int4().\n>\n\nEinar,\n\nMuch obliged.\n\nint4() has done a job.\n\nWhy is that NOT documented under \"Matematical functions\"?\n\nRegards,\n\n\nSteven.\n\n--\n***********************************************\n\nSteven Vajdic (BSc/Hon, MSc)\nSenior Software Engineer\nMotorola Australia Software Centre (MASC)\n2 Second Avenue, Technology Park\nAdelaide, South Australia 5095\nemail: Steven.Vajdic@motorola.com\nemail: svajdic@asc.corp.mot.com\nPh.: +61-8-8168-3435\nFax: +61-8-8168-3501\nFront Office (Ph): +61-8-8168-3500\n\n----------------------------------------\nmobile: +61 (0)419 860 903\nAFTER WORK email: steven_vajdic@ivillage.com\n----------------------------------------\n\n***********************************************\n\n\n",
"msg_date": "Fri, 06 Apr 2001 18:57:16 +0930",
"msg_from": "Steven Vajdic <svajdic@asc.corp.mot.com>",
"msg_from_op": true,
"msg_subject": "Integer to float function"
},
{
"msg_contents": "> > > integer (float_expression) or int (float_expression) DO work on\n> > RedHat6.2/PostgreSQL6.5 and DO NOT work on Mandrake/PostgreSQL7.0.2\n> > Try using int2()/int4()/int8() instead of integer().\n> Why is that NOT documented under \"Matematical functions\"?\n\nBecause we haven't received any patches to document it? ;)\n\n - Thomas\n",
"msg_date": "Fri, 06 Apr 2001 13:46:22 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Integer to float function"
},
{
"msg_contents": "Steven Vajdic writes:\n\n> int4() has done a job.\n>\n> Why is that NOT documented under \"Matematical functions\"?\n\nBecause you're supposed to use CAST(value AS integer).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 6 Apr 2001 16:39:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Integer to float function"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Try using int2()/int4()/int8() instead of integer().\n>> Why is that NOT documented under \"Matematical functions\"?\n\n> Because we haven't received any patches to document it? ;)\n\nOr because it's not a mathematical function. I don't think that\ndatatype conversion functions belong under that heading.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 10:54:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Integer to float function "
}
] |
[
{
"msg_contents": "My nightly dump of one of my databases started failing Wednesday night and\nI'm not sure what is going on. When I pg_dump this one database (others on\nthis machine are fine) I get this output from pg_dump\n\n-- last builtin oid is 17216 \n-- reading user-defined types \n-- reading user-defined functions \n-- reading user-defined aggregates \n-- reading user-defined operators \n-- reading user-defined tables \n-- finding Triggers for relation: 'jobs' \n[snip] ...\n-- finding DEFAULT expression for attr: 'id' \n-- finding the attrs and types for table: 'testassignments' \n-- finding DEFAULT expression for attr: 'id' \n-- finding DEFAULT expression for attr: 'standby' \n-- finding the attrs and types for table: 'pagedata' \n-- finding DEFAULT expression for attr: 'id' \n-- flagging inherited attributes in subtables \n-- dumping out database comment \n-- dumping out user-defined types \n-- dumping out tables \n-- dumping out user-defined procedural languages \n-- dumping out user-defined functions \nfailed sanity check, type with oid 101993741 was not found\n\nrelevant info: pg7.0.2 on redhat 6.1, 700MHz Athlon, 256M, linux software\nmirror hard drives.\n\nbtw: This is a completely different server at a different location from the\none that was having problems two weeks ago and thank you all for the help\nthen, I don't think I emailed a thank you, it was a busy day.\n",
"msg_date": "Fri, 6 Apr 2001 10:33:19 -0500 ",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "More Problems"
},
{
"msg_contents": "Matthew <matt@ctlno.com> writes:\n> -- dumping out user-defined functions \n> failed sanity check, type with oid 101993741 was not found\n\nLooks like you have a function that refers to a since-deleted type.\nYou'll need to find and drop the function (which may mean manually\ndeleting its pg_proc row, since there's no way to name the function\nto DROP FUNCTION if one of its parameters is a now-unknown type).\n\nAnother possibility is that the type still exists but you deleted its\nowning user from pg_shadow; that will confuse pg_dump too. In that\ncase you can just create a new user with the same usesysid, or you can\nupdate the type's typowner field in pg_type to refer to some existing\nuser.\n\nTry \"select * from pg_type where oid = 101993741\" to figure out which\nsituation applies ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 14:51:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More Problems "
}
] |
[
{
"msg_contents": "> FWIW, I confirm that horology-no-DST-before-1970 is good; it passes on\n> HPUX. Can anyone confirm horology-solaris-1947?\n\nHow to test it? All default tests are Ok on my Solaris.\n\nVadim\n",
"msg_date": "Fri, 6 Apr 2001 09:23:00 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: RC3 ... anyone have anything left outstanding? "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> FWIW, I confirm that horology-no-DST-before-1970 is good; it passes on\n>> HPUX. Can anyone confirm horology-solaris-1947?\n\n> How to test it? All default tests are Ok on my Solaris.\n\nIf the horology test shows as passing, then we're set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 12:26:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RC3 ... anyone have anything left outstanding? "
}
] |
[
{
"msg_contents": "> To be perfectly clear: I have actually seen bug reports trace to\n> problems that I think a block-level CRC might have detected (not\n> corrected, of course, but at least the user might have realized he had\n> flaky hardware a little sooner). So I do not say that the upside to\n> a block CRC is nil. But I am unconvinced that it exceeds the\n> downside, in development effort, runtime, false failure reports\n> (is that CRC error really due to hardware trouble, or a software bug\n> that failed to update the CRC? and how do you get around the CRC error\n> to get at your data??) etc etc.\n\nSomething to remember: currently we update t_infomask (set\nHEAP_XMAX_COMMITTED etc) while holding share lock on buffer -\nwe have to change this before block CRC implementation.\n\nVadim\n",
"msg_date": "Fri, 6 Apr 2001 10:25:20 -0700 ",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: TODO list "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Something to remember: currently we update t_infomask (set\n> HEAP_XMAX_COMMITTED etc) while holding share lock on buffer -\n> we have to change this before block CRC implementation.\n\nYeah, we'd lose some concurrency there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 14:46:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: TODO list "
}
] |
[
{
"msg_contents": "\ndominic=# \\l\n List of databases\n Database | Owner \n---------------+----------\n aleal | aleal\n arivera | arivera\n bbeyer | bbeyer\n brandon | brandon\n brandon | postgres\n dominic | dominic\n ds3 | agould\n keystone | dominic\n kperoni | kperoni\n mgrooms | mgrooms\n postgres | brandon\t<-- Incorrect\n postgres | postgres\n smc_is_neteng | dominic\n template1 | brandon\t<-- Incorrect\n template1 | postgres\n wwwrun | brandon\t<-- Incorrect\n wwwrun | postgres\n(17 rows)\n\n\nAny idea what would cause this\"\n\n(The \"<-- Incorrect\" was added by me.. )\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Fri, 6 Apr 2001 13:47:06 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Duplicate databases..."
},
{
"msg_contents": "\nI just love to reply to myself..\n\nOn Fri, 6 Apr 2001, Dominic J. Eidson wrote:\n\n[Snip]\n\n> postgres | brandon\t<-- Incorrect\n> postgres | postgres\n> smc_is_neteng | dominic\n> template1 | brandon\t<-- Incorrect\n> template1 | postgres\n> wwwrun | brandon\t<-- Incorrect\n> wwwrun | postgres\n> (17 rows)\n> \n> \n> Any idea what would cause this\"\n> \n> (The \"<-- Incorrect\" was added by me.. )\n\ndominic=# select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\n\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Fri, 6 Apr 2001 13:51:28 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate databases..."
},
{
"msg_contents": "Vacuuming pg_database should make the bogus entries go away ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 16:55:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate databases... "
},
{
"msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n>> postgres | brandon\t<-- Incorrect\n>> postgres | postgres\n>> smc_is_neteng | dominic\n>> template1 | brandon\t<-- Incorrect\n>> template1 | postgres\n>> wwwrun | brandon\t<-- Incorrect\n>> wwwrun | postgres\n>> (17 rows)\n\nActually, I take that back, this isn't a connection-time issue...\n\nOn closer look, I'll bet that \"brandon\" and \"postgres\" have the\nsame usesysid assigned in pg_shadow. Need to change one of them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Apr 2001 16:56:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate databases... "
},
{
"msg_contents": "On Fri, 6 Apr 2001, Tom Lane wrote:\n\n> On closer look, I'll bet that \"brandon\" and \"postgres\" have the\n> same usesysid assigned in pg_shadow. Need to change one of them.\n\nThat was it - both were 501.\n\nThanks.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Fri, 6 Apr 2001 16:30:45 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate databases... "
}
] |
[
{
"msg_contents": "On Fri, 06 April 2001, Tom Lane wrote:\n\n> \n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > Try using int2()/int4()/int8() instead of integer().\n> >> Why is that NOT documented under \"Matematical functions\"?\n> \n> > Because we haven't received any patches to document it? ;)\n> \n> Or because it's not a mathematical function. I don't think that\n> datatype conversion functions belong under that heading.\n> \n> regards, tom lane\n\nI did not mean to question the position\nof integre() or int4(), or ... in the PostgreSQL documents.\n\nI HAVE found a list of functions, among them integer(), under mathematical functions and have NOT found anything about int4() or intn(), should I say. Perhaps, I was not searching the docs enough, but you know how we do things in a hurry - we do search the TOC.\n\nSo, if there is intn() desribed somewhere in the docs, I apologise BUT there is not then we have a communication problem.\n\nThanks in any case. The most important thing is that my migration from RedHat6.2/postgreSQL6.5 to Mandrake/PostgreSQL7.0.2 was easy and successfull, mostly because of this forum and questions/answers that we have \nin circulation.\n\nCheers,\n\nSteven.\n\n\n\n_________________________________________________________________\niVillage.com: Solutions for Your Life \nCheck out the most exciting women's community on the Web \nhttp://www.ivillage.com\n",
"msg_date": "6 Apr 2001 20:36:53 -0700",
"msg_from": "steven_vajdic@ivillage.com",
"msg_from_op": true,
"msg_subject": "Re: Re: Integer to float function"
}
] |
[
{
"msg_contents": "[ BCC to admin]\n\nI have completed my PostgreSQL session monitor utility, pgmonitor. I\nhave recently added the ability to start/stop the postmaster.\n\nI considered adding the ability to set postmaster/postgres command\nflags, but decided they are not changed frequently enough.\n\nIt still does not work on Solaris under 7.1RC because no Solaris users\nhave gotten ps_status working on that platform.\n\nI plan to write PostgreSQL articles for the next few weeks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n P G M O N I T O R\n\npgmonitor, version 0.42\n\nThe main web site for pgmonitor is:\n\thttp://greatbridge.org/project/pgmonitor/projdisplay.php\n\nYou can download the most recent version from\n\tftp://ftp.greatbridge.org/pub/pgmonitor\n\nThis tool allows monitoring of PostgreSQL activity. It requires Tcl/Tk\n8.0 or later. It may require modification of the 'ps' flags for certain\nplatforms. It is known to run on *BSD, Linux, and HPUX.\n\nPgmonitor only works when run on the database server machine. To use it\nremotely, log into the remote machine, set the DISPLAY variable to point\nto your local X server, and start pgmonitor. Pgmonitor will then run on\nthe remote machine, but will display on your local machine.\n\nPgmonitor uses 'ps' to display backend process activity. It uses 'gdb'\nto display running queries, and 'kill' to cancel queries and terminate\ndatabase connections.\n\nPgmonitor stores your most recent refresh and sort settings in the file\n~/.pgmonitor. This file is used to reload your defaults every time\npgmonitor is started.\n\nIf you are running PostgreSQL 7.1.0 or earlier, the 'query' button will\nnot work unless you compile PostgreSQL with debug symbols (-g), or apply\nthe supplied patch 'query_display.diff' and recompile PostgreSQL. 
The\nlater method is recommended.\n\nFor porting assistance, there are 'set debug' and 'set show_all' options\nin the script. 'debug' outputs status information while pgmonitor\nrunning, and 'show_all' shows all PostgreSQL user processes, such as the\npostmaster.\n\nBruce Momjian <pgman@candle.pha.pa.us>",
"msg_date": "Sat, 7 Apr 2001 00:00:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgmonitor completed"
},
{
"msg_contents": "\nI tried to intall pltcl, but failed when I tried to build a\npltcl.so \nit seems hangging on there forever, what's wrong??\n \nsu-2.04# cd /work/src/pgsql702/src/pl/tcl/\nsu-2.04# ls\nCVS mkMakefile.tcldefs.sh.in\nINSTALL modules\nMakefile pltcl.c\nlicense.terms pltcl_guide.nr\nmkMakefile.tcldefs.sh test\nsu-2.04# gmake\nMakefile:22: Makefile.tcldefs: No such file or directory\n/bin/sh ./mkMakefile.tcldefs.sh\n^Cgmake: *** Deleting file `Makefile.tcldefs'\ngmake: *** [Makefile.tcldefs] Interrupt\n\n\nJie LIANG\n\nSt. Bernard Software\n\n10350 Science Center Drive\nSuite 100, San Diego, CA 92121\nOffice:(858)320-4873\n\njliang@ipinc.com\nwww.stbernard.com\nwww.ipinc.com\n\n\n",
"msg_date": "Sat, 7 Apr 2001 17:55:25 -0700 (PDT)",
"msg_from": "Jie Liang <jliang@ipinc.com>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] pgmonitor completed"
},
{
"msg_contents": "You don't need pltcl, just libtcl.\n\n> \n> I tried to intall pltcl, but failed when I tried to build a\n> pltcl.so \n> it seems hangging on there forever, what's wrong??\n> \n> su-2.04# cd /work/src/pgsql702/src/pl/tcl/\n> su-2.04# ls\n> CVS mkMakefile.tcldefs.sh.in\n> INSTALL modules\n> Makefile pltcl.c\n> license.terms pltcl_guide.nr\n> mkMakefile.tcldefs.sh test\n> su-2.04# gmake\n> Makefile:22: Makefile.tcldefs: No such file or directory\n> /bin/sh ./mkMakefile.tcldefs.sh\n> ^Cgmake: *** Deleting file `Makefile.tcldefs'\n> gmake: *** [Makefile.tcldefs] Interrupt\n> \n> \n> Jie LIANG\n> \n> St. Bernard Software\n> \n> 10350 Science Center Drive\n> Suite 100, San Diego, CA 92121\n> Office:(858)320-4873\n> \n> jliang@ipinc.com\n> www.stbernard.com\n> www.ipinc.com\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 7 Apr 2001 21:32:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] pgmonitor completed"
},
{
"msg_contents": "No, I was wrong in even mentioning libpgtcl. You don't even need that. \npgmonitor does not connect to the database at any time. I should run as\na shell script if you have tcl/tk >= 8.0 installed.\n\n> \n> What I need to do to install libtcl\n> \n> Do I still need to\n> createlang -d dbname pltcl??\n> and when??\n> \n> \n> Jie LIANG\n> \n> St. Bernard Software\n> \n> 10350 Science Center Drive\n> Suite 100, San Diego, CA 92121\n> Office:(858)320-4873\n> \n> jliang@ipinc.com\n> www.stbernard.com\n> www.ipinc.com\n> \n> On Sat, 7 Apr 2001, Bruce Momjian wrote:\n> \n> > You don't need pltcl, just libtcl.\n> > \n> > > \n> > > I tried to intall pltcl, but failed when I tried to build a\n> > > pltcl.so \n> > > it seems hangging on there forever, what's wrong??\n> > > \n> > > su-2.04# cd /work/src/pgsql702/src/pl/tcl/\n> > > su-2.04# ls\n> > > CVS mkMakefile.tcldefs.sh.in\n> > > INSTALL modules\n> > > Makefile pltcl.c\n> > > license.terms pltcl_guide.nr\n> > > mkMakefile.tcldefs.sh test\n> > > su-2.04# gmake\n> > > Makefile:22: Makefile.tcldefs: No such file or directory\n> > > /bin/sh ./mkMakefile.tcldefs.sh\n> > > ^Cgmake: *** Deleting file `Makefile.tcldefs'\n> > > gmake: *** [Makefile.tcldefs] Interrupt\n> > > \n> > > \n> > > Jie LIANG\n> > > \n> > > St. Bernard Software\n> > > \n> > > 10350 Science Center Drive\n> > > Suite 100, San Diego, CA 92121\n> > > Office:(858)320-4873\n> > > \n> > > jliang@ipinc.com\n> > > www.stbernard.com\n> > > www.ipinc.com\n> > > \n> > > \n> > > \n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 7 Apr 2001 22:33:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: libtcl"
},
{
"msg_contents": "\nWhat I need to do to install libtcl\n\nDo I still need to\ncreatelang -d dbname pltcl??\nand when??\n\n\nJie LIANG\n\nSt. Bernard Software\n\n10350 Science Center Drive\nSuite 100, San Diego, CA 92121\nOffice:(858)320-4873\n\njliang@ipinc.com\nwww.stbernard.com\nwww.ipinc.com\n\nOn Sat, 7 Apr 2001, Bruce Momjian wrote:\n\n> You don't need pltcl, just libtcl.\n> \n> > \n> > I tried to intall pltcl, but failed when I tried to build a\n> > pltcl.so \n> > it seems hangging on there forever, what's wrong??\n> > \n> > su-2.04# cd /work/src/pgsql702/src/pl/tcl/\n> > su-2.04# ls\n> > CVS mkMakefile.tcldefs.sh.in\n> > INSTALL modules\n> > Makefile pltcl.c\n> > license.terms pltcl_guide.nr\n> > mkMakefile.tcldefs.sh test\n> > su-2.04# gmake\n> > Makefile:22: Makefile.tcldefs: No such file or directory\n> > /bin/sh ./mkMakefile.tcldefs.sh\n> > ^Cgmake: *** Deleting file `Makefile.tcldefs'\n> > gmake: *** [Makefile.tcldefs] Interrupt\n> > \n> > \n> > Jie LIANG\n> > \n> > St. Bernard Software\n> > \n> > 10350 Science Center Drive\n> > Suite 100, San Diego, CA 92121\n> > Office:(858)320-4873\n> > \n> > jliang@ipinc.com\n> > www.stbernard.com\n> > www.ipinc.com\n> > \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Sat, 7 Apr 2001 19:36:54 -0700 (PDT)",
"msg_from": "Jie Liang <jliang@ipinc.com>",
"msg_from_op": false,
"msg_subject": "libtcl"
},
{
"msg_contents": "\nI believe what we are talking about is diff,\nI am trying to install pltcl language on my db not pgmonitor.\nWhen I wand to build a pltcl.so, I got following msg,\nI want to know there's any wrong?\n\nThanks anyway.\n\n\nJie LIANG\n\nSt. Bernard Software\n\n10350 Science Center Drive\nSuite 100, San Diego, CA 92121\nOffice:(858)320-4873\n\njliang@ipinc.com\nwww.stbernard.com\nwww.ipinc.com\n\nOn Sat, 7 Apr 2001, Bruce Momjian wrote:\n\n> No, I was wrong in even mentioning libpgtcl. You don't even need that. \n> pgmonitor does not connect to the database at any time. I should run as\n> a shell script if you have tcl/tk >= 8.0 installed.\n> \n> > \n> > What I need to do to install libtcl\n> > \n> > Do I still need to\n> > createlang -d dbname pltcl??\n> > and when??\n> > \n> > \n> > Jie LIANG\n> > \n> > St. Bernard Software\n> > \n> > 10350 Science Center Drive\n> > Suite 100, San Diego, CA 92121\n> > Office:(858)320-4873\n> > \n> > jliang@ipinc.com\n> > www.stbernard.com\n> > www.ipinc.com\n> > \n> > On Sat, 7 Apr 2001, Bruce Momjian wrote:\n> > \n> > > You don't need pltcl, just libtcl.\n> > > \n> > > > \n> > > > I tried to intall pltcl, but failed when I tried to build a\n> > > > pltcl.so \n> > > > it seems hangging on there forever, what's wrong??\n> > > > \n> > > > su-2.04# cd /work/src/pgsql702/src/pl/tcl/\n> > > > su-2.04# ls\n> > > > CVS mkMakefile.tcldefs.sh.in\n> > > > INSTALL modules\n> > > > Makefile pltcl.c\n> > > > license.terms pltcl_guide.nr\n> > > > mkMakefile.tcldefs.sh test\n> > > > su-2.04# gmake\n> > > > Makefile:22: Makefile.tcldefs: No such file or directory\n> > > > /bin/sh ./mkMakefile.tcldefs.sh\n> > > > ^Cgmake: *** Deleting file `Makefile.tcldefs'\n> > > > gmake: *** [Makefile.tcldefs] Interrupt\n> > > > \n> > > > \n> > > > Jie LIANG\n> > > > \n> > > > St. Bernard Software\n> > > > \n> > > > 10350 Science Center Drive\n> > > > Suite 100, San Diego, CA 92121\n> > > > Office:(858)320-4873\n> > > > \n> > > > jliang@ipinc.com\n> > > > www.stbernard.com\n> > > > www.ipinc.com\n> > > > \n> > > > \n> > > > \n> > > \n> > > \n> > > -- \n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > > \n> > \n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n",
"msg_date": "Sat, 7 Apr 2001 20:04:44 -0700 (PDT)",
"msg_from": "Jie Liang <jliang@ipinc.com>",
"msg_from_op": false,
"msg_subject": "Re: libtcl"
}
] |
[
{
"msg_contents": "Hi!\n\nI've noticed a pg_dump/pg_dumpall problem with timestamp variables, in\ndumping the minute, and second values:\ninstead of dumping\n12:01:00.00 it dumps out 12:60:00.00 which is not accepted when\nrestoring a database...\n\nGyuro Lehel\n\n\n",
"msg_date": "Sat, 7 Apr 2001 09:37:28 +0200",
"msg_from": "Lehel Gyuro <lehel@bin.hu>",
"msg_from_op": true,
"msg_subject": "pg_dupp/pg_dumpall problem!"
},
{
"msg_contents": "> I've noticed a pg_dump/pg_dumpall problem with timestamp variables, in\n> dumping the minute, and second values:\n> instead of dumping\n> 12:01:00.00 it dumps out 12:60:00.00 which is not accepted when\n> restoring a database...\n\nYou are running the Mandrake distro, or somehow compiling with a bad set\nof mixed-up compiler flags. You need to *not* compile with -ffast-math,\nbut rather with -O2 or -O3 only.\n\n - Thomas\n",
"msg_date": "Mon, 09 Apr 2001 06:34:05 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_dupp/pg_dumpall problem!"
}
] |
[
{
"msg_contents": "The man page for createlang refers to the --echo option, but in fact\nthat option does not exist.\n\nThis patch implements it and also expands the man page for a couple of\noptions that were not documented.\n\ndiff -ur postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\n--- postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml\tSun Jan 7 02:03:28 2001\n+++ postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\tSat Apr 7 08:56:26 2001\n@@ -62,11 +62,39 @@\n </varlistentry>\n \n <varlistentry>\n+ <term>--pglib <replaceable class=\"parameter\">directory</replaceable></term>\n+ <listitem>\n+ <para>\n+\tSpecifies the directory in which the language interpreter is\n+ to be found.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n <term>-l, --list</term>\n <listitem>\n <para>\n Shows a list of already installed languages in the target database\n (which must be specified).\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n+ <term>--echo</term>\n+ <listitem>\n+ <para>\n+ Displays SQL commands as they are executed.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n+ <term>-?, --help</term>\n+ <listitem>\n+ <para>\n+ Shows a brief help message.\n </para>\n </listitem>\n </varlistentry>\ndiff -ur postgresql-7.1RC3/src/bin/scripts/createlang.sh postgresql-7.1cRC3/src/bin/scripts/createlang.sh\n--- postgresql-7.1RC3/src/bin/scripts/createlang.sh\tFri Feb 23 18:12:18 2001\n+++ postgresql-7.1cRC3/src/bin/scripts/createlang.sh\tSat Apr 7 07:59:18 2001\n@@ -98,6 +98,9 @@\n PGLIB=`echo \"$1\" | sed 's/^--pglib=//'`\n ;;\n \n+\t--echo)\n+\t\tlistcmd=TRUE\n+\t\t;;\n \t-*)\n \t\techo \"$CMDNAME: invalid option: $1\" 1>&2\n echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n@@ -155,7 +158,12 @@\n # List option\n # ----------\n if [ \"$list\" ]; then\n- ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n+\tsqlcmd=\"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n+\tif [ -n \"$listcmd\" ]\n+\tthen\n+\t\techo $sqlcmd\n+\tfi\n+ ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"$sqlcmd\"\n exit $?\n fi\n \n@@ -237,7 +245,12 @@\n # ----------\n # Make sure the language isn't already installed\n # ----------\n-res=`$PSQL \"SELECT oid FROM pg_language WHERE lanname = '$langname'\"`\n+sqlcmd=\"SELECT oid FROM pg_language WHERE lanname = '$langname'\"\n+if [ -n \"$listcmd\" ]\n+then\n+\techo $sqlcmd\n+fi\n+res=`$PSQL \"$sqlcmd\"`\n if [ $? -ne 0 ]; then\n \techo \"$CMDNAME: external error\" 1>&2\n \texit 1\n@@ -251,7 +264,12 @@\n # ----------\n # Check that there is no function named as the call handler\n # ----------\n-res=`$PSQL \"SELECT oid FROM pg_proc WHERE proname = '$handler'\"`\n+sqlcmd=\"SELECT oid FROM pg_proc WHERE proname = '$handler'\"\n+if [ -n \"$listcmd\" ]\n+then\n+\techo $sqlcmd\n+fi\n+res=`$PSQL \"$sqlcmd\"`\n if [ ! -z \"$res\" ]; then\n \techo \"$CMDNAME: A function named '$handler' already exists. Installation aborted.\" 1>&2\n \texit 1\n@@ -260,13 +278,23 @@\n # ----------\n # Create the call handler and the language\n # ----------\n-$PSQL \"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n+sqlcmd=\"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n+if [ -n \"$listcmd\" ]\n+then\n+\techo $sqlcmd\n+fi\n+$PSQL \"$sqlcmd\"\n if [ $? -ne 0 ]; then\n \techo \"$CMDNAME: language installation failed\" 1>&2\n \texit 1\n fi\n \n-$PSQL \"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n+sqlcmd=\"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n+if [ -n \"$listcmd\" ]\n+then\n+\techo $sqlcmd\n+fi\n+$PSQL \"$sqlcmd\"\n if [ $? -ne 0 ]; then\n \techo \"$CMDNAME: language installation failed\" 1>&2\n \texit 1\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I sought the Lord, and he answered me; he delivered me\n from all my fears.\" Psalm 34:4 \n\n",
"msg_date": "Sat, 07 Apr 2001 10:15:51 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "createlang patch"
},
{
"msg_contents": "\nI will save this for 7.2. Thanks.\n\n> The man page for createlang refers to the --echo option, but in fact\n> that option does not exist.\n> \n> This patch implements it and also expands the man page for a couple of\n> options that were not documented.\n> \n> diff -ur postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\n> --- postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml\tSun Jan 7 02:03:28 2001\n> +++ postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\tSat Apr 7 08:56:26 2001\n> @@ -62,11 +62,39 @@\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>--pglib <replaceable class=\"parameter\">directory</replaceable></term>\n> + <listitem>\n> + <para>\n> +\tSpecifies the directory in which the language interpreter is\n> + to be found.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> <term>-l, --list</term>\n> <listitem>\n> <para>\n> Shows a list of already installed languages in the target database\n> (which must be specified).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term>--echo</term>\n> + <listitem>\n> + <para>\n> + Displays SQL commands as they are executed.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term>-?, --help</term>\n> + <listitem>\n> + <para>\n> + Shows a brief help message.\n> </para>\n> </listitem>\n> </varlistentry>\n> diff -ur postgresql-7.1RC3/src/bin/scripts/createlang.sh postgresql-7.1cRC3/src/bin/scripts/createlang.sh\n> --- postgresql-7.1RC3/src/bin/scripts/createlang.sh\tFri Feb 23 18:12:18 2001\n> +++ postgresql-7.1cRC3/src/bin/scripts/createlang.sh\tSat Apr 7 07:59:18 2001\n> @@ -98,6 +98,9 @@\n> PGLIB=`echo \"$1\" | sed 's/^--pglib=//'`\n> ;;\n> \n> +\t--echo)\n> +\t\tlistcmd=TRUE\n> +\t\t;;\n> \t-*)\n> \t\techo \"$CMDNAME: invalid option: $1\" 1>&2\n> echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n> @@ -155,7 +158,12 @@\n> # List option\n> # ----------\n> if [ \"$list\" ]; then\n> - ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n> +\tsqlcmd=\"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n> +\tif [ -n \"$listcmd\" ]\n> +\tthen\n> +\t\techo $sqlcmd\n> +\tfi\n> + ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"$sqlcmd\"\n> exit $?\n> fi\n> \n> @@ -237,7 +245,12 @@\n> # ----------\n> # Make sure the language isn't already installed\n> # ----------\n> -res=`$PSQL \"SELECT oid FROM pg_language WHERE lanname = '$langname'\"`\n> +sqlcmd=\"SELECT oid FROM pg_language WHERE lanname = '$langname'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +res=`$PSQL \"$sqlcmd\"`\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: external error\" 1>&2\n> \texit 1\n> @@ -251,7 +264,12 @@\n> # ----------\n> # Check that there is no function named as the call handler\n> # ----------\n> -res=`$PSQL \"SELECT oid FROM pg_proc WHERE proname = '$handler'\"`\n> +sqlcmd=\"SELECT oid FROM pg_proc WHERE proname = '$handler'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +res=`$PSQL \"$sqlcmd\"`\n> if [ ! -z \"$res\" ]; then\n> \techo \"$CMDNAME: A function named '$handler' already exists. Installation aborted.\" 1>&2\n> \texit 1\n> @@ -260,13 +278,23 @@\n> # ----------\n> # Create the call handler and the language\n> # ----------\n> -$PSQL \"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n> +sqlcmd=\"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +$PSQL \"$sqlcmd\"\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: language installation failed\" 1>&2\n> \texit 1\n> fi\n> \n> -$PSQL \"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n> +sqlcmd=\"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +$PSQL \"$sqlcmd\"\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: language installation failed\" 1>&2\n> \texit 1\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"I sought the Lord, and he answered me; he delivered me\n> from all my fears.\" Psalm 34:4 \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 7 Apr 2001 11:24:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: createlang patch"
},
{
"msg_contents": "\nPatch applied by Peter E. Thanks.\n\n> The man page for createlang refers to the --echo option, but in fact\n> that option does not exist.\n> \n> This patch implements it and also expands the man page for a couple of\n> options that were not documented.\n> \n> diff -ur postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\n> --- postgresql-7.1RC3/doc/src/sgml/ref/createlang.sgml\tSun Jan 7 02:03:28 2001\n> +++ postgresql-7.1cRC3/doc/src/sgml/ref/createlang.sgml\tSat Apr 7 08:56:26 2001\n> @@ -62,11 +62,39 @@\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>--pglib <replaceable class=\"parameter\">directory</replaceable></term>\n> + <listitem>\n> + <para>\n> +\tSpecifies the directory in which the language interpreter is\n> + to be found.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> <term>-l, --list</term>\n> <listitem>\n> <para>\n> Shows a list of already installed languages in the target database\n> (which must be specified).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term>--echo</term>\n> + <listitem>\n> + <para>\n> + Displays SQL commands as they are executed.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term>-?, --help</term>\n> + <listitem>\n> + <para>\n> + Shows a brief help message.\n> </para>\n> </listitem>\n> </varlistentry>\n> diff -ur postgresql-7.1RC3/src/bin/scripts/createlang.sh postgresql-7.1cRC3/src/bin/scripts/createlang.sh\n> --- postgresql-7.1RC3/src/bin/scripts/createlang.sh\tFri Feb 23 18:12:18 2001\n> +++ postgresql-7.1cRC3/src/bin/scripts/createlang.sh\tSat Apr 7 07:59:18 2001\n> @@ -98,6 +98,9 @@\n> PGLIB=`echo \"$1\" | sed 's/^--pglib=//'`\n> ;;\n> \n> +\t--echo)\n> +\t\tlistcmd=TRUE\n> +\t\t;;\n> \t-*)\n> \t\techo \"$CMDNAME: invalid option: $1\" 1>&2\n> echo \"Try '$CMDNAME --help' for more information.\" 1>&2\n> @@ -155,7 +158,12 @@\n> # List option\n> # ----------\n> if [ \"$list\" ]; then\n> - ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n> +\tsqlcmd=\"SELECT lanname as \\\"Name\\\", lanpltrusted as \\\"Trusted?\\\", lancompiler as \\\"Compiler\\\" FROM pg_language WHERE lanispl = 't'\"\n> +\tif [ -n \"$listcmd\" ]\n> +\tthen\n> +\t\techo $sqlcmd\n> +\tfi\n> + ${PATHNAME}psql $PSQLOPT -d \"$dbname\" -P 'title=Procedural languages' -c \"$sqlcmd\"\n> exit $?\n> fi\n> \n> @@ -237,7 +245,12 @@\n> # ----------\n> # Make sure the language isn't already installed\n> # ----------\n> -res=`$PSQL \"SELECT oid FROM pg_language WHERE lanname = '$langname'\"`\n> +sqlcmd=\"SELECT oid FROM pg_language WHERE lanname = '$langname'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +res=`$PSQL \"$sqlcmd\"`\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: external error\" 1>&2\n> \texit 1\n> @@ -251,7 +264,12 @@\n> # ----------\n> # Check that there is no function named as the call handler\n> # ----------\n> -res=`$PSQL \"SELECT oid FROM pg_proc WHERE proname = '$handler'\"`\n> +sqlcmd=\"SELECT oid FROM pg_proc WHERE proname = '$handler'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +res=`$PSQL \"$sqlcmd\"`\n> if [ ! -z \"$res\" ]; then\n> \techo \"$CMDNAME: A function named '$handler' already exists. Installation aborted.\" 1>&2\n> \texit 1\n> @@ -260,13 +278,23 @@\n> # ----------\n> # Create the call handler and the language\n> # ----------\n> -$PSQL \"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n> +sqlcmd=\"CREATE FUNCTION $handler () RETURNS OPAQUE AS '$PGLIB/${object}$DLSUFFIX' LANGUAGE 'C'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +$PSQL \"$sqlcmd\"\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: language installation failed\" 1>&2\n> \texit 1\n> fi\n> \n> -$PSQL \"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n> +sqlcmd=\"CREATE ${trusted}PROCEDURAL LANGUAGE '$langname' HANDLER $handler LANCOMPILER '$lancomp'\"\n> +if [ -n \"$listcmd\" ]\n> +then\n> +\techo $sqlcmd\n> +fi\n> +$PSQL \"$sqlcmd\"\n> if [ $? -ne 0 ]; then\n> \techo \"$CMDNAME: language installation failed\" 1>&2\n> \texit 1\n> \n> -- \n> Oliver Elphick Oliver.Elphick@lfix.co.uk\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"I sought the Lord, and he answered me; he delivered me\n> from all my fears.\" Psalm 34:4 \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 9 May 2001 18:55:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: createlang patch"
}
] |
[
{
"msg_contents": "Debian packages of 7.1RC3 have been uploaded to the Debian experimental\ndistribution and are also available at http://www.debian.org/~elphick/postgresq\nl\n\nThese packages are built for sid (Debian unstable); I am currently\ntrying to build a set for potato (stable).\n\n\n\nIncidentally, when the next release is done, could we have a more\nmachine-friendly system of version naming?\n\nWe started with 7.1beta, then went to 7.1RC and will finish with 7.1.\n\nThese sort in reverse order to what is wanted! so I have to play\ngames with the release names to get them to work with the Debian \ninstaller (7.1beta, 7.1cRC, 7.1final). Why not adopt the practice\nof making odd-numbered minor releases be development and even-numbered\nones be stable? Name the final release of 7.1 as 7.2, go on with\ndeveloping 7.3.x and finally release that as 7.4.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"I sought the Lord, and he answered me; he delivered me\n from all my fears.\" Psalm 34:4 \n\n\n",
"msg_date": "Sat, 07 Apr 2001 14:04:14 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Debian packages of 7.1RC3"
}
] |
[
{
"msg_contents": "Hello all,\n\nI would like to inform you all that I am currently working on the \nimplementation of PL/pgSQL packages on both server-side (PostgreSQL 7.1) \nand client-side (PgAdmin).\nThe idea is to add an PL/pgSQL Integrated Development Environment to \npgadmin. Help and suggestions needed. If someone is already working on a \nsimilar project, let me know how I can help. For discussion, please \nregister on mailto:pgadmin-hackers@greatbridge.org mailing list. Help and \nsuggestions needed !\n\nFirst of all, some useful resources:\nhttp://www.oreilly.com/catalog/advoracle/chapter/ch02.html\nhttp://postgresql.rmplc.co.uk/devel-corner/docs/programmer/plpgsql-porting.html\n\nThe basic idea behind the project is to store functions and packages in \nPgAdmin tables and use drop/create mechanisms to load them in the database.\nHere is a first analysis, do not blame in case it is imprecise:\n\n1) Dependencies\nThe main problem when compiling a set of functions is dependencies :\n- transitivity dependencies: if function B relies on function B, and \nfunction A relies on function C, the compilation should be in A, B and C order.\n- cross dependencies: if a function A relies on B, B relies on C and C \nrelies on A, compilation will not work. Warnings should be sent to the user.\nAccording to http://www.oreilly.com/catalog/advoracle/chapter/ch02.html, \nthis problem exists in Oracle databases (!!!).\n\nTo avoid simple dependency problems, we need to work on isolating compiling \nmechanisms.\n\nThis could be something like :\n- functions with no sub calls are compiled first,\n- functions with sub calls are compiled secondly, according to an automatic \ndependency analysis,\n- triggers are compiled at last,\n- ultimately, users should be able to define compilation order.\n\nThere are maybe more simple mechanisms (???).\nDoes pg_dump isolate functions in a precise order (???).\n\n2) Isolate Development / Production versions\nFor every single function, we should isolate the production version \n(stable) from the development version (unstable).\nThis will help debugging and solve dependencies until the project is \n'cleanly' compiled and debugged.\nThis can be done by renaming all functions with the 'unstable_' prefix \nduring compilation and the use of aliases.\n\nLet's see the example with functionX :\n-> functionX is an alias that calls :\nstable_functionX (arg1, ...): stable version (production)\nunstable_functionX (arg1, ...): unstable version (development)\nserial1_functionX (arg1, ...), serial2_functionX (arg1, ...): archived \nversions of functionX\n\nOf course, this would be transparent for the developer which will only see \nfunctionX in the IDE.\nSwitching from unstable_function to stable_function would only require to \nrecompile the aliases.\n\n3) Serialize package releases\nIt should be possible to serialize packages and store/reload different \nreleases.\nA logging table will provide a change log (with user names and description \nof changes).\nI do not intend to work on diffs and don't think it is possible.\n\n4) Server-side logic\nMost of the logic should be developed in PL/pgSQL.\nOn client-side, PgSchema (the new object structure of Pgadmin) will manage \nthe whole thing.\n\n5) Syntax checking / indenting.\nHas anyone heard of open-source objects handling code indenting and syntax \nchecking ?\nI am not going to work on this, help needed.\n\n6) Import / Export of packages\nWe need a simple mechanism to import/export packages.\n\n7) Master/Slave PL/pgSQL Server\nCode should be stored on a master server and distributed to slave servers \nthrough simple mechanisms.\nThis last logic will be stored in PgSchema as I don't know how to do it \nwith PostgreSQL itself.\nAny possibility to embed it in PostgreSQL (remote call ???).\n\nLooking forward to hearing from you,\nGreetings from Jean-Michel POURE, Paris\n\n\n",
"msg_date": "Sat, 07 Apr 2001 16:23:55 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL IDE project"
}
] |
[
{
"msg_contents": "I searched for the error and I have found :\n\nwhen I execute psql template0\nthe SIGSEGV is generated when postinit calls RelationPhaseInitializePhase2\nin heap_openr with RelationName = pg_am at the return r I have the error.\n\nwhen I execute psql template1\nthe SIGSEGV is generated when postinit calls RelationPhaseInitializePhase2\nin heap_openr with RelationName = pg_class at the return r I have the error.\n\nI can't understand whats happen.\n\nCould someone help me ?\n\nThanks\n\nMaurizio Cauci\n\n\n.\n----- Original Message -----\nFrom: \"Maurizio\" <maurizio.c@libero.it>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 03, 2001 6:31 PM\nSubject: Re: [HACKERS] QNX : POSSIBLE BUG IN CONFIGURE ?\n\n\n> OK,\n> I compiled postgresql RC2. I have not error nor warnings so I hope it's\nall\n> right.\n> I also changed something in os.h --> port/qnx4.h\n> and in s_lock.c\n>\n> I will post the changes until the end of the week.\n>\n> I executed initdb and all works fine.\n> I executed postmaster and the proces run OK.\n>\n> When I run psql template0 I have an error. I am not expert walking throu\n> postgresql sources.\n> could You tell me if there some change from beta 6 to RC1 or RC2 that can\n> give this problem in QNX so I can try to check all?\n>\n> Attached is the server.log file with the SIGSEGV.\n>\n> Thanks\n>\n> ----- Original Message -----\n> From: \"Peter Eisentraut\" <peter_e@gmx.net>\n> To: \"Maurizio\" <maurizio.c@libero.it>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Monday, April 02, 2001 6:38 PM\n> Subject: Re: [HACKERS] QNX : POSSIBLE BUG IN CONFIGURE ?\n>\n>\n> > Maurizio writes:\n> >\n> > > the problem is :\n> > >\n> > > when I execute configure it recognize the executable suffix as .map\n> > > and this is not right. And the test program fails.\n> >\n> > This is a known (to me) bug in Autoconf. Maybe there's a way to prevent\n> > the .map files to be generated? Fixing this isn't too hard, but I don't\n> > feel urgent about it when there are more problems with the QNX port\nstill\n> > down the line.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Sat, 7 Apr 2001 16:32:36 +0200",
"msg_from": "\"Maurizio\" <maurizio.c@libero.it>",
"msg_from_op": true,
"msg_subject": "Fw: QNX : POSSIBLE BUG IN CONFIGURE ?"
}
] |
[
{
"msg_contents": "Hi.\n\nA few weeks (months?) ago I made a patch to the postgres\nbackend to get back the number of realized moves after\na MOVE command. So if I issue a \"MOVE 100 IN cusrorname\",\nbut there was only 66 rows left, I get back not only \"MOVE\",\nbut \"MOVE 66\". If the 100 steps could be realized, then\n\"MOVE 100\" would come back. \n\nI send this info to you, because I would like to ask you if\nit could be OK to include in future versions. I think you\nare in a beta testing phase now, so it is trivially not the\ntime to include it now...\n\nThe solution is 2 code lines into command.c, and then the\nmessage of move cames with the number into for example psql.\n1 other word into the jdbc driver, and this number is\navailable at update_count.\n\nI made the patch to the latest (one day old) CVS version.\n\nHere are the patches. Please look at them, and if you think \nit's a good idea, then please let me know where and how should\nI post them, and approximately when will you finish with the\nbeta testing, so it can be really considered seriously.\n\nI included them also as an attachment, because my silly pine\ninsists to break the lines...\n\n--- ./src/backend/commands/command.c.orig\tFri Mar 23 05:49:52 2001\n+++ ./src/backend/commands/command.c\tSat Apr 7 10:24:27 2001\n@@ -174,6 +174,12 @@\n \t\tif (!portal->atEnd)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_FOR, (long) count);\n+\n+\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n+\t\t\t exists, and I would like to provide the same\n+\t\t\t kind of info as CMD_UPDATE */\n+\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, estate->es_processed);\n+\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atStart = false;\t\t/* OK to back up now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\n@@ -185,6 +191,12 @@\n \t\tif (!portal->atStart)\n \t\t{\n \t\t\tExecutorRun(queryDesc, estate, EXEC_BACK, (long) count);\n+\n+\t\t\t/* I use CMD_UPDATE, because no CMD_MOVE or the like\n+\t\t\t exists, and I would like to provide the same\n+\t\t\t kind of info as CMD_UPDATE */\n+\t\t\tUpdateCommandInfo(CMD_UPDATE, 0, -1*estate->es_processed);\n+\n \t\t\tif (estate->es_processed > 0)\n \t\t\t\tportal->atEnd = false;\t/* OK to go forward now */\n \t\t\tif (count <= 0 || (int) estate->es_processed < count)\n\n\n\nHere is the patch for the jdbc driver. >! I couldn't test it\nwith the current version, because it needs ant, and I didn't\nhave time and money today to download it... !< However, it\nis a trivial change, and if Peter T. Mount reads it, I ask\nhim to check if he likes it... Thanks for any kind of answer. \n\n--- ./src/interfaces/jdbc/org/postgresql/Connection.java.orig\tWed Jan 31 09:26:01 2001\n+++ ./src/interfaces/jdbc/org/postgresql/Connection.java\tSat Apr 7 16:42:04 2001\n@@ -490,7 +490,7 @@\n \t\t\t recv_status = pg_stream.ReceiveString(receive_sbuf,8192,getEncoding());\n \n \t\t\t\t// Now handle the update count correctly.\n-\t\t\t\tif(recv_status.startsWith(\"INSERT\") || recv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\")) {\n+\t\t\t\tif(recv_status.startsWith(\"INSERT\") || recv_status.startsWith(\"UPDATE\") || recv_status.startsWith(\"DELETE\") || recv_status.startsWith(\"MOVE\")) {\n \t\t\t\t\ttry {\n \t\t\t\t\t\tupdate_count = Integer.parseInt(recv_status.substring(1+recv_status.lastIndexOf(' ')));\n \t\t\t\t\t} catch(NumberFormatException nfe) {\n\n\n-------------------\n(This last looks a bit complex, but the change is really a new \n\"|| recv_status.startsWith(\"MOVE\")\" only...)\n\n\nThank you for having read this, \n\nBaldvin\n\np.s.: I read a page on your homepage, called \"unapplied patches\".\nI would like to know if it means \"still unapplied patches\", or\nit means \"bad, and not accepted ideas\".",
"msg_date": "Sat, 7 Apr 2001 22:08:45 +0200 (MET DST)",
"msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>",
"msg_from_op": true,
"msg_subject": "Message of move"
}
] |
[
{
"msg_contents": "Hi!\n\nI had very funny problems with \"make install\" of the\nCVS version. The clue was a bit strange behavior of bash\n(/bin/sh is only a link in my debian). \n\nThe whole thing is about wildcard expansion: there's an \noption called nocaseglob. I never heard of it before, \nbut this was the cause for the Error 1 while make install.\nI don't know, what is the default of this option in \nbash, but in my computer shows different behaviour with\ninteractive shells and non interactives.\n\nI had to make a patch, quite trivial, in src/bin/pgaccess/Makefile.\nI changed the language-installer line so it checks whether it\ndoesn't really try to copy the CVS subdirectory as a language...\n\nIf anyone is interested in it, here's the patch:\n\n--- ./src/bin/pgaccess/Makefile.orig\tSun Feb 18 19:34:01 2001\n+++ ./src/bin/pgaccess/Makefile\tSat Apr 7 16:05:01 2001\n@@ -30,7 +30,7 @@\n \t$(INSTALL_SCRIPT) $(srcdir)/main.tcl $(DESTDIR)$(pgaccessdir)\n \tfor i in $(srcdir)/lib/*.tcl; do $(INSTALL_DATA) $$i $(DESTDIR)$(pgaccessdir)/lib || exit 1; done\n \tfor i in $(srcdir)/lib/help/*.hlp; do $(INSTALL_DATA) $$i $(DESTDIR)$(pgaccessdir)/lib/help || exit 1; done\n-\tfor i in $(srcdir)/lib/languages/[a-z]*; do $(INSTALL_DATA) $$i $(DESTDIR)$(pgaccessdir)/lib/languages || exit 1; done\n+\tfor i in $(srcdir)/lib/languages/[a-z]*; do [ -f \"$$i\" ] && { $(INSTALL_DATA) $$i $(DESTDIR)$(pgaccessdir)/lib/languages || exit 1; }; done\n \tfor i in $(srcdir)/images/*.gif; do $(INSTALL_DATA) $$i $(DESTDIR)$(pgaccessdir)/images || exit 1; done\n \n installdirs:\n\n\nBye,\nBaldvin",
"msg_date": "Sat, 7 Apr 2001 22:10:45 +0200 (MET DST)",
"msg_from": "Kovacs Baldvin <kb136@hszk.bme.hu>",
"msg_from_op": true,
"msg_subject": "The makefile of pgaccess (CVS)"
}
] |
[
{
"msg_contents": "Since people suddenly seem to be suffering from bandwidth concerns I have\ndevised a new distribution split to address this issue. I propose the\nfollowing four sub-tarballs:\n\n* postgresql-XXX.base.tar.gz\t3.3 MB\n\nEverything not in one of the ones below.\n\n* postgresql-XXX.opt.tar.gz\t1.7 MB\n\nEverything not needed unless you use one of the following configure\noptions: --with-CXX --with-tcl --with-perl --with-python --with-java\n--enable-multibyte --enable-odbc, plus some other not-really-needed\nthings.\n\nThe exact directory list is\nsrc/bin/: pgaccess pgtclsh pg_encoding\nsrc/interfaces: odbc libpq++ libpgtcl perl5 python jdbc\nsrc/pl/: plperl tcl\nsrc/backend/utils/mb contrib/retep src/tools build.xml\n\n* postgresql-XXX.docs.tar.gz\t1.9 MB\n\ndoc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n\n(Note man pages are in .base.)\n\n* postgresql-XXX.test.tar.gz\t1.0 MB\n\nsrc/test\n\nAll this is proportionally about the same as right now, except that each\ntarball except base would now be truly optional. So someone that only\nwants to use, say, PHP and psql only needs to download the base package.\n\nPatch below. Yes/no/maybe?\n\n--- GNUmakefile.in Sun Apr 8 01:14:23 2001\n+++ GNUmakefile2 Sun Apr 8 01:19:55 2001\n@@ -60,7 +60,7 @@\n\n dist: $(distdir).tar.gz\n ifeq ($(split-dist), yes)\n-dist: $(distdir).base.tar.gz $(distdir).docs.tar.gz $(distdir).support.tar.gz $(distdir).test.tar.gz\n+dist: $(distdir).base.tar.gz $(distdir).docs.tar.gz $(distdir).opt.tar.gz $(distdir).test.tar.gz\n endif\n dist:\n -rm -rf $(distdir)\n@@ -68,15 +68,22 @@\n $(distdir).tar: distdir\n $(TAR) chf $@ $(distdir)\n\n+opt_files := $(addprefix src/bin/, pgaccess pgtclsh pg_encoding) \\\n+ $(addprefix src/interfaces/, odbc libpq++ libpgtcl perl5 python jdbc) \\\n+ $(addprefix src/pl/, plperl tcl) \\\n+ src/backend/utils/mb contrib/retep src/tools build.xml\n+\n+docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n+\n $(distdir).base.tar: distdir\n- $(TAR) -c $(addprefix --exclude $(distdir)/, doc src/test src/interfaces src/bin) \\\n+ $(TAR) -c $(addprefix --exclude $(distdir)/, $(docs_files) $(opt_files) src/test) \\\n -f $@ $(distdir)\n\n $(distdir).docs.tar: distdir\n- $(TAR) cf $@ $(distdir)/doc\n+ $(TAR) cf $@ $(addprefix $(distdir)/, $(docs_files))\n\n-$(distdir).support.tar: distdir\n- $(TAR) cf $@ $(distdir)/src/interfaces $(distdir)/src/bin\n+$(distdir).opt.tar: distdir\n+ $(TAR) cf $@ $(addprefix $(distdir)/, $(opt_files))\n\n $(distdir).test.tar: distdir\n $(TAR) cf $@ $(distdir)/src/test\n===snip\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 8 Apr 2001 01:24:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "A more useful way to split the distribution"
},
{
"msg_contents": "\nOh, I definitely like this ... and get rid of the *large* file, which will\nsave all the mirrors a good deal of space over time ...\n\nOn Sun, 8 Apr 2001, Peter Eisentraut wrote:\n\n> Since people suddenly seem to be suffering from bandwidth concerns I have\n> devised a new distribution split to address this issue. I propose the\n> following four sub-tarballs:\n>\n> * postgresql-XXX.base.tar.gz\t3.3 MB\n>\n> Everything not in one of the ones below.\n>\n> * postgresql-XXX.opt.tar.gz\t1.7 MB\n>\n> Everything not needed unless you use one of the following configure\n> options: --with-CXX --with-tcl --with-perl --with-python --with-java\n> --enable-multibyte --enable-odbc, plus some other not-really-needed\n> things.\n>\n> The exact directory list is\n> src/bin/: pgaccess pgtclsh pg_encoding\n> src/interfaces: odbc libpq++ libpgtcl perl5 python jdbc\n> src/pl/: plperl tcl\n> src/backend/utils/mb contrib/retep src/tools build.xml\n>\n> * postgresql-XXX.docs.tar.gz\t1.9 MB\n>\n> doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n>\n> (Note man pages are in .base.)\n>\n> * postgresql-XXX.test.tar.gz\t1.0 MB\n>\n> src/test\n>\n> All this is proportionally about the same as right now, except that each\n> tarball except base would now be truly optional. So someone that only\n> wants to use, say, PHP and psql only needs to download the base package.\n>\n> Patch below. Yes/no/maybe?\n>\n> --- GNUmakefile.in Sun Apr 8 01:14:23 2001\n> +++ GNUmakefile2 Sun Apr 8 01:19:55 2001\n> @@ -60,7 +60,7 @@\n>\n> dist: $(distdir).tar.gz\n> ifeq ($(split-dist), yes)\n> -dist: $(distdir).base.tar.gz $(distdir).docs.tar.gz $(distdir).support.tar.gz $(distdir).test.tar.gz\n> +dist: $(distdir).base.tar.gz $(distdir).docs.tar.gz $(distdir).opt.tar.gz $(distdir).test.tar.gz\n> endif\n> dist:\n> -rm -rf $(distdir)\n> @@ -68,15 +68,22 @@\n> $(distdir).tar: distdir\n> $(TAR) chf $@ $(distdir)\n>\n> +opt_files := $(addprefix src/bin/, pgaccess pgtclsh pg_encoding) \\\n> + $(addprefix src/interfaces/, odbc libpq++ libpgtcl perl5 python jdbc) \\\n> + $(addprefix src/pl/, plperl tcl) \\\n> + src/backend/utils/mb contrib/retep src/tools build.xml\n> +\n> +docs_files := doc/postgres.tar.gz doc/src doc/TODO.detail doc/internals.ps\n> +\n> $(distdir).base.tar: distdir\n> - $(TAR) -c $(addprefix --exclude $(distdir)/, doc src/test src/interfaces src/bin) \\\n> + $(TAR) -c $(addprefix --exclude $(distdir)/, $(docs_files) $(opt_files) src/test) \\\n> -f $@ $(distdir)\n>\n> $(distdir).docs.tar: distdir\n> - $(TAR) cf $@ $(distdir)/doc\n> + $(TAR) cf $@ $(addprefix $(distdir)/, $(docs_files))\n>\n> -$(distdir).support.tar: distdir\n> - $(TAR) cf $@ $(distdir)/src/interfaces $(distdir)/src/bin\n> +$(distdir).opt.tar: distdir\n> + $(TAR) cf $@ $(addprefix $(distdir)/, $(opt_files))\n>\n> $(distdir).test.tar: distdir\n> $(TAR) cf $@ $(distdir)/src/test\n> ===snip\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 7 Apr 2001 20:40:34 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> Oh, I definitely like this ... and get rid of the *large* file, which will\n> save all the mirrors a good deal of space over time ...\n\nYou gonna make a set of RC3 or 4 tarballs along these lines to test? I\nwant to try a build with this split before doing too much else -- well,\nactually, I just want to make sure I get it right before release, as I'd\nlike to not have but a couple of days before an RPM release after the\nannouncement.\n\nSounds like a plan.\n\nI'm going to upload a set of RC3 RPM's tonight -- there are changes that\nI need people to comment upon.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 07 Apr 2001 19:57:48 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "\nas soon as Peter commits the changes, I'll do up an RC4 with the new\nformat so that everyone can test it ...\n\nOn Sat, 7 Apr 2001, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > Oh, I definitely like this ... and get rid of the *large* file, which will\n> > save all the mirrors a good deal of space over time ...\n>\n> You gonna make a set of RC3 or 4 tarballs along these lines to test? I\n> want to try a build with this split before doing too much else -- well,\n> actually, I just want to make sure I get it right before release, as I'd\n> like to not have but a couple of days before an RPM release after the\n> announcement.\n>\n> Sounds like a plan.\n>\n> I'm going to upload a set of RC3 RPM's tonight -- there are changes that\n> I need people to comment upon.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 7 Apr 2001 21:20:21 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "On Sat, 7 Apr 2001, The Hermit Hacker wrote:\n\n>\n> Oh, I definitely like this ... and get rid of the *large* file, which will\n> save all the mirrors a good deal of space over time ...\n>\n> On Sun, 8 Apr 2001, Peter Eisentraut wrote:\n>\n> > Since people suddenly seem to be suffering from bandwidth concerns I have\n> > devised a new distribution split to address this issue. I propose the\n> > following four sub-tarballs:\n> >\n> > * postgresql-XXX.base.tar.gz\t3.3 MB\n> >\n> > Everything not in one of the ones below.\n> >\n> > * postgresql-XXX.opt.tar.gz\t1.7 MB\n> >\n> > Everything not needed unless you use one of the following configure\n> > options: --with-CXX --with-tcl --with-perl --with-python --with-java\n> > --enable-multibyte --enable-odbc, plus some other not-really-needed\n> > things.\n\nAs long as there's still the FULL tarball with everything in it available.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sat, 7 Apr 2001 20:53:58 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "On Sun, 08 Apr 2001 11:24, Peter Eisentraut wrote:\n> Since people suddenly seem to be suffering from bandwidth concerns I\n> have devised a new distribution split to address this issue. \n\n[ ... snipping the many tarballs argument ... ]\n\nFor me and I expect many other folk on the edge of civilization it is a \ntotal PITA to have to fiddle around and download many separate tarball \nfiles. What I want is to be able to start a d/l going and then come back \nwhen it's finished and know that I have _everything_ I actually need to \nhave a working and documented product in one shot. \n\nFor developers, contributors and testers and I would like to suggest that \nan exact snapshot of the complete CVS source archive is appropriate. We \ncan then track the changes every day using cvs or cvsup - wonderful tool \nbtw - \n\nWhat is really _really_ needed is a text README which explains exactly \nwhat file contains.\n\nPersonally I have found that the limitations of the packaging systems to \nbe such a nuisence that I always compile everything from source.\n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--\n\n",
"msg_date": "Sun, 8 Apr 2001 16:37:13 +1200",
"msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> ... and get rid of the *large* file, which will\n> save all the mirrors a good deal of space over time ...\n\nYou will certainly get a furious crowd at your door within hours if you do\nthat, as the follow-ups show. Saving download bandwidth is a valid issue,\nbut saving disk space on the order of perhaps 50 MB for sites that act as\ndownload archives is not worth the drawbacks.\n\nBtw., do we have any download statistics, especially as to how many people\nelected to download the \"chunks\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 8 Apr 2001 12:30:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "Christopher Sawtell writes:\n\n> For me and I expect many other folk on the edge of civilization it is a\n> total PITA to have to fiddle around and download many separate tarball\n> files. What I want is to be able to start a d/l going and then come back\n> when it's finished and know that I have _everything_ I actually need to\n> have a working and documented product in one shot.\n\nRight. The only reason for splitting the distribution is to cater to the\nfictitious crowd with \"bandwidth problems\" or those that explicitly know\nthat they don't need the rest. There will still be a canonical full\ntarball with everything, or at least I will not put my name to something\nthat abolishes it. In fact, I didn't like the idea of the split tarballs\nin the first place, I'm merely changing the split to something more\nuseful.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 8 Apr 2001 12:40:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "I wrote:\n\n> Since people suddenly seem to be suffering from bandwidth concerns I have\n> devised a new distribution split to address this issue. I propose the\n> following four sub-tarballs:\n\n> * postgresql-XXX.base.tar.gz\t3.3 MB\n> * postgresql-XXX.opt.tar.gz\t1.7 MB\n> * postgresql-XXX.docs.tar.gz\t1.9 MB\n> * postgresql-XXX.test.tar.gz\t1.0 MB\n\nSince we're going to make a change, I'd like to change the names to\n\npostgresql-base-XXX.tar.gz\n\netc. to align them with existing practice (cf. RPMs, GCC download). Dots\nshould be used for format-identifying extensions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 8 Apr 2001 14:51:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "On Sun, 8 Apr 2001, Peter Eisentraut wrote:\n\n> I wrote:\n>\n> > Since people suddenly seem to be suffering from bandwidth concerns I have\n> > devised a new distribution split to address this issue. I propose the\n> > following four sub-tarballs:\n>\n> > * postgresql-XXX.base.tar.gz\t3.3 MB\n> > * postgresql-XXX.opt.tar.gz\t1.7 MB\n> > * postgresql-XXX.docs.tar.gz\t1.9 MB\n> > * postgresql-XXX.test.tar.gz\t1.0 MB\n>\n> Since we're going to make a change, I'd like to change the names to\n>\n> postgresql-base-XXX.tar.gz\n>\n> etc. to align them with existing practice (cf. RPMs, GCC download). Dots\n> should be used for format-identifying extensions.\n\nGo for it ... more a visual change then anything ...\n\n\n",
"msg_date": "Sun, 8 Apr 2001 14:39:19 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: A more useful way to split the distribution"
},
{
"msg_contents": "\nthis only represents since 8:30am this morning ...\n\n/source/v7.0.3/postgresql-7.0.3.support.tar.gz => 9\n/source/v7.0.3/postgresql-7.0.3.test.tar.gz => 3\n/source/v7.0.3/postgresql-7.0.3.docs.tar.gz => 10\n/source/v7.0.3/postgresql-7.0.3.tar.gz => 22\n/source/v7.0.3/postgresql-7.0.3.base.tar.gz => 9\n\non a side note, we almost have as many downloads of psqlodbc in that time\nperiod:\n\n/odbc/psqlodbc_home.html => 15\n/odbc/versions/psqlodbc-07_01_0002.zip => 4\n/odbc/versions/psqlodbc-07_01_0003.zip => 4\n/odbc/versions/psqlodbc-07_01_0004.zip => 18\n\nso it isn't a \"fictitous crowd\" that is going with the smaller chunks ...\nits about 30% on a very small sample ...\n\nOn Sun, 8 Apr 2001, Peter Eisentraut wrote:\n\n> Christopher Sawtell writes:\n>\n> > For me and I expect many other folk on the edge of civilization it is a\n> > total PITA to have to fiddle around and download many separate tarball\n> > files. What I want is to be able to start a d/l going and then come back\n> > when it's finished and know that I have _everything_ I actually need to\n> > have a working and documented product in one shot.\n>\n> Right. The only reason for splitting the distribution is to cater to the\n> fictitious crowd with \"bandwidth problems\" or those that explicitly know\n> that they don't need the rest. There will still be a canonical full\n> tarball with everything, or at least I will not put my name to something\n> that abolishes it. In fact, I didn't like the idea of the split tarballs\n> in the first place, I'm merely changing the split to something more\n> useful.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 8 Apr 2001 15:06:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
},
{
"msg_contents": "> so it isn't a \"fictitous crowd\" that is going with the smaller chunks ...\n> its about 30% on a very small sample ...\n\n(back in town from the weekend, to see the PostgreSQL tarball ripped to\nshreds ;)\n\nPeter, I'm with you on this. If folks want to help support PostgreSQL by\nproviding subset-tarballs, then great. But many of us have contributed\nto the monolithic tarball, and will continue to do so. So lets make sure\nthat we have *at least* the big tarball available, and if someone wants\nto subset it then I'm sure that would be very useful for a large number\nof users, even if percentage-wise they are not the majority.\n\nNo point in polarizing it or forcing a choice: certainly the form we\nhave used for the last 6 years (and for the 6 years before that too,\nprobably) is a legitimate and useful form, and we can experiment with\nsubsets as much as anyone cares to.\n\nWith the big tarball, Lamar and others (such as Oliver and myself) can\ncontinue their packaging work for 7.1 without having to cope with last\nminute subset issues.\n\n - Thomas\n",
"msg_date": "Mon, 09 Apr 2001 06:27:01 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: A more useful way to split the distribution"
}
] |
[
{
"msg_contents": "\nAs the subject says, PostgreSQL 7.1RC3 passes 'make check' when built\nas a 64 bit application on HP-UX 11.00.\n\nYes Vince, I've added it to your results page too:\n\n http://www.postgresql.org/~vev/regress/report.php?50\n\nRegards,\n\nGiles\n",
"msg_date": "Sun, 08 Apr 2001 10:16:43 +1000",
"msg_from": "Giles Lean <giles@nemeton.com.au>",
"msg_from_op": true,
"msg_subject": "7.1RC3 passes as 64 bit application on HP-UX 11.00"
}
] |
[
{
"msg_contents": "Uploaded. Please take a look.\n\nftp://ftp.postgresql.org/pub/dev/test-rpms\n\nThere _are_ changes. I will detail the changes for the RC4 RPMset.\n\nKarl's pl/perl changes will go into the next set. pg_dumplo will have a\nbuilt binary, to be located in /usr/lib/pgsql/contrib.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 07 Apr 2001 22:41:50 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "RPMS for RC3"
},
{
"msg_contents": "Lamar Owen writes:\n\n> Uploaded. Please take a look.\n>\n> ftp://ftp.postgresql.org/pub/dev/test-rpms\n>\n> There _are_ changes. I will detail the changes for the RC4 RPMset.\n\nCoupla issues:\n\nI'm confused about the logging. You install a logrotate configuration\nwhich talks about a file /var/log/postgresql, but the spec file creates a\ndirectory /var/log/pgsql/. Meanwhile, the start script you provide sends\nthe log output to /dev/null.\n\nstop() in the init script should use pg_ctl -m fast.\n\nPGVERSION in the init script is still at beta6. Maybe you should track\nthis from the source or the spec file.\n\nYou're still shipping old jar files. You could build them from the source\npackage.\n\nIn 'make COPT=\"$CFLAGS\" all', the COPT shouldn't be used. You should\nexport CFLAGS before running configure. (What about CXXFLAGS?)\n\nBefore long, 'cp /usr/lib/python1.5/config/Makefile.pre.in .' is going to\nget out of date. There's already Python 1.6 and 2.0. Since you're\nconfiguring --with-python, the work you do there isn't necessary anyway,\nsince 'make all' takes care of it.\n\n'make -C doc' is a no-op. Also, the docs are installed automatically, the\nstuff you do under '# man pages....' needs some work. You probably want\nto run gzip on the files after installation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 00:30:03 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for RC3"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Coupla issues:\n \n> I'm confused about the logging. You install a logrotate configuration\n> which talks about a file /var/log/postgresql, but the spec file creates a\n> directory /var/log/pgsql/\n\nI'll correct that. The logrotate script (which I guess to prevent\nconfusion should just simply Go Away) was intended to roll a syslog()\nlog file, after the postinstall scriptlet added a line to\n/etc/syslog.conf to place the logs in a good location -- however, a\ngood solid way of doing that has eluded me at this time -- mostly thanks\nto the fact that these RPMs get installed as part of the OS install of\nmany distributions, meaning I have to carefully choose what utilities I\nuse during a %pre or %post scriptlet. \n\nHowever, I may back down from that position and just document HOW to\nsetup logging in the README -- these RPMs shouldn't really try to be the\nfinal shipping version for every distribution out there -- they should\njust be a good foundation for any distribution that wants to ship\nPostgreSQL. The least I can get away with doing, the better. And the\nleast I assume, the better. The current RPMset is far from this ideal,\nas there are many RedHatisms that translate poorly to some\ndistributions.\n\n>. Meanwhile, the start script you provide sends\n> the log output to /dev/null.\n\nYes. As stated above, logging was intended to be completely done via\nsyslog for ease of rotation as well as consistency with the target\ndistribution.\n \n> stop() in the init script should use pg_ctl -m fast.\n\nWhy? The stop() may or may not be called as part of system shutdown --\nand I have no real way of knowing which. The system shutdown variety\ncertainly could use fast -- but the manual variety? Minor issue,\nthough. Might just put the fast in there if there's no good reason not\nto.\n \n> PGVERSION in the init script is still at beta6. Maybe you should track\n> this from the source or the spec file.\n\nArgh. Typo. Yes, a better versioning system is needed. I'll certainly\ncorrect the version before final. But since I'm getting ready to put\nall the RPM's ancillary files and some build-time scripts into CVS on\ngreatbridge.org, I'll do a version tracking from the spec file macros\nacross all scripts as part of that effort.\n \n> You're still shipping old jar files. You could build them from the source\n> package.\n\nWith which JDK? As Red Hat doesn't ship a _standard_ JDK, which one is\nappropriate? Or, what is the _standard_ JDK?\n \n> In 'make COPT=\"$CFLAGS\" all', the COPT shouldn't be used. You should\n> export CFLAGS before running configure. (What about CXXFLAGS?)\n\nOk.\n \n> Before long, 'cp /usr/lib/python1.5/config/Makefile.pre.in .' is going to\n> get out of date. There's already Python 1.6 and 2.0. Since you're\n> configuring --with-python, the work you do there isn't necessary anyway,\n> since 'make all' takes care of it.\n\nDoes make all in the python interface properly handle DESTDIR for\ncorrect RPM_BUILD_ROOT handling? At that point, I'll certainly change\nthat -- but I had enough trouble with the Perl interface's handling or\nmishandling of that -- IOW DESTDIR didn't in the Perl interface\n--necessitating three builds of the interface to get it properly linked.\n \n> 'make -C doc' is a no-op.\n\nOk -- now it is, 7.0 it wasn't. I'll pull it out and see what happens\nor doesn't happen.\n\n> Also, the docs are installed automatically, the\n> stuff you do under '# man pages....' needs some work.\n\nThe man pages are still in a separate tarball, or not?\n\n> You probably want\n> to run gzip on the files after installation.\n\nDone automagically by the buildrootpolicy of the rpm build system, on\ndistributions that do buildrootpolicies, which are standard in late\n3.0.x RPM as well as 4.x RPM. This is one of the reasons the %{_mandir}\nmacro is used for all man pages.\n\nThanks for taking the time to look (again).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 08 Apr 2001 20:41:24 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: RPMS for RC3"
},
{
"msg_contents": "Lamar Owen writes:\n\n> > You're still shipping old jar files. You could build them from the source\n> > package.\n>\n> With which JDK? As Red Hat doesn't ship a _standard_ JDK, which one is\n> appropriate? Or, what is the _standard_ JDK?\n\nThere is no standard JDK, in the same sense as there is no standard C\ncompiler. You run configure --with-java; make; make install and voilà\nthere's the JDBC driver, made by whatever JDK was around at the moment.\n(For appropriate values of voilà, of course.) Most distributions include\nKaffe I suppose, which should serve fine (once you set up the CLASSPATH\ncorrectly; YMMV).\n\n> The man pages are still in a separate tarball, or not?\n\nThey're in a tarball, but they're not separate.\n\n> > You probably want\n> > to run gzip on the files after installation.\n>\n> Done automagically by the buildrootpolicy of the rpm build system,\n\nAmazing... ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 18:44:47 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPMS for RC3"
}
] |
[
{
"msg_contents": "Are up. Contrib subpackage includes built binaries.\n\nBinary RPMset built for _Red_Hat_6.2_. I will be building Red Hat 7.0\nRPMS after I get back from a two day vacation -- which will put me back\nonline Wednesday. I might get a set out for RH7 tomorrow, though.\n\nThe split to contrib and docs subpackages has greatly reduced the size\nof the main client RPM down to a little over 1MB. The contrib\nsubpackage is nearly a meg, as is the docs subpackage (which as yet\ndoesn't have the hardcopy docs).\n\nAlso, I would love it to see jars of the 7.1 JDBC built in the near\nfuture, as I _still_ don't have a good JDK on any of my devel boxen --\nmeaning I'm still shipping the 7.0 JDBC in the jdbc subpackage.\n\nftp://ftp.postgresql.org/pub/dev/test-rpms, as usual. See\nREADME.rpm-dist in the main package for more details, as it is actually\nup to date at this time.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 08 Apr 2001 01:13:03 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "RC3-3 RPMS."
}
] |
[
{
"msg_contents": " Hi--\n This is my first time at using Postgress , and I would like to install it\non a Win 2000 machine .HELP !!!\n Thanks\n Nadim\n\n",
"msg_date": "Sun, 8 Apr 2001 15:34:47 -0400",
"msg_from": "\"Nadim H Rabbani\" <nrabbani@se.fit.edu>",
"msg_from_op": true,
"msg_subject": "Postgress On Windows"
}
] |
[
{
"msg_contents": "\nLadies and Gentlemen ...\n\nIts been a long, arduous, up hill battle to get to this point, with all of\nthe changes since v7.0 was released, but we're finally there ...\n\n\nThe PostgreSQL Global Development Group is *pleased* to announce the\navailability of PostgreSQL v7.1 Release Candidate 4, the long awaited\nsuccessor to v7.0.\n\n\nBefore anyone asks what a 'Release Candidate' is, and what happened to\n1-3 ... a Release Candidate is what the developers have decided is going\nto be the Release, based on no known bugs remaining, but want to get more\ngeneral testing.\n\nIf, by Friday, April 13th, there have been no bugs reported, all that will\nhappen is that rc4 will get renamed as the official release, no\nrepackaging or anything ...\n\nWhat happened to 1-3? We packaged 1, one of the developers came across a\nbug before an announcement went out, so we didn't announce ... similar to\nthe other 2.\n\nPlease NOTE that this is *not* the official release ... this is what we\nbelieve, at this time, is going to be the official release, based on\nextensive testing over the past several months, but if someone reports a\nbug based on this, it will be addressed and a new package built ...\n\nWhat does v7.1 provide that v7.0 didn't? From our HISTORY file:\n\n================\nMajor changes in this release:\n\n Write-ahead Log (WAL) - To maintain database consistency in case\nof an operating system crash, previous releases of PostgreSQL have forced\nall data modifications to disk before each transaction commit. With WAL,\nonly one log file must be flushed to disk, greatly improving performance.\nIf you have been using -F in previous releases to disable disk flushes,\nyou may want to consider discontinuing its use.\n\n TOAST - Previous releases had a compiled-in row length limit,\ntypically 8 - 32 kB. This limit made storage of long text fields\ndifficult. 
With TOAST, long rows of any length can be stored with good\nperformance.\n\n Outer Joins - We now support outer joins. The UNION/NOT IN\nworkaround for outer joins is no longer required. We use the SQL92 outer\njoin syntax.\n\n Function Manager - The previous C function manager did not handle\nNULLs properly, nor did it support 64-bit CPU's (Alpha). The new function\nmanager does. You can continue using your old custom functions, but you\nmay want to rewrite them in the future to use the new function manager\ncall interface.\n\n Complex Queries - A large number of complex queries that were\nunsupported in previous releases now work. Many combinations of views,\naggregates, UNION, LIMIT, cursors, subqueries, and inherited tables now\nwork properly. Inherited tables are now accessed by default. Subqueries\nin FROM are now supported.\n=================\n\nFor a more complete list of New Features and Bugs Fixed, please read the\nHISTORY segment available at:\n\n\tftp://ftp.postgresql.org/pub/README.v7_1\n\nSource code is available at ftp://ftp.postgresql.org/pub/v7.1 ...\n\nPlease report any bugs that you encounter to pgsql-bugs@postgresql.org\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 8 Apr 2001 17:08:25 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v7.1 Release Candidate 4 "
},
{
"msg_contents": "Where can I get a Postscript version docs for 7.1?\n--\nTatsuo Ishii\n\n> Ladies and Gentlemen ...\n> \n> Its been a long, arduous, up hill battle to get to this point, with all of\n> the changes since v7.0 was released, but we're finally there ...\n> \n> \n> The PostgreSQL Global Development Group is *pleased* to announce the\n> availability of PostgreSQL v7.1 Release Candidate 4, the long awaited\n> successor to v7.0.\n> \n> \n> Before anyone asks what a 'Release Candidate' is, and what happened to\n> 1-3 ... a Release Candidate is what the developers have decided is going\n> to be the Release, based on no known bugs remaining, but want to get more\n> general testing.\n> \n> If, by Friday, April 13th, there have been no bugs reported, all that will\n> happen is that rc4 will get renamed as the official release, no\n> repackaging or anything ...\n> \n> What happened to 1-3? We packaged 1, one of the developers came across a\n> bug before an announcement went out, so we didn't announce ... similar to\n> the other 2.\n> \n> Please NOTE that this is *not* the official release ... this is what we\n> believe, at this time, is going to be the official release, based on\n> extensive testing over the past several months, but if someone reports a\n> bug based on this, it will be addressed and a new package built ...\n> \n> What does v7.1 provide that v7.0 didn't? From our HISTORY file:\n> \n> ================\n> Major changes in this release:\n> \n> Write-ahead Log (WAL) - To maintain database consistency in case\n> of an operating system crash, previous releases of PostgreSQL have forced\n> all data modifications to disk before each transaction commit. With WAL,\n> only one log file must be flushed to disk, greatly improving performance.\n> If you have been using -F in previous releases to disable disk flushes,\n> you may want to consider discontinuing its use.\n> \n> TOAST - Previous releases had a compiled-in row length limit,\n> typically 8 - 32 kB. 
This limit made storage of long text fields\n> difficult. With TOAST, long rows of any length can be stored with good\n> performance.\n> \n> Outer Joins - We now support outer joins. The UNION/NOT IN\n> workaround for outer joins is no longer required. We use the SQL92 outer\n> join syntax.\n> \n> Function Manager - The previous C function manager did not handle\n> NULLs properly, nor did it support 64-bit CPU's (Alpha). The new function\n> manager does. You can continue using your old custom functions, but you\n> may want to rewrite them in the future to use the new function manager\n> call interface.\n> \n> Complex Queries - A large number of complex queries that were\n> unsupported in previous releases now work. Many combinations of views,\n> aggregates, UNION, LIMIT, cursors, subqueries, and inherited tables now\n> work properly. Inherited tables are now accessed by default. Subqueries\n> in FROM are now supported.\n> =================\n> \n> For a more complete list of New Features and Bugs Fixed, please read the\n> HISTORY segment available at:\n> \n> \tftp://ftp.postgresql.org/pub/README.v7_1\n> \n> Source code is available at ftp://ftp.postgresql.org/pub/v7.1 ...\n> \n> Please report any bugs that you encounter to pgsql-bugs@postgresql.org\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n",
"msg_date": "Mon, 09 Apr 2001 10:21:10 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL v7.1 Release Candidate 4 "
},
{
"msg_contents": "> Where can I get a Postscript version docs for 7.1?\n\nI'll start building hardcopy in the next day or two, and hope that it\nwill be done quickly (more quickly that in previous releases). Will keep\ny'all informed on the progress...\n\n - Thomas\n",
"msg_date": "Mon, 09 Apr 2001 06:44:25 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: [ANNOUNCE] PostgreSQL v7.1 Release Candidate 4"
},
{
"msg_contents": "Hi all,\n\nOn Sun, 8 Apr 2001, The Hermit Hacker wrote:\n\n> If, by Friday, April 13th, there have been no bugs reported, all that will\n> happen is that rc4 will get renamed as the official release, no\n> repackaging or anything ...\n\nWas hoping that I'd have some time to get around to it before now, but I\nhaven't so am posting to the list. For quite some time I have found the\nbehaviour of CLUSTER to be deceiving. The documentation has some to say\nabout its short comings.\n\n The table is actually copied to a temporary table in index order, then\n renamed back to the original name. For this reason, all grant\n permissions and other indexes are lost when clustering is performed.\n\nIt also drops the other relation meta data. I think this should at least\nbe noted in the documentation for 7.1 full release or the heap copy should\nlook at copying over triggers, checks etc.\n\nSorry to chime in so close to release, I only just looked to see if this\nhad been addressed.\n\nThanks\n\nGavin\n\n\n",
"msg_date": "Mon, 9 Apr 2001 23:14:50 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1 Release Candidate 4 "
},
{
"msg_contents": "> Hi all,\n> \n> On Sun, 8 Apr 2001, The Hermit Hacker wrote:\n> \n> > If, by Friday, April 13th, there have been no bugs reported, all that will\n> > happen is that rc4 will get renamed as the official release, no\n> > repackaging or anything ...\n> \n> Was hoping that I'd have some time to get around to it before now, but I\n> haven't so am posting to the list. For quite some time I have found the\n> behaviour of CLUSTER to be deceiving. The documentation has some to say\n> about its short comings.\n> \n> The table is actually copied to a temporary table in index order, then\n> renamed back to the original name. For this reason, all grant\n> permissions and other indexes are lost when clustering is performed.\n> \n> It also drops the other relation meta data. I think this should at least\n> be noted in the documentation for 7.1 full release or the heap copy should\n> look at copying over triggers, checks etc.\n\nCan you give me specific text? I though 7.1 was a little better about\npreserving the metadata.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 09:27:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1 Release Candidate 4"
},
{
"msg_contents": "Bruce,\n\nProblem is the use of heap_drop_with_catalog().\n\n * heap_drop_with_catalog - removes all record of named\nrelation from catalogs\n *\n * 1) open relation, check for existence, etc.\n * 2) remove inheritance information\n * 3) remove indexes\n * 4) remove pg_class tuple\n * 5) remove pg_attribute tuples and related descriptions\n * 6) remove pg_description tuples\n * 7) remove pg_type tuples\n * 8) RemoveConstraints ()\n * 9) unlink relation\n\nOnly these things are destroyed. relchecks, for example, stays consistent\nand works correctly.\n\nGavin\n\n",
"msg_date": "Mon, 9 Apr 2001 23:59:35 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1 Release Candidate 4"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can you give me specific text? I though 7.1 was a little better about\n> preserving the metadata.\n\nNot in the least --- 7.1's CLUSTER is just as bad as ever.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 12:36:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL v7.1 Release Candidate 4 "
}
] |
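The RC4 announcement in this thread notes that the SQL92 outer-join syntax replaces the old UNION/NOT IN workaround. A minimal sketch of that equivalence follows; it uses Python's built-in sqlite3 module purely as a stand-in engine so the example is self-contained (an assumption for illustration only), and the table and column names are invented. The `LEFT OUTER JOIN` shown is the same SQL92 syntax PostgreSQL 7.1 now accepts.

```python
# Sketch: SQL92 LEFT OUTER JOIN vs. the pre-7.1 UNION/NOT IN workaround.
# sqlite3 stands in for PostgreSQL here only so the demo is runnable;
# the join syntax itself is what 7.1 accepts directly.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE authors (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE books (author_id INTEGER, title TEXT)")
cur.executemany("INSERT INTO authors VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob")])
cur.execute("INSERT INTO books VALUES (1, 'WAL Internals')")

# Old workaround: an inner join UNIONed with the unmatched rows.
workaround = cur.execute("""
    SELECT a.name, b.title FROM authors a, books b
     WHERE a.id = b.author_id
    UNION
    SELECT a.name, NULL FROM authors a
     WHERE a.id NOT IN (SELECT author_id FROM books)
""").fetchall()

# SQL92 syntax: one declarative outer join does the same job.
outer = cur.execute("""
    SELECT a.name, b.title
      FROM authors a LEFT OUTER JOIN books b ON a.id = b.author_id
""").fetchall()

assert sorted(workaround) == sorted(outer)  # identical result sets
```

Against a 7.1 server the second query would simply be issued through psql; the point is that one declarative join replaces the two-part UNION formulation.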
[
{
"msg_contents": "Regression tests for Yellow Dog Linux (PPC RedHat derivative) failed all \nover the place with 7.0.3. Passed smoothly with 7.1RC3, though. I've \ngot details if anybody's curious.\n\n -nat\n",
"msg_date": "Sun, 08 Apr 2001 18:14:09 -0700",
"msg_from": "Nat Irons <poppler@bumppo.net>",
"msg_from_op": true,
"msg_subject": "Yellow Dog Linux/PPC regression"
},
{
"msg_contents": "Nat Irons writes:\n\n> Regression tests for Yellow Dog Linux (PPC RedHat derivative) failed all\n> over the place with 7.0.3. Passed smoothly with 7.1RC3, though. I've\n> got details if anybody's curious.\n\nProbably no one curious since PPC support is one of the things that were\nfixed in 7.1. :-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 18:13:19 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Yellow Dog Linux/PPC regression"
},
{
"msg_contents": "Nat Irons <poppler@bumppo.net> writes:\n> Regression tests for Yellow Dog Linux (PPC RedHat derivative) failed all \n> over the place with 7.0.3. Passed smoothly with 7.1RC3, though.\n\nUnsurprising if you compiled with any optimization level higher than -O0.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 12:32:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yellow Dog Linux/PPC regression "
}
] |
[
{
"msg_contents": "Hi guys,\n\nJust thinking about the future directions PostgreSQL is taking, and it\nseems (just a feeling) like most people prefer it to be as self tuning\nas possible.\n\nIn trying to think about how it will/would do that I think PostgreSQL\nwill need to know \"how much\" of the resources of the server its on, it's\nallowed to take.\n\nCan think of three scenario's, 1) Single-purpose PostgreSQL server 2)\nshared function server (i.e. Apache, Postgres, etc on the same box) 3)\nEmbedded or otherwise resource limited server (Palmtop, etc).\n\nWhen we get around to PostgreSQL's self-tuning ability being actively\ndeveloped (and I think Bruce has done some of the very start with his\nmonitor program), perhaps having a compile time option to set the\ndefault for the server, and a runtime option in case it changes?\n\ni.e.\n\n--tuning=superserver\n--tuning=shared\n--tuning=embedded\n\npostmaster -t superserver\npostmaster -t shared\npostmaster -t embedded\n\nWhat do people think?\n\nRegards and best wishes,\n\nJustin Clift\n\nP.S. - I'm not on the Hackers mailing list from this account. Can\nanyone responding please include me directly in their replies?\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Mon, 09 Apr 2001 11:33:50 +1000",
"msg_from": "Justin Clift <jclift@iprimus.com.au>",
"msg_from_op": true,
"msg_subject": "\"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "My idea was to have PostgreSQL output tips to help performance. The\nTODO item is:\n\t\n\t* Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n\t ANALYZE, and CLUSTER\n\nI also will be writing an article on performance tuning this month. \nWhat parameters would these options you suggest control? I usually\nprefer options that have more concrete effect.\n\n\n> Just thinking about the future directions PostgreSQL is taking, and it\n> seems (just a feeling) like most people prefer it to be as self tuning\n> as possible.\n> \n> In trying to think about how it will/would do that I think PostgreSQL\n> will need to know \"how much\" of the resources of the server its on, it's\n> allowed to take.\n> \n> Can think of three scenario's, 1) Single-purpose PostgreSQL server 2)\n> shared function server (i.e. Apache, Postgres, etc on the same box) 3)\n> Embedded or otherwise resource limited server (Palmtop, etc).\n> \n> When we get around to PostgreSQL's self-tuning ability being actively\n> developed (and I think Bruce has done some of the very start with his\n> monitor program), perhaps having a compile time option to set the\n> default for the server, and a runtime option in case it changes?\n> \n> i.e.\n> \n> --tuning=superserver\n> --tuning=shared\n> --tuning=embedded\n> \n> postmaster -t superserver\n> postmaster -t shared\n> postmaster -t embedded\n> \n> What do people think?\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> P.S. - I'm not on the Hackers mailing list from this account. Can\n> anyone responding please include me directly in their replies?\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 00:18:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "I like this. Ensure that tips can be dumped into a log file --\npreferably separate from the main one -- so it can be run on a live\nsystem for a short period of time, recorded then analyzed later.\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Justin Clift\" <jclift@iprimus.com.au>\nCc: <pgsql-hackers@postgresql.org>\nSent: Monday, April 09, 2001 12:18 AM\nSubject: Re: [HACKERS] \"--tuning\" compile and runtime option (?)\n\n\n> My idea was to have PostgreSQL output tips to help performance. The\n> TODO item is:\n>\n> * Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n> ANALYZE, and CLUSTER\n>\n> I also will be writing an article on performance tuning this month.\n> What parameters would these options you suggest control? I usually\n> prefer options that have more concrete effect.\n>\n>\n> > Just thinking about the future directions PostgreSQL is taking,\nand it\n> > seems (just a feeling) like most people prefer it to be as self\ntuning\n> > as possible.\n> >\n> > In trying to think about how it will/would do that I think\nPostgreSQL\n> > will need to know \"how much\" of the resources of the server its\non, it's\n> > allowed to take.\n> >\n> > Can think of three scenario's, 1) Single-purpose PostgreSQL server\n2)\n> > shared function server (i.e. 
Apache, Postgres, etc on the same\nbox) 3)\n> > Embedded or otherwise resource limited server (Palmtop, etc).\n> >\n> > When we get around to PostgreSQL's self-tuning ability being\nactively\n> > developed (and I think Bruce has done some of the very start with\nhis\n> > monitor program), perhaps having a compile time option to set the\n> > default for the server, and a runtime option in case it changes?\n> >\n> > i.e.\n> >\n> > --tuning=superserver\n> > --tuning=shared\n> > --tuning=embedded\n> >\n> > postmaster -t superserver\n> > postmaster -t shared\n> > postmaster -t embedded\n> >\n> > What do people think?\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > P.S. - I'm not on the Hackers mailing list from this account. Can\n> > anyone responding please include me directly in their replies?\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people:\nthose\n> > who work and those who take the credit. He told me to try to be in\nthe\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> > ---------------------------(end of\nbroadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister\ncommand\n> > (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n> >\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Mon, 9 Apr 2001 00:22:20 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I like this. Ensure that tips can be dumped into a log file --\n> preferably separate from the main one -- so it can be run on a live\n> system for a short period of time, recorded then analyzed later.\n\nYes, they would go into the standard postmaster log.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 00:32:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Justin Clift writes:\n\n> When we get around to PostgreSQL's self-tuning ability being actively\n> developed (and I think Bruce has done some of the very start with his\n> monitor program), perhaps having a compile time option to set the\n> default for the server, and a runtime option in case it changes?\n> i.e.\n> --tuning=superserver\n> --tuning=shared\n> --tuning=embedded\n> postmaster -t superserver\n> postmaster -t shared\n> postmaster -t embedded\n\nI'm generally no friend of generic \"make it fast\", \"make it small\"\noptions. It is usually hard to decide what settings should go under what\nheading because everyone is in a different situation. The solution is to\nprovide user guidance to the existing configuration variables that goes\nbeyond what they do by adding why the user should care.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 18:28:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "An excellent idea.\n\nI suspect you'll get a biased resonse from the -hackers folks. This really\nis an excellent idea.\n\nThose options cover I think the main scenarios, with the first two options\nbeing the most important. Ideally you'd basically sample server specs\n(speed, # of procs, mem etc) and set up for that based on profile. It should\nthen be possible to dump the settings that are used (--tuning = these\ncmdline --options changed from defaults).\n\nNovices can use it to get of the ground, intermediate level dba's can use it\nas a sizing tool, and -hackers can flame each other over its very existence.\n\nAugust\n\n----- Original Message -----\nFrom: \"Justin Clift\" <jclift@iprimus.com.au>\nNewsgroups: comp.databases.postgresql.hackers\nSent: Sunday, April 08, 2001 11:36 PM\nSubject: \"--tuning\" compile and runtime option (?)\n\n\n> Hi guys,\n>\n> Just thinking about the future directions PostgreSQL is taking, and it\n> seems (just a feeling) like most people prefer it to be as self tuning\n> as possible.\n>\n> In trying to think about how it will/would do that I think PostgreSQL\n> will need to know \"how much\" of the resources of the server its on, it's\n> allowed to take.\n>\n> Can think of three scenario's, 1) Single-purpose PostgreSQL server 2)\n> shared function server (i.e. Apache, Postgres, etc on the same box) 3)\n> Embedded or otherwise resource limited server (Palmtop, etc).\n>\n> When we get around to PostgreSQL's self-tuning ability being actively\n> developed (and I think Bruce has done some of the very start with his\n> monitor program), perhaps having a compile time option to set the\n> default for the server, and a runtime option in case it changes?\n>\n> i.e.\n>\n> --tuning=superserver\n> --tuning=shared\n> --tuning=embedded\n>\n> postmaster -t superserver\n> postmaster -t shared\n> postmaster -t embedded\n>\n> What do people think?\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n\n\n",
"msg_date": "Mon, 9 Apr 2001 12:46:25 -0400",
"msg_from": "\"August Zajonc\" <augustz@bigfoot.com>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Hi Bruce,\n\nMy thought on this is more for an \"overall effect\".\n\nDown The Track (i.e. in a few versions or so) I'm thinking, rightly or\nwrongly, that PostgreSQL will become Very Good at tuning itself.\n\nIt would be a good thing if PostgreSQL could know just how fair it can\nplay in regards to the server it's working on.\n\nFor example, if lets say it's installed on a server in which it's the\nonly important thing. i.e. OS + PostgreSQL and thats about it. \nIndicating to the PostgreSQL server that's it's allowed to consume all\nthe available resources to its maximum benefit would allow possible\nfuture \"self-tuning\" algorithms to say \"well, in these circumstances the\nbest way to deal with the present load is X\". And it would do things\nwithout regard for other possible services, as it would know that it's\nrunning by itself. This would be something like a\n\"--tuning=superserver\" compile-time option or run-time flag.\n\nConversely, the PostgreSQL server may be on a box with several other\nservices, like Apache, MySQL, FTP daemons, and so forth. In that case\nit would possibly select different algorithms, knowing that it had to\n\"play fair\" with the server's resources. This may be indicated to it by\na \"--tuning=shared\" compile-time option or run-time flag.\n\nAnd similar for embedded systems, where there is a lower or different\nresource allocation strategy.\n\nThis is a general indication of thoughts I was having last night and\nthis morning, and I bring it up more as a point of interest and\nwondering if others see that it may be of benefit.\n\nPresently we have to benchmark and then hand-tune the servers ourselves,\nand thats good. 
I'm thinking more about PostgreSQL's internal ways of\ndealing with queries and handling of resources though, in a\nsecond-by-second situation.\n\nWhat do you think?\n\nRegards and best wishes,\n\nJustin Clift\n\nBruce Momjian wrote:\n> \n> My idea was to have PostgreSQL output tips to help performance. The\n> TODO item is:\n> \n> * Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM\n> ANALYZE, and CLUSTER\n> \n> I also will be writing an article on performance tuning this month.\n> What parameters would these options you suggest control? I usually\n> prefer options that have more concrete effect.\n> \n> > Just thinking about the future directions PostgreSQL is taking, and it\n> > seems (just a feeling) like most people prefer it to be as self tuning\n> > as possible.\n> >\n> > In trying to think about how it will/would do that I think PostgreSQL\n> > will need to know \"how much\" of the resources of the server its on, it's\n> > allowed to take.\n> >\n> > Can think of three scenario's, 1) Single-purpose PostgreSQL server 2)\n> > shared function server (i.e. Apache, Postgres, etc on the same box) 3)\n> > Embedded or otherwise resource limited server (Palmtop, etc).\n> >\n> > When we get around to PostgreSQL's self-tuning ability being actively\n> > developed (and I think Bruce has done some of the very start with his\n> > monitor program), perhaps having a compile time option to set the\n> > default for the server, and a runtime option in case it changes?\n> >\n> > i.e.\n> >\n> > --tuning=superserver\n> > --tuning=shared\n> > --tuning=embedded\n> >\n> > postmaster -t superserver\n> > postmaster -t shared\n> > postmaster -t embedded\n> >\n> > What do people think?\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> > P.S. - I'm not on the Hackers mailing list from this account. 
Can\n> > anyone responding please include me directly in their replies?\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 10 Apr 2001 03:16:43 +1000",
"msg_from": "Justin Clift <jclift@iprimus.com.au>",
"msg_from_op": true,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "> Hi Bruce,\n> \n> My thought on this is more for an \"overall effect\".\n> \n> Down The Track (i.e. in a few versions or so) I'm thinking, rightly or\n> wrongly, that PostgreSQL will become Very Good at tuning itself.\n> \n> It would be a good thing if PostgreSQL could know just how fair it can\n> play in regards to the server it's working on.\n\nOK, what options would you recommend be auto-tuned in each circumstance?\nI can imagine open files and maybe sortmemory, but even then, other\nbackends can affect the proper value. Share memory usually has a kernel\nlimit which prevents us from auto-tuning that too much.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 13:44:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "I'd be happy to see during initial setup a few questions go by that would\nsize the underlying OS properly as well. We all do the same things with a\nnew system, increase filesystem limits etc... Some of these options (on a\ndedicated postgresql) are gimme's. Why not do them once upfront, prompt the\nuser (share memory, file handles) are to low, should I increase the limits?\nI'd love it, and some of the \"PostgreSQL doesn't scale even the the load is\nlow\" complaints would go away.\n\nThe hitch I can see is that much will be distribution/platform specific, but\nthose don't change that radically that motivated volunteers couldn't keep\npace.\n\nAugust\n\n\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote in message\nnews:200104091744.NAA12563@candle.pha.pa.us...\n\n> OK, what options would you recommend be auto-tuned in each circumstance?\n> I can imagine open files and maybe sortmemory, but even then, other\n> backends can affect the proper value. Share memory usually has a kernel\n> limit which prevents us from auto-tuning that too much.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 9 Apr 2001 21:21:57 -0400",
"msg_from": "\"August Zajonc\" <junk-postgre@aontic.com>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> OK, what options would you recommend be auto-tuned in each circumstance?\n> I can imagine open files and maybe sortmemory, but even then, other\n> backends can affect the proper value. Share memory usually has a kernel\n> limit which prevents us from auto-tuning that too much.\n\nShare memory might have a kernel limit, but that's no excuse for not\nallowing this process to auto-tune it.\n\nI have truckloads of memory in my server if I am setting it up for a\nserious database, and I usually edit /etc/sysctl.conf (Debian GNU/Linux,\nand some other Linux's - possibly other unixen as well) to set the\nshared memory. Usually I set it to around 90% of the actual RAM in the\nsystem.\n\nSo, if I have 1G RAM, and my database is 600M but my application only\nends up hitting 20% of that on a regular basis do I benefit from\nadjusting my -B beyond 12000 or so? A question that the docs seem to\nthink is 'suck it and see'. I haven't had the time or equipment to\nbenchmark stuff in a wide range of hardware environments, myself, but if\nan auto-tune option suggested to me that performance increased up to a\n-B of 4000 or so, and that the server stopped working past there, I'm\nafraid that only an idiot would cease investigating at that point :-)\n\nIt would be wonderful if the auto-tuning gave sensible advice in these\nsorts of situations, and then made some further suggestions that an\noperator might use to take the tuning to the next level. A mention of\nkernel shared memory limits would seem appropriate in there somewhere.\n\nThe problem I usually have with a lot of \"auto tuning\" (and other sorts\nof automation) on other software is that it takes the approach that the\nuser knows nothing, and we \"don't want to bother their pretty little\nheads with these sorts of problems\". 
I feel like a total _blonde_ when\nI use MS SQL Server, because it either hides the possibility of me\nadjusting it, or it doesn't explain what/how/why to adjust. PostgreSQL\nshould, of course, offer advice. It shouldn't assume that because I've\nsaid \"auto-tune\" that I don't want to know why it is doing what it is\ndoing. What conclusions it has come to, and what decisions it has made\nas a result.\n\nFinally, thanks for pursuing these options. I think they will be a huge\nhelp, as well as hopefully providing more data on performance issues\nback to the core team.\n\nThat'll be 2c, please :-)\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Wed, 11 Apr 2001 00:04:20 +1200",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Well, again, I will write a performance tuning article this month, which\nhopefully will help people.\n\nMy recommendation on shared memory is that if you have a machine that is\ngoing to be used only for PostgreSQL, the shared memory should be\nincreased to the point where you are not seeing any swap page-ins during\nnormal use. I know you have the kernel buffer cache for all unused\nmemory, but those pages are copied in and out of the PostgreSQL buffer\ncache for processing, which can be an expensive operation.\n\nNow how do you automate something to increase shared memory until there\nare no page swap-ins under normal use? I think the administrator will\nhave to be involved because a script has no idea what a normal load\nlooks like. The best we could do is to monitor swap-ins as part of the\nrunning server and report to the administrator that there is extra\nmemory around that could be used for shared memory.\n\n> Bruce Momjian wrote:\n> > \n> > OK, what options would you recommend be auto-tuned in each circumstance?\n> > I can imagine open files and maybe sortmemory, but even then, other\n> > backends can affect the proper value. Share memory usually has a kernel\n> > limit which prevents us from auto-tuning that too much.\n> \n> Share memory might have a kernel limit, but that's no excuse for not\n> allowing this process to auto-tune it.\n> \n> I have truckloads of memory in my server if I am setting it up for a\n> serious database, and I usually edit /etc/sysctl.conf (Debian GNU/Linux,\n> and some other Linux's - possibly other unixen as well) to set the\n> shared memory. Usually I set it to around 90% of the actual RAM in the\n> system.\n> \n> So, if I have 1G RAM, and my database is 600M but my application only\n> ends up hitting 20% of that on a regular basis do I benefit from\n> adjusting my -B beyond 12000 or so? A question that the docs seem to\n> think is 'suck it and see'. 
I haven't had the time or equipment to\n> benchmark stuff in a wide range of hardware environments, myself, but if\n> an auto-tune option suggested to me that performance increased up to a\n> -B of 4000 or so, and that the server stopped working past there, I'm\n> afraid that only an idiot would cease investigating at that point :-)\n> \n> It would be wonderful if the auto-tuning gave sensible advice in these\n> sorts of situations, and then made some further suggestions that an\n> operator might use to take the tuning to the next level. A mention of\n> kernel shared memory limits would seem appropriate in there somewhere.\n> \n> The problem I usually have with a lot of \"auto tuning\" (and other sorts\n> of automation) on other software is that it takes the approach that the\n> user knows nothing, and we \"don't want to bother their pretty little\n> heads with these sorts of problems\". I feel like a total _blonde_ when\n> I use MS SQL Server, because it either hides the possibility of me\n> adjusting it, or it doesn't explain what/how/why to adjust. PostgreSQL\n> should, of course, offer advice. It shouldn't assume that because I've\n> said \"auto-tune\" that I don't want to know why it is doing what it is\n> doing. What conclusions it has come to, and what decisions it has made\n> as a result.\n> \n> Finally, thanks for pursuing these options. 
I think they will be a huge\n> help, as well as hopefully providing more data on performance issues\n> back to the core team.\n> \n> That'll be 2c, please :-)\n> \t\t\t\t\tAndrew.\n> -- \n> _____________________________________________________________________\n> Andrew McMillan, e-mail: Andrew@catalyst.net.nz\n> Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\n> Me: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 10 Apr 2001 17:06:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Well, again, I will write a performance tuning article this month, which\n> hopefyully will help people.\n> \n> My recommendation on shared memory is that if you have a machine that is\n> going to be used only for PostgreSQL, the shared memory should be\n> increased to the point where you are not seeing any swap page-ins during\n> normal use. I know you have the kernel buffer cache for all unused\n> memory, but those pages are copied in and out of the PostgreSQL buffer\n> cache for processing, which can be an expensive operation.\n> \n> Now how do you automate something to increase shared memory until there\n> are no page swap-ins under normal use. I think the administrator will\n> have to be involved because a script has no idea what a normal load\n> looks like. The best we could do is to monitor swap-ins as part of the\n> running server and report to the administrator that there is extra\n> memory around that could be used for shared memory.\n\nBrilliant. Thanks for that - it's exactly the sort of information / statistics\nstuff that it is useful to know.\n\nI use Progress RDBMS on a few sites. 
On a Progress database I get this sort of\ninformation which can help me tune things:\n\n\n Activity - Sampled at 04/11/01 12:32 for 892:23:25.\n\n Event Total Per Sec Event Total Per Sec\n Commits 50518 0.0 Undos 24 0.0 \n Record Updates 72407 0.0 Record Reads 121294681 37.7 \n Record Creates 37065 0.0 Record Deletes 19807 0.0 \n DB Writes 25720 0.0 DB Reads 1551040 0.4 \n BI Writes 14701 0.0 BI Reads 14534 0.0 \n AI Writes 0 0.0 \n Record Locks 645952 0.2 Record Waits 0 0.0 \n Checkpoints 62 0.0 Buffers Flushed 13102 0.0 \n\n Rec Lock Waits 0 % BI Buf Waits 0 % AI Buf Waits 0 %\n Writes by APW 0 % Writes by BIW 0 % Writes by AIW 0 %\n Buffer Hits 16 %\n DB Size 96 MB BI Size 3192 K AI Size 0 K\n FR chain 0 blocks RM chain 1 blocks\n Shared Memory 29864 K Segments 1\n\n 8 Servers, 7 Users (0 Local, 7 Remote, 0 Batch),0 Apws\n\n\n\nOr, for a more reasonable length of sample:\n\n Activity - Sampled at 04/11/01 12:42 for 0:09:26.\n\n Event Total Per Sec Event Total Per Sec\n Commits 14 0.0 Undos 0 0.0 \n Record Updates 7 0.0 Record Reads 90488 159.8 \n Record Creates 1 0.0 Record Deletes 0 0.0 \n DB Writes 38 0.0 DB Reads 1636 2.8 \n BI Writes 5 0.0 BI Reads 0 0.0 \n AI Writes 0 0.0 \n Record Locks 69 0.1 Record Waits 0 0.0 \n Checkpoints 0 0.0 Buffers Flushed 0 0.0 \n\n Rec Lock Waits 0 % BI Buf Waits 0 % AI Buf Waits 0 %\n Writes by APW 0 % Writes by BIW 0 % Writes by AIW 0 %\n Buffer Hits 99 %\n DB Size 96 MB BI Size 3192 K AI Size 0 K\n FR chain 0 blocks RM chain 1 blocks\n Shared Memory 29864 K Segments 1\n\n 8 Servers, 9 Users (0 Local, 9 Remote, 0 Batch),0 Apws\n\n\nI find this is quite a straightforward and useful set of statistics. Just having\nthis sort of functionality easily available gets me used to the sorts of numbers I\ncan expect in different hardware environments. 
It is then simple to conduct basic\ntuning by running reports (or other operations) and seeing the sorts of numbers you\nget for the sample period.\n\nOf course Progress has a bunch more stuff you can tune, including separate processes\nfor asynchronously writing database pages, or their after-image and before-image\nfiles. I don't have any databases that get that arcane though, hence the APW, BIW\nand AIW statistics are zero above.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Wed, 11 Apr 2001 13:02:46 +1000",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
}
] |
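Bruce's advice in the thread above — size shared memory so that normal load causes no swap page-ins, within the kernel's shared-memory limit — can be turned into a rough back-of-the-envelope calculator. This is a hedged sketch, not anything shipped with PostgreSQL: the 8 kB buffer-page size matches the default of that era, and the function name and the 90% default merely echo Andrew's /etc/sysctl.conf habit.

```python
# Rough sizing helper: given total RAM and a target fraction to devote to
# shared memory, estimate a -B (shared buffer) count. The 8192-byte page
# size and the 90% default are assumptions taken from the discussion above.

BUFFER_PAGE_BYTES = 8192

def suggest_shared_buffers(ram_bytes, fraction=0.9):
    """Return a suggested -B value from RAM size and a shared-memory fraction."""
    if not 0 < fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return int(ram_bytes * fraction) // BUFFER_PAGE_BYTES

if __name__ == "__main__":
    one_gig = 1024 ** 3
    print(suggest_shared_buffers(one_gig))        # ~90% of 1 GB in 8 kB pages
    print(suggest_shared_buffers(one_gig, 0.1))   # a far more conservative 10%
```

As the thread notes, any such number is only a starting point: the kernel's SHMMAX limit and the observed swap-in rate under real load, not arithmetic, decide the final value.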
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]\n> Sent: 07 April 2001 15:24\n> To: pgAdmin-hackers@greatbridge.org\n> Cc: pgsql-hackers@postgresql.org\n> Subject: [pgAdmin-hackers] PL/pgSQL IDE project\n> \n> \n> Hello all,\n> \n> I would like to inform you all that I am currently working on the \n> implementation of PL/pgSQL packages on both server-side \n> (PostgreSQL 7.1) \n> and client-side (PgAdmin).\n> The idea is to add an PL/pgSQL Integrated Development Environment to \n> pgadmin. Help and suggestions needed. If someone is already \n> working on a \n> similar project, let me know how I can help. For discussion, please \n> register on mailto:pgadmin-hackers@greatbridge.org mailing \n> list. Help and \n> suggestions needed !\n\nHi Jean-Michel,\n\nSounds great. My only concern is that you consider the way different code\nhas already been implemented in pgAdmin eg:\n\n1) Any server side objects (SSOs) such as tables, functions or views should\nbe prefixed 'pgadmin_'. There is a mechanism in place in basSQL.bas which\nwill autorepair/upgrade SSOs. In the case of upgrades there is an SSO\nversion number stored in basGlobal.bas. If this doesn't match the version\nnumber in the pgadmin_param table then an upgrade will occur.\n\n2) Use the same error handling that is already implemented elsewhere.\n\nWhere applicable, be sure to reuse existing code like the SQL Wizard - no\npoint in writing another one!\n\nGood luck,\n\nRegards, Dave.\n",
"msg_date": "Mon, 9 Apr 2001 08:29:40 +0100 ",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "RE: [pgAdmin-hackers] PL/pgSQL IDE project"
}
] |
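Dave's upgrade mechanism above — a server-side-object version number stored in `pgadmin_param` is compared against the version compiled into the client, and a repair/upgrade runs on mismatch — is a small, reusable pattern. The sketch below is illustrative only: the names (`CODE_SSO_VERSION`, `needs_upgrade`, `ensure_ssos`) are hypothetical, and pgAdmin's actual implementation lives in basSQL.bas/basGlobal.bas.

```python
# Version-gated auto-upgrade check, modelled on the pgAdmin SSO scheme
# described above. All identifiers here are invented for illustration.

CODE_SSO_VERSION = 3  # the SSO version this client build expects

def needs_upgrade(stored_version, code_version=CODE_SSO_VERSION):
    """True when the server-side objects are older than the client expects."""
    return stored_version < code_version

def ensure_ssos(stored_version, upgrade):
    """Run the supplied upgrade routine if the stored version is behind,
    then report the version now in effect."""
    if needs_upgrade(stored_version):
        upgrade(stored_version, CODE_SSO_VERSION)
        return CODE_SSO_VERSION
    return stored_version
```

The point of the pattern is that the upgrade path is exercised automatically at connect time, so a client never operates against server-side objects it does not understand.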
[
{
"msg_contents": "\n> However, 7.1beta6 to 7.1rc4 to 7.1.0 would be an ok \n> progression, as 7.1 < 7.1.0, I think (saying that without having tested it could be\n> dangerous.... :-)).\n\nI like this 7.1.0, it would also help to clarify what exact version is at hand.\nPeople tend to use shorthands like 7.1 to refer to any patch version (like 7.1.3).\n\nAndreas\n",
"msg_date": "Mon, 9 Apr 2001 11:37:58 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: RPM upgrade caveats going from a beta version to RC"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> > However, 7.1beta6 to 7.1rc4 to 7.1.0 would be an ok\n> > progression, as 7.1 < 7.1.0, I think (saying that without having tested it could be\n> > dangerous.... :-)).\n>\n> I like this 7.1.0, it would also help to clarify what exact version is at hand.\n> People tend to use shorthands like 7.1 to refer to any patch version (like 7.1.3).\n\nI like that. In fact, we almost shipped a 7.0.0 but it was switched back\nat the last minute because that's how it always was. Perhaps the\nobjection is that a 7.1.0 would indicate that there will be a 7.1.1 bug\nfix release, but let's not kid ourselves.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 18:18:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: RPM upgrade caveats going from a beta version to\n RC"
}
] |
[
{
"msg_contents": "Hi,\n\nI implemented additional server functionality. Currently (v7.0.3), executing an\nSQL update statement on the same\nrow from inside two different processes results in blocking the second process\nuntil the end of the transaction in the first one.\nIn a real OLTP application the second process can't wait too long. After a few\nseconds the server should return with the\nmessage 'lock timeout exceeded'. I modified the postgres lock manager source code\nto obtain that functionality.\nI take advantage of the deadlock detection mechanism. Currently the deadlock\ndetection routine initially checks for\na simple deadlock between two processes, next inserts the lock into the lock\nqueue, and after\nDEADLOCK_CHECK_TIMER seconds runs HandleDeadLock for comprehensive deadlock\ndetection.\nTo obtain the 'timeout on lock' feature I do as follows:\n\n1. Add a new configure parameter. I add a #define statement in file\ninclude/config.in\n #define NO_WAIT_FOR_LOCK 1\n In the future somebody can add a new option to the SQL SET command\n\n2. Modify the HandleDeadLock routine. In file backend/storage/lmgr/proc.c change\nlines 866-870\n\n if (!DeadLockCheck(MyProc, MyProc->waitLock))\n {\n UnlockLockTable();\n return;\n }\n\n to\n\n if (!NO_WAIT_FOR_LOCK)\n {\n if (!DeadLockCheck(MyProc, MyProc->waitLock))\n {\n UnlockLockTable();\n return;\n }\n }\n\nWith this modification every conflicting lock waits DEADLOCK_CHECK_TIMER\nseconds in the queue and returns with the error\n'deadlock detected'.\n\nWho can add the 'timeout on lock' feature to the next postgres server release?\n\n\n\n",
"msg_date": "Mon, 9 Apr 2001 12:02:39 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": true,
"msg_subject": "timeout on lock"
}
] |
[
{
"msg_contents": "Hi,\n\nI implemented additional server functionality. Currently (v7.0.3), executing an\nSQL update statement on the same\nrow from inside two different processes results in blocking the second process\nuntil the end of the transaction in\nthe first one. In a real OLTP application the second process can't wait too long.\nAfter a few seconds the server should\nreturn the message 'lock timeout exceeded' to the application. I modified the postgres\nlock manager source code to\nobtain that functionality. I take advantage of the deadlock detection mechanism.\nCurrently the deadlock\ndetection routine initially checks for a simple deadlock between two\nprocesses, next inserts the lock\ninto the lock queue, and after DEADLOCK_CHECK_TIMER seconds runs HandleDeadLock for\ncomprehensive deadlock detection.\nTo obtain the 'timeout on lock' feature I do as follows:\n\n1. Add a new configure parameter. Currently I add a #define statement in file\ninclude/config.in\n #define NO_WAIT_FOR_LOCK 1\n In the future somebody can add a new option to the SQL SET command\n\n2. Modify the HandleDeadLock routine. In file backend/storage/lmgr/proc.c change\nlines 866-870\n\n if (!DeadLockCheck(MyProc, MyProc->waitLock))\n {\n UnlockLockTable();\n return;\n }\n\n to\n\n if (!NO_WAIT_FOR_LOCK)\n {\n if (!DeadLockCheck(MyProc, MyProc->waitLock))\n {\n UnlockLockTable();\n return;\n }\n }\n\nWith this modification every conflicting lock waits DEADLOCK_CHECK_TIMER\nseconds in the queue and returns with the error\n'deadlock detected'.\n\nWho can add this simple 'timeout on lock' implementation to the next\npostgres server release?\n\n\n\n",
"msg_date": "Mon, 9 Apr 2001 12:30:18 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": true,
"msg_subject": "timeout on lock feature"
},
{
"msg_contents": "I can imagine some people wanting this. However, 7.1 has new deadlock\ndetection code, so would you make a 7.1 version and send it over? We\ncan get it into 7.2. I think we need a SET variable, and it should\ndefault to OFF.\n\nGood idea. Thanks.\n\n\n> Hi,\n> \n> I implement additional server functionality. Currently (v7.0.3), executing\n> SQL update statement on the same\n> row from inside two different processess results in blocking second process\n> to the end of transaction in\n> the first one. In real OLTP application second process can't wait too long.\n> After few seconds server should\n> return to the application message:'lock timeout exceeded'. I modify postgres\n> lock manager source code to\n> obtain that functionality. I take advantage of deadlock detection mechanism.\n> Currently deadlock\n> detection routine initialy check for simple deadlock detection between two\n> processess, next insert lock\n> into lock queue and after DEADLOCK_CHECK_TIMER seconds run HandleDeadLock to\n> comprehensive deadlock detection.\n> To obtain 'timeout on lock' feature I do as follow:\n> \n> 1. Add new configure parameter. Currently I add #define statement in file\n> include/config.in\n> #define NO_WAIT_FOR_LOCK 1\n> In the future somebody can add new option to SQL SET command\n> \n> 2. Modify HandleDeadLock routine. 
In file backend/storage/lmgr/proc.c change\n> lines 866-870\n> \n> if (!DeadLockCheck(MyProc, MyProc->waitLock))\n> {\n> UnlockLockTable();\n> return;\n> }\n> \n> to\n> \n> if (!NO_WAIT_FOR_LOCK)\n> {\n> if (!DeadLockCheck(MyProc, MyProc->waitLock))\n> {\n> UnlockLockTable();\n> return;\n> }\n> }\n> \n> With this modyfication every conflicting lock wait DEADLOCK_CHECK_TIMER\n> seconds in queue and returns with error\n> 'deadlock detect'.\n> \n> Who can add this simply 'timeout on lock' implementation to the next\n> postgres server release?\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 13:48:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "\nIf you can't handle the SET variable stuff, we can do it over here.\n\nThanks.\n\n> Hi,\n> \n> I implement additional server functionality. Currently (v7.0.3), executing\n> SQL update statement on the same\n> row from inside two different processess results in blocking second process\n> to the end of transaction in\n> the first one. In real OLTP application second process can't wait too long.\n> After few seconds server should\n> return to the application message:'lock timeout exceeded'. I modify postgres\n> lock manager source code to\n> obtain that functionality. I take advantage of deadlock detection mechanism.\n> Currently deadlock\n> detection routine initialy check for simple deadlock detection between two\n> processess, next insert lock\n> into lock queue and after DEADLOCK_CHECK_TIMER seconds run HandleDeadLock to\n> comprehensive deadlock detection.\n> To obtain 'timeout on lock' feature I do as follow:\n> \n> 1. Add new configure parameter. Currently I add #define statement in file\n> include/config.in\n> #define NO_WAIT_FOR_LOCK 1\n> In the future somebody can add new option to SQL SET command\n> \n> 2. Modify HandleDeadLock routine. 
In file backend/storage/lmgr/proc.c change\n> lines 866-870\n> \n> if (!DeadLockCheck(MyProc, MyProc->waitLock))\n> {\n> UnlockLockTable();\n> return;\n> }\n> \n> to\n> \n> if (!NO_WAIT_FOR_LOCK)\n> {\n> if (!DeadLockCheck(MyProc, MyProc->waitLock))\n> {\n> UnlockLockTable();\n> return;\n> }\n> }\n> \n> With this modyfication every conflicting lock wait DEADLOCK_CHECK_TIMER\n> seconds in queue and returns with error\n> 'deadlock detect'.\n> \n> Who can add this simply 'timeout on lock' implementation to the next\n> postgres server release?\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Apr 2001 13:48:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I can imagine some people wanting this. However, 7.1 has new deadlock\n> detection code, so I would you make a 7.1 version and send it over. We\n> can get it into 7.2.\n\nI object strongly to any such \"feature\" in the low-level form that\nHenryk proposes, because it would affect *ALL* locking. Do you really\nwant all your other transactions to go belly-up if, say, someone vacuums\npg_class?\n\nA variant of LOCK TABLE that explicitly requests a timeout might make\nsense, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 12:44:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature "
},
{
"msg_contents": "\nI was thinking SET because UPDATE does an auto-lock.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I can imagine some people wanting this. However, 7.1 has new deadlock\n> > detection code, so I would you make a 7.1 version and send it over. We\n> > can get it into 7.2.\n> \n> I object strongly to any such \"feature\" in the low-level form that\n> Henryk proposes, because it would affect *ALL* locking. Do you really\n> want all your other transactions to go belly-up if, say, someone vacuums\n> pg_class?\n> \n> A variant of LOCK TABLE that explicitly requests a timeout might make\n> sense, though.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 12:46:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was thinking SET because UPDATE does an auto-lock.\n\nNot to mention a ton of implicit locks acquired on various system tables\nduring parsing/planning. You really want auto timeout on all of those?\nI sure don't.\n\nThe appropriate way to do this given a LOCK TABLE option would be like\n\n\tBEGIN;\n\tLOCK TABLE foo IN ROW EXCLUSIVE MODE WITH TIMEOUT n;\n\tUPDATE foo SET ...;\n\tCOMMIT;\n\nwhich restricts the scope of the timeout behavior to just the specific\nlock that the user is thinking of, and doesn't risk breaking fundamental\nsystem operations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 14:01:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature "
},
{
"msg_contents": "> The appropriate way to do this given a LOCK TABLE option would be like\n> \n> \tBEGIN;\n> \tLOCK TABLE foo IN ROW EXCLUSIVE MODE WITH TIMEOUT n;\n> \tUPDATE foo SET ...;\n> \tCOMMIT;\n> \n> which restricts the scope of the timeout behavior to just the specific\n> lock that the user is thinking of, and doesn't risk breaking fundamental\n> system operations.\n\nThis is pretty tough because the user has to know the proper lock type,\nright?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 14:28:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "YES, this feature should affect ALL locks.\nThe 'timeout on lock' parameter says to the server \"I CAN'T WAIT WITH THIS\nTRANSACTION TOO LONG BECAUSE OF (ANY) LOCK\",\nso if my process is in conflict with another (system or user) process, then\nI want to abort\nmy transaction. Somebody can set the timeout to a bigger value (minutes, for\nexample). In my OLTP applications\nI set this value to 10 sec. because I have a lot of short transactions\ngenerated by operators.\nLOCK TABLE is not enough because I also must not wait on some internal locks\n(if those locks block me too long!).\n\n\nTom Lane wrote in message <4547.987180295@sss.pgh.pa.us>...\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I can imagine some people wanting this. However, 7.1 has new deadlock\n>> detection code, so I would you make a 7.1 version and send it over. We\n>> can get it into 7.2.\n>\n>I object strongly to any such \"feature\" in the low-level form that\n>Henryk proposes, because it would affect *ALL* locking. Do you really\n>want all your other transactions to go belly-up if, say, someone vacuums\n>pg_class?\n>\n>A variant of LOCK TABLE that explicitly requests a timeout might make\n>sense, though.\n>\n> regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n\n\n",
"msg_date": "Tue, 17 Apr 2001 10:44:12 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": true,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "\nTom Lane wrote in message <4982.987184866@sss.pgh.pa.us>...\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I was thinking SET because UPDATE does an auto-lock.\n>\n>Not to mention a ton of implicit locks acquired on various system tables\n>during parsing/planning. You really want auto timeout on all of those?\n>I sure don't.\n\n*****************************************\nYES, I DO! My transaction can't wait.\nIf the parser or planner is blocked, then I want to abort my transaction.\n*****************************************\n\n>\n>The appropriate way to do this given a LOCK TABLE option would be like\n>\n> BEGIN;\n> LOCK TABLE foo IN ROW EXCLUSIVE MODE WITH TIMEOUT n;\n> UPDATE foo SET ...;\n> COMMIT;\n>\n\n*****************************************\nWith this solution, some server processes can still block me!\n*****************************************\n\n\n>which restricts the scope of the timeout behavior to just the specific\n>lock that the user is thinking of, and doesn't risk breaking fundamental\n>system operations.\n>\n> regards, tom lane\n\n*****************************************\nThis is a real problem, but I think the other postgres modules are ready for my\nsolution\n(because it is an extension of the deadlock detection mechanism)\n*****************************************\n\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n\n\n",
"msg_date": "Tue, 17 Apr 2001 11:09:22 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": true,
"msg_subject": "Re: timeout on lock feature"
},
{
"msg_contents": "\"Henryk Szal\" <szal@doctorq.com.pl> writes:\n> YES, this feature should affect ALL locks.\n> 'Timeout on lock' parameter says to server \"I CAN'T WAIT WITH THIS\n> TRANSACTION TOO LONG BECAUSE OF (ANY) LOCK\",\n\nIt still seems to me that what such an application wants is not a lock\ntimeout at all, but an overall limit on the total elapsed time for the\nquery. If you can't afford to wait to get a lock, why is it OK to wait\n(perhaps much longer) for I/O or computation?\n\nSuch a limit would be best handled by sending a query-cancel request\nwhen you run out of patience, it seems to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 20:13:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: timeout on lock feature "
},
{
"msg_contents": "\"Henryk Szal\" <szal@doctorq.com.pl> writes:\n\n> YES, I DO! My transaction can't wait.\n> If parser on planner is blocked, then i want to abort my transaction.\n\nWhat are your actual timing constraints? Is the constraint ``no\ndatabase table access may take longer than 10 seconds?'' Or is it\n``no database transaction may take longer than 10 seconds?'' Or is\nthe constraint ``this operation may not take longer than 10 seconds?''\n\nIf the first is the actual constraint, then indeed a timeout on table\naccess is appropriate. But that would be a weird constraint. Can you\nexplain further why you need this?\n\nIf the second is the actual constraint, that also sounds strange; a\ndatabase transaction is not normally a complete transaction. You\nusually have to worry about other communication overhead.\n\nIf the third is the actual constraint, then shouldn't you do the\ntimeout at the operation level, rather than at the database level?\nWhat is preventing you from doing that?\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 3988: A computer scientist is someone who fixes things that aren't broken.\n",
"msg_date": "17 Apr 2001 17:34:33 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: timeout on lock feature"
},
{
"msg_contents": "My typical short transaction runs in 3 seconds (30 sec. on a heavily loaded\nsystem). But without 'timeout\non lock' it can run 60-180 minutes because someone (a user or administrator)\nruns a long transaction.\nThe timeout value is negligible. I set it to 10 sec. because if my two (3 sec.)\ntransactions are in conflict, then\nboth will be executed (the second 3 sec. later).\n\nIan Lance Taylor wrote in message ...\n>\"Henryk Szal\" <szal@doctorq.com.pl> writes:\n>\n>> YES, I DO! My transaction can't wait.\n>> If parser on planner is blocked, then i want to abort my transaction.\n>\n>What are your actual timing constraints? Is the constraint ``no\n>database table access may take longer than 10 seconds?'' Or is it\n>``no database transaction may take longer than 10 seconds?'' Or is\n>the constraint ``this operation may not take longer than 10 seconds?''\n>\n>If the first is the actual constraint, then indeed a timeout on table\n>access is appropriate. But that would be a weird constraint. Can you\n>explain further why you need this?\n>\n>If the second is the actual constraint, that also sounds strange; a\n>database transaction is not normally a complete transaction. You\n>usually have to worry about other communication overhead.\n>\n>If the third is the actual constraint, then shouldn't you do the\n>timeout at the operation level, rather than at the database level?\n>What is preventing you from doing that?\n>\n>Ian\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3988: A computer scientist is someone who fixes things that aren't\nbroken.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n\n\n",
"msg_date": "Wed, 18 Apr 2001 11:17:07 +0200",
"msg_from": "\"Henryk Szal\" <szal@doctorq.com.pl>",
"msg_from_op": true,
"msg_subject": "Re: Re: timeout on lock feature"
},
{
"msg_contents": "\"Henryk Szal\" <szal@doctorq.com.pl> writes:\n\n> My typical short transaction run in 3 seconds (on heavy loaded system 30\n> sec.). But without 'timeout\n> on lock' it can run 60-180 minutes because someone (user or administrator)\n> run long transaction.\n> Timeout value is negligible. I set one to 10 sec. because if my two (3 sec.)\n> transaction are in conflict, then\n> both will be executed (second 3 sec. later).\n\nThanks, but that actually doesn't answer my question.\n\nI asked: ``What are your actual timing constraints?'' By that I mean,\nwhat real world constraints do you need to satisfy? You aren't\nputting in a timeout for your health. You are doing it to achieve\nsome goal. What is that goal?\n\nI gave three sample goals, still below. Is one of them correct? Or\ndo you have a different one entirely?\n\nIan\n\n> Ian Lance Taylor wrote in message ...\n> >\"Henryk Szal\" <szal@doctorq.com.pl> writes:\n> >\n> >> YES, I DO! My transaction can't wait.\n> >> If parser on planner is blocked, then i want to abort my transaction.\n> >\n> >What are your actual timing constraints? Is the constraint ``no\n> >database table access may take longer than 10 seconds?'' Or is it\n> >``no database transaction may take longer than 10 seconds?'' Or is\n> >the constraint ``this operation may not take longer than 10 seconds?''\n> >\n> >If the first is the actual constraint, then indeed a timeout on table\n> >access is appropriate. But that would be a weird constraint. Can you\n> >explain further why you need this?\n> >\n> >If the second is the actual constraint, that also sounds strange; a\n> >database transaction is not normally a complete transaction. 
You\n> >usually have to worry about other communication overhead.\n> >\n> >If the third is the actual constraint, then shouldn't you do the\n> >timeout at the operation level, rather than at the database level?\n> >What is preventing you from doing that?\n\n---------------------------(end of broadcast)---------------------------\nTIP 582: There are two major products that come out of Berkeley: LSD and UNIX.\nWe don't believe this to be a coincidence.\n\t\t-- Jeremy S. Anderson\n",
"msg_date": "19 Apr 2001 10:06:02 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: timeout on lock feature"
}
] |
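Henryk's requirement in this thread — give up on a blocked lock after about 10 seconds instead of queueing behind a long transaction — had no server-side knob in 7.1 (later releases added `statement_timeout` and, in 9.3, `lock_timeout`). As a client-side illustration only, here is a minimal Python sketch of the abort-on-timeout policy; `shared_lock` and `run_short_txn` are hypothetical stand-ins for a database lock and a short transaction, not PostgreSQL APIs:

```python
import threading

# Hypothetical stand-in for a row/table lock held inside the database.
shared_lock = threading.Lock()

def run_short_txn(lock: threading.Lock, timeout: float) -> str:
    """Try to run a short transaction; abort instead of waiting forever.

    Returns "committed" if the lock was obtained within `timeout`
    seconds, "aborted" otherwise -- mirroring the 'timeout on lock'
    behaviour discussed in the thread.
    """
    if not lock.acquire(timeout=timeout):
        return "aborted"          # a real client would issue ROLLBACK here
    try:
        pass                      # the ~3 second body of work goes here
        return "committed"
    finally:
        lock.release()

# Uncontended: the short transaction commits immediately.
print(run_short_txn(shared_lock, timeout=0.1))   # committed

# A "long transaction" holds the lock; we give up after the timeout
# instead of blocking for 60-180 minutes.
shared_lock.acquire()
print(run_short_txn(shared_lock, timeout=0.1))   # aborted
shared_lock.release()
```

On a modern server the same policy is a one-liner, e.g. `SET lock_timeout = '10s'` before the transaction, which makes a blocked lock acquisition raise an error instead of waiting.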
[
{
"msg_contents": "\tWhen I download a full tarball I want it all, I'm greedy like that.\n;)\nIf it is to be split up as standard I believe problems will arise with\ndifferent versions being used together (by me most likely...). Also IMHO it\nwill not necessarily be realised that the docs have not been downloaded, which\nmeans referring to older docs if there was a previous installation, or not\nfinding any if no previous install.\n\tAlso to prevent confusion it might be useful to have the split\ndistro in its own sub directory (eg Postgresql-7.1-Split-Distro, or\nsomesuch), as when I first looked in on the download directory it was not\nimmediately obvious there was one main tarball and the rest were a split\nversion rather than a main one with optional stuff (which is not my favoured\noption).\nThis is all just in my opinion of course.\n- Stuart\n\n",
"msg_date": "Mon, 9 Apr 2001 11:56:09 +0100 ",
"msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>",
"msg_from_op": true,
"msg_subject": "Split Distro"
},
{
"msg_contents": "On Mon, 9 Apr 2001, Henshall, Stuart - WCP wrote:\n\n> \tWhen I download a full tarball I want it all, I'm greedy like that.\n> ;)\n> If it is to be split up as standard I believe problems will arise with\n> different versions being used together (by me most likely...). Also IMHO it\n> will not necessarily be realised that the docs have not been downloaded, which\n> means referring to older docs if there was a previous installation, or not\n> finding any if no previous install.\n> \tAlso to prevent confusion it might be useful to have the split\n> distro in its own sub directory (eg Postgresql-7.1-Split-Distro, or\n> somesuch), as when I first looked in on the download directory it was not\n> immediately obvious there was one main tarball and the rest were a split\n> version rather than a main one with optional stuff (which is not my favoured\n> option).\n\nWell, unless you have a broken client, the first thing you get when you\nenter the directory that the files are in is:\n\n=====================\nInformation regarding the split distribution\n--------------------------------------------\n\nIn the various download directories you will find alongside files with\nnames like\n\npostgresql-XXX.tar.gz\n\n(where XXX is a version number) smaller files with the names\n\npostgresql-base-XXX.tar.gz\npostgresql-opt-XXX.tar.gz\npostgresql-docs-XXX.tar.gz\npostgresql-test-XXX.tar.gz\n\nThe file named \"postgresql-XXX.tar.gz\" is the full source distribution.\nEach of the other four \"tarballs\" contains a subset of the files from the\nfull distribution, for downloading convenience. If you download all four\nof them and unpack them into the same directory you will get exactly what\nyou would have gotten had you downloaded the full distribution.\n\nThe -base package is the only one that is required for successful\ninstallation. 
It contains the server and the essential client interfaces.\nThe -opt package contains all parts whose compilation needs to be enabled\nexplicitly. This includes the C++, JDBC, ODBC, Perl, Python, and Tcl\ninterfaces, as well as multibyte support. The -docs package contains the\ndocumentation in HTML format (man pages are in -base) and the\ndocumentation sources. You don't need to download this package if you\nintend to browse the documentation on the web. Finally, the -test package\ncontains the regression test suite.\n\n(Note, this scheme is new as of version 7.1RC4. Previous versions used a\ndifferent, incompatible split where all subpackages were required.)\n\n===================\n\n",
"msg_date": "Mon, 9 Apr 2001 08:51:08 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Split Distro"
}
] |
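The README's claim — unpacking all four subpackages into the same directory reproduces the full distribution — can be illustrated with a small Python sketch. The file lists below are invented miniatures standing in for the real package contents:

```python
import os
import tarfile
import tempfile

# Illustrative file lists only -- the real subpackages are far larger.
SUBPACKAGES = {
    "postgresql-base-7.1.tar.gz": ["src/backend/main.c", "src/bin/psql.c"],
    "postgresql-opt-7.1.tar.gz":  ["src/interfaces/jdbc/Driver.java"],
    "postgresql-docs-7.1.tar.gz": ["doc/html/index.html"],
    "postgresql-test-7.1.tar.gz": ["src/test/regress/sql/int4.sql"],
}

def build_and_unpack(workdir: str) -> set:
    """Create the four stand-in tarballs, unpack them all into one
    directory (as the README instructs), and return the merged tree."""
    dest = os.path.join(workdir, "postgresql-7.1")
    os.makedirs(dest, exist_ok=True)
    for name, members in SUBPACKAGES.items():
        path = os.path.join(workdir, name)
        with tarfile.open(path, "w:gz") as tar:
            for member in members:
                full = os.path.join(workdir, "stage", member)
                os.makedirs(os.path.dirname(full), exist_ok=True)
                open(full, "w").close()
                tar.add(full, arcname=member)
        with tarfile.open(path, "r:gz") as tar:
            tar.extractall(dest)       # all four land in the same tree
    merged = set()
    for root, _, files in os.walk(dest):
        for f in files:
            merged.add(os.path.relpath(os.path.join(root, f), dest))
    return merged

with tempfile.TemporaryDirectory() as tmp:
    merged = build_and_unpack(tmp)

# The union of the subpackages is exactly the full distribution.
expected = {m for members in SUBPACKAGES.values() for m in members}
print(merged == expected)    # True
```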
[
{
"msg_contents": "I have in my possession a HP9000/433s box. It doesn't have an OS on it\nyet, but will probably have NetBSD 1.5 on it soon. Do we need PG\ntested on it?\n\nThis is a 33Mhz MC68040 with 128Meg RAM...\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 9 Apr 2001 08:22:01 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "M68K..."
},
{
"msg_contents": "> I have in my posession a HP9000/433s box. It doesn't have an OS on it\n> yet, but will probably have NetBSD 1.5 on it soon. Do we need PG\n> tested on it?\n\nYes!\n\n> This is a 33Mhz MC68040 with 128Meg RAM...\n\nBetter start now, it will take a while ;)\n\n - Thomas\n",
"msg_date": "Mon, 09 Apr 2001 14:40:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: M68K..."
},
{
"msg_contents": "\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 4/9/01, 9:40:16 AM, Thomas Lockhart <lockhart@alumni.caltech.edu> wrote \nregarding [HACKERS] Re: M68K...:\n\n\n> > I have in my posession a HP9000/433s box. It doesn't have an OS on it\n> > yet, but will probably have NetBSD 1.5 on it soon. Do we need PG\n> > tested on it?\n\n> Yes!\n\n> > This is a 33Mhz MC68040 with 128Meg RAM...\n\n> Better start now, it will take a while ;)\n\nNo Kidding. Not gonna get done for the Release on 4/13/2001, but I'll \nreport back as soon as I get it up and stable. \n\nOr does someone else want an account once I get the Base OS installed? \n\nLER\n\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "Mon, 09 Apr 2001 15:37:23 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: M68K..."
}
] |
[
{
"msg_contents": "DISCLAIMER: don't take this as MySQL flaming (it isn't) or personally; these \nare just my observations on an application, not a benchmark.\n\nToday I tried a quite simple, mostly-write database (HTTP logging). \n* Postgres peaked at 709 inserts/sec (committed after 3 seconds or 100 \ninserts, whichever comes first)\n* MySQL peaked at 735 inserts/sec (no transactions)\n\nHowever, MySQL completely choked when trying to query something useful \nout of the database while the inserts were running (at full speed). Postgres \nworked like a charm. The only real advantage of MySQL was a simple \"select \ncount(1) from logs\", which MySQL answered immediately, while Postgres did a \nfull table scan.\n\nFor querying the DB, Postgres won; my queries ran about 12% faster in \nPostgres than MySQL.\n\nGiven the fact that the \"one-user\" case was MySQL's real advantage up to now, \nPostgres 7.1 will be an important milestone.\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n",
"msg_date": "Mon, 9 Apr 2001 20:26:54 +0200",
"msg_from": "Mario Weilguni <mweilguni@sime.com>",
"msg_from_op": true,
"msg_subject": "MySQL vs. Postgres - congratulations to the postgres team"
}
] |
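The commit policy Mario used for the Postgres run (commit after 3 seconds or 100 inserts, whichever comes first) is a generic batching pattern. A minimal sketch, with the `flush` callback standing in for issuing COMMIT — the class name and numbers are illustrative, not from his test harness:

```python
import time

class BatchCommitter:
    """Commit after `max_rows` rows or `max_age` seconds, whichever
    comes first -- the policy used for the insert benchmark above."""

    def __init__(self, flush, max_rows=100, max_age=3.0, clock=time.monotonic):
        self.flush = flush                # callable that issues the COMMIT
        self.max_rows = max_rows
        self.max_age = max_age
        self.clock = clock
        self.pending = []
        self.started = None               # when the current batch opened

    def insert(self, row):
        if self.started is None:
            self.started = self.clock()   # first row starts the timer
        self.pending.append(row)
        if (len(self.pending) >= self.max_rows
                or self.clock() - self.started >= self.max_age):
            self.flush(self.pending)
            self.pending = []
            self.started = None

batches = []
c = BatchCommitter(batches.append, max_rows=3, max_age=9999)
for row in range(7):
    c.insert(row)
print(batches)    # [[0, 1, 2], [3, 4, 5]] -- row 6 waits for the next trigger
```

A production version would also need a background timer so an idle partial batch still gets flushed once `max_age` passes; here the age check only runs when a new row arrives.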
[
{
"msg_contents": "Excessively long values are currently silently truncated when they are\ninserted into char or varchar fields. This makes the entire notion of\nspecifying a length limit for these types kind of useless, IMO. Needless\nto say, it's also not in compliance with SQL.\n\nHow do people feel about changing this to raise an error in this\nsituation? Does anybody rely on silent truncation? Should this be\nuser-settable, or can those people resort to using triggers?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 9 Apr 2001 21:20:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Truncation of char, varchar types"
},
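The two behaviours Peter contrasts — silent truncation (what 7.1 does) versus raising an error (what SQL requires) — can be sketched as follows; the function names and the error text are illustrative, not PostgreSQL internals:

```python
def store_varchar_silent(value: str, limit: int) -> str:
    """Current behaviour: quietly chop off the excess characters."""
    return value[:limit]

def store_varchar_strict(value: str, limit: int) -> str:
    """SQL-conforming behaviour: reject values that do not fit."""
    if len(value) > limit:
        raise ValueError(
            f"value too long for type character varying({limit})")
    return value

print(store_varchar_silent("PostgreSQL", 8))   # PostgreS -- data silently lost
try:
    store_varchar_strict("PostgreSQL", 8)
except ValueError as e:
    print(e)                                   # value too long for type ...
```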
{
"msg_contents": "\nAfter v7.1 is released ... ?\n\nOn Mon, 9 Apr 2001, Peter Eisentraut wrote:\n\n> Excessively long values are currently silently truncated when they are\n> inserted into char or varchar fields. This makes the entire notion of\n> specifying a length limit for these types kind of useless, IMO. Needless\n> to say, it's also not in compliance with SQL.\n>\n> How do people feel about changing this to raise an error in this\n> situation? Does anybody rely on silent truncation? Should this be\n> user-settable, or can those people resort to using triggers?\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Mon, 9 Apr 2001 16:27:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of char, varchar types"
},
{
"msg_contents": "On Mon, Apr 09, 2001 at 09:20:42PM +0200, Peter Eisentraut wrote:\n> Excessively long values are currently silently truncated when they are\n> inserted into char or varchar fields. This makes the entire notion of\n> specifying a length limit for these types kind of useless, IMO. Needless\n> to say, it's also not in compliance with SQL.\n> \n> How do people feel about changing this to raise an error in this\n> situation? Does anybody rely on silent truncation? Should this be\n> user-settable, or can those people resort to using triggers?\n\nYes, detecting and reporting errors early is a Good Thing. You don't \ndo anybody any favors by pretending to save data, but really throwing \nit away.\n\nWe have noticed here also that object (e.g. table) names get truncated \nin some places and not others. If you create a table with a long name, \nPG truncates the name and creates a table with the shorter name; but \nif you refer to the table by the same long name, PG reports an error. \n(Very long names may show up in machine-generated schemas.) Would \npatches for this, e.g. to refuse to create a table with an impossible \nname, be welcome? \n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 9 Apr 2001 13:30:26 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Truncation of char, varchar types"
},
{
"msg_contents": "Nathan Myers wrote:\n\n> (Very long names may show up in machine-generated schemas.) Would\n> patches for this, e.g. to refuse to create a table with an impossible\n> name, be welcome?\n\nYes. And throw into the picture also the length of sequence names coming from\nSERIALs, etc.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 10 Apr 2001 14:44:34 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of char, varchar types"
},
{
"msg_contents": "Nathan Myers writes:\n\n> We have noticed here also that object (e.g. table) names get truncated\n> in some places and not others. If you create a table with a long name,\n> PG truncates the name and creates a table with the shorter name; but\n> if you refer to the table by the same long name, PG reports an error.\n\nThis seems odd, because the truncation happens in the scanner. Care to\nprovide a test case?\n\n> (Very long names may show up in machine- generated schemas.) Would\n> patches for this, e.g. to refuse to create a table with an impossible\n> name, be welcome?\n\nTom Lane is opposed to this, although a number of people seem to like it.\nSounds like a configuration option to me.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 10 Apr 2001 19:05:51 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Truncation of char, varchar types"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> We have noticed here also that object (e.g. table) names get truncated \n> in some places and not others. If you create a table with a long name, \n> PG truncates the name and creates a table with the shorter name; but \n> if you refer to the table by the same long name, PG reports an error.\n\nExample please? This is clearly a bug. It is also demonstrably not\nthe case in ordinary scenarios:\n\nplay=> create table a1234567890123456789012345678901234567890(f1 int);\nNOTICE: identifier \"a1234567890123456789012345678901234567890\" will be truncated to \"a123456789012345678901234567890\"\nCREATE\nplay=> select * from a1234567890123456789012345678901234567890;\nNOTICE: identifier \"a1234567890123456789012345678901234567890\" will be truncated to \"a123456789012345678901234567890\"\n f1\n----\n(0 rows)\n\nplay=> select * from \"a1234567890123456789012345678901234567890\";\nNOTICE: identifier \"a1234567890123456789012345678901234567890\" will be truncated to \"a123456789012345678901234567890\"\n f1\n----\n(0 rows)\n\nI have a vague recollection that we found/fixed one or more such bugs in\nisolated contexts during 7.1 development, so the issue may be gone\nalready.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 01:16:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of char, varchar types "
},
{
"msg_contents": "On Fri, Apr 13, 2001 at 01:16:43AM -0400, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > We have noticed here also that object (e.g. table) names get truncated \n> > in some places and not others. If you create a table with a long name, \n> > PG truncates the name and creates a table with the shorter name; but \n> > if you refer to the table by the same long name, PG reports an error.\n> \n> Example please? This is clearly a bug. \n\nSorry, false alarm. When I got the test case, it turned out to\nbe the more familiar problem:\n\n create table foo_..._bar1 (id1 ...);\n [notice, \"foo_..._bar1\" truncated to \"foo_..._bar\"]\n create table foo_..._bar (id2 ...);\n [error, foo_..._bar already exists]\n create index foo_..._bar_ix on foo_..._bar(id2);\n [notice, \"foo_..._bar_ix\" truncated to \"foo_..._bar\"]\n [error, foo_..._bar already exists]\n [error, attribute \"id2\" not found]\n\nIt would be more helpful for the first \"create\" to fail so we don't \nend up cluttered with objects that shouldn't exist, and which interfere\nwith operations on objects which should.\n\nBut I'm not proposing that for 7.1.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Apr 2001 10:50:31 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Truncation of object names"
},
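Nathan's failure sequence follows directly from 31-character truncation (NAMEDATALEN of 32, minus the terminator) making distinct names collide. A toy model of the two policies under discussion — 7.1's truncate-then-check versus the proposed refuse-to-create — with `catalog` as a hypothetical stand-in for the system catalog:

```python
NAMEDATALEN = 32          # compile-time default; identifiers keep 31 chars

def truncate_name(name: str) -> str:
    """What the scanner does today: silently drop the excess."""
    return name[:NAMEDATALEN - 1]

catalog = set()           # hypothetical stand-in for the system catalog

def create_silent(name: str) -> str:
    """7.1 behaviour: truncate first, complain only about collisions."""
    short = truncate_name(name)
    if short in catalog:
        return f'error: "{short}" already exists'
    catalog.add(short)
    return f'created "{short}"'

def create_strict(name: str) -> str:
    """Proposed behaviour: refuse a name that cannot be stored intact."""
    if len(name) > NAMEDATALEN - 1:
        return f'error: name "{name}" is too long'
    return create_silent(name)

base = "foo_" + "x" * 27             # exactly 31 characters
print(create_silent(base + "1"))     # created under `base` -- suffix lost
print(create_silent(base))           # error: already exists (the collision)
print(create_strict(base + "_ix"))   # error: too long; nothing is created
```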
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Sorry, false alarm. When I got the test case, it turned out to\n> be the more familiar problem:\n\n> create table foo_..._bar1 (id1 ...);\n> [notice, \"foo_..._bar1\" truncated to \"foo_..._bar\"]\n> create table foo_..._bar (id2 ...);\n> [error, foo_..._bar already exists]\n> create index foo_..._bar_ix on foo_..._bar(id2);\n> [notice, \"foo_..._bar_ix\" truncated to \"foo_..._bar\"]\n> [error, foo_..._bar already exists]\n> [error, attribute \"id2\" not found]\n\n> It would be more helpful for the first \"create\" to fail so we don't \n> end up cluttered with objects that shouldn't exist, and which interfere\n> with operations on objects which should.\n\nSeems to me that if you want a bunch of CREATEs to be mutually\ndependent, then you wrap them all in a BEGIN/END block.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 14:54:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "On Fri, Apr 13, 2001 at 02:54:47PM -0400, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > Sorry, false alarm. When I got the test case, it turned out to\n> > be the more familiar problem:\n> \n> > create table foo_..._bar1 (id1 ...);\n> > [notice, \"foo_..._bar1\" truncated to \"foo_..._bar\"]\n> > create table foo_..._bar (id2 ...);\n> > [error, foo_..._bar already exists]\n> > create index foo_..._bar_ix on foo_..._bar(id2);\n> > [notice, \"foo_..._bar_ix\" truncated to \"foo_..._bar\"]\n> > [error, foo_..._bar already exists]\n> > [error, attribute \"id2\" not found]\n> \n> > It would be more helpful for the first \"create\" to fail so we don't \n> > end up cluttered with objects that shouldn't exist, and which interfere\n> > with operations on objects which should.\n> \n> Seems to me that if you want a bunch of CREATEs to be mutually\n> dependent, then you wrap them all in a BEGIN/END block.\n\nYes, but... The second and third commands weren't supposed to be \nrelated to the first at all, never mind dependent on it. They were \nmade dependent by PG crushing the names together.\n\nWe are thinking about working around the name length limitation \n(encountered in migrating from other dbs) by allowing \"foo.bar.baz\" \nname syntax, as a sort of rudimentary namespace mechanism. It ain't\nschemas, but it's better than \"foo__bar__baz\".\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Apr 2001 13:12:38 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n>> Seems to me that if you want a bunch of CREATEs to be mutually\n>> dependent, then you wrap them all in a BEGIN/END block.\n\n> Yes, but... The second and third commands weren't supposed to be \n> related to the first at all, never mind dependent on it. They were \n> made dependent by PG crushing the names together.\n\nGood point.\n\n> We are thinking about working around the name length limitation \n> (encountered in migrating from other dbs) by allowing \"foo.bar.baz\" \n> name syntax, as a sort of rudimentary namespace mechanism.\n\nHave you thought about simply increasing NAMEDATALEN in your\ninstallation? If you really are generating names that aren't unique\nin 31 characters, that seems like the way to go ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 16:27:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "On Fri, 13 Apr 2001, Tom Lane wrote:\n\n> ncm@zembu.com (Nathan Myers) writes:\n> >> Seems to me that if you want a bunch of CREATEs to be mutually\n> >> dependent, then you wrap them all in a BEGIN/END block.\n> \n> > Yes, but... The second and third commands weren't supposed to be \n> > related to the first at all, never mind dependent on it. They were \n> > made dependent by PG crushing the names together.\n> \n> Good point.\n> \n> > We are thinking about working around the name length limitation \n> > (encountered in migrating from other dbs) by allowing \"foo.bar.baz\" \n> > name syntax, as a sort of rudimentary namespace mechanism.\n> \n> Have you thought about simply increasing NAMEDATALEN in your\n> installation? If you really are generating names that aren't unique\n> in 31 characters, that seems like the way to go ...\n\nTom (or others) --\n\nOther than (a) it wastes a bit of space in the pg_ tables, and (b) it may\nscrew up postgresql utility programs (pgaccess, pgadmin, etc.), is there\nany reason to keep the default at 32? Are there performance limitations?\n(Will C-based triggers and client programs and such need to be modified?)\n\nThough I don't think that my tables are incredibly verbose, autogenerated\nsequence and index names often push the limit. The problem w/everyone\ncompiling it at a higher number is that it makes it difficult to\ntransparently move a PG database from one server to another.\n\nThanks!\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Fri, 13 Apr 2001 16:33:09 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n>> Have you thought about simply increasing NAMEDATALEN in your\n>> installation? If you really are generating names that aren't unique\n>> in 31 characters, that seems like the way to go ...\n\n> Other than (a) it wastes a bit of space in the pg_ tables, and (b) it may\n> screw up postgresql utility programs (pgaccess, pgadmin, etc.), is there\n> any reason to keep the default at 32? Are there performance limitations?\n\nThose are pretty much the reasons, plus a compatibility issue:\nNAMEDATALEN *is* visible to clients (that's why it's in postgres_ext.h).\nSo changing the default value would risk breaking clients that hadn't\nbeen recompiled.\n\n> (Will C-based triggers and client programs and such need to be modified?)\n\nNot if they've been properly coded (written in terms of NAMEDATALEN not\na hard constant).\n\nObviously, these objections are not strong enough to keep us from\nincreasing the standard value of NAMEDATALEN if it seems that many\npeople are running into the limit. But AFAICT relatively few people\nhave such problems, and I'm hesitant to make everyone deal with a change\nfor the benefit of a few. Count me as a weak vote for leaving it where\nit is ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 16:55:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "On Fri, Apr 13, 2001 at 04:27:15PM -0400, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > We are thinking about working around the name length limitation \n> > (encountered in migrating from other dbs) by allowing \"foo.bar.baz\" \n> > name syntax, as a sort of rudimentary namespace mechanism.\n> \n> Have you thought about simply increasing NAMEDATALEN in your\n> installation? If you really are generating names that aren't unique\n> in 31 characters, that seems like the way to go ...\n\nWe discussed that, and will probably do it (too).\n\nOne problem is that, having translated \"foo.bar.baz\" to \"foo_bar_baz\", \nyou have a problem when you encounter \"foo.bar_baz\" in subsequent code.\nI.e., a separate delimiter character helps, even when name length isn't \nan issue. Also, accepting the names as they appear in the source code \nalready means the number of changes needed is much smaller, even when\nyou don't have true schema support. \n\nNathan Myers\nncm@zembu.com\n\n",
"msg_date": "Fri, 13 Apr 2001 13:59:29 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names"
},
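Nathan's objection to folding "foo.bar.baz" into "foo_bar_baz" is that the underscore is already legal inside identifiers, so the mapping is not invertible. A two-line demonstration of the collision:

```python
def mangle(dotted: str) -> str:
    """Lossy workaround: fold the namespace dot into an underscore."""
    return dotted.replace(".", "_")

# Distinct in the source schema...
a = "foo.bar_baz"
b = "foo_bar.baz"

# ...but identical after mangling, because "_" already occurs inside
# identifiers.  A delimiter reserved for namespaces avoids the clash.
print(mangle(a), mangle(b), mangle(a) == mangle(b))   # foo_bar_baz foo_bar_baz True
```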
{
"msg_contents": "On Fri, 13 Apr 2001, Tom Lane wrote:\n\n> Obviously, these objections are not strong enough to keep us from\n> increasing the standard value of NAMEDATALEN if it seems that many\n> people are running into the limit. But AFAICT relatively few people\n> have such problems, and I'm hesitant to make everyone deal with a change\n> for the benefit of a few. Count me as a weak vote for leaving it where\n> it is ...\n\nHmm... Of course, it's Bad to break things if one doesn't have to. But\n(IMHO) it's also bad to leave it at a setting that makes some group of\npeople (~ 3%?) have to recompile it, and a larger group (~ 10%) wish they\ndid/knew how to. (I, in general, share your hesitancy to break something\nfor the benefit of the few, 'cept I'm one of the few this time. ;-) )\n\nFor some changes, one could just prewarn the world that This Is Coming,\nand they should anticipate it with 6 months notice or such. In this case,\nthough, it would seem that knowing it was coming wouldn't help any --\nyou'd still have to recompile your client for the 32char names and the 64\n(?) char names, during the 7.1 -> 7.2 (or 7.5 -> 8.0 or\nwhatever) transition period.\n\nI'd like to see it longer -- is there any sane way of doing this with\nnotice, or, as I fear, would it always be a pain, regardless of how much\nadvance notice the world rec'd?\n\nThanks,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Fri, 13 Apr 2001 17:08:46 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> On Fri, Apr 13, 2001 at 04:27:15PM -0400, Tom Lane wrote:\n>> Have you thought about simply increasing NAMEDATALEN in your\n>> installation? If you really are generating names that aren't unique\n>> in 31 characters, that seems like the way to go ...\n\n> We discussed that, and will probably do it (too).\n\n> One problem is that, having translated \"foo.bar.baz\" to \"foo_bar_baz\", \n> you have a problem when you encounter \"foo.bar_baz\" in subsequent code.\n\nSo it's not really so much that NAMEDATALEN is too short for your\nindividual names, it's that you are concatenating names as a workaround\nfor the lack of schema support.\n\nFWIW, I believe schemas are very high on the priority list for 7.2 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 17:45:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Truncation of object names "
},
{
"msg_contents": "Call me thick as two planks, but when you guys constantly refer to 'schema\nsupport' in PostgreSQL, what exactly are you referring to?\n\nChris\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\nSent: Saturday, 14 April 2001 5:46 AM\nTo: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Truncation of object names\n\n\nncm@zembu.com (Nathan Myers) writes:\n> On Fri, Apr 13, 2001 at 04:27:15PM -0400, Tom Lane wrote:\n>> Have you thought about simply increasing NAMEDATALEN in your\n>> installation? If you really are generating names that aren't unique\n>> in 31 characters, that seems like the way to go ...\n\n> We discussed that, and will probably do it (too).\n\n> One problem is that, having translated \"foo.bar.baz\" to \"foo_bar_baz\",\n> you have a problem when you encounter \"foo.bar_baz\" in subsequent code.\n\nSo it's not really so much that NAMEDATALEN is too short for your\nindividual names, it's that you are concatenating names as a workaround\nfor the lack of schema support.\n\nFWIW, I believe schemas are very high on the priority list for 7.2 ...\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://www.postgresql.org/search.mpl\n\n",
"msg_date": "Tue, 17 Apr 2001 10:16:41 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Truncation of object names "
}
] |
[
{
"msg_contents": "Hello there,\n\nI first ran configure with the following options\n\n ./configure --with-perl --with-tcl --enable-odbc --with-java \n--enable-syslog --enable-debug\n\nand then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\nwith the exceptions in JDBC and Perl modules as\nindicated below.\n\n-------------------------------------------------------------\n\ngmake[3]: Entering directory \n`/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc'\n/usr/jakarta/jakarta-ant/bin/ant -buildfile ../../../build.xml -Dmajor=7 \n-Dminor=1 -Dfullversion=7.1rc4 -Ddef_pgport=5432\nBuildfile: ../../../build.xml\n\njar:\n\ncall:\n\nprepare:\n\ncheck_versions:\n\ndriver:\nConfigured build for the JDBC2 edition driver.\n\ncompile:\n [javac] Compiling 41 source files to \n/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build\n [javac] Modern compiler is not available - using classic compiler\n\nBUILD FAILED\n\n/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99: \nCannot use classic compiler, as it is not available\n\nTotal time: 0 seconds\n\n-----------------------------------------------------------------\n\n <!-- This is the core of the driver. 
It is common for all three \nversions -->\n\n <target name=\"compile\" depends=\"prepare,check_versions,driver\">\n\n <!-- **** The following is line 99 of build.xml ******* -->\n <javac srcdir=\"${src}\" destdir=\"${dest}\">\n\n <include name=\"${package}/**\" />\n <exclude name=\"${package}/core/ConnectionHook.java\" \nunless=\"jdk1.3+\" />\n <exclude name=\"${package}/jdbc1/**\" if=\"jdk1.2+\" />\n <exclude name=\"${package}/jdbc2/**\" unless=\"jdk1.2+\" />\n <exclude name=\"${package}/largeobject/PGblob.java\" \nunless=\"jdk1.2+\" />\n <exclude name=\"${package}/largeobject/PGclob.java\" \nunless=\"jdk1.2+\" />\n <exclude name=\"${package}/PostgresqlDataSource.java\" \nunless=\"jdk1.2e+\" />\n <exclude name=\"${package}/xa/**\" unless=\"jdk1.2e+\" />\n <exclude name=\"${package}/test/**\" unless=\"junit\" />\n </javac>\n <copy todir=\"${dest}\" overwrite=\"true\" filtering=\"on\">\n <fileset dir=\"${src}\">\n <include name=\"**/*.properties\" />\n <exclude name=\"${dest}/**\" />\n </fileset>\n </copy>\n </target>\n\n\nI have both j2se version 1.3 and ant installed on the machine.\n\n----------------------------------------------------------------\ngmake[4]: Entering directory \n`/usr/pgsql-pkg/postgresql-7.1rc4/src/pl/plperl'\n*****\n* Cannot build PL/Perl because libperl is not a shared library.\n* Skipped.\n*****\n\nIt seems like that the compiler does not like the fact that\n\n/usr/lib/perl5/5.6.0/i386-linux/CORE/libperl.a\n\nis not a shared object.\n-----------------------------------------------------\n\nYour comments to resolve these issues is greatly\nappreciated.\n\nBTW, rserv module in contrib directory now compiles\nbeautifully.\n\nRegards,\nHY\n\n",
"msg_date": "Mon, 09 Apr 2001 15:33:19 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "This sounds like the problem with the version of gcc that is included with\nrh7.0\n\nIf you don't want to upgrade gcc to a newer version, I think you can fix the\nproblem by \"mv\"ing gcc to brokengcc and then creating a new symlink\ngcc to kgcc. Redhat included a non-broken gcc in the distro and called it\nkgcc.\n\n\t~c\n\n~ -----Original Message-----\n~ From: pgsql-general-owner@postgresql.org\n~ [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Homayoun\n~ Yousefi'zadeh\n~ Sent: Monday, April 09, 2001 6:33 PM\n~ To: pgsql-general@postgresql.org\n~ Subject: [GENERAL] JDBC and Perl compiling problems w/ postgresql-7.1rc4\n~\n~\n~ Hello there,\n~\n~ I first ran configure with the following options\n~\n~ ./configure --with-perl --with-tcl --enable-odbc --with-java\n~ --enable-syslog --enable-debug\n~\n~ and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n~ with the exceptions in JDBC and Perl modules as\n~ indicated below.\n~\n~ -------------------------------------------------------------\n~\n~ gmake[3]: Entering directory\n~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc'\n~ /usr/jakarta/jakarta-ant/bin/ant -buildfile ../../../build.xml -Dmajor=7\n~ -Dminor=1 -Dfullversion=7.1rc4 -Ddef_pgport=5432\n~ Buildfile: ../../../build.xml\n~\n~ jar:\n~\n~ call:\n~\n~ prepare:\n~\n~ check_versions:\n~\n~ driver:\n~ Configured build for the JDBC2 edition driver.\n~\n~ compile:\n~ [javac] Compiling 41 source files to\n~ /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build\n~ [javac] Modern compiler is not available - using classic compiler\n~\n~ BUILD FAILED\n~\n~ /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99:\n~ Cannot use classic compiler, as it is not available\n~\n~ Total time: 0 seconds\n~\n~ -----------------------------------------------------------------\n~\n~ <!-- This is the core of the driver. 
It is common for all three\n~ versions -->\n~\n~ <target name=\"compile\" depends=\"prepare,check_versions,driver\">\n~\n~ <!-- **** The following is line 99 of build.xml ******* -->\n~ <javac srcdir=\"${src}\" destdir=\"${dest}\">\n~\n~ <include name=\"${package}/**\" />\n~ <exclude name=\"${package}/core/ConnectionHook.java\"\n~ unless=\"jdk1.3+\" />\n~ <exclude name=\"${package}/jdbc1/**\" if=\"jdk1.2+\" />\n~ <exclude name=\"${package}/jdbc2/**\" unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/largeobject/PGblob.java\"\n~ unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/largeobject/PGclob.java\"\n~ unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/PostgresqlDataSource.java\"\n~ unless=\"jdk1.2e+\" />\n~ <exclude name=\"${package}/xa/**\" unless=\"jdk1.2e+\" />\n~ <exclude name=\"${package}/test/**\" unless=\"junit\" />\n~ </javac>\n~ <copy todir=\"${dest}\" overwrite=\"true\" filtering=\"on\">\n~ <fileset dir=\"${src}\">\n~ <include name=\"**/*.properties\" />\n~ <exclude name=\"${dest}/**\" />\n~ </fileset>\n~ </copy>\n~ </target>\n~\n~\n~ I have both j2se version 1.3 and ant installed on the machine.\n~\n~ ----------------------------------------------------------------\n~ gmake[4]: Entering directory\n~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/pl/plperl'\n~ *****\n~ * Cannot build PL/Perl because libperl is not a shared library.\n~ * Skipped.\n~ *****\n~\n~ It seems like that the compiler does not like the fact that\n~\n~ /usr/lib/perl5/5.6.0/i386-linux/CORE/libperl.a\n~\n~ is not a shared object.\n~ -----------------------------------------------------\n~\n~ Your comments to resolve these issues is greatly\n~ appreciated.\n~\n~ BTW, rserv module in contrib directory now compiles\n~ beautifully.\n~\n~ Regards,\n~ HY\n~\n~\n~ ---------------------------(end of broadcast)---------------------------\n~ TIP 4: Don't 'kill -9' the postmaster\n~\n\n",
"msg_date": "Mon, 9 Apr 2001 19:31:25 -0400",
"msg_from": "\"Charlie Derr\" <charliederr@organicmeat.net>",
"msg_from_op": false,
"msg_subject": "RE: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "Charlie Derr wrote:\n\n> This sounds like the problem with the version of gcc that is included with\n> rh7.0\n> \n> If you don't want to upgrade gcc to a newer version, I think you can fix the\n> problem by \"mv\"ing gcc to brokengcc and then creating creating a new symlink\n> gcc to kgcc. Redhat included a non-broken gcc in the distro and called it\n> kgcc.\n\nI did what you suggested and nothing changed.\nActually, JDBC problem seems to be ant related\nas it did not exist w/ version 7.0.3.\n\nThe perl problem is definitely coming from libperl.a\nfile as specifically mentioned in the Makefile.\n\nSuggestions??\n\nRegards,\nHY\n\n\n> ~ -----Original Message-----\n> ~ From: pgsql-general-owner@postgresql.org\n> ~ [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Homayoun\n> ~ Yousefi'zadeh\n> ~ Sent: Monday, April 09, 2001 6:33 PM\n> ~ To: pgsql-general@postgresql.org\n> ~ Subject: [GENERAL] JDBC and Perl compiling problems w/ postgresql-7.1rc4\n> ~\n> ~\n> ~ Hello there,\n> ~\n> ~ I first ran configure with the following options\n> ~\n> ~ ./configure --with-perl --with-tcl --enable-odbc --with-java\n> ~ --enable-syslog --enable-debug\n> ~\n> ~ and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n> ~ with the exceptions in JDBC and Perl modules as\n> ~ indicated below.\n> ~\n> ~ -------------------------------------------------------------\n> ~\n> ~ gmake[3]: Entering directory\n> ~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc'\n> ~ /usr/jakarta/jakarta-ant/bin/ant -buildfile ../../../build.xml -Dmajor=7\n> ~ -Dminor=1 -Dfullversion=7.1rc4 -Ddef_pgport=5432\n> ~ Buildfile: ../../../build.xml\n> ~\n> ~ jar:\n> ~\n> ~ call:\n> ~\n> ~ prepare:\n> ~\n> ~ check_versions:\n> ~\n> ~ driver:\n> ~ Configured build for the JDBC2 edition driver.\n> ~\n> ~ compile:\n> ~ [javac] Compiling 41 source files to\n> ~ /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build\n> ~ [javac] Modern compiler is not available - using classic 
compiler\n> ~\n> ~ BUILD FAILED\n> ~\n> ~ /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99:\n> ~ Cannot use classic compiler, as it is not available\n> ~\n> ~ Total time: 0 seconds\n> ~\n> ~ -----------------------------------------------------------------\n> ~\n> ~ <!-- This is the core of the driver. It is common for all three\n> ~ versions -->\n> ~\n> ~ <target name=\"compile\" depends=\"prepare,check_versions,driver\">\n> ~\n> ~ <!-- **** The following is line 99 of build.xml ******* -->\n> ~ <javac srcdir=\"${src}\" destdir=\"${dest}\">\n> ~\n> ~ <include name=\"${package}/**\" />\n> ~ <exclude name=\"${package}/core/ConnectionHook.java\"\n> ~ unless=\"jdk1.3+\" />\n> ~ <exclude name=\"${package}/jdbc1/**\" if=\"jdk1.2+\" />\n> ~ <exclude name=\"${package}/jdbc2/**\" unless=\"jdk1.2+\" />\n> ~ <exclude name=\"${package}/largeobject/PGblob.java\"\n> ~ unless=\"jdk1.2+\" />\n> ~ <exclude name=\"${package}/largeobject/PGclob.java\"\n> ~ unless=\"jdk1.2+\" />\n> ~ <exclude name=\"${package}/PostgresqlDataSource.java\"\n> ~ unless=\"jdk1.2e+\" />\n> ~ <exclude name=\"${package}/xa/**\" unless=\"jdk1.2e+\" />\n> ~ <exclude name=\"${package}/test/**\" unless=\"junit\" />\n> ~ </javac>\n> ~ <copy todir=\"${dest}\" overwrite=\"true\" filtering=\"on\">\n> ~ <fileset dir=\"${src}\">\n> ~ <include name=\"**/*.properties\" />\n> ~ <exclude name=\"${dest}/**\" />\n> ~ </fileset>\n> ~ </copy>\n> ~ </target>\n> ~\n> ~\n> ~ I have both j2se version 1.3 and ant installed on the machine.\n> ~\n> ~ ----------------------------------------------------------------\n> ~ gmake[4]: Entering directory\n> ~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/pl/plperl'\n> ~ *****\n> ~ * Cannot build PL/Perl because libperl is not a shared library.\n> ~ * Skipped.\n> ~ *****\n> ~\n> ~ It seems like that the compiler does not like the fact that\n> ~\n> ~ /usr/lib/perl5/5.6.0/i386-linux/CORE/libperl.a\n> ~\n> ~ is not a shared object.\n> ~ 
-----------------------------------------------------\n> ~\n> ~ Your comments to resolve these issues is greatly\n> ~ appreciated.\n> ~\n> ~ BTW, rserv module in contrib directory now compiles\n> ~ beautifully.\n> ~\n> ~ Regards,\n> ~ HY\n> ~\n> ~\n> ~ ---------------------------(end of broadcast)---------------------------\n> ~ TIP 4: Don't 'kill -9' the postmaster\n> ~\n\n",
"msg_date": "Mon, 09 Apr 2001 16:43:03 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com> writes:\n\n> I did what you suggested and nothing changed.\n> Actually, JDBC problem seems to be ant related\n> as it did not exist w/ version 7.0.3.\n\nYou might want to double-check that JAVAHOME (sp?) is set before you\nmake. I had problems building with Ant until I figured that out. If\nyou look in the Ant docs it tells you how to set that variable (I\nthink).\n\n> The perl problem is definitely coming from libperl.a\n> file as specifically mentioned in the Makefile.\n\nNo solution for this except to get the Perl sources, configure it to\nbuild a shared libperl.so, and build and install the whole thing.\nNone of the RPM packages that I know of supply a shared library for\nPerl.\n\n-Doug\n",
"msg_date": "09 Apr 2001 22:09:07 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: JDBC and Perl compiling problems w/\n\tpostgresql-7.1rc4"
},
{
"msg_contents": "Doug McNaught wrote:\n\n \n>> I did what you suggested and nothing changed.\n>> Actually, JDBC problem seems to be ant related\n>> as it did not exist w/ version 7.0.3.\n> \n> \n> You might want to double-check that JAVAHOME (sp?) is set before you\n> make. I had problems building with Ant until I figured that out. If\n> you look in the Ant docs it tells you how to set that variable (I\n> think).\n\nThanks for the response. I actually went thru\nthe full exercise when I was compiling the Tomcat\nengine with Ant. Everything seems to be set up\nproperly. This is part of the /etc/profile file\nthat shows the settings of the environment variables.\n\nPATH=\"$PATH:/usr/X11R6/bin:/usr/jbuilder4/bin:/usr/jdk1.3/bin:/usr/jakarta/jakarta-ant/bin:/usr/j2ee1.3/bin:/usr/local/pgsql/bin\"\nMANPATH=$MANPATH:/usr/local/pgsql/man\n\nexport JAVA_HOME=/usr/jdk1.3\nexport JAVAHOME=/usr/jdk1.3\nexport J2EE_HOME=/usr/j2ee1.3\nexport ANT_HOME=/usr/jakarta/jakarta-ant\n\nexport JAKARTA_HOME=/usr/jakarta\nexport TOMCAT_HOME=/usr/jakarta/dist/tomcat\nexport LD_LIBRARY_PATH=/usr/local/pgsql/lib\nexport PGDATA=/usr/local/pgsql/data\n\nDid you use j2se 1.3_02 for Linux from Sun\nto build the driver?\n\nBTW, I did not have any problem building the\ndriver under version 7.0.3.\n\n\n>> The perl problem is definitely coming from libperl.a\n>> file as specifically mentioned in the Makefile.\n> \n> \n> No solution for this except to get the Perl sources, configure it to\n> build a shared libperl.so, and build and install the whole thing.\n> None of the RPM packages that I know of supply a shared library for\n> Perl.\n\nThis is what I was not hoping for.\nWhy does version 7.0.3 not have the problem?\n\nDo you guys suggest I go thru the exercise\nof building libperl.so, or is this going to\nbe fixed w/ the official release?\n\nBest,\nHY\n\n\n",
"msg_date": "Mon, 09 Apr 2001 19:41:10 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "\nYes, and also rerun ./configure --with-java < ... > after you set\nJAVA_HOME in your shell environment.\n\nNorm\n\n--------------------------------------\nNorman Clarke\nCombimatrix Corp Software Development\nHarbour Pointe Tech Center\n6500 Harbour Heights Pkwy, Suite 301\nMukilteo, WA 98275\n \ntel: 425.493.2240\nfax: 425.493.2010\n--------------------------------------\n\nOn 9 Apr 2001, Doug McNaught wrote:\n\n> You might want to double-check that JAVAHOME (sp?) is set before you\n> make. I had problems building with Ant until I figured that out. If\n> you look in the Ant docs it tells you how to set that variable (I\n> think).\n\n\n",
"msg_date": "Mon, 9 Apr 2001 19:46:07 -0700 (PDT)",
"msg_from": "\"Norman J. Clarke\" <norman@combimatrix.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] JDBC and Perl compiling problems w/\n\tpostgresql-7.1rc4"
},
{
"msg_contents": "With the Perl problem, it's saying you need a shared library version of\nperl, but the one on your system isn't.\n\nYou'll either need to install a shared library version of Perl then, or\ncompile PostgreSQL without the --with-perl option.\n\nRegards and best wishes,\n\nJustin Clift\n\nHomayoun Yousefi'zadeh wrote:\n> \n> Hello there,\n> \n> I first ran configure with the following options\n> \n> ./configure --with-perl --with-tcl --enable-odbc --with-java\n> --enable-syslog --enable-debug\n> \n> and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n> with the exceptions in JDBC and Perl modules as\n> indicated below.\n> \n> -------------------------------------------------------------\n> \n> gmake[3]: Entering directory\n> `/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc'\n> /usr/jakarta/jakarta-ant/bin/ant -buildfile ../../../build.xml -Dmajor=7\n> -Dminor=1 -Dfullversion=7.1rc4 -Ddef_pgport=5432\n> Buildfile: ../../../build.xml\n> \n> jar:\n> \n> call:\n> \n> prepare:\n> \n> check_versions:\n> \n> driver:\n> Configured build for the JDBC2 edition driver.\n> \n> compile:\n> [javac] Compiling 41 source files to\n> /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build\n> [javac] Modern compiler is not available - using classic compiler\n> \n> BUILD FAILED\n> \n> /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99:\n> Cannot use classic compiler, as it is not available\n> \n> Total time: 0 seconds\n> \n> -----------------------------------------------------------------\n> \n> <!-- This is the core of the driver. 
It is common for all three\n> versions -->\n> \n> <target name=\"compile\" depends=\"prepare,check_versions,driver\">\n> \n> <!-- **** The following is line 99 of build.xml ******* -->\n> <javac srcdir=\"${src}\" destdir=\"${dest}\">\n> \n> <include name=\"${package}/**\" />\n> <exclude name=\"${package}/core/ConnectionHook.java\"\n> unless=\"jdk1.3+\" />\n> <exclude name=\"${package}/jdbc1/**\" if=\"jdk1.2+\" />\n> <exclude name=\"${package}/jdbc2/**\" unless=\"jdk1.2+\" />\n> <exclude name=\"${package}/largeobject/PGblob.java\"\n> unless=\"jdk1.2+\" />\n> <exclude name=\"${package}/largeobject/PGclob.java\"\n> unless=\"jdk1.2+\" />\n> <exclude name=\"${package}/PostgresqlDataSource.java\"\n> unless=\"jdk1.2e+\" />\n> <exclude name=\"${package}/xa/**\" unless=\"jdk1.2e+\" />\n> <exclude name=\"${package}/test/**\" unless=\"junit\" />\n> </javac>\n> <copy todir=\"${dest}\" overwrite=\"true\" filtering=\"on\">\n> <fileset dir=\"${src}\">\n> <include name=\"**/*.properties\" />\n> <exclude name=\"${dest}/**\" />\n> </fileset>\n> </copy>\n> </target>\n> \n> I have both j2se version 1.3 and ant installed on the machine.\n> \n> ----------------------------------------------------------------\n> gmake[4]: Entering directory\n> `/usr/pgsql-pkg/postgresql-7.1rc4/src/pl/plperl'\n> *****\n> * Cannot build PL/Perl because libperl is not a shared library.\n> * Skipped.\n> *****\n> \n> It seems like that the compiler does not like the fact that\n> \n> /usr/lib/perl5/5.6.0/i386-linux/CORE/libperl.a\n> \n> is not a shared object.\n> -----------------------------------------------------\n> \n> Your comments to resolve these issues is greatly\n> appreciated.\n> \n> BTW, rserv module in contrib directory now compiles\n> beautifully.\n> \n> Regards,\n> HY\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take 
the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 10 Apr 2001 15:23:10 +1000",
"msg_from": "Justin Clift <jclift@iprimus.com.au>",
"msg_from_op": false,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com> writes:\n\n> Thanks for the response. I actually went thru\n> the full exercise when I was compiling Tomcat\n> engine with Ant. Every thing seems to be set up\n> properly. This is a part od /etc/profile file\n> that shows the settings of environmental variables.\n\nHmm, nothing obviously wrong there that I can see...\n\n> Did you use j2se 1.3_02 for Linux from Sun\n> to build the driver?\n\nActually, I think it was Blackdown 1.2.2, with PG7.1beta5 (I haven't\ntried compiling anything later). If I run into the problem you're\nhaving with RC4 or the release 7.1 (as I plan to compile them soon)\nI'll try to look into it.\n\n> BTW, I did not have any problem building the\n> driver under version 7.0.3.\n\nI don't think 7.0.x used Ant, so it's not surprising.\n\n> >> The perl problem is definitely coming from libperl.a\n> >> file as specifically mentioned in the Makefile.\n> > No solution for this except to get the Perl sources, configure it to\n> \n> > build a shared libperl.so, and build and install the whole thing.\n> > None of the RPM packages that I know of supply a shared library for\n> > Perl.\n> \n> This is what I was not hoping for.\n> Why does version 7.0.3 not have the problem?\n\nBecause it doesn't have Perl as an embedded procedural language (see\nbelow). \n\n> Do you guys suggest I go thru the exercise\n> of building libperl.so or this is going to\n> be fixed w/ the official release?\n\nWhat you may not be aware of is that there are two places where Perl\nis used in the build. One is the Perl client library (the 'Pg'\nmodule). This should not require libperl.so as all it does is build a\nbog-standard extension module.\n\nThe other usage is for Perl as an embedded procedural language like\nPL/PGSQL. In order to compile this you need a shared libperl. 
It is\nnot a \"bug\" in Postgres; it's simply what's required to embed the\nPerl interpreter into the backend.\n\nIf you just want the client lib, I think you can ignore the missing\nlibperl.so and the client will be built just fine. PL/Perl isn't that \nuseful right now anyhow since it doesn't have an interface to the\nbackend's query mechanism.\n\n-Doug\n",
"msg_date": "10 Apr 2001 02:14:23 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "At 15:33 09/04/01 -0700, Homayoun Yousefi'zadeh wrote:\n>Hello there,\n>\n>I first ran configure with the following options\n>\n> ./configure --with-perl --with-tcl --enable-odbc --with-java \n> --enable-syslog --enable-debug\n>\n>and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n>with the exceptions in JDBC and Perl modules as\n>indicated below.\n\n[snip]\n\nYou're the second person I've read today with the \"classic\" vs \"modern\" \ncompiler and ant.\n\nI'm looking into it, but it's something between ant & the JDK on Linux at \nthe moment.\n\n>I have both j2se version 1.3 and ant installed on the machine.\n\nIs it 1.3.0 per chance?\n\nPeter\n\n",
"msg_date": "Tue, 10 Apr 2001 16:55:23 +0100",
"msg_from": "Peter Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: JDBC and Perl compiling problems w/\n postgresql-7.1rc4"
},
{
"msg_contents": "> What you may not be aware of is that there are two places where Perl\n> is used in the build. One is the Perl client library (the 'Pg'\n> module). This should not require libperl.so as all it does is build a\n> bog-standard extension module.\n> \n> The other usage is for Perl as an embedded procedural language like\n> PL/PGSQL. In order to compile this you need a shared libperl. It is\n> not a \"bug\" in Postgres; it's simply what's required to embed the\n> Perl interpreter into the backend.\n> \n> If you just want the client lib, I think you can ignore the missing\n> libperl.so and the client will be built just fine. PL/Perl isn't that \n> useful right now anyhow since it doesn't have an interface to the\n> backend's query mechanism.\n\nGreat information. Thanks.\n\nThe reason I need to compile w/ Perl\nsupport turned on is what I am reading\nin the README.rserv of the ERServer\navailable in contrib directory.\nIt says that the requirements are:\n\n- PostgreSQL >= 7.0.X\n A separate Makefile is required for PostgreSQL 7.0.x and earlier\n- Perl5 and the PostgreSQL perl interface\n\nI am thinking that it only requires client lib as\nthe module compiles just fine. Can you confirm this please?\n\nRegards,\nHY\n\n\n",
"msg_date": "Tue, 10 Apr 2001 10:06:08 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com> writes:\n\n> The reason I need to compile w/ Perl\n> support turned on is what I am reading\n> in the README.rserv of the ERServer\n> available in contrib directory.\n> It says that the requirements are:\n> \n> - PostgreSQL >= 7.0.X\n> A separate Makefile is required for PostgreSQL 7.0.x and earlier\n> - Perl5 and the PostgreSQL perl interface\n> \n> I am thinking that it only requires client lib as\n> the module compiles just fine. Can you confirm this please?\n\nThat agrees with my reading of the sentence above, but I've not\ninstalled rserv so I'm not absolutely positive. \n\nOne thing you can do is look in the SQL code that rserv uses and see\nif there are any \n\nCREATE FUNCTION foo() RETURNS whatever AS 'blah' LANGUAGE 'plperl';\n\nstatements.\n\n-Doug\n",
"msg_date": "10 Apr 2001 13:13:35 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "> The reason I need to compile w/ Perl\n> support turned on is what I am reading\n> in the README.rserv of the ERServer\n> available in contrib directory.\n> It says that the requirements are:\n> - PostgreSQL >= 7.0.X\n> A separate Makefile is required for PostgreSQL 7.0.x and earlier\n> - Perl5 and the PostgreSQL perl interface\n> I am thinking that it only requires client lib as\n> the module compiles just fine. Can you confirm this please?\n\nYes. It is only the external (client-side) perl interface which is\nrequired, to support the rserv scripts.\n\n - Thomas\n",
"msg_date": "Tue, 10 Apr 2001 17:38:17 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "On Mon, Apr 09, 2001 at 03:33:19PM -0700, Homayoun Yousefi'zadeh wrote:\n> \n> I first ran configure with the following options\n> \n> ./configure --with-perl --with-tcl --enable-odbc --with-java \n> --enable-syslog --enable-debug\n> \n> and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n> with the exceptions in JDBC and Perl modules as\n> indicated below.\n\n\n> BUILD FAILED\n> \n> /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99: \n> Cannot use classic compiler, as it is not available\n> \n\n(using jdk 1.3)\n\nSeems like you don't have JAVA_HOME/lib/tools.jar in Ant's classpath.\n\nNext, if you have Ant 1.2 (especially Debian's) & jdk1.3 (well, you\nhave this one) and have problems with:\n\n1) undefined ${major}\n\nor\n\n2) Zip* error: cannot create archive with no entries (from memory,\n can't remember exact message)\n\nthen use this patch:\n\n http://www.l-t.ee/marko/ant12.diff\n\n[for some reason (core people did not have this configuration)]\nit did not reach main tree.\n\n-- \nmarko\n\n",
"msg_date": "Tue, 10 Apr 2001 21:20:20 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "Thought that I share this with the group. Thanks Marco.\n\nThe problem was gone once I created a soft link to\nJAVA_HOME/lib/tools.jar in the default directory\nJAVA_HOME/jre/lib/ext/. I have not set specific CLASSPATH\nvariable and I could understand the problem better if\nI had seen the problem when using ant to build other\npkgs. I used ant to build tomcat and never saw this\nproblem!\n\nFor those of you looking for the compiled jar file\ngo to the bottom of following page\n\nhttp://hyousefi.tripod.com/pub.html\n\nThanks,\nHY\n\n------------------------------------------------------\n\nMarko Kreen wrote:\n\n> On Mon, Apr 09, 2001 at 03:33:19PM -0700, Homayoun Yousefi'zadeh wrote:\n> \n>> I first ran configure with the following options\n>> \n>> ./configure --with-perl --with-tcl --enable-odbc --with-java \n>> --enable-syslog --enable-debug\n>> \n>> and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n>> with the exceptions in JDBC and Perl modules as\n>> indicated below.\n> \n> \n> \n>> BUILD FAILED\n>> \n>> /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99: \n>> Cannot use classic compiler, as it is not available\n>> \n> \n> \n> (using jdk 1.3)\n> \n> Seems like you dont have JAVA_HOME/lib/tools.jar in Ant's classpath.\n> \n> Next, if you have Ant 1.2 (especially Debian's) & jdk1.3 (well, you\n> have this one) and have problems with:\n> \n> 1) undefined ${major}\n> \n> or\n> \n> 2) Zip* error: cannot create archive with no entries (from memory,\n> cant remember exact message)\n> \n> then use this patch:\n> \n> http://www.l-t.ee/marko/ant12.diff\n> \n> [for some reason (core people did not have this configuretion)]\n> it did not reach main tree.\n\n",
"msg_date": "Tue, 10 Apr 2001 16:38:28 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
},
{
"msg_contents": "On 10 Apr 2001 16:38:28 -0700, Homayoun Yousefi'zadeh wrote:\n\n\n> For those of you looking for the compiled jar file\n> go to the bottom of following page\n...\n\nWell thanks for that!!! Dreamweaver now sees the driver (placed in\n:Config:JDBC) and doesn't get upset when I test anymore using \"Using\ndriver on this machine\"\n\nIt won't connect to the server yet but I suspect that this is more\nlikely a connection configuration problem. Looking into it.\n\nStill for the final 7.1 version I would _really_ like to be able to\ncompile everything on my machine...\n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\n\n",
"msg_date": "11 Apr 2001 11:35:42 +0200",
"msg_from": "Tony Grant <tony@animaproductions.com>",
"msg_from_op": false,
"msg_subject": "Re: JDBC and Perl compiling problems w/ postgresql-7.1rc4"
}
] |
[
{
"msg_contents": "Charlie Derr wrote:\n\n > This sounds like the problem with the version of gcc\n > that is included with rh7.0\n >\n > If you don't want to upgrade gcc to a newer version,\n > I think you can fix the problem by \"mv\"ing gcc to brokengcc\n > and then creating creating a new symlink gcc to kgcc. Redhat\n > included a non-broken gcc in the distro and called it kgcc.\n\nI did what you suggested and nothing changed.\nActually, JDBC problem seems to be ant related\nas it did not exist w/ version 7.0.3.\n\nThe perl problem is definitely coming from libperl.a\nfile as specifically mentioned in the Makefile.\n\nSuggestions??\n\nRegards,\nHY\n\n~ -----Original Message-----\n~ From: pgsql-general-owner@postgresql.org\n~ [mailto:pgsql-general-owner@postgresql.org]On Behalf Of Homayoun\n~ Yousefi'zadeh\n~ Sent: Monday, April 09, 2001 6:33 PM\n~ To: pgsql-general@postgresql.org\n~ Subject: [GENERAL] JDBC and Perl compiling problems w/ postgresql-7.1rc4\n~\n~\n~ Hello there,\n~\n~ I first ran configure with the following options\n~\n~ ./configure --with-perl --with-tcl --enable-odbc --with-java\n~ --enable-syslog --enable-debug\n~\n~ and then compiled postgresql-7.1rc4 on Redhat 7.0 successfully\n~ with the exceptions in JDBC and Perl modules as\n~ indicated below.\n~\n~ -------------------------------------------------------------\n~\n~ gmake[3]: Entering directory\n~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc'\n~ /usr/jakarta/jakarta-ant/bin/ant -buildfile ../../../build.xml -Dmajor=7\n~ -Dminor=1 -Dfullversion=7.1rc4 -Ddef_pgport=5432\n~ Buildfile: ../../../build.xml\n~\n~ jar:\n~\n~ call:\n~\n~ prepare:\n~\n~ check_versions:\n~\n~ driver:\n~ Configured build for the JDBC2 edition driver.\n~\n~ compile:\n~ [javac] Compiling 41 source files to\n~ /usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build\n~ [javac] Modern compiler is not available - using classic compiler\n~\n~ BUILD FAILED\n~\n~ 
/usr/pgsql-pkg/postgresql-7.1rc4/src/interfaces/jdbc/build.xml:99:\n~ Cannot use classic compiler, as it is not available\n~\n~ Total time: 0 seconds\n~\n~ -----------------------------------------------------------------\n~\n~ <!-- This is the core of the driver. It is common for all three\n~ versions -->\n~\n~ <target name=\"compile\" depends=\"prepare,check_versions,driver\">\n~\n~ <!-- **** The following is line 99 of build.xml ******* -->\n~ <javac srcdir=\"${src}\" destdir=\"${dest}\">\n~\n~ <include name=\"${package}/**\" />\n~ <exclude name=\"${package}/core/ConnectionHook.java\"\n~ unless=\"jdk1.3+\" />\n~ <exclude name=\"${package}/jdbc1/**\" if=\"jdk1.2+\" />\n~ <exclude name=\"${package}/jdbc2/**\" unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/largeobject/PGblob.java\"\n~ unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/largeobject/PGclob.java\"\n~ unless=\"jdk1.2+\" />\n~ <exclude name=\"${package}/PostgresqlDataSource.java\"\n~ unless=\"jdk1.2e+\" />\n~ <exclude name=\"${package}/xa/**\" unless=\"jdk1.2e+\" />\n~ <exclude name=\"${package}/test/**\" unless=\"junit\" />\n~ </javac>\n~ <copy todir=\"${dest}\" overwrite=\"true\" filtering=\"on\">\n~ <fileset dir=\"${src}\">\n~ <include name=\"**/*.properties\" />\n~ <exclude name=\"${dest}/**\" />\n~ </fileset>\n~ </copy>\n~ </target>\n~\n~\n~ I have both j2se version 1.3 and ant installed on the machine.\n~\n~ ----------------------------------------------------------------\n~ gmake[4]: Entering directory\n~ `/usr/pgsql-pkg/postgresql-7.1rc4/src/pl/plperl'\n~ *****\n~ * Cannot build PL/Perl because libperl is not a shared library.\n~ * Skipped.\n~ *****\n~\n~ It seems like that the compiler does not like the fact that\n~\n~ /usr/lib/perl5/5.6.0/i386-linux/CORE/libperl.a\n~\n~ is not a shared object.\n~ -----------------------------------------------------\n~\n~ Your comments to resolve these issues is greatly\n~ appreciated.\n~\n~ BTW, rserv module in contrib directory now compiles\n~ 
beautifully.\n~\n~ Regards,\n~ HY\n\n",
"msg_date": "Mon, 09 Apr 2001 16:50:14 -0700",
"msg_from": "\"Homayoun Yousefi'zadeh\" <homayounyz@home.com>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] JDBC and Perl compiling problems w/ postgresql-7.1rc4"
}
] |
[
{
"msg_contents": "On Thu, Apr 05, 2001 at 04:08:48AM -0400, Peter T Mount wrote:\n> Quoting Kyle VanderBeek <kylev@yaga.com>:\n> \n> \n> > Please consider applying my patch to the 7.0 codebase as a stop-gap \n> > measure until such time as the optimizer can be improved to notice \n> > indecies on INT8 columns and cast INT arguments up.\n> \n> This will have to wait until after 7.1 is released. As this is a \"new\" feature, \n> this can not be included into 7.1 as it's now in the final Release Candidate \n> phase.\n\nThis is a new feature? Using indices is \"new\"? I guess I really beg to \ndiffer. Seems like a bugfix to me (in the \"workaround\" category).\n\nI'm going to start digging around in the optimizer code so such hacks as \nmine aren't needed. It's really heinous to find out your production \nserver is freaking out and doing sequential scans for EVERYTHING.\n\nAnother hack I need to work on (or someone else can) is to squish in a\nlayer of filesystem hashing for large objects. We tried to use large\nobjects and got destroyed. 40,000 rows and the server barely functioned.\nI think this is because of 2 things:\n\n1) Filehandles not being closed. This was an oversight I've seen covered \nin the list archives somewhere.\n\n2) The fact that all objects are stored in the single data directory. \nOnce you get up to a good number of objects, directory scans really take a \nlong, long time. This slows down any subsequent openings of large \nobjects. Is someone working on this problem? Or have a patch already?\n\n-- \nKyle.\n \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy McClure\n",
"msg_date": "Mon, 9 Apr 2001 18:30:56 -0700",
"msg_from": "Kyle VanderBeek <kylev@yaga.com>",
"msg_from_op": true,
"msg_subject": "Re: JDBC int8 hack"
},
{
"msg_contents": "At 18:30 09/04/01 -0700, Kyle VanderBeek wrote:\n>On Thu, Apr 05, 2001 at 04:08:48AM -0400, Peter T Mount wrote:\n> > Quoting Kyle VanderBeek <kylev@yaga.com>:\n> >\n> >\n> > > Please consider applying my patch to the 7.0 codebase as a stop-gap\n> > > measure until such time as the optimizer can be improved to notice\n> > > indecies on INT8 columns and cast INT arguments up.\n> >\n> > This will have to wait until after 7.1 is released. As this is a \"new\" \n> feature,\n> > this can not be included into 7.1 as it's now in the final Release \n> Candidate\n> > phase.\n>\n>This is a new feature? Using indecies is \"new\"? I guess I really beg to\n>differ. Seems like a bugfix to me (in the \"workaround\" category).\n\nYes they are. INT8 is not a feature/type yet supported by the driver, hence \nit's \"new\".\n\nIn fact the jdbc driver supports no arrays at this time (as PostgreSQL & \nSQL3 arrays are different beasts).\n\nIf it's worked in the past, then that was sheer luck.\n\n>I'm going to start digging around in the optimizer code so such hacks as\n>mine aren't needed. It's really haenous to find out your production\n>server is freaking out and doing sequential scans for EVERYTHING.\n\nAre you talking about the optimiser in the backend, as there isn't one in \nthe jdbc driver?\n\n\n>Another hack I need to work on (or someone else can) is to squish in a\n>layer of filesystem hashing for large objects. We tried to use large\n>objects and got destroyed. 40,000 rows and the server barely functioned.\n>I think this is because of 2 things:\n>\n>1) Filehandles not being closed. This was an oversite I've seen covered\n>in the list archives somewhere.\n\nOk, ensure you are closing the large objects within JDBC. If you are then \nthis is a backend problem.\n\nOne thing to try is to commit the transaction a bit more often (if you are \nrunning within a single transaction for all 40k objects). 
Committing the \ntransaction will force the backend to close all open large objects on that \nconnection.\n\n>2) The fact that all objects are stored in a the single data directory.\n>Once you get up to a good number of objects, directory scans really take a\n>long, long time. This slows down any subsequent openings of large\n>objects. Is someone working on this problem? Or have a patch already?\n\nAgain not JDBC. Forwarding to the hackers list on this one. The naming \nconventions were changed a lot in 7.1, and it was for more flexability.\n\nPeter\n\n",
"msg_date": "Tue, 10 Apr 2001 14:24:24 +0100",
"msg_from": "Peter Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "On Tue, Apr 10, 2001 at 02:24:24PM +0100, Peter Mount wrote:\n> At 18:30 09/04/01 -0700, Kyle VanderBeek wrote:\n> >This is a new feature? Using indecies is \"new\"? I guess I really beg to\n> >differ. Seems like a bugfix to me (in the \"workaround\" category).\n> \n> Yes they are. INT8 is not a feature/type yet supported by the driver, hence \n> it's \"new\".\n> \n> Infact the jdbc driver supports no array's at this time (as PostgreSQL & \n> SQL3 arrays are different beasts).\n> \n> If it's worked in the past, then that was sheer luck.\n\nAlright man, you've got me confused. Are you saying that despite the \nexistence of INT8 as a column type, and PreparedStatement.setLong(), that \nthese ought not be used? If so, there is a really big warning missing \nfrom the documentation!\n\nI guess I'm asking this: I've got an enterprise database running 7.0.3\nready to go using INT8 primary keys and being accessed through my\nre-touched JDBC driver. Am I screwed? Is it going to break? If so, I\nneed to fix this all very, very fast.\n\n-- \nKyle.\n \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy McClure\n",
"msg_date": "Tue, 10 Apr 2001 13:39:16 -0700",
"msg_from": "Kyle VanderBeek <kylev@yaga.com>",
"msg_from_op": true,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "Sorry, meant to hit all of these.\n\nOn Tue, Apr 10, 2001 at 02:24:24PM +0100, Peter Mount wrote:\n> >I'm going to start digging around in the optimizer code so such hacks as\n> >mine aren't needed. It's really haenous to find out your production\n> >server is freaking out and doing sequential scans for EVERYTHING.\n> \n> Are you talking about the optimiser in the backend as there isn't one in \n> the jdbc driver.\n\nYeah, in the backend. My patch to the JDBC driver only helps people using\nJDBC to get to a database (obviously). From any other access method, a\nstatement like:\n\nSELECT * FROM Foo where bar=1234\n\nwill do a sequential scan even if there is an index Foo_bar_idx on \"bar\" \nif bar is INT8. It seems to me that the optimizer should be able to\nnotice the index Foo_bar_idx and convert the argument \"1234\" to an INT8 in \norder to use Foo_bar_idx over doing a sequential scan (in which case, \n\"1234\" probably gets converted to INT8 anyhow to do comparisons).\n\nGranted, I'm theorizing. I should probably shut up and RTFS.\n\nAnyhow, all my patch did was tack the \"::int8\" cast onto parameters that \nwere set by PreparedStatement.setLong(). We did this after finding that \nEXPLAIN'ing this:\n\nSELECT * FROM Foo where bar=1234::int8\n\ndidn't degrade to a sequential scan like the other SELECT statement (w/o \nthe cast).\n\n> >Another hack I need to work on (or someone else can) is to squish in a\n> >layer of filesystem hashing for large objects. We tried to use large\n> >objects and got destroyed. 40,000 rows and the server barely functioned.\n> >I think this is because of 2 things:\n> >\n> >1) Filehandles not being closed. This was an oversite I've seen covered\n> >in the list archives somewhere.\n> \n> Ok, ensure you are closing the large objects within JDBC. 
If you are then \n> this is a backend problem.\n> \n> One thing to try is to commit the transaction a bit more often (if you are \n> running within a single transaction for all 40k objects). Committing the \n> transaction will force the backend to close all open large objects on that \n> connection.\n\nWe were using setBytes(), as we were trying to minimize porting work from \nthe previous database we were using. And we were committing after every \ntransaction. We switched to Base64 encoding and storing strings, so we're \nin better shape now.\n\nI'm going to write some more test code in my evenings and see if I can get \ncurrent PostgreSQL to suck up filehandles. I'll post again if I can put \ntogether a coherent bug report or patch.\n\n> >2) The fact that all objects are stored in a the single data directory.\n> >Once you get up to a good number of objects, directory scans really take a\n> >long, long time. This slows down any subsequent openings of large\n> >objects. Is someone working on this problem? Or have a patch already?\n> \n> Again not JDBC. Forwarding to the hackers list on this one. The naming \n> conventions were changed a lot in 7.1, and it was for more flexability.\n\nRight, cool. I'll check out the new codebase. Thanks.\n\nThis is *so* the curse of open source: now I'm going to be using up my \npersonal time to look for bugs I find at work. Oh well, I didn't need \nto sleep anyhow. ;-)\n\n-- \nKyle.\n \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy McClure\n",
"msg_date": "Tue, 10 Apr 2001 14:08:22 -0700",
"msg_from": "Kyle VanderBeek <kylev@yaga.com>",
"msg_from_op": true,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "> > >This is a new feature? Using indecies is \"new\"? I guess I really beg to\n> > >differ. Seems like a bugfix to me (in the \"workaround\" category).\n> > Yes they are. INT8 is not a feature/type yet supported by the driver, hence\n> > it's \"new\".\n> > Infact the jdbc driver supports no array's at this time (as PostgreSQL &\n> > SQL3 arrays are different beasts).\n> > If it's worked in the past, then that was sheer luck.\n> Alright man, you've got me confused. Are you saying that despite the\n> existence of INT8 as a column type, and PreparedStatement.setLong(), that\n> these ought not be used? If so, there is a really big warning missing\n> from the documentation!\n\nAh, it just dawned on me what might be happening: Peter, I'm guessing\nthat you are thinking of \"INT48\" or some such, the pseudo-integer array\ntype. Kyle is referring to the \"int8\" 8 byte integer type.\n\n> I guess I'm asking this: I've got an enterprise database running 7.0.3\n> ready to go using INT8 primary keys and being accessed through my\n> re-touched JDBC driver. Am I screwed? Is it going to break? If so, I\n> need to fix this all very, very fast.\n\nbtw, it might be better to use a syntax like\n\n ... where col = '1234';\n\nor\n\n ... where col = int8 '1234';\n\nIf the former works, then that is a bit more generic than slapping a\n\"::int8\" onto the constant field.\n\nI'd imagine that this could also be coded into the app; if so that may\nbe where it belongs since then the driver does not have to massage the\nqueries as much and it will be easier for the *driver* to stay\ncompatible with applications.\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 02:57:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "On Wed, Apr 11, 2001 at 02:57:16AM +0000, Thomas Lockhart wrote:\n> > Alright man, you've got me confused. Are you saying that despite the\n> > existance of INT8 as a column type, and PreparedStatement.setLong(), that\n> > these ought not be used? If so, there is a really big warning missing\n> > from the documentation!\n> \n> Ah, it just dawned on me what might be happening: Peter, I'm guessing\n> that you are thinking of \"INT48\" or some such, the pseudo-integer array\n> type. Kyle is referring to the \"int8\" 8 byte integer type.\n\nYes!\n\n> > I guess I'm asking this: I've got an enterprise database runnign 7.0.3\n> > ready to go using INT8 primary keys and being accessed through my\n> > re-touched JDBC driver. Am I screwed? Is it going to break? If so, I\n> > need to fix this all very, very fast.\n> \n> btw, it might be better to use a syntax like\n> \n> ... where col = '1234';\n> \n> or\n> \n> ... where col = int8 '1234';\n> \n> If the former works, then that is a bit more generic that slapping a\n> \"::int8\" onto the constant field.\n\nIt seems like a wash to me; either way gets the desired result. Tacking \non ::int8 was the quickest. It also seems neater than this:\n\n set(parameterIndex, (\"int8 '\" + new Long(x)).toString() + \"'\");\n\nin PreparedStatement.setLong().\n\n> I'd imagine that this could also be coded into the app; if so that may\n> be where it belongs since then the driver does not have to massage the\n> queries as much and it will be easier for the *driver* to stay\n> compatible with applications.\n\nThis seems to be the wrong idea to me. The idea is that JDBC allows you \nto be a little bit \"backend agnostic\". It'd be pretty disappointing if \nthis wasn't true for even the base types. Application programmers should \njust call setLong() they're dealing with an 8-byte (Long or long) integer. \nIt'd be a shame to have a PostgreSQL-specific call to setString(\"int8 '\" + \nx.toString() + \"'\") littering your code. 
That seems to fly in the face of \neverything that JDBC/DBI/ODBC (etc) are about.\n\n-- \nKyle.\n \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy McClure\n",
"msg_date": "Wed, 11 Apr 2001 13:46:44 -0700",
"msg_from": "Kyle VanderBeek <kylev@yaga.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "Kyle VanderBeek <kylev@yaga.com> writes:\n> Please consider applying my patch to the 7.0 codebase as a stop-gap \n> measure until such time as the optimizer can be improved to notice \n> indecies on INT8 columns and cast INT arguments up.\n\nI think it's an extremely bad idea to hack up JDBC to get around a\nbackend shortcoming. The hack will persist and cause problems long\nafter the real issue has been fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 12:51:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: JDBC int8 hack "
},
{
"msg_contents": "Quoting Kyle VanderBeek <kylev@yaga.com>:\n\n> On Tue, Apr 10, 2001 at 02:24:24PM +0100, Peter Mount wrote:\n> > At 18:30 09/04/01 -0700, Kyle VanderBeek wrote:\n> > >This is a new feature? Using indecies is \"new\"? I guess I really\n> beg to\n> > >differ. Seems like a bugfix to me (in the \"workaround\" category).\n> > \n> > Yes they are. INT8 is not a feature/type yet supported by the driver,\n> hence \n> > it's \"new\".\n> > \n> > Infact the jdbc driver supports no array's at this time (as PostgreSQL\n> & \n> > SQL3 arrays are different beasts).\n> > \n> > If it's worked in the past, then that was sheer luck.\n> \n> Alright man, you've got me confused. Are you saying that despite the \n> existance of INT8 as a column type, and PreparedStatement.setLong(),\n> that \n> these ought not be used? If so, there is a really big warning missing \n> from the documentation!\n\nErm, int8 isn't long, but an array of 8 int's (unless it's changed).\n\n> I guess I'm asking this: I've got an enterprise database runnign 7.0.3\n> ready to go using INT8 primary keys and being accessed through my\n> re-touched JDBC driver. Am I screwed? Is it going to break? If so, I\n> need to fix this all very, very fast.\n> \n> -- \n> Kyle.\n> \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy\n> McClure\n> \n\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n",
"msg_date": "Tue, 17 Apr 2001 09:11:54 -0400 (EDT)",
"msg_from": "Peter T Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "Quoting Thomas Lockhart <lockhart@alumni.caltech.edu>:\n\n> > > >This is a new feature? Using indecies is \"new\"? I guess I really\n> beg to\n> > > >differ. Seems like a bugfix to me (in the \"workaround\" category).\n> > > Yes they are. INT8 is not a feature/type yet supported by the\n> driver, hence\n> > > it's \"new\".\n> > > Infact the jdbc driver supports no array's at this time (as\n> PostgreSQL &\n> > > SQL3 arrays are different beasts).\n> > > If it's worked in the past, then that was sheer luck.\n> > Alright man, you've got me confused. Are you saying that despite the\n> > existance of INT8 as a column type, and PreparedStatement.setLong(),\n> that\n> > these ought not be used? If so, there is a really big warning\n> missing\n> > from the documentation!\n> \n> Ah, it just dawned on me what might be happening: Peter, I'm guessing\n> that you are thinking of \"INT48\" or some such, the pseudo-integer array\n> type. Kyle is referring to the \"int8\" 8 byte integer type.\n\nAh, that would explain it. However int8 (as in 8 byte int) has not been \nimplemented AFAIK (which is why I've said it's \"new\"). Until now, I've taken \nint8 to be the one that used to be used (probably still is) in system tables \netc.\n\n> > I guess I'm asking this: I've got an enterprise database runnign\n> 7.0.3\n> > ready to go using INT8 primary keys and being accessed through my\n> > re-touched JDBC driver. Am I screwed? Is it going to break? If so,\n> I\n> > need to fix this all very, very fast.\n> \n> btw, it might be better to use a syntax like\n> \n> ... where col = '1234';\n> \n> or\n> \n> ... 
where col = int8 '1234';\n> \n> If the former works, then that is a bit more generic that slapping a\n> \"::int8\" onto the constant field.\n> \n> I'd imagine that this could also be coded into the app; if so that may\n> be where it belongs since then the driver does not have to massage the\n> queries as much and it will be easier for the *driver* to stay\n> compatible with applications.\n\nI agree.\n\nPeter\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n",
"msg_date": "Tue, 17 Apr 2001 09:27:33 -0400 (EDT)",
"msg_from": "Peter T Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "> Erm, int8 isn't long, but an array of 8 int's (unless it's changed).\n\nint8 is a 64-bit integer. There used to be a type (maybe called int48\n??) which was 8 4-byte integers. afaicr that is now called oidvector\n(and there is an int2vector also). The name changes for these latter\ntypes were fairly recent.\n\nKyle is asking about the 64-bit integer type called int8 in the catalog\nand int64 in the backend source code.\n\n - Thomas\n",
"msg_date": "Tue, 17 Apr 2001 13:30:57 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
},
{
"msg_contents": "Peter T Mount <peter@retep.org.uk> writes:\n>> Ah, it just dawned on me what might be happening: Peter, I'm guessing\n>> that you are thinking of \"INT48\" or some such, the pseudo-integer array\n>> type. Kyle is referring to the \"int8\" 8 byte integer type.\n\n> Ah, that would explain it. However int8 (as in 8 byte int) has not been \n> implemented AFAIK (which is why I've said it's \"new\"). Until now, I've taken \n> int8 to be the one that used to be used (probably still is) in system tables \n> etc.\n\nSay what? \"int8\" has been a 64-bit-integer type since release 6.4.\nI think it existed in contrib even before that, but certainly that is\nwhat \"int8\" has meant for the last three or so years.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Apr 2001 10:53:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Large Object problems (was Re: JDBC int8 hack) "
},
{
"msg_contents": "On Tue, Apr 17, 2001 at 09:11:54AM -0400, Peter T Mount wrote:\n> Erm, int8 isn't long, but an array of 8 int's (unless it's changed).\n\nhttp://postgresql.readysetnet.com/users-lounge/docs/7.0/user/datatype.htm#AEN942\n\nIt is very much an 8-byte integer, the corollary to Java's Long/long.\n\n-- \nKyle.\n \"I hate every ape I see, from chimpan-A to chimpan-Z\" -- Troy McClure\n",
"msg_date": "Tue, 17 Apr 2001 11:29:27 -0700",
"msg_from": "Kyle VanderBeek <kylev@yaga.com>",
"msg_from_op": true,
"msg_subject": "Re: Large Object problems (was Re: JDBC int8 hack)"
}
] |
[
{
"msg_contents": "I've tried to write a plpgsql function, and noticed the following\nproblem : (7.1RC2 rpm from postgresql.org)\n\nWhen issuing a command like:\nDECLARE\n rowvar tablename%ROWTYPE\nBEGIN\n FOR rowvar IN SELECT FROM tablename t WHERE t.xxx=yyy AND t.zzz=qqq\n LOOP\n .....\n END LOOP;\n\nor just simply issuing a query like\n\nDECLARE\n rownum int4:=0;\nBEGIN\n SELECT count (*)::int4 INTO rownum FROM tablename t WHERE t.xxx=yyy\n AND t.zzz=qqq;\n\nonly the t.zzz=qqq condition is applied to the returned rows; the t.xxx=yyy\ncondition is skipped. But when executing the query with EXECUTE everything is fine.\nIs this an undocumented bug? I've noticed that the same query\nstructure is avoided in the documentation, which uses simple queries with\nonly one filtering condition... :(\n\n\n",
"msg_date": "Tue, 10 Apr 2001 09:31:32 +0200",
"msg_from": "Lehel Gyuro <lehel@bin.hu>",
"msg_from_op": true,
"msg_subject": "Maybe a plpgsql bug?"
},
{
"msg_contents": "Lehel Gyuro <lehel@bin.hu> writes:\n> I've tried to write a plpgsql function, and noticed the following\n> problem : (7.1RC2 rpm from postgresql.org)\n\nPerhaps this is a bug, but you have not given a complete example that\nwould allow someone else to try to reproduce it. Please see the\nguidelines for bug reports.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 13:22:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maybe a plpgsql bug? "
}
] |
[
{
"msg_contents": "dear all,\nI am currently using postgresql v7.0.3.\nWhen I import a text file into a table with the command \"copy tablename from\n'/tmp/a.txt';\"\nit shows\n\"copy: line 20, Cannot insert a duplicate key into unique index testpri_pk\"\nand then exits without doing anything.\n\nI want to ignore these errors and continue copying the next record. How can I do\nthat if I don't filter '/tmp/a.txt' before using the copy command?\n\nThank you so much for your help in advance.\nRegards\nJaruwan\n",
"msg_date": "Tue, 10 Apr 2001 15:14:32 +0700",
"msg_from": "\"Jaruwan Laongmal\" <jaruwan@gits.net.th>",
"msg_from_op": true,
"msg_subject": "problem with copy command "
},
{
"msg_contents": "On Tue, 10 Apr 2001, Jaruwan Laongmal wrote:\n\n> dear all,\n> I am currently using postgresql v7.0.3.\n> When I import a text file into a table with the command \"copy tablename from\n> '/tmp/a.txt';\"\n> it shows\n> \"copy: line 20, Cannot insert a duplicate key into unique index testpri_pk\"\n> and then exits without doing anything.\n> \n> I want to ignore these errors and continue copying the next record. How can I do\n> that if I don't filter '/tmp/a.txt' before using the copy command?\n\nAFAIK, you can't ignore primary keys, so you can't cheat and get it in,\neven for a moment. And if COPY encounters bad data, it ends the\ntransaction. (Both of these seem like the Right Thing to me, though\nperhaps there's an argument for COPY IGNORING ERRORS or something like\nthat. Ick.)\n\n\nEither\n\n1) filter /tmp/a.txt to remove duplicates\n\nor \n\n2) drop your unique index, copy the data, get rid of duplicates, then add\nthe index again\n\nor\n\n3)\n\nAssuming the table you're importing to is\n\n CREATE TABLE names (lname text, fname text, primary key (lname,\nfname) );\n\nCreate another table w/o the primary key:\n\n CREATE TABLE import (lname text, fname text);\n\ncopy to *this* table, then copy from this table to the names table,\nignoring duplicates in the import:\n\n INSERT INTO names SELECT DISTINCT lname, fname FROM import;\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 10 Apr 2001 11:36:43 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: problem with copy command "
},
{
"msg_contents": "Jaruwan Laongmal wrote:\n\n> dear all,\n> I am currently using postgresql v7.0.3.\n> When I import a text file into a table with the command \"copy tablename from\n> '/tmp/a.txt';\"\n> it shows\n> \"copy: line 20, Cannot insert a duplicate key into unique index testpri_pk\"\n> and then exits without doing anything.\n>\n> I want to ignore these errors and continue copying the next record. How can I do\n> that if I don't filter '/tmp/a.txt' before using the copy command?\n>\n> Thank you so much for your help in advance.\n> Regards\n> Jaruwan\n\nTry to delete the unique index testpri_pk ... but if you want to create the\nunique index again you must first delete (or modify) your non-unique rows.\n\n\nGeorge Moga,\n Data Systems Srl\n Slobozia, ROMANIA\n\n\n",
"msg_date": "Wed, 11 Apr 2001 12:27:21 +0300",
"msg_from": "George Moga <george@dsn.ro>",
"msg_from_op": false,
"msg_subject": "Re: problem with copy command"
}
] |
[
{
"msg_contents": "I can understand why there might be some resistance to the idea of\nadding performance tuning flags into the server rather than documenting\nexisting settings better, but I think a compromise would be possible.\n\nCould we develop a helper application that takes the --tuning\nsuperserver argument and translates that into a set of options to pass?\nThat way, fine-tuning by hand is still practical, but for those who just\nwant a good first set of values, a tuning helper application that looks\nat system memory, processor speed, and a user-supplied indication of the\nsystem's purpose and produces a set of postmaster options might be the\nway to approach this. And we don't bloat the server with extra\nalgorithms. There is no dependency on this utility, either (assuming we\ncontinue to use workable defaults for postmaster options!) but it may\nbenefit some people to use it. \n\nI really like the performance hints thing too. \n\nJohn\n\n-- \nJohn Gray\nTel +44-7974-100-584\nmailto:jgray@beansindustry.co.uk\n\n\n",
"msg_date": "10 Apr 2001 10:52:25 +0100",
"msg_from": "John Gray <jgray@beansindustry.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "\nThe problem is that I can't figure out what would be tuned by these\noptions. We only have 2-3 parameters that can be changed.\n\n\n> I can understand why there might be some resistance to the idea of\n> adding performance tuning flags into the server rather than documenting\n> existing settings better, but I think a compromise would be possible.\n> \n> Could we develop a helper application that takes the --tuning\n> superserver argument and translates that into a set of options to pass?\n> That way, fine-tuning by hand is still practical, but for those who just\n> want a good first set of values, a tuning helper application that looks\n> at system memory, processor speed, and a user-supplied indication of the\n> system's purpose and produces a set of postmaster options might be the\n> way to approach this. And we don't bloat the server with extra\n> algorithms. There is no dependency on this utility, either (assuming we\n> continue to use workable defaults for postmaster options!) but it may\n> benefit some people to use it. \n> \n> I really like the performance hints thing too. \n> \n> John\n> \n> -- \n> John Gray\n> Tel +44-7974-100-584\n> mailto:jgray@beansindustry.co.uk\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 10 Apr 2001 10:37:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": " Indeed, as an avid user (and tuner, I suppose) of PostgreSQL, I can't\nsee how any configure option would be faster or better than the existing\ncommand line /config file parameters -- it would only serve to make things\nharder to deal with IMHO. \"Tuning\" PostgreSQL is pretty simple, and is\nexplained pretty well throughout the manual (especially in the section\ntitled \"Understanding Performance\"). We have -S -B and the fsync options,\n<Austin Powers Voice> That's about it.. </Austin Powers Voice> --- right?\nAll are explained in the manual and are as easy to use as anyone could\nask... Any OS tuning should be left up to the administrator as that's what\nadministrators are for :-)\n\n Just my humble $0.02 worth..\n\n-Mitch\nSoftware development :\nYou can have it cheap, fast or working. Choose two.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"John Gray\" <jgray@beansindustry.co.uk>\nCc: <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 10, 2001 10:37 AM\nSubject: Re: Re: \"--tuning\" compile and runtime option (?)\n\n\n>\n> The problem is that I can't figure out what would be tuned by these\n> options. We only have 2-3 parameters that can be changed.\n>\n>\n> > I can understand why there might be some resistance to the idea of\n> > adding performance tuning flags into the server rather than documenting\n> > existing settings better, but I think a compromise would be possible.\n> >\n> > Could we develop a helper application that takes the --tuning\n> > superserver argument and translates that into a set of options to pass?\n> > That way, fine-tuning by hand is still practical, but for those who just\n> > want a good first set of values, a tuning helper application that looks\n> > at system memory, processor speed, and a user-supplied indication of the\n> > system's purpose and produces a set of postmaster options might be the\n> > way to approach this. 
And we don't bloat the server with extra\n> > algorithms. There is no dependency on this utility, either (assuming we\n> > continue to use workable defaults for postmaster options!) but it may\n> > benefit some people to use it.\n> >\n> > I really like the performance hints thing too.\n> >\n> > John\n> >\n> > --\n> > John Gray\n> > Tel +44-7974-100-584\n> > mailto:jgray@beansindustry.co.uk\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Tue, 10 Apr 2001 10:47:13 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "> I can't see how any configure option would be faster or\n> better than the existing command line /config file parameters\n> -- it would only serve to make things harder to deal with IMHO.\n> \"Tuning\" PostgreSQL is pretty simple, and is explained pretty\n> well throughout the manual (especially in the section titled\n> \"Understanding Performance\"). We have -S -B and the fsync\n> options, <Austin Powers Voice> That's about it.. </Austin Powers\n> Voice> --- right?\n> All are explained in the manual and are as easy to use as anyone could\n> ask... Any OS tuning should be left up to the administrator\n> as that's what administrators are for :-)\n\nSomeday, maybe other parameters (such as some of the optimizer's default\nvalues) will be configurable/changable on the fly. Would seem silly to\nhave to stop/start the server to change those.\n\nJust an example, I'm sure there are other internal values that admins\nwould like to tweak without recompiling or restarting. Would be much\nnicer to login as the superuser to do it rather than shut down my site.\n\ndarrenk\n\n",
"msg_date": "Tue, 10 Apr 2001 11:13:19 -0400",
"msg_from": "\"Darren King\" <darrenk@insightdist.com>",
"msg_from_op": false,
"msg_subject": "RE: Re: Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "Well, what is being discussed here would require a *recompile* to change the\nvalues, so you sure wouldn't want that! :-)\n\nCurrently you can set a lot of internal variables with the SET command in any PG\nsession.. It's useful for forcing index or sequential scans and such (and I'm\nsure a lot more)..\n\n-Mitch\nSoftware development :\nYou can have it cheap, fast or working. Choose two.\n\n----- Original Message -----\nFrom: \"Darren King\" <darrenk@insightdist.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 10, 2001 11:13 AM\nSubject: RE: Re: Re: \"--tuning\" compile and runtime option (?)\n\n\n> > I can't see how any configure option would be faster or\n> > better than the existing command line /config file parameters\n> > -- it would only serve to make things harder to deal with IMHO.\n> > \"Tuning\" PostgreSQL is pretty simple, and is explained pretty\n> > well throughout the manual (especially in the section titled\n> > \"Understanding Performance\"). We have -S -B and the fsync\n> > options, <Austin Powers Voice> That's about it.. </Austin Powers\n> > Voice> --- right?\n> > All are explained in the manual and are as easy to use as anyone could\n> > ask... Any OS tuning should be left up to the administrator\n> > as that's what administrators are for :-)\n>\n> Someday, maybe other parameters (such as some of the optimizer's default\n> values) will be configurable/changable on the fly. Would seem silly to\n> have to stop/start the server to change those.\n>\n> Just an example, I'm sure there are other internal values that admins\n> would like to tweak without recompiling or restarting. Would be much\n> nicer to login as the superuser to do it rather than shut down my site.\n>\n> darrenk\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Tue, 10 Apr 2001 11:26:02 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: \"--tuning\" compile and runtime option (?)"
},
{
"msg_contents": "We do have postgresql.conf. I am unsure how I would tune those based\non a single flag.\n\n> > I can't see how any configure option would be faster or\n> > better than the existing command line /config file parameters\n> > -- it would only serve to make things harder to deal with IMHO.\n> > \"Tuning\" PostgreSQL is pretty simple, and is explained pretty\n> > well throughout the manual (especially in the section titled\n> > \"Understanding Performance\"). We have -S -B and the fsync\n> > options, <Austin Powers Voice> That's about it.. </Austin Powers\n> > Voice> --- right?\n> > All are explained in the manual and are as easy to use as anyone could\n> > ask... Any OS tuning should be left up to the administrator\n> > as that's what administrators are for :-)\n> \n> Someday, maybe other parameters (such as some of the optimizer's default\n> values) will be configurable/changable on the fly. Would seem silly to\n> have to stop/start the server to change those.\n> \n> Just an example, I'm sure there are other internal values that admins\n> would like to tweak without recompiling or restarting. Would be much\n> nicer to login as the superuser to do it rather than shut down my site.\n> \n> darrenk\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 10 Apr 2001 11:35:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: \"--tuning\" compile and runtime option (?)"
}
] |
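The runtime-settable parameters discussed in this thread can be exercised from any psql session with SET/SHOW/RESET, no restart or recompile needed. A minimal sketch, assuming 7.x-era parameter names such as enable_seqscan (the table and column names are illustrative):

```sql
-- Session-local planner tuning; no server restart or recompile needed.
SET enable_seqscan TO off;   -- discourage sequential scans for this session only
EXPLAIN SELECT * FROM some_table WHERE some_col = 42;
SHOW enable_seqscan;         -- inspect the current value
RESET enable_seqscan;        -- return to the server default
```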
[
{
"msg_contents": "Hi,\n\nI have the following table, containing about 570000 Rows, but some\nindexes are not used, on 7.1RC4, freshly vacuumed (analyse). It was the\nsame at least in 7.1RC1\n\n\n --- SNIP ---\n\n CREATE TABLE access_log(\n site_id int2 NOT NULL DEFAULT 0,\n access_time timestamp NOT NULL DEFAULT NOW(),\n time_taken interval NOT NULL,\n remote_ip inet NOT NULL,\n method_num int2 NOT NULL,\n url_id int4 NOT NULL REFERENCES urls(id),\n referer_id int4 REFERENCES referer(id),\n browser_id int4 REFERENCES browser(id),\n status int2 NOT NULL DEFAULT 0,\n bytes int4 NOT NULL DEFAULT 0,\n content_id int2 NOT NULL REFERENCES content_types(id),\n\n https_flag boolean NOT NULL DEFAULT 'f',\n\n session_id char(32),\n user_id int4 REFERENCES users(id),\n uname varchar(255),\n \n note_id int4\n \n );\n \n CREATE INDEX site_idx ON access_log(site_id);\n CREATE INDEX access_time_idx ON access_log(access_time);\n CREATE INDEX time_taken_idx ON access_log(time_taken);\n CREATE INDEX remote_ip_idx ON access_log(remote_ip);\n CREATE INDEX method_idx ON access_log(method_num);\n CREATE INDEX url_idx ON access_log(url_id);\n CREATE INDEX referer_idx ON access_log(referer_id);\n CREATE INDEX browser_idx ON access_log(browser_id);\n CREATE INDEX status_idx ON access_log(status);\n CREATE INDEX bytes_idx ON access_log(bytes);\n CREATE INDEX content_idx ON access_log(content_id);\n CREATE INDEX https_idx ON access_log(https_flag);\n CREATE INDEX session_idx ON access_log(session_id);\n CREATE INDEX user_id_idx ON access_log(user_id); \n CREATE INDEX user_idx ON access_log(uname); \n CREATE INDEX note_idx ON access_log(note_id);\n \n\n --- SNAP ---\n\n\nurl_idx seems OK:\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE url_id = 1000;\n Index Scan using url_idx on access_log \n (cost=0.00..3618.92 rows=1002 width=89)\n\n\n\nBut the others not:\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE method_num = 0;\n Seq Scan on access_log (cost=0.00..16443.71 rows=559371 
width=89)\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE browser_id = 500; \n Seq Scan on access_log (cost=0.00..16443.71 rows=7935 width=89)\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE content_id = 20;\n Seq Scan on access_log (cost=0.00..16443.71 rows=20579 width=89)\n\n....\n\n\nAnd very strange:\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE access_time > \n '2001-04-10 10:10:10';\n Index Scan using access_time_idx on access_log \n (cost=0.00..10605.12 rows=3251 width=89)\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE access_time > \n '2001-04-08 10:10:10';\n Seq Scan on access_log (cost=0.00..16443.71 rows=152292 width=89)\n\n\n\nIndexes are used for explicit IN lists, but not in subselects:\n\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE url_id in (1); \n Index Scan using url_idx on access_log \n (cost=0.00..3618.92 rows=1002 width=89)\n\n logger=# EXPLAIN SELECT * FROM access_log WHERE url_id in (1,2,3);\n Index Scan using url_idx, url_idx, url_idx on access_log \n (cost=0.00..10871.79 rows=3000 width=89)\n\n\nBut:\n\n logger=# EXPLAIN SELECT * FROM access_log \n WHERE url_id IN (SELECT 1);\n Seq Scan on access_log (cost=0.00..16443.71 rows=572537 width=89)\n SubPlan\n -> Materialize (cost=0.00..0.00 rows=0 width=0)\n -> Result (cost=0.00..0.00 rows=0 width=0)\n\n\n\nIndexes are also not used for remote_ip, ORDER BY access_time\n(timestamp), ORDER BY time_taken (interval), status, method_num etc. The\nonly one I found where indexes are used is url_id!\n\nhmmm, any hints? 
Bug?\n\n\nPostgres configuration is default for all optimizations (geqo_...);\nothers:\n\n sort_mem = 1024\n shared_buffers = 512 \n\n\nTested on Linux and Win2K/Cygwin.\n\n\nFor the hackers: explain verbose follows:\n\nlogger=# EXPLAIN VERBOSE SELECT * FROM access_log WHERE url_id = 1000;\nNOTICE: QUERY DUMP:\n\n{ INDEXSCAN :startup_cost 0.00 :total_cost 3618.92 :rows 1002 :width 89\n:qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 21\n:restypmod -1 :resname site_id :reskey 0 :reskeyop 0 :ressortgroupref 0\n:resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 21 :vartypmod\n-1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom {\nRESDOM :resno 2 :restype 1184 :restypmod -1 :resname access_time :reskey\n0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 2 :vartype 1184 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2}} { TARGETENTRY :resdom { RESDOM :resno 3 :restype 1186\n:restypmod -1 :resname time_taken :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 3 :vartype 1186\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY\n:resdom { RESDOM :resno 4 :restype 869 :restypmod -1 :resname remote_ip\n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR\n:varno 1 :varattno 4 :vartype 869 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 5\n:restype 21 :restypmod -1 :resname method_num :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 5\n:vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} {\nTARGETENTRY :resdom { RESDOM :resno 6 :restype 23 :restypmod -1 :resname\nurl_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr {\nVAR :varno 1 :varattno 6 :vartype 23 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 6}} { TARGETENTRY :resdom { RESDOM :resno 7\n:restype 23 :restypmod -1 :resname referer_id :reskey 0 
:reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 7\n:vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 7}} {\nTARGETENTRY :resdom { RESDOM :resno 8 :restype 23 :restypmod -1 :resname\nbrowser_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 8 :vartype 23 :vartypmod -1 :varlevelsup\n0 :varnoold 1 :varoattno 8}} { TARGETENTRY :resdom { RESDOM :resno 9\n:restype 21 :restypmod -1 :resname status :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 9\n:vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 9}} {\nTARGETENTRY :resdom { RESDOM :resno 10 :restype 23 :restypmod -1\n:resname bytes :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 10 :vartype 23 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 10}} { TARGETENTRY :resdom {\nRESDOM :resno 11 :restype 21 :restypmod -1 :resname content_id :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 11 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 11}} { TARGETENTRY :resdom { RESDOM :resno 12 :restype 16\n:restypmod -1 :resname https_flag :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 12 :vartype 16\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 12}} { TARGETENTRY\n:resdom { RESDOM :resno 13 :restype 1042 :restypmod 36 :resname\nsession_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 13 :vartype 1042 :vartypmod 36 \n:varlevelsup 0 :varnoold 1 :varoattno 13}} { TARGETENTRY :resdom {\nRESDOM :resno 14 :restype 23 :restypmod -1 :resname user_id :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 14 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 14}} { TARGETENTRY :resdom { RESDOM :resno 15 :restype 1043\n:restypmod 259 :resname uname :reskey 0 
:reskeyop 0 :ressortgroupref 0\n:resjunk false } :expr { VAR :varno 1 :varattno 15 :vartype 1043\n:vartypmod 259 :varlevelsup 0 :varnoold 1 :varoattno 15}} { TARGETENTRY\n:resdom { RESDOM :resno 16 :restype 23 :restypmod -1 :resname note_id\n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR\n:varno 1 :varattno 16 :vartype 23 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 16}}) :qpqual <> :lefttree <> :righttree <>\n:extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1 :indxid (\n1870492) :indxqual (({ EXPR :typeOid 16 :opType op :oper { OPER :opno\n96 :opid 65 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 1\n:vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 6} {\nCONST :consttype 23 :constlen 4 :constbyval true :constisnull false\n:constvalue 4 [ -24 3 0 0 ] })})) :indxqualorig (({ EXPR :typeOid 16 \n:opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 } :args ({\nVAR :varno 1 :varattno 6 :vartype 23 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 6} { CONST :consttype 23 :constlen 4 :constbyval\ntrue :constisnull false :constvalue 4 [ -24 3 0 0 ] })})) :indxorderdir\n1 }\nNOTICE: QUERY PLAN:\n\nIndex Scan using url_idx on access_log (cost=0.00..3618.92 rows=1002\nwidth=89)\n\nEXPLAIN\n\n\n\nlogger=# EXPLAIN VERBOSE SELECT * FROM access_log WHERE referer_id =\n1000;\nNOTICE: QUERY DUMP:\n\n{ SEQSCAN :startup_cost 0.00 :total_cost 16443.71 :rows 11715 :width 89\n:qptargetlist ({ TARGETENTRY :resdom { RESDOM :resno 1 :restype 21\n:restypmod -1 :resname site_id :reskey 0 :reskeyop 0 :ressortgroupref 0\n:resjunk false } :expr { VAR :varno 1 :varattno 1 :vartype 21 :vartypmod\n-1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom {\nRESDOM :resno 2 :restype 1184 :restypmod -1 :resname access_time :reskey\n0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 2 :vartype 1184 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 2}} { TARGETENTRY :resdom { RESDOM :resno 3 
:restype 1186\n:restypmod -1 :resname time_taken :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 3 :vartype 1186\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY\n:resdom { RESDOM :resno 4 :restype 869 :restypmod -1 :resname remote_ip\n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR\n:varno 1 :varattno 4 :vartype 869 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 5\n:restype 21 :restypmod -1 :resname method_num :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 5\n:vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} {\nTARGETENTRY :resdom { RESDOM :resno 6 :restype 23 :restypmod -1 :resname\nurl_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr {\nVAR :varno 1 :varattno 6 :vartype 23 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 6}} { TARGETENTRY :resdom { RESDOM :resno 7\n:restype 23 :restypmod -1 :resname referer_id :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 7\n:vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 7}} {\nTARGETENTRY :resdom { RESDOM :resno 8 :restype 23 :restypmod -1 :resname\nbrowser_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 8 :vartype 23 :vartypmod -1 :varlevelsup\n0 :varnoold 1 :varoattno 8}} { TARGETENTRY :resdom { RESDOM :resno 9\n:restype 21 :restypmod -1 :resname status :reskey 0 :reskeyop 0\n:ressortgroupref 0 :resjunk false } :expr { VAR :varno 1 :varattno 9\n:vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 9}} {\nTARGETENTRY :resdom { RESDOM :resno 10 :restype 23 :restypmod -1\n:resname bytes :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 10 :vartype 23 :vartypmod -1 \n:varlevelsup 0 :varnoold 1 :varoattno 10}} { TARGETENTRY :resdom {\nRESDOM :resno 11 :restype 21 
:restypmod -1 :resname content_id :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 11 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 11}} { TARGETENTRY :resdom { RESDOM :resno 12 :restype 16\n:restypmod -1 :resname https_flag :reskey 0 :reskeyop 0 :ressortgroupref\n0 :resjunk false } :expr { VAR :varno 1 :varattno 12 :vartype 16\n:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 12}} { TARGETENTRY\n:resdom { RESDOM :resno 13 :restype 1042 :restypmod 36 :resname\nsession_id :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\n:expr { VAR :varno 1 :varattno 13 :vartype 1042 :vartypmod 36 \n:varlevelsup 0 :varnoold 1 :varoattno 13}} { TARGETENTRY :resdom {\nRESDOM :resno 14 :restype 23 :restypmod -1 :resname user_id :reskey 0\n:reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\n:varattno 14 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 14}} { TARGETENTRY :resdom { RESDOM :resno 15 :restype 1043\n:restypmod 259 :resname uname :reskey 0 :reskeyop 0 :ressortgroupref 0\n:resjunk false } :expr { VAR :varno 1 :varattno 15 :vartype 1043\n:vartypmod 259 :varlevelsup 0 :varnoold 1 :varoattno 15}} { TARGETENTRY\n:resdom { RESDOM :resno 16 :restype 23 :restypmod -1 :resname note_id\n:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR\n:varno 1 :varattno 16 :vartype 23 :vartypmod -1 :varlevelsup 0\n:varnoold 1 :varoattno 16}}) :qpqual ({ EXPR :typeOid 16 :opType op\n:oper { OPER :opno 96 :opid 65 :opresulttype 16 } :args ({ VAR :varno 1\n:varattno 7 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1\n:varoattno 7} { CONST :consttype 23 :constlen 4 :constbyval true\n:constisnull false :constvalue 4 [ -24 3 0 0 ] })}) :lefttree <>\n:righttree <> :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1 }\nNOTICE: QUERY PLAN:\n\nSeq Scan on access_log (cost=0.00..16443.71 rows=11715 width=89)\n\nEXPLAIN\n\n\n\nThanks & Ciao\n\n Alvar\n\n-- \nAGI\nMagirusstrasse 
21B, 70469 Stuttgart\nFon +49 (0)711.228 74-50, Fax +49 (0)711.228 74-88\n+++news+++news+++news+++\nBeste Image-Website 2001 kommt von AGI\nhttp://www.agi.de/tagebuch\nhttp://www.agi.com/diary (english)\n",
"msg_date": "Tue, 10 Apr 2001 12:37:20 +0200",
"msg_from": "Alvar Freude <alvar@agi.de>",
"msg_from_op": true,
"msg_subject": "Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "> I have the following table, containing about 570000 Rows, but some\n> indexes are not used, on 7.1RC4, freshly vacuumed (analyse). It was the\n> same at least in 7.1RC1\n> CREATE TABLE access_log(\n> access_time timestamp NOT NULL DEFAULT NOW(),\n> method_num int2 NOT NULL,\n> url_id int4 NOT NULL REFERENCES urls(id),\n> );\n> CREATE INDEX method_idx ON access_log(method_num);\n> CREATE INDEX url_idx ON access_log(url_id);\n> url_idx seems OK:\n> But the others not:\n> logger=# EXPLAIN SELECT * FROM access_log WHERE method_num = 0;\n> Seq Scan on access_log (cost=0.00..16443.71 rows=559371 width=89)\n\nThe parser does not know that your int4 constant \"0\" can be represented\nas an int2. Try\n\n SELECT * FROM access_log WHERE method_num = int2 '0';\n\n(note the type coercion on the constant; there are other ways of\nspecifying the same thing).\n\nFor the other cases, PostgreSQL is estimating the query cost to be lower\nwith a sequential scan. For the \"SELECT 1\" subselect case, it may be\nthat the optimizer does not cheat and determine that there will be only\none row returned, or that the query can be reformulated to use a simple\nconstant.\n\nHTH\n\n - Thomas\n",
"msg_date": "Tue, 10 Apr 2001 13:53:34 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
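Thomas's coercion hint can be spelled several equivalent ways; a hedged sketch against Alvar's access_log schema (7.1-era syntax):

```sql
-- Each form hands the planner an int2 constant, so method_idx becomes usable.
-- A selective value is used here; for method_num = 0, which matches most
-- rows, a sequential scan would win regardless.
SELECT * FROM access_log WHERE method_num = int2 '2';
SELECT * FROM access_log WHERE method_num = '2'::int2;
SELECT * FROM access_log WHERE method_num = CAST(2 AS int2);
```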
{
"msg_contents": "\n> url_idx seems OK:\n> \n> logger=# EXPLAIN SELECT * FROM access_log WHERE url_id = 1000;\n> Index Scan using url_idx on access_log \n> (cost=0.00..3618.92 rows=1002 width=89)\n> \n> \n> \n> But the others not:\n> \n> logger=# EXPLAIN SELECT * FROM access_log WHERE method_num = 0;\n> Seq Scan on access_log (cost=0.00..16443.71 rows=559371 width=89)\n\nIn this case you'd need to coerce the 0 to int2 in any case, but it\nfigures that most of your rows are returned so a sequential scan will\nbe faster. For an index scan, it's got to seek around the heap file\ndoing random access to determine if rows are visible to your transaction,\nwhich is more expensive than sequential reads, so at some point the\noptimizer will guess the sequential scan to be faster. \n\n> logger=# EXPLAIN SELECT * FROM access_log WHERE browser_id = 500; \n> Seq Scan on access_log (cost=0.00..16443.71 rows=7935 width=89)\n> \n> logger=# EXPLAIN SELECT * FROM access_log WHERE content_id = 20;\n> Seq Scan on access_log (cost=0.00..16443.71 rows=20579 width=89)\n\nNot sure on these two; it probably is estimating less disk access\nfor doing the sequential scan. My guess would be that the break point\nis probably somewhere between the 1000 and 8000 row point for the\ntwo queries. 
And I believe the second was an int2, so you'll need\nto cast the 20.\n\n> And very strange:\n> \n> logger=# EXPLAIN SELECT * FROM access_log WHERE access_time > \n> '2001-04-10 10:10:10';\n> Index Scan using access_time_idx on access_log \n> (cost=0.00..10605.12 rows=3251 width=89)\n> \n> logger=# EXPLAIN SELECT * FROM access_log WHERE access_time > \n> '2001-04-08 10:10:10';\n> Seq Scan on access_log (cost=0.00..16443.71 rows=152292 width=89)\n\nSame as above: for 3000 rows it thinks the index scan will be faster,\nfor 152000 rows the sequential scan.\n\n> But:\n> \n> logger=# EXPLAIN SELECT * FROM access_log \n> WHERE url_id IN (SELECT 1);\n> Seq Scan on access_log (cost=0.00..16443.71 rows=572537 width=89)\n> SubPlan\n> -> Materialize (cost=0.00..0.00 rows=0 width=0)\n> -> Result (cost=0.00..0.00 rows=0 width=0)\n\nMy guess is it doesn't realize that SELECT 1 is a constant that it\ncan use the index for. IN (subselect) isn't handled very well right\nnow. \n\n> Indexes are also not used for remote_ip, ORDER BY access_time\n> (timestamp), ORDER BY time_taken (interval), status, method_num etc. The\n> only I found where indexes are used is url_id!\n\nAny of the int2s will require explicit casting of a constant in order\nto use the index. I'm not sure on the others.\n\n",
"msg_date": "Tue, 10 Apr 2001 09:13:49 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> The parser does not know that your int4 constant \"0\" can be represented\n> as an int2. Try\n> \n> SELECT * FROM access_log WHERE method_num = int2 '0';\n\nhmmm, but it's still a sequential scan:\n\n\n logger=# explain SELECT * FROM access_log \n WHERE method_num = int2 '0';\n Seq Scan on access_log (cost=0.00..16443.71 rows=559371 width=89)\n\nBut: \nNow I realised: the number of rows! :)\nIf I make \"WHERE method_num = int2 '2'\", then the index is used.\nInteresting -- so it seems that the optimizer uses the value of the\nWHERE clause to estimate what might be faster, and guesses that an index\nscan has more overhead and is slower. Nice!\n\n\n> For the other cases, PostgreSQL is estimating the query cost to be lower\n> with a sequential scan. \n\nhm, OK, but I guess that it is estimating wrong ;) \n\nAfter re-reading the using-explain chapter in the docs I guess I\nunderstand the problems of estimating the number of rows ...\n\n\nDo you have any hints on how to optimize the ..._cost values?\n\nPerhaps it is possible to write a test program which tries out some\ngood ..._cost values -- I volunteer, but I guess it should be possible\nto force some optimizer choices and measure the real time the\ndifferent methods cost.\n\n\n\n> For the \"SELECT 1\" subselect case, it may be that the optimizer does not cheat and > determine that there will be only\n> one row returned, or that the query can be reformulated to use a simple\n> constant.\n\nyes, it was only an example -- I hope nobody is really so stupid and\nuses a \"select 1\" subselect ;)\n\nIt might be an optimization that the whole subselect is performed before\nthe outer select is called, so the result of the subselect can be used\nin the query planner.\n\n\nCiao\n Alvar\n\n-- \nAGI\nMagirusstrasse 21B, 70469 Stuttgart\nFon +49 (0)711.228 74-50, Fax +49 (0)711.228 74-88\n+++news+++news+++news+++\nBeste Image-Website 2001 kommt von 
AGI\nhttp://www.agi.de/tagebuch\nhttp://www.agi.com/diary (english)\n",
"msg_date": "Tue, 10 Apr 2001 18:39:38 +0200",
"msg_from": "Alvar Freude <alvar@agi.de>",
"msg_from_op": true,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
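On Alvar's question about tuning the ..._cost values: in 7.1 the planner cost constants live in postgresql.conf and can also be changed per session, so they can be experimented with interactively. A sketch, with the parameter name and default assumed from the 7.1 configuration file:

```sql
SHOW random_page_cost;         -- cost of a nonsequential page fetch (assumed default 4.0)
SET random_page_cost = 2.0;    -- lowering it makes index scans look cheaper
EXPLAIN SELECT * FROM access_log WHERE browser_id = 500;
RESET random_page_cost;        -- undo the experiment
```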
{
"msg_contents": "Thomas Lockhart wrote:\n\n> The parser does not know that your int4 constant \"0\" can be represented\n> as an int2. Try\n> \n> SELECT * FROM access_log WHERE method_num = int2 '0';\n> \n> (note the type coersion on the constant; there are other ways of\n> specifying the same thing).\n\nSurely this is something that should be fixed. An int2 column ought to behave\nexactly like an int4 with a CHECK() constraint forcing the value to be in\nrange. \n\nIn object oriented terms:\n\n a smallint isA integer\n an integer isA bigint\n\nLikewise:\n\n an integer isA smallint if it falls in -32768..32767\n a bigint isA integer if it falls in -2147483648..2147483647\n\nSimilar promotion rules should apply for all other numeric types. Any floating\npoint value without a fractional part should be treated exactly like a big\ninteger.\n\nThe issues here are closely related to the 7.1 changes in INHERITS semantics. \nIf any operator treats a smaller precision (more highly constrained) type in a\nmaterially different way than a compatible higher precision type, it is\nfundamentally broken for exactly the same reason that we expect a query on a\nsuper-class would be if it did not return all matching instances of every\nsubclass.\n\nIf a function is overloaded with multiple compatible scalar data types, the\ndatabase should be free to call any matching implementation after performing\nan arbitrary number of *lossless* compatible type conversions.\n\ni.e. if you have f(smallint), f(integer), and f(double) the actual function\ncalled by f(0) should be undefined. The distinction between smallint '0',\ninteger '0', and double '0' is meaningless and should be explicitly ignored.\n\nThis is a little extreme, but I do not think it makes a lot of sense to\nmaintain semantic differences between different representations of the same\nnumber. (Oracle certainly doesn't)\n\nAny comments?\n\n\n - Mark Butler\n",
"msg_date": "Tue, 10 Apr 2001 11:06:00 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": false,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "Hmm. The problem is as you describe, but the requirements for a solution\nare more severe than you (or I) would hope. \n\nWe would like to have an extensible mechanism for type promotion and\ndemotion, but it is not (yet) clear how to implement it. In this case,\nwe must demote a constant assigned as \"int4\" by the parser into an\n\"int2\" to be directly comparable to the indexed column. We could\nprobably do this with some hack code as a brute-force exercise, but no\none has yet bothered (patches welcome ;) But in general, we must handle\nthe case that the specified constraint is *not* directly convertible to\nthe indexed type (e.g. is out of range) even though this would seem to\nreduce to a choice between a trivial noop or a sequential scan of the\nentire table. If we can do this without cluttering up the code too much,\nwe should go ahead and do it, but it has apparently been a low priority.\n\n - Thomas\n",
"msg_date": "Tue, 10 Apr 2001 19:42:56 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "Good day,\n\nI've been experimenting a bit with Full Text Indexing in PostgreSQL. I\nhave found several conflicting sites various places on the net pertaining\nto whether or not PostgreSQL supports FTI, and I was hoping I could find\nan authoritative answer here - I tried searching the website's archives,\nbut the search seems to be having some problems.\n\nAt any rate, I am running a CVS snapshot of 7.1, and I have been trying to\ncreate a full text index on a series of resumes. Some of these exceed 8k\nin size, which is no longer a storage problem of course with 7.1, but I\nseem to have run into the wicked 8k once again. Specifically:\n\nERROR: index_formtuple: data takes 9344 bytes, max is 8191\n\nFurthermore, after trying to just index on a 8191-character long substring\nof the resume, I run into the following:\n\nERROR: btree: index item size 3948 exceeds maximum 2713\n\nThe only way I could actually get the index created was to substring the\nbody of the resumes down to 2k. I also later tried using HASH rather than\nBTREE, which worked, but none of these solutions really appreciably\nincreased performance in the way we were hoping.\n\nAre these known and accepted limitations of the current 7.1\nimplementation, or am I doing something terribly wrong? ;)\nOn Tue, 10 Apr 2001, Thomas Lockhart wrote:\n\n>> I have the following table, containing about 570000 Rows, but some\n>> indexes are not used, on 7.1RC4, freshly vacuumed (analyse). 
It was the\n>> same at least in 7.1RC1\n>> CREATE TABLE access_log(\n>> access_time timestamp NOT NULL DEFAULT NOW(),\n>> method_num int2 NOT NULL,\n>> url_id int4 NOT NULL REFERENCES urls(id),\n>> );\n>> CREATE INDEX method_idx ON access_log(method_num);\n>> CREATE INDEX url_idx ON access_log(url_id);\n>> url_idx seems OK:\n>> But the others not:\n>> logger=# EXPLAIN SELECT * FROM access_log WHERE method_num = 0;\n>> Seq Scan on access_log (cost=0.00..16443.71 rows=559371 width=89)\n>\n>The parser does not know that your int4 constant \"0\" can be represented\n>as an int2. Try\n>\n> SELECT * FROM access_log WHERE method_num = int2 '0';\n>\n>(note the type coersion on the constant; there are other ways of\n>specifying the same thing).\n>\n>For the other cases, PostgreSQL is estimating the query cost to be lower\n>with a sequential scan. For the \"SELECT 1\" subselect case, it may be\n>that the optimizer does not cheat and determine that there will be only\n>one row returned, or that the query can be reformulated to use a simple\n>constant.\n>\n>HTH\n>\n> - Thomas\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \n--\n<COMPANY>CommandPrompt\t- http://www.commandprompt.com\t</COMPANY>\n<PROJECT>OpenDocs, LLC.\t- http://www.opendocs.org\t</PROJECT>\n<PROJECT>LinuxPorts \t- http://www.linuxports.com </PROJECT>\n<WEBMASTER>LDP\t\t- http://www.linuxdoc.org\t</WEBMASTER>\n--\nInstead of asking why a piece of software is using \"1970s technology,\"\nstart asking why software is ignoring 30 years of accumulated wisdom.\n--\n\n",
"msg_date": "Tue, 10 Apr 2001 15:41:31 -0700 (PDT)",
"msg_from": "Poet/Joshua Drake <poet@linuxports.com>",
"msg_from_op": false,
"msg_subject": "Speaking of Indexing... (Text indexing)"
},
{
"msg_contents": "On Tue, 10 Apr 2001, Poet/Joshua Drake wrote:\n\n> I've been experimenting a bit with Full Text Indexing in PostgreSQL. I\n> have found several conflicting sites various places on the net pertaining\n> to whether or not PostgreSQL supports FTI, and I was hoping I could find\n> an authoritative answer here - I tried searching the website's archives,\n> but the search seems to be having some problems.\n> \n> At any rate, I am running a CVS snapshot of 7.1, and I have been trying to\n> create a full text index on a series of resumes. Some of these exceed 8k\n> in size, which is no longer a storage problem of course with 7.1, but I\n> seem to have run into the wicked 8k once again. Specifically:\n\nJoshua --\n\nCREATE INDEX ... creates an index on a field, allowing for faster\nsearches, *if* you're looking to match the first part of that text string.\nSo, if I have a table of movie titles, creating an index on column title\nwill allow for faster searches if my criterion is something like\ntitle='Toto Les Heros' (or like 'Toto%' or such), but not (AFAIK) for\ntitle ~ 'Les' or title LIKE '%Les%'. The index doesn't help here.\n\nFor these long fields you have, you probably want to search for a word in\nthe field, not match the start of the field. A regular index isn't your\nanswer.\n\nThere is a full text indexing solution in the contrib/ directory of the\nsource. It essentially creates a new table w/ every occurrence of every word\nfragment, with a reference back to the row that contains it. Searching\nagainst this is indexed, and is speedy. The only downside is that you will\nhave a *large* table holding the full text index.\n\nMore help can be found in the README file in contrib/fulltextindex\n\nHTH,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 10 Apr 2001 22:13:08 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Speaking of Indexing... (Text indexing)"
},
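Joel's point about anchored matches can be checked directly with EXPLAIN; a sketch using his hypothetical movie-title table (note that in 7.1, index use for LIKE also depends on running in the C locale):

```sql
CREATE INDEX title_idx ON movies (title);
-- A btree index only helps patterns anchored at the start of the string:
EXPLAIN SELECT * FROM movies WHERE title LIKE 'Toto%';  -- index scan possible
EXPLAIN SELECT * FROM movies WHERE title LIKE '%Les%';  -- falls back to a sequential scan
```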
{
"msg_contents": "> Furthermore, after trying to just index on a 8191-character long substring\n> of the resume, I run into the following:\n> ERROR: btree: index item size 3948 exceeds maximum 2713\n> The only way I could actually get the index created was to substring the\n> body of the resumes down to 2k. I also later tried using HASH rather than\n> BTREE, which worked, but none of these solutions really appreciably\n> increased performance in the way we were hoping.\n> \n> Are these known and accepted limitations of the current 7.1\n> implementation, or am I doing something terribly wrong? ;)\n\nHmm. I'm pretty sure that a single index on the entire contents of a\nresume *as a single field* is close to useless. And an index on an 8k\npiece is also useless. Presumably you really want an index covering each\nsignificant word of each resume, in which case you would not run into\nthe 4k limit (or 2k limit? it is documented somewhere) on the size of an\n*index* field (which is still a limitation on PostgreSQL built with the\nstandard 8k block size. Of course, you can build with a larger block\nsize).\n\nhth\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 02:34:01 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Speaking of Indexing... (Text indexing)"
},
{
"msg_contents": "\nI believe that the basis for such a mechanism should be a model of the\nsemantic type inheritance for primitive data types. Note that type\ninheritance is a completely different concept than representation inheritance,\nas witnessed by the confusion over the now implemented proposal to correct the\nsemantics of table inheritance.\n\nLogically, a sub-type expresses the idea that any instance of the sub-type is\nalso an instance of the super-type.\n\nFor example, semantically speaking, a smallint is an integer because the set\nof all small integers is a subset of the set of all integers.\n\nWe could represent this fact with something like the pg_inherits table with\nentries for conversion functions to convert the canonical representation of\nthe sub-type into the canonical representation of the super-type and vice versa.\n\nIn a normal implementation, the index scan boundary values should be stored\ninternally using the representation of the lowest common super-type. 
That way\nyou can get a correct result for queries like(*):\n\n select * from table where smallint_column < 100000\n\nAlternatively, the query engine could internally down cast the value to be\ncompared to the index column type extended with flags like the following:\n\nCOMPATIBLE_VALUE_GREATER - value is comparable and always greater than\n any instance of column type\nCOMPATIBLE_VALUE_LESS - value is comparable and always less than\n any instance of column type\nINCOMPATIBLE_VALUE - value is not comparable to column type\n\nThe type down-conversion function would need to clear the resulting value and\nset the appropriate flag if the conversion does not succeed.\n\nThe flags would then be used to calculate which index scan boundary values are\nequivalent to the original query predicate by substituting the maximum and\nminimum allowed values of the column type as appropriate.\n\nI have not looked at the source code in detail yet, but I believe the basic\nidea is sound.\n\n - Mark Butler\n\n\nNote: Oracle avoids this whole problem for numeric types by using a common\nvariable precision format for *all* numbers. The nice thing is that you can\nincrease the precision / scale of any numeric column without touching the data\nin each row.\n\nThomas Lockhart wrote:\n> \n> Hmm. The problem is as you describe, but the requirements for a solution\n> are more severe than you (or I) would hope.\n> \n> We would like to have an extensible mechanism for type promotion and\n> demotion, but it is not (yet) clear how to implement it. In this case,\n> we must demote a constant assigned as \"int4\" by the parser into an\n> \"int2\" to be directly comparable to the indexed column. We could\n> probably do this with some hack code as a brute-force exercise, but no\n> one has yet bothered (patches welcome ;) But in general, we must handle\n> the case that the specified constraint is *not* directly convertible to\n> the indexed type (e.g. 
is out of range) even though this would seem to\n> reduce to a choice between a trivial noop or a sequential scan of the\n> entire table. If we can do this without cluttering up the code too much,\n> we should go ahead and do it, but it has apparently been a low priority.\n> \n> - Thomas\n",
"msg_date": "Tue, 10 Apr 2001 21:11:52 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": false,
"msg_subject": "Re: Extensible mechanism for type promotion / demotion"
},
{
"msg_contents": "Poet/Joshua Drake wrote:\n> \n> Good day,\n> \n> I've been experimenting a bit with Full Text Indexing in PostgreSQL. I\n> have found several conflicting sites various places on the net pertaining\n> to whether or not PostgreSQL supports FTI, and I was hoping I could find\n> an authoritative answer here - I tried searching the website's archives,\n> but the search seems to be having some problems.\n> \n> At any rate, I am running a CVS snapshot of 7.1, and I have been trying to\n> create a full text index on a series of resumes. Some of these exceed 8k\n> in size, which is no longer a storage problem of course with 7.1, but I\n> seem to have run into the wicked 8k once again. Specifically:\n> \n> ERROR: index_formtuple: data takes 9344 bytes, max is 8191\n> \n> Furthermore, after trying to just index on a 8191-character long substring\n> of the resume, I run into the following:\n> \n> ERROR: btree: index item size 3948 exceeds maximum 2713\n> \n> The only way I could actually get the index created was to substring the\n> body of the resumes down to 2k. I also later tried using HASH rather than\n> BTREE, which worked, but none of these solutions really appreciably\n> increased performance in the way we were hoping.\n> \n> Are these known and accepted limitations of the current 7.1\n> implementation, or am I doing something terribly wrong? ;)\n> On Tue, 10 Apr 2001, Thomas Lockhart wrote:\n\nYou need to use the 'contrib' code for full-text indexing. The indexing you are\ntrying to do with that is just using the whole content of the string as the index\nvalue. 
Close to useless.\n\nThe contrib code is in contrib/fulltextindex.\n\nI have a hacked version of that which changes it to keyword indexing, if you're\ninterested.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Wed, 11 Apr 2001 15:35:33 +1000",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Speaking of Indexing... (Text indexing)"
},
{
"msg_contents": "At 4/10/2001 02:42 PM, Thomas Lockhart wrote:\n>Hmm. The problem is as you describe, but the requirements for a solution\n>are more severe than you (or I) would hope.\n>\n>We would like to have an extensible mechanism for type promotion and\n>demotion, but it is not (yet) clear how to implement it. In this case,\n>we must demote a constant assigned as \"int4\" by the parser into an\n>\"int2\" to be directly comparable to the indexed column. We could\n>probably do this with some hack code as a brute-force exercise, but no\n>one has yet bothered (patches welcome ;) But in general, we must handle\n>the case that the specified constraint is *not* directly convertible to\n>the indexed type (e.g. is out of range) even though this would seem to\n>reduce to a choice between a trivial noop or a sequential scan of the\n>entire table. If we can do this without cluttering up the code too much,\n>we should go ahead and do it, but it has apparently been a low priority.\n\nWhat about going the other way around... Promote the int2 to an int4 \n(lossless). Actually for all int1,int2 datatypes (regardless of whether it \nwas the constant or the column) you could promote all to a common int4 and \nthen do comparisons. Promoting all to int8 and then doing a comparison \nwould be excessively slow.\n\n",
"msg_date": "Wed, 11 Apr 2001 09:06:58 -0500",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "> What about going the other way around... Promote the int2 to an int4\n> (lossless). Actually for all int1,int2 datatypes (regardless of whether it\n> was the constant or the column) you could promote all to a common int4 and\n> then do comparisons.\n\nThat is why the index is not used: the backend is promoting all of the\nint2 column values to \nint4 for the comparison, and concludes that the available index is not\nrelevant.\n\nThe index traversal code would need to know how to promote individual\nvalues in the index for comparison, which is an interesting idea but I\nhaven't thought about how efficient it would be. Clearly the cost would\nbe different than a simple comparison.\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 15:07:32 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Indexes not used in 7.1RC4: Bug?"
},
{
"msg_contents": "> Hmm. I'm pretty sure that a single index on the entire contents of a\n> resume *as a single field* is close to useless. And an index on an 8k\n> piece is also useless. Presumably you really want an index covering each\n> significant word of each resume, in which case you would not run into\n> the 4k limit (or 2k limit? it is documented somewhere) on the size of an\n> *index* field (which is still a limitation on PostgreSQL built with the\n> standard 8k block size. Of course, you can build with a larger block\n> size).\n\nJust an FYI..\n\nI asked the other day and someone (Tom?) told me it was about 2k.. \n\n-Mitch\n\n",
"msg_date": "Wed, 11 Apr 2001 11:33:06 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: Speaking of Indexing... (Text indexing)"
},
{
"msg_contents": "There are several ways to solve the problem:\n\n1. Convert to common numeric format for all numbers, ala Oracle\n2. Promote for comparison during the index scan\n3. Promote index boundary values for comparison in query planner only\n Convert back to index column type for actual scan\n\nOption 1 doesn't solve the general problem, has a space / performance penalty,\nand would be a major change.\n\nOption 2 involves making serious changes to every index access method, and\nalso has a performance penalty.\n\nOption 3 appears to me to be the way to go. The main general requirement is\nmethod similar to typeInheritsFrom() in backend/parser/parse_func.c to\ndetermine whether a true promotion is possible for a pair of non-complex data\ntypes.\n\nOne thing I am not clear on is how much re-planning is done when a query is\nexecuted with different parameter values. If re-planning is not done, is it\nacceptable to make minor plan changes according to the parameter values? \n\nFor example, it would be necessary to change a \"<\" operator to a \"<=\" operator\nto get proper index scan behavior on a smallint index if the original right\nhand side was greater than 32767.\n\n- Mark\n\nThomas Lockhart wrote:\n\n> That is why the index is not used: the backend is promoting all of the\n> int2 column values to\n> int4 for the comparison, and concludes that the available index is not\n> relevant.\n> \n> The index traversal code would need to know how to promote individual\n> values in the index for comparison, which is an interesting idea but I\n> haven't thought about how efficient it would be. Clearly the cost would\n> be different than a simple comparison.\n",
"msg_date": "Wed, 11 Apr 2001 10:48:49 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": false,
"msg_subject": "Re: Index type promotion"
}
] |
[
{
"msg_contents": "\n> Excessively long values are currently silently truncated when they are\n> inserted into char or varchar fields. This makes the entire notion of\n> specifying a length limit for these types kind of useless, IMO. Needless\n> to say, it's also not in compliance with SQL.\n\nTo quote Tom \"paragraph and verse please\" :-)\n\n> How do people feel about changing this to raise an error in this\n> situation?\n\nCan't do.\n\n> Does anybody rely on silent truncation?\n\nYes, iirc the only thing you are allowed to do is issue a warning,\nbut the truncation is allowed and must succeed. \n(checked in Informix and Oracle)\n\nThe appropriate SQLSTATE is: \"01004\" String data, right truncation\nnote that class 01 is a \"success with warning\".\n\nAndreas\n",
"msg_date": "Tue, 10 Apr 2001 12:46:55 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Truncation of char, varchar types"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n\n> Yes, iirc the only thing you are allowed to do is issue a warning,\n> but the truncation is allowed and must succeed.\n> (checked in Informix and Oracle)\n\n? As much as I remember, Oracle raises an error. But it's been a few\nyears since I last touched it, so maybe I'm wrong.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 10 Apr 2001 16:32:52 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Truncation of char, varchar types"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> > Excessively long values are currently silently truncated when they are\n> > inserted into char or varchar fields. This makes the entire notion of\n> > specifying a length limit for these types kind of useless, IMO. Needless\n> > to say, it's also not in compliance with SQL.\n>\n> To quote Tom \"paragraph and verse please\" :-)\n\nSQL 1992, 9.2 GR 3 e)\n\n\"\"\"\nIf the data type of T is variable-length character string and\nthe length in characters M of V is greater than the maximum\nlength in characters L of T, then,\n\nCase:\n\n i) If the rightmost M-L characters of V are all <space>s, then\n the value of T is set to the first L characters of V and\n the length in characters of T is set to L.\n\nii) If one or more of the rightmost M-L characters of V are\n not <space>s, then an exception condition is raised: data\n ^^^^^^^^^\n exception-string data, right truncation.\n\"\"\"\n\nSimilarly in SQL 1999 and for other data types.\n\n> > How do people feel about changing this to raise an error in this\n> > situation?\n>\n> Can't do.\n\nWhy not?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 10 Apr 2001 18:07:03 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: Truncation of char, varchar types"
}
] |
[
{
"msg_contents": "This is what I get in Oracle 8:\n\nSQL> CREATE TABLE test (value VARCHAR (10));\n\nTable created.\n\nSQL> INSERT INTO test VALUES ('Mike Mascari');\nINSERT INTO test VALUES ('Mike Mascari')\n *\nERROR at line 1:\nORA-01401: inserted value too large for column\n\n\nSQL> quit\n\nOf course, if the standard is ambiguous, retaining backwards \ncompatibility sure would be nice.\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tZeugswetter Andreas SB [SMTP:ZeugswetterA@wien.spardat.at]\nSent:\tTuesday, April 10, 2001 6:47 AM\nTo:\t'Peter Eisentraut'; PostgreSQL Development\nSubject:\tAW: [HACKERS] Truncation of char, varchar types\n\n\n> Excessively long values are currently silently truncated when they \nare\n> inserted into char or varchar fields. This makes the entire notion \nof\n> specifying a length limit for these types kind of useless, IMO. \n Needless\n> to say, it's also not in compliance with SQL.\n\nTo quote Tom \"paragraph and verse please\" :-)\n\n> How do people feel about changing this to raise an error in this\n> situation?\n\nCan't do.\n\n> Does anybody rely on silent truncation?\n\nYes, iirc the only thing you are allowed to do is issue a warning,\nbut the truncation is allowed and must succeed.\n(checked in Informix and Oracle)\n\nThe appropriate SQLSTATE is: \"01004\" String data, right truncation\nnote that class 01 is a \"success with warning\".\n\nAndreas\n\n",
"msg_date": "Tue, 10 Apr 2001 12:41:59 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "RE: Truncation of char, varchar types"
}
] |
[
{
"msg_contents": "\nTheoretically, should one be able to do:\n\npg_dumpall > db.out\nremove 7.0.3 bin, lib, data, etc\ninstall 7.1 bin, lib, etc\ninitdb 7.1\npsql template1 < db.out\n\nBasically, has anyone actually tried *that* yet and can report on whether\nor not it works?\n\nI'm just about to try it here, on >2gig of data, but if others have\nexperience with this, that would be great ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 10 Apr 2001 14:24:33 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Going from 7.0.3 -> 7.1 ..."
},
{
"msg_contents": "On Tue, Apr 10, 2001 at 02:24:33PM -0300, The Hermit Hacker wrote:\n> \n> Theoretically, should one be able to do:\n> \n> pg_dumpall > db.out\n> remove 7.0.3 bin, lib, data, etc\n> install 7.1 bin, lib, etc\n> initdb 7.1\n> psql template1 < db.out\n> \n> Basically, has anyone actually tried *that* yet and can report on whether\n> or not it works?\n\nWell, 7.0 pg_dump does not get the dependencies right... But in\nthat case the 7.0 -> 7.0 import also does not work.\n\n> I'm just about to try it here, on >2gig of data, but if others have\n> experience with this, that would be great ...\n\nIn my project, there is need for this well-known RDBMS tool\nknows as 'vi'... ;)\n\n\n-- \nmarko\n\n",
"msg_date": "Tue, 10 Apr 2001 21:32:33 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: Going from 7.0.3 -> 7.1 ..."
}
] |
[
{
"msg_contents": "\nv7.0.3 database:\n\ntrends_acctng=# \\d\n List of relations\n Name | Type | Owner\n-------------+-------+-------\n accounts | table | pgsql\n admin | table | pgsql\n calls | table | pgsql\n comments | table | pgsql\n cookies | table | pgsql\n credit_card | table | pgsql\n credits | table | pgsql\n logs | table | pgsql\n personal | table | pgsql\n radhist | table | pgsql\n radlog | table | pgsql\n remote_host | table | pgsql\n static_ip | table | pgsql\n users | table | pgsql\n(14 rows)\n\n\nv7.1 database:\n\ntrends_acctng=# \\d\n List of relations\n Name | Type | Owner\n--------------------------+----------+---------\n buy | table | jeff\n buy_bid_seq | sequence | jeff\n clients_c_id_seq | sequence | jeff\n cppvad_clients | table | jeff\n cppvad_clients_cc_id_seq | sequence | jeff\n cppvad_info | table | jeff\n cppvad_info_cid_seq | sequence | jeff\n download | table | jeff\n download_dlid_seq | sequence | jeff\n exchange | table | jeff\n exchange_exid_seq | sequence | jeff\n gallery | table | scrappy\n listing | table | area902\n listing_lid_seq | sequence | area902\n ndict10 | table | pgsql\n ndict11 | table | pgsql\n ndict12 | table | pgsql\n ndict16 | table | pgsql\n ndict2 | table | pgsql\n ndict3 | table | pgsql\n ndict32 | table | pgsql\n ndict4 | table | pgsql\n ndict5 | table | pgsql\n ndict6 | table | pgsql\n ndict7 | table | pgsql\n ndict8 | table | pgsql\n ndict9 | table | pgsql\n projects | table | scrappy\n thepress | table | jeff\n thepress_id_seq | sequence | jeff\n ticket | table | pgsql\n ticket_comments | table | pgsql\n ticket_ticket_id_seq | sequence | pgsql\n ticket_times | table | pgsql\n(34 rows)\n\n\nall I did was use pg_dumpall from v7.0.3 to dump to a text file, and\n\"psql template1 < dumpfile\" to load it back in again ...\n\nobviously this doesn't work like it has in the past?\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 10 Apr 2001 15:22:14 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to v7.1?"
},
{
"msg_contents": "On Tue, 10 Apr 2001, The Hermit Hacker wrote:\n\n> all I did was use pg_dumpall from v7.0.3 to dump to a text file, and\n> \"psql template1 < dumpfile\" to load it back in again ...\n> \n> obviously this doesn't work like it has in the past?\n\nMarc --\n\nWas there an error message during restore?\n\nI've been dumping/restoring w/7.1 since long before beta, w/o real\nproblems, but haven't been doing this w/7.0.3 stuff. But still, psql\nshould give you some error messages.\n\n(I'm sure you know this, but for the benefit of others on the list)\nIn Linux, I usually use the command)\n\n psql dbname < dumpfile 2>&1 | grep ERROR\n\nso that I don't miss any errors among the all the NOTICEs\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 10 Apr 2001 14:33:37 -0400 (EDT)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to v7.1?"
},
{
"msg_contents": "\nNo errors, nothing ... here is the backend:\n\n%bin/postmaster -D /usr/local/pgsql/data\nDEBUG: database system was shut down at 2001-04-10 15:04:08 ADT\nDEBUG: CheckPoint record at (0, 1522068)\nDEBUG: Redo record at (0, 1522068); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 615; NextOid: 18720\nDEBUG: database system is in production state\nDEBUG: copy: line 445, XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\nDEBUG: MoveOfflineLogs: remove 0000000000000000\n\nand I ran the restore in 'script' to save everything, and as:\n\npsql -q template1 < pg_dumpall.out\n\nand there are no errors in the resultant file ...\n\nFor all intensive purposes, the restore *looked* clean ... but, going back\nand looking at the dump file, the dump wasn't clean *puzzled look*\n\nI'm going to have to look at this some more, but its pg_dumpall in v7.0.3\nthat is dumping the wrong data, not the restore :(\n\nall 77 databases got dump'd as the same database:\n\nYou are now connected to database wind.\nwind=# \\d\n List of relations\n Name | Type | Owner\n--------------------------+----------+---------\n buy | table | jeff\n buy_bid_seq | sequence | jeff\n clients_c_id_seq | sequence | jeff\n cppvad_clients | table | jeff\n cppvad_clients_cc_id_seq | sequence | jeff\n cppvad_info | table | jeff\n cppvad_info_cid_seq | sequence | jeff\n download | table | jeff\n download_dlid_seq | sequence | jeff\n exchange | table | jeff\n exchange_exid_seq | sequence | jeff\n gallery | table | scrappy\n listing | table | area902\n listing_lid_seq | sequence | area902\n ndict10 | table | pgsql\n ndict11 | table | pgsql\n ndict12 | table | pgsql\n ndict16 | table | pgsql\n ndict2 | table | pgsql\n ndict3 | table | pgsql\n ndict32 | table | pgsql\n ndict4 | table | pgsql\n ndict5 | table | pgsql\n ndict6 | table 
| pgsql\n ndict7 | table | pgsql\n ndict8 | table | pgsql\n ndict9 | table | pgsql\n projects | table | scrappy\n thepress | table | jeff\n thepress_id_seq | sequence | jeff\n ticket | table | pgsql\n ticket_comments | table | pgsql\n ticket_ticket_id_seq | sequence | pgsql\n ticket_times | table | pgsql\n(34 rows)\nwind=# \\connect viper\nYou are now connected to database viper.\nviper=# \\d\n List of relations\n Name | Type | Owner\n--------------------------+----------+---------\n buy | table | jeff\n buy_bid_seq | sequence | jeff\n clients_c_id_seq | sequence | jeff\n cppvad_clients | table | jeff\n cppvad_clients_cc_id_seq | sequence | jeff\n cppvad_info | table | jeff\n cppvad_info_cid_seq | sequence | jeff\n download | table | jeff\n download_dlid_seq | sequence | jeff\n exchange | table | jeff\n exchange_exid_seq | sequence | jeff\n gallery | table | scrappy\n listing | table | area902\n listing_lid_seq | sequence | area902\n ndict10 | table | pgsql\n ndict11 | table | pgsql\n ndict12 | table | pgsql\n ndict16 | table | pgsql\n ndict2 | table | pgsql\n ndict3 | table | pgsql\n ndict32 | table | pgsql\n ndict4 | table | pgsql\n ndict5 | table | pgsql\n ndict6 | table | pgsql\n ndict7 | table | pgsql\n ndict8 | table | pgsql\n ndict9 | table | pgsql\n projects | table | scrappy\n thepress | table | jeff\n thepress_id_seq | sequence | jeff\n ticket | table | pgsql\n ticket_comments | table | pgsql\n ticket_ticket_id_seq | sequence | pgsql\n ticket_times | table | pgsql\n(34 rows)\n\n\nneat ...\n\n\nOn Tue, 10 Apr 2001, Joel Burton wrote:\n\n> On Tue, 10 Apr 2001, The Hermit Hacker wrote:\n>\n> > all I did was use pg_dumpall from v7.0.3 to dump to a text file, and\n> > \"psql template1 < dumpfile\" to load it back in again ...\n> >\n> > obviously this doesn't work like it has in the past?\n>\n> Marc --\n>\n> Was there an error message during restore?\n>\n> I've been dumping/restoring w/7.1 since long before beta, w/o real\n> problems, but haven't been doing 
this w/7.0.3 stuff. But still, psql\n> should give you some error messages.\n>\n> (I'm sure you know this, but for the benefit of others on the list)\n> In Linux, I usually use the command)\n>\n> psql dbname < dumpfile 2>&1 | grep ERROR\n>\n> so that I don't miss any errors among the all the NOTICEs\n>\n>\n> --\n> Joel Burton <jburton@scw.org>\n> Director of Information Systems, Support Center of Washington\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 10 Apr 2001 15:43:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to v7.1?"
},
{
"msg_contents": "\nokay, not sure how we should document this, but apparently pg_dumpall\ndoesn't work as the man page at:\n\nhttp://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm\n\nappears to suggest:\n\n==========================\n%pg_dumpall -h pgsql\npsql: No pg_hba.conf entry for host localhost, user pgsql, database\ntemplate1\n\\connect template1\nselect datdba into table tmp_pg_shadow from pg_database where\ndatname = 'template1';\ndelete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;\ndrop table tmp_pg_shadow;\ncopy pg_shadow from stdin;\npsql: No pg_hba.conf entry for host localhost, user pgsql, database\ntemplate1\n\\.\ndelete from pg_group;\ncopy pg_group from stdin;\npsql: No pg_hba.conf entry for host localhost, user pgsql, database\ntemplate1\n\\.\npsql: No pg_hba.conf entry for host localhost, user pgsql, database\ntemplate1\n========================\n\nNow, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it\nstill failed, but now its working ... most confusing :(\n\nBut, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3\n...\n\nOn Tue, 10 Apr 2001, The Hermit Hacker wrote:\n\n>\n> No errors, nothing ... 
here is the backend:\n>\n> %bin/postmaster -D /usr/local/pgsql/data\n> DEBUG: database system was shut down at 2001-04-10 15:04:08 ADT\n> DEBUG: CheckPoint record at (0, 1522068)\n> DEBUG: Redo record at (0, 1522068); Undo record at (0, 0); Shutdown TRUE\n> DEBUG: NextTransactionId: 615; NextOid: 18720\n> DEBUG: database system is in production state\n> DEBUG: copy: line 445, XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES\n> DEBUG: MoveOfflineLogs: remove 0000000000000000\n>\n> and I ran the restore in 'script' to save everything, and as:\n>\n> psql -q template1 < pg_dumpall.out\n>\n> and there are no errors in the resultant file ...\n>\n> For all intensive purposes, the restore *looked* clean ... but, going back\n> and looking at the dump file, the dump wasn't clean *puzzled look*\n>\n> I'm going to have to look at this some more, but its pg_dumpall in v7.0.3\n> that is dumping the wrong data, not the restore :(\n>\n> all 77 databases got dump'd as the same database:\n>\n> You are now connected to database wind.\n> wind=# \\d\n> List of relations\n> Name | Type | Owner\n> --------------------------+----------+---------\n> buy | table | jeff\n> buy_bid_seq | sequence | jeff\n> clients_c_id_seq | sequence | jeff\n> cppvad_clients | table | jeff\n> cppvad_clients_cc_id_seq | sequence | jeff\n> cppvad_info | table | jeff\n> cppvad_info_cid_seq | sequence | jeff\n> download | table | jeff\n> download_dlid_seq | sequence | jeff\n> exchange | table | jeff\n> exchange_exid_seq | sequence | jeff\n> gallery | table | scrappy\n> listing | table | area902\n> listing_lid_seq | sequence | area902\n> ndict10 | table | pgsql\n> ndict11 | table | pgsql\n> ndict12 | table | pgsql\n> ndict16 | table | pgsql\n> ndict2 | table | pgsql\n> ndict3 | table | pgsql\n> ndict32 | table | pgsql\n> ndict4 | table | pgsql\n> ndict5 
| table | pgsql\n> ndict6 | table | pgsql\n> ndict7 | table | pgsql\n> ndict8 | table | pgsql\n> ndict9 | table | pgsql\n> projects | table | scrappy\n> thepress | table | jeff\n> thepress_id_seq | sequence | jeff\n> ticket | table | pgsql\n> ticket_comments | table | pgsql\n> ticket_ticket_id_seq | sequence | pgsql\n> ticket_times | table | pgsql\n> (34 rows)\n> wind=# \\connect viper\n> You are now connected to database viper.\n> viper=# \\d\n> List of relations\n> Name | Type | Owner\n> --------------------------+----------+---------\n> buy | table | jeff\n> buy_bid_seq | sequence | jeff\n> clients_c_id_seq | sequence | jeff\n> cppvad_clients | table | jeff\n> cppvad_clients_cc_id_seq | sequence | jeff\n> cppvad_info | table | jeff\n> cppvad_info_cid_seq | sequence | jeff\n> download | table | jeff\n> download_dlid_seq | sequence | jeff\n> exchange | table | jeff\n> exchange_exid_seq | sequence | jeff\n> gallery | table | scrappy\n> listing | table | area902\n> listing_lid_seq | sequence | area902\n> ndict10 | table | pgsql\n> ndict11 | table | pgsql\n> ndict12 | table | pgsql\n> ndict16 | table | pgsql\n> ndict2 | table | pgsql\n> ndict3 | table | pgsql\n> ndict32 | table | pgsql\n> ndict4 | table | pgsql\n> ndict5 | table | pgsql\n> ndict6 | table | pgsql\n> ndict7 | table | pgsql\n> ndict8 | table | pgsql\n> ndict9 | table | pgsql\n> projects | table | scrappy\n> thepress | table | jeff\n> thepress_id_seq | sequence | jeff\n> ticket | table | pgsql\n> ticket_comments | table | pgsql\n> ticket_ticket_id_seq | sequence | pgsql\n> ticket_times | table | pgsql\n> (34 rows)\n>\n>\n> neat ...\n>\n>\n> On Tue, 10 Apr 2001, Joel Burton wrote:\n>\n> > On Tue, 10 Apr 2001, The Hermit Hacker wrote:\n> >\n> > > all I did was use pg_dumpall from v7.0.3 to dump to a text file, and\n> > > \"psql template1 < dumpfile\" to load it back in again ...\n> > >\n> > > obviously this doesn't work like it has in the past?\n> >\n> > Marc --\n> >\n> > Was there an error message during 
restore?\n> >\n> > I've been dumping/restoring w/7.1 since long before beta, w/o real\n> > problems, but haven't been doing this w/7.0.3 stuff. But still, psql\n> > should give you some error messages.\n> >\n> > (I'm sure you know this, but for the benefit of others on the list)\n> > In Linux, I usually use the command)\n> >\n> > psql dbname < dumpfile 2>&1 | grep ERROR\n> >\n> > so that I don't miss any errors among the all the NOTICEs\n> >\n> >\n> > --\n> > Joel Burton <jburton@scw.org>\n> > Director of Information Systems, Support Center of Washington\n> >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 10 Apr 2001 15:59:03 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to v7.1?"
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> okay, not sure how we should document this, but apparently pg_dumpall\n> doesn't work as the man page at:\n>\n> http://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm\n>\n> appears to suggest:\n\n> Now, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it\n> still failed, but now its working ... most confusing :(\n>\n> But, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3\n\nExactly right. Options to pg_dumpall are only \"pg_dump options\", which\nmeans things like -o or -d. But pg_dumpall also runs a few psqls, which\ndon't see any of this.\n\nBtw., it would really seem like a neat feature if a given pg_dump suite\nwould also handle the respective previous version. Otherwise we're in a\nsituation like now where we've got a shiny new pg_dump but people that\nwant to upgrade are still stuck with the broken 7.0 incarnation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 11 Apr 2001 01:09:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't\n\timport to v7.1?"
},
{
"msg_contents": "On Wed, 11 Apr 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > okay, not sure how we should document this, but apparently pg_dumpall\n> > doesn't work as the man page at:\n> >\n> > http://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm\n> >\n> > appears to suggest:\n>\n> > Now, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it\n> > still failed, but now its working ... most confusing :(\n> >\n> > But, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3\n>\n> Exactly right. Options to pg_dumpall are only \"pg_dump options\", which\n> means things like -o or -d. But pg_dumpall also runs a few psqls, which\n> don't see any of this.\n\nOkay, but, according to the man page, -h <host> *is* a pg_dump option ...\n\npg_dump [ dbname ]\npg_dump [ -h host ] [ -p port ]\n [ -t table ]\n [ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]\n [ -o ] [ -s ] [ -u ] [ -v ] [ -x ]\n [ dbname ]\n\n\n",
"msg_date": "Tue, 10 Apr 2001 20:17:13 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't\n\timport to v7.1?"
},
{
"msg_contents": "Thus spake The Hermit Hacker\n> all 77 databases got dump'd as the same database:\n\nPersonally I never use pg_dumpall. It is easy to write a script to get\nthe list of databases and use pg_dump to dump them individually. In fact\nI like dumping individual tables if I can. Mostly I like the ability to\nfix one table if there is a problem. Finding and fixing one 7 row table\nin a multi-gigabyte files really sucks. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 11 Apr 2001 06:27:18 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import\n\tto v7.1?"
},
{
"msg_contents": "At 01:09 11/04/01 +0200, Peter Eisentraut wrote:\n>\n>Btw., it would really seem like a neat feature if a given pg_dump suite\n>would also handle the respective previous version. \n\nThis has been in the back of my mind for some time, and is why I initially\nbackported my pg_dump changes to 7.0. Unfortunately, I did not continue,\nand the backport wa targetted at 7.0, not 7.1.\n\nI would be willing to try to get a 7.0 compatibility mode going, perhaps as\na patch/contrib after 7.1 (or before, depending on release). There is\nprobably not that much effort involved; the main changes to the DB\ninterface are in the use of formatType and the function manager defintions\n- at least I think that's the case...\n\nPeter: what options are there for getting formatType working in 7.0?\n\nAlso, just in case people can think of other dump-related changes from 7.0,\nto 7.1, I have included a list below:\n\n- LOs stored differently (one or two line change)\n- formatType (a few places, but it's significant)\n- function manager (SQL substitution should work here, I hope, since the\nfmgr can detect the right protocol to use)\n- detection of relations that are views - old isViewRule would need to be\nresurrected\n- last builtin OID derivation changed\n- Handling of ZPBITOID & VARBITOID types? Not sure if this is OK for 7.0\n\nUnfortunately, I have not paid much attention to internal changes from 7.0\nto 7.1, so I have no idea what else was changed.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 11 Apr 2001 21:40:41 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3\n\tcan't import to v7.1?"
},
{
"msg_contents": "At 06:27 11/04/01 -0400, D'Arcy J.M. Cain wrote:\n>Finding and fixing one 7 row table\n>in a multi-gigabyte files really sucks. :-)\n\nAt least in 7.1 you can dump the whole DB to a file/tape, then extract one\ntable from the dump file easily...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 11 Apr 2001 21:42:11 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3\n\tcan't import to v7.1?"
},
{
"msg_contents": "On Tue, 10 Apr 2001, Joel Burton wrote:\n\n> On Tue, 10 Apr 2001, The Hermit Hacker wrote:\n>\n> > all I did was use pg_dumpall from v7.0.3 to dump to a text file, and\n> > \"psql template1 < dumpfile\" to load it back in again ...\n> >\n> > obviously this doesn't work like it has in the past?\n>\n> Marc --\n>\n> Was there an error message during restore?\n>\n> I've been dumping/restoring w/7.1 since long before beta, w/o real\n> problems, but haven't been doing this w/7.0.3 stuff. But still, psql\n> should give you some error messages.\n>\n> (I'm sure you know this, but for the benefit of others on the list)\n> In Linux, I usually use the command:\n>\n> psql dbname < dumpfile 2>&1 | grep ERROR\n>\n> so that I don't miss any errors among all the NOTICEs\n\nI recall having a problem when Marc moved the server away from\nhub.org and to db.hub.org. I couldn't import the database I\nexported from 7.0.x into it without first creating the sequences.\nCould this be something related - although I thought that had gotten\nfixed.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 11 Apr 2001 10:59:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't\n\timport to v7.1?"
},
{
"msg_contents": "At 17:18 11/04/01 +0200, Peter Eisentraut wrote:\n>\n>What I meant was that whenever the backend changes in a way that mandates\n>pg_dump changes we would leave the old way in place and only add a new\n>case to handle the new backend. \n\nThat's what I had in mind as well; I gave up on the backport because it\nseemed pointless (as you suggest).\n\n>\n>This would invariably introduce code bloat, but that could probably be\n>managed by a modular design within pg_dump, plus perhaps removing support\n>for really old versions once in a while.\n\nI was thinking that with time these version-specific cases will reduce (eg.\ndefinition schemas will help), and that we could put all the DB interface\ninto separate modules.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 12 Apr 2001 01:17:14 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3\n\tcan't import to v7.1?"
},
{
"msg_contents": "Philip Warner writes:\n\n> At 01:09 11/04/01 +0200, Peter Eisentraut wrote:\n> >\n> >Btw., it would really seem like a neat feature if a given pg_dump suite\n> >would also handle the respective previous version.\n>\n> This has been in the back of my mind for some time, and is why I initially\n> backported my pg_dump changes to 7.0. Unfortunately, I did not continue,\n> and the backport was targeted at 7.0, not 7.1.\n\nThis is not really what I had in mind. Backporting enhancements would\nonly serve the users that manually installed the enhancements. Actually,\nit's quite idiosyncratic, because the point of a new release is to publish\nenhancements.\n\nWhat I meant was that whenever the backend changes in a way that mandates\npg_dump changes we would leave the old way in place and only add a new\ncase to handle the new backend. Stupid example:\n\nswitch (backend_version)\n{\n\tcase 71:\n\t\tresult = PQexec(\"select * from pg_class;\");\n\t\tbreak;\n\tcase 72:\n\t\tresult = PQexec(\"select * from pg_newnameforpgclass;\");\n\t\tbreak;\n}\n\nThis would invariably introduce code bloat, but that could probably be\nmanaged by a modular design within pg_dump, plus perhaps removing support\nfor really old versions once in a while.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 11 Apr 2001 17:18:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't\n\timport to v7.1?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Btw., it would really seem like a neat feature if a given pg_dump suite\n> would also handle the respective previous version. Otherwise we're in a\n> situation like now where we've got a shiny new pg_dump but people that\n> want to upgrade are still stuck with the broken 7.0 incarnation.\n\nNo more stuck than they were if they had needed to reload from their\ndump files into 7.0.\n\nI really doubt that it's worth going out of our way to try to keep\npg_dump compatible with obsolete backends. If we had infinite manpower,\nthen sure, but I think the time is better spent elsewhere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 13:25:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to\n\tv7.1?"
}
] |
[
{
"msg_contents": "\n> > > Excessively long values are currently silently truncated when they are\n> > > inserted into char or varchar fields. This makes the entire notion of\n> > > specifying a length limit for these types kind of useless, IMO. Needless\n> > > to say, it's also not in compliance with SQL.\n> >\n> > To quote Tom \"paragraph and verse please\" :-)\n> \n> SQL 1992, 9.2 GR 3 e)\n> \n> \"\"\"\n> If the data type of T is variable-length character string and\n> the length in characters M of V is greater than the maximum\n> length in characters L of T, then,\n> \n> Case:\n> \n> i) If the rightmost M-L characters of V are all <space>s, then\n> the value of T is set to the first L characters of V and\n> the length in characters of T is set to L.\n> \n> ii) If one or more of the rightmost M-L characters of V are\n> not <space>s, then an exception condition is raised: data\n> ^^^^^^^^^\n> exception-string data, right truncation.\n> \"\"\"\n\nThank you. Is an \"exception condition\" necessarily an error, or \nis a warning also an exception condition ?\n\n> Similarly in SQL 1999 and for other data types.\n> \n> > > How do people feel about changing this to raise an error in this\n> > > situation?\n> >\n> > Can't do.\n> \n> Why not?\n\nBecause other db's only raise a warning. Of course we don't want to\ncopy that behavior if they are not conformant. See above question.\n\nAndreas\n",
"msg_date": "Wed, 11 Apr 2001 10:37:25 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Truncation of char, varchar types"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> Thank you. Is an \"exception condition\" necessarily an error, or\n> is a warning also an exception condition ?\n\nA warning/notice is called a \"completion condition\".\n\n> Because other db's only raise a warning. Of course we don't want to\n> copy that behavior if they are not conformant. See above question.\n\nSomeone said Oracle raises an error. Informix seems to be the only other\ndb that truncates silently. I think Oracle wins here...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 11 Apr 2001 20:25:41 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Truncation of char, varchar types"
}
] |
[
{
"msg_contents": "hello everybody,\nCan you help me?\n\nI have POSTGRESQL 7.0.3,\nI try to create a simple view by typing:\n\ncreate view \"xx\" as select \"aa.yy\", \"bb.yy\" from \"yy\" order by \"bb.yy\"\n\nthe problem is that the ORDER BY parameter is not implemented with CREATE VIEW,\nso how can I create such a simple query??\n\nBest regards\nMarcin\n\n\n\n",
"msg_date": "Wed, 11 Apr 2001 10:50:45 +0200",
"msg_from": "\"Marcin Wasilewski\" <marcingrupy@poczta.onet.pl>",
"msg_from_op": true,
"msg_subject": "ORDER BY ????"
},
{
"msg_contents": "\nOn Wed, 11 Apr 2001, Marcin Wasilewski wrote:\n\n> hello everybody,\n> Can you help me?\n> \n> I have POSTGRESQL 7.0.3,\n> I try to create simple view by typing.\n> \n> create view \"xx\" as select \"aa.yy\", \"bb.yy\" from \"yy\" order by \"bb.yy\"\n> \n> the problem is that parameter order is not implemented with create view.\n> so how can I create such simple query??\n\nYou probably want the order by on the select queries on \"xx\". I\nbelieve order bys are only legal on cursor creation and direct select\nstatements.\n\n\n",
"msg_date": "Fri, 13 Apr 2001 16:18:23 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY ????"
}
] |
[
{
"msg_contents": "Hello, everybody\n\nI have a new question to you. ;-)))\n1. I wrote a database in Access and I put there some Polish words like ����.\n2. Migrated the base using PGADMIN to PostgreSQL 7.0.3 on Solaris 8.\n3. What can I do to see those Polish words?\n\n Maybe set a codepage or any standard.\n I need help from you.\nBest regards.\nMarcin\n\n\n\n",
"msg_date": "Wed, 11 Apr 2001 11:56:13 +0200",
"msg_from": "\"Marcin Wasilewski\" <marcingrupy@poczta.onet.pl>",
"msg_from_op": true,
"msg_subject": "Languages"
}
] |
[
{
"msg_contents": "debian unstable, i386.\nupgrade libreadline 4.2\npostgres doesn't compile.\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/interfaces/libpq -I../../../src/include -c -o tab-complete.o tab-complete.c\ntab-complete.c: In function `initialize_readline':\ntab-complete.c:103: warning: assignment from incompatible pointer type\ntab-complete.c: In function `psql_completion':\ntab-complete.c:292: warning: implicit declaration of function `completion_matches'\ntab-complete.c:292: warning: assignment makes pointer from integer without a cast\ntab-complete.c:296: warning: assignment makes pointer from integer without a cast\ntab-complete.c:301: warning: assignment makes pointer from integer without a cast\ntab-complete.c:309: warning: assignment makes pointer from integer without a cast\ntab-complete.c:320: warning: assignment makes pointer from integer without a cast\ntab-complete.c:325: warning: assignment makes pointer from integer without a cast\ntab-complete.c:332: warning: assignment makes pointer from integer without a cast\ntab-complete.c:337: warning: assignment makes pointer from integer without a cast\ntab-complete.c:342: warning: assignment makes pointer from integer without a cast\ntab-complete.c:347: warning: assignment makes pointer from integer without a cast\ntab-complete.c:350: warning: assignment makes pointer from integer without a cast\ntab-complete.c:366: warning: assignment makes pointer from integer without a cast\ntab-complete.c:371: warning: assignment makes pointer from integer without a cast\ntab-complete.c:378: warning: assignment makes pointer from integer without a cast\ntab-complete.c:381: warning: assignment makes pointer from integer without a cast\ntab-complete.c:392: warning: assignment makes pointer from integer without a cast\ntab-complete.c:400: warning: assignment makes pointer from integer without a cast\ntab-complete.c:406: warning: assignment makes pointer from integer without a 
cast\ntab-complete.c:410: warning: assignment makes pointer from integer without a cast\ntab-complete.c:413: warning: assignment makes pointer from integer without a cast\ntab-complete.c:420: warning: assignment makes pointer from integer without a cast\ntab-complete.c:423: warning: assignment makes pointer from integer without a cast\ntab-complete.c:429: warning: assignment makes pointer from integer without a cast\ntab-complete.c:435: warning: assignment makes pointer from integer without a cast\ntab-complete.c:440: warning: assignment makes pointer from integer without a cast\ntab-complete.c:448: warning: assignment makes pointer from integer without a cast\ntab-complete.c:455: warning: assignment makes pointer from integer without a cast\ntab-complete.c:460: warning: assignment makes pointer from integer without a cast\ntab-complete.c:465: warning: assignment makes pointer from integer without a cast\ntab-complete.c:473: warning: assignment makes pointer from integer without a cast\ntab-complete.c:478: warning: assignment makes pointer from integer without a cast\ntab-complete.c:490: warning: assignment makes pointer from integer without a cast\ntab-complete.c:493: warning: assignment makes pointer from integer without a cast\ntab-complete.c:496: warning: assignment makes pointer from integer without a cast\ntab-complete.c:506: warning: assignment makes pointer from integer without a cast\ntab-complete.c:514: warning: assignment makes pointer from integer without a cast\ntab-complete.c:521: warning: assignment makes pointer from integer without a cast\ntab-complete.c:532: warning: assignment makes pointer from integer without a cast\ntab-complete.c:541: warning: assignment makes pointer from integer without a cast\ntab-complete.c:545: warning: assignment makes pointer from integer without a cast\ntab-complete.c:553: warning: assignment makes pointer from integer without a cast\ntab-complete.c:556: warning: assignment makes pointer from integer without a 
cast\ntab-complete.c:559: warning: assignment makes pointer from integer without a cast\ntab-complete.c:569: warning: assignment makes pointer from integer without a cast\ntab-complete.c:572: warning: assignment makes pointer from integer without a cast\ntab-complete.c:578: warning: assignment makes pointer from integer without a cast\ntab-complete.c:582: warning: assignment makes pointer from integer without a cast\ntab-complete.c:587: warning: assignment makes pointer from integer without a cast\ntab-complete.c:592: warning: assignment makes pointer from integer without a cast\ntab-complete.c:599: warning: assignment makes pointer from integer without a cast\ntab-complete.c:604: warning: assignment makes pointer from integer without a cast\ntab-complete.c:606: warning: assignment makes pointer from integer without a cast\ntab-complete.c:608: warning: assignment makes pointer from integer without a cast\ntab-complete.c:619: warning: assignment makes pointer from integer without a cast\ntab-complete.c:622: warning: assignment makes pointer from integer without a cast\ntab-complete.c:626: warning: assignment makes pointer from integer without a cast\ntab-complete.c:634: warning: assignment makes pointer from integer without a cast\ntab-complete.c:640: warning: assignment makes pointer from integer without a cast\ntab-complete.c:646: warning: assignment makes pointer from integer without a cast\ntab-complete.c:651: warning: assignment makes pointer from integer without a cast\ntab-complete.c:660: warning: assignment makes pointer from integer without a cast\ntab-complete.c:666: warning: assignment makes pointer from integer without a cast\ntab-complete.c:672: warning: assignment makes pointer from integer without a cast\ntab-complete.c:678: warning: assignment makes pointer from integer without a cast\ntab-complete.c:682: warning: assignment makes pointer from integer without a cast\ntab-complete.c:687: warning: assignment makes pointer from integer without a 
cast\ntab-complete.c:690: warning: assignment makes pointer from integer without a cast\ntab-complete.c:698: warning: assignment makes pointer from integer without a cast\ntab-complete.c:702: warning: assignment makes pointer from integer without a cast\ntab-complete.c:704: warning: assignment makes pointer from integer without a cast\ntab-complete.c:709: warning: assignment makes pointer from integer without a cast\ntab-complete.c:714: warning: assignment makes pointer from integer without a cast\ntab-complete.c:716: warning: assignment makes pointer from integer without a cast\ntab-complete.c:718: warning: assignment makes pointer from integer without a cast\ntab-complete.c:725: warning: assignment makes pointer from integer without a cast\ntab-complete.c:734: `filename_completion_function' undeclared (first use in this function)\ntab-complete.c:734: (Each undeclared identifier is reported only once\ntab-complete.c:734: for each function it appears in.)\ntab-complete.c:734: warning: assignment makes pointer from integer without a cast\ntab-complete.c:749: warning: assignment makes pointer from integer without a cast\ntab-complete.c:763: warning: assignment makes pointer from integer without a cast\nmake[3]: *** [tab-complete.o] Error 1\n\nciao,\nandrea\n",
"msg_date": "Wed, 11 Apr 2001 16:50:31 +0200",
"msg_from": "andrea gelmini <bungle@linux.it>",
"msg_from_op": true,
"msg_subject": "cvs postgres doesn't compile with libreadline 4.2"
},
{
"msg_contents": "andrea gelmini <bungle@linux.it> writes:\n> debian unstable, i386.\n> upgrade libreadline 4.2\n> postgres doesn't compile.\n\nSeems strange. Did you re-run PG's configure after installing libreadline?\nAre you sure that the include (.h) files found by configure match the\nlibrary (.a or .so) file?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 19:50:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cvs postgres doesn't compile with libreadline 4.2 "
},
{
"msg_contents": "andrea gelmini writes:\n\n> debian unstable, i386.\n> upgrade libreadline 4.2\n> postgres doesn't compile.\n\nIt seems there were some incompatible changes in readline 4.2. Use\nversion 4.1 until we have a fix.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 14 Apr 2001 04:18:37 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: cvs postgres doesn't compile with libreadline 4.2"
},
{
"msg_contents": "\n\nOn Sat, 14 Apr 2001, Peter Eisentraut wrote:\n\n> andrea gelmini writes:\n> \n> > debian unstable, i386.\n> > upgrade libreadline 4.2\n> > postgres doesn't compile.\n> \n> It seems there were some incompatible changes in readline 4.2. Use\n> version 4.1 until we have a fix.\n\nThe essence of the problem seems to be in the following lines of\nreadline.h that comes with libreadline 4.2 (/usr/include/readline/readline.h):\n\nLine 415-429\n\n#if 0\n/* Backwards compatibility (compat.c). These will go away sometime. */\nextern void free_undo_list __P((void));\nextern int maybe_save_line __P((void));\nextern int maybe_unsave_line __P((void));\nextern int maybe_replace_line __P((void));\n\nextern int ding __P((void));\nextern int alphabetic __P((int));\nextern int crlf __P((void));\n\nextern char **completion_matches __P((char *, rl_compentry_func_t *));\nextern char *username_completion_function __P((const char *, int));\nextern char *filename_completion_function __P((const char *, int));\n#endif\n\n\n\nAnd the fix to make it compile is the following:\n\nsrc/bin/psql/tab-complete.c\n\nLine 64:\n\nchar *filename_completion_function(char *, int);\n\nshould read\n\nchar *rl_filename_completion_function(char *, int);\n\n\nReadline continued to work for me after having upgraded my Debian to\nthe latest unstable release (which apparently also contained libreadline\n4.2) and patching the tab-complete.c file.\n\nSorry for the \"pseudodiff\" but hopefully it gives a clue. Since I do not\nknow much about libreadline and its history I hope someone else will\ntell what the actual patch should look like (so it would work with older\nversions).\n\n\nJuhan Ernits\n\n\n\n\n",
"msg_date": "Mon, 16 Apr 2001 08:06:20 +0200 (GMT)",
"msg_from": "Juhan-Peep Ernits <juhan@cc.ioc.ee>",
"msg_from_op": false,
"msg_subject": "Re: cvs postgres doesn't compile with libreadline 4.2"
}
] |
[
{
"msg_contents": "I notice that the docs have commented-out all mention of the age()\nfunctions, with the note that \"These two functions don't seem to do what\nit says here, or anything reasonable at all for that matter.\"\n\n??\n\nHow did we conclude that, and how could these be confusing? I do not\nrecall any discussion on the topic, and I'll restore the documentation\nuntil someone can refresh my memory on why these are a problem.\n\n<grumble>\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 15:15:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "age() function documentation"
},
{
"msg_contents": "> > <grumble>\n> http://www.postgresql.org/mhonarc/pgsql-hackers/2001-02/msg00550.html\n\nOK, so that narrows down the list of suspects ;)\n\nWhy do you have a problem with the age() function? It *does* behave\ndifferently than date subtraction, as explicitly mentioned in the docs\n(preserving years, etc etc). Would we like some additional clarification\nin the docs perhaps? Seems to be preferable to dropping all mention,\nespecially since it is a useful function.\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 16:09:38 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I notice that the docs have commented-out all mention of the age()\n> functions, with the note that \"These two functions don't seem to do what\n> it says here, or anything reasonable at all for that matter.\"\n>\n> ??\n>\n> How did we conclude that, and how could these be confusing? I do not\n> recall any discussion on the topic, and I'll restore the documentation\n> until someone can refresh my memory on why these are a problem.\n>\n> <grumble>\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2001-02/msg00550.html\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 11 Apr 2001 18:14:15 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Why do you have a problem with the age() function? It *does* behave\n> differently than date subtraction, as explicitly mentioned in the docs\n> (preserving years, etc etc).\n\nAs you see in one of the examples I posted, it does not preserve years and\nmonths. What exactly does that mean anyway? Simple subtraction also\npreserves years and months, as I see it.\n\n> Would we like some additional clarification in the docs perhaps? Seems\n> to be preferable to dropping all mention, especially since it is a\n> useful function.\n\nBy all means.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 11 Apr 2001 18:49:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "> Thomas Lockhart writes:\n> \n> > Why do you have a problem with the age() function? It *does* behave\n> > differently than date subtraction, as explicitly mentioned in the docs\n> > (preserving years, etc etc).\n> \n> As you see in one of the examples I posted, it does not preserve years and\n> months. What exactly does that mean anyway? Simple subtraction also\n> preserves years and months, as I see it.\n\n From your URL email, this one seems to work:\n\n\tselect age(date '1999-05-17', date '1957-06-13');\n\t age\n\t-------------------------------\n\t 41 years 11 mons 3 days 23:00\n\t(1 row)\n\nThis one did not:\n\t\n\tpeter=# select date '1999-08-13' - date '1999-06-13';\n\t ?column?\n\t----------\n\t 61\n\t(1 row)\n\nand this one is less than one month:\n\t\n\tpeter=# select age(date '1999-05-17', date '1999-06-13');\n\t age\n\t----------\n\t -27 days\n\t(1 row)\n\nI will admit age() is a little confusing, but it seems to work as\nintended.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Apr 2001 13:16:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "> As you see in one of the examples I posted, it does not preserve years and\n> months. What exactly does that mean anyway? Simple subtraction also\n> preserves years and months, as I see it.\n\nOK, so there is a documentation problem, since the functions do exactly\nwhat they claim!\n\nWhat do you mean by \"it does not preserve years and months\"? I look at\nthe same example, and run the same example here, and it does exactly\nwhat I would expect, in the way I described it in the docs ;)\n\n> > Would we like some additional clarification in the docs perhaps? Seems\n> > to be preferable to dropping all mention, especially since it is a\n> > useful function.\n> By all means.\n\nOK, I'll add some info. But assuming that we are just missing a clear\ndefinition of what \"preserves years and months\" means, here it is:\n\nTypical date/time arithmetic resolves to an absolute time or interval.\nIn those cases, *qualitative* quantities such as years and months are\nresolved to a specific absolute interval at the time of calculation.\n\nThe age() functions *preserve* the qualitative fields year and month. So\nyou see the difference in results:\n\nlockhart=# select age('today', '1957-06-13');\n-------------------------\n 43 years 9 mons 28 days\n\nlockhart=# select timestamp 'today' - timestamp '1957-06-13';\n------------\n 16008 days\n\nIn the case for the DATE type, the result is an integer value (not an\ninterval) which I believe was done intentionally but I'm not recalling\nexactly why; I can research it if necessary:\n\nlockhart=# select date 'today' - date '1957-06-13';\n----------\n 16008\n\nreturns the number of days (which is also an absolute, quantitative\ntime).\n\n - Thomas\n",
"msg_date": "Wed, 11 Apr 2001 21:26:21 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart wrote:\n\n> The age() functions *preserve* the qualitative fields year and month. So\n> you see the difference in results:\n\nWhy take away age()? I usually use it to check against INTERVALs. See:\n\nvillage=> select age(date '1999-05-17', date '1957-06-13') > '40\nyears'::interval;\n ?column?\n----------\n t\n(1 row)\n \nvillage=> select date '1999-05-17' - date '1957-06-13' > '40\nyears'::interval;\n ?column?\n----------\n f\n(1 row)\n\nIt's useful and I would like to have it this way.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Thu, 12 Apr 2001 10:38:08 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> The age() functions *preserve* the qualitative fields year and month. So\n> you see the difference in results:\n>\n> lockhart=# select age('today', '1957-06-13');\n> -------------------------\n> 43 years 9 mons 28 days\n>\n> lockhart=# select timestamp 'today' - timestamp '1957-06-13';\n> ------------\n> 16008 days\n>\n> In the case for the DATE type, the result is an integer value (not an\n> interval) which I believe was done intentionally but I'm not recalling\n> exactly why; I can research it if necessary:\n>\n> lockhart=# select date 'today' - date '1957-06-13';\n> ----------\n> 16008\n>\n> returns the number of days (which is also an absolute, quantitative\n> time).\n\nISTM that this is more a result of\n\na) timestamp subtraction not implemented per spec\n\nb) date substraction not implemented at all (it does date - integer)\n\nc) implicit type conversions running wild\n\nd) intervals not implemented per spec\n\n(spec == SQL). Lots of fun projects here... ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 12 Apr 2001 17:47:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "> ISTM that this is more a result of\n> a) timestamp subtraction not implemented per spec\n\nMaybe. But it is implemented consistently, and is more functional and\ncapable than the brain-damaged SQL9x spec (c.f. Date and Darwen) asks.\n\n> b) date subtraction not implemented at all (it does date - integer)\n\nNo, and changing what it *does* do has ramifications.\n\n> c) implicit type conversions running wild\n\nNo.\n\n> d) intervals not implemented per spec\n\n? Why would you say this?\n\n> (spec == SQL). Lots of fun projects here... ;-)\n\nSQL == foolishness, sometimes. Especially when it comes to date/time\ndefinitions and arithmetic. But that does not mean that there are things\nwhich could be better, just that a blind conformance to the SQL standard\nin this area will fundamentally damage our capabilities, so keep that in\nmind.\n\nWhat issue are you specifically addressing? It is clear that we do not\nall have the same understanding of the age() function, but is that a\npart of your statements above? Or not??\n\nPlease be specific about what you think needs changing, and why. And\nI'll actually be able to pay attention after the 7.1 release ;)\n\n - Thomas\n",
"msg_date": "Thu, 12 Apr 2001 16:07:10 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> > b) date subtraction not implemented at all (it does date - integer)\n>\n> No, and changing what it *does* do has ramifications.\n\nOkay, I see there's 'date - date' after all. But 'date - date' should\nstill return some kind of time interval, not an integer. Of course\nchanges have ramifications, but standing still does, too.\n\n> > d) intervals not implemented per spec\n>\n> ? Why would you say this?\n\nBecause it's a fact. SQL has year to month intervals and day to second\nintervals, no all-encompassing interval. It sounds stupid at first, but a\nlot of weird little definitional problems would go away if we had support\nfor these. Months and years are \"unstable\" units when used together with\ndays, minutes, etc. but they are a consistent system when only used among\nthemselves. The current implementation already reflects this by making\n\"time\" and \"month\" different struct members, so I guess what lacks a\nlittle are user-accessible means of controlling which gets used.\n\nThe difference between age() and timestamp subtraction is in fact that\nthe former returns a year to month interval and the other a day to second\ninterval. But in the current implementation the only effective difference\nis that the interval is displayed differently, which is a confusing concept\nbecause data values are not defined by their representation but by their\nvalue.\n\n> Please be specific about what you think needs changing, and why. And\n> I'll actually be able to pay attention after the 7.1 release ;)\n\nFix^H^H^HEnhancing the interval type up to spec would really go a long way\nI think. We could redefine Interval like\n\nstruct Interval {\n  bool is_month_to_year;\n  union {\n    double seconds;\n    struct {\n      int32 months;\n      int32 years;\n    } my;\n  } u;\n};\n\nThis would make it mostly compatible to its current behaviour.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 12 Apr 2001 19:26:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Typical date/time arithmetic resolves to an absolute time or interval.\n> In those cases, *qualitative* quantities such as years and months are\n> resolved to a specific absolute interval at the time of calculation.\n>\n> The age() functions *preserve* the qualitative fields year and month. So\n> you see the difference in results:\n>\n> lockhart=# select age('today', '1957-06-13');\n> -------------------------\n> 43 years 9 mons 28 days\n>\n> lockhart=# select timestamp 'today' - timestamp '1957-06-13';\n> ------------\n> 16008 days\n\nPerhaps age() could be documented along the lines of:\n\nCalculates the difference between the arguments and expresses the\nresulting interval in terms of years, months and possibly smaller units.\nOrdinary timestamp subtraction is different from age() because it\nexpresses its result only in days and smaller units.\n\nPlus a contrasting example, such as the above.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 12 Apr 2001 19:32:40 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: age() function documentation"
}
] |
[
{
"msg_contents": "\nHi,\nJust 2 questions.\n\n1. Does kerberos work with PGSql 7.1rc4? I tried it today without success...\nAnyone got it working? With MIT? With win2k ADS?\n\n2. When will 7.1 be released. (apporx :-) )\n\nThanks\n-jec\n\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \nJean-Eric Cuendet\nLinkvest SA\nAv des Baumettes 19, 1020 Renens Switzerland\nTel +41 21 632 9043 Fax +41 21 632 9090\nhttp://www.linkvest.com E-mail: jean-eric.cuendet@linkvest.com\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n\n\n",
"msg_date": "Wed, 11 Apr 2001 17:49:00 +0200",
"msg_from": "Jean-Eric Cuendet <Jean-Eric.Cuendet@linkvest.com>",
"msg_from_op": true,
"msg_subject": "Postgres & Kerberos"
}
] |
[
{
"msg_contents": "Roberto Mello suggested I post my problem here. He suggested Tom Lane\nmight take a look...\n\nI dumped an 7.0.3 database and restored that to rc2, which went fine after\na bit of reordering help (It was an OpenACS table set).\n\nNow when I dump the same database with rc2 (or 4) I get a\ndifferent set of ordering problems.\nSome five functions are used in views before their definitions. In the\noriginal (7.0) dump they were in the correct order, but rc2/4 (the only\nones I tried) got it wrong. The original OIDs for the\nfunctions in the 7.1 dump are lower than those of the views. I do not know\nwhat is wrong. I can reproduce the results on another box. I have a copy\nof the relevant dumps (both the initial 7.0.3 >> 7.1rc2 and the rc4 >> rc4\ndump), anyone interested may have them for testing.\n\nI compiled on a fairly clean RH6.2/AMD-K6/256M box with nothing more than\n./configure; make; make install (so that it ended up in /usr/local/pgsql)\nThe box has the 7.0.3 RPMs installed, 7.1 runs on port 5433 and has a\nseparate postmaster account (postgr71).\n\nRegards,\nPascal Scheffers.\n\n\n-----BEGIN GEEK CODE BLOCK-----\nVersion: 3.12\nGIT$/MU/ED/S/P d(++) s+:+ a?(-) C++ UL++++ P+(--) L+++ E(++) W+++ N++ o? K\nw++$(---) O- M-- V-- PS@ PE Y+(-) PGP(++) t+@ 5++ X- R tv b++ DI@ D? G e++\nh---(-/----) y+++\n------END GEEK CODE BLOCK------\nco-hosting is for sissies. get your own machine out there. NOW!\n\n\n",
"msg_date": "Wed, 11 Apr 2001 18:53:55 +0200 (CEST)",
"msg_from": "Pascal Scheffers <pascal@scheffers.net>",
"msg_from_op": true,
"msg_subject": "pg_dump ordering problem (rc4)"
},
{
"msg_contents": "At 18:53 11/04/01 +0200, Pascal Scheffers wrote:\n> I have a copy\n>of the relevant dumps (both the initial 7.0.3 >> 7.1rc2 and the rc4 >> rc4\n>dump), anyone interested may have them for testing.\n\nYes please.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 12 Apr 2001 03:12:38 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4)"
},
{
"msg_contents": "Pascal Scheffers <pascal@scheffers.net> writes:\n> Some five functions are used in views before their definitions. In the\n> original (7.0) dump they were in the correct order, but rc2/4 (the only\n> ones I tried) got it wrong. The original OIDs for the\n> functions in the 7.1 dump are lower than those of the views. I do not know\n> what is wrong. I can reproduce the results on another box. I have a copy\n> of the relevant dumps (both the initial 7.0.3 >> 7.1rc2 and the rc4 >> rc4\n> dump), anyone interested may have them for testing.\n\nPlease. Philip Warner would likely want to see them too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 14:28:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "\nTom,\n\n> > dump), anyone interested may have them for testing.\n>\n> Please. Philip Warner would likely want to see them too.\nI don't have his email address... but I am quite willing to send it.\n\nPascal.\n\n\n\n",
"msg_date": "Fri, 13 Apr 2001 22:25:46 +0200 (CEST)",
"msg_from": "Pascal Scheffers <pascal@scheffers.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
}
] |
[
{
"msg_contents": ">> Lamar Owen writes:\n>>\n>> > One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n>> > install with --oldpackage, as RPM does a simple strcmp of version\n>> > numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n>> > --oldpackage if you have a 7.1beta RPM already installed.\n\nCouldn't this be fixed (in future releases) with rcX and BetaX? I believe\nr > B.\n\nlen morgan\n\n",
"msg_date": "Wed, 11 Apr 2001 17:08:56 -0500",
"msg_from": "\"Len Morgan\" <len-morgan@crcom.net>",
"msg_from_op": true,
"msg_subject": "Re: RPM upgrade caveats going from a beta version to RC"
},
{
"msg_contents": "\"Len Morgan\" wrote:\n >>> Lamar Owen writes:\n >>>\n >>> > One quick note -- since 'R' < 'b', the RC RPM's must be forced to\n >>> > install with --oldpackage, as RPM does a simple strcmp of version\n >>> > numbers -- 7.1RC3 < 7.1beta1, for instance. Just force it with\n >>> > --oldpackage if you have a 7.1beta RPM already installed.\n >\n >Couldn't this be fixed (in future releases) with rcX and BetaX? I believe\n >r > B.\n\nor, for that matter, by betaX rcX. But we have still gone from rc4 to\nnothing in the just-released 7.1, which is backward as far as packaging\nsystems are concerned.\n\n$ dpkg --compare-versions 7.1 gt 7.1rc4 || echo BSD style does not work!\nBSD style does not work!\n\n\nMarc, _please_ make the next release greater than its betas!\n\nAs far as Debian is concerned, 7.1 will be known as 7.1release. Sigh...\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Be strong, and let your heart take courage, all you \n who hope in the Lord.\" Psalm 31:24 \n\n\n",
"msg_date": "Sat, 14 Apr 2001 09:56:22 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: RPM upgrade caveats going from a beta version to RC "
}
] |
[
{
"msg_contents": "I will be doing an OS upgrade, tomorrow, Thursday, so will be\nunavailable for a while. TODO.detail and my web site will be down at\nthat time as well.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Apr 2001 21:32:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Machine down"
}
] |
[
{
"msg_contents": "\nI was trying to make a minor change today to the gram.y file to make\nPostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\nreduce / reduce conflicts using both of the methods I tried. \n\nHaving fought extensively with Bison before on a SQL oriented language\ntranslation project, I am amazed that you were able to get a grammar as\ncomplex as PostgreSQL to work without major difficulty.\n\nI was wondering about what the sense of the list would be to someday accepting\na rewrite using a hand-coded LL(k) recursive descent parser. Anyone?\n\n - Mark Butler\n",
"msg_date": "Wed, 11 Apr 2001 19:38:26 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Yacc / Bison difficulties"
},
{
"msg_contents": "> \n> I was trying to make a minor change today to the gram.y file to make\n> PostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\n> reduce / reduce conflicts using both of the methods I tried. \n> \n> Having fought extensively with Bison before on a SQL oriented language\n> translation project, I am amazed that you were able to get a grammar as\n> complex as PostgreSQL to work without major difficulty.\n> \n> I was wondering about what the sense of the list would be to someday accepting\n> a rewrite using a hand-coded LL(k) recursive descent parser. Anyone?\n\nInteresting. What advantages would there be?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Apr 2001 23:30:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yacc / Bison difficulties"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> Interesting. What advantages would there be?\n\nAs any one who has ever attempted to build a C++ parser using Yacc or Bison\ncan attest, it is very difficult to get an LALR based parser to correctly\nparse a sophisticated grammar. The advantages of using a hand written\nrecursive descent parser lie in four areas:\n\n1) ease of implementing grammar changes \n2) ease of debugging\n3) ability to handle unusual cases\n4) ability to support context sensitive grammars\n\nContext sensitivity is useful for handling things like embedded programming\nlanguages without having to escape whole procedures as string literals, for\nexample. We could support procedural language plugins without the current\nlimitations on syntax.\n\nAnother nice capability is the ability to enable and disable grammar rules at\nrun time - you could add run time options to disable all non SQL-92 grammar\nrules for application portability testing or emulate Oracle's outer join\nsyntax, as a couple of examples.\n\n - Mark Butler\n",
"msg_date": "Wed, 11 Apr 2001 22:37:56 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: Hand written parsers"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n\n> Bruce Momjian wrote:\n> \n> > Interesting. What advantages would there be?\n> \n> As any one who has ever attempted to build a C++ parser using Yacc or Bison\n> can attest, it is very difficult to get an LALR based parser to correctly\n> parse a sophisticated grammar. The advantages of using a hand written\n> recursive descent parser lie in four areas:\n> \n> 1) ease of implementing grammar changes \n> 2) ease of debugging\n> 3) ability to handle unusual cases\n> 4) ability to support context sensitive grammars\n> \n> Context sensitivity is useful for handling things like embedded programming\n> languages without having to escape whole procedures as string literals, for\n> example. We could support procedural language plugins without the current\n> limitations on syntax.\n> \n> Another nice capability is the ability to enable and disable grammar rules at\n> run time - you could add run time options to disable all non SQL-92 grammar\n> rules for application portability testing or emulate Oracle's outer join\n> syntax, as a couple of examples.\n\nOn the other hand, recursive descent parsers tend to be more ad hoc,\nthey tend to be harder to maintain, and they tend to be less\nefficient.\n\nI believe that yacc based parsers support context sensitivity just as\nwell as recursive descent parsers.\n\nI'm not sure that C++ is a fair example. The problem with parsing C++\nis the result of a convoluted declaration syntax, so much so that some\nconstructs are completely syntactically ambiguous no matter how you\nlook at them and must be decided by semantic rules. It's true that in\nsuch a case recursive descent is easier, because it is easier to mix\nsemantic rules and syntax rules. But the SQL syntax is much simpler;\nis this really a problem with SQL? 
And I note that despite the\ndifficulties, the g++ parser is yacc based.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 462: A mushroom cloud has no silver lining.\n",
"msg_date": "11 Apr 2001 22:44:59 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hand written parsers"
},
{
"msg_contents": "On Wed, Apr 11, 2001 at 10:44:59PM -0700, Ian Lance Taylor wrote:\n> Mark Butler <butlerm@middle.net> writes:\n> > ...\n> > The advantages of using a hand written recursive descent parser lie in\n> > 1) ease of implementing grammar changes \n> > 2) ease of debugging\n> > 3) ability to handle unusual cases\n> > 4) ability to support context sensitive grammars\n> > ...\n> > Another nice capability is the ability to enable and disable grammar\n> > rules at run time ...\n>\n> On the other hand, recursive descent parsers tend to be more ad hoc,\n> they tend to be harder to maintain, and they tend to be less\n> efficient. ... And I note that despite the\n> difficulties, the g++ parser is yacc based.\n\nYacc and yacc-like programs are most useful when the target grammar (or \nyour understanding of it) is not very stable. With Yacc you can make \nsweeping changes much more easily; big changes can be a lot of work in \na hand-coded parser. Once your grammar stabilizes, though, hand coding \ncan provide flexibility that is inconceivable in a parser generator, \nalbeit at some cost in speed and compact description. (I doubt parser \nspeed is an issue for PG.)\n\nG++ has flirted seriously with switching to a recursive-descent parser,\nlargely to be able to offer meaningful error messages and to recover\nbetter from errors, as well as to be able to parse some problematic\nbut conformant (if unlikely) programs.\n\nNote that the choice is not just between Yacc and a hand-coded parser.\nSince Yacc, many more powerful parser generators have been released,\none of which might be just right for PG.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 12 Apr 2001 00:53:56 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: Hand written parsers"
},
{
"msg_contents": "set nomail\n",
"msg_date": "Thu, 12 Apr 2001 11:19:12 -0700",
"msg_from": "\"Howard Williams\" <howieshouse@home.com>",
"msg_from_op": false,
"msg_subject": ""
},
{
"msg_contents": "Mark Butler writes:\n\n> I was trying to make a minor change today to the gram.y file to make\n> PostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\n> reduce / reduce conflicts using both of the methods I tried.\n\nSee attached patch.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/",
"msg_date": "Thu, 12 Apr 2001 20:22:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Yacc / Bison difficulties"
},
{
"msg_contents": "Thanks. I didn't realize the need to move the DOUBLE token from the TokenId to\nthe ColId production. Will this patch be integrated into the head branch?\n\n - Mark Butler\n\nPeter Eisentraut wrote:\n> \n> Mark Butler writes:\n> \n> > I was trying to make a minor change today to the gram.y file to make\n> > PostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\n> > reduce / reduce conflicts using both of the methods I tried.\n> \n> See attached patch.\n",
"msg_date": "Thu, 12 Apr 2001 12:57:46 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: Yacc / Bison difficulties"
},
{
"msg_contents": "Mark Butler writes:\n\n> Thanks. I didn't realize the need to move the DOUBLE token from the TokenId to\n> the ColId production. Will this patch be integrated into the head branch?\n\nNot sure. It's not a standard type, but at least two other RDBMS have it\nand the name does make sense. Any comments?\n\n>\n> - Mark Butler\n>\n> Peter Eisentraut wrote:\n> >\n> > Mark Butler writes:\n> >\n> > > I was trying to make a minor change today to the gram.y file to make\n> > > PostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\n> > > reduce / reduce conflicts using both of the methods I tried.\n> >\n> > See attached patch.\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 13 Apr 2001 00:30:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Yacc / Bison difficulties"
},
{
"msg_contents": "> Mark Butler writes:\n> \n> > Thanks. I didn't realize the need to move the DOUBLE token from the TokenId to\n> > the ColId production. Will this patch be integrated into the head branch?\n> \n> Not sure. It's not a standard type, but at least two other RDBMS have it\n> and the name does make sense. Any comments?\n\nThat's a tough call. We already have some duplicate type symbols, but\nthis is not a standard SQL type. I would see if we can get others to\nsay it is a good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 12 Apr 2001 21:18:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yacc / Bison difficulties"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> That's a tough call. We already have some duplicate type symbols, but\n> this is not a standard SQL type. I would see if we can get others to\n> say it is a good idea.\n\nBut the DOUBLE keyword is already reserved by ANSI for use in the \"DOUBLE\nPRECISION\" type, so a \"DOUBLE\" synonym for it shouldn't make much of a\ndifference. Right?\n\n - Mark Butler\n",
"msg_date": "Thu, 12 Apr 2001 23:21:16 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: DOUBLE synonym for DOUBLE PRECISION"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Mark Butler writes:\n>> Thanks. I didn't realize the need to move the DOUBLE token from the TokenId to\n>> the ColId production. Will this patch be integrated into the head branch?\n\n> Not sure. It's not a standard type, but at least two other RDBMS have it\n> and the name does make sense. Any comments?\n\nSeems like a reasonable change (for 7.2, not now).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 02:35:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yacc / Bison difficulties "
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Yacc and yacc-like programs are most useful when the target grammar (or \n> your understanding of it) is not very stable. With Yacc you can make \n> sweeping changes much more easily; big changes can be a lot of work in \n> a hand-coded parser.\n\nAnd, in fact, this is precisely the killer reason why we will not switch\nto a handwritten parser anytime in the foreseeable future. Postgres'\ngrammar is NOT stable. Compare the gram.y files for any two recent\nreleases. I foresee changes at least as large in upcoming releases,\nbtw, as we implement more of SQL92/99 and drop ancient PostQuel-isms.\n\nI have some interest in proposals to switch to a better parser-generator\ntool than yacc ... but yacc has the advantages of being widely\navailable and widely understood. You'd need a pretty significant\nimprovement over yacc to make it worth switching.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 03:12:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hand written parsers "
},
{
"msg_contents": "On Fri, Apr 13, 2001 at 03:12:57AM -0400, Tom Lane wrote:\n> I have some interest in proposals to switch to a better parser-generator\n> tool than yacc ...\n\nThere are tools to produce efficient top-down parsers as well. Those\ndoing such parsers \"by hand\" may be interested in looking at PCCTS\n(http://www.polhode.com/pccts.html) or ANTLR (http://www.antlr.org/).\n-- \nBruce Guenter <bruceg@em.ca> http://em.ca/~bruceg/",
"msg_date": "Sun, 15 Apr 2001 07:35:44 -0600",
"msg_from": "Bruce Guenter <bruceg@em.ca>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hand written parsers"
}
] |
[
{
"msg_contents": "\n> > Thank you. Is an \"exception condition\" necessarily an error, or\n> > is a warning also an exception condition ?\n> \n> A warning/notice is called a \"completion condition\".\n> \n> > Because other db's only raise a warning. Of course we don't want to\n> > copy that behavior if they are not conformant. See above question.\n> \n> Someone said Oracle raises an error.\n\nYes, I am very sorry.\n\n> Informix seems to be the only other db that truncates silently.\n\nRaises a warning instead of error. Would need to check Sybase and DB2, but ...\n\n> I think Oracle wins here...\n\nYes, good. Do we want this in 7.1.0 ? Seems, yes :-(\n\nAndreas\n",
"msg_date": "Thu, 12 Apr 2001 10:24:01 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: Truncation of char, varchar types"
},
{
"msg_contents": "> > Someone said Oracle raises an error.\n> > Informix seems to be the only other db that truncates silently.\n> Raises a warning instead of error. Would need to check Sybase and DB2, but ...\n> Yes, good. Do we want this in 7.1.0 ? Seems, yes :-(\n\nNo, pretty sure this will not fly for the 7.1.x cycle, no matter what we\nend up deciding is the best approach (with SQL9x getting more votes than\nmost other proposals, but The Right Thing having the most votes of all\n;)\n\nWe are at the tail end of the beta cycle, and a change in behavior which\nis this fundamental needs more time and discussion. All imho of\ncourse...\n\n - Thomas\n",
"msg_date": "Thu, 12 Apr 2001 12:43:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Truncation of char, varchar types"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> Yes, good. Do we want this in 7.1.0 ? Seems, yes :-(\n\nNo way. I'm just giving some food for thought while development is slow.\n\nIn any case there seems to be support for the proposed feature. I'm just\nwaiting for someone to complain that he relies on the existing behaviour,\nbut I doubt that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 12 Apr 2001 19:43:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: Truncation of char, varchar types"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n\n> Yes, good. Do we want this in 7.1.0 ? Seems, yes :-(\n\nI agree this change is very good idea, but 7.2 is probably a better target.\n\n - Mark Butler\n",
"msg_date": "Thu, 12 Apr 2001 13:35:11 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: Truncation of char, varchar types"
}
] |
[
{
"msg_contents": "\n\n---------- Forwarded Message ----------\nSubject: Re: Yacc / Bison difficulties\nDate: Thu, 12 Apr 2001 13:01:40 +0200\nFrom: Robert Schrem <robert.schrem@wiredminds.de>\nTo: Mark Butler <butlerm@middle.net>\n\n\nOn Thursday 12 April 2001 03:38, you wrote:\n> I was trying to make a minor change today to the gram.y file to make\n> PostgreSQL recognize \"DOUBLE\" as a data type the way DB2 does. I ran into\n> reduce / reduce conflicts using both of the methods I tried.\n>\n> Having fought extensively with Bison before on a SQL oriented language\n> translation project, I am amazed that you were able to get a grammar as\n> complex as PostgreSQL to work without major difficulty.\n>\n> I was wondering about what the sense of the list would be to someday\n> accepting a rewrite using a hand-coded LL(k) recursive descent parser.\n> Anyone?\n\nI think writing a parser by hand is inadequate these days.\nLook at http://www.antlr.org for probably the best LL(k) parser\ngenerator software you can think of. It's much better than\nbison (and hand written parsers anyway):\n\n1. Easier to maintain (much easier)\n2. Clever syntax error detection for meanigfull error messages\n3. Less testing than for a hand written parser\n4. A hand written parser that is usually less efficient\n\nThe only drawback for the porstgresql project is, that\nANTLR only generates C++ and JAVA source code. 
But there\nis also an older Version of ANTLR that can also generate C\nsources.\n\nIf you are interested in this area have a look at the following\npages for even more tools for compiler/interpreter construction:\n\nhttp://www.compilerconstruction.org/\nhttp://www.idiom.com/free-compilers/\n\nIf you researche on all parser generators on these web sites\n(and recognize how much brainwork already done in this field\nof software development) you will probably agree, that it's\nadviceable, to not write parsers by hand anymore these\ndays (my two euro :) ...\n\nRobert Schrem\n\n> - Mark Butler\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\n\nRobert Schrem\nSoftware Development\nWiredMinds Informationssysteme GmbH\nAm Wilhelmsplatz 11, 70182 Stuttgart\ne: Robert.Schrem@WiredMinds.de\nv: ++49 +711 4 90 48 212\nf: ++49 +711 4 90 48 111\nw: http://www.WiredMinds.de\n\nThe nice thing about standards is that there are so many of them\nto choose from. -- Andrew S. Tanenbaum\n\n-------------------------------------------------------\n\n-- \n\nRobert Schrem\nSoftware Development\nWiredMinds Informationssysteme GmbH\nAm Wilhelmsplatz 11, 70182 Stuttgart\ne: Robert.Schrem@WiredMinds.de\nv: ++49 +711 4 90 48 212\nf: ++49 +711 4 90 48 111\nw: http://www.WiredMinds.de\n\nThe nice thing about standards is that there are so many of them \nto choose from. -- Andrew S. Tanenbaum\n",
"msg_date": "Thu, 12 Apr 2001 18:29:43 +0200",
"msg_from": "Robert Schrem <robert.schrem@WiredMinds.de>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: Yacc / Bison difficulties"
}
] |
[
{
"msg_contents": "Is there a Web site or some info somewhere that tells you how to\nestimate the size of your database.\n\nI know what my schema is and how many records will be in each table (but\nnon have been inserted yet). How can I project how much disk space I\nwill need for the database?\n\nThanks!\nMitesh\n",
"msg_date": "Thu, 12 Apr 2001 15:10:09 -0700",
"msg_from": "\"Mitesh Shah\" <Mitesh.Shah@bangnetworks.com>",
"msg_from_op": true,
"msg_subject": "Estimating Size of Database"
},
{
"msg_contents": "In the FAQ..\n\nhttp://www.postgresql.org/docs/faq-english.html#4.7\n\nGood luck!\n\n-Mitch\nSoftware development : \nYou can have it cheap, fast or working. Choose two.\n----- Original Message ----- \nFrom: \"Mitesh Shah\" <Mitesh.Shah@bangnetworks.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Thursday, April 12, 2001 6:10 PM\nSubject: Estimating Size of Database\n\n\n> Is there a Web site or some info somewhere that tells you how to\n> estimate the size of your database.\n> \n> I know what my schema is and how many records will be in each table (but\n> non have been inserted yet). How can I project how much disk space I\n> will need for the database?\n> \n> Thanks!\n> Mitesh\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n",
"msg_date": "Thu, 12 Apr 2001 18:15:39 -0400",
"msg_from": "\"Mitch Vincent\" <mitch@venux.net>",
"msg_from_op": false,
"msg_subject": "Re: Estimating Size of Database"
}
] |
[
{
"msg_contents": "Thanks!\n\nOne follow up question. In the example given, it says there are 36\nbytes for each row header and 4 bytes for each pointer to a tuple. I'm\nnot sure where these numbers (36 and 4) are coming from. Are they\nstandard for *every* table? If my table has more than just two\nintegers, for example, will each row header be more than 36 bytes?\n\nThanks in advance.\nMitesh\n\n-----Original Message-----\nFrom: Mitch Vincent [mailto:mitch@venux.net]\nSent: Thursday, April 12, 2001 3:16 PM\nTo: Mitesh Shah; pgsql-hackers@postgresql.org\nSubject: Re: Estimating Size of Database\n\n\nIn the FAQ..\n\nhttp://www.postgresql.org/docs/faq-english.html#4.7\n\nGood luck!\n\n-Mitch\nSoftware development : \nYou can have it cheap, fast or working. Choose two.\n----- Original Message ----- \nFrom: \"Mitesh Shah\" <Mitesh.Shah@bangnetworks.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Thursday, April 12, 2001 6:10 PM\nSubject: Estimating Size of Database\n\n\n> Is there a Web site or some info somewhere that tells you how to\n> estimate the size of your database.\n> \n> I know what my schema is and how many records will be in each table\n(but\n> non have been inserted yet). How can I project how much disk space I\n> will need for the database?\n> \n> Thanks!\n> Mitesh\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n",
"msg_date": "Thu, 12 Apr 2001 15:26:06 -0700",
"msg_from": "\"Mitesh Shah\" <Mitesh.Shah@bangnetworks.com>",
"msg_from_op": true,
"msg_subject": "RE: Estimating Size of Database"
},
{
"msg_contents": "Mitesh Shah writes:\n\n> One follow up question. In the example given, it says there are 36\n> bytes for each row header and 4 bytes for each pointer to a tuple. I'm\n> not sure where these numbers (36 and 4) are coming from. Are they\n> standard for *every* table? If my table has more than just two\n> integers, for example, will each row header be more than 36 bytes?\n\nMore or less. Quoth the source:\n\ntypedef struct HeapTupleHeaderData\n{\n Oid t_oid; /* OID of this tuple -- 4 bytes */\n\n CommandId t_cmin; /* insert CID stamp -- 4 bytes each */\n CommandId t_cmax; /* delete CommandId stamp */\n\n TransactionId t_xmin; /* insert XID stamp -- 4 bytes each */\n TransactionId t_xmax; /* delete XID stamp */\n\n ItemPointerData t_ctid; /* current TID of this or newer tuple */\n int16 t_natts; /* number of attributes */\n\n uint16 t_infomask; /* various infos */\n\n uint8 t_hoff; /* sizeof() tuple header */\n\n /* ^ - 31 bytes - ^ */\n\n bits8 t_bits[MinHeapTupleBitmapSize / 8];\n /* bit map of NULLs */\n\n /* MORE DATA FOLLOWS AT END OF STRUCT */\n} HeapTupleHeaderData;\n\nMost of the fields are for maintaining information required for\ntransaction rollback and multi-version concurrency control, in case you\ncan't quite decode it. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 13 Apr 2001 01:04:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RE: Estimating Size of Database"
}
] |
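The row-size arithmetic discussed in the thread above (roughly 36 bytes of per-row header overhead plus a 4-byte item pointer, on 8 KB pages) can be sketched as a quick estimator. These figures are the approximate ones from the FAQ example and vary by PostgreSQL version and column layout, so treat this as an illustration rather than an exact formula.

```python
# Rough size estimate for a table, following the arithmetic discussed in
# the thread above. Figures are approximate and version-dependent: ~36
# bytes of tuple header/overhead per row, a 4-byte item pointer per row
# in the page header, and 8 KB pages (the assumed default BLCKSZ).

PAGE_SIZE = 8192          # assumed default BLCKSZ
TUPLE_OVERHEAD = 36       # approximate per-row header overhead
ITEM_POINTER = 4          # per-row line pointer in the page header

def estimate_table_size(n_rows, data_bytes_per_row):
    """Return an estimated on-disk size in bytes (heap only, no indexes)."""
    per_row = TUPLE_OVERHEAD + ITEM_POINTER + data_bytes_per_row
    rows_per_page = PAGE_SIZE // per_row
    pages = -(-n_rows // rows_per_page)   # ceiling division
    return pages * PAGE_SIZE

# e.g. a million rows of two int4 columns (8 data bytes per row)
size = estimate_table_size(1_000_000, 8)
```

With these figures, 48 bytes per row gives 170 rows per 8 KB page, so a million two-integer rows land around 46 MB of heap.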
[
{
"msg_contents": "\nI was looking at how hard it would be to support altering column types and it\nseems to me that it would be trivial to support changing nullability,\nincreasing the maximum length of the VARCHAR data type and increasing the\nprecision or scale of the DECIMAL / NUMERIC data type. \n\nOracle allows you to update a column to null and then modify its data type to\nany other type. This is easy because it stores all columns in a variable\nlength format and a null looks the same regardless of type.\n\nI understand that with the current heap tuple format described in\nbackend/access/common.c that changing the type of any fixed length attribute\nrequires updating every row. \n\nSurely if we have an write exclusive table lock we can rewrite tuples in place\nrather than creating new versions with its corresponding 2x space requirement.\n\nWe could presumably do the following to change a column data type:\n\nPreconditions:\n 1. Type conversion is possible from old type to new type and either\n a) old type is unconditionally convertible (e.g. length/precision \n increasing)\n b) read locked scan of table reveals that all values are convertible\n 2. Exclusive write lock on table\n\nif(new and old types are variable length and are binary compatible)\n {\n 1. Change type in catalog\n 2. Done\n }\nelse\n {\n 1. Visit all current tuples and rewrite in place, converting attribute value\nto new \n type, and shifting all other attributes and null bitmask appropriately\n 2. Change type in catalog\n 3. Done\n }\n \nDoes this sound reasonable? Also, is anyone working on ALTER TABLE DROP\nCOLUMN right now?\n\nSpeaking of which, couldn't we make it so that UPDATES and DELETES running\nunder an exclusive table lock do an inline vacuum?\n\n- Mark Butler\n",
"msg_date": "Thu, 12 Apr 2001 20:48:41 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE MODIFY COLUMN"
},
{
"msg_contents": "Mark Butler wrote:\n> \n> I was looking at how hard it would be to support altering column types and it\n> seems to me that it would be trivial to support changing nullability,\n\nYes. The problem is how to formulate 'DROP CONSTRAINT' feature. \n\n> increasing the maximum length of the VARCHAR data type and increasing the\n> precision or scale of the DECIMAL / NUMERIC data type.\n> \n\nYes. The problem is how PostgreSQL could recognize the fact.\n\n[snip]\n\n> I understand that with the current heap tuple format described in\n> backend/access/common.c that changing the type of any fixed length attribute\n> requires updating every row.\n> \n> Surely if we have an write exclusive table lock we can rewrite tuples in place\n> rather than creating new versions with its corresponding 2x space requirement.\n> \n\nPostgreSQL has a no overwrite storage manager, so this\nseems to have little advantage. We now have a mechanism\nto replace existent relation files safely. We could \navoid running VACUUM, multiple version of tuples in\na file etc ...\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 13 Apr 2001 14:52:37 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE MODIFY COLUMN"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> Surely if we have an write exclusive table lock we can rewrite tuples\n> in place rather than creating new versions with its corresponding 2x\n> space requirement.\n\nNyet. Consider transaction rollback.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 02:36:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE MODIFY COLUMN "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Mark Butler <butlerm@middle.net> writes:\n> > Surely if we have an write exclusive table lock we can rewrite tuples\n> > in place rather than creating new versions with its corresponding 2x\n> > space requirement.\n> \n> Nyet. Consider transaction rollback.\n\nWell, the first thing to consider would be to make this type of DDL operation\nun-abortable. If the database goes down while the table modification is in\nprogress, the recovery process could continue the operation to completion\nbefore releasing the table for general access.\n\nThe problem with the standard alternatives is that they waste space and are\nslow:\n\nAlt 1. create new version of tuples in new format like DROP COLUMN proposal\nAlt 2. rename column; add new column; copy column; drop (hide) old column \nAlt 3. rename indices; rename table; copy table; recreate indices; \n\nNow this probably only makes a difference in a data warehouse environment,\nwhere the speed\nof mass load / update operations is much more important than being able to\nroll them back.\n\nI suppose there are two really radical alternatives as well:\n\nRadical Alt 1: Use format versioning to allow multiple row formats to\nco-exist,\n lazy update to latest format\n\nRadical Alt 2: Delta compress different versions of the same row on the same\npage\n\nI can see that the conventional alternatives make sense for now, however.\n\n- Mark Butler\n",
"msg_date": "Fri, 13 Apr 2001 02:00:21 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE MODIFY COLUMN"
}
] |
[
{
"msg_contents": "I noticed the storage format for the numeric type is rather inefficient: \n\ntypedef struct NumericData\n{\n int32 varlen; /* Variable size */\n int16 n_weight; /* Weight of 1st digit */\n uint16 n_rscale; /* Result scale */\n uint16 n_sign_dscale; /* Sign + display scale */\n unsigned char n_data[1]; /* Digit data (2 decimal digits/byte) */\n} NumericData;\ntypedef NumericData *Numeric;\n\nOracle uses a similar variable length format for all numeric types, and they\ndocument its storage requirement as 2 + (sig digits/2) bytes. One byte is\nused for the column length, one byte for the exponent, a variable number of\nbytes for the significant digits. \n\nA zero value uses two bytes total in Oracle, where in the current version of\nPostgreSQL it uses ten bytes. Given the pending demise of the money type, the\nremaining alternative is rather wasteful for use in large financial\napplications.\n\nIs there a reason why varlen has to be an int32? uint8 would be more than\nenough. The other three fields could be int8 as well. I do not understand\nwhy we need four header fields - a much more efficient decimal type could be\nimplemented as follows:\n\ntypedef struct DecimalData\n{\n int8 varlen; /* variable size */\n int8 d_sign_exponent; /* 1 bit sign, 7 bit exponent */\n int8 d_mantissa[1]; /* variable precision binary integer mantissa */\n};\n\nValue represented is (-1 ^ sign)*(mantissa)*(10 ^ exponent).\n\nThis would be more space efficient than Oracle and would support precisions up\nto DECIMAL(63). Having a reasonable maximum precision would allow a fixed\nlength internal representation which makes processing *much* faster by using\nbinary arithmetic and eliminating the necessity to palloc() buffers for every\ntemporary result. \n\n(Aside: Doesn't the current numeric type use up memory in a hurry in a large\nsum(numeric_column) query? - Or are all those digitbuf_free()'s actually being\ncleaned up? And shouldn't the type operator calling convention be changed to\npass a result buffer so these palloc()'s could be mostly avoided? )\n\nAs an even faster, lower max precision alternative:\n\ntypedef struct FastDecimalData \n{\n int64 fd_mantissa; \n int8 fd_sign;\n int8 fd_exponent;\n};\n\nValue represented is (-1 ^ sign)*(mantissa)*(10 ^ exponent).\n\nThis would support precisions up to DECIMAL(18). Intermediate results could\nbe stored using a 128 bit format to avoid loss of precision.\n\nAny comments?\n\n - Mark Butler\n",
"msg_date": "Thu, 12 Apr 2001 23:13:16 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "NUMERIC type efficiency problem"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> I noticed the storage format for the numeric type is rather inefficient: \n> ...\n> A zero value uses two bytes total in Oracle, where in the current version of\n> PostgreSQL it uses ten bytes.\n\nYawn ... given row overhead, alignment padding, etc, this is not nearly\nas big a deal as you make it ...\n\n> Is there a reason why varlen has to be an int32?\n\nYes. That's what all varlena types use.\n\n> The other three fields could be int8 as well.\n\nIf we were willing to restrict numeric values to a much tighter range,\nperhaps so. I rather like a numeric type that can handle ranges wider\nthan double precision, however.\n\n> Having a reasonable maximum precision would allow a fixed\n> length internal representation which make processing *much* faster* by using\n> binary arithmetic and eliminating the necessity to palloc() buffers for every\n> temporary result. \n\nI don't have any objection in principle to an additional datatype \"small\nnumeric\", or some such name, with a different representation. I do\nobject to emasculating the type we have.\n\nA more significant point is that you have presented no evidence to back\nup your claim that this would be materially faster than the existing\ntype. I doubt that the extra pallocs are all that expensive. (I think\nit'd be far more helpful to reimplement numeric using base-10000\nrepresentation --- four decimal digits per int16 --- and then eliminate\nthe distinction between storage format and computation format. See past\ndiscussions in the pghackers archives.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 02:57:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC type efficiency problem "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Yawn ... given row overhead, alignment padding, etc, this is not nearly\n> as big a deal as you make it ...\n\nFor a table with ten decimal columns with an average of 5 significant digits\napiece, each row could be reduced from ~170 bytes to about ~90 bytes, which\ncould be rather significant, I would think. (In Oracle such a row takes ~55\nbytes.)\n\nBy the way, is alignment padding really a good use of disk space? Surely\nstorage efficiency trumps minor CPU overhead in any I/O bound database. Or\nare there other considerations? (like processor portability perhaps?)\n \n> I don't have any objection in principle to an additional datatype \"small\n> numeric\", or some such name, with a different representation. I do\n> object to emasculating the type we have.\n\nSurely we can't get rid of it, but it might be a good idea to make NUMERIC(p)\nmap to multiple representations, given that it is a ANSI standard data-type.\n\nI suggest using a system like the FLOAT(p) -> float4 / float8 mapping. \nColumns declared with precisions higher than 16 or so could map to the current\nunrestricted representation, and columns with more typical precisions could\nmap to a more efficient fixed length representation.\n \n> A more significant point is that you have presented no evidence to back\n> up your claim that this would be materially faster than the existing\n> type. I doubt that the extra pallocs are all that expensive. (I think\n> it'd be far more helpful to reimplement numeric using base-10000\n> representation --- four decimal digits per int16 --- and then eliminate\n> the distinction between storage format and computation format. See past\n> discussions in the pghackers archives.)\n\nThat sounds like it would help a lot. I certainly don't have any hard\nevidence yet. Thanks for the pointer.\n\n- Mark Butler\n",
"msg_date": "Fri, 13 Apr 2001 03:21:24 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: NUMERIC type efficiency problem"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> By the way, is alignment padding really a good use of disk space? Surely\n> storage efficiency trumps minor CPU overhead in any I/O bound database.\n\nWeren't you just complaining about excess palloc's ;-) ? Seriously,\nI have no idea about the costs/benefits of aligning data on disk.\nThat decision was made way back in the Berkeley days, and hasn't been\nquestioned since then AFAIK. Feel free to experiment if you are\ninterested.\n\n> I suggest using a system like the FLOAT(p) -> float4 / float8 mapping.\n> Columns declared with precisions higher than 16 or so could map to the\n> current unrestricted representation, and columns with more typical\n> precisions could map to a more efficient fixed length representation.\n\nGiven that the \"more efficient representation\" would only be internal to\ncalculation subroutines, it seems easier to exploit preallocation at\nruntime. This is already done in a number of places in Postgres.\nIt'd look something like\n\n\t{\n\t\tdigit\t*tmp;\n\t\tdigit\t tmpbuf[MAX_FIXED_DIGITS];\n\n\t\tif (digits_needed > MAX_FIXED_DIGITS)\n\t\t\ttmp = palloc(...);\n\t\telse\n\t\t\ttmp = tmpbuf;\n\n\t\t// use tmp here\n\n\t\tif (tmp != tmpbuf)\n\t\t\tpfree(tmp);\n\t}\n\nUgly, but most of the ugliness could be hidden inside a couple of\nmacros.\n\nAgain, though, I wouldn't bother with this until I had some profiles\nproving that the palloc overhead is worth worrying about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 11:26:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC type efficiency problem "
},
{
"msg_contents": "Tom Lane wrote:\n\n> A more significant point is that you have presented no evidence to back\n> up your claim that this would be materially faster than the existing\n> type. I doubt that the extra pallocs are all that expensive. (I think\n> it'd be far more helpful to reimplement numeric using base-10000\n> representation --- four decimal digits per int16 --- and then eliminate\n> the distinction between storage format and computation format. See past\n> discussions in the pghackers archives.)\n\n\nI did several tests with functions designed to sum the number 12345 a million\ntimes. The results are as follows (Pentium II 450, Redhat 6.2):\n\nPostgres PL/PGSQL original numeric: 14.8 seconds\nPostgres PL/PGSQL modified numeric: 11.0 seconds \nPostgres PL/PGSQL float8: 10.7 seconds\nGNU AWK: 2.5 seconds\nOracle PL/SQL number: 2.0 seconds\n\nThe modified Postgres numeric type is the original source code modified to use\na 32 digit NumericVar attribute digit buffer that eliminates palloc()/pfree()\ncalls when ndigits < 32.\n\nSurely those are performance differences worth considering...\n\n- Mark Butler\n\n\nNote: The functions are as follows, all called with 12345 as a parameter,\nexcept for the awk program, which has it hard coded:\n\nPostgreSQL\n==========\n\n\ncreate function test_f1(float8) returns float8 as '\ndeclare\n i integer;\n val float8;\nbegin\n \n val := 0;\n \n for i in 1 .. 1000000 loop\n val := val + $1;\n end loop;\n\n return val;\nend;'\nlanguage 'plpgsql';\n\ncreate function test_f2(numeric) returns numeric as '\ndeclare\n i integer;\n val numeric;\nbegin\n \n val := 0;\n \n for i in 1 .. 1000000 loop\n val := val + $1;\n end loop;\n\n return val;\nend;'\nlanguage 'plpgsql';\n\n\nAwk\n===\n\nBEGIN {\n val = 0;\n p = 12345;\n for(i = 1; i <= 1000000; i++)\n { \n val = val + p;\n }\n printf(\"%20f\\n\", val);\n}\n\n\nOracle\n======\n\ncreate or replace function test_f2(p number) return number is\n i number;\n val number;\nbegin\n \n val := 0;\n \n for i in 1 .. 1000000 loop\n val := val + p;\n end loop;\n\n return val;\nend;\n/\n",
"msg_date": "Fri, 13 Apr 2001 15:16:58 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "NUMERIC type benchmarks"
},
{
"msg_contents": "Mark Butler wrote:\n\n> I did several tests with functions designed to sum the number 12345 a million\n> times. The results are as follows (Pentium II 450, Redhat 6.2):\n> \n> Postgres PL/PGSQL original numeric: 14.8 seconds\n> Postgres PL/PGSQL modified numeric: 11.0 seconds\n> Postgres PL/PGSQL float8: 10.7 seconds\n> GNU AWK: 2.5 seconds\n> Oracle PL/SQL number: 2.0 seconds\n\nI have a new result:\n\n Postgres PL/PGSQL integer: 7.5 seconds\n\nI do not know what to attribute the large difference between float8 and int to\nother than palloc() overhead in the calling convention for float8. \nComments?\n\n- Mark Butler\n",
"msg_date": "Fri, 13 Apr 2001 17:47:12 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: New benchmark result"
},
{
"msg_contents": "On Friday, 13 April 2001 23:16, you wrote:\n> Tom Lane wrote:\n> > A more significant point is that you have presented no evidence to back\n> > up your claim that this would be materially faster than the existing\n> > type. I doubt that the extra pallocs are all that expensive. (I think\n> > it'd be far more helpful to reimplement numeric using base-10000\n> > representation --- four decimal digits per int16 --- and then eliminate\n> > the distinction between storage format and computation format. See past\n> > discussions in the pghackers archives.)\n>\n> I did several tests with functions designed to sum the number 12345 a\n> million times. The results are as follows (Pentium II 450, Redhat 6.2):\n>\n> Postgres PL/PGSQL original numeric: 14.8 seconds\n> Postgres PL/PGSQL modified numeric: 11.0 seconds\n> Postgres PL/PGSQL float8: 10.7 seconds\n> GNU AWK: 2.5 seconds\n> Oracle PL/SQL number: 2.0 seconds\n>\n> The modified Postgres numeric type is the original source code modified to\n> use a 32 digit NumericVar attribute digit buffer that eliminates\n> palloc()/pfree() calls when ndigits < 32.\n>\n> Surely those are performance differences worth considering...\n\nI tested that on a similar configuration (P-III 450) and got the same \nresults. When the addition is removed from the loop and replaced with a \nsimple assignment, the total execution time goes down to ~6.5 seconds. That \nmeans that the modified numeric is nearly twice as fast, sure worth \nconsidering that.\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n",
"msg_date": "Sun, 15 Apr 2001 10:42:19 +0200",
"msg_from": "Mario Weilguni <mweilguni@sime.com>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC type benchmarks"
},
{
"msg_contents": "Mario Weilguni wrote:\n\n> I tested that on a similar configuration (P-III 450) and got the same\n> results. When the addition is removed from the loop and replaced with a\n> simple assignment, the total execution time goes down to ~6.5 seconds. That\n> means that the modified numeric is nearly twice as fast, sure worth\n> considering that.\n\nI am embarrassed to admit I had an undeleted overloaded function that caused\nme to time the wrong function. The correct numbers should be:\n\nPostgres PL/PGSQL original numeric: 14.8 seconds\nPostgres PL/PGSQL modified numeric: 14.0 seconds\nPostgres PL/PGSQL float8: 10.7 seconds\nGNU AWK: 2.5 seconds\nOracle PL/SQL number: 2.0 seconds\n\nThis means that Tom Lane was absolutely right - for the current numeric type\nimplementation, palloc() overhead is not a dominant concern. A serious\nsolution needs to change the internal format to use a larger base, as Tom\nsuggested.\n\n- Mark Butler\n",
"msg_date": "Sun, 15 Apr 2001 18:26:43 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: NUMERIC type benchmarks - CORRECTED"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> ... The correct numbers should be:\n\n> Postgres PL/PGSQL original numeric: 14.8 seconds\n> Postgres PL/PGSQL modified numeric: 14.0 seconds\n> Postgres PL/PGSQL float8: 10.7 seconds\n> GNU AWK: 2.5 seconds\n> Oracle PL/SQL number: 2.0 seconds\n\n> This means that Tom Lane was absolutely right - for the current numeric type\n> implementation, palloc() overhead is not a dominant concern. A serious\n> solution needs to change the internal format to use a larger base, as Tom\n> suggested.\n\nWhat do you get if you use int4 in PL/PGSQL? The above numbers look to\nme like the real problem may be PL/PGSQL interpretation overhead, and\nnot the datatype at all...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Apr 2001 21:28:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NUMERIC type benchmarks - CORRECTED "
}
] |
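The base-10000 representation Tom Lane recommends (four decimal digits packed into each 16-bit "digit") can be sketched minimally as below. This illustrates only the encoding and carry arithmetic for non-negative integers; the real NumericVar would also have to track weight, display scale, and sign.

```python
# Sketch of a base-10000 numeric representation: four decimal digits per
# 16-bit "digit", little-endian. Illustration only -- no weight/scale/sign
# handling as the real implementation would need.

BASE = 10000

def to_digits(n):
    """Little-endian list of base-10000 digits; each fits in an int16."""
    digits = []
    while True:
        digits.append(n % BASE)
        n //= BASE
        if n == 0:
            return digits

def add_digits(a, b):
    """Schoolbook addition on base-10000 digit lists, with carry."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)
        carry = s // BASE
    if carry:
        result.append(carry)
    return result

def to_int(digits):
    """Decode a little-endian base-10000 digit list back to an int."""
    return sum(d * BASE**i for i, d in enumerate(digits))
```

The point of the encoding is that storage and computation share one format: 12345 becomes the two int16 digits [2345, 1], and addition carries at 10000 instead of once per decimal digit.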
[
{
"msg_contents": "=> create view testview as select relname, 'Constant'::text from pg_class;\n\n=> \\d testview\n View \"testview\"\n Attribute | Type | Modifier\n-----------+------+----------\n relname | name |\n ?column? | text |\nView definition: SELECT DISTINCT pg_class.relname, 'Constant'::text FROM\npg_class ORDER BY pg_class.relname, 'Constant'::text;\n\nNote how the order by clause is not valid SQL. You get\n\nERROR: Non-integer constant in ORDER BY\n\nI suppose the ORDER BY clause appears because of some weird query parse\ntree hackery and is not easy to get rid of. Maybe using column numbers\ninstead of spelling out the select list again would work?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 13 Apr 2001 13:32:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Dump/restore of views containing select distinct fails"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> => create view testview as select relname, 'Constant'::text from pg_class;\n\nI assume you meant SELECT DISTINCT there...\n\n> => \\d testview\n> View \"testview\"\n> Attribute | Type | Modifier\n> -----------+------+----------\n> relname | name |\n> ?column? | text |\n> View definition: SELECT DISTINCT pg_class.relname, 'Constant'::text FROM\n> pg_class ORDER BY pg_class.relname, 'Constant'::text;\n\n> Note how the order by clause is not valid SQL. You get\n\n> ERROR: Non-integer constant in ORDER BY\n\nOoops.\n\n> I suppose the ORDER BY clause appears because of some weird query parse\n> tree hackery and is not easy to get rid of.\n\nNot without parsetree changes to distinguish explicit from implicit\nsortlist items (yet another place where we shot ourselves in the foot\nby not keeping adequate info about the original form of a query...)\n\n> Maybe using column numbers\n> instead of spelling out the select list again would work?\n\nYes, I think that's what we need to do. This particular case could\nperhaps be handled by allowing non-integer constants to fall through\nin findTargetlistEntry(), but that solution still fails for\n\nregression=# create view vv1 as select distinct f1, 42 from int4_tbl;\nCREATE\nregression=# \\d vv1\n View \"vv1\"\n Attribute | Type | Modifier\n-----------+---------+----------\n f1 | integer |\n ?column? | integer |\nView definition: SELECT DISTINCT int4_tbl.f1, 42 FROM int4_tbl ORDER BY int4_tbl.f1, 42;\n\nBasically we should not let the rule decompiler emit any simple constant\nliterally in ORDER BY --- it should emit the column number instead,\nturning this into\n\nSELECT DISTINCT int4_tbl.f1, 42 FROM int4_tbl ORDER BY int4_tbl.f1, 2;\n\n(I think we should do this only for literal constants, keeping the\nmore-symbolic form whenever possible.) Will work on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 18:41:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dump/restore of views containing select distinct fails "
}
] |
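The decompiler fix Tom Lane describes, emitting a literal constant ORDER BY item as its 1-based select-list position so the dumped view definition round-trips, can be sketched as below. Sort items are modeled as plain strings, and `is_literal_constant()` is a hypothetical stand-in for the real parse-tree test on the expression node.

```python
# Sketch of the rule-decompiler fix discussed above: when an ORDER BY item
# is a literal constant, emit its 1-based position in the select list
# instead of the constant itself.

def is_literal_constant(item):
    # hypothetical stand-in: quoted string literals or bare numbers
    return item.startswith("'") or item.replace(".", "", 1).isdigit()

def rewrite_order_by(select_list, order_by):
    """Replace literal-constant sort items with their column numbers."""
    out = []
    for item in order_by:
        if is_literal_constant(item) and item in select_list:
            out.append(str(select_list.index(item) + 1))
        else:
            out.append(item)   # keep the symbolic form whenever possible
    return out

cols = ["int4_tbl.f1", "42"]
rewritten = rewrite_order_by(cols, ["int4_tbl.f1", "42"])
# the constant 42 is emitted as column number 2
```

This mirrors the intended output `ORDER BY int4_tbl.f1, 2`, which reloads cleanly where `ORDER BY int4_tbl.f1, 42` raises "Non-integer constant in ORDER BY".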
[
{
"msg_contents": "Hi all,\n\nI've read the pg_dump threads with great interest as I plan to upgrade from\n7.0.3 to 7.1 next Monday if 7.1 final is out.\n\nI didn't have any of the problems listed in those threads during my\ntests from 7.0.3 to the 7.1 betas and RCs.\n\nI have noticed that pg_dump \tand pg_dumpall couldn't correctly generate\nCREATE FUNCTION when a function has been created using the form\n\nCREATE FUNCTION foo(x) RETURNS type AS 'xxxx/foo.so','foo2' LANGUAGE C;\n\npg_dump \"forgets\" the 'foo2' part: that being a pain in the a... when\nreloading...\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Fri, 13 Apr 2001 22:30:19 +0200",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": true,
"msg_subject": "pg_dump problem"
},
{
"msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> I have noticed that pg_dump \tand pg_dumpall could'nt generate correctly\n> CREATE FUNCTION when function has been created using the form\n> CREATE FUNCTION foo(x) RETURNS type AS 'xxxx/foo.so','foo2' LANGUAGE C;\n> pg_dumps \"forgets\" the 'foo2' part: that being a pain in the a... when\n> reloading...\n\nThis is fixed in 7.1 ... not a lot we can do about 7.0 at this point ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 17:42:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump problem "
}
] |
[
{
"msg_contents": "I see the problem. Your 7.0.3 dump contains several instances of this\npattern:\n\nCREATE TABLE \"users_alertable\" (\n\t\"user_id\" int4,\n\t\"email\" character varying(100),\n\t\"first_names\" character varying(100),\n\t\"last_name\" character varying(100),\n\t\"password\" character varying(30)\n);\n\n...\n\nCREATE FUNCTION \"user_vacations_kludge\" (int4 ) RETURNS int4 AS '\nbegin\n return count(*) \n from user_vacations v, users u\n where u.user_id = $1 and v.user_id = u.user_id\n and current_timestamp between v.start_date and v.end_date;\nend;' LANGUAGE 'plpgsql';\n\n...\n\nCREATE RULE \"_RETusers_alertable\" AS ON SELECT TO users_alertable DO INSTEAD SELECT u.user_id, u.email, u.first_names, u.last_name, u.\"password\" FROM users u WHERE (((((u.on_vacation_until ISNULL) OR (u.on_vacation_until < \"timestamp\"('now'::text))) AND (u.user_state = 'authorized'::\"varchar\")) AND ((u.email_bouncing_p ISNULL) OR (u.email_bouncing_p = 'f'::bpchar))) AND (user_vacations_kludge(u.user_id) = 0));\n\nAlthough this works fine, 7.1 folds the table + rule down into a single\nCREATE VIEW, which comes before the CREATE FUNCTION because that's what\nthe OID ordering suggests will work. Ugh.\n\nA possible kluge answer is to make pg_dump's OID-ordering of views\ndepend on the OID of the view rule rather than the view relation.\nI am not sure if that would break any cases that work now, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 17:34:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "At 17:34 13/04/01 -0400, Tom Lane wrote:\n>\n>A possible kluge answer is to make pg_dump's OID-ordering of views\n>depend on the OID of the view rule rather than the view relation.\n>I am not sure if that would break any cases that work now, however.\n>\n\nThis seems good to me; it should be based on the 'oid of the view', and\nAFAICT, the rule OID should be it. Should I do this?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 10:36:50 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "> At 17:34 13/04/01 -0400, Tom Lane wrote:\n> >\n> >A possible kluge answer is to make pg_dump's OID-ordering of views\n> >depend on the OID of the view rule rather than the view relation.\n> >I am not sure if that would break any cases that work now, however.\n> >\n> \n> This seems good to me; it should be based on the 'oid of the view', and\n> AFAICT, the rule OID should be it. Should I do this?\n\nThe view oid is certainly better than the base relation oid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 21:15:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4)"
},
{
"msg_contents": "At 21:15 13/04/01 -0400, Bruce Momjian wrote:\n>> At 17:34 13/04/01 -0400, Tom Lane wrote:\n>> >\n>> >A possible kluge answer is to make pg_dump's OID-ordering of views\n>> >depend on the OID of the view rule rather than the view relation.\n>> >I am not sure if that would break any cases that work now, however.\n>> >\n>> \n>> This seems good to me; it should be based on the 'oid of the view', and\n>> AFAICT, the rule OID should be it. Should I do this?\n>\n>The view oid is certainly better than the base relation oid.\n>\n\nSince I'm in pg_dump at the moment, I'll make the change...\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 11:43:02 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4)"
},
{
"msg_contents": ">>> >\n>>> >A possible kluge answer is to make pg_dump's OID-ordering of views\n>>> >depend on the OID of the view rule rather than the view relation.\n>>> >I am not sure if that would break any cases that work now, however.\n>>> >\n>>> \n>>> This seems good to me; it should be based on the 'oid of the view', and\n>>> AFAICT, the rule OID should be it. Should I do this?\n>>\n>>The view oid is certainly better than the base relation oid.\n>>\n>\n>Since I'm in pg_dump at the moment, I'll make the change...\n>\n\nHaving now looked at pg_dump more closely, I'm not at all sure I want to\nmake the change directly in pg_dump. The reason is that I am trying to move\nversion-specific stuff from pg_dump, and I currently get a view definition\nby doing 'select pg_getviewdef(<name>)' (rather than looking up the rule etc).\n\nWould people mind me adding a 'pg_getviewoid(<name>)' for pg_dump's use?\n(especially since 7.1 now seems to be out...)\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 17:30:25 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4)"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Having now looked at pg_dump more closely, I'm not at all sure I want to\n> make the change directly in pg_dump. The reason is that I am trying to move\n> version-specific stuff from pg_dump, and I currently get a view definition\n> by doing 'select pg_getviewdef(<name>)' (rather than looking up the rule etc).\n\n> Would people mind me adding a 'pg_getviewoid(<name>)' for pg_dump's use?\n\nWhile that would be a clean solution, it would mean that the problem\nwill remain until 7.2, because you don't get to assume another initdb\nuntil 7.2. I don't think we want to wait that long; I want to see a fix\nof some kind in 7.1.1.\n\nA possible compromise is to do a direct lookup of the OID in 7.1.*\nwith plans to replace it with some backend-side solution in 7.2 and\nbeyond.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 03:53:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "At 03:53 14/04/01 -0400, Tom Lane wrote:\n>\n>> Would people mind me adding a 'pg_getviewoid(<name>)' for pg_dump's use?\n>\n>While that would be a clean solution, it would mean that the problem\n>will remain until 7.2, because you don't get to assume another initdb\n>until 7.2. I don't think we want to wait that long; I want to see a fix\n>of some kind in 7.1.1.\n>\n>A possible compromise is to do a direct lookup of the OID in 7.1.*\n>with plans to replace it with some backend-side solution in 7.2 and\n>beyond.\n>\n\nI don't suppose we can change the pg_views view without an initdb?\n\nIf so, a better solution would be to add a 'view_oid' column. Otherwise,\nI'll have to kludge pg_dump.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 18:29:30 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "At 18:29 14/04/01 +1000, Philip Warner wrote:\n>\n>I don't suppose we can change the pg_views view without an initdb?\n>\n\nHaving now looked at the source, I realize that initdb is where this view\nis defined. However, is there any reason that we can not change this\ndefinition when upgrading to 7.1.1? \n\nThe embargo on initdb is, I think, primarily related to avoiding\nexport/import of databases - is that right? If so, then doing\nnon-destructive changes to things like system views does not seem too evil\n(in this case an update of a row of pg_rewrite and the addition of an attr\nto pg_views, I think).\n\nAm I missing something here? ISTM that the more higher level definitions we\nhave (eg. functions returning multiple rows, DEFINITION SCHEMAs etc), the\nmore we may need to allow changes to be made of things that are\n*customarily* defined in initdb.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 21:15:28 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> I don't suppose we can change the pg_views view without an initdb?\n\nNo, not very readily.\n\nEven assuming that we were willing to require dbadmins to run a script\nduring upgrade (which I wouldn't want to do unless forced to it), it's\nnot that easy to fix all the occurrences of a system view. Remember\nthere will be a separate copy in each database, including template0\nwhich you can't even get to. On top of which, DROP VIEW/CREATE VIEW\nwouldn't work because the view would then have the wrong OID, and would\nlook like a user-created object to pg_dump. You'd have to manually\nmanipulate tuples in pg_class, pg_attribute, pg_rewrite, etc.\n\nKluging pg_dump is a *lot* cleaner.\n\nI agree with the idea of adding the rule OID as a new column of pg_views\nfor 7.2, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 12:50:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
}
] |
[
{
"msg_contents": "\nWell folks, I just fixed the CVS tags (renamed REL7_1 to REL7_1_BETA and\nmoved REL7_1 to today) and packaged up the release ... this is it, any new\nfixes go into v7.1.1 ... :)\n\nI'm preparing a formal PR/Announce, and will send that out later on this\nevening, but want to give some of the mirror sites a chance to update\nbefore doing such ...\n\nIf anyone wants to grab a copy of this, make sure there are no outstanding\nissues with the packaging itself, please do ...\n\nThere are no changes between rc4 and full release, except that D'Arcy\nremoved a 'beta' comment from the Python interface ... so if you are\nrunning rc4 now, no need to upgrade ...\n\nUnless any major disagreements, I'd like to scheduale v7.1.1 now, for May\n1st, at which time I'll do our normal branch for v7.2 ... so, if you are\nsitting on any *bug fixes* for v7.1, plesae start shoving them in\neffective this email ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 13 Apr 2001 18:47:16 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Tag'd, packaged and ready to go ..."
},
{
"msg_contents": "> Well folks, I just fixed the CVS tags (renamed REL7_1 to REL7_1_BETA and\n> moved REL7_1 to today) and packaged up the release ... this is it, any new\n> fixes go into v7.1.1 ... :)\n\nOK, I have some (small) patches for documentation, but afaicr it is not\ncritical.\n\nPostscript docs should be completely done in the next few days, with\nsome available almost immediately. The Reference Manual will take the\nlongest, as the jade rtf output causes trouble in M$Word as well as in\nApplixware :(\n\nDid we get all of the ancillary plain text documents generated?\n\n - Thomas\n",
"msg_date": "Fri, 13 Apr 2001 22:35:58 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Tag'd, packaged and ready to go ..."
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Did we get all of the ancillary plain text documents generated?\n\nYes, unless you have changes for the installation instructions, the\nrelease history, or the regression test procedure.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 14 Apr 2001 03:48:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Tag'd, packaged and ready to go ..."
}
] |
[
{
"msg_contents": "\nHere is what we've always sent to to date ... anyone have any good ones\nto add?\n\n\nAddresses : freebsd-announce@freebsd.org,\n freebsd-database@freebsd.org,\n lweditors@linuxworld.com,\n lwn@lwn.net,\n malda@slashdot.org,\n pgsql-announce@postgresql.org,\n pgsql-general@postgresql.org,\n php-db@lists.php.net,\n plexus@xshare.com\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 13 Apr 2001 19:24:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Anyone have any good addresses ... ?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n\n> Here is what we've always sent to to date ... anyone have any good ones\n> to add?\n> \n> \n> Addresses : freebsd-announce@freebsd.org,\n> freebsd-database@freebsd.org,\n> lweditors@linuxworld.com,\n> lwn@lwn.net,\n> malda@slashdot.org,\n> pgsql-announce@postgresql.org,\n> pgsql-general@postgresql.org,\n> php-db@lists.php.net,\n> plexus@xshare.com\n\nFreshmeat, linuxtoday. If the release includes RPMs for Red Hat Linux,\nredhat-announce is also a suitable location.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "13 Apr 2001 18:32:26 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "On 13 Apr 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n>\n> > Here is what we've always sent to to date ... anyone have any good ones\n> > to add?\n> >\n> >\n> > Addresses : freebsd-announce@freebsd.org,\n> > freebsd-database@freebsd.org,\n> > lweditors@linuxworld.com,\n> > lwn@lwn.net,\n> > malda@slashdot.org,\n> > pgsql-announce@postgresql.org,\n> > pgsql-general@postgresql.org,\n> > php-db@lists.php.net,\n> > plexus@xshare.com\n>\n> Freshmeat, linuxtoday. If the release includes RPMs for Red Hat Linux,\n> redhat-announce is also a suitable location.\n\ndo you have email addresses fo freshmeat/linuxtoday? I have 6 web sites\nthat I have bookmarked for announces also, so if you have a good web URL,\nI'll take those too ...\n\nas for RPMs, will leave that for Lamar once he's got those ready :)\n\n\n",
"msg_date": "Fri, 13 Apr 2001 19:43:17 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n\n> On 13 Apr 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n> \n> > The Hermit Hacker <scrappy@hub.org> writes:\n> >\n> > > Here is what we've always sent to to date ... anyone have any good ones\n> > > to add?\n> > >\n> > >\n> > > Addresses : freebsd-announce@freebsd.org,\n> > > freebsd-database@freebsd.org,\n> > > lweditors@linuxworld.com,\n> > > lwn@lwn.net,\n> > > malda@slashdot.org,\n> > > pgsql-announce@postgresql.org,\n> > > pgsql-general@postgresql.org,\n> > > php-db@lists.php.net,\n> > > plexus@xshare.com\n> >\n> > Freshmeat, linuxtoday. If the release includes RPMs for Red Hat Linux,\n> > redhat-announce is also a suitable location.\n> \n> do you have email addresses fo freshmeat/linuxtoday? I have 6 web sites\n> that I have bookmarked for announces also, so if you have a good web URL,\n> I'll take those too ...\n\nSeems to be web based (painful):\n\nhttp://freshmeat.net/faq/view/20/\nhttp://linuxtoday.com/contribute.php3\n \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "13 Apr 2001 18:54:17 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "> \n> Here is what we've always sent to to date ... anyone have any good ones\n> to add?\n> \n> \n> Addresses : freebsd-announce@freebsd.org,\n> freebsd-database@freebsd.org,\n> lweditors@linuxworld.com,\n> lwn@lwn.net,\n> malda@slashdot.org,\n> pgsql-announce@postgresql.org,\n> pgsql-general@postgresql.org,\n> php-db@lists.php.net,\n> plexus@xshare.com\n\nI think xshare is dead.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 19:00:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Here is what we've always sent to to date ... anyone have any good ones\n> to add?\n\nI think that this is still the moderator's address for comp.os.linux.announce:\n\n\tlinux-announce@news.ornl.gov\n-- \nmatthew rice <matt@starnix.com> starnix inc.\ntollfree: 1-87-pro-linux thornhill, ontario, canada\nhttp://www.starnix.com professional linux services & products\n",
"msg_date": "13 Apr 2001 19:00:37 -0400",
"msg_from": "Matthew Rice <matt@starnix.com>",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "> \n> Here is what we've always sent to to date ... anyone have any good ones\n> to add?\n> \n> \n> Addresses : freebsd-announce@freebsd.org,\n> freebsd-database@freebsd.org,\n> lweditors@linuxworld.com,\n> lwn@lwn.net,\n> malda@slashdot.org,\n> pgsql-announce@postgresql.org,\n> pgsql-general@postgresql.org,\n> php-db@lists.php.net,\n> plexus@xshare.com\n\nDo we do freshmeat?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 19:00:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "\nFreshmeat updated, Linuxtoday bookmarked ... thanks ;)\n\n\nOn 13 Apr 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n>\n> > On 13 Apr 2001, Trond Eivind [iso-8859-1] Glomsr�d wrote:\n> >\n> > > The Hermit Hacker <scrappy@hub.org> writes:\n> > >\n> > > > Here is what we've always sent to to date ... anyone have any good ones\n> > > > to add?\n> > > >\n> > > >\n> > > > Addresses : freebsd-announce@freebsd.org,\n> > > > freebsd-database@freebsd.org,\n> > > > lweditors@linuxworld.com,\n> > > > lwn@lwn.net,\n> > > > malda@slashdot.org,\n> > > > pgsql-announce@postgresql.org,\n> > > > pgsql-general@postgresql.org,\n> > > > php-db@lists.php.net,\n> > > > plexus@xshare.com\n> > >\n> > > Freshmeat, linuxtoday. If the release includes RPMs for Red Hat Linux,\n> > > redhat-announce is also a suitable location.\n> >\n> > do you have email addresses fo freshmeat/linuxtoday? I have 6 web sites\n> > that I have bookmarked for announces also, so if you have a good web URL,\n> > I'll take those too ...\n>\n> Seems to be web based (painful):\n>\n> http://freshmeat.net/faq/view/20/\n> http://linuxtoday.com/contribute.php3\n>\n>\n> --\n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 13 Apr 2001 20:16:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "\nemail added, thanks ...\n\nOn 13 Apr 2001, Matthew Rice wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Here is what we've always sent to to date ... anyone have any good ones\n> > to add?\n>\n> I think that this is still the moderator's address for comp.os.linux.announce:\n>\n> \tlinux-announce@news.ornl.gov\n> --\n> matthew rice <matt@starnix.com> starnix inc.\n> tollfree: 1-87-pro-linux thornhill, ontario, canada\n> http://www.starnix.com professional linux services & products\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 13 Apr 2001 20:17:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "On Fri, 13 Apr 2001, Bruce Momjian wrote:\n\n> >\n> > Here is what we've always sent to to date ... anyone have any good ones\n> > to add?\n> >\n> >\n> > Addresses : freebsd-announce@freebsd.org,\n> > freebsd-database@freebsd.org,\n> > lweditors@linuxworld.com,\n> > lwn@lwn.net,\n> > malda@slashdot.org,\n> > pgsql-announce@postgresql.org,\n> > pgsql-general@postgresql.org,\n> > php-db@lists.php.net,\n> > plexus@xshare.com\n>\n> Do we do freshmeat?\n\nYup ... just submit'd the update ...\n\n\n",
"msg_date": "Fri, 13 Apr 2001 20:17:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Anyone have any good addresses ... ?"
},
{
"msg_contents": "On Fri, Apr 13, 2001 at 06:32:26PM -0400, Trond Eivind Glomsr?d wrote:\n> The Hermit Hacker <scrappy@hub.org> writes:\n> \n> > Here is what we've always sent to to date ... anyone have any good ones\n> > to add?\n> > \n> > \n> > Addresses : freebsd-announce@freebsd.org,\n> > freebsd-database@freebsd.org,\n> > lweditors@linuxworld.com,\n> > lwn@lwn.net,\n> > malda@slashdot.org,\n> > pgsql-announce@postgresql.org,\n> > pgsql-general@postgresql.org,\n> > php-db@lists.php.net,\n> > plexus@xshare.com\n> \n> Freshmeat, linuxtoday. If the release includes RPMs for Red Hat Linux,\n> redhat-announce is also a suitable location.\n\nLinux Journal: linux@ssc.com\nFreshmeat: jeff.covey@freshmeat.net\nLinuxToday: http://linuxtoday.com/contribute.php3\n\n-- \nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Apr 2001 16:37:32 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Anyone have any good addresses ... ?"
}
] |
[
{
"msg_contents": "Hi Lamar. What are the plans for RPMs? Do we have an \"integrated RPM\"\nwhich will work with Mandrake, or should I keep carrying along my\npatches to make the spec file work for now?\n\nHow are you planning on packaging the hardcopy docs? They are not yet\navailable, but will be Real Soon Now :(\n\n - Thomas\n",
"msg_date": "Fri, 13 Apr 2001 22:44:12 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "7.1 RPMs"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Hi Lamar. What are the plans for RPMs? Do we have an \"integrated RPM\"\n> which will work with Mandrake, or should I keep carrying along my\n> patches to make the spec file work for now?\n\nI haven't addressed that as yet. Is it safe to assume that -ffast-math\nshould be Considered Harmful in the RPM_OPT_FLAGS? Is -ffast-math\n_ever_ a Good Thing for our routines? I can easily enough strip out\n-ffast-math from the flags for all cases (xarg -n 1 grep -v\nffast-math|xargs is your friend....).\n\nWhile I don't plan on following the Mandrake Way WRT repackaging our\ntarball with bzip2, the source RPM should use whatever compression for\nthe man pages that the buildrootpolicy for that distribution supplies.\n \n> How are you planning on packaging the hardcopy docs? They are not yet\n> available, but will be Real Soon Now :(\n\nIn the postgresql-docs subpackage, along with the SGML source. The html\nbuilt docs made from the SGML source is still going into the main\ntarball, as they are nice and browseable in their standard location.\n\nIf I release a -1 RPM without the hardcopy, I can release a -2 with....\n\nI have a couple of patches from Trond to integrate, and a decision to\nmake regarding the contribs: should all the contrib tree go into one big\nRPM (860KB or so),or should each contrib directory get its own RPM\n(reminiscent of the PM3 binary RPM monster)? One patch from Trond has\nbeen duplicated by Karl: which patch allows building as non-root again.\n\nThere's also a question about the Python client -- it would be good if\nsomeone who has downloaded one of the RC RPM's could test that, as I'm\nnot a snake charmer. :-)\n\nAlso, I need either a standard way to build the java stuff (meaning my\nown JDK that is reasonably standard by consensus -- kaffe ships with\nRedHat 7.0 -- isthat an acceptable JDK-substitute?) or someone needs to\npackage 7.1 JDBC jars for my packaging pleasure. 
I'm running low enough\non disk space on my devel machines (one of which is a notebook) to make\nmy own JDK a second choice.\n\nOliver, what are doing with the JDBC client?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 13 Apr 2001 19:56:02 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 RPMs"
},
{
"msg_contents": "On Fri, 13 Apr 2001, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > In the postgresql-docs subpackage, along with the SGML source.\n \n> Why would you want to ship the source?\n\nFor those with SGML tools and viewers, who might like to build hardcopy of\ntheir own. Frankly, it was an easy thing to do; had been done; and I saw no\nreal reason to stop doing it. I _does_ take up a little space, however.\n\nThe SGML source had been distributed as part of the main RPM, prior to 7.1.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 13 Apr 2001 22:11:26 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "Lamar Owen writes:\n\n> In the postgresql-docs subpackage, along with the SGML source.\n\nWhy would you want to ship the source?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 14 Apr 2001 04:22:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "> I haven't addressed that as yet. Is it safe to assume that -ffast-math\n> should be Considered Harmful in the RPM_OPT_FLAGS? Is -ffast-math\n> _ever_ a Good Thing for our routines? I can easily enough strip out\n> -ffast-math from the flags for all cases (xarg -n 1 grep -v\n> ffast-math|xargs is your friend....).\n\nI believe that there is no room for -ffast-math in PostgreSQL. The\ncompiler man page says that the option allows the compiler to generate\ncode which does not conform to IEEE math standards, and istm that we\nmight consider that as affecting our predictability and portability. The\nman page also warns against using -ffast-math and -O together, but I\ncan't tell if that is a warning to users or a \"note to myself\" for the\ncompiler maintainers.\n\n> While I don't plan on following the Mandrake Way WRT repackaging our\n> tarball with bzip2, the source RPM should use whatever compression for\n> the man pages that the buildrootpolicy for that distribution supplies.\n\nCertainly so for the tarball. However the tarballs are delivered is how\nwe should use them imho. Why is it an issue for the man pages? They are\nuncompressed in our source tarball (? haven't checked) and if we have\nthem uncompressed in the RPM build area then they get compressed by RPM\nsomewhere between installation and RPM packaging, right?\n\n> > How are you planning on packaging the hardcopy docs? They are not yet\n> > available, but will be Real Soon Now :(\n> In the postgresql-docs subpackage, along with the SGML source. The html\n> built docs made from the SGML source is still going into the main\n> tarball, as they are nice and browseable in their standard location.\n> If I release a -1 RPM without the hardcopy, I can release a -2 with....\n\nGreat.\n\n> There's also a question about the Python client -- it would be good if\n> someone who has downloaded one of the RC RPM's could test that, as I'm\n> not a snake charmer. 
:-)\n\nNot sure about the python stuff, and I don't recall doing anything in\nthe past on that topic.\n\n> Also, I need either a standard way to build the java stuff (meaning my\n> own JDK that is reasonably standard by consensus -- kaffe ships with\n> RedHat 7.0 -- isthat an acceptable JDK-substitute?) or someone needs to\n> package 7.1 JDBC jars for my packaging pleasure. I'm running low enough\n> on disk space on my devel machines (one of which is a notebook) to make\n> my own JDK a second choice.\n\nIn the past I have found that kaffe did not handle enough java code for\nmy needs, but that was not for the JDBC driver. I am currently using\njikes for my projects, and it produces *nice* code in my experience.\n\n - Thomas\n",
"msg_date": "Sat, 14 Apr 2001 14:00:40 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: 7.1 RPMs"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> I believe that there is no room for -ffast-math in PostgreSQL. The\n\nI have placed code in the spec to strip out ffast-math from the CFLAGS.\nI have not as yet followed Peter's advice on exporting CFLAGS and\nleaving COPT -- but that's not to say that I won't.\n \n> > While I don't plan on following the Mandrake Way WRT repackaging our\n> > tarball with bzip2, the source RPM should use whatever compression for\n> > the man pages that the buildrootpolicy for that distribution supplies.\n \n> Certainly so for the tarball. However the tarballs are delivered is how\n> we should use them imho.\n\nExactly.\n\n> them uncompressed in the RPM build area then they get compressed by RPM\n> somewhere between installation and RPM packaging, right?\n\nRight. Each distributor can use its own buildrootpolicy -- which\nhandles man page compression, executable stripping, and the like. Not an\nissue -- but Mandrake historically bzip2's them.\n \n> Not sure about the python stuff, and I don't recall doing anything in\n> the past on that topic.\n\nI thought you did up the current build way ... but I could be mistaken. \nI need to see if the python interface makefile Does the Right Thing now\nWRT RPM_BUILD_ROOT/DESTDIR processing. The main make stuff now acts\nsanely inthe presence of RPM_BUILD_ROOT -- well, except the perl\ninterface, but that's a special case.\n \n> In the past I have found that kaffe did not handle enough java code for\n> my needs, but that was not for the JDBC driver. I am currently using\n> jikes for my projects, and it produces *nice* code in my experience.\n\nJikes is open source, right? I know it is available for Red Hat (ships\nwith it on one of the applications CD's,IIRC.) How does a Jikes-built\nJDBC sound to people? Ormaybe I don't understand the Java Way well\nenough to decide. Gotta learn it a little....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 14 Apr 2001 10:29:42 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> > In the past I have found that kaffe did not handle enough java code for\n> > my needs, but that was not for the JDBC driver. I am currently using\n> > jikes for my projects, and it produces *nice* code in my experience.\n> \n> Jikes is open source, right? I know it is available for Red Hat (ships\n> with it on one of the applications CD's,IIRC.) How does a Jikes-built\n> JDBC sound to people? Ormaybe I don't understand the Java Way well\n> enough to decide. Gotta learn it a little....\n\nJikes is an excellent compiler, but it needs a set of classes to\ncompile against from a JDK.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "14 Apr 2001 11:33:08 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "Do we need to start thinking about an RPM mailing list? Seems there is\nlots of traffic.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Apr 2001 12:40:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "\nIf someone wants to come up with an idea for name, i think that the whole\nWin camp could be seperated also ...\n\npgsql-windows and pgsql-rpm ?\n\nas far as newsgroups are concerned, they would both fall under ports:\n\ncomp.databases.postgresql.ports.linux.rpm\ncomp.databases.postgresql.ports.windows\n\nI'm willing to create, as long as ppl are willing to use *shrug*\n\nOn Sat, 14 Apr 2001, Bruce Momjian wrote:\n\n> Do we need to start thinking about an RPM mailing list? Seems there is\n> lots of traffic.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 14 Apr 2001 14:55:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "The Hermit Hacker wrote:\n> If someone wants to come up with an idea for name, i think that the whole\n> Win camp could be seperated also ...\n \n> pgsql-windows and pgsql-rpm ?\n \n> as far as newsgroups are concerned, they would both fall under ports:\n\nIf that's what you want to do. Although, I'd recommend pgsql-cygwin,\nlest someone erroneously think we directly support Win32.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 14 Apr 2001 14:18:43 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> If someone wants to come up with an idea for name, i think that the whole\n> Win camp could be seperated also ...\n\n> pgsql-windows and pgsql-rpm ?\n\nA windows list seems like a good idea. But I'm not sure that a separate\nlist for RPMs is a good idea. In the first place, it's fuzzy: is it\nto be used just for RPM packaging discussion, or is it going to draw\noff --- for example --- all bug reports from people who happen to have\ninstalled from RPM instead of source? I suppose the former is intended,\nbut it's not going to be clear to people. I think we've already got too\nmany lists with fuzzy boundaries. In the second place, the RPM\npackaging discussion is quite sporadic; I think the traffic would be nil\nexcept at times when Lamar is working on new RPMs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 14:20:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "On Sat, 14 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > If someone wants to come up with an idea for name, i think that the whole\n> > Win camp could be seperated also ...\n>\n> > pgsql-windows and pgsql-rpm ?\n>\n> A windows list seems like a good idea. But I'm not sure that a\n> separate list for RPMs is a good idea. In the first place, it's\n> fuzzy: is it to be used just for RPM packaging discussion, or is it\n> going to draw off --- for example --- all bug reports from people who\n> happen to have installed from RPM instead of source? I suppose the\n> former is intended, but it's not going to be clear to people. I think\n> we've already got too many lists with fuzzy boundaries. In the second\n> place, the RPM packaging discussion is quite sporadic; I think the\n> traffic would be nil except at times when Lamar is working on new\n> RPMs.\n\nThat's why I wasn't sure how to classify the RPM one ...\n\nI like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n\n\n",
"msg_date": "Sat, 14 Apr 2001 15:28:11 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Do we need to start thinking about an RPM mailing list? Seems there is\n> > lots of traffic.\n> \n> The traffic naturally peaks around release time, and this time especially\n> because yours truly messed up the whole build system that the packagers\n> were so careful to work around. I trust that in a few weeks we'll enter a\n> new quiet period. My vote is that technical packaging discussions should\n> go on -hackers just like a makefile discussion.\n> \n\nOK, it was just an idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Apr 2001 14:29:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> I like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n\nYes, that's probably better than pgsql-windows ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 14:38:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Do we need to start thinking about an RPM mailing list? Seems there is\n> lots of traffic.\n\nThe traffic naturally peaks around release time, and this time especially\nbecause yours truly messed up the whole build system that the packagers\nwere so careful to work around. I trust that in a few weeks we'll enter a\nnew quiet period. My vote is that technical packaging discussions should\ngo on -hackers just like a makefile discussion.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 14 Apr 2001 20:39:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "On Sat, 14 Apr 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n>\n> > Do we need to start thinking about an RPM mailing list? Seems there is\n> > lots of traffic.\n>\n> The traffic naturally peaks around release time, and this time\n> especially because yours truly messed up the whole build system that\n> the packagers were so careful to work around. I trust that in a few\n> weeks we'll enter a new quiet period. My vote is that technical\n> packaging discussions should go on -hackers just like a makefile\n> discussion.\n\nWhy not a \"pgsql-build\", or something like that, list? Where build/make\nfile discussions can take place? Vs server issues? I'd really like to\nfind some way of reducing traffic on -hackers like we did with -interfaces\n... if we can come up with a good list for it ...\n\npgsql-build (or a better name?) could be for RPM discussions, just as easy\nas Makefile/Configure discussions ...\n\n\n",
"msg_date": "Sat, 14 Apr 2001 15:45:37 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> I like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n\nWe have pgsql-ports, which isn't seeing too much traffic as it is. Seems\nlike the cygwin people hang out there anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 14 Apr 2001 20:57:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "On Sat, 14 Apr 2001, The Hermit Hacker wrote:\n\n> On Sat, 14 Apr 2001, Peter Eisentraut wrote:\n>\n> > Bruce Momjian writes:\n> >\n> > > Do we need to start thinking about an RPM mailing list? Seems there is\n> > > lots of traffic.\n> >\n> > The traffic naturally peaks around release time, and this time\n> > especially because yours truly messed up the whole build system that\n> > the packagers were so careful to work around. I trust that in a few\n> > weeks we'll enter a new quiet period. My vote is that technical\n> > packaging discussions should go on -hackers just like a makefile\n> > discussion.\n>\n> Why not a \"pgsql-build\", or something like that, list? Where build/make\n> file discussions can take place? Vs server issues? I'd really like to\n> find some way of reducing traffic on -hackers like we did with -interfaces\n> ... if we can come up with a good list for it ...\n>\n> pgsql-build (or a better name?) could be for RPM discussions, just as easy\n> as Makefile/Configure discussions ...\n\nHow 'bout pgsql-hackers-rpm. Because the make/config stuff can impact\nso many different parts of PostgreSQL, that stuff should probably remain\non hackers.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sat, 14 Apr 2001 15:06:53 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "> The Hermit Hacker <scrappy@hub.org> writes:\n> > I like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n> \n> Yes, that's probably better than pgsql-windows ...\n\nBut then again, the comment this is more properly done on ports makes\nsense.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Apr 2001 15:08:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "On Sat, 14 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > I like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n>\n> Yes, that's probably better than pgsql-windows ...\n\nDone ...\n\n\n",
"msg_date": "Sat, 14 Apr 2001 16:15:42 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "On Sat, 14 Apr 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > I like Lamar's suggestion of pgsql-cygwin though ... sound reasonable?\n>\n> We have pgsql-ports, which isn't seeing too much traffic as it is. Seems\n> like the cygwin people hang out there anyway.\n\nYa, well, there is alot of traffic on -hackers that should probably be\nover there anyway *shrug*\n\n\n",
"msg_date": "Sat, 14 Apr 2001 16:16:09 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> If someone wants to come up with an idea for name, i think that the whole\n> Win camp could be seperated also ...\n>\n> pgsql-windows and pgsql-rpm ?\n\nThere seem to be a lot of Linux users, too. How about a new mailing list?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 15 Apr 2001 04:02:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> The traffic naturally peaks around release time, and this time especially\n> because yours truly messed up the whole build system that the packagers\n> were so careful to work around.\n\nNow, Peter. The build, IMHO, is much better than before -- if anything,\nthe fact that we had to change as much as we did shows that the previous\nbuild system, no offense intended to anyone who worked on it :-), wasn't\n'packaging friendly.' I ripped out more special treatment than I had to\nput in. The less code in the spec, the better the spec (and the better\nthe package).\n\nNow I just have to get the Python build version-agnostic and using the\nregular build -- even if I have to dink with the makefiles myself. I\nthink the perl build, due to its two-stage needs, will remain the\nspecial case that it is.\n\n> I trust that in a few weeks we'll enter a\n> new quiet period. My vote is that technical packaging discussions should\n> go on -hackers just like a makefile discussion.\n\nI tend to agree -- but at the same time I'm easy to get along with in\nthat regard. Packaging envelopes the whole program -- I must see the\nforest that the -hackers group has built out of trees. And I have to\nknow some details of the trees occasionally. I'm sure Oliver would\nagree.\n\nAnd, to let everyone know, I'm having a blast doing this. And I'm glad\nmy work schedule eased up some in the last month so I could put some\ntime to this task.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 14 Apr 2001 22:37:02 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "> > In the past I have found that kaffe did not handle enough java code for\n> > my needs, but that was not for the JDBC driver. I am currently using\n> > jikes for my projects, and it produces *nice* code in my experience.\n> Jikes is open source, right? I know it is available for Red Hat (ships\n> with it on one of the applications CD's,IIRC.) How does a Jikes-built\n> JDBC sound to people? Ormaybe I don't understand the Java Way well\n> enough to decide. Gotta learn it a little....\n\nFor the RPM, you should be able to build JDBC with whatever compiler\nproduces reliable, quality code. jikes is one good candidate imho.\n\nNot sure if the configure info for JDBC looks for choices of compiler,\nor if you can specify the compiler for the build. I can look into it,\nbut am not sure how one interacts with ant (we are using ant to do the\nJava build now, right??) or if ant handles it already.\n\n - Thomas\n",
"msg_date": "Sun, 15 Apr 2001 05:39:45 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "> Do we need to start thinking about an RPM mailing list? Seems there is\n> lots of traffic.\n\nThe delete key is your friend. So is procmail, if you just can't stand\nto see the letters \"R\", \"P\", and \"M\" too close together ;)\n\nI'm not a big fan of the trend to fork off a mailing list anytime more\nthan a few messages on a single topic come through. The synergy and\ncross-pollination that we get by having us all see various topics wrt\ndevelopment far outweigh the minor annoyance to some on having to delete\ntopics they don't find interesting.\n\nAs an example, RPM building is only a part of the general packaging of\nPostgreSQL, but it illustrates issues which anyone touching\nconfiguration or Makefiles should be aware of. So forcing \"those Linux\npeople\" onto some specialty list weakens the knowledge base we all could\ndraw from.\n\nAll imho of course...\n\n - Thomas\n",
"msg_date": "Sun, 15 Apr 2001 05:48:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: 7.1 RPMs"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> I'm not a big fan of the trend to fork off a mailing list anytime more\n> than a few messages on a single topic come through. The synergy and\n> cross-pollination that we get by having us all see various topics wrt\n> development far outweigh the minor annoyance to some on having to delete\n> topics they don't find interesting.\n\nThat's my feeling also. There's a considerable downside to fragmenting\nthe PG discussions across multiple lists: not only that people may miss\nstuff that they should have seen, but also the increased probability\nthat messages will be sent to the wrong lists to begin with.\n\nWe should only split off new lists when the volume gets to the point of\nbeing absolutely intolerable --- which the RPM stuff is not. The Cygwin\nstuff might be sufficiently specialized that it can survive as a\nseparate list, but I thought that was a marginal call at best.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Apr 2001 02:24:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs "
},
{
"msg_contents": "On Sun, 15 Apr 2001, Thomas Lockhart wrote:\n\n> > Do we need to start thinking about an RPM mailing list? Seems there is\n> > lots of traffic.\n>\n> The delete key is your friend. So is procmail, if you just can't stand\n> to see the letters \"R\", \"P\", and \"M\" too close together ;)\n>\n> I'm not a big fan of the trend to fork off a mailing list anytime more\n> than a few messages on a single topic come through. The synergy and\n> cross-pollination that we get by having us all see various topics wrt\n> development far outweigh the minor annoyance to some on having to delete\n> topics they don't find interesting.\n>\n> As an example, RPM building is only a part of the general packaging of\n> PostgreSQL, but it illustrates issues which anyone touching\n> configuration or Makefiles should be aware of. So forcing \"those Linux\n> people\" onto some specialty list weakens the knowledge base we all could\n> draw from.\n>\n> All imho of course...\n\nagreed, that's why I was kinda thinking maybe some sort of 'build' list\n... something that deals with configure, makefile and packaging issues ...\n\n\n",
"msg_date": "Sun, 15 Apr 2001 03:26:30 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Do we need to start thinking about an RPM mailing list? Seems there is\n> lots of traffic.\n\nIIRC, this question was asked about 6 months ago, and the answer was RPM\nshould be discussed on PostgreSQL-Ports. \n\nOn the other hand, it seems in practice most people are unaware of this,\nand do in fact is general or devel.\n\nMaybe if there were PostgreSQL-RPM as an alias to PostgreSQL-Ports?\n\nOr a separate list, I don't mind either way, but it's pretty clear to me\nthat currently there is alot of misdirected traffic..\n\n-- \nKarl\n",
"msg_date": "Sun, 15 Apr 2001 06:59:54 -0400",
"msg_from": "Karl DeBisschop <karl@debisschop.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 RPMs"
}
]
[
{
"msg_contents": "Hi\n\nI've been experimenting with 7.1rc4 for a couple of hours. I was messing with\nblobs, and the new toast setup worked quite nicely. One thing I especially\nliked was the fact that by having pg_dump create a dumpfile in the custom or\ntar format, I could also backup all blobs in one go.\n\nUnfortunately, practice was a bit different. Which is why I would like to know\nif these functions are intended for general use.\n\nA small log:\n\nsol2:~$ uname -srm\nSunOS 5.8 sun4u\nsol2:~$ createdb blaat\nCREATE DATABASE\nsol2:~$ psql -c 'create table test(a oid)' blaat\nCREATE\nsol2:~$ psql -c \"insert into test values(lo_import('/etc/hosts'))\" blaat \nINSERT 18761 1\nsol2:~$ pg_dump -b -Fc -f blaat.bk blaat\nsol2:~$ pg_restore -l blaat.bk\n;\n; Archive created at Sat Apr 14 01:03:02 2001\n; dbname: blaat\n; TOC Entries: 4\n; Compression: -1\n; Dump Version: 1.5-2\n; Format: CUSTOM\n;\n;\n; Selected TOC Entries:\n;\n2; 18749 TABLE test mathijs\n3; 18749 TABLE DATA test mathijs\n4; 0 BLOBS BLOBS \nsol2:~$ grep serv /etc/hosts\n10.1.8.12 serv2.ilse.nl\n10.1.8.10 serv0.ilse.nl\nsol2:~$ grep serv blaat.bk \nsol2:~$ pg_dump -b -Ft -f blaat.tar blaat\nzsh: segmentation fault (core dumped) pg_dump -b -Ft -f blaat.tar blaat\nsol2:~$ psql -c 'select version()' blaat\n version \n-------------------------------------------------------------------\n PostgreSQL 7.1rc4 on sparc-sun-solaris2.8, compiled by GCC 2.95.3\n(1 row)\n\nA backtrace reveals the following:\n#0 0xff132e5c in strlen () from /usr/lib/libc.so.1\n#1 0xff181890 in _doprnt () from /usr/lib/libc.so.1\n#2 0xff183a04 in vsnprintf () from /usr/lib/libc.so.1\n#3 0x2710c in ahprintf (AH=0x56cd0, fmt=0x430a8 \"-- File: %s\\n\")\n at pg_backup_archiver.c:1116\n#4 0x2ee90 in _PrintExtraToc (AH=0x56cd0, te=0x5e838) at pg_backup_tar.c:305\n#5 0x290e0 in _printTocEntry (AH=0x56cd0, te=0x5e838, ropt=0x681b0)\n at pg_backup_archiver.c:1877\n#6 0x25470 in RestoreArchive (AHX=0x56cd0, ropt=0x681b0)\n at 
pg_backup_archiver.c:269\n#7 0x2ffb8 in _CloseArchive (AH=0x56cd0) at pg_backup_tar.c:840\n#8 0x24f68 in CloseArchive (AHX=0x56cd0) at pg_backup_archiver.c:136\n#9 0x15128 in main (argc=6, argv=0xffbefcac) at pg_dump.c:1114\n\nWhat happens is that in line 305 of pg_backup_tar.c, ahprintf is handed a NULL\npointer.\n\n300 static void\n301 _PrintExtraToc(ArchiveHandle *AH, TocEntry *te)\n302 {\n303 lclTocEntry *ctx = (lclTocEntry *) te->formatData;\n304 \n305 ahprintf(AH, \"-- File: %s\\n\", ctx->filename);\n306 }\n\nCould this be caused by the fact that IMHO blobs aren't dumped correctly?\n\nRegards,\n\nMathijs\n--\nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n",
"msg_date": "Sat, 14 Apr 2001 01:14:07 +0200",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": true,
"msg_subject": "pg_dump, formats & blobs"
},
{
"msg_contents": "At 01:14 14/04/01 +0200, Mathijs Brands wrote:\n...\n>sol2:~$ pg_dump -b -Fc -f blaat.bk blaat\n>sol2:~$ pg_restore -l blaat.bk\n...\n>;\n>; Archive created at Sat Apr 14 01:03:02 2001\n...\n\nThis all looks fine.\n\n\n>sol2:~$ pg_dump -b -Ft -f blaat.tar blaat\n>zsh: segmentation fault (core dumped) pg_dump -b -Ft -f blaat.tar blaat\n\nThis is less good. It's caused by the final part of TAR output, which also\ndumps a plain SQL script for reference (not actually ever used by\npg_restore). I will fix this in CVS; ctx->filename is set to null for this\nscript, and my compiler outputs '(null)', which is very forgiving of it. \n\n\n>\n>Could this be caused by the fact that IMHO blobs aren't dumped correctly?\n>\n\nIs there some other problem with BLOBs that you did not mention? AFAICT,\nthis is only a problem with TAR output (an will be fixed ASAP). \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 11:44:18 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump, formats & blobs"
},
{
"msg_contents": "On Sat, Apr 14, 2001 at 11:44:18AM +1000, Philip Warner allegedly wrote:\n> At 01:14 14/04/01 +0200, Mathijs Brands wrote:\n> ...\n> >sol2:~$ pg_dump -b -Fc -f blaat.bk blaat\n> >sol2:~$ pg_restore -l blaat.bk\n> ...\n> >;\n> >; Archive created at Sat Apr 14 01:03:02 2001\n> ...\n> \n> This all looks fine.\n\nHmm, I can only agree.\n\nsol2:~$ cksum postgresql-7.1rc4.tar.gz\n61540329 8088934 postgresql-7.1rc4.tar.gz\nsol2:~$ dropdb blaat\nDROP DATABASE\nsol2:~$ createdb blaat\nCREATE DATABASE\nsol2:~$ psql -c 'create table test(a oid)' blaat\nCREATE\nsol2:~$ psql -c \"insert into test values(lo_import('/export/home/mathijs/postgresql-7.1rc4.tar.gz'))\" blaat\nINSERT 22753 1\nsol2:~$ pg_dump -b -Fc -f blaat.bk blaat\nsol2:~$ psql -c 'drop table test ; vacuum' blaat\nVACUUM\nsol2:~$ pg_restore -d blaat blaat.bk\nsol2:~$ psql -c \"select lo_export(test.a, '/export/home/mathijs/testfile') from test\" blaat\n lo_export\n-----------\n 1\n(1 row)\n\nsol2:~$ cksum testfile\n61540329 8088934 testfile\nsol2:~$ pg_restore -l blaat.bk\n;\n; Archive created at Sat Apr 14 03:59:02 2001\n; dbname: blaat\n; TOC Entries: 4\n; Compression: -1\n; Dump Version: 1.5-2\n; Format: CUSTOM\n;\n;\n; Selected TOC Entries:\n;\n2; 18792 TABLE test mathijs\n3; 18792 TABLE DATA test mathijs\n4; 0 BLOBS BLOBS\n\nI couldn't get blobs to be restored correctly (must've been doing\nsomething wrong). When something doesn't work, never question your\nown methods ;)\n\n> >sol2:~$ pg_dump -b -Ft -f blaat.tar blaat\n> >zsh: segmentation fault (core dumped) pg_dump -b -Ft -f blaat.tar blaat\n> \n> This is less good. It's caused by the final part of TAR output, which also\n> dumps a plain SQL script for reference (not actually ever used by\n> pg_restore). I will fix this in CVS; ctx->filename is set to null for this\n> script, and my compiler outputs '(null)', which is very forgiving of it. \n\nIt's more likely that your C library is more forgiving (ie. 
Open Source OS?).\n\n> >Could this be caused by the fact that IMHO blobs aren't dumped correctly?\n> >\n> \n> Is there some other problem with BLOBs that you did not mention? AFAICT,\n> this is only a problem with TAR output (an will be fixed ASAP). \n\nYeah, they're not fool proof ;)\n\nSorry about the false alarm. I was convinced restoring blobs\ndidn't work correctly.\n\nRegards,\n\nMathijs\n-- \n$_='while(read+STDIN,$_,2048){$a=29;$c=142;if((@a=unx\"C*\",$_)[20]&48){$h=5;\n$_=unxb24,join\"\",@b=map{xB8,unxb8,chr($_^$a[--$h+84])}@ARGV;s/...$/1$&/;$d=\nunxV,xb25,$_;$b=73;$e=256|(ord$b[4])<<9|ord$b[3];$d=$d>>8^($f=($t=255)&($d\n>>12^$d>>4^$d^$d/8))<<17,$e=$e>>8^($t&($g=($q=$e>>14&7^$e)^$q*8^$q<<6))<<9\n,$_=(map{$_%16or$t^=$c^=($m=(11,10,116,100,11,122,20,100)[$_/16%8])&110;$t\n^=(72,@z=(64,72,$a^=12*($_%16-2?0:$m&17)),$b^=$_%64?12:0,@z)[$_%8]}(16..271))\n[$_]^(($h>>=8)+=$f+(~$g&$t))for@a[128..$#a]}print+x\"C*\",@a}';s/x/pack+/g;eval \n",
"msg_date": "Sat, 14 Apr 2001 04:10:26 +0200",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump, formats & blobs"
},
{
"msg_contents": "At 04:10 14/04/01 +0200, Mathijs Brands wrote:\n>\n>Sorry about the false alarm. I was convinced restoring blobs\n>didn't work correctly.\n>\n\nThe tar problem is now fixed in CVS.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 23:13:45 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump, formats & blobs"
}
]
[
{
"msg_contents": "Ok, the 7.1-1 RPMset for Red Hat 6.2 has been uploaded. The Source RPM is up,\nin RPM 3 format. Red Hat 7.0 RPMs are on their way. (whew). 28.8K is slow\nwhen uploading 8MB worth of binary RPMset -- as it was when downloading the\nsource RPM from my 6.2 build machine, located at work and hung off a T1, to my\n7.0 home machine.\n\nIf you have any queasiness about installing binary RPM's -- then rebuild from\nthe source RPM. You will need to review the new REBUILDING FROM SOURCE RPM\nsection in README.rpm-dist, packaged in the src.rpm (as well as in the main\nbinary RPM), then review the instructions and tags in the spec file. You are\nable to build the package with pieces not built -- things such as the tcl, tk,\nperl, odbcm python, and jdbc clients, as well as the test package, are\noptional. The main, libs, server, docs, ans contrib subpackages are not\noptional at this time.\n\nWhen 7.1 is officially announced, I'll put together an 'official' RPM\nannouncement (unless you want to mention that as part of the main announcement,\nMarc) that I'll post on the various places as well.\n\nRegression passes with this RPMset on both RH 6.2 and RH 7.0 with no special\noptions or environment variable (or locale sysconfig file) funniness.\n\nIf you find obvious errors, please let me know so I can prep a '-2' set quickly\n-- however, it will need to be brought to my attention by tomorrow night, or on\nMonday, as I do not plan on doing any RPM work on Easter. 
Nor do I really plan\non doing any RPM work tomorrow night -- but I will if need be.\n\nNo hardcopy docs are included, and the contrib package is One Big Package at\nthe moment.\n\nPlease try various combinations of installations, as the dependencies in this\nset are new.\n\nNOTE TO LINUX DISTRIBUTION PEOPLE ON THIS LIST:\nI am in process of changing the focus of the PostgreSQL.org RPMset from a\ndo-all-end-all-be-all RPM to a 'template' RPMset, that will work for most\neveryone but is meant as a foundation from which to build your\ndistribution-specific RPMset for PostgreSQL. I would request that your\ndistribution-specifc RPMset be flagged as such if it deviates from what is in\nthis RPMset -- change the release number to associate it with your distro, such\nas the 'mdk' added to all Mandrake RPMs flag those RPM's as belonging to\nMandrake. The source RPM should build as-is on any distribution that is\nreasonably close to LSB compliant and uses an RPM of at least version 3.0.4 --\n3.0.5 or greater is very much preferred. I will try to refrain from using RPM\n4 specific features until my list of supported distributions no longer includes\nRed Hat 6.2, which has RPM 3.0.5 as its errata update release -- but you will\nNEED RPM 3.0.5 to rebuild, more than likely.\n\nI would also like to receive any patches to any files in the RPM that you\nmodify in any way -- if those changes would be useful to the generic RPMset.\n\nPlease read the README.rpm-dist file packaged in both the source RPM and the\nmain package. For your convenience, that file is attached to this message.\n\nA minimal PostgreSQL server installation will need the main package, the libs\nsubpackage, and the server subpackage to function. Client only installations\nmay pick and choose -- eg, if you use the Python client exclusively, then you\nonly need the libs and python subpackages. 
\n\nBIG NOTE:\nBefore upgrading your previous PostgreSQL installation, be sure you understand\nhow the upgrade process works. Make absolutely SURE that you have the previous\nversion's RPMset to reinstall if you need to do so, and take a full ASCII\npg_dumpall (or your preferred backup method, such as iterative pg_dumps as\ndiscussed on this list). The semi-automatic process, when it works, works\nwell. It has been known to not work, as it is attempting to do a very hard\nthing. BE SURE YOU HAVE A KNOWN GOOD BACKUP. PLEASE -- for the sake of YOUR\ndata. If you need large objects migrated, you need to compile the contributed\npg_dumplo utility yourself for 7.0.x and dump those large objects FIRST.\n\nThere are instances of data dumped with 7.0 not restoring properly with 7.1 (as\ndocumented on this list) -- have a copy of you BINARY tree stored offline so\nthat you can go back to your previous version if the need arises.\n\nOtherwise, enjoy the RPMset :-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11",
"msg_date": "Fri, 13 Apr 2001 22:56:18 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "7.1-1 RPMset uploading..."
}
]
[
{
"msg_contents": "We've gotten several different reports lately of peculiar compilation\nerrors and warnings involving readline in 7.1. They look like configure\nis actively doing the wrong thing --- for example, how could we see\nreports like this:\n\ntab-complete.c:734: `filename_completion_function' undeclared (first use in this function)\ntab-complete.c:734: (Each undeclared identifier is reported only once\ntab-complete.c:734: for each function it appears in.)\n\nwhen the code makes a point of providing a declaration for\nfilename_completion_function if configure didn't see it in the headers?\n\nAfter eyeballing the code I think I have a theory. psql/input.h will\npreferentially include <readline/readline.h>, not <readline.h>, if both\nHAVE_READLINE_READLINE_H and HAVE_READLINE_H are defined. But the tests\nin configure make the opposite choice! Maybe the people who are having\ntrouble have two different versions of readline header files visible at\nthose two names, leading to configure's results being wrong for the\nheader file that input.h actually selects?\n\nIn normal situations this still wouldn't matter because configure only\ndefines one of the two symbols HAVE_READLINE_READLINE_H and HAVE_READLINE_H.\nBUT: suppose someone runs configure, then installs a newer libreadline\nand runs configure again? I think caching of configure results could\nlead to both symbols becoming defined, if both headers are out there.\n\nIt's a bit of a reach, but I'm having a hard time seeing how configure\ncould produce the wrong results otherwise. Thoughts?\n\nAndrea and Kevin, what do your src/include/config.h files have for\nthese symbols?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 23:07:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Possible explanation for readline configuration problems"
},
{
"msg_contents": "> We've gotten several different reports lately of peculiar compilation\n> errors and warnings involving readline in 7.1. They look like configure\n> is actively doing the wrong thing --- for example, how could we see\n> reports like this:\n> \n> tab-complete.c:734: `filename_completion_function' undeclared (first use in this function)\n> tab-complete.c:734: (Each undeclared identifier is reported only once\n> tab-complete.c:734: for each function it appears in.)\n> \n> when the code makes a point of providing a declaration for\n> filename_completion_function if configure didn't see it in the headers?\n> \n> After eyeballing the code I think I have a theory. psql/input.h will\n> preferentially include <readline/readline.h>, not <readline.h>, if both\n> HAVE_READLINE_READLINE_H and HAVE_READLINE_H are defined. But the tests\n> in configure make the opposite choice! Maybe the people who are having\n> trouble have two different versions of readline header files visible at\n> those two names, leading to configure's results being wrong for the\n> header file that input.h actually selects?\n\nThis sounds like an excellent guess. Hard to imagine how readline has\ngotten such a bizarre list of configurations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Apr 2001 23:43:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible explanation for readline configuration problems"
},
{
"msg_contents": "I wrote:\n> ... how could we see reports like this:\n\n> tab-complete.c:734: `filename_completion_function' undeclared (first use in this function)\n\n> when the code makes a point of providing a declaration for\n> filename_completion_function if configure didn't see it in the headers?\n\nNever mind ...\n\nI pulled down readline 4.2, and the answer is depressingly clear: the\nReadline boys have decided to rename filename_completion_function to\nrl_filename_completion_function. This graphically illustrates the\nfundamental bogosity of AC_EGREP_HEADER: it still finds a match,\nblithely ignoring the fact that it matched only part of an identifier.\n\nMost of the other compiler warnings that we've been hearing about arise\nbecause the readline headers have been const-ified. Suppressing these\nwarnings across both old and new readlines will be a pain in the neck :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 23:52:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible explanation for readline configuration problems "
},
{
"msg_contents": "Tom Lane writes:\n\n> We've gotten several different reports lately of peculiar compilation\n> errors and warnings involving readline in 7.1. They look like configure\n> is actively doing the wrong thing --- for example, how could we see\n> reports like this:\n>\n> tab-complete.c:734: `filename_completion_function' undeclared (first use in this function)\n> tab-complete.c:734: (Each undeclared identifier is reported only once\n> tab-complete.c:734: for each function it appears in.)\n>\n> when the code makes a point of providing a declaration for\n> filename_completion_function if configure didn't see it in the headers?\n\nBecause\n\n(a) the readline dudes changed the type of filename_completion_function in\nversion 4.2 from\n\n extern char *filename_completion_function __P((char *, int));\n\nto\n\n extern char *filename_completion_function __P((const char *, int));\n\nand\n\n(b) the readline dudes commented out (#if 0) the declaration of\nfilename_completion_function in the header in favour of the new\nrl_filename_completion_function (but left the symbol in the library).\n\nOther symbols were treated similarly, leading to implicit declarations\nwith ints.\n\nNeedless to say, the readline developers deserve to be shot for doing this\nin a minor release.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 14 Apr 2001 11:01:58 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible explanation for readline configuration problems"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> (b) the readline dudes commented out (#if 0) the declaration of\n> filename_completion_function in the header in favour of the new\n> rl_filename_completion_function (but left the symbol in the library).\n\nThe \"#if 0\" silliness is not what's confusing configure however,\nbecause what it's grepping is preprocessor output; it will not see the\ncommented-out declaration of filename_completion_function. What it\n*does* see is the declaration of rl_filename_completion_function, and\nbecause of the lack of defense against partial-word matches, mistakes\nthat for filename_completion_function. Our code would actually work if\nAC_EGREP_HEADER weren't so nonrobust about what it will take as a match.\n\n> Needless to say, the readline developers deserve to be shot for doing this\n> in a minor release.\n\nI don't suppose our life would be any simpler if they'd called it 5.0\ninstead of 4.2, though. Visions of mayhem aside, we ought to try to\npersuade our code to work cleanly with 4.2 as well as earlier releases.\nDo you have time to work on that for 7.1.1 (ie, end of the month or so)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 13:39:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Possible explanation for readline configuration problems "
}
] |
[
{
"msg_contents": "Using the head branch, when I execute the following in psql on Redhat 6.2\ni386, the postmaster process dies gives an error message about corrupted\nshared memory:\n\n--begin---\ncreate function bug1(integer) returns numeric as '\nbegin\n return $1;\nend;'\nlanguage 'plpgsql';\n\n\nselect bug1(5);\n\n---end---\n\nAny ideas?\n\n - Mark Butler\n",
"msg_date": "Fri, 13 Apr 2001 21:17:57 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Postmaster fatal defect - pl/pgsql return type conversion"
},
{
"msg_contents": "Mark Butler wrote:\n> \n> Using the head branch, when I execute the following in psql on Redhat 6.2\n> i386, the postmaster process dies gives an error message about corrupted\n> shared memory:\n\nI just updated to REL7_1 and recompiled and the problem has gone away.\n\n- Mark Butler\n",
"msg_date": "Fri, 13 Apr 2001 21:37:17 -0600",
"msg_from": "Mark Butler <butlerm@middle.net>",
"msg_from_op": true,
"msg_subject": "Re: Postmaster fatal defect - False alarm"
},
{
"msg_contents": "Mark Butler <butlerm@middle.net> writes:\n> Using the head branch, when I execute the following in psql on Redhat 6.2\n> i386, the postmaster process dies gives an error message about corrupted\n> shared memory:\n\nWorks for me ... when was your last update?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2001 23:59:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster fatal defect - pl/pgsql return type conversion "
}
] |
[
{
"msg_contents": "\nJust as an FYI for those considering the shift ... I just upgraded all of\nmy databases over to v7.1 from v7.0.3 and it was smooth as silk. The only\nproblems were having to compile and load a few modules from contrib that\nsome of my clients were using ...\n\nTook about an hour and a half to do >100 databases ..\n\nVery impressive ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 14 Apr 2001 04:29:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Upgrade complete ... all went smooth ..."
}
] |
[
{
"msg_contents": "At 17:34 13/04/01 -0400, Tom Lane wrote:\n>\n>A possible kluge answer is to make pg_dump's OID-ordering of views\n>depend on the OID of the view rule rather than the view relation.\n>I am not sure if that would break any cases that work now, however.\n>\n\nFixed in CVS.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 14 Apr 2001 23:12:51 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump ordering problem (rc4) "
}
] |
[
{
"msg_contents": "\n77 databases in data/base directory ... all number'd ...\n\nselect * from pg_database;\n\ndoesn't give me the reference to which directory is which database ... so\nwhat table do we need to join on to get this information?\n\nthanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 14 Apr 2001 15:04:32 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Name -> number ..."
},
{
"msg_contents": "> \n> 77 databases in data/base directory ... all number'd ...\n> \n> select * from pg_database;\n> \n> doesn't give me the reference to which directory is which database ... so\n> what table do we need to join on to get this information?\n> \n> thanks ...\n\nInfo is in pg_class.relfilenode. Now the big question is where do\ndatabase names go. My guess is pg_database.oid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Apr 2001 14:31:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name -> number ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> 77 databases in data/base directory ... all number'd ...\n\n> select * from pg_database;\n\n> doesn't give me the reference to which directory is which database ... so\n> what table do we need to join on to get this information?\n\nselect oid, datname from pg_database;\n\nI think Bruce did a contrib utility to keep track of this, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Apr 2001 14:40:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name -> number ... "
},
{
"msg_contents": "\nd'oh, should have extended my query ...\n\nselect oid,* from pg_database;\n\ngives the directory name ...\n\nthanks :)\n\n\nOn Sat, 14 Apr 2001, Bruce Momjian wrote:\n\n> >\n> > 77 databases in data/base directory ... all number'd ...\n> >\n> > select * from pg_database;\n> >\n> > doesn't give me the reference to which directory is which database ... so\n> > what table do we need to join on to get this information?\n> >\n> > thanks ...\n>\n> Info is in pg_class.relfilenode. Now the big question is where do\n> database names go. My guess is pg_database.oid.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 14 Apr 2001 15:47:36 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Name -> number ..."
},
{
"msg_contents": "> \n> d'oh, should have extended my query ...\n> \n> select oid,* from pg_database;\n> \n> gives the directory name ...\n> \n\nInteresting to note that relfilenode is for tables, but oid is for\ndatabases. I hadn't realized that distinction until you asked. You\ncan't rename databases, so the oid is OK for this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 14 Apr 2001 14:52:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Name -> number ..."
}
] |
[
{
"msg_contents": "\nWhere are the 7.1 rpms downloadable from? (even the RC based ones....)?\n\n Chris Bowlby,\n -----------------------------------------------------\n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Sat, 14 Apr 2001 14:33:47 -0400 (EDT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "7.1 RPMS..."
},
{
"msg_contents": "> Where are the 7.1 rpms downloadable from? (even the RC based ones....)?\n\nftp://ftp.postgresql.org/pub/dev/...\n\nor\n\nftp://ftp.postgresql.org/pub/binary/...\n",
"msg_date": "Sun, 15 Apr 2001 05:50:56 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 RPMS..."
}
] |
[
{
"msg_contents": "first, congratulations to all for the 7.1 release, good work.\n\nsecond, I've found (some years ago to be honest) that pgsql can ALTER\nTABLE while the table is widely available for all users...\n\nthis strike me as pretty bad/dangerous/confusing/messy (even from the\nPOV of implementation)\n\nshould I supposed to do something like: \nALTER DATABASE CLOSE\nor\nALTER DATABASE SHUTDOWN [options] (simil oracle)\n\nbefore doing changes to the definition of tables ?\n\nsomeone can say this is a feature, don't think so, even then I \nthink that ALTER TABLE DROP COLUMN could be implemented a *lot*\nmore easily with this method.\n\n(and requiring this operation to enter DDL sound very logical)\n\nregards,\nsergio\n\n_________________________________________________________\nDid you try it?\nFree email for life at http://correo.yahoo.com.ar\n",
"msg_date": "Sat, 14 Apr 2001 17:21:30 -0300 (ART)",
"msg_from": "\"Sergio A. Kessler\" <sergio_kessler@yahoo.com.ar>",
"msg_from_op": true,
"msg_subject": "alter database without shutting down !?"
}
] |
[
{
"msg_contents": "http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n\nMarc will be pleased to note that the PostgreSQL project came out of the\nFreeBSD project, and is Great Bridge's database. Gotta love\njournalistic license.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 14 Apr 2001 22:59:25 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Hey guys, check this out."
},
{
"msg_contents": "On Sat, 14 Apr 2001, Lamar Owen wrote:\n\n> http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n> \n> Marc will be pleased to note that the PostgreSQL project came out of the\n> FreeBSD project, and is Great Bridge's database. Gotta love\n> journalistic license.\n\nOh, and 7.1's due to ship at the end of June... *cough* Somebody should\nforward Marc's announcement from last night to them.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Sat, 14 Apr 2001 22:17:23 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Hey guys, check this out."
},
{
"msg_contents": "\nthere is little, to nothing, factual about that whole article ...\n\n\"Great Bridge essentially gives away its open-source database application\nat little cost...\"\n\n\t- thannk god I can get it completely for free, eh?\n\n\"Great Bridge executives say their licensing costs for the software...\"\n\n\t- someone want to tell me when they started charging licensing\n\t costs?\n\n\"...nearly 25 years ago\"\n\n\t- I thought it was around '85? That's only 15 years ...\n\n\"Version 7.1, due to ship at the end of June...\"\n\n\t- should I pull what we've released? *raised eyebrow*\n\n\"...is the marketing and channel arm for PostgreSQL\"\n\n\t- they are?\n\nI wonder, is Great Bridge going to step up and correct the *several*\nmis-conceptions that have been touted in their name? There were alot of\nquotes from GB in there, so I wonder ... ?\n\nOn Sat, 14 Apr 2001, Lamar Owen wrote:\n\n> http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n>\n> Marc will be pleased to note that the PostgreSQL project came out of the\n> FreeBSD project, and is Great Bridge's database. Gotta love\n> journalistic license.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 15 Apr 2001 00:19:14 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Hey guys, check this out."
},
{
"msg_contents": "\nI rarely sign a note as the webmaster, I like to keep a low profile.\nI just sent the witch a note and made it a point to let her know who\nI am - and BCCd Marc. He can cross post it if he wishes. I also sent\nit to the editor of that rag.\n\nVince.\n\n\n\nOn Sun, 15 Apr 2001, The Hermit Hacker wrote:\n\n>\n> there is little, to nothing, factual about that whole article ...\n>\n> \"Great Bridge essentially gives away its open-source database application\n> at little cost...\"\n>\n> \t- thannk god I can get it completely for free, eh?\n>\n> \"Great Bridge executives say their licensing costs for the software...\"\n>\n> \t- someone want to tell me when they started charging licensing\n> \t costs?\n>\n> \"...nearly 25 years ago\"\n>\n> \t- I thought it was around '85? That's only 15 years ...\n>\n> \"Version 7.1, due to ship at the end of June...\"\n>\n> \t- should I pull what we've released? *raised eyebrow*\n>\n> \"...is the marketing and channel arm for PostgreSQL\"\n>\n> \t- they are?\n>\n> I wonder, is Great Bridge going to step up and correct the *several*\n> mis-conceptions that have been touted in their name? There were alot of\n> quotes from GB in there, so I wonder ... ?\n>\n> On Sat, 14 Apr 2001, Lamar Owen wrote:\n>\n> > http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n> >\n> > Marc will be pleased to note that the PostgreSQL project came out of the\n> > FreeBSD project, and is Great Bridge's database. Gotta love\n> > journalistic license.\n> > --\n> > Lamar Owen\n> > WGCR Internet Radio\n> > 1 Peter 4:11\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 15 Apr 2001 00:34:18 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Hey guys, check this out."
},
{
"msg_contents": "\nokay, I didn't get a copy of it ... :(\n\nOn Sun, 15 Apr 2001, Vince Vielhaber wrote:\n\n>\n> I rarely sign a note as the webmaster, I like to keep a low profile.\n> I just sent the witch a note and made it a point to let her know who\n> I am - and BCCd Marc. He can cross post it if he wishes. I also sent\n> it to the editor of that rag.\n>\n> Vince.\n>\n>\n>\n> On Sun, 15 Apr 2001, The Hermit Hacker wrote:\n>\n> >\n> > there is little, to nothing, factual about that whole article ...\n> >\n> > \"Great Bridge essentially gives away its open-source database application\n> > at little cost...\"\n> >\n> > \t- thannk god I can get it completely for free, eh?\n> >\n> > \"Great Bridge executives say their licensing costs for the software...\"\n> >\n> > \t- someone want to tell me when they started charging licensing\n> > \t costs?\n> >\n> > \"...nearly 25 years ago\"\n> >\n> > \t- I thought it was around '85? That's only 15 years ...\n> >\n> > \"Version 7.1, due to ship at the end of June...\"\n> >\n> > \t- should I pull what we've released? *raised eyebrow*\n> >\n> > \"...is the marketing and channel arm for PostgreSQL\"\n> >\n> > \t- they are?\n> >\n> > I wonder, is Great Bridge going to step up and correct the *several*\n> > mis-conceptions that have been touted in their name? There were alot of\n> > quotes from GB in there, so I wonder ... ?\n> >\n> > On Sat, 14 Apr 2001, Lamar Owen wrote:\n> >\n> > > http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n> > >\n> > > Marc will be pleased to note that the PostgreSQL project came out of the\n> > > FreeBSD project, and is Great Bridge's database. 
Gotta love\n> > > journalistic license.\n> > > --\n> > > Lamar Owen\n> > > WGCR Internet Radio\n> > > 1 Peter 4:11\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sun, 15 Apr 2001 01:36:39 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Hey guys, check this out."
},
{
"msg_contents": "At 10:59 PM 14-04-2001 -0400, Lamar Owen wrote:\n>http://www.crn.com/Sections/Fast_Forward/fast_forward.asp?ArticleID=25670\n>\n>Marc will be pleased to note that the PostgreSQL project came out of the\n>FreeBSD project, and is Great Bridge's database. Gotta love\n>journalistic license.\n\nReporter must have missed the April 1st deadline.\n\n;)\n\nStill I must point out that the article isn't negative. And if it's read by\nPHB's it doesn't make a difference anyway <grin>- the general gist is: it's\ndecent, cheap, and virtually as good as proprietary databases.\n\nSo what if there are factual errors. This is mass-media we're talking\nabout. Just point it out if it's really negative AND it's strategically\nappropriate to do so. \n\nBy saying there's \"nothing factual\" does that apply to the following as well?\n\n\"Great Bridge is on the forefront of the open-source movement in providing\ntools that are enterprisewide and capable, and what's compelling is they\nprovide the 24x7 support\"\n\nI think some of you guys are overreacting. It's almost like those FreeBSD\nadvocates slamming Tucows or something. \n\nMaybe you guys should get some Great Bridge marketing/PR person to handle\nstuff like this.\n\nCheerio,\nLink.\n\n",
"msg_date": "Mon, 16 Apr 2001 09:31:27 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Hey guys, check this out."
},
{
"msg_contents": "On Mon, 16 Apr 2001, Lincoln Yeoh wrote:\n\n> Maybe you guys should get some Great Bridge marketing/PR person to handle\n> stuff like this.\n\nAfter reading Ned's comments I figured that's how it got that way in\nthe first place. But that's just speculation.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 15 Apr 2001 22:05:46 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hey guys, check this out."
},
{
"msg_contents": "On Sun, Apr 15, 2001 at 10:05:46PM -0400, Vince Vielhaber wrote:\n> On Mon, 16 Apr 2001, Lincoln Yeoh wrote:\n> \n> > Maybe you guys should get some Great Bridge marketing/PR person to handle\n> > stuff like this.\n> \n> After reading Ned's comments I figured that's how it got that way in\n> the first place. But that's just speculation.\n\nYou probably figured wrong. \n\nAll those publications have editors who generally feel they're not \ndoing their job if they don't introduce errors, usually without even \ntalking to the reporter. That's probably how the \"FreeBSD\" reference \ngot in there: somebody saw \"Berkeley\" and decided \"FreeBSD\" would look \nmore \"techie\". It's stupid, but nothing to excoriate the reporter about.\n\nSam Williams's articles read completely differently according to \nwho publishes them. Typically the Linux magazines print what he \nwrites, and thereby get it mostly right, but the finance magazines \nmangle them to total nonsense.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Sun, 15 Apr 2001 20:38:05 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: Hey guys, check this out."
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Mon, 16 Apr 2001, Lincoln Yeoh wrote:\n>> Maybe you guys should get some Great Bridge marketing/PR person to handle\n>> stuff like this.\n\n> After reading Ned's comments I figured that's how it got that way in\n> the first place. But that's just speculation.\n\nIt's pretty obvious that CRN's article was written after talking to a\nGreat Bridge marketing or PR person (and no one else :-(). GB *is*\nengaged in trying to attract positive press attention to PostgreSQL\n(and itself of course); I hope no one is bothered by that activity per\nse. Now without having a transcript of that conversation, it's\ndifficult to know whether the errors in the article should be blamed\nmore on the CRN reporter or more on the GB person. I'd tend to blame\nthe reporter, but you might well wish to discount that as the biased\nview of a GB employee. I will say that all the GB people I talk to\nunderstand the difference between GB and the Postgres open-source\ncommunity --- but this reporter evidently does not.\n\nBy and large, I agree with Nathan Myers' comments. We can either see\nthis as a chance to educate that reporter about what open source is all\nabout, or we can flame her and turn her off open source completely.\nLet's try to take the high road.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Apr 2001 23:50:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hey guys, check this out. "
},
{
"msg_contents": "At 08:38 PM 15-04-2001 -0700, you wrote:\n>On Sun, Apr 15, 2001 at 10:05:46PM -0400, Vince Vielhaber wrote:\n>> On Mon, 16 Apr 2001, Lincoln Yeoh wrote:\n>> \n>> > Maybe you guys should get some Great Bridge marketing/PR person to handle\n>> > stuff like this.\n>> \n>> After reading Ned's comments I figured that's how it got that way in\n>> the first place. But that's just speculation.\n>\n>You probably figured wrong. \n>\n>All those publications have editors who generally feel they're not \n>doing their job if they don't introduce errors, usually without even \n>talking to the reporter. That's probably how the \"FreeBSD\" reference \n>got in there: somebody saw \"Berkeley\" and decided \"FreeBSD\" would look \n>more \"techie\". It's stupid, but nothng to excoriate the reporter about.\n\nSometime back we were announcing a product and practically wrote everything\nfor the journalists and gave it to them so that they could just print it,\nand one newspaper still got LOTs of things wrong. In contrast another\nnewspaper was much better tho - facts right.\n\nThe standards haven't changed much, so I don't really bother reading the\nfirst newspaper for a lot of things. Whereas the 2nd one still seems to do\nok for tech stuff. \n\nThey're very rarely 100% correct. But they're primarily journalists, if\nthey were even 99.99% correct about things they'd probably be releasing\nPostgresql 8 instead of you guys ;).\n\nI believe you should choose your battles. Sometimes it's just not worth\nfighting, not even worth commenting. Other times it's almost compulsory\neven though there's no obvious/direct _personal_ gain in it.\n\nAlso look at the various stories and commentary floating about in the media\nabout the recent US-China plane incident. And what really happened? I\nfigure at least one of the planes should have a video recording of the\nincident. But we have everyone guessing what happened instead. Doh.\n\nCheerio,\nLink.\n\n",
"msg_date": "Mon, 16 Apr 2001 15:21:21 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Re: Hey guys, check this out."
}
] |
[
{
"msg_contents": "\nI've been messing around with a 7.0 compatible dump for 7.1, and it looks\ngood so far (at least I can dump & restore my local DBs and, to some\nextent, the regression DB, including BLOBs).\n\nIt is a mixture of 7.1 and 7.0 features, in the sense that it is basically\nthe 7.1 pg_dump but with 7.0 SQL/methods where necessary. As a result it\ndumps in OID order etc but has the 7.0 type formatting for 7.0 databases.\nIt also uses pg_views to get the view definition, and, at least on 7.0.2,\nthe view was mildly broken so the regression DB does not dump properly.\n\nI'd be interested in feedback etc, if people don't mind. The patch (against\nCVS) can be found at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/pg_dump_1504_patch.gz\n\nAt the moment it has no idea what do do with aggregates (is there\nanything?), and it assumes 'proisstrict' on functions.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 15 Apr 2001 13:35:42 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "pg_dump compatibility with 7.0"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At the moment it has no idea what do do with aggregates (is there\n> anything?), and it assumes 'proisstrict' on functions.\n\nI'll take the blame for the aggregate issue ;-).\n\nSFUNC1/STYPE1/INITCOND1 in 7.0 equate to SFUNC/STYPE/INITCOND in 7.1.\nThere is no 7.1 equivalent to 7.0's SFUNC2/STYPE2/INITCOND2 --- those\nhave to be saved/restored separately if you are dumping a 7.0 database\nwith intentions of restoring it to 7.0. On the other hand, if you are\ndumping 7.0 with an eye to restoring to 7.1, you could just as well\nraise an error if those fields are nonnull.\n\nAssuming 'proisstrict' for a 7.0 function seems reasonable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Apr 2001 01:08:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump compatibility with 7.0 "
},
{
"msg_contents": "At 01:08 15/04/01 -0400, Tom Lane wrote:\n>\n>SFUNC1/STYPE1/INITCOND1 in 7.0 equate to SFUNC/STYPE/INITCOND in 7.1.\n>There is no 7.1 equivalent to 7.0's SFUNC2/STYPE2/INITCOND2\n\nIt now outputs a warning to stderr as well as the dump file.\n\n\n> --- those\n>have to be saved/restored separately if you are dumping a 7.0 database\n>with intentions of restoring it to 7.0.\n\nI'm not sure I want to support 7.0->7.0 (or 7.1->7.0) in the dump for 7.1.\nThis could get really messy in a few versions time. It's mainly to allow\nsmoother upward conversions.\n\n\n>On the other hand, if you are\n>dumping 7.0 with an eye to restoring to 7.1, you could just as well\n>raise an error if those fields are nonnull.\n\nDone. It does not crash the dump, but it does write a 'warning entry' to\nstderr & the dump file so that subsequent a pg_restore will display a\nwarning as well.\n\nAmended file available at:\n\n ftp://ftp.rhyme.com.au/pub/postgresql/pg_dump/pg_dump_1505_patch.gz\n\nAt the moment the version compatibility changes are relatively trivial, so\nI'm not sure there is any value in creating multiple modules (I had thought\none per version would be TWTG). Should I apply this to CVS when it's been a\nlittle more tested?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 16 Apr 2001 02:09:48 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump compatibility with 7.0 "
}
] |