[
{
"msg_contents": "Orlandi wrote:\n> \n> I am user of Microfocus Cobol Netexpress 3.1 and I use\n> IDXFORMAT \" 4 \" for my files.\n> I know that I can use the format Btrieve ANSI and to install Btrieve\n> in my Linux server with Format : IDXFORMAT \" 6 \" , and in the\n> case of Btrieve I can just recompile the programs, without\n> needing to alter the source code , I also know about that.\n> The one that I need to know is:\n> What should I do to use Postgre as manager of files in the\n> Linux server ?\n\nBom dia Jairo. Unfortunately I have no experience with Microfocus Cobol\nor Btrieve. I'm copying this to the -hackers mailing list and perhaps\nsomeone will have some suggestions in this area.\n\nIn general, Postgres is an RDBMS which is highly compatible with other\nRDBMS products. So if you have an RDBMS solution already, then the port\nshould be pretty easy. If you don't already have an RDBMS in your\nsystem, then you will want to look into the general capabilities and\ninterface options to make sure that you have a clear path for your\napplication migration.\n\nGood luck!\n\n - Thomas\n",
"msg_date": "Fri, 20 Oct 2000 15:53:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: RE. COBOL FILES"
}
]
[
{
"msg_contents": "\n> Kevin O'Gorman wrote:\n> > \n> > I've been unable to follow the directions\n> > in the Programmer's Guide\n> > for getting to the anonymous CVS server.\n> > \n> > I'm running RedHat 6.1, and CVS 1.10 which\n\nWhat is CVS?\n\nBob Kernell\nResearch Scientist\nSurface Validation Group\nAS&M, Inc.\nemail: kernell@sundog.larc.nasa.gov\ntel: 757-827-4631\n\n",
"msg_date": "Fri, 20 Oct 2000 13:14:35 -0400 (EDT)",
"msg_from": "Robert Kernell <kernell@sundog.larc.nasa.gov>",
"msg_from_op": true,
"msg_subject": "what is CVS?"
},
{
"msg_contents": "Robert Kernell <kernell@sundog.larc.nasa.gov> writes:\n\n> > Kevin O'Gorman wrote:\n> > > \n> > > I've been unable to follow the directions\n> > > in the Programmer's Guide\n> > > for getting to the anonymous CVS server.\n> > > \n> > > I'm running RedHat 6.1, and CVS 1.10 which\n> \n> What is CVS?\n\nAn open-source network-transparent version control system. \n\nhttp://www.cvshome.org/\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "20 Oct 2000 14:05:11 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: what is CVS?"
}
]
[
{
"msg_contents": "I compute the code count with:\n\n\tfind . -name \\*.[chyl] | xargs cat| wc -l\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Oct 2000 13:30:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "On Fri, Oct 20, 2000 at 01:30:25PM -0400, Bruce Momjian wrote:\n> I compute the code count with:\n> \n> \tfind . -name \\*.[chyl] | xargs cat| wc -l\n\nRight, that solves the problem others might be seeing, with the command\nline getting expanded and silently chopped off. For example, no one\nseems to have commented on the -8% of inline comments reported by\nPeter's c_count! Funny math, indeed.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Fri, 20 Oct 2000 12:56:32 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
},
{
"msg_contents": "Ross J. Reedstrom writes:\n\n> For example, no one seems to have commented on the -8% of inline\n> comments reported by Peter's c_count! Funny math, indeed.\n\nIf you had actually done the math ;-) you would have noticed that the\npercentage of the inline comments is negative because those lines have\nboth comments and code, therefore the total has to exclude these lines\nonce when adding comments and code.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 20 Oct 2000 20:13:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Now 376175 lines of code"
}
]
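Bruce's one-liner from the thread above can be exercised on a scratch directory to see why piping everything through a single `cat` matters (a minimal sketch; the directory and file names are invented for illustration):

```shell
# Sketch of the thread's counting pipeline on a throwaway directory.
# Piping all matched files through `cat` before `wc -l` yields a single
# grand total even if xargs splits the file list across several cat
# invocations, whereas `xargs wc -l` would emit one partial total per
# invocation -- the silent-truncation effect Ross alludes to.
dir=$(mktemp -d)
printf 'a\nb\n'    > "$dir/one.c"     # 2 lines
printf 'x\ny\nz\n' > "$dir/two.h"     # 3 lines
printf 'skip\n'    > "$dir/notes.txt" # wrong suffix, not counted
total=$(find "$dir" -name '*.[chyl]' | xargs cat | wc -l | tr -d ' ')
echo "$total"   # 5
rm -r "$dir"
```

The `*.[chyl]` glob is the one from Bruce's message: C sources, headers, and yacc/lex inputs.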
[
{
"msg_contents": "I've just built 7.1 from a slightly old point in the tree:\nOctober 9. Regression tests pass, but postmaster won't\nstart.\n\nI'm getting a complaint from pg_ctl that it cannot find\npostmaster.opts.default.\n\nWhat am I missing?\n\n++ kevin\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: \nmailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Fri, 20 Oct 2000 16:15:11 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "New build fails: cannot find postmaster.opts.default"
},
{
"msg_contents": "On Fri, Oct 20, 2000 at 04:15:11PM -0700, Kevin O'Gorman wrote:\n> I've just built 7.1 from a slightly old point in the tree:\n> October 9. Regression tests pass, but postmaster won't\n> start.\n> \n> I'm getting a complaint from pg_ctl that it cannot find\n> postmaster.opts.default.\n> \n> What am I missing?\n> \n\ntouch postmaster.opts.default in the directory pg_ctl complains about.\n\nAn empty file works fine.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Fri, 20 Oct 2000 18:41:25 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: New build fails: cannot find postmaster.opts.default"
},
{
"msg_contents": "Kevin O'Gorman writes:\n\n> I'm getting a complaint from pg_ctl that it cannot find\n> postmaster.opts.default.\n> \n> What am I missing?\n\nThe postmaster.opts.default file. :-)\n\nSeriously, I'm thinking this check is not necessary, a missing file\nshould be treated like an empty file. Objections?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 02:04:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: New build fails: cannot find postmaster.opts.default"
}
]
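Ross's workaround from this thread can be sketched end to end; the data-directory path here is a stand-in, not the real `$PGDATA`:

```shell
# Reproduce the workaround: pg_ctl only needs postmaster.opts.default to
# exist, and (as Peter notes) an empty file behaves the same as "no extra
# default options".
datadir=$(mktemp -d)                       # hypothetical stand-in for $PGDATA
touch "$datadir/postmaster.opts.default"   # empty file satisfies the check
opts=$(cat "$datadir/postmaster.opts.default")
echo "opts='$opts'"   # opts=''
```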
[
{
"msg_contents": "I got early access to the UnixWare 7.1.1b Feature Supplement\nUnixWare/OpenServer Development Kit (UDK FS for short). \n\nPostgreSQL 7.0.2 compiles CLEAN (ish...) and we can now support the\nC++ stuff. \n\nThe 3.2 bug in int8.c is GONE. \n\nQuestion: Why do we (for UnixWare) force i486 optimization? \n\nAttached is the configure output, gmake output for analysis by\ny'all...\n\nThe configure invocation was:\n\nCXXFLAGS=-O ./configure --with-perl --with-CC=cc --with-CXX=CC --with-includes=/usr/local/include --with-libs=/usr/local/lib >ler.conf.out 2>&1 & \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Fri, 20 Oct 2000 21:44:04 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "UnixWare 7.1.1b FS"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Question: Why do we (for UnixWare) force i486 optimization? \n\nNo particularly good reason, I suppose. We could remove it and leave it\nup to the installer to choose the optimization level.\n\n> CXXFLAGS=-O ./configure --with-perl --with-CC=cc --with-CXX=CC --with-includes=/usr/local/include --with-libs=/usr/local/lib >ler.conf.out 2>&1 & \n\nDoes the default CXXFLAGS (-O2) not work?\n\nBtw., feel invited to download our development version\n(ftp://ftp.postgresql.org/pub/dev) and check that out as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 10:47:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001021 03:42]:\n> Larry Rosenman writes:\n> \n> > Question: Why do we (for UnixWare) force i486 optimization? \n> \n> No particularly good reason, I suppose. We could remove it and leave it\n> up to the installer to choose the optimization level.\n> \n> > CXXFLAGS=-O ./configure --with-perl --with-CC=cc --with-CXX=CC --with-includes=/usr/local/include --with-libs=/usr/local/lib >ler.conf.out 2>&1 & \n> \n> Does the default CXXFLAGS (-O2) not work?\nIt comes through as -g....\n\n> \n> Btw., feel invited to download our development version\n> (ftp://ftp.postgresql.org/pub/dev) and check that out as well.\nI will, in the next couple of days...\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 07:33:42 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001021 07:34]:\n> I will, in the next couple of days...\n\nWell, I pulled todays (2000/10/21) snapshot. \n\nI didn't get very far...\n\n1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\nthe -L options, so couldn't find libssl.a\n\n2) I forced CC=cc and CXX=CC in src/templates/unixware and removed the\n-K i486 option, and we still pass GCC options. This is *NOT* good...\n\nI'll attach my script file, and the make output. \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Sat, 21 Oct 2000 08:59:56 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\n> the -L options, so couldn't find libssl.a\n\nConfirmed. It's being put into a different variable. I'll see about\nfixing this.\n\n> 2) I forced CC=cc and CXX=CC in src/templates/unixware and removed the\n> -K i486 option, and we still pass GCC options. This is *NOT* good...\n\nDon't do that then. Setting the compiler in the template file is not\nallowed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 18:49:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001021 11:45]:\n> Larry Rosenman writes:\n> \n> > 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\n> > the -L options, so couldn't find libssl.a\n> \n> Confirmed. It's being put into a different variable. I'll see about\n> fixing this.\nCool. Thanks...\n> \n> > 2) I forced CC=cc and CXX=CC in src/templates/unixware and removed the\n> > -K i486 option, and we still pass GCC options. This is *NOT* good...\n> \n> Don't do that then. Setting the compiler in the template file is not\n> allowed.\nOK, then please allow --with-CC and --with-CXX. I tried it, and it\ndidn't honor the --with-CXX option. Please also document same in\n./configure --help. \n\nBruce: Please remove the CC and CXX lines from src/template/unixware. \n\nThe CFLAGS fix can/should stay. \n\nSee attached scriptfile. \n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Sat, 21 Oct 2000 12:04:22 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "I wrote:\n\n> Larry Rosenman writes:\n> \n> > 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\n> > the -L options, so couldn't find libssl.a\n> \n> Confirmed. It's being put into a different variable. I'll see about\n> fixing this.\n\nNo, this is fine. It's in the LIBS variable, where it belongs. Whatever\nyou're seeing is a different problem. Check config.log to find out.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 19:06:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "DOne.\n\n> * Peter Eisentraut <peter_e@gmx.net> [001021 11:45]:\n> > Larry Rosenman writes:\n> > \n> > > 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\n> > > the -L options, so couldn't find libssl.a\n> > \n> > Confirmed. It's being put into a different variable. I'll see about\n> > fixing this.\n> Cool. Thanks...\n> > \n> > > 2) I forced CC=cc and CXX=CC in src/templates/unixware and removed the\n> > > -K i486 option, and we still pass GCC options. This is *NOT* good...\n> > \n> > Don't do that then. Setting the compiler in the template file is not\n> > allowed.\n> OK, then please allow --with-CC and --with-CXX. I tried it, and it\n> didn't honor the --with-CXX option. Please also document same in\n> ./configure --help. \n> \n> Bruce: Please remove the CC and CXX lines from src/template/unixware. \n> \n> The CFLAGS fix can/should stay. \n> \n> See attached scriptfile. \n> > \n> > -- \n> > Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Oct 2000 13:07:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001021 12:02]:\n> I wrote:\n> \n> > Larry Rosenman writes:\n> > \n> > > 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to\n> > > the -L options, so couldn't find libssl.a\n> > \n> > Confirmed. It's being put into a different variable. I'll see about\n> > fixing this.\n> \n> No, this is fine. It's in the LIBS variable, where it belongs. Whatever\n> you're seeing is a different problem. Check config.log to find out.\nit's stupider. It forgot the -lssl after the -L/usr/local/ssl/lib. \n\nHere is config.log:\nThis file contains any messages produced by compilers while\nrunning configure, to aid debugging if configure makes a mistake.\n\nconfigure:677: checking host system type\nconfigure:703: checking which template to use\nconfigure:866: checking whether to build with locale support\nconfigure:895: checking whether to build with recode support\nconfigure:925: checking whether to build with multibyte character support\nconfigure:978: checking for default port number\nconfigure:1012: checking for default soft limit on number of connections\nconfigure:1062: checking for gcc\nconfigure:1175: checking whether the C compiler (cc ) works\nconfigure:1191: cc -o conftest conftest.c 1>&5\nconfigure:1217: checking whether the C compiler (cc ) is a cross-compiler\nconfigure:1222: checking whether we are using GNU C\nconfigure:1231: cc -E conftest.c\nconfigure:1250: checking whether cc accepts -g\nconfigure:1286: checking whether the C compiler (cc -O -K host,inline,loop_unroll,alloca -Dsvr4 ) works\nconfigure:1302: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\nconfigure:1328: checking whether the C compiler (cc -O -K host,inline,loop_unroll,alloca -Dsvr4 ) is a cross-compiler\nconfigure:1333: checking for Cygwin environment\nconfigure:1349: cc -c -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\nUX:acomp: ERROR: \"configure\", line 1345: undefined symbol: 
__CYGWIN32__\nUX:acomp: WARNING: \"configure\", line 1346: statement not reached\nconfigure: failed program was:\n#line 1338 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\n\n#ifndef __CYGWIN__\n#define __CYGWIN__ __CYGWIN32__\n#endif\nreturn __CYGWIN__;\n; return 0; }\nconfigure:1366: checking for mingw32 environment\nconfigure:1378: cc -c -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\nUX:acomp: ERROR: \"configure\", line 1374: undefined symbol: __MINGW32__\nUX:acomp: WARNING: \"configure\", line 1375: statement not reached\nconfigure: failed program was:\n#line 1371 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nreturn __MINGW32__;\n; return 0; }\nconfigure:1397: checking for executable suffix\nconfigure:1407: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\nconfigure:1428: checking how to run the C preprocessor\nconfigure:1449: cc -E conftest.c >/dev/null 2>conftest.out\nconfigure:1683: checking whether to build with Tcl\nconfigure:1707: checking whether to build with Tk\nconfigure:1769: checking whether to build Perl modules\nconfigure:1796: checking whether to build Python modules\nconfigure:2073: checking whether to build the ODBC driver\nconfigure:2155: checking whether to build C++ modules\nconfigure:2183: checking for c++\nconfigure:2215: checking whether the C++ compiler (c++ ) works\nconfigure:2231: c++ -o conftest conftest.C -L/usr/local/ssl/lib 1>&5\nconfigure:2257: checking whether the C++ compiler (c++ ) is a cross-compiler\nconfigure:2262: checking whether we are using GNU C++\nconfigure:2271: c++ -E conftest.C\nconfigure:2290: checking whether c++ accepts -g\nconfigure:2322: checking how to run the C++ preprocessor\nconfigure:2340: c++ -E conftest.C >/dev/null 2>conftest.out\nconfigure:2375: checking for string\nconfigure:2385: c++ -E conftest.C >/dev/null 2>conftest.out\nconfigure:2454: checking for namespace std in C++\nconfigure:2481: c++ -c -g -O2 conftest.C 1>&5\nconfigure:2536: 
checking for a BSD compatible install\nconfigure:2614: checking for gawk\nconfigure:2644: checking for flex\nconfigure:2712: checking whether ln -s works\nconfigure:2777: checking for non-GNU ld\nconfigure:2812: checking if the linker (/usr/bin/ld) is GNU ld\nconfigure:2831: checking for ranlib\nconfigure:2863: checking for lorder\nconfigure:2895: checking for tar\nconfigure:2932: checking for perl\nconfigure:2966: checking for bison\nconfigure:3042: checking for main in -lsfio\nconfigure:3057: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lsfio -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lsfio\nconfigure: failed program was:\n#line 3050 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3085: checking for main in -lncurses\nconfigure:3100: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lncurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lncurses\nconfigure: failed program was:\n#line 3093 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3126: checking for main in -lcurses\nconfigure:3141: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3171: checking for main in -ltermcap\nconfigure:3186: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3214: checking for readline in -lreadline\nconfigure:3233: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include 
-L/usr/local/lib -L/opt/lib conftest.c -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3262: checking for library containing using_history\nconfigure:3280: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3329: checking for main in -lbsd\nconfigure:3344: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lbsd -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lbsd\nconfigure: failed program was:\n#line 3337 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3373: checking for setproctitle in -lutil\nconfigure:3392: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lutil -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\nsetproctitle conftest.o\nUX:ld: ERROR: Symbol referencing errors. No output written to conftest\nconfigure: failed program was:\n#line 3381 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. 
*/\nchar setproctitle();\n\nint main() {\nsetproctitle()\n; return 0; }\nconfigure:3420: checking for main in -lm\nconfigure:3435: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3463: checking for main in -ldl\nconfigure:3478: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3506: checking for main in -lsocket\nconfigure:3521: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3549: checking for main in -lnsl\nconfigure:3564: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3592: checking for main in -lipc\nconfigure:3607: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lipc -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lipc\nconfigure: failed program was:\n#line 3600 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3635: checking for main in -lIPC\nconfigure:3650: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lIPC -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 
1>&5\nUX:ld: ERROR: library not found: -lIPC\nconfigure: failed program was:\n#line 3643 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3678: checking for main in -llc\nconfigure:3693: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -llc -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -llc\nconfigure: failed program was:\n#line 3686 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3721: checking for main in -ldld\nconfigure:3736: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ldld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -ldld\nconfigure: failed program was:\n#line 3729 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3764: checking for main in -lln\nconfigure:3779: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lln -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lln\nconfigure: failed program was:\n#line 3772 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3807: checking for main in -lld\nconfigure:3822: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3850: checking for main in -lcompat\nconfigure:3865: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 
-I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcompat -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lcompat\nconfigure: failed program was:\n#line 3858 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3893: checking for main in -lBSD\nconfigure:3908: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lBSD -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lBSD\nconfigure: failed program was:\n#line 3901 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:3936: checking for main in -lgen\nconfigure:3951: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:3979: checking for main in -lPW\nconfigure:3994: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lPW -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lPW\nconfigure: failed program was:\n#line 3987 \"configure\"\n#include \"confdefs.h\"\n\nint main() {\nmain()\n; return 0; }\nconfigure:4023: checking for library containing crypt\nconfigure:4041: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:4084: checking for inflate in 
-lz\nconfigure:4103: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nconfigure:4132: checking for library containing __inet_ntoa\nconfigure:4150: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\n__inet_ntoa conftest.o\nUX:ld: ERROR: Symbol referencing errors. No output written to conftest\nconfigure: failed program was:\n#line 4139 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. */\nchar __inet_ntoa();\n\nint main() {\n__inet_ntoa()\n; return 0; }\nconfigure:4172: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lbind -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lbind\nconfigure: failed program was:\n#line 4161 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. 
*/\nchar __inet_ntoa();\n\nint main() {\n__inet_ntoa()\n; return 0; }\nconfigure:4430: checking for CRYPTO_new_ex_data in -lcrypto\nconfigure:4449: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcrypto -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\nUX:ld: ERROR: library not found: -lcrypto\nconfigure: failed program was:\n#line 4438 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. */\nchar CRYPTO_new_ex_data();\n\nint main() {\nCRYPTO_new_ex_data()\n; return 0; }\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 12:07:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001021 12:04]:\n> * Peter Eisentraut <peter_e@gmx.net> [001021 11:45]:\n> > > 2) I forced CC=cc and CXX=CC in src/templates/unixware and removed the\n> > > -K i486 option, and we still pass GCC options. This is *NOT* good...\n> > \n> > Don't do that then. Setting the compiler in the template file is not\n> > allowed.\n> OK, then please allow --with-CC and --with-CXX. I tried it, and it\n> didn't honor the --with-CXX option. Please also document same in\n> ./configure --help. \n\nOOPPSS. My fault on the --with-CXX not being accepted. I had\n--with-CXX=CC at the front of the line, and --with-CXX (no=CC) near\nthe end of the line. \n\nBUT, we still need to doc --with-CXX taking a value, and --with-CC=xx\nneeds to be doc'd. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 12:33:03 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> OK, then please allow --with-CC and --with-CXX. I tried it, and it\n> didn't honor the --with-CXX option. Please also document same in\n> ./configure --help. \n\nThe C compiler is chosen with the CC environment variable. That is\ndocumented.\n\nThe C++ compiler can be chosen with the CXX environment variable, which is\nnot documented, because it's usually not a good idea (shared library\nproblems, really a bug). Instead the template file should set the CCC\nvariable to pick the preferred C++ compiler. See for example how hpux or\nsolaris do it.\n\nAnd the --with-CXX option is honored, but only if you don't override it in\nthe template file. :)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 19:34:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: UnixWare 7.1.1b FS"
},
{
"msg_contents": "OK, removing the second --with-CXX got us past configure, and gmake\nran a long while, but MAXBUFSIZE didn't get defined such that\nfe-connect.c died:\n\ngmake -C doc all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql-snap/doc'\ngmake[1]: Nothing to be done for `all'.\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql-snap/doc'\ngmake -C src all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql-snap/src'\ngmake -C backend all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend'\nprereqdir=`cd parser/ && pwd` && \\\n cd ../../src/include/parser/ && rm -f parse.h && \\\n ln -s $prereqdir/parse.h .\ngmake -C utils fmgroids.h\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'\nCPP='cc -E' AWK='gawk' /bin/sh Gen_fmgrtab.sh ../../../src/include/catalog/pg_proc.h\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'\ncd ../../src/include/utils/ && rm -f fmgroids.h && \\\n ln -s ../../../src/backend/utils/fmgroids.h .\ngmake -C access all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access'\ngmake -C common SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/common'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o heaptuple.o heaptuple.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o indextuple.o indextuple.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o indexvalid.o indexvalid.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o printtup.o printtup.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o scankey.o scankey.c\ncc -c -I/usr/local/include -I/opt/include 
-I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tupdesc.o tupdesc.c\n/usr/bin/ld -r -o SUBSYS.o heaptuple.o indextuple.o indexvalid.o printtup.o scankey.o tupdesc.o \ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/common'\ngmake -C gist SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/gist'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o gist.o gist.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o gistget.o gistget.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o gistscan.o gistscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o giststrat.o giststrat.c\n/usr/bin/ld -r -o SUBSYS.o gist.o gistget.o gistscan.o giststrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/gist'\ngmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/hash'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hash.o hash.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashfunc.o hashfunc.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashinsert.o hashinsert.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashovfl.o hashovfl.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashpage.o hashpage.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashscan.o 
hashscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashsearch.o hashsearch.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashstrat.o hashstrat.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashutil.o hashutil.c\n/usr/bin/ld -r -o SUBSYS.o hash.o hashfunc.o hashinsert.o hashovfl.o hashpage.o hashscan.o hashsearch.o hashstrat.o hashutil.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/hash'\ngmake -C heap SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/heap'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o heapam.o heapam.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hio.o hio.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o stats.o stats.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tuptoaster.o tuptoaster.c\n/usr/bin/ld -r -o SUBSYS.o heapam.o hio.o stats.o tuptoaster.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/heap'\ngmake -C index SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/index'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o genam.o genam.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o indexam.o indexam.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o istrat.o istrat.c\n/usr/bin/ld -r -o SUBSYS.o genam.o indexam.o 
istrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/index'\ngmake -C nbtree SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/nbtree'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtcompare.o nbtcompare.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtinsert.o nbtinsert.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtpage.o nbtpage.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtree.o nbtree.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtscan.o nbtscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtsearch.o nbtsearch.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtstrat.o nbtstrat.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtutils.o nbtutils.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nbtsort.o nbtsort.c\n/usr/bin/ld -r -o SUBSYS.o nbtcompare.o nbtinsert.o nbtpage.o nbtree.o nbtscan.o nbtsearch.o nbtstrat.o nbtutils.o nbtsort.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/nbtree'\ngmake -C rtree SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/rtree'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rtget.o rtget.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o rtproc.o rtproc.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rtree.o rtree.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rtscan.o rtscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rtstrat.o rtstrat.c\n/usr/bin/ld -r -o SUBSYS.o rtget.o rtproc.o rtree.o rtscan.o rtstrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/rtree'\ngmake -C transam SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/transam'\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o transam.o transam.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o transsup.o transsup.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o varsup.o varsup.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o xact.o xact.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o xid.o xid.c\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o xlog.o xlog.c\nUX:acomp: WARNING: \"xlog.c\", line 613: statement not reached\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o xlogutils.o xlogutils.c\nUX:acomp: WARNING: \"xlogutils.c\", line 401: empty translation unit\ncc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rmgr.o rmgr.c\n/usr/bin/ld -r -o SUBSYS.o transam.o transsup.o varsup.o xact.o 
xid.o xlog.o xlogutils.o rmgr.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/transam'\n/usr/bin/ld -r -o SUBSYS.o common/SUBSYS.o gist/SUBSYS.o hash/SUBSYS.o heap/SUBSYS.o index/SUBSYS.o nbtree/SUBSYS.o rtree/SUBSYS.o transam/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access'\ngmake -C bootstrap all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/bootstrap'\nbison -y -d bootparse.y\nsed -e 's/^yy/Int_yy/g' -e 's/\\([^a-zA-Z0-9_]\\)yy/\\1Int_yy/g' < y.tab.c > ./bootparse.c\nsed -e 's/^yy/Int_yy/g' -e 's/\\([^a-zA-Z0-9_]\\)yy/\\1Int_yy/g' < y.tab.h > ./bootstrap_tokens.h\nrm -f y.tab.c y.tab.h\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bootparse.o bootparse.c\n/usr/local/bin/flex bootscanner.l\nsed -e 's/^yy/Int_yy/g' -e 's/\\([^a-zA-Z0-9_]\\)yy/\\1Int_yy/g' lex.yy.c > bootscanner.c\nrm -f lex.yy.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bootscanner.o bootscanner.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bootstrap.o bootstrap.c\n/usr/bin/ld -r -o SUBSYS.o bootparse.o bootscanner.o bootstrap.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/bootstrap'\ngmake -C catalog all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/catalog'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o catalog.o catalog.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o heap.o heap.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o index.o index.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o 
indexing.o indexing.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o aclchk.o aclchk.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_aggregate.o pg_aggregate.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_operator.o pg_operator.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_proc.o pg_proc.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_type.o pg_type.c\n/usr/bin/ld -r -o SUBSYS.o catalog.o heap.o index.o indexing.o aclchk.o pg_aggregate.o pg_operator.o pg_proc.o pg_type.o\nCPP='cc -E' AWK='gawk' /bin/sh genbki.sh -o global -I../../../src/include ../../../src/include/catalog/pg_database.h ../../../src/include/catalog/pg_variable.h ../../../src/include/catalog/pg_shadow.h ../../../src/include/catalog/pg_group.h ../../../src/include/catalog/pg_log.h\nCPP='cc -E' AWK='gawk' /bin/sh genbki.sh -o template1 -I../../../src/include ../../../src/include/catalog/pg_proc.h ../../../src/include/catalog/pg_type.h ../../../src/include/catalog/pg_attribute.h ../../../src/include/catalog/pg_class.h ../../../src/include/catalog/pg_inherits.h ../../../src/include/catalog/pg_index.h ../../../src/include/catalog/pg_statistic.h ../../../src/include/catalog/pg_operator.h ../../../src/include/catalog/pg_opclass.h ../../../src/include/catalog/pg_am.h ../../../src/include/catalog/pg_amop.h ../../../src/include/catalog/pg_amproc.h ../../../src/include/catalog/pg_language.h ../../../src/include/catalog/pg_aggregate.h ../../../src/include/catalog/pg_ipl.h ../../../src/include/catalog/pg_inheritproc.h ../../../src/include/catalog/pg_rewrite.h ../../../src/include/catalog/pg_listener.h ../../../src/include/catalog/pg_description.h 
../../../src/include/catalog/indexing.h\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/catalog'\ngmake -C parser all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/parser'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o analyze.o analyze.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o gram.o gram.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o keywords.o keywords.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parser.o parser.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_agg.o parse_agg.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_clause.o parse_clause.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_expr.o parse_expr.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_func.o parse_func.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_node.o parse_node.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_oper.o parse_oper.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_relation.o parse_relation.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_type.o parse_type.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_coerce.o 
parse_coerce.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o parse_target.o parse_target.c\n/usr/local/bin/flex -Pbase_yy -o'scan.c' scan.l\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o scan.o scan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o scansup.o scansup.c\n/usr/bin/ld -r -o SUBSYS.o analyze.o gram.o keywords.o parser.o parse_agg.o parse_clause.o parse_expr.o parse_func.o parse_node.o parse_oper.o parse_relation.o parse_type.o parse_coerce.o parse_target.o scan.o scansup.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/parser'\ngmake -C commands all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/commands'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o async.o async.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o creatinh.o creatinh.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o command.o command.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o comment.o comment.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o copy.o copy.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o indexcmds.o indexcmds.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o define.o define.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o remove.o remove.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o rename.o rename.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o vacuum.o vacuum.c\nUX:acomp: WARNING: \"vacuum.c\", line 1374: end-of-loop code not reached\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o analyze.o analyze.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o view.o view.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o cluster.o cluster.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o explain.o explain.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o sequence.o sequence.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o trigger.o trigger.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o user.o user.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o proclang.o proclang.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dbcommands.o dbcommands.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o variable.o variable.c\n/usr/bin/ld -r -o SUBSYS.o async.o creatinh.o command.o comment.o copy.o indexcmds.o define.o remove.o rename.o vacuum.o analyze.o view.o cluster.o explain.o sequence.o trigger.o user.o proclang.o dbcommands.o variable.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/commands'\ngmake -C executor all\ngmake[3]: Entering directory 
`/home/ler/pg-dev/pgsql-snap/src/backend/executor'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execAmi.o execAmi.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execFlatten.o execFlatten.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execJunk.o execJunk.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execMain.o execMain.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execProcnode.o execProcnode.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execQual.o execQual.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execScan.o execScan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execTuples.o execTuples.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o execUtils.o execUtils.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o functions.o functions.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeAppend.o nodeAppend.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeAgg.o nodeAgg.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeHash.o nodeHash.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeHashjoin.o nodeHashjoin.c\ncc -c -I/usr/local/include 
-I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeIndexscan.o nodeIndexscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeMaterial.o nodeMaterial.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeMergejoin.o nodeMergejoin.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeNestloop.o nodeNestloop.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeResult.o nodeResult.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeSeqscan.o nodeSeqscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeSetOp.o nodeSetOp.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeSort.o nodeSort.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeUnique.o nodeUnique.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeGroup.o nodeGroup.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o spi.o spi.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeSubplan.o nodeSubplan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeSubqueryscan.o nodeSubqueryscan.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeTidscan.o nodeTidscan.c\n/usr/bin/ld -r -o SUBSYS.o execAmi.o execFlatten.o execJunk.o 
execMain.o execProcnode.o execQual.o execScan.o execTuples.o execUtils.o functions.o nodeAppend.o nodeAgg.o nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeMaterial.o nodeMergejoin.o nodeNestloop.o nodeResult.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o nodeGroup.o spi.o nodeSubplan.o nodeSubqueryscan.o nodeTidscan.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/executor'\ngmake -C lib all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/lib'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bit.o bit.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hasht.o hasht.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o lispsort.o lispsort.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o stringinfo.o stringinfo.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dllist.o dllist.c\n/usr/bin/ld -r -o SUBSYS.o bit.o hasht.o lispsort.o stringinfo.o dllist.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/lib'\ngmake -C libpq all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/libpq'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o be-fsstubs.o be-fsstubs.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o auth.o auth.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o crypt.o crypt.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hba.o hba.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o password.o password.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pqcomm.o pqcomm.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pqformat.o pqformat.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pqpacket.o pqpacket.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pqsignal.o pqsignal.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o util.o util.c\n/usr/bin/ld -r -o SUBSYS.o be-fsstubs.o auth.o crypt.o hba.o password.o pqcomm.o pqformat.o pqpacket.o pqsignal.o util.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/libpq'\ngmake -C main all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/main'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o main.o main.c\n/usr/bin/ld -r -o SUBSYS.o main.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/main'\ngmake -C nodes all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/nodes'\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodeFuncs.o nodeFuncs.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nodes.o nodes.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o list.o list.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o copyfuncs.o copyfuncs.c\ncc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o 
equalfuncs.o equalfuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o makefuncs.o makefuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o outfuncs.o outfuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o readfuncs.o readfuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o print.o print.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o read.o read.c
/usr/bin/ld -r -o SUBSYS.o nodeFuncs.o nodes.o list.o copyfuncs.o equalfuncs.o makefuncs.o outfuncs.o readfuncs.o print.o read.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/nodes'
gmake -C optimizer all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer'
gmake -C geqo SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/geqo'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_copy.o geqo_copy.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_eval.o geqo_eval.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_main.o geqo_main.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_misc.o geqo_misc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_pool.o geqo_pool.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_recombination.o geqo_recombination.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_selection.o geqo_selection.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_erx.o geqo_erx.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_pmx.o geqo_pmx.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_cx.o geqo_cx.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_px.o geqo_px.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_ox1.o geqo_ox1.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geqo_ox2.o geqo_ox2.c
/usr/bin/ld -r -o SUBSYS.o geqo_copy.o geqo_eval.o geqo_main.o geqo_misc.o geqo_pool.o geqo_recombination.o geqo_selection.o geqo_erx.o geqo_pmx.o geqo_cx.o geqo_px.o geqo_ox1.o geqo_ox2.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/geqo'
gmake -C path SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/path'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o allpaths.o allpaths.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o clausesel.o clausesel.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o costsize.o costsize.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o indxpath.o indxpath.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o joinpath.o joinpath.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o joinrels.o joinrels.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o orindxpath.o orindxpath.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pathkeys.o pathkeys.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tidpath.o tidpath.c
/usr/bin/ld -r -o SUBSYS.o allpaths.o clausesel.o costsize.o indxpath.o joinpath.o joinrels.o orindxpath.o pathkeys.o tidpath.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/path'
gmake -C plan SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/plan'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o createplan.o createplan.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o initsplan.o initsplan.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o planmain.o planmain.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o planner.o planner.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o setrefs.o setrefs.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o subselect.o subselect.c
/usr/bin/ld -r -o SUBSYS.o createplan.o initsplan.o planmain.o planner.o setrefs.o subselect.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/plan'
gmake -C prep SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/prep'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o prepqual.o prepqual.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o preptlist.o preptlist.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o prepunion.o prepunion.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o prepkeyset.o prepkeyset.c
/usr/bin/ld -r -o SUBSYS.o prepqual.o preptlist.o prepunion.o prepkeyset.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/prep'
gmake -C util SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/util'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o restrictinfo.o restrictinfo.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o clauses.o clauses.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o plancat.o plancat.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o joininfo.o joininfo.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pathnode.o pathnode.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o relnode.o relnode.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tlist.o tlist.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o var.o var.c
/usr/bin/ld -r -o SUBSYS.o restrictinfo.o clauses.o plancat.o joininfo.o pathnode.o relnode.o tlist.o var.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer/util'
/usr/bin/ld -r -o SUBSYS.o geqo/SUBSYS.o path/SUBSYS.o plan/SUBSYS.o prep/SUBSYS.o util/SUBSYS.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/optimizer'
gmake -C port all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/port'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dynloader.o dynloader.c
UX:acomp: WARNING: "dynloader.c", line 5: empty translation unit
/usr/bin/ld -r -o SUBSYS.o dynloader.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/port'
gmake -C postmaster all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/postmaster'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o postmaster.o postmaster.c
/usr/bin/ld -r -o SUBSYS.o postmaster.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/postmaster'
gmake -C regex all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/regex'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DPOSIX_MISTAKE -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regcomp.o regcomp.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DPOSIX_MISTAKE -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regerror.o regerror.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DPOSIX_MISTAKE -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regexec.o regexec.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DPOSIX_MISTAKE -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regfree.o regfree.c
/usr/bin/ld -r -o SUBSYS.o regcomp.o regerror.o regexec.o regfree.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/regex'
gmake -C rewrite all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/rewrite'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rewriteRemove.o rewriteRemove.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rewriteDefine.o rewriteDefine.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rewriteHandler.o rewriteHandler.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rewriteManip.o rewriteManip.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rewriteSupport.o rewriteSupport.c
/usr/bin/ld -r -o SUBSYS.o rewriteRemove.o rewriteDefine.o rewriteHandler.o rewriteManip.o rewriteSupport.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/rewrite'
gmake -C storage all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage'
gmake -C buffer SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/buffer'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o buf_table.o buf_table.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o buf_init.o buf_init.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bufmgr.o bufmgr.c
UX:acomp: WARNING: "bufmgr.c", line 1212: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "bufmgr.c", line 2213: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "bufmgr.c", line 2262: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "bufmgr.c", line 2298: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "bufmgr.c", line 2330: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "bufmgr.c", line 2399: argument #1 incompatible with prototype: tas()
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o freelist.o freelist.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o localbuf.o localbuf.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o s_lock.o s_lock.c
UX:acomp: WARNING: "s_lock.c", line 74: argument #1 incompatible with prototype: tas()
/usr/bin/ld -r -o SUBSYS.o buf_table.o buf_init.o bufmgr.o freelist.o localbuf.o s_lock.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/buffer'
gmake -C file SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/file'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o fd.o fd.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o buffile.o buffile.c
/usr/bin/ld -r -o SUBSYS.o fd.o buffile.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/file'
gmake -C ipc SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/ipc'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ipc.o ipc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ipci.o ipci.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o shmem.o shmem.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o shmqueue.o shmqueue.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o sinval.o sinval.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o sinvaladt.o sinvaladt.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o spin.o spin.c
UX:acomp: WARNING: "spin.c", line 118: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "spin.c", line 123: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "spin.c", line 124: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "spin.c", line 131: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "spin.c", line 157: argument #1 incompatible with prototype: tas()
UX:acomp: WARNING: "spin.c", line 168: argument #1 incompatible with prototype: tas()
/usr/bin/ld -r -o SUBSYS.o ipc.o ipci.o shmem.o shmqueue.o sinval.o sinvaladt.o spin.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/ipc'
gmake -C large_object SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/large_object'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o inv_api.o inv_api.c
/usr/bin/ld -r -o SUBSYS.o inv_api.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/large_object'
gmake -C lmgr SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/lmgr'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o lmgr.o lmgr.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o lock.o lock.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o proc.o proc.c
/usr/bin/ld -r -o SUBSYS.o lmgr.o lock.o proc.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/lmgr'
gmake -C page SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/page'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bufpage.o bufpage.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o itemptr.o itemptr.c
/usr/bin/ld -r -o SUBSYS.o bufpage.o itemptr.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/page'
gmake -C smgr SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/smgr'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o md.o md.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o mm.o mm.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o smgr.o smgr.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o smgrtype.o smgrtype.c
/usr/bin/ld -r -o SUBSYS.o md.o mm.o smgr.o smgrtype.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage/smgr'
/usr/bin/ld -r -o SUBSYS.o buffer/SUBSYS.o file/SUBSYS.o ipc/SUBSYS.o large_object/SUBSYS.o lmgr/SUBSYS.o page/SUBSYS.o smgr/SUBSYS.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/storage'
gmake -C tcop all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/tcop'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dest.o dest.c
UX:acomp: WARNING: "dest.c", line 209: statement not reached
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o fastpath.o fastpath.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o postgres.o postgres.c
UX:acomp: WARNING: "postgres.c", line 238: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pquery.o pquery.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o utility.o utility.c
/usr/bin/ld -r -o SUBSYS.o dest.o fastpath.o postgres.o pquery.o utility.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/tcop'
gmake -C utils all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o fmgrtab.o fmgrtab.c
gmake -C adt SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/adt'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o acl.o acl.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o arrayfuncs.o arrayfuncs.c
UX:acomp: WARNING: "arrayfuncs.c", line 714: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 766: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 795: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 798: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 842: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 881: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 890: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 957: end-of-loop code not reached
UX:acomp: WARNING: "arrayfuncs.c", line 1123: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o arrayutils.o arrayutils.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o bool.o bool.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o cash.o cash.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o char.o char.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o date.o date.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o datetime.o datetime.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o datum.o datum.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o float.o float.c
UX:acomp: WARNING: "float.c", line 1456: end-of-loop code not reached
UX:acomp: WARNING: "float.c", line 1477: end-of-loop code not reached
UX:acomp: WARNING: "float.c", line 1501: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o format_type.o format_type.c
UX:acomp: WARNING: "format_type.c", line 72: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geo_ops.o geo_ops.c
UX:acomp: WARNING: "geo_ops.c", line 772: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 1117: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 1447: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 1994: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 2003: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 2139: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 2238: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 2760: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 3283: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 3476: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 3589: end-of-loop code not reached
UX:acomp: WARNING: "geo_ops.c", line 3661: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o geo_selfuncs.o geo_selfuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o int.o int.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o int8.o int8.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o like.o like.c
UX:acomp: WARNING: "like.c", line 131: assignment type mismatch
UX:acomp: WARNING: "like.c", line 132: argument #1 incompatible with prototype: strlen()
UX:acomp: WARNING: "like.c", line 133: assignment type mismatch
UX:acomp: WARNING: "like.c", line 150: assignment type mismatch
UX:acomp: WARNING: "like.c", line 151: argument #1 incompatible with prototype: strlen()
UX:acomp: WARNING: "like.c", line 152: assignment type mismatch
UX:acomp: WARNING: "like.c", line 169: assignment type mismatch
UX:acomp: WARNING: "like.c", line 171: assignment type mismatch
UX:acomp: WARNING: "like.c", line 188: assignment type mismatch
UX:acomp: WARNING: "like.c", line 190: assignment type mismatch
UX:acomp: WARNING: "like.c", line 211: assignment type mismatch
UX:acomp: WARNING: "like.c", line 212: argument #1 incompatible with prototype: strlen()
UX:acomp: WARNING: "like.c", line 213: assignment type mismatch
UX:acomp: WARNING: "like.c", line 230: assignment type mismatch
UX:acomp: WARNING: "like.c", line 231: argument #1 incompatible with prototype: strlen()
UX:acomp: WARNING: "like.c", line 232: assignment type mismatch
UX:acomp: WARNING: "like.c", line 249: assignment type mismatch
UX:acomp: WARNING: "like.c", line 251: assignment type mismatch
UX:acomp: WARNING: "like.c", line 268: assignment type mismatch
UX:acomp: WARNING: "like.c", line 270: assignment type mismatch
UX:acomp: WARNING: "like.c", line 292: assignment type mismatch
UX:acomp: WARNING: "like.c", line 294: assignment type mismatch
UX:acomp: WARNING: "like.c", line 302: assignment type mismatch
UX:acomp: WARNING: "like.c", line 325: assignment type mismatch
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o misc.o misc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o nabstime.o nabstime.c
UX:acomp: WARNING: "nabstime.c", line 719: statement not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o name.o name.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o not_in.o not_in.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o numeric.o numeric.c
UX:acomp: WARNING: "numeric.c", line 1953: end-of-loop code not reached
UX:acomp: WARNING: "numeric.c", line 1991: end-of-loop code not reached
UX:acomp: WARNING: "numeric.c", line 2058: end-of-loop code not reached
UX:acomp: WARNING: "numeric.c", line 2118: end-of-loop code not reached
UX:acomp: WARNING: "numeric.c", line 2147: end-of-loop code not reached
UX:acomp: WARNING: "numeric.c", line 2176: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o numutils.o numutils.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o oid.o oid.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o oracle_compat.o oracle_compat.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regexp.o regexp.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o regproc.o regproc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ruleutils.o ruleutils.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o selfuncs.o selfuncs.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o sets.o sets.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tid.o tid.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o timestamp.o timestamp.c
UX:acomp: WARNING: "timestamp.c", line 199: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1307: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1723: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1763: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1773: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1793: end-of-loop code not reached
UX:acomp: WARNING: "timestamp.c", line 1917: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o varbit.o varbit.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o varchar.o varchar.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o varlena.o varlena.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o version.o version.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o network.o network.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o mac.o mac.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o inet_net_ntop.o inet_net_ntop.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o inet_net_pton.o inet_net_pton.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ri_triggers.o ri_triggers.c
UX:acomp: WARNING: "ri_triggers.c", line 470: statement not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_lzcompress.o pg_lzcompress.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o pg_locale.o pg_locale.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o formatting.o formatting.c
UX:acomp: WARNING: "formatting.c", line 1333: statement not reached
UX:acomp: WARNING: "formatting.c", line 2394: statement not reached
UX:acomp: WARNING: "formatting.c", line 2455: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 2558: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 3995: statement not reached
UX:acomp: WARNING: "formatting.c", line 4065: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 4185: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 4269: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 4357: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 4437: end-of-loop code not reached
UX:acomp: WARNING: "formatting.c", line 4516: end-of-loop code not reached
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ascii.o ascii.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o quote.o quote.c
/usr/bin/ld -r -o SUBSYS.o acl.o arrayfuncs.o arrayutils.o bool.o cash.o char.o date.o datetime.o datum.o float.o format_type.o geo_ops.o geo_selfuncs.o int.o int8.o like.o misc.o nabstime.o name.o not_in.o numeric.o numutils.o oid.o oracle_compat.o regexp.o regproc.o ruleutils.o selfuncs.o sets.o tid.o timestamp.o varbit.o varchar.o varlena.o version.o network.o mac.o inet_net_ntop.o inet_net_pton.o ri_triggers.o pg_lzcompress.o pg_locale.o formatting.o ascii.o quote.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/adt'
gmake -C cache SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/cache'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o catcache.o catcache.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o inval.o inval.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o rel.o rel.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o relcache.o relcache.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o syscache.o syscache.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o lsyscache.o lsyscache.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o fcache.o fcache.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o temprel.o temprel.c
/usr/bin/ld -r -o SUBSYS.o catcache.o inval.o rel.o relcache.o syscache.o lsyscache.o fcache.o temprel.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/cache'
gmake -C error SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/error'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o assert.o assert.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o elog.o elog.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o exc.o exc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o excabort.o excabort.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o excid.o excid.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o format.o format.c
/usr/bin/ld -r -o SUBSYS.o assert.o elog.o exc.o excabort.o excid.o format.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/error'
gmake -C fmgr SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/fmgr'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dfmgr.o dfmgr.c
UX:acomp: WARNING: "dfmgr.c", line 163: assignment type mismatch
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o fmgr.o fmgr.c
/usr/bin/ld -r -o SUBSYS.o dfmgr.o fmgr.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/fmgr'
gmake -C hash SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/hash'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o dynahash.o dynahash.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o hashfn.o hashfn.c
/usr/bin/ld -r -o SUBSYS.o dynahash.o hashfn.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/hash'
gmake -C init SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/init'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o findbe.o findbe.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o globals.o globals.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o miscinit.o miscinit.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o postinit.o postinit.c
/usr/bin/ld -r -o SUBSYS.o findbe.o globals.o miscinit.o postinit.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/init'
gmake -C misc SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/misc'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o database.o database.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o superuser.o superuser.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o guc.o guc.c
/usr/local/bin/flex guc-file.l
sed -e 's/^yy/GUC_yy/g' -e 's/\([^a-zA-Z0-9_]\)yy/\1GUC_yy/g' lex.yy.c > guc-file.c
rm -f lex.yy.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o guc-file.o guc-file.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o ps_status.o ps_status.c
/usr/bin/ld -r -o SUBSYS.o database.o superuser.o guc.o guc-file.o ps_status.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/misc'
gmake -C mmgr SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/mmgr'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o aset.o aset.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o mcxt.o mcxt.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o portalmem.o portalmem.c
/usr/bin/ld -r -o SUBSYS.o aset.o mcxt.o portalmem.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/mmgr'
gmake -C sort SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/sort'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o logtape.o logtape.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tuplesort.o tuplesort.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tuplestore.o tuplestore.c
/usr/bin/ld -r -o SUBSYS.o logtape.o tuplesort.o tuplestore.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/sort'
gmake -C time SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/time'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K host,inline,loop_unroll,alloca -Dsvr4 -o tqual.o tqual.c
/usr/bin/ld -r -o SUBSYS.o tqual.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils/time'
/usr/bin/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o error/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'
cc -O -K host,inline,loop_unroll,alloca -Dsvr4 -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -L/usr/local/lib -L/opt/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -Wl,-Bexport
gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend'
gmake -C include all
gmake[2]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/include'
gmake[2]: Nothing to be done for `all'.
gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/include'
gmake -C interfaces all
gmake[2]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/interfaces'
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/interfaces/libpq'
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='"/usr/local/pgsql/etc"' -O -K host,inline,loop_unroll,alloca -Dsvr4 -K PIC -o fe-auth.o fe-auth.c
cc -c -I/usr/local/include -I/opt/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='"/usr/local/pgsql/etc"' -O -K host,inline,loop_unroll,alloca -Dsvr4 -K PIC -o fe-connect.o fe-connect.c
UX:acomp: ERROR: "fe-connect.c", line 2113: integral constant expression expected
gmake[3]: *** [fe-connect.o] Error 1
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/interfaces/libpq'
gmake[2]: *** [all] Error 2
gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/interfaces'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src'
gmake: *** [all] Error 2
* Larry Rosenman <ler@lerctr.org> [001021 12:08]:
> * Peter Eisentraut <peter_e@gmx.net> [001021 12:02]:
> > I wrote:
> > 
> > > Larry Rosenman writes:
> > > 
> > > > 1) when I included -with-openssl, it didn't add /usr/local/ssl/lib to
> > > > the -L options, so couldn't find libssl.a
> > > 
> > > Confirmed. It's being put into a different variable. I'll see about
> > > fixing this.
> > 
> > No, this is fine. It's in the LIBS variable, where it belongs. Whatever
> > you're seeing is a different problem. Check config.log to find out.
> it's stupider. It forgot the -lssl after the -L/usr/local/ssl/lib.
\n> \n> Here is config.log:\n> This file contains any messages produced by compilers while\n> running configure, to aid debugging if configure makes a mistake.\n> \n> configure:677: checking host system type\n> configure:703: checking which template to use\n> configure:866: checking whether to build with locale support\n> configure:895: checking whether to build with recode support\n> configure:925: checking whether to build with multibyte character support\n> configure:978: checking for default port number\n> configure:1012: checking for default soft limit on number of connections\n> configure:1062: checking for gcc\n> configure:1175: checking whether the C compiler (cc ) works\n> configure:1191: cc -o conftest conftest.c 1>&5\n> configure:1217: checking whether the C compiler (cc ) is a cross-compiler\n> configure:1222: checking whether we are using GNU C\n> configure:1231: cc -E conftest.c\n> configure:1250: checking whether cc accepts -g\n> configure:1286: checking whether the C compiler (cc -O -K host,inline,loop_unroll,alloca -Dsvr4 ) works\n> configure:1302: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\n> configure:1328: checking whether the C compiler (cc -O -K host,inline,loop_unroll,alloca -Dsvr4 ) is a cross-compiler\n> configure:1333: checking for Cygwin environment\n> configure:1349: cc -c -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\n> UX:acomp: ERROR: \"configure\", line 1345: undefined symbol: __CYGWIN32__\n> UX:acomp: WARNING: \"configure\", line 1346: statement not reached\n> configure: failed program was:\n> #line 1338 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> \n> #ifndef __CYGWIN__\n> #define __CYGWIN__ __CYGWIN32__\n> #endif\n> return __CYGWIN__;\n> ; return 0; }\n> configure:1366: checking for mingw32 environment\n> configure:1378: cc -c -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\n> UX:acomp: ERROR: \"configure\", line 1374: undefined symbol: __MINGW32__\n> 
UX:acomp: WARNING: \"configure\", line 1375: statement not reached\n> configure: failed program was:\n> #line 1371 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> return __MINGW32__;\n> ; return 0; }\n> configure:1397: checking for executable suffix\n> configure:1407: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 conftest.c 1>&5\n> configure:1428: checking how to run the C preprocessor\n> configure:1449: cc -E conftest.c >/dev/null 2>conftest.out\n> configure:1683: checking whether to build with Tcl\n> configure:1707: checking whether to build with Tk\n> configure:1769: checking whether to build Perl modules\n> configure:1796: checking whether to build Python modules\n> configure:2073: checking whether to build the ODBC driver\n> configure:2155: checking whether to build C++ modules\n> configure:2183: checking for c++\n> configure:2215: checking whether the C++ compiler (c++ ) works\n> configure:2231: c++ -o conftest conftest.C -L/usr/local/ssl/lib 1>&5\n> configure:2257: checking whether the C++ compiler (c++ ) is a cross-compiler\n> configure:2262: checking whether we are using GNU C++\n> configure:2271: c++ -E conftest.C\n> configure:2290: checking whether c++ accepts -g\n> configure:2322: checking how to run the C++ preprocessor\n> configure:2340: c++ -E conftest.C >/dev/null 2>conftest.out\n> configure:2375: checking for string\n> configure:2385: c++ -E conftest.C >/dev/null 2>conftest.out\n> configure:2454: checking for namespace std in C++\n> configure:2481: c++ -c -g -O2 conftest.C 1>&5\n> configure:2536: checking for a BSD compatible install\n> configure:2614: checking for gawk\n> configure:2644: checking for flex\n> configure:2712: checking whether ln -s works\n> configure:2777: checking for non-GNU ld\n> configure:2812: checking if the linker (/usr/bin/ld) is GNU ld\n> configure:2831: checking for ranlib\n> configure:2863: checking for lorder\n> configure:2895: checking for tar\n> configure:2932: checking for perl\n> 
configure:2966: checking for bison\n> configure:3042: checking for main in -lsfio\n> configure:3057: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lsfio -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lsfio\n> configure: failed program was:\n> #line 3050 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3085: checking for main in -lncurses\n> configure:3100: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lncurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lncurses\n> configure: failed program was:\n> #line 3093 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3126: checking for main in -lcurses\n> configure:3141: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3171: checking for main in -ltermcap\n> configure:3186: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3214: checking for readline in -lreadline\n> configure:3233: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3262: checking for library containing using_history\n> configure:3280: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lreadline 
-ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3329: checking for main in -lbsd\n> configure:3344: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lbsd -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lbsd\n> configure: failed program was:\n> #line 3337 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3373: checking for setproctitle in -lutil\n> configure:3392: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lutil -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> setproctitle conftest.o\n> UX:ld: ERROR: Symbol referencing errors. No output written to conftest\n> configure: failed program was:\n> #line 3381 \"configure\"\n> #include \"confdefs.h\"\n> /* Override any gcc2 internal prototype to avoid an error. */\n> /* We use char because int might match the return type of a gcc2\n> builtin and then its argument prototype would still apply. 
*/\n> char setproctitle();\n> \n> int main() {\n> setproctitle()\n> ; return 0; }\n> configure:3420: checking for main in -lm\n> configure:3435: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3463: checking for main in -ldl\n> configure:3478: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3506: checking for main in -lsocket\n> configure:3521: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3549: checking for main in -lnsl\n> configure:3564: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3592: checking for main in -lipc\n> configure:3607: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lipc -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lipc\n> configure: failed program was:\n> #line 3600 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3635: checking for main in -lIPC\n> configure:3650: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lIPC -lnsl -lsocket -ldl -lm 
-lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lIPC\n> configure: failed program was:\n> #line 3643 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3678: checking for main in -llc\n> configure:3693: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -llc -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -llc\n> configure: failed program was:\n> #line 3686 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3721: checking for main in -ldld\n> configure:3736: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -ldld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -ldld\n> configure: failed program was:\n> #line 3729 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3764: checking for main in -lln\n> configure:3779: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lln -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lln\n> configure: failed program was:\n> #line 3772 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3807: checking for main in -lld\n> configure:3822: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 
1>&5\n> configure:3850: checking for main in -lcompat\n> configure:3865: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcompat -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lcompat\n> configure: failed program was:\n> #line 3858 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3893: checking for main in -lBSD\n> configure:3908: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lBSD -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lBSD\n> configure: failed program was:\n> #line 3901 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:3936: checking for main in -lgen\n> configure:3951: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:3979: checking for main in -lPW\n> configure:3994: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lPW -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lPW\n> configure: failed program was:\n> #line 3987 \"configure\"\n> #include \"confdefs.h\"\n> \n> int main() {\n> main()\n> ; return 0; }\n> configure:4023: checking for library containing crypt\n> configure:4041: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include 
-I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:4084: checking for inflate in -lz\n> configure:4103: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> configure:4132: checking for library containing __inet_ntoa\n> configure:4150: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> __inet_ntoa conftest.o\n> UX:ld: ERROR: Symbol referencing errors. No output written to conftest\n> configure: failed program was:\n> #line 4139 \"configure\"\n> #include \"confdefs.h\"\n> /* Override any gcc2 internal prototype to avoid an error. */\n> /* We use char because int might match the return type of a gcc2\n> builtin and then its argument prototype would still apply. */\n> char __inet_ntoa();\n> \n> int main() {\n> __inet_ntoa()\n> ; return 0; }\n> configure:4172: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lbind -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lbind\n> configure: failed program was:\n> #line 4161 \"configure\"\n> #include \"confdefs.h\"\n> /* Override any gcc2 internal prototype to avoid an error. */\n> /* We use char because int might match the return type of a gcc2\n> builtin and then its argument prototype would still apply. 
*/\n> char __inet_ntoa();\n> \n> int main() {\n> __inet_ntoa()\n> ; return 0; }\n> configure:4430: checking for CRYPTO_new_ex_data in -lcrypto\n> configure:4449: cc -o conftest -O -K host,inline,loop_unroll,alloca -Dsvr4 -I/usr/local/include -I/opt/include -I/usr/local/ssl/include -L/usr/local/lib -L/opt/lib conftest.c -lcrypto -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -L/usr/local/ssl/lib 1>&5\n> UX:ld: ERROR: library not found: -lcrypto\n> configure: failed program was:\n> #line 4438 \"configure\"\n> #include \"confdefs.h\"\n> /* Override any gcc2 internal prototype to avoid an error. */\n> /* We use char because int might match the return type of a gcc2\n> builtin and then its argument prototype would still apply. */\n> char CRYPTO_new_ex_data();\n> \n> int main() {\n> CRYPTO_new_ex_data()\n> ; return 0; }\n> > \n> > -- \n> > Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 12:43:54 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: UnixWare 7.1.1b FS"
}
] |
[
{
"msg_contents": "Hello,\n\nI tried to do backup/restore of empty database.\nAnd get the following error:\n\nConnecting to database for restore\nConnecting to test4 as postgres\nCreating OPERATOR =\nArchiver(db): Could not execute query. Code = 7. Explanation from backend: \n'ERROR: OperatorDef: operator \"=\" already defined\n'.\n\nAnother funny thing is that dump of empty database occupies 657614 bytes.\nIt is quite much...\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sat, 21 Oct 2000 18:13:57 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Problem do backup/restore for empty database"
},
{
"msg_contents": "> >I tried to do backup/restore of empty database.\n> >And get the following error:\n> >\n> >Archiver(db): Could not execute query. Code = 7. Explanation from backend:\n> >'ERROR: OperatorDef: operator \"=\" already defined\n> >'.\n> >\n> >Another funny thing is that dump of empty database occupies 657614 bytes.\n> >It is quite much...\n>\n> Works fine for me. What command are you using? What does\n\npg_dump --blob -Fc test3 -f test3.tar -v\npg_restore test3.tar --db=test4 -v\n\n> pg_restore archive-file-name\n\npg_restore test3.tar produces 688634 bytes length.\n\npg_restore -l test3.tar produces long list. Here there are some first lines:\n\n;\n; Archive created at Sat Oct 21 18:05:33 2000\n; dbname: test3\n; TOC Entries: 2899\n; Compression: -1\n; Dump Version: 1.4-17\n; Format: CUSTOM\n;\n;\n; Selected TOC Entries:\n;\n2338; 15 OPERATOR = postgres\n\n> produce?\n>\n> Have you done anything nasty to template1?\n\nJust initdb.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sat, 21 Oct 2000 18:33:14 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem do backup/restore for empty database"
},
{
"msg_contents": "BTW, also, if it is possible it is a good idea to remove the following \nwarning if there are no BLOBs in the archive: Archiver: WARNING - skipping \nBLOB restoration\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sat, 21 Oct 2000 18:45:45 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem do backup/restore for empty database"
},
{
"msg_contents": "At 18:13 21/10/00 +0700, Denis Perchine wrote:\n>\n>I tried to do backup/restore of empty database.\n>And get the following error:\n>\n>Archiver(db): Could not execute query. Code = 7. Explanation from backend: \n>'ERROR: OperatorDef: operator \"=\" already defined\n>'.\n>\n>Another funny thing is that dump of empty database occupies 657614 bytes.\n>It is quite much...\n>\n\nWorks fine for me. What command are you using? What does\n\n pg_restore archive-file-name\n\nproduce?\n\nHave you done anything nasty to template1?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 22:17:36 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Problem do backup/restore for empty database"
},
{
"msg_contents": "At 18:33 21/10/00 +0700, Denis Perchine wrote:\n>\n>;\n>; Archive created at Sat Oct 21 18:05:33 2000\n>; dbname: test3\n>; TOC Entries: 2899\n>; Compression: -1\n>; Dump Version: 1.4-17\n>; Format: CUSTOM\n>;\n>;\n>; Selected TOC Entries:\n>;\n>2338; 15 OPERATOR = postgres\n>\n\nOK. I just built with absolute latest CVS again, and pg_dump has started\ndumping the entire schema, including system tables, functions etc. Looks\nlike the way it decides something is user-defined is now broken.\n\nSame is true for non-empty databases.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 22:33:23 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Problem do backup/restore for empty database"
},
{
"msg_contents": "At 22:33 21/10/00 +1000, Philip Warner wrote:\n>\n>OK. I just built with absolute latest CVS again, and pg_dump has started\n>dumping the entire schema, including system tables, functions etc. Looks\n>like the way it decides something is user-defined is now broken.\n>\n>Same is true for non-empty databases.\n>\n\nLooks like the problem is the way pg_dump determines the last builtin OID.\nCurrently, the code executes:\n\n SELECT oid from pg_database where datname = 'template1';\n\nwhich in CVS version on my machine, produces '1'. This is clearly a problem.\n\nDoes anyone have a suggestion on the best way to get the last builtin OID?\nOr should the OID of template1 be larger???\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 22:42:26 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Problem do backup/restore for empty database"
},
{
"msg_contents": "At 18:33 21/10/00 +0700, Denis Perchine wrote:\n>\n>pg_dump --blob -Fc test3 -f test3.tar -v\n>pg_restore test3.tar --db=test4 -v\n>\n\nShould work with current CVS sources now...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 23 Oct 2000 05:18:59 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Problem do backup/restore for empty database"
}
] |
[
{
"msg_contents": "Hello,\n\nhere it is as requested by Bruce.\nI tested it restoring my database with > 100000 BLOBS, and dumping it out.\nBut unfortunatly I can not restore it back due to problems in pg_dump.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------",
"msg_date": "Sat, 21 Oct 2000 18:19:37 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Patch to support transactions with BLOBs for current CVS"
},
{
"msg_contents": "> >here it is as requested by Bruce.\n> >I tested it restoring my database with > 100000 BLOBS, and dumping it out.\n> >But unfortunatly I can not restore it back due to problems in pg_dump.\n>\n> Please clarify - based on your last private email you stated that there was\n>\n> not a problem with pg_dump, but a problem in the your BLOB patch:\n\nYeps. I have fixed all the crap. And it restores perfect from my archive \nwhich I made some time ago with your pg_dump for 7.0.2. It was _10 if I am \nnot mistaken.\n\nAlso it dumps out fine, but restore back from new archive stops right away. \nAnd when I tried to do dump/restore on the empty database I was failed.\n\n> >From: Denis Perchine <dyp@perchine.com>\n> >Date: Fri, 20 Oct 2000 12:15:18 +0700\n> >\n> >Sorry... I had a look inside server log... Looks like my bug...\n>\n> Are you now saying there is a new problem with pg_dump?\n>\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n>\n> | --________--\n>\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sat, 21 Oct 2000 18:36:28 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch to support transactions with BLOBs for current CVS"
},
{
"msg_contents": "> >Yeps. I have fixed all the crap. And it restores perfect from my archive\n> >which I made some time ago with your pg_dump for 7.0.2. It was _10 if I am\n> >not mistaken.\n> >\n> >Also it dumps out fine, but restore back from new archive stops right\n> > away. And when I tried to do dump/restore on the empty database I was\n> > failed.\n>\n> This is the same recent bug as from the hacker list?\n\nYeps. I just made a comment that I was unable to make a complete circle in my \ntests. :-)))\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Sat, 21 Oct 2000 18:46:44 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch to support transactions with BLOBs for current CVS"
},
{
"msg_contents": "At 18:19 21/10/00 +0700, Denis Perchine wrote:\n>Hello,\n>\n>here it is as requested by Bruce.\n>I tested it restoring my database with > 100000 BLOBS, and dumping it out.\n>But unfortunatly I can not restore it back due to problems in pg_dump.\n>\n\nPlease clarify - based on your last private email you stated that there was\nnot a problem with pg_dump, but a problem in the your BLOB patch:\n\n>From: Denis Perchine <dyp@perchine.com>\n>Date: Fri, 20 Oct 2000 12:15:18 +0700\n>\n>Sorry... I had a look inside server log... Looks like my bug...\n>\n\nAre you now saying there is a new problem with pg_dump?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 22:21:56 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patch to support transactions with BLOBs for\n current CVS"
},
{
"msg_contents": "At 18:36 21/10/00 +0700, Denis Perchine wrote:\n>> >here it is as requested by Bruce.\n>> >I tested it restoring my database with > 100000 BLOBS, and dumping it out.\n>> >But unfortunatly I can not restore it back due to problems in pg_dump.\n>>\n>> Please clarify - based on your last private email you stated that there was\n>>\n>> not a problem with pg_dump, but a problem in the your BLOB patch:\n>\n>Yeps. I have fixed all the crap. And it restores perfect from my archive \n>which I made some time ago with your pg_dump for 7.0.2. It was _10 if I am \n>not mistaken.\n>\n>Also it dumps out fine, but restore back from new archive stops right away. \n>And when I tried to do dump/restore on the empty database I was failed.\n>\n\nThis is the same recent bug as from the hacker list?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 22:35:05 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patch to support transactions with BLOBs for\n current CVS"
},
{
"msg_contents": "Applied. Thanks. I know it is a pain to generate a new patch against\nthe release.\n\n[ Charset koi8r unsupported, skipping... ]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Oct 2000 11:55:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to support transactions with BLOBs for current CVS"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Applied. Thanks. I know it is a pain to generate a new patch against\n> the release.\n\nRegression tests opr_sanity and sanity_check are now failing.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 22 Oct 2000 00:05:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Patch to support transactions with BLOBs for current\n CVS"
},
{
"msg_contents": "OK, Denis, can you run the regression tests with your patch and see what\nis going on?\n\n> Bruce Momjian writes:\n> \n> > Applied. Thanks. I know it is a pain to generate a new patch against\n> > the release.\n> \n> Regression tests opr_sanity and sanity_check are now failing.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Oct 2000 20:10:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs\n\tfor current CVS"
},
{
"msg_contents": "Hi,\n\n> OK, Denis, can you run the regression tests with your patch and see what\n> is going on?\n>\n> > Bruce Momjian writes:\n> > > Applied. Thanks. I know it is a pain to generate a new patch against\n> > > the release.\n> >\n> > Regression tests opr_sanity and sanity_check are now failing.\n\nThis was due to change in template1.\nHere is regression.diff attached.\n\nAnd also there's test.patch attached which will fix this.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------",
"msg_date": "Sun, 22 Oct 2000 13:30:56 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Patch to support transactions with BLOBs for\n\tcurrent CVS"
}
] |
[
{
"msg_contents": "Hi,\nWhat is the latest version of PostgreSQL?\nIs there something like 7.1?\n\nPawel\n",
"msg_date": "Sat, 21 Oct 2000 15:04:12 +0200",
"msg_from": "Pawel Wegrzyn <Pawel.Wegrzyn@sigma.wsb-nlu.edu.pl>",
"msg_from_op": true,
"msg_subject": "latest version?"
},
{
"msg_contents": "Pawel Wegrzyn wrote:\n> \n> Hi,\n> What is the latest version of PostgreSQL?\n> Is there something like 7.1?\n\nThe most recent version 7.0.2. 7.1 is about to come - I am looking\nforward to it as well.\n\nRegards,\nMit freundlichem Gruß,\n\tHolger Klawitter\n--\nHolger Klawitter +49 (0)251 484 0637\nholger@klawitter.de http://www.klawitter.de/\n\n\n",
"msg_date": "Wed, 25 Oct 2000 11:54:17 +0200",
"msg_from": "Holger Klawitter <holger@klawitter.de>",
"msg_from_op": false,
"msg_subject": "Re: latest version?"
},
{
"msg_contents": "On Wed, 25 Oct 2000, Holger Klawitter wrote:\n\n> Pawel Wegrzyn wrote:\n> > \n> > Hi,\n> > What is the latest version of PostgreSQL?\n> > Is there something like 7.1?\n> \n> The most recent version 7.0.2. 7.1 is about to come - I am looking\n> forward to it as well.\n\n7.0.3 is about to come out, 7.1 is about 2 months away yet :)\n\n\n",
"msg_date": "Thu, 26 Oct 2000 00:55:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: latest version?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n\n> On Wed, 25 Oct 2000, Holger Klawitter wrote:\n> \n> > Pawel Wegrzyn wrote:\n> > > \n> > > Hi,\n> > > What is the latest version of PostgreSQL?\n> > > Is there something like 7.1?\n> > \n> > The most recent version 7.0.2. 7.1 is about to come - I am looking\n> > forward to it as well.\n> \n> 7.0.3 is about to come out, 7.1 is about 2 months away yet :)\n\nHow compatible with 7.0 and 7.1 be from an application standpoint?\nWill applications linked with libraries from 7.0 be able to talk to\nthe 7.1 database? Any changes in library major versions? The other\nway? \n\nThe reason I'm asking is that Red Hat wants to maintain binary\ncompatibility in a for all x in y.x (that's what distribution\nnumbering means to us, other Linux distributions have other (and\nsometimes rather weird) schemes), but I'm also interested in upgrading\nthe database component.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "26 Oct 2000 10:20:32 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> The Hermit Hacker <scrappy@hub.org> writes:\n> \n> > On Wed, 25 Oct 2000, Holger Klawitter wrote:\n> > \n> > > Pawel Wegrzyn wrote:\n> > > > \n> > > > Hi,\n> > > > What is the latest version of PostgreSQL?\n> > > > Is there something like 7.1?\n> > > \n> > > The most recent version 7.0.2. 7.1 is about to come - I am looking\n> > > forward to it as well.\n> > \n> > 7.0.3 is about to come out, 7.1 is about 2 months away yet :)\n> \n> How compatible with 7.0 and 7.1 be from an application standpoint?\n> Will applications linked with libraries from 7.0 be able to talk to\n> the 7.1 database? Any changes in library major versions? The other\n> way? \n\nHistorically, all applications have been able to talk to newer servers,\nso a 6.4 client can talk to a 7.0 postmaster, and I believe 7.0 clients\ncan talk to 7.1 postmasters.\n\nWe usually do not go the other way, where 6.5 clients can not talk to\n6.4 postmasters. I believe 7.0->7.1 will be able to talk in any\n7.0.X/7.1 client and server combination.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Oct 2000 15:48:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Trond Eivind Glomsrød wrote:\n> > How compatible with 7.0 and 7.1 be from an application standpoint?\n> > Will applications linked with libraries from 7.0 be able to talk to\n> > the 7.1 database? Any changes in library major versions? The other\n> > way?\n \n> Historically, all applications have been able to talk to newer servers,\n> so a 6.4 client can talk to a 7.0 postmaster, and I believe 7.0 clients\n> can talk to 7.1 postmasters.\n \n> We usually do not go the other way, where 6.5 clients can not talk to\n> 6.4 postmasters. I believe 7.0->7.1 will be able to talk in any\n> 7.0.X/7.1 client and server combination.\n\nHe's meaning the libpq version for dynamic link loading. Is the\nlibpq.so lib changing versions (like the change from 6.5.x to 7.0.x\nchanged from libpq.so.2.0 to libpq.so.2.1, which broke binary RPM\ncompatibility for other RPM's linked against libpq.so.2.0, which failed\nwhen libpq.so.2.1 came on the scene). I think the answer is no, but I\nhaven't checked the details yet.\n\nNot just libpq, though -- libpgtcl.so has also been problematic.\n\nOf course, the file format on disk changes (again!), which is a whole\n'nother issue for RPM's......\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 26 Oct 2000 19:43:33 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I believe 7.0->7.1 will be able to talk in any\n> 7.0.X/7.1 client and server combination.\n\nThis should work as far as simple connectivity goes, but 7.0\napplications that do queries against system catalogs might find\nthat their queries need to be updated. For example, psql's backslash\ncommands didn't recognize views for awhile due to pg_class.relkind\nchanges.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 20:29:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?) "
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian wrote:\n> > Trond Eivind Glomsr?d wrote:\n> > > How compatible with 7.0 and 7.1 be from an application standpoint?\n> > > Will applications linked with libraries from 7.0 be able to talk to\n> > > the 7.1 database? Any changes in library major versions? The other\n> > > way?\n> \n> > Historically, all applications have been able to talk to newer servers,\n> > so a 6.4 client can talk to a 7.0 postmaster, and I believe 7.0 clients\n> > can talk to 7.1 postmasters.\n> \n> > We usually do not go the other way, where 6.5 clients can not talk to\n> > 6.4 postmasters. I believe 7.0->7.1 will be able to talk in any\n> > 7.0.X/7.1 client and server combination.\n> \n> He's meaning the libpq version for dynamic link loading. Is the\n> libpq.so lib changing versions (like the change from 6.5.x to 7.0.x\n> changed from libpq.so.2.0 to libpq.so.2.1, which broke binary RPM\n> compatibility for other RPM's linked against libpq.so.2.0, which failed\n> when libpq.so.2.1 came on the scene). I think the answer is no, but I\n> haven't checked the details yet.\n\nI usually up the .so version numbers before entering beta. That way,\nthey get marked as newer than older versions.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Oct 2000 23:11:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[Taken off GENERAL, added HACKERS to cc:]\n\nBruce Momjian wrote:\n> > He's meaning the libpq version for dynamic link loading. Is the\n> > libpq.so lib changing versions (like the change from 6.5.x to 7.0.x\n> > changed from libpq.so.2.0 to libpq.so.2.1, which broke binary RPM\n> > compatibility for other RPM's linked against libpq.so.2.0, which failed\n> > when libpq.so.2.1 came on the scene). I think the answer is no, but I\n> > haven't checked the details yet.\n \n> I usually up the .so version numbers before entering beta. That way,\n> they get marked as newer than older versions.\n\nMay I ask: is it necessary? Have there been version-bumping changes to\nlibpq since 7.0.x? (With the rate that necessary improvement is\nhappening to PostgreSQL, probably).\n\nLet me explain:\nRPM's contain a plethora of dependency information, some of which is\nadded manually, but most of which is generated automatically. These\ndependencies are based on which 'soname' is needed to satisfy dynamic\nlinking requirements, interpreter requirements, etc. With version\nnumbers as part of the name, a change in version numbers changes the\ndependency. \nUnfortunately RPM deems a dependency upon libpq.so.2.0 to not be\nfulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\nmight fail if 2.1 were to be loaded under it (hypothetically)).\n\nNow, that doesn't directly effect the PostgreSQL RPM's. What it does\neffect is the guy who wants to install PHP from with PostgreSQL support\nenabled and cannot because of a failed dependency. Who gets blamed?\nPostgreSQL.\n\nTrond may correct me on this, but I don't know of a workaround for\nthis. And any workaround has to be applied to packages that depend upon\nPostgreSQL, not to the PostgreSQL RPM's (which I would gladly modify) --\nalthough I am going to try something -- I know that a symlink to the old\nsoname works, even though it is a kludge and, IMO, stinks like a\npolecat.\n\nBut, enough rant. That _is_ I believe what Trond was asking about. We\nhave been bitten before with people installing the PHP from RedHat 6.2\nafter installing the PostgreSQL 7.0.x RPMset -- and dependency failures\nwreaked havoc.\n\nSo, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n\nActually, Bruce, it would do me and Trond a great favor if a list of\nwhat so's are getting bumped and to what version were to be posted. At\nleast we can plan for a transition at that point. \n\nI just hate to pull a threepeat on RedHat customers. (RH 5.0 shipped PG\n6.2.1. RH 5.1 shipped PG 6.3.2. BONG!) (RH 6.0 shipped 6.4.2 (bong!) RH\n6.1 shipped 6.5.2 (double BONG!)). RH 7 shipped 7.0.x (small bong) --\nRH 7.1 ships 7.1.x (ouch bong).\n\nWhew. Trond, you ready for this?\n\n[Note: I have been ill, so this message may be more incoherent than my\nnormal scattered self]\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 26 Oct 2000 23:48:56 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[ Blind CC to general added for comment below.]\n\n\n> [Taken off GENERAL, added HACKERS to cc:]\n> \n> Bruce Momjian wrote:\n> > > He's meaning the libpq version for dynamic link loading. Is the\n> > > libpq.so lib changing versions (like the change from 6.5.x to 7.0.x\n> > > changed from libpq.so.2.0 to libpq.so.2.1, which broke binary RPM\n> > > compatibility for other RPM's linked against libpq.so.2.0, which failed\n> > > when libpq.so.2.1 came on the scene). I think the answer is no, but I\n> > > haven't checked the details yet.\n> \n> > I usually up the .so version numbers before entering beta. That way,\n> > they get marked as newer than older versions.\n> \n> May I ask: is it necessary? Have there been version-bumping changes to\n> libpq since 7.0.x? (With the rate that necessary improvement is\n> happening to PostgreSQL, probably).\n\nNo, only major releases have bumps.\n\n> \n> But, enough rant. That _is_ I believe what Trond was asking about. We\n> have been bitten before with people installing the PHP from RedHat 6.2\n> after installing the PostgreSQL 7.0.x RPMset -- and dependency failures\n> wreaked havoc.\n> \n> So, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n> \n> Actually, Bruce, it would do me and Trond a great favor if a list of\n> what so's are getting bumped and to what version were to be posted. At\n> least we can plan for a transition at that point. \n\nSee pgsql/src/tools/RELEASE_CHANGES. I edit interfaces/*/Makefile and\nincrease the minor number for every interface by one.\n\n\n\nLet me add one thing on this RPM issue. There has been a lot of talk\nrecently about RPM's, and what they should do, and what they don't do,\nand who should be blamed. Unfortunately, much of the discussion has\nbeen very unproductive and more like 'venting'.\n\nI really don't appreciate people 'venting' on these lists, especially\nsince we have _nothing_ to do with RPM's. All we do is make the\nPostgreSQL software.\n\nIf people want to discuss RPM's on the ports list, or want to create a\nnew list just about RPM's, that's OK, but venting is bad, and venting on\na list that has nothing to do with RPM's is even worse.\n\nWhat would be good is for someone to constructively make a posting about\nthe known problems, and come up with acceptible solutions. Asking us to\nfix it really isn't going to help because we don't deal with RPM's here,\nand we don't have enough free time to make significant changes to meet\nthe needs of RPM's.\n\nAlso, remember we support many Unix platforms, and Linux is only one of\nthem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 00:55:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> might fail if 2.1 were to be loaded under it (hypothetically)).\n\nIf so, I claim RPM is broken.\n\nThe whole point of major/minor version numbering for .so's is that\na minor version bump is supposed to be binary-upward-compatible.\nIf the RPM stuff has arbitrarily decided that it won't honor that\ndefinition, why do we bother with multiple numbers at all?\n\n> So, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n\nTo answer your question, there are no pending changes in libpq that\nwould mandate a major version bump (ie, nothing binary-incompatible,\nAFAIK). We could ship it with the exact same version number, but then\nhow are people to tell whether they have a 7.0 or 7.1 libpq?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 10:54:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?) "
},
{
"msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> > fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> > might fail if 2.1 were to be loaded under it (hypothetically)).\n> \n> If so, I claim RPM is broken.\n> \n> The whole point of major/minor version numbering for .so's is that\n> a minor version bump is supposed to be binary-upward-compatible.\n> If the RPM stuff has arbitrarily decided that it won't honor that\n> definition, why do we bother with multiple numbers at all?\n> \n> > So, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n> \n> To answer your question, there are no pending changes in libpq that\n> would mandate a major version bump (ie, nothing binary-incompatible,\n> AFAIK). We could ship it with the exact same version number, but then\n> how are people to tell whether they have a 7.0 or 7.1 libpq?\n\nYes, we need to have new numbers so binaries from different releases use\nthe proper .so files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 11:41:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Lamar Owen wrote:\n> > May I ask: is it necessary? Have there been version-bumping changes to\n> > libpq since 7.0.x? (With the rate that necessary improvement is\n> > happening to PostgreSQL, probably).\n \n> No, only major releases have bumps.\n \n> See pgsql/src/tools/RELEASE_CHANGES. I edit interfaces/*/Makefile and\n> increase the minor number for every interface by one.\n\nThanks for the pointer.\n \n> Let me add one thing on this RPM issue. There has been a lot of talk\n> recently about RPM's, and what they should do, and what they don't do,\n> and who should be blamed. Unfortunately, much of the discussion has\n> been very unproductive and more like 'venting'.\n \n> I really don't appreciate people 'venting' on these lists, especially\n> since we have _nothing_ to do with RPM's. All we do is make the\n> PostgreSQL software.\n\n> What would be good is for someone to constructively make a posting about\n> the known problems, and come up with acceptible solutions. Asking us to\n> fix it really isn't going to help because we don't deal with RPM's here,\n> and we don't have enough free time to make significant changes to meet\n> the needs of RPM's.\n\nWhich is why I stepped up to the plate last year to help with RPM's.\n\nI apologize if you took my post (which I edited greatly) as 'venting' --\nit was not my intention to 'vent', much less offend. I just want to\nknow what to expect from the 7.1 release. I feel that that is germane\nto the Hackers list, as the knowledge necessary to answer the question\nis to be found on the list. (and you answered the question above).\n\nLike it or not, in the eyes of many people having solid RPM's is a core\nissue. If there are gotchas, I want to document them so people don't\nget blindsided. Or work around them. Or ask why the change is\nnecessary in the first place.\n\nI appreciate the fact that we are not here to make it easy for\ndistributors to package our software. I also appreciate the fact that\nif you don't at least make an effort to work with major distributors\n(and RedHat, TurboLinux, Caldera, and SuSE together comprise a major\nuserbase) that you run the risk of not being distributed in favor of an\ninferior product.\n\nI also appreciate and applaud the cross-platform mentality of the\nPostgreSQL developers. Linux is far from the only OS to be supported by\nPostgreSQL, true. But Linux is also the most popular OS for PostgreSQL\ndeployment.\n\nHowever, there are known problems that can bite people who are not using\nRPM's and are not running Linux. Some of those problems are such that\nit will take someone with more knowledge than I currently possess to\nsolve. One is the issue of upgrading/migrating tools. This is not an\nRPM-specific issue. To me, that is the only big issue that I have\nspoken about in a way that could even remotely be construed as\n'venting'. And it is not a Linux-specific issue. It is a core issue.\n\nI'll shut up now, as I have cross-distribution RPM problems to solve.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 12:34:12 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> > What would be good is for someone to constructively make a posting about\n> > the known problems, and come up with acceptable solutions. Asking us to\n> > fix it really isn't going to help because we don't deal with RPM's here,\n> > and we don't have enough free time to make significant changes to meet\n> > the needs of RPM's.\n> \n> Which is why I stepped up to the plate last year to help with RPM's.\n> \n> I apologize if you took my post (which I edited greatly) as 'venting' --\n> it was not my intention to 'vent', much less offend. I just want to\n> know what to expect from the 7.1 release. I feel that that is germane\n> to the Hackers list, as the knowledge necessary to answer the question\n> is to be found on the list. (and you answered the question above).\n\nNo, I was not pointing to you when I mentioned venting. There have been\nother RPM threads lately. I just want information on how to make things\nbetter for RPM's, not vents.\n\n> Like it or not, in the eyes of many people having solid RPM's is a core\n> issue. If there are gotchas, I want to document them so people don't\n> get blindsided. Or work around them. Or ask why the change is\n> necessary in the first place.\n\nSure.\n\n> I appreciate the fact that we are not here to make it easy for\n> distributors to package our software. I also appreciate the fact that\n> if you don't at least make an effort to work with major distributors\n> (and RedHat, TurboLinux, Caldera, and SuSE together comprise a major\n> userbase) that you run the risk of not being distributed in favor of an\n> inferior product.\n\nLet them. It is their decision. Frankly, I have seen this attitude\nbefore, and I don't like it. Just the mention that \"Gee, if you don't\ncooperate, we may yank you,\" is really a veiled threat. Now, I know you\naren't saying that, but the \"if you don't play nice, we will drop you\"\nargument sounds a lot more like MS that a Linux vendor should be acting,\nespecially since they are not telling us what they want or assisting in\nthe work.\n\nThe \"We are big. Just fix it and let us know when it is ready\" attitude\ndoes not work here, and that is what I am hearing mostly from the RPM\npeople.\n\n> I also appreciate and applaud the cross-platform mentality of the\n> PostgreSQL developers. Linux is far from the only OS to be supported by\n> PostgreSQL, true. But Linux is also the most popular OS for PostgreSQL\n> deployment.\n\nTrue, it is the most popular, but that doesn't make the others less\nimportant. \n\nThis whole statement comes across as, \"You run on Linux, and look, you\ntook the time to run on other OS's too. How quaint.\"\n\nIn the history of this project, Linux was an after-thought. None of our\nplatforms are inferior or superior, except to the extent that the\nplatform does not support Unix standard functions (like NT/Cygwin).\n\n> However, there are known problems that can bite people who are not using\n> RPM's and are not running Linux. Some of those problems are such that\n> it will take someone with more knowledge than I currently possess to\n> solve. One is the issue of upgrading/migrating tools. This is not an\n> RPM-specific issue. To me, that is the only big issue that I have\n> spoken about in a way that could even remotely be construed as\n> 'venting'. And it is not a Linux-specific issue. It is a core issue.\n\nAgain, your comments where quite helpful. We need more of them. We\nneed to hear more about the problems people are having with RPM's, and\nhow to make them better.\n\nThere must be a list of known problems. Let's hear them, so we can try\nto solve them as a group. However, in general, we do not make dramatic\nchange to work around OS bugs, and do not plan to make major changes to\nwork around the limitations of RPM's. My bet is that some middle layer\ncan be created that will fix that for us.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 13:05:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Let them. It is their decision. Frankly, I have seen this attitude\n> before, and I don't like it. Just the mention that \"Gee, if you don't\n> cooperate, we may yank you,\" is really a veiled threat. Now, I know you\n> aren't saying that, but the \"if you don't play nice, we will drop you\"\n> argument sounds a lot more like MS that a Linux vendor should be acting,\n> especially since they are not telling us what they want or assisting in\n> the work.\n\nFWIW, I've never threatened to do so. If I wanted to, I would just do\nit[1] - threats are bad and never cause anything but bad feelings.\n\nThat being said, my favorite wishes (in addition to as much SQL\ncompliance and performance as possible, of course) are:\n\n* migration on upgrade\n* old libraries being able to speak to newer databases, so old\n binaries can continue working after database upgrades\n* good sonames on libraries - if a library hasn't changed, bumping the\n number to show it's part of a new version isn't necesarry. If it is\n backwards compatible, just bump the minor version, if it isn't, bump\n the major version. Or even better, use versioned symbols (I don't\n know how many other OSes than Linux and Solaris supports this,\n though). \n\nAs for assisting, at least Red Hat contributes to a lot of projects,\nsome of which are important to postgres on one or more platforms: gdb,\ngcc, glibc and the linux kernel. There just isn't enough resources to\ndo everything, but I try to help out with the RPMs.\n\nWhen we make patches for packages, we try to cooperate with the\nauthor(s) to get them in - happily, we haven't had much of a need for\nthat with postgresql.\n\n> The \"We are big. Just fix it and let us know when it is ready\" attitude\n> does not work here, and that is what I am hearing mostly from the RPM\n> people.\n\nI haven't heard anyone say that.\n\n> There must be a list of known problems. Let's hear them, so we can try\n> to solve them as a group. However, in general, we do not make dramatic\n> change to work around OS bugs, and do not plan to make major changes to\n> work around the limitations of RPM's.\n\nI don't think there are any apart from the upgrade issues - if library\nversioning follows the standard, that certainly won't be a problem.\n\n\n[1] which I'm not even close to doing - I've spent a bit of time lately\nhunting down aliasing bugs in MySQL which causes wrong SQL query\nresults if compiled with \"-O2\". Ouch.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 14:54:54 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> > fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> > might fail if 2.1 were to be loaded under it (hypothetically)).\n\nYou link against libpq.so.2 , not libpq.so.2.1. This isn't a problem.\n\n> If the RPM stuff has arbitrarily decided that it won't honor that\n> definition, why do we bother with multiple numbers at all?\n\nThere is no such problem.\n \n> > So, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n> \n> To answer your question, there are no pending changes in libpq that\n> would mandate a major version bump (ie, nothing binary-incompatible,\n> AFAIK). We could ship it with the exact same version number, but then\n> how are people to tell whether they have a 7.0 or 7.1 libpq?\n\nIf there isn't any changes, why bump it? \n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 14:57:39 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > How compatible with 7.0 and 7.1 be from an application standpoint?\n> > Will applications linked with libraries from 7.0 be able to talk to\n> > the 7.1 database? Any changes in library major versions? The other\n> > way? \n> \n> Historically, all applications have been able to talk to newer servers,\n> so a 6.4 client can talk to a 7.0 postmaster, and I believe 7.0 clients\n> can talk to 7.1 postmasters.\n\nGreat - that was what I wanted to know.\n\n> We usually do not go the other way, where 6.5 clients can not talk to\n> 6.4 postmasters. I believe 7.0->7.1 will be able to talk in any\n> 7.0.X/7.1 client and server combination.\n\nThanks!\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 14:58:55 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> Bruce Momjian wrote:\n> > Trond Eivind Glomsrød wrote:\n> > > How compatible with 7.0 and 7.1 be from an application standpoint?\n> > > Will applications linked with libraries from 7.0 be able to talk to\n> > > the 7.1 database? Any changes in library major versions? The other\n> > > way?\n> \n> > Historically, all applications have been able to talk to newer servers,\n> > so a 6.4 client can talk to a 7.0 postmaster, and I believe 7.0 clients\n> > can talk to 7.1 postmasters.\n> \n> > We usually do not go the other way, where 6.5 clients can not talk to\n> > 6.4 postmasters. I believe 7.0->7.1 will be able to talk in any\n> > 7.0.X/7.1 client and server combination.\n> \n> He's meaning the libpq version for dynamic link loading. \n\nNot only - I'm interested in both issues.\n\n> Is the libpq.so lib changing versions (like the change from 6.5.x to\n> 7.0.x changed from libpq.so.2.0 to libpq.so.2.1, which broke binary\n> RPM compatibility for other RPM's linked against libpq.so.2.0, which\n> failed when libpq.so.2.1 came on the scene\n\nHuh? Shouldn't happen.\n\n> Not just libpq, though -- libpgtcl.so has also been problematic.\n\nI don't think we ship that as a dynamic library.\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 15:01:21 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> might fail if 2.1 were to be loaded under it (hypothetically)).\n> \n> Now, that doesn't directly effect the PostgreSQL RPM's. What it does\n> effect is the guy who wants to install PHP from with PostgreSQL support\n> enabled and cannot because of a failed dependency. Who gets blamed?\n> PostgreSQL.\n> \n> Trond may correct me on this, but I don't know of a workaround for\n> this. \n\nThere usually are no such problems, and I'm not aware of any specific\nto postgresql either.\n \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 15:03:33 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Let them. It is their decision. Frankly, I have seen this attitude\n> > before, and I don't like it. Just the mention that \"Gee, if you don't\n> > cooperate, we may yank you,\" is really a veiled threat. Now, I know you\n> > aren't saying that, but the \"if you don't play nice, we will drop you\"\n> > argument sounds a lot more like MS that a Linux vendor should be acting,\n> > especially since they are not telling us what they want or assisting in\n> > the work.\n> \n> FWIW, I've never threatened to do so. If I wanted to, I would just do\n> it[1] - threats are bad and never cause anything but bad feelings.\n\nSounds good.\n\n> \n> That being said, my favorite wishes (in addition to as much SQL\n> compliance and performance as possible, of course) are:\n\nOK, here is a nice list I can address constructively:\n\n> \n> * migration on upgrade\n\nYes, we have problems. 7.1 will have a more robust pg_dump, and I hope\nthat fixes most of those problems. As you see issues here, we need to\nhear about it.\n\n> * old libraries being able to speak to newer databases, so old\n> binaries can continue working after database upgrades\n\nWe have always been able to do that. Old clients can talk to newer\ndatabases, though new can't necessary talk to older. To the extent that\nthe client assumes a particular database structure, we have problems\nthere, especially psql. I most cases, those match the server, so I\nthink we are OK here, and most 3-rd party stuff doesn't touch the system\ntables in areas that are changed frequently.\n\n> * good sonames on libraries - if a library hasn't changed, bumping the\n> number to show it's part of a new version isn't necesarry. If it is\n> backwards compatible, just bump the minor version, if it isn't, bump\n> the major version. 
Or even better, use versioned symbols (I don't\n> know how many other OSes than Linux and Solaris supports this,\n> though). \n\nWe only bump minor .so numbers, except for 6.5 I think where we had a\nmajor overhaul of libpq and the major was bumped. I don't see another\nmajor bump on the horizon. I don't think we have ever shipped a server\nthat could not talk to clients at least one major revison backwards.\n\nThe big question is how RPM's handle that. I have no idea.\n\n> As for assisting, at least Red Hat contributes to a lot of projects,\n> some of which are important to postgres on one or more platforms: gdb,\n> gcc, glibc and the linux kernel. There just isn't enough resources to\n> do everything, but I try to help out with the RPMs.\n\nWe only need help to the extent RPM people are asking for major feature\nadditions that affect only RPM/Linux, and frankly, the RPM/Linux users\nshould be supplying patches to us for that anyway. No need for the\ncompany to get involved.\n\n> When we make patches for packages, we try to cooperate with the\n> author(s) to get them in - happily, we haven't had much of a need for\n> that with postgresql.\n\nYes, I have never seen one.\n\n> > The \"We are big. Just fix it and let us know when it is ready\" attitude\n> > does not work here, and that is what I am hearing mostly from the RPM\n> > people.\n> \n> I haven't heard anyone say that.\n\nSome of the RPM users have made some demands that sound a little like\nthat. :-)\n\n> > There must be a list of known problems. Let's hear them, so we can try\n> > to solve them as a group. 
However, in general, we do not make dramatic\n> > change to work around OS bugs, and do not plan to make major changes to\n> > work around the limitations of RPM's.\n> \n> I don't think there are any apart from the upgrade issues - if library\n> versioning follows the standard, that certainly won't be a problem.\n\nI would love to get a detailed list of upgrade problems so we can be\nsure 7.1 has them fixed. Certainly 7.1 is already a big improvement for\nupgrades.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 15:15:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> > To answer your question, there are no pending changes in libpq that\n> > would mandate a major version bump (ie, nothing binary-incompatible,\n> > AFAIK). We could ship it with the exact same version number, but then\n> > how are people to tell whether they have a 7.0 or 7.1 libpq?\n> \n> If there isn't any changes, why bump it? \n\nThis is huge software. There are changes to every library in every\nmajor release, major for us meaning, i.e., 7.0->7.1. That is why I bump\nthe numbers.\n\nThe interesting issue is that the version number changes for .so do\n_not_ mean they only talk with servers of the same release. They will\ntalk to future servers of higher release numbers. This is done because\nthere is a backend protocol number that is passed from client to server\nwhich determines how the server should behave with that client.\n\nWe can't always have new clients talking to older servers because the\nold servers may not know the newer protocol. We could get fancy and\ntrade version numbers and try to get it working, but it has not been a\npriority, and few have asked for it. Having old clients talking to new\ndatabases has been enough for most users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 15:21:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> > fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> > might fail if 2.1 were to be loaded under it (hypothetically)).\n> \n> If so, I claim RPM is broken.\n> \n> The whole point of major/minor version numbering for .so's is that\n> a minor version bump is supposed to be binary-upward-compatible.\n> If the RPM stuff has arbitrarily decided that it won't honor that\n> definition, why do we bother with multiple numbers at all?\n> \n> > So, PostgreSQL 7.1 is slated to be libpq.so.2.2, then?\n> \n> To answer your question, there are no pending changes in libpq that\n> would mandate a major version bump (ie, nothing binary-incompatible,\n> AFAIK). We could ship it with the exact same version number, but then\n> how are people to tell whether they have a 7.0 or 7.1 libpq?\n\nAnd that is a very good point. Hey, I'm caught in the middle here :-).\nI want to see PostgreSQL succeed and excel (which, to me, means becoming\nthe RDBMS of choice) on RPM-based Linux distributions, which I am sure\nis a goal of others too. And I'm sure no one here is against that.\n\nBut, there is friction between RedHat's (to use the first example of a\ndistributor to pop into my head) needs and the needs of the PostgreSQL\ngroup.\n\nMy gut feel is that RedHat may be better off shipping 7.0.x if the\nlibrary version numbers are a contributory problem. The data upgrade\nproblem is a bigger problem. To which RedHat might just want to stay at\n7.0.x until either a tool is written to painlessly migrate or until the\nnext major RedHat is released.\n\nOf course, that doesn't affect what I do as far as building 7.1 RPM's\nfor distribution from the PostgreSQL site (or by anyone who so desires\nto distribute them). I have no choice for my own self but to stay on\nthe curve. 
I need TOAST and OUTER JOINS too much.\n\nSo, what I feel may be the best compromise is for RedHat (and myself) to\ncontinue building 7.0.x RPM's with bugfixes, etc, while I build 7.1 ad\nsubsequent RPMset's for those who know what they're doing and not\nblindly upgrading their systems.\n\nTrond, do you have any comments on that? Or is the likely migration to\nkernel 2.4 in the next RedHat going to make a compatability compromise\nhere moot?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 15:30:34 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> My gut feel is that RedHat may be better off shipping 7.0.x if the\n> library version numbers are a contributory problem. \n\nWe could provide compat-packages with just neeeded libraries.\n\n> The data upgrade problem is a bigger problem. To which RedHat might\n> just want to stay at 7.0.x until either a tool is written to\n> painlessly migrate or until the next major RedHat is released.\n\nWe upgrade everything from 3.0.3 (we no longer support upgrades from\n2.0 as we couldn't find a specific way to identify such a system and\nwe didn't want accidentaly upgrade other distributions), so there is\npain anyway.\n\n> Of course, that doesn't affect what I do as far as building 7.1 RPM's\n> for distribution from the PostgreSQL site (or by anyone who so desires\n> to distribute them). I have no choice for my own self but to stay on\n> the curve. I need TOAST and OUTER JOINS too much.\n\nOthers very likely have the same need. I'll be looking into issues\nwith these later.\n \n> So, what I feel may be the best compromise is for RedHat (and myself) to\n> continue building 7.0.x RPM's with bugfixes, etc, while I build 7.1 ad\n> subsequent RPMset's for those who know what they're doing and not\n> blindly upgrading their systems.\n\n> Trond, do you have any comments on that? 
Or is the likely migration to\n> kernel 2.4 in the next RedHat going to make a compatability compromise\n> here moot?\n\nNo, the 2.4 kernel should go right in - I've been using it extensively\non my system until recently (the most recent pretest has problems with\nflock for sendmail).\n\n\nAnyway, I've had a look at psql in objdump:\n\nDynamic Section:\n  NEEDED      libpq.so.2.1\n  NEEDED      libcrypt.so.1\n  NEEDED      libnsl.so.1\n  NEEDED      libdl.so.2\n  NEEDED      libm.so.6\n  NEEDED      libutil.so.1\n  NEEDED      libreadline.so.4.1\n  NEEDED      libtermcap.so.2\n  NEEDED      libncurses.so.5\n  NEEDED      libc.so.6\n\n[...]\n\nIt links against nice, round versions of most libraries but wants\nspecific versions of readline ad libpq.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "27 Oct 2000 15:42:15 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n \n> > I appreciate the fact that we are not here to make it easy for\n> > distributors to package our software. I also appreciate the fact that\n> > if you don't at least make an effort to work with major distributors\n> > (and RedHat, TurboLinux, Caldera, and SuSE together comprise a major\n> > userbase) that you run the risk of not being distributed in favor of an\n> > inferior product.\n \n> Let them. It is their decision. Frankly, I have seen this attitude\n> before, and I don't like it. Just the mention that \"Gee, if you don't\n> cooperate, we may yank you,\" is really a veiled threat.\n\nI don't even see it as a veiled threat, Bruce. It simply _is_ a\nthreat. There are other RDBMS choices. Currently PostgreSQL is the\nOfficially Sanctioned RDBMS for multiple Linux distributions. As our\ncapabilities increase, it will make us more and more attractive as the\nChoice, Top Shelf Open Source RDBMS.\n\nHowever, the upgrade gotcha has left a very bitter taste in more than\none user's mouth. I'll not say more about that now, as I've said quite\nenough in the past. And I'm still trying to figure out enough of the\ninternals of the storage manager to try to write the migration tools\nmyself. But, I have other fish to fry right now, the biggest being\ncross-distribution RPM's.\n\n> > Linux is far from the only OS to be supported by\n> > PostgreSQL, true. But Linux is also the most popular OS for PostgreSQL\n> > deployment.\n \n> True, it is the most popular, but that doesn't make the others less\n> important.\n\nNo, it doesn't. \n \n> This whole statement comes across as, \"You run on Linux, and look, you\n> took the time to run on other OS's too. How quaint.\"\n \nI ran Unix before there was linux. I ran Unix years before Linus was\neven out of High School. Well, that is if you count Tandy Xenix V7 and\nSystem III as Unix. Or AT&T 3B1 SysVR2. Or Apollo DomainOS SR10.2. Or\nUltrix on a VAX 11/750 (running in tandem with VMS). 
And I'm considering\nmoving my most critical public servers from Linux over to OpenBSD. A\nLinux bigot I'm not.\n \n> > However, there are known problems that can bite people who are not using\n> > RPM's and are not running Linux. Some of those problems are such that\n> > it will take someone with more knowledge than I currently possess to\n \n> Again, your comments where quite helpful. We need more of them. We\n> need to hear more about the problems people are having with RPM's, and\n> how to make them better.\n\nBruce, sometimes I fear my own lack of communications skills. If I can\nmake my wife fighting mad at me with me having no clue as to what I said\nthat made her mad, I fear I can make anyone mad, without knowing what I\nsaid to do so. So, I guess you could say I'm a little paranoid about my\ncommunications skills. So, I'm glad you considered my comments helpful\n-- I was beginning to get worried.\n \n> There must be a list of known problems. Let's hear them, so we can try\n> to solve them as a group. However, in general, we do not make dramatic\n> change to work around OS bugs, and do not plan to make major changes to\n> work around the limitations of RPM's. My bet is that some middle layer\n> can be created that will fix that for us.\n\nMeet Mr. Middle Layer. :-) The PostgreSQL spec file that controls the\nRPM build is one of the most complex ones in the RedHat distribution,\nAFAIK. There's the middle layer. It does quite a bit of finagling\nalready.\n\nAnd the work that Peter E is doing is helping my cause significantly.\n\nBruce, when I recover fully from the illness I've had the last few days,\nI'll try to come up with a coherent listing of what I've had to work\naround in the past. My current headache won't let me think straight\nright now, which makes it likely that I won't effectively communicate\nthe issues.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 15:51:25 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[BCC to Hackers -- cc: to PORTS, as, as Bruce correctly pointed out,\nthat's where this discussion belongs.]\n\nTrond Eivind Glomsr�d wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > My gut feel is that RedHat may be better off shipping 7.0.x if the\n> > library version numbers are a contributory problem.\n \n> We could provide compat-packages with just neeeded libraries.\n\nYes, we could do that. And those libs could possibly just be the\nsymlinks (or even just a Provides: header).\n \n> We upgrade everything from 3.0.3 (we no longer support upgrades from\n> 2.0 as we couldn't find a specific way to identify such a system and\n> we didn't want accidentaly upgrade other distributions), so there is\n> pain anyway.\n\nI tried going from 4.1 (the earliest one I have installation CD's for)\nto pre-7.0 once. I don't recommend it.\n \n> > Of course, that doesn't affect what I do as far as building 7.1 RPM's\n> > for distribution from the PostgreSQL site (or by anyone who so desires\n> > to distribute them). I have no choice for my own self but to stay on\n> > the curve. I need TOAST and OUTER JOINS too much.\n \n> Others very likely have the same need. I'll be looking into issues\n> with these later.\n\nGood. Let me know what you decide, if you don't mind.\n \n> Anyway, I've had a look at psql in objdump:\n \n> Dynamic Section:\n> NEEDED libpq.so.2.1\n> NEEDED libreadline.so.4.1\n> [...]\n \n> It links against nice, round versions of most libraries but wants\n> specific versions of readline ad libpq.\n\nAnd unfortunately PHP and other PostgreSQL clients also link against the\nspecific libpq version. This has caused pain for those installing the\nPHP stuff from RPM which was linked against a RedHat 6.2 box with\nPostgreSQL 6.5.3 installed -- onto a RedHat 6.2 box with PostgreSQL\n7.0.2 installed. 
There is a failed dependency on libpq.so.2.0 -- even\nthough libpq.so.2.1 is there.\n\nA symlink works around the problem, if the symlink is part of the RPM so\nthat it gets in the rpm dep database. Of course, this only causes\nproblems with RedHat 6.2 and earlier, as RH 7's PHP stuff was built\nagainst 7.0.2 to start with. But, 7.1 with libpq.so.2.2 will cause\nsimilar dep failures for PHP packages built against 7.0.2.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 16:03:19 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re:RPM dependencies (Was: 7.0 vs. 7.1 (was: latest version?))"
},
{
"msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Unfortunately RPM deems a dependency upon libpq.so.2.0 to not be\n> > fulfilled by libpq.so.2.1 (how _can_ it know? A client linked to 2.0\n> > might fail if 2.1 were to be loaded under it (hypothetically)).\n\n> There usually are no such problems, and I'm not aware of any specific\n> to postgresql either.\n\nThere have been reports to the pgsql-bugs list and to the PHP list about\nthis very issue.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 16:04:39 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> However, the upgrade gotcha has left a very bitter taste in more than\n> one user's mouth. I'll not say more about that now, as I've said quite\n> enough in the past. And I'm still trying to figure out enough of the\n> internals of the storage manager to try to write the migration tools\n> myself. But, I have other fish to fry right now, the biggest being\n> cross-distribution RPM's.\n\nActually, I would prefer to see how we can improve what we have before\nmaking a binary conversion utility that will have to be updated for\nevery release.\n\n\n> Meet Mr. Middle Layer. :-) The PostgreSQL spec file that controls the\n> RPM build is one of the most complex ones in the RedHat distribution,\n> AFAIK. There's the middle layer. It does quite a bit of finagling\n> already.\n\nYes, I suspected the RPM was the middle layer. To the extent we can\nmake that easier, let's hear it. Tell us what you need to do, and what\nyou can't do, and see if any of us can figure out how to make things\neasier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 18:06:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> And unfortunately PHP and other PostgreSQL clients also link against the\n> specific libpq version. This has caused pain for those installing the\n> PHP stuff from RPM which was linked against a RedHat 6.2 box with\n> PostgreSQL 6.5.3 installed -- onto a RedHat 6.2 box with PostgreSQL\n> 7.0.2 installed. There is a failed dependency on libpq.so.2.0 -- even\n> though libpq.so.2.1 is there.\n> \n> A symlink works around the problem, if the symlink is part of the RPM so\n> that it gets in the rpm dep database. Of course, this only causes\n> problems with RedHat 6.2 and earlier, as RH 7's PHP stuff was built\n> against 7.0.2 to start with. But, 7.1 with libpq.so.2.2 will cause\n> similar dep failures for PHP packages built against 7.0.2.\n\nFor us, it would be great if libpq.so.2.1 linked against the\nlibpq.so.2.1, libpq.so.2.2, but not libpq.so.2.0. I would guess other\napps need this ability too. How do they handle it?\n\nI saw someone installing pgaccess from RPM. It wanted tcl/tk 8.0, and\nthey had tcl/tk 8.3 installed, and it failed. Seems this is a common\nRPM problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 18:15:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:RPM dependencies (Was: 7.0 vs. 7.1 (was: latest\n\tversion?))"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > However, the upgrade gotcha has left a very bitter taste in more than\n> > one user's mouth. I'll not say more about that now, as I've said quite\n> > enough in the past. And I'm still trying to figure out enough of the\n> > internals of the storage manager to try to write the migration tools\n> > myself. But, I have other fish to fry right now, the biggest being\n> > cross-distribution RPM's.\n> \n> Actually, I would prefer to see how we can improve what we have before\n> making a binary conversion utility that will have to be updated for\n> every release.\n> \n> > Meet Mr. Middle Layer. :-) The PostgreSQL spec file that controls the\n> > RPM build is one of the most complex ones in the RedHat distribution,\n> > AFAIK. There's the middle layer. It does quite a bit of finagling\n> > already.\n> \n> Yes, I suspected the RPM was the middle layer. To the extent we can\n> make that easier, let's hear it. Tell us what you need to do, and what\n> you can't do, and see if any of us can figure out how to make things\n> easier.\n\nOk, here goes:\n*\tLocation-agnostic installation. Documentation (which I'll be happy to\ncontribute) on that. Peter E is already working in this area. Getting\nthe installation that 'make install' spits out massaged into an FHS\ncompliant setup is the majority of the RPM's spec file.\n\n*\tUpgrades that don't require an ASCII database dump for migration. This\ncan either be implemented as a program to do a pg_dump of an arbitrary\nversion of data, or as a binary migration utility. Currently, I'm\nsaving old executables to run under a special environment to pull a dump\n-- but it is far from optimal. What if the OS upgrade behind 99% of the\nupgrades makes it where those old executables can't run due to binary\nincompatibility (say I'm going from RedHat 3.0.3 to RedHat 7 -- 3.0.3,\nIIRC, as a.out...( and I know 3.0.3 didn't have PostgreSQL RPMs).)? 
\nWhat I could actually do to prevent that problem is build all of\nPostgreSQL's 6.1.x, 6.2.x, 6.3.x, 6.4.x, and 6.5.x and include the\nnecessary backend executables as part of the RPM.... But I think you see\nthe problem there. However, that would in my mind be better than the\ncurrent situation, albeit taking up a lot of space.\n\n*\tA less source-centric mindset. Let's see, how to explain? The\nregression tests are a good example. You need make. You need the source\ninstalled, configured, and built in the usual location. You need\nportions of contrib. RPM's need to be installable on compiler-crippled\nservers for security. While the demand for regression testing on such a\nbox may not be there, it certainly does give the user something to use\nto get standard output for bug reports. As a point, I run PostgreSQL in\nproduction on a compilerless machine. No compiler == more security. \nAnd Linux has enough security problems without a compiler being\navailable :-(. Oh, and I have no make on that machine either.\n\nThe documentation as well as many of the examples assume too much, IMHO,\nabout the install location and the install methodology.\n\nI think I may have a solution for the library versioning problem. \nRather than symlink libpq.so->libpq.so.2->libpq.so.2.x, I'll copy\nlibpq.so.2.1 to libpq.so.2 and symlink libpq.so to that. A little more\ncode for me. There is no real danger in version confusion with RPM's\nversioning and upgrade methodology, as long as you consistently use the\nRPMset. The PostgreSQL version number is readily found from an RPM\ndatabase query, making the so version immaterial.\n\nThe upgrade issue is the hot trigger for me at this time. It is and has\nbeen a major drain on my time and effort, as well as Trond's and others,\nto get the RPM upgrade working even remotely smoothly. And I am willing\nto code -- once I know how to go about doing it in the backend.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 18:25:41 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > A symlink works around the problem, if the symlink is part of the RPM so\n> > that it gets in the rpm dep database. Of course, this only causes\n> > problems with RedHat 6.2 and earlier, as RH 7's PHP stuff was built\n> > against 7.0.2 to start with. But, 7.1 with libpq.so.2.2 will cause\n> > similar dep failures for PHP packages built against 7.0.2.\n \n> For us, it would be great if libpq.so.2.1 linked against the\n> libpq.so.2.1, libpq.so.2.2, but not libpq.so.2.0. I would guess other\n> apps need this ability too. How do they handle it?\n\nIf I were doing manual dependencies for the other packages, I would say:\nRequires: libpq.so => 2.1\n\nNo as to whether that works or not, I don't know. I know it won't work\nwith RPM prior to 3.0.4 or so.\n \n> I saw someone installing pgaccess from RPM. It wanted tcl/tk 8.0, and\n> they had tcl/tk 8.3 installed, and it failed. Seems this is a common\n> RPM problem.\n\nWell, actually, there are times you might not want greater than a\ncertain version. And you as a packager can make certain dependency\nrequirements manually. However, this libpq.so.2.0 vs 2.1 failure was an\nautomatic dependency.\n\nAnd, really, RPM shouldn't allow it for automatic requires. Suppose I\nhave an ancient client RPM that I want to install. Assuming for one\nsecond that nothing else has changed on the system except the PostgreSQL\nversion, if the client was built against PostgreSQL 6.2.1 with\nlibpq.so.1, and I force the install of it even though libpq.so.2 is\ninstalled, freakish things can happen. Been there and done that -- a\nclient linked against Postgres95 1.0.1 did really strange things when\nlibpq.so.2 was link loaded under it.\n\nWorse things happen if you have a package that requires tcl 7.4 and you\nhave tcl 8.3.2 installed.\n\nNot everyone is as generous as we are with upwards compatibility.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 18:33:49 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:RPM dependencies (Was: 7.0 vs. 7.1 (was: latest\n\tversion?))"
},
{
"msg_contents": "> Ok, here goes:\n\nCool, a list.\n\n> *\tLocation-agnostic installation. Documentation (which I'll be happy to\n> contribute) on that. Peter E is already working in this area. Getting\n> the installation that 'make install' spits out massaged into an FHS\n> compliant setup is the majority of the RPM's spec file.\n\nWell, we certainly don't want to make changes that make things harder or\nmore confusing for non-RPM installs. How are they affected here?\n\n> *\tUpgrades that don't require an ASCII database dump for migration. This\n> can either be implemented as a program to do a pg_dump of an arbitrary\n> version of data, or as a binary migration utility. Currently, I'm\n> saving old executables to run under a special environment to pull a dump\n> -- but it is far from optimal. What if the OS upgrade behind 99% of the\n> upgrades makes it where those old executables can't run due to binary\n> incompatibility (say I'm going from RedHat 3.0.3 to RedHat 7 -- 3.0.3,\n> IIRC, as a.out...( and I know 3.0.3 didn't have PostgreSQL RPMs).)? \n> What I could actually do to prevent that problem is build all of\n> PostgreSQL's 6.1.x, 6.2.x, 6.3.x, 6.4.x, and 6.5.x and include the\n> necessary backend executables as part of the RPM.... But I think you see\n> the problem there. However, that would in my mind be better than the\n> current situation, albeit taking up a lot of space.\n\nI really don't see the issue here. We can compress ASCII dump files, so\nthe space need should not be too bad. Can't you just check to see if\nthere is enough space, and error out if there is not? If the 2GIG limit\nis a problem, can't the split utility drop the files in <2gig chunks\nthat can be pasted together in a pipe on reload?\n\n> *\tA less source-centric mindset. Let's see, how to explain? The\n> regression tests are a good example. You need make. You need the source\n> installed, configured, and built in the usual location. You need\n> portions of contrib. 
RPM's need to be installable on compiler-crippled\n> servers for security. While the demand for regression testing on such a\n> box may not be there, it certainly does give the user something to use\n> to get standard output for bug reports. As a point, I run PostgreSQL in\n> production on a compilerless machine. No compiler == more security. \n> And Linux has enough security problems without a compiler being\n> available :-(. Oh, and I have no make on that machine either.\n\nWell, no compiler? I can't see how we would do that without making\nother OS installs harder. That is really the core of the issue. We\ncan't be making changes that make things harder for other OS's. Those\nhave to be isolated in the RPM, or in some other middle layer.\n\n\n> \n> The documentation as well as many of the examples assume too much, IMHO,\n> about the install location and the install methodology.\n\nWell, if we are not specific, things get very confusing for those other\nOS's. Being specific about locations makes things easier. Seems we may\nneed to patch RPM installs to fix that. Certainly a pain, but I see no\nother options.\n\n> \n> I think I may have a solution for the library versioning problem. \n> Rather than symlink libpq.so->libpq.so.2->libpq.so.2.x, I'll copy\n> libpq.so.2.1 to libpq.so.2 and symlink libpq.so to that. A little more\n> code for me. There is no real danger in version confusion with RPM's\n> versioning and upgrade methodology, as long as you consistently use the\n> RPMset. The PostgreSQL version number is readily found from an RPM\n> database query, making the so version immaterial.\n\nOh, that is good.\n\n> \n> The upgrade issue is the hot trigger for me at this time. It is and has\n> been a major drain on my time and effort, as well as Trond's and others,\n> to get the RPM upgrade working even remotely smoothly. 
And I am willing\n> to code -- once I know how to go about doing it in the backend.\n\nPlease give us more information about how the current upgrade is a\nproblem. We don't hear that much from other OS's. How are RPM's\nspecific, and maybe we can get a plan for a solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 18:36:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
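Bruce's suggestion in the message above, breaking a compressed dump into chunks below the 2 GB file-size limit with split and pasting them back together in a pipe on reload, can be sketched as a shell pipeline. The pg_dump and psql invocations appear only as comments (they need a running server, and the chunk size and file names are illustrative); the runnable part demonstrates the split/reassemble round trip on ordinary data.

```shell
# Dump side: compress and break into chunks under the 2 GB limit
# (the 1900m chunk size and the file names are illustrative):
#   pg_dump mydb | gzip | split -b 1900m - /backup/mydb.dump.gz.
# Reload side: paste the chunks back together in a pipe:
#   cat /backup/mydb.dump.gz.* | gunzip | psql mydb

# Demonstration of the split/reassemble round trip with ordinary data:
seq 1 10000 > original.txt
gzip -c original.txt > dump.gz
split -b 4k dump.gz chunk.        # writes chunk.aa, chunk.ab, ...
cat chunk.* | gunzip > restored.txt
cmp original.txt restored.txt && echo "round trip ok"
```

The shell glob expands the chunk names in lexical order, which matches the order split created them in, so no manifest file is needed.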
{
"msg_contents": "> And, really, RPM shouldn't allow it for automatic requires. Suppose I\n> have an ancient client RPM that I want to install. Assuming for one\n> second that nothing else has changed on the system except the PostgreSQL\n> version, if the client was built against PostgreSQL 6.2.1 with\n> libpq.so.1, and I force the install of it even though libpq.so.2 is\n> installed, freakish things can happen. Been there and done that -- a\n> client linked against Postgres95 1.0.1 did really strange things when\n> libpq.so.2 was link loaded under it.\n> \n> Worse things happen if you have a package that requires tcl 7.4 and you\n> have tcl 8.3.2 installed.\n> \n> Not everyone is as generous as we are with upwards compatibility.\n\nAnd we aren't super-generous either. I am not sure how far back we go\nin allowing old libpq apps to talk to new servers. We go one version at\nleast. So you could allow for the current version number, plus one\nminor number greater, and know that would work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 19:08:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re:RPM dependencies (Was: 7.0 vs. 7.1 (was: latest\n\tversion?))"
},
{
"msg_contents": "Lamar Owen writes:\n\n> Getting the installation that 'make install' spits out massaged into\n> an FHS compliant setup is the majority of the RPM's spec file.\n\n./configure --prefix=/usr --sysconfdir=/etc\n\nOff you go... (I'll refrain from commenting further on the FHS.)\n\n> *\tUpgrades that don't require an ASCII database dump for migration.\n\nLet me ask you this question: When any given RPM-based Linux distribution\nwill update their system from ext2 to, say, ReiserFS across the board, how\nare they going to do it? Sincere question.\n\n> *\tA less source-centric mindset. Let's see, how to explain? The\n> regression tests are a good example. You need make. You need the source\n> installed, configured, and built in the usual location.\n\nThis is not an excuse, but almost every package behaves this way. Test\nsuites are designed to be run after \"make all\" and before \"make install\". \nWhen you ship a binary package then you're saying to users \"I did the\nbuilding and installation (and presumably everything else that the authors\nrecommend along the way) for you.\" RPM packages usually don't work very\nwell on systems that are not exactly like the one they were built on, so\nthis seems to be a fair assumption.\n\nGetting the regression tests to work from anywhere is not very hard, but\nit's not the most interesting project for most people. :-)\n\n> I think I may have a solution for the library versioning problem. \n> Rather than symlink libpq.so->libpq.so.2->libpq.so.2.x, I'll copy\n> libpq.so.2.1 to libpq.so.2 and symlink libpq.so to that.\n\nI'd still claim that if RPM thinks it's smarter than the dynamic loader,\nthen it's broken. All the shared libraries on Linux have a symlink from\nmore general to more specific names. PostgreSQL can't be the first to hit\nthis problem.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 16:55:59 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[Since I've rested over the weekend, I hope I don't come across this\nmorning as an angry old snarl, like some of my previous posts on this\nsubject unfortunately have been.]\n\nBruce Momjian wrote:\n> > * Location-agnostic installation. Documentation (which I'll be happy to\n> > contribute) on that. Peter E is already working in this area. Getting\n> > the installation that 'make install' spits out massaged into an FHS\n> > compliant setup is the majority of the RPM's spec file.\n \n> Well, we certainly don't want to make changes that make things harder or\n> more confusing for non-RPM installs. How are they affected here?\n\nThey wouldn't be. Peter E has seemingly done an excellent job in this\narea. I say seemingly because I haven't built an RPM from the 7.1 branch\nyet, but from what he has posted, he seems to understand the issue. \nMany thanks, Peter.\n \n> > * Upgrades that don't require an ASCII database dump for migration. This\n> > can either be implemented as a program to do a pg_dump of an arbitrary\n> > version of data, or as a binary migration utility. Currently, I'm\n \n> I really don't see the issue here.\n\nAt the risk of being redundant, here goes. As I've explained before,\nthe RPM upgrade environment, thanks to our standing with multiple\ndistributions as being shipped as a part of the OS, could be run as part\nof a general-purpose OS upgrade. In the environment of the general\npurpose OS upgrade, the RPM's installation scripts cannot fire up a\nbackend, nor can it assume one is running or is not running, nor can the\nRPM installation scripts fathom from the run-time environment whether\nthey are being run from a command line or from the OS upgrade (except on\nLinux Mandrake, which allows such usage).\n\nThus, if a system administrator upgrades a system, or if an end user who\nhas a pgaccess-customized data entry system for things as mundane as an\naddress list or recipe book, there is no opportunity to do a dump. 
The\ndump has to be performed _after_ the RPM upgrade.\n\nNow, this is far from optimal, I know. I _know_ that the user should\ntake pains with their data. I know that there should be a backup. I\nalso know that a user of PostgreSQL should realize that 'this is just\nthe way it is done' and do things Our Way.\n\nI also know that few new users will do it 'Our Way'. No other package\nthat I am aware of requires the manual intervention that PostgreSQL\ndoes, with the possible exception of upgrading to a different file\nsystem -- but that is something most new users won't do, and is\nsomething that is more difficult to automate.\n\nHowever, over the weekend, while resting (I did absolutely NO computer\nwork this weekend -- too close to burnout), I had a brainstorm.\n\nA binary migration tool does not need to be written, if a concession to\nthe needs of some users who just simply want to upgrade can be made.\n\nSuppose we can package old backends (with newer network code to connect\nto new clients). Suppose further that postmaster can be made\nintelligent enough to fire up old backends for old data, using\nPG_VERSION as a key. Suppose a NOTICE can be fired off warning the user\nthat 'The Database is running in Compatibility Mode -- some features may\nnot be available. Please perform a dump of your data, reinitialize the\ndatabase, and restore your data to access new features of version x.y'.\n\nI'm highly considering doing just that from a higher level. It will not\nbe nearly as smooth, but doable.\n\nOf course, that increases maintenance work, and I know it does. But I'm\ntrying to find a middle ground here, since providing a true migration\nutility (even if it just produces a dump of the old data) seems out of\nreach at this time.\n\nWe are currently forcing something like a popular word processing\nprogram once did -- its proprietary file format changed. It was coded\nso that it could not even read the old files. 
But both the old and the\nnew versions could read and write an interchange format. People who\nblindly upgraded their word processor were hit with a major problem. \nThere was even a notice in the README -- which could be read after the\nprogram was installed.\n\nWhile the majority of us use PostgreSQL as a server behind websites and\nother clients, there will be a large number of new users who want to use\nit for much more mundane tasks. Like address books, or personal\ninformation management, or maybe even tax records. Frontends to\nPostgreSQL, thanks to PostgreSQL's advanced features, are likely to span\nthe gamut -- we already have OnShore TimeSheet for time tracking and\npayroll, as one example. And I even see database-backed intranet-style\nweb scripts being used on a client workstation for these sorts of\nthings. I personally do just that with my home Linux box -- I have a\nnumber of AOLserver dynamic pages that use PostgreSQL for many mundane\ntasks (a multilevel sermon database is one).\n\nWhile I don't need handholding in the upgrade process, I have provided\nsupport to users that do -- who are astonished at the way we upgrade. \nSeamless upgrading won't help me personally -- but it will help\nmultitudes of users -- not just RPM users. As a newbie to PostgreSQL I\nwas bitten, giving me compassion on those who might be bitten.\n\n> We can compress ASCII dump files, so\n> the space need should not be too bad.\n\nSpace isn't the problem. The procedure is the problem. Even if the\nuser fails to do it Right, we should at least attempt to help them\nrecover, IMHO.\n\n> > * A less source-centric mindset. Let's see, how to explain? The\n> > regression tests are a good example. You need make. You need the source\n[snip]\n> > it certainly does give the user something to use\n> > to get standard output for bug reports. As a point, I run PostgreSQL in\n\n> Well, no compiler? I can't see how we would do that without making\n> other OS installs harder. 
That is really the core of the issue. We\n> can't be making changes that make things harder for other OS's. Those\n> have to be isolated in the RPM, or in some other middle layer.\n\nAnd I've done that in the past with the older serialized regression\ntests.\n\nI don't see how executing a shell script instead of executing a make\ncommand would make it any harder for other OS users. I am not trying to\nmake it harder for other OS users. I _am_ trying to make it easier for\nusers who are getting funny results from queries to be able to run\nregression tests as a standardized way to see where the problem lies. \nMaybe there is a hardware issue -- regression testing might be the only\nway to have a standard way to pinpoint the problem.\n\nAnd telling someone who is having a problem with prepackaged binaries\n'Run the regression tests by executing the script\n/usr/lib/pgsql/tests/regress/regress.sh and pipe me the results' is much\neasier to do than 'Find me a test case where this blow up, and pipe me a\nbacktrace/dump/whatever' for the new users. Plus that regression output\nis a known quantity.\n\nOr, to put it in a soundbite, regression testing can be the user's best\nbug-zapping friend.\n\n> > The documentation as well as many of the examples assume too much, IMHO,\n> > about the install location and the install methodology.\n \n> Well, if we are not specific, things get very confusing for those other\n> OS's. Being specific about locations makes things easier. Seems we may\n> need to patch RPM installs to fix that. Certainly a pain, but I see no\n> other options.\n\nI can do that, I guess. I currently ship the README.rpm as part of the\npackage -- but I continue to hear from people who have not read it, but\nhave read the online docs. I have even put the unpacked source RPM up\non the ftp site so that people can read the README right online.\n \n> Please give us more information about how the current upgrade is a\n> problem. We don't hear that much from other OS's. 
How are RPM's\n> specific, and maybe we can get a plan for a solution.\n\nRPM's are expected to 'rpm -U' and you can simply _use_ the new version,\nwith little to no preparation. At least that is the theory. And it\nworks for most packages.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 30 Oct 2000 10:32:21 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > Getting the installation that 'make install' spits out massaged into\n> > an FHS compliant setup is the majority of the RPM's spec file.\n \n> ./configure --prefix=/usr --sysconfdir=/etc\n> Off you go... (I'll refrain from commenting further on the FHS.)\n\nI know alot of people don't like LSB/FHS, but, like it or not, I have to\nwork with it. And, many many thanks for putting in the work on the\nconfiguration as you have.\n \n> > * Upgrades that don't require an ASCII database dump for migration.\n \n> Let me ask you this question: When any given RPM-based Linux distribution\n> will update their system from ext2 to, say, ReiserFS across the board, how\n> are they going to do it? Sincere question.\n\nLike the TRS-80 model III, whose TRSDOS 1.3 could not read the TRS-80\nModel I's disks, written on TRSDOS 2.3 (TRSDOS's versioning was\nabsolutely horrendous). TRSDOS 1.3 included a CONVERT utility that\ncould read files from the old filesystem. \n\nI'm sure that the newer distributions using ReiserFS as the primary\nfilesystem will include legacy Ext2/3 support, at least for read-only,\nfor many versions to come.\n\nAnd that's my big beef -- a newer version of PostgreSQL can't even\npg_dump an old database. If that single function was supported, I would\nhave no problem with the upgrade whatsoever. \n \n> > * A less source-centric mindset. Let's see, how to explain? The\n> > regression tests are a good example. You need make. You need the source\n> > installed, configured, and built in the usual location.\n \n> This is not an excuse, but almost every package behaves this way. Test\n> suites are designed to be run after \"make all\" and before \"make install\".\n> When you ship a binary package then you're saying to users \"I did the\n> building and installation (and presumably everything else that the authors\n> recommend along the way) for you.\"\n\nYes, and I do just that. 
Regression testing is a regular part of my\nbuild process here.\n\n> RPM packages usually don't work very\n> well on systems that are not exactly like the one they were built on\n\nBoy, don't I know it.....~;-/\n \n> Getting the regression tests to work from anywhere is not very hard, but\n> it's not the most interesting project for most people. :-)\n\nI know. I'll probably do it myself, as that is something I _can_ do. \n \n> > I think I may have a solution for the library versioning problem.\n> > Rather than symlink libpq.so->libpq.so.2->libpq.so.2.x, I'll copy\n> > libpq.so.2.1 to libpq.so.2 and symlink libpq.so to that.\n \n> I'd still claim that if RPM thinks it's smarter than the dynamic loader,\n> then it's broken. All the shared libraries on Linux have a symlink from\n> more general to more specific names. PostgreSQL can't be the first to hit\n> this problem.\n\nRPM is getting its .so dependency list straight from the mouth of the\ndynamic loader itself. RPM uses shell scripts, customizable for each\nsystem on which RPM runs, to determine the automatic dependencies --\nthose shell scripts run the dynamic loader to get the list of requires. \nSo, the dynamic loader itself is providing the list. \n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 30 Oct 2000 10:43:14 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
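The naming scheme Lamar describes in the message above (copy the fully versioned library to the soname file, then symlink only the development name) can be sketched as follows. The version numbers, file contents, and directory are stand-ins for illustration, not the actual install layout:

```shell
# Stand-in for the installed, fully versioned library:
mkdir -p demo/lib && cd demo/lib
echo "library contents" > libpq.so.2.1

# Copy (not symlink) the soname file, so RPM's dependency scanner
# sees a real file, then symlink the development name to it:
cp libpq.so.2.1 libpq.so.2
ln -sf libpq.so.2 libpq.so

ls -l libpq.so*
```

With this arrangement only libpq.so is a symlink; packages linked against the soname libpq.so.2 depend on a real file rather than on a chain of links.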
{
"msg_contents": "Lamar Owen writes:\n\n> In the environment of the general purpose OS upgrade, the RPM's\n> installation scripts cannot fire up a backend, nor can it assume one\n> is running or is not running, nor can the RPM installation scripts\n> fathom from the run-time environment whether they are being run from a\n> command line or from the OS upgrade (except on Linux Mandrake, which\n> allows such usage).\n\nI don't understand why this is so. It seems perfectly possible that some\n%preremovebeforeupdate starts a postmaster, runs pg_dumpall, saves the\nfile somewhere, then the %postinstallafterupdate runs the inverse\noperation. Disk space is not a valid objection, you'll never get away\nwithout 2x storage. Security is not a problem either. Are you not\nupgrading in proper dependency order or what? Everybody does dump,\nremove, install, undump; so can the RPMs.\n\nOkay, so it's not as great as a new KDE starting up and asking \"may I\nupdate your configuration files?\", but understand that the storage format\nis optimized for performance, not easy processing by external tools or\nsomething like that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 31 Oct 2000 10:46:39 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "[Explanation on why an RPM cannot dump a database during upgrade\nfollows. This is a lengthy explanation. If you don't want to read it,\nplease hit 'Delete' now. -- Also, I have blind copied Hackers, and cc:'d\nPORTS, as that is where this discussion belongs, per Bruce's wishes.]\n\nPeter Eisentraut wrote:\n> Lamar Owen writes:\n> > In the environment of the general purpose OS upgrade, the RPM's\n> > installation scripts cannot fire up a backend, nor can it assume one\n \n> I don't understand why this is so. It seems perfectly possible that some\n> %preremovebeforeupdate starts a postmaster, runs pg_dumpall, saves the\n> file somewhere, then the %postinstallafterupdate runs the inverse\n> operation. Disk space is not a valid objection, you'll never get away\n> without 2x storage. Security is not a problem either. Are you not\n> upgrading in proper dependency order or what? Everybody does dump,\n> remove, install, undump; so can the RPMs.\n\nThe RedHat installer (anaconda) is running in a terribly picky\nenvironment. There a very few tools in this environment -- after all,\nthis is an installer we're talking about here. Starting a postmaster is\nlikely to fail, and fail big. Further, the anaconda install environment\nis a chroot -- or, at least the environment the RPM scriptlets run in is\na chroot -- a chroot that is the active filesystem that is being\nupgraded. This filesystem likely contains old libraries, old\nexecutables, and other programs that may have a hard time running under\nthe limited installation kernel and the limited libraries available to\nthe installer.\n\nAnd since packages are actively discouraged from probing whether they're\nrunning in the anaconda chroot or not, it is not possible to start a\npostmaster. 
Mandrake allows packages to probe this -- which I\npersonally think is a bad idea -- packages that need to know this sort\nof information are usually packages that would be better off finding a\nleast common denominator upgrade path that will work the best. A single\nupgrade path is much easier to maintain than two upgrade paths.\n\nSure, during a command line upgrade, I can probe for a postmaster, and\neven start one -- but I dare say the majority of PostgreSQL RPM upgrades\ndon't happen from the command line. Even if I _can_ probe whether I'm\nin the anaconda chroot or not, I _still_ have to have an upgrade path in\ncase this _is_ an OS upgrade.\n\nThink about it: suppose I had a postmaster start up, and a pg_dumpall\nrun during the OS upgrade. Calculating free space is not possible -- you\nare in the middle of an OS upgrade, and more packages may be selected\nfor installation than are already installed -- or, an upgrade to an\nexisting package may take more space than the previous version (XFree86\n3.3.6 to XFree86 4.0.1 is a good example) -- you have no way of knowing\nfrom the RPM installation scripts in the package how much free space\nthere will or won't be when the upgrade is complete. And anaconda\ndoesn't help you out with an ESTIMATED_SPACE_AFTER_INSTALL environment\nvariable.\n\nAnd you really can't assume 2x space -- the user may have decided that\nthis machine that didn't have TeX installed needs TeX installed, and\nEmacs, and, while it didn't have GNOME before, it needs it now.....\nSure, the user just got himself in a pickle -- but I'm not about to be\nthe scapegoat for _his_ pickle.\n\nAnd I can't assume that the /var partition (where the dataset resides)\nis separate, or that it even has enough space -- the user might be\ndumping to another filesystem, or maybe onto tape. And, in the confines\nof an RPM %pre scriptlet, I have no way of finding out.\n\nFurthermore, I can't accurately predict how much space even a compressed\nASCII dump will take. 
Calculating the size of the dataset in PGDATA\ndoes not accurately predict the size of the dumpfile.\n\nAs to using split or the like to split huge dumpfiles, that is a\nnecessity -- but the space calculation problem defeats the whole concept\nof dump-during-upgrade. I cannot determine how much space I have, and I\ncannot determine how much space I need -- and, if I overflow the\nfilesystem during an OS upgrade that is halfway complete (PostgreSQL\nusually is upgraded about two thirds of the way through or so), then I\nleave the user with a royally hosed system. I don't want that on my\nshoulders, do you? :-)\n\nTherefore, the pg_dumpall _has_ to occur _after_ the new version has\noverwritten the old version, and _after_ the OS upgrade is completed --\nunless the user has done what they should have done to begin with --\nbut, the fact of the matter is that many users simply won't do it Right.\n\nYou can't assume the user is going to be reasonable by your standard --\nin fact, you have to do the opposite -- your standard of reasonable, and\nthe user's standard of reasonable, might be totally different things.\n\nIncidentally, I originally attempted doing the dump inside the\npreinstall, and found it to be an almost impossible task. The above\nreasons might be solvable, but then there's this little problem: what if\nyou _are_ able to predict the space needed and the space available --\nand there's not enough space available? \n\nThe PostgreSQL RPM's are not a single package, and anaconda has no way\nof rolling back another part of an RPMset's installation if one part\nfails. So, you can't just abort because you failed to dump -- the\npackage that needs the dump is the server subpackage -- and the main\npackage has already finished installation by that time. 
And you can't\nroll it back.\n\nAnd the user has a hosed PostgreSQL installation as a result.\n\nAs to why the package is split, well, it is highly useful to many people\nto have a PostgreSQL _client_ installation that accesses a central\ndatabase server -- there is no need to have a postmaster and a full\nbackend when all you need is psql and the libraries and documentation\nthat goes along with psql.\n\nRPM's have to deal with both a very difficult environment, and users who\nmight not be as technically savvy as those who install from source.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 31 Oct 2000 10:51:09 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen wrote:\n\n> As to why the package is split, well, it is highly useful to many people\n> to have a PostgreSQL _client_ installation that accesses a central\n> database server -- there is no need to have a postmaster and a full\n> backend when all you need is psql and the libraries and documentation\n> that goes along with psql.\n\nMy personal experience is that the way the PostgreSQL RPMs are split is very good. It meshes nicely with other dependencies so that I don't need to install extra RPMs on our servers. I for one would not like to see that change. \n\n-- \nKarl DeBisschop kdebisschop@alert.infoplease.com\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer kdebisschop@users.sourceforge.net\n",
"msg_date": "Tue, 31 Oct 2000 11:28:26 -0500",
"msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Karl DeBisschop wrote:\n> \n> Lamar Owen wrote:\n> \n> > As to why the package is split, well, it is highly useful to many people\n> > to have a PostgreSQL _client_ installation that accesses a central\n> > database server -- there is no need to have a postmaster and a full\n> > backend when all you need is psql and the libraries and documentation\n> > that goes along with psql.\n> \n> My personal experience is that the way the PostgreSQL RPMs are split is very good. It meshes nicely with other dependencies so that I don't need to install extra RPMs on our servers. I for one would not like to see that change.\n\nAnd I agree -- and have no plans to change. If anything the RPMset will\nincrease in number, not decrease.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 31 Oct 2000 11:37:50 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> > Well, we certainly don't want to make changes that make things harder or\n> > more confusing for non-RPM installs. How are they affected here?\n> \n> They wouldn't be. Peter E has seemingly done an excellent job in this\n> area. I say seemingly because I haven't built an RPM from the 7.1 branch\n> yet, but from what he has posted, he seems to understand the issue. \n> Many thanks, Peter.\n\nOK, glad that is done.\n\n> > > * Upgrades that don't require an ASCII database dump for migration. This\n> > > can either be implemented as a program to do a pg_dump of an arbitrary\n> > > version of data, or as a binary migration utility. Currently, I'm\n> \n> > I really don't see the issue here.\n> \n> At the risk of being redundant, here goes. As I've explained before,\n> the RPM upgrade environment, thanks to our standing with multiple\n> distributions as being shipped as a part of the OS, could be run as part\n> of a general-purpose OS upgrade. In the environment of the general\n> purpose OS upgrade, the RPM's installation scripts cannot fire up a\n> backend, nor can it assume one is running or is not running, nor can the\n> RPM installation scripts fathom from the run-time environment whether\n> they are being run from a command line or from the OS upgrade (except on\n> Linux Mandrake, which allows such usage).\n\nOK, maybe doing it in an RPM is the wrong way to go. If an old version\nexists, maybe the RPM is only supposed to install the software in a\nsaved location, and the users must execute a command after the RPM\ninstall that starts the old postmaster, does the dump, puts the new\nPostgreSQL server in place, and reloads the database.\n\n> > Well, no compiler? I can't see how we would do that without making\n> > other OS installs harder. That is really the core of the issue. We\n> > can't be making changes that make things harder for other OS's. 
Those\n> > have to be isolated in the RPM, or in some other middle layer.\n> \n> And I've done that in the past with the older serialized regression\n> tests.\n> \n> I don't see how executing a shell script instead of executing a make\n> command would make it any harder for other OS users. I am not trying to\n> make it harder for other OS users. I _am_ trying to make it easier for\n> users who are getting funny results from queries to be able to run\n> regression tests as a standardized way to see where the problem lies. \n> Maybe there is a hardware issue -- regression testing might be the only\n> way to have a standard way to pinpoint the problem.\n\nYou are basically saying that because you can ship without a compiler\nsometimes, we are supposed to change the way our regression tests work.\nLet's suppose SCO says they don't ship with a compiler, and wants us to\nchange our code to accomodate it. Should we? You can be certain we\nwould not, and in the RPM case, you get the same answer.\n\nIf the patch is trivial, we will work around OS limitations, but we do\nnot redesign code to work around OS limitations. We expect the OS to\nget the proper features. That is what we do with NT. Cygwin provides\nthe needed features.\n\n> > Please give us more information about how the current upgrade is a\n> > problem. We don't hear that much from other OS's. How are RPM's\n> > specific, and maybe we can get a plan for a solution.\n> \n> RPM's are expected to 'rpm -U' and you can simply _use_ the new version,\n> with little to no preparation. At least that is the theory. And it\n> works for most packages.\n\nThis is the \"Hey, other people can do it, why can't you\" issue. We are\nlooking for suggestions from Linux users in how this can be done. 
\nPerhaps running a separate command after the RPM has been installed is\nthe only way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 1 Nov 2000 22:26:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > At the risk of being redundant, here goes. As I've explained before,\n> > the RPM upgrade environment, thanks to our standing with multiple\n> > distributions as being shipped as a part of the OS, could be run as part\n \n> OK, maybe doing it in an RPM is the wrong way to go. If an old version\n> exists, maybe the RPM is only supposed to install the software in a\n> saved location, and the users must execute a command after the RPM\n> install that starts the old postmaster, does the dump, puts the new\n> PostgreSQL server in place, and reloads the database.\n\nThat's more or less what's being done now. The RPM's preinstall script\n(run before any files are overwritten from the new package) backs up the\nrequired executables from the old installation. The RPM then overwrites\nthe necessary files, and then any old files left over are removed, along\nwith database removal of their records.\n\nA script (actually, two scripts due to a bug in the first one) is\nprovided to dump the database using the old executables. Which works OK\nas long as the new OS release is executable compatible with the old\nrelease. Oliver Elphick originally wrote the script for the Debian\npackages, and I adapted it to the RPM environment.\n\nHowever, the dependency upon the new version of the OS being able to run\nthe old executables could be a killer in the future if executable\ncompatibility is removed -- after all, an upgrade might not be from the\nimmediately prior version of the OS.\n \n> You are basically saying that because you can ship without a compiler\n> sometimes, we are supposed to change the way our regression tests work.\n> Let's suppose SCO says they don't ship with a compiler, and wants us to\n> change our code to accomodate it. Should we? 
You can be certain we\n> would not, and in the RPM case, you get the same answer.\n \n> If the patch is trivial, we will work around OS limitations, but we do\n> not redesign code to work around OS limitations. We expect the OS to\n> get the proper features. That is what we do with NT. Cygwin provides\n> the needed features.\n\nNo, I'm saying that someone running any OS might want to do this:\n1.)\tThey have two machines, a development machine and a production\nmachine. Due to budget constraints, the dev machine is an el cheapo\nversion of the production machine (for the sake of argument, let's say\ndev is a Sun Ultra 1 bare-bones workstation, and the production is a\nhigh end SMP Ultra server, both running the same version of Solaris).\n\n2.)\tFor greater security, the production machine has been severely\ncrippled WRT development tools -- if a cracker gets in, don't give him\nany ammunition. Good procedure to follow for publicly exposed database\nservers, like those that sit behind websites. Requiring such a server\nto have a development system installed is a misfeature, IMHO.\n\n3.)\tAfter compiling and testing PostgreSQL on dev, the user transfers\nthe binaries only over to production. All is well, at first.\n\n4.)\tBut then the load on production goes up -- and PostgreSQL starts\nspitting errors and FATAL's. The problem cannot be duplicated on the\ndev machine -- looks like a Solaris SMP issue.\n\n5.)\tThe user decides to run regression on production in parallel mode to\nhelp debug the problem -- but cannot figure out how to do so without\ninstalling make and other development tools on it, when he specifically\ndid not want those tools on there for security. 
Serial regression,\nwhich is easily started in a no-make mode, doesn't expose the problem.\n\nAll I'm saying is that regression should be _runnable_ in all modes\nwithout needing anything but a shell and the PostgreSQL binary\ninstallation.\n\nThis is the problem -- it is not OS-specific.\n \n> > RPM's are expected to 'rpm -U' and you can simply _use_ the new version,\n> > with little to no preparation. At least that is the theory. And it\n> > works for most packages.\n \n> This is the \"Hey, other people can do it, why can't you\" issue. We are\n> looking for suggestions from Linux users in how this can be done.\n> Perhaps running a separate command after the RPM has been installed is\n> the only way to go.\n\nIt's not really an RPM issue -- it's a PostgreSQL issue -- there have\nbeen e-mails from users of other OS's -- even those that compile from\nsource -- expressing a desire for a smoother upgrade cycle. The RPM's,\nDebian packages, and other binary packages just put the extant problem\nin starker contrast. Until such occurs, I'll just have to continue\ndoing what I'm doing -- which I consider a stop-gap, not a solution.\n\nAnd, BTW, welcome back from the summit. I heard that there was a little\n'excitement' there :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Nov 2000 10:28:21 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > > At the risk of being redundant, here goes. As I've explained before,\n> > > the RPM upgrade environment, thanks to our standing with multiple\n> > > distributions as being shipped as a part of the OS, could be run as part\n> \n> > OK, maybe doing it in an RPM is the wrong way to go. If an old version\n> > exists, maybe the RPM is only supposed to install the software in a\n> > saved location, and the users must execute a command after the RPM\n> > install that starts the old postmaster, does the dump, puts the new\n> > PostgreSQL server in place, and reloads the database.\n> \n> That's more or less what's being done now. The RPM's preinstall script\n> (run before any files are overwritten from the new package) backs up the\n> required executables from the old installation. The RPM then overwrites\n> the necessary files, and then any old files left over are removed, along\n> with database removal of their records.\n> \n> A script (actually, two scripts due to a bug in the first one) is\n> provided to dump the database using the old executables. Which works OK\n> as long as the new OS release is executable compatible with the old\n> release. Oliver Elphick originally wrote the script for the Debian\n> packages, and I adapted it to the RPM environment.\n> \n> However, the dependency upon the new version of the OS being able to run\n> the old executables could be a killer in the future if executable\n> compatibility is removed -- after all, an upgrade might not be from the\n> immediately prior version of the OS.\n\n\nThat is a tough one. I see your point. How would the RPM do this\nanyway? It is running the same version of the OS right? Did they move\nthe data files from the old OS to the new OS and now they want to\nupgrade? 
Hmm.\n\n> > You are basically saying that because you can ship without a compiler\n> > sometimes, we are supposed to change the way our regression tests work.\n> > Let's suppose SCO says they don't ship with a compiler, and wants us to\n> > change our code to accomodate it. Should we? You can be certain we\n> > would not, and in the RPM case, you get the same answer.\n> \n> > If the patch is trivial, we will work around OS limitations, but we do\n> > not redesign code to work around OS limitations. We expect the OS to\n> > get the proper features. That is what we do with NT. Cygwin provides\n> > the needed features.\n> \n> No, I'm saying that someone running any OS might want to do this:\n> 1.)\tThey have two machines, a development machine and a production\n> machine. Due to budget constraints, the dev machine is an el cheapo\n> version of the production machine (for the sake of argument, let's say\n> dev is a Sun Ultra 1 bare-bones workstation, and the production is a\n> high end SMP Ultra server, both running the same version of Solaris).\n\nYes, but if we added capabilities every time someone wanted something so\nit worked better in their environment, this software would be a mess,\nright?\n\n> > This is the \"Hey, other people can do it, why can't you\" issue. We are\n> > looking for suggestions from Linux users in how this can be done.\n> > Perhaps running a separate command after the RPM has been installed is\n> > the only way to go.\n> \n> It's not really an RPM issue -- it's a PostgreSQL issue -- there have\n> been e-mails from users of other OS's -- even those that compile from\n> source -- expressing a desire for a smoother upgrade cycle. The RPM's,\n> Debian packages, and other binary packages just put the extant problem\n> in starker contrast. Until such occurs, I'll just have to continue\n> doing what I'm doing -- which I consider a stop-gap, not a solution.\n\nYes, we all agree upgrades should be smoother. 
The problem is that the\ncost/benefit analysis always pushed us away from improving it.\n\n> \n> And, BTW, welcome back from the summit. I heard that there was a little\n> 'excitement' there :-).\n\nYes, it was very nice. I will post a summary to announce/general today.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Nov 2000 13:50:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > However, the dependency upon the new version of the OS being able to run\n> > the old executables could be a killer in the future if executable\n> > compatibility is removed -- after all, an upgrade might not be from the\n> > immediately prior version of the OS.\n \n> That is a tough one. I see your point. How would the RPM do this\n> anyway? It is running the same version of the OS right? Did they move\n> the data files from the old OS to the new OS and now they want to\n> upgrade? Hmm.\n\nWell, let's suppose the following: J. Random User has a database server\nthat has been running smoothly for ages on RedHat 5.2, running\nPostgreSQL 6.3.2. He has had no reason to upgrade since -- while MVCC\nwas a nice feature, he was really waiting for OUTER JOIN before\nupgrading, as his server is lightly loaded and won't benefit greatly\nfrom MVCC. \n\nLikewise, he's not upgraded from RedHat 5.2, because until RedHat got\nthe 2.4 kernel into a distribution, he wasn't ready to upgrade, as he\nneeds improved NFS performance, available in Linux kernel 2.4. And he\nwasn't about to go with a version of GCC that doesn't exist. So he\nskips the whole RedHat 6.x series -- he doesn't want to mess with kernel\n2.2 in any form, thanks to its abyssmal NFS performance.\n\nSo he waits on RedHat 7.2 to be released -- around October 2001 (if the\ntypical RedHat schedule holds). At this point, PostgreSQL 7.2.1 is the\nstandards bearer, with OUTER JOIN support that he craves, and robust WAL\nfor excellent recoverability, amongst other Neat Features(TM).\n\nNow, by the time of RedHat 7.2, kernel 2.4 is up to .15 or so, with gcc\n3.0 freshly (and officially) released, and glibc 2.2.5 finally fixing\nthe problems that had plagued both pre-2.2 glibc's AND the earliest 2.2\nglibc's -- but, the upshot is that glibc 2.0 compatibility is toast.\n\nNow, J Random slides in the new OS CD on a backup of his main server,\nand upgrades. 
RedHat 7.2's installer is very smart -- if no packages\nare left that use glibc 2.0, it doesn't install the compat-libs\nnecessary for glibc 2.0 apps to run.\n\nThe PostgreSQL RPMset's server subpackage preinstall script runs about\ntwo-thirds of the way through the upgrade, and backs up the old 6.3.2\nexecutables necessary to pull a dump. The old 6.3.2 rpm database\nentries are removed, and, as far as the system is concerned, no\ndependency upon glibc 2.0 remains, so no compat-libs get installed.\n\nJ Random checks out the new installation, and finds a conspicuous log\nmessage telling him to read /usr/share/doc/postgresql-7.2.1/README.rpm.\nHe does so, and runs the (fixed by then) postgresql-dump script, which\nattempts to start an old backend and do a pg_dumpall -- but, horrors,\nthe old postmaster can't start, glibc 2.0 is gone and glibc 2.2 blows\ncore loaded under postmaster-6.3.2. ARGGHHH....\n\nThat's the scenario I have nightmares about. Really.\n \n> Yes, but if we added capabilities every time someone wanted something so\n> it worked better in their environment, this software would be a mess,\n> right?\n\nYes, it would. I'll work on a patch, and we'll see what it looks like.\n \n> > been e-mails from users of other OS's -- even those that compile from\n> > source -- expressing a desire for a smoother upgrade cycle. The RPM's,\n \n> Yes, we all agree upgrades should be smoother. The problem is that the\n> cost/benefit analysis always pushed us away from improving it.\n\nI understand. \n\nI'm looking at some point in time in the future doing a\n'postgresql-upgrade' RPM that would include pre-built postmasters and\nother binaries necessary to dump any previous version PostgreSQL (since\nabout 6.2.1 or so -- 6.2.1 was the first RedHat official PostgreSQL RPM,\nalthough there were 6.1.1 RPM's before that, and there is still a\npostgres95-1.09 RPM out there), linked to the current libs for that\nRPM's OS release. 
It would be a large RPM (and the source RPM for it\nwould be _huge_, containing entire tarballs for at least 6.2.1, 6.3.2,\n6.4.2, 6.5.3, and 7.0.3). But, this may be the only way to make this\nwork barring a real migration utility.\n\n> > And, BTW, welcome back from the summit. I heard that there was a little\n> > 'excitement' there :-).\n \n> Yes, it was very nice. I will post a summary to announce/general today.\n\nGood. And a welcome back to Tom as well, as he went too, IIRC.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Nov 2000 14:22:19 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> Now, J Random slides in the new OS CD on a backup of his main server,\n> and upgrades. RedHat 7.2's installer is very smart -- if no packages\n> are left that use glibc 2.0, it doesn't install the compat-libs\n> necessary for glibc 2.0 apps to run.\n\nActually, glibc is a bad example of things to break - it has versioned\nsymbols, so postgresql is pretty likely to continue working (barring\ndoing extremely low-level stuff, like doing weird things to the loader\nor depend on buggy behaviour (like Oracle did)).\n\nPostgresql doesn't use C++ either (which is a horrible mess wrt. binary\ncompatibility - there is no such thing, FTTB).\n\nHowever, if it depended on kernel specific behaviour (like things in\n/proc, which may or may not have changed its output format) it could\nbreak.\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "02 Nov 2000 14:31:20 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> All I'm saying is that regression should be _runnable_ in all modes\n> without needing anything but a shell and the PostgreSQL binary\n> installation.\n\nI think this'd be mostly a waste of effort. IMHO, 99% of the problems\nthe regression tests might expose will be exposed if they are run\nagainst the RPMs by the RPM maker. (Something we have sometimes failed\nto do in the past ;-).) The regress tests are not that good at\ndetecting environment-specific problems; in fact, they go out of their\nway to suppress environmental differences. So I don't see any strong\nneed to support regression test running in binary distributions.\nEspecially not if we have to kluge around a lack of essential tools.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Nov 2000 15:32:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] 7.0 vs. 7.1 (was: latest version?) "
}
] |
[
{
"msg_contents": "\npg_dump uses the OID of template1 as the last builtin OID, but this now\nseems broken in CVS (it returns 1). Should this work? If not, what is the\nrecommended way to find the last built-in OID?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 21 Oct 2000 23:33:44 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Last builtin OID? "
},
{
"msg_contents": "Philip Warner writes:\n\n> pg_dump uses the OID of template1 as the last builtin OID, but this now\n> seems broken in CVS (it returns 1). Should this work? If not, what is the\n> recommended way to find the last built-in OID?\n\nIf you define the last builtin oid as the highest oid in existence after\ninitdb then it has always been whatever the load of pg_description at then\nend leaves you with.\n\nPerhaps you could move the CREATE TRIGGER pg_sync_pg_pwd (or something\nelse) to the very end to get a more predictable starting point.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 16:00:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Last builtin OID? "
},
{
"msg_contents": "Could we set a starting OID for user stuff like (0x4000000) or some\nother round number?\n\nLER\n\n\n* Philip Warner <pjw@rhyme.com.au> [001021 19:52]:\n> At 16:00 21/10/00 +0200, Peter Eisentraut wrote:\n> >\n> >Perhaps you could move the CREATE TRIGGER pg_sync_pg_pwd (or something\n> >else) to the very end to get a more predictable starting point.\n> >\n> \n> Unfortunately, this is the sort of thing that caused the current problem\n> (ie. assuming that a certain item of metadata will remain 'at the end').\n> \n> What pg_dump needs is 'the oid of that the first thing created in a new DB\n> will have'. Would there be a simple way of adding a\n> variable/attribute/whatever to template1? Or adding something to the code\n> for 'Create Database' that sets a global variable that pg_dump can retreieve?\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 20:22:35 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Last builtin OID?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Philip Warner writes:\n>> pg_dump uses the OID of template1 as the last builtin OID, but this now\n>> seems broken in CVS (it returns 1). Should this work? If not, what is the\n>> recommended way to find the last built-in OID?\n\n> If you define the last builtin oid as the highest oid in existence after\n> initdb then it has always been whatever the load of pg_description at then\n> end leaves you with.\n\n\"select max(oid) from pg_description\" won't do, unfortunately, since the\nuser might add more comments after creating objects of his own. Ugh.\n\n> Perhaps you could move the CREATE TRIGGER pg_sync_pg_pwd (or something\n> else) to the very end to get a more predictable starting point.\n\nThis seems pretty fragile; I'd rather not rely on the assumption that\nsome specific item is the last one created by initdb.\n\nThe reason this isn't simple is that we have a bunch of rows with\nhard-wired OIDs (all the ones specifically called out in\ninclude/catalog) plus a bunch more with non-hard-wired OIDs (if they\ndon't need to be well-known, why keep track of them?). We initialize\nthe OID counter to 16384, above the last hard-wired OID, so that the\nnon-hard-wired OIDs start there. But there's no way to be sure where\nthe non-hard-wired system OIDs stop and user OIDs begin.\n\nWhat if we specify a definite range for these \"soft system OIDs\"?\nSay, 1-16383 are for hardwired OIDs as now, 16384-32767 for soft OIDs,\nand user OIDs start at 32768. Then pg_dump's task is easy; it just\nuses the latter constant from some include file. 
We could implement\nthis by initializing the OID counter at 16384 as now, and then rewriting\nit to 32768 at the end of initdb.\n\nComments?\n\nBTW, this raises a point I'd never thought hard about: if the dbadmin\nadds some site-local objects to template1, then makes databases that\ncopy these objects, a pg_dumpall and restore will do the Wrong Thing.\npg_dump isn't smart about excluding objects inherited from template1\nfrom dumps of other databases. Is there any reasonable way to fix\nthat?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 21:40:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Last builtin OID? "
},
{
"msg_contents": "At 16:00 21/10/00 +0200, Peter Eisentraut wrote:\n>\n>Perhaps you could move the CREATE TRIGGER pg_sync_pg_pwd (or something\n>else) to the very end to get a more predictable starting point.\n>\n\nUnfortunately, this is the sort of thing that caused the current problem\n(ie. assuming that a certain item of metadata will remain 'at the end').\n\nWhat pg_dump needs is 'the oid of that the first thing created in a new DB\nwill have'. Would there be a simple way of adding a\nvariable/attribute/whatever to template1? Or adding something to the code\nfor 'Create Database' that sets a global variable that pg_dump can retreieve?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 22 Oct 2000 11:52:11 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Last builtin OID? "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> ... I would prefer the\n> 'CREATE DATABASE' code to set a value in a table somewhere to the last OID\n> used when the DB was created. This would deal with the 'extended template1'\n> scenario (which, incidentally, I am already a victim of). Could a new\n> attribute be added to pg_database?\n\nHm. Offhand I don't see a hole in that idea, but it's too simple and\nbeautiful to be right ;-) ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 21:56:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Last builtin OID? "
},
{
"msg_contents": "At 21:40 21/10/00 -0400, Tom Lane wrote:\n>We could implement\n>this by initializing the OID counter at 16384 as now, and then rewriting\n>it to 32768 at the end of initdb.\n...\n>BTW, this raises a point I'd never thought hard about: if the dbadmin\n>adds some site-local objects to template1, then makes databases that\n>copy these objects, a pg_dumpall and restore will do the Wrong Thing.\n\nSome kind of forced value would be good, although I would prefer the\n'CREATE DATABASE' code to set a value in a table somewhere to the last OID\nused when the DB was created. This would deal with the 'extended template1'\nscenario (which, incidentally, I am already a victim of). Could a new\nattribute be added to pg_database?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 22 Oct 2000 12:54:25 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Last builtin OID? "
}
] |
[
{
"msg_contents": "Per my ongoing discussion with PeterE, here is the patch I applied to\nsrc/template/unixware:\n\ncvs diff: Diffing .\nIndex: unixware\n===================================================================\nRCS file: /cvsroot/pgsql-snap/src/template/unixware,v\nretrieving revision 1.1.1.1\ndiff -c -r1.1.1.1 unixware\n*** unixware\t2000/10/21 13:35:54\t1.1.1.1\n--- unixware\t2000/10/21 13:49:45\n***************\n*** 1,6 ****\n AROPT=crs\n! CFLAGS='-O -K i486,host,inline,loop_unroll,alloca -Dsvr4'\n SHARED_LIB='-K PIC'\n SRCH_INC='/opt/include'\n SRCH_LIB='/opt/lib'\n DLSUFFIX=.so\n--- 1,8 ----\n AROPT=crs\n! CFLAGS='-O -K host,inline,loop_unroll,alloca -Dsvr4'\n SHARED_LIB='-K PIC'\n SRCH_INC='/opt/include'\n SRCH_LIB='/opt/lib'\n DLSUFFIX=.so\n+ CC=cc\n+ CXX=CC\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 09:14:27 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "quickie patch to UnixWare Template"
},
{
"msg_contents": "Applied. Thanks.\n\n\n> Per my ongoing discussion with PeterE, here is the patch I applied to\n> src/template/unixware:\n> \n> cvs diff: Diffing .\n> Index: unixware\n> ===================================================================\n> RCS file: /cvsroot/pgsql-snap/src/template/unixware,v\n> retrieving revision 1.1.1.1\n> diff -c -r1.1.1.1 unixware\n> *** unixware\t2000/10/21 13:35:54\t1.1.1.1\n> --- unixware\t2000/10/21 13:49:45\n> ***************\n> *** 1,6 ****\n> AROPT=crs\n> ! CFLAGS='-O -K i486,host,inline,loop_unroll,alloca -Dsvr4'\n> SHARED_LIB='-K PIC'\n> SRCH_INC='/opt/include'\n> SRCH_LIB='/opt/lib'\n> DLSUFFIX=.so\n> --- 1,8 ----\n> AROPT=crs\n> ! CFLAGS='-O -K host,inline,loop_unroll,alloca -Dsvr4'\n> SHARED_LIB='-K PIC'\n> SRCH_INC='/opt/include'\n> SRCH_LIB='/opt/lib'\n> DLSUFFIX=.so\n> + CC=cc\n> + CXX=CC\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Oct 2000 11:41:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: quickie patch to UnixWare Template"
},
{
"msg_contents": "That may be premature, it still doesn't build, but it's probably the \nright answer when we fix the rest of it. \n\nLER\n\n* Bruce Momjian <pgman@candle.pha.pa.us> [001021 10:42]:\n> Applied. Thanks.\n> \n> \n> > Per my ongoing discussion with PeterE, here is the patch I applied to\n> > src/template/unixware:\n> > \n> > cvs diff: Diffing .\n> > Index: unixware\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql-snap/src/template/unixware,v\n> > retrieving revision 1.1.1.1\n> > diff -c -r1.1.1.1 unixware\n> > *** unixware\t2000/10/21 13:35:54\t1.1.1.1\n> > --- unixware\t2000/10/21 13:49:45\n> > ***************\n> > *** 1,6 ****\n> > AROPT=crs\n> > ! CFLAGS='-O -K i486,host,inline,loop_unroll,alloca -Dsvr4'\n> > SHARED_LIB='-K PIC'\n> > SRCH_INC='/opt/include'\n> > SRCH_LIB='/opt/lib'\n> > DLSUFFIX=.so\n> > --- 1,8 ----\n> > AROPT=crs\n> > ! CFLAGS='-O -K host,inline,loop_unroll,alloca -Dsvr4'\n> > SHARED_LIB='-K PIC'\n> > SRCH_INC='/opt/include'\n> > SRCH_LIB='/opt/lib'\n> > DLSUFFIX=.so\n> > + CC=cc\n> > + CXX=CC\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Sat, 21 Oct 2000 10:43:54 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: quickie patch to UnixWare Template"
}
] |
[
{
"msg_contents": "We currently have a patch in the doc/FAQ_SCO file for the \"accept\ndoesn't send AF_UNIX to the caller\" problem on SCO UnixWare 7.1.[01]. \nIs there any problem with configure finding out we are on one of those\nreleases (uname -v), and setting a UW= variable so the user doesn't\nhave to remember to patch their sources? \n\nLarry\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 10:11:40 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Style question"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> We currently have a patch in the doc/FAQ_SCO file for the \"accept\n> doesn't send AF_UNIX to the caller\" problem on SCO UnixWare 7.1.[01]. \n> Is there any problem with configure finding out we are on one of those\n> releases (uname -v), and setting a UW= variable so the user doesn't\n> have to remember to patch their sources? \n\nI think we should install this patch conditional on some __UnixWare__\n#define, if there's a good one. If there isn't, then we can add our own\n#define ACCEPT_IS_BUSTED_IN_PECULIAR_WAYS in src/include/port/unixware.h.\n\nTesting runtime behaviour in configure is not good \"style\". :)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 18:58:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Style question"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001021 11:54]:\n> Larry Rosenman writes:\n> \n> > We currently have a patch in the doc/FAQ_SCO file for the \"accept\n> > doesn't send AF_UNIX to the caller\" problem on SCO UnixWare 7.1.[01]. \n> > Is there any problem with configure finding out we are on one of those\n> > releases (uname -v), and setting a UW= variable so the user doesn't\n> > have to remember to patch their sources? \n> \n> I think we should install this patch conditional on some __UnixWare__\n> #define, if there's a good one. If there isn't, then we can add our own\n> #define ACCEPT_IS_BUSTED_IN_PECULIAR_WAYS in src/include/port/unixware.h.\n> \n> Testing runtime behaviour in configure is not good \"style\". :)\nI was just thinking of checking uname -v and if it is 7.1.0 or 7.1.1\nset a define that pq_comm.c sees and includes the fix. There isn't a\ngood #define yet.. :-( \n\nLER\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 13:12:43 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Style question"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> I was just thinking of checking uname -v and if it is 7.1.0 or 7.1.1\n> set a define that pq_comm.c sees and includes the fix. There isn't a\n> good #define yet.. :-( \n\nWe could use the result of\n\nchecking host system type... i586-sco-sysv5uw7.1.1\n\nBut which one is the good one and which one is broken?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 21 Oct 2000 20:28:54 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Style question"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001021 13:25]:\n> Larry Rosenman writes:\n> \n> > I was just thinking of checking uname -v and if it is 7.1.0 or 7.1.1\n> > set a define that pq_comm.c sees and includes the fix. There isn't a\n> > good #define yet.. :-( \n> \n> We could use the result of\n> \n> checking host system type... i586-sco-sysv5uw7.1.1\n7.1.0 and 7.1.1 are both broken. SCO hasn't released a fixed version\nyet. \n\nLER\n\n> \n> But which one is the good one and which one is broken?\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 21 Oct 2000 13:29:09 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Style question"
}
] |
[
{
"msg_contents": "BTW, if someone wants shell/telnet/ssh access to this box, please let\nme know. I'm more than willing. \n\nIt's a P-III 500 w/128MB ram and 18GB disk, so I can support stuff. \n\nLarry\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Sat, 21 Oct 2000 10:42:09 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "UnixWare"
}
] |
[
{
"msg_contents": "What do other DBs do with their output variables if there is an embedded SQL\nquery resulting in a NULL return value? What I mean is:\n\nexec sql select text into :txt:ind from ...\n\nIf text is NULL, ind will be set, but does txt change?\n\nI was just told Informix blanks txt.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sat, 21 Oct 2000 18:03:26 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "embedded sql with indicators in other DBs"
},
{
"msg_contents": "Michael Meskes wrote:\n\n> What do other DBs do with their output variables if there is an embedded SQL\n> query resulting in a NULL return value? What I mean is:\n>\n> exec sql select text into :txt:ind from ...\n>\n> If text is NULL, ind will be set, but does txt change?\n>\n> I was just told Informix blanks txt.\n\nAdabas D does not touch txt.\nSo you might set txt to a reasonable value in case of NULL, or hold the value\nin txt of a previous sql statement. On the other hand if you forget to\ninitialize txt, Informix protects you from yourself.\n\nAt least the standard (sql94-bindings-3 clause 7.1) does not mention to change\nthe value of the variable in null case. Looks like undefined.\n\nI'm undecided. Not touching it looks more right to me but it might break\nexisting applications.\n\nChristof\n\n\n\n\n",
"msg_date": "Wed, 25 Oct 2000 16:34:34 +0200",
"msg_from": "Christof Petig <christof.petig@wtal.de>",
"msg_from_op": false,
"msg_subject": "Re: embedded sql with indicators in other DBs"
}
] |
[
{
"msg_contents": "First a core dump which can be relieved by:\n\nIndex: catalog.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/catalog.c,v\nretrieving revision 1.34\ndiff -c -u -r1.34 catalog.c\n--- catalog.c\t2000/10/16 14:52:02\t1.34\n+++ catalog.c\t2000/10/21 17:07:09\n@@ -173,7 +173,7 @@\n bool\n IsSystemRelationName(const char *relname)\n {\n-\tif (relname[0] && relname[1] && relname[2])\n+\tif (relname && relname[0] && relname[1] && relname[2])\n \t\treturn (relname[0] == 'p' &&\n \t\t\t\trelname[1] == 'g' &&\n \t\t\t\trelname[2] == '_');\n\n(symptoms at the end of message)\n\nBut now the bit I don't see how to solve: the regression postmaster doesn't\nstartup because it can't find tmp_check/data/base/1/1259. The only files I\nsee are 1/{1255,PG_VERSION}. Where does 1259 come from?\n\nCheers,\n\nPatrick\n\n\n#0 IsSystemRelationName (relname=0x0) at catalog.c:176\n#1 0x807ed9a in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n#2 0x80e9272 in RelationInitLockInfo (relation=0x82af018) at lmgr.c:119\n#3 0x81202ef in formrdesc (relationName=0x816ad7e \"pg_class\", natts=22, \n att=0x8173600) at relcache.c:1193\n#4 0x8120c12 in RelationCacheInitialize () at relcache.c:1953\n#5 0x81266b3 in InitPostgres (dbname=0xbfbfd666 \"template1\", username=0x0)\n at postinit.c:329\n#6 0x807dde0 in BootstrapMain (argc=7, argv=0xbfbfd510) at bootstrap.c:358\n#7 0x80bc67c in main (argc=8, argv=0xbfbfd50c) at main.c:119\n#8 0x806367e in ___start ()\n(gdb) print *relation\n$3 = {rd_fd = -1, rd_nblocks = 0, rd_refcnt = 0, rd_myxactonly = 0 '\\000', \n rd_isnailed = 0 '\\000', rd_unlinked = 0 '\\000', rd_indexfound = 0 '\\000', \n rd_uniqueindex = 0 '\\000', rd_am = 0x1000001, rd_rel = 0x0, rd_id = 0, \n rd_indexlist = 0x82af0a0, rd_lockInfo = {lockRelId = {relId = 0, dbId = 0}}, \n rd_att = 0x0, rd_rules = 0x0, rd_rulescxt = 0x82af128, rd_istrat = 0x0, \n rd_support = 0x0, trigdesc = 0x0}\n\n",
"msg_date": "Sat, 21 Oct 2000 18:17:49 +0100",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "failed runcheck"
},
{
"msg_contents": "Applied\n\n> First a core dump which can be relieved by:\n> \n> Index: catalog.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/catalog.c,v\n> retrieving revision 1.34\n> diff -c -u -r1.34 catalog.c\n> --- catalog.c\t2000/10/16 14:52:02\t1.34\n> +++ catalog.c\t2000/10/21 17:07:09\n> @@ -173,7 +173,7 @@\n> bool\n> IsSystemRelationName(const char *relname)\n> {\n> -\tif (relname[0] && relname[1] && relname[2])\n> +\tif (relname && relname[0] && relname[1] && relname[2])\n> \t\treturn (relname[0] == 'p' &&\n> \t\t\t\trelname[1] == 'g' &&\n> \t\t\t\trelname[2] == '_');\n> \n> (symptoms at the end of message)\n> \n> But now the bit I don't see how to solve: the regression postmaster doesn't\n> startup because it can't find tmp_check/data/base/1/1259. The only files I\n> see are 1/{1255,PG_VERSION}. Where does 1259 come from?\n> \n> Cheers,\n> \n> Patrick\n> \n> \n> #0 IsSystemRelationName (relname=0x0) at catalog.c:176\n> #1 0x807ed9a in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n> #2 0x80e9272 in RelationInitLockInfo (relation=0x82af018) at lmgr.c:119\n> #3 0x81202ef in formrdesc (relationName=0x816ad7e \"pg_class\", natts=22, \n> att=0x8173600) at relcache.c:1193\n> #4 0x8120c12 in RelationCacheInitialize () at relcache.c:1953\n> #5 0x81266b3 in InitPostgres (dbname=0xbfbfd666 \"template1\", username=0x0)\n> at postinit.c:329\n> #6 0x807dde0 in BootstrapMain (argc=7, argv=0xbfbfd510) at bootstrap.c:358\n> #7 0x80bc67c in main (argc=8, argv=0xbfbfd50c) at main.c:119\n> #8 0x806367e in ___start ()\n> (gdb) print *relation\n> $3 = {rd_fd = -1, rd_nblocks = 0, rd_refcnt = 0, rd_myxactonly = 0 '\\000', \n> rd_isnailed = 0 '\\000', rd_unlinked = 0 '\\000', rd_indexfound = 0 '\\000', \n> rd_uniqueindex = 0 '\\000', rd_am = 0x1000001, rd_rel = 0x0, rd_id = 0, \n> rd_indexlist = 0x82af0a0, rd_lockInfo = {lockRelId = {relId = 0, dbId = 0}}, \n> rd_att = 0x0, rd_rules = 0x0, rd_rulescxt = 0x82af128, rd_istrat = 0x0, \n> rd_support = 0x0, trigdesc = 0x0}\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Oct 2000 14:41:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck"
},
{
"msg_contents": "Did you run make distclean? I've run regtests before committing changes.\n\nVadim\n\n----- Original Message -----\nFrom: \"Patrick Welche\" <prlw1@newn.cam.ac.uk>\nTo: <pgsql-hackers@postgresql.org>\nSent: Saturday, October 21, 2000 10:17 AM\nSubject: [HACKERS] failed runcheck\n\n\n> First a core dump which can be relieved by:\n>\n> Index: catalog.c\n> ===================================================================\n> RCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/catalog/catalog.c,v\n> retrieving revision 1.34\n> diff -c -u -r1.34 catalog.c\n> --- catalog.c 2000/10/16 14:52:02 1.34\n> +++ catalog.c 2000/10/21 17:07:09\n> @@ -173,7 +173,7 @@\n> bool\n> IsSystemRelationName(const char *relname)\n> {\n> - if (relname[0] && relname[1] && relname[2])\n> + if (relname && relname[0] && relname[1] && relname[2])\n> return (relname[0] == 'p' &&\n> relname[1] == 'g' &&\n> relname[2] == '_');\n>\n> (symptoms at the end of message)\n>\n> But now the bit I don't see how to solve: the regression postmaster\ndoesn't\n> startup because it can't find tmp_check/data/base/1/1259. The only files I\n> see are 1/{1255,PG_VERSION}. Where does 1259 come from?\n>\n> Cheers,\n>\n> Patrick\n>\n>\n> #0 IsSystemRelationName (relname=0x0) at catalog.c:176\n> #1 0x807ed9a in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n> #2 0x80e9272 in RelationInitLockInfo (relation=0x82af018) at lmgr.c:119\n> #3 0x81202ef in formrdesc (relationName=0x816ad7e \"pg_class\", natts=22,\n> att=0x8173600) at relcache.c:1193\n> #4 0x8120c12 in RelationCacheInitialize () at relcache.c:1953\n> #5 0x81266b3 in InitPostgres (dbname=0xbfbfd666 \"template1\",\nusername=0x0)\n> at postinit.c:329\n> #6 0x807dde0 in BootstrapMain (argc=7, argv=0xbfbfd510) at\nbootstrap.c:358\n> #7 0x80bc67c in main (argc=8, argv=0xbfbfd50c) at main.c:119\n> #8 0x806367e in ___start ()\n> (gdb) print *relation\n> $3 = {rd_fd = -1, rd_nblocks = 0, rd_refcnt = 0, rd_myxactonly = 0 '\\000',\n> rd_isnailed = 0 '\\000', rd_unlinked = 0 '\\000', rd_indexfound = 0\n'\\000',\n> rd_uniqueindex = 0 '\\000', rd_am = 0x1000001, rd_rel = 0x0, rd_id = 0,\n> rd_indexlist = 0x82af0a0, rd_lockInfo = {lockRelId = {relId = 0, dbId =\n0}},\n> rd_att = 0x0, rd_rules = 0x0, rd_rulescxt = 0x82af128, rd_istrat = 0x0,\n> rd_support = 0x0, trigdesc = 0x0}\n>\n\n",
"msg_date": "Sat, 21 Oct 2000 13:48:39 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck"
},
{
"msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> [ core dump due to ]\n> #0 IsSystemRelationName (relname=0x0) at catalog.c:176\n> #1 0x807ed9a in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n> #2 0x80e9272 in RelationInitLockInfo (relation=0x82af018) at lmgr.c:119\n> #3 0x81202ef in formrdesc (relationName=0x816ad7e \"pg_class\", natts=22, \n> att=0x8173600) at relcache.c:1193\n> #4 0x8120c12 in RelationCacheInitialize () at relcache.c:1953\n\nand proposes to fix this by having IsSystemRelationName make an\narbitrary decision about whether a NULL input pointer should be\nconsidered to represent a system relation name or not. I do not\nlike that, because AFAICS the decision is completely arbitrary.\nIsSystemRelationName shouldn't be called with a NULL pointer\nin the first place, and it has every right to cause a coredump\nif that happens.\n\nIt looks to me like the immediate bug here is that\nRelationGetPhysicalRelationName is returning a NULL pointer. Probably\nthere are some missing or out-of-order steps in relcache initialization?\n\nI've been offline for a couple days due to DSL line failure :-( so\nI haven't seen Vadim's latest checkins. But I'm betting this is a bug\nin the changes to use OIDs as physical relnames. Do we even need\nRelationGetPhysicalRelationName anymore, and if so, what does it mean?\n\nNext question: why is RelationInitLockInfo using\nRelationGetPhysicalRelationName to get the input data for\nIsSharedSystemRelationName --- shouldn't that be a test on logical\nrelation name? Or maybe the entire premise of\nIsSharedSystemRelationName is bogus now, and we ought to use some other\nway to decide if a relation is cross-database or not?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2000 22:17:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck "
},
{
"msg_contents": "> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > [ core dump due to ]\n> > #0 IsSystemRelationName (relname=0x0) at catalog.c:176\n> > #1 0x807ed9a in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n> > #2 0x80e9272 in RelationInitLockInfo (relation=0x82af018) at lmgr.c:119\n> > #3 0x81202ef in formrdesc (relationName=0x816ad7e \"pg_class\", natts=22, \n> > att=0x8173600) at relcache.c:1193\n> > #4 0x8120c12 in RelationCacheInitialize () at relcache.c:1953\n> \n> and proposes to fix this by having IsSystemRelationName make an\n> arbitrary decision about whether a NULL input pointer should be\n> considered to represent a system relation name or not. I do not\n> like that, because AFAICS the decision is completely arbitrary.\n> IsSystemRelationName shouldn't be called with a NULL pointer\n> in the first place, and it has every right to cause a coredump\n> if that happens.\n\nI have removed the change. It will dump core again.\n\n> \n> It looks to me like the immediate bug here is that\n> RelationGetPhysicalRelationName is returning a NULL pointer. Probably\n> there are some missing or out-of-order steps in relcache initialization?\n> \n> I've been offline for a couple days due to DSL line failure :-( so\n\nThat is bad. Mine has been great, but I had my first DSL burp for 3\nhours on Friday after 4 months of continuous uptime.\n\n> I haven't seen Vadim's latest checkins. But I'm betting this is a bug\n> in the changes to use OIDs as physical relnames. Do we even need\n> RelationGetPhysicalRelationName anymore, and if so, what does it mean?\n\nGood bet.\n\n> \n> Next question: why is RelationInitLockInfo using\n> RelationGetPhysicalRelationName to get the input data for\n> IsSharedSystemRelationName --- shouldn't that be a test on logical\n> relation name? Or maybe the entire premise of\n> IsSharedSystemRelationName is bogus now, and we ought to use some other\n> way to decide if a relation is cross-database or not?\n\nNo, because if they create a temp table that masks a system table in the\ncurrent session, you want the physical name so it can know if it is a\nreal system table, or a temp/fake one.\n\nYou can ask why would someone try this, but it will work, or do\nsomething while trying.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 22 Oct 2000 01:16:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Next question: why is RelationInitLockInfo using\n>> RelationGetPhysicalRelationName to get the input data for\n>> IsSharedSystemRelationName --- shouldn't that be a test on logical\n>> relation name? Or maybe the entire premise of\n>> IsSharedSystemRelationName is bogus now, and we ought to use some other\n>> way to decide if a relation is cross-database or not?\n\n> No, because if they create a temp table that masks a system table in the\n> current session, you want the physical name so it can know if it is a\n> real system table, or a temp/fake one.\n\nWell, you clearly don't want to be fooled by temp relations. I was\nsorta visualizing a check based on relation OIDs instead of names...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 01:26:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck "
},
{
"msg_contents": "On Sat, Oct 21, 2000 at 01:48:39PM -0700, Vadim Mikheev wrote:\n> Did you run make distclean? I've run regtests before committing changes.\n\nJust made sure - different computer - fresh cvs update/distclean/configure/make\ncd src/test/regress\ngmake clean\ngmake all\ngmake runcheck\n\nsame coredump\n\n#1 0x807f4be in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n ^^^^^^^^^^^\nie. relname[0] at catalog.c:176 is dereferencing a null pointer.\n\nCheers,\n\nPatrick\n",
"msg_date": "Mon, 23 Oct 2000 14:29:36 +0100",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: failed runcheck"
},
{
"msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> On Sat, Oct 21, 2000 at 01:48:39PM -0700, Vadim Mikheev wrote:\n>> Did you run make distclean? I've run regtests before committing changes.\n\n> Just made sure - different computer - fresh cvs update/distclean/configure/make\n> same coredump\n\n> #1 0x807f4be in IsSharedSystemRelationName (relname=0x0) at catalog.c:197\n> ^^^^^^^^^^^\n> ie. relname[0] at catalog.c:176 is dereferencing a null pointer.\n\nInteresting. Current sources pass regress tests on my machine, same as\nfor Vadim. I think you have found a platform-specific bug.\n\nCould you dig into it a little further and try to determine where the\nNULL is coming from?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:26:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck "
},
{
"msg_contents": "On Sun, Oct 22, 2000 at 01:26:07AM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Next question: why is RelationInitLockInfo using\n> >> RelationGetPhysicalRelationName to get the input data for\n> >> IsSharedSystemRelationName --- shouldn't that be a test on logical\n> >> relation name? Or maybe the entire premise of\n> >> IsSharedSystemRelationName is bogus now, and we ought to use some other\n> >> way to decide if a relation is cross-database or not?\n> \n> > No, because if they create a temp table that masks a system table in the\n> > current session, you want the physical name so it can know if it is a\n> > real system table, or a temp/fake one.\n> \n> Well, you clearly don't want to be fooled by temp relations. I was\n> sorta visualizing a check based on relation OIDs instead of names...\n> \n\nWell, when I did a test implementation of OID filenames, lo these many\nmoons ago, I hacked around this problem by adding the (fixed) shared\nsystem table oids to the static array that is searched for matches by\nIsSharedSystemRelationName.\n\nAdmittedly, a hack, but it got past all the regression tests.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Wed, 25 Oct 2000 12:52:17 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n>> Well, you clearly don't want to be fooled by temp relations. I was\n>> sorta visualizing a check based on relation OIDs instead of names...\n\n> Well, when I did a test implementation of OID filenames, lo these many\n> moons ago, I hacked around this problem by adding the (fixed) shared\n> system table oids to the static array that is searched for matches by\n> IsSharedSystemRelationName.\n> Admittedly, a hack, but it got past all the regression tests.\n\nNot a hack at all, IMHO, since all the shared system rels have\nnailed-down OIDs. It's a pure historical artifact that\nIsSharedSystemRelationName wasn't IsSharedSystemRelationOID in the\nfirst place.\n\nWe probably want to be thinking about merging the \"shared system\nrelation\" concept together with the \"tablespace\" concept once we start\nto implement tablespaces. But for now, I think testing the OIDs is\nfine.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 14:18:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failed runcheck "
},
{
"msg_contents": "On Mon, Oct 23, 2000 at 10:26:39AM -0400, Tom Lane wrote:\n> \n> Could you dig into it a little further and try to determine where the\n> NULL is coming from?\n\nAll clear now! (I did do another cvs update in the meantime, but either way,\nI can't now repeat the previously repeatable core dump)\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 25 Oct 2000 21:22:23 +0100",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: failed runcheck"
}
] |
[
{
"msg_contents": "Ok, I can't find it on the web site....\n\nHow do I check out the current tree? \n\n(I want to play with Peter_E's changes...)\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 22 Oct 2000 18:20:47 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "AnonCVS access?"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> How do I check out the current tree? \n\nUp-to-date info is in\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n\n(Hey Bruce, is this in the Developer's FAQ?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 19:43:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access? "
},
{
"msg_contents": "I couldn't find a link ANYWHERE on the site to this file. There are\nhints about it's existence, but it ain't linked obviously anywhere...\n\nThanks!\n\nLER\n\n* Tom Lane <tgl@sss.pgh.pa.us> [001022 18:44]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > How do I check out the current tree? \n> \n> Up-to-date info is in\n> \n> http://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n> \n> (Hey Bruce, is this in the Developer's FAQ?)\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 22 Oct 2000 18:46:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: AnonCVS access?"
},
{
"msg_contents": "No. I will add a mention to look on the programmer's manual for that\ninformation.\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > How do I check out the current tree? \n> \n> Up-to-date info is in\n> \n> http://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n> \n> (Hey Bruce, is this in the Developer's FAQ?)\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 22 Oct 2000 20:46:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access?"
},
{
"msg_contents": "On Sun, 22 Oct 2000, Bruce Momjian wrote:\n\n> No. I will add a mention to look on the programmer's manual for that\n> information.\n\nWhere is it in the programmer's manual? \n\nVince.\n\n> \n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > How do I check out the current tree? \n> > \n> > Up-to-date info is in\n> > \n> > http://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n> > \n> > (Hey Bruce, is this in the Developer's FAQ?)\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 22 Oct 2000 21:04:34 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access?"
},
{
"msg_contents": "I thought it was there?\n\n> On Sun, 22 Oct 2000, Bruce Momjian wrote:\n> \n> > No. I will add a mention to look on the programmer's manual for that\n> > information.\n> \n> Where is it in the programmer's manual? \n> \n> Vince.\n> \n> > \n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > How do I check out the current tree? \n> > > \n> > > Up-to-date info is in\n> > > \n> > > http://www.postgresql.org/devel-corner/docs/postgres/anoncvs.htm\n> > > \n> > > (Hey Bruce, is this in the Developer's FAQ?)\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > \n> > \n> > \n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 22 Oct 2000 21:05:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access?"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Where is it in the programmer's manual? \n\nIt's in the appendices of the developer's guide.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 21:07:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access? "
},
{
"msg_contents": "On Sun, 22 Oct 2000, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Where is it in the programmer's manual? \n> \n> It's in the appendices of the developer's guide.\n\nWhat developer's guide?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 22 Oct 2000 21:09:25 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access? "
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> It's in the appendices of the developer's guide.\n\n> What developer's guide?\n\nhttp://www.postgresql.org/devel-corner/docs/postgres/\nhas a developer's guide link ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 21:10:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access? "
},
{
"msg_contents": "On Sun, 22 Oct 2000, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> >> It's in the appendices of the developer's guide.\n> \n> > What developer's guide?\n> \n> http://www.postgresql.org/devel-corner/docs/postgres/\n> has a developer's guide link ...\n\nHiding in plain sight. I must have looked at it 5 times and never\neven saw it. Must be time for bed!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 22 Oct 2000 21:13:18 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: AnonCVS access? "
}
] |
[
{
"msg_contents": " Date: Sunday, October 22, 2000 @ 19:25:11\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/test\n from hub.org:/home/projects/pgsql/tmp/cvs-serv76550\n\nModified Files:\n\truntest triggers.sql \n\nRemoved Files:\n\tmklang.sql \n\n----------------------------- Log Message -----------------------------\n\nplpgsql regress tests seem a tad out of date ... repair bit rot.\n",
"msg_date": "Sun, 22 Oct 2000 19:25:11 -0400 (EDT)",
"msg_from": "Tom Lane <tgl@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/pl/plpgsql/test (runtest triggers.sql mklang.sql)"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> plpgsql regress tests seem a tad out of date ... repair bit rot.\n\n> What's the relation of this test suite to the \"plpgsql\" test in the\n> regression tests? From the comments surrounding it it seems they're\n> related.\n\nI think it may be an ancestor of the standard regress test. Could well\nbe that that whole directory is now redundant and ought to be removed,\nbut until Jan is back online and I can ask him, I wasn't going to go\nthat far ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:59:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/pl/plpgsql/test (runtest triggers.sql\n\tmklang.sql)"
},
{
"msg_contents": "Tom Lane writes:\n\n> plpgsql regress tests seem a tad out of date ... repair bit rot.\n\nWhat's the relation of this test suite to the \"plpgsql\" test in the\nregression tests? From the comments surrounding it it seems they're\nrelated.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 23 Oct 2000 17:01:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/pl/plpgsql/test (runtest triggers.sql\n\tmklang.sql)"
}
] |
[
{
"msg_contents": " Date: Sunday, October 22, 2000 @ 19:32:45\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected\n from hub.org:/home/projects/pgsql/tmp/cvs-serv80691/src/test/regress/expected\n\nModified Files:\n\tplpgsql.out inet.out foreign_key.out errors.out \n\n----------------------------- Log Message -----------------------------\n\nSome small polishing of Mark Hollomon's cleanup of DROP command: might\nas well allow DROP multiple INDEX, RULE, TYPE as well. Add missing\nCommandCounterIncrement to DROP loop, which could cause trouble otherwise\nwith multiple DROP of items affecting same catalog entries. Try to\nbring a little consistency to various error messages using 'does not exist',\n'nonexistent', etc --- I standardized on 'does not exist' since that's\nwhat the vast majority of the existing uses seem to be.\n",
"msg_date": "Sun, 22 Oct 2000 19:32:46 -0400 (EDT)",
"msg_from": "Tom Lane <tgl@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/test/regress/expected (plpgsql.out inet.out foreign_key.out\n\terrors.out)"
},
{
"msg_contents": "> Modified Files:\n...\n> Some small polishing of Mark Hollomon's cleanup of DROP command: might\n> as well allow DROP multiple INDEX, RULE, TYPE as well. Add missing\n> CommandCounterIncrement to DROP loop, which could cause trouble otherwise\n> with multiple DROP of items affecting same catalog entries. Try to\n> bring a little consistency to various error messages using 'does not exist',\n> 'nonexistent', etc --- I standardized on 'does not exist' since that's\n> what the vast majority of the existing uses seem to be.\n\nGood idea(s). Thanks for cleaning up the error messages...\n\n - Thomas\n",
"msg_date": "Mon, 23 Oct 2000 06:53:42 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/test/regress/expected (plpgsql.out inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "> > Modified Files:\n> ...\n> > Some small polishing of Mark Hollomon's cleanup of DROP command: might\n> > as well allow DROP multiple INDEX, RULE, TYPE as well. Add missing\n> > CommandCounterIncrement to DROP loop, which could cause trouble otherwise\n> > with multiple DROP of items affecting same catalog entries. Try to\n> > bring a little consistency to various error messages using 'does not exist',\n> > 'nonexistent', etc --- I standardized on 'does not exist' since that's\n> > what the vast majority of the existing uses seem to be.\n> \n> Good idea(s). Thanks for cleaning up the error messages...\n\nSpeaking of error messages, one idea for 7.2 might be to prepend\nnumbers to the error messages. That way, people could look up a more\ndetailed description of the error and possible causes. Now, none of us\nhave the time to do that, but the new companies may, and they will need\nthose numbers to help with technical support anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Oct 2000 10:26:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/test/regress/expected (plpgsql.out inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Speaking of error messages, one idea for 7.2 might be to prepend\n> numbers to the error messages.\n\nIsn't that long since on the TODO list? I know we've had long\ndiscussions about a thoroughgoing revision of error reporting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:32:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/test/regress/expected (plpgsql.out inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Speaking of error messages, one idea for 7.2 might be to prepend\n> > numbers to the error messages.\n> \n> Isn't that long since on the TODO list? I know we've had long\n> discussions about a thoroughgoing revision of error reporting.\n\nYes. We have:\n\n\t* Allow elog() to return error codes, not just messages\n\t* Allow international error message support and add error codes\n\nI just thought I would mention it is on my radar screen now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Oct 2000 10:33:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/test/regress/expected (plpgsql.out inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Speaking of error messages, one idea for 7.2 might be to prepended\n> > > numbers to the error messages.\n> > \n> > Isn't that long since on the TODO list? I know we've had long\n> > discussions about a thoroughgoing revision of error reporting.\n> \n> Yes. We have:\n> \n> \t* Allow elog() to return error codes, not just messages\n> \t* Allow international error message support and add error codes\n> \n> I just thought I would mention it is on my radar screen now.\n\nYeah, it's on mine too. The only thing I'm still unsure about the\n\"international\" part. Does anyone know of a gettext-ish thing that has an\nacceptable license?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 23 Oct 2000 17:30:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out\n inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Speaking of error messages, one idea for 7.2 might be to prepended\n> > > > numbers to the error messages.\n> > > \n> > > Isn't that long since on the TODO list? I know we've had long\n> > > discussions about a thoroughgoing revision of error reporting.\n> > \n> > Yes. We have:\n> > \n> > \t* Allow elog() to return error codes, not just messages\n> > \t* Allow international error message support and add error codes\n> > \n> > I just thought I would mention it is on my radar screen now.\n> \n> Yeah, it's on mine too. The only thing I'm still unsure about the\n> \"international\" part. Does anyone know of a gettext-ish thing that has an\n> acceptable license?\n\nYes, they must exist. I want a solution that doesn't make it difficult\nfor people add error messages. Having codes in the C files and error\nmessages in another file is quite a pain. My idea would enable us to\nnumber the error messages, keep the English text for the message in the\nsame file next to the code, then allow international support by creating\na utility that can dump out all the codes with the Engligh text to allow\ntranslators to make non-English versions. The translated file can then\nbe used by the backend to generate messages in other languagues using a\nSET command.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Oct 2000 11:33:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out\n inet.out\n\tforeign_key.out errors.out)"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Speaking of error messages, one idea for 7.2 might be to prepended\n> > numbers to the error messages.\n> \n> Isn't that long since on the TODO list? I know we've had long\n> discussions about a thoroughgoing revision of error reporting.\n\nYes, yes, yes! We need in numbers especially because of we\nhopefully will have savepoints in 7.2 and so we would get powerful\nerror handling by *applications* not by *human* only.\n\nVadim\n\n\n",
"msg_date": "Mon, 23 Oct 2000 14:23:06 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: pgsql/src/test/regress/expected (plpgsql.out\n\tinet.out foreign_key.out errors.out)"
}
] |
[
{
"msg_contents": "this patch and tar archive will add support for the darwin/osxpb to the current cvs tree.\n\na couple things to note:\n\n- unpack the tar archive in pgsql/\n\n- the config.guess and config.sub files have been updated by apple to support their new os. i don't think these changes have been folded back in to the main archive yet (at least they aren't in the pgsql cvs yet). these need to be copied over from /usr/libexec/config.* in order to obtain the correct os (e.g. powerpc-apple-darwin1.2 on PB).\n\n- the diff only patches configure.in so autoconf needs to be rerun\n\n- the situation with darwin's implementation of sysv semaphores is in progress at the moment (the shm/ipc support *is* there and seems to work fine). so this patch uses HAVE_SYS_SEM_H to conditionally build the src/backend/port/darwin semaphore code (borrowed from qnx4). I've followed the BeOS example of including the necessary sem.h declarations in src/include/port/darwin.h. this is rather messy at the moment and can be dumped once apple releases a version of PB with sysv sem built into the kernel.\n\n- i'm a bit confused over the __powerpc__ tas function in s_lock.c (there i assume for the ppc-linux port). it doesn't compile at all on darwin so i just added a version that does work on darwin under DARWIN_OS. it's potentially a bit confusing and s_lock.c should probably be changed to include a better conditional.\n\nbruce",
"msg_date": "Sun, 22 Oct 2000 19:51:06 -0400",
"msg_from": "Bruce Hartzler <bruceh@mail.utexas.edu>",
"msg_from_op": true,
"msg_subject": "add darwin/osxpb support to cvs"
},
{
"msg_contents": "Do you actually *need* -O0 on darwin with current sources?\nOr is that a leftover from 7.0.* ? AFAIK -O2 should work on PPC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 12:43:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add darwin/osxpb support to cvs "
},
{
"msg_contents": "I'll take this and integrate it. Please, nobody commit this right now,\nI'm messing with things...\n\nBruce Hartzler writes:\n\n> this patch and tar archive will add support for the darwin/osxpb to the current cvs tree.\n> \n> a couple things to note:\n> \n> - unpack the tar archive in pgsql/\n> \n> - the config.guess and config.sub files have been updated by apple to support their new os. i don't think these changes have been folded back in to the main archive yet (at least they aren't in the pgsql cvs yet). these need to be copied over from /usr/libexec/config.* in order to obtain the correct os (e.g. powerpc-apple-darwin1.2 on PB).\n> \n> - the diff only patches configure.in so autoconf needs to be rerun\n> \n> - the situation with darwin's implementation of sysv semaphores is in progress at the moment (the shm/ipc support *is* there and seems to work fine). so this patch uses HAVE_SYS_SEM_H to conditionally build the src/backend/port/darwin semaphore code (borrowed from qnx4). I've followed the BeOS example of including the necessary sem.h declarations in src/include/port/darwin.h. this is rather messy at the moment and can be dumped once apple releases a version of PB with sysv sem built into the kernel.\n> \n> - i'm a bit confused over the __powerpc__ tas function in s_lock.c (there i assume for the ppc-linux port). it doesn't compile at all on darwin so i just added a version that does work on darwin under DARWIN_OS. it's potentially a bit confusing and s_lock.c should probably be changed to include a better conditional.\n> \n> bruce\n> \n> \n> \n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 23 Oct 2000 19:29:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: add darwin/osxpb support to cvs"
},
{
"msg_contents": "Bruce Hartzler writes:\n\n> this patch and tar archive will add support for the darwin/osxpb to the current cvs tree.\n\nNext time you can make your patch with \"diff -crN\" so that you don't have\nto create a separate tarball.\n\n> - the config.guess and config.sub files have been updated by apple to\n> support their new os. i don't think these changes have been folded\n> back in to the main archive yet\n\nI installed the latest ones from GNU which claim to support it according\nto the ChangeLog.\n\n> - the situation with darwin's implementation of sysv semaphores is in\n> progress at the moment (the shm/ipc support *is* there and seems to\n> work fine). so this patch uses HAVE_SYS_SEM_H to conditionally build\n\nHAVE_SYS_SEM_H is a preprocessor symbol, not a makefile variable.\n\n> the src/backend/port/darwin semaphore code (borrowed from qnx4).\n\nIf you could arrange it, could you use the same files as QNX, perhaps with\nan #ifdef here or there\n\n> I've followed the BeOS example of including the necessary sem.h\n> declarations in src/include/port/darwin.h. this is rather messy at the\n> moment and can be dumped once apple releases a version of PB with sysv\n> sem built into the kernel.\n\nThe include/port/beos.h isn't really a shining example of how to do this. \nThis file is include *everywhere*, but we don't want to know about\nsemaphores everywhere. I'd prefer it if you use the QNX approach and\nsymlink sem.h into an include directory (e.g., /usr/local/include/sys),\nsince it's only temporary anyway.\n\nAlso, overriding configure results (� la #undef HAVE_UNION_SEMUN) isn't\ncool.\n\nI'm also somewhat concerned about the dynloader.c because it's under the\nApache license which has a funny advertisement clause. Comments from\nsomeone?\n\n> - i'm a bit confused over the __powerpc__ tas function in s_lock.c\n> (there i assume for the ppc-linux port). 
it doesn't compile at all on\n> darwin so i just added a version that does work on darwin under\n> DARWIN_OS. it's potentially a bit confusing and s_lock.c should\n> probably be changed to include a better conditional.\n\nThe compiler probably predefines something like __darwin__, which you\nshould use. You can find out with \n\ntouch foo.h\ncc -E -dM foo.h\nrm foo.h\n\nAnd finally, what's up with this:\n\nCFLAGS='-O0 -g -traditional-cpp'\n\n? What's wrong with the \"modern-cpp\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 24 Oct 2000 20:44:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] add darwin/osxpb support to cvs"
},
{
"msg_contents": ">Next time you can make your patch with \"diff -crN\" so that you don't have\n>to create a separate tarball.\n\nNo problem. I tried just doing a diff with cvs but wasn't able to get \nthe -N option to work. This is the first time I've ever tried \npatching unix software so I'm sorry if it's a bit messy. Thanks for \nyou help in getting it right.\n\n\n> > - the config.guess and config.sub files have been updated by apple to\n>> support their new os. i don't think these changes have been folded\n>> back in to the main archive yet\n>\n>I installed the latest ones from GNU which claim to support it according\n>to the ChangeLog.\n\nI'll try checking out the new versions and see if they work. I can \nsend you a diff with the ones I have here if you want to see the \nadditions Apple made.\n\n\n> > I've followed the BeOS example of including the necessary sem.h\n>> declarations in src/include/port/darwin.h. this is rather messy at the\n>> moment and can be dumped once apple releases a version of PB with sysv\n>> sem built into the kernel.\n>\n>The include/port/beos.h isn't really a shining example of how to do this.�\n>This file is include *everywhere*, but we don't want to know about\n>semaphores everywhere. I'd prefer it if you use the QNX approach and\n>symlink sem.h into an include directory (e.g., /usr/local/include/sys),\n>since it's only temporary anyway.\n\nI agree it's rather messy. I originally had just used the sym link in \n/usr/local/include, but as I said, some of the newer Darwin kernels \nhave the sysv sem.h file already there and I was worried about people \noverwriting it. If you think it's fair to put this responsibility on \nthe end user, I'm ok with that. I just thought it might be nice to \ncheck and see if the semaphore implementation was already there, and \nif not, build the necessary parts. 
I'll switch it back to the way it \nwas.\n\n>I'm also somewhat concerned about the dynloader.c because it's under the\n>Apache license which has a funny advertisement clause. Comments from\n>someone?\n\nI was wondering about this too. I'll try emailing Wilfredo Sanchez and \nsee if I can get the code outside the Apache license. This would \nprobably be easiest.\n\n\n>The compiler probably predefines something like __darwin__, which you\n>should use. You can find out with\n\nIt doesn't actually provide __darwin__ but as I mentioned in a \nprevious post, Apple is suggesting people use __APPLE__ combined with \n__ppc__ or __i386__ for the different darwin builds. The section in \ns_lock.c should probably be changed to reflect this instead of using \nDARWIN_OS.\n\n\n>And finally, what's up with this:\n>\n>CFLAGS='-O0 -g -traditional-cpp'\n>\n>? What's wrong with the \"modern-cpp\"?\n\nApple's \"modern-cpp\" called cpp-precomp uses some strange parsing \nthat breaks on several files in the postgresql build. They are still \nworking on it apparently and are suggesting people simply use the \n-traditional-cpp flag when this happens instead of trying to update \nthe files.\n\nThanks again for all your comments and suggestions.\n\nBruce\n",
"msg_date": "Tue, 24 Oct 2000 17:16:41 -0400",
"msg_from": "Bruce Hartzler <bruceh@mail.utexas.edu>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] add darwin/osxpb support to cvs"
},
{
"msg_contents": "[ folks in Cc: were also interested in Darwin/MacOS X support ]\n\nBruce Hartzler writes:\n\n> this patch and tar archive will add support for the darwin/osxpb to the current cvs tree.\n\nGreetings.\n\nI installed parts of your patch, which should at least get you, or other\ninterested people going.\n\nOpen issues:\n\n* Dynamic loader code that's under a BSD license. I've just installed\ndummy files for now.\n\n* It was reported that the assembler doesn't like the__powerpc__ code in\nbackend/storage/buffer/s_lock.h. Your code looks essentially the same, so\nit seems to be a syntax discrepancy between the GNU assembler and whatever\nyour system uses. Perhaps we could check if the altered code works on\nother systems as well before we install duplicates? (The GNU assembler\nought to be pretty flexible.)\n\n* Semaphore support reportedly does exist in new kernels, so I don't think\nwe need to add that to the tree. Maybe a separate patch for older kernels\nis in order.\n\nBesides that, try to build all the optional parts (C++, Tcl, ODBC, ...) as\nwell.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 31 Oct 2000 21:15:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: add darwin/osxpb support to cvs"
},
{
"msg_contents": ">Greetings.\n>\n>I installed parts of your patch, which should at least get you, or other\n>interested people going.\n>Open issues:\n>\n>* Dynamic loader code that's under a BSD license. I've just installed\n>dummy files for now.\n\nThanks Peter,\n\nI heard back from Fred about the dynamic loader code. He gave the ok \nto use it outside the Apache license (see below).\n\n>* It was reported that the assembler doesn't like the__powerpc__ code in\n>backend/storage/buffer/s_lock.h. Your code looks essentially the same, so\n>it seems to be a syntax discrepancy between the GNU assembler and whatever\n>your system uses. Perhaps we could check if the altered code works on\n>other systems as well before we install duplicates? (The GNU assembler\n>ought to be pretty flexible.)\n\nYou are probably right. Apple's gcc-based compiler was split off from \nthe main tree a while back when they were adding a bunch of \nNEXT-based objective-c compiling routines to it. I know people have \nbeen trying to merge the changes back into the main gcc tree (and new \ngcc changes into Apple's) but I don't know how long this process it \ngoing to go on for. For now though, it won't compile the other PPC \ncode already in there.\n\n\n>* Semaphore support reportedly does exist in new kernels, so I don't think\n>we need to add that to the tree. Maybe a separate patch for older kernels\n>is in order.\n\nThis sound fine with me. The C++ stuff worked ok with the cvs tree. \nThe Tcl stuff is on the way and might be working now (it's on the \nDarwin cvs server). 
I'll look into ODBC.\n\nThanks for your help,\n\nBruce\n\n-------------------------------------------\n\nDelivered-To: bruceh@mail.utexas.edu\nDate: Wed, 25 Oct 2000 12:24:07 +0000\nReply-To: wsanchez@apple.com\nFrom: Wilfredo Sanchez <wsanchez@apple.com>\nTo: Bruce Hartzler <bruceh@mail.utexas.edu>\nMime-Version: 1.0 (Apple Message framework v337)\nSubject: Re: dynloader code in Apache\nStatus: R\n\n> I was wondering if you had released your dynloader code under any \n> other licenses besides Apache's? I've brought it over in order to get \n> Postgresql running on Darwin but I think the people managing the \n> Postgresql code are a bit uneasy about using it under Apache's \n> advertising license. Anything you might suggest would be most helpful.\n\n The code is pretty simple to reproduce from just looking at the headers.\nYou needn't worry about the license. Pretend I sent it to you as a patch.\n\n\t-Fred\n\nWilfredo Sánchez, wsanchez@apple.com\nOpen Source Engineering Lead\nApple Computer, Inc., Core Operating System Group\n1 Infinite Loop, Cupertino, CA 94086, 408.974-5174\n\n",
"msg_date": "Wed, 1 Nov 2000 04:07:53 +0200",
"msg_from": "Bruce Hartzler <bruceh@mail.utexas.edu>",
"msg_from_op": true,
"msg_subject": "Re: add darwin/osxpb support to cvs"
}
] |
[
{
"msg_contents": "Shared libpq works for me. I bet you were getting tripped up\nby some ENV vars I set globally...\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Sun, 22 Oct 2000 19:23:51 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "GCC: Works fine for me..."
}
] |
[
{
"msg_contents": "I've already cc'd PeterE. I suspect we want the -lpq build to have\n-lsocket (at least on THIS (unixware) platform.\n\nLarry\n* KuroiNeko <evpopkov@carrier.kiev.ua> [001022 19:25]:\n> \n> Well, all in all, adding -lsocket is just enough. I was trying to compile\n> /home/ed/t.c, which contains just PQconnectdb() and PQfinish().\n> \n> $ cc t.c -o t -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq\n> \n> The above fails with undefined symbols:\n> \n> Undefined first referenced\n> symbol in file\n> inet_aton libpq.so\n> gethostbyname libpq.so\n> UX:ld: ERROR: Symbol referencing errors. No output written to t\n> \n> Adding -lsocket will make it compile. I mean this is _probably_ not a big\n> deal, but feels abit inconsistent. After all, t.c itself calls nothing from\n> -lsocket\n> Of course, the final decision should be made by maintainers, but I can't\n> help feeling this issue needs to be put up, or at least registered in your\n> records.\n> I'll try building .so and let you know.\n> \n> Thx\n> \n> Ed\n> \n> \n> --\n> \n> contaminated fish and microchips\n> huge supertankers on Arabian trips\n> oily propaganda from the leaders' lips\n> all about the future\n> there's people over here, people over there\n> everybody's looking for a little more air\n> crossing all the borders just to take their share\n> planning for the future\n> \n> Rainbow, Difficult to Cure\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 22 Oct 2000 19:26:40 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: Linking"
}
] |
[
{
"msg_contents": "\nHere is some regression stuff. CVS as of about an hour or so ago \n(right after Tom answered my note...)\n\n\n=============== Notes... =================\npostmaster must already be running for the regression tests to succeed.\nThe time zone is set to PST8PDT for these tests by the client frontend.\nPlease report any apparent problems to ports@postgresql.org\nSee regress/README for more information.\n\n=============== dropping old regression database... =================\nERROR: DROP DATABASE: Database \"regression\" does not exist\ndropdb: database removal failed\n=============== creating new regression database... =================\nCREATE DATABASE\n=============== installing languages... =================\ninstalling PL/pgSQL .. ok\n=============== running regression queries... =================\nboolean .. ok\nchar .. ok\nname .. ok\nvarchar .. ok\ntext .. ok\nint2 .. ok\nint4 .. ok\nint8 .. ok\noid .. ok\nfloat4 .. ok\nfloat8 .. ok\nnumeric .. ok\nstrings .. ok\nnumerology .. ok\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ndate .. ok\ntime .. ok\ntimestamp .. ok\ninterval .. ok\nabstime .. ok\nreltime .. ok\ntinterval .. ok\ninet .. ok\ncomments .. failed\noidjoins .. ok\ntype_sanity .. ok\nopr_sanity .. ok\ngeometry .. failed\nhorology .. ok\ncreate_function_1 .. ok\ncreate_type .. ok\ncreate_table .. ok\ncreate_function_2 .. ok\ncopy .. ok\nconstraints .. ok\ntriggers .. ok\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. ok\ncreate_index .. ok\ninherit .. ok\ncreate_view .. ok\nsanity_check .. ok\nerrors .. ok\nselect .. ok\nselect_into .. ok\nselect_distinct .. ok\nselect_distinct_on .. ok\nselect_implicit .. ok\nselect_having .. ok\nsubselect .. ok\nunion .. ok\ncase .. ok\njoin .. ok\naggregates .. ok\ntransactions .. ok\nrandom .. ok\nportals .. ok\narrays .. ok\nbtree_index .. ok\nhash_index .. ok\nmisc .. ok\nselect_views .. ok\nalter_table .. ok\nportals_p2 .. ok\nrules .. 
ok\nforeign_key .. ok\nlimit .. ok\nplpgsql .. failed\ntemp .. ok\n\n*** expected/comments.out\tFri Jul 14 10:43:55 2000\n--- results/comments.out\tSun Oct 22 19:38:45 2000\n***************\n*** 42,47 ****\n--- 42,48 ----\n */\n /* This block comment surrounds a query which itself has a block comment...\n SELECT /* embedded single line */ 'embedded' AS x2;\n+ ERROR: Unterminated /* comment\n */\n SELECT -- continued after the following block comments...\n /* Deeply nested comment.\n***************\n*** 57,65 ****\n Now just one deep...\n */\n 'deeply nested example' AS sixth;\n- sixth \n- -----------------------\n- deeply nested example\n- (1 row)\n- \n /* and this is the end of the file */\n--- 58,62 ----\n Now just one deep...\n */\n 'deeply nested example' AS sixth;\n /* and this is the end of the file */\n+ ERROR: parser: parse error at or near \"*/\"\n\n----------------------\n\n*** expected/geometry.out\tTue Sep 12 16:07:16 2000\n--- results/geometry.out\tSun Oct 22 19:38:49 2000\n***************\n*** 443,454 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! 
| ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.59807621137373),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 443,454 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! 
| ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983794))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983794))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 456,467 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181134),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181134),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 456,467 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359017709e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718035418e-11),(2.12132034353258,-2.12132034358671),(-4.59307077053127e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181135),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n\n----------------------\n\n*** expected/plpgsql.out\tSun Oct 22 18:32:45 2000\n--- results/plpgsql.out\tSun Oct 22 19:40:40 2000\n***************\n*** 1007,1053 ****\n--- 1007,1095 ----\n -- Second we install the wall connectors\n --\n insert into WSlot values ('WS.001.1a', '001', '', '');\n+ ERROR: fmgr_info: language 19040 has old-style handler\n insert into WSlot values ('WS.001.1b', '001', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.001.2a', '001', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.001.2b', '001', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.001.3a', '001', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.001.3b', '001', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.1a', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.1b', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.2a', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.2b', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.3a', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.002.3b', '002', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function 
pointer\n insert into WSlot values ('WS.003.1a', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.003.1b', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.003.2a', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.003.2b', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.003.3a', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.003.3b', '003', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.1a', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.1b', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.2a', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.2b', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.3a', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.101.3b', '101', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.1a', '102', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.1b', '102', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.2a', '102', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.2b', '102', '', '');\n+ ERROR: 
Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.3a', '102', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.102.3b', '102', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.1a', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.1b', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.2a', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.2b', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.3a', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.105.3b', '105', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.1a', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.1b', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.2a', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.2b', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.3a', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into WSlot values ('WS.106.3b', '106', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- Now create the patch fields and their slots\n --\n***************\n*** 1056,1081 ****\n--- 1098,1141 ----\n -- The cables for these will 
be made later, so they are unconnected for now\n --\n insert into PSlot values ('PS.base.a1', 'PF0_1', '', '');\n+ ERROR: fmgr_info: language 19040 has old-style handler\n insert into PSlot values ('PS.base.a2', 'PF0_1', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.a3', 'PF0_1', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.a4', 'PF0_1', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.a5', 'PF0_1', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.a6', 'PF0_1', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- These are already wired to the wall connectors\n --\n insert into PSlot values ('PS.base.b1', 'PF0_1', '', 'WS.002.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.b2', 'PF0_1', '', 'WS.002.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.b3', 'PF0_1', '', 'WS.002.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.b4', 'PF0_1', '', 'WS.002.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.b5', 'PF0_1', '', 'WS.002.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.b6', 'PF0_1', '', 'WS.002.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.c1', 'PF0_1', '', 'WS.003.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.c2', 'PF0_1', '', 'WS.003.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function 
pointer\n insert into PSlot values ('PS.base.c3', 'PF0_1', '', 'WS.003.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.c4', 'PF0_1', '', 'WS.003.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.c5', 'PF0_1', '', 'WS.003.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.c6', 'PF0_1', '', 'WS.003.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- This patchfield will be renamed later into PF0_2 - so its\n -- slots references in pfname should follow\n***************\n*** 1082,1123 ****\n--- 1142,1219 ----\n --\n insert into PField values ('PF0_X', 'Phonelines basement');\n insert into PSlot values ('PS.base.ta1', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.ta2', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.ta3', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.ta4', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.ta5', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.ta6', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.tb1', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.tb2', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.tb3', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n 
insert into PSlot values ('PS.base.tb4', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.tb5', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.base.tb6', 'PF0_X', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PField values ('PF1_1', 'Wallslots 1st floor');\n insert into PSlot values ('PS.1st.a1', 'PF1_1', '', 'WS.101.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.a2', 'PF1_1', '', 'WS.101.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.a3', 'PF1_1', '', 'WS.101.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.a4', 'PF1_1', '', 'WS.101.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.a5', 'PF1_1', '', 'WS.101.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.a6', 'PF1_1', '', 'WS.101.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.b1', 'PF1_1', '', 'WS.102.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.b2', 'PF1_1', '', 'WS.102.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.b3', 'PF1_1', '', 'WS.102.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.b4', 'PF1_1', '', 'WS.102.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.b5', 'PF1_1', '', 'WS.102.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into 
PSlot values ('PS.1st.b6', 'PF1_1', '', 'WS.102.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c1', 'PF1_1', '', 'WS.105.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c2', 'PF1_1', '', 'WS.105.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c3', 'PF1_1', '', 'WS.105.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c4', 'PF1_1', '', 'WS.105.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c5', 'PF1_1', '', 'WS.105.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.c6', 'PF1_1', '', 'WS.105.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d1', 'PF1_1', '', 'WS.106.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d2', 'PF1_1', '', 'WS.106.1b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d3', 'PF1_1', '', 'WS.106.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d4', 'PF1_1', '', 'WS.106.2b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d5', 'PF1_1', '', 'WS.106.3a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.d6', 'PF1_1', '', 'WS.106.3b');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- Now we wire the wall connectors 1a-2a in room 001 to the\n -- patchfield. 
In the second update we make an error, and\n***************\n*** 1127,1197 ****\n update PSlot set backlink = 'WS.001.1b' where slotname = 'PS.base.a3';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a3 \n! WS.001.2a | 001 | | \n! WS.001.2b | 001 | | \n! WS.001.3a | 001 | | \n! WS.001.3b | 001 | | \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | \n! PS.base.a3 | PF0_1 | | WS.001.1b \n! PS.base.a4 | PF0_1 | | \n! PS.base.a5 | PF0_1 | | \n! PS.base.a6 | PF0_1 | | \n! (6 rows)\n \n update PSlot set backlink = 'WS.001.2a' where slotname = 'PS.base.a3';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | \n! WS.001.3a | 001 | | \n! WS.001.3b | 001 | | \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | \n! PS.base.a5 | PF0_1 | | \n! PS.base.a6 | PF0_1 | | \n! (6 rows)\n \n update PSlot set backlink = 'WS.001.1b' where slotname = 'PS.base.a2';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! 
----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a2 \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | \n! WS.001.3a | 001 | | \n! WS.001.3b | 001 | | \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | WS.001.1b \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | \n! PS.base.a5 | PF0_1 | | \n! PS.base.a6 | PF0_1 | | \n! (6 rows)\n \n --\n -- Same procedure for 2b-3b but this time updating the WSlot instead\n--- 1223,1257 ----\n update PSlot set backlink = 'WS.001.1b' where slotname = 'PS.base.a3';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n update PSlot set backlink = 'WS.001.2a' where slotname = 'PS.base.a3';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n update PSlot set backlink = 'WS.001.1b' where slotname = 'PS.base.a2';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! 
(0 rows)\n \n --\n -- Same procedure for 2b-3b but this time updating the WSlot instead\n***************\n*** 1202,1407 ****\n update WSlot set backlink = 'PS.base.a6' where slotname = 'WS.001.3a';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a2 \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | PS.base.a4 \n! WS.001.3a | 001 | | PS.base.a6 \n! WS.001.3b | 001 | | \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | WS.001.1b \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | WS.001.2b \n! PS.base.a5 | PF0_1 | | \n! PS.base.a6 | PF0_1 | | WS.001.3a \n! (6 rows)\n \n update WSlot set backlink = 'PS.base.a6' where slotname = 'WS.001.3b';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a2 \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | PS.base.a4 \n! WS.001.3a | 001 | | \n! WS.001.3b | 001 | | PS.base.a6 \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | WS.001.1b \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | WS.001.2b \n! PS.base.a5 | PF0_1 | | \n! PS.base.a6 | PF0_1 | | WS.001.3b \n! 
(6 rows)\n \n update WSlot set backlink = 'PS.base.a5' where slotname = 'WS.001.3a';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a2 \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | PS.base.a4 \n! WS.001.3a | 001 | | PS.base.a5 \n! WS.001.3b | 001 | | PS.base.a6 \n! (6 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | WS.001.1b \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | WS.001.2b \n! PS.base.a5 | PF0_1 | | WS.001.3a \n! PS.base.a6 | PF0_1 | | WS.001.3b \n! (6 rows)\n \n insert into PField values ('PF1_2', 'Phonelines 1st floor');\n insert into PSlot values ('PS.1st.ta1', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.ta2', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.ta3', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.ta4', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.ta5', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.ta6', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb1', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb2', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb3', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb4', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb5', 'PF1_2', '', '');\n insert into PSlot values ('PS.1st.tb6', 'PF1_2', '', '');\n --\n -- Fix the wrong name for patchfield PF0_2\n --\n update PField set name = 'PF0_2' where name = 'PF0_X';\n select * from PSlot order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------------------+--------+----------------------+----------------------\n! 
PS.1st.a1 | PF1_1 | | WS.101.1a \n! PS.1st.a2 | PF1_1 | | WS.101.1b \n! PS.1st.a3 | PF1_1 | | WS.101.2a \n! PS.1st.a4 | PF1_1 | | WS.101.2b \n! PS.1st.a5 | PF1_1 | | WS.101.3a \n! PS.1st.a6 | PF1_1 | | WS.101.3b \n! PS.1st.b1 | PF1_1 | | WS.102.1a \n! PS.1st.b2 | PF1_1 | | WS.102.1b \n! PS.1st.b3 | PF1_1 | | WS.102.2a \n! PS.1st.b4 | PF1_1 | | WS.102.2b \n! PS.1st.b5 | PF1_1 | | WS.102.3a \n! PS.1st.b6 | PF1_1 | | WS.102.3b \n! PS.1st.c1 | PF1_1 | | WS.105.1a \n! PS.1st.c2 | PF1_1 | | WS.105.1b \n! PS.1st.c3 | PF1_1 | | WS.105.2a \n! PS.1st.c4 | PF1_1 | | WS.105.2b \n! PS.1st.c5 | PF1_1 | | WS.105.3a \n! PS.1st.c6 | PF1_1 | | WS.105.3b \n! PS.1st.d1 | PF1_1 | | WS.106.1a \n! PS.1st.d2 | PF1_1 | | WS.106.1b \n! PS.1st.d3 | PF1_1 | | WS.106.2a \n! PS.1st.d4 | PF1_1 | | WS.106.2b \n! PS.1st.d5 | PF1_1 | | WS.106.3a \n! PS.1st.d6 | PF1_1 | | WS.106.3b \n! PS.1st.ta1 | PF1_2 | | \n! PS.1st.ta2 | PF1_2 | | \n! PS.1st.ta3 | PF1_2 | | \n! PS.1st.ta4 | PF1_2 | | \n! PS.1st.ta5 | PF1_2 | | \n! PS.1st.ta6 | PF1_2 | | \n! PS.1st.tb1 | PF1_2 | | \n! PS.1st.tb2 | PF1_2 | | \n! PS.1st.tb3 | PF1_2 | | \n! PS.1st.tb4 | PF1_2 | | \n! PS.1st.tb5 | PF1_2 | | \n! PS.1st.tb6 | PF1_2 | | \n! PS.base.a1 | PF0_1 | | WS.001.1a \n! PS.base.a2 | PF0_1 | | WS.001.1b \n! PS.base.a3 | PF0_1 | | WS.001.2a \n! PS.base.a4 | PF0_1 | | WS.001.2b \n! PS.base.a5 | PF0_1 | | WS.001.3a \n! PS.base.a6 | PF0_1 | | WS.001.3b \n! PS.base.b1 | PF0_1 | | WS.002.1a \n! PS.base.b2 | PF0_1 | | WS.002.1b \n! PS.base.b3 | PF0_1 | | WS.002.2a \n! PS.base.b4 | PF0_1 | | WS.002.2b \n! PS.base.b5 | PF0_1 | | WS.002.3a \n! PS.base.b6 | PF0_1 | | WS.002.3b \n! PS.base.c1 | PF0_1 | | WS.003.1a \n! PS.base.c2 | PF0_1 | | WS.003.1b \n! PS.base.c3 | PF0_1 | | WS.003.2a \n! PS.base.c4 | PF0_1 | | WS.003.2b \n! PS.base.c5 | PF0_1 | | WS.003.3a \n! PS.base.c6 | PF0_1 | | WS.003.3b \n! PS.base.ta1 | PF0_2 | | \n! PS.base.ta2 | PF0_2 | | \n! PS.base.ta3 | PF0_2 | | \n! PS.base.ta4 | PF0_2 | | \n! PS.base.ta5 | PF0_2 | | \n! 
PS.base.ta6 | PF0_2 | | \n! PS.base.tb1 | PF0_2 | | \n! PS.base.tb2 | PF0_2 | | \n! PS.base.tb3 | PF0_2 | | \n! PS.base.tb4 | PF0_2 | | \n! PS.base.tb5 | PF0_2 | | \n! PS.base.tb6 | PF0_2 | | \n! (66 rows)\n \n select * from WSlot order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------------------+----------+----------------------+----------------------\n! WS.001.1a | 001 | | PS.base.a1 \n! WS.001.1b | 001 | | PS.base.a2 \n! WS.001.2a | 001 | | PS.base.a3 \n! WS.001.2b | 001 | | PS.base.a4 \n! WS.001.3a | 001 | | PS.base.a5 \n! WS.001.3b | 001 | | PS.base.a6 \n! WS.002.1a | 002 | | PS.base.b1 \n! WS.002.1b | 002 | | PS.base.b2 \n! WS.002.2a | 002 | | PS.base.b3 \n! WS.002.2b | 002 | | PS.base.b4 \n! WS.002.3a | 002 | | PS.base.b5 \n! WS.002.3b | 002 | | PS.base.b6 \n! WS.003.1a | 003 | | PS.base.c1 \n! WS.003.1b | 003 | | PS.base.c2 \n! WS.003.2a | 003 | | PS.base.c3 \n! WS.003.2b | 003 | | PS.base.c4 \n! WS.003.3a | 003 | | PS.base.c5 \n! WS.003.3b | 003 | | PS.base.c6 \n! WS.101.1a | 101 | | PS.1st.a1 \n! WS.101.1b | 101 | | PS.1st.a2 \n! WS.101.2a | 101 | | PS.1st.a3 \n! WS.101.2b | 101 | | PS.1st.a4 \n! WS.101.3a | 101 | | PS.1st.a5 \n! WS.101.3b | 101 | | PS.1st.a6 \n! WS.102.1a | 102 | | PS.1st.b1 \n! WS.102.1b | 102 | | PS.1st.b2 \n! WS.102.2a | 102 | | PS.1st.b3 \n! WS.102.2b | 102 | | PS.1st.b4 \n! WS.102.3a | 102 | | PS.1st.b5 \n! WS.102.3b | 102 | | PS.1st.b6 \n! WS.105.1a | 105 | | PS.1st.c1 \n! WS.105.1b | 105 | | PS.1st.c2 \n! WS.105.2a | 105 | | PS.1st.c3 \n! WS.105.2b | 105 | | PS.1st.c4 \n! WS.105.3a | 105 | | PS.1st.c5 \n! WS.105.3b | 105 | | PS.1st.c6 \n! WS.106.1a | 106 | | PS.1st.d1 \n! WS.106.1b | 106 | | PS.1st.d2 \n! WS.106.2a | 106 | | PS.1st.d3 \n! WS.106.2b | 106 | | PS.1st.d4 \n! WS.106.3a | 106 | | PS.1st.d5 \n! WS.106.3b | 106 | | PS.1st.d6 \n! 
(42 rows)\n \n --\n -- Install the central phone system and create the phone numbers.\n--- 1262,1336 ----\n update WSlot set backlink = 'PS.base.a6' where slotname = 'WS.001.3a';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n update WSlot set backlink = 'PS.base.a6' where slotname = 'WS.001.3b';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n update WSlot set backlink = 'PS.base.a5' where slotname = 'WS.001.3a';\n select * from WSlot where roomno = '001' order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n select * from PSlot where slotname ~ 'PS.base.a' order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! 
(0 rows)\n \n insert into PField values ('PF1_2', 'Phonelines 1st floor');\n insert into PSlot values ('PS.1st.ta1', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.ta2', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.ta3', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.ta4', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.ta5', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.ta6', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb1', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb2', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb3', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb4', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb5', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PSlot values ('PS.1st.tb6', 'PF1_2', '', '');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- Fix the wrong name for patchfield PF0_2\n --\n update PField set name = 'PF0_2' where name = 'PF0_X';\n+ ERROR: fmgr_info: language 19040 has old-style handler\n select * from PSlot order by slotname;\n slotname | pfname | slotlink | backlink \n! ----------+--------+----------+----------\n! 
(0 rows)\n \n select * from WSlot order by slotname;\n slotname | roomno | slotlink | backlink \n! ----------+--------+----------+----------\n! (0 rows)\n \n --\n -- Install the central phone system and create the phone numbers.\n***************\n*** 1410,1445 ****\n--- 1339,1398 ----\n -- backlink field.\n --\n insert into PLine values ('PL.001', '-0', 'Central call', 'PS.base.ta1');\n+ ERROR: fmgr_info: language 19040 has old-style handler\n insert into PLine values ('PL.002', '-101', '', 'PS.base.ta2');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.003', '-102', '', 'PS.base.ta3');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.004', '-103', '', 'PS.base.ta5');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.005', '-104', '', 'PS.base.ta6');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.006', '-106', '', 'PS.base.tb2');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.007', '-108', '', 'PS.base.tb3');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.008', '-109', '', 'PS.base.tb4');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.009', '-121', '', 'PS.base.tb5');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.010', '-122', '', 'PS.base.tb6');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.015', '-134', '', 'PS.1st.ta1');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.016', '-137', '', 'PS.1st.ta3');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.017', '-139', 
'', 'PS.1st.ta4');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.018', '-362', '', 'PS.1st.tb1');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.019', '-363', '', 'PS.1st.tb2');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.020', '-364', '', 'PS.1st.tb3');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.021', '-365', '', 'PS.1st.tb5');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.022', '-367', '', 'PS.1st.tb6');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.028', '-501', 'Fax entrance', 'PS.base.ta2');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into PLine values ('PL.029', '-502', 'Fax 1st floor', 'PS.1st.ta1');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n --\n -- Buy some phones, plug them into the wall and patch the\n -- phone lines to the corresponding patchfield slots.\n --\n insert into PHone values ('PH.hc001', 'Hicom standard', 'WS.001.1a');\n+ ERROR: fmgr_info: language 19040 has old-style handler\n update PSlot set slotlink = 'PS.base.ta1' where slotname = 'PS.base.a1';\n insert into PHone values ('PH.hc002', 'Hicom standard', 'WS.002.1a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n update PSlot set slotlink = 'PS.base.ta5' where slotname = 'PS.base.b1';\n insert into PHone values ('PH.hc003', 'Hicom standard', 'WS.002.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n update PSlot set slotlink = 'PS.base.tb2' where slotname = 'PS.base.b3';\n insert into PHone values ('PH.fax001', 'Canon fax', 'WS.001.2a');\n+ ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n update PSlot set 
slotlink = 'PS.base.ta2' where slotname = 'PS.base.a3';\n --\n -- Install a hub at one of the patchfields, plug a computers\n***************\n*** 1446,1453 ****\n--- 1399,1408 ----\n -- ethernet interface into the wall and patch it to the hub.\n --\n insert into Hub values ('base.hub1', 'Patchfield PF0_1 hub', 16);\n+ ERROR: fmgr_info: language 19040 has old-style handler\n insert into System values ('orion', 'PC');\n insert into IFace values ('IF', 'orion', 'eth0', 'WS.002.1b');\n+ ERROR: fmgr_info: language 19040 has old-style handler\n update PSlot set slotlink = 'HS.base.hub1.1' where slotname = 'PS.base.b2';\n --\n -- Now we take a look at the patchfield\n***************\n*** 1454,1496 ****\n --\n select * from PField_v1 where pfname = 'PF0_1' order by slotname;\n pfname | slotname | backside | patch \n! --------+----------------------+----------------------------------------------------------+-----------------------------------------------\n! PF0_1 | PS.base.a1 | WS.001.1a in room 001 -> Phone PH.hc001 (Hicom standard) | PS.base.ta1 -> Phone line -0 (Central call)\n! PF0_1 | PS.base.a2 | WS.001.1b in room 001 -> - | -\n! PF0_1 | PS.base.a3 | WS.001.2a in room 001 -> Phone PH.fax001 (Canon fax) | PS.base.ta2 -> Phone line -501 (Fax entrance)\n! PF0_1 | PS.base.a4 | WS.001.2b in room 001 -> - | -\n! PF0_1 | PS.base.a5 | WS.001.3a in room 001 -> - | -\n! PF0_1 | PS.base.a6 | WS.001.3b in room 001 -> - | -\n! PF0_1 | PS.base.b1 | WS.002.1a in room 002 -> Phone PH.hc002 (Hicom standard) | PS.base.ta5 -> Phone line -103\n! PF0_1 | PS.base.b2 | WS.002.1b in room 002 -> orion IF eth0 (PC) | Patchfield PF0_1 hub slot 1\n! PF0_1 | PS.base.b3 | WS.002.2a in room 002 -> Phone PH.hc003 (Hicom standard) | PS.base.tb2 -> Phone line -106\n! PF0_1 | PS.base.b4 | WS.002.2b in room 002 -> - | -\n! PF0_1 | PS.base.b5 | WS.002.3a in room 002 -> - | -\n! PF0_1 | PS.base.b6 | WS.002.3b in room 002 -> - | -\n! PF0_1 | PS.base.c1 | WS.003.1a in room 003 -> - | -\n! 
PF0_1 | PS.base.c2 | WS.003.1b in room 003 -> - | -\n! PF0_1 | PS.base.c3 | WS.003.2a in room 003 -> - | -\n! PF0_1 | PS.base.c4 | WS.003.2b in room 003 -> - | -\n! PF0_1 | PS.base.c5 | WS.003.3a in room 003 -> - | -\n! PF0_1 | PS.base.c6 | WS.003.3b in room 003 -> - | -\n! (18 rows)\n \n select * from PField_v1 where pfname = 'PF0_2' order by slotname;\n pfname | slotname | backside | patch \n! --------+----------------------+--------------------------------+------------------------------------------------------------------------\n! PF0_2 | PS.base.ta1 | Phone line -0 (Central call) | PS.base.a1 -> WS.001.1a in room 001 -> Phone PH.hc001 (Hicom standard)\n! PF0_2 | PS.base.ta2 | Phone line -501 (Fax entrance) | PS.base.a3 -> WS.001.2a in room 001 -> Phone PH.fax001 (Canon fax)\n! PF0_2 | PS.base.ta3 | Phone line -102 | -\n! PF0_2 | PS.base.ta4 | - | -\n! PF0_2 | PS.base.ta5 | Phone line -103 | PS.base.b1 -> WS.002.1a in room 002 -> Phone PH.hc002 (Hicom standard)\n! PF0_2 | PS.base.ta6 | Phone line -104 | -\n! PF0_2 | PS.base.tb1 | - | -\n! PF0_2 | PS.base.tb2 | Phone line -106 | PS.base.b3 -> WS.002.2a in room 002 -> Phone PH.hc003 (Hicom standard)\n! PF0_2 | PS.base.tb3 | Phone line -108 | -\n! PF0_2 | PS.base.tb4 | Phone line -109 | -\n! PF0_2 | PS.base.tb5 | Phone line -121 | -\n! PF0_2 | PS.base.tb6 | Phone line -122 | -\n! (12 rows)\n \n --\n -- Finally we want errors\n--- 1409,1421 ----\n --\n select * from PField_v1 where pfname = 'PF0_1' order by slotname;\n pfname | slotname | backside | patch \n! --------+----------+----------+-------\n! (0 rows)\n \n select * from PField_v1 where pfname = 'PF0_2' order by slotname;\n pfname | slotname | backside | patch \n! --------+----------+----------+-------\n! 
(0 rows)\n \n --\n -- Finally we want errors\n***************\n*** 1498,1517 ****\n insert into PField values ('PF1_1', 'should fail due to unique index');\n ERROR: Cannot insert a duplicate key into unique index pfield_name\n update PSlot set backlink = 'WS.not.there' where slotname = 'PS.base.a1';\n- ERROR: WS.not.there does not exist\n update PSlot set backlink = 'XX.illegal' where slotname = 'PS.base.a1';\n- ERROR: illegal backlink beginning with XX\n update PSlot set slotlink = 'PS.not.there' where slotname = 'PS.base.a1';\n- ERROR: PS.not.there does not exist\n update PSlot set slotlink = 'XX.illegal' where slotname = 'PS.base.a1';\n- ERROR: illegal slotlink beginning with XX\n insert into HSlot values ('HS', 'base.hub1', 1, '');\n! ERROR: Cannot insert a duplicate key into unique index hslot_name\n insert into HSlot values ('HS', 'base.hub1', 20, '');\n! ERROR: no manual manipulation of HSlot\n delete from HSlot;\n- ERROR: no manual manipulation of HSlot\n insert into IFace values ('IF', 'notthere', 'eth0', '');\n! ERROR: system \"notthere\" does not exist\n insert into IFace values ('IF', 'orion', 'ethernet_interface_name_too_long', '');\n! ERROR: IFace slotname \"IF.orion.ethernet_interface_name_too_long\" too long (20 char max)\n--- 1423,1437 ----\n insert into PField values ('PF1_1', 'should fail due to unique index');\n ERROR: Cannot insert a duplicate key into unique index pfield_name\n update PSlot set backlink = 'WS.not.there' where slotname = 'PS.base.a1';\n update PSlot set backlink = 'XX.illegal' where slotname = 'PS.base.a1';\n update PSlot set slotlink = 'PS.not.there' where slotname = 'PS.base.a1';\n update PSlot set slotlink = 'XX.illegal' where slotname = 'PS.base.a1';\n insert into HSlot values ('HS', 'base.hub1', 1, '');\n! ERROR: fmgr_info: language 19040 has old-style handler\n insert into HSlot values ('HS', 'base.hub1', 20, '');\n! 
ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n delete from HSlot;\n insert into IFace values ('IF', 'notthere', 'eth0', '');\n! ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n insert into IFace values ('IF', 'orion', 'ethernet_interface_name_too_long', '');\n! ERROR: Internal error: fmgr_oldstyle received NULL function pointer\n\n----------------------\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 22 Oct 2000 19:44:37 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "regress issues: UW7.1.1/PG7.1dev/GCC"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> comments .. failed\n\n> geometry .. failed\n\n> *** expected/comments.out\tFri Jul 14 10:43:55 2000\n> --- results/comments.out\tSun Oct 22 19:38:45 2000\n> ***************\n> *** 42,47 ****\n> --- 42,48 ----\n> */\n> /* This block comment surrounds a query which itself has a block comment...\n> SELECT /* embedded single line */ 'embedded' AS x2;\n> + ERROR: Unterminated /* comment\n> */\n> SELECT -- continued after the following block comments...\n> /* Deeply nested comment.\n\nI'll bet lunch that the test driver is using an old psql that didn't know\nabout nested comments. Similar for the plpgsql failure. You should use\ngmake installcheck to run the tests with the improved driver.\n\nThe geometry failure is to be expected, all the others have passed for me\non your box.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 24 Oct 2000 18:06:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: regress issues: UW7.1.1/PG7.1dev/GCC"
}
] |
[
{
"msg_contents": "I'm having the error 'relation <number> modified while in use' fairly\noften. It is the same relation that's always giving a problem. Usually\nafter all currently-running backends die away with that error, error\ndisappears. If I shutdown, ipcclean, start up postgres, it also\ndisappears.\n\n\nWhat causes this? I'm having a feeling that it has to do with referential\nintegrity (the table in question is referenced by almost every other\ntable), and with [possibly] a leak of reference counts? \n\nThis is all with pg7.0.2 on i386.\n\n-alex\n\n",
"msg_date": "Sun, 22 Oct 2000 21:32:55 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "relation ### modified while in use"
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I'm having the error 'relation <number> modified while in use' fairly\n> often. It is the same relation that's always giving a problem.\n\nHmm, could we see the full schema dump for that relation?\n(pg_dump -s -t tablename dbname will do)\n\nIf you are not actively modifying the schema, then in theory you should\nnot see this message, but...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 23:05:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "I think this happens after I create/modify tables which reference this\ntable. This is spontaneous, and doesn't _always_ happen...\n\nAnything I could do next time it craps up to help track the problem down?\n\n-alex\n\n----\nCREATE TABLE \"customers\" (\n \"cust_id\" int4 DEFAULT nextval('customers_cust_id_seq'::text) NOT \nNULL,\n \"phone_npa\" character(3) NOT NULL,\n \"phone_nxx\" character(3) NOT NULL,\n \"phone_rest\" character(4) NOT NULL,\n \"e_mail\" character varying(30),\n \"daytime_npa\" character(3),\n \"daytime_nxx\" character(3),\n \"daytime_rest\" character(4),\n \"is_business\" bool DEFAULT 'f' NOT NULL,\n PRIMARY KEY (\"cust_id\") );\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER DELETE ON \"customers\" NOT\nDEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE\n\"RI_FKey_noaction_del\" ('<unnamed>', 'cc_charges', 'customers',\n'UNSPECIFIED', 'cust_id', 'cust_id');\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"customers\" NOT\nDEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE\n\"RI_FKey_noaction_upd\" ('<unnamed>', 'cc_charges', 'customers',\n'UNSPECIFIED', 'cust_id', 'cust_id');\n\n\nOn Sun, 22 Oct 2000, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > I'm having the error 'relation <number> modified while in use' fairly\n> > often. It is the same relation that's always giving a problem.\n> \n> Hmm, could we see the full schema dump for that relation?\n> (pg_dump -s -t tablename dbname will do)\n> \n> If you are not actively modifying the schema, then in theory you should\n> not see this message, but...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n",
"msg_date": "Sun, 22 Oct 2000 23:28:40 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I think this happens after I create/modify tables which reference this\n> table. This is spontaneous, and doesn't _always_ happen...\n\nUm. I was hoping it was something more easily fixable :-(. What's\ncausing the relcache to decide that the rel has been modified is the\naddition or removal of foreign-key triggers on the rel. Which seems\nlegitimate. (It's barely possible that we could get away with allowing\ntriggers to be added or deleted mid-transaction, but that doesn't feel\nright to me.)\n\nThere are two distinct known bugs that allow the error to be reported.\nThese have been discussed before, but to recap:\n\n1. relcache will complain if the notification of cache invalidation\narrives after transaction start and before first use of the referenced\nrel (when there was already a relcache entry left over from a prior\ntransaction). In this situation we should allow the change to occur\nwithout complaint, ISTM. But the relcache doesn't currently have any\nconcept of first reference versus later references.\n\n2. 
Even with #1 fixed, you could still get this error, because we are\nway too willing to release locks on rels that have been referenced.\nTherefore you can get this sequence:\n\nSession 1\t\t\tSession 2\n\nbegin;\n\nselect * from foo;\n -- LockRelation(AccessShareLock);\n -- UnLockRelation(AccessShareLock);\n\n\t\t\t\tALTER foo ADD CONSTRAINT;\n\t\t\t\t -- LockRelation(AccessExclusiveLock);\n\t\t\t\t -- lock released at commit\n\nselect * from foo;\n -- LockRelation(AccessShareLock);\n -- table schema update is detected, error must be reported\n\n\nI think that we should hold at least AccessShareLock on any relation\nthat a transaction has touched, all the way to end of transaction.\nThis creates the potential for deadlocks that did not use to happen;\nfor example, if we have two transactions that concurrently both do\n\n\tbegin;\n\tselect * from foo; -- gets AccessShareLock\n\tLOCK TABLE foo;\t -- gets AccessExclusiveLock\n\t...\n\tend;\n\nthis will work currently because the SELECT releases AccessShareLock\nwhen done, but it will deadlock if SELECT does not release that lock.\n\nThat's annoying but I see no way around it, if we are to allow\nconcurrent transactions to do schema modifications of tables that other\ntransactions are using.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 01:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "On Mon, 23 Oct 2000, Tom Lane wrote:\n\n> when done, but it will deadlock if SELECT does not release that lock.\n> \n> That's annoying but I see no way around it, if we are to allow\n> concurrent transactions to do schema modifications of tables that other\n> transactions are using.\n\nI might be in above my head, but maybe this is time for yet another type\nof lock? \"Do-not-modify-this-table-under-me\" lock, which shall persist\nuntil transaction commits, and will conflict only with alter table\nlock/AccessExclusiveLock?\n\nI realise we have already many lock types, but this seems to be proper\nsolution to me...\n\nIn related vein: Is there a way to see who (at least process id) is\nholding locks on tables?\n\n",
"msg_date": "Mon, 23 Oct 2000 01:09:50 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I might be in above my head, but maybe this is time for yet another type\n> of lock?\n\nWouldn't help --- it's still a deadlock.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 01:11:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "On Mon, 23 Oct 2000, Alex Pilosov wrote:\n\n> On Mon, 23 Oct 2000, Tom Lane wrote:\n> \n> > when done, but it will deadlock if SELECT does not release that lock.\n> > \n> > That's annoying but I see no way around it, if we are to allow\n> > concurrent transactions to do schema modifications of tables that other\n> > transactions are using.\n> \n> I might be in above my head, but maybe this is time for yet another type\n> of lock? \"Do-not-modify-this-table-under-me\" lock, which shall persist\n> until transaction commits, and will conflict only with alter table\n> lock/AccessExclusiveLock?\n\nI just realised that I _am_ in above my head, and the above makes no\nsense, and is identical to holding AccessShareLock. \n\nSorry ;)\n\n-alex\n\n\n",
"msg_date": "Mon, 23 Oct 2000 01:13:12 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "On Mon, 23 Oct 2000, Tom Lane wrote:\n\n> \tbegin;\n> \tselect * from foo; -- gets AccessShareLock\n> \tLOCK TABLE foo;\t -- gets AccessExclusiveLock\n> \t...\n> \tend;\n> \n> this will work currently because the SELECT releases AccessShareLock\n> when done, but it will deadlock if SELECT does not release that lock.\nProbably a silly question, but since this is the same transaction,\ncouldn't the lock be 'upgraded' without a problem? \n\nOr postgres doesn't currently have idea of lock upgrades...?\n\n-alex\n\n\n\n",
"msg_date": "Mon, 23 Oct 2000 01:21:08 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> On Mon, 23 Oct 2000, Tom Lane wrote:\n>> begin;\n>> select * from foo; -- gets AccessShareLock\n>> LOCK TABLE foo;\t -- gets AccessExclusiveLock\n>> ...\n>> end;\n>> \n>> this will work currently because the SELECT releases AccessShareLock\n>> when done, but it will deadlock if SELECT does not release that lock.\n\n> Probably a silly question, but since this is the same transaction,\n> couldn't the lock be 'upgraded' without a problem? \n\nNo, the problem happens when two transactions do the above at about\nthe same time. After the SELECTs, both transactions are holding\nAccessShareLock, and both are waiting for the other to let go so's they\ncan get AccessExclusiveLock. AFAIK any concept of \"lock upgrade\" falls\nafoul of this basic deadlock risk.\n\nWe do have a need to be careful that the system doesn't try to do\nlock upgrades internally. For example, in\n\tLOCK TABLE foo;\nthe parsing step had better not grab AccessShareLock on foo in\nadvance of the main execution step asking for AccessExclusiveLock.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 01:27:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 01:01 23/10/00 -0400, Tom Lane wrote:\n>> (It's barely possible that we could get away with allowing\n>> triggers to be added or deleted mid-transaction, but that doesn't feel\n>> right to me.)\n\n> A little OT, but the above is a useful feature for managing data; it's not\n> common, but the following sequence is essential to managing a database safely:\n\n> - Start TX\n> - Drop a few triggers, constraints etc\n> - Add/change data to fix erroneous/no longer accurate business rules\n> (audited, of course)\n> - Reapply the triggers, constraints\n> - Make sure it looks right\n> - Commit/Rollback based on the above check\n\nThere is nothing wrong with the above as long as you hold exclusive\nlock on the tables being modified for the duration of the transaction.\n\nThe scenario I'm worried about is on the other side, ie, a transaction\nthat has already done some things to a table is notified of a change to\nthat table's triggers/constraints/etc being committed by another\ntransaction. Can it deal with that consistently? I don't think it can\nin general. What I'm proposing is that once an xact has touched a\ntable, other xacts should not be able to apply schema updates to that\ntable until the first xact commits.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 01:37:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > At 01:01 23/10/00 -0400, Tom Lane wrote:\n> >> (It's barely possible that we could get away with allowing\n> >> triggers to be added or deleted mid-transaction, but that doesn't feel\n> >> right to me.)\n>\n> > A little OT, but the above is a useful feature for managing data; it's not\n> > common, but the following sequence is essential to managing a database safely:\n>\n> > - Start TX\n> > - Drop a few triggers, constraints etc\n> > - Add/change data to fix erroneous/no longer accurate business rules\n> > (audited, of course)\n> > - Reapply the triggers, constraints\n> > - Make sure it looks right\n> > - Commit/Rollback based on the above check\n>\n> There is nothing wrong with the above as long as you hold exclusive\n> lock on the tables being modified for the duration of the transaction.\n>\n> The scenario I'm worried about is on the other side, ie, a transaction\n> that has already done some things to a table is notified of a change to\n> that table's triggers/constraints/etc being committed by another\n> transaction. Can it deal with that consistently? I don't think it can\n> in general. What I'm proposing is that once an xact has touched a\n> table, other xacts should not be able to apply schema updates to that\n> table until the first xact commits.\n>\n\nI agree with you.\nI've wondered why AccessShareLock is a short term lock.\n\nIf we have a mechanism to acquire a share lock on a tuple,we\ncould use it for managing system info generally. However the\nonly allowed lock on a tuple is exclusive. 
Access(Share/Exclusive)\nLock on tables would give us a restricted solution about pg_class\ntuples.\n\nThere's a possibility of deadlock in any case but there are few\ncases when AccessExclusiveLock is really needed and we could\nacquire an AccessExclusiveLock manually from the first if\nnecessary.\n\nI'm not sure about the use of AccessShareLock in parse-analyze-\noptimize phase however.\n\nRegards.\nHiroshi Inoue\n\n\n",
"msg_date": "Mon, 23 Oct 2000 15:29:08 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "At 01:01 23/10/00 -0400, Tom Lane wrote:\n>(It's barely possible that we could get away with allowing\n>triggers to be added or deleted mid-transaction, but that doesn't feel\n>right to me.)\n>\n\nA little OT, but the above is a useful feature for managing data; it's not\ncommon, but the following sequence is essential to managing a database safely:\n\n- Start TX\n- Drop a few triggers, constraints etc\n- Add/change data to fix erroneous/no longer accurate business rules\n(audited, of course)\n- Reapply the triggers, constraints\n- Make sure it looks right\n- Commit/Rollback based on the above check\n\nIt is very undesirable to drop the triggers/constraints in a separate\ntransaction since a communications failure could leave them unapplied. At\nleast in one TX, the recovery process should back out the TX.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 23 Oct 2000 16:31:07 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "At 01:37 23/10/00 -0400, Tom Lane wrote:\n>\n>What I'm proposing is that once an xact has touched a\n>table, other xacts should not be able to apply schema updates to that\n>table until the first xact commits.\n\nTotally agree. You may want to go further and say that metadata changes can\nnot be made while that *connection* exists: if the client has prepared a\nquery against a table will it cause a problem when the query is run? \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 23 Oct 2000 17:14:31 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "> > in general. What I'm proposing is that once an xact has touched a\n> > table, other xacts should not be able to apply schema updates to that\n> > table until the first xact commits.\n> >\n> \n> I agree with you.\n\nI don't know. We discussed this issue just after 6.5 and decided to\nallow concurrent schema modifications.\nOracle has disctionary locks but run each DDL statement in separate\nxaction, so - no deadlock condition here. OTOH, I wouldn't worry\nabout deadlock - one just had to follow common anti-deadlock rules.\n\n> I've wondered why AccessShareLock is a short term lock.\n\nMUST BE. AccessShare-/Exclusive-Locks are *data* locks.\nIf one want to protect schema then new schema share/excl locks\nmust be inroduced. There is no conflict between data and\nschema locks - they are orthogonal.\n\nWe use AccessShare-/Exclusive-Locks for schema because of...\nwe allow concurrent schema modifications and no true schema\nlocks were required.\n\n> If we have a mechanism to acquire a share lock on a tuple,we\n> could use it for managing system info generally. However the\n> only allowed lock on a tuple is exclusive. Access(Share/Exclusive)\n\nActually, just look at lock.h:LTAG structure - lock manager supports\nlocking of \"some objects\" inside tables:\n\ntypedef struct LTAG\n{\n Oid relId;\n Oid dbId;\n union\n {\n BlockNumber blkno;\n Transaction xid;\n } objId;\n ...\n - we could add oid to union above and lock tables by acquiring lock\non pg_class with objId.oid = table' oid. Same way we could lock indices\nand whatever we want... 
if we want -:)\n\n> Lock on tables would give us a restricted solution about pg_class\n> tuples.\n> \n> Thers'a possibility of deadlock in any case but there are few\n> cases when AccessExclusiveLock is really needed and we could\n> acquire an AccessExclusiveLock manually from the first if\n> necessary.\n> \n> I'm not sure about the use of AccessShareLock in parse-analyze-\n> optimize phase however.\n\nThere is a notion about breakable (parser) locks in Oracle documentation -:)\n\nVadim\n\n\n",
"msg_date": "Mon, 23 Oct 2000 04:06:48 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I'm not sure about the use of AccessShareLock in parse-analyze-\n> optimize phase however.\n\nThat's something we'll have to clean up while fixing this. Currently\nthe system may acquire and release AccessShareLock multiple times while\nparsing/rewriting/planning. We need to make sure that an appropriate\nlock is grabbed at *first* use and then held.\n\nShould save a few cycles as well as being more correct ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:13:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Don't we have this ability? What about taking a RowShare lock on the\n> pg_class tuple whenever you read from the table; then requiring schema\n> updates take a RowExclusive lock on the pg_class tuple?\n\nHow is that different from taking locks on the table itself?\n\nIn any case, we don't have the ability to hold multiple classes of locks\non individual tuples, AFAIK. UPDATE and SELECT FOR UPDATE use a\ndifferent mechanism that involves setting fields in the header of the\naffected tuple. There's no room there for more than one kind of lock;\nwhat's worse, checking and waiting for that lock is far slower than\nnormal lock-manager operations. (But on the plus side, you can be\nholding locks on any number of tuples without risking overflowing the\nlock manager table, and releasing the locks at commit takes no cycles.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:45:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Only slightly; one interpretation of a table lock is that it locks all of\n> the data in the table; and a lock on the pg_class row locks the metadata. I\n> must admit that I am having a little difficulty thinking of a case where\n> the distinction would be useful...\n\nI can't see any value in locking the data without locking the metadata.\nGiven that, the other way round is sort of moot...\n\n> So where do\n> SELECT FOR UPDATE IN ROW SHARE MODE \n\nWe don't support that (never heard of it before, in fact)\n\n> and \n> LOCK TABLE IN ROW EXCLUSIVE MODE statements. \n> fit in? \n\nThat one is just a table lock (RowExclusiveLock). All the variants\nof LOCK TABLE are table-level locks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 11:37:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:\n>\n>If we have a mechanism to acquire a share lock on a tuple,we\n>could use it for managing system info generally. However the\n>only allowed lock on a tuple is exclusive. Access(Share/Exclusive)\n>Lock on tables would give us a restricted solution about pg_class\n>tuples.\n>\n\nDon't we have this ability? What about taking a RowShare lock on the\npg_class tuple whenever you read from the table; then requiring schema\nupdates take a RowExclusive lock on the pg_class tuple?\n\nAs you say, it won't prevent deadlocks, but it seems like a reasonable\nthing to do.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 24 Oct 2000 01:39:13 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "At 10:45 23/10/00 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> Don't we have this ability? What about taking a RowShare lock on the\n>> pg_class tuple whenever you read from the table; then requiring schema\n>> updates take a RowExclusive lock on the pg_class tuple?\n>\n>How is that different from taking locks on the table itself?\n\nOnly slightly; one interpretation of a table lock is that it locks all of\nthe data in the table; and a lock on the pg_class row locks the metadata. I\nmust admit that I am having a little difficulty thinking of a case where\nthe distinction would be useful...\n\n\n>In any case, we don't have the ability to hold multiple classes of locks\n>on individual tuples, AFAIK. UPDATE and SELECT FOR UPDATE use a\n>different mechanism that involves setting fields in the header of the\n>affected tuple. There's no room there for more than one kind of lock;\n>what's worse, checking and waiting for that lock is far slower than\n>normal lock-manager operations. \n\nSo where do\n\n SELECT FOR UPDATE IN ROW SHARE MODE \nand \n LOCK TABLE IN ROW EXCLUSIVE MODE statements. \n\nfit in? \n\nThey *seem* to provide differing levels of row locking.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 24 Oct 2000 02:04:21 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "Are mailing list archives of various postgresql mailing list available\nanywhere?\n\nI know they were some time ago but I couldn't find any link on\nwww.postgresql.org now. I subscribed to a list mainly because I want to\nmonitor the progress but the amount of messages kills my inbox. It would\nbe really convenient for me if I could just browse the archives on web\nonce in a while.\n\nKrzysztof Kowalczyk\n",
"msg_date": "Mon, 23 Oct 2000 20:44:17 +0000",
"msg_from": "Krzysztof Kowalczyk <krzysztofk@pobox.com>",
"msg_from_op": false,
"msg_subject": "Mailing list archives available?"
},
{
"msg_contents": "\n\nPhilip Warner wrote:\n\n> At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:\n> >\n> >If we have a mechanism to acquire a share lock on a tuple,we\n> >could use it for managing system info generally. However the\n> >only allowed lock on a tuple is exclusive. Access(Share/Exclusive)\n> >Lock on tables would give us a restricted solution about pg_class\n> >tuples.\n> >\n>\n> Don't we have this ability? What about taking a RowShare lock on the\n> pg_class tuple whenever you read from the table; then requiring schema\n> updates take a RowExclusive lock on the pg_class tuple?\n>\n\nBoth RowShare and RowExclusive lock are table level\nlocking. The implementation of tuple level locking is\nquite different from that of table level locking.\nThe information of table level locking is held in shared\nmemory. OTOH the information of tuple level locking\nis held in the tuple itself i.e. a transaction(t_xmax) is\nupdating/deleting/selecting for update the tuple....\nIf other backends are about to update/delete/select\nfor update a tuple,they check the information of the\ntuple and if the tuple is being updated/... they wait until\nthe end of the transaction(t_xmax).\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 09:03:38 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "\n\nVadim Mikheev wrote:\n\n> > > in general. What I'm proposing is that once an xact has touched a\n> > > table, other xacts should not be able to apply schema updates to that\n> > > table until the first xact commits.\n> > >\n> >\n> > I agree with you.\n>\n> I don't know. We discussed this issue just after 6.5 and decided to\n> allow concurrent schema modifications.\n> Oracle has disctionary locks but run each DDL statement in separate\n> xaction, so - no deadlock condition here. OTOH, I wouldn't worry\n> about deadlock - one just had to follow common anti-deadlock rules.\n>\n> > I've wondered why AccessShareLock is a short term lock.\n>\n> MUST BE. AccessShare-/Exclusive-Locks are *data* locks.\n> If one want to protect schema then new schema share/excl locks\n> must be inroduced. There is no conflict between data and\n> schema locks - they are orthogonal.\n>\n\nOracle doesn't have Access...Lock locks.\nIn my understanding,locking levels you provided contains\nan implicit share/exclusive lock on the corrsponding\npg_class tuple i.e. AccessExclusive Lock acquires an\nexclusive lock on the corresping pg_class tuple and\nother locks acquire a share lock, Is it right ?\n\n>\n> We use AccessShare-/Exclusive-Locks for schema because of...\n> we allow concurrent schema modifications and no true schema\n> locks were required.\n>\n> > If we have a mechanism to acquire a share lock on a tuple,we\n> > could use it for managing system info generally. However the\n> > only allowed lock on a tuple is exclusive. Access(Share/Exclusive)\n>\n> Actually, just look at lock.h:LTAG structure - lock manager supports\n> locking of \"some objects\" inside tables:\n>\n> typedef struct LTAG\n> {\n> Oid relId;\n> Oid dbId;\n> union\n> {\n> BlockNumber blkno;\n> Transaction xid;\n> } objId;\n> ...\n> - we could add oid to union above and lock tables by acquiring lock\n> on pg_class with objId.oid = table' oid. Same way we could lock indices\n> and whatever we want... 
if we want -:)\n>\n\nAs you know well,this implemenation has a flaw that we have\nto be anxious about the shortage of shared memory.\n\n\n> > Lock on tables would give us a restricted solution about pg_class\n> > tuples.\n> >\n> > Thers'a possibility of deadlock in any case but there are few\n> > cases when AccessExclusiveLock is really needed and we could\n> > acquire an AccessExclusiveLock manually from the first if\n> > necessary.\n> >\n> > I'm not sure about the use of AccessShareLock in parse-analyze-\n> > optimize phase however.\n>\n> There is notion about breakable (parser) locks in Oracle documentation -:)\n>\n\nI've known it also but don't know how to realize the similar\nconcept in PostgreSQL.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 09:52:19 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "\nhttp://www.postgresql.org/mhonarc has them all listed .. not sure how to\nget there from the Web site ... Vince?\n\nOn Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:\n\n> Are mailing list archives of various postgresql mailing list available\n> anywhere?\n> \n> I know they were some time ago but I couldn't find any link on\n> www.postgresql.org now. I subscribed to a list mainly because I want to\n> monitor the progress but the amount of messages kills my inbox. It would\n> be really convenient for me if I could just browse the archives on web\n> once in a while.\n> \n> Krzysztof Kowalczyk\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 24 Oct 2000 00:33:53 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list archives available?"
},
{
"msg_contents": "On Tue, 24 Oct 2000, The Hermit Hacker wrote:\n\n> \n> http://www.postgresql.org/mhonarc has them all listed .. not sure how to\n> get there from the Web site ... Vince?\n\nThere are links from both the Developer's Corner and User's Lounge ->\nGeneral Info.\n\nVince.\n\n\n> \n> On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:\n> \n> > Are mailing list archives of various postgresql mailing list available\n> > anywhere?\n> > \n> > I know they were some time ago but I couldn't find any link on\n> > www.postgresql.org now. I subscribed to a list mainly because I want to\n> > monitor the progress but the amount of messages kills my inbox. It would\n> > be really convenient for me if I could just browse the archives on web\n> > once in a while.\n> > \n> > Krzysztof Kowalczyk\n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 24 Oct 2000 06:18:53 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list archives available?"
},
{
"msg_contents": "On Tue, 24 Oct 2000, Vince Vielhaber wrote:\n\n> On Tue, 24 Oct 2000, The Hermit Hacker wrote:\n> \n> > \n> > http://www.postgresql.org/mhonarc has them all listed .. not sure how to\n> > get there from the Web site ... Vince?\n> \n> There are links from both the Developer's Corner and User's Lounge ->\n> General Info.\n\nYa know, I've gone in and looked several times and my eye always gets draw\ndown to the section titled ' Mailing Lists '? :) Can you put lnks from\nthe 'pgsql-{admin,announce,general,etc}' in that section to the archives\nas well, so its a bit easier to find? And maybe 'bold' the words \"mailing\nlists\" in the General Info section, so that it stands out a bit more? :)\n\n\n > > Vince.\n> \n> \n> > \n> > On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:\n> > \n> > > Are mailing list archives of various postgresql mailing list available\n> > > anywhere?\n> > > \n> > > I know they were some time ago but I couldn't find any link on\n> > > www.postgresql.org now. I subscribed to a list mainly because I want to\n> > > monitor the progress but the amount of messages kills my inbox. It would\n> > > be really convenient for me if I could just browse the archives on web\n> > > once in a while.\n> > > \n> > > Krzysztof Kowalczyk\n> > > \n> > > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> \n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 24 Oct 2000 09:27:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list archives available?"
},
{
"msg_contents": "\nI sure hope this is a rerun 'cuze I did it yesterday.\n\nVince.\n\n\nOn Tue, 24 Oct 2000, The Hermit Hacker wrote:\n\n> On Tue, 24 Oct 2000, Vince Vielhaber wrote:\n> \n> > On Tue, 24 Oct 2000, The Hermit Hacker wrote:\n> > \n> > > \n> > > http://www.postgresql.org/mhonarc has them all listed .. not sure how to\n> > > get there from the Web site ... Vince?\n> > \n> > There are links from both the Developer's Corner and User's Lounge ->\n> > General Info.\n> \n> Ya know, I've gone in and looked several times and my eye always gets draw\n> down to the section titled ' Mailing Lists '? :) Can you put lnks from\n> the 'pgsql-{admin,announce,general,etc}' in that section to the archives\n> as well, so its a bit easier to find? And maybe 'bold' the words \"mailing\n> lists\" in the General Info section, so that it stands out a bit more? :)\n> \n> \n> > > Vince.\n> > \n> > \n> > > \n> > > On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:\n> > > \n> > > > Are mailing list archives of various postgresql mailing list available\n> > > > anywhere?\n> > > > \n> > > > I know they were some time ago but I couldn't find any link on\n> > > > www.postgresql.org now. I subscribed to a list mainly because I want to\n> > > > monitor the progress but the amount of messages kills my inbox. It would\n> > > > be really convenient for me if I could just browse the archives on web\n> > > > once in a while.\n> > > > \n> > > > Krzysztof Kowalczyk\n> > > > \n> > > > \n> > > \n> > > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org \n> > > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> > > \n> > > \n> > \n> > -- \n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> > \n> > \n> > \n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 25 Oct 2000 10:06:09 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Mailing list archives available?"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > I think this happens after I create/modify tables which reference this\n> > table. This is spontaneous, and doesn't _always_ happen...\n>\n> Um. I was hoping it was something more easily fixable :-(. What's\n> causing the relcache to decide that the rel has been modified is the\n> addition or removal of foreign-key triggers on the rel. Which seems\n> legitimate. (It's barely possible that we could get away with allowing\n> triggers to be added or deleted mid-transaction, but that doesn't feel\n> right to me.)\n>\n> There are two distinct known bugs that allow the error to be reported.\n> These have been discussed before, but to recap:\n>\n> 1. relcache will complain if the notification of cache invalidation\n> arrives after transaction start and before first use of the referenced\n> rel (when there was already a relcache entry left over from a prior\n> transaction). In this situation we should allow the change to occur\n> without complaint, ISTM. But the relcache doesn't currently have any\n> concept of first reference versus later references.\n>\n\nDo we have a conclusion about this thread ?\nIf no,how about changing heap_open(r) so that they allocate\nRelation descriptors after acquiring a lock on the table ?\nWe would use LockRelation() no longer.\n\nComments ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 31 Oct 2000 14:29:08 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Do we have a conclusion about this thread ?\n> If no,how about changing heap_open(r) so that they allocate\n> Relation descriptors after acquiring a lock on the table ?\n> We would use LockRelation() no longer.\n\nThat won't do by itself, because that will open us up to failures when\na relcache invalidation arrives mid-transaction and we don't happen to\nhave the relation open at the time. We could still have parse/plan\nresults that depend on the old relation definition.\n\nReally we need to fix things so that a lock is held from first use to\nend of transaction, independently of heap_open/heap_close.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Nov 2000 12:20:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Do we have a conclusion about this thread ?\n> > If no,how about changing heap_open(r) so that they allocate\n> > Relation descriptors after acquiring a lock on the table ?\n> > We would use LockRelation() no longer.\n> \n> That won't do by itself,\n\nDoesn't current heap_open() have a flaw that even the first \nuse of a relation in a transaction may cause an error\n\"relation ### modified while in use\" ?\n\n> because that will open us up to failures when\n> a relcache invalidation arrives mid-transaction and we don't happen to\n> have the relation open at the time. We could still have parse/plan\n> results that depend on the old relation definition.\n> \n\nPL/pgSQL already prepares a plan at the first execution\ntime and executes the plan repeatedly after that.\nWe would have general PREPARE/EXECUTE feature in the\nnear fututre. IMHO another mechanism to detect plan invali\ndation is needed.\n\nBTW,I sometimes see \n ERROR: SearchSysCache: recursive use of cache 10(16)\nunder small MAXNUMMESSAGES environment.\nI'm not sure about the cause but suspicious if sufficiently\nmany system relations are nailed for \"cache state reset\".\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Fri, 3 Nov 2000 06:49:35 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: relation ### modified while in use "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Doesn't current heap_open() have a flaw that even the first \n> use of a relation in a transaction may cause an error\n> \"relation ### modified while in use\" ?\n\nSure, that was the starting point of the discussion.\n\n>> because that will open us up to failures when\n>> a relcache invalidation arrives mid-transaction and we don't happen to\n>> have the relation open at the time. We could still have parse/plan\n>> results that depend on the old relation definition.\n\n> PL/pgSQL already prepares a plan at the first execution\n> time and executes the plan repeatedly after that.\n> We would have general PREPARE/EXECUTE feature in the\n> near fututre. IMHO another mechanism to detect plan invali\n> dation is needed.\n\nYes, we need the ability to invalidate cached plans. But that doesn't\nhave anything to do with this issue, IMHO. The problem at hand is that\na plan may be invalidated before it is even finished building. Do you\nexpect the parse-rewrite-plan-execute pipeline to be prepared to back up\nand restart if we notice a relation schema change report halfway down the\nprocess? How will we even *know* whether the schema change invalidates\nwhat we've done so far, unless we have a first-use-in-transaction flag?\n\n> BTW,I sometimes see \n> ERROR: SearchSysCache: recursive use of cache 10(16)\n> under small MAXNUMMESSAGES environment.\n> I'm not sure about the cause but suspicious if sufficiently\n> many system relations are nailed for \"cache state reset\".\n\nDoes this occur after a prior error message? I have been suspicious\nbecause there isn't a mechanism to clear the syscache-busy flags during\nxact abort. If we elog() out of a syscache fill operation, seems like\nthe busy flag will be left set, leading to exactly the above error on\nlater xacts' attempts to use that syscache. 
I think we need an\nAtEOXact_Syscache routine that runs around and clears the busy flags.\n(In the commit case, perhaps it should issue debug notices if it finds\nany that are set.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Nov 2000 17:01:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use "
},
{
"msg_contents": "On Fri, 3 Nov 2000, Hiroshi Inoue wrote:\n\n> PL/pgSQL already prepares a plan at the first execution\n> time and executes the plan repeatedly after that.\n> We would have general PREPARE/EXECUTE feature in the\n> near fututre. IMHO another mechanism to detect plan invali\n> dation is needed.\nExcellent point. While now I don't consider it too inconvenient to reload\nall my stored procedures after I change database structure, in future, I'd\nlove it to be handled by postgres itself.\n\nPossibly, plpgsql (or postgresql itself) could have a 'dependency' list of\nobjects that the current object depends on?\n\nThis would additionally help dump/restore (the old one, I'm not talking\nabout the newfangled way to do it), since, for restore, you need to dump\nthe objects in the order of their dependency, and plpgsql procedure can\npotentially depend on an object that has a higher OID...\n\n-alex\n\n\n",
"msg_date": "Thu, 2 Nov 2000 18:03:54 -0500 (EST)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "RE: relation ### modified while in use "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Doesn't current heap_open() have a flaw that even the first \n> > use of a relation in a transaction may cause an error\n> > \"relation ### modified while in use\" ?\n> \n> Sure, that was the starting point of the discussion.\n>\n\nAt least my proposal resolves this flaw.\n \n> >> because that will open us up to failures when\n> >> a relcache invalidation arrives mid-transaction and we don't happen to\n> >> have the relation open at the time. We could still have parse/plan\n> >> results that depend on the old relation definition.\n> \n> > PL/pgSQL already prepares a plan at the first execution\n> > time and executes the plan repeatedly after that.\n> > We would have general PREPARE/EXECUTE feature in the\n> > near fututre. IMHO another mechanism to detect plan invali\n> > dation is needed.\n> \n> Yes, we need the ability to invalidate cached plans. But that doesn't\n> have anything to do with this issue, IMHO. The problem at hand is that\n> a plan may be invalidated before it is even finished building. Do you\n> expect the parse-rewrite-plan-execute pipeline to be prepared to back up\n> and restart if we notice a relation schema change report halfway down the\n> process? \n\nIMHO executor should re-parse-rewrite-plan if the target plan\nis no longer valid.\n\n> How will we even *know* whether the schema change invalidates\n> what we've done so far, unless we have a first-use-in-transaction flag?\n> \n\nRegards.\nHiroshi Inoue \n",
"msg_date": "Fri, 3 Nov 2000 22:51:15 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: relation ### modified while in use "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>\n> > BTW,I sometimes see\n> > ERROR: SearchSysCache: recursive use of cache 10(16)\n> > under small MAXNUMMESSAGES environment.\n> > I'm not sure about the cause but suspicious if sufficiently\n> > many system relations are nailed for \"cache state reset\".\n>\n> Does this occur after a prior error message? I have been suspicious\n> because there isn't a mechanism to clear the syscache-busy flags during\n> xact abort. If we elog() out of a syscache fill operation, seems like\n> the busy flag will be left set, leading to exactly the above error on\n> later xacts' attempts to use that syscache. I think we need an\n> AtEOXact_Syscache routine that runs around and clears the busy flags.\n> (In the commit case, perhaps it should issue debug notices if it finds\n> any that are set.)\n>\n\nI don't know if I've seen the cases you pointed out.\nI have the following gdb back trace. Obviously it calls\nSearchSysCache() for cacheId 10 twice. I was able\nto get another gdb back trace but discarded it by\nmistake. Though I've added pause() just after detecting\nrecursive use of cache,backends continue the execution\nin most cases unfortunately.\nI've not examined the backtrace yet. 
But don't we have\nto nail system relation descriptors more than now ?\n\"cache state reset\" could arrive at any heap_open().\n\nNot that #0 corresponds to pause() and line numbers may\nbe different from yours.\n\n#0 0x40163db7 in __libc_pause ()\n#1 0x8141ade in SearchSysCache (cache=0x825b89c, v1=17113, v2=0, v3=0,\nv4=0)\n at catcache.c:1026\n#2 0x8145bd0 in SearchSysCacheTuple (cacheId=10, key1=17113, key2=0,\nkey3=0,\n key4=0) at syscache.c:505\n#3 0x807a100 in IndexSupportInitialize (indexStrategy=0x829d230,\n indexSupport=0x829ab2c, isUnique=0x829cf26 \"\", indexObjectId=17113,\n accessMethodObjectId=403, maxStrategyNumber=5, maxSupportNumber=1,\n maxAttributeNumber=2) at istrat.c:561\n#4 0x81437cd in IndexedAccessMethodInitialize (relation=0x829cf10)\n at relcache.c:1180\n#5 0x8143599 in RelationBuildDesc (buildinfo={infotype = 1, i = {\n info_id = 17113, info_name = 0x42d9 <Address 0x42d9 out of\nbounds>}},\n oldrelation=0x829cf10) at relcache.c:1095\n#6 0x8143f8d in RelationClearRelation (relation=0x829cf10, rebuildIt=1\n'\\001')\n at relcache.c:1687\n#7 0x81440fa in RelationFlushRelation (relationPtr=0x8246f8c,\n skipLocalRelations=1) at relcache.c:1789\n#8 0x80d02e3 in HashTableWalk (hashtable=0x823941c,\n function=0x81440d0 <RelationFlushRelation>, arg=1) at hasht.c:47\n#9 0x81442b5 in RelationCacheInvalidate () at relcache.c:1922\n#10 0x81421bd in ResetSystemCaches () at inval.c:559\n#11 0x810302b in InvalidateSharedInvalid (\n invalFunction=0x8142150 <CacheIdInvalidate>,\n resetFunction=0x81421b0 <ResetSystemCaches>) at sinval.c:153\n#12 0x8142332 in DiscardInvalid () at inval.c:722\n#13 0x8104a9f in LockRelation (relation=0x8280134, lockmode=1) at lmgr.c:151\n#14 0x807427d in heap_open (relationId=16580, lockmode=1) at heapam.c:638\n#15 0x8141b54 in SearchSysCache (cache=0x825b89c, v1=17116, v2=0, v3=0,\nv4=0)\n at catcache.c:1049\n#16 0x8145bd0 in SearchSysCacheTuple (cacheId=10, key1=17116, key2=0,\nkey3=0,\n key4=0) at syscache.c:505\n#17 
0x80921d5 in CatalogIndexInsert (idescs=0xbfffeaac, nIndices=2,\n heapRelation=0x82443d0, heapTuple=0x827a4c8) at indexing.c:156\n#18 0x808e6e7 in AddNewAttributeTuples (new_rel_oid=137741,\ntupdesc=0x8279904)\n at heap.c:659\n#19 0x808e9c3 in heap_create_with_catalog (relname=0x82a02c4 \"bprime\",\n tupdesc=0x8279904, relkind=114 'r', istemp=0 '\\000',\n allow_system_table_mods=0 '\\000') at heap.c:911\n#20 0x80c320d in InitPlan (operation=CMD_SELECT, parseTree=0x8288100,\n plan=0x8277d70, estate=0x8277dfc) at execMain.c:729\n#21 0x80c2af1 in ExecutorStart (queryDesc=0x8278c14, estate=0x8277dfc)\n at execMain.c:131\n#22 0x810c327 in ProcessQuery (parsetree=0x8288100, plan=0x8277d70,\n dest=Remote) at pquery.c:260\n#23 0x810aeb5 in pg_exec_query_string (\n query_string=0x8287c58 \"SELECT *\\n INTO TABLE Bprime\\n FROM tenk1\\n\nWHERE unique2 < 1000;\", dest=Remote, parse_context=0x822efb4) at\npostgres.c:820\n#24 0x810be42 in PostgresMain (argc=4, argv=0xbfffed74, real_argc=4,\n real_argv=0xbffff654, username=0x823c881 \"reindex\") at postgres.c:1808\n#25 0x80f3913 in DoBackend (port=0x823c618) at postmaster.c:1963\n#26 0x80f34e6 in BackendStartup (port=0x823c618) at postmaster.c:1732\n#27 0x80f285a in ServerLoop () at postmaster.c:978\n#28 0x80f22f4 in PostmasterMain (argc=4, argv=0xbffff654) at\npostmaster.c:669\n#29 0x80d41bd in main (argc=4, argv=0xbffff654) at main.c:112\n\nRegards.\nHirsohi Inoue\n\n",
"msg_date": "Fri, 3 Nov 2000 22:51:26 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: relation ### modified while in use "
},
{
"msg_contents": "Hi\n\nRelationCacheInvalidate() is called from ResetSystemCaches()\nand calles RelationFlushRelation() for all relation descriptors\nexcept some nailed system relations.\nI'm wondering why nailed relations could be exceptions.\nConversely why must RelationCacheInvalidate() call\nRelationFlushRelation() for other system relations ?\nIsn't it sufficient to call smgrclose() and replace rd_rel\nmember of system relations by the latest ones instead\nof calling RelationFlushRelation() ?\nThere's -O option of postmaster(postgres) which allows\nsystem table structure modification. I'm suspicious\nif it has been used properly before.\n\nComments ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Mon, 06 Nov 2000 10:00:29 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "ResetSystemCaches(was Re: relation ### modified while in use)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> RelationCacheInvalidate() is called from ResetSystemCaches()\n> and calles RelationFlushRelation() for all relation descriptors\n> except some nailed system relations.\n> I'm wondering why nailed relations could be exceptions.\n> Conversely why must RelationCacheInvalidate() call\n> RelationFlushRelation() for other system relations ?\n> Isn't it sufficient to call smgrclose() and replace rd_rel\n> member of system relations by the latest ones instead\n> of calling RelationFlushRelation() ?\n\nPossibly you could do fixrdesc() instead of just ignoring the report\nentirely for nailed-in relations. Not sure it's worth worrying about\nthough --- in practice, what is this going to make possible? You can't\nchange the structure of a nailed-in system catalog, nor will adding\ntriggers or rules to it work very well, so I'm not quite seeing the\npoint.\n\nBTW, don't forget that there are nailed-in indexes as well as tables.\nNot sure if that matters to this code, but it might.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Nov 2000 12:08:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ResetSystemCaches(was Re: relation ### modified while in use) "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > RelationCacheInvalidate() is called from ResetSystemCaches()\n> > and calles RelationFlushRelation() for all relation descriptors\n> > except some nailed system relations.\n> > I'm wondering why nailed relations could be exceptions.\n> > Conversely why must RelationCacheInvalidate() call\n> > RelationFlushRelation() for other system relations ?\n> > Isn't it sufficient to call smgrclose() and replace rd_rel\n> > member of system relations by the latest ones instead\n> > of calling RelationFlushRelation() ?\n>\n> Possibly you could do fixrdesc() instead of just ignoring the report\n> entirely for nailed-in relations. Not sure it's worth worrying about\n> though --- in practice, what is this going to make possible? You can't\n> change the structure of a nailed-in system catalog, nor will adding\n> triggers or rules to it work very well, so I'm not quite seeing the\n> point.\n>\n\nHmm,my point is on not nailed system relations(indexes)\nnot on already nailed relations.\nCoundn't we skip system relations(indexes) in Relation\nCacheInvalidate() ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 07 Nov 2000 08:35:10 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ResetSystemCaches(was Re: relation ### modified while in\n use)"
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> Does this occur after a prior error message? I have been suspicious\n>> because there isn't a mechanism to clear the syscache-busy flags during\n>> xact abort.\n\n> I don't know if I've seen the cases you pointed out.\n> I have the following gdb back trace. Obviously it calls\n> SearchSysCache() for cacheId 10 twice. I was able\n> to get another gdb back trace but discarded it by\n> mistake. Though I've added pause() just after detecting\n> recursive use of cache,backends continue the execution\n> in most cases unfortunately.\n> I've not examined the backtrace yet. But don't we have\n> to nail system relation descriptors more than now ?\n\nI don't think that's the solution; nailing more descriptors than we\nabsolutely must is not a pretty approach, and I don't think it solves\nthis problem anyway. Your example demonstrates that recursive use\nof a syscache is perfectly possible when a cache inval message arrives\njust as we are about to search for a syscache entry. Consider\nthe following path:\n\n1. We are doing index_open and ensuing relcache entry load for some user\nindex. In the middle of this, we need to fetch a not-currently-cached\npg_amop entry that is referenced by the index.\n\n2. As we open pg_amop, we receive an SI message for some other user\nindex that is referenced in the current query and so currently has\npositive refcnt. We therefore attempt to rebuild that index's relcache\nentry.\n\n3. At this point we have recursive invocation of relcache load, which\nmay well lead to a recursive attempt to fetch the very same pg_amop\nentry that the outer relcache load is trying to fetch.\n\nTherefore, the current error test of checking for re-entrant lookups in\nthe same syscache is bogus. 
It would still be bogus even if we refined\nit to notice whether the exact same entry is being sought.\n\nOn top of that, we have the issue I was concerned about that there is\nno mechanism for clearing the cache-busy flags during xact abort.\n\nRather than trying to fix this stuff, I propose that we simply remove\nthe test for recursive use of a syscache. AFAICS it will never catch\nany real bugs in production. It might catch bugs in development (ie,\nsomeone messes up the startup sequence in a way that causes a truly\ncircular cache lookup) but I think a stack overflow crash is a\nperfectly OK result then.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Nov 2000 13:51:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Recursive use of syscaches (was: relation ### modified while in use)"
},
{
"msg_contents": "> Rather than trying to fix this stuff, I propose that we simply remove\n> the test for recursive use of a syscache. AFAICS it will never catch\n> any real bugs in production. It might catch bugs in development (ie,\n> someone messes up the startup sequence in a way that causes a truly\n> circular cache lookup) but I think a stack overflow crash is a\n> perfectly OK result then.\n\nAgreed.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Nov 2000 14:05:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recursive use of syscaches (was: relation ### modified\n\twhile in use)"
},
{
"msg_contents": "I wrote:\n> On top of that, we have the issue I was concerned about that there is\n> no mechanism for clearing the cache-busy flags during xact abort.\n\nHmm, brain cells must be fading fast. On looking into the code I\nfind that there *is* such a mechanism --- installed by yours truly,\nonly three months ago.\n\nStill, I think getting rid of the test altogether is a better answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Nov 2000 14:08:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recursive use of syscaches (was: relation ### modified while in\n\tuse)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> >> Does this occur after a prior error message? I have been suspicious\n> >> because there isn't a mechanism to clear the syscache-busy flags during\n> >> xact abort.\n>\n> > I don't know if I've seen the cases you pointed out.\n> > I have the following gdb back trace. Obviously it calls\n> > SearchSysCache() for cacheId 10 twice. I was able\n> > to get another gdb back trace but discarded it by\n> > mistake. Though I've added pause() just after detecting\n> > recursive use of cache,backends continue the execution\n> > in most cases unfortunately.\n> > I've not examined the backtrace yet. But don't we have\n> > to nail system relation descriptors more than now ?\n>\n> I don't think that's the solution; nailing more descriptors than we\n> absolutely must is not a pretty approach,\n\nI don't object to remove the check 'recursive use of cache'\nbecause it's not a real check of recursion.\nMy concern is the robustness of rel cache.\nIt seems pretty dangerous to discard system relation\ndescriptors used for cache mechanism especially in\ncase of error recovery.\nIt also seems pretty dangerous to recontruct relation\ndescriptors especially in case of error recovery.\n\nRegards.\nHiroshi Inoue\n\n\n",
"msg_date": "Fri, 10 Nov 2000 09:56:39 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Recursive use of syscaches (was: relation ### modified while in\n\tuse)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> My concern is the robustness of rel cache.\n> It seems pretty dangerous to discard system relation\n> descriptors used for cache mechanism especially in\n> case of error recovery.\n> It also seems pretty dangerous to recontruct relation\n> descriptors especially in case of error recovery.\n\nWhy? We are able to construct all the non-nailed relcache entries\nfrom scratch during backend startup. That seems a sufficient\nproof that we can reconstruct any or all of them on demand.\n\nUntil the changes I made today, there was a flaw in that logic,\nnamely that the specific order that relcache entries are built in\nduring startup might be somehow magic, ie, building them in another\norder might cause a recursive syscache call. But now, that doesn't\nmatter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Nov 2000 20:04:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recursive use of syscaches (was: relation ### modified while in\n\tuse)"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > My concern is the robustness of rel cache.\n> > It seems pretty dangerous to discard system relation\n> > descriptors used for cache mechanism especially in\n> > case of error recovery.\n> > It also seems pretty dangerous to reconstruct relation\n> > descriptors especially in case of error recovery.\n>\n> Why? We are able to construct all the non-nailed relcache entries\n> from scratch during backend startup. That seems a sufficient\n> proof that we can reconstruct any or all of them on demand.\n>\n\n\nHmm, why is it sufficient ?\nAt backend startup there are no rel cache except\nsome nailed rels. When 'reset system cache' message\narrives, there would be many rel cache entries and\nsome of them may be in use.\nIn addition there could be some inconsistency of db\nin the middle of the transaction. Is it safe to reconstruct\nrel cache under the inconsistency ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Fri, 10 Nov 2000 10:33:21 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: Recursive use of syscaches (was: relation ### modified\n\twhile in use)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Why? We are able to construct all the non-nailed relcache entries\n>> from scratch during backend startup. That seems a sufficient\n>> proof that we can reconstruct any or all of them on demand.\n\n> Hmm, why is it sufficient ?\n> At backend startup there are no rel cache except\n> some nailed rels. When 'reset system cache' message\n> arrives, there would be many rel cache entries and\n> some of them may be in use.\n\nDoesn't bother me. The ones that are in use will get rebuilt.\nThat might trigger recursive rebuilding of system-table relcache\nentries, and consequently recursive syscache lookups, but so what?\nThat already happens during backend startup: some relcache entries\nare loaded as a byproduct of attempts to build other ones.\n\n> In addition there could be some inconsistency of db\n> in the middle of the transaction. Is it safe to reconstruct\n> rel cache under the inconsistency ?\n\nNo worse than trying to start up while other transactions are\nrunning. We don't support on-the-fly modification of schemas\nfor system catalogs anyway, so I don't see the issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Nov 2000 21:43:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Recursive use of syscaches (was: relation ### modified while\n\tin use)"
}
] |
[
{
"msg_contents": "Having just installed a few times, and being new to it, I've\nfallen in some holes the rest of you may not notice.\n\nFor one thing, has anybody recently read the stuff that\nprints\nat the end of a 'make install'? It gives three or so\npointers\nto pages at postgresql.org, which pages do not exist, or at\nleast cannot be reached as described.\n\nFor another the INSTALL document is a bit coy about what\nhappens when you install as root. If you have Perl\nconfigured,\nthe process creates at least one file that will cause\nproblems\nif you later try a 'make' as non-root. You won't have\npermission to proceed. And I don't like to be root more\nthan\nI have to, so I don't like this.\n\nMoreover, I think the INSTALL instructions should be a bit\nmore careful about the bits where you have to be the\nPostgres\nroot rather than your usual user name. I found it a bit\nfrustrating going through the dump/restore process till I\nfigured that out.\n\nFinally, there may be a real bug in the dumpall/restore\nbusiness.\nI had a database created with a version I got from some RPM\npackages somewhere. I don't remember where, and I surely\ndidn't\nknow how they were configured. When I built a new version,\nI did NOT configure multibyte support. This caused restore\nto\nfail because one of the very early commands was setting the\nUS-ASCII encoding. PostgreSQL barked at me that multibyte\nwas not supported, and the process stopped. This seems\nextreme,\nsince I'm guessing that no multibyte means I'm locked in\nUS-ASCII. 
In any event, this seems an unnecessary gotcha\nfor the inexperienced, and it may mean that I'm locked into\nmultibyte support that I don't want or need.\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: \nmailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Sun, 22 Oct 2000 19:43:22 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Holes in the install process"
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> For one thing, has anybody recently read the stuff that prints at the\n> end of a 'make install'?\n\nYeah, it's pretty out-of-date. Someone or other had promised to update\nit (Peter E. I think).\n\n> For another the INSTALL document is a bit coy about what happens when\n> you install as root. If you have Perl configured, the process creates\n> at least one file that will cause problems if you later try a 'make'\n> as non-root.\n\n\"At least one file\" isn't very helpful. If you want these things fixed,\nhow about *specifics*? Patches would be even better ;-). (The same\ngoes for documentation shortcomings, btw.)\n\n> I did NOT configure multibyte support. This caused restore to fail\n> because one of the very early commands was setting the US-ASCII\n> encoding. PostgreSQL barked at me that multibyte was not supported,\n> and the process stopped.\n\nHmm, I suppose CREATE DATABASE WITH ENCODING 'US-ASCII' had better be\naccepted even when multibyte isn't enabled. (Tatsuo, any comment here?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Oct 2000 23:30:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Holes in the install process "
}
] |
[
{
"msg_contents": "\n> And the --with-CXX option is honored, but only if you don't \n> override it in the template file. :)\n\nIs this the precedence we want ? \nI would have thought that commandline is preferred over template.\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 10:00:39 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: UnixWare 7.1.1b FS"
}
] |
[
{
"msg_contents": "\n> (Also, if you do want to check for a NULL input in current sources,\n> looking for a NULL pointer is the wrong way to code it anyway;\n> PG_ARGISNULL(n) is the right way.)\n\nFor pass by reference datatypes setting the reference to a null pointer\nfor a NULL value imho would be a fine thing in addition to the indicator, \nno ?\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 10:12:47 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: to_char() dumps core "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> For pass by reference datatypes setting the reference to a null pointer\n> for a NULL value imho would be a fine thing in addition to the indicator, \n> no ?\n\nAt the moment it generally will be, but that's not certain to be true\nforever. I believe we've had discussions in the past about supporting\nmultiple kinds of NULL. (I had the idea this was actually required by\nSQL99, in fact, though I can't find anything about it at the moment.)\n\nThe obvious way to do that is to commandeer the otherwise unused\ncontents of a Datum when the associated null-flag is true. At that\npoint checking the null-flag will be the only valid way to check for\nNULL.\n\nAssuming that the null-kind values are small integers, attempts to\ndereference them will still SEGV on reasonable systems, so I don't\nthink any error checking is lost. Just don't do \"if (datum == NULL)\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 09:43:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: to_char() dumps core "
}
] |
[
{
"msg_contents": "> What I'm proposing is that once an xact has touched a\n> table, other xacts should not be able to apply schema updates to that\n> table until the first xact commits.\n\nNo, this would mean too many locks, and would leave the dba with hardly a \nchance to alter a table. \n\nIf I recall correctly the ANSI standard mandates that schema modifications \nbe seen immediately. Thus imho we need to refresh the relcache on first \naccess after modification. Thus two accesses to one table inside one tx \nwould be allowed to see two different versions (the exception being \nserializable isolation level).\n\nImho we only need to lock out an alter table if a cursor is open on that table.\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 10:46:14 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: relation ### modified while in use "
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n> > What I'm proposing is that once an xact has touched a\n> > table, other xacts should not be able to apply schema updates to that\n> > table until the first xact commits.\n>\n> No, this would mean too many locks, and would leave the dba with hardly a\n> chance to alter a table.\n>\n\nAre there many applications which have many SELECT statements (without\nFOR UPDATE) in one tx ?\nAs for locks, weak locks don't pass intensive locks. Dba seems to be able\nto alter a table at any time.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Mon, 23 Oct 2000 18:35:17 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: relation ### modified while in use"
}
] |
[
{
"msg_contents": "\n> What do other DBs do with their output variables if there is \n> an embedded SQL\n> query resulting in a NULL return value? What I mean is:\n> \n> exec sql select text into :txt:ind from ...\n> \n> If text is NULL, ind will be set, but does txt change?\n> \n> I was just told Informix blanks txt.\n\nNo, it gives a null string. \nIn general Informix has a value that represents null that is \n'distinct from all legal values in any given datatype'.\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 11:03:41 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: embedded sql with indicators in other DBs"
}
] |
[
{
"msg_contents": "\nI posted some regression failures twice, and never saw them on the\nlist or in the newsgroup. This is a test.\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Oct 2000 04:30:00 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "testing my connection to list."
},
{
"msg_contents": "Ok, so why didn't my regression outputs post? \n\nMarc? \n\nLER\n* Larry Rosenman <ler@lerctr.org> [001023 04:32]:\n> \n> I posted some regression failures twice, and never saw them on the\n> list or in the newsgroup. This is a test.\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Oct 2000 04:34:42 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: testing my connection to list."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Ok, so why didn't my regression outputs post? \n> Marc? \n\nHow big were they? I think the default configuration for majordomo\nis that posts over 50K or so don't go through until hand-approved by\nmoderator. Marc tends to clean out that inbox every few days...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:15:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: testing my connection to list. "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001023 09:15]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Ok, so why didn't my regression outputs post? \n> > Marc? \n> \n> How big were they? I think the default configuration for majordomo\n> is that posts over 50K or so don't go through until hand-approved by\n> moderator. Marc tends to clean out that inbox every few days...\n63K. But I thought some of my others were larger. \n\nMaybe that limit needs to be bigger?\n\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Oct 2000 09:32:11 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: testing my connection to list."
}
] |
[
{
"msg_contents": "\n> > > What I'm proposing is that once an xact has touched a\n> > > table, other xacts should not be able to apply schema updates to that\n> > > table until the first xact commits.\n> >\n> > No, this would mean too many locks, and would leave the dba with hardly a\n> > chance to alter a table.\n> >\n> \n> Are there many applications which have many SELECT statements(without\n> FOR UPDATE) in one tx ?\n\nWhy not ?\n\n> As for locks,weak locks doesn't pass intensive locks. Dba \n> seems to be able to alter a table at any time.\n\nSorry, I don't understand this sentence. Tom suggested placing a shared lock on \nany table that is accessed until end of tx. No one can alter table until all users have\nclosed their txns and not accessed tables again. Remember that this would include\ncreating an index ...\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 11:52:06 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: relation ### modified while in use"
},
{
"msg_contents": "> > As for locks,weak locks doesn't pass intensive locks. Dba\n> > seems to be able to alter a table at any time.\n>\n> Sorry, I don't understand this sentence. Tom suggested placing a shared\nlock on\n> any table that is accessed until end of tx. Noone can alter table until\nall users have\n> closed their txns and not accessed tables again.\n\nMore than that - while one xaction will wait to alter a table no new xaction\nwill be\nallowed to access this table either.\n\n> Remember that this would include creating an index ...\n\nI don't think so. Index creation requires\n1. share lock on schema\n2. share lock on data\n\nVadim\n\n\n",
"msg_date": "Mon, 23 Oct 2000 04:12:35 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: relation ### modified while in use"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> As for locks,weak locks doesn't pass intensive locks. Dba \n>> seems to be able to alter a table at any time.\n\n> Sorry, I don't understand this sentence. Tom suggested placing a\n> shared lock on any table that is accessed until end of tx. Noone can\n> alter table until all users have closed their txns and not accessed\n> tables again.\n\nUntil existing xacts using that table have closed, yes. But I believe\nthe lock manager has some precedence rules that will allow the pending\nrequest for AccessExclusiveLock to take precedence over new requests\nfor lesser locks. So you're only held off for a long time if you have\nlong-running xacts that use the target table.\n\nI consider that behavior *far* safer than allowing schema changes to\nbe seen mid-transaction. Consider the following example:\n\n\tSession 1\t\t\tSession 2\n\n\tbegin;\n\n\tINSERT INTO foo ...;\n\n\t\t\t\t\tALTER foo ADD constraint;\n\n\tINSERT INTO foo ...;\n\n\tend;\n\nWhich, if any, of session 1's insertions will be subject to the\nconstraint? What are the odds that the dba will like the result?\n\nWith my proposal, session 2's ALTER would wait for session 1 to commit,\nand then the ALTER's own scan to verify the constraint will check all\nthe rows added by session 1.\n\nUnder your proposal, I think the rows inserted at the beginning of\nsession 1's xact would be committed without having been checked.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:10:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: relation ### modified while in use "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> In this case, wouldn't the answer depend on the isolation level of session\n> 1? For serializable TX, then constraint would not apply; 'read committed'\n> would mean the constraint was visible on the second insert and at the commit.\n\nThe important issue here is that all schema changes have to be read\non a read-committed basis, even if your transaction is otherwise\nserializable. Consider for example the possibility that the schema\nchange you're ignoring consists of a DROP INDEX or some such --- you'll\nfail if you proceed as though the index is still there. This is the\npoint Vadim was making a few days ago (but I didn't understand at the\ntime).\n\nI believe we can work out a consistent set of behavior such that user\ndata accesses (SELECT/UPDATE/etc) follow MVCC rules but system accesses\nto schema data always follow read-committed semantics. One of the\ncomponents of this has to be an agreement on how to handle locking.\nAFAICS, we have to adopt hold-some-kind-of-lock-till-end-of-xact,\nor we will have consistency problems between the user and system\nviews of the world.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 12:29:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: relation ### modified while in use "
},
{
"msg_contents": "At 10:10 23/10/00 -0400, Tom Lane wrote:\n>\n>I consider that behavior *far* safer than allowing schema changes to\n>be seen mid-transaction. Consider the following example:\n>\n>\tSession 1\t\t\tSession 2\n>\n>\tbegin;\n>\n>\tINSERT INTO foo ...;\n>\n>\t\t\t\t\tALTER foo ADD constraint;\n>\n>\tINSERT INTO foo ...;\n>\n>\tend;\n>\n>Which, if any, of session 1's insertions will be subject to the\n>constraint? What are the odds that the dba will like the result?\n>\n\nIn this case, wouldn't the answer depend on the isolation level of session\n1? For serializable TX, then constraint would not apply; 'read committed'\nwould mean the constraint was visible on the second insert and at the commit.\n\nI would err on the side of insisting all metadata changes occur in\nserializable transactions to make life a little easier.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 24 Oct 2000 03:16:21 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: relation ### modified while in use "
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n> > > > What I'm proposing is that once an xact has touched a\n> > > > table, other xacts should not be able to apply schema updates to that\n> > > > table until the first xact commits.\n> > >\n> > > No, this would mean too many locks, and would leave the dba with hardly a\n> > > chance to alter a table.\n> > >\n> >\n> > Are there many applications which have many SELECT statements(without\n> > FOR UPDATE) in one tx ?\n>\n> Why not ?\n>\n\nIt seems to me that multiple SELECT statements in a tx has little\nmeaning unless the tx is executed in SERIALIZABLE isolation level.\n\n>\n> > As for locks,weak locks doesn't pass intensive locks. Dba\n> > seems to be able to alter a table at any time.\n>\n> Sorry, I don't understand this sentence. Tom suggested placing a shared lock on\n> any table that is accessed until end of tx. Noone can alter table until all users have\n> closed their txns and not accessed tables again. Remember that this would include\n> creating an index ...\n>\n\nWhat I meant is the following though I may be misunderstanding your point.\n\nSession-1.\n # begin;\n # declare myc cursor for select * from t1;\n\nSession-2.\n # begin;\n # lock table t1; [blocked]\n\nSession-3.\n # begin;\n # select * from t1; [blocked]\n\nSession-1.\n # abort;\n\nThen\nSession-2.\n LOCK TABLE\n #\n\nbut\nSession-3.\n [still blocked]\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 09:20:36 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: relation ### modified while in use"
}
] |
[
{
"msg_contents": "\n> Until existing xacts using that table have closed, yes. But I believe\n> the lock manager has some precedence rules that will allow the pending\n> request for AccessExclusiveLock to take precedence over new requests\n> for lesser locks. So you're only held off for a long time if you have\n> long-running xacts that use the target table.\n> \n> I consider that behavior *far* safer than allowing schema changes to\n> be seen mid-transaction. Consider the following example:\n> \n> \tSession 1\t\t\tSession 2\n> \n> \tbegin;\n> \n> \tINSERT INTO foo ...;\n> \n> \t\t\t\t\tALTER foo ADD constraint;\n> \n> \tINSERT INTO foo ...;\n> \n> \tend;\n> \n> Which, if any, of session 1's insertions will be subject to the\n> constraint? What are the odds that the dba will like the result?\n> \n> With my proposal, session 2's ALTER would wait for session 1 \n> to commit,\n> and then the ALTER's own scan to verify the constraint will check all\n> the rows added by session 1.\n> \n> Under your proposal, I think the rows inserted at the beginning of\n> session 1's xact would be committed without having been checked.\n\nNo, the above is not a valid example, because Session 2 won't\nget the exclusive lock until Session 1 commits, since Session 1 already \nholds a lock on foo (for the inserted row). \n\nYou were talking about the \"select only\" case (and no for update either). \nI think that select statements need a shared lock for the duration of their \nexecution only.\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 16:30:36 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: relation ### modified while in use "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> No, the above is not a valid example, because Session 2 won't\n> get the exclusive lock until Session 1 commits, since Session 1 already \n> holds a lock on foo (for the inserted row). \n\n> You were talking about the \"select only\" case (and no for update either). \n> I think that select statements need a shared lock for the duration of their \n> execution only.\n\nYou seem to think that locks on individual tuples conflict with\ntable-wide locks. AFAIK that's not true. The only way to prevent\nanother xact from gaining AccessExclusiveLock on a table is to be\nholding some lock *on the table*.\n\nAs for your claim that read-only xacts don't need to worry about\npreventing schema updates, what of adding/deleting ON SELECT rules?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 10:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: relation ### modified while in use "
}
] |
[
{
"msg_contents": "All,\n\nApologies for the nature of this post - I think/hope this will be of \ngeneral interest to the PostgreSQL community:\n\nGreat Bridge is hiring. Big time.\n\nWe're particularly interested in what we're calling \"knowledge \nengineers\" - the people who will work with paying Great Bridge customers \nto troubleshoot technical issues with PostgreSQL and other open source \ntechnologies, work on the fix themselves, and get paid to hack on \nPostgreSQL and other projects in their \"down\" time. Here's the position \ndescription from our website:\n\n--\n\nThis highly specialized engineer will be the front line of Great \nBridge's professional support services, working to troubleshoot and \nresolve customers' technical issues. You will communicate with customers \n(email, live chat, phone) to troubleshoot and resolve problems; work \nclosely with engineering staff on identifying bugs and scheduling fixes \nand hack at Great Bridge supported open source projects as available. \nExperience desired is 7-10 years of database programming or \nadministration and deep expertise in at least one major RDBMS \n(PostgreSQL, Oracle, DB2, Microsoft SQL Server, Sybase, Informix). Unix \norientation is critical, as well as experience in 2 - 3 of the following \nprogramming languages: C, C++, Perl, PHP, Python and Tcl/Tk. A \ndemonstrated ability to work in small teams in a cooperative environment \nwith strong customer service skills is required.\n\n--\n\nGreat Bridge is headquartered in Norfolk, Virginia - which boasts a mild \nclimate, moderate cost of living, and easy access to lots of water, \nincluding the Atlantic Ocean, the Chesapeake Bay, and lots of rivers and \ninlets. For more info on the region, see www.hamptonroads.com.\n\nGreat Bridge currently has 32 full-time employees and is growing fast. \nThe knowledge engineer positions are located in Norfolk, and offer very \ncompetitive salaries, stock options, and comprehensive benefits. 
If you \nlove PostgreSQL and open source, and want to get in on the ground floor \nof a leading open source software company, please contact me off-list \nand let's talk.\n\nThanks,\nNed\n\n-- \n----------------------------------------------------\nNed Lilly e: ned@greatbridge.com\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n",
"msg_date": "Mon, 23 Oct 2000 10:53:48 -0400",
"msg_from": "Ned Lilly <ned@greatbridge.com>",
"msg_from_op": true,
"msg_subject": "Great Bridge is hiring!"
},
{
"msg_contents": "On Mon, 23 Oct 2000, Ned Lilly wrote:\n\n> Date: Mon, 23 Oct 2000 10:53:48 -0400\n> From: Ned Lilly <ned@greatbridge.com>\n> To: pgsql-hackers@postgresql.org, pgsql-general@postgresql.org\n> Subject: [GENERAL] Great Bridge is hiring!\n\nBut for what? :)\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 23 Oct 2000 17:00:48 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Great Bridge is hiring!"
}
] |
[
{
"msg_contents": "\n> > You were talking about the \"select only\" case (and no for update either). \n> > I think that select statements need a shared lock for the duration of their \n> > execution only.\n> \n> You seem to think that locks on individual tuples conflict with\n> table-wide locks.\n\nYes, very much so. Any other way would be subject to the same quirks \nyou would like to avoid, no ?\n\n> AFAIK that's not true.\n\nwell, imho room for improvement.\n\n> The only way to prevent\n> another xact from gaining AccessExclusiveLock on a table is to be\n> holding some lock *on the table*.\n\nYes, and holding a row exclusive lock must imho at least grab a shared\ntable lock (to avoid several problems, like missing an index update,\ninserting a null into a newly added not null column ...).\nAlternately the table exclusive lock could honour row locks \n(probably not possible, since we don't track those do we ?).\n\n> As for your claim that read-only xacts don't need to worry about\n> preventing schema updates, what of adding/deleting ON SELECT rules?\n\nWell, depends on what that rule does, you mean a new rule ?\nAd hoc I don't see a problem based on the idea that all modification gets \nappropriate locks.\n\nAndreas\n",
"msg_date": "Mon, 23 Oct 2000 17:00:27 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: AW: relation ### modified while in use "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Yes, and holding a row exclusive lock must imho at least grab a shared\n> table lock\n\nAs indeed it does. Our disagreement seems to be just on the point of\nwhether it's safe to allow a read-only transaction to release its \nAccessShareLock locks partway through.\n\nMy opinion about that is colored by the known bugs that we have because\nthe parser/rewriter/planner currently do just that. You can cause the\nsystem to become mighty confused if the report of a table schema change\narrives partway through the parse/plan process, because decisions\nalready made are no longer valid. While we can probably patch the holes\nin this area by holding a lock throughout processing of one statement,\nI think that will just push the problem up to the application level.\nHow many apps are likely to be coded in a way that will be robust\nagainst intra-transaction schema changes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 11:44:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: AW: AW: relation ### modified while in use "
}
] |
[
{
"msg_contents": "\nOne thing my testing gave SCO was the fact that cc needs to know about\nthe -R option to ld. It will change before release to know that -R\ntakes an argument. \n\nJust keep that in mind....\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 23 Oct 2000 14:20:45 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "UDK...."
}
] |
[
{
"msg_contents": "When I migrated to the 7.1 tree, I brought with me my 7.0.2 database.\nIt's fair sized (6GB) but the problems I had seem to be in the\nsystem stuff.\n\nI'm not even sure if the database is correctly loaded now, and that's\ngotta be something no DBA would like. I'm not even all that happy\nabout the one NOTICE that I received.\n\nThe NOTICE is:\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'teach_pkey' for\ntable 'teach'\n\nProvoked by:\nCREATE TABLE \"teach\" (\n \"prof\" character varying(5) NOT NULL,\n \"class\" character varying(5),\n PRIMARY KEY (\"prof\")\n); \n\nThis seems like too much feedback to me.\n\nThe more serious message is 6 copies of\nCHANGE\nERROR: aclinsert3: insertion before world ACL??\n\nProvoked by things like:\nREVOKE ALL on \"pga_queries\" from PUBLIC;\nGRANT ALL on \"pga_queries\" to\nPUBLIC; \n\nAnd I have three problems with this:\n1) Why is pg_dumpall producing something that provokes errors at all?\n2) Why doesn't the error message give more info about the table involved?\n It's kinda hard to pick out the culprit from a 3GB script.\n3) I'm confused: is \"PUBLIC\" different from \"world\"?\n\nI'm sorry I can't dive in and show how to fix these things;\nit's my intention to get that good, but I'm not there yet.\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Mon, 23 Oct 2000 12:28:21 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Errors on restoring a dumpall"
}
] |
[
{
"msg_contents": "> > > I've wondered why AccessShareLock is a short term lock.\n> >\n> > MUST BE. AccessShare-/Exclusive-Locks are *data* locks.\n> > If one want to protect schema then new schema share/excl locks\n> > must be inroduced. There is no conflict between data and\n> > schema locks - they are orthogonal.\n> >\n> \n> Oracle doesn't have Access...Lock locks.\n\nOracle has no vacuum. We need in AccessExclusiveLock to\nsupport vacuum - to stop any concurrent scans over table.\n\nBut maybe I try to make things more complex without\ngood reason - long term AccessShareLock would just\nblock vacuum till transaction end (in addition to blocked\nconcurrent DDL statements we discuss now) - not big\ninconvenience probably.\nSo ok, I have no strong objection against using\nAccess...Locks as schema locks.\n\n> In my understanding,locking levels you provided contains\n> an implicit share/exclusive lock on the corrsponding\n> pg_class tuple i.e. AccessExclusive Lock acquires an\n> exclusive lock on the corresping pg_class tuple and\n> other locks acquire a share lock, Is it right ?\n\nNo. Access...Locks are acquired over target table\n(table' oid is used as key for lmgr hash table),\nnot over corresponding pg_class tuple, in what case\nwe would use pg_clas' oid + table' oid as key\n(possibility I've described below).\n\n> > > If we have a mechanism to acquire a share lock on a tuple,we\n ^^^^^^^^^^^^^^^^^^^^^\n> > > could use it for managing system info generally. However the\n> > > only allowed lock on a tuple is exclusive. \n> > > Access(Share/Exclusive)\n> >\n...\n> > - we could add oid to union above and lock tables by acquiring lock\n> > on pg_class with objId.oid = table' oid. Same way we could \n> > lock indices and whatever we want... 
if we want -:)\n> \n> As you know well,this implemenation has a flaw that we have\n> to be anxious about the shortage of shared memory.\n\nDidn't you asked about share lock on a tuple?\nShare locks may be kept in memory only.\nI've just pointed that we have such mechanism -:)\nAnother possible answer is - Shared Catalog Cache.\n\nVadim\n",
"msg_date": "Mon, 23 Oct 2000 18:23:31 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: relation ### modified while in use"
},
{
"msg_contents": "\n\n\"Mikheev, Vadim\" wrote:\n\n> > > > I've wondered why AccessShareLock is a short term lock.\n> > >\n> > > MUST BE. AccessShare-/Exclusive-Locks are *data* locks.\n> > > If one want to protect schema then new schema share/excl locks\n> > > must be inroduced. There is no conflict between data and\n> > > schema locks - they are orthogonal.\n> > >\n> >\n> > Oracle doesn't have Access...Lock locks.\n>\n> Oracle has no vacuum. We need in AccessExclusiveLock to\n> support vacuum - to stop any concurrent scans over table.\n>\n> But maybe I try to make things more complex without\n> good reason - long term AccessShareLock would just\n> block vacuum till transaction end (in addition to blocked\n> concurrent DDL statements we discuss now) - not big\n> inconvenience probably.\n> So ok, I have no strong objection against using\n> Access...Locks as schema locks.\n>\n> > In my understanding,locking levels you provided contains\n> > an implicit share/exclusive lock on the corrsponding\n> > pg_class tuple i.e. AccessExclusive Lock acquires an\n> > exclusive lock on the corresping pg_class tuple and\n> > other locks acquire a share lock, Is it right ?\n>\n> No. Access...Locks are acquired over target table\n> (table' oid is used as key for lmgr hash table),\n> not over corresponding pg_class tuple, in what case\n> we would use pg_clas' oid + table' oid as key\n> (possibility I've described below).\n>\n\nYes,I know that \"lock table\" doesn't touch the correpon\nding pg_class tuple at all. However isn't it equivalent ?\nAt least\n\n>\n> > > > If we have a mechanism to acquire a share lock on a tuple,we\n>\n\nneed Access(Share/Exclusive)Lock ?\n\n\n> ...\n> > > - we could add oid to union above and lock tables by acquiring lock\n> > > on pg_class with objId.oid = table' oid. Same way we could\n> > > lock indices and whatever we want... 
if we want -:)\n> >\n> > As you know well,this implemenation has a flaw that we have\n> > to be anxious about the shortage of shared memory.\n>\n> Didn't you asked about share lock on a tuple?\n> Share locks may be kept in memory only.\n> I've just pointed that we have such mechanism -:)\n\nHmm,I remember you refered to SHARE lock on tuples once.\nI wasn't able to suppose how you would implement it then.\nI've also thought the enhancement of current locking\nmachanism which had been used for page level locking but\nhave always been discouraged by the shmem shortage flaw.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 11:09:28 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "> > > In my understanding,locking levels you provided contains\n> > > an implicit share/exclusive lock on the corrsponding\n> > > pg_class tuple i.e. AccessExclusive Lock acquires an\n> > > exclusive lock on the corresping pg_class tuple and\n> > > other locks acquire a share lock, Is it right ?\n> >\n> > No. Access...Locks are acquired over target table\n> > (table' oid is used as key for lmgr hash table),\n> > not over corresponding pg_class tuple, in what case\n> > we would use pg_clas' oid + table' oid as key\n> > (possibility I've described below).\n> >\n> \n> Yes,I know that \"lock table\" doesn't touch the correpon\n> ding pg_class tuple at all. However isn't it equivalent ?\n\n From what POV?\nLock manager will allow two simultaneous exclusive locks using these\ndifferent methods (keys) and so we can interpret (use) them differently.\n\nVadim\n\n\n",
"msg_date": "Mon, 23 Oct 2000 22:44:26 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
},
{
"msg_contents": "\n\nVadim Mikheev wrote:\n\n> > > > In my understanding,locking levels you provided contains\n> > > > an implicit share/exclusive lock on the corrsponding\n> > > > pg_class tuple i.e. AccessExclusive Lock acquires an\n> > > > exclusive lock on the corresping pg_class tuple and\n> > > > other locks acquire a share lock, Is it right ?\n> > >\n> > > No. Access...Locks are acquired over target table\n> > > (table' oid is used as key for lmgr hash table),\n> > > not over corresponding pg_class tuple, in what case\n> > > we would use pg_clas' oid + table' oid as key\n> > > (possibility I've described below).\n> > >\n> >\n> > Yes,I know that \"lock table\" doesn't touch the correpon\n> > ding pg_class tuple at all. However isn't it equivalent ?\n>\n> >From what POV?\n> Lock manager will allow two simultaneous exclusive locks using these\n> different methods (keys) and so we can interpret (use) them differently.\n>\n\nSeems my first explanation was really bad,sorry.\n\nWhen I saw Access(Share/Exclusive)Lock for the first time,\nI thought what they are for.\nFor VACUUM ? Yes. For DROP TABLE ? Yes. For ALTER TABLE ?\nMaybe yes...........\nOracle doesn't have VACUUM and probably handles the other\ncases using dictionary lock mechanism.\nUnfortunately we've had no dictionary lock mechanism.\nDon't Access(..)Lock locks compensate the lack of dictionary\nlock mechanism ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Wed, 25 Oct 2000 10:12:02 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: relation ### modified while in use"
}
] |
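The lock-manager behaviour Vadim and Hiroshi debate above — table-level locks keyed by the relation's OID rather than by its pg_class tuple, with AccessShareLock (a plain SELECT's lock) conflicting only with AccessExclusiveLock (the lock VACUUM and DDL need) — can be sketched as a toy model. This is purely illustrative: the mode subset and conflict table below are a simplification, not the backend's real lmgr code.

```python
# Toy lock manager: keys are relation OIDs, values are the held (backend,
# mode) pairs. Only four of PostgreSQL's lock modes are modelled here.
CONFLICTS = {
    "AccessShareLock": {"AccessExclusiveLock"},
    "RowExclusiveLock": {"ShareLock", "AccessExclusiveLock"},
    "ShareLock": {"RowExclusiveLock", "AccessExclusiveLock"},
    "AccessExclusiveLock": {"AccessShareLock", "RowExclusiveLock",
                            "ShareLock", "AccessExclusiveLock"},
}

class LockManager:
    def __init__(self):
        self.held = {}  # relation OID -> list of (backend, mode)

    def try_lock(self, rel_oid, backend, mode):
        """Grant the lock unless another backend holds a conflicting mode."""
        for other, other_mode in self.held.get(rel_oid, []):
            if other != backend and other_mode in CONFLICTS[mode]:
                return False  # in the real system the backend would sleep
        self.held.setdefault(rel_oid, []).append((backend, mode))
        return True

lm = LockManager()
assert lm.try_lock(16384, "A", "AccessShareLock")          # reader 1
assert lm.try_lock(16384, "B", "AccessShareLock")          # readers don't conflict
assert not lm.try_lock(16384, "C", "AccessExclusiveLock")  # VACUUM/DDL must wait
assert lm.try_lock(16385, "C", "AccessExclusiveLock")      # another table is free
```

Because readers conflict only with AccessExclusiveLock, a long-lived read-only transaction blocks VACUUM and DDL but nothing else — exactly the trade-off weighed in the thread.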
[
{
"msg_contents": "I believe these TODO items can now be marked \"done\":\n\n* SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n* Be smarter about promoting types when UNION merges different data types\n\n\t[ actually UNION is now exactly as smart as CASE, which is still\n\t not good enough, but it's no longer a UNION problem --- the\n\t issue is how much knowledge we have of type promotion ]\n\n* redesign INSERT ... SELECT to have two levels of target list\n* have INTERSECT/EXCEPT prevent duplicates unless ALL is specified\n\n* Views containing aggregates sometimes fail(Jan)\n\n* SELECT ... UNION ... ORDER BY fails when sort expr not in result list\n\n\t[ this is now disallowed, rather than misbehaving ]\n\n* SELECT ... UNION ... GROUP BY fails if column types disagree, no type\n promotion occurs\n\n* Allow long tuples by chaining or auto-storing outside db (TOAST)(Jan)\n\n* Allow compression of large fields or a compressed field type\n* Large objects\n\to Fix large object mapping scheme, own typeid or reltype(Peter)\n\to Not to stuff everything as files in a single directory, hash dirs\n\to Allow large object vacuuming\n\to Tables that start with xinv confused to be large objects\n\n* Allow DISTINCT on views\n* Allow views of aggregate columns\n* Allow views with subselects\n\n* Support UNION/INTERSECT/EXCEPT in sub-selects\n\n* Redesign the function call interface to handle NULLs better[function](Tom)\n\n* redesign UNION structures to have separarate target lists\n* Allow multi-level query trees for INSERT INTO ... 
SELECT\n\n* use fmgr_info()/fmgr_faddr() instead of fmgr() calls in high-traffic\n places, like GROUP BY, UNIQUE, index processing, etc.\n\n* In WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n\n* Remove ANALYZE from VACUUM so it can be run separately without locks\n\n\nAlso, this is no longer relevant for large objects, though perhaps still\nof interest for sort files:\n\n* Put sort files, large objects in their own directory\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 21:53:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "TODO updates"
},
{
"msg_contents": "Thanks. Changes made. That's a lot of stuff.\n\n\n> I believe these TODO items can now be marked \"done\":\n> \n> * SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n> * Be smarter about promoting types when UNION merges different data types\n> \n> \t[ actually UNION is now exactly as smart as CASE, which is still\n> \t not good enough, but it's no longer a UNION problem --- the\n> \t issue is how much knowledge we have of type promotion ]\n> \n> * redesign INSERT ... SELECT to have two levels of target list\n> * have INTERSECT/EXCEPT prevent duplicates unless ALL is specified\n> \n> * Views containing aggregates sometimes fail(Jan)\n> \n> * SELECT ... UNION ... ORDER BY fails when sort expr not in result list\n> \n> \t[ this is now disallowed, rather than misbehaving ]\n> \n> * SELECT ... UNION ... GROUP BY fails if column types disagree, no type\n> promotion occurs\n> \n> * Allow long tuples by chaining or auto-storing outside db (TOAST)(Jan)\n> \n> * Allow compression of large fields or a compressed field type\n> * Large objects\n> \to Fix large object mapping scheme, own typeid or reltype(Peter)\n> \to Not to stuff everything as files in a single directory, hash dirs\n> \to Allow large object vacuuming\n> \to Tables that start with xinv confused to be large objects\n> \n> * Allow DISTINCT on views\n> * Allow views of aggregate columns\n> * Allow views with subselects\n> \n> * Support UNION/INTERSECT/EXCEPT in sub-selects\n> \n> * Redesign the function call interface to handle NULLs better[function](Tom)\n> \n> * redesign UNION structures to have separarate target lists\n> * Allow multi-level query trees for INSERT INTO ... 
SELECT\n> \n> * use fmgr_info()/fmgr_faddr() instead of fmgr() calls in high-traffic\n> places, like GROUP BY, UNIQUE, index processing, etc.\n> \n> * In WHERE tab1.x=3 AND tab1.x=tab2.y, add tab2.y=3\n> \n> * Remove ANALYZE from VACUUM so it can be run separately without locks\n> \n> \n> Also, this is no longer relevant for large objects, though perhaps still\n> of interest for sort files:\n> \n> * Put sort files, large objects in their own directory\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 23 Oct 2000 21:59:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO updates"
}
] |
[
{
"msg_contents": "Peter,\n\n As of current sources, large objects no longer occupy tables named\n'xinvNNNN' nor indexes named 'xinxNNNN'. Therefore, it'd be appropriate\nto remove the special tests that exclude tables/indices named that way\nfrom the tests in DatabaseMetaData.java. I have not done this because\nI'm not in a position to test changes to the JDBC driver; would you\nplease add it to your todo list?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2000 21:57:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "JDBC now needs updates for large objects"
}
] |
[
{
"msg_contents": "Is there any way to get libpq built with -lsocket on the unixware (and\nprobably other SVR4's) to get the network stuff required ? \n\n(other SVR4's prolly need -lsocket -lnsl) \n\nLarry\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Mon, 23 Oct 2000 21:59:46 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "libpq needs -lsocket on UnixWare"
},
{
"msg_contents": "> (other SVR4's prolly need -lsocket -lnsl)\n\n Something like\n\n AC_CHECK_LIB(socket,socket)\n\n or something like that? In fact, it complains about inet_aton and\ngethostbyname.\n\n\n--\n\n contaminated fish and microchips\n huge supertankers on Arabian trips\n oily propaganda from the leaders' lips\n all about the future\n there's people over here, people over there\n everybody's looking for a little more air\n crossing all the borders just to take their share\n planning for the future\n\n Rainbow, Difficult to Cure\n",
"msg_date": "Tue, 24 Oct 2000 03:07:29 +0000",
"msg_from": "KuroiNeko <evpopkov@carrier.kiev.ua>",
"msg_from_op": false,
"msg_subject": "Re: libpq needs -lsocket on UnixWare"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Is there any way to get libpq built with -lsocket on the unixware (and\n> probably other SVR4's) to get the network stuff required ? \n\nTry now. OpenSSL should be working as well now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 25 Oct 2000 18:25:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: libpq needs -lsocket on UnixWare"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001025 11:21]:\n> Larry Rosenman writes:\n> \n> > Is there any way to get libpq built with -lsocket on the unixware (and\n> > probably other SVR4's) to get the network stuff required ? \n> \n> Try now. OpenSSL should be working as well now.\nOpenSSL dies:\n\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o nodeUnique.o nodeUnique.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o nodeGroup.o nodeGroup.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o spi.o spi.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o nodeSubplan.o nodeSubplan.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o nodeSubqueryscan.o nodeSubqueryscan.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o nodeTidscan.o nodeTidscan.c\n/usr/bin/ld -r -o SUBSYS.o execAmi.o execFlatten.o execJunk.o execMain.o execProcnode.o execQual.o execScan.o execTuples.o execUtils.o functions.o nodeAppend.o nodeAgg.o nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeMaterial.o nodeMergejoin.o nodeNestloop.o nodeResult.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o nodeGroup.o spi.o nodeSubplan.o nodeSubqueryscan.o nodeTidscan.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/executor'\ngmake -C lib all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/lib'\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o bit.o bit.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o hasht.o hasht.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o lispsort.o lispsort.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o stringinfo.o stringinfo.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o dllist.o dllist.c\n/usr/bin/ld -r -o SUBSYS.o bit.o hasht.o lispsort.o stringinfo.o dllist.o\ngmake[3]: Leaving directory 
`/home/ler/pg-dev/pgsql/src/backend/lib'\ngmake -C libpq all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/libpq'\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o be-fsstubs.o be-fsstubs.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o auth.o auth.c\ncc -c -I/usr/local/ssl/include -I../../../src/include -O -K inline -o crypt.o crypt.c\nUX:acomp: ERROR: \"/usr/include/crypt.h\", line 36: identifier redeclared: des_encrypt\ngmake[3]: *** [crypt.o] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/libpq'\ngmake[2]: *** [libpq-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 25 Oct 2000 12:20:20 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: libpq needs -lsocket on UnixWare"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> OpenSSL dies:\n\nYeah, I saw that too yesterday. Not sure if I want to blame your crypt.h\nheader or what. Don't know what to do yet, but it ought to get fixed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 26 Oct 2000 17:03:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: libpq needs -lsocket on UnixWare"
}
] |
[
{
"msg_contents": "I was going to redo those queries this coming weekend anyhow (as thats when\nI'll be getting some spare time next), as there are still some problems with\nthe existing ones.\n\nAny other \"minor\" changes I should keep an eye out for?\n\nIdea: As we have this type of query in more than one part of the source tree\n(ie: psql, jdbc, probably odbc), should we have a section in the\ndocumentation containing common queries, like: retrieving a list of tables,\nviews etc?\n\nI haven't seen a definitive one in there, but it would be useful, and have\nthe other ones in the source be based on that one? Every time a change to\nthe system tables is made, unless everyone who maintains the code that's\ndependent on it hears about it, the queries can quickly get out of sync.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: petermount@maidstone.gov.uk\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, October 24, 2000 2:58 AM\nTo: Peter Mount\nCc: pgsql-interfaces@postgreSQL.org; pgsql-hackers@postgreSQL.org\nSubject: JDBC now needs updates for large objects\n\n\nPeter,\n\n As of current sources, large objects no longer occupy tables named\n'xinvNNNN' nor indexes named 'xinxNNNN'. Therefore, it'd be appropriate\nto remove the special tests that exclude tables/indices named that way\nfrom the tests in DatabaseMetaData.java. I have not done this because\nI'm not in a position to test changes to the JDBC driver; would you\nplease add it to your todo list?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 09:00:25 +0100",
"msg_from": "Peter Mount <petermount@maidstone.gov.uk>",
"msg_from_op": true,
"msg_subject": "RE: JDBC now needs updates for large objects"
},
{
"msg_contents": "Peter Mount <petermount@maidstone.gov.uk> writes:\n> Idea: As we have this type of query in more than one part of the source tree\n> (ie: psql, jdbc, probably odbc), should we have a section in the\n> documentation containing common queries, like: retrieving a list of tables,\n> views etc?\n\nThat's a good thought. It'd be a useful practice to review such\nstandard queries from time to time anyway. For example, now that\nouter joins work, a lot of psql's backslash-command queries could\nbe simplified (don't need the UNION ALL WITH SELECT NULL hack).\n\nAnyone have time to work up a list?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 08:44:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] RE: JDBC now needs updates for large objects "
},
{
"msg_contents": "sorry,\n\ni must have missed something. when did outer join's start working? is there a patch that i can build?\n\ntonys.\n\n> That's a good thought. It'd be a useful practice to review such\n> standard queries from time to time anyway. For example, now that\n> outer joins work, a lot of psql's backslash-command queries could\n> be simplified (don't need the UNION ALL WITH SELECT NULL hack).\n> \n> Anyone have time to work up a list?\n> \n> regards, tom lane\n\n",
"msg_date": "Wed, 25 Oct 2000 11:15:26 -0400",
"msg_from": "\"Tony Simopoulos\" <karkalis@earthling.net>",
"msg_from_op": false,
"msg_subject": "Re: RE: JDBC now needs updates for large objects "
},
{
"msg_contents": "\"Tony Simopoulos\" <karkalis@earthling.net> writes:\n> i must have missed something. when did outer join's start working?\n\nThey're in current CVS (7.1-to-be). CVS tip is pretty unstable at\nthe moment with WAL stuff going on, but you could use it as a playpen\ninstallation. Or wait for 7.1 beta, which should be out real soon now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 11:56:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RE: JDBC now needs updates for large objects "
}
] |
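Tom's remark above — that with outer joins working, psql's backslash-command queries no longer need the "UNION ALL WITH SELECT NULL" hack — can be illustrated with a toy pair of tables. SQLite (via Python's sqlite3 module) stands in for the backend here, and the table and column names are invented for the example; they are not the real pg_class/pg_description schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE rel (relname TEXT);
    CREATE TABLE descr (relname TEXT, description TEXT);
    INSERT INTO rel VALUES ('t1');
    INSERT INTO rel VALUES ('t2');
    INSERT INTO descr VALUES ('t1', 'first table');
""")

# Old style: inner join, plus a NULL-padded UNION ALL branch to pick up
# rows with no match -- the shape the catalog queries used to have.
old = db.execute("""
    SELECT r.relname, d.description
      FROM rel r JOIN descr d ON d.relname = r.relname
    UNION ALL
    SELECT r.relname, NULL
      FROM rel r
     WHERE NOT EXISTS (SELECT 1 FROM descr d WHERE d.relname = r.relname)
     ORDER BY 1
""").fetchall()

# New style: one left outer join expresses the same result directly.
new = db.execute("""
    SELECT r.relname, d.description
      FROM rel r LEFT OUTER JOIN descr d ON d.relname = r.relname
     ORDER BY 1
""").fetchall()

assert old == new == [("t1", "first table"), ("t2", None)]
```

The outer-join form is not only shorter; it scans the right-hand table once instead of twice, which matters for the bigger catalog queries psql issues.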
[
{
"msg_contents": "\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> > Yes, and holding a row exclusive lock must imho at least \n> grab a shared\n> > table lock\n> \n> As indeed it does. Our disagreement seems to be just on the point of\n> whether it's safe to allow a read-only transaction to release its \n> AccessShareLock locks partway through.\n\nYes, imho it must release the locks after each (read only) statement.\n\n> My opinion about that is colored by the known bugs that we have because\n> the parser/rewriter/planner currently do just that. You can cause the\n> system to become mighty confused if the report of a table schema change\n> arrives partway through the parse/plan process, because decisions\n> already made are no longer valid.\n\nI won't argue against that. I agree that the locks should be grabbed on first access\nand not released until statement end. But imho we can't hold them until tx end.\n\n> While we can probably patch the holes\n> in this area by holding a lock throughout processing of one statement,\n> I think that will just push the problem up to the application level.\n> How many apps are likely to be coded in a way that will be robust\n> against intra-transaction schema changes?\n\nI would not know one single of our programs, where adding a column,\ncreating an index or changing the schema in any other intended way\nwould have an impact on an application that is still supposed to work with \nthis new schema. (One of the first SQL rules is e.g. to not use select *)\n\nAnd besides I do not think that this is a problem that we are allowed to solve \non the db side, because it would flood us with locks. \nRemember that some other db's are always inside a transaction and\nit is standard to not do any commits if you work read only.\n\nThe only case where I do agree that the lock needs to be held until tx end\nis in serializable transaction isolation.\n\nAndreas\n",
"msg_date": "Tue, 24 Oct 2000 10:22:00 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: AW: AW: relation ### modified while in use "
}
] |
[
{
"msg_contents": "\n> > > Are there many applications which have many SELECT statements(without\n> > > FOR UPDATE) in one tx ?\n> >\n> > Why not ?\n> >\n> It seems to me that multiple SELECT statements in a tx has little\n> meaning unless the tx is executed in SERIALIZABLE isolation level.\n\nE.g. a table is accessed multiple times to select different data\nin an inner application loop. No need for serializable here.\n\nAndreas\n",
"msg_date": "Tue, 24 Oct 2000 11:18:26 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: BLERe: AW: AW: relation ### modified while in use"
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n> > > > Are there many applications which have many SELECT statements(without\n> > > > FOR UPDATE) in one tx ?\n> > >\n> > > Why not ?\n> > >\n> > It seems to me that multiple SELECT statements in a tx has little\n> > meaning unless the tx is executed in SERIALIZABLE isolation level.\n>\n> E.g. a table is accessed multiple times to select different data\n> in an inner application loop. No need for serializable here.\n>\n\nAnd seems no need to execute in one tx.\nHmm,we seems to be able to call a cleanup procedure\ninternally which is equivalent to 'commit' after each\nconsecutive read-only statement. Is it a problem ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 18:31:07 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: BLERe: AW: AW: relation ### modified while in use"
},
{
"msg_contents": "Hello,\nanyone thought about implementing two-phase commit to\nbe able to support distributed transactions ?\nI have no clue how complex it would be, someone knows ?\n\ndevik\n\n\n",
"msg_date": "Tue, 24 Oct 2000 13:52:38 +0200",
"msg_from": "devik@cdi.cz",
"msg_from_op": false,
"msg_subject": "Two-phase commit"
},
{
"msg_contents": "\n\nPhilip Warner wrote:\n\n> At 18:31 24/10/00 +0900, Hiroshi Inoue wrote:\n> >\n> >\n> >Zeugswetter Andreas SB wrote:\n> >\n> >> > > > Are there many applications which have many SELECT statements(without\n> >> > > > FOR UPDATE) in one tx ?\n> >> > >\n> >> > > Why not ?\n> >> > >\n> >> > It seems to me that multiple SELECT statements in a tx has little\n> >> > meaning unless the tx is executed in SERIALIZABLE isolation level.\n> >>\n> >> E.g. a table is accessed multiple times to select different data\n> >> in an inner application loop. No need for serializable here.\n> >>\n> >\n> >And seems no need to execute in one tx.\n> >Hmm,we seems to be able to call a cleanup procedure\n> >internally which is equivalent to 'commit' after each\n> >consecutive read-only statement. Is it a problem ?\n>\n> I have not followed the entire thread, but if you are in a serializable OR\n> repeatable-read transaction, I would think that read-only statements will\n> need to keep some kind of lock on the rows they read (or the table).\n>\n\nCurrently read-only statements keep AccessShareLock on the table\n(not on the rows) until the end of the statement and none objects\nto it. What we've discussed is whether we should keep the lock\nuntil the end of tx or not in read committed mode.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Wed, 25 Oct 2000 15:40:49 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: BLERe: AW: AW: relation ### modified whilein use"
},
{
"msg_contents": "At 18:31 24/10/00 +0900, Hiroshi Inoue wrote:\n>\n>\n>Zeugswetter Andreas SB wrote:\n>\n>> > > > Are there many applications which have many SELECT statements(without\n>> > > > FOR UPDATE) in one tx ?\n>> > >\n>> > > Why not ?\n>> > >\n>> > It seems to me that multiple SELECT statements in a tx has little\n>> > meaning unless the tx is executed in SERIALIZABLE isolation level.\n>>\n>> E.g. a table is accessed multiple times to select different data\n>> in an inner application loop. No need for serializable here.\n>>\n>\n>And seems no need to execute in one tx.\n>Hmm,we seems to be able to call a cleanup procedure\n>internally which is equivalent to 'commit' after each\n>consecutive read-only statement. Is it a problem ?\n\nI have not followed the entire thread, but if you are in a serializable OR\nrepeatable-read transaction, I would think that read-only statements will\nneed to keep some kind of lock on the rows they read (or the table).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 25 Oct 2000 16:47:40 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: AW: BLERe: AW: AW: relation ### modified while\n in use"
}
] |
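For devik's two-phase-commit question, the protocol itself is small; the complexity he asks about lives elsewhere. A minimal, purely illustrative Python model (a real implementation must also persist the prepared state to the log and handle crashes and timeouts, none of which is modelled here):

```python
# Phase 1: the coordinator asks every participant to PREPARE and collects
# votes. Phase 2: only if all vote yes does it send COMMIT; any "no" aborts
# everyone. Participant/state names are invented for this sketch.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"

    def prepare(self):
        # A real participant would force its WAL to disk before voting yes.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.state == "prepared"

    def finish(self, commit):
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: voting
    decision = all(votes)
    for p in participants:                        # phase 2: decision
        p.finish(decision)
    return decision

nodes = [Participant("node1"), Participant("node2")]
assert two_phase_commit(nodes) and all(p.state == "committed" for p in nodes)

nodes = [Participant("node1"), Participant("node2", can_commit=False)]
assert not two_phase_commit(nodes) and all(p.state == "aborted" for p in nodes)
```

The hard part, as the later PREPARE TRANSACTION work showed, is surviving a crash between the two phases: a prepared transaction must hold its locks and be recoverable after restart.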
[
{
"msg_contents": "\n> > > > > Are there many applications which have many SELECT statements(without\n> > > > > FOR UPDATE) in one tx ?\n> > > >\n> > > > Why not ?\n> > > >\n> > > It seems to me that multiple SELECT statements in a tx has little\n> > > meaning unless the tx is executed in SERIALIZABLE isolation level.\n> >\n> > E.g. a table is accessed multiple times to select different data\n> > in an inner application loop. No need for serializable here.\n> >\n> \n> And seems no need to execute in one tx.\n\nYes there is, if you need to do dml based on the results of the inner loop\nselect statement.\n\n> Hmm,we seems to be able to call a cleanup procedure\n> internally which is equivalent to 'commit' after each\n> consecutive read-only statement. Is it a problem ?\n\nWhich would, in the locking sense be the same thing as \nreleasing the shared lock after each read only statement.\n\nIt would only be done if the current tx did not modify any \ndata yet. This is imho an awkward praxis that we should avoid at all\ncosts.\n\nI have seen Oracle apps that start out with an update to a dummy \ntable, just to be sure the transaction started. This is nonsense,\nthat we imho don't want to copy.\n\nAlso the result would be, that the first readonly statements are allowed to \nsee schema changes, but selects after the first DML would not :-(\n\nAndreas\n",
"msg_date": "Tue, 24 Oct 2000 11:41:48 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: BLERe: AW: AW: relation ### modified while in u\n\tse"
},
{
"msg_contents": "\n\nZeugswetter Andreas SB wrote:\n\n[snip]\n\n>\n> Also the result would be, that the first readonly statements are allowed to\n> see schema changes, but selects after the first DML would not :-(\n>\n\nDoes it mean that even read-only statements aren't allowed\nto release locks after other DMLs ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Tue, 24 Oct 2000 19:05:51 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: BLERe: AW: AW: relation ### modified while in use"
}
] |
[
{
"msg_contents": "\n> More of that - while one xaction will wait to alter a table no new xaction\n> will be allowed to access this table too.\n\nYes, I forgot, that placing an exclusive lock will make later shared lock\nrequests wait.\n\nAndreas\n",
"msg_date": "Tue, 24 Oct 2000 11:51:48 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: relation ### modified while in use"
}
] |
[
{
"msg_contents": " Date: Tuesday, October 24, 2000 @ 05:56:09\nAuthor: vadim\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n from hub.org:/home/projects/pgsql/tmp/cvs-serv70071/backend/access/transam\n\nModified Files:\n\txact.c xlog.c xlogutils.c \n\n----------------------------- Log Message -----------------------------\n\nWAL misc\n\n",
"msg_date": "Tue, 24 Oct 2000 05:56:09 -0400 (EDT)",
"msg_from": "\"Vadim B. Mikheev - CVS\" <vadim@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/access/transam (xact.c xlog.c xlogutils.c)"
},
{
"msg_contents": "Vadim B. Mikheev - CVS writes:\n\n> Date: Tuesday, October 24, 2000 @ 05:56:09\n> Author: vadim\n> \n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/access/transam\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv70071/backend/access/transam\n> \n> Modified Files:\n> \txact.c xlog.c xlogutils.c \n> \n> ----------------------------- Log Message -----------------------------\n> \n> WAL misc\n\nWe seem to be missing a file \"src/include/access/xlogutils.h\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 24 Oct 2000 18:12:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/access/transam (xact.c xlog.c\n xlogutils.c)"
}
] |
[
{
"msg_contents": "\n> > Also the result would be, that the first readonly statements are allowed to\n> > see schema changes, but selects after the first DML would not :-(\n> \n> Does it mean that even read-only statements aren't allowed\n> to release locks after other DMLs ?\n\nThat is, what Tom is suggesting, but not after first DML but earlier \nafter transaction start. I would like to avoid this. I want read only statements \nto release locks upon completion regardless of transaction state \n(except in serializable isolation).\n\nAndreas \n",
"msg_date": "Tue, 24 Oct 2000 12:15:52 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: BLERe: AW: AW: relation ### modified while\n\tin use"
}
] |
[
{
"msg_contents": "I was unable to find the way to access the backend pid from libpq\n\nIt is probably saved somewhere as part of BackendKeyData message\nbut there seems to be no function to access it ?\n\nI'm using a temporary solution (my own 'C' function) but I'd like \nto use the info already received. \n\n---------------\nHannu\n",
"msg_date": "Tue, 24 Oct 2000 13:57:32 +0300",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "how to access backend pid from libpq ?"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> I was unable to find the way to access the backend pid from libpq\n> \n> It is probably saved somewhere as part of BackendKeyData message\n> but there seems to be no function to access it ?\n> \n> I'm using a temporary solution (my own 'C' function) but I'd like\n> to use the info already received.\n\nOk, I found it from the libpq source: PQbackendPID\n\nI still think it could be documented ;)\n\n----------------\nHannu\n",
"msg_date": "Tue, 24 Oct 2000 14:11:07 +0300",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: how to access backend pid from libpq ?"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I was unable to find the way to access the backend pid from libpq\n\n extern int PQbackendPID(const PGconn *conn);\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 11:32:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how to access backend pid from libpq ? "
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Ok, I found it from the libpq source: PQbackendPID\n\n> I still think it could be documented ;)\n\nIt is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 17:23:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: how to access backend pid from libpq ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Ok, I found it from the libpq source: PQbackendPID\n> \n> > I still think it could be documented ;)\n> \n> It is.\n\nStrange how one starts to find things when you are told they are there\n;)\n\nI could have sworn that doing grep PQbackendPID over libpq.sgml in fresh \nREL7_0_PATCHES returned zero rows before I posted .\n\nSorry for panicing.\n\n-----------\nHannu\n",
"msg_date": "Wed, 25 Oct 2000 10:29:33 +0300",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: Re: how to access backend pid from libpq ?"
}
] |
[
{
"msg_contents": "Yes, the joins were one of the reasons I was going to do it.\n\nIf no one starts a list by Saturday, then I'll start one when I go through\nJDBC.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: petermount@maidstone.gov.uk\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Tuesday, October 24, 2000 1:44 PM\nTo: Peter Mount\nCc: pgsql-interfaces@postgresql.org; pgsql-hackers@postgresql.org\nSubject: Re: [INTERFACES] RE: JDBC now needs updates for large objects \n\n\nPeter Mount <petermount@maidstone.gov.uk> writes:\n> Idea: As we have this type of query in more than one part of the source\ntree\n> (ie: psql, jdbc, probably odbc), should we have a section in the\n> documentation containing common queries, like: retrieving a list of\ntables,\n> views etc?\n\nThat's a good thought. It'd be a useful practice to review such\nstandard queries from time to time anyway. For example, now that\nouter joins work, a lot of psql's backslash-command queries could\nbe simplified (don't need the UNION ALL WITH SELECT NULL hack).\n\nAnyone have time to work up a list?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 13:43:16 +0100",
"msg_from": "Peter Mount <petermount@maidstone.gov.uk>",
"msg_from_op": true,
"msg_subject": "RE: [INTERFACES] RE: JDBC now needs updates for large objects "
}
] |
[
{
"msg_contents": "The parser has some heuristics to try to match up existing functions and\noperators when not all types are known apriori. We've had this\ncapability since v6.4, with some modest evolution since then.\n\nCurrently, if there is more than one function, say, which *could* match\nthe specified query, and if the arguments with unspecified types\n(typically looking like a bare SQL9x string) come from different\n\"categories\" of types (e.g. integer and string, or float and date) then\nthe parser throws an error about not finding the function.\n\nI propose that we modify the heuristic slightly, so that if there are\nfunction matches with arguments from different categories, and if one or\nmore of the possible matches comes from the \"string\" category, then that\ncategory is preferred.\n\nThere are two good reasons for this, and one bad reason ;) :\n\n1) the original query carries \"string\" semantics, so it is a reasonable\nfallback interpretation for the query.\n\n2) a string fallback will make things like\n\n select tstampfield at time zone 'pst' from t1;\n\nand\n\n select tstampfield at time zone interval '-08:00' from t1;\n\npossible (oh, btw, I've got patches to implement \"at time zone...\"),\nwhere currently\n\n select tstampfield at time zone 'pst' from t1;\n\nfails and requires that 'pst' be specified as \"text 'pst'\".\n\n3) some braindead \"compatibility tests\" from some competing open-source\ndatabase projects have poorly designed queries which interpret this lack\nof fallback as a lack of support for database features. So instead of\ngetting extra points for having *more* capabilities in a particular\narea, they claim that we don't support anything in that area. Most\nannoying, and it is not likely to change.\n\nComments? I've got code which implements the fallback for functions, and\npresumably the same for operators will be easy to do...\n\n - Thomas\n",
"msg_date": "Tue, 24 Oct 2000 14:24:41 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Fallback behavior for \"UNKNOWN\" types -- proposed change"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> I propose that we modify the heuristic slightly, so that if there are\n> function matches with arguments from different categories, and if one or\n> more of the possible matches comes from the \"string\" category, then that\n> category is preferred.\n\nI would suggest a slightly different rule, but maybe it comes out at the\nsame place in the end: if we can't find a unique match treating UNKNOWN\nthe way we do now, try again assuming it is TEXT (or at least string\ncategory). As you say, this is reasonable given that the original\nliteral looked like a string.\n\nBTW, I have been thinking that numeric literals ought to be initially\nassigned a new pseudo-type \"UNKNOWNNUMERIC\", which would eventually\nget coerced to one specific numeric type along the same lines as type\nassignment for string literals. This looks like it might help deal\nwith the problems of float8 vs. numeric, etc. Don't have a complete\nproposal worked out yet, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 16:52:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fallback behavior for \"UNKNOWN\" types -- proposed change "
},
{
"msg_contents": "> I would suggest a slightly different rule, but maybe it comes out at the\n> same place in the end: if we can't find a unique match treating UNKNOWN\n> the way we do now, try again assuming it is TEXT (or at least string\n> category). As you say, this is reasonable given that the original\n> literal looked like a string.\n\nYeah, it is the same thing in the end, since the *only* place I've\nchanged in the code is the block which used to bail out when seeing a\n\"category conflict\".\n\nI assumed you would have an opinion ;) If anyone else has concerns\nbefore seeing the effects of the change in the development tree, speak\nup! Of course, if we see troubles after commit, things can change or\nrevert...\n\nOh, and UNKNOWNNUMERIC sounds like a plausible concept too.\n\n - Thomas\n",
"msg_date": "Wed, 25 Oct 2000 03:22:06 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Re: Fallback behavior for \"UNKNOWN\" types -- proposed change"
}
] |
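Thomas's proposed fallback can be sketched as a small stand-alone routine. The `category` enum and `resolve_unknown` function below are hypothetical illustrations of the heuristic under discussion, not PostgreSQL's actual parser code:

```c
#include <assert.h>

/* Hypothetical type categories, standing in for the parser's notion of
 * a candidate function's argument category (an illustration of the
 * proposed heuristic, not the real source). */
typedef enum { CAT_STRING, CAT_NUMERIC, CAT_DATETIME, CAT_OTHER } category;

/* Resolve an UNKNOWN-typed literal among candidate functions whose
 * argument categories are given in cands[].  Returns the index of the
 * chosen candidate, or -1 where the parser would raise an error. */
static int
resolve_unknown(const category *cands, int n)
{
    int i;

    if (n == 1)
        return 0;               /* unique match: no ambiguity at all */

    /* Proposed change: the literal looked like a string, so if one of
     * the conflicting candidates is from the string category, prefer
     * it instead of bailing out with a category-conflict error. */
    for (i = 0; i < n; i++)
        if (cands[i] == CAT_STRING)
            return i;

    return -1;                  /* still ambiguous: keep erroring out */
}
```

With this rule, a conflict between, say, numeric- and string-category candidates resolves to the string-category match, while a conflict between numeric and datetime candidates still raises the existing "can't find the function" error.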
[
{
"msg_contents": "I am forwarding this not to belittle MySQL, but to hopefully help in the\ndevelopment of our own encryption protocol for secure password\nauthentication over the network.\n\nThe point being is that if we offer the protocol to do it, we had better\nensure its security, or someone WILL find the hole. Hopefully it will\nbe people who want to help security and not exploit it.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n-------- Original Message --------\nSubject: [CORE SDI ADVISORY] MySQL weak authentication\nDate: Mon, 23 Oct 2000 19:09:24 -0300\nFrom: Iv�n Arce <core.lists.bugtraq@CORE-SDI.COM>\nReply-To: Iv�n Arce <core.lists.bugtraq@CORE-SDI.COM>\nOrganization: Core-SDI, Buenos Aires, Argentina\nTo: BUGTRAQ@SECURITYFOCUS.COM\n\n CORE SDI\n http://www.core-sdi.com\n\n Vulnerability Report for MySQL Authentication\nVulnerability\n\n\nDate Published: 2000-10-23\n\nAdvisory ID: CORE-20001023\n\nBugtraq ID: 1826\n\nCVE CAN: Not currently assigned.\n\nTitle: MySQL Authentication Vulnerability\n\nClass: Design Error\n\nRemotely Exploitable: Yes\n\nLocally Exploitable: No\n\n\nVulnerability Description:\n\n The \"MySQL Database Engine\" uses an authentication scheme designed\n to prevent the flow of plaintext passwords over the network and\n the storage of them in plaintext. For that purpose a challenge-response\n mechanism for authentication has been implemented on all versions\n of MySQL. Slight variations are to be found between version 3.20\n and 3.21 and above.\n\n Regrettably, this authentication mechanism is not cryptographically\n strong. Specifically, each time a user executes this mechanism,\n information allowing an attacker to recover this user's password is\n leaked. 
Using an attack of our design, described in the \"Technical\n details\" section of this advisory, an eavesdropper is able to recover\n the user's password after witnessing only a few executions of this\n protocol, and thence is able to authenticate to the database engine\n impersonating a valid user.\n\nVulnerable Packages/Systems:\n All versions of MySQL\n\nSolution/Vendor Information/Workaround:\n\n The vendor is aware of the problems described and suggests\n encrypting the traffic between client and server to prevent\n exploitation.\n For further details refer to:\n\nhttp://www.mysql.com/documentation/mysql/commented/manual.php?section=Securi\nty\n\n Plans to implement a stronger authentication mechanism are being\n discussed for future versions of MySQL.\n\n Additionally, advisories and information on security issues\n in MySQL can be obtained from:\n\n http://www.securityfocus.com/bid/1147\n http://www.securityfocus.com/bid/975\n http://www.securityfocus.com/bid/926\n\nVendor notified on: October 19th, 2000\n\nCredits:\n\n These vulnerabilities were found and researched by Ariel \"Wata\"\n Waissbein, Emiliano Kargieman, Carlos Sarraute, Gerardo Richarte and\n Agustin \"Kato\" Azubel of CORE SDI, Buenos Aires, Argentina.\n\n This advisory was drafted with the help of the SecurityFocus.com\n Vulnerability Help Team. For more information or assistance drafting\n advisories please mail vulnhelp@securityfocus.com.\n\nTechnical Description - Exploit/Concept Code:\n\n 1. The challenge/response mechanism\n\n The challenge-response mechanism devised in MySQL does the following:\n From mysql-3.22.32/sql/password.c:\n\n\n/***********************************************************************\n The main idea is that no passwords are sent between client & server on\n connection and that no passwords are saved in mysql in a decodable\n form.\n\n MySQL provides users with two primitives used for authentication: a\n hash function and a (supposedly) random generator. 
On connection a\nrandom\n string is generated by the server and sent to the client. The client,\n using as input the hash value of the random string he has received and\n the hash value of his password, calculates a new string using the\n random generator primitive.\n This 'check' string is sent to the server, where it is compared with a\n string generated from the stored hash_value of the password and the\n random string.\n\n The password is saved (in user.password) by using the PASSWORD()\n function in mysql.\n\n Example:\n update user set password=PASSWORD(\"hello\") where user=\"test\"\n This saves a hashed number as a string in the password field.\n \n**********************************************************************/\n\n To accomplish that purpose several functions and data structures are\n implemented:\n\n mysql-3.22.32/include/mysql_com.h:\n struct rand_struct {\n unsigned long seed1,seed2,max_value;\n double max_value_dbl;\n };\n\n mysql-3.22.32/sql/password.c:\n void randominit(struct rand_struct *rand_st,ulong seed1, ulong seed2)\n Initializes the PRNG, used by versions 3.21 and up\n\n static void old_randominit(struct rand_struct *rand_st,ulong seed1)\n Initializes the PRNG, used by versions up to 3.20\n\n double rnd(struct rand_struct *rand_st)\n Provides a random floating point (double) number taken from\n the PRNG between 0 and rand_st->max_value\n\n void hash_password(ulong *result, const char *password)\n Calculates a hash of a password string and stores it\n in 'result'.\n\n void make_scrambled_password(char *to,const char *password)\n Hashes and stores the password in a readable form in 'to'\n\n char *scramble(char *to,const char *message,const char *password,\n my_bool old_ver)\n Genererate a new message based on message and password\n The same thing is done in client and server and the results are\n checked.\n\n my_bool check_scramble(const char *scrambled, const char *message,\n ulong *hash_pass, my_bool old_ver)\n Checks if the string 
generated by the hashed password and the\n message sent matches the string received from the other endpoint.\n This is the check for the challenge-response mechanism.\n\n The MySQL engine initializes the PRNG upon startup of the server\n as follows:\n\n mysql-3.22.32/sql/mysqld.cc:main()\n randominit(&sql_rand,(ulong) start_time,(ulong) start_time/2);\n Where start_time is obtained using the seconds since 0:00 Jan 1,\n 1970 UTC using time(3) when the server starts. Our first observation\n is that the PRNG is seeded with an easily guessable value. Though,\n this observation has no direct implications in the vulnerability we\n present.\n\n Upon connection to the server from a client a new thread is created to\n handle it and a random string is calculate and stored in per\n connection structure, this is done in\n mysql-3.22.32/sql/mysqld.cc:create_new_thread():\n ...\n (thread_count-delayed_insert_threads > max_used_connections)\n max_used_connections=thread_count-delayed_insert_threads;\n thd->thread_id=thread_id++;\n for (uint i=0; i < 8 ; i++) // Generate password teststring\n thd->scramble[i]= (char) (rnd(&sql_rand)*94+33);\n thd->scramble[8]=0;\n thd->rand=sql_rand;\n threads.append(thd);\n\n /* Start a new thread to handle connection */\n ...\n The challenge/response exchange is performed and checked in\n mysql-3.22.32/sql/sql_parse.cc:check_connections():\n ....\n memcpy(end,thd->scramble,SCRAMBLE_LENGTH+1);\n end+=SCRAMBLE_LENGTH+1;\n ...\n if (net_write_command(net,protocol_version, buff, (uint) (end-buff))\n||\n (pkt_len=my_net_read(net)) == packet_error || pkt_len < 6)\n {\n inc_host_errors(&thd->remote.sin_addr);\n return(ER_HANDSHAKE_ERROR);\n }\n Here the random string has been sent (along with other server\n data) and the response has been read.\n The authentication checks are then perfomed\n ...\n char *passwd= strend((char*) net->read_pos+5)+1;\n if (passwd[0] && strlen(passwd) != SCRAMBLE_LENGTH)\n return ER_HANDSHAKE_ERROR;\n 
thd->master_access=acl_getroot(thd->host, thd->ip, thd->user,\n passwd, thd->scramble, &thd->priv_user,\n protocol_version == 9 ||\n !(thd->client_capabilities &\n CLIENT_LONG_PASSWORD));\n thd->password=test(passwd[0]);\n ...\n acl_getroot() in mysql-3.22.32/sql/sql_acl.cc does the permission\n checks for the username and host the connection comes from and\n calls the check_scramble function described above to verify the\n valid reponse to the challenge sent. If the response is checked\n valid we say this (challenge and response) test was passed.\n\n 2. The problem: Cryptographically weak authentication scheme\n\n\n The hash function provided by MySQL outputs eight-bytes strings\n (64 bits), whereas the random number generator outputs five-bytes\n strings (40 bits).\n Notice that as for the authentication mechanism described above, to\n impersonate a user only the hash value of this user's password is\n needed, e.g. not the actual password.\n\n We now describe why the hash value of the password can be\n efficiently calculated using only a few executions of the challenge-\n and-response mechanism for the same user. In particular, we introduce\n a weakness of this authentication scheme, and deduce that an attack\n more efficient than brute-force attack can be carried out.\n\n Firstly we describe how the MySQL random generator (PRNG) works.\n Then we proceed to analyse this scheme's security. The algorithm for\n making these calculations will be briefly described in the following\n section.\n\n Let n := 2^{30}-1 (here n is the max_value used in randominit() and\n old_randoninit() respectively). Fix a user U. And initiate a challenge\n and response. That is, suppose the server has sent a challenge to the\n user U. The hash value of this user's password is 8 bytes long. Denote\n by P1 the first (leftmost) 4 bytes of this hash value and by P2 the\n last 4 bytes (rightmost). Likewise, let C1 denote the first 4 bytes of\n the challenge's hash value and C2 the last 4. 
Then, the random\n generator works as follows:\n\n -calculate the values seed1 := P1^C1 and seed2 := P2^C2\n (here ^ denotes the bitwise exclusive or (XOR) function)\n\n -calculate recursively for 1 =< i =< 8\n\n seed1 = seed1+(3*seed2) modulo (n)\n seed2 = seed1+seed2+33 modulo (n)\n r[i] = floor((seed1/n)*31)+64\n\n -calculate form the preceding values\n\n seed1 = seed1+(3*seed2) modulo (n)\n seed2 = seed1+seed2+33 modulo (n)\n r[9] = floor((seed1/n)*31)\n\n -output the checksum value\n S=(r[1]^r[9] || r[2]^r[9] || ... || r[7]^r[9] || r[8]^r[9])\n\n It is this checksum that is sent, by U, to the server. The server, who\n has in store the hash value of U's password, recalculates the checksum\n by this same process and succintly verifies the authenticity of the\n value it has received. However it is a small collection of these\n checksums that allows any attacker to obtain P1 and P2 (the hash value\n of the user's password). Hence, it is therefore possible to\n impersonate any user with only the information that travels on the\n wire between server and client (user).\n\n The reason why the process of producing the checksum out of the hash\n values of both the password and the challenge is insecure is that this\n process can be efficently reversed due to it's rich arithmetic\n properties.\n More specifically, consider the random generator described above as a\n mapping 'f' that takes as input the two values X and Y and produces\n the checksum value f(X,Y)=S (e.g., in our case X:=P1^C1 and Y:=P2^C2).\n Then we can efficiently calculate all of the values X',Y' which map to\n the same checksum value than X,Y, i.e. if f(X,Y)=S, then we calculate\n the set of all the values X',Y' such that f(X',Y')=S. This set is of\n negligible size in comparison to the 2^64 points set of all the\n possible passwords' hashes in which it is contained. 
Furthermore,\n given a collection of challenges and responses made between the same\n user and the server, it is possible to efficiently calculate the set\n of all (hash values of) passwords passing the given tests.\n\n\n 3. The attack\n\n\n We now give a brief description of the attack we propose. This\n description shall enable readers to verify our assertion that the\n MySQL authentication scheme leaks information. This attack has been\n implemented on Squeak Smalltalk and is now perfectly running. A\n complete description of the attack-algorithm lies beyond the scope of\n this text and will be the matter of future work.\n\n The attack we designed is mainly divided into two stages. In these two\n stages we respectively use one of our two algorithmic tools:\n\n Procedure 1 is an algorithmic process which has as input a\n checksum S and the corresponding hash value of the challenge\n C1||C2, and outputs a set consisting of all the pairs X,Y mapping\n through the random generator to the checksum S, i.e. in symbols\n {(X,Y): f(X,Y)=S} (here of course we have 0 <=X,Y< 2^{32}).\n\n In our attack Procedure 1 is used to cut down the number of possible\n hashed passwords from the brute-force value 2^64 to a much smaller\n cardinality of 2^20. This set is highly efficiently described, e.g.\n less than 1Kb memory.\n For this smaller set, it is feasible to eliminate the invalid (hashed)\n passwords using further challenges and responses by our Procedure 2.\n\n Procedure 2 is an algorithmic process having as input a set SET of\n possible (hashed) passwords, and a new pair (S,C1||C2) of checksum\n and challenge, and producing as output the subset of SET of all the\n passwords passing this new test.\n\n The way in which Procedure 2 is used in our algorithm should now be\n clear. 
We first use Procedure 1 to reduce the set of passwords to the\n announced set consisting of 2^{20} points, using as input only two\n challenge and responses for the same user.\n This set contains all the passwords passing this two tests. Suppose\n now that the attacker has in his possession a new pair (S,C1||C2) of\n challenge and response, then he can use Procedure 2 to produce the\n smaller set of all the passwords passing the first three tests (the\n ones corresponding to the three pairs of challenge and response he has\n used). Notice that this process can be repeated for every new pair of\n challenge and response the attacker gets. With each application of\n this process the set of possible passwords becomes smaller.\n Furthermore, the cardinality of these sets is not only decresing\n but eventually becomes 1. In that case the one element remaining is\n the (hashed) password.\n\n\n 4. Statistics and Conclusions\n\n In the examples we tested, about 300 possible passwords were left with\n the use of only 10 pairs of challenge and response. Notice that in a\n plain brute-force attack about 2^{64}-300=18,446,744,073,709,551,316\n would remain as possible passwords. It took about 100 pairs of\n challenge and response to cut the 300 set two a set containing two\n possible passwords (i.e., a fake password and the password indeed).\n Finally it took about 300 pairs of challenge and response to\n get the password.\n\n We therefore are able to make a variety of attacks depending on the\n amount of pairs of challenge and response we get from the user we want\n to impersonate.\n The two extreme cases being very few pairs of challenge and response\n from the same user, and a lot of pairs of challenge and response. 
The\n second attack, that of many pairs of challenge and response captured,\n is straight-forward:\n Apply the algorithm described above until the password is found.\n The first case, that of only a few pairs of challenge and response\n captured, is as well easy to carry out: simply apply the algorithm we\n described with all the pairs of challenge and response captured, then\n use any possible password in the set produced by the application of\n the algorithm for authenticating yourself as a user (some of these\n fake passwords will still pass many tests!).\n\n\nDISCLAIMER:\n\n The contents of this advisory are copyright (c) 2000 CORE SDI S.A.\n and may be distributed freely provided that no fee is charged for this\n distribution and proper credit is given.\n\n$Id: MySQLauth-advisory.txt,v 1.11 2000/10/23 21:30:57 iarce Exp $\n\n---\n\n\"Understanding. A cerebral secretion that enables one having it to know\n a house from a horse by the roof on the house,\n It's nature and laws have been exhaustively expounded by Locke,\n who rode a house, and Kant, who lived in a horse.\" - Ambrose Bierce\n\n\n==================[ CORE Seguridad de la Informacion S.A. ]=========\nIv�n Arce\nPresidente\nPGP Fingerprint: C7A8 ED85 8D7B 9ADC 6836 B25D 207B E78E 2AD1 F65A\nemail : iarce@core-sdi.com\nhttp://www.core-sdi.com\nFlorida 141 2do cuerpo Piso 7\nC1005AAG Buenos Aires, Argentina.\nTel/Fax : +(54-11) 4331-5402\n=====================================================================\n\n\n--- For a personal reply use iarce@core-sdi.com\n",
"msg_date": "Tue, 24 Oct 2000 10:25:14 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "[Fwd: [CORE SDI ADVISORY] MySQL weak authentication]"
},
{
"msg_contents": "On Tue, Oct 24, 2000 at 10:25:14AM -0400, Lamar Owen wrote:\n> I am forwarding this not to belittle MySQL, but to hopefully help in the\n> development of our own encryption protocol for secure password\n> authentication over the network.\n> \n> The point being is that if we offer the protocol to do it, we had better\n> ensure its security, or someone WILL find the hole. Hopefully it will\n> be people who want to help security and not exploit it.\n\nIMO, anything short of a full SSL wrapped connection is fairly\npointless. What does it matter if the password is encrypted if\nsensitive query data flows in the clear?\n-- \nBruce Guenter <bruceg@em.ca> http://em.ca/~bruceg/",
"msg_date": "Wed, 25 Oct 2000 10:27:15 -0600",
"msg_from": "Bruce Guenter <bruceg@em.ca>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [CORE SDI ADVISORY] MySQL weak authentication]"
},
{
"msg_contents": "Bruce Guenter wrote:\n> On Tue, Oct 24, 2000 at 10:25:14AM -0400, Lamar Owen wrote:\n> > The point being is that if we offer the protocol to do it, we had better\n> > ensure its security, or someone WILL find the hole. Hopefully it will\n> > be people who want to help security and not exploit it.\n \n> IMO, anything short of a full SSL wrapped connection is fairly\n> pointless. What does it matter if the password is encrypted if\n> sensitive query data flows in the clear?\n\nI tend to agree. SSL is a fully worked out means of doing secure\nconnections. It is portable, it is robust, and it is relatively secure.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 25 Oct 2000 13:56:17 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: [CORE SDI ADVISORY] MySQL weak authentication]"
},
{
"msg_contents": "On Tue, Oct 24, 2000 at 10:25:14AM -0400, Lamar Owen wrote:\n> I am forwarding this not to belittle MySQL, but to hopefully help in the\n> development of our own encryption protocol for secure password\n> authentication over the network.\n> \n> The point being is that if we offer the protocol to do it, we had better\n> ensure its security, or someone WILL find the hole. Hopefully it will\n> be people who want to help security and not exploit it.\n\nBetter not try to create it ourselves ;)\n\nhttp://srp.stanford.edu/\n\nIt has even RFC's assigned to it. RFC2945, RFC2944\nI put it into my TOLOOK list but have not found the time yet. :)\n\n-- \nmarko\n\n",
"msg_date": "Wed, 25 Oct 2000 23:27:25 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [CORE SDI ADVISORY] MySQL weak authentication]"
},
{
"msg_contents": "On Wed, Oct 25, 2000 at 10:27:15AM -0600, Bruce Guenter wrote:\n> On Tue, Oct 24, 2000 at 10:25:14AM -0400, Lamar Owen wrote:\n> > I am forwarding this not to belittle MySQL, but to hopefully help in the\n> > development of our own encryption protocol for secure password\n> > authentication over the network.\n> > \n> > The point being is that if we offer the protocol to do it, we had better\n> > ensure its security, or someone WILL find the hole. Hopefully it will\n> > be people who want to help security and not exploit it.\n> \n> IMO, anything short of a full SSL wrapped connection is fairly\n> pointless. What does it matter if the password is encrypted if\n> sensitive query data flows in the clear?\n\nPasswords are sensitive too. They are actually orthogonal,\nfor data security we need something like SSL, but for\nauthentication/password security we need some strong authentication\nscheme anyway.\n\n\n-- \nmarko\n\n",
"msg_date": "Wed, 25 Oct 2000 23:37:13 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [CORE SDI ADVISORY] MySQL weak authentication]"
}
] |
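For concreteness, the recurrence from section 3 of the advisory can be written out as a self-contained sketch. This follows the advisory's description, not MySQL's actual `password.c`; the seeds are derived from `P1^C1` and `P2^C2` exactly as the text above states:

```c
#include <assert.h>
#include <string.h>

#define N 0x3FFFFFFFUL          /* n = 2^30 - 1, per the advisory */

/* Sketch of the checksum generator described in the advisory: the
 * seeds are the XOR of the password-hash halves (p1,p2) with the
 * challenge-hash halves (c1,c2); out[] receives the 8-byte response. */
static void
weak_scramble(unsigned long p1, unsigned long p2,
              unsigned long c1, unsigned long c2,
              char out[9])
{
    unsigned long seed1 = (p1 ^ c1) % N;
    unsigned long seed2 = (p2 ^ c2) % N;
    unsigned char r[9];
    int i;

    for (i = 0; i < 9; i++) {
        seed1 = (seed1 + 3 * seed2) % N;
        seed2 = (seed1 + seed2 + 33) % N;
        r[i] = (unsigned char) (((double) seed1 / (double) N) * 31.0);
        if (i < 8)
            r[i] = (unsigned char) (r[i] + 64);  /* r[1]..r[8] offset */
    }
    for (i = 0; i < 8; i++)
        out[i] = (char) (r[i] ^ r[8]);  /* bytes sent on the wire */
    out[8] = '\0';
}
```

Note the property this makes visible: the checksum depends only on `P1^C1` and `P2^C2`, so any XOR-shift applied to both the password hash and the challenge yields the same response. That is why each observed challenge/response pair constrains the 64-bit password hash, which is exactly the leak the attack exploits.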
[
{
"msg_contents": "\ni'm developing one. a library for batch transactions, so you\ncan continue processing in the middle of the file(or table) in\ncase an abort happens. it can support multi-databases.\ni think i can share it to freshmeat.\n\nOn Tue, 24 Oct 2000 13:52:38 +0200, devik@cdi.cz wrote:\n\n> Hello,\n> anyone thought about implementing two-phase commit to\n> be able to support distributed transactions ?\n> I have no clue how complex it would be, someone knows ?\n> \n> devik\n> \n>\n\n\n\n\n\n_______________________________________________________\nSay Bye to Slow Internet!\nhttp://www.home.com/xinbox/signup.html\n\n",
"msg_date": "Tue, 24 Oct 2000 08:09:51 -0700 (PDT)",
"msg_from": "richard excite <richard_excite@excite.com>",
"msg_from_op": true,
"msg_subject": "Re: Two-phase commit"
},
{
"msg_contents": "hello,\nI'm not sure what will your lib solve ? For reliable\ndistributed TX, each SQL server have to support two\nphase commit. It means to have at least PREPARE command\nwhich is identical to COMMIT only doesn't really commit\n(last commit bit is not written but TX is at commit edge\nand can't complain to COMMIT cmd).\nAlso is should provide interface/way for querying state\nof particular TX (identified by XID) and if it is in\nPREPARED state then way to commit/rollback it: it is\nfor cases when connection between PG and XA manager\nterminates between PREPARE and COMMIT/ABORT. PG then\nalso should continue to hold all \"locks\" (or\nHEAP_MARKED_FOR_UPDATE in PG) until PREPARED TX is\nresolved.\nProbably it should not be hard .. ?\ndevik\n\nrichard excite wrote:\n> \n> i'm developing one. a library for batch transactions, so you\n> can continue processing in the middle of the file(or table) in\n> case an abort happens. it can support multi-databases.\n> i think i can share it to freshmeat.\n> \n> On Tue, 24 Oct 2000 13:52:38 +0200, devik@cdi.cz wrote:\n> \n> > Hello,\n> > anyone thought about implementing two-phase commit to\n> > be able to support distributed transactions ?\n> > I have no clue how complex it would be, someone knows ?\n> >\n> > devik\n> >\n> >\n> \n> _______________________________________________________\n> Say Bye to Slow Internet!\n> http://www.home.com/xinbox/signup.html\n\n\n",
"msg_date": "Wed, 25 Oct 2000 10:12:55 +0200",
"msg_from": "devik@cdi.cz",
"msg_from_op": false,
"msg_subject": "Re: Two-phase commit"
}
] |
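The prepare/commit split devik describes can be sketched as a coordinator loop. This is a schematic illustration with invented class and method names, not PostgreSQL code; the essential property is that once every participant has acknowledged PREPARE, phase two may only commit (after a coordinator crash, prepared transactions are re-driven to the same outcome):

```python
class Participant:
    """Toy resource manager with active/prepared/committed/aborted states."""
    def __init__(self, fail_on_prepare: bool = False):
        self.fail_on_prepare = fail_on_prepare
        self.state = "active"

    def prepare(self):
        if self.fail_on_prepare:
            self.state = "aborted"
            raise RuntimeError("cannot prepare")
        # At the "commit edge": locks are held, COMMIT may not be refused.
        self.state = "prepared"

    def commit(self):
        assert self.state == "prepared"
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants) -> bool:
    # Phase 1: every participant must reach the PREPARED state.
    try:
        for p in participants:
            p.prepare()
    except RuntimeError:
        # Any failure before all are prepared rolls everyone back.
        for p in participants:
            if p.state == "prepared":
                p.abort()
        return False
    # Phase 2: all prepared, so the outcome is now fixed -- commit all.
    for p in participants:
        p.commit()
    return True
```

The recovery interface devik mentions (querying a prepared XID and resolving it later) is what makes the scheme survive a lost connection between PREPARE and COMMIT.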
[
{
"msg_contents": "> We seem to be missing a file \"src/include/access/xlogutils.h\".\n\nOops, sorry - please create an empty file, it will work\n(I don't have its source at my office computer -:().\n\nVadim\n\n",
"msg_date": "Tue, 24 Oct 2000 09:48:23 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: pgsql/src/backend/access/transam (xact.c xlog.c\n\txlogutils.c)"
}
] |
[
{
"msg_contents": "\nTried a build from today's checkins, and we didn't find -lperl...\n\n\nmkdir blib/arch\nmkdir blib/arch/auto\nmkdir blib/arch/auto/plperl\nmkdir blib/lib/auto\nmkdir blib/lib/auto/plperl\n/bin/cc -c -I../../../src/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.00503/i386-svr5/CORE plperl.c\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.00503/i386-svr5/CORE/perl.h\", line 1474: macro redefined: DEBUG\n/bin/cc -c -I../../../src/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.00503/i386-svr5/CORE eloglvl.c\n/bin/perl -I/usr/local/lib/perl5/5.00503/i386-svr5 -I/usr/local/lib/perl5/5.00503 /usr/local/lib/perl5/5.00503/ExtUtils/xsubpp -typemap /usr/local/lib/perl5/5.00503/ExtUtils/typemap SPI.xs >xstmp.c && mv xstmp.c SPI.c\n/bin/cc -c -I../../../src/include -I/usr/local/include -O -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -Kpic -I/usr/local/lib/perl5/5.00503/i386-svr5/CORE SPI.c\nUX:acomp: WARNING: \"/usr/local/lib/perl5/5.00503/i386-svr5/CORE/perl.h\", line 1474: macro redefined: DEBUG\nUX:acomp: WARNING: \"SPI.c\", line 73: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 88: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 103: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 118: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 133: end-of-loop code not reached\nUX:acomp: WARNING: \"SPI.c\", line 149: end-of-loop code not reached\nRunning Mkbootstrap for plperl ()\nchmod 644 plperl.bs\nLD_RUN_PATH=\"\" /bin/cc -o blib/arch/auto/plperl/plperl.so -G -Wl,-Bexport -L/usr/local/lib plperl.o eloglvl.o SPI.o /usr/local/lib/perl5/5.00503/i386-svr5/auto/Opcode/Opcode.so -L/usr/local/lib/perl5/5.00503/i386-svr5/CORE -lperl \nUX:ld: ERROR: library not found: -lperl\ngmake[4]: *** [blib/arch/auto/plperl/plperl.so] Error 1\ngmake[4]: Leaving directory 
`/home/ler/pg-dev/pgsql/src/pl/plperl'\ngmake[3]: *** [all] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 24 Oct 2000 14:42:00 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "looks like we forgot something..."
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001024 17:12]:\nOk, looks like my failure was a system issue. None of the PERL\nlibperl.so.*'s had a symlink as libperl.so. I fixed this. \nand we build and pass regression. \n\nBUT, we still probably need the option to pick a specific PERL so the\nuser can try different versions. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 25 Oct 2000 05:46:56 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: looks like we forgot something..."
}
] |
[
{
"msg_contents": "The postmaster contains this code just before it waits for input:\n\n#ifdef USE_SSL\n for (curr = DLGetHead(PortList); curr; curr = DLGetSucc(curr))\n {\n if (((Port *) DLE_VAL(curr))->ssl &&\n SSL_pending(((Port *) DLE_VAL(curr))->ssl) > 0)\n {\n no_select = true;\n break;\n }\n }\n if (no_select)\n FD_ZERO(&rmask); /* So we don't accept() anything below */\n#endif\n\nI am not sure exactly what SSL_pending() is defined to mean, but as\nnear as I can tell, whenever SSL_pending() returns true, the postmaster\nwill completely ignore every other input-ready condition. This spells\n\"denial of service\" from where I sit: a nonresponsive SSL client will\ncause the postmaster to freeze up for all other clients.\n\nCan anyone who knows about SSL defend or even explain the above code?\nI am strongly inclined to just dike it out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Oct 2000 18:04:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus-looking SSL code in postmaster wait loop"
}
] |
[
{
"msg_contents": "\n> I have not followed the entire thread, but if you are in a serializable OR\n> repeatable-read transaction,\n\nSerializable and repeatable read are the same thing, different wording.\n \n> I would think that read-only statements will\n> need to keep some kind of lock on the rows they read (or the table).\n\nYes, we were talking about the other isolation levels. Most,\nbut not all of my mails in this thread state this difference.\n\nAndreas\n",
"msg_date": "Wed, 25 Oct 2000 09:36:05 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: BLERe: AW: AW: relation ### modified while in u\n\tse"
}
] |
[
{
"msg_contents": "Hi, all.\n\nA strange behavior with PostgreSQL 7.0.2:\n\n\tselect * from foo where exists\n\t(select * from foo)\n\nworks fine. But:\n\n\tselect * from foo where exists\n\t((select * from foo))\n\nshows an error:\n\n\tERROR: parser: parse error at or near \"(\"\n\nIs this a bug?\n\nThanks.\n\n\t\t\t\t\tDavid\n",
"msg_date": "Wed, 25 Oct 2000 10:47:58 +0200",
"msg_from": "DaVinci <bombadil@wanadoo.es>",
"msg_from_op": true,
"msg_subject": "A rare error"
},
{
"msg_contents": "\nLooks like it probably is. The spec production\nseems to allow query expressions to contain \n()ized non-join query expressions or \n()ized joined tables.\n\nStephan Szabo\nsszabo@bigpanda.com\n\nOn Wed, 25 Oct 2000, DaVinci wrote:\n\n> Hi, all.\n> \n> An extrange behavior with PostgreSql 7.0.2:\n> \n> \tselect * from foo where exists\n> \t(select * from foo)\n> \n> works fine. But:\n> \n> \tselect * from foo where exists\n> \t((select * from foo))\n> \n> shows an error:\n> \n> \tERROR: parser: parse error at or near \"(\"\n> \n> Is this a bug?\n> \n> Thanks.\n> \n> \t\t\t\t\tDavid\n> \n\n",
"msg_date": "Wed, 25 Oct 2000 09:01:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: A rare error"
},
{
"msg_contents": "DaVinci <bombadil@wanadoo.es> writes:\n> An extrange behavior with PostgreSql 7.0.2:\n> \tselect * from foo where exists\n> \t(select * from foo)\n> works fine. But:\n> \tselect * from foo where exists\n> \t((select * from foo))\n> shows an error:\n> \tERROR: parser: parse error at or near \"(\"\n> Is this a bug?\n\nI was fooling around with exactly that point a couple weeks ago. You'd\nthink it would be easy to allow extra parentheses around a sub-select,\nbut I couldn't figure out any way to do it that didn't provoke shift/\nreduce conflicts or worse.\n\nThe main problem is that if parentheses are both part of the expression\ngrammar (as they'd better be ;-)) and part of the SELECT grammar then\nfor a construct like\n\tselect (((select count(foo) from bar)));\nit's ambiguous whether the extra parens are expression parens or part\nof the inner SELECT statement. You may not care, but yacc does: it does\nnot like ambiguous grammars. AFAICS the only solution is not to allow\nparentheses at the very top level of a SELECT structure. Then the above\nis not ambiguous because all the extra parens are expression parens.\n\nThis solution leads directly to your complaint: the syntax is\n\tEXISTS ( SELECT ... )\nand you don't get to insert any unnecessary levels of parenthesis.\n\nWe could maybe hack something for EXISTS in particular (since we know\na parenthesized SELECT must follow it) but in the general case there\ndoesn't seem to be a way to make it work. For example, in current\nsources this is OK:\n\tselect * from foo where exists\n\t((select * from foo) union (select * from bar));\nbut not this:\n\tselect * from foo where exists\n\t((select * from foo) union ((select * from bar)));\n\tERROR: parser: parse error at or near \")\"\n\nIf there are any yacc hackers out there who think they can improve on\nthis, please grab gram.y from current CVS and have at it. 
It'd be nice\nnot to have an artificial restriction against redundant parentheses in\nSELECT structures.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 12:28:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A rare error "
}
] |
[
{
"msg_contents": "Would it be possible to add a path spec to the --with-perl configure\noption so that if we have 2 or more PERL versions on the system we can\npick which one to use? \n\nLarry\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 25 Oct 2000 04:45:10 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "--with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Would it be possible to add a path spec to the --with-perl configure\n> option so that if we have 2 or more PERL versions on the system we can\n> pick which one to use? \n\nPERL=/else/where/perl ./configure ...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 25 Oct 2000 16:40:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "> Would it be possible to add a path spec to the --with-perl configure\n> option so that if we have 2 or more PERL versions on the system we can\n> pick which one to use? \n\nWe used to have a configure switch, but it is not there anymore. The\nway I do it is to add PERL=perl5 in /pgsql/src/Makefile.custom.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 25 Oct 2000 12:08:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Would it be possible to add a path spec to the --with-perl configure\n> option so that if we have 2 or more PERL versions on the system we can\n> pick which one to use? \n\nWhy do you need that, as opposed to just setting your PATH so that\nconfigure finds the right one first?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 12:10:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl? "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001025 11:10]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Would it be possible to add a path spec to the --with-perl configure\n> > option so that if we have 2 or more PERL versions on the system we can\n> > pick which one to use? \n> \n> Why do you need that, as opposed to just setting your PATH so that\n> configure finds the right one first?\nI may be testing a new version of PERL and want PG to compile against\nthat one as WELL as the system default. \n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 25 Oct 2000 11:50:16 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "> Larry Rosenman <ler@lerctr.org> writes:\n> > Would it be possible to add a path spec to the --with-perl configure\n> > option so that if we have 2 or more PERL versions on the system we can\n> > pick which one to use? \n> \n> Why do you need that, as opposed to just setting your PATH so that\n> configure finds the right one first?\n\nOn BSD/OS, there is perl (perl4) and perl5 (perl5). You have to set the\nname of the perl executable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 25 Oct 2000 14:18:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Larry Rosenman <ler@lerctr.org> writes:\n>>>> Would it be possible to add a path spec to the --with-perl configure\n>>>> option so that if we have 2 or more PERL versions on the system we can\n>>>> pick which one to use? \n>> \n>> Why do you need that, as opposed to just setting your PATH so that\n>> configure finds the right one first?\n\n> On BSD/OS, there is perl (perl4) and perl5 (perl5). You have to set the\n> name of the perl executable.\n\nOh, so it wouldn't be a search path but a specific executable name\n(with or without full path info). OK, that makes sense to me.\nI've had different perls installed with different executable names\nmyself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 14:20:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl? "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001025 13:20]:\n> \n> Oh, so it wouldn't be a search path but a specific executable name\n> (with or without full path info). OK, that makes sense to me.\n> I've had different perls installed with different executable names\n> myself.\nBingo. Larry\n",
"msg_date": "Wed, 25 Oct 2000 13:26:21 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Would it be possible to add a path spec to the --with-perl configure\n> > > option so that if we have 2 or more PERL versions on the system we can\n> > > pick which one to use? \n> > \n> > Why do you need that, as opposed to just setting your PATH so that\n> > configure finds the right one first?\n> \n> On BSD/OS, there is perl (perl4) and perl5 (perl5). You have to set the\n> name of the perl executable.\n\nThe standard interface to any Autoconf configure script to override the\nname of a program has always been and will always be setting environment\nvariables like this:\n\nPERL=perl3 CC=cc YACC='bison -y' TCLSH=/usr/local/bin/tclsh AWK=mawk ./configure --options ...\n\n(Okay, I lied, in Autoconf 2.50-to-be you can also put the variable\nassignments after the \"./configure\", but that doesn't change the overall\nprinciple.)\n\nThe idea here is that you might set some of these environment variables\npermanently on your system and then every configure script you run will\nuse the same defaults.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 25 Oct 2000 20:59:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
},
{
"msg_contents": "> > On BSD/OS, there is perl (perl4) and perl5 (perl5). You have to set the\n> > name of the perl executable.\n> \n> The standard interface to any Autoconf configure script to override the\n> name of a program has always been and will always be setting environment\n> variables like this:\n> \n> PERL=perl3 CC=cc YACC='bison -y' TCLSH=/usr/local/bin/tclsh AWK=mawk ./configure --options ...\n> \n> (Okay, I lied, in Autoconf 2.50-to-be you can also put the variable\n> assignments after the \"./configure\", but that doesn't change the overall\n> principle.)\n> \n> The idea here is that you might set some of these environment variables\n> permanently on your system and then every configure script you run will\n> use the same defaults.\n\nThe trick here is that the software should not call perl directly, but use the\nPERL environment variable if defined. I wish all software followed that\npractice like we do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 25 Oct 2000 16:12:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: --with-perl=/path/to/prefered/perl?"
}
] |
[
{
"msg_contents": "I have this problem:\nThe next message appears when I try to make an INSERT into a table:\n\n\nConnection refused\nDBD::Pg::st execute failed: ERROR: \nparser: parse error at or near \"refused\"\n\n\nThe situation is the next: I have a lot of processes making insertions\nin the same database, and I suspect that is the cause of the message,\nbut I'm not sure.\n\nCould be another cause?\nHow can I solve it?\n\nI have tried the \"autocommit\" option on, and i still have the\nproblem.\n\nThank you.",
"msg_date": "Wed, 25 Oct 2000 08:56:32 -0300",
"msg_from": "\"andres mackiewicz\" <mackiew@seciu.edu.uy>",
"msg_from_op": true,
"msg_subject": "DBD::Pg::st execute failed: ERROR"
},
{
"msg_contents": "\nIt seems like postgres is thinking that the query is incorrectly\nformatted. Do you know what query is being run when this happens?\nYou may wish to start the postmaster with -d2 which should print\nout the queries that postgres is seeing.\n\nStephan Szabo\nsszabo@bigpanda.com\n\nOn Wed, 25 Oct 2000, andres mackiewicz wrote:\n\n> I have this problem:\n> The next message appears when I try to make an INSERT into a table:\n> \n> \n> Connection refused\n> DBD::Pg::st execute failed: ERROR: \n> parser: parse error at or near \"refused\"\n> \n> \n> The situation is the next: I have a lot of processes making insertions\n> in the same database, and I suspect that is the cause of the message,\n> but I'm not sure.\n> \n> Could be another cause?\n> How can I solve it?\n> \n> I have tried the \"autocommit\" option on, and i still have the\n> problem.\n> \n> Thank you.\n> \n\n",
"msg_date": "Thu, 26 Oct 2000 08:12:04 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: DBD::Pg::st execute failed: ERROR"
}
] |
[
{
"msg_contents": "I'm not sure if this is a reported bug or not. SELECT statements with some\naggregates on certain complex views can give terrible results. An example:\n\nCREATE TABLE master (\n id int4 not null,\n no int4 check (no >= 0) default 0,\n primary key (id, no),\n started date check ((not started is null) or (not closed)),\n received date,\n starter int4 not null,\n description text,\n closed bool default 'f',\n date_of_closing timestamp,\n closed_by int4);\n\nCREATE TABLE detail (\n id int4 not null,\n no_ int4 not null,\n primary key (id, no_, modification, archive),\n ordering int4 not null,\n object int4 not null,\n ordered_by int4,\n quantity numeric(14,4) not null,\n quality int4 not null default 1,\n archive bool default 'f',\n starting int4,\n modification int4 not null check (modification >= 0),\n foreign key (id,modification) references\n\tmaster(id,no)); \n\nCREATE VIEW buggy_view AS\nSELECT de.id, de.no_, de.ordering, de.object, \nde.ordered_by, de.quantity, de.quality, ma.no FROM \ndetail de, master ma WHERE \n((((ma.no >= de.starting) AND (ma.no < de.modification)) AND de.archive) \nOR ((ma.no >= de.modification) AND (NOT de.archive))) GROUP BY \nde.id, de.no_, de.ordering, de.object,\nde.ordered_by, de.quantity, de.quality, ma.no;\n\nINSERT INTO master VALUES (1,0,now(),now(),1,'','f',now(),1);\nINSERT INTO detail VALUES (1,1,1,100,1,1000,1,'f',1,0);\nINSERT INTO detail VALUES (1,2,2,101,1,2000,1,'f',1,0);\n\nSELECT count(*) FROM buggy_view; -- I can see two rows of result! :-o\n\nI'm using PostgreSQL 7.0.2.\nI am interested in workarounds as well.\nTIA, Zoltan\n\n",
"msg_date": "Wed, 25 Oct 2000 14:10:07 +0200 (CEST)",
"msg_from": "Kovacs Zoltan Sandor <tip@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "bug in views/aggregates"
},
{
"msg_contents": "Kovacs Zoltan Sandor <tip@pc10.radnoti-szeged.sulinet.hu> writes:\n> I'm not sure if this is a reported bug or not. SELECT statements with some\n> aggregates on certain complex views can give terrible results. An example:\n\nAggregates on grouped views do not and cannot work in 7.0 or earlier\nreleases, because the existing rewriter implementation cannot cause the\nsystem to do multiple rounds of grouping/aggregation. Unfortunately\nthe rewriter is usually not bright enough to realize it can't do the\nright thing, either :-(\n\nThis is fixed for 7.1. In current sources your example produces\n\nregression=# SELECT count(*) FROM buggy_view;\n count\n-------\n 2\n(1 row)\n\n> I am interested in workarounds as well.\n\nFor now, you could select the view's results into a temp table,\nand then aggregate over the table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 23:01:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug in views/aggregates "
}
] |
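Tom's workaround — materialize the grouped view into a temporary table, then aggregate over the table — is the same pattern in any SQL engine. The sketch below uses SQLite only because it runs standalone; in PostgreSQL 7.0 the middle step would be spelled roughly `SELECT * INTO TEMP snap FROM grouped_view`, and the table/view names here are invented stand-ins for those in the bug report:

```python
import sqlite3

def count_grouped_view_rows() -> int:
    """Count rows of a grouped view by materializing it first."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE detail (id INTEGER, qty INTEGER)")
    cur.executemany("INSERT INTO detail VALUES (?, ?)",
                    [(1, 10), (1, 5), (2, 20)])
    # A grouped view, analogous to buggy_view in the bug report.
    cur.execute("CREATE VIEW grouped_view AS "
                "SELECT id, sum(qty) AS total FROM detail GROUP BY id")
    # Workaround: snapshot the view into a temp table first ...
    cur.execute("CREATE TEMP TABLE snap AS SELECT * FROM grouped_view")
    # ... then aggregate over the plain table, so only one round of
    # grouping/aggregation happens per query.
    cur.execute("SELECT count(*) FROM snap")
    return cur.fetchone()[0]
```

Two distinct `id` groups exist, so the count is 2 — the answer the pre-7.1 rewriter failed to produce when aggregating directly over the view.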
[
{
"msg_contents": "> The postmaster contains this code just before it waits for input:\n> \n> #ifdef USE_SSL\n> for (curr = DLGetHead(PortList); curr; curr = DLGetSucc(curr))\n> {\n> if (((Port *) DLE_VAL(curr))->ssl &&\n> SSL_pending(((Port *) DLE_VAL(curr))->ssl) > 0)\n> {\n> no_select = true;\n> break;\n> }\n> }\n> if (no_select)\n> FD_ZERO(&rmask); /* So we don't accept() \n> anything below */\n> #endif\n> \n> I am not sure exactly what SSL_pending() is defined to mean, but as\n> near as I can tell, whenever SSL_pending() returns true, the \n> postmaster\n> will completely ignore every other input-ready condition. This spells\n> \"denial of service\" from where I sit: a nonresponsive SSL client will\n> cause the postmaster to freeze up for all other clients.\n> \n> Can anyone who knows about SSL defend or even explain the above code?\n> I am strongly inclined to just dike it out.\n\nSSL_pending() returns true when there is data in the SSL buffer of the\nsocket.\nThe problem is that since SSL uses block cipher, even if you read one just\nbyte from the socket (using ssl_read), OpenSSL will read a complete block\nfrom the network, in order to be able to decrypt it. In this case, the rest\nof that block will sit in the (SSL *)-structure. If you then select() the\nsocket, it will *not* return that there is more data available, because that\ndata has already been read from the network layer. Therefor, the select()\ncall would block *even though there is more data available* on that socket.\n\nThat would be an explanation, I think. I agree it's not an ideal situation.\nPerhaps it could be replaced with a check that made it do something like\nthis around the actual select statement?\n\n\n...\nstruct timeval tv;\ntv.tv_sec = 0;\ntv.tv_usec = 0;\nif (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n\tno_select?&tv:(struct timeval *)NULL) < 0)\n...\n\nThat way, select() would block *only* if there is nothing on the sockets\n*and* nothing in the SSL buffers. 
If there is data in *either* sockets *or*\nSSL buffers, it will go through to the loop that reads the data.\n\nWith this, completely removing the if(no_select) FD_ZERO part of the code.\n\n\nThis was a quick suggestion, so it may be flawed :-) Comments?\n\n\n//Magnus\n",
"msg_date": "Wed, 25 Oct 2000 16:58:02 +0200",
"msg_from": "Magnus Hagander <mha@sollentuna.net>",
"msg_from_op": true,
"msg_subject": "RE: Bogus-looking SSL code in postmaster wait loop"
},
{
"msg_contents": "Magnus Hagander <mha@sollentuna.net> writes:\n> SSL_pending() returns true when there is data in the SSL buffer of the\n> socket.\n> The problem is that since SSL uses block cipher, even if you read one just\n> byte from the socket (using ssl_read), OpenSSL will read a complete block\n> from the network, in order to be able to decrypt it. In this case, the rest\n> of that block will sit in the (SSL *)-structure. If you then select() the\n> socket, it will *not* return that there is more data available, because that\n> data has already been read from the network layer. Therefor, the select()\n> call would block *even though there is more data available* on that socket.\n\nOK. In that case the existing code is actually broken, because what\nwill happen as soon as SSL_pending returns true is that the select will\n*never* complete.\n\n> struct timeval tv;\n> tv.tv_sec = 0;\n> tv.tv_usec = 0;\n> if (select(nSockets, &rmask, &wmask, (fd_set *) NULL,\n> \tno_select?&tv:(struct timeval *)NULL) < 0)\n\n> That way, select() would block *only* if there is nothing on the sockets\n> *and* nothing in the SSL buffers.\n\nThis looks reasonable to me, and should avoid the DOS issue. We don't\nwant to skip the select() entirely, else we'd be ignoring our other\nclients.\n\nI'll put this in (with comments ;-)) unless there are objections from\nthe floor ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 11:42:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bogus-looking SSL code in postmaster wait loop "
}
] |
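Magnus's zero-timeout trick generalizes: when any connection holds already-decrypted data in a userspace buffer, poll instead of blocking, but still collect readiness for every other client. A minimal sketch in Python, where plain sockets stand in for SSL connections and the `have_buffered` flag plays the role of `SSL_pending()`:

```python
import select
import socket

def wait_readable(socks, have_buffered: bool):
    # If some connection has data buffered above the kernel (what
    # SSL_pending() reports for OpenSSL), we must not block in
    # select() -- that data would never wake us up.  A zero timeout
    # still lets us notice any *other* clients that are ready.
    timeout = 0 if have_buffered else None
    readable, _, _ = select.select(socks, [], [], timeout)
    return readable
```

Either way the caller falls through to its read loop, which services both kernel-level readiness and leftover buffered bytes — avoiding the denial-of-service Tom describes.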
[
{
"msg_contents": "At 09:36 25/10/00 +0200, Zeugswetter Andreas SB wrote:\n>\n>> I have not followed the entire thread, but if you are in a serializable OR\n>> repeatable-read transaction,\n>\n>Serializable and repeatable read are the same thing, different wording.\n\nNot last time I looked. RR ensures that rows you have seen will still\nreturn the same data, but allows a reexecuted cursor to return more rows.\nSerializable means cursors always return expected data the second time they\nare executed.\n\n\n>> I would think that read-only statements will\n>> need to keep some kind of lock on the rows they read (or the table).\n>\n>Yes, we were talking about the other isolation levels. Most,\n>but not all of my mails in this thread state this difference.\n\nThe bit that worried me was that most emails only referred to serializable,\nnot RR (which they should have, I think).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 26 Oct 2000 01:01:16 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: BLERe: AW: AW: relation ### modified\n while in use"
},
{
"msg_contents": "\n\nPhilip Warner wrote:\n\n> At 09:36 25/10/00 +0200, Zeugswetter Andreas SB wrote:\n> >\n> >> I have not followed the entire thread, but if you are in a serializable OR\n> >> repeatable-read transaction,\n> >\n> >Serializable and repeatable read are the same thing, different wording.\n>\n> Not last time I looked. RR ensures that rows you have seen will still\n> return the same data, but allows a reexecuted cursor to return more rows.\n> Serializable means cursors always return expected data the second time they\n> are executed.\n>\n\nCurrently PostgreSQL doesn't support REPEATABLE READ isolation level.\nBut we could use SERIALIZABLE isolation level instead of RR isolaiton level\nbecause SERIALIZABLE isolation level satisfies the condition of RR isolation\nlevel(as you mentioned above).\n\n>\n> >> I would think that read-only statements will\n> >> need to keep some kind of lock on the rows they read (or the table).\n> >\n> >Yes, we were talking about the other isolation levels. Most,\n> >but not all of my mails in this thread state this difference.\n>\n> The bit that worried me was that most emails only referred to serializable,\n> not RR (which they should have, I think).\n>\n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n\n",
"msg_date": "Thu, 26 Oct 2000 14:57:50 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: BLERe: AW: AW: relation ### modifiedwhile in use"
}
] |
[
{
"msg_contents": "\n> > Idea: As we have this type of query in more than one part \n> of the source tree\n> > (ie: psql, jdbc, probably odbc), should we have a section in the\n> > documentation containing common queries, like: retrieving a \n> list of tables,\n> > views etc?\n> \n> That's a good thought. It'd be a useful practice to review such\n> standard queries from time to time anyway. For example, now that\n> outer joins work, a lot of psql's backslash-command queries could\n> be simplified (don't need the UNION ALL WITH SELECT NULL hack).\n> \n> Anyone have time to work up a list?\n\nPerhaps a good long-term solution for this would be to support\nINFORMATION_SCHEMA per SQL92? This requires basic schema support, of course\n:-)\nThat way, it would be possible to use other tools as well, and supporting a\nstandard is always nice :-) Also, it wouldn't be necessary to update all the\nfrontends if the system table format changes - just update those views.\nEverything may not be supported by INFORMATION_SCHEMA, but it may be a step\nin the way...\n\n//Magnus\n",
"msg_date": "Wed, 25 Oct 2000 17:02:17 +0200",
"msg_from": "Magnus Hagander <mha@sollentuna.net>",
"msg_from_op": true,
"msg_subject": "RE: Re: [INTERFACES] RE: JDBC now needs updates for large objects"
},
{
"msg_contents": "Magnus Hagander <mha@sollentuna.net> writes:\n> Perhaps a good long-term solution for this would be to support\n> INFORMATION_SCHEMA per SQL92? This requires basic schema support, of course\n> :-)\n\nYes, I think that's the right answer in the long run. Won't happen for\na release or three though...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Oct 2000 11:50:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [INTERFACES] RE: JDBC now needs updates for large objects "
}
] |
[
{
"msg_contents": "Someone reported a problem with /usr/include/pgsql/os.h and the\nunderlying linux.h. I see a broken link for os.h, and somehow linux.h\ndoes not appear, though on the surface the spec file seems to be doing\nthe right thing.\n\nI'm working with 7.0.2-3mdk (which is the same as your 7.0.2-3). Do you\nsee this too, or has it been fixed in a subsequent version?\n\n - Thomas\n",
"msg_date": "Wed, 25 Oct 2000 15:21:45 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "7.0.x RPMs"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Someone reported a problem with /usr/include/pgsql/os.h and the\n> underlying linux.h. I see a broken link for os.h, and somehow linux.h\n> does not appear, though on the surface the spec file seems to be doing\n> the right thing.\n\n> I'm working with 7.0.2-3mdk (which is the same as your 7.0.2-3). Do you\n> see this too, or has it been fixed in a subsequent version?\n\nI don't have a 7.0.2-3.\n\nIt has been fixed in a subsequent version. Please upgrade your RPM\ninstallation to the latest rpm-3.0.x from Mandrake, then pull\npostgresql-7.0.2-19.src.rpm from rawhide. You will need to upgrade RPM\nbecause that src.rpm is in v4 format. Yes: 7.0.2-_19_. Trond has been\nbusy :-).\n\nI am working on back porting, amongst other issues.\n\nThere has been discussion about _why_ this is happening. Opinion seems\nto be that 'make install' should be copying the port-specific file to\nos.h instead of symlinking. What would your opinion be?\n\nSubsequent versions (which haven't yet been uploaded to postgresql.org\nyet due to a number of issues) simply include linux.h in %files for\n-devel.\n\nI will have a backport to RH 6.2 of -19 done by the end of this week\n(being paid to do it, so I had better do it!), and will upload that.\n\nOh, Thomas, since the permissions on the PPC subdir in binary are such\nthat I can't delete the PPC RPMS, can you delete them and post a README\nthat there are serious problems with the LinuxPPC binary build currently\nposted on PostgreSQL.org? We need to get them rebuilt with -O2 or less.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 25 Oct 2000 11:33:28 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.x RPMs"
}
] |
[
{
"msg_contents": "HELP! \n\nI used to be able to manually set a sequence id by typing:\n\nSELECT setval('products_seq_id',100); but for some reason it is no longer working. It would previously return 100 and now it returns nothing and the counter is not increased to 100. Any suggestions on where to look?\n\nThanks much\nJeff\n",
"msg_date": "Wed, 25 Oct 2000 11:22:07 -0400",
"msg_from": "\"Jeff Tucker\" <jjt@aye.net>",
"msg_from_op": true,
"msg_subject": "Postgres Question"
},
{
"msg_contents": "",
"msg_date": "Thu, 26 Oct 2000 22:54:44 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Question"
}
] |
[
{
"msg_contents": "I need some of features (union in views, in particular) which are only\navailable in 7.1-current. How insane would be to try it on production\nserver? Are there known bugs that could cause loss of data or loss of\nupdates? Or just newer features aren't polished yet?\n\nAlso, where is cvs repository? Is there a CVSup access to it?\n\n-alex\n\n\n",
"msg_date": "Thu, 26 Oct 2000 00:11:16 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": true,
"msg_subject": "sanity of using -current?"
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> I need some of features (union in views, in particular) which are only\n> available in 7.1-current. How insane would be to try it on production\n> server? Are there known bugs that could cause loss of data or loss of\n> updates? Or just newer features aren't polished yet?\n\nOn a production server? I wouldn't trust it with mission-critical data,\nfor sure. But that's more an issue of inadequate testing than that\nthere are known bugs.\n\nIf you try this, be sure to make frequent backups ...\n\n> Also, where is cvs repository? Is there a CVSup access to it?\n\nSee http://www.postgresql.org/devel-corner/docs/postgres/cvs.htm\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 11:05:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sanity of using -current? "
}
] |
[
{
"msg_contents": "While answering the n'th why-is-initdb-failing question that looked like\na version mismatch problem, it occurred to me to wonder why we don't\nmake initdb verify that the executable and library files it's using\nare all from the same release it is. I think this would eliminate an\ninstallation mistake that's practically reached FAQ status.\n\nA sketch of a way to do this is:\n\n1. Add a --version switch to postgres or postmaster to print its version\nand exit. Then initdb could check the executable's version against its\nown. (Alternatively we could rely on pg_config, but at a minimum that\nwould mean checking to make sure that pg_config is found in the same\ndirectory that postgres is in. A direct check on the key executable\nseems a lot safer.)\n\n2. During \"make install\", generate a PGVERSION file and store it in the\nsame directory that global.bki etc are stored in (the .../share install\ndirectory). initdb could look for this to ensure that PGLIB is pointing\nto a compatible library directory. Alternatively, add version info as\na comment in the first line of global.bki.\n\nI don't have time to pursue this right now, but maybe someone else would\nlike to pick up on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 11:42:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea: cross-check versions during initdb"
},
{
"msg_contents": "Sounds like an easy one for a newbie to pick up. Let me look at it,\nbut I think I'd like dibs on it. \n\nLER\n\n* Tom Lane <tgl@sss.pgh.pa.us> [001026 13:29]:\n> While answering the n'th why-is-initdb-failing question that looked like\n> a version mismatch problem, it occurred to me to wonder why we don't\n> make initdb verify that the executable and library files it's using\n> are all from the same release it is. I think this would eliminate an\n> installation mistake that's practically reached FAQ status.\n> \n> A sketch of a way to do this is:\n> \n> 1. Add a --version switch to postgres or postmaster to print its version\n> and exit. Then initdb could check the executable's version against its\n> own. (Alternatively we could rely on pg_config, but at a minimum that\n> would mean checking to make sure that pg_config is found in the same\n> directory that postgres is in. A direct check on the key executable\n> seems a lot safer.)\n> \n> 2. During \"make install\", generate a PGVERSION file and store it in the\n> same directory that global.bki etc are stored in (the .../share install\n> directory). initdb could look for this to ensure that PGLIB is pointing\n> to a compatible library directory. Alternatively, add version info as\n> a comment in the first line of global.bki.\n> \n> I don't have time to pursue this right now, but maybe someone else would\n> like to pick up on it.\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 26 Oct 2000 13:42:29 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb"
},
{
"msg_contents": "Added to TODO:\n\n* Prevent initdb from running wrong version of postmaster/postgres\n\n> While answering the n'th why-is-initdb-failing question that looked like\n> a version mismatch problem, it occurred to me to wonder why we don't\n> make initdb verify that the executable and library files it's using\n> are all from the same release it is. I think this would eliminate an\n> installation mistake that's practically reached FAQ status.\n> \n> A sketch of a way to do this is:\n> \n> 1. Add a --version switch to postgres or postmaster to print its version\n> and exit. Then initdb could check the executable's version against its\n> own. (Alternatively we could rely on pg_config, but at a minimum that\n> would mean checking to make sure that pg_config is found in the same\n> directory that postgres is in. A direct check on the key executable\n> seems a lot safer.)\n> \n> 2. During \"make install\", generate a PGVERSION file and store it in the\n> same directory that global.bki etc are stored in (the .../share install\n> directory). initdb could look for this to ensure that PGLIB is pointing\n> to a compatible library directory. Alternatively, add version info as\n> a comment in the first line of global.bki.\n> \n> I don't have time to pursue this right now, but maybe someone else would\n> like to pick up on it.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Oct 2000 15:50:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb"
},
{
"msg_contents": "> > 1. Add a --version switch to postgres or postmaster to print its version\n> > and exit.\n\npostmaster already has this. Someone can copy the code into\ntcop/postgres.c as well. But should we not use the catversion for this?\n\n> > to a compatible library directory. Alternatively, add version info as\n> > a comment in the first line of global.bki.\n\nI think that's better.\n\nBonus project: find out why initdb is picking up the wrong files in the\nfirst place.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 27 Oct 2000 21:10:12 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bonus project: find out why initdb is picking up the wrong files in the\n> first place.\n\nUser error is a sufficient explanation in the cases I've seen: wrong\npostgres executable found first in PATH, PGLIB or -L pointing to wrong\nplace, etc.\n\nThe changes you've made since 7.0 may reduce the incidence of mistakes,\nbut they won't eliminate them...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 15:12:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: cross-check versions during initdb "
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Sounds like an easy one for a newbie to pick up. Let me look at it,\n> but I think I'd like dibs on it. \n\nActually, initdb of 7.1 gets the directory location of the bootstrap files\nwired in at build time. The only way to override it is to use the -L\noption. So the problem seems a lot less grave that way.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 27 Oct 2000 23:30:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Actually, initdb of 7.1 gets the directory location of the bootstrap files\n> wired in at build time. The only way to override it is to use the -L\n> option. So the problem seems a lot less grave that way.\n\nThat does seem to reduce the odds of wrong-bki-files considerably.\nBut it's still dependent on the user's PATH to point to the right\nexecutables, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 17:36:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: cross-check versions during initdb "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bonus project: find out why initdb is picking up the wrong files in the\n> > first place.\n> \n> User error is a sufficient explanation in the cases I've seen: wrong\n> postgres executable found first in PATH, PGLIB or -L pointing to wrong\n> place, etc.\n> \n> The changes you've made since 7.0 may reduce the incidence of mistakes,\n> but they won't eliminate them...\n\nMy guess is that the have some PostgreSQL binaries in /usr/local/bin and\npgsql/bin, and their path has one prefered over the other.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 27 Oct 2000 18:12:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb]"
},
{
"msg_contents": "Tom Lane writes:\n\n> But it's still dependent on the user's PATH to point to the right\n> executables, no?\n\nThis is what's puzzling me. There's code in there that tries to locate\ninitdb and uses the executables and bki files (7.0 only) from the same\ntree. Evidently this code does not always work right, but that's what\nneeds to be fixed.\n\nCMDNAME=`basename $0`\n\n...\n\n#\n# Find out where we're located\n#\nif echo \"$0\" | grep '/' > /dev/null 2>&1\nthen\n # explicit dir name given\n PGPATH=`echo $0 | sed 's,/[^/]*$,,'` # (dirname command is not portable)\nelse\n # look for it in PATH ('which' command is not portable)\n for dir in `echo \"$PATH\" | sed 's/:/ /g'`\n do\n # empty entry in path means current dir\n [ -z \"$dir\" ] && dir='.'\n if [ -f \"$dir/$CMDNAME\" ]\n then\n PGPATH=\"$dir\"\n break\n fi\n done\nfi\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 00:31:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> But it's still dependent on the user's PATH to point to the right\n>> executables, no?\n\n> This is what's puzzling me. There's code in there that tries to locate\n> initdb and uses the executables and bki files (7.0 only) from the same\n> tree.\n\nYeah, but how long has that code been in there? Wouldn't be at all\nsurprised if the complaints are coming from people who are managing\nto invoke a 6.5 initdb script against 7.0 postgres executable and/or\nlibrary files.\n\nOf course, people who manage to invoke a 6.5 or 7.0 initdb script aren't\ngoing to be helped anyway by defenses we put into 7.1 initdb :-(.\n\nPerhaps there need to be additional crosschecks performed by the\npostgres executable to ensure that (a) it's being called by a compatible\ninitdb and (b) it's being fed compatible bki files.\n\nPoint b could be addressed if we put version IDs into the bki files and\nhave BootstrapMain check for them. As for point a, maybe we could extend\nthe bootstrap switch set so that it includes a version number passed by\nthe initdb script; then BootstrapMain refuses to play unless the correct\nversion number is supplied. This would work if old postgres executables\nreject the version# info as an invalid switch ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:40:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: cross-check versions during initdb "
},
{
"msg_contents": "Tom Lane writes:\n\n> Of course, people who manage to invoke a 6.5 or 7.0 initdb script aren't\n> going to be helped anyway by defenses we put into 7.1 initdb :-(.\n\nIt just occured to me that people invoking pre-7.1 initdbs with new input\nfiles are not going to get very far because the bki files have different\nnames now. They could still use a new backend with old bki files, though. \nWe could work around that if we subtly mangle the options set of\n\"postgres\". For example, the -C option is pretty useless. Nobody cares\nabout vital pieces of information like\n\nPOSTGRES backend interactive interface\n$Revision: 1.183 $ $Date: 2000/10/28 01:07:00 $\n\nWe could make postgres print a message like \"Are you sure you're not\ninvoking me from an old initdb?\" upon its receipt, since old initdb\nversions pass this option.\n\nAs for \"new initdb using old stuff\", I'm putting a check into initdb for\n`postgres --version`, which should make it safe. The danger of\naccidentally using old bki files is not a problem at this point, but a\nfuture version of genbki.sh might want to put a simple\n\n# PostgreSQL 7.2\n\nline atop its output files which initdb can check for. That should be\npretty low-interference.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 18:20:18 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Idea: cross-check versions during initdb "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> Of course, people who manage to invoke a 6.5 or 7.0 initdb script aren't\n>> going to be helped anyway by defenses we put into 7.1 initdb :-(.\n\n> It just occured to me that people invoking pre-7.1 initdbs with new input\n> files are not going to get very far because the bki files have different\n> names now.\n\nOK, but that will only protect us for the 7.0 to 7.1 transition. Future\ntransitions will create the risk again. I think we should try to solve\nthese issues now while we're thinking about them.\n\n> a future version of genbki.sh might want to put a simple\n> # PostgreSQL 7.2\n> line atop its output files which initdb can check for.\n\nSeems like a plan to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 12:38:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: cross-check versions during initdb "
}
] |
[
{
"msg_contents": "Saw Tom's commits, now it breaks here:\n\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ncc -O -K inline -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -Wl,-Bexport\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake -C include all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/include'\ngmake[2]: Nothing to be done for `all'.\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/include'\ngmake -C interfaces all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ncc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-auth.o fe-auth.c\ncc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-connect.o fe-connect.c\nUX:acomp: ERROR: \"../../../src/include/mb/pg_wchar.h\", line 10: syntax error in macro parameters\ngmake[3]: *** [fe-connect.o] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 26 Oct 2000 13:36:16 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "more multibyte/After TGL..."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Saw Tom's commits, now it breaks here:\n> cc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-connect.o fe-connect.c\n> UX:acomp: ERROR: \"../../../src/include/mb/pg_wchar.h\", line 10: syntax error in macro parameters\n\nThis one is Tatsuo's fault: he's recently started relying on a gcc-ism:\n\n#ifdef FRONTEND\n#define elog(X...)\n#endif\n\nwhich will not do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 19:23:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL... "
},
{
"msg_contents": "> Larry Rosenman <ler@lerctr.org> writes:\n> > Saw Tom's commits, now it breaks here:\n> > cc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-connect.o fe-connect.c\n> > UX:acomp: ERROR: \"../../../src/include/mb/pg_wchar.h\", line 10: syntax error in macro parameters\n> \n> This one is Tatsuo's fault: he's recently started relying on a gcc-ism:\n> \n> #ifdef FRONTEND\n> #define elog(X...)\n> #endif\n> \n> which will not do.\n\nOk, I have removed the \"gcc-ism\" macro. Please try to build again on\nyour non-gcc platform and please let me know if you have further\nproblem...\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 27 Oct 2000 11:31:50 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL... "
},
{
"msg_contents": "Todays Sources still die:\n\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o copy.o copy.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o startup.o startup.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o prompt.o prompt.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o variables.o variables.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o large_obj.o large_obj.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o print.o print.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o describe.o describe.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o tab-complete.o tab-complete.c\ncc -O -K inline -o psql command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses \nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\npg_encoding_to_char command.o\nUX:ld: ERROR: Symbol referencing errors. No output written to psql\ngmake[3]: *** [psql] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/psql'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n* Tatsuo Ishii <t-ishii@sra.co.jp> [001027 02:49]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Saw Tom's commits, now it breaks here:\n> > > cc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-connect.o fe-connect.c\n> > > UX:acomp: ERROR: \"../../../src/include/mb/pg_wchar.h\", line 10: syntax error in macro parameters\n> > \n> > This one is Tatsuo's fault: he's recently started relying on a gcc-ism:\n> > \n> > #ifdef FRONTEND\n> > #define elog(X...)\n> > #endif\n> > \n> > which will not do.\n> \n> Ok, I have removed the \"gcc-ism\" macro. Please try to build again on\n> your non-gcc platform and please let me know if you have further\n> problem...\n> --\n> Tatsuo Ishii\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 07:26:59 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "They still die today. I did some looking, but I'm not sure how to fix\nit. \n\nApparently we need to have access to src/backend/utils/mb/common.c's\nobject file for the psql build. Not sure how to get there...\n\nLarry\n* Larry Rosenman <ler@lerctr.org> [001027 07:26]:\n> Todays Sources still die:\n> \n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o copy.o copy.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o startup.o startup.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o prompt.o prompt.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o variables.o variables.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o large_obj.o large_obj.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o print.o print.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o describe.o describe.c\n> cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o tab-complete.o tab-complete.c\n> cc -O -K inline -o psql command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses \n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> pg_encoding_to_char command.o\n> UX:ld: ERROR: Symbol referencing errors. No output written to psql\n> gmake[3]: *** [psql] Error 1\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/psql'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> gmake: *** [all] Error 2\n> * Tatsuo Ishii <t-ishii@sra.co.jp> [001027 02:49]:\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > Saw Tom's commits, now it breaks here:\n> > > > cc -c -I/usr/local/include -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -O -K inline -K PIC -o fe-connect.o fe-connect.c\n> > > > UX:acomp: ERROR: \"../../../src/include/mb/pg_wchar.h\", line 10: syntax error in macro parameters\n> > > \n> > > This one is Tatsuo's fault: he's recently started relying on a gcc-ism:\n> > > \n> > > #ifdef FRONTEND\n> > > #define elog(X...)\n> > > #endif\n> > > \n> > > which will not do.\n> > \n> > Ok, I have removed the \"gcc-ism\" macro. Please try to build again on\n> > your non-gcc platform and please let me know if you have further\n> > problem...\n> > --\n> > Tatsuo Ishii\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 10:52:43 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "> They still die today. I did some looking, but I'm not sure how to fix\n> it. \n> \n> Apparently we need to have access to src/backend/utils/mb/common.c's\n> object file for the psql build. Not sure how to get there...\n> \n> Larry\n> * Larry Rosenman <ler@lerctr.org> [001027 07:26]:\n> > Todays Sources still die:\n> > \n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o copy.o copy.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o startup.o startup.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o prompt.o prompt.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o variables.o variables.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o large_obj.o large_obj.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o print.o print.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o describe.o describe.c\n> > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o tab-complete.o tab-complete.c\n> > cc -O -K inline -o psql command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses \n> > Undefined\t\t\tfirst referenced\n> > symbol \t\t\t in file\n> > pg_encoding_to_char command.o\n> > UX:ld: ERROR: Symbol referencing errors. No output written to psql\n[snip]\n\nHum, I have tried a build with fresh sources from CVS, but see no\nproblem as you mentioned so far.\n\nThe function pg_encoding_to_char() is defined in\ninterfaces/libpq/common.c (actually symlink to\nbackend/utils/mb/common.c). Have you checked the pg_encoding_to_char()\nis exported from common.o?\n\nOr you have grabbed wrong version? Here is the version id in my\ncopy of common.c:\n\n * $Id: common.c,v 1.9 2000/06/13 07:35:15 tgl Exp $ */\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 29 Oct 2000 10:17:54 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "I have that version. I am, however, compiling with a NON-GCC\ncompiler.\n\nLarry\n* Tatsuo Ishii <t-ishii@sra.co.jp> [001028 20:16]:\n> > They still die today. I did some looking, but I'm not sure how to fix\n> > it. \n> > \n> > Apparently we need to have access to src/backend/utils/mb/common.c's\n> > object file for the psql build. Not sure how to get there...\n> > \n> > Larry\n> > * Larry Rosenman <ler@lerctr.org> [001027 07:26]:\n> > > Todays Sources still die:\n> > > \n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o copy.o copy.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o startup.o startup.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o prompt.o prompt.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o variables.o variables.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o large_obj.o large_obj.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o print.o print.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o describe.o describe.c\n> > > cc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o tab-complete.o tab-complete.c\n> > > cc -O -K inline -o psql command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses \n> > > Undefined\t\t\tfirst referenced\n> > > symbol \t\t\t in file\n> > > pg_encoding_to_char command.o\n> > > UX:ld: ERROR: Symbol referencing errors. No output written to psql\n> [snip]\n> \n> Hum, I have tried a build with fresh sources from CVS, but see no\n> problem as you mentioned so far.\n> \n> The function pg_encoding_to_char() is defined in\n> interfaces/libpq/common.c (actually symlink to\n> backend/utils/mb/common.c). Have you checked the pg_encoding_to_char()\n> is exported from common.o?\n> \n> Or you have grabbed wrong version? Here is the version id in my\n> copy of common.c:\n> \n> * $Id: common.c,v 1.9 2000/06/13 07:35:15 tgl Exp $ */\n> --\n> Tatsuo Ishii\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 20:23:13 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001028 20:25]:\n> I have that version. I am, however, compiling with a NON-GCC\n> compiler.\n> \nOk, just re-cvs'd, and still have the problem. \n\nConfigure:\n\nCC=cc CXX=CC ./configure --prefix=/home/ler/pg-test --enable-syslog --with-CXX --with-perl --enable-multibyte --with-includes=/usr/local/include --with-libs=/usr/local/lib\n\nLast 20 lines of make output. \ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o copy.o copy.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o startup.o startup.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o prompt.o prompt.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o variables.o variables.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o large_obj.o large_obj.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o print.o print.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o describe.o describe.c\ncc -c -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -O -K inline -o tab-complete.o tab-complete.c\ncc -O -K inline -o psql command.o common.o help.o input.o stringutils.o mainloop.o copy.o startup.o prompt.o variables.o large_obj.o print.o describe.o tab-complete.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses -Wl,-R/home/ler/pg-test/lib\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\npg_encoding_to_char command.o\nUX:ld: ERROR: Symbol referencing errors. 
No output written to psql\ngmake[3]: *** [psql] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/psql'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 20:36:05 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Ok, just re-cvs'd, and still have the problem. \n\nI can't reproduce the problem either...\n\npg_encoding_to_char is in common.c from backend/utils/mb/. The way that\npsql gets holds of it is that in a MULTIBYTE build, common.c is built\nand included in libpq (interfaces/libpq), and then psql links with\nlibpq.\n\nTwo likely theories are\n\n(1) for some reason your link is picking up a different copy of libpq\nthan the one present in interfaces/libpq (link search path issue); or\n\n(2) you've got a compiled copy of libpq that was compiled without\nMULTIBYTE support, and hasn't gotten updated since you reconfigured\nwith MULTIBYTE support.\n\nThe latter would arguably be a failure to maintain proper dependencies.\nI'm not sure if Peter is trying to force a global rebuild when you\nreconfigure or not; maybe you're expected to do \"make clean\" when you\nalter configuration choices.\n\nAnyway, seems like the first thing to look at is whether\ninterfaces/libpq/libpq.a (or .so or .sl) contains pg_encoding_to_char\n(use nm(1) to check). If not, is there a common.o file in that\ndirectory?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 23:13:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL... "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001028 22:15]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Ok, just re-cvs'd, and still have the problem. \n> \n> I can't reproduce the problem either...\n> \n> pg_encoding_to_char is in common.c from backend/utils/mb/. The way that\n> psql gets holds of it is that in a MULTIBYTE build, common.c is built\n> and included in libpq (interfaces/libpq), and then psql links with\n> libpq.\n> \n> Two likely theories are\n> \n> (1) for some reason your link is picking up a different copy of libpq\n> than the one present in interfaces/libpq (link search path issue); or\nOk, it's in the libpq in src/interfaces/libpq:\n$ cd pg-dev/pgsql/src/interfaces/libpq\n$ ls\nCVS fe-auth.c iso8859.map pqexpbuffer.h\nEUC_JP_to_UTF.map fe-auth.h libpq-fe.h pqexpbuffer.o\nMakefile fe-auth.o libpq-int.h pqsignal.c\nREADME fe-connect.c libpq.a pqsignal.h\nUTF_to_EUC_JP.map fe-connect.o libpq.rc pqsignal.o\nbig5.c fe-exec.c libpq.so sjis.map\nbig5.o fe-exec.o libpq.so.2 wchar.c\ncommon.c fe-lobj.c libpq.so.2.1 wchar.o\ncommon.o fe-lobj.o libpqdll.c win32.h\nconv.c fe-misc.c libpqdll.def win32.mak\nconv.o fe-misc.o mbutils.c\ndllist.c fe-print.c mbutils.o\ndllist.o fe-print.o pqexpbuffer.c\n$ nm libpq.so|grep pg_encoding_to_char\n[268] |56720 |84 |FUNC |GLOB |0 |9\n|pg_encoding_to_char\n$ \n\n> \n> (2) you've got a compiled copy of libpq that was compiled without\n> MULTIBYTE support, and hasn't gotten updated since you reconfigured\n> with MULTIBYTE support.\nI did a gmake distclean before the reconfigure. There are multiple\nlibpq's on the system. Would LD_LIBRARY_PATH override the link spec'd\n-L? 
\n> \n> The latter would arguably be a failure to maintain proper dependencies.\n> I'm not sure if Peter is trying to force a global rebuild when you\n> reconfigure or not; maybe you're expected to do \"make clean\" when you\n> alter configuration choices.\n> \n> Anyway, seems like the first thing to look at is whether\n> interfaces/libpq/libpq.a (or .so or .sl) contains pg_encoding_to_char\n> (use nm(1) to check). If not, is there a common.o file in that\n> directory?\nSee above. \n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 22:18:33 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I did a gmake distclean before the reconfigure. There are multiple\n> libpq's on the system. Would LD_LIBRARY_PATH override the link spec'd\n> -L? \n\nDarn if I know. Sounds like wrong-libpq is the theory to pursue,\nthough. Better start scrutinizing the man page for your system's ld\n...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 23:22:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL... "
},
{
"msg_contents": "YUP, it's LD_LIBRARY_PATH. We need to make sure that the BUILD\nUnsets it...\n\n$ cc -O -K inline -o psql *.o -L ../../../src/interfaces/libpq -lpq -L\n/usr/local/lib -lz -lgen -lld -lnsl -lsocket -ldl -lm -lreadline\n-ltermcap -lcurses \nUndefined first referenced\nsymbol in file\npg_encoding_to_char command.o\nUX:ld: ERROR: Symbol referencing errors. No output written to psql\n$ unset LD_LIBRARY_PATH\n$ cc -O -K inline -o psql *.o -L ../../../src/interfaces/libpq -lpq -L\n/usr/l>\n$ \n\n* Larry Rosenman <ler@lerctr.org> [001028 22:20]:\n> * Tom Lane <tgl@sss.pgh.pa.us> [001028 22:15]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Ok, just re-cvs'd, and still have the problem. \n> > \n> > I can't reproduce the problem either...\n> > \n> > pg_encoding_to_char is in common.c from backend/utils/mb/. The way that\n> > psql gets holds of it is that in a MULTIBYTE build, common.c is built\n> > and included in libpq (interfaces/libpq), and then psql links with\n> > libpq.\n> > \n> > Two likely theories are\n> > \n> > (1) for some reason your link is picking up a different copy of libpq\n> > than the one present in interfaces/libpq (link search path issue); or\n> Ok, it's in the libpq in src/interfaces/libpq:\n> $ cd pg-dev/pgsql/src/interfaces/libpq\n> $ ls\n> CVS fe-auth.c iso8859.map pqexpbuffer.h\n> EUC_JP_to_UTF.map fe-auth.h libpq-fe.h pqexpbuffer.o\n> Makefile fe-auth.o libpq-int.h pqsignal.c\n> README fe-connect.c libpq.a pqsignal.h\n> UTF_to_EUC_JP.map fe-connect.o libpq.rc pqsignal.o\n> big5.c fe-exec.c libpq.so sjis.map\n> big5.o fe-exec.o libpq.so.2 wchar.c\n> common.c fe-lobj.c libpq.so.2.1 wchar.o\n> common.o fe-lobj.o libpqdll.c win32.h\n> conv.c fe-misc.c libpqdll.def win32.mak\n> conv.o fe-misc.o mbutils.c\n> dllist.c fe-print.c mbutils.o\n> dllist.o fe-print.o pqexpbuffer.c\n> $ nm libpq.so|grep pg_encoding_to_char\n> [268] |56720 |84 |FUNC |GLOB |0 |9\n> |pg_encoding_to_char\n> $ \n> \n> > \n> > (2) you've got a compiled copy of 
libpq that was compiled without\n> > MULTIBYTE support, and hasn't gotten updated since you reconfigured\n> > with MULTIBYTE support.\n> I did a gmake distclean before the reconfigure. There are multiple\n> libpq's on the system. Would LD_LIBRARY_PATH override the link spec'd\n> -L? \n> > \n> > The latter would arguably be a failure to maintain proper dependencies.\n> > I'm not sure if Peter is trying to force a global rebuild when you\n> > reconfigure or not; maybe you're expected to do \"make clean\" when you\n> > alter configuration choices.\n> > \n> > Anyway, seems like the first thing to look at is whether\n> > interfaces/libpq/libpq.a (or .so or .sl) contains pg_encoding_to_char\n> > (use nm(1) to check). If not, is there a common.o file in that\n> > directory?\n> See above. \n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 22:24:04 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "LD_LIBRARY_PATH needs to go while building....\n\n\n* Tom Lane <tgl@sss.pgh.pa.us> [001028 22:22]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I did a gmake distclean before the reconfigure. There are multiple\n> > libpq's on the system. Would LD_LIBRARY_PATH override the link spec'd\n> > -L? \n> \n> Darn if I know. Sounds like wrong-libpq is the theory to pursue,\n> though. Better start scrutinizing the man page for your system's ld\n> ...\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 22:24:32 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001028 22:28]:\n> LD_LIBRARY_PATH needs to go while building....\n> \nIt *IS* in the manpage at the very end. Now, how do we deal with this\nlittle bugaboo?\n\nLER\n\n> \n> * Tom Lane <tgl@sss.pgh.pa.us> [001028 22:22]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > I did a gmake distclean before the reconfigure. There are multiple\n> > > libpq's on the system. Would LD_LIBRARY_PATH override the link spec'd\n> > > -L? \n> > \n> > Darn if I know. Sounds like wrong-libpq is the theory to pursue,\n> > though. Better start scrutinizing the man page for your system's ld\n> > ...\n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 22:38:28 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "Larry Rosenman writes:\n\n> YUP, it's LD_LIBRARY_PATH.\n\nThat's odd. On my system (and on all others that I've heard of that have\nit) this only affects the runtime linker, not the \"ld\" linker.\n\n> We need to make sure that the BUILD Unsets it...\n\nAre you sure that this can't lead to failures if a program run during the\nbuild requires this to be set. (Something like tclsh comes to\nmind.) Seems like something you ought to do manually.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 29 Oct 2000 12:50:40 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001029 05:48]:\n> Larry Rosenman writes:\n> \n> > YUP, it's LD_LIBRARY_PATH.\n> \n> That's odd. On my system (and on all others that I've heard of that have\n> it) this only affects the runtime linker, not the \"ld\" linker.\nMaybe, but here is the tail end of ld's man page:\n The environment variable LD_LIBRARY_PATH may be used to specify\n library search directories. In the most general case, it will\ncontain\n two directory lists separated by a semicolon:\n dirlist1;dirlist2\n\n Thus, if ld is called with the following occurrences of -L:\n ld . . . -Lpath1 . . . -Lpathn . . . -lx\n\n then the search path ordering for the library x (libx.so or libx.a)\n is:\n dirlist1 path1 . . . pathn dirlist2 LIBPATH\n\n LD_LIBRARY_PATH is also used to specify library search directories\nto\n the dynamic linker at run time. That is, if LD_LIBRARY_PATH exists\nin\n the environment, the dynamic linker will search the directories it\n names before its default directory for shared objects to be linked\n with the program at execution.\n \n Additionally, the environment variable LD_RUN_PATH (which also\n contains a directory list) may be used to specify library search\n directories to the dynamic linker. If present and not empty, it is\n passed to the dynamic linker by ld via data stored in the output\n object file. LD_RUN_PATH is ignored if building a shared object.\nThe\n paths it specifies are searched by the dynamic linker before those\n specified by LD_LIBRARY_PATH. LD_RUN_PATH is obsolete. Its use is\n discouraged in favor of the -R option to ld. 
If -R is specified,\n LD_RUN_PATH is ignored.\n \nFiles\n\n libx.so\n libraries\n libx.a\n libraries\n a.out\n output file\n LIBPATH\n usually /usr/ccs/lib:/usr/lib\n /usr/lib/local/locale/LC_MESSAGES/uxcds\n language-specific message file [See LANG on environ(5).]\n \nReferences\n\n a.out(4), ar(4), as(1), cc(1), elfmark(1), end(3C), exec(2),\nexit(2)\n \nNotices\n\n Through its options, the link editor gives users great flexibility;\n however, those who use the -M mapfile option must assume some added\n responsibilities. Use of this feature is strongly discouraged.\n \n ld should be invoked directly only if -r is used to create a\n relocatable object to be used in a later link. Creation of an\n executable or shared object should be done through the CC or cc\n command. The CC command must be used if any input object files\ncontain\n C++ code.\n _________________________________________________________________\n \n 30 January 1998\n M-) 2000 The Santa Cruz Operation, Inc. All rights reserved.\n UnixWare/OpenServer Development Kit 7.1.1b Feature Supplement\n \n See also ld(1bsd)\n \ncan't open \ncan't open \n$ $ \n\nSo, at least for the UDK FS, we probably need to walk the\nLD_LIBRARY_PATH and cleanse it of any libraries that contain OUR libs. \n\n\n> \n> > We need to make sure that the BUILD Unsets it...\n> \n> Are you sure that this can't lead to failures if a program run during the\n> build requires this to be set. (Something like tclsh comes to\n> mind.) Seems like something you ought to do manually.\nBut could we do the above to cleanse it of our stuff? \n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 29 Oct 2000 08:33:43 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "Larry Rosenman writes:\n\n> So, at least for the UDK FS, we probably need to walk the\n> LD_LIBRARY_PATH and cleanse it of any libraries that contain OUR libs. \n\nHow do you know what your libs are? The failure is likely to occur if you\nyou installed 7.0 in /usr/local/pgsql and followed the installation\ninstructions to the point of setting LD_LIBRARY_PATH. Then you install\n7.1 in prefix=$HOME/pg-test. How do you then compute that you should\nstrip \"/usr/local/pgsql/bin\" from LD_LIBRARY_PATH?\n\nI think this falls back to the user. We should perhaps add a note in\nFAQ_SCO about it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 29 Oct 2000 17:34:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: more multibyte/After TGL..."
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001029 10:47]:\n> Larry Rosenman writes:\n> \n> > So, at least for the UDK FS, we probably need to walk the\n> > LD_LIBRARY_PATH and cleanse it of any libraries that contain OUR libs. \n> \n> How do you know what your libs are? The failure is likely to occur if you\n> you installed 7.0 in /usr/local/pgsql and followed the installation\n> instructions to the point of setting LD_LIBRARY_PATH. Then you install\n> 7.1 in prefix=$HOME/pg-test. How do you then compute that you should\n> strip \"/usr/local/pgsql/bin\" from LD_LIBRARY_PATH?\nWhat about looking down LD_LIBRARY_PATH for libpq.so* and any of our\nother shared objects and removing whereever we find libpq.so* and\nremoving those *WHILE WE ARE COMPILE/LINKING*? \n> \n> I think this falls back to the user. We should perhaps add a note in\n> FAQ_SCO about it.\nProbably. I looked at my laptop, and that behaviour is *NOT NEW*. \n\n(my laptop doesn't have the UDK FS on it). \n\nSO, it looks like we have had an issue, and didn't run into it till\nnow. \n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 29 Oct 2000 11:55:28 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: more multibyte/After TGL..."
}
] |
[
{
"msg_contents": " Hello! \n\n\n Its, may be ,little \"gluk\" in doc, - sources and docs and examples\nfor Create Rule not correlated.\n Conctrento :)) ->\n CREATE RULE example_1 AS\n ON UPDATE emp.salary WHERE old.name = \"Joe\"\n /* ^^^ */\n DO \n UPDATE emp \n SET salary = new.salary\n WHERE emp.name = \"Sam\";\n\n /* ^^^ - its maby type-bUg -> ON UPDATE TO )*/\nAnd in sources........\n /* ----------\n * The current rewrite handler is known to work on relation level\n * rules only. And for SELECT events, it expects one non-nothing\n * action that is instead and returns exactly a tuple of the\n * rewritten relation. This restricts SELECT rules to views.\n *\n * Jan\n * ----------\n */\n if (event_obj->attrs)\n elog(ERROR, \"attribute level rules currently not\nsupported\");\n In result on all query, where after \"ON\" we put something as\n\"emp.salary\" ,\n Postgres answer :ERROR: \"attribute level rules currently not supported\"\n But in docs \n CREATE RULE name AS ON event\n TO object [ WHERE condition ]\n DO [ INSTEAD ] [ action | NOTHING ]\n .....\n .....\nwhere --> \n object\n Object is either table or table.column. \n ^^^^^^^^^^^^^!!!!! 8-()\n In baglist (todo) i dont found anything about this trouble.\n This code,doc don't rewriten from version 6.4 - \n i diggin in several Suse Linux - distrib,sourses\n dowloaded from www.postgressql.org , and again see all aviable\ndocs about rules.\n\nWaiting for any help.... ;) \n\n Sorry ,pls for my dirty english..... \n Vic\n",
"msg_date": "Thu, 26 Oct 2000 21:46:07 +0300",
"msg_from": "Vic <vic@dcc.dp.ua>",
"msg_from_op": true,
"msg_subject": "Simple question about Postgress rule system and docs for it ."
}
] |
[
{
"msg_contents": "pgsql-hackers-owner@hub.org wrote:\n\nI've been looking into this. I thought it would be easy, but it\ndoesn't want an easy fix, because it's worse than it at first\nappeared.\n\nWere you aware that this is legal:\n (select avg(a),b from dummy group by b) order by b;\nbut this is not:\n (select avg(a),b from dummy) group by b order by b;\nand if we just allowed lots of nested parens, we would allow:\n ((((select avg(a),b from dummy group by b)))) order by b;\nwhich just seems silly.\n\nI for one don't like any of these, although I prefer the\none that is currently disallowed -- and it appears to me\nthat the parens being allowed in select_clause are being\nintroduced at the wrong level. I'm going to experiment\nsome with the concept of a \"select_term\" which will be the\nthing that can be parenthesized (in some contexts).\n\nBTW: yacc accepts LALR grammars, which are fairly restricted.\nThus the shift/reduce complaints and such don't mean it's\nambiguous, just that it's pushing the envelope of the LALR\nparadigm. A lot of yacc grammars do just that, and work\njust fine, but of course you have to know what you're doing.\nI don't, at least not to that level of wizardry, so I'll stay\naway from shift/reduce, etc. I mention it just to say that\nwe're not treading on ambiguity here, just the limits of yacc.\n\nFinally, I hereby solicit comments. Because of yacc's limits,\nit may turn out to be difficult to do things like have\nunlimited parens around a \"select_term\" and also have\nconstructs like \n NOT IN (select_term)\nbecause yacc might not know how to parse the outer parens.\nOTOH, maybe we don't want NOT IN (((SELECT foo FROM bar))).\nOTOOH, maybe we do because there could be program-generated\nSQL out there that would like that freedom. What do the\nreaders think?\n\nI don't know yet where the problems could be. I may need\nhelp figuring out what's important to provide and what is\nless so. 
Does anyone know if there are a lot of parens in\nthe regression tests?\n\n++ kevin\n\n\n> \n> Subject: Re: [GENERAL] A rare error\n> Date: Wed, 25 Oct 2000 12:28:35 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: DaVinci <bombadil@wanadoo.es>\n> CC: Lista PostgreSql <pgsql-general@postgresql.org>,\n> pgsql-hackers@postgresql.org\n> References: <20001025104758.A7643@fangorn.net>\n> \n> DaVinci <bombadil@wanadoo.es> writes:\n> > An extrange behavior with PostgreSql 7.0.2:\n> > select * from foo where exists\n> > (select * from foo)\n> > works fine. But:\n> > select * from foo where exists\n> > ((select * from foo))\n> > shows an error:\n> > ERROR: parser: parse error at or near \"(\"\n> > Is this a bug?\n> \n> I was fooling around with exactly that point a couple weeks ago. You'd\n> think it would be easy to allow extra parentheses around a sub-select,\n> but I couldn't figure out any way to do it that didn't provoke shift/\n> reduce conflicts or worse.\n> \n> The main problem is that if parentheses are both part of the expression\n> grammar (as they'd better be ;-)) and part of the SELECT grammar then\n> for a construct like\n> select (((select count(foo) from bar)));\n> it's ambiguous whether the extra parens are expression parens or part\n> of the inner SELECT statement. You may not care, but yacc does: it does\n> not like ambiguous grammars. AFAICS the only solution is not to allow\n> parentheses at the very top level of a SELECT structure. Then the above\n> is not ambiguous because all the extra parens are expression parens.\n> \n> This solution leads directly to your complaint: the syntax is\n> EXISTS ( SELECT ... )\n> and you don't get to insert any unnecessary levels of parenthesis.\n> \n> We could maybe hack something for EXISTS in particular (since we know\n> a parenthesized SELECT must follow it) but in the general case there\n> doesn't seem to be a way to make it work. 
For example, in current\n> sources this is OK:\n> select * from foo where exists\n> ((select * from foo) union (select * from bar));\n> but not this:\n> select * from foo where exists\n> ((select * from foo) union ((select * from bar)));\n> ERROR: parser: parse error at or near \")\"\n> \n> If there are any yacc hackers out there who think they can improve on\n> this, please grab gram.y from current CVS and have at it. It'd be nice\n> not to have an artificial restriction against redundant parentheses in\n> SELECT structures.\n> \n> regards, tom lane\n> \n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n\n\n",
"msg_date": "Thu, 26 Oct 2000 11:53:47 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] A rare error"
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> Were you aware that this is legal:\n> (select avg(a),b from dummy group by b) order by b;\n> but this is not:\n> (select avg(a),b from dummy) group by b order by b;\n\nThe reason for that is that SQL doesn't think that \"order by\" should\nbe allowed in subqueries, only in a top-level SELECT.\n\nThat restriction makes sense in pure SQL, since tuple order is\nexplicitly *not* part of the computational model. In the eyes of the\nSQL spec, the only reason ORDER BY exists at all is for prettification\nof final output.\n\nHowever, once you add the LIMIT clause, queries like\n\tSELECT * FROM foo ORDER BY bar LIMIT 1\nsuddenly become quite interesting and useful as subqueries\n(this query gives you the whole row associated with the minimum\nvalue of bar, which is something you can't easily get in pure SQL).\n\nAs the sources stand tonight, you can have such a query as a subquery,\nbut only if you hide the ORDER/LIMIT inside a view definition. You'll\nget a syntax error if you try to write it in-line as a subquery.\nThere is no longer any good implementation reason for that; it is\nsolely a grammar restriction.\n\nSo I'm coming around to the idea that we should abandon the SQL\nrestriction and allow ORDER + LIMIT in subqueries. The trouble is\nhow to do it without confusing yacc.\n\n> BTW: yacc accepts LALR grammars, which are fairly restricted.\n> Thus the shift/reduce complaints and such don't mean it's\n> ambiguous, just that it's pushing the envelope of the LALR\n> paradigm. A lot of yacc grammars do just that, and work\n> just fine, but of course you have to know what you're doing.\n\nRight. Also, I believe it's possible that such a grammar will behave\ndifferently depending on which yacc you process it with, which would be\nbad. (We have not yet taken the step of insisting that pgsql's grammar\nis bison-only, and I don't want to.) 
So ensuring that we get no shift/\nreduce conflicts has been a shop rule around here all along.\n\nAnyway, the bottom line of all this rambling is that if you can get\nrid of the distinction between SelectStmt and select_clause altogether,\nthat would be fine with me. You might consider looking at whether you\ncan write two nonterminals: a SELECT construct that has no outer parens,\nand then an additional construct\n\n\tsubselect: SelectStmt | '(' subselect ')'\n\nwhich would be used for all the sub-select nonterminals in SelectStmt\nitself.\n\n> OTOH, maybe we don't want NOT IN (((SELECT foo FROM bar))).\n\nIf we can't do that then we're still going to get complaints, I think.\nThe original bug report in this thread was specifically that the thing\ndidn't like redundant parentheses; we should try to remove that\nrestriction in all contexts not just some.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 20:49:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] A rare error "
},
{
"msg_contents": " Date: Thu, 26 Oct 2000 20:49:22 -0400\n From: Tom Lane <tgl@sss.pgh.pa.us>\n\n Right. Also, I believe it's possible that such a grammar will behave\n differently depending on which yacc you process it with, which would be\n bad. (We have not yet taken the step of insisting that pgsql's grammar\n is bison-only, and I don't want to.) So ensuring that we get no shift/\n reduce conflicts has been a shop rule around here all along.\n\nActually, even the earliest version of yacc had very simple rules,\nwhich are inherited by all versions. In a shift/reduce conflict,\nalways shift. In a reduce/reduce conflict, always reduce by the rule\nwhich appears first in the grammar file. shift/shift conflicts\nindicate a grammer which is not LALR(1).\n\nI'm pretty sure that all versions of yacc also support %left, %right,\nand %nonassoc, which are simply techniques to eliminate shift/reduce\nconflicts in arithmetic and other expressions.\n\nI believe it is always possible to rewrite a grammer to eliminate all\nconflicts. But the rewrite can require an explosion in the number of\nrules.\n\nReduce/reduce conflicts can be risky because it is easy to\naccidentally change the ordering of the rules while editing. But\nshift/reduce conflicts are not risky. The C parser in gcc, for\nexample, written and maintained by parser experts, has 53 shift/reduce\nconflicts.\n\nIan\n",
"msg_date": "27 Oct 2000 00:50:28 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] A rare error"
}
] |
[
{
"msg_contents": "\nhttp://www.l-t.ee/marko/pgsql/pgcrypto-0.2c.diff (context diff)\nhttp://www.l-t.ee/marko/pgsql/pgcrypto-0.2u.diff (unidiff)\n\nhttp://www.l-t.ee/marko/pgsql/pgcrypto-0.2.tar.gz\n\ndiffs are for currect CVS - contrib/pgcrypto, tarball is\nstandalone variant with autoconf and 7.0.x compat hacks.\n\nWhats new:\n\n. new function for freeing digest, that avoids a memcpy\n in case of mhash.\n\n. some header shuffling.\n\nin tarball:\n\n. autoconf PG_INIT has a support for pg_config.\n\nThe tarball is basically for testing general PostgreSQL\nautoconf macros. I should do crypto library detection\nmacros too, but have not had time for that :(\n\n\nOn Fri, Oct 20, 2000 at 12:27:11AM +0200, Marko Kreen wrote:\n> as usual...\n> \n> On Thu, Oct 19, 2000 at 11:09:44PM +0200, Marko Kreen wrote:\n> > So, here it is. Changes:\n> > \n> > . Fixed a bug in mhash.c :)\n> > . uses regular contrib makefile, no autoconf\n> > . license boilerplate. lifted it from\n> > .../freebsd/src/share/examples/etc/bsd-style-copyright\n> > . dropped 7.0.x support macros\n> \n> . #include <sys/types> in pgcrypto.h\n> (I thought that postgres.h cares for that)\n> now compiles on FreeBSD\n> \n> . defaults to 'builtin'\n> \n> -- \n> marko\n> \n\n\n\n-- \nmarko\n\n",
"msg_date": "Thu, 26 Oct 2000 23:07:21 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": true,
"msg_subject": "pgcrypto 0.2"
}
] |
[
{
"msg_contents": " Date: Thursday, October 26, 2000 @ 17:35:49\nAuthor: tgl\n\nUpdate of /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes\n from hub.org:/home/projects/pgsql/tmp/cvs-serv71501/src/backend/nodes\n\nModified Files:\n\tcopyfuncs.c outfuncs.c print.c \n\n----------------------------- Log Message -----------------------------\n\nRe-implement LIMIT/OFFSET as a plan node type, instead of a hack in\nExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a\ncursor declaration will behave in a reasonable fashion, whereas before\nit was overridden by the FETCH count.\n",
"msg_date": "Thu, 26 Oct 2000 17:36:07 -0400 (EDT)",
"msg_from": "Tom Lane <tgl@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c)"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Date: Thursday, October 26, 2000 @ 17:35:49\n> Author: tgl\n>\n> Update of /home/projects/pgsql/cvsroot/pgsql/src/backend/nodes\n> from hub.org:/home/projects/pgsql/tmp/cvs-serv71501/src/backend/nodes\n>\n> Modified Files:\n> copyfuncs.c outfuncs.c print.c\n>\n> ----------------------------- Log Message -----------------------------\n>\n> Re-implement LIMIT/OFFSET as a plan node type, instead of a hack in\n> ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a\n> cursor declaration will behave in a reasonable fashion,\n\nDoes \"reasonable\" mean that LIMIT is treated as optimizer's\nhint but doesn't restrict total FETCH counts ?\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Fri, 27 Oct 2000 08:45:30 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> Re-implement LIMIT/OFFSET as a plan node type, instead of a hack in\n>> ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a\n>> cursor declaration will behave in a reasonable fashion,\n\n> Does \"reasonable\" mean that LIMIT is treated as optimizer's\n> hint but doesn't restrict total FETCH counts ?\n\nNo, it means that a LIMIT in a cursor means what it says: the cursor\nwill show that many rows and no more. FETCH lets you move around in\nthe cursor, but not override the limit. I decided that the other\nbehavior was just too darn weird... if you want to argue about that,\nlet's take it up on pghackers not committers.\n\nYes, the optimizer does pay attention to the limit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 19:49:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> Re-implement LIMIT/OFFSET as a plan node type, instead of a hack in\n> >> ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a\n> >> cursor declaration will behave in a reasonable fashion,\n>\n> > Does \"reasonable\" mean that LIMIT is treated as optimizer's\n> > hint but doesn't restrict total FETCH counts ?\n>\n> No, it means that a LIMIT in a cursor means what it says: the cursor\n> will show that many rows and no more. FETCH lets you move around in\n> the cursor, but not override the limit. I decided that the other\n> behavior was just too darn weird... if you want to argue about that,\n> let's take it up on pghackers not committers.\n>\n> Yes, the optimizer does pay attention to the limit.\n>\n\nHmm,I'm not sure about your point.\nPlease correct me if I'm misunderstanding.\n\nWe can specify rows count by FETCH command.\nIt seems to me that LIMIT in declare cursor statement is only\nfor optimizer's hint.\n\nRegards.\nHiroshi Inoue\n\n\n",
"msg_date": "Fri, 27 Oct 2000 09:09:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> It seems to me that LIMIT in declare cursor statement is only\n> for optimizer's hint.\n\nI think that would just confuse people. If we want to have a hint\nthat says \"optimize for fast start\", it ought to be done in another\nway than saying that SELECT ... LIMIT means different things in\ndifferent contexts.\n\nPossibly the optimizer should always assume that cursors ought to\nbe optimized for fast start, LIMIT or no LIMIT --- does that seem\nlike a good idea to you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 20:17:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > It seems to me that LIMIT in declare cursor statement is only\n> > for optimizer's hint.\n>\n> I think that would just confuse people.\n\nIt could be. However what does LIMIT mean ?\nRows per FETCH ? Probably no.\nFETCH forward,backward,forward,backward,.... and suddenly EOF ?\n\n> If we want to have a hint\n> that says \"optimize for fast start\", it ought to be done in another\n> way than saying that SELECT ... LIMIT means different things in\n> different contexts.\n>\n\nYes I want to give optimizer a hint \"return first rows fast\".\nWhen Jan implemented LIMIT first,there was an option\n\"LIMIT ALL\" and it was exactly designed for the purpose.\n\nRegards.\nHiroshi Inoue\n\n",
"msg_date": "Fri, 27 Oct 2000 09:47:32 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Yes I want to give optimizer a hint \"return first rows fast\".\n> When Jan implemented LIMIT first,there was an option\n> \"LIMIT ALL\" and it was exactly designed for the purpose.\n\nWell, we could make that work that way again, I think. Need to look\nat the code, but I think the optimizer could tell the difference between\na LIMIT ALL clause and no limit clause at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 20:59:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "<<< No Message Collected >>>\n",
"msg_date": "Thu, 26 Oct 2000 22:54:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Now that I look at it, the optimizer *already* prefers fast-start plans\n> for cursors. Is LIMIT ALL really necessary as an additional hint,\n> and if so how should it interact with the bias for cursors?\n>\n\nIf LIMIT doesn't restrict the total count of rows which cursors\ncould return,there's no problem. Otherwise LIMIT ALL would be\nneeded.\n\nHiroshi Inoue\n\n\n",
"msg_date": "Fri, 27 Oct 2000 12:04:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Now that I look at it, the optimizer *already* prefers fast-start plans\n>> for cursors. Is LIMIT ALL really necessary as an additional hint,\n>> and if so how should it interact with the bias for cursors?\n\n> If LIMIT doesn't restrict the total count of rows which cursors\n> could return,there's no problem. Otherwise LIMIT ALL would be\n> needed.\n\nBut is there a reason to treat LIMIT ALL differently from no LIMIT\nclause at all?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 23:05:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Now that I look at it, the optimizer *already* prefers fast-start plans\n> >> for cursors. Is LIMIT ALL really necessary as an additional hint,\n> >> and if so how should it interact with the bias for cursors?\n>\n> > If LIMIT doesn't restrict the total count of rows which cursors\n> > could return,there's no problem. Otherwise LIMIT ALL would be\n> > needed.\n>\n> But is there a reason to treat LIMIT ALL differently from no LIMIT\n> clause at all?\n>\n\nFor example,LIMIT ALL means LIMIT 1 for optimizer and means\nno LIMIT for executor.\nComments ?\n\nRegards, Hiroshi Inoue.\n\n\n",
"msg_date": "Fri, 27 Oct 2000 12:11:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> For example,LIMIT ALL means LIMIT 1 for optimizer and means\n> no LIMIT for executor.\n> Comments ?\n\nI don't see the point. In the context of a regular SELECT, optimizing\nthat way would be wrong, because we are going to fetch all the data.\nIn the context of a DECLARE CURSOR, we already have a bias for fast-\nstart plans, so why do we need another?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 23:14:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > For example,LIMIT ALL means LIMIT 1 for optimizer and means\n> > no LIMIT for executor.\n> > Comments ?\n>\n> I don't see the point. In the context of a regular SELECT, optimizing\n> that way would be wrong, because we are going to fetch all the data.\n> In the context of a DECLARE CURSOR, we already have a bias for fast-\n> start plans, so why do we need another?\n>\n\nHmm,I missed something ?\nHow would be the behavior of the following command sequence ?\n\nbegin;\ndeclare myc cursor for select * from t1 limit 1;\nfetch in myc;\nfetch in myc;\n\nCould the last fetch return a row ?\n\nRegards, Hiroshi Inoue.\n\n\n",
"msg_date": "Fri, 27 Oct 2000 12:32:51 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> How would be the behavior of the following command sequence ?\n\n> begin;\n> declare myc cursor for select * from t1 limit 1;\n> fetch in myc;\n> fetch in myc;\n\n> Could the last fetch return a row ?\n\nAs the code now stands, the second fetch would return nothing.\nI think this is clearly what any reasonable person would expect\ngiven the LIMIT 1 clause.\n\nLIMIT ALL is a different story, because there's no semantic difference\nbetween writing LIMIT ALL and writing no limit clause at all. We have\nthe option to create a distinction for planning purposes, however.\nQuestion is do we need one?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 23:36:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > How would be the behavior of the following command sequence ?\n>\n> > begin;\n> > declare myc cursor for select * from t1 limit 1;\n> > fetch in myc;\n> > fetch in myc;\n>\n> > Could the last fetch return a row ?\n>\n> As the code now stands, the second fetch would return nothing.\n> I think this is clearly what any reasonable person would expect\n> given the LIMIT 1 clause.\n>\n\nDifferent from ordinary select statements we could\ngain the same result in case of cursors.\n\nbegin;\ndeclare myc cursor for select * from t1;\nfetch in myc;\n\n\nFor example,\n\nbegin;\ndeclare myc cursor for select * from t1 limit all;\nfetch 20 in myc; (the first page)\n...(interaction)\nfetch 20 in myc; (the next page)\n..(interaction)\nfetch backward 20 in myc; (the previous page)\n...\n\nWhat I expect here is to get rows of each page in\nan average response time not the total throughput\nof db operation.\n\nRegards, Hiroshi Inoue\n\n\n\n\n>\n> LIMIT ALL is a different story, because there's no semantic difference\n> between writing LIMIT ALL and writing no limit clause at all. We have\n> the option to create a distinction for planning purposes, however.\n> Question is do we need one?\n>\n> regards, tom lane\n\n",
"msg_date": "Fri, 27 Oct 2000 12:59:02 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> begin;\n> declare myc cursor for select * from t1 limit all;\n> fetch 20 in myc; (the first page)\n> ...(interaction)\n> fetch 20 in myc; (the next page)\n> ..(interaction)\n> fetch backward 20 in myc; (the previous page)\n> ...\n\n> What I expect here is to get rows of each page in\n> an average response time not the total throughput\n> of db operation.\n\nYes, but why should the presence of \"limit all\" affect that?\nIt's not apparent to me why the optimizer should treat this\ncase differently from plain\ndeclare myc cursor for select * from t1;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 00:24:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > begin;\n> > declare myc cursor for select * from t1 limit all;\n> > fetch 20 in myc; (the first page)\n> > ...(interaction)\n> > fetch 20 in myc; (the next page)\n> > ..(interaction)\n> > fetch backward 20 in myc; (the previous page)\n> > ...\n>\n> > What I expect here is to get rows of each page in\n> > an average response time not the total throughput\n> > of db operation.\n>\n> Yes, but why should the presence of \"limit all\" affect that?\n> It's not apparent to me why the optimizer should treat this\n> case differently from plain\n> declare myc cursor for select * from t1;\n>\n\nAm I misunderstanding ?\nDoesn't optimizer make the plan for the query\n\"select * for t1\" which would use SeqScan\nin most cases ?\n\nRegards, Hiroshi Inoue\n\n\n",
"msg_date": "Fri, 27 Oct 2000 13:31:20 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Yes, but why should the presence of \"limit all\" affect that?\n>> It's not apparent to me why the optimizer should treat this\n>> case differently from plain\n>> declare myc cursor for select * from t1;\n\n> Am I misunderstanding ?\n> Doesn't optimizer make the plan for the query\n> \"select * for t1\" which would use SeqScan\n> in most cases ?\n\nIn a plain SELECT, yes. In a DECLARE CURSOR, it's currently set up\nto prefer indexscans anyway, LIMIT or no LIMIT (see lines 853 ff in\nsrc/backend/optimizer/plan/planner.c, current sources). I think it\nmakes sense to have that preference for DECLARE, and what I'm wondering\nis if we need an additional preference when the DECLARE contains a LIMIT\nclause --- and if so, what should that be?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 00:34:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Yes, but why should the presence of \"limit all\" affect that?\n> >> It's not apparent to me why the optimizer should treat this\n> >> case differently from plain\n> >> declare myc cursor for select * from t1;\n>\n> > Am I misunderstanding ?\n> > Doesn't optimizer make the plan for the query\n> > \"select * for t1\" which would use SeqScan\n> > in most cases ?\n>\n> In a plain SELECT, yes. In a DECLARE CURSOR, it's currently set up\n> to prefer indexscans anyway, LIMIT or no LIMIT (see lines 853 ff in\n> src/backend/optimizer/plan/planner.c, current sources).\n\nProbably you mean\n if (parse->isPortal)\n tuple_fraction = 0.10;\n\nSeems 0.10 isn't sufficiently small in pretty many cases.\nIn addition,SeqScan isn't used even when we want it e.g.\nin the case cursors are just used to avoid the exhaustion\nof memory.\n\n\n> I think it\n> makes sense to have that preference for DECLARE, and what I'm wondering\n> is if we need an additional preference when the DECLARE contains a LIMIT\n> clause --- and if so, what should that be?\n>\n\nI don't think we can specify appropriate LIMIT for cursors.\nWe could judge if an application needs an average response\ntime or total throughput.\n\nRegards, Hiroshi Inoue\n\n\n",
"msg_date": "Fri, 27 Oct 2000 14:06:42 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Seems 0.10 isn't sufficently small in pretty many cases.\n\nAgreed, it's an arbitrary number. But where/how can we get a better\none?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 01:08:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Seems 0.10 isn't sufficiently small in pretty many cases.\n>\n> Agreed, it's an arbitrary number. But where/how can we get a better\n> one?\n>\n\nI recommend the choice of \"LIMIT 1\" and \"no LIMIT\" by\nsome option. Others would have different opinions but\nhave ML members seen this discussion ? I've received\nno mails about this thread from ML.\n\nRegards, Hiroshi Inoue\n\n\n",
"msg_date": "Fri, 27 Oct 2000 14:19:54 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n print.c)"
},
{
"msg_contents": "> have ML members seen this discussion ?\n\nThe whole thread's only gone to pgsql-committers, which has pretty\nsmall readership AFAIK.\n\nIf you want a wider discussion, let's bring it up in pghackers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 01:20:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "At 20:17 26/10/00 -0400, Tom Lane wrote:\n>\n>Possibly the optimizer should always assume that cursors ought to\n>be optimized for fast start, LIMIT or no LIMIT --- does that seem\n>like a good idea to you?\n>\n\nNo to me; I'd vote for an 'OPTIMIZE FOR FAST START' phrase ahead of always\ndoing a fast start. It also seems natural that a LIMIT clause should (a)\nlimit the results in all cases and (b) be used by the optimiser in such a\nway that it *considers* a fast start approach more carefully.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Oct 2000 20:43:52 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c\n outfuncs.c print.c)"
},
{
"msg_contents": "At 12:11 27/10/00 +0900, Hiroshi Inoue wrote:\n>\n>For example,LIMIT ALL means LIMIT 1 for optimizer and means\n>no LIMIT for executor.\n>Comments ?\n>\n\nIt seems there's two possibilities:\n\n(a) You know you will only use a limited number of rows, but you are not\nsure exactly how many. In this case, I'd vote for a 'OPTIMIZE FOR FAST\nSTART' clause.\n\n(b) You really want all rows, in which case you should let the optimizer do\nit's stuff. If it fails to work well, then use either 'OPTIMIZE FOR TOTAL\nCOST' or 'OPTIMIZE FOR FAST START' to change the behaviour.\n\nISTM that LIMIT ALL is just the syntax for the default limit clause - and\nshould, if anything, be equivalent to 'OPTIMIZE FOR TOTAL COST'.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Oct 2000 20:50:17 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c\n outfuncs.c print.c)"
},
{
"msg_contents": "At 20:59 26/10/00 -0400, Tom Lane wrote:\n>Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> Yes I want to give optimizer a hint \"return first rows fast\".\n>> When Jan implemented LIMIT first,there was an option\n>> \"LIMIT ALL\" and it was exactly designed for the purpose.\n>\n>Well, we could make that work that way again, I think. \n\nI think that would be a *bad* idea. ISTM that the syntax is obtuse for the\nmeaning it is being given. The (mild) confusion in this thread is evidence\nof that, at least.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Oct 2000 21:18:10 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c\n outfuncs.c print.c)"
},
{
"msg_contents": "At 21:03 28/10/00, wrote:\n>\n>In a plain SELECT, yes. In a DECLARE CURSOR, it's currently set up\n>to prefer indexscans anyway, LIMIT or no LIMIT (see lines 853 ff in\n>src/backend/optimizer/plan/planner.c, current sources). I think it\n>makes sense to have that preference for DECLARE, and what I'm wondering\n>is if we need an additional preference when the DECLARE contains a LIMIT\n>clause --- and if so, what should that be?\n\nDo you really think it's not such a good idea to have different optimizer\nbehaviour for SELECT and DECLARE CURSOR? My expectation is that putting a\nSELECT statement inside a cursor should not change it's performance. I'd be\ninterested to know the reasons for your choice.\n\nThe simplest solution would be to use the same logic in both cases; if the\ncorrect behaviour is not obvious to you, then our users will be left second\nguessing the optimizer once you start introducing several special cases\n(LIMIT in SELECT, LIMIT ALL in SELECT, LIMIT in DECLARE etc etc).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 28 Oct 2000 23:34:00 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c\n outfuncs.c print.c)"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Do you really think it's not such a good idea to have different optimizer\n> behaviour for SELECT and DECLARE CURSOR? My expectation is that putting a\n> SELECT statement inside a cursor should not change it's performance. I'd be\n> interested to know the reasons for your choice.\n\nI think it's an excellent idea to have different behaviors, and the\nreason is that we know a stand-alone SELECT will deliver all its result\nrows, whereas for DECLARE it's quite possible that not all the possible\nresult rows will be fetched. Moreover, the user is likely to fetch the\ncursor's results in bite-size chunks, so he will be interested in\naverage response time as well as total time.\n\nIn the proposal as written, LIMIT ALL and LIMIT n will in fact give rise\nto identical behavior in both contexts, and it's only the case without\nan explicit LIMIT that will behave differently. (If we add a\nSET-variable to control this, you could even make that behavior the same\ntoo, by setting the variable to 1.0.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 12:35:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
},
{
"msg_contents": "Tom,thanks for your good summary.\nSeems this is the latest posting for this thread.\n\n> -----Original Message-----\n> From: Tom Lane\n> \n> Philip Warner <pjw@rhyme.com.au> writes:\n> > Do you really think it's not such a good idea to have different \n> optimizer\n> > behaviour for SELECT and DECLARE CURSOR? My expectation is that \n> putting a\n> > SELECT statement inside a cursor should not change it's \n> performance. I'd be\n> > interested to know the reasons for your choice.\n> \n> I think it's an excellent idea to have different behaviors, and the\n> reason is that we know a stand-alone SELECT will deliver all its result\n> rows, whereas for DECLARE it's quite possible that not all the possible\n> result rows will be fetched. Moreover, the user is likely to fetch the\n> cursor's results in bite-size chunks, so he will be interested in\n> average response time as well as total time.\n>\n\nCursors have a different character from stand-alone SELECT.\nWe don't have to FETCH results continuously from cursors.\nIt's well known that an average response time is significant\nin some applications. For example,we could make interactive\npaging applications which require a next/prior page(small part\nof the result of a query) according to user's request.\n\nThere may be more excellent ways to achieve it but I don't\nknow how to do it in PostgreSQL.\n\nRegards.\nHiroshi Inoue\n",
"msg_date": "Sun, 29 Oct 2000 22:39:07 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: pgsql/src/backend/nodes (copyfuncs.c outfuncs.c print.c) "
}
] |
[
{
"msg_contents": "The following sequence, which works in 7.0.2, now fails in current\nsources:\n\nCREATE TABLE person\n(\n ptype SMALLINT,\n id CHAR(10) PRIMARY KEY,\n name TEXT NOT NULL,\n address INTEGER REFERENCES address (id)\n ON UPDATE CASCADE\n ON DELETE NO ACTION,\n salutation TEXT DEFAULT 'Dear Sir',\n envelope TEXT,\n email TEXT,\n www TEXT,\n CONSTRAINT person_ptype CHECK (ptype >= 0 AND ptype <= 8),\n FOREIGN KEY (id, address) REFERENCES address(person, id)\n\tON UPDATE CASCADE\n\tON DELETE RESTRICT\n)\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'person_pkey' for \ntable 'person'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nCREATE\nCREATE TABLE individual\n(\n gender CHAR(1) CHECK (gender = 'M' OR\n gender = 'F' OR\n gender IS NULL),\n born DATE CHECK ((born >= '1 Jan 1880' AND\n born <= CURRENT_DATE) OR\n born IS NULL),\n surname TEXT,\n forenames TEXT,\n title TEXT,\n old_surname TEXT,\n mobile TEXT,\n ni_no TEXT,\n CONSTRAINT is_named CHECK (NOT (surname IS NULL AND forenames IS NULL)),\n CONSTRAINT individual_ptype CHECK (ptype = 1 OR (ptype >= 5 AND ptype <= \n7)),\n FOREIGN KEY (id, address) REFERENCES address(person, id)\n\tON UPDATE CASCADE\n\tON DELETE RESTRICT\n)\n INHERITS (person)\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\nERROR: columns referenced in foreign key constraint not found.\n\nI think this means that the FOREIGN KEY installer thinks that table\n\"individual\" does not have columns \"id\" and \"address\", which are\ninherited from \"person\". However, if this is correct, the wording of the\nerror message is poorly chosen, since it seems to refer to the columns\nmentioned after the REFERENCES keyword.\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Whosoever therefore shall be ashamed of me and of my \n words in this adulterous and sinful generation; of him\n also shall the Son of man be ashamed, when he cometh \n in the glory of his Father with the holy angels.\" \n Mark 8:38 \n\n\n",
"msg_date": "Thu, 26 Oct 2000 22:42:06 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Foreign key references now fails with inherited columns"
},
{
"msg_contents": "On Thu, 26 Oct 2000, Oliver Elphick wrote:\n\n> The following sequence, which works in 7.0.2, now fails in current\n> sources:\n\n> [example snipped]\n> \n> I think this means that the FOREIGN KEY installer thinks that table\n> \"individual\" does not have columns \"id\" and \"address\", which are\n> inherited from \"person\". However, if this is correct, the wording of the\n> error message is poorly chosen, since it seems to refer to the columns\n> mentioned after the REFERENCES keyword.\n\nYeah, that is what it means, and yes, that was probably a bad choice of\nwording (will fix that too). Hmm, without looking (no access to code from\nwork unless I get another copy), that probably means that the list of\ncolumns in the function doesn't get those columns (at least by that\npoint).\n\n\n",
"msg_date": "Thu, 26 Oct 2000 18:24:41 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key references now fails with inherited columns"
}
] |
[
{
"msg_contents": "After reviewing a number of past threads about the INET/CIDR mess,\nI have concluded that we should adopt the following behavior:\n\n1. A data value like '10.1.2.3/16' is a legal INET value (it implies\nthe host 10.1.2.3 in the network 10.1/16) but not a legal CIDR value.\nHence, cidr_in should reject such a value. Up to now it hasn't.\n\n2. We do not have a datatype corresponding strictly to a host address\nalone --- to store a plain address, use INET and let the mask width\ndefault to 32. inet_out suppresses display of a \"/32\" netmask (whereas\ncidr_out does not).\n\n3. Given that CIDRs never have invalid bits set, we can use the same\nordering rules for both datatypes: sort by address part, then by\nnumber of bits. This is compatible with what 7.0 did when sorting.\nIt is *not* quite the same as what current sources do, but I will revert\nthat change.\n\nI didn't see anyone objecting to this scheme in past discussions, but\nI also didn't see any clear statement that all the interested parties\nhad agreed to it. Last chance to complain...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2000 19:15:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001026 18:46]:\n> After reviewing a number of past threads about the INET/CIDR mess,\n> I have concluded that we should adopt the following behavior:\n> \n> 1. A data value like '10.1.2.3/16' is a legal INET value (it implies\n> the host 10.1.2.3 in the network 10.1/16) but not a legal CIDR value.\n> Hence, cidr_in should reject such a value. Up to now it hasn't.\n> \n> 2. We do not have a datatype corresponding strictly to a host address\n> alone --- to store a plain address, use INET and let the mask width\n> default to 32. inet_out suppresses display of a \"/32\" netmask (whereas\n> cidr_out does not).\n> \n> 3. Given that CIDRs never have invalid bits set, we can use the same\n> ordering rules for both datatypes: sort by address part, then by\n> number of bits. This is compatible with what 7.0 did when sorting.\n> It is *not* quite the same as what current sources do, but I will revert\n> that change.\n> \n> I didn't see anyone objecting to this scheme in past discussions, but\n> I also didn't see any clear statement that all the interested parties\n> had agreed to it. Last chance to complain...\nI'd like to see a way to get all 4 octets of a CIDR printed out... \n\nAlso a way to get network (.0) and broadcast (all ones) for a cidr\nblock out of our stuff. \n\nLarry\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 26 Oct 2000 18:49:38 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "\nmakes sense to me\n\nOn Thu, 26 Oct 2000, Tom Lane wrote:\n\n> After reviewing a number of past threads about the INET/CIDR mess,\n> I have concluded that we should adopt the following behavior:\n> \n> 1. A data value like '10.1.2.3/16' is a legal INET value (it implies\n> the host 10.1.2.3 in the network 10.1/16) but not a legal CIDR value.\n> Hence, cidr_in should reject such a value. Up to now it hasn't.\n> \n> 2. We do not have a datatype corresponding strictly to a host address\n> alone --- to store a plain address, use INET and let the mask width\n> default to 32. inet_out suppresses display of a \"/32\" netmask (whereas\n> cidr_out does not).\n> \n> 3. Given that CIDRs never have invalid bits set, we can use the same\n> ordering rules for both datatypes: sort by address part, then by\n> number of bits. This is compatible with what 7.0 did when sorting.\n> It is *not* quite the same as what current sources do, but I will revert\n> that change.\n> \n> I didn't see anyone objecting to this scheme in past discussions, but\n> I also didn't see any clear statement that all the interested parties\n> had agreed to it. Last chance to complain...\n> \n> \t\t\tregards, tom lane\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 26 Oct 2000 21:17:27 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Also a way to get network (.0) and broadcast (all ones) for a cidr\n> block out of our stuff. \n\nnetwork() and broadcast() have been there all along ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 10:49:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 09:49]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Also a way to get network (.0) and broadcast (all ones) for a cidr\n> > block out of our stuff. \n> \n> network() and broadcast() have been there all along ...\nbut don't work on CIDR types.....\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 09:51:42 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001027 09:51]:\n> * Tom Lane <tgl@sss.pgh.pa.us> [001027 09:49]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Also a way to get network (.0) and broadcast (all ones) for a cidr\n> > > block out of our stuff. \n> > \n> > network() and broadcast() have been there all along ...\n> but don't work on CIDR types.....\nAnd I get to be wrong. \n\nSorry about that. \n\nBut, it would still be nice if we can force all 4 octets to be printed\nfor the network funcs..\n\nLER\n\n> \n> LER\n> \n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 09:55:37 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 09:49]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Also a way to get network (.0) and broadcast (all ones) for a cidr\n> > block out of our stuff. \n> \n> network() and broadcast() have been there all along ...\nOK, what I really meant was a way to coerce a CIDR entity to INET so \nthat host() can work with a CIDR type to print all 4 octets. \n\nDoes this help with what I want? \n\nCurrently you can't coerce a CIDR type to INET. \n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 11:08:24 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001027 11:08]:\n> * Tom Lane <tgl@sss.pgh.pa.us> [001027 09:49]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Also a way to get network (.0) and broadcast (all ones) for a cidr\n> > > block out of our stuff. \n> > \n> > network() and broadcast() have been there all along ...\n> OK, what I really meant was a way to coerce a CIDR entity to INET so \n> that host() can work with a CIDR type to print all 4 octets. \n> \n> Does this help with what I want? \n> \n> Currently you can't coerce a CIDR type to INET. \nFor example, I feel the following should work:\n\nler=# \\d ler_test\n Table \"ler_test\"\n Attribute | Type | Modifier\n-----------+------+----------\n net | cidr |\n host | inet |\n\nler=# select * from ler_test;\n net | host\n---------------+------------------\n 207.158.72/24 | 207.158.72.11/24\n(1 row)\n\nler=# select host(net::inet) from ler_test;\nERROR: CIDR type has no host part\nERROR: CIDR type has no host part\nler=#\n> \n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 11:13:52 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Tom Lane writes:\n\n> 1. A data value like '10.1.2.3/16' is a legal INET value (it implies\n> the host 10.1.2.3 in the network 10.1/16) but not a legal CIDR value.\n> Hence, cidr_in should reject such a value. Up to now it hasn't.\n\nNod.\n\n> 2. We do not have a datatype corresponding strictly to a host address\n> alone --- to store a plain address, use INET and let the mask width\n> default to 32. inet_out suppresses display of a \"/32\" netmask (whereas\n> cidr_out does not).\n\nInet is supposed to be host address, with optional network specification.\n\nI also have in my notes (some might have been fixed since):\n\n* inet output is broken => 127.0.0.1/8\n* no cast function to \"text\" available (what about host()?)\n* equality/distinctness is broken in certain cases => select\n'10.0.0.1/27'::inet='10.0.0.2/27'::inet; returns true\n* operator commutators and negators are incorrect\n* output functions apparently null-terminate their result => select\nhost('10.0.0.1')='10.0.0.1'; returns false\n* comparing inet and cidr is not well defined\n* should '127.0.0.1/24'::cidr fail?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Fri, 27 Oct 2000 21:15:56 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Inet is supposed to be host address, with optional network specification.\n\nAgreed. As such, it probably should always display all 4 octets\nregardless of the maskwidth. It doesn't do that at the moment:\n\nregression=# select '127.0.0.1/8'::inet;\n ?column?\n----------\n 127.0/8\n(1 row)\n\nThis is clearly bad. I will change it to produce '127.0.0.1/8',\nunless someone has a better idea.\n\n> I also have in my notes (some might have been fixed since):\n\n> * inet output is broken => 127.0.0.1/8\n\nSee above.\n\n> * no cast function to \"text\" available\n\nI don't see much point in solving that issue on a one-datatype-at-a-time\nbasis. Sooner or later we should fix things so that the datatype I/O\nconversion functions can be invoked safely in expressions.\n\n> * equality/distinctness is broken in certain cases => select\n> '10.0.0.1/27'::inet='10.0.0.2/27'::inet; returns true\n\nThis is now fixed.\n\n> * operator commutators and negators are incorrect\n\nFixed.\n\n> * output functions apparently null-terminate their result => select\n> host('10.0.0.1')='10.0.0.1'; returns false\n\nNot sure what that has to do with output functions, but I get 'true'\nnow.\n\n> * comparing inet and cidr is not well defined\n\nPerhaps not. There was a whole lot of argument about that point,\nand it didn't seem to me that we came to any real agreement.\n\n> * should '127.0.0.1/24'::cidr fail?\n\nLooks like we've resolved that as \"yes\".\n\nThere are still unresolved issues about whether inet and cidr should be\nconsidered binary-equivalent, what network_sup/sub mean when comparing\ninet and cidr, whether we are missing any important functions, etc.\nI'm not hoping to get these resolved for 7.1, considering we are nearly\nat beta stage and don't even have a complete proposal for what to do.\nI'm satisfied for the moment with having eliminated the failure to\ncompare all bits of the values, which led to bogus equality results\nand consequent malfunction of indexes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 15:25:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "On Fri, 27 Oct 2000, Larry Rosenman wrote:\n> ler=# select * from ler_test;\n> net | host\n> ---------------+------------------\n> 207.158.72/24 | 207.158.72.11/24\n> (1 row)\n> \n> ler=# select host(net::inet) from ler_test;\n> ERROR: CIDR type has no host part\n> ERROR: CIDR type has no host part\nI agree. There should be a coercion function, but it should never be\nautomatic...But since now there aren't any automatic coercions, that's not\na problem ;)\n\nAlso, I agree with Larry that cidr _must_ be printed with 4 octets in\nthem, whether they are 0 or not. (i.e. it should print 207.158.72.0/24)\n\nThis is the standard way of specifying addresses in all network equipment.\nRFC specifies that, just the library that we use doesn't (yes, it is from\nVixie, but it doesn't make it RFC-compliant)\n\nI'll submit patches in a week or so, when I start straightening out my\nnetwork equipment tables...;)\n\n-alex\n\n",
"msg_date": "Fri, 27 Oct 2000 15:43:42 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Alex Pilosov <alex@pilosoft.com> [001027 14:43]:\n> On Fri, 27 Oct 2000, Larry Rosenman wrote:\n> > ler=# select * from ler_test;\n> > net | host\n> > ---------------+------------------\n> > 207.158.72/24 | 207.158.72.11/24\n> > (1 row)\n> > \n> > ler=# select host(net::inet) from ler_test;\n> > ERROR: CIDR type has no host part\n> > ERROR: CIDR type has no host part\n> I agree. There should be a coercion function, but it should never be\n> automatic...But since now there aren't any automatic coercions, that's not\n> a problem ;)\n> \n> Also, I agree with Larry that cidr _must_ be printed with 4 octets in\n> them, whether they are 0 or not. (i.e. it should print 207.158.72.0/24)\n> \n> This is the standard way of specifying addresses in all network equipment.\n> RFC specifies that, just the library that we use doesn't (yes, it is from\n> Vixie, but it doesn't make it RFC-compliant)\nand network(cidr) should print ONLY the octets, not the mask...\n\nLER\n\n> \n> I'll submit patches in a week or so, when I start straightening out my\n> network equipment tables...;)\n> \n> -alex\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 14:45:58 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "On Fri, 27 Oct 2000, Larry Rosenman wrote:\n> and network(cidr) should print ONLY the octets, not the mask...\nAgreed. There's a function to get the mask size, and the network should\njust return the network. Otherwise, it is impossible to use.\n\n-alex\n\n",
"msg_date": "Fri, 27 Oct 2000 15:55:21 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> OK, what I really meant was a way to coerce a CIDR entity to INET so \n> that host() can work with a CIDR type to print all 4 octets. \n\nHm. I don't see any really good reason why host() rejects CIDR input\nin the first place. What's wrong with producing the host address\nthat corresponds to extending the CIDR network address with zeroes?\n\n> Currently you can't coerce a CIDR type to INET. \n\nWell you can, but it doesn't *do* anything. One of the peculiarities\nof these two types is that the cidr-vs-inet flag is actually stored\nin the data value. The type-system differentiation between CIDR and\nINET is a complete no-op for everything except initial entry of a value\n(ie, conversion of a text string to CIDR or INET); all the operators\nthat care (which is darn few ... in fact it looks like host() is the\nonly one!) look right at the value to see which type they've been given.\nSo applying a type coercion may make the type system happy, but it\ndoesn't do a darn thing to the bits, and thus not to the behavior of\nsubsequent operators either. I have not yet figured out if that's a\ngood thing or a bad thing ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 16:07:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Also, I agree with Larry that cidr _must_ be printed with 4 octets in\n> them, whether they are 0 or not. (i.e. it should print 207.158.72.0/24)\n\n> This is the standard way of specifying addresses in all network equipment.\n> RFC specifies that, just the library that we use doesn't (yes, it is from\n> Vixie, but it doesn't make it RFC-compliant)\n\nSomehow, I am more inclined to believe Vixie's opinion on this than\neither yours or Larry's ;-)\n\nIf you think there is an RFC that demands the above behavior and not\nwhat Vixie recommended to us, let's see chapter and verse.\n\nFWIW, the direction we seem to be converging in is that INET will always\nprint all four octets. Maybe the answer for you is to use INET, rather\nthan to try to persuade us that you understand CIDR notation better than\nVixie does...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 16:14:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 15:14]:\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Also, I agree with Larry that cidr _must_ be printed with 4 octets in\n> > them, whether they are 0 or not. (i.e. it should print 207.158.72.0/24)\n> \n> > This is the standard way of specifying addresses in all network equipment.\n> > RFC specifies that, just the library that we use doesn't (yes, it is from\n> > Vixie, but it doesn't make it RFC-compliant)\n> \n> Somehow, I am more inclined to believe Vixie's opinion on this than\n> either yours or Larry's ;-)\n> \n> If you think there is an RFC that demands the above behavior and not\n> what Vixie recommended to us, let's see chapter and verse.\n> \n> FWIW, the direction we seem to be converging in is that INET will always\n> print all four octets. Maybe the answer for you is to use INET, rather\n> than to try to persuade us that you understand CIDR notation better than\n> Vixie does...\nWhat I need is a way to convince PG to print all 4 octets from a CIDR\ntype. I *WANT* the safety of the CIDR type for blocks of addresses,\nbut need to be able to print all 4 octets out for NON-TECHIES. \n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 15:16:36 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "On Fri, 27 Oct 2000, Tom Lane wrote:\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > OK, what I really meant was a way to coerce a CIDR entity to INET so \n> > that host() can work with a CIDR type to print all 4 octets. \n> \n> Hm. I don't see any really good reason why host() rejects CIDR input\n> in the first place. What's wrong with producing the host address\n> that corresponds to extending the CIDR network address with zeroes?\n_maybe_ cuz this is an invalid address. (an address cannot have all-zeros\nor all-ones host part). On other hand, postgres doesn't enforce that in\ninet_in, so its inconsistent to enforce it there...\n\n> > Currently you can't coerce a CIDR type to INET. \n> \n> Well you can, but it doesn't *do* anything. One of the peculiarities\n> of these two types is that the cidr-vs-inet flag is actually stored\n> in the data value. The type-system differentiation between CIDR and\n> INET is a complete no-op for everything except initial entry of a value\n> (ie, conversion of a text string to CIDR or INET); all the operators\n> that care (which is darn few ... in fact it looks like host() is the\n> only one!) look right at the value to see which type they've been given.\n> So applying a type coercion may make the type system happy, but it\n> doesn't do a darn thing to the bits, and thus not to the behavior of\n> subsequent operators either. I have not yet figured out if that's a\n> good thing or a bad thing ...\nProbably cidr_inet should make a copy instead of just \"blessing\" the\noriginal value?\n\n-alex\n\n",
"msg_date": "Fri, 27 Oct 2000 16:47:18 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "BTW, does it strike anyone else as peculiar that the host(),\nbroadcast(), network(), and netmask() functions yield results\nof type text, rather than type inet? Seems like it'd be considerably\nmore useful if they returned values of type inet with masklen = 32\n(except for network(), which would keep the original masklen while\ncoercing bits to its right to 0).\n\nGiven the current proposal that inet_out should always display all 4\noctets, and the existing fact that inet_out suppresses display of\na /32 netmask, the textual display of SELECT host(...) etc would\nremain the same as it is now. But AFAICS you could do more with\nan inet-type result value, like say compare it to other inet or cidr\nvalues ...\n\nComments? Why was it done this way, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:03:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 17:04]:\n> BTW, does it strike anyone else as peculiar that the host(),\n> broadcast(), network(), and netmask() functions yield results\n> of type text, rather than type inet? Seems like it'd be considerably\n> more useful if they returned values of type inet with masklen = 32\n> (except for network(), which would keep the original masklen while\n> coercing bits to its right to 0).\n> \n> Given the current proposal that inet_out should always display all 4\n> octets, and the existing fact that inet_out suppresses display of\n> a /32 netmask, the textual display of SELECT host(...) etc would\n> remain the same as it is now. But AFAICS you could do more with\n> an inet-type result value, like say compare it to other inet or cidr\n> values ...\n> \n> Comments? Why was it done this way, anyway?\nIt doesn't bother me, as long as there is someway for me to get from a\nCIDR type to 4 octets output with no mask indicated, and print the\nbroadcast and netmask and bits out separately from ONE column in the\ntable. \n\nI.E. for select\nnetwork('207.158.72.0/24'),broadcast('207.158.72.0/24'),netmask('207.158.72.0/24') \nI get \n\n207.158.72.0 207.158.72.255 255.255.255.0 \n\nas output. \n\nAside from that, I'm not picky. \n\nLarry\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 17:24:06 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I.E. for select network('207.158.72.0/24')\n> I get \n> 207.158.72.0\n\nTo my mind that should be done with host(), not network(). If you strip\nthe masklen information then what you have is no longer a network\nspecification, so expecting a function named network() to behave that\nway strikes me as bizarre.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:29:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 17:29]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I.E. for select network('207.158.72.0/24')\n> > I get \n> > 207.158.72.0\n> \n> To my mind that should be done with host(), not network(). If you strip\n> the masklen information then what you have is no longer a network\n> specification, so expecting a function named network() to behave that\n> way strikes me as bizarre.\nFine, but host() rejects CIDR types right now....\n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 17:37:32 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Fine, but host() rejects CIDR types right now....\n\nWhat's your point? network() doesn't behave the way you want right now,\neither.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:41:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "\nOn Fri, 27 Oct 2000, Tom Lane wrote:\n\n> BTW, does it strike anyone else as peculiar that the host(),\n> broadcast(), network(), and netmask() functions yield results\n> of type text, rather than type inet? Seems like it'd be considerably\n> more useful if they returned values of type inet with masklen = 32\n> (except for network(), which would keep the original masklen while\n> coercing bits to its right to 0).\nI absolutely agree, except for network(), which should return cidr.\n(after all, this is the network).\n\nAs I mentioned in another email, should inet datatype really care whether\nhost part is all-ones or all-zeros and reject that? It would make sense to\nme (10.0.0.0/8::inet is not a valid address, but 10.0.0.0/8::cidr is), but\nit would break some people's scripts...\n\nI'm talking here from a perspective of a network provider with P\nknowledge...I'm sure Marc can chime in here...\n\n -alex\n\n\n\n",
"msg_date": "Fri, 27 Oct 2000 18:42:03 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 17:41]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Fine, but host() rejects CIDR types right now....\n> \n> What's your point? network() doesn't behave the way you want right now,\n> either.\nFine, network() can return CIDR (207.158.72/24), but allow host(cidr)\nto print all 4 octets without the mask. \n\nLarry\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 17:43:56 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> On Fri, 27 Oct 2000, Tom Lane wrote:\n>> BTW, does it strike anyone else as peculiar that the host(),\n>> broadcast(), network(), and netmask() functions yield results\n>> of type text, rather than type inet?\n\n> I absolutely agree, except for network(), which should return cidr.\n\nWe could do that, but if we did, it would print out per CIDR format\n(eg, '192.1/16') whereas both you and Larry have been saying you want\na way to produce '192.1.0.0/16'. Perhaps we need two functions, one\nto produce the network in CIDR notation and one to produce it in INET\nnotation.\n\nFor that matter, perhaps we should not change host() to accept CIDR\nbut instead provide a separate function that does what I proposed\nhost() should do with a CIDR. Not sure.\n\n> As I mentioned in another email, should inet datatype really care whether\n> host part is all-ones or all-zeros and reject that?\n\nI'm inclined to think not, partially because that would mean that the\nresults of broadcast() and network() could *NOT* be considered valid\nINET values.\n\nThe way I'm visualizing this, INET is a generalized type that will store\nany 4-octet address plus any netmask width from 1 to 32. This includes\nnot only host addresses, but network specs and broadcast addresses.\nCIDR is a subset type that only accepts valid network specs (ie, no\nnonzero address bits to the right of the netmask). There is no subset\ntype that corresponds to \"valid host addresses only\" --- if there were,\nit would be a subset of INET but would have no valid values in common\nwith CIDR. We could make such a type but I dunno if it's worth the\ntrouble.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:54:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "I wrote:\n> There is no subset type that corresponds to \"valid host addresses\n> only\" --- if there were, it would be a subset of INET but would have\n> no valid values in common with CIDR.\n\nI take that back --- CIDR accepts w.x.y.z/32 for any w.x.y.z, which\nwould include valid host addresses. (But perhaps it should only\naccept netmasks shorter than 32 bits? Not sure if \"CIDR\" is commonly\nunderstood to be network specs only, or network and host specs.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 18:58:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "On Fri, 27 Oct 2000, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Also, I agree with Larry that cidr _must_ be printed with 4 octets in\n> > them, whether they are 0 or not. (i.e. it should print 207.158.72.0/24)\n> \n> > This is the standard way of specifying addresses in all network equipment.\n> > RFC specifies that, just the library that we use doesn't (yes, it is from\n> > Vixie, but it doesn't make it RFC-compliant)\n> \n> Somehow, I am more inclined to believe Vixie's opinion on this than\n> either yours or Larry's ;-)\n\n> If you think there is an RFC that demands the above behavior and not\n> what Vixie recommended to us, let's see chapter and verse.\n\nAfter a long search of RFCs, I could not find any that _mandates_ one way\nover the other in all situations. However, in all RFC, whenever an example\nof IP addressing is used, the full (10.0.0.0/8) address is used far more\noften than compacted (10/8).\n\nI'd give you an example of BIND9, but in its inet_ntop function, it no\nlonger has the netmask length ;)\n\nAll networking software supports full syntax of address. Most of\nnetworking software supports compacted syntax.\n\nMany RFCs relating to the networking software, DO specify that full\nversion is required:\nftp://ftp.merit.edu/internet/documents/rfc/rfc2622.txt \nftp://ftp.merit.edu/internet/documents/rfc/rfc2673.txt verse 3.2.1\n\nRIPE NCC (the european version of ARIN) also likes the complete version in\ntheir standards documents (refer:\nhttp://www.lir.garr.it/docs/ripe-121.txt across the document\n\nARIN in their allocation templates, also uses full version: \n(again, across the document)\nhttp://www.arin.net/regserv/templates/isptemplate.txt\nhttp://www.arin.net/routingreg/route.html\nhttp://www.arin.net/routingreg/route-set.html\n\n\nIf this doesn't persuade you, I think I'll just ask Vixie to settle this.\n:)\n\n-alex\n\n> FWIW, the direction we seem to be converging in is that INET will always\n> print all four octets. Maybe the answer for you is to use INET, rather\n> than to try to persuade us that you understand CIDR notation better than\n> Vixie does...\n\n\n\n",
"msg_date": "Fri, 27 Oct 2000 19:17:00 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "On Fri, 27 Oct 2000, Tom Lane wrote:\n\n> The way I'm visualizing this, INET is a generalized type that will store\n> any 4-octet address plus any netmask width from 1 to 32. This includes\n> not only host addresses, but network specs and broadcast addresses.\n> CIDR is a subset type that only accepts valid network specs (ie, no\n> nonzero address bits to the right of the netmask). There is no subset\n\nI really don't think it should. We should have as much error-checking as\npossible. Broadcast address does _not_ have a netmask, i.e. 10.0.0.255/24\ndoes not make sense as inet, it should be 10.0.0.255/32\n\n(ie. broadcast() function must return a value with /32 mask)\n\n> type that corresponds to \"valid host addresses only\" --- if there were,\n> it would be a subset of INET but would have no valid values in common\n> with CIDR. We could make such a type but I dunno if it's worth the\n> trouble.\n\n",
"msg_date": "Fri, 27 Oct 2000 19:20:53 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> We should have as much error-checking as possible.\n\nOnly possible with a much tighter definition of what the intended use\nof each type is. For example, you seem to be saying that broadcast\naddresses aren't valid inet values, with which I do not agree unless\nthere is another type that they can be part of.\n\nMy inclination is to leave INET with the range of valid values it\ncurrently has, and to let people apply column constraints if they\nwant to restrict a particular column to, say, valid host addresses,\nor valid broadcast addresses, or whatever.\n\n> Broadcast address does _not_ have a netmask, i.e. 10.0.0.255/24\n> does not make sense as inet, it should be 10.0.0.255/32\n\nHow so? Without a netmask you have no way to know if it's a broadcast\naddress or not. 10.0.0.255/32 might be a perfectly valid host address\nin, say, 10.0/16. But 10.0.0.255/24 is recognizably the broadcast\naddress for 10.0.0/24 (and not for any other network...)\n\n> (ie. broadcast() function must return a value with /32 mask)\n\nI don't disagree with that part, but that's only because I see\nbroadcast() as mainly a display convenience. If we had a larger and\nmore thoroughly worked out set of inet/cidr operators, I'd be inclined\nto argue that broadcast('10.0.0.0/24') should yield 10.0.0.255/24 for\ncomputational convenience. Then we'd need to offer a separate function\nthat would let you strip off the netmask for display purposes (actually\nhost() would do for that...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 19:39:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
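Tom's point in the message above, that only the netmask makes 10.0.0.255 recognizable as a broadcast address, can be sketched with Python's standard `ipaddress` module. The `is_broadcast` helper is purely illustrative and not part of any proposed patch:

```python
import ipaddress

def is_broadcast(addr_with_mask: str) -> bool:
    """True when the address equals the broadcast address of the
    network implied by its own netmask."""
    iface = ipaddress.ip_interface(addr_with_mask)
    if iface.network.prefixlen == iface.ip.max_prefixlen:
        # A /32 carries no broadcast information; the address could be
        # an ordinary host in some wider network such as 10.0/16.
        return False
    return iface.ip == iface.network.broadcast_address

print(is_broadcast("10.0.0.255/24"))  # True: all host bits under /24 are ones
print(is_broadcast("10.0.0.255/32"))  # False: no netmask context
print(is_broadcast("10.0.0.255/16"))  # False: broadcast of 10.0/16 is 10.0.255.255
```

This mirrors the argument exactly: the same octets are or are not a broadcast address depending solely on the attached mask length.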
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 17:54]:\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > On Fri, 27 Oct 2000, Tom Lane wrote:\n> >> BTW, does it strike anyone else as peculiar that the host(),\n> >> broadcast(), network(), and netmask() functions yield results\n> >> of type text, rather than type inet?\n> \n> > I absolutely agree, except for network(), which should return cidr.\n> \n> We could do that, but if we did, it would print out per CIDR format\n> (eg, '192.1/16') whereas both you and Larry have been saying you want\n> a way to produce '192.1.0.0/16'. Perhaps we need two functions, one\n> to produce the network in CIDR notation and one to produce it in INET\n> notation.\nI'd agree with this.\n> \n> For that matter, perhaps we should not change host() to accept CIDR\n> but instead provide a separate function that does what I proposed\n> host() should do with a CIDR. Not sure.\n> \n> > As I mentioned in another email, should inet datatype really care whether\n> > host part is all-ones or all-zeros and reject that?\n> \n> I'm inclined to think not, partially because that would mean that the\n> results of broadcast() and network() could *NOT* be considered valid\n> INET values.\nTrue.\n> \n> The way I'm visualizing this, INET is a generalized type that will store\n> any 4-octet address plus any netmask width from 1 to 32. This includes\n> not only host addresses, but network specs and broadcast addresses.\n> CIDR is a subset type that only accepts valid network specs (ie, no\n> nonzero address bits to the right of the netmask). There is no subset\n> type that corresponds to \"valid host addresses only\" --- if there were,\n> it would be a subset of INET but would have no valid values in common\n> with CIDR. We could make such a type but I dunno if it's worth the\n> trouble.\nI believe this is true. 
Now if we could get the output stuff so there\nare BOTH ways of displaying the data (we seem to need both, from the\nstatements we get each time this has been brought up), such that you\ncan freely move between the 4-octet and short-octet (for lack of a\nbetter term) version of a CIDR network spec. \n\nThanks for any consideration, and if this could make 7.1, I'd be most\nappreciative...\n\nLarry\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 19:06:19 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Sigh ... I was really hoping not to get drawn into fixing these issues\nfor 7.1, but ...\n\nIt seems like much of the problem is that there isn't any easy way to\nchoose between CIDR-style display format ('127.1/16') and INET-style\nformat ('127.1.0.0/16'). We need to bite the bullet and add conversion\nfunctions, so that people can pick which they want.\n\nPicking and choosing among the ideas discussed, here's my stab at a\ncomplete proposal:\n\n1. CIDR-type values will be displayed in \"abbreviated\" format, eg\n \"127.1/16\". Since a CIDR value is no longer allowed to have any\n nonzero bits to the right of the mask, no information is lost by\n abbreviation. The /n will appear even when it is 32.\n\n2. INET-type values will always be displayed with all octets, eg\n \"127.1.0.0/16\". The /n part will be suppressed from display\n if it is 32. INET will accept any octet pattern as an address\n together with any netmask length from 1 to 32.\n\n3. We will add explicit functions cidr(inet) and inet(cidr) to force\n the data type to one or the other style, thus allowing selection\n of either display style. Note that cidr(inet) will raise an error\n if given something with nonzeroes to the right of the netmask.\n\n4. The function host(inet) will now return inet not text. It will\n take the address octets of the given value but force the netmask to 32\n and the display type to INET. So for example host('127.1/16'::cidr)\n will yield '127.1.0.0/32'::inet, which if displayed will appear\n as just '127.1.0.0', per item 2.\n\n5. The function broadcast(inet) will now return inet not text. It\n will take the given address octets and force the bits to the right\n of the netmask to 1. The display type will be set to inet. After\n more thought about my last message, I am inclined to have it return\n the same masklength as the input, so for example broadcast('127.1/16')\n would yield '127.1.255.255/16'::inet. 
If you want the broadcast\n address displayed without a netmask notation, you'd need to write\n host(broadcast(foo)). Alternatively, we could say that broadcast()\n always returns masklen 32, but I think this loses valuable\n functionality.\n\n6. The function network(inet) will now return cidr not text. The result\n has the same masklen as the input, with bits to the right of the mask\n zeroed to ensure it is a valid cidr value. The display type will be\n set to cidr. For example, network('127.1.2.3/16') will yield\n '127.1/16'::cidr. To get this result displayed in inet format, you'd\n write inet(network(foo)) --- yielding '127.1.0.0/16'. If you want it\n displayed with no netmask, write host(network(foo)) --- result\n '127.1.0.0'.\n\n7. The function netmask(inet) will now return inet not text. It will\n return octets with 1s in the input's netmask, 0s to the right, and\n output display type and masklen set to inet and 32. For example,\n netmask('127.1/16') = '255.255.0.0/32'::inet which will display as\n '255.255.0.0'. (I suppose a really anal definition would keep the\n input masklen, forcing you to write host(netmask(foo)) to get a\n display without \"/n\". But I don't see any value in that for\n netmasks.)\n\n8. Because we still consider inet and cidr to be binary-equivalent types,\n all of these functions will be applied to either inet or cidr columns\n without any type conversion. (In other words, cidr(inet) and\n inet(cidr) will only be applied if *explicitly* invoked.) I am not\n convinced whether this is a good thing. In this proposal, no system\n function except display will care whether its input is inet or cidr,\n so the lack of conversion doesn't matter. But in the long run it\n might be better to remove the binary-equivalence. Then, for example,\n host(cidr) would be implemented as host(inet(cidr)), costing an extra\n function call per operation. 
Right now I don't think we need to pay\n that price, but maybe someday we will.\n\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 21:45:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Second proposal: what to do about INET/CIDR "
},
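The semantics proposed in items 4 through 7 can be modeled with Python's `ipaddress` module to check that the pieces fit together. The function names mirror the proposal, but this is only a sketch; the abbreviated CIDR display style ('127.1/16') from items 1 and 3 is not modeled here:

```python
import ipaddress

def host(v):       # item 4: keep the octets, force masklen to 32
    return ipaddress.ip_interface(f"{ipaddress.ip_interface(v).ip}/32")

def broadcast(v):  # item 5: set bits right of the mask to 1, keep masklen
    i = ipaddress.ip_interface(v)
    return ipaddress.ip_interface(
        f"{i.network.broadcast_address}/{i.network.prefixlen}")

def network(v):    # item 6: zero bits right of the mask, keep masklen
    return ipaddress.ip_interface(v).network

def netmask(v):    # item 7: ones in the mask, zeros to the right, /32
    i = ipaddress.ip_interface(v)
    return ipaddress.ip_interface(f"{i.network.netmask}/32")

print(host("127.1.0.0/16"))       # 127.1.0.0/32
print(broadcast("127.1.0.0/16"))  # 127.1.255.255/16
print(network("127.1.2.3/16"))    # 127.1.0.0/16
print(netmask("127.1.0.0/16"))    # 255.255.0.0/32
```

Note how `host(broadcast(x))` composes as the proposal describes: it strips the /16 off the broadcast result, leaving a plain /32 host value.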
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 20:45]:\n> Sigh ... I was really hoping not to get drawn into fixing these issues\n> for 7.1, but ...\n[SNIP]\nWorks WELL for me. THANK YOU, Tom.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 21:07:23 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Second proposal: what to do about INET/CIDR"
},
{
"msg_contents": "Please read below if the whole thing with inet/cidr doesn't make you puke\nyet ;) The semi-longish proposal is at the bottom.\n\nOn Fri, 27 Oct 2000, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > We should have as much error-checking as possible.\n> \n> How so? Without a netmask you have no way to know if it's a broadcast\n> address or not. 10.0.0.255/32 might be a perfectly valid host address\n> in, say, 10.0/16. But 10.0.0.255/24 is recognizably the broadcast\n> address for 10.0.0/24 (and not for any other network...)\nRight, that's what I'm trying to say: It shouldn't allow you to use\n10.0.0.255/24 as a host address, but it should allow you to use \n10.0.0.255/16 \n\n> > (ie. broadcast() function must return a value with /32 mask)\n> \n> I don't disagree with that part, but that's only because I see\n> broadcast() as mainly a display convenience. If we had a larger and\n> more thoroughly worked out set of inet/cidr operators, I'd be inclined\n> to argue that broadcast('10.0.0.0/24') should yield 10.0.0.255/24 for\n> computational convenience. Then we'd need to offer a separate function\n> that would let you strip off the netmask for display purposes (actually\n> host() would do for that...)\n\n\nActually, now that I think longer about the whole scheme in terms of\nactual IP experience, here are my ideas:\na) inet is crock. I don't know anyone who would need to _care_ about a\nnetmask of a host, who wouldn't have a lookup table of networks/masks.\n(Think /etc/hosts, and /etc/netmasks).\n\nStoring a netmask of a network in a inet actually violates the relational\nconstraints: netmask is not a property of an IP address, its a property of\na network.\n\n99% of people who would be storing IP addresses into postgres database\nreally do not know nor care what is a netmask on that IP. Only people who\nwould care are ones who store their _internal_ addresses (read: addresses\nused on networks they manage). 
There is usually a very limited number of\nsuch networks (<1000). \n\nIt makes no sense to have in database both 10.0.0.1/24 and 10.0.0.2/16.\nNone whatsoever.\n\nThis does NOT apply to CIDR datatype, as there are real applications (such\nas storing routing tables) where you would care about netmask, but won't\ncare about a host part. \n\nWhat I am suggesting is we do the following:\na) inet will NOT have a netmask\n\nb) all the fancy comparison functions on inet should be deleted. \n(leave only > >= = <= <)\n\nc) the only things you can do on inet is to convert it to 4 octets (of\nint1), to a int8, and to retrieve its network from a table of networks.\n\nd) have a table, 'networks' (or any other name, maybe pg_networks?) which\nwould have one column 'network', with type cidr.\ncreate table networks (network cidr not null primary key)\n\ne) have a function network(inet) which would look up the address in a\ntable of networks using longest-prefix-match. I.E. something similar to:\n\nselect network from networks \nwhere $1<<network \norder by network_prefix(network)\ndesc limit 1;\n\n\nI realise that this sounds a little bit strange after all the arguments\nabout inet, but if you think about it, this is the only sane way to deal\nwith these datatypes. \n\nRight now, the datatypes we have look and sound pretty but are pretty much\nuseless in reality. Yes, it is nice to be able to store a netmask with\nevery IP address, it is useless in reality. (Yes, please, someone tell me\nif you are using inet with netmasks and you actually like it).\n\n\nI'd especially like to get input of Marc on this, as he's both a core team\nmember and has actual networking background...Oh yeah, if Marc can comment\non whether 10/8 or 10.0.0.0/8 is a proper way to represent a network, it'd\nbe great too :)\n\n\n\n",
"msg_date": "Fri, 27 Oct 2000 22:20:09 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Alex Pilosov <alex@pilosoft.com> [001027 21:20]:\n> Please read below if the whole thing with inet/cidr doesn't make you puke\n> yet ;) The semi-longish proposal is at the bottom.\n> \n> On Fri, 27 Oct 2000, Tom Lane wrote:\n[snip]\n> Actually, now that I think longer about the whole scheme in terms of\n> actual IP experience, here are my ideas:\n> a) inet is crock. I don't know anyone who would need to _care_ about a\n> netmask of a host, who wouldn't have a lookup table of networks/masks.\n> (Think /etc/hosts, and /etc/netmasks).\n> \n> Storing a netmask of a network in a inet actually violates the relational\n> constraints: netmask is not a property of an IP address, its a property of\n> a network.\nNot necessarily, especially for novices. Some people may want to\nstore the netmask with the IP of a host (think ifconfig being\nAUTOGEN'd). \n\n> \n> 99% of people who would be storing IP addresses into postgres database\n> really do not know nor care what is a netmask on that IP. Only people who\n> would care are ones who store their _internal_ addresses (read: addresses\n> used on networks they manage). There is usually a very limited number of\n> such networks (<1000). \nI disagree. I'm an ISP, and the network engineer for same. I have a\nBOATLOAD of Netblocks from ARIN and providers in a BUNCH of sizes. I\nneed to subnet them out to customers and for internal use. I like\nTom's latest proposal. This one LOSES functionality for ME. \n> \n> It makes no sense to have in database both 10.0.0.1/24 and 10.0.0.2/16.\n> None whatsoever.\nNot necessarily, especially with RFC1918 addresses, and reuse within\ndifferent unconnected networks of the SAME enterprise. \n> \n> This does NOT apply to CIDR datatype, as there are real applications (such\n> as storing routing tables) where you would care about netmask, but won't\n> care about a host part. \n> \n> What I am suggesting is we do the following:\n> a) inet will NOT have a netmask\nPlease DONT. 
See above.\n> \n> b) all the fancy comparison functions on inet should be deleted. \n> (leave only > >= = <= <)\n> \nMaybe. I think they should stay, but I'm one lowly network engineer.\n> c) the only things you can do on inet is to convert it to 4 octets (of\n> int1), to a int8, and to retrieve its network from a table of networks.\n> \n> d) have a table, 'networks' (or any other name, maybe pg_networks?) which\n> would have one column 'network', with type cidr.\n> create table networks (network cidr not null primary key)\nWhy?\n> \n> e) have a function network(inet) which would look up the address in a\n> table of networks using longest-prefix-match. I.E. something similar to:\nNo need. Let the user do it themselves. Similar to what we did for\nmacaddr's back in the summer. \n> \n> select network from networks \n> where $1<<network \n> order by network_prefix(network)\n> desc limit 1;\n> \n> \n> I realise that this sounds a little bit strange after all the arguments\n> about inet, but if you think about it, this is the only sane way to deal\n> with these datatypes. \n> \n> Right now, the datatypes we have look and sound pretty but are pretty much\n> useless in reality. Yes, it is nice to be able to store a netmask with\n> every IP address, it is useless in reality. (Yes, please, someone tell me\n> if you are using inet with netmasks and you actually like it).\n> \nSee above. \n> \n> I'd especially like to get input of Marc on this, as he's both a core team\n> member and has actual networking background...Oh yeah, if Marc can comment\n> on whether 10/8 or 10.0.0.0/8 is a proper way to represent a network, it'd\n> be great too :)\n> \n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 21:27:42 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "On Fri, 27 Oct 2000, Larry Rosenman wrote:\n\n> Not necessarily, especially for novices. Some people may want to\n> store the netmask with the IP of a host (think ifconfig being\n> AUTOGEN'd). \nFor a single host? Or for a network of hosts? But yes, I see your point if\na single host has x interfaces, and you are autogenerating ifconfig, with\nmy proposal, you'd need to insert each network into networks table.\n\n> > 99% of people who would be storing IP addresses into postgres database\n> > really do not know nor care what is a netmask on that IP. Only people who\n> > would care are ones who store their _internal_ addresses (read: addresses\n> > used on networks they manage). There is usually a very limited number of\n> > such networks (<1000). \n> I disagree. I'm an ISP, and the network engineer for same. I have a\n> BOATLOAD of Netblocks from ARIN and providers in a BUNCH of sizes. I\n> need to subnet them out to customers and for internal use. I like\n> Tom's latest proposal. This one LOSES functionality for ME. \nExplain how does it lose functionality?\n\n> > It makes no sense to have in database both 10.0.0.1/24 and 10.0.0.2/16.\n> > None whatsoever.\n> Not necessarily, especially with RFC1918 addresses, and reuse within\n> different unconnected networks of the SAME enterprise. \nMakes no sense to have them in one table, anyway, I stand corrected. \nFor people in situation you describe, you can have a second table of\nnetworks, and second function to look up networks in that table. \n\n> > This does NOT apply to CIDR datatype, as there are real applications (such\n> > as storing routing tables) where you would care about netmask, but won't\n> > care about a host part. \n> > \n> > What I am suggesting is we do the following:\n> > a) inet will NOT have a netmask\n> Please DONT. See above.\n> > \n> > b) all the fancy comparison functions on inet should be deleted. \n> > (leave only > >= = <= <)\n> > \n> Maybe. 
I think they should stay, but I'm one lowly network engineer.\n> > c) the only things you can do on inet is to convert it to 4 octets (of\n> > int1), to a int8, and to retrieve its network from a table of networks.\n> > \n> > d) have a table, 'networks' (or any other name, maybe pg_networks?) which\n> > would have one column 'network', with type cidr.\n> > create table networks (network cidr not null primary key)\n> Why?\nBecause netmask is a property of a network, not of an IP address.\n\n> > e) have a function network(inet) which would look up the address in a\n> > table of networks using longest-prefix-match. I.E. something similar to:\n> No need. Let the user do it themselves. Similar to what we did for\n> macaddr's back in the summer. \nYeah, it can be user-defined (or a contrib), no question about it, and for\npeople who have more than one table of networks, it will _have_ to be\nuser-defined.\n\nActually, that's probably what I'll end up doing on my own. \n\n\n",
"msg_date": "Fri, 27 Oct 2000 22:36:45 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": ">>>> e) have a function network(inet) which would look up the address in a\n>>>> table of networks using longest-prefix-match. I.E. something similar to:\n\n>> No need. Let the user do it themselves. Similar to what we did for\n>> macaddr's back in the summer. \n\n> Yeah, it can be user-defined (or a contrib), no question about it, and for\n> people who have more than one table of networks, it will _have_ to be\n> user-defined.\n\nIt seems clear to me that this mapping is best left to the user.\n\nA more interesting question is whether the system needs to provide any\nassisting functions that aren't there now. The lookup function you guys\nare postulating seems like it would be (in the simple cases)\n\tcreate function my_network(inet) returns cidr as\n\t'select network from my_networks where ???'\nMaybe it's too late at night, but I'm having a hard time visualizing\nwhat the ??? condition is and whether any additional system-level\nfunctions are needed to make it simple/efficient.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 22:53:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Alex Pilosov <alex@pilosoft.com> [001027 21:36]:\n> On Fri, 27 Oct 2000, Larry Rosenman wrote:\n> \n> > Not necessarily, especially for novices. Some people may want to\n> > store the netmask with the IP of a host (think ifconfig being\n> > AUTOGEN'd). \n> For a single host? Or for a network of hosts? But yes, I see your point if\n> a single host has x interfaces, and you are autogenerating ifconfig, with\n> my proposal, you'd need to insert each network into networks table.\nOr a table of Routers, listed by IP's. I want to be able to\nefficiently store the interface name, IP, Mask. With your proposal, I\ncan't store it as one row in one table. With Tom's proposal, I can. \n\n> \n> > > 99% of people who would be storing IP addresses into postgres database\n> > > really do not know nor care what is a netmask on that IP. Only people who\n> > > would care are ones who store their _internal_ addresses (read: addresses\n> > > used on networks they manage). There is usually a very limited number of\n> > > such networks (<1000). \n> > I disagree. I'm an ISP, and the network engineer for same. I have a\n> > BOATLOAD of Netblocks from ARIN and providers in a BUNCH of sizes. I\n> > need to subnet them out to customers and for internal use. I like\n> > Tom's latest proposal. This one LOSES functionality for ME. \n> Explain how does it lose functionality?\nI may need to list an interface in their net, with their netmask, but\nnot have it in my networks table. I don't think that the system\nshould supply a networks table, per se. I have much more than 1000's\nnetworks in my shop. Please don't FORCE me to your model. I like\nTom's proposal, especially from the \"least surprise\" aspects. \n> \n> > > It makes no sense to have in database both 10.0.0.1/24 and 10.0.0.2/16.\n> > > None whatsoever.\n> > Not necessarily, especially with RFC1918 addresses, and reuse within\n> > different unconnected networks of the SAME enterprise. 
\n> Makes no sense to have them in one table, anyway, I stand corrected. \n> For people in situation you describe, you can have a second table of\n> networks, and second function to look up networks in that table. \nSee above. Please don't force me to your paradigm. \n> \n> > > This does NOT apply to CIDR datatype, as there are real applications (such\n> > > as storing routing tables) where you would care about netmask, but won't\n> > > care about a host part. \n> > > \n> > > What I am suggesting is we do the following:\n> > > a) inet will NOT have a netmask\n> > Please DONT. See above.\n> > > \n> > > b) all the fancy comparison functions on inet should be deleted. \n> > > (leave only > >= = <= <)\n> > > \n> > Maybe. I think they should stay, but I'm one lowly network engineer.\n> > > c) the only things you can do on inet is to convert it to 4 octets (of\n> > > int1), to a int8, and to retrieve its network from a table of networks.\n> > > \n> > > d) have a table, 'networks' (or any other name, maybe pg_networks?) which\n> > > would have one column 'network', with type cidr.\n> > > create table networks (network cidr not null primary key)\n> > Why?\n> Because netmask is a property of a network, not of an IP address.\n> \n> > > e) have a function network(inet) which would look up the address in a\n> > > table of networks using longest-prefix-match. I.E. something similar to:\n> > No need. Let the user do it themselves. Similar to what we did for\n> > macaddr's back in the summer. \n> Yeah, it can be user-defined (or a contrib), no question about it, and for\n> people who have more than one table of networks, it will _have_ to be\n> user-defined.\n> \n> Actually, that's probably what I'll end up doing on my own. \n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 22:04:39 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001027 21:53]:\n> >>>> e) have a function network(inet) which would look up the address in a\n> >>>> table of networks using longest-prefix-match. I.E. something similar to:\n> \n> >> No need. Let the user do it themselves. Similar to what we did for\n> >> macaddr's back in the summer. \n> \n> > Yeah, it can be user-defined (or a contrib), no question about it, and for\n> > people who have more than one table of networks, it will _have_ to be\n> > user-defined.\n> \n> It seems clear to me that this mapping is best left to the user.\n> \n> A more interesting question is whether the system needs to provide any\n> assisting functions that aren't there now. The lookup function you guys\n> are postulating seems like it would be (in the simple cases)\n> \tcreate function my_network(inet) returns cidr as\n> \t'select network from my_networks where ???'\n> Maybe it's too late at night, but I'm having a hard time visualizing\n> what the ??? condition is and whether any additional system-level\n> functions are needed to make it simple/efficient.\nI don't think we need this ASAP for 7.1. Let's get the basic stuff\nworking from a \"least surprise\" standpoint, and see what the user base\ncomes up with. I really think your proposal from earlier tonite is\nthe way to go, at least from my perspective. \n\nThanks again.\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 22:06:12 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "On Fri, 27 Oct 2000, Tom Lane wrote:\n\n> A more interesting question is whether the system needs to provide any\n> assisting functions that aren't there now. The lookup function you guys\n> are postulating seems like it would be (in the simple cases)\n> \tcreate function my_network(inet) returns cidr as\n> \t'select network from my_networks where ???'\nas in my mail:\nselect network from my_network where network>>$1 order by\nnetwork_prefix(network) desc limit 1;\n\n(i.e. if many networks cover the ip address, pick the one with longest\nprefix). The only hard question here, how to properly index this table.\nThis sounds like a perfect application of user-defined index method. \nI need to look up documentation on how they work...\n\n\nHowever, this probably won't pose a major problem in production: the\nnetworks table will be relatively small. \n\n> Maybe it's too late at night, but I'm having a hard time visualizing\n> what the ??? condition is and whether any additional system-level\n> functions are needed to make it simple/efficient.\n\nActually, you can scratch my proposal. I realise it could be inconvenient\nfor some people.\n\nI'll be probably putting all my hosts as inet::xxx/32, have the above\nlookup function to get real network, and do operations on that.\n\n\n\n",
"msg_date": "Fri, 27 Oct 2000 23:08:58 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
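The longest-prefix-match lookup Alex sketches in SQL (ORDER BY prefix length DESC LIMIT 1) can also be expressed outside the database with Python's `ipaddress` module; the `networks` list here stands in for the proposed networks table and its contents are invented for illustration:

```python
import ipaddress

# Stand-in for "create table networks (network cidr not null primary key)"
networks = [ipaddress.ip_network(n)
            for n in ("10.0.0.0/8", "10.0.0.0/24", "192.168.1.0/24")]

def lookup(addr: str):
    """Longest-prefix match: of all networks containing addr,
    return the one with the most mask bits (most specific)."""
    a = ipaddress.ip_address(addr)
    matches = [n for n in networks if a in n]
    return max(matches, key=lambda n: n.prefixlen, default=None)

print(lookup("10.0.0.5"))  # 10.0.0.0/24, more specific than 10.0.0.0/8
print(lookup("10.1.2.3"))  # 10.0.0.0/8, the only covering network
```

As Alex notes, a linear scan like this is fine while the networks table stays small; indexing it efficiently is the harder question.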
{
"msg_contents": "one more small request:\n\nint8_inet(inet) and inet_int8(int8): functions to convert an inet to an\nint8 and back. (not an int4, since postgres int4s are signed)\n\nThis allows me to do some additional manipulations on values. (ie. given a\nhost, determine its default gateway, for us, it is always first host on\nthat network, this could be implemented as inet_int8(int8_inet(network(x))+1), \nor splitting a cidr into two halves, \n\n-alex\n\n\n\n\n\n\n",
"msg_date": "Fri, 27 Oct 2000 23:27:38 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
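The conversions requested above, and the two manipulations mentioned (default gateway as first host on the network, and splitting a cidr into halves), are easy to sketch with Python's `ipaddress` module. The names `inet_to_int`/`int_to_inet` model the proposed `int8_inet`/`inet_int8` and are not real PostgreSQL functions:

```python
import ipaddress

def inet_to_int(addr) -> int:
    """IPv4 address -> unsigned 32-bit integer (fits a signed int8)."""
    return int(ipaddress.ip_address(addr))

def int_to_inet(n: int):
    """Integer -> IPv4 address."""
    return ipaddress.ip_address(n)

net = ipaddress.ip_network("10.20.30.0/24")

# "first host on that network": network address + 1
gateway = int_to_inet(inet_to_int(net.network_address) + 1)
print(gateway)  # 10.20.30.1

# splitting a cidr into two halves is just adding one mask bit
halves = list(net.subnets(prefixlen_diff=1))
print(halves)   # [10.20.30.0/25, 10.20.30.128/25]
```

Using an 8-byte integer type matters because a 4-octet address does not fit in a signed int4 without going negative for addresses at or above 128.0.0.0.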
{
"msg_contents": "On Fri, 27 Oct 2000, Tom Lane wrote:\n\n> BTW, does it strike anyone else as peculiar that the host(),\n> broadcast(), network(), and netmask() functions yield results\n> of type text, rather than type inet? Seems like it'd be considerably\n> more useful if they returned values of type inet with masklen = 32\n> (except for network(), which would keep the original masklen while\n> coercing bits to its right to 0).\nYep, absolutely. \n\n> Given the current proposal that inet_out should always display all 4\n> octets, and the existing fact that inet_out suppresses display of\n> a /32 netmask, the textual display of SELECT host(...) etc would\n> remain the same as it is now. But AFAICS you could do more with\n> an inet-type result value, like say compare it to other inet or cidr\n> values ...\n> Comments? Why was it done this way, anyway?\n\n",
"msg_date": "Fri, 27 Oct 2000 23:39:29 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Tom Lane writes:\n\n> Hm. I don't see any really good reason why host() rejects CIDR input\n> in the first place. What's wrong with producing the host address\n> that corresponds to extending the CIDR network address with zeroes?\n\nBecause it's semantically wrong. It's just as wrong as converting DATE to\nTIMESTAMP by setting the time to zero. -- And we actually do this...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 14:13:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Tom Lane writes:\n\n> 3. We will add explicit functions cidr(inet) and inet(cidr) to force\n> the data type to one or the other style, thus allowing selection\n> of either display style. Note that cidr(inet) will raise an error\n> if given something with nonzeroes to the right of the netmask.\n\nNot sure if using functions that look like a cast to control output format\nis a good idea. The conversion inet => cidr seems most naturally left\nwith the network() function. The other conversion is not well-defined. \n(You could define it in several reasonable ways, but that still doesn't\nmake it \"well\".) ISTM that you'd really need some function build_inet(a\ncidr, b inet) returns inet, where b does not have a network and can\nsomehow be fitted into network a.\n\nActually, let's sign up Karel to write to_char(inet) and to_char(cidr).\n\n> But in the long run it might be better to remove the\n> binary-equivalence.\n\nI say kill it ASAP. I don't think there was ever a good reason for this\nbesides implementation convenience; and the troubles it has caused are\nwithout end.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 14:21:06 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Second proposal: what to do about INET/CIDR "
},
{
"msg_contents": "On Fri, 27 Oct 2000, Alex Pilosov wrote:\n\n> > BTW, does it strike anyone else as peculiar that the host(),\n> > broadcast(), network(), and netmask() functions yield results\n> > of type text, rather than type inet? Seems like it'd be considerably\n> > more useful if they returned values of type inet with masklen = 32\n> > (except for network(), which would keep the original masklen while\n> > coercing bits to its right to 0).\n> I absolutely agree, except for network(), which should return cidr.\n> (after all, this is the network).\n> \n> As I mentioned in another email, should inet datatype really care whether\n> host part is all-ones or all-zeros and reject that? It would make sense to\n> me (10.0.0.0/8::inet is not a valid address, but 10.0.0.0/8::cidr is), but\n> it would break some people's scripts...\n\nHow about letting inet just be a \"simple\" storage type for four octets\nseparated by periods. (Along the line of (0-255) . (0-255) . (0-255) . (0-255) )\n\nThe only conversion function that would operate on inet is\ncidr(inet) - which returns the contents of the tuple, with a /32.\nAlternately, for those who think we absolutely need something more\ncomplicated (I don't think it's necessary), something along the lines of:\n\ncidr('10.20.23.252') => '10.20.23.252/32'\ncidr('10.20.23.0') => '10.20.23/24' (or '10.20.23.0/24')\ncidr('10.20.0.0') => '10.20/16' (or '10.20.0.0/16')\ncidr('10.0.0.0') => '10/8' (or '10.0.0.0/8')\n\nAlthough one should put a comment/warning in the documentation that cidr()\nassumes subnets to be on an octet boundary. (Which is the norm with inet\nstuff - cidr was created specifically to get away from\nsubnet-on-octet-boundaries.)\n\nSlight digression - on the discussion whether it's 127/8, or 127.0.0.0/8 -\nboth are accepted in most applications nowadays. 
But back to the issue\nat hand...\n\nThen let the cidr data type be for anything more advanced - like for\nstoring subnets on non-octet boundaries etc, and have host(),\nbroadcast(), network(), and netmask() functions. host() would return the\nIP with a /32 mask - and inet(cidr) would just return the IP, without a\nmask.\n\n> I'm talking here from a perspective of a network provider with P\n> knowledge...I'm sure Marc can chime in here...\n\nSo am I.\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Sat, 28 Oct 2000 11:17:30 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> 3. We will add explicit functions cidr(inet) and inet(cidr) to force\n>> the data type to one or the other style, thus allowing selection\n>> of either display style. Note that cidr(inet) will raise an error\n>> if given something with nonzeroes to the right of the netmask.\n\n> Not sure if using functions that look like a cast to control output format\n> is a good idea. The conversion inet => cidr seems most naturally left\n> with the network() function. The other conversion is not well-defined. \n> (You could define it in several reasonable ways, but that still doesn't\n> make it \"well\".)\n\nGood point: cidr() is exactly the same as network() under my proposal,\nso we don't need a separate function for that.\n\nWhile inet() and host() as proposed may be morally impure, they're no\nworse than date->timestamp and similar conversions that we have in\nabundance. I do not agree that they are ill-defined --- the spec\nI wrote seems perfectly clear.\n\nWould you be happier if inet() and host() were defined to produce\ntextual representations --- respectively \"w.x.y.z/n\" and \"w.x.y.z\"\nrather than actual INET values? That seems like it'd still solve\nthe demand for being able to extract these specific representations,\nwithout opening up the quagmire of whether these values are legitimate\nINET values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 12:31:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Second proposal: what to do about INET/CIDR "
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [001028 02:23]:\n> I don't think we need this ASAP for 7.1. Let's get the basic stuff\n> working from a \"least surprise\" standpoint, and see what the user base\n> comes up with. I really think your proposal from earlier tonite is\n> the way to go, at least from my perspective. \n> \n> Thanks again.\nWhat was the final outcome? Will Tom's proposal make 7.1? \n\n(Do I need to learn how to code backend stuff?)\n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 3 Nov 2000 14:45:11 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> What was the final outcome?\n\nI don't think we'd quite agreed what to do. The proposed code changes\nare not large, we just need a consensus on what the behavior ought to\nbe.\n\nSince a couple of people objected to the idea of using casts to control\nthe output format, here is a strawman Plan C for discussion. This\ndiffers from my last proposal in that the inet() and cidr() pseudo\ncast functions are gone, and there are two functions returning text\nvalues that can be used if you don't like the default display formats.\n\n1. CIDR-type values will be displayed in \"abbreviated\" format, eg\n \"127.1/16\". Since a CIDR value is no longer allowed to have any\n nonzero bits to the right of the mask, no information is lost by\n abbreviation. The /n will appear even when it is 32.\n\n2. INET-type values will always be displayed with all octets, eg\n \"127.1.0.0/16\". The /n part will be suppressed from display\n if it is 32. INET will accept any octet pattern as an address\n together with any netmask length from 1 to 32.\n\n3. The function host(inet) will return a text representation of\n just the IP part of an INET or CIDR value, eg, \"127.1.0.0\".\n All four octets will always appear, the netmask will never appear.\n (This is the same as its current behavior, I think.)\n\n4. A new function text(inet) will return a text representation of\n both the IP and netmask parts of an INET or CIDR value, eg,\n \"127.1.0.0/16\". Unlike the default display conversions, all four\n octets and the netmask length will always appear in the result.\n Note that the system will consider this function to be a typecast,\n so the same result can be gotten with inetval::text or\n CAST(inetval AS text).\n\n[ the rest is the same as in my last proposal: ]\n\n5. The function broadcast(inet) will now return inet not text. It\n will take the given address octets and force the bits to the right\n of the netmask to 1. 
The display type will be set to inet. I am\n inclined to have it return the same masklength as the input, so for\n example broadcast('127.1/16') would yield '127.1.255.255/16'::inet.\n If you want the broadcast address displayed without a netmask\n notation, you'd need to write host(broadcast(foo)). Alternatively,\n we could say that broadcast() always returns masklen 32, but I think\n this loses valuable functionality.\n\n6. The function network(inet) will now return cidr not text. The result\n has the same masklen as the input, with bits to the right of the mask\n zeroed to ensure it is a valid cidr value. The display type will be\n set to cidr. For example, network('127.1.2.3/16') will yield\n '127.1/16'::cidr. To get this result displayed in a different\n format, write host(network(foo)) or text(network(foo)).\n\n7. The function netmask(inet) will now return inet not text. It will\n return octets with 1s in the input's netmask, 0s to the right, and\n output display type and masklen set to inet and 32. For example,\n netmask('127.1/16') = '255.255.0.0/32'::inet which will display as\n '255.255.0.0'. (I suppose a really anal definition would keep the\n input masklen, forcing you to write host(netmask(foo)) to get a\n display without \"/n\". But I don't see any value in that for\n netmasks.)\n\n8. Because we still consider inet and cidr to be binary-equivalent types,\n all of these functions can be applied to either inet or cidr columns.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Nov 2000 16:19:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Works for me.....\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 3 Nov 2000 15:26:07 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Agreed with all of it, but how about incorporating conversion from inet\nto int8? (first octet*256*256*256+second octet*256*256+third\noctet*256+fourth octet). \n\nThis will allow to do a lot of magic with addresses using plain math.\n\nAlso, I'd still like netmask_length, length of netmask in bits.\n\n-alex\n\nOn Fri, 3 Nov 2000, Tom Lane wrote:\n\n> 5. The function broadcast(inet) will now return inet not text. It\n> will take the given address octets and force the bits to the right\n> of the netmask to 1. The display type will be set to inet. I am\n> inclined to have it return the same masklength as the input, so for\n> example broadcast('127.1/16') would yield '127.1.255.255/16'::inet.\n> If you want the broadcast address displayed without a netmask\n> notation, you'd need to write host(broadcast(foo)). Alternatively,\n> we could say that broadcast() always returns masklen 32, but I think\n> this loses valuable functionality.\n> \n> 6. The function network(inet) will now return cidr not text. The result\n> has the same masklen as the input, with bits to the right of the mask\n> zeroed to ensure it is a valid cidr value. The display type will be\n> set to cidr. For example, network('127.1.2.3/16') will yield\n> '127.1/16'::cidr. To get this result displayed in a different\n> format, write host(network(foo)) or text(network(foo)).\n> \n> 7. The function netmask(inet) will now return inet not text. It will\n> return octets with 1s in the input's netmask, 0s to the right, and\n> output display type and masklen set to inet and 32. For example,\n> netmask('127.1/16') = '255.255.0.0/32'::inet which will display as\n> '255.255.0.0'. (I suppose a really anal definition would keep the\n> input masklen, forcing you to write host(netmask(foo)) to get a\n> display without \"/n\". But I don't see any value in that for\n> netmasks.)\n> \n> 8. 
Because we still consider inet and cidr to be binary-equivalent types,\n> all of these functions can be applied to either inet or cidr columns.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n",
"msg_date": "Fri, 3 Nov 2000 21:40:22 -0500 (EST)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Alex Pilosov <alex@pilosoft.com> [001103 20:47]:\n> Agreed with all of it, but how about incorporating conversion from inet\n> to int8? (first octet*256*256*256+second octet*256*256+third\n> octet*256+fourth octet). \n> \n> This will allow to do a lot of magic with addresses using plain math.\n> \n> Also, I'd still like netmask_length, length of netmask in bits.\nmasklen(inet) is there:\nint4 | masklen | inet \n\nfrom a \\df. \n\nCan we also get it to work on cidr (or allow cast from inet to cidr).\n\n\n> \n> -alex\n> \n> On Fri, 3 Nov 2000, Tom Lane wrote:\n> \n> > 5. The function broadcast(inet) will now return inet not text. It\n> > will take the given address octets and force the bits to the right\n> > of the netmask to 1. The display type will be set to inet. I am\n> > inclined to have it return the same masklength as the input, so for\n> > example broadcast('127.1/16') would yield '127.1.255.255/16'::inet.\n> > If you want the broadcast address displayed without a netmask\n> > notation, you'd need to write host(broadcast(foo)). Alternatively,\n> > we could say that broadcast() always returns masklen 32, but I think\n> > this loses valuable functionality.\n> > \n> > 6. The function network(inet) will now return cidr not text. The result\n> > has the same masklen as the input, with bits to the right of the mask\n> > zeroed to ensure it is a valid cidr value. The display type will be\n> > set to cidr. For example, network('127.1.2.3/16') will yield\n> > '127.1/16'::cidr. To get this result displayed in a different\n> > format, write host(network(foo)) or text(network(foo)).\n> > \n> > 7. The function netmask(inet) will now return inet not text. It will\n> > return octets with 1s in the input's netmask, 0s to the right, and\n> > output display type and masklen set to inet and 32. For example,\n> > netmask('127.1/16') = '255.255.0.0/32'::inet which will display as\n> > '255.255.0.0'. 
(I suppose a really anal definition would keep the\n> > input masklen, forcing you to write host(netmask(foo)) to get a\n> > display without \"/n\". But I don't see any value in that for\n> > netmasks.)\n> > \n> > 8. Because we still consider inet and cidr to be binary-equivalent types,\n> > all of these functions can be applied to either inet or cidr columns.\n> > \n> > Comments?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 3 Nov 2000 20:50:40 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> A separate function for formatting output seems necessary, but if we don't\n> reach an agreement though, it ought to work to cast CIDR to INET to get\n> all four octets, no?\n\nUh, weren't you one of the people objecting to relying on cidr-to-inet\ncasts to control formatting?\n\n> I think the typecast-to-text representation of CIDR should be visually the\n> same as the normal representation.\n\nWell, we need *some* way to extract a representation like \"w.x.y.z/n\".\nIf you don't like text() as the name of that formatting function,\nsuggest another name...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Nov 2000 21:46:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Tom Lane writes:\n\n> 3. The function host(inet) will return a text representation of\n> just the IP part of an INET or CIDR value, eg, \"127.1.0.0\".\n> All four octets will always appear, the netmask will never appear.\n> (This is the same as its current behavior, I think.)\n\nI think there was definite merit in the host() function returning inet, as\nyou originally proposed (if only for consistency with the proposed changes\nto network() and broadcast()).\n\nA separate function for formatting output seems necessary, but if we don't\nreach an agreement though, it ought to work to cast CIDR to INET to get\nall four octets, no?\n\n> 4. A new function text(inet) will return a text representation of\n> both the IP and netmask parts of an INET or CIDR value, eg,\n> \"127.1.0.0/16\". Unlike the default display conversions, all four\n> octets and the netmask length will always appear in the result.\n> Note that the system will consider this function to be a typecast,\n> so the same result can be gotten with inetval::text or\n> CAST(inetval AS text).\n\nI think the typecast-to-text representation of CIDR should be visually the\nsame as the normal representation.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 5 Nov 2000 03:47:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > A separate function for formatting output seems necessary, but if we don't\n> > reach an agreement though, it ought to work to cast CIDR to INET to get\n> > all four octets, no?\n> \n> Uh, weren't you one of the people objecting to relying on cidr-to-inet\n> casts to control formatting?\n\nI didn't like the use of the to-text casts to control formatting, but if\nan existing cast would \"just handle it\", then why not?\n\n> > I think the typecast-to-text representation of CIDR should be visually the\n> > same as the normal representation.\n> \n> Well, we need *some* way to extract a representation like \"w.x.y.z/n\".\n> If you don't like text() as the name of that formatting function,\n> suggest another name...\n\nall_octets(cidr)::text maybe?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 5 Nov 2000 14:08:29 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001105 07:08]:\n> Tom Lane writes:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > A separate function for formatting output seems necessary, but\n> > > if we don't reach an agreement though, it ought to work to cast\n> > > CIDR to INET to get all four octets, no?\n> > \n> > Uh, weren't you one of the people objecting to relying on\n> > cidr-to-inet casts to control formatting?\n> \n> I didn't like the use of the to-text casts to control formatting,\n> but if an existing cast would \"just handle it\", then why not?\n\n> \n> > > I think the typecast-to-text representation of CIDR should be\n> > > visually the same as the normal representation.\n> > \n> > Well, we need *some* way to extract a representation like\n> > \"w.x.y.z/n\". If you don't like text() as the name of that\n> > formatting function, suggest another name...\n> \n> all_octets(cidr)::text maybe?\nPersonally, I just want a way, guaranteed to work, to get all 4 octets\nprinted out for both CIDR and INET types. If I need to cast to INET,\nthat's fine. We also need to make sure that we can print all the\npieces out as well (masklen, broadcast, netmask, network). \n\nI really would like to see this resolved for 7.1, as I have a number\nof apps that need to interface with NON-techies, and we need to print\nout all 4 octets, as well as netmasks, etc. PostgreSQL is the perfect\nDB for the backend BECAUSE of the inet/cidr types. Yes, I could write\nconvoluted PHP code to print out the stuff, but why should I when the\nDB has all the information in a nice compact form, and a SELECT\nstatement could handle it? \n\nI do understand the philosophical problems, but we really are very\nclose. Can we promise that we'll get this ironed out for 7.1? \n\nThanks.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org US Mail: 1905\nSteamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 5 Nov 2000 08:15:33 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Well, we need *some* way to extract a representation like \"w.x.y.z/n\".\n>> If you don't like text() as the name of that formatting function,\n>> suggest another name...\n\n> all_octets(cidr)::text maybe?\n\nNo, because that doesn't accurately describe what it does for inet\nitems --- those'd be shown with all octets anyway. For inet, the\ncritical thing this function will do is force the netmask to be shown\neven if it's /32.\n\nGiven that we are using host() for the function that shows just the\nIP address part of an inet/cidr value, how about hostandmask() for\nthe function that always shows everything?\n\nI still prefer text() though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Nov 2000 11:30:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Summary: what to do about INET/CIDR "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [001109 10:30]:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Well, we need *some* way to extract a representation like \"w.x.y.z/n\".\n> >> If you don't like text() as the name of that formatting function,\n> >> suggest another name...\n> \n> > all_octets(cidr)::text maybe?\n> \n> No, because that doesn't accurately describe what it does for inet\n> items --- those'd be shown with all octets anyway. For inet, the\n> critical thing this function will do is force the netmask to be shown\n> even if it's /32.\n> \n> Given that we are using host() for the function that shows just the\n> IP address part of an inet/cidr value, how about hostandmask() for\n> the function that always shows everything?\n> \n> I still prefer text() though.\nWhat is the *PHILOSOPHICAL* objection to text() in this case?\n\nIt's a TEXT output? \n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 9 Nov 2000 20:44:45 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Summary: what to do about INET/CIDR"
}
] |
[
{
"msg_contents": "I've been poking into the syntax in gram.y, and finding\nthat the provision of parentheses for SELECT statements\nis pretty broken. I have previously posted examples of\nodd things. On closer examination, it appears to need\nan overhaul.\n\nThere are two problems with this: (1) I'm new here, I don't\nknow the players and the protocols very well. I don't\nwant to offend. And (2) I don't have access to the SQL\nstandards so that we might get it right. What I do have\nis the Oracle docs, which may or may not have much to do\nwith how PostgreSQL ought to be, but it's worth noting\nthat parentheses are not allowed as freely as you might\nsuppose.\n\nLittle fixes here are going to get into trouble with yacc\nbecause the current approach is so awkward. It turns out\nit's the reason Select Statements cannot be listed in a\nCREATE RULE like the other kinds of commands.\n\nGiven a target syntax (like from the SQL standard) this\ncan be done in a day or so. The question is: should it\nhappen, and if so what is the target syntax?\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Thu, 26 Oct 2000 18:05:05 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Select syntax (broken in current CVS tree)"
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> I've been poking into the syntax in gram.y, and finding\n> that the provision of parentheses for SELECT statements\n> is pretty broken. I have previously posted examples of\n> odd things. On closer examination, it appears to need\n> an overhaul.\n\n> There are two problems with this: (1) I'm new here, I don't\n> know the players and the protocols very well. I don't\n> want to offend.\n\nThe existing handling of parens in SELECTs was done by me, a month\nor so back. I'm not satisfied with it, but decided that I couldn't\nspend any more time on it right then. If you can improve it, be\nmy guest.\n\n> And (2) I don't have access to the SQL\n> standards so that we might get it right.\n\nThe SQL spec is available (I haven't got a URL at hand but see the\nlist archives), but it really won't help you a lot in this case,\nbecause the grammar it gives is clearly ambiguous. The whole problem\nhere is to come up with a yacc-compatible grammar that does what we\nwant.\n\nAFAIK our current grammar is correct in that (a) it requires parens\nwhere they are required by the spec, and (b) it permits one level of\nparens where they are permitted by the spec. What it doesn't do is\npermit redundant multiple levels of parens.\n\nThe other thing it doesn't do is allow ORDER BY or LIMIT in sub-selects,\nonly in a top-level SELECT statement. This is correct per SQL92 spec,\nbut as I commented yesterday, I think we should ignore that spec\nrestriction henceforth. It's possible that dropping that distinction\nwould make the paren situation easier to solve --- I did not consider\nthe possibility of doing that when I was hacking on it last month.\n\n> Little fixes here are going to get into trouble with yacc\n> because the current approach is so awkward. 
It turns out\n> it's the reason Select Statements cannot be listed in a\n> CREATE RULE like the other kinds of commands.\n\nNo, the distinction between selects and other rule statements in CREATE\nRULE is there for an entirely different reason: to enforce a semantic\nrestriction. See past thread about whether multiple selects make sense\nin a rule. AFAIK the paren situation doesn't affect that.\n\n> Given a target syntax (like from the SQL standard) this\n> can be done in a day or so. The question is: should it\n> happen, and if so what is the target syntax?\n\nThe overall structure of the SQL-spec grammar is sufficiently different\nfrom ours that I'm not sure we want to adopt it at all. It's certainly\nnot going to be a one-day project if we try.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 11:19:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Select syntax (broken in current CVS tree) "
}
] |
[
{
"msg_contents": "There is an _excellent_ PostgreSQL article in the current (November\n2000) issue of Linux Journal. It is fair, even-handed, and was even\nwritten by a MySQL user. Almost a convert, I might add. (He even liked\nthe RPM's :-))\n\nIt's not linked on their online site (www.linuxjournal.com) as of yet.\n\nOh, and MySQL beat us 2 to 1 on their 2000 Reader's choice awards. We\ncame in second. The only other contender was Oracle. But, we were ONLY\nbeat 2 to 1. That is an improvement.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 27 Oct 2000 00:07:16 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL article in Linux Journal Nov 2000"
}
] |
[
{
"msg_contents": "hi.\n\nI just discovered that doing an alter table ... alter\ncolumn (to rename a column) does not do a complete\nrename throughout the database.\n\nfor example, say you have table a, with columns b and\nc. b is your primary key.\n\nnow rename b to new_b. if you do a dump of the schema\nafter you rename, you'll find that you can't reload\nthat schema because at the bottom of the definition of\ntable a you have PRIMARY KEY (\"b\").\n\nshouldn't rename update any index and key definitions?\n\nalso, and this may actually the source of the problem,\nwhile scanning my full (schema and data) dump, I\nnoticed that the contents of table pga_layout also had\nthe old values of columns that I have renamed.\n\nI'm very frightened right now, because I'm rather\ndependent upon my database right now. I don't like\nthe thought that my database is corrupt at the schema\nlevel.\n\nmichael\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Messenger - Talk while you surf! It's FREE.\nhttp://im.yahoo.com/\n",
"msg_date": "Thu, 26 Oct 2000 22:01:41 -0700 (PDT)",
"msg_from": "Michael Teter <michael_teter@yahoo.com>",
"msg_from_op": true,
"msg_subject": "renaming columns... danger?"
},
{
"msg_contents": "\nJust tested this on latest devel. version, and there does seem to be a\nproblem.\n\n[]$ psql test\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=# select version();\n version\n------------------------------------------------------------------------\n\n PostgreSQL 7.1devel on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\ntest=# create table a ( aa serial primary key );\nNOTICE: CREATE TABLE will create implicit sequence 'a_aa_seq' for\nSERIAL column 'a.aa'\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey'\nfor table 'a'\nCREATE\ntest=# alter TABLE a RENAME aa to new_aa;\nALTER\n\n[]$ pg_dump test\n--\n-- Selected TOC Entries:\n--\n\\connect - gaf\n--\n-- TOC Entry ID 2 (OID 20352)\n--\n-- Name: \"a_aa_seq\" Type: SEQUENCE Owner: gaf\n--\n\nCREATE SEQUENCE \"a_aa_seq\" start 1 increment 1 maxvalue 2147483647\nminvalue 1 cache 1 ;\n\n--\n-- TOC Entry ID 4 (OID 20370)\n--\n-- Name: a Type: TABLE Owner: gaf\n--\n\nCREATE TABLE \"a\" (\n \"new_aa\" integer DEFAULT nextval('\"a_aa_seq\"'::text) NOT NULL,\n PRIMARY KEY (\"aa\")\n);\n\n--\n-- Data for TOC Entry ID 5 (OID 20370) TABLE DATA a\n--\n\n-- Disable triggers\nUPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" ~* 'a';\nCOPY \"a\" FROM stdin;\n\\.\n-- Enable triggers\nBEGIN TRANSACTION;\nCREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\n\nINSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C,\n\"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" ~* 'a' GROUP\nBY 1;\nUPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" FROM \"tr\"\nTMP WHERE \"pg_class\".\"relname\" = TMP.\"tmp_relname\";\nDROP TABLE \"tr\";\nCOMMIT TRANSACTION;\n\n--\n-- TOC Entry ID 3 (OID 20352)\n--\n-- Name: 
\"a_aa_seq\" Type: SEQUENCE SET Owner:\n--\n\nSELECT setval ('\"a_aa_seq\"', 1, 'f');\n\n\n\nMichael Teter wrote:\n\n> hi.\n>\n> I just discovered that doing an alter table ... alter\n> column (to rename a column) does not do a complete\n> rename throughout the database.\n>\n> for example, say you have table a, with columns b and\n> c. b is your primary key.\n>\n> now rename b to new_b. if you do a dump of the schema\n> after you rename, you'll find that you can't reload\n> that schema because at the bottom of the definition of\n> table a you have PRIMARY KEY (\"b\").\n>\n> shouldn't rename update any index and key definitions?\n>\n> also, and this may actually the source of the problem,\n> while scanning my full (schema and data) dump, I\n> noticed that the contents of table pga_layout also had\n> the old values of columns that I have renamed.\n>\n> I'm very frightened right now, because I'm rather\n> dependent upon my database right now. I don't like\n> the thought that my database is corrupt at the schema\n> level.\n>\n> michael\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Yahoo! Messenger - Talk while you surf! It's FREE.\n> http://im.yahoo.com/\n\n--\n> Poorly planned software requires a genius to write it\n> and a hero to use it.\n\nGrant Finnemore BSc(Eng) (mailto:gaf@ucs.co.za)\nSoftware Engineer Universal Computer Services\nTel (+27)(11)712-1366 PO Box 31266 Braamfontein 2017, South Africa\nCell (+27)(82)604-5536 20th Floor, 209 Smit St., Braamfontein\nFax (+27)(11)339-3421 Johannesburg, South Africa\n\n\n",
"msg_date": "Fri, 27 Oct 2000 10:30:19 +0200",
"msg_from": "Grant Finnemore <gaf@ucs.co.za>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] renaming columns... danger?"
},
{
"msg_contents": "Subject: \t[SQL] renaming columns... danger?\n\n> I just discovered that doing an alter table ... alter\n> column (to rename a column) does not do a complete\n> rename throughout the database.\n\n> shouldn't rename update any index and key definitions?\n\n> I'm very frightened right now, because I'm rather\n> dependent upon my database right now. I don't like\n> the thought that my database is corrupt at the schema\n> level.\n> \nYes, I believe the same is true about trigger definitions and \nsuchlike. \nIn short - to do a rename on column I do a pg_dumpall and change \nall references of the name by hand :*(((\n\nBtw, is there a way to see what triggers are defined for particular \nfield? Or how to drop triggers, which (by default) are unnamed?\n\n\n",
"msg_date": "Fri, 27 Oct 2000 11:00:27 +0200",
"msg_from": "\"Emils Klotins\" <emils@grafton.lv>",
"msg_from_op": false,
"msg_subject": "Re: renaming columns... danger?"
},
{
"msg_contents": "As for the latest CVS source, it looks still we have problems\nregarding alter table rename column and pg_dump as Grant has\nmentioned. Results of pg_dump is attached.\n\ntest=# create table a ( aa serial primary key );\nNOTICE: CREATE TABLE will create implicit sequence 'a_aa_seq' for SERIAL column 'a.aa'\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for table 'a'\nCREATE\ntest=# alter TABLE a RENAME aa to new_aa;\nALTER\ntest=# \\q\n[t-ishii@srapc1474 current]$ pg_dump test > /tmp/aaa\n[t-ishii@srapc1474 current]$ dropdb test\nDROP DATABASE\n[t-ishii@srapc1474 current]$ createdb test\nCREATE DATABASE\n[t-ishii@srapc1474 current]$ psql test < /tmp/aaa\nUsing pager is off.\nYou are now connected as new user t-ishii.\nCREATE\nERROR: CREATE TABLE: column \"aa\" named in key does not exist\nUPDATE 53\nERROR: Relation 'a' does not exist\ninvalid command \\.\nBEGIN\nCREATE\nINSERT 18819 1\nUPDATE 1\nDROP\nCOMMIT\n setval \n--------\n 1\n(1 row)\n\n[t-ishii@srapc1474 current]$ psql test\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nUsing pager is off.\ntest=# \\dt\nNo relations found.\ntest=# \n",
"msg_date": "Sun, 07 Jan 2001 15:12:15 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] renaming columns... danger?"
},
{
"msg_contents": "> As for the latest CVS source, it looks still we have problems\n> regarding alter table rename column and pg_dump as Grant has\n> mentioned. Results of pg_dump is attached.\n\nSorry, an attachment was missing.",
"msg_date": "Sun, 07 Jan 2001 15:15:38 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] renaming columns... danger?"
},
{
"msg_contents": "At 15:15 7/01/01 +0900, Tatsuo Ishii wrote:\n>> As for the latest CVS source, it looks still we have problems\n>> regarding alter table rename column and pg_dump as Grant has\n>> mentioned. Results of pg_dump is attached.\n>\n>Sorry, an attachmet was missing.\n>\n\nI can reproduce this in 7.0.2 as well; it's because the PK attr name comes\nfrom the index relation attr names, not the original relation. I'll look\ninto alternative queries but would welcome suggestions.\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 07 Jan 2001 18:55:44 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] renaming columns... danger?"
}
] |
[
{
"msg_contents": "[To hackers this time]\n\nAt 12:11 27/10/00 +0900, Hiroshi Inoue wrote:\n>\n>For example,LIMIT ALL means LIMIT 1 for optimizer and means\n>no LIMIT for executor.\n>Comments ?\n>\n\nIt seems there's two possibilities:\n\n(a) You know you will only use a limited number of rows, but you are not\nsure exactly how many. In this case, I'd vote for a 'OPTIMIZE FOR FAST\nSTART' clause.\n\n(b) You really want all rows, in which case you should let the optimizer do\nit's stuff. If it fails to work well, then use either 'OPTIMIZE FOR TOTAL\nCOST' or 'OPTIMIZE FOR FAST START' to change the behaviour.\n\nISTM that LIMIT ALL is just the syntax for the default limit clause - and\nshould, if anything, be equivalent to 'OPTIMIZE FOR TOTAL COST'.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Oct 2000 21:20:43 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/nodes (copyfuncs.c\n\toutfuncs.c print.c)"
}
] |
[
{
"msg_contents": "\n[To hackers this time]\n\nAt 20:59 26/10/00 -0400, Tom Lane wrote:\n>Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> Yes I want to give optimizer a hint \"return first rows fast\".\n>> When Jan implemented LIMIT first,there was an option\n>> \"LIMIT ALL\" and it was exactly designed for the purpose.\n>\n>Well, we could make that work that way again, I think. \n\nI think that would be a *bad* idea. ISTM that the syntax is obtuse for the\nmeaning it is being given. The (mild) confusion in this thread is evidence\nof that, at least.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Oct 2000 21:21:23 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/nodes (copyfuncs.c\n\toutfuncs.c print.c)"
},
{
"msg_contents": "On Fri, Oct 27, 2000 at 09:21:23PM +1000, Philip Warner wrote:\n> \n> [To hackers this time]\n> \n> At 20:59 26/10/00 -0400, Tom Lane wrote:\n> >Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> Yes I want to give optimizer a hint \"return first rows fast\".\n> >> When Jan implemented LIMIT first,there was an option\n> >> \"LIMIT ALL\" and it was exactly designed for the purpose.\n> >\n> >Well, we could make that work that way again, I think. \n> \n> I think that would be a *bad* idea. ISTM that the syntax is obtuse for the\n> meaning it is being given. The (mild) confusion in this thread is evidence\n> of that, at least.\n> \n\nSynchronicity, man. I didn't see the beginning of this thread (not on\nCOMMITTERS) so I may be repeating things from there. \n\nI was recently cleaning out a stack of old trade-rags lying around, and\nsnipped an article out of a DB2 mag I've been getting. Very technical,\nand discusses the uses (and abuses) of OPTIMIZE FOR N ROWS, where N is\nan actual number. Discusses how the DB2 optimizer will use this hint to\ndecide if it should use an index to get the right order, even if it's a\nfull scan, and the total cost might be higher. I'll see if I can find it\nonline, if anyone's interested.\n\nThe original article is all in the context of cursors (and multi-gig\ntables), but I think LIMIT brings in many of the same optimization\nconsiderations.\n\nISTM that the most common use of LIMIT right now is to simulate a cursor\nto provide some state over the stateless HTTP protocol, no? So the LIMIT\nis not 'fast start' vs 'total cost': the webpage often allows the end user\nto select the batch size. At some batch size, 'total cost' wins over a\nsimplistic 'fast start' approach. 
And only the optimizer has any hope of\nfiguring out where that might be, as it will change with the exact query \nstructure.\n\nRoss\n-- \nOpen source code is like a natural resource, it's the result of providing\nfood and sunshine to programmers, and then staying out of their way.\n[...] [It] is not going away because it has utility for both the developers \nand users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.\n",
"msg_date": "Fri, 27 Oct 2000 12:41:50 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: [COMMITTERS] pgsql/src/backend/nodes (copyfuncs.c outfuncs.c\n\tprint.c)"
}
] |
[
{
"msg_contents": "Hiroshi and I had a discussion last night that needs to reach a wider\naudience than just the bystanders on pgsql-committers. Let me see if\nI can reconstruct the main points.\n\nIn 7.0, a LIMIT clause can appear in a DECLARE CURSOR, but it's ignored:\n\nplay=> select * from vv1;\n f1\n-------------\n 0\n 123456\n -123456\n 2147483647\n -2147483647\n 0\n(6 rows)\n\nplay=> begin;\nBEGIN\nplay=> declare c cursor for select * from vv1 limit 2;\nSELECT\nplay=> fetch 10 from c;\n f1\n-------------\n 0\n 123456\n -123456\n 2147483647\n -2147483647\n 0\n(6 rows)\n\nThe reason for this behavior is that LIMIT and the FETCH count are\nimplemented by the same mechanism (ExecutorRun's count parameter)\nand so FETCH has no choice but to override the LIMIT with its own\nargument.\n\nYesterday I reimplemented LIMIT as a separate plan node type, in order\nto make it work in views. A side effect of this is that ExecutorRun's\ncount parameter is now *only* used for FETCH, and therefore a LIMIT\nappearing in a DECLARE CURSOR does what IMHO it should do: you get\nthat many rows and no more from the cursor.\n\nregression=# begin;\nBEGIN\nregression=# declare c cursor for select * from vv1 limit 2;\nSELECT\nregression=# fetch 10 from c;\n f1\n--------\n 0\n 123456\n(2 rows)\n\nHiroshi was a little concerned about this change in behavior, and\nso the first order of business is whether anyone wants to defend the\nold way? IMHO it was incontrovertibly a bug, but ...\n\nThe second question is how the presence of a LIMIT clause ought to\naffect the planner's behavior. In 7.0, we taught the planner to\npay attention to LIMIT as an indicator whether it ought to prefer\nfast-start plans over lowest-total-cost plans. For example, consider\n\n\tSELECT * FROM tab ORDER BY col;\n\nand assume there's a b-tree index on col. Then the planner has two\npossible choices of plan: an indexscan on col, or a sequential scan\nfollowed by sort. 
The indexscan will begin delivering tuples right\naway, whereas the sort has to finish the sequential scan and perform\nthe sort before it can deliver the first tuple. OTOH the total cost\nto deliver the entire result is likely to be less for the sort plan\n(let's assume for this discussion that it is). So for the above\nquery the planner should and will choose the sort plan. But for\n\n\tSELECT * FROM tab ORDER BY col LIMIT 1;\n\nit will choose the indexscan plan because of the low startup cost.\nThis is implemented by pricing a query that uses LIMIT on the basis\nof linear interpolation between the startup and total costs, with the\ninterpolation point determined by the fraction of tuples we expect to\nretrieve.\n\nThis is all pretty clear and seems to work OK for stand-alone SELECT.\nBut what about a DECLARE CURSOR? The planner has no way to know how\nmuch of the cursor's result will actually be FETCHed by the user, so\nit's not clear how to use all this shiny new LIMIT planning mechanism\nfor a DECLARE CURSOR.\n\nWhat happens in 7.0 and current code is that for a DECLARE CURSOR,\nthe planner ignores any LIMIT clause and arbitrarily assumes that the\nuser will FETCH about 10% of the available data. Hence, the planning\nis done on the basis of least \"startup + 0.10*(total - startup)\" cost.\n\nIgnoring the limit clause was correct in 7.0, given the fact that the\nlimit wouldn't actually be used at runtime, but it's wrong now (unless\nI'm beaten down on the semantics change). Also, the 10% estimate is\nthe sort of compromise that's likely to satisfy nobody --- if you intend\nto fetch all the data, quite likely you want the least total cost,\nwhereas if you only want the first few rows, you probably want a plan\nbiased even more heavily towards startup cost at the expense of total\ncost.\n\nAfter thinking some more about yesterday's discussions, I propose that\nwe adopt the following planning behavior for cursors:\n\n1. 
If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\nbasis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be\njust as reasonable, if people want to argue about the exact number;\nperhaps a SET variable is in order?). 10% seems to be a reasonable\ncompromise between delivering tuples promptly and not choosing a plan\nthat will take forever if the user fetches the whole result.\n\n2. If DECLARE CURSOR contains a specific \"LIMIT n\" clause, plan on\nthe assumption that n tuples will be fetched. For small n this allows\nthe user to heavily bias the plan towards fast start. Since the LIMIT\nwill actually be enforced by the executor, the user cannot bias the\nplan more heavily than is justified by the number of tuples he's\nintending to fetch, however.\n\n3. If DECLARE CURSOR contains \"LIMIT ALL\", plan on the assumption that\nall tuples will be fetched, ie, select lowest-total-cost plan.\n\n(Note: LIMIT ALL has been in the grammar right along, but up to now\nit has been entirely equivalent to leaving out the LIMIT clause. This\nproposal essentially suggests allowing it to act as a planner hint that\nthe user really does intend to fetch all the tuples.)\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 12:18:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "LIMIT in DECLARE CURSOR: request for comments"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 12:18 27/10/00 -0400, Tom Lane wrote:\n>> 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n>> basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be\n>> just as reasonable, if people want to argue about the exact number;\n>> perhaps a SET variable is in order?).\n\n> SET sounds good; will this work on a per-connection basis?\n\nA SET variable would be connection-local, same as any other ...\n\n> I don't suppose you'd consider 'OPTIMIZE FOR TOTAL COST' and 'OPTIMIZE FOR\n> FAST START' optimizer hints?\n\nI don't much care for adding such syntax to DECLARE CURSOR, if that's\nwhat you're suggesting. LIMIT ALL would have the same effect as\n'OPTIMIZE FOR TOTAL COST' anyway. LIMIT 1 (or a small number) would\nhave the effect of 'OPTIMIZE FOR FAST START', but would constrain you\nto not fetch any more rows than that. If we had a SET variable then\nyou could twiddle that value to favor fast-start or total-cost concerns\nover a continuous range, without constraining how many rows you actually\nfetch from a LIMIT-less cursor.\n\n> Also, does the change you have made to the executor etc mean that\n> subselect-with-limit is now possible?\n\nThe executor will do it, but unless Kevin figures out how to fix the\ngrammar, you'll have to put the LIMIT into a view definition, not inline\nin a subquery. View-with-LIMIT does work as of today.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 17:33:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments "
},
{
"msg_contents": "At 12:18 27/10/00 -0400, Tom Lane wrote:\n>\n>1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n>basis of 10%-or-so fetch (I'd consider anywhere from 5% to 25% to be\n>just as reasonable, if people want to argue about the exact number;\n>perhaps a SET variable is in order?). 10% seems to be a reasonable\n>compromise between delivering tuples promptly and not choosing a plan\n>that will take forever if the user fetches the whole result.\n\nSET sounds good; will this work on a per-connection basis?\n\n\n>2. If DECLARE CURSOR contains a specific \"LIMIT n\" clause, plan on\n>the assumption that n tuples will be fetched. For small n this allows\n>the user to heavily bias the plan towards fast start. Since the LIMIT\n>will actually be enforced by the executor, the user cannot bias the\n>plan more heavily than is justified by the number of tuples he's\n>intending to fetch, however.\n\nFine.\n\n\n>3. If DECLARE CURSOR contains \"LIMIT ALL\", plan on the assumption that\n>all tuples will be fetched, ie, select lowest-total-cost plan.\n\nGood.\n\n\n>\n>Comments?\n>\n\nI don't suppose you'd consider 'OPTIMIZE FOR TOTAL COST' and 'OPTIMIZE FOR\nFAST START' optimizer hints?\n\nAlso, does the change you have made to the executor etc mean that\nsubselect-with-limit is now possible?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 28 Oct 2000 07:50:36 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments"
},
{
"msg_contents": "At 12:18 PM 10/27/00 -0400, Tom Lane wrote:\n\n>Hiroshi was a little concerned about this change in behavior, and\n>so the first order of business is whether anyone wants to defend the\n>old way? IMHO it was incontrovertibly a bug, but ...\n\nSure feels like a bug to me. Having it ignored isn't what I'd expect.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Fri, 27 Oct 2000 14:57:51 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments"
},
{
"msg_contents": "Tom Lane writes:\n\n> 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n> basis of 10%-or-so fetch\n\nI'd say that normally you're not using cursors because you intend to throw\naway 80% or 90% of the result set, but instead you're using it because\nit's convenient in your programming environment (e.g., ecpg). There are\nother ways of getting only some rows, this is not it.\n\nSo I think if you want to make optimization decisions based on cursors\nbeing used versus a \"normal\" select, then the only thing you can safely\ntake into account is the network roundtrip and client processing per\nfetch, but that might be as random as anything.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 31 Oct 2000 10:51:04 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments"
},
{
"msg_contents": "At 10:51 31/10/00 +0100, Peter Eisentraut wrote:\n>Tom Lane writes:\n>\n>> 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n>> basis of 10%-or-so fetch\n>\n>I'd say that normally you're not using cursors because you intend to throw\n>away 80% or 90% of the result set, but instead you're using it because\n>it's convenient in your programming environment (e.g., ecpg). There are\n>other ways of getting only some rows, this is not it.\n\nYes!\n\n\n>So I think if you want to make optimization decisions based on cursors\n>being used versus a \"normal\" select, then the only thing you can safely\n>take into account is the network roundtrip and client processing per\n>fetch, but that might be as random as anything.\n\nWhich is why I like the client being able to ask the optimizer for certain\nkinds of solutions *explicitly*.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 31 Oct 2000 22:43:37 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> 1. If DECLARE CURSOR does not contain a LIMIT, continue to plan on the\n>> basis of 10%-or-so fetch\n\n> I'd say that normally you're not using cursors because you intend to throw\n> away 80% or 90% of the result set, but instead you're using it because\n> it's convenient in your programming environment (e.g., ecpg). There are\n> other ways of getting only some rows, this is not it.\n\nI didn't say I was assuming that the user would only fetch 10% of the\nrows. Since what we're really doing is a linear interpolation between\nstartup and total cost, what this is essentially doing is favoring low\nstartup cost, but not to the complete exclusion of total cost. I think\nthat that describes the behavior we want for a cursor pretty well.\n\nIt remains to argue about what the relative weighting ought to be\n... which might be best answered by making it user-settable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Nov 2000 12:24:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT in DECLARE CURSOR: request for comments "
}
] |
[
{
"msg_contents": "\nAs a large part of you will have noticed by now, this past week has been\nkiller on the mailing lists. One of the web clients on that machine\ndecided to not warn us of one of their country's holidays, and, as a\nresult, we got hit with a deluge of hits, similar to a slashdot effect ...\n\nWe ordered, yesterday, a new Dual-PIII 733MHz server that we are moving\nall of the mail services over to, including the PostgreSQL mailing lists,\nso that they are separate from the web servers, so that this doesn't\nhappen again in the future. \n\nThe server is supposed to get in later today and installed between then\nand tomorrow. Then we'll start to migrate over the mail services ...\n\nIf a Dual-PIII 733MHz server dedicated to email can't handle the load,\nthen I'm going to be very afraid ...\n\nSorry for the inconvenience this is causing, and thanks for the patience ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Fri, 27 Oct 2000 13:40:51 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Mailing List Slowdowns ..."
}
] |
[
{
"msg_contents": "I can't type today....\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: Re: [HACKERS] Summary: what to do about INET/CIDR\nDate: Fri, 27 Oct 2000 15:09:36 -0500\nMessage-ID: <20001027150936.A16595@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.10i\nX-Mailer: Mutt http://www.mutt.org/\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: pgsql-hackers@posgresql.org\n\n* Tom Lane <tgl@sss.pgh.pa.us> [001027 15:07]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > OK, what I really meant was a way to coerce a CIDR entity to INET so \n> > that host() can work with a CIDR type to print all 4 octets. \n> \n> Hm. I don't see any really good reason why host() rejects CIDR input\n> in the first place. What's wrong with producing the host address\n> that corresponds to extending the CIDR network address with zeroes?\nAgreed. If we could do that, I'd be satisfied. \n\nThis is what started my tirade in the summer (trying to do an IP\nAllocation system). \n\n\n> \n> > Currently you can't coerce a CIDR type to INET. \n> \n> Well you can, but it doesn't *do* anything. One of the peculiarities\n> of these two types is that the cidr-vs-inet flag is actually stored\n> in the data value. The type-system differentiation between CIDR and\n> INET is a complete no-op for everything except initial entry of a value\n> (ie, conversion of a text string to CIDR or INET); all the operators\n> that care (which is darn few ... in fact it looks like host() is the\n> only one!) look right at the value to see which type they've been given.\n> So applying a type coercion may make the type system happy, but it\n> doesn't do a darn thing to the bits, and thus not to the behavior of\n> subsequent operators either. I have not yet figured out if that's a\n> good thing or a bad thing ...\nOIC. Hadn't looked that closely. What I want is a way to print all 4\noctets of a CIDR/INET entry at ALL times. 
\n\nLER\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 27 Oct 2000 15:11:09 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "(forw) Re: Summary: what to do about INET/CIDR"
}
] |
[
{
"msg_contents": "pgsql-hackers-owner@hub.org wrote:\n> \n> \"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> Anyway, the bottom line of all this rambling is that if you can get\n> rid of the distinction between SelectStmt and select_clause altogether,\n> that would be fine with me. You might consider looking at whether you\n> can write two nonterminals: a SELECT construct that has no outer parens,\n> and then an additional construct\n> \n> subselect: SelectStmt | '(' subselect ')'\n> \n> which would be used for all the sub-select nonterminals in SelectStmt\n> itself.\n\nI'm headed in that direction. I've been calling it 'subquery'.\n\n> \n> > OTOH, maybe we don't want NOT IN (((SELECT foo FROM bar))).\n> \n> If we can't do that then we're still going to get complaints, I think.\n> The original bug report in this thread was specifically that the thing\n> didn't like redundant parentheses; we should try to remove that\n> restriction in all contexts not just some.\n\nAll that being said, I'm not sure enough notice has been taken of one\naspect of the changes already in place, and likely to become more\npronounced. It may be okay with everybody, but I don't want it to be\na big surprise: queries may no longer begin with SELECT, but instead\nwith an arbitrary number of left parens. In some cases, the semantics\ngets lost in the syntax. Consider:\n\n(SELECT * INTO newtable FROM table1) UNION (SELECT * FROM table2);\n\nNotice the INTO? Doesn't this seem like an odd place for it, in what\nappears to be a subordinate query? Where else would it go? How would\nit grab you in an expression with five or more levels of parens?\nHow about five levels of parens and a complicated targetlist before\nyou get to the INTO?\n\nWhat I'm suggesting is that the parens be allowed only on the right\nhand side of the set operations. 
How does that strike you?\n\n> \n> regards, tom lane\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Fri, 27 Oct 2000 13:40:37 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] A rare error"
},
{
"msg_contents": "Kevin O'Gorman wrote:\n> \n> pgsql-hackers-owner@hub.org wrote:\n> >\n> > \"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> > Anyway, the bottom line of all this rambling is that if you can get\n> > rid of the distinction between SelectStmt and select_clause altogether,\n> > that would be fine with me. You might consider looking at whether you\n> > can write two nonterminals: a SELECT construct that has no outer parens,\n> > and then an additional construct\n> >\n> > subselect: SelectStmt | '(' subselect ')'\n> >\n> > which would be used for all the sub-select nonterminals in SelectStmt\n> > itself.\n> \n> I'm headed in that direction. I've been calling it 'subquery'.\n> \n> >\n> > > OTOH, maybe we don't want NOT IN (((SELECT foo FROM bar))).\n> >\n> > If we can't do that then we're still going to get complaints, I think.\n> > The original bug report in this thread was specifically that the thing\n> > didn't like redundant parentheses; we should try to remove that\n> > restriction in all contexts not just some.\n> \n> All that being said, I'm not sure enough notice has been taken of one\n> aspect of the changes already in place, and likely to become more\n> pronounced. It may be okay with everybody, but I don't want it to be\n> a big surprise: queries may no longer begin with SELECT, but instead\n> with an arbitrary number of left parens. In some cases, the semantics\n> gets lost in the syntax. Consider:\n> \n> (SELECT * INTO newtable FROM table1) UNION (SELECT * FROM table2);\n> \n> Notice the INTO? Doesn't this seem like an odd place for it, in what\n> appears to be a subordinate query? Where else would it go? 
How would\n> it grab you in an expression with five or more levels of parens?\n> How about five levels of parens and a complicated targetlist before\n> you get to the INTO?\n> \n\nThis just occurred to me: how would you sort the results of this query?\nThe path of least resistance from the way things work now would be most\nnon-obvious: put the ORDER BY on the leftmost query. It looks like this\n\n (SELECT * INTO newtable FROM table1 ORDER BY field1) UNION (SELECT * FROM\ntable2);\n\nAnd I have to say that's about the ugliest construct I've seen in\na pretty ugly language.\n\n> What I'm suggesting is that the parens be allowed only on the right\n> hand side of the set operations. How does that strike you?\n\nAnyway, that's the direction I'm going in now, but as always, I solicit\ncomments.\n\n> \n> >\n> > regards, tom lane\n> \n> --\n> Kevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\n> Permanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\n> At school: mailto:kogorman@cs.ucsb.edu\n> Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> \n> \"There is a freedom lying beyond circumstance,\n> derived from the direct intuition that life can\n> be grounded upon its absorption in what is\n> changeless amid change\"\n> -- Alfred North Whitehead\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Fri, 27 Oct 2000 13:58:57 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] A rare error"
},
{
"msg_contents": "> (SELECT * INTO newtable FROM table1) UNION (SELECT * FROM table2);\nPossibly a silly (and definitely not standards-conformant) suggestion:\n\nMaybe grammar should be amended to allow for\n(SELECT * FROM table1) UNION (SELECT * FROM table2) INTO newtable\n\ni.e. \n\nunion_expr:\n (select_expr) union (union_expr) [into into_table]\n\n> Notice the INTO? Doesn't this seem like an odd place for it, in what\n> appears to be a subordinate query? Where else would it go? How would\n> it grab you in an expression with five or more levels of parens?\n> How about five levels of parens and a complicated targetlist before\n> you get to the INTO?\n> \n> What I'm suggesting is that the parens be allowed only on the right\n> hand side of the set operations. How does that strike you?\n> \n> > \n> > regards, tom lane\n> \n> \n\n",
"msg_date": "Fri, 27 Oct 2000 23:57:14 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] A rare error"
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> All that being said, I'm not sure enough notice has been taken of one\n> aspect of the changes already in place, and likely to become more\n> pronounced. It may be okay with everybody, but I don't want it to be\n> a big surprise: queries may no longer begin with SELECT, but instead\n> with an arbitrary number of left parens.\n\nThat's no surprise, because it's been true for a long time. It's\ncertainly true in the 6.5 grammar, which is the oldest I have on hand.\n\n> In some cases, the semantics gets lost in the syntax. Consider:\n\n> (SELECT * INTO newtable FROM table1) UNION (SELECT * FROM table2);\n\n> Notice the INTO? Doesn't this seem like an odd place for it, in what\n> appears to be a subordinate query? Where else would it go? How would\n> it grab you in an expression with five or more levels of parens?\n> How about five levels of parens and a complicated targetlist before\n> you get to the INTO?\n\nAgreed, it's pretty ugly. This one is only partially SQL92's fault,\nsince it defines SELECT ... INTO for just a limited context:\n\n <select statement: single row> ::=\n SELECT [ <set quantifier> ] <select list>\n INTO <select target list>\n <table expression>\n\n(<select target list> here appears to mean a list of local variables in\na calling program, a la ECPG, and doesn't really have anything to do\nwith the table-destination semantics that Postgres puts on the\nconstruct. But I digress.) The above restricted form of SELECT does\nnot admit UNION/INTERSECT/EXCEPT constructs at the top level. Postgres\nhas generalized this to allow INTO <target> in a UNION/etc construct,\nwhich means the word SELECT is not necessarily going to be the very\nfirst thing you see. We do require the INTO to be in the leftmost\nprimitive SELECT, so the only thing you can really see in front of\n\"SELECT <selectlist> INTO\" is some number of left parentheses. 
To me\nthe potential hairiness of the <selectlist> seems like a much bigger\nreadability issue than the leading parens --- but we got that part of\nthe syntax straight from SQL92.\n\n> What I'm suggesting is that the parens be allowed only on the right\n> hand side of the set operations. How does that strike you?\n\nWill not do, first because EXCEPT is not symmetric, and second because\nSQL92 does not describe any such limitation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 01:15:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] A rare error "
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> This just occurred to me: how would you sort the results of this query?\n> The path of least resistance from the way things work now would be most\n> non-obvious: put the ORDER BY on the leftmost query. It looks like this\n> (SELECT * INTO newtable FROM table1 ORDER BY field1) UNION (SELECT * FROM\n> table2);\n> And I have to say that's about the ugliest construct I've seen in\n> a pretty ugly language.\n\nNo. This is not SQL92: the spec is perfectly definite that it does not\nallow such a construct. What it allows is\n\n\tSELECT ...foo... UNION SELECT ...bar... ORDER BY baz\n\nand here the ORDER BY is to be interpreted as ordering the results of\nthe UNION, not the results of the righthand sub-SELECT. This is one\nof the cases that you'll need to be careful to get right when\nrejiggering the syntax.\n\nPurely as an implementation issue, the current gram.y code drills down\nto find the leftmost sub-SELECT and attaches the outer-level ORDER BY\nclause to that Select node. analyze.c later extracts the ORDER BY and\nattaches it to a top-level Query node that doesn't correspond to any\nnode existing in the gram.y output. That's all behind the scenes,\nhowever, and shouldn't be exposed to the tender eyes of mere mortal\nusers.\n\nAFAICS, the input\n (SELECT * FROM table1 ORDER BY field1) UNION (SELECT * FROM table2);\nshould either be rejected (as current sources and all prior releases\nwould do) or else treat the ORDER BY as ordering the leftmost subselect\nbefore it feeds into the UNION. There is no point in such an ORDER BY\nby itself, since UNION will feel free to reorder the tuples --- but\nOTOH something like\n (SELECT ... ORDER BY ... LIMIT 1) UNION (SELECT ...)\nseems entirely sensible and useful to me.\n\nIn short: there is a considerable difference between\n\n\t(SELECT ...foo... UNION SELECT ...bar...) ORDER BY baz\n\n\tSELECT ...foo... UNION (SELECT ...bar... 
ORDER BY baz)\n\n\t(SELECT ...foo... ORDER BY baz) UNION SELECT ...bar...\n\nand any attempt to allow ORDER BY on subqueries will have to be\ncareful to keep these straight. This may well mean that you need\nto rejigger the output structures of gram.y as well as the grammar\nitself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 01:32:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] A rare error "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> > One thing I noticed that may surprise: the \"%left UNION\" and such that\n> > appear in the source don't seem to do anything. I think our syntax\n> > doesn't look like operators to yacc, and I suspect it's the opt_all\n> > that's doing it. That part of yacc I don't understand.\n> \n> Hmm, that should work. My reading of the bison manual is that the\n> precedence of a production is taken from the rightmost terminal symbol\n> in the production, so\n> \n> | select_clause UNION opt_all select_clause\n> | select_clause INTERSECT opt_all select_clause\n> | select_clause EXCEPT opt_all select_clause\n> \n> should have the correct relative precedences.\n> \n> Don't you get shift/reduce errors if you remove those precedence specs?\n> I'd expect the <select_clause> grammar to be ambiguous without operator\n> precedence specs ...\n> \n> regards, tom lane\n\nYah. I would have thought so too. However, when I comment out the\ntwo %left lines (being careful not to dusturb line numbers) I get the\nabsolutely identical gram.c output. So at least for those two things\nthe associativity does nothing at all. I'm inclined to leave them commented\nout, so they don't mislead.\n\nOf course, I was pretty sure the syntax there was unambiguous in any\ncase, so I'm not surprised there's no error; come to think of it, maybe\nthat's why %left has no effect. There has to be something going on,\nbecause if I comment out the next line (the one with JOIN in it),\nI suddenly get 32 shift/reduce errors.\n\nThis brings up another point. I'm still very new at reading the \nSQL92 spec, so I need help being sure I've got it right. If we're going\nto want precedence for these operators, I can do it in the syntax, and\nit's only a little work. I don't see precedence in SQL92; set operations\nseem to be left associative of equal priority. 
Be careful what you\nask for, you'll likely get it.\n\nAnd appropos of another comment you made, when we decide how it's going\nto be, we should have a bunch more things put in the regression tests,\nnot just UNIONs, to make sure it doesn't change unnoticed.\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Sat, 28 Oct 2000 11:48:14 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Re: syntax"
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n>> Don't you get shift/reduce errors if you remove those precedence specs?\n>> I'd expect the <select_clause> grammar to be ambiguous without operator\n>> precedence specs ...\n\n> Yah. I would have thought so too. However, when I comment out the\n> two %left lines (being careful not to dusturb line numbers) I get the\n> absolutely identical gram.c output. So at least for those two things\n> the associativity does nothing at all. I'm inclined to leave them commented\n> out, so they don't mislead.\n\nNot to put too fine a point on it, but are you talking about the\noriginal grammar or your modified one? Your modified one is erroneous\nbecause it will always associate successive UNION/INTERSECT/EXCEPT\noperators left-to-right; this does not meet the SQL spec which insists\nthat INTERSECT binds more tightly than the other two. Given that, I'm\nnot surprised that the precedences have no effect.\n\n> I don't see precedence in SQL92; set operations\n> seem to be left associative of equal priority.\n\nBetter take another look at the <query expression>, <query term>,\n<query primary> hierarchy then...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 16:41:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: syntax "
},
{
"msg_contents": "> Not to put too fine a point on it, but are you talking about the\n> original grammar or your modified one? Your modified one is erroneous\n> because it will always associate successive UNION/INTERSECT/EXCEPT\n> operators left-to-right; this does not meet the SQL spec which insists\n> that INTERSECT binds more tightly than the other two. Given that, I'm\n> not surprised that the precedences have no effect.\n> \n> > I don't see precedence in SQL92; set operations\n> > seem to be left associative of equal priority.\n> \n> Better take another look at the <query expression>, <query term>,\n> <query primary> hierarchy then...\n\nIs there something here to patch? Hmm, I don't see anything... I will\ncome back later. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 16:54:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: syntax"
}
] |
[
{
"msg_contents": "Hi\n\nI have tried reformatting dates in many ways but every thing I have\ntried fails.\n\nI used insert and update to create and change dates with different\nstyles and\nwas successful.\n\nIs this a known bug?\n\nIs their a fix for this bug?\n\nor\n\nDo I have to import date variables using insert/update statements?\n\nI built the binaries from Redhat's postgresql-7.0.2-2.src.rpm package\nand installed\nall the binary packages that were built:\n\npostgresql-7.0.2-2.i386.rpm\npostgresql-devel-7.0.2-2.i386.rpm\npostgresql-jdbc-7.0.2-2.i386.rpm\npostgresql-odbc-7.0.2-2.i386.rpm\npostgresql-perl-7.0.2-2.i386.rpm\npostgresql-python-7.0.2-2.i386.rpm\npostgresql-server-7.0.2-2.i386.rpm\npostgresql-tcl-7.0.2-2.i386.rpm\npostgresql-test-7.0.2-2.i386.rpm\npostgresql-tk-7.0.2-2.i386.rpm\n\nThe reason I built the binaries from the source code, was because the\nftp server\nat redhat was overloaded and was only lucky enough to get on after\nnumerous attempts.\n\nI have been building using linux since 1995, and postgresql since 1997.\n\nThis is the first major problem I have had.\n\nI have only requested help a couple of times, but do contribute to the\nlist on occasions.\n\nGuy\n",
"msg_date": "Fri, 27 Oct 2000 14:43:56 -0600",
"msg_from": "Guy Fraser <guy@incentre.net>",
"msg_from_op": true,
"msg_subject": "Can't import date using copy"
},
{
"msg_contents": "Guy Fraser <guy@incentre.net> writes:\n> I have tried reformatting dates in many ways but every thing I have\n> tried fails.\n\nAs a rule, COPY will interpret incoming dates according to the current\nsetting of the DATESTYLE variable. It'd help if you explained the date\nformat you are trying to read and mentioned which datestyles you've\ntried to use ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 00:03:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can't import date using copy "
},
{
"msg_contents": "On Fri, 27 Oct 2000, Guy Fraser wrote:\n\n> Hi\n> \n> I have tried reformatting dates in many ways but every thing I have\n> tried fails.\n\nI've been able to use copy to import date values in a simple test:\ncreate table dt (a date);\ncopy dt from stdin;\n01-12-2000\n12-11-1999\n\\.\n\nput the date values in fine.\n\nWhat are you trying specifically, and what are you getting as an error?\n\n",
"msg_date": "Fri, 27 Oct 2000 21:12:59 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Can't import date using copy"
}
] |
[
{
"msg_contents": "(This is mostly directed at Vadim, but kibitzing is welcome.)\n\nHere's what I plan to do to make DROP TABLE rollbackable and clean up\nthe handling of CREATE TABLE rollback. Comments?\n\n\nOverview:\n\n1. smgrcreate() will create the file for the relation same as it does now,\nand will add the rel's RelFileNode information to an smgr-private list of\nrels created in the current transaction.\n\n2. smgrunlink() normally will NOT immediately delete the file; instead it\nwill perform smgrclose() and then add the rel's RelFileNode information to\nan smgr-private list of rels to be deleted at commit. However, if the\nfile appears in the list created by smgrcreate() --- ie, the rel is being\ncreated and deleted in the same xact --- then we can delete it\nimmediately. In this case we remove the file from the smgrcreate list\nand do not put it on the unlink list.\n\n3. smgrcommit() will delete all the files mentioned in the list created\nby smgrunlink, then discard both lists.\n\n4. smgrabort() will delete all the files mentioned in the list created\nby smgrcreate, then discard both lists.\n\nPoints 1 and 4 will replace the existing relcache-based mechanism for\ndeleting files created in the current xact when the xact aborts.\n\n\nVarious details:\n\nTo support deleting files at xact commit/abort, we will need something\nlike an \"mdblindunlink\" entrypoint to md.c. I am inclined to simply\nredefine mdunlink to take a RelFileNode instead of a complete Relation,\nrather than supporting two entrypoints --- I don't think there'll be any\nfuture use for the existing mdunlink. Objections?\n\nbufmgr.c's ReleaseRelationBuffers drops any dirty buffers for the target\nrel, and therefore it must NOT be called inside the transaction (else,\nrollback would mean we'd lost data). I am inclined to let it continue\nto behave that way, but to call it from smgrcommit/smgrabort, not from\nanywhere else. 
This would mean changing its API to take a RelFileNode,\nbut that seems no big problem. This way, dirty buffers for a doomed\nrelation will be allowed to live until transaction commit, in the hopes\nthat we will be able to discard them unwritten.\n\nWill remove notices in DROP TABLE etc. warning that these operations\nare not rollbackable. Note that CREATE/DROP DATABASE is still not\nrollback-able, and so those two ops will continue to elog(ERROR) when\ncalled in a transaction block. Ditto for VACUUM; probably also ditto\nfor REINDEX, though I haven't looked closely at that yet.\n\nThe temp table name mapper will need to be modified so that it can\nundo all current-xact changes to its name mapping list at xact abort.\nCurrently I think it only handles undoing additions, not\ndeletions/renames. This does not need to be WAL-aware, does it?\n\n\nWAL:\n\nAFAICS, things will behave properly if calls to smgrcreate/smgrunlink\nare logged as WAL events. For redo, they are executed just the same\nas normal, except they shouldn't complain if the target file already\nexists (or already doesn't exist, for unlink). Undo of smgrcreate\nis just immediate mdunlink; undo of smgrunlink is a no-op.\n\nI have not studied the WAL code enough to be prepared to add the\nlogging/undo/redo code, and it looks like you haven't implemented that\nanyway yet for smgr.c, so I will leave that part to you, OK?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2000 17:21:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Proposal for DROP TABLE rollback mechanism"
},
{
"msg_contents": "> 1. smgrcreate() will create the file for the relation same as it does now,\n> and will add the rel's RelFileNode information to an smgr-private list of\n> rels created in the current transaction.\n>\n> 2. smgrunlink() normally will NOT immediately delete the file; instead it\n> will perform smgrclose() and then add the rel's RelFileNode information to\n ^^^^^^^^^^^^^^^\nSeems that smgrclose still need in Relation (where rd_fd lives and this is\nbad) and you're going to put just file node to smgrunlink (below). Shouldn't\nrelation be removed from relcache (and smgrclose called from there)\nbefore smgrunlink?\n\n> an smgr-private list of rels to be deleted at commit. However, if the\n> file appears in the list created by smgrcreate() --- ie, the rel is being\n> created and deleted in the same xact --- then we can delete it\n> immediately. In this case we remove the file from the smgrcreate list\n> and do not put it on the unlink list.\n\n(This wouldn't work for savepoints but ok for now.)\n\n> 3. smgrcommit() will delete all the files mentioned in the list created\n> by smgrunlink, then discard both lists.\n>\n> 4. smgrabort() will delete all the files mentioned in the list created\n> by smgrcreate, then discard both lists.\n>\n> Points 1 and 4 will replace the existing relcache-based mechanism for\n> deleting files created in the current xact when the xact aborts.\n>\n>\n> Various details:\n>\n> To support deleting files at xact commit/abort, we will need something\n> like an \"mdblindunlink\" entrypoint to md.c. I am inclined to simply\n> redefine mdunlink to take a RelFileNode instead of a complete Relation,\n> rather than supporting two entrypoints --- I don't think there'll be any\n> future use for the existing mdunlink. Objections?\n\nNo one. Actually I would like to see all smgr entries taking RelFileNode\ninstead of Relation. 
This would require smgr cache to map nodes to fd-es.\nHaving this we could get rid of all blind smgr entries: smgropen\nwould put file node & fd into smgr cache and all other smgrmethods\nwould act as blind ones do now - no file node found in cache then\nopen file, performe op, close file.\nBut probably it's too late to implement this now.\n\n> bufmgr.c's ReleaseRelationBuffers drops any dirty buffers for the target\n> rel, and therefore it must NOT be called inside the transaction (else,\n> rollback would mean we'd lost data). I am inclined to let it continue\n> to behave that way, but to call it from smgrcommit/smgrabort, not from\n> anywhere else. This would mean changing its API to take a RelFileNode,\n> but that seems no big problem. This way, dirty buffers for a doomed\n> relation will be allowed to live until transaction commit, in the hopes\n> that we will be able to discard them unwritten.\n\nMmmm, why not call FlushRelationBuffers? Calling bufmgr from smgr\ndoesn't look like right thing. ?\n\n> Will remove notices in DROP TABLE etc. warning that these operations\n> are not rollbackable. Note that CREATE/DROP DATABASE is still not\n> rollback-able, and so those two ops will continue to elog(ERROR) when\n> called in a transaction block. Ditto for VACUUM; probably also ditto\n> for REINDEX, though I haven't looked closely at that yet.\n>\n> The temp table name mapper will need to be modified so that it can\n> undo all current-xact changes to its name mapping list at xact abort.\n> Currently I think it only handles undoing additions, not\n> deletions/renames. This does not need to be WAL-aware, does it?\n>\n>\n> WAL:\n>\n> AFAICS, things will behave properly if calls to smgrcreate/smgrunlink\n> are logged as WAL events. For redo, they are executed just the same\n\nYes, there will be logging for smgrcreate, but not for smgrunlink because\nof we'll make real unlinking after commit. 
All what is needed for WAL\nis list of file nodes to remove - I need to put this list into commit log\nrecord to ensure that files are removed on redo and need in ability\nto remove file immediately in this case.\n\n> as normal, except they shouldn't complain if the target file already\n> exists (or already doesn't exist, for unlink). Undo of smgrcreate\n> is just immediate mdunlink; undo of smgrunlink is a no-op.\n>\n> I have not studied the WAL code enough to be prepared to add the\n> logging/undo/redo code, and it looks like you haven't implemented that\n> anyway yet for smgr.c, so I will leave that part to you, OK?\n\nUnfortunately, there will be no undo in 7.1 -:(\nI've found that to undo index updates we would need in either compensation\nrecords or in xmin/cmin in index tuples. So, we'll still live with dust in\nstorage -:(\nRedo is much easy.\n\nVadim\n\n\n",
"msg_date": "Sat, 28 Oct 2000 10:38:22 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism"
},
{
"msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n>> 2. smgrunlink() normally will NOT immediately delete the file; instead it\n>> will perform smgrclose() and then add the rel's RelFileNode information to\n> ^^^^^^^^^^^^^^^\n> Seems that smgrclose still need in Relation (where rd_fd lives and this is\n> bad) and you're going to put just file node to smgrunlink (below). Shouldn't\n> relation be removed from relcache (and smgrclose called from there)\n> before smgrunlink?\n\nNo, the way the existing higher-level code works is first to call\nsmgrunlink (NOT smgrclose) and then remove the relcache entry. I don't\nsee a need to change that. In the interval where we're waiting to\ncommit, there will be no relcache entry, only a RelFileNode value\nsitting in smgr's list of files to delete at commit.\n\n> Actually I would like to see all smgr entries taking RelFileNode\n> instead of Relation.\n\nI agree, but I think that's a project for a future release. Not enough\ntime for it for 7.1.\n\n>> bufmgr.c's ReleaseRelationBuffers drops any dirty buffers for the target\n>> rel, and therefore it must NOT be called inside the transaction (else,\n>> rollback would mean we'd lost data). I am inclined to let it continue\n>> to behave that way, but to call it from smgrcommit/smgrabort, not from\n>> anywhere else. This would mean changing its API to take a RelFileNode,\n>> but that seems no big problem. This way, dirty buffers for a doomed\n>> relation will be allowed to live until transaction commit, in the hopes\n>> that we will be able to discard them unwritten.\n\n> Mmmm, why not call FlushRelationBuffers? Calling bufmgr from smgr\n> doesn't look like right thing. ?\n\nYes, it's a little bit ugly, but if we call FlushRelationBuffers then we\nwill likely be doing some useless writes (to flush out pages that we are\nonly going to throw away anyway). 
If we leave the buffers alone till\ncommit, then we'll only write out pages if we need to recycle a buffer\nfor another use during that transaction.\n\nAlso, I don't feel comfortable with the idea of doing\nFlushRelationBuffers mid-transaction and then relying on the buffer\ncache to still be empty of pages for that relation much later on when\nwe finally commit. Sure, it *should* be empty, but I'll be happier\nif we flush the buffer cache immediately before deleting the file.\n\nWhat might make sense is to make a pass over the buffer cache at the\ntime of DROP (inside the transaction) to make sure there are no pinned\nbuffers for the rel --- if so, we want to elog() during the transaction\nnot after commit. We could also release any non-dirty buffers at\nthat point. Then after commit we know we don't care about the dirty\nbuffers anymore, so we come back and discard them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 13:52:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism "
},
{
"msg_contents": "> > Mmmm, why not call FlushRelationBuffers? Calling bufmgr from smgr\n> > doesn't look like right thing. ?\n> \n> Yes, it's a little bit ugly, but if we call FlushRelationBuffers then we\n> will likely be doing some useless writes (to flush out pages that we are\n> only going to throw away anyway). If we leave the buffers alone till\n> commit, then we'll only write out pages if we need to recycle a buffer\n> for another use during that transaction.\n\nBTW, why do we force buffers to disk in FlushRelationBuffers at all?\nSeems all what is required is to flush them *from* pool, not *to* disk\nimmediately. In WAL bufmgr version (temporary in xlog_bufmgr.c)\nI've changed this. Actually I wouldn't worry about these writes at all -\ndrop relation is rare case, - but ok.\n\n> Also, I don't feel comfortable with the idea of doing\n> FlushRelationBuffers mid-transaction and then relying on the buffer\n> cache to still be empty of pages for that relation much later on when\n> we finally commit. Sure, it *should* be empty, but I'll be happier\n> if we flush the buffer cache immediately before deleting the file.\n\n(It *must* be empty -:) Sorry, I don't understand \"*immediately* before\"\nin multi-user environment. Relation must be excl locked and this must\nbe only guarantee for us, not time of doing anything with this relation.)\n\n> What might make sense is to make a pass over the buffer cache at the\n> time of DROP (inside the transaction) to make sure there are no pinned\n> buffers for the rel --- if so, we want to elog() during the transaction\n> not after commit. We could also release any non-dirty buffers at\n> that point. Then after commit we know we don't care about the dirty\n> buffers anymore, so we come back and discard them.\n\nPlease note that there is xlog_bufmgr.c If you'll add/change something in\nbufmgr please let me know later.\n\nVadim\n\n\n",
"msg_date": "Sat, 28 Oct 2000 11:40:19 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism "
},
{
"msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> BTW, why do we force buffers to disk in FlushRelationBuffers at all?\n> Seems all what is required is to flush them *from* pool, not *to* disk\n> immediately.\n\nGood point. Seems like it'd be sufficient to do a standard async write\nrather than write + fsync.\n\nWe'd still need some additional logic at commit time, however, because\nwe want to make sure there are no BufferDirtiedByMe bits set for the\nfile we're about to delete...\n\n>> Also, I don't feel comfortable with the idea of doing\n>> FlushRelationBuffers mid-transaction and then relying on the buffer\n>> cache to still be empty of pages for that relation much later on when\n>> we finally commit. Sure, it *should* be empty, but I'll be happier\n>> if we flush the buffer cache immediately before deleting the file.\n\n> (It *must* be empty -:)\n\nIn the absence of bugs, yes, it must be empty for the case where we\nare committing a DROP. But I want a safety check to catch those bugs.\nBesides, what of the case where we are aborting a CREATE? There must be\na bufmgr call that will allow us to ensure there are no buffers left for\nthe relation we intend to delete.\n\nHow about this: change FlushRelationBuffers so that it does standard\nasync writes for dirty buffers and then removes all the rel's buffers\nfrom the pool. This is invoked inside the transaction, same as now.\nMake DROP TABLE call this routine, *not* RemoveRelationBuffers.\nThen call RemoveRelationBuffers from smgr during transaction commit or\nabort. In the commit case, there really shouldn't be any buffers for\nthe rel, so we can emit an elog NOTICE (it's too late for ERROR, no?)\nif we find any. But in the abort case, we'd not be surprised to find\nbuffers, even dirty buffers, and we just want to throw 'em away. 
This\nwould also be the place to clean out the BufferDirtiedByMe state.\n\nIf you don't like the smgr->bufmgr call, the reason for that is we're\nasking smgr to keep the list of pending relation deletes. We could make\na new module, logically above smgr and bufmgr, to keep that list\ninstead. But I don't think it's worth the trouble.\n\n> Please note that there is xlog_bufmgr.c If you'll add/change something in\n> bufmgr please let me know later.\n\nWill keep you informed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 16:57:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism "
},
{
"msg_contents": "> > BTW, why do we force buffers to disk in FlushRelationBuffers at all?\n> > Seems all what is required is to flush them *from* pool, not *to* disk\n> > immediately.\n>\n> Good point. Seems like it'd be sufficient to do a standard async write\n> rather than write + fsync.\n>\n> We'd still need some additional logic at commit time, however, because\n> we want to make sure there are no BufferDirtiedByMe bits set for the\n> file we're about to delete...\n\nNote that there is no BufferDirtiedByMe in WAL bufmgr version.\n\n> How about this: change FlushRelationBuffers so that it does standard\n> async writes for dirty buffers and then removes all the rel's buffers\n\n^^^^^^^^^^^^^^^^^^^^^^^^^\nWhen it's used from vacuum no reason to do this.\n\n> from the pool. This is invoked inside the transaction, same as now.\n> Make DROP TABLE call this routine, *not* RemoveRelationBuffers.\n> Then call RemoveRelationBuffers from smgr during transaction commit or\n> abort. In the commit case, there really shouldn't be any buffers for\n> the rel, so we can emit an elog NOTICE (it's too late for ERROR, no?)\n\n^^^^^^^^^^^^^^^^^^^^^\nSure, but good time for Assert -:)\n\n> if we find any. But in the abort case, we'd not be surprised to find\n> buffers, even dirty buffers, and we just want to throw 'em away. This\n> would also be the place to clean out the BufferDirtiedByMe state.\n\nBTW, currently smgrcommit is called twice on commit *before* xact\nstatus in pg_log updated and so you can't use it to remove files. In\nWAL version smgrcommit isn't called at all now but we easy can\nadd smgrcommit call after commit record is logged.\n\nVadim\n\n\n",
"msg_date": "Sat, 28 Oct 2000 15:39:58 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism "
},
{
"msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Please note that there is xlog_bufmgr.c If you'll add/change something in\n> bufmgr please let me know later.\n\nPer your request: I've changed bufmgr.c. I think I made appropriate\nchanges in xlog_bufmgr, but please check. The changes were:\n\n1. Modify FlushRelationBuffers to do plain write, not flush (no fsync)\nof dirty buffers. This was per your suggestion. FlushBuffer() now\ntakes an extra parameter indicating whether fsync is wanted. I think\nthis change does not affect xlog_bufmgr at all.\n\n2. Rename ReleaseRelationBuffers to DropRelationBuffers to make it more\nclear what it's doing.\n\n3. Add a DropRelFileNodeBuffers, which is just like DropRelationBuffers\nexcept it takes a RelFileNode argument. This is used by smgr to ensure\nthat the buffer cache is clear of buffers for a rel about to be deleted.\n\n4. Update comments about usage of DropRelationBuffers and\nFlushRelationBuffers. \n\nRollback of DROP TABLE now works in non-WAL code, and seems to work in\nWAL code too. I did not add WAL logging, because I'm not quite sure\nwhat to do, so rollforward probably does the wrong thing. Could you\ndeal with that part? smgr.c is the place that keeps the list of what\nto delete at commit or abort.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Nov 2000 17:23:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for DROP TABLE rollback mechanism "
}
] |
[
{
"msg_contents": "Picking up from a discussion several months back, the build now uses the\n-rpath option (or -Wl,-R or whatever yours uses) to store the location of\nthe shared libraries into the executables and the shared libraries\nthemselves. That means that the LD_LIBRARY_PATH/ld.so.conf thing should\nno longer be necessary.\n\nWhen making a binary package you might not want to use this. Use\n\"configure --disable-rpath\" to disable it.\n\nDoesn't work on all platforms, though. (OTOH, if you're using hpux,\nosf/cc, or irix5 then this is old news for you, but now it's a feature\nacross the board.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 02:20:43 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Notice: rpath in use"
}
] |
[
{
"msg_contents": "I'm back a bit from the tip of the CVS tree, so this might not\nbe current, but as of around 8 October, installing as root gets\nin the way of later operations. I think this is an artifact\nof having Perl in the build.\n\nI did a 'su root' and a 'make install', then as my normal user\nself, attempted to do the regression tests. They failed and\nleft the attached file in install.log. It points at a problem\nwith permissions on a file left behind by the 'make install'\nin the Perl stuff.\n\nBTW, I'm back from the tip because of instability when I started\nlooking at this stuff. Is it stable enough to install and run\nnow?\n\n++ kevin\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead",
"msg_date": "Fri, 27 Oct 2000 17:58:23 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Problem with installing as root"
},
{
"msg_contents": "Kevin O'Gorman writes:\n\n> I did a 'su root' and a 'make install', then as my normal user\n> self, attempted to do the regression tests. They failed and\n> left the attached file in install.log. It points at a problem\n> with permissions on a file left behind by the 'make install'\n> in the Perl stuff.\n\nThe Perl interface build is a big, ugly hack. I'm going to modify the\nregression tests to omit the Perl directory when creating its temporary\ninstall, but replacing the hack is still a bonus project for someone.\n\n> BTW, I'm back from the tip because of instability when I started\n> looking at this stuff. Is it stable enough to install and run\n> now?\n\nProbably not wildly more stable than, say, a week ago.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 16:40:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with installing as root"
}
] |
[
{
"msg_contents": "Okay, here's my attempt at fixing the problems with parentheses in\nsubqueries. It passes the normal 'runcheck' tests, and I've tried\na few simple things like \n select 1 as foo union (((((select 2))))) order by foo;\n\nThere are a few things that it doesn't do that have been talked \nabout here at least a little:\n\n1) It doesn't allow things like \"IN(((select 1)))\" -- the select\nhere has to be at the top level. This is not new.\n\n2) It does NOT preserve the odd syntax I found when I started looking\nat this, where a SELECT statement could begin with parentheses. Thus,\n (SELECT a from foo) order by a;\nfails.\n\nI have preserved the ability, used in the regression tests, to\nhave a single select statement in what appears to be a RuleActionMulti\n(but wasn't -- the parens were part of select_clause syntax).\nIn my version, this is a special form.\n\nThis may cause some discussion: I have differentiated the two kinds\nof RuleActionMulti. Perhaps nobody knew there were two kinds, because\nI don't think the second form appears in the regression tests. This\none uses square brackets instead of parentheses, but originally was\notherwise the same as the one in parentheses. In this version of\ngram.y, the square bracket form treats SELECT statements the same\nas the other allowed statements. As discussed before on this list,\npsql cannot make sense out of the results of such a thing, but an\napplication might. 
And I have designs on just such an application.\n\n++ kevin\n\n\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead",
"msg_date": "Fri, 27 Oct 2000 18:11:00 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Gram.y patches for better parenthesis handling."
},
{
"msg_contents": "\"Kevin O'Gorman\" <kogorman@pacbell.net> writes:\n> 2) It does NOT preserve the odd syntax I found when I started looking\n> at this, where a SELECT statement could begin with parentheses. Thus,\n> (SELECT a from foo) order by a;\n> fails.\n\nUm, as a general rule that's not an acceptable limitation. Consider\n\n\t(SELECT foo EXCEPT SELECT bar) INTERSECT SELECT baz;\n\nWithout parens this will mean something quite different, since\nINTERSECT has higher precedence than EXCEPT.\n\nAlso, a leading paren is clearly legal according to SQL92 --- trace\nfor example the productions\n <direct select statement: multiple rows>\n <query expression>\n <non-join query expression>\n <non-join query term>\n <non-join query primary> ::=\n <left paren> <non-join query expression> <right paren>\n\n(UNION/EXCEPT structures are <non-join query expression> in this\nhierarchy.)\n\nThe reason that making this grammar yacc-compatible is so hard is\nprecisely that leading parens must sometimes be part of the SELECT\nstructure, whereas extraneous parens need to be kept out of it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 00:55:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling. "
},
{
"msg_contents": "Applied. Thanks.\n\n\n> Okay, here's my attempt at fixing the problems with parentheses in\n> subqueries. It passes the normal 'runcheck' tests, and I've tried\n> a few simple things like \n> select 1 as foo union (((((select 2))))) order by foo;\n> \n> There are a few things that it doesn't do that have been talked \n> about here at least a little:\n> \n> 1) It doesn't allow things like \"IN(((select 1)))\" -- the select\n> here has to be at the top level. This is not new.\n> \n> 2) It does NOT preserve the odd syntax I found when I started looking\n> at this, where a SELECT statement could begin with parentheses. Thus,\n> (SELECT a from foo) order by a;\n> fails.\n> \n> I have preserved the ability, used in the regression tests, to\n> have a single select statement in what appears to be a RuleActionMulti\n> (but wasn't -- the parens were part of select_clause syntax).\n> In my version, this is a special form.\n> \n> This may cause some discussion: I have differentiated the two kinds\n> of RuleActionMulti. Perhaps nobody knew there were two kinds, because\n> I don't think the second form appears in the regression tests. This\n> one uses square brackets instead of parentheses, but originally was\n> otherwise the same as the one in parentheses. In this version of\n> gram.y, the square bracket form treats SELECT statements the same\n> as the other allowed statements. As discussed before on this list,\n> psql cannot make sense out of the results of such a thing, but an\n> application might. 
And I have designs on just such an application.\n> \n> ++ kevin\n> \n> \n> \n> -- \n> Kevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\n> Permanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\n> At school: mailto:kogorman@cs.ucsb.edu\n> Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> \n> \"There is a freedom lying beyond circumstance,\n> derived from the direct intuition that life can\n> be grounded upon its absorption in what is\n> changeless amid change\" \n> -- Alfred North Whitehead\n\n> --- gram.y.orig\tThu Oct 26 13:13:04 2000\n> +++ gram.y\tFri Oct 27 17:37:58 2000\n> @@ -124,14 +124,15 @@\n> \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> \t\tDropUserStmt, DropdbStmt, ExplainStmt, ExtendStmt, FetchStmt,\n> \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n> -\t\tNotifyStmt, OptimizableStmt, ProcedureStmt, ReindexStmt,\n> +\t\tNotifyStmt, OptimizableStmt, ProcedureStmt\n> +\t\tQualifiedSelectStmt, ReindexStmt,\n> \t\tRemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt, RemoveStmt,\n> \t\tRenameStmt, RevokeStmt, RuleActionStmt, RuleActionStmtOrEmpty,\n> \t\tRuleStmt, SelectStmt, SetSessionStmt, TransactionStmt, TruncateStmt,\n> \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n> \t\tVariableSetStmt, VariableShowStmt, ViewStmt\n> \n> -%type <node>\tselect_clause, select_subclause\n> +%type <node>\tsubquery, simple_select, select_head, set_select\n> \n> %type <list>\tSessionList\n> %type <node>\tSessionClause\n> @@ -174,19 +175,20 @@\n> \t\tresult, OptTempTableName, relation_name_list, OptTableElementList,\n> \t\tOptUnder, OptInherit, definition, opt_distinct,\n> \t\topt_with, func_args, func_args_list, func_as,\n> -\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> +\t\toper_argtypes, RuleActionList, RuleActionMulti, \n> +\t\tRuleActionOrSelectMulti, RuleActions, RuleActionBracket,\n> \t\topt_column_list, 
columnList, opt_va_list, va_list,\n> \t\tsort_clause, sortby_list, index_params, index_list, name_list,\n> \t\tfrom_clause, from_list, opt_array_bounds,\n> \t\texpr_list, attrs, target_list, update_target_list,\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> -\t\topt_select_limit\n> +\t\topt_select_limit, select_limit\n> \n> %type <typnam>\tfunc_arg, func_return, aggr_argtype\n> \n> %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> \n> -%type <list>\tfor_update_clause, update_list\n> +%type <list>\topt_for_update_clause, for_update_clause, update_list\n> %type <boolean>\topt_all\n> %type <boolean>\topt_table\n> %type <boolean>\topt_chain, opt_trans\n> @@ -2689,7 +2691,7 @@\n> RuleStmt: CREATE RULE name AS\n> \t\t { QueryIsRule=TRUE; }\n> \t\t ON event TO event_object where_clause\n> -\t\t DO opt_instead RuleActionList\n> +\t\t DO opt_instead RuleActions\n> \t\t\t\t{\n> \t\t\t\t\tRuleStmt *n = makeNode(RuleStmt);\n> \t\t\t\t\tn->rulename = $3;\n> @@ -2702,17 +2704,42 @@\n> \t\t\t\t}\n> \t\t;\n> \n> -RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n> -\t\t| SelectStmt\t\t\t\t\t{ $$ = makeList1($1); }\n> -\t\t| RuleActionStmt\t\t\t\t{ $$ = makeList1($1); }\n> -\t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n> -\t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> +RuleActions: NOTHING\t\t\t\t{ $$ = NIL; }\n> +\t\t| RuleActionStmt\t\t\t{ $$ = makeList1($1); }\n> +\t\t| SelectStmt\t\t\t\t{ $$ = makeList1($1); }\n> +\t\t| RuleActionList\n> +\t\t| RuleActionBracket\n> +\t\t;\n> +\n> +/* LEGACY: Version 7.0 did not like SELECT statements in these lists,\n> + * but because of an oddity in the syntax for select_clause, allowed\n> + * certain forms like \"DO INSTEAD (select 1)\", and this is used in\n> + * the regression tests.\n> + * Here, we're allowing just one SELECT in parentheses, to preserve\n> + * any such expectations, and make the regression tests work.\n> + * ++ KO'G\n> + */\n> +RuleActionList:\t\t'(' RuleActionMulti ')'\t\t{ $$ = $2; 
} \n> +\t\t| '(' SelectStmt ')'\t{ $$ = makeList1($2); }\n> +\t\t;\n> +\n> +/* An undocumented feature, bracketed lists are allowed to contain\n> + * SELECT statements on the same basis as the others. Before this,\n> + * they were the same as parenthesized lists, and did not allow\n> + * SelectStmts. Anybody know why they were here originally? Or if\n> + * they're in the regression tests at all?\n> + * ++ KO'G\n> + */\n> +RuleActionBracket:\t'[' RuleActionOrSelectMulti ']'\t\t{ $$ = $2; } \n> \t\t;\n> \n> /* the thrashing around here is to discard \"empty\" statements... */\n> RuleActionMulti: RuleActionMulti ';' RuleActionStmtOrEmpty\n> \t\t\t\t{ if ($3 != (Node *) NULL)\n> -\t\t\t\t\t$$ = lappend($1, $3);\n> +\t\t\t\t\tif ($1 != NIL)\n> +\t\t\t\t\t\t$$ = lappend($1, $3);\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\t$$ = makeList1($3);\n> \t\t\t\t else\n> \t\t\t\t\t$$ = $1;\n> \t\t\t\t}\n> @@ -2724,6 +2751,31 @@\n> \t\t\t\t}\n> \t\t;\n> \n> +RuleActionOrSelectMulti: RuleActionOrSelectMulti ';' RuleActionStmtOrEmpty\n> +\t\t\t\t{ if ($3 != (Node *) NULL)\n> +\t\t\t\t\tif ($1 != NIL)\n> +\t\t\t\t\t\t$$ = lappend($1, $3);\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\t$$ = makeList1($3);\n> +\t\t\t\t else\n> +\t\t\t\t\t$$ = $1;\n> +\t\t\t\t}\n> +\t\t| RuleActionOrSelectMulti ';' SelectStmt\n> +\t\t\t\t{ if ($1 != NIL)\n> +\t\t\t\t\t\t$$ = lappend($1, $3);\n> +\t\t\t\t else\n> +\t\t\t\t\t\t$$ = makeList1($3);\n> +\t\t\t\t}\n> +\t\t| RuleActionStmtOrEmpty\n> +\t\t\t\t{ if ($1 != (Node *) NULL)\n> +\t\t\t\t\t$$ = makeList1($1);\n> +\t\t\t\t else\n> +\t\t\t\t\t$$ = NIL;\n> +\t\t\t\t}\n> +\t\t| SelectStmt\t\t{ $$ = makeList1($1); }\n> +\t\t;\n> +\n> +\n> RuleActionStmt:\tInsertStmt\n> \t\t| UpdateStmt\n> \t\t| DeleteStmt\n> @@ -3289,7 +3341,12 @@\n> * However, this is not checked by the grammar; parse analysis must check it.\n> */\n> \n> -SelectStmt:\t select_clause sort_clause for_update_clause opt_select_limit\n> +SelectStmt:\tQualifiedSelectStmt\n> +\t\t| select_head\n> +\t\t;\n> +\n> 
+QualifiedSelectStmt:\n> +\t\t select_head sort_clause opt_for_update_clause opt_select_limit\n> \t\t\t{\n> \t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> \n> @@ -3299,34 +3356,35 @@\n> \t\t\t\tn->limitCount = nth(1, $4);\n> \t\t\t\t$$ = $1;\n> \t\t\t}\n> -\t\t;\n> -\n> -/* This rule parses Select statements that can appear within set operations,\n> - * including UNION, INTERSECT and EXCEPT. '(' and ')' can be used to specify\n> - * the ordering of the set operations. Without '(' and ')' we want the\n> - * operations to be ordered per the precedence specs at the head of this file.\n> - *\n> - * Since parentheses around SELECTs also appear in the expression grammar,\n> - * there is a parse ambiguity if parentheses are allowed at the top level of a\n> - * select_clause: are the parens part of the expression or part of the select?\n> - * We separate select_clause into two levels to resolve this: select_clause\n> - * can have top-level parentheses, select_subclause cannot.\n> - *\n> - * Note that sort clauses cannot be included at this level --- a sort clause\n> - * can only appear at the end of the complete Select, and it will be handled\n> - * by the topmost SelectStmt rule. 
Likewise FOR UPDATE and LIMIT.\n> - */\n> -select_clause: '(' select_subclause ')'\n> +\t\t| select_head for_update_clause opt_select_limit\n> \t\t\t{\n> -\t\t\t\t$$ = $2; \n> +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> +\n> +\t\t\t\tn->sortClause = NULL;\n> +\t\t\t\tn->forUpdate = $2;\n> +\t\t\t\tn->limitOffset = nth(0, $3);\n> +\t\t\t\tn->limitCount = nth(1, $3);\n> +\t\t\t\t$$ = $1;\n> \t\t\t}\n> -\t\t| select_subclause\n> +\t\t| select_head select_limit\n> \t\t\t{\n> -\t\t\t\t$$ = $1; \n> +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> +\n> +\t\t\t\tn->sortClause = NULL;\n> +\t\t\t\tn->forUpdate = NULL;\n> +\t\t\t\tn->limitOffset = nth(0, $2);\n> +\t\t\t\tn->limitCount = nth(1, $2);\n> +\t\t\t\t$$ = $1;\n> \t\t\t}\n> \t\t;\n> \n> -select_subclause: SELECT opt_distinct target_list\n> +subquery:\t'(' subquery ')'\t\t\t{ $$ = $2; }\n> +\t\t| '(' QualifiedSelectStmt ')'\t{ $$ = $2; }\n> +\t\t| '(' set_select ')'\t\t\t{ $$ = $2; }\n> +\t\t| simple_select\t\t\t\t\t{ $$ = $1; }\n> +\t\t;\n> +\n> +simple_select: SELECT opt_distinct target_list\n> \t\t\t result from_clause where_clause\n> \t\t\t group_clause having_clause\n> \t\t\t\t{\n> @@ -3341,7 +3399,13 @@\n> \t\t\t\t\tn->havingClause = $8;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t| select_clause UNION opt_all select_clause\n> +\t\t;\n> +\n> +select_head: simple_select\t\t\t{ $$ = $1; }\n> +\t\t|\tset_select\t\t\t\t{ $$ = $1; }\n> +\t\t;\n> +\n> +set_select: select_head UNION opt_all subquery\n> \t\t\t{\t\n> \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> \t\t\t\tn->op = SETOP_UNION;\n> @@ -3350,7 +3414,7 @@\n> \t\t\t\tn->rarg = $4;\n> \t\t\t\t$$ = (Node *) n;\n> \t\t\t}\n> -\t\t| select_clause INTERSECT opt_all select_clause\n> +\t\t| select_head INTERSECT opt_all subquery\n> \t\t\t{\n> \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> \t\t\t\tn->op = SETOP_INTERSECT;\n> @@ -3359,7 +3423,7 @@\n> \t\t\t\tn->rarg = $4;\n> \t\t\t\t$$ = (Node *) n;\n> \t\t\t}\n> -\t\t| 
select_clause EXCEPT opt_all select_clause\n> +\t\t| select_head EXCEPT opt_all subquery\n> \t\t\t{\n> \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> \t\t\t\tn->op = SETOP_EXCEPT;\n> @@ -3424,7 +3488,6 @@\n> \t\t;\n> \n> sort_clause: ORDER BY sortby_list\t\t\t\t{ $$ = $3; }\n> -\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n> \t\t;\n> \n> sortby_list: sortby\t\t\t\t\t\t\t{ $$ = makeList1($1); }\n> @@ -3446,7 +3509,7 @@\n> \t\t;\n> \n> \n> -opt_select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> +select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> \t\t\t{ $$ = makeList2($4, $2); }\n> \t\t| LIMIT select_limit_value OFFSET select_offset_value\n> \t\t\t{ $$ = makeList2($4, $2); }\n> @@ -3456,6 +3519,9 @@\n> \t\t\t{ $$ = makeList2($2, $4); }\n> \t\t| OFFSET select_offset_value\n> \t\t\t{ $$ = makeList2($2, NULL); }\n> +\t\t;\n> +\n> +opt_select_limit: select_limit\t{ $$ = $1; }\n> \t\t| /* EMPTY */\n> \t\t\t{ $$ = makeList2(NULL, NULL); }\n> \t\t;\n> @@ -3555,6 +3621,9 @@\n> \n> for_update_clause: FOR UPDATE update_list\t\t{ $$ = $3; }\n> \t\t| FOR READ ONLY\t\t\t\t\t\t\t{ $$ = NULL; }\n> +\t\t;\n> +\n> +opt_for_update_clause:\tfor_update_clause\t\t{ $$ = $1; }\n> \t\t| /* EMPTY */\t\t\t\t\t\t\t{ $$ = NULL; }\n> \t\t;\n> \n> @@ -3598,7 +3667,7 @@\n> \t\t\t\t\t$1->name = $2;\n> \t\t\t\t\t$$ = (Node *) $1;\n> \t\t\t\t}\n> -\t\t| '(' select_subclause ')' alias_clause\n> +\t\t| '(' SelectStmt ')' alias_clause\n> \t\t\t\t{\n> \t\t\t\t\tRangeSubselect *n = makeNode(RangeSubselect);\n> \t\t\t\t\tn->subquery = $2;\n> @@ -4134,7 +4203,7 @@\n> * Define row_descriptor to allow yacc to break the reduce/reduce conflict\n> * with singleton expressions.\n> */\n> -row_expr: '(' row_descriptor ')' IN '(' select_subclause ')'\n> +row_expr: '(' row_descriptor ')' IN '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = $2;\n> @@ -4144,7 +4213,7 @@\n> \t\t\t\t\tn->subselect = $6;\n> \t\t\t\t\t$$ = 
(Node *)n;\n> \t\t\t\t}\n> -\t\t| '(' row_descriptor ')' NOT IN '(' select_subclause ')'\n> +\t\t| '(' row_descriptor ')' NOT IN '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = $2;\n> @@ -4154,7 +4223,7 @@\n> \t\t\t\t\tn->subselect = $7;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t| '(' row_descriptor ')' all_Op sub_type '(' select_subclause ')'\n> +\t\t| '(' row_descriptor ')' all_Op sub_type '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = $2;\n> @@ -4167,7 +4236,7 @@\n> \t\t\t\t\tn->subselect = $7;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t| '(' row_descriptor ')' all_Op '(' select_subclause ')'\n> +\t\t| '(' row_descriptor ')' all_Op '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = $2;\n> @@ -4498,7 +4567,7 @@\n> \t\t\t\t\t\t$$ = n;\n> \t\t\t\t\t}\n> \t\t\t\t}\n> -\t\t| a_expr all_Op sub_type '(' select_subclause ')'\n> +\t\t| a_expr all_Op sub_type '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = makeList1($1);\n> @@ -4894,7 +4963,7 @@\n> \t\t\t\t\tn->agg_distinct = FALSE;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t| '(' select_subclause ')'\n> +\t\t| '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = NIL;\n> @@ -4904,7 +4973,7 @@\n> \t\t\t\t\tn->subselect = $2;\n> \t\t\t\t\t$$ = (Node *)n;\n> \t\t\t\t}\n> -\t\t| EXISTS '(' select_subclause ')'\n> +\t\t| EXISTS '(' SelectStmt ')'\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->lefthand = NIL;\n> @@ -5003,7 +5072,7 @@\n> \t\t\t\t{ $$ = $1; }\n> \t\t;\n> \n> -in_expr: select_subclause\n> +in_expr: SelectStmt\n> \t\t\t\t{\n> \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> \t\t\t\t\tn->subselect = $1;\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If 
your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 11:44:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling."
},
{
"msg_contents": "Err, with Tom's objections, why was this applied?\n* Bruce Momjian <pgman@candle.pha.pa.us> [001028 11:34]:\n> Applied. Thanks.\n> \n> \n> > Okay, here's my attempt at fixing the problems with parentheses in\n> > subqueries. It passes the normal 'runcheck' tests, and I've tried\n> > a few simple things like \n> > select 1 as foo union (((((select 2))))) order by foo;\n> > \n> > There are a few things that it doesn't do that have been talked \n> > about here at least a little:\n> > \n> > 1) It doesn't allow things like \"IN(((select 1)))\" -- the select\n> > here has to be at the top level. This is not new.\n> > \n> > 2) It does NOT preserve the odd syntax I found when I started looking\n> > at this, where a SELECT statement could begin with parentheses. Thus,\n> > (SELECT a from foo) order by a;\n> > fails.\n> > \n> > I have preserved the ability, used in the regression tests, to\n> > have a single select statement in what appears to be a RuleActionMulti\n> > (but wasn't -- the parens were part of select_clause syntax).\n> > In my version, this is a special form.\n> > \n> > This may cause some discussion: I have differentiated the two kinds\n> > of RuleActionMulti. Perhaps nobody knew there were two kinds, because\n> > I don't think the second form appears in the regression tests. This\n> > one uses square brackets instead of parentheses, but originally was\n> > otherwise the same as the one in parentheses. In this version of\n> > gram.y, the square bracket form treats SELECT statements the same\n> > as the other allowed statements. As discussed before on this list,\n> > psql cannot make sense out of the results of such a thing, but an\n> > application might. 
And I have designs on just such an application.\n> > \n> > ++ kevin\n> > \n> > \n> > \n> > -- \n> > Kevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\n> > Permanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\n> > At school: mailto:kogorman@cs.ucsb.edu\n> > Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> > Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> > \n> > \"There is a freedom lying beyond circumstance,\n> > derived from the direct intuition that life can\n> > be grounded upon its absorption in what is\n> > changeless amid change\" \n> > -- Alfred North Whitehead\n> \n> > --- gram.y.orig\tThu Oct 26 13:13:04 2000\n> > +++ gram.y\tFri Oct 27 17:37:58 2000\n> > @@ -124,14 +124,15 @@\n> > \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> > \t\tDropUserStmt, DropdbStmt, ExplainStmt, ExtendStmt, FetchStmt,\n> > \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n> > -\t\tNotifyStmt, OptimizableStmt, ProcedureStmt, ReindexStmt,\n> > +\t\tNotifyStmt, OptimizableStmt, ProcedureStmt\n> > +\t\tQualifiedSelectStmt, ReindexStmt,\n> > \t\tRemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt, RemoveStmt,\n> > \t\tRenameStmt, RevokeStmt, RuleActionStmt, RuleActionStmtOrEmpty,\n> > \t\tRuleStmt, SelectStmt, SetSessionStmt, TransactionStmt, TruncateStmt,\n> > \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n> > \t\tVariableSetStmt, VariableShowStmt, ViewStmt\n> > \n> > -%type <node>\tselect_clause, select_subclause\n> > +%type <node>\tsubquery, simple_select, select_head, set_select\n> > \n> > %type <list>\tSessionList\n> > %type <node>\tSessionClause\n> > @@ -174,19 +175,20 @@\n> > \t\tresult, OptTempTableName, relation_name_list, OptTableElementList,\n> > \t\tOptUnder, OptInherit, definition, opt_distinct,\n> > \t\topt_with, func_args, func_args_list, func_as,\n> > -\t\toper_argtypes, RuleActionList, RuleActionMulti,\n> > +\t\toper_argtypes, RuleActionList, RuleActionMulti, \n> > 
+\t\tRuleActionOrSelectMulti, RuleActions, RuleActionBracket,\n> > \t\topt_column_list, columnList, opt_va_list, va_list,\n> > \t\tsort_clause, sortby_list, index_params, index_list, name_list,\n> > \t\tfrom_clause, from_list, opt_array_bounds,\n> > \t\texpr_list, attrs, target_list, update_target_list,\n> > \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> > -\t\topt_select_limit\n> > +\t\topt_select_limit, select_limit\n> > \n> > %type <typnam>\tfunc_arg, func_return, aggr_argtype\n> > \n> > %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> > \n> > -%type <list>\tfor_update_clause, update_list\n> > +%type <list>\topt_for_update_clause, for_update_clause, update_list\n> > %type <boolean>\topt_all\n> > %type <boolean>\topt_table\n> > %type <boolean>\topt_chain, opt_trans\n> > @@ -2689,7 +2691,7 @@\n> > RuleStmt: CREATE RULE name AS\n> > \t\t { QueryIsRule=TRUE; }\n> > \t\t ON event TO event_object where_clause\n> > -\t\t DO opt_instead RuleActionList\n> > +\t\t DO opt_instead RuleActions\n> > \t\t\t\t{\n> > \t\t\t\t\tRuleStmt *n = makeNode(RuleStmt);\n> > \t\t\t\t\tn->rulename = $3;\n> > @@ -2702,17 +2704,42 @@\n> > \t\t\t\t}\n> > \t\t;\n> > \n> > -RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n> > -\t\t| SelectStmt\t\t\t\t\t{ $$ = makeList1($1); }\n> > -\t\t| RuleActionStmt\t\t\t\t{ $$ = makeList1($1); }\n> > -\t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n> > -\t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> > +RuleActions: NOTHING\t\t\t\t{ $$ = NIL; }\n> > +\t\t| RuleActionStmt\t\t\t{ $$ = makeList1($1); }\n> > +\t\t| SelectStmt\t\t\t\t{ $$ = makeList1($1); }\n> > +\t\t| RuleActionList\n> > +\t\t| RuleActionBracket\n> > +\t\t;\n> > +\n> > +/* LEGACY: Version 7.0 did not like SELECT statements in these lists,\n> > + * but because of an oddity in the syntax for select_clause, allowed\n> > + * certain forms like \"DO INSTEAD (select 1)\", and this is used in\n> > + * the regression tests.\n> > + * Here, we're allowing just one 
SELECT in parentheses, to preserve\n> > + * any such expectations, and make the regression tests work.\n> > + * ++ KO'G\n> > + */\n> > +RuleActionList:\t\t'(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> > +\t\t| '(' SelectStmt ')'\t{ $$ = makeList1($2); }\n> > +\t\t;\n> > +\n> > +/* An undocumented feature, bracketed lists are allowed to contain\n> > + * SELECT statements on the same basis as the others. Before this,\n> > + * they were the same as parenthesized lists, and did not allow\n> > + * SelectStmts. Anybody know why they were here originally? Or if\n> > + * they're in the regression tests at all?\n> > + * ++ KO'G\n> > + */\n> > +RuleActionBracket:\t'[' RuleActionOrSelectMulti ']'\t\t{ $$ = $2; } \n> > \t\t;\n> > \n> > /* the thrashing around here is to discard \"empty\" statements... */\n> > RuleActionMulti: RuleActionMulti ';' RuleActionStmtOrEmpty\n> > \t\t\t\t{ if ($3 != (Node *) NULL)\n> > -\t\t\t\t\t$$ = lappend($1, $3);\n> > +\t\t\t\t\tif ($1 != NIL)\n> > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > +\t\t\t\t\telse\n> > +\t\t\t\t\t\t$$ = makeList1($3);\n> > \t\t\t\t else\n> > \t\t\t\t\t$$ = $1;\n> > \t\t\t\t}\n> > @@ -2724,6 +2751,31 @@\n> > \t\t\t\t}\n> > \t\t;\n> > \n> > +RuleActionOrSelectMulti: RuleActionOrSelectMulti ';' RuleActionStmtOrEmpty\n> > +\t\t\t\t{ if ($3 != (Node *) NULL)\n> > +\t\t\t\t\tif ($1 != NIL)\n> > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > +\t\t\t\t\telse\n> > +\t\t\t\t\t\t$$ = makeList1($3);\n> > +\t\t\t\t else\n> > +\t\t\t\t\t$$ = $1;\n> > +\t\t\t\t}\n> > +\t\t| RuleActionOrSelectMulti ';' SelectStmt\n> > +\t\t\t\t{ if ($1 != NIL)\n> > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > +\t\t\t\t else\n> > +\t\t\t\t\t\t$$ = makeList1($3);\n> > +\t\t\t\t}\n> > +\t\t| RuleActionStmtOrEmpty\n> > +\t\t\t\t{ if ($1 != (Node *) NULL)\n> > +\t\t\t\t\t$$ = makeList1($1);\n> > +\t\t\t\t else\n> > +\t\t\t\t\t$$ = NIL;\n> > +\t\t\t\t}\n> > +\t\t| SelectStmt\t\t{ $$ = makeList1($1); }\n> > +\t\t;\n> > +\n> > +\n> > RuleActionStmt:\tInsertStmt\n> > \t\t| 
UpdateStmt\n> > \t\t| DeleteStmt\n> > @@ -3289,7 +3341,12 @@\n> > * However, this is not checked by the grammar; parse analysis must check it.\n> > */\n> > \n> > -SelectStmt:\t select_clause sort_clause for_update_clause opt_select_limit\n> > +SelectStmt:\tQualifiedSelectStmt\n> > +\t\t| select_head\n> > +\t\t;\n> > +\n> > +QualifiedSelectStmt:\n> > +\t\t select_head sort_clause opt_for_update_clause opt_select_limit\n> > \t\t\t{\n> > \t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > \n> > @@ -3299,34 +3356,35 @@\n> > \t\t\t\tn->limitCount = nth(1, $4);\n> > \t\t\t\t$$ = $1;\n> > \t\t\t}\n> > -\t\t;\n> > -\n> > -/* This rule parses Select statements that can appear within set operations,\n> > - * including UNION, INTERSECT and EXCEPT. '(' and ')' can be used to specify\n> > - * the ordering of the set operations. Without '(' and ')' we want the\n> > - * operations to be ordered per the precedence specs at the head of this file.\n> > - *\n> > - * Since parentheses around SELECTs also appear in the expression grammar,\n> > - * there is a parse ambiguity if parentheses are allowed at the top level of a\n> > - * select_clause: are the parens part of the expression or part of the select?\n> > - * We separate select_clause into two levels to resolve this: select_clause\n> > - * can have top-level parentheses, select_subclause cannot.\n> > - *\n> > - * Note that sort clauses cannot be included at this level --- a sort clause\n> > - * can only appear at the end of the complete Select, and it will be handled\n> > - * by the topmost SelectStmt rule. 
Likewise FOR UPDATE and LIMIT.\n> > - */\n> > -select_clause: '(' select_subclause ')'\n> > +\t\t| select_head for_update_clause opt_select_limit\n> > \t\t\t{\n> > -\t\t\t\t$$ = $2; \n> > +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > +\n> > +\t\t\t\tn->sortClause = NULL;\n> > +\t\t\t\tn->forUpdate = $2;\n> > +\t\t\t\tn->limitOffset = nth(0, $3);\n> > +\t\t\t\tn->limitCount = nth(1, $3);\n> > +\t\t\t\t$$ = $1;\n> > \t\t\t}\n> > -\t\t| select_subclause\n> > +\t\t| select_head select_limit\n> > \t\t\t{\n> > -\t\t\t\t$$ = $1; \n> > +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > +\n> > +\t\t\t\tn->sortClause = NULL;\n> > +\t\t\t\tn->forUpdate = NULL;\n> > +\t\t\t\tn->limitOffset = nth(0, $2);\n> > +\t\t\t\tn->limitCount = nth(1, $2);\n> > +\t\t\t\t$$ = $1;\n> > \t\t\t}\n> > \t\t;\n> > \n> > -select_subclause: SELECT opt_distinct target_list\n> > +subquery:\t'(' subquery ')'\t\t\t{ $$ = $2; }\n> > +\t\t| '(' QualifiedSelectStmt ')'\t{ $$ = $2; }\n> > +\t\t| '(' set_select ')'\t\t\t{ $$ = $2; }\n> > +\t\t| simple_select\t\t\t\t\t{ $$ = $1; }\n> > +\t\t;\n> > +\n> > +simple_select: SELECT opt_distinct target_list\n> > \t\t\t result from_clause where_clause\n> > \t\t\t group_clause having_clause\n> > \t\t\t\t{\n> > @@ -3341,7 +3399,13 @@\n> > \t\t\t\t\tn->havingClause = $8;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| select_clause UNION opt_all select_clause\n> > +\t\t;\n> > +\n> > +select_head: simple_select\t\t\t{ $$ = $1; }\n> > +\t\t|\tset_select\t\t\t\t{ $$ = $1; }\n> > +\t\t;\n> > +\n> > +set_select: select_head UNION opt_all subquery\n> > \t\t\t{\t\n> > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > \t\t\t\tn->op = SETOP_UNION;\n> > @@ -3350,7 +3414,7 @@\n> > \t\t\t\tn->rarg = $4;\n> > \t\t\t\t$$ = (Node *) n;\n> > \t\t\t}\n> > -\t\t| select_clause INTERSECT opt_all select_clause\n> > +\t\t| select_head INTERSECT opt_all subquery\n> > \t\t\t{\n> > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > \t\t\t\tn->op 
= SETOP_INTERSECT;\n> > @@ -3359,7 +3423,7 @@\n> > \t\t\t\tn->rarg = $4;\n> > \t\t\t\t$$ = (Node *) n;\n> > \t\t\t}\n> > -\t\t| select_clause EXCEPT opt_all select_clause\n> > +\t\t| select_head EXCEPT opt_all subquery\n> > \t\t\t{\n> > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > \t\t\t\tn->op = SETOP_EXCEPT;\n> > @@ -3424,7 +3488,6 @@\n> > \t\t;\n> > \n> > sort_clause: ORDER BY sortby_list\t\t\t\t{ $$ = $3; }\n> > -\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n> > \t\t;\n> > \n> > sortby_list: sortby\t\t\t\t\t\t\t{ $$ = makeList1($1); }\n> > @@ -3446,7 +3509,7 @@\n> > \t\t;\n> > \n> > \n> > -opt_select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> > +select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> > \t\t\t{ $$ = makeList2($4, $2); }\n> > \t\t| LIMIT select_limit_value OFFSET select_offset_value\n> > \t\t\t{ $$ = makeList2($4, $2); }\n> > @@ -3456,6 +3519,9 @@\n> > \t\t\t{ $$ = makeList2($2, $4); }\n> > \t\t| OFFSET select_offset_value\n> > \t\t\t{ $$ = makeList2($2, NULL); }\n> > +\t\t;\n> > +\n> > +opt_select_limit: select_limit\t{ $$ = $1; }\n> > \t\t| /* EMPTY */\n> > \t\t\t{ $$ = makeList2(NULL, NULL); }\n> > \t\t;\n> > @@ -3555,6 +3621,9 @@\n> > \n> > for_update_clause: FOR UPDATE update_list\t\t{ $$ = $3; }\n> > \t\t| FOR READ ONLY\t\t\t\t\t\t\t{ $$ = NULL; }\n> > +\t\t;\n> > +\n> > +opt_for_update_clause:\tfor_update_clause\t\t{ $$ = $1; }\n> > \t\t| /* EMPTY */\t\t\t\t\t\t\t{ $$ = NULL; }\n> > \t\t;\n> > \n> > @@ -3598,7 +3667,7 @@\n> > \t\t\t\t\t$1->name = $2;\n> > \t\t\t\t\t$$ = (Node *) $1;\n> > \t\t\t\t}\n> > -\t\t| '(' select_subclause ')' alias_clause\n> > +\t\t| '(' SelectStmt ')' alias_clause\n> > \t\t\t\t{\n> > \t\t\t\t\tRangeSubselect *n = makeNode(RangeSubselect);\n> > \t\t\t\t\tn->subquery = $2;\n> > @@ -4134,7 +4203,7 @@\n> > * Define row_descriptor to allow yacc to break the reduce/reduce conflict\n> > * with singleton expressions.\n> > */\n> > -row_expr: '(' row_descriptor ')' IN '(' 
select_subclause ')'\n> > +row_expr: '(' row_descriptor ')' IN '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = $2;\n> > @@ -4144,7 +4213,7 @@\n> > \t\t\t\t\tn->subselect = $6;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| '(' row_descriptor ')' NOT IN '(' select_subclause ')'\n> > +\t\t| '(' row_descriptor ')' NOT IN '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = $2;\n> > @@ -4154,7 +4223,7 @@\n> > \t\t\t\t\tn->subselect = $7;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| '(' row_descriptor ')' all_Op sub_type '(' select_subclause ')'\n> > +\t\t| '(' row_descriptor ')' all_Op sub_type '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = $2;\n> > @@ -4167,7 +4236,7 @@\n> > \t\t\t\t\tn->subselect = $7;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| '(' row_descriptor ')' all_Op '(' select_subclause ')'\n> > +\t\t| '(' row_descriptor ')' all_Op '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = $2;\n> > @@ -4498,7 +4567,7 @@\n> > \t\t\t\t\t\t$$ = n;\n> > \t\t\t\t\t}\n> > \t\t\t\t}\n> > -\t\t| a_expr all_Op sub_type '(' select_subclause ')'\n> > +\t\t| a_expr all_Op sub_type '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = makeList1($1);\n> > @@ -4894,7 +4963,7 @@\n> > \t\t\t\t\tn->agg_distinct = FALSE;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| '(' select_subclause ')'\n> > +\t\t| '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = NIL;\n> > @@ -4904,7 +4973,7 @@\n> > \t\t\t\t\tn->subselect = $2;\n> > \t\t\t\t\t$$ = (Node *)n;\n> > \t\t\t\t}\n> > -\t\t| EXISTS '(' select_subclause ')'\n> > +\t\t| EXISTS '(' SelectStmt ')'\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = 
makeNode(SubLink);\n> > \t\t\t\t\tn->lefthand = NIL;\n> > @@ -5003,7 +5072,7 @@\n> > \t\t\t\t{ $$ = $1; }\n> > \t\t;\n> > \n> > -in_expr: select_subclause\n> > +in_expr: SelectStmt\n> > \t\t\t\t{\n> > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > \t\t\t\t\tn->subselect = $1;\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 11:38:53 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Err, with Tom's objections, why was this applied?\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [001028 11:34]:\n>> Applied. Thanks.\n\nItchy trigger finger today, Bruce?\n\nPlease revert the change --- I'm still discussing it with Kevin offlist,\nbut I don't feel it's acceptable as-is because it breaks reasonable\n(non-redundant) UNION/INTERSECT/EXCEPT constructs that have worked for\nseveral releases past.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Oct 2000 12:41:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling. "
},
{
"msg_contents": "On Sat, 28 Oct 2000, Larry Rosenman wrote:\n\n> Err, with Tom's objections, why was this applied?\n\nwas going to ask this too ... someone going patch-happy again? :)\n\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [001028 11:34]:\n> > Applied. Thanks.\n> > \n> > \n> > > Okay, here's my attempt at fixing the problems with parentheses in\n> > > subqueries. It passes the normal 'runcheck' tests, and I've tried\n> > > a few simple things like \n> > > select 1 as foo union (((((select 2))))) order by foo;\n> > > \n> > > There are a few things that it doesn't do that have been talked \n> > > about here at least a little:\n> > > \n> > > 1) It doesn't allow things like \"IN(((select 1)))\" -- the select\n> > > here has to be at the top level. This is not new.\n> > > \n> > > 2) It does NOT preserve the odd syntax I found when I started looking\n> > > at this, where a SELECT statement could begin with parentheses. Thus,\n> > > (SELECT a from foo) order by a;\n> > > fails.\n> > > \n> > > I have preserved the ability, used in the regression tests, to\n> > > have a single select statement in what appears to be a RuleActionMulti\n> > > (but wasn't -- the parens were part of select_clause syntax).\n> > > In my version, this is a special form.\n> > > \n> > > This may cause some discussion: I have differentiated the two kinds\n> > > of RuleActionMulti. Perhaps nobody knew there were two kinds, because\n> > > I don't think the second form appears in the regression tests. This\n> > > one uses square brackets instead of parentheses, but originally was\n> > > otherwise the same as the one in parentheses. In this version of\n> > > gram.y, the square bracket form treats SELECT statements the same\n> > > as the other allowed statements. As discussed before on this list,\n> > > psql cannot make sense out of the results of such a thing, but an\n> > > application might. 
And I have designs on just such an application.\n> > > \n> > > ++ kevin\n> > > \n> > > \n> > > \n> > > -- \n> > > Kevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\n> > > Permanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\n> > > At school: mailto:kogorman@cs.ucsb.edu\n> > > Web: http://www.cs.ucsb.edu/~kogorman/index.html\n> > > Web: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n> > > \n> > > \"There is a freedom lying beyond circumstance,\n> > > derived from the direct intuition that life can\n> > > be grounded upon its absorption in what is\n> > > changeless amid change\" \n> > > -- Alfred North Whitehead\n> > \n> > > --- gram.y.orig\tThu Oct 26 13:13:04 2000\n> > > +++ gram.y\tFri Oct 27 17:37:58 2000\n> > > @@ -124,14 +124,15 @@\n> > > \t\tDropGroupStmt, DropPLangStmt, DropSchemaStmt, DropStmt, DropTrigStmt,\n> > > \t\tDropUserStmt, DropdbStmt, ExplainStmt, ExtendStmt, FetchStmt,\n> > > \t\tGrantStmt, IndexStmt, InsertStmt, ListenStmt, LoadStmt, LockStmt,\n> > > -\t\tNotifyStmt, OptimizableStmt, ProcedureStmt, ReindexStmt,\n> > > +\t\tNotifyStmt, OptimizableStmt, ProcedureStmt\n> > > +\t\tQualifiedSelectStmt, ReindexStmt,\n> > > \t\tRemoveAggrStmt, RemoveFuncStmt, RemoveOperStmt, RemoveStmt,\n> > > \t\tRenameStmt, RevokeStmt, RuleActionStmt, RuleActionStmtOrEmpty,\n> > > \t\tRuleStmt, SelectStmt, SetSessionStmt, TransactionStmt, TruncateStmt,\n> > > \t\tUnlistenStmt, UpdateStmt, VacuumStmt, VariableResetStmt,\n> > > \t\tVariableSetStmt, VariableShowStmt, ViewStmt\n> > > \n> > > -%type <node>\tselect_clause, select_subclause\n> > > +%type <node>\tsubquery, simple_select, select_head, set_select\n> > > \n> > > %type <list>\tSessionList\n> > > %type <node>\tSessionClause\n> > > @@ -174,19 +175,20 @@\n> > > \t\tresult, OptTempTableName, relation_name_list, OptTableElementList,\n> > > \t\tOptUnder, OptInherit, definition, opt_distinct,\n> > > \t\topt_with, func_args, func_args_list, func_as,\n> > > -\t\toper_argtypes, 
RuleActionList, RuleActionMulti,\n> > > +\t\toper_argtypes, RuleActionList, RuleActionMulti, \n> > > +\t\tRuleActionOrSelectMulti, RuleActions, RuleActionBracket,\n> > > \t\topt_column_list, columnList, opt_va_list, va_list,\n> > > \t\tsort_clause, sortby_list, index_params, index_list, name_list,\n> > > \t\tfrom_clause, from_list, opt_array_bounds,\n> > > \t\texpr_list, attrs, target_list, update_target_list,\n> > > \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> > > -\t\topt_select_limit\n> > > +\t\topt_select_limit, select_limit\n> > > \n> > > %type <typnam>\tfunc_arg, func_return, aggr_argtype\n> > > \n> > > %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> > > \n> > > -%type <list>\tfor_update_clause, update_list\n> > > +%type <list>\topt_for_update_clause, for_update_clause, update_list\n> > > %type <boolean>\topt_all\n> > > %type <boolean>\topt_table\n> > > %type <boolean>\topt_chain, opt_trans\n> > > @@ -2689,7 +2691,7 @@\n> > > RuleStmt: CREATE RULE name AS\n> > > \t\t { QueryIsRule=TRUE; }\n> > > \t\t ON event TO event_object where_clause\n> > > -\t\t DO opt_instead RuleActionList\n> > > +\t\t DO opt_instead RuleActions\n> > > \t\t\t\t{\n> > > \t\t\t\t\tRuleStmt *n = makeNode(RuleStmt);\n> > > \t\t\t\t\tn->rulename = $3;\n> > > @@ -2702,17 +2704,42 @@\n> > > \t\t\t\t}\n> > > \t\t;\n> > > \n> > > -RuleActionList: NOTHING\t\t\t\t{ $$ = NIL; }\n> > > -\t\t| SelectStmt\t\t\t\t\t{ $$ = makeList1($1); }\n> > > -\t\t| RuleActionStmt\t\t\t\t{ $$ = makeList1($1); }\n> > > -\t\t| '[' RuleActionMulti ']'\t\t{ $$ = $2; }\n> > > -\t\t| '(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> > > +RuleActions: NOTHING\t\t\t\t{ $$ = NIL; }\n> > > +\t\t| RuleActionStmt\t\t\t{ $$ = makeList1($1); }\n> > > +\t\t| SelectStmt\t\t\t\t{ $$ = makeList1($1); }\n> > > +\t\t| RuleActionList\n> > > +\t\t| RuleActionBracket\n> > > +\t\t;\n> > > +\n> > > +/* LEGACY: Version 7.0 did not like SELECT statements in these lists,\n> > > + * but because of an oddity in 
the syntax for select_clause, allowed\n> > > + * certain forms like \"DO INSTEAD (select 1)\", and this is used in\n> > > + * the regression tests.\n> > > + * Here, we're allowing just one SELECT in parentheses, to preserve\n> > > + * any such expectations, and make the regression tests work.\n> > > + * ++ KO'G\n> > > + */\n> > > +RuleActionList:\t\t'(' RuleActionMulti ')'\t\t{ $$ = $2; } \n> > > +\t\t| '(' SelectStmt ')'\t{ $$ = makeList1($2); }\n> > > +\t\t;\n> > > +\n> > > +/* An undocumented feature, bracketed lists are allowed to contain\n> > > + * SELECT statements on the same basis as the others. Before this,\n> > > + * they were the same as parenthesized lists, and did not allow\n> > > + * SelectStmts. Anybody know why they were here originally? Or if\n> > > + * they're in the regression tests at all?\n> > > + * ++ KO'G\n> > > + */\n> > > +RuleActionBracket:\t'[' RuleActionOrSelectMulti ']'\t\t{ $$ = $2; } \n> > > \t\t;\n> > > \n> > > /* the thrashing around here is to discard \"empty\" statements... 
*/\n> > > RuleActionMulti: RuleActionMulti ';' RuleActionStmtOrEmpty\n> > > \t\t\t\t{ if ($3 != (Node *) NULL)\n> > > -\t\t\t\t\t$$ = lappend($1, $3);\n> > > +\t\t\t\t\tif ($1 != NIL)\n> > > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > > +\t\t\t\t\telse\n> > > +\t\t\t\t\t\t$$ = makeList1($3);\n> > > \t\t\t\t else\n> > > \t\t\t\t\t$$ = $1;\n> > > \t\t\t\t}\n> > > @@ -2724,6 +2751,31 @@\n> > > \t\t\t\t}\n> > > \t\t;\n> > > \n> > > +RuleActionOrSelectMulti: RuleActionOrSelectMulti ';' RuleActionStmtOrEmpty\n> > > +\t\t\t\t{ if ($3 != (Node *) NULL)\n> > > +\t\t\t\t\tif ($1 != NIL)\n> > > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > > +\t\t\t\t\telse\n> > > +\t\t\t\t\t\t$$ = makeList1($3);\n> > > +\t\t\t\t else\n> > > +\t\t\t\t\t$$ = $1;\n> > > +\t\t\t\t}\n> > > +\t\t| RuleActionOrSelectMulti ';' SelectStmt\n> > > +\t\t\t\t{ if ($1 != NIL)\n> > > +\t\t\t\t\t\t$$ = lappend($1, $3);\n> > > +\t\t\t\t else\n> > > +\t\t\t\t\t\t$$ = makeList1($3);\n> > > +\t\t\t\t}\n> > > +\t\t| RuleActionStmtOrEmpty\n> > > +\t\t\t\t{ if ($1 != (Node *) NULL)\n> > > +\t\t\t\t\t$$ = makeList1($1);\n> > > +\t\t\t\t else\n> > > +\t\t\t\t\t$$ = NIL;\n> > > +\t\t\t\t}\n> > > +\t\t| SelectStmt\t\t{ $$ = makeList1($1); }\n> > > +\t\t;\n> > > +\n> > > +\n> > > RuleActionStmt:\tInsertStmt\n> > > \t\t| UpdateStmt\n> > > \t\t| DeleteStmt\n> > > @@ -3289,7 +3341,12 @@\n> > > * However, this is not checked by the grammar; parse analysis must check it.\n> > > */\n> > > \n> > > -SelectStmt:\t select_clause sort_clause for_update_clause opt_select_limit\n> > > +SelectStmt:\tQualifiedSelectStmt\n> > > +\t\t| select_head\n> > > +\t\t;\n> > > +\n> > > +QualifiedSelectStmt:\n> > > +\t\t select_head sort_clause opt_for_update_clause opt_select_limit\n> > > \t\t\t{\n> > > \t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > > \n> > > @@ -3299,34 +3356,35 @@\n> > > \t\t\t\tn->limitCount = nth(1, $4);\n> > > \t\t\t\t$$ = $1;\n> > > \t\t\t}\n> > > -\t\t;\n> > > -\n> > > -/* This rule parses Select statements that can appear 
within set operations,\n> > > - * including UNION, INTERSECT and EXCEPT. '(' and ')' can be used to specify\n> > > - * the ordering of the set operations. Without '(' and ')' we want the\n> > > - * operations to be ordered per the precedence specs at the head of this file.\n> > > - *\n> > > - * Since parentheses around SELECTs also appear in the expression grammar,\n> > > - * there is a parse ambiguity if parentheses are allowed at the top level of a\n> > > - * select_clause: are the parens part of the expression or part of the select?\n> > > - * We separate select_clause into two levels to resolve this: select_clause\n> > > - * can have top-level parentheses, select_subclause cannot.\n> > > - *\n> > > - * Note that sort clauses cannot be included at this level --- a sort clause\n> > > - * can only appear at the end of the complete Select, and it will be handled\n> > > - * by the topmost SelectStmt rule. Likewise FOR UPDATE and LIMIT.\n> > > - */\n> > > -select_clause: '(' select_subclause ')'\n> > > +\t\t| select_head for_update_clause opt_select_limit\n> > > \t\t\t{\n> > > -\t\t\t\t$$ = $2; \n> > > +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > > +\n> > > +\t\t\t\tn->sortClause = NULL;\n> > > +\t\t\t\tn->forUpdate = $2;\n> > > +\t\t\t\tn->limitOffset = nth(0, $3);\n> > > +\t\t\t\tn->limitCount = nth(1, $3);\n> > > +\t\t\t\t$$ = $1;\n> > > \t\t\t}\n> > > -\t\t| select_subclause\n> > > +\t\t| select_head select_limit\n> > > \t\t\t{\n> > > -\t\t\t\t$$ = $1; \n> > > +\t\t\t\tSelectStmt *n = findLeftmostSelect($1);\n> > > +\n> > > +\t\t\t\tn->sortClause = NULL;\n> > > +\t\t\t\tn->forUpdate = NULL;\n> > > +\t\t\t\tn->limitOffset = nth(0, $2);\n> > > +\t\t\t\tn->limitCount = nth(1, $2);\n> > > +\t\t\t\t$$ = $1;\n> > > \t\t\t}\n> > > \t\t;\n> > > \n> > > -select_subclause: SELECT opt_distinct target_list\n> > > +subquery:\t'(' subquery ')'\t\t\t{ $$ = $2; }\n> > > +\t\t| '(' QualifiedSelectStmt ')'\t{ $$ = $2; }\n> > > +\t\t| '(' set_select ')'\t\t\t{ $$ = $2; 
}\n> > > +\t\t| simple_select\t\t\t\t\t{ $$ = $1; }\n> > > +\t\t;\n> > > +\n> > > +simple_select: SELECT opt_distinct target_list\n> > > \t\t\t result from_clause where_clause\n> > > \t\t\t group_clause having_clause\n> > > \t\t\t\t{\n> > > @@ -3341,7 +3399,13 @@\n> > > \t\t\t\t\tn->havingClause = $8;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| select_clause UNION opt_all select_clause\n> > > +\t\t;\n> > > +\n> > > +select_head: simple_select\t\t\t{ $$ = $1; }\n> > > +\t\t|\tset_select\t\t\t\t{ $$ = $1; }\n> > > +\t\t;\n> > > +\n> > > +set_select: select_head UNION opt_all subquery\n> > > \t\t\t{\t\n> > > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > > \t\t\t\tn->op = SETOP_UNION;\n> > > @@ -3350,7 +3414,7 @@\n> > > \t\t\t\tn->rarg = $4;\n> > > \t\t\t\t$$ = (Node *) n;\n> > > \t\t\t}\n> > > -\t\t| select_clause INTERSECT opt_all select_clause\n> > > +\t\t| select_head INTERSECT opt_all subquery\n> > > \t\t\t{\n> > > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > > \t\t\t\tn->op = SETOP_INTERSECT;\n> > > @@ -3359,7 +3423,7 @@\n> > > \t\t\t\tn->rarg = $4;\n> > > \t\t\t\t$$ = (Node *) n;\n> > > \t\t\t}\n> > > -\t\t| select_clause EXCEPT opt_all select_clause\n> > > +\t\t| select_head EXCEPT opt_all subquery\n> > > \t\t\t{\n> > > \t\t\t\tSetOperationStmt *n = makeNode(SetOperationStmt);\n> > > \t\t\t\tn->op = SETOP_EXCEPT;\n> > > @@ -3424,7 +3488,6 @@\n> > > \t\t;\n> > > \n> > > sort_clause: ORDER BY sortby_list\t\t\t\t{ $$ = $3; }\n> > > -\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n> > > \t\t;\n> > > \n> > > sortby_list: sortby\t\t\t\t\t\t\t{ $$ = makeList1($1); }\n> > > @@ -3446,7 +3509,7 @@\n> > > \t\t;\n> > > \n> > > \n> > > -opt_select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> > > +select_limit:\tLIMIT select_limit_value ',' select_offset_value\n> > > \t\t\t{ $$ = makeList2($4, $2); }\n> > > \t\t| LIMIT select_limit_value OFFSET select_offset_value\n> > > \t\t\t{ $$ = makeList2($4, $2); }\n> 
> > @@ -3456,6 +3519,9 @@\n> > > \t\t\t{ $$ = makeList2($2, $4); }\n> > > \t\t| OFFSET select_offset_value\n> > > \t\t\t{ $$ = makeList2($2, NULL); }\n> > > +\t\t;\n> > > +\n> > > +opt_select_limit: select_limit\t{ $$ = $1; }\n> > > \t\t| /* EMPTY */\n> > > \t\t\t{ $$ = makeList2(NULL, NULL); }\n> > > \t\t;\n> > > @@ -3555,6 +3621,9 @@\n> > > \n> > > for_update_clause: FOR UPDATE update_list\t\t{ $$ = $3; }\n> > > \t\t| FOR READ ONLY\t\t\t\t\t\t\t{ $$ = NULL; }\n> > > +\t\t;\n> > > +\n> > > +opt_for_update_clause:\tfor_update_clause\t\t{ $$ = $1; }\n> > > \t\t| /* EMPTY */\t\t\t\t\t\t\t{ $$ = NULL; }\n> > > \t\t;\n> > > \n> > > @@ -3598,7 +3667,7 @@\n> > > \t\t\t\t\t$1->name = $2;\n> > > \t\t\t\t\t$$ = (Node *) $1;\n> > > \t\t\t\t}\n> > > -\t\t| '(' select_subclause ')' alias_clause\n> > > +\t\t| '(' SelectStmt ')' alias_clause\n> > > \t\t\t\t{\n> > > \t\t\t\t\tRangeSubselect *n = makeNode(RangeSubselect);\n> > > \t\t\t\t\tn->subquery = $2;\n> > > @@ -4134,7 +4203,7 @@\n> > > * Define row_descriptor to allow yacc to break the reduce/reduce conflict\n> > > * with singleton expressions.\n> > > */\n> > > -row_expr: '(' row_descriptor ')' IN '(' select_subclause ')'\n> > > +row_expr: '(' row_descriptor ')' IN '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = $2;\n> > > @@ -4144,7 +4213,7 @@\n> > > \t\t\t\t\tn->subselect = $6;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| '(' row_descriptor ')' NOT IN '(' select_subclause ')'\n> > > +\t\t| '(' row_descriptor ')' NOT IN '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = $2;\n> > > @@ -4154,7 +4223,7 @@\n> > > \t\t\t\t\tn->subselect = $7;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| '(' row_descriptor ')' all_Op sub_type '(' select_subclause ')'\n> > > +\t\t| '(' row_descriptor ')' all_Op sub_type '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink 
*n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = $2;\n> > > @@ -4167,7 +4236,7 @@\n> > > \t\t\t\t\tn->subselect = $7;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| '(' row_descriptor ')' all_Op '(' select_subclause ')'\n> > > +\t\t| '(' row_descriptor ')' all_Op '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = $2;\n> > > @@ -4498,7 +4567,7 @@\n> > > \t\t\t\t\t\t$$ = n;\n> > > \t\t\t\t\t}\n> > > \t\t\t\t}\n> > > -\t\t| a_expr all_Op sub_type '(' select_subclause ')'\n> > > +\t\t| a_expr all_Op sub_type '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = makeList1($1);\n> > > @@ -4894,7 +4963,7 @@\n> > > \t\t\t\t\tn->agg_distinct = FALSE;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| '(' select_subclause ')'\n> > > +\t\t| '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = NIL;\n> > > @@ -4904,7 +4973,7 @@\n> > > \t\t\t\t\tn->subselect = $2;\n> > > \t\t\t\t\t$$ = (Node *)n;\n> > > \t\t\t\t}\n> > > -\t\t| EXISTS '(' select_subclause ')'\n> > > +\t\t| EXISTS '(' SelectStmt ')'\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->lefthand = NIL;\n> > > @@ -5003,7 +5072,7 @@\n> > > \t\t\t\t{ $$ = $1; }\n> > > \t\t;\n> > > \n> > > -in_expr: select_subclause\n> > > +in_expr: SelectStmt\n> > > \t\t\t\t{\n> > > \t\t\t\t\tSubLink *n = makeNode(SubLink);\n> > > \t\t\t\t\tn->subselect = $1;\n> > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 28 Oct 2000 15:17:38 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling."
},
{
"msg_contents": "> Larry Rosenman <ler@lerctr.org> writes:\n> > Err, with Tom's objections, why was this applied?\n> \n> > * Bruce Momjian <pgman@candle.pha.pa.us> [001028 11:34]:\n> >> Applied. Thanks.\n> \n> Itchy trigger finger today, Bruce?\n> \n> Please revert the change --- I'm still discussing it with Kevin offlist,\n> but I don't feel it's acceptable as-is because it breaks reasonable\n> (non-redundant) UNION/INTERSECT/EXCEPT constructs that have worked for\n> several releases past.\n\nBacked out. I applied it because you said \"it was not unacceptible\", so\nI thought you liked it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 15:41:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Gram.y patches for better parenthesis handling."
},
{
"msg_contents": "I've taken another look at this stuff. I think it's a big improvement, but\nwe didn't notice that it does NOT do the thing we set out to. You\nstill cannot say \n select foo union (((select bar)))\n\nand as I think about it, there's no way to allow that without unifying\nwith c_expr. Consider: yacc is a bottom-up parser, with limited\nlook-ahead. So when it sees\n \"(((select bar)\" and it has just turned it into\n \"(((SelectStmt)\" what is it going to reduce next? It has one\nchar of lookahead, so it can peek out to\n \"(((SelectStmt))\" but it still doesn't know what to reduce.\nIt could be that what is wanted is either of\n \"((c_expr)\" or \"((some_select_type)\"\nand guessing is not allowed.... :-)\n\nI have experimented with unifying c_expr and selects, and it is\nall kinds of messy, because to the parser it appears that you're\nintroducing all sorts of expression tokens into the Select syntax.\nOf course we would reject all such strings for other reasons, but\npoor yacc has no way of knowing that.\n\nI'm afraid that for now, we should accept the improvements that\nhave been achieved, and consider a more general treatment of\nparentheses later.\n\nWhat do you think?\n\n++ kevin\n\n\nOn Sun, 29 Oct 2000, Kevin O'Gorman wrote:\n> \n> That was very helpful. And I was able to eliminate most of the\n> remaining restrictions by letting a SelectStmt be EITHER\n> a) a simple_select (which has no top-level parens or qualifiers)\n> or\n> b) a select_clause with at least one qualifier.\n> Option b reintroduces the SORT and LIMIT operations at the\n> top level, outside parentheses.\n> \n> The remaining restriction that is not SQL92 is that a SelectStmt\n> may not have wholly inclusive parens -- there has to be something\n> outside of them. 
If they're really important, one way to do it would\n> be to say an OptimizableStmt can be a c_expr, and check that it\n> has a top-level SelectStmt or SetOperationStmt, then treat it as\n> a Select Statement.\n> \n> I also made mods to put the top-level sort,limit and forUpdate in the\n> top-level node of the tree. In order to make this be workable, I also\n> modified parsenodes.h so that the structure for SetOperationStmt is a\n> synonym for SelectStmt, and merged the fields of the two node\n> types.\n> \n> I've attached diffs from the current tip of the tree (as of this morning).\n> These are still not completely functional until the downstream code\n> is changed too, of course.\n> \n> \n> On Sat, 28 Oct 2000, you wrote:\n> > After some more experimentation, I found that gram.y can be modified\n> > per the attached diffs to yield a conflict-free grammar that allows\n> > redundant parens and ORDER/LIMIT in subclauses. The remaining\n> > restrictions are:\n> > \n> > * To attach ORDER/LIMIT to a member select of a UNION/INTERSECT/EXCEPT,\n> > you need to write parens, eg\n> > \t(SELECT foo ORDER BY bar) UNION SELECT baz;\n> > \tSELECT foo UNION (SELECT bar ORDER BY baz);\n> > In the second case the parens are clearly necessary to distinguish the\n> > intent from attaching the ORDER BY to the UNION result. So this seems\n> > OK to me.\n> > \n> > * Parens cannot separate a select clause from its ORDER/LIMIT clauses.\n> > For example, this is not allowed:\n> > \t(SELECT foo FROM table) ORDER BY bar;\n> > \n> > The latter restriction is pretty annoying, first because SQL92 says\n> > it should be legal, and second because we used to accept it. 
However,\n> > it might be an acceptable tradeoff for allowing ORDER in subselects.\n> > Thoughts?\n> > \n> > BTW these diffs are just proof-of-concept for the grammar being\n> > conflict-free; I haven't changed the output structures, so don't\n> > try to run the code!\n> > \n> > \t\t\tregards, tom lane\n> \n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Sun, 29 Oct 2000 17:52:02 -0800",
"msg_from": "\"Kevin O'Gorman\" <kevin@trixie.kosman.via.ayuda.com>",
"msg_from_op": false,
"msg_subject": "Re: I believe it will (was Re: Hmm, will this do?)"
},
{
"msg_contents": "At 17:52 29/10/00 -0800, Kevin O'Gorman wrote:\n>\n>I'm afraid that for now, we should accept the improvements that\n>have been achieved, and consider a more general treatment of\n>parentheses later.\n>\n>What do you think?\n>\n\nJust to clarify: what is the status of the improvements that are\nimplemented? Are you saying we have ORDER/LIMIT in subselect, but the only\nthing not added is multiple levels of parentheses?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 30 Oct 2000 14:40:09 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: I believe it will (was Re: Hmm, will this\n do?)"
}
] |
[
{
"msg_contents": "Hi Zoltan. Your recent mail has had a reply-to of\n\n Kovacs Zoltan Sandor <zoli@pc10.radnoti-szeged.sulinet.hu>\n\nand a couple of messages from me to you have bounced. It looks like it\nreaches the machine, but says that the \"user is unknown\". Is that really\na good address for you? Hope to hear from you...\n\n - Thomas\n",
"msg_date": "Sat, 28 Oct 2000 02:21:02 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Zoltan, call home!"
}
] |
[
{
"msg_contents": "hello,\nI have a problem with a rule \"on insert\"\nI have a table \"my_table\" with a field named \"index\"\nthat has a default value of \"nextval('seq')\"\nand the folowing rule\n------------------------------------------------------\nCREATE RULE log_ins AS ON INSERT TO my_table\nDO INSERT INTO log(tbl,idrow,query,time)\nVALUES('my_table',NEW.index,'INSERT','now'::text) ;\n-----------------------------------------------------\nmy problem is:\nin \"log\" table I have for \"NEW.index\" a value of 13\namd in \"my_table\" I have for \"NEW.index\" a value of 14\nalways (+1) which is the increment of sequence \"seq\"\ncan someone help me ?\n\nP.S. what is the suntax for multiple actions in a rule ?\n\n-razvan-\n\n",
"msg_date": "Sat, 28 Oct 2000 12:55:56 +0200",
"msg_from": "\"Razvan Radu\" <razvanr@digiview.ro>",
"msg_from_op": true,
"msg_subject": "rule on insert"
}
] |
[
{
"msg_contents": "I need to compile a complete list of different locations to find \n\"libpq-fe.h\" in various PostgreSQL distributions so that I can \nautomatically add the correct path when compiling a client application. \nPlease post the location \"libpq-fe.h\" for your distribution and I will \ncompile a list and post it back to pgsql-hackers.\n\nThanks.\nTim\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n",
"msg_date": "Sat, 28 Oct 2000 12:59:06 -0400",
"msg_from": "\"Timothy H. Keitt\" <Timothy.Keitt@SUNYSB.Edu>",
"msg_from_op": true,
"msg_subject": "Location of client header files"
},
{
"msg_contents": "Timothy H. Keitt writes:\n\n> I need to compile a complete list of different locations to find \n> \"libpq-fe.h\" in various PostgreSQL distributions so that I can \n> automatically add the correct path when compiling a client application. \n> Please post the location \"libpq-fe.h\" for your distribution and I will \n> compile a list and post it back to pgsql-hackers.\n\nBesides the default /usr/local/pgsql/include, good candidates are\n/usr/include/pgsql (RPMs) and /usr/include/postgresql (Debian?). Future\nversions will provide a program 'pg_config' in the style of gtk-config\nthat will print the actual installation directory.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 8 Nov 2000 20:55:37 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Location of client header files"
}
] |
[
{
"msg_contents": "Concur.\n\n++ kevin\n\n\n\n> Subject: Re: Gram.y patches for better parenthesis handling.\n> Date: Sat, 28 Oct 2000 12:41:19 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: Larry Rosenman <ler@lerctr.org>\n> CC: PGSQL Hackers List <pgsql-hackers@hub.org>, pgman@candle.pha.pa.us\n> References: <39FA27A4.A68C08DD@pacbell.net> <200010281544.LAA08525@candle.pha.pa.us> <20001028113853.A25462@lerami.lerctr.org>\n> \n> Larry Rosenman <ler@lerctr.org> writes:\n> > Err, with Tom's objections, why was this applied?\n> \n> > * Bruce Momjian <pgman@candle.pha.pa.us> [001028 11:34]:\n> >> Applied. Thanks.\n> \n> Itchy trigger finger today, Bruce?\n> \n> Please revert the change --- I'm still discussing it with Kevin offlist,\n> but I don't feel it's acceptable as-is because it breaks reasonable\n> (non-redundant) UNION/INTERSECT/EXCEPT constructs that have worked for\n> several releases past.\n> \n> regards, tom lane\n\n-- \nKevin O'Gorman (805) 650-6274 mailto:kogorman@pacbell.net\nPermanent e-mail forwarder: mailto:Kevin.O'Gorman.64@Alum.Dartmouth.org\nAt school: mailto:kogorman@cs.ucsb.edu\nWeb: http://www.cs.ucsb.edu/~kogorman/index.html\nWeb: http://trixie.kosman.via.ayuda.com/~kevin/index.html\n\n\"There is a freedom lying beyond circumstance,\nderived from the direct intuition that life can\nbe grounded upon its absorption in what is\nchangeless amid change\" \n -- Alfred North Whitehead\n",
"msg_date": "Sat, 28 Oct 2000 12:12:00 -0700",
"msg_from": "\"Kevin O'Gorman\" <kogorman@pacbell.net>",
"msg_from_op": true,
"msg_subject": "Re: Gram.y patches for better parenthesis handling."
}
] |
[
{
"msg_contents": "Now that we have numeric file names, I would like to have a command I\ncan run from psql that will dump a mapping of numeric file name to table\nname, i.e.,\n\n\t121233\tpg_proc\n\t143423\tpg_index\n\n\nWith that feature, I can write scripts pgfile2name and pgname2file that\nmap file names to table names. People can run standard Unix commands\nand have meaningful display output:\n\n\tls -l | pgfile2name\n\nchanges:\n\n\t-rwx------ 4 postgres postgres 512 Oct 27 10:52 198323\n\nto:\n\n\t-rwx------ 4 postgres postgres 512 Oct 27 10:52 pg_class\n\nThe only missing piece would be to identify files on a backup tape. Not\nsure how to handle that unless a map file already exists on the tape.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 16:52:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Numeric file names"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Now that we have numeric file names, I would like to have a command I\n> can run from psql that will dump a mapping of numeric file name to table\n> name, i.e.,\n> \n> \t121233\tpg_proc\n> \t143423\tpg_index\n\nselect oid, relname from pg_class;\n\n> With that feature, I can write scripts pgfile2name and pgname2file that\n> map file names to table names. People can run standard Unix commands\n> and have meaningful display output:\n> \n> \tls -l | pgfile2name\n\nsed `psql -Aqt -d ${database} -c \"select'-e s/' || oid || '/' || relname || '/g' from pg_class\"`\n\n\nWhat I'd find useful is a program that you can occasionally run on a\ndatabase directory that creates links from \"name\" to \"oid\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 28 Oct 2000 23:58:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Numeric file names"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Now that we have numeric file names, I would like to have a command I\n> > can run from psql that will dump a mapping of numeric file name to table\n> > name, i.e.,\n> > \n> > \t121233\tpg_proc\n> > \t143423\tpg_index\n> \n> select oid, relname from pg_class;\n\nOh, we went with oid-based file names. OK.\n\n> > With that feature, I can write scripts pgfile2name and pgname2file that\n> > map file names to table names. People can run standard Unix commands\n> > and have meaningful display output:\n> > \n> > \tls -l | pgfile2name\n> \n> sed `psql -Aqt -d ${database} -c \"select'-e s/' || oid || '/' || relname || '/g' from pg_class\"`\n> \n> \n> What I'd find useful is a program that you can occasionally run on a\n> database directory that creates links from \"name\" to \"oid\".\n\nYes, that too. You can then do ls -L on the symlinks to see the\nunderlying sizes.\n\nMy utilities are more generic. Also, they will allow programs like\nfstat/lsof to show meaningful output, though it may be tough to guess\nthe database from the fstat output. lsof prints the full path, so that\nis OK. The script will guess the database from the path name.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 18:07:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Numeric file names"
},
{
"msg_contents": "> > With that feature, I can write scripts pgfile2name and pgname2file that\n> > map file names to table names. People can run standard Unix commands\n> > and have meaningful display output:\n> > \n> > \tls -l | pgfile2name\n> \n> sed `psql -Aqt -d ${database} -c \"select'-e s/' || oid || '/' || relname || '/g' from pg_class\"`\n> \n> \n> What I'd find useful is a program that you can occasionally run on a\n> database directory that creates links from \"name\" to \"oid\".\n\nLet me not grab this utility as my own. If someone else wants to code\nit, go ahead. I will be in California for most of next week.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 28 Oct 2000 18:08:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Numeric file names"
},
{
"msg_contents": "> > Now that we have numeric file names, I would like to have a command I\n> > can run from psql that will dump a mapping of numeric file name to table\n> > name, i.e.,\n> >\n> > 121233 pg_proc\n> > 143423 pg_index\n>\n> select oid, relname from pg_class;\n\nNo. select relfilenode, relname from pg_class - in theory relfilenode may\ndiffer\nfrom relation oid.\n\nVadim\n\n\n",
"msg_date": "Sat, 28 Oct 2000 15:44:04 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Numeric file names"
}
] |
[
{
"msg_contents": "Someone's been spending too much time on C code...\n\nthe current initdb.sh uses == which doesn't work.\n\nHere's a patch:\n\nIndex: initdb.sh\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/initdb/initdb.sh,v\nretrieving revision 1.107\ndiff -c -r1.107 initdb.sh\n*** initdb.sh\t2000/10/28 22:14:14\t1.107\n--- initdb.sh\t2000/10/29 03:49:36\n***************\n*** 109,119 ****\n \n if [ x\"$self_path\" != x\"\" ] \\\n && [ -x \"$self_path/postgres\" ] \\\n! && [ x\"`$self_path/postgres --version 2>/dev/null`\" == x\"postgres (PostgreSQL) $VERSION\" ]\n then\n PGPATH=$self_path\n elif [ -x \"$bindir/postgres\" ]; then\n! if [ x\"`$bindir/postgres --version 2>/dev/null`\" == x\"postgres (PostgreSQL) $VERSION\" ]\n then\n PGPATH=$bindir\n else\n--- 109,119 ----\n \n if [ x\"$self_path\" != x\"\" ] \\\n && [ -x \"$self_path/postgres\" ] \\\n! && [ x\"`$self_path/postgres --version 2>/dev/null`\" = x\"postgres (PostgreSQL) $VERSION\" ]\n then\n PGPATH=$self_path\n elif [ -x \"$bindir/postgres\" ]; then\n! if [ x\"`$bindir/postgres --version 2>/dev/null`\" = x\"postgres (PostgreSQL) $VERSION\" ]\n then\n PGPATH=$bindir\n else\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 28 Oct 2000 22:49:52 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "initdb.sh fix..."
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Someone's been spending too much time on C code...\n> \n> the current initdb.sh uses == which doesn't work.\n\nWorks in bash. :-) Thanks.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 29 Oct 2000 12:43:05 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb.sh fix..."
}
] |
[
{
"msg_contents": "\nJust wanted to check that this was a known problem:\n\n ERROR: fmgr_info: language 20322 has old-style handler\n\nthis is from current CVS when trying to call a procedure written in plpgsql.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 30 Oct 2000 01:23:32 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Handler for plpgsql out of date?"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Just wanted to check that this was a known problem:\n> ERROR: fmgr_info: language 20322 has old-style handler\n> this is from current CVS when trying to call a procedure written in plpgsql.\n\nIt's certainly not a known problem --- if it were, the plpgsql regress\ntest would be failing.\n\nI have a nasty feeling that the reason for the failure is that you\nrestored a DB dump that declares the plpgsql call handler as language\n'C' (correct for 7.0) not 'newC' (correct for current sources).\n\nI don't see any easy way around this :-(.\n\nAnyone have an idea how we can avoid importing a bad declaration from\nan old dump?\n\nBTW, I'm leaving in a few minutes and won't be home till Wed. evening,\nso don't expect any thoughts from this quarter...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Oct 2000 09:57:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Handler for plpgsql out of date? "
}
] |
[
{
"msg_contents": "I wanna know how BAR (Backup And Restore) is done now (PostgreSQL 7.0.2) and \nhow it will be done when PostgreSQL 7.1 comes out.\nWhat I want is a total recover of data up to the time that the database got \nscratched.\nWhat I mean is:\nCertain people write to the database all day long. Every mid-night I do a \nfull backup (level 0). If the database gets destroyed for any reason, I would \ndo I full restore of the last mid-night backup, but I also want all the info \nthat was changed after the last Level 0 backup.\nIs this posible with PostgreSQL 7.0.2?\nWill it be posible with PostgreSQL 7.1?\n\nThanks!!\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Sun, 29 Oct 2000 11:27:16 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "BAR now and with 7.1"
}
] |
[
{
"msg_contents": "Configured as:\n\nCC=cc CXX=CC ./configure --prefix=/home/ler/pg-test --enable-syslog --with-CXX --with-perl --enable-multibyte --with-includes=/usr/local/include --with-libs=/usr/local/lib\n\nTodays sources fail regression. Here is the regression.diffs:\n*** ./expected/int8.out\tFri Apr 7 14:17:39 2000\n--- ./results/int8.out\tSun Oct 29 09:02:46 2000\n***************\n*** 5,121 ****\n CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n INSERT INTO INT8_TBL VALUES('123','456');\n INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n SELECT * FROM INT8_TBL;\n q1 | q2 \n! ------------------+-------------------\n 123 | 456\n! 123 | 4567890123456789\n! 4567890123456789 | 123\n! 4567890123456789 | 4567890123456789\n! 4567890123456789 | -4567890123456789\n! (5 rows)\n \n SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n five | plus | minus \n! ------+------------------+-------------------\n | 123 | -123\n! | 123 | -123\n! | 4567890123456789 | -4567890123456789\n! | 4567890123456789 | -4567890123456789\n! | 4567890123456789 | -4567890123456789\n! (5 rows)\n \n SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n five | q1 | q2 | plus \n! ------+------------------+-------------------+------------------\n | 123 | 456 | 579\n! | 123 | 4567890123456789 | 4567890123456912\n! | 4567890123456789 | 123 | 4567890123456912\n! | 4567890123456789 | 4567890123456789 | 9135780246913578\n! | 4567890123456789 | -4567890123456789 | 0\n! (5 rows)\n \n SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n five | q1 | q2 | minus \n! ------+------------------+-------------------+-------------------\n | 123 | 456 | -333\n! | 123 | 4567890123456789 | -4567890123456666\n! | 4567890123456789 | 123 | 4567890123456666\n! | 4567890123456789 | 4567890123456789 | 0\n! 
| 4567890123456789 | -4567890123456789 | 9135780246913578\n! (5 rows)\n \n SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n three | q1 | q2 | multiply \n! -------+------------------+------------------+--------------------\n | 123 | 456 | 56088\n! | 123 | 4567890123456789 | 561850485185185047\n! | 4567890123456789 | 123 | 561850485185185047\n! (3 rows)\n \n SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n five | q1 | q2 | divide \n! ------+------------------+-------------------+----------------\n | 123 | 456 | 0\n! | 123 | 4567890123456789 | 0\n! | 4567890123456789 | 123 | 37137318076884\n! | 4567890123456789 | 4567890123456789 | 1\n! | 4567890123456789 | -4567890123456789 | -1\n! (5 rows)\n \n SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n five | q1 | float8 \n! ------+------------------+----------------------\n | 123 | 123\n! | 123 | 123\n! | 4567890123456789 | 4.56789012345679e+15\n! | 4567890123456789 | 4.56789012345679e+15\n! | 4567890123456789 | 4.56789012345679e+15\n! (5 rows)\n \n SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n five | q2 | float8 \n! ------+-------------------+-----------------------\n | 456 | 456\n! | 4567890123456789 | 4.56789012345679e+15\n! | 123 | 123\n! | 4567890123456789 | 4.56789012345679e+15\n! | -4567890123456789 | -4.56789012345679e+15\n! (5 rows)\n \n SELECT '' AS five, q1, int8(float8(q1)) AS \"two coercions\" FROM INT8_TBL;\n five | q1 | two coercions \n! ------+------------------+------------------\n | 123 | 123\n! | 123 | 123\n! | 4567890123456789 | 4567890123456789\n! | 4567890123456789 | 4567890123456789\n! | 4567890123456789 | 4567890123456789\n! (5 rows)\n \n SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------------\n | 246\n! | 246\n! | 9135780246913578\n! | 9135780246913578\n! | 9135780246913578\n! 
(5 rows)\n \n SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------------\n | 246\n! | 246\n! | 9135780246913578\n! | 9135780246913578\n! | 9135780246913578\n! (5 rows)\n \n -- TO_CHAR()\n --\n--- 5,83 ----\n CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n INSERT INTO INT8_TBL VALUES('123','456');\n INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n SELECT * FROM INT8_TBL;\n q1 | q2 \n! -----+-----\n 123 | 456\n! (1 row)\n \n SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n five | plus | minus \n! ------+------+-------\n | 123 | -123\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n five | q1 | q2 | plus \n! ------+-----+-----+------\n | 123 | 456 | 579\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n five | q1 | q2 | minus \n! ------+-----+-----+-------\n | 123 | 456 | -333\n! (1 row)\n \n SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n three | q1 | q2 | multiply \n! -------+-----+-----+----------\n | 123 | 456 | 56088\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n five | q1 | q2 | divide \n! ------+-----+-----+--------\n | 123 | 456 | 0\n! (1 row)\n \n SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n five | q1 | float8 \n! ------+-----+--------\n | 123 | 123\n! (1 row)\n \n SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n five | q2 | float8 \n! ------+-----+--------\n | 456 | 456\n! 
(1 row)\n \n SELECT '' AS five, q1, int8(float8(q1)) AS \"two coercions\" FROM INT8_TBL;\n five | q1 | two coercions \n! ------+-----+---------------\n | 123 | 123\n! (1 row)\n \n SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------\n | 246\n! (1 row)\n \n SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------\n | 246\n! (1 row)\n \n -- TO_CHAR()\n --\n***************\n*** 124,134 ****\n to_char_1 | to_char | to_char \n -----------+------------------------+------------------------\n | 123 | 456\n! | 123 | 4,567,890,123,456,789\n! | 4,567,890,123,456,789 | 123\n! | 4,567,890,123,456,789 | 4,567,890,123,456,789\n! | 4,567,890,123,456,789 | -4,567,890,123,456,789\n! (5 rows)\n \n SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n \tFROM INT8_TBL;\t\n--- 86,92 ----\n to_char_1 | to_char | to_char \n -----------+------------------------+------------------------\n | 123 | 456\n! (1 row)\n \n SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n \tFROM INT8_TBL;\t\n***************\n*** 135,145 ****\n to_char_2 | to_char | to_char \n -----------+--------------------------------+--------------------------------\n | 123.000,000 | 456.000,000\n! | 123.000,000 | 4,567,890,123,456,789.000,000\n! | 4,567,890,123,456,789.000,000 | 123.000,000\n! | 4,567,890,123,456,789.000,000 | 4,567,890,123,456,789.000,000\n! | 4,567,890,123,456,789.000,000 | -4,567,890,123,456,789.000,000\n! (5 rows)\n \n SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n \tFROM INT8_TBL;\n--- 93,99 ----\n to_char_2 | to_char | to_char \n -----------+--------------------------------+--------------------------------\n | 123.000,000 | 456.000,000\n! 
(1 row)\n \n SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n \tFROM INT8_TBL;\n***************\n*** 146,156 ****\n to_char_3 | to_char | to_char \n -----------+--------------------+------------------------\n | <123> | <456.000>\n! | <123> | <4567890123456789.000>\n! | <4567890123456789> | <123.000>\n! | <4567890123456789> | <4567890123456789.000>\n! | <4567890123456789> | 4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n \tFROM INT8_TBL;\n--- 100,106 ----\n to_char_3 | to_char | to_char \n -----------+--------------------+------------------------\n | <123> | <456.000>\n! (1 row)\n \n SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n \tFROM INT8_TBL;\n***************\n*** 157,295 ****\n to_char_4 | to_char | to_char \n -----------+-------------------+-------------------\n | 123- | -456\n! | 123- | -4567890123456789\n! | 4567890123456789- | -123\n! | 4567890123456789- | -4567890123456789\n! | 4567890123456789- | +4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n to_char_5 | to_char \n -----------+--------------------\n | 456\n! | 4567890123456789\n! | 123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n to_char_6 | to_char \n! -----------+-------------------\n | +456\n! | +4567890123456789\n! | +123\n! | +4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n to_char_7 | to_char \n! -----------+--------------------\n | 456TH\n! | 4567890123456789TH\n! | 123RD\n! | 4567890123456789TH\n! | <4567890123456789>\n! 
(5 rows)\n \n SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n to_char_8 | to_char \n -----------+---------------------\n | + 456th\n! | +4567890123456789th\n! | + 123rd\n! | +4567890123456789th\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n to_char_9 | to_char \n -----------+-------------------\n | 0000000000000456\n! | 4567890123456789\n! | 0000000000000123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n to_char_10 | to_char \n ------------+-------------------\n | +0000000000000456\n! | +4567890123456789\n! | +0000000000000123\n! | +4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n to_char_11 | to_char \n! ------------+-------------------\n | 0000000000000456\n! | 4567890123456789\n! | 0000000000000123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n to_char_12 | to_char \n! ------------+-----------------------\n | 456.000\n! | 4567890123456789.000\n! | 123.000\n! | 4567890123456789.000\n! | -4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n to_char_13 | to_char \n ------------+------------------------\n | 456.000\n! | 4567890123456789.000\n! | 123.000\n! | 4567890123456789.000\n! | -4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+-------------------\n | 456\n! | 4567890123456789\n! | 123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 
9 9 9') FROM INT8_TBL;\n to_char_15 | to_char \n ------------+-------------------------------------------\n | +4 5 6 . 0 0 0 \n! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! | +1 2 3 . 0 0 0 \n! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! | - 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! (5 rows)\n \n SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n to_char_16 | to_char \n ------------+-----------------------------------------------------------\n | text 9999 \"text between quote marks\" 456\n! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n! | text 9999 \"text between quote marks\" 123\n! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n! | -45678 text 9012 9999 345 \"text between quote marks\" 6789\n! (5 rows)\n \n SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n to_char_17 | to_char \n ------------+-------------------\n | + 456\n! | 456789+0123456789\n! | + 123\n! | 456789+0123456789\n! | 456789-0123456789\n! (5 rows)\n \n--- 107,189 ----\n to_char_4 | to_char | to_char \n -----------+-------------------+-------------------\n | 123- | -456\n! (1 row)\n \n SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n to_char_5 | to_char \n -----------+--------------------\n | 456\n! (1 row)\n \n SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n to_char_6 | to_char \n! -----------+---------\n | +456\n! (1 row)\n \n SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n to_char_7 | to_char \n! -----------+---------\n | 456TH\n! (1 row)\n \n SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n to_char_8 | to_char \n -----------+---------------------\n | + 456th\n! (1 row)\n \n SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n to_char_9 | to_char \n -----------+-------------------\n | 0000000000000456\n! 
(1 row)\n \n SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n to_char_10 | to_char \n ------------+-------------------\n | +0000000000000456\n! (1 row)\n \n SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n to_char_11 | to_char \n! ------------+------------------\n | 0000000000000456\n! (1 row)\n \n SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n to_char_12 | to_char \n! ------------+---------\n | 456.000\n! (1 row)\n \n SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n to_char_13 | to_char \n ------------+------------------------\n | 456.000\n! (1 row)\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+---------\n | 456\n! (1 row)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n to_char_15 | to_char \n ------------+-------------------------------------------\n | +4 5 6 . 0 0 0 \n! (1 row)\n \n SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n to_char_16 | to_char \n ------------+-----------------------------------------------------------\n | text 9999 \"text between quote marks\" 456\n! (1 row)\n \n SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n to_char_17 | to_char \n ------------+-------------------\n | + 456\n! (1 row)\n \n\n======================================================================\n\n*** ./expected/timestamp.out\tFri Sep 22 10:33:31 2000\n--- ./results/timestamp.out\tSun Oct 29 09:04:17 2000\n***************\n*** 13,25 ****\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n True \n ------\n! t\n (1 row)\n \n SELECT (timestamp 'tomorrow' = (timestamp 'yesterday' + interval '2 days')) as \"True\";\n True \n ------\n! 
t\n (1 row)\n \n SELECT (timestamp 'current' = 'now') as \"True\";\n--- 13,25 ----\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n True \n ------\n! f\n (1 row)\n \n SELECT (timestamp 'tomorrow' = (timestamp 'yesterday' + interval '2 days')) as \"True\";\n True \n ------\n! f\n (1 row)\n \n SELECT (timestamp 'current' = 'now') as \"True\";\n***************\n*** 81,87 ****\n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' + interval '1 day';\n one \n -----\n! 1\n (1 row)\n \n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' - interval '1 day';\n--- 81,87 ----\n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' + interval '1 day';\n one \n -----\n! 0\n (1 row)\n \n SELECT count(*) AS one FROM TIMESTAMP_TBL WHERE d1 = timestamp 'today' - interval '1 day';\n\n======================================================================\n\n*** ./expected/geometry.out\tTue Sep 12 16:07:16 2000\n--- ./results/geometry.out\tSun Oct 29 09:04:25 2000\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 150,160 ****\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186548,72.7106781186548),(-69.7106781186548,-68.7106781186548)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932738)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559643)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! | (170.710678118655,70.7106781186548),(29.2893218813452,-70.7106781186548)\n (6 rows)\n \n -- translation\n--- 150,160 ----\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186547,72.7106781186547),(-69.7106781186547,-68.7106781186547)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932737)\n! | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559642)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! 
| (170.710678118655,70.7106781186547),(29.2893218813453,-70.7106781186547)\n (6 rows)\n \n -- translation\n***************\n*** 443,454 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! 
| ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.59807621137373),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 443,454 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! 
| ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n! | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983794))\n | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983794))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 456,467 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181134),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181134),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 456,467 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359017709e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718035418e-11),(2.12132034353258,-2.12132034358671),(-4.59307077053127e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181135),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n***************\n*** 503,513 ****\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+-------------------\n! | <(100,0),100> | (5.1,34.5) | 0.976531926977964\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044151\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n--- 503,513 ----\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+------------------\n! | <(100,0),100> | (5.1,34.5) | 0.97653192697797\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044152\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n***************\n*** 519,525 ****\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! | <(1,3),5> | (5.1,34.5) | 26.7657047773224\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n--- 519,525 ----\n | <(0,0),3> | (10,10) | 11.142135623731\n | <(1,3),5> | (-5,-12) | 11.1554944214035\n | <(1,2),3> | (-5,-12) | 12.2315462117278\n! 
| <(1,3),5> | (5.1,34.5) | 26.7657047773223\n | <(1,2),3> | (5.1,34.5) | 29.757594539282\n | <(0,0),3> | (5.1,34.5) | 31.8749193547455\n | <(100,200),10> | (5.1,34.5) | 180.778038568384\n\n======================================================================\n\n*** ./expected/subselect.out\tThu Mar 23 01:42:13 2000\n--- ./results/subselect.out\tSun Oct 29 09:05:45 2000\n***************\n*** 160,167 ****\n select q1, float8(count(*)) / (select count(*) from int8_tbl)\n from int8_tbl group by q1;\n q1 | ?column? \n! ------------------+----------\n! 123 | 0.4\n! 4567890123456789 | 0.6\n! (2 rows)\n \n--- 160,166 ----\n select q1, float8(count(*)) / (select count(*) from int8_tbl)\n from int8_tbl group by q1;\n q1 | ?column? \n! -----+----------\n! 123 | 1\n! (1 row)\n \n\n======================================================================\n\n*** ./expected/union.out\tThu Oct 5 14:11:39 2000\n--- ./results/union.out\tSun Oct 29 09:05:46 2000\n***************\n*** 259,298 ****\n --\n SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n q2 \n! ------------------\n! 123\n! 4567890123456789\n! (2 rows)\n \n SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n q2 \n! ------------------\n! 123\n! 4567890123456789\n! 4567890123456789\n! (3 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! 4567890123456789\n! (3 rows)\n \n --\n -- Mixed types\n--- 259,289 ----\n --\n SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n q2 \n! ----\n! (0 rows)\n \n SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n q2 \n! ----\n! 
(0 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q2 \n! -----\n 456\n! (1 row)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n q2 \n! -----\n 456\n! (1 row)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n q2 \n! -----\n 456\n! (1 row)\n \n --\n -- Mixed types\n\n======================================================================\n\n*** ./expected/random.out\tThu Jan 6 00:40:54 2000\n--- ./results/random.out\tSun Oct 29 09:05:50 2000\n***************\n*** 31,35 ****\n WHERE random NOT BETWEEN 80 AND 120;\n random \n --------\n! (0 rows)\n \n--- 31,36 ----\n WHERE random NOT BETWEEN 80 AND 120;\n random \n --------\n! 121\n! (1 row)\n \n\n======================================================================\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 29 Oct 2000 09:09:29 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "regression failure/UnixWare7.1.1/current sources/multibyte."
}
] |
[
{
"msg_contents": "When I configure PG using:\n\nCC=cc CXX=CC ./configure --prefix=/home/ler/pg-test --enable-syslog --with-CXX --with-perl --enable-multibyte --with-includes=/usr/local/include --with-libs=/usr/local/lib\n\nCC doesn't see the -O flag.\n\n\nCC -g -K PIC -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pgconnection.o pgconnection.cc\nCC -g -K PIC -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pgdatabase.o pgdatabase.cc\nCC -g -K PIC -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pgtransdb.o pgtransdb.cc\nCC -g -K PIC -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pgcursordb.o pgcursordb.cc\nCC -g -K PIC -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pglobject.o pglobject.cc\nCC -G -Wl,-z,text -o libpq++.so.3.1 pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o pglobject.o -L/usr/local/lib -L../../../src/interfaces/libpq -lpq -Wl,-R/home/ler/pg-test/lib\n\nWhy? \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 29 Oct 2000 09:20:30 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "CC not getting -O passed?"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001029 14:58]:\n> Larry Rosenman writes:\n> \n> > CC doesn't see the -O flag.\n> > Why? \n> \n> Because C++ is not C. You can specify the flags manually with\n> CXXFLAGS=...\nBUT, we default C to -O, why not C++? \n\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 29 Oct 2000 15:01:47 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: CC not getting -O passed?"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [001030 10:53]:\n> Larry Rosenman writes:\n> \n> > BUT, we default C to -O, why not C++? \n> \n> Basically because we haven't done it yet. I'm not sure whether we're\n> going beta anytime soon, if not it'll probably get implemented.\nMy impression was REAL SOON NOW....\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 30 Oct 2000 10:55:10 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: CC not getting -O passed?"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> BUT, we default C to -O, why not C++? \n\nBasically because we haven't done it yet. I'm not sure whether we're\ngoing beta anytime soon, if not it'll probably get implemented.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 30 Oct 2000 17:58:16 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: CC not getting -O passed?"
}
] |
[
{
"msg_contents": "\nMorning all ...\n\n\tToday, we are moving the mailing lists over to the new mail\nserver. There *might* be a brief period where any mail sent to the lists\nwill be returned with a 'user unknown' error, as there will be a brief\nperiod where the aliases will be disabled on the old server and the MX\nrecords are changed ...\n\n\tI will send out another email when the change is complete, and I\nbelieve all is working again ... after which, if you *still* get that\nerror, please feel free to email me about it ...\n\n\tI'm still setup the environment on the new server, so it will be a\nfew hours yet, but it will be today, so figured would at least get a\nwarning email out :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 29 Oct 2000 11:36:47 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "READ THIS: changes in mailing list ..."
},
{
"msg_contents": "\nWhere can I find documentation about the best compiler for Postgres that\nrunned in Sun Ultra 5 with Mandrake Linux 7.X ?\n\n\n\nthanx\n\nRodrigo Castro Hdz \n\n",
"msg_date": "Sun, 29 Oct 2000 10:11:29 -0600 (CST)",
"msg_from": "Rodrigo Castro <roche@zaratustra.ulatina.ac.cr>",
"msg_from_op": false,
"msg_subject": "Optimization"
}
] |
[
{
"msg_contents": "I seem to have trouble again getting cvs logs for just the 7.0.X branch.\nI am running this command from a cvs checkout tree of 7.0.X:\n\n\t$ cvs log -d'>2000-06-07 00:00:00 GMT' -rREL7_0_PATCHES\n\nAnd am seeing entries like below. Can someone please explain why I am\nseeing stuff committed in current?\n\n---------------------------------------------------------------------------\n\n\tRCS file: /home/projects/pgsql/cvsroot/pgsql/COPYRIGHT,v\n\tWorking file: COPYRIGHT\n\thead: 1.5\n\tbranch:\n\tlocks: strict\n\taccess list:\n\tsymbolic names:\n\t\tREL7_0_PATCHES: 1.5.0.2\n\t\tREL7_0: 1.5\n\t\tREL6_5_PATCHES: 1.3.0.4\n\t\tREL6_5: 1.3\n\t\tREL6_4: 1.3.0.2\n\t\trelease-6-3: 1.3\n\t\tSUPPORT: 1.1.1.1\n\t\tPG95-DIST: 1.1.1\n\tkeyword substitution: kv\n\ttotal revisions: 6;\tselected revisions: 0\n\tdescription:\n\t=============================================================================\n\t\n\tRCS file: /home/projects/pgsql/cvsroot/pgsql/GNUmakefile.in,v\n\tWorking file: GNUmakefile.in\n\thead: 1.14\n\tbranch:\n\tlocks: strict\n\taccess list:\n\tsymbolic names:\n\tkeyword substitution: kv\n\ttotal revisions: 14;\tselected revisions: 13\n\tdescription:\n\t----------------------------\n\trevision 1.14\n\tdate: 2000/10/02 22:21:21; author: petere; state: Exp; lines: +4 -2\n\t\"installcheck\" doesn't need to depend on \"all\" since we depend on the user\n\tto start up a postmaster anyway.\n\t----------------------------\n\trevision 1.13\n\tdate: 2000/09/29 17:17:31; author: petere; state: Exp; lines: +3 -1\n\tNew unified regression test driver, test/regress makefile cleanup,\n\tadd \"check\" and \"installcheck\" targets, straighten out make variable naming\n\tof host_os, host_cpu, etc.\n\t----------------------------\n\trevision 1.12\n\tdate: 2000/09/21 20:17:41; author: petere; state: Exp; lines: +1 -20\n\tReplace brain-dead Autoconf macros AC_ARG_{ENABLE,WITH} with something\n\tthat's actually useful, robust, consistent.\n\t\n\tBetter plan to generate 
aclocal.m4 as well: use m4 include directives,\n\trather than cat.\n\t...\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Oct 2000 13:12:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "More cvs branch problems"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I seem to have trouble again getting cvs logs for just the 7.0.X branch.\n> I am running this command from a cvs checkout tree of 7.0.X:\n> \n> \t$ cvs log -d'>2000-06-07 00:00:00 GMT' -rREL7_0_PATCHES\n> \n> And am seeing entries like below. Can someone please explain why I am\n> seeing stuff committed in current?\n\nYou might want to check out cvs2cl\n(http://www.red-bean.com/kfogel/cvs2cl.shtml). It sems to work nicely and\nyou get much nicer output because it combines identical log messages in\none multi-file log entry.\n\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 2 Nov 2000 17:00:16 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: More cvs branch problems"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> > And am seeing entries like below. Can someone please explain why I am\n> > seeing stuff committed in current?\n \n> You might want to check out cvs2cl\n> (http://www.red-bean.com/kfogel/cvs2cl.shtml). It sems to work nicely and\n> you get much nicer output because it combines identical log messages in\n> one multi-file log entry.\n\nInteresting.\n\nI do have a question -- just how much configuration (and other) changes\noccurred to REL7_0_PATCHES (since the logs seem to not be telling the\nwhole story)?\n\nI say this because I found at least one such change -- REL7_0_PATCHES,\nunlike 7.0.2, has an '--enable-syslog' configure switch. Not a biggie\nfor me, but I wonder just what else was committed to REL7_0_PATCHES in\nthe same ilk?\n\nGetting ready to do a mass diff of REL7_0_PATCHES against 7.0.2 release.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Nov 2000 11:13:21 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: More cvs branch problems"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I do have a question -- just how much configuration (and other) changes\n> occurred to REL7_0_PATCHES (since the logs seem to not be telling the\n> whole story)?\n> I say this because I found at least one such change -- REL7_0_PATCHES,\n> unlike 7.0.2, has an '--enable-syslog' configure switch.\n\nThat's probably the only one, since by back-patching it Marc was\nviolating one of our standard rules: no new features in dot-releases,\nonly bug fixes.\n\nTo spread blame around fairly ;-), I'm afraid a lot of my own back-patch\nlog entries just say something like \"backpatch FOO\" without much detail.\nThere's more detail in the corresponding log entry in the development\nbranch, but to get that, you'll need to run cvs2cl without a branch\nrestriction. Sorry...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Nov 2000 12:46:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More cvs branch problems "
},
{
"msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > I say this because I found at least one such change -- REL7_0_PATCHES,\n> > unlike 7.0.2, has an '--enable-syslog' configure switch.\n \n> That's probably the only one, since by back-patching it Marc was\n> violating one of our standard rules: no new features in dot-releases,\n> only bug fixes.\n\nIf I may be so bold, this isn't the first time that rule has been\nviolated, and, it probably won't be the last. And for many things it\nwould be an issue -- but this one isn't, if it's the only one, or if\nchanges are in this ilk. It's the feature changes that haven't been\nbeta-tested properly that directly affect operations that worry me --\nand those are rare. And the USE_SYSLOG feature itself, even without a\n--enable-syslog, has been beta tested pretty thoroughly (and fixed a\ncouple of times, most notably with the syslog message splitter that,\nIIRC, Hiroshi added). \n\nBut I'm not complaining -- the addition of --enable-syslog makes my job\na little easier, as I now don't need to have that particular prepatch to\nconfig.h.in (which patch I loathed making) to enable USE_SYSLOG for the\nRPM's. Of course, making my job easier doesn't necessarily make it\nright :-).\n \n> To spread blame around fairly ;-), I'm afraid a lot of my own back-patch\n> log entries just say something like \"backpatch FOO\" without much detail.\n> There's more detail in the corresponding log entry in the development\n> branch, but to get that, you'll need to run cvs2cl without a branch\n> restriction. Sorry...\n\nIOW, the cvs logs (or cvs2cl output) will really have to be gone through\nby hand to really get a changelist from 7.0.2 to 7.0.3 instead of a\nchangelist from 7.0.2 to CURRENT.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 02 Nov 2000 13:01:44 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: More cvs branch problems"
},
{
"msg_contents": "On Thu, 2 Nov 2000, Tom Lane wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > I do have a question -- just how much configuration (and other) changes\n> > occurred to REL7_0_PATCHES (since the logs seem to not be telling the\n> > whole story)?\n> > I say this because I found at least one such change -- REL7_0_PATCHES,\n> > unlike 7.0.2, has an '--enable-syslog' configure switch.\n> \n> That's probably the only one, since by back-patching it Marc was\n> violating one of our standard rules: no new features in dot-releases,\n> only bug fixes.\n\nWhat new feature? syslog logging has been there for awhile, all I did was\nallow it to be enabled through configure vs having to edit the config.h\nfile ...\n\n\n",
"msg_date": "Thu, 2 Nov 2000 20:21:21 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: More cvs branch problems "
}
] |
[
{
"msg_contents": "\none list at a time, I move and test .. -hackers is the second ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sun, 29 Oct 2000 14:45:18 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "checking new server ..."
}
] |
[
{
"msg_contents": "\nmost odd ... \n\n",
"msg_date": "Sun, 29 Oct 2000 15:33:17 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "another try "
}
] |