[ { "msg_contents": "\nIn order to allow us to split easily across multiple machines, and move\nthings transparently, there following changes are being made:\n\nAnonCVS\n\n\t- to access the repository via anon-cvs, please connect to a\n\t CVSROOT of:\n\n\t:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n\t- passwd for anoncvs@anoncvs.postgresql.org: blank\n\n\nCVSup\n\n\t- to access the cvsup server, please now connect to:\n\n\t\tcvsup.postgresql.org\n\nboth of which will be updated from the master server every 4hrs, and\naccess to the master server will be disabled over the next couple of days\n...\n\nAny problems, please let me know as soon as possible ...\n\n", "msg_date": "Thu, 6 Sep 2001 13:52:39 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Some changes to CVSup and AnonCVS access ..." }, { "msg_contents": "\nI have updated cvs.sgml to show the new locations.\n\n> \n> In order to allow us to split easily across multiple machines, and move\n> things transparently, there following changes are being made:\n> \n> AnonCVS\n> \n> \t- to access the repository via anon-cvs, please connect to a\n> \t CVSROOT of:\n> \n> \t:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n> \n> \t- passwd for anoncvs@anoncvs.postgresql.org: blank\n> \n> \n> CVSup\n> \n> \t- to access the cvsup server, please now connect to:\n> \n> \t\tcvsup.postgresql.org\n> \n> both of which will be updated from the master server every 4hrs, and\n> access to the master server will be disabled over the next couple of days\n> ...\n> \n> Any problems, please let me know as soon as possible ...\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can 
be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 17:37:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Some changes to CVSup and AnonCVS access ..." } ]
[ { "msg_contents": "With the following configure script:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\ngmake breaks:\n\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/usr/bin/ld -r -o SUBSYS.o logtape.o tuplesort.o tuplestore.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\ngmake -C time SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tqual.o tqual.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/usr/bin/ld -r -o SUBSYS.o tqual.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ngmake -C mb SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ngmake[4]: *** No rule to make target `encnames.o', needed by `SUBSYS.o'. Stop.\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ngmake[3]: *** [mb-recursive] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nI had re-cvs checkout my source tree into a clean directory....\n\nLER\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 6 Sep 2001 13:34:21 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "cvs tip: gmake breakage." 
}, { "msg_contents": "\nYes, I think this is broken because a multi-byte commit failed to add\nsome files. I have contacted Tatsuo about this.\n\n> With the following configure script:\n> \n> CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> \t--enable-debug \\\n> \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> \t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n> \n> gmake breaks:\n> \n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> /usr/bin/ld -r -o SUBSYS.o logtape.o tuplesort.o tuplestore.o\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\n> gmake -C time SUBSYS.o\n> gmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\n> cc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tqual.o tqual.c\n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> /usr/bin/ld -r -o SUBSYS.o tqual.o\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\n> gmake -C mb SUBSYS.o\n> gmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\n> gmake[4]: *** No rule to make target `encnames.o', needed by `SUBSYS.o'. 
Stop.\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\n> gmake[3]: *** [mb-recursive] Error 2\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\n> gmake[2]: *** [utils-recursive] Error 2\n> gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> gmake: *** [all] Error 2\n> \n> I had re-cvs checkout my source tree into a clean directory....\n> \n> LER\n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Sep 2001 14:50:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: cvs tip: gmake breakage." } ]
[ { "msg_contents": "Hey dudes, sorry. I am sad to see it go.\n", "msg_date": "Thu, 06 Sep 2001 19:28:03 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Great Bridge" } ]
[ { "msg_contents": "Hi Darren,\n\nI'd be interested in finding out what your usage stats are, and any\nrequirements for hosting the replication content.\n\nAs qualification, we maintain an infrastructure at GlobalCentre (a child\ncompany of http://www.exodus.com/ ) This hosting facility also serves\nhttp://www.yellowpages.com.au and http://www.whitepages.com.au (two of the\nmost popular sites in Australia).\n\nBy way of a cc to the PostgreSQL hackers list, this offer goes out to anyone\nelse who was effected by the GreatBridge shutdown.\n\nCheers,\n\nMark Pritchard\nSenior Technical Architect\nTangent Systems Australia\n-------------------------------------------------\nemail mark@tangent.net.au\nph +61 3 9809 1311\nfax +61 3 9809 1322\nmob 0411 402 034\n-------------------------------------------------\nThe central task of a natural science is to make the wonderful commonplace:\nto show that complexity, correctly viewed, is only a mask for simplicity; to\nfind pattern hidden in apparent chaos. - Herb Simon\n\n\n> -----Original Message-----\n> From: pgreplication-general-admin@greatbridge.org\n> [mailto:pgreplication-general-admin@greatbridge.org]On Behalf Of Darren\n> Johnson\n> Sent: Friday, 7 September 2001 11:14 AM\n> To: pgreplication-general\n> Subject: [Pgreplication-general] GreatBridge ceases operations\n>\n>\n> The following announcement was made today...\n>\n> http://www.greatbridge.com\n>\n> I plan to continue this replication effort and I encourage all of\n> you to do the same. Please bear with me while I find a new host\n> for this project, and I will keep you informed on that front. 
The\n> current site and mailing list will be available while I explore other\n> avenues.\n>\n> If any one has a suggestion or comment on this issue, you can\n> reach me at the following address.\n>\n> darren.johnson@home.com\n>\n> I would like to thank everyone for their contributions, and I appreciate\n> your support during this transition.\n>\n> Sincerely,\n>\n> Darren B. Johnson\n>\n> _______________________________________________\n> Pgreplication-general mailing list\n> Pgreplication-general@greatbridge.org\n> http://www.greatbridge.org/mailman/listinfo/pgreplication-general\n>\n\n", "msg_date": "Fri, 7 Sep 2001 12:05:16 +1000", "msg_from": "\"Mark Pritchard\" <mark@tangent.net.au>", "msg_from_op": true, "msg_subject": "Re: [Pgreplication-general] GreatBridge ceases operations" }, { "msg_contents": "Mark Pritchard wrote:\n> Hi Darren,\n>\n> I'd be interested in finding out what your usage stats are, and any\n> requirements for hosting the replication content.\n>\n> As qualification, we maintain an infrastructure at GlobalCentre (a child\n> company of http://www.exodus.com/ ) This hosting facility also serves\n> http://www.yellowpages.com.au and http://www.whitepages.com.au (two of the\n> most popular sites in Australia).\n>\n> By way of a cc to the PostgreSQL hackers list, this offer goes out to anyone\n> else who was effected by the GreatBridge shutdown.\n\n The core team is actually looking for a possibility to\n smoothly transfer the entire greatbridge.org site to\n *.postgresql.org. Please wait a little until the dust\n settles.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 7 Sep 2001 08:13:32 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [Pgreplication-general] GreatBridge ceases operations" } ]
[ { "msg_contents": "I just noticed some unexpected behavior from byteain:\n\ntest=# select '\\\\009'::bytea;\n ?column?\n----------\n \\011\n(1 row)\n\ntest=# select '\\\\444'::bytea;\n ?column?\n----------\n $\n(1 row)\n\ntest=# select '\\\\999'::bytea;\n ?column?\n----------\n \\221\n(1 row)\n\nThe reason is the following code in byteain:\n\n else if (!isdigit((unsigned char) *tp++) ||\n !isdigit((unsigned char) *tp++) ||\n !isdigit((unsigned char) *tp++))\n elog(ERROR, \"Bad input string for type bytea\");\n\nIt checks for a '\\' followed by three digits, but does not attempt to\nenforce that the three digits actually produce a valid octal number. Anyone\nobject to me fixing this?\n\n-- Joe\n\n\n", "msg_date": "Thu, 6 Sep 2001 20:11:32 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "byteain bug(?)" }, { "msg_contents": "\nShould be fixed. I noticed this myself. Should we require three digits?\n\n\n> I just noticed some unexpected behavior from byteain:\n> \n> test=# select '\\\\009'::bytea;\n> ?column?\n> ----------\n> \\011\n> (1 row)\n> \n> test=# select '\\\\444'::bytea;\n> ?column?\n> ----------\n> $\n> (1 row)\n> \n> test=# select '\\\\999'::bytea;\n> ?column?\n> ----------\n> \\221\n> (1 row)\n> \n> The reason is the following code in byteain:\n> \n> else if (!isdigit((unsigned char) *tp++) ||\n> !isdigit((unsigned char) *tp++) ||\n> !isdigit((unsigned char) *tp++))\n> elog(ERROR, \"Bad input string for type bytea\");\n> \n> It checks for a '\\' followed by three digits, but does not attempt to\n> enforce that the three digits actually produce a valid octal number. 
Anyone\n> object to me fixing this?\n> \n> -- Joe\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 00:02:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: byteain bug(?)" }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> It checks for a '\\' followed by three digits, but does not attempt to\n> enforce that the three digits actually produce a valid octal number. Anyone\n> object to me fixing this?\n\nClearly a bug. Fix away...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 00:09:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: byteain bug(?) " }, { "msg_contents": ">\n> Should be fixed. I noticed this myself. Should we require three digits?\n>\n>\n> > I just noticed some unexpected behavior from byteain:\n> >\n> >\n> > test=# select '\\\\009'::bytea;\n> > ?column?\n> > ----------\n> > \\011\n> > (1 row)\n\n<snip>\n\n> >\n> > It checks for a '\\' followed by three digits, but does not attempt to\n> > enforce that the three digits actually produce a valid octal number.\nAnyone\n> > object to me fixing this?\n> >\n\nBased on the thread this morning on patches, I was thinking we should allow\n'\\\\', '\\0', or '\\###' where ### is any valid octal. 
At least that's what I\nwas going to have decode(bytea, 'escape') handle.\n\n-- Joe\n\n\n", "msg_date": "Thu, 6 Sep 2001 21:23:21 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: byteain bug(?)" }, { "msg_contents": "> > > It checks for a '\\' followed by three digits, but does not attempt to\n> > > enforce that the three digits actually produce a valid octal number.\n> Anyone\n> > > object to me fixing this?\n> > >\n> \n> Based on the thread this morning on patches, I was thinking we should allow\n> '\\\\', '\\0', or '\\###' where ### is any valid octal. At least that's what I\n> was going to have decode(bytea, 'escape') handle.\n\nYep, it is way too open right now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 00:24:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: byteain bug(?)" }, { "msg_contents": "> > > > It checks for a '\\' followed by three digits, but does not attempt\nto\n> > > > enforce that the three digits actually produce a valid octal number.\n> > Anyone\n> > > > object to me fixing this?\n> > > >\n> >\n> > Based on the thread this morning on patches, I was thinking we should\nallow\n> > '\\\\', '\\0', or '\\###' where ### is any valid octal. At least that's what\nI\n> > was going to have decode(bytea, 'escape') handle.\n>\n> Yep, it is way too open right now.\n\nOn further thought, I think I'll have to not allow '\\0' and require '\\000'\ninstead. Otherwise, how should the following be interpreted:\n\n'\\0123'\n\nIs that '\\0' followed by the literals '1', '2', and '3'? 
Or is it '\\012'\nfollowed by the literal '3'?\n\nSo, I'll go with '\\\\' or '\\###' where ### is any valid octal, for both\nbyteain and decode(bytea, 'escape').\n\nComments?\n\n-- Joe\n\n\n\n", "msg_date": "Thu, 6 Sep 2001 22:52:14 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: byteain bug(?)" }, { "msg_contents": "> > > > > It checks for a '\\' followed by three digits, but does not attempt\n> to\n> > > > > enforce that the three digits actually produce a valid octal number.\n> > > Anyone\n> > > > > object to me fixing this?\n> > > > >\n> > >\n> > > Based on the thread this morning on patches, I was thinking we should\n> allow\n> > > '\\\\', '\\0', or '\\###' where ### is any valid octal. At least that's what\n> I\n> > > was going to have decode(bytea, 'escape') handle.\n> >\n> > Yep, it is way too open right now.\n> \n> On further thought, I think I'll have to not allow '\\0' and require '\\000'\n> instead. Otherwise, how should the following be interpreted:\n> \n> '\\0123'\n> \n> Is that '\\0' followed by the literals '1', '2', and '3'? Or is it '\\012'\n> followed by the literal '3'?\n> \n> So, I'll go with '\\\\' or '\\###' where ### is any valid octal, for both\n> byteain and decode(bytea, 'escape').\n\nVery good point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 10:25:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: byteain bug(?)" } ]
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tscrappy@hub.org\t01/09/06 23:32:11\n\nAdded files:\n\tsrc/backend/utils/mb: encnames.c win1251.c \n\tsrc/backend/utils/mb/Unicode: alt_to_utf8.map koi8r_to_utf8.map \n\t utf8_to_alt.map utf8_to_koi8r.map \n\t utf8_to_win1251.map \n\t win1251_to_utf8.map \n\nLog message:\n\tAdd missing files.\n\n", "msg_date": "Thu, 6 Sep 2001 23:32:12 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/src/backend/utils/mb encnames.c win1251. ..." }, { "msg_contents": "This commit doesn't compile...\n\nWith configure input:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\nI get:\n\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tuplestore.o tuplestore.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/bin/ld -r -o SUBSYS.o logtape.o tuplesort.o tuplestore.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\ngmake -C time SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tqual.o tqual.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/bin/ld -r -o SUBSYS.o tqual.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ngmake -C mb SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o encnames.o encnames.c\nUX:cc: WARNING: 
debugging and optimization mutually exclusive; -O disabled\nUX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\ngmake[4]: *** [encnames.o] Error 1\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ngmake[3]: *** [mb-recursive] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\n\n* Marc G. Fournier <scrappy@hub.org> [010906 22:37]:\n> CVSROOT:\t/home/projects/pgsql/cvsroot\n> Module name:\tpgsql\n> Changes by:\tscrappy@hub.org\t01/09/06 23:32:11\n> \n> Added files:\n> \tsrc/backend/utils/mb: encnames.c win1251.c \n> \tsrc/backend/utils/mb/Unicode: alt_to_utf8.map koi8r_to_utf8.map \n> \t utf8_to_alt.map utf8_to_koi8r.map \n> \t utf8_to_win1251.map \n> \t win1251_to_utf8.map \n> \n> Log message:\n> \tAdd missing files.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 6 Sep 2001 23:37:53 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/mb encnames.c win1251. ..." } ]
[ { "msg_contents": "I'd just like to reassure everyone that the projects currently hosted\nby greatbridge.org will be taken care of; there's no need for people to\nscramble around looking for new sites.\n\nThe physical hosting will be picked up by hub.org. We still have to\nwork out what the site name will be and the details of getting things\ntransferred, but we'll strive to make it as painless as practicable\nfor the projects involved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 23:56:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Status of greatbridge.org" } ]
[ { "msg_contents": "Is the schedule still to go beta Monday?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 00:32:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta Monday?" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010906 23:40]:\n> Is the schedule still to go beta Monday?\nI would object based on the bug I just sent. the last multi-byte\nstuff doesn't compile (the file is there now...) ...\n\nLER\n\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 6 Sep 2001 23:41:46 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "\n\nOh, I see. Can you send an detailed email to hackers to Tatsuo can get\nit fixed? If we have to back it out, we will.\n\nMonday is not written in stone. If people want it later, we can do\nthat. Having it compile is a beta requirement. :-)\n\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [010906 23:40]:\n> > Is the schedule still to go beta Monday?\n> I would object based on the bug I just sent. the last multi-byte\n> stuff doesn't compile (the file is there now...) 
...\n> \n> LER\n> \n> \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 00:45:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is the schedule still to go beta Monday?\n\nWell, I dunno about you, but I've had a few distractions to deal with\nover the past couple days ;-)\n\nI'll keep trying to clean up loose ends, but I wonder if we should put\nit off a few days more. There's still a long list of unreviewed patches\nand undone todo items...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 00:58:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta Monday? " }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010906 23:45]:\n> \n> \n> Oh, I see. Can you send an detailed email to hackers to Tatsuo can get\n> it fixed? If we have to back it out, we will.\n> \n> Monday is not written in stone. If people want it later, we can do\n> that. Having it compile is a beta requirement. 
:-)\nAlready Done. \n\nBut here is the tail of the output:\n\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tuplestore.o tuplestore.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/bin/ld -r -o SUBSYS.o logtape.o tuplesort.o tuplestore.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\ngmake -C time SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o tqual.o tqual.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n/bin/ld -r -o SUBSYS.o tqual.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ngmake -C mb SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ncc -O -K inline -g -I../../../../src/include -I/usr/local/include -c -o encnames.o encnames.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\nUX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\ngmake[4]: *** [encnames.o] Error 1\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ngmake[3]: *** [mb-recursive] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nconfigure input:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n> \n> \n> > * Bruce Momjian 
<pgman@candle.pha.pa.us> [010906 23:40]:\n> > > Is the schedule still to go beta Monday?\n> > I would object based on the bug I just sent. the last multi-byte\n> > stuff doesn't compile (the file is there now...) ...\n> > \n> > LER\n> > \n> > \n> > > \n> > > -- \n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > > \n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 07:31:50 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is the schedule still to go beta Monday?\n> \n> Well, I dunno about you, but I've had a few distractions to deal with\n> over the past couple days ;-)\n> \n> I'll keep trying to clean up loose ends, but I wonder if we should put\n> it off a few days more. 
There's still a long list of unreviewed patches\n> and undone todo items...\n\nFine by me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 09:59:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "On Fri, Sep 07, 2001 at 07:31:50AM -0500, Larry Rosenman wrote:\n> * Bruce Momjian <pgman@candle.pha.pa.us> [010906 23:45]:\n> > \n> > \n> > Oh, I see. Can you send an detailed email to hackers to Tatsuo can get\n> > it fixed? If we have to back it out, we will.\n> > \n> > Monday is not written in stone. If people want it later, we can do\n> > that. Having it compile is a beta requirement. :-)\n> Already Done. \n\n I want send on monday small bugfix for to_char() (now I haven't time). \nIMHO good time for beta is in the Monday envening :-)\n \n> But here is the tail of the output:\n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n ^^^^^^^^^^^^^\n Here is:\n\n/* #define DEBUG_ENCODING */\n#ifdef DEBUG_ENCODING\n#ifdef FRONTEND\n#define encdebug(_format, _a...) fprintf(stderr, _format, ##_a)\n#else\n#define encdebug(_format, _a...) elog(NOTICE, _format, ##_a)\n#endif\n#else\n#define encdebug(_format, _a...)\n ^^^^^^^^^^^\n line 36\n\n#endif\n\n I don't see some problem with my gcc. 
Or I something overlook?\n\n\n I check current CVS and encoding names patch was commit incorrect!\n\n Well, again:\n \n * following files are renamed (see mb/Unicode\n -- is needful do:\n\ncvs remove src/utils/mb/Unicode/KOI8_to_utf8.map\ncvs add src/utils/mb/Unicode/koi8r_to_utf8.map\n\ncvs remove src/utils/mb/Unicode/WIN_to_utf8.map\ncvs add\t src/utils/mb/Unicode/win1251_to_utf8.map\n\ncvs remove src/utils/mb/Unicode/utf8_to_KOI8.map\ncvs add\t src/utils/mb/Unicode/utf8_to_koi8r.map\n\ncvs remove src/utils/mb/Unicode/utf8_to_WIN.map\ncvs add\t src/utils/mb/Unicode/utf8_to_win1251.map\n\n * new file:\n\ncvs add src/utils/mb/encname.c\n\n * removed file:\n\ncvs remove src/utils/mb/common.c\n\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 7 Sep 2001 16:00:48 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> I check current CVS and encoding names patch was commit incorrect!\n> \n> Well, again:\n> \n> * following files are renamed (see mb/Unicode\n> -- is needful do:\n> \n> cvs remove src/utils/mb/Unicode/KOI8_to_utf8.map\n> cvs add src/utils/mb/Unicode/koi8r_to_utf8.map\n> \n> cvs remove src/utils/mb/Unicode/WIN_to_utf8.map\n> cvs add\t src/utils/mb/Unicode/win1251_to_utf8.map\n> \n> cvs remove src/utils/mb/Unicode/utf8_to_KOI8.map\n> cvs add\t src/utils/mb/Unicode/utf8_to_koi8r.map\n> \n> cvs remove src/utils/mb/Unicode/utf8_to_WIN.map\n> cvs add\t src/utils/mb/Unicode/utf8_to_win1251.map\n\nOK, removed unused file. 
New versions were already added.\n\n\n> \n> * new file:\n> \n> cvs add src/utils/mb/encname.c\n\nAlready added.\n\n> \n> * removed file:\n> \n> cvs remove src/utils/mb/common.c\n\nRemoved.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 10:17:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "* Karel Zak <zakkr@zf.jcu.cz> [010907 09:00]:\n> On Fri, Sep 07, 2001 at 07:31:50AM -0500, Larry Rosenman wrote:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010906 23:45]:\n> > > \n> > > \n> > > Oh, I see. Can you send an detailed email to hackers to Tatsuo can get\n> > > it fixed? If we have to back it out, we will.\n> > > \n> > > Monday is not written in stone. If people want it later, we can do\n> > > that. Having it compile is a beta requirement. :-)\n> > Already Done. \n> \n> I want send on monday small bugfix for to_char() (now I haven't time). \n> IMHO good time for beta is in the Monday envening :-)\n> \n> > But here is the tail of the output:\n> > UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> ^^^^^^^^^^^^^\n> Here is:\n> \n> /* #define DEBUG_ENCODING */\n> #ifdef DEBUG_ENCODING\n> #ifdef FRONTEND\n> #define encdebug(_format, _a...) fprintf(stderr, _format, ##_a)\n> #else\n> #define encdebug(_format, _a...) elog(NOTICE, _format, ##_a)\n> #endif\n> #else\n> #define encdebug(_format, _a...)\n> ^^^^^^^^^^^\n> line 36\n> \n> #endif\n> \n> I don't see some problem with my gcc. Or I something overlook?\n> \n> \n> I check current CVS and encoding names patch was commit incorrect!\nKarel,\n If you want a shell account on my box, I can create one. 
Also,\nthe doc for this compiler is at: http://www.lerctr.org:457/ \nor http://uw7doc.sco.com:1997/ \n\nLarry.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 09:28:59 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n>> But here is the tail of the output:\n>> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n>> UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> ^^^^^^^^^^^^^\n\n> #define encdebug(_format, _a...)\n> ^^^^^^^^^^^\n> line 36\n\n> I don't see some problem with my gcc. Or I something overlook?\n\n\"...\" in macro parameters is a gcc-ism. This code is unportable and\nmust be fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 10:30:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta Monday? " }, { "msg_contents": "> Karel Zak <zakkr@zf.jcu.cz> writes:\n> >> But here is the tail of the output:\n> >> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> >> UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> > ^^^^^^^^^^^^^\n> \n> > #define encdebug(_format, _a...)\n> > ^^^^^^^^^^^\n> > line 36\n> \n> > I don't see some problem with my gcc. Or I something overlook?\n> \n> \"...\" in macro parameters is a gcc-ism. This code is unportable and\n> must be fixed.\n\nYes, that stuff can't be done. We do use ## to bind macro params, but\nwe don't do variable-length macro calls. This is debug stuff anyway so\nI am looking at how to get it compiling right away.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 10:39:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> I'll keep trying to clean up loose ends, but I wonder if we should put\n> it off a few days more. There's still a long list of unreviewed patches\n> and undone todo items...\n\n*sigh*\n\nI'd like a few more days also. I've got \"timestamp with time zone\" work\nto do, and unexpectedly may not have the time in the next two days to\nfinish it up.\n\n - Thomas\n", "msg_date": "Fri, 07 Sep 2001 14:40:45 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "On Fri, Sep 07, 2001 at 10:39:19AM -0400, Bruce Momjian wrote:\n> > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > >> But here is the tail of the output:\n> > >> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > >> UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> > > ^^^^^^^^^^^^^\n> > \n> > > #define encdebug(_format, _a...)\n> > > ^^^^^^^^^^^\n> > > line 36\n> > \n> > > I don't see some problem with my gcc. Or I something overlook?\n> > \n> > \"...\" in macro parameters is a gcc-ism. This code is unportable and\n> > must be fixed.\n> \n> Yes, that stuff can't be done. We do use ## to bind macro params, but\n> we don't do variable-length macro calls. This is debug stuff anyway so\n> I am looking at how to get it compiling right away.\n\n Hmm.. my world is comiled by 'gcc' and I'm still must learning that \nhere are some cathedrals and not bazaars only :-) Sorry.\n\n OK, please erase all these debug macros (encdebug). I used it for binary \nsearch check only. It's not needful. 
Do you want a patch (but not until \nMonday)?\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 7 Sep 2001 16:58:26 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> > I am looking at how to get it compiling right away.\n> \n> Hmm.. my world is comiled by 'gcc' and I'm still must learning that \n> here are some cathedrals and not bazaars only :-) Sorry.\n> \n> OK, please erase all these debug macros (encdebug). I used it for binary \n> search check only. It's not needful. Do you want a patch (but not until \n> Monday)?\n\nOK, I am on it. Removing now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 10:59:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> On Fri, Sep 07, 2001 at 10:39:19AM -0400, Bruce Momjian wrote:\n> > > Karel Zak <zakkr@zf.jcu.cz> writes:\n> > > >> But here is the tail of the output:\n> > > >> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > > >> UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> > > > ^^^^^^^^^^^^^\n> > > \n> > > > #define encdebug(_format, _a...)\n> > > > ^^^^^^^^^^^\n> > > > line 36\n> > > \n> > > > I don't see some problem with my gcc. Or I something overlook?\n> > > \n> > > \"...\" in macro parameters is a gcc-ism. This code is unportable and\n> > > must be fixed.\n> > \n> > Yes, that stuff can't be done. We do use ## to bind macro params, but\n> > we don't do variable-length macro calls. 
This is debug stuff anyway so\n> > I am looking at how to get it compiling right away.\n> \n> Hmm.. my world is comiled by 'gcc' and I'm still must learning that \n> here are some cathedrals and not bazaars only :-) Sorry.\n> \n> OK, please erase all these debug macros (encdebug). I used it for binary \n> search check only. It's not needful. Do you want a patch (but not until \n> Monday)?\n\nOK, macros gone. Should compile fine now for people using multibyte.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 11:01:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "Karel Zak writes:\n\n> > But here is the tail of the output:\n> > UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > UX:acomp: ERROR: \"encnames.c\", line 36: syntax error in macro parameters\n> ^^^^^^^^^^^^^\n> Here is:\n>\n> /* #define DEBUG_ENCODING */\n> #ifdef DEBUG_ENCODING\n> #ifdef FRONTEND\n> #define encdebug(_format, _a...) fprintf(stderr, _format, ##_a)\n> #else\n> #define encdebug(_format, _a...) elog(NOTICE, _format, ##_a)\n> #endif\n> #else\n> #define encdebug(_format, _a...)\n> ^^^^^^^^^^^\n> line 36\n>\n> #endif\n>\n> I don't see some problem with my gcc. Or I something overlook?\n\nThat's because this is a gcc-specific feature. You cannot portably use\nvarargs macros.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 7 Sep 2001 17:20:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" }, { "msg_contents": "> > > I am looking at how to get it compiling right away.\n> > \n> > Hmm.. 
my world is comiled by 'gcc' and I'm still must learning that \n> > here are some cathedrals and not bazaars only :-) Sorry.\n> > \n> > OK, please erase all these debug macros (encdebug). I used it for binary \n> > search check only. It's not needful. Do you want a patch (but not until \n> > Monday)?\n> \n> OK, I am on it. Removing now.\n\nThanks. I've been on a business trip and do not have good\ninternet access. I should have fixed the bug. Sorry for the confusion.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 08 Sep 2001 09:51:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Beta Monday?" } ]
[ { "msg_contents": "Hi,\n\nI want to extend the catalog, and I think the key is\npg_attribute.h, pg_class.h, pg_type.h.\n\nIs it enough to:\n\n1. add pg_xxx.h \n2. modify pg_attribute.h, pg_class.h, pg_type.h, catname.h, \n indexing.h, indexing.c, Makefile\n\nI tried once; it was OK when targeted to template, but it was a mess\nwhen targeted to global.\n\nWhat is the problem?\n", "msg_date": "Fri, 07 Sep 2001 17:06:42 +0800", "msg_from": "\"kevin\" <wangky@wesec.com>", "msg_from_op": true, "msg_subject": "how to extend the catalog?" }, { "msg_contents": "\"kevin\" <wangky@wesec.com> writes:\n> I want to extend the catalog,\n\nWhat do you want to do *exactly*?\n\n> Is it enough to:\n> 1. add pg_xxx.h \n> 2. modify pg_attribute.h, pg_class.h, pg_type.h, catname.h, \n> indexing.h, indexing.c, Makefile\n\nYou shouldn't need to touch pg_attribute.h, pg_class.h, pg_type.h,\nunless you are hacking one of the bootstrapped system tables or adding\na new table that has to be known to the bootstrapper.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 02:22:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to extend the catalog? " }, { "msg_contents": "Yes, that is what I want to do.\n\nIn fact, I want to add some audit functions, like some other commercial DBs.\nDo you think it is a good idea?\n\nSo, I think 2 system tables are needed, one for recording audits and the\nother for defining audit events. 
The 2 system tables should be created at\ninitdb time.\n\nAnd, I think there may be 4 places to add code to handle auditing:\nlogon, logoff, executor, error.\n\nAnd, I don't know if it is necessary to change the query structure.\n\nYes, that is what I want to do; I will do it, and I need your help.\n\n\t\t\tregards, kevin\n\n\n> What do you want to do *exactly*?\n> \n> You shouldn't need to touch pg_attribute.h, pg_class.h, pg_type.h,\n> unless you are hacking one of the bootstrapped system tables or adding a\n> new table that has to be known to the bootstrapper.\n", "msg_date": "Mon, 17 Sep 2001 13:36:28 +0800", "msg_from": "\"kevin\" <wangky@wesec.com>", "msg_from_op": true, "msg_subject": "Re: how to extend the catalog?" } ]
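For context on Tom's point about bootstrapped tables in the thread above: system catalogs of that era were declared under src/include/catalog with the `CATALOG()` macro conventions used by headers like pg_language.h, and only catalogs needed during bootstrap had to touch pg_attribute.h/pg_class.h/pg_type.h. The fragment below is a hypothetical sketch — the `pg_audit` catalog and all `aud*` names are invented for illustration, and it is the shape of such a header, not a working patch:

```c
/*
 * Hypothetical header: src/include/catalog/pg_audit.h
 * (all names invented; initial rows, if any, would be added
 * with DATA(insert ...) lines as in the existing catalog headers)
 */
#ifndef PG_AUDIT_H
#define PG_AUDIT_H

CATALOG(pg_audit)
{
	NameData	audname;		/* name of the audit event */
	int4		audaction;		/* action code to record */
	bool		audenabled;		/* is auditing of this event on? */
} FormData_pg_audit;

typedef FormData_pg_audit *Form_pg_audit;

#define Natts_pg_audit				3
#define Anum_pg_audit_audname		1
#define Anum_pg_audit_audaction		2
#define Anum_pg_audit_audenabled	3

#endif	 /* PG_AUDIT_H */
```

A non-bootstrapped catalog like this would also need an entry in catname.h and, for any indexes, declarations in indexing.h/indexing.c — matching the file list discussed in the thread.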
[ { "msg_contents": "gcc -O2 -pipe -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hba.o hba.c\nhba.c: In function `ident_unix':\nhba.c:923: sizeof applied to an incomplete type\nhba.c:960: dereferencing pointer to incomplete type\nhba.c:965: dereferencing pointer to incomplete type\ngmake: *** [hba.o] Error 1\n\n\nNow, the problem is sizeof(Cred), typedef struct cmsgcred Cred, and I don't\nhave a cmsgcred anywhere! The closest is my sys/ucred.h which defines a\nstruct ucred {\n u_short cr_ref; /* reference count */\n uid_t cr_uid; /* effective user id */\n gid_t cr_gid; /* effective group id */\n short cr_ngroups; /* number of groups */\n gid_t cr_groups[NGROUPS]; /* groups */\n};\n\nThoughts?\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 7 Sep 2001 13:55:50 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "backend hba.c prob" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> hba.c: In function `ident_unix':\n> hba.c:923: sizeof applied to an incomplete type\n\n> Now, the problem is sizeof(Cred), typedef struct cmsgcred Cred, and I don't\n> have a cmsgcred anywhere!\n\nThat's new code and we expected some portability issues with it :-(\n\nWhat platform are you on exactly? 
What changes are needed to make the\ncode work there, and how might we #ifdef or autoconfigure a test for it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 10:14:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob " }, { "msg_contents": "> gcc -O2 -pipe -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hba.o hba.c\n> hba.c: In function `ident_unix':\n> hba.c:923: sizeof applied to an incomplete type\n> hba.c:960: dereferencing pointer to incomplete type\n> hba.c:965: dereferencing pointer to incomplete type\n> gmake: *** [hba.o] Error 1\n> \n> \n> Now, the problem is sizeof(Cred), typedef struct cmsgcred Cred, and I don't\n> have a cmsgcred anywhere! The closest is my sys/ucred.h which defines a\n> struct ucred {\n> u_short cr_ref; /* reference count */\n> uid_t cr_uid; /* effective user id */\n> gid_t cr_gid; /* effective group id */\n> short cr_ngroups; /* number of groups */\n> gid_t cr_groups[NGROUPS]; /* groups */\n> };\n> \n> Thoughts?\n\nActually, yes.\n\nThe code currently runs on FreeBSD and BSD/OS. Right now, it tests for\nBSD/OS and if it fails, assume it is FreeBSD. That is what the #ifndef\nfc_uid is for. Now, I assume you are on a *BSD which is not one of\nthose. Do you have a struct fcred? I will browse your OS headers as\nsoon as I know your OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 10:37:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "On Fri, 7 Sep 2001, Bruce Momjian wrote:\n\n> > gcc -O2 -pipe -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hba.o hba.c\n> > hba.c: In function `ident_unix':\n> > hba.c:923: sizeof applied to an incomplete type\n> > hba.c:960: dereferencing pointer to incomplete type\n> > hba.c:965: dereferencing pointer to incomplete type\n> > gmake: *** [hba.o] Error 1\n\n> The code currently runs on FreeBSD and BSD/OS. Right now, it tests for\n> BSD/OS and if it fails, assume it is FreeBSD. That is what the #ifndef\n> fc_uid is for. Now, I assume you are on a *BSD which is not one of\n> those. Do you have a struct fcred? I will browse your OS headers as\n> soon as I know your OS.\n\nYeah, i'm seeing the same problem on OpenBSD-current (and 2.9). No Cred\nanywhere!\n\nThis:\n\n\troot@mizer:/usr/src$ egrep -r \"fcred\" *\n\nturned up nothing interesting either.\n\nThoughts?\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Fri, 7 Sep 2001 11:18:41 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "On Fri, Sep 07, 2001 at 10:14:27AM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > hba.c: In function `ident_unix':\n> > hba.c:923: sizeof applied to an incomplete type\n> \n> > Now, the problem is sizeof(Cred), typedef struct cmsgcred Cred, and I don't\n> > have a cmsgcred anywhere!\n> \n> That's new code and we expected some portability issues with it :-(\n> \n> What platform are you on exactly?\n\nNetBSD-1.5X/i386 Remeber me? 
:)\n\n> What changes are needed to make the\n> code work there, and how might we #ifdef or autoconfigure a test for it?\n\nI need to look at it some more for that..\n\nCheers,\n\nPatrick\n", "msg_date": "Fri, 7 Sep 2001 17:21:26 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "> On Fri, Sep 07, 2001 at 10:14:27AM -0400, Tom Lane wrote:\n> > Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > > hba.c: In function `ident_unix':\n> > > hba.c:923: sizeof applied to an incomplete type\n> > \n> > > Now, the problem is sizeof(Cred), typedef struct cmsgcred Cred, and I don't\n> > > have a cmsgcred anywhere!\n> > \n> > That's new code and we expected some portability issues with it :-(\n> > \n> > What platform are you on exactly?\n> \n> NetBSD-1.5X/i386 Remeber me? :)\n> \n> > What changes are needed to make the\n> > code work there, and how might we #ifdef or autoconfigure a test for it?\n> \n> I need to look at it some more for that..\n\nOK, I have modified the CVS CREDS code to work on FreeBSD and BSD/OS,\nand hopefully NetBSD. I talked to Jason at Linuxworld and I think this\ncode should work. Please test the CVS version and let me know. OpenBSD\ndoesn't support creds as far as I can tell.\n\nTo test, define 'ident sameuser' for 'local' in pg_hba.conf and restart\npostmaster. Then connect as local user.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 16:05:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "> On Fri, 7 Sep 2001, Bruce Momjian wrote:\n> \n> > > gcc -O2 -pipe -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hba.o hba.c\n> > > hba.c: In function `ident_unix':\n> > > hba.c:923: sizeof applied to an incomplete type\n> > > hba.c:960: dereferencing pointer to incomplete type\n> > > hba.c:965: dereferencing pointer to incomplete type\n> > > gmake: *** [hba.o] Error 1\n> \n> > The code currently runs on FreeBSD and BSD/OS. Right now, it tests for\n> > BSD/OS and if it fails, assume it is FreeBSD. That is what the #ifndef\n> > fc_uid is for. Now, I assume you are on a *BSD which is not one of\n> > those. Do you have a struct fcred? I will browse your OS headers as\n> > soon as I know your OS.\n> \n> Yeah, i'm seeing the same problem on OpenBSD-current (and 2.9). No Cred\n> anywhere!\n> \n> This:\n> \n> \troot@mizer:/usr/src$ egrep -r \"fcred\" *\n> \n> turned up nothing interesting either.\n\nOK, CVS should compile on OpenBSD now. However, there is no SCM_CREDS\ncapability on OpenBSD that I can see so 'ident' will not work on 'local'\nin pg_hba.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 16:06:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "On Fri, Sep 07, 2001 at 04:05:58PM -0400, Bruce Momjian wrote:\n... \n> OK, I have modified the CVS CREDS code to work on FreeBSD and BSD/OS,\n> and hopefully NetBSD. I talked to Jason at Linuxworld and I think this\n> code should work. 
Please test the CVS version and let me know. OpenBSD\n> doesn't support creds as far as I can tell.\n> \n> To test, define 'ident sameuser' for 'local' in pg_hba.conf and restart\n> postmaster. Then connect as local user.\n\nAll tested OK under NetBSD :)\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 12 Sep 2001 13:29:27 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: backend hba.c prob" }, { "msg_contents": "> On Fri, Sep 07, 2001 at 04:05:58PM -0400, Bruce Momjian wrote:\n> ... \n> > OK, I have modified the CVS CREDS code to work on FreeBSD and BSD/OS,\n> > and hopefully NetBSD. I talked to Jason at Linuxworld and I think this\n> > code should work. Please test the CVS version and let me know. OpenBSD\n> > doesn't support creds as far as I can tell.\n> > \n> > To test, define 'ident sameuser' for 'local' in pg_hba.conf and restart\n> > postmaster. Then connect as local user.\n> \n> All tested OK under NetBSD :)\n\nReally? Looks like I hit it right the first time. The NetBSD method is\nvery similar to the BSD/OS version.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 10:32:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: backend hba.c prob" } ]
[ { "msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tscrappy@hub.org\t01/09/07 11:01:45\n\nModified files:\n\tsrc/backend/utils/mb: encnames.c \n\nLog message:\n\tRemove variable length macros used in debugging, per Karel.\n\n", "msg_date": "Fri, 7 Sep 2001 11:01:46 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "pgsql/src/backend/utils/mb encnames.c" }, { "msg_contents": "* Marc G. Fournier <scrappy@hub.org> [010907 10:06]:\n> CVSROOT:\t/home/projects/pgsql/cvsroot\n> Module name:\tpgsql\n> Changes by:\tscrappy@hub.org\t01/09/07 11:01:45\n> \n> Modified files:\n> \tsrc/backend/utils/mb: encnames.c \n> \n> Log message:\n> \tRemove variable length macros used in debugging, per Karel.\n\nNow we die differently:\n\n -e \"s,@configure@,$configure,g\" \\\n -e 's,@version@,7.2devel,g' \\\n pg_config.sh >pg_config\nchmod a+x pg_config\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_config'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\ngmake -C ../../../src/interfaces/libpq all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[4]: Nothing to be done for `all'.\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ncc -O -K inline -g -I../../../src/include -I/usr/local/include -c -o pg_encoding.o pg_encoding.c\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\ncc -O -K inline -g pg_encoding.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -Wl,-R/usr/local/pgsql/lib -lz -lresolv -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -o pg_encoding\nUX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\npg_valid_server_encoding pg_encoding.o\nUX:ld: ERROR: Symbol referencing errors. 
No output written to pg_encoding\ngmake[3]: *** [pg_encoding] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nconfigure input: \n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 10:18:45 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/mb encnames.c" }, { "msg_contents": "\nI am seeing no failure here with enable-multibyte and enable-locale.\nCan you update cvs, do a make clean, and try again.\n\n> * Marc G. 
Fournier <scrappy@hub.org> [010907 10:06]:\n> > CVSROOT:\t/home/projects/pgsql/cvsroot\n> > Module name:\tpgsql\n> > Changes by:\tscrappy@hub.org\t01/09/07 11:01:45\n> > \n> > Modified files:\n> > \tsrc/backend/utils/mb: encnames.c \n> > \n> > Log message:\n> > \tRemove variable length macros used in debugging, per Karel.\n> \n> Now we die differently:\n> \n> -e \"s,@configure@,$configure,g\" \\\n> -e 's,@version@,7.2devel,g' \\\n> pg_config.sh >pg_config\n> chmod a+x pg_config\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_config'\n> gmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\n> gmake -C ../../../src/interfaces/libpq all\n> gmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\n> gmake[4]: Nothing to be done for `all'.\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\n> cc -O -K inline -g -I../../../src/include -I/usr/local/include -c -o pg_encoding.o pg_encoding.c\n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> cc -O -K inline -g pg_encoding.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -Wl,-R/usr/local/pgsql/lib -lz -lresolv -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -o pg_encoding\n> UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> pg_valid_server_encoding pg_encoding.o\n> UX:ld: ERROR: Symbol referencing errors. 
No output written to pg_encoding\n> gmake[3]: *** [pg_encoding] Error 1\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> gmake: *** [all] Error 2\n> \n> configure input: \n> \n> CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> \t--enable-debug \\\n> \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> \t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 12:12:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/mb encnames.c" }, { "msg_contents": "Still Fails here....\n\nLER\n\n* Bruce Momjian <pgman@candle.pha.pa.us> [010907 11:27]:\n> \n> I am seeing no failure here with enable-multibyte and enable-locale.\n> Can you update cvs, do a make clean, and try again.\n> \n> > * Marc G. 
Fournier <scrappy@hub.org> [010907 10:06]:\n> > > CVSROOT:\t/home/projects/pgsql/cvsroot\n> > > Module name:\tpgsql\n> > > Changes by:\tscrappy@hub.org\t01/09/07 11:01:45\n> > > \n> > > Modified files:\n> > > \tsrc/backend/utils/mb: encnames.c \n> > > \n> > > Log message:\n> > > \tRemove variable length macros used in debugging, per Karel.\n> > \n> > Now we die differently:\n> > \n> > -e \"s,@configure@,$configure,g\" \\\n> > -e 's,@version@,7.2devel,g' \\\n> > pg_config.sh >pg_config\n> > chmod a+x pg_config\n> > gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_config'\n> > gmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\n> > gmake -C ../../../src/interfaces/libpq all\n> > gmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\n> > gmake[4]: Nothing to be done for `all'.\n> > gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\n> > cc -O -K inline -g -I../../../src/include -I/usr/local/include -c -o pg_encoding.o pg_encoding.c\n> > UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > cc -O -K inline -g pg_encoding.o -L../../../src/interfaces/libpq -lpq -L/usr/local/lib -Wl,-R/usr/local/pgsql/lib -lz -lresolv -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap -o pg_encoding\n> > UX:cc: WARNING: debugging and optimization mutually exclusive; -O disabled\n> > Undefined\t\t\tfirst referenced\n> > symbol \t\t\t in file\n> > pg_valid_server_encoding pg_encoding.o\n> > UX:ld: ERROR: Symbol referencing errors. 
No output written to pg_encoding\n> > gmake[3]: *** [pg_encoding] Error 1\n> > gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/pg_encoding'\n> > gmake[2]: *** [all] Error 2\n> > gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\n> > gmake[1]: *** [all] Error 2\n> > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> > gmake: *** [all] Error 2\n> > \n> > configure input: \n> > \n> > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > \t--enable-debug \\\n> > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > \t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 11:35:12 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/src/backend/utils/mb encnames.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am seeing no failure here with enable-multibyte and enable-locale.\n> Can you update cvs, do a make clean, and try again.\n\npg_encoding builds okay here.\n\nI think Marc said something about having recently changed the anon-CVS\nserver to be a mirror of the master CVS, rather than the same server.\nThis would mean that Larry might not be looking at the same sources\nyou are. Maybe the mirror update interval needs to be tightened.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 13:01:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUGS] pgsql/src/backend/utils/mb encnames.c " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am seeing no failure here with enable-multibyte and enable-locale.\n> > Can you update cvs, do a make clean, and try again.\n> \n> pg_encoding builds okay here.\n> \n> I think Marc said something about having recently changed the anon-CVS\n> server to be a mirror of the master CVS, rather than the same server.\n> This would mean that Larry might not be looking at the same sources\n> you are. Maybe the mirror update interval needs to be tightened.\n\nI am on the phone with him now. I think the problem is that he is\nlinking pg_encoding binary against an old libpq. 
He is researching why\nthis is happening.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 13:02:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUGS] pgsql/src/backend/utils/mb encnames.c" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am seeing no failure here with enable-multibyte and enable-locale.\n> > Can you update cvs, do a make clean, and try again.\n> \n> pg_encoding builds okay here.\n> \n> I think Marc said something about having recently changed the anon-CVS\n> server to be a mirror of the master CVS, rather than the same server.\n> This would mean that Larry might not be looking at the same sources\n> you are. Maybe the mirror update interval needs to be tightened.\n\nOK, looks like an OS bug. In the compile of pg_encoding:\n\ngcc -O2 -pipe -m486 -Wall -Wmissing-prototypes -Wmissing-declarations -g\n-Wall -O1 -Wmissing-prototypes -Wmissing-declarations\n-I../../../src/include -I/usr/local/include/readline\n-I/usr/contrib/include -c -o pg_encoding.o pg_encoding.c\ngcc -O2 -pipe -m486 -Wall -Wmissing-prototypes -Wmissing-declarations -g\n-Wall -O1 -Wmissing-prototypes -Wmissing-declarations pg_encoding.o\n-L../../../src/interfaces/libpq -lpq -L/usr/local/lib -L/usr/contrib/lib\n-Wl,-rpath,/usr/local/pgsql/lib -g -Wall -O1 -Wmissing-prototypes\n-Wmissing-declarations -lz -lcompat -lipc -ldl -lm -lutil -lreadline\n-ltermcap -o pg_encoding\n\nThe line:\n\n\t-L../../../src/interfaces/libpq -lpq\n\ndoes not seem to search for libpq in the -L first, and probably checks\nLD_RUN_PATH or something like that. No idea but it seems only his OS is\naffected. 
Installing a new libpq in his install directory fixed it.\n\nOne possible cause would be to use a symlink to get to pgsql/src. In\nthat case, ../../.. puts you in the symlink directory and not to the top\nof the cvs tree. That is not an issue for him, but a possible cause of\nfailure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 13:11:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUGS] pgsql/src/backend/utils/mb encnames.c" } ]
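The symlink pitfall described in the message above can be seen directly from a shell: entering a tree through a symlink makes the logical working directory and the physical one diverge, so relative paths like `../../..` may not land where the Makefile expects. This is an illustrative sketch only; the directory names are invented and do not correspond to anyone's actual checkout.

```shell
# Demonstrate logical vs. physical working directory under a symlink.
# "real" stands in for the actual source tree; "link" for a symlink to it.
tmp=$(mktemp -d)
mkdir -p "$tmp/real/src/bin/tool"
ln -s "$tmp/real" "$tmp/link"

cd "$tmp/link/src/bin/tool"
echo "logical:  $(pwd -L)"   # the path as typed, through the symlink
echo "physical: $(pwd -P)"   # the path after resolving the symlink

cd /
rm -rf "$tmp"
```

A build step that resolves `../../../src/interfaces/libpq` against one of these paths while the linker or loader resolves it against the other can silently pick up a different (here, stale installed) library; `pwd -P`, or simply avoiding symlinked entry points into the tree, sidesteps that.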
[ { "msg_contents": "Marc, can we get the proper autoconf on postgresql.org so I can run\nautoconf. I have autoconf here but am getting errors running it:\n\n#$ autoconf\nconfigure.in:139: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n`divert'\nconfigure.in:146: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n`divert'\nconfigure.in:149: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n`divert'\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 13:54:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "autoconf on server" }, { "msg_contents": "Bruce Momjian writes:\n\n> Marc, can we get the proper autoconf on postgresql.org so I can run\n> autoconf. I have autoconf here but am getting errors running it:\n>\n> #$ autoconf\n> configure.in:139: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> `divert'\n> configure.in:146: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> `divert'\n> configure.in:149: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> `divert'\n\nYou need to use Autoconf version 2.13.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 7 Sep 2001 20:47:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: autoconf on server" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Marc, can we get the proper autoconf on postgresql.org so I can run\n> > autoconf. 
I have autoconf here but am getting errors running it:\n> >\n> > #$ autoconf\n> > configure.in:139: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> > `divert'\n> > configure.in:146: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> > `divert'\n> > configure.in:149: /usr/contrib/bin/gm4: Non-numeric argument to built-in\n> > `divert'\n> \n> You need to use Autoconf version 2.13.\n\nThanks. Works great now. Tom, no need to apply that patch I sent for\nSCM. I can do my own autoconf now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 14:48:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: autoconf on server" } ]
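The version requirement Peter points out can be checked mechanically before regenerating configure. A hedged sketch follows; the banner string is a typical `Autoconf version 2.13` banner used as sample input, not output captured from the machines in this thread.

```shell
# Pull the first X.Y version number out of an `autoconf --version` banner
# and compare it to the 2.13 that this era's configure.in expects.
extract_ac_version() {
    printf '%s\n' "$1" | sed -n '1s/.*[^0-9]\([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p'
}

banner="Autoconf version 2.13"      # illustrative banner text
ver=$(extract_ac_version "$banner")

if [ "$ver" = "2.13" ]; then
    echo "autoconf $ver ok"
else
    echo "need Autoconf 2.13, found ${ver:-unknown}" >&2
fi
```

Running the wrong version shows up exactly as in Bruce's report: the 2.13-era macros hand non-numeric arguments to m4's `divert`, so a quick version guard in a build wrapper saves the confusing gm4 errors.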
[ { "msg_contents": "I finally got all the way through a compile set:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--enable-debug \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale \nand when I try to connect to an existing DB, loaded from a pg_dump\nfrom the previous 7.2devel sources, I get:\nTRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n314)\n!(ClientEncoding) (0) [No such file or directory]\nDEBUG: server process (pid 3077) was terminated by signal 6\nDEBUG: terminating any other active server processes\nDEBUG: all server processes terminated; reinitializing shared memory\nand semaphores\nDEBUG: database system was interrupted at 2001-09-07 21:00:33 CDT\nDEBUG: checkpoint record is at 0/2922408\nDEBUG: redo record is at 0/2922408; undo record is at 0/0; shutdown\nTRUE\nDEBUG: next transaction id: 824; next oid: 371237\nDEBUG: database system was not properly shut down; automatic recovery\nin progress\nDEBUG: ReadRecord: record with zero length at 0/2922448\nDEBUG: redo is not required\nDEBUG: database system is ready\n\n\n\nTHIS IS UNACCEPTABLE. \n\nHow do I get out of it? \n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 21:06:18 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "CURRENT CVS: MULTIBYTE: CANT CONNECT...." 
}, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010907 21:06]:\n> I finally got all the way through a compile set:\n> \n> CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> \t--enable-debug \\\n> \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> \t--with-tkconfig=/usr/local/lib --enable-locale \n> and when I try to connect to an existing DB, loaded from a pg_dump\n> from the previous 7.2devel sources, I get:\n> TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> 314)\n> !(ClientEncoding) (0) [No such file or directory]\n> DEBUG: server process (pid 3077) was terminated by signal 6\n> DEBUG: terminating any other active server processes\n> DEBUG: all server processes terminated; reinitializing shared memory\n> and semaphores\n> DEBUG: database system was interrupted at 2001-09-07 21:00:33 CDT\n> DEBUG: checkpoint record is at 0/2922408\n> DEBUG: redo record is at 0/2922408; undo record is at 0/0; shutdown\n> TRUE\n> DEBUG: next transaction id: 824; next oid: 371237\n> DEBUG: database system was not properly shut down; automatic recovery\n> in progress\n> DEBUG: ReadRecord: record with zero length at 0/2922448\n> DEBUG: redo is not required\n> DEBUG: database system is ready\n> \n> \n> \n> THIS IS UNACCEPTABLE. \n> \n> How do I get out of it? \n> \n> LER\nThe following patch fixes it:\n\n\nIndex: mbutils.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/mb/mbutils.c,v\nretrieving revision 1.20\ndiff -c -r1.20 mbutils.c\n*** mbutils.c\t2001/09/06 04:57:29\t1.20\n--- mbutils.c\t2001/09/08 02:11:55\n***************\n*** 21,27 ****\n *\n * Karel Zak (Aug 2001)\n */\n! 
static pg_enc2name\t*ClientEncoding = NULL;\n static pg_enc2name\t*DatabaseEncoding = &pg_enc2name_tbl[ PG_SQL_ASCII ];\n \n static void\t(*client_to_mic) ();\t/* something to MIC */\n--- 21,27 ----\n *\n * Karel Zak (Aug 2001)\n */\n! static pg_enc2name\t*ClientEncoding = &pg_enc2name_tbl[ PG_SQL_ASCII ];\n static pg_enc2name\t*DatabaseEncoding = &pg_enc2name_tbl[ PG_SQL_ASCII ];\n \n static void\t(*client_to_mic) ();\t/* something to MIC */\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 7 Sep 2001 21:12:18 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "On Fri, Sep 07, 2001 at 09:06:18PM -0500, Larry Rosenman wrote:\n> I finally got all the way through a compile set:\n> \n> CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> \t--enable-debug \\\n> \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> \t--with-tkconfig=/usr/local/lib --enable-locale \n> and when I try to connect to an existing DB, loaded from a pg_dump\n> from the previous 7.2devel sources, I get:\n> TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> 314)\n> !(ClientEncoding) (0) [No such file or directory]\n\n Interesting. I don't know why, but someting don't call\npg_set_client_encoding() before usage encoding routines (maybe\nlibpq don't set client encoding if it's default SQL_ASCII, but\nI'm almost sure that I check this case).\n\n A simple and robus solution is in the begin of mbutils.c set default\nClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\nyou change it? It's one line change. 
Again thanks.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Sat, 8 Sep 2001 10:29:39 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "> On Sat, Sep 08, 2001 at 10:29:38AM +0200, Karel Zak wrote:\n> > On Fri, Sep 07, 2001 at 09:06:18PM -0500, Larry Rosenman wrote:\n> > > I finally got all the way through a compile set:\n> > > \n> > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > \t--enable-debug \\\n> > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > from the previous 7.2devel sources, I get:\n> > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > 314)\n> > > !(ClientEncoding) (0) [No such file or directory]\n> > \n> > Interesting. I don't know why, but someting don't call\n> > pg_set_client_encoding() before usage encoding routines (maybe\n> > libpq don't set client encoding if it's default SQL_ASCII, but\n> > I'm almost sure that I check this case).\n> > \n> > A simple and robus solution is in the begin of mbutils.c set default\n> > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > you change it? It's one line change. Again thanks.\n\n Forget it! A default client encoding must be set by actual database encoding... 
\nPlease apply the small attached patch that solve it better.\n\n I check and test it with attached patch and it works correct:\n\ntest=# SHOW CLIENT_ENCODING;\nNOTICE: Current client encoding is SQL_ASCII\nSHOW VARIABLE\ntest=# SHOW SERVER_ENCODING;\nNOTICE: Current server encoding is SQL_ASCII\nSHOW VARIABLE\ntest=# CREATE DATABASE l2 WITH ENCODING='ISO-8859-2';\nCREATE DATABASE\ntest=# \\c l2\nYou are now connected to database l2.\nl2=# \\l\n List of databases\n Name | Owner | Encoding\n-----------+----------+-----------\n l2 | zakkr | LATIN2\n template0 | postgres | SQL_ASCII\n template1 | postgres | SQL_ASCII\n test | postgres | SQL_ASCII\n(4 rows)\n\nl2=# SHOW SERVER_ENCODING;\nNOTICE: Current server encoding is LATIN2\nSHOW VARIABLE\nl2=# SHOW CLIENT_ENCODING;\nNOTICE: Current client encoding is LATIN2\nSHOW VARIABLE\nl2=#\n\n Larry, wait when Bruce apply this small change and try previous\nexamples.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz", "msg_date": "Sat, 8 Sep 2001 12:50:47 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "\nPatch applied. 
Thanks.\n\n\n> > On Sat, Sep 08, 2001 at 10:29:38AM +0200, Karel Zak wrote:\n> > > On Fri, Sep 07, 2001 at 09:06:18PM -0500, Larry Rosenman wrote:\n> > > > I finally got all the way through a compile set:\n> > > > \n> > > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > > \t--enable-debug \\\n> > > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > > from the previous 7.2devel sources, I get:\n> > > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > > 314)\n> > > > !(ClientEncoding) (0) [No such file or directory]\n> > > \n> > > Interesting. I don't know why, but someting don't call\n> > > pg_set_client_encoding() before usage encoding routines (maybe\n> > > libpq don't set client encoding if it's default SQL_ASCII, but\n> > > I'm almost sure that I check this case).\n> > > \n> > > A simple and robus solution is in the begin of mbutils.c set default\n> > > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > > you change it? It's one line change. Again thanks.\n> \n> Forget it! A default client encoding must be set by actual database encoding... 
\n> Please apply the small attached patch that solve it better.\n> \n> I check and test it with attached patch and it works correct:\n> \n> test=# SHOW CLIENT_ENCODING;\n> NOTICE: Current client encoding is SQL_ASCII\n> SHOW VARIABLE\n> test=# SHOW SERVER_ENCODING;\n> NOTICE: Current server encoding is SQL_ASCII\n> SHOW VARIABLE\n> test=# CREATE DATABASE l2 WITH ENCODING='ISO-8859-2';\n> CREATE DATABASE\n> test=# \\c l2\n> You are now connected to database l2.\n> l2=# \\l\n> List of databases\n> Name | Owner | Encoding\n> -----------+----------+-----------\n> l2 | zakkr | LATIN2\n> template0 | postgres | SQL_ASCII\n> template1 | postgres | SQL_ASCII\n> test | postgres | SQL_ASCII\n> (4 rows)\n> \n> l2=# SHOW SERVER_ENCODING;\n> NOTICE: Current server encoding is LATIN2\n> SHOW VARIABLE\n> l2=# SHOW CLIENT_ENCODING;\n> NOTICE: Current client encoding is LATIN2\n> SHOW VARIABLE\n> l2=#\n> \n> Larry, wait when Bruce apply this small change and try previous\n> examples.\n> \n> \t\tKarel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Sep 2001 10:30:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CURRENT CVS: MULTIBYTE: CANT CONNECT...." 
}, { "msg_contents": "> > > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > > \t--enable-debug \\\n> > > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > > from the previous 7.2devel sources, I get:\n> > > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > > 314)\n> > > > !(ClientEncoding) (0) [No such file or directory]\n> > > \n> > > Interesting. I don't know why, but someting don't call\n> > > pg_set_client_encoding() before usage encoding routines (maybe\n> > > libpq don't set client encoding if it's default SQL_ASCII, but\n> > > I'm almost sure that I check this case).\n> > > \n> > > A simple and robus solution is in the begin of mbutils.c set default\n> > > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > > you change it? It's one line change. Again thanks.\n\nKarel,\n\nThe bug Larry reported seems for such a case of connecting non\nexistent database. The backend tries to send the error message to the\nfrontend using pg_server_to_client WITHOUT getting an encoding info\nfrom the database. To fix this Larry's patch or you stat in the\nprevious mail are sufficient. I will commit the fix.\n\n> Forget it! A default client encoding must be set by actual database encoding... \n\nWhy? set_default_client_encoding does the job anyway.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 08 Sep 2001 23:51:24 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "\nTatsuo, I applied this patch. 
Please fix as needed.\n\n\n> > > > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > > > \t--enable-debug \\\n> > > > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > > > from the previous 7.2devel sources, I get:\n> > > > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > > > 314)\n> > > > > !(ClientEncoding) (0) [No such file or directory]\n> > > > \n> > > > Interesting. I don't know why, but someting don't call\n> > > > pg_set_client_encoding() before usage encoding routines (maybe\n> > > > libpq don't set client encoding if it's default SQL_ASCII, but\n> > > > I'm almost sure that I check this case).\n> > > > \n> > > > A simple and robus solution is in the begin of mbutils.c set default\n> > > > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > > > you change it? It's one line change. Again thanks.\n> \n> Karel,\n> \n> The bug Larry reported seems for such a case of connecting non\n> existent database. The backend tries to send the error message to the\n> frontend using pg_server_to_client WITHOUT getting an encoding info\n> from the database. To fix this Larry's patch or you stat in the\n> previous mail are sufficient. I will commit the fix.\n> \n> > Forget it! A default client encoding must be set by actual database encoding... \n> \n> Why? set_default_client_encoding does the job anyway.\n> --\n> Tatsuo Ishii\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Sep 2001 11:07:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [010908 10:02]:\n> > > > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > > > \t--enable-debug \\\n> > > > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > > > from the previous 7.2devel sources, I get:\n> > > > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > > > 314)\n> > > > > !(ClientEncoding) (0) [No such file or directory]\n> > > > \n> > > > Interesting. I don't know why, but someting don't call\n> > > > pg_set_client_encoding() before usage encoding routines (maybe\n> > > > libpq don't set client encoding if it's default SQL_ASCII, but\n> > > > I'm almost sure that I check this case).\n> > > > \n> > > > A simple and robus solution is in the begin of mbutils.c set default\n> > > > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > > > you change it? It's one line change. Again thanks.\n> \n> Karel,\n> \n> The bug Larry reported seems for such a case of connecting non\n> existent database. The backend tries to send the error message to the\n> frontend using pg_server_to_client WITHOUT getting an encoding info\n> from the database. To fix this Larry's patch or you stat in the\n> previous mail are sufficient. I will commit the fix.\nI use password authentication, and that seems to be what tripped it. \n\nThe applied patch works for me. \n\nThanks, Gentlemen.\n\nLER\n\n> \n> > Forget it! 
A default client encoding must be set by actual database encoding... \n> \n> Why? set_default_client_encoding does the job anyway.\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 8 Sep 2001 10:13:49 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "> Tatsuo, I applied this patch. Please fix as needed.\n\nSure. I have come back from the business trip. I will take care of\nthis.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 09 Sep 2001 08:25:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "> > Karel,\n> > \n> > The bug Larry reported seems for such a case of connecting non\n> > existent database. The backend tries to send the error message to the\n> > frontend using pg_server_to_client WITHOUT getting an encoding info\n> > from the database. To fix this Larry's patch or you stat in the\n> > previous mail are sufficient. I will commit the fix.\n> I use password authentication, and that seems to be what tripped it. \n\nOh I see.\n\n> The applied patch works for me. \n> \n> Thanks, Gentlemen.\n\nYou are welcome.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 09 Sep 2001 08:27:39 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." 
}, { "msg_contents": "On Sat, Sep 08, 2001 at 11:51:24PM +0900, Tatsuo Ishii wrote:\n> > > > > CC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n> > > > > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > > > > \t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n> > > > > \t--enable-debug \\\n> > > > > \t--with-tcl --with-tclconfig=/usr/local/lib \\\n> > > > > \t--with-tkconfig=/usr/local/lib --enable-locale \n> > > > > and when I try to connect to an existing DB, loaded from a pg_dump\n> > > > > from the previous 7.2devel sources, I get:\n> > > > > TRAP: Failed Assertion(\"!(ClientEncoding):\", File: \"mbutils.c\", Line:\n> > > > > 314)\n> > > > > !(ClientEncoding) (0) [No such file or directory]\n> > > > \n> > > > Interesting. I don't know why, but someting don't call\n> > > > pg_set_client_encoding() before usage encoding routines (maybe\n> > > > libpq don't set client encoding if it's default SQL_ASCII, but\n> > > > I'm almost sure that I check this case).\n> > > > \n> > > > A simple and robus solution is in the begin of mbutils.c set default\n> > > > ClientEncoding to SQL_ASCII (like default DatabaseEncoding). Bruce, can\n> > > > you change it? It's one line change. Again thanks.\n> \n> Karel,\n> \n> The bug Larry reported seems for such a case of connecting non\n> existent database. The backend tries to send the error message to the\n> frontend using pg_server_to_client WITHOUT getting an encoding info\n> from the database. To fix this Larry's patch or you stat in the\n> previous mail are sufficient. I will commit the fix.\n> \n> > Forget it! A default client encoding must be set by actual database encoding... \n> \n> Why? 
set_default_client_encoding does the job anyway.\n\n Here can't be used static default encoding as for DatabaseEncoding, because\ntypical code is\n\n if (!ClientEncoding)\n\t/* ...means \"if user doesn't set itself client \n\t * encoding by SET command\"\n\t */ \n\tClientEncoding = DatabaseEncoding;\n\n and if you set anywhere before this as default \nClientEncoding = &pg_enc2name_tbl[ PG_SQL_ASCII ]; the ClientEncoding will\nalways TRUE and always SQL_ASCII and the only way is change it by 'SET\nCLIENT_ENCODING' command. But we don't want it, wanted is after connection \nset as default ClientEncoding same encoding as actual DabaseEncoding. \n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Sep 2001 08:27:52 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "> > Why? set_default_client_encoding does the job anyway.\n> \n> Here can't be used static default encoding as for DatabaseEncoding, because\n> typical code is\n> \n> if (!ClientEncoding)\n> \t/* ...means \"if user doesn't set itself client \n> \t * encoding by SET command\"\n> \t */ \n> \tClientEncoding = DatabaseEncoding;\n> \n> and if you set anywhere before this as default \n> ClientEncoding = &pg_enc2name_tbl[ PG_SQL_ASCII ]; the ClientEncoding will\n> always TRUE and always SQL_ASCII and the only way is change it by 'SET\n> CLIENT_ENCODING' command. But we don't want it, wanted is after connection \n> set as default ClientEncoding same encoding as actual DabaseEncoding. \n\nDon't worry about that. Before anything user could do, postgres's\nstart up procedure sets the appropreate encoding to ClientEncoding\nvariable.\n\nAlso please note that \"wanted is after connection set as default\nClientEncoding same encoding as actual DabaseEncoding\" is not\ncorrent. 
The ClientEncoding might be set differently if\nPGCLIENTENCODING is set before postmaster starts up.\n--\nTatsuo Ishii\n\n", "msg_date": "Mon, 10 Sep 2001 15:50:46 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "On Mon, Sep 10, 2001 at 03:50:46PM +0900, Tatsuo Ishii wrote:\n\n> Don't worry about that. Before anything user could do, postgres's\n> start up procedure sets the appropreate encoding to ClientEncoding\n> variable.\n\n Larry's backend knows method how call conversion routines, without\nset ClientEncoding:-) IMHO with my patch is always sure that backend\nnever crash for this.\n \n> Also please note that \"wanted is after connection set as default\n> ClientEncoding same encoding as actual DabaseEncoding\" is not\n> corrent. The ClientEncoding might be set differently if\n> PGCLIENTENCODING is set before postmaster starts up.\n\n You are right. I was mean \"if PGCLIENTENCODING is not set\".\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Sep 2001 09:13:00 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." }, { "msg_contents": "> > Don't worry about that. 
Before anything user could do, postgres's\n> > start up procedure sets the appropreate encoding to ClientEncoding\n> > variable.\n> \n> Larry's backend knows method how call conversion routines, without\n> set ClientEncoding:-) IMHO with my patch is always sure that backend\n> never crash for this.\n\nLooks like you are trying to protect yourself from the internal logic\nbugs that should be found by Asserts or carefull debugging IMHO.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 17:35:36 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] CURRENT CVS: MULTIBYTE: CANT CONNECT...." } ]
[ { "msg_contents": "\n> A solution, could be to query for the existance of the PK, just before the\n> insertion. But there is a little span between the test and the\n> insertion, where another insertion from another transaction could void\n> the existance test. Any clever ideas on how to solve this? Using\n> triggers maybe? Other solutions?\n>\n\nAll you need to do is use a sequence. If you set the sequence to be the\nprimary key with a default value of nextval(seq_name) then you will never\nhave a collision. Alternatly if you need to know that number before you\nstart inserting you can select next_val(seq_name) before you inser and use\nthat. By the way the datatype serial automates exactly what I described.\n\n", "msg_date": "Sat, 8 Sep 2001 11:21:58 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": true, "msg_subject": "Re: Abort state on duplicated PKey in transactions" }, { "msg_contents": "Hi dear people,\n\nNow I'm reposting this on hackers from general, sorry if no fully\nsuitable.\n\nWe are building a RAD tool (GeneXus) support, for PostgreSQL, as I have\nmentioned before a number of times.\n\nA problem which arose, is that within a transaction, if one inserts on a\ntable and the PK restriction is violated, the transaction aborts and\nleaves itself in abort state. One has to END the transaction and start\na new one. This is a problem, in large transactions, where lots of\nthings have been done to the database, and an insertion is to be done,\nwhich may yield an error just because the PK already existed. The whole\ntransaction should have to be redone if the insertion failed. A\nsolution, could be to query for the existance of the PK, just before the\ninsertion. But there is a little span between the test and the\ninsertion, where another insertion from another transaction could void\nthe existance test. Any clever ideas on how to solve this? Using\ntriggers maybe? 
Other solutions?\n\nI'm aware that savepoints and nested transactions will be implemented in\nfuture versions, but how to solve the problem before that starts\nworking?\n\nThanks\n\nRegards,\nHaroldo.\n", "msg_date": "Sat, 08 Sep 2001 12:20:28 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Abort state on duplicated PKey in transactions" }, { "msg_contents": "\n\n\"Matthew T. O'Connor\" wrote:\n> \n> > A solution, could be to query for the existance of the PK, just before the\n> > insertion. But there is a little span between the test and the\n> > insertion, where another insertion from another transaction could void\n> > the existance test. Any clever ideas on how to solve this? Using\n> > triggers maybe? Other solutions?\n> >\n> \n> All you need to do is use a sequence. If you set the sequence to be the\n> primary key with a default value of nextval(seq_name) then you will never\n> have a collision. Alternatly if you need to know that number before you\n> start inserting you can select next_val(seq_name) before you inser and use\n> that. By the way the datatype serial automates exactly what I described.\n\nYes, but there are situations where a sequenced PK isn't what is needed.\nImagine a DW app, where composed PKs such as (ClientNum, Year, Month,\nArticleNum) in a table which has ArticleQty as a secondary field are\nused, in order to consolidate detail record from other tables. There,\nthe processing cycle goes like checking for the existance of the PK, if\nit exists, add ArticleQtyDetail to ArticleQty, and update; and if it\ndoesn't exist, insert the record with ArticleQtyDetail as the starting\nvalue of ArticleQty. See it? Then, if between the \"select from\" and the\n\"insert into\", other process in the system (due to parallel processing\nfor instance) inserts a record with the same key, then the first\ntransaction would cancel, forcing redoing of all the processing. 
So,\nsort of atomicity of the check?update:insert operation is needed. How\ncan that be easily implemented using locks and triggers for example?\n\nRegards,\nHaroldo.\n", "msg_date": "Sat, 08 Sep 2001 14:22:32 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Abort state on duplicated PKey in transactions" }, { "msg_contents": "I had a similar issue.\n\nI needed to make sure I had a unique row- insert if not there, update if\nthere. \n\nSo I resorted to locking the whole table, then select, then insert/update.\n\nWhat Tom told me to do was to use lock table tablename in exclusive mode\nfor my case.\n\nThis blocks select for updates, but doesn't block selects.\n\nSo you must check with a select for update, then only do the insert if it's\nok.\n\nIf you don't check with a select for update it will not block, and bad\nthings could happen :).\n\nHowever I couldn't do a \"for update\" with an aggregate, so in my\ngeneralised \"putrow\" routine I can't use \"in exclusive mode\".\n\nI basically wanted to do a select count(*) from datable where whereclause\nfor update.\n\nIf the count was 0 then only insert, else if 1 update, else make some noise\n:).\n\nThe alternative is to actually fetch the rows which can be slower.\n\nRegards,\nLink.\n\n\nAt 12:20 PM 08-09-2001 -0500, Haroldo Stenger wrote:\n>transaction should have to be redone if the insertion failed. A\n>solution, could be to query for the existance of the PK, just before the\n>insertion. But there is a little span between the test and the\n>insertion, where another insertion from another transaction could void\n>the existance test. Any clever ideas on how to solve this? Using\n>triggers maybe? Other solutions?\n\n\n", "msg_date": "Mon, 10 Sep 2001 12:10:08 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Abort state on duplicated PKey in transactions" }, { "msg_contents": "Thanks. 
You saved me work by pointing me to the FOR UPDATE detail, and\nthe aggregate non-locking restriction. Can anyone comment on why this is\nso? Reposting in HACKERS was a good idea :)\n\nSide note: GeneXus (http://www.genexus.com) support, will be formally\nannounced in 10 days in an important international event\n(http://www.artech.com.uy/cgi-bin/webartech/HEvver01.exe?S,131,0,10).\nThis will leverage PostgreSQL in the business environments which GeneXus\ndeals with. Is anyone interested in receiving more information on what\nGeneXus is and does?\n\nRegards,\nHaroldo.\n\nLincoln Yeoh wrote:\n> \n> I had a similar issue.\n> \n> I needed to make sure I had a unique row- insert if not there, update if\n> there.\n> \n> So I resorted to locking the whole table, then select, then insert/update.\n> \n> What Tom told me to do was to use lock table tablename in exclusive mode\n> for my case.\n> \n> This blocks select for updates, but doesn't block selects.\n> \n> So you must check with a select for update, then only do the insert if it's\n> ok.\n> \n> If you don't check with a select for update it will not block, and bad\n> things could happen :).\n> \n> However I couldn't do a \"for update\" with an aggregate, so in my\n> generalised \"putrow\" routine I can't use \"in exclusive mode\".\n> \n> I basically wanted to do a select count(*) from datable where whereclause\n> for update.\n> \n> If the count was 0 then only insert, else if 1 update, else make some noise\n> :).\n> \n> The alternative is to actually fetch the rows which can be slower.\n> \n> Regards,\n> Link.\n> \n> At 12:20 PM 08-09-2001 -0500, Haroldo Stenger wrote:\n> >transaction should have to be redone if the insertion failed. A\n> >solution, could be to query for the existance of the PK, just before the\n> >insertion. But there is a little span between the test and the\n> >insertion, where another insertion from another transaction could void\n> >the existance test. Any clever ideas on how to solve this? 
Using\n> >triggers maybe? Other solutions?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n", "msg_date": "Mon, 10 Sep 2001 09:28:39 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: Abort state on duplicated PKey in transactions" } ]
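The race described in this thread -- existence test, then insert, with another transaction slipping in between -- goes away if every writer takes a conflicting lock before the test. Below is a minimal sketch of that check-then-insert-or-update cycle. It uses Python's stdlib `sqlite3` purely as a stand-in (SQLite's `BEGIN IMMEDIATE` plays roughly the role that `LOCK TABLE ... IN EXCLUSIVE MODE` plus `SELECT ... FOR UPDATE` plays in the PostgreSQL pattern above), and the `consolidated` table and its column names are invented for the example:

```python
import sqlite3

def put_qty(conn, client, year, month, article, qty_detail):
    """Insert the row for (client, year, month, article), or add
    qty_detail to its qty if the row already exists, without racing
    other writers."""
    # Take the write lock up front: no other writer can insert the same
    # key between our existence test and our own insert/update.
    conn.execute("BEGIN IMMEDIATE")
    row = conn.execute(
        "SELECT qty FROM consolidated"
        " WHERE client=? AND year=? AND month=? AND article=?",
        (client, year, month, article)).fetchone()
    if row is None:
        conn.execute("INSERT INTO consolidated VALUES (?, ?, ?, ?, ?)",
                     (client, year, month, article, qty_detail))
    else:
        conn.execute(
            "UPDATE consolidated SET qty = qty + ?"
            " WHERE client=? AND year=? AND month=? AND article=?",
            (qty_detail, client, year, month, article))
    conn.commit()
```

Either branch commits without ever tripping the duplicate-key error, so the surrounding transaction is never forced into the abort state.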
[ { "msg_contents": "There seems to be a problem with the setup of the new anoncvs server.\nI can not run `cvs update' on subdirectories after having done a\ncheckout.\n\nI did this:\n cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\n\nNow I can do this:\n cd pgsql\n cvs update -l .\nbut I can not do this:\n cd pgsql\n cvs update\n\nWhen I try, I get this error:\n\ncannot create_adm_p /tmp/cvs-serv56140/ChangeLogs\nPermission denied\n\nThe process ID in the directory name changes each time I run it.\n\nUnfortunately, the CVS server does not report a precise enough error\nmessage to indicate exactly what it was trying to do when it got a\n``Permission denied'' error. Basically, though, it was trying to\ncreate a directory or a file on /tmp. What are the permissions on\n/tmp? How is it mounted? What version of CVS are you running?\n\nPlease let me know if you can not recreate this problem.\n\nIan\n", "msg_date": "8 Sep 2001 14:21:04 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Problem with new anoncvs server" } ]
[ { "msg_contents": "Hello everybody.\n\nI've just released the 0.3.0 version of DBBalancer.\nDBBalancer is a program that has several functions:\n\n- Provides connection pooling.\n- Provides load balancing over replicated databases and\n- Provides write replication. This feature is just experimental, needing \nstill quite a lot of testing.\n\nI'd like you to share your feelings about it with me. It would be usefult to \ntalk about if this is useful, if you consider the concept valid, and anything \nyou want.\n\nYou can answer in this list (I read it almost every day) or, if this is not \nappropiate, in the specific DBBalancer Sourceforce mailing list at \ndbbalancer-users@lists.sourceforge.net.\n\nThe project source is at http://www.sourceforge.net/projects/dbbalancer\n\nThese are the changes introduced in version 0.3.0:\n\n0.3.0:\n- New config file format, no longer XML.\n- Xerces requirement removed. Hope this makes compiling easier.\n- Documentation updated.\n- In \"write replication\" mode, start a transaction for each connection. \nRollback if a error is detected in any of the backends. This is configurable\n- Write replication enhancements.\n\n\nWARNING: Though some features are believed to work more safely than others, \nnone of them has passed extensive testings. So please don't use it with \nvaluable data.\n\n-- \n\n----------------------------------\nRegards from Spain. Daniel Varela\n----------------------------------\n\nIf you think education is expensive, try ignorance.\n -Derek Bok (Former Harvard President)\n", "msg_date": "Sun, 9 Sep 2001 04:01:19 +0200", "msg_from": "Daniel Varela Santoalla <dvs@arrakis.es>", "msg_from_op": true, "msg_subject": "DBBalancer 0.3.0: Connection pooling and load balancing for PostgreSQL" } ]
[ { "msg_contents": "Question:\n\nWhat has changed with the CVS repository lately? I notice that all of the\ncommit messages I've read lately on pgsql-committers seem to come from\nMarc Fournier. Has Marc just been committing all recent changes, or are\nall commit messages, regardless of committer, showing as from Marc?\n\nNeil\n\n-- \nNeil Padgett\nRed Hat Canada Ltd. E-Mail: npadgett@redhat.com\n2323 Yonge Street, Suite #300,\nToronto, ON M4P 2C9\n\n\n\n\n\n", "msg_date": "Sun, 9 Sep 2001 01:27:51 -0400 (EDT)", "msg_from": "Neil Padgett <npadgett@redhat.com>", "msg_from_op": true, "msg_subject": "CVS commit messages" }, { "msg_contents": "\nI have to look into the commit script to see about pulling out the proper\ncommitter ... cvs logs show the person who did the commit, but the email's\nare coming from me ...\n\nOn Sun, 9 Sep 2001, Neil Padgett wrote:\n\n> Question:\n>\n> What has changed with the CVS repository lately? I notice that all of the\n> commit messages I've read lately on pgsql-committers seem to come from\n> Marc Fournier. Has Marc just been committing all recent changes, or are\n> all commit messages, regardless of committer, showing as from Marc?\n>\n> Neil\n>\n> --\n> Neil Padgett\n> Red Hat Canada Ltd. E-Mail: npadgett@redhat.com\n> 2323 Yonge Street, Suite #300,\n> Toronto, ON M4P 2C9\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Sun, 9 Sep 2001 10:02:13 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS commit messages" }, { "msg_contents": "> Question:\n> \n> What has changed with the CVS repository lately? I notice that all of the\n> commit messages I've read lately on pgsql-committers seem to come from\n> Marc Fournier. 
Has Marc just been committing all recent changes, or are\n> all commit messages, regardless of committer, showing as from Marc?\n\nThere is some bug that puts Marc's name on everything. The CVS log\nmessages have the proper name but the email doesn't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Sep 2001 10:19:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS commit messages" } ]
[ { "msg_contents": "Somehow I cannot get ecpg anymore:\n\ncvs server: nothing known about pgsql-ecpg \n\nCVSROOT is :pserver:meskes@cvs.postgresql.org:/home/projects/pgsql/cvsroot.\n\nAny idea what I misconfigured?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 9 Sep 2001 11:09:26 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "CVS access" } ]
[ { "msg_contents": "All,\n\nI am working a some patches to the code and I noticed that \"pg_dump -C\ndatabase\" doesn't provide the database location information in the dump\nfile. Is this correct?\n\nThanks\nJim\n\n\nExample:\n\n datname | datdba | encoding | datistemplate | datallowconn |\ndatlastsysoid | datpath | idxpath \n-----------+--------+----------+---------------+--------------+---------------+---------+---------\n jb1 | 5433 | 0 | f | t | \n18540 | | PGIDX1\n template1 | 5433 | 0 | t | t | \n18540 | | \n template0 | 5433 | 0 | t | f | \n18540 | | \n jb2 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n jb3 | 5433 | 0 | f | t | \n18540 | PGDATA1 | \n 4051 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n\n\n(Please ignore the IDXPATH column for now as I am trying to add support\nfor INDEX locations as I am running out of room on my current system and\nI don't like the \"symlink your own tables/index files\" idea)\n\nand the output of pg_dump -C\n\n--\n-- Selected TOC Entries:\n--\n\\connect - pgtest\n--\n-- TOC Entry ID 1 (OID 0)\n--\n-- Name: jb2 Type: DATABASE Owner: pgtest\n--\n\nCreate Database \"jb2\";\n\n\\connect jb2 pgtest\n\n\n", "msg_date": "Sun, 9 Sep 2001 08:44:40 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "PG_DUMP -C option" } ]
[ { "msg_contents": "\nAll,\n\nI am working a some patches to the code and I noticed that \"pg_dump -C\ndatabase\" doesn't provide the database location information in the dump\nfile. Is this correct?\n\nThanks\nJim\n\n\nExample:\n\n datname | datdba | encoding | datistemplate | datallowconn |\ndatlastsysoid | datpath | idxpath \n-----------+--------+----------+---------------+--------------+---------------+---------+---------\n jb1 | 5433 | 0 | f | t | \n18540 | | PGIDX1\n template1 | 5433 | 0 | t | t | \n18540 | | \n template0 | 5433 | 0 | t | f | \n18540 | | \n jb2 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n jb3 | 5433 | 0 | f | t | \n18540 | PGDATA1 | \n 4051 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n\n\n(Please ignore the IDXPATH column for now as I am trying to add support\nfor INDEX locations as I am running out of room on my current system and\nI don't like the \"symlink your own tables/index files\" idea)\n\nand the output of pg_dump -C\n\n--\n-- Selected TOC Entries:\n--\n\\connect - pgtest\n--\n-- TOC Entry ID 1 (OID 0)\n--\n-- Name: jb2 Type: DATABASE Owner: pgtest\n--\n\nCreate Database \"jb2\";\n\n\\connect jb2 pgtest\n\n\n\n\n\n", "msg_date": "Sun, 9 Sep 2001 08:50:33 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "pg_dump -C option" }, { "msg_contents": "Jim Buttafuoco writes:\n\n> I am working a some patches to the code and I noticed that \"pg_dump -C\n> database\" doesn't provide the database location information in the dump\n> file. Is this correct?\n\nYour observation is correct, but the behaviour is not. Feel free to\nsend a patch.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 10 Sep 2001 19:55:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump -C option" } ]
[ { "msg_contents": "The JDBC driver's test suite currently fails on DatabaseMetaData\nmethods that provide information about NULLs and sort order.\nI've looked through the documentation, but couldn't find\nanything about it.\n\nThe JDBC driver returns different values, depending on the\nbackend version (<7.2 vs. >= 7.2), so this suggests something\nchanged recently. This is probably also the cause of the test\nsuite failure, since the test case has no conditional coding for\ndifferent backend versions. So presumably, the test suite needs\nto be updated, but I wanted to doublecheck the functionality on\nthis list.\n\nWhat we need to know is:\n- Do null values sort higher or lower than any other value in a\ndomain? Higher would mean that null values appear at the end in\nan ascending sort order.\n- Will null values appear at the start or the end regardless of\nthe sort order?\n\nCurrently the JDBC driver says:\n- Backend >= 7.2 sorts nulls higher than any other value in a\ndomain. In other words: ascending means nulls at the end,\ndescending means nulls at the start.\n- Backend < 7.2 puts nulls at the end regardless of sort order.\n\nCan someone confirm if this is correct? \n\nWould it be useful to add this information to the documentation,\ne.g. the documentation of ORDER BY in SELECT?\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sun, 09 Sep 2001 14:50:35 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "NULLs and sort order" }, { "msg_contents": "Rene Pijlman writes:\n\n> Currently the JDBC driver says:\n> - Backend >= 7.2 sorts nulls higher than any other value in a\n> domain. In other words: ascending means nulls at the end,\n> descending means nulls at the start.\n> - Backend < 7.2 puts nulls at the end regardless of sort order.\n\nThat is correct.\n\n> Would it be useful to add this information to the documentation,\n> e.g. 
the documentation of ORDER BY in SELECT?\n\nMost likely.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 9 Sep 2001 15:25:17 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [JDBC] NULLs and sort order" }, { "msg_contents": "On Sun, 9 Sep 2001 15:25:17 +0200 (CEST), you wrote:\n>That is correct.\n\nThanks.\n\n>> Would it be useful to add this information to the documentation,\n>> e.g. the documentation of ORDER BY in SELECT?\n>\n>Most likely.\n\nI'll post it on the docs list.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sun, 09 Sep 2001 15:30:17 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Re: [JDBC] NULLs and sort order" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Rene Pijlman writes:\n>> Currently the JDBC driver says:\n>> - Backend >= 7.2 sorts nulls higher than any other value in a\n>> domain. In other words: ascending means nulls at the end,\n>> descending means nulls at the start.\n>> - Backend < 7.2 puts nulls at the end regardless of sort order.\n\n> That is correct.\n\nActually it's more complex than that. 7.2 will provide the above-stated\nconsistent ordering of nulls relative to non-nulls. The problem with\nearlier versions is that the ordering of nulls depends on what plan the\noptimizer chooses for the query: sorting based on a scan of a btree\nindex would work the same as is described for 7.2, whereas sorting\nbased on an explicit sort step would put the nulls at the end (for\neither ASC or DESC sort). So there was *no* consistent behavior at all\nin prior versions. 
The fix that's been applied for 7.2 is to make\nexplicit sorts act the same as indexscans already did.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 01:12:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [JDBC] NULLs and sort order " } ]
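Tom's summary gives a simple client-side rule for 7.2: treat NULL as comparing higher than every non-null value, so ASC puts nulls last and DESC puts nulls first. When a result set has to be re-sorted in application code (for instance to emulate the backend's ordering in a test suite), that rule is one sort key away. A small illustrative sketch -- plain Python, nothing driver-specific, and the function name is invented for the example:

```python
def sort_like_72(values, descending=False):
    """Order a column the way a 7.2 backend does: NULL (None here)
    compares higher than any non-null value, so ascending puts nulls
    last and descending puts nulls first."""
    # Key on (is_null, value): every real value keys as (False, v) and
    # sorts before any (True, _) null ascending; reversing the sorted
    # list then gives the DESC order with nulls leading.
    ordered = sorted(values, key=lambda v: (v is None, 0 if v is None else v))
    return ordered[::-1] if descending else ordered
```

For example, `sort_like_72([3, None, 1])` yields `[1, 3, None]`, and the descending form yields `[None, 3, 1]` -- the consistent behavior described for 7.2, for both sort directions.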
[ { "msg_contents": "\nAll,\n\nI am working a some patches to the code and I noticed that \"pg_dump -C\ndatabase\" doesn't provide the database location information in the dump\nfile. Is this correct?\n\nThanks\nJim\n\n\nExample:\n\n datname | datdba | encoding | datistemplate | datallowconn |\ndatlastsysoid | datpath | idxpath \n-----------+--------+----------+---------------+--------------+---------------+---------+---------\n jb1 | 5433 | 0 | f | t | \n18540 | | PGIDX1\n template1 | 5433 | 0 | t | t | \n18540 | | \n template0 | 5433 | 0 | t | f | \n18540 | | \n jb2 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n jb3 | 5433 | 0 | f | t | \n18540 | PGDATA1 | \n 4051 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n\n\n(Please ignore the IDXPATH column for now as I am trying to add support\nfor INDEX locations as I am running out of room on my current system and\nI don't like the \"symlink your own tables/index files\" idea)\n\nand the output of pg_dump -C\n\n--\n-- Selected TOC Entries:\n--\n\\connect - pgtest\n--\n-- TOC Entry ID 1 (OID 0)\n--\n-- Name: jb2 Type: DATABASE Owner: pgtest\n--\n\nCreate Database \"jb2\"; <<== This is missing the with location... stuff\n\n\\connect jb2 pgtest\n\n\n\n\n\n", "msg_date": "Sun, 9 Sep 2001 13:49:10 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "I'm working on a problem in the JDBC driver that's related to\ntimezones.\n\nHow does PostgreSQL handle timezones in the FE/BE protocol\nexactly?\n\nWhen a client sends a time or timestamp value to the server via\nthe FE/BE protocol, should that be:\n1) a value in the client's timezone?\n2) a value in the server's timezone?\n3) a value in a common frame of reference (GMT/UTC)?\n4) any value with an explicit timezone?\n\nAnd how should a time or timestamp value returned by the server\nbe interpreted in the client interface?\n\nAnd how does this all depend on the timezone setting of the\nserver?\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sun, 09 Sep 2001 20:28:15 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Timezones and time/timestamp values in FE/BE protocol" }, { "msg_contents": "Rene,\n\nSince the FE/BE protocol deals only with string representations of \nvalues, the protocol doesn't have too much to do with it directly. It \nis what happens on the client and server sides that is important here.\n\nUnder the covers the server stores all timestamp values as GMT. When a \nselect statement queries one the value is converted to the session \ntimezone and formated to a string that includes the timezone offset used \n (i.e. 2001-09-09 14:24:35.12-08 which the database had stored as \n2001-09-09 22:24:35.12 GMT). The client then needs to handle this \naccordingly and convert to a different timezone if desired.\n\nOn an insert or update the client and server are essentially doing the \nopposite. The client converts the timestamp value to a string and then \nthe server converts that string to GMT for storage. If the client does \nnot pass the timezone offset (i.e. 2001-09-09 14:24:35.12 instead of \n2001-09-09 14:24:35.12-08) then the server needs to guess the timezone \nand will use the session timezone.\n\nNow when it comes to the JDBC code this is what happens. 
(Since you \ndidn't state what specific problem you where having I will give a \ngeneral overview).\n\nWhen the JDBC driver connects to the server it does one thing timestamp \nrelated. It does a 'set datestyle to \"ISO\"' so that the client and the \nserver both know how the strings are formated.\n\nI don't know what the session timezone defaults to, but it really \nshouldn't matter since the server always sends the timezone offset as \npart of the string representation of the timestamp value. Therefore the \nJDBC client can always figure out how to convert the string to a Java \nTimestamp object.\n\nOn the insert/update opperation the JDBC client converts the Timestamp \nobject to GMT (see the logic in setTimestamp() of PreparedStatement) and \nthen builds the string to send to the server as the formated date/time \nplus the timezone offset used (GMT in this case). Thus it does \nsomething that looks like: \"2001-09-09 14:24:35.12\" + \"+00\". When the \nserver gets this string it has all the information it needs to convert \nto GMT for storage (it actually doesn't need to do anything since the \nvalue is clearly already in GMT).\n\nI hope this helps to answer your questions. 
If you could post a bit \nmore about the issue you are having I might be able to be more specific.\n\nthanks,\n--Barry\n\n\n\nRene Pijlman wrote:\n> I'm working on a problem in the JDBC driver that's related to\n> timezones.\n> \n> How does PostgreSQL handle timezones in the FE/BE protocol\n> exactly?\n> \n> When a client sends a time or timestamp value to the server via\n> the FE/BE protocol, should that be:\n> 1) a value in the client's timezone?\n> 2) a value in the server's timezone?\n> 3) a value in a common frame of reference (GMT/UTC)?\n> 4) any value with an explicit timezone?\n> \n> And how should a time or timestamp value returned by the server\n> be interpreted in the client interface?\n> \n> And how does this all depend on the timezone setting of the\n> server?\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n", "msg_date": "Sun, 09 Sep 2001 13:38:52 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Timezones and time/timestamp values in FE/BE protocol" }, { "msg_contents": "On Sun, 09 Sep 2001 13:38:52 -0700, you wrote:\n[...]\nThanks for your explanation. This helps a lot.\n\n>If you could post a bit more about the issue you are having \n>I might be able to be more specific.\n\nI'm looking at the 4 remaining failures of our own JDBC test\nsuite. They all have to do with timestamps and times, and they\nare all caused by a 1 hour shift between the expected value and\nthe actual value. I run both the backend and the JVM on the same\nLinux test server. \n\nIts located in Amsterdam, The Netherlands, Central European\nDaylight Savings Time (CETDST, UTC+2, GMT+2). 
I always thought I\nwas in CET=GMT+1, but now the offset is 2, because of daylight\nsaving time (whoever invented that should be #!$^&). Perhaps I\nshould go live in Greenwich, they don't seem to have daylight\nsaving time overthere.\n\nIn psql I see: \n show timezone;\n NOTICE: Time zone is unset\n\nHere is some detailed information about the failures. I'm\nrefering to line numbers in 7.2 current CVS:\nTimeTest.java revision 1.1\nTimestampTest.java revision 1.2\n\n1) TimeTest.java:89\n\ngetHours(t) expected 1, actual 0\nt.toString() returns the expected \"01:02:03\", but this is\nbecause java.sql.Time.toString() converts to the JVM's timezone.\n\n2) TimeTest.java:96\n\ngetHours(t) expected 23, actual 0\nt.toString returns \"00:59:59\"\n\n3) TimestampTest.java:115\n\nExpected: getTimestamp(1970,6,2,8,13,0) returns \"1970-06-02\n08:13:00.0\"\nActual: t.toString() returns \"1970-06-02 09:13:00.0\"\n\n4) TimestampTest.java:115 (second time around)\n\nExpected: getTimestamp(1970,6,2,8,13,0) returns \"1970-06-02\n08:13:00.0\"\nActual: t.toString() returns \"1970-06-02 07:13:00.0\"\n\nMy first impression is that in all cases a timezone shift is\napplied in only one direction (store vs. retrieve). The cause\nmight also be a problem with daylight saving time, there are\nsome comments about that in TimestampTest.java.\n\nUp till now I've managed without a graphical debugger, but to\nget a good feel for what's happening between the test code and\nthe wire I think it'll be easier to setup JBuilder with the\ndriver and step through the code.\n\nBut now its almost bedtime in my timezone, and you never know\nwith these mailing lists. 
Sometimes the solution is in your\ninbox when you wake up :-)\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sun, 09 Sep 2001 23:43:36 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Re: Timezones and time/timestamp values in FE/BE protocol" }, { "msg_contents": "On Sun, 09 Sep 2001 23:43:36 +0200, I wrote:\n>I'm looking at the 4 remaining failures of our own JDBC test\n>suite. They all have to do with timestamps and times\n\nFYI, Liam mailed me that he will soon post a patch for this.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Wed, 12 Sep 2001 23:41:39 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Re: Timezones and time/timestamp values in FE/BE protocol" } ]
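Barry's conversion example in this thread -- the backend storing `2001-09-09 14:24:35-08` as `2001-09-09 22:24:35 GMT` -- is fixed-offset arithmetic that the standard library can reproduce directly. Here is a sketch of that conversion (an illustration only, not the actual driver or backend code; the hand parsing of the two-digit `-08` suffix is just to keep the example self-contained):

```python
from datetime import datetime, timedelta, timezone

# An ISO-datestyle value as the backend would send it, offset included.
wire_value = "2001-09-09 14:24:35-08"

# Split off the two-digit hour offset and attach it as a tzinfo.
stamp, offset_hours = wire_value[:-3], int(wire_value[-3:])
local = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(
    tzinfo=timezone(timedelta(hours=offset_hours)))

# Normalizing to GMT recovers what the server actually stores.
in_gmt = local.astimezone(timezone.utc)
print(in_gmt.strftime("%Y-%m-%d %H:%M:%S GMT"))  # 2001-09-09 22:24:35 GMT
```

A value that arrives without an offset is the ambiguous case the thread describes: nothing in the string says which zone it is in, so the server falls back to the session timezone -- which is exactly how a one-hour daylight-saving shift appears when client and server guess differently.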
[ { "msg_contents": "(sorry for the repost. I forgot the subject last time...)\n\nAll,\n\nI am working a some patches to the code and I noticed that \"pg_dump -C\ndatabase\" doesn't provide the database location information in the dump\nfile. Is this correct?\n\nThanks\nJim\n\n\nExample:\n\n datname | datdba | encoding | datistemplate | datallowconn |\ndatlastsysoid | datpath | idxpath \n-----------+--------+----------+---------------+--------------+---------------+---------+---------\njb1 | 5433 | 0 | f | t | \n18540 | | PGIDX1\ntemplate1 | 5433 | 0 | t | t | \n18540 | | \ntemplate0 | 5433 | 0 | t | f | \n18540 | | \njb2 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\njb3 | 5433 | 0 | f | t | \n18540 | PGDATA1 | \n4051 | 5433 | 0 | f | t | \n18540 | PGDATA1 | PGIDX1\n\n\n(Please ignore the IDXPATH column for now as I am trying to add support\nfor INDEX locations as I am running out of room on my current system and\nI don't like the \"symlink your own tables/index files\" idea)\n\nand the output of pg_dump -C\n\n--\n-- Selected TOC Entries:\n--\n\\connect - pgtest\n--\n-- TOC Entry ID 1 (OID 0)\n--\n-- Name: jb2 Type: DATABASE Owner: pgtest\n--\n\nCreate Database \"jb2\"; <<== This is missing the with location... stuff\n\n\\connect jb2 pgtest\n", "msg_date": "Sun, 9 Sep 2001 17:36:54 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "pg_dump -C and locations (with subject this time)" } ]
[ { "msg_contents": "There are many places in our docs where INV_ARCHIVE is mentioned. In\nmy understanding, INV_ARCHIVE has never been supported since\nPostgreSQL 6.0 was born. Shall we remove it from the docs?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 11:57:50 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "INV_ARCHIVE?" }, { "msg_contents": "> There are many places in our docs where INV_ARCHIVE is mentioned. In\n> my understanding, INV_ARCHIVE has never been supported since\n> PostgreSQL 6.0 was born. Shall we remove it from the docs?\n\nIt is for large object/inverted object archiving, which we don't have.\nI have applied this patch to remove it.\n\nGood eye. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/libpgtcl.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpgtcl.sgml,v\nretrieving revision 1.15\ndiff -c -r1.15 libpgtcl.sgml\n*** doc/src/sgml/libpgtcl.sgml\t2001/05/12 22:51:35\t1.15\n--- doc/src/sgml/libpgtcl.sgml\t2001/09/10 04:12:38\n***************\n*** 1134,1140 ****\n <TITLE>Usage\n </TITLE>\n <PARA>\n! mode can be any OR'ing together of INV_READ, INV_WRITE, and INV_ARCHIVE. \n The OR delimiter character is \"|\".\n <ProgramListing>\n [pg_lo_creat $conn \"INV_READ|INV_WRITE\"]\n--- 1134,1140 ----\n <TITLE>Usage\n </TITLE>\n <PARA>\n! mode can be any OR'ing together of INV_READ and INV_WRITE. 
\n The OR delimiter character is \"|\".\n <ProgramListing>\n [pg_lo_creat $conn \"INV_READ|INV_WRITE\"]\nIndex: doc/src/sgml/lobj.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/lobj.sgml,v\nretrieving revision 1.16\ndiff -c -r1.16 lobj.sgml\n*** doc/src/sgml/lobj.sgml\t2001/02/09 19:24:09\t1.16\n--- doc/src/sgml/lobj.sgml\t2001/09/10 04:12:39\n***************\n*** 117,133 ****\n <filename>$<envar>PGROOT</envar>/src/backend/libpq/libpq-fs.h</filename>\n The access type (read, write, or both) is controlled by\n OR ing together the bits <acronym>INV_READ</acronym> and\n! <acronym>INV_WRITE</acronym>. If\n! the large object should be archived -- that is, if \n! historical versions of it should be moved periodically to\n! a special archive relation -- then the <acronym>INV_ARCHIVE</acronym> bit\n! should be set. The low-order sixteen bits of mask are\n the storage manager number on which the large object\n should reside. For sites other than Berkeley, these\n bits should always be zero.\n The commands below create an (Inversion) large object:\n <programlisting>\n! inv_oid = lo_creat(INV_READ|INV_WRITE|INV_ARCHIVE);\n </programlisting>\n </para>\n </sect2>\n--- 117,129 ----\n <filename>$<envar>PGROOT</envar>/src/backend/libpq/libpq-fs.h</filename>\n The access type (read, write, or both) is controlled by\n OR ing together the bits <acronym>INV_READ</acronym> and\n! <acronym>INV_WRITE</acronym>. The low-order sixteen bits of mask are\n the storage manager number on which the large object\n should reside. For sites other than Berkeley, these\n bits should always be zero.\n The commands below create an (Inversion) large object:\n <programlisting>\n! 
inv_oid = lo_creat(INV_READ|INV_WRITE);\n </programlisting>\n </para>\n </sect2>\nIndex: doc/src/sgml/pygresql.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/pygresql.sgml,v\nretrieving revision 1.1\ndiff -c -r1.1 pygresql.sgml\n*** doc/src/sgml/pygresql.sgml\t2001/03/04 18:54:07\t1.1\n--- doc/src/sgml/pygresql.sgml\t2001/09/10 04:12:50\n***************\n*** 418,424 ****\n <varlistentry>\n <term><varname>INV_READ</varname></term>\n <term><varname>INV_WRITE</varname></term>\n- <term><varname>INV_ARCHIVE</varname></term>\n <listitem>\n <para>\n large objects access modes, used by\n--- 418,423 ----\n***************\n*** 2253,2259 ****\n <para>\n <function>locreate()</function> method creates a large object in the database.\n The mode can be defined by OR-ing the constants defined in the pg module\n! (<literal>INV_READ, INV_WRITE</literal> and <literal>INV_ARCHIVE</literal>).\n </para>\n </refsect1>\n \n--- 2252,2258 ----\n <para>\n <function>locreate()</function> method creates a large object in the database.\n The mode can be defined by OR-ing the constants defined in the pg module\n! (<literal>INV_READ and INV_WRITE</literal>).\n </para>\n </refsect1>", "msg_date": "Mon, 10 Sep 2001 00:14:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: INV_ARCHIVE?" }, { "msg_contents": "On Mon, Sep 10, 2001 at 12:14:44AM -0400, Bruce Momjian wrote:\n> > There are many places in our docs where INV_ARCHIVE is mentioned. In\n> > my understanding, INV_ARCHIVE has never been supported since\n> > PostgreSQL 6.0 was born. 
Shall we remove it from the docs?\n> \n> It is for large object/inverted object archiving, which we don't have.\n> I have applied this patch to remove it.\n\n I never used 6.0, but maybe try dumping LOs with pg_dumplo (in the 7.0 release)?\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Sep 2001 09:21:05 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: INV_ARCHIVE?" } ]
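[Editor's note] The mode word discussed in this thread is just a set of bit flags OR'd into one integer, with the low-order sixteen bits reserved for the (Berkeley-only) storage manager number. A minimal sketch of how that word is composed — the INV_READ/INV_WRITE values are assumed from the historical libpq-fs.h header, not stated in the thread itself:

```python
# Hypothetical sketch of the Inversion large-object mode word.
# Flag values assumed from the historical libpq-fs.h header.
INV_WRITE = 0x00020000
INV_READ = 0x00040000

def make_lo_mode(read=True, write=True, smgr=0):
    """Compose a mode word: OR'd access bits, plus the low-order
    sixteen bits reserved for the storage manager number
    (always zero outside Berkeley, per the docs)."""
    mode = (INV_READ if read else 0) | (INV_WRITE if write else 0)
    return mode | (smgr & 0xFFFF)

mode = make_lo_mode()       # INV_READ | INV_WRITE
assert mode & 0xFFFF == 0   # storage-manager bits stay zero
```

This only illustrates the bit layout; actual large-object calls go through libpq's lo_creat().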
[ { "msg_contents": "In typeconv.sgml we have an example:\n\ntgl=> select (4.3 !);\n ?column?\n----------\n 24\n(1 row)\n\nHowever, actually it does not work:\n\ntest=# select (4.3 !);\nERROR: Unable to identify a postfix operator '!' for type 'double precision'\n\tYou may need to add parentheses or an explicit cast\n\nShall we correct the doc or is that a bug?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 13:35:58 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "factorial doc bug?" }, { "msg_contents": "On Mon, 10 Sep 2001, Tatsuo Ishii wrote:\n\n> In typeconv.sgml we have an example:\n> \n> tgl=> select (4.3 !);\n> ?column?\n> ----------\n> 24\n> (1 row)\n\nMathematically speaking, one cannot find the factorial of such a\nnumber. Users could easily cast/round a float to an integer - making it\nsuitable for such an operation.\n\nI'd say it was a documentation issue.\n\nGavin\n\n", "msg_date": "Mon, 10 Sep 2001 15:03:55 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> On Mon, 10 Sep 2001, Tatsuo Ishii wrote:\n> \n> > In typeconv.sgml we have an example:\n> > \n> > tgl=> select (4.3 !);\n> > ?column?\n> > ----------\n> > 24\n> > (1 row)\n> \n> Mathematically speaking, one cannot find the factorial of such a\n> number. Users could easily cast/round a float to an integer - making it\n> suitable for such an operation.\n> \n> I'd say it was a documentation issue.\n\nMy point is that the docs claim that PostgreSQL automatically converts 4.3\nto 4 in this case.\n\n>This example illustrates an interesting result. Traditionally, the\n>factorial operator is defined for integers only. 
The <productname>Postgres</productname>\n>operator catalog has only one entry for factorial, taking an integer operand.\n>If given a non-integer numeric argument, <productname>Postgres</productname>\n>will try to convert that argument to an integer for evaluation of the\n>factorial.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 14:12:06 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> Shall we correct the doc or is that a bug?\n\nFix the docs...\n\n - Thomas\n", "msg_date": "Tue, 11 Sep 2001 16:14:03 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> > Shall we correct the doc or is that a bug?\n> \n> Fix the docs...\n\nAre you saying we should remove the whole chapter below from the docs?\n--\nTatsuo Ishii\n\n>5.2.1.3. Factorial\n>\n>This example illustrates an interesting result. Traditionally, the factorial operator is defined for integers only. The Postgres operator catalog has only\n>one entry for factorial, taking an integer operand. If given a non-integer numeric argument, Postgres will try to convert that argument to an integer\n>for evaluation of the factorial. \n>\n>tgl=> select (4.3 !);\n> ?column?\n>----------\n> 24\n>(1 row)\n>\n> Note: Of course, this leads to a mathematically suspect result, since in principle the factorial of a non-integer is not defined. However,\n> the role of a database is not to teach mathematics, but to be a tool for data manipulation. If a user chooses to take the factorial of a\n> floating point number, Postgres will try to oblige.\n", "msg_date": "Wed, 12 Sep 2001 09:59:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> Are you saying we should remove the whole chapter below from the docs?\n\nHmm. 
I wrote that :/\n\nI vaguely recall some discussion of this topic (a few months ago?). I'm\nnot certain that the current behavior was an intended result of changes\nin the \"automatic coersion\" algorithms, but I think it was. Tom Lane is\nprobably the person who made those changes, and we should have him in\nthe discussion on whether the current behavior is appropriate. \n\nKeep in mind that he is a mathematician, and I'll guess that he won't\nhave much patience with folks who expect a result for a factorial of a\nfractional number ;) But there may have been another case which made it\nclearer that the old behavior was a bad road to take. We can look at the\narchives, right?\n\n - Thomas\n", "msg_date": "Wed, 12 Sep 2001 01:35:33 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> > Are you saying we should remove the whole chapter below from the docs?\n> \n> Hmm. I wrote that :/\n> \n> I vaguely recall some discussion of this topic (a few months ago?). I'm\n> not certain that the current behavior was an intended result of changes\n> in the \"automatic coersion\" algorithms, but I think it was. Tom Lane is\n> probably the person who made those changes, and we should have him in\n> the discussion on whether the current behavior is appropriate. \n> \n> Keep in mind that he is a mathematician, and I'll guess that he won't\n> have much patience with folks who expect a result for a factorial of a\n> fractional number ;) But there may have been another case which made it\n> clearer that the old behavior was a bad road to take. We can look at the\n> archives, right?\n\nOk, let's wait for him coming back...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Sep 2001 10:38:58 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: factorial doc bug?" 
}, { "msg_contents": "> Are you saying we should remove the whole chapter below from the docs?\n\nActually, it may be simply that we (now) implement factorial operators\nfor int8, int4, and int2. Not sure what previous releases implemented,\nbut perhaps it is just an issue of knowing which one should be used for\nthe operation. If before we only had, say, int4, then the coersion code\ncould easily assume that it was the correct coersion.\n\n - Thomas\n", "msg_date": "Wed, 12 Sep 2001 01:39:23 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Keep in mind that he is a mathematician, and I'll guess that he won't\n> have much patience with folks who expect a result for a factorial of a\n> fractional number ;)\n\nReal mathematicians will be perfectly happy with a factorial for a\nfractional number, as long as it's properly and consistently defined. ;-)\n\nSeriously, there is a well-established definition of factorials of\nnon-integral real numbers, but the current behaviour is probably the most\nintuitive for the vast majority of users.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 12 Sep 2001 14:45:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "On Wed, Sep 12, 2001 at 02:45:10PM +0200, Peter Eisentraut wrote:\n> Thomas Lockhart writes:\n> \n> > Keep in mind that he is a mathematician, and I'll guess that he won't\n> > have much patience with folks who expect a result for a factorial of a\n> > fractional number ;)\n> \n> Real mathematicians will be perfectly happy with a factorial for a\n> fractional number, as long as it's properly and consistently defined. 
;-)\n> \n> Seriously, there is a well-established definition of factorials of\n> non-integral real numbers, but the current behaviour is probably the most\n> intuitive for the vast majority of users.\n\nI would be happy with exp(lgamma(x+1)) as a synonym for x!\n(So 4.3!=38.078 as far as I'm concerned :) )\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 12 Sep 2001 16:10:22 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "> On Wed, Sep 12, 2001 at 02:45:10PM +0200, Peter Eisentraut wrote:\n> > Thomas Lockhart writes:\n> > \n> > > Keep in mind that he is a mathematician, and I'll guess that he won't\n> > > have much patience with folks who expect a result for a factorial of a\n> > > fractional number ;)\n> > \n> > Real mathematicians will be perfectly happy with a factorial for a\n> > fractional number, as long as it's properly and consistently defined. ;-)\n> > \n> > Seriously, there is a well-established definition of factorials of\n> > non-integral real numbers, but the current behaviour is probably the most\n> > intuitive for the vast majority of users.\n> \n> I would be happy with exp(lgamma(x+1)) as a synonym for x!\n> (So 4.3!=38.078 as far as I'm concerned :) )\n\nYes, gamma is the standard for non-integer factorial but we don't\nimplement it that way. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 13:28:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug?" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Actually, it may be simply that we (now) implement factorial operators\n> for int8, int4, and int2. 
Not sure what previous releases implemented,\n> but perhaps it is just an issue of knowing which one should be used for\n> the operation. If before we only had, say, int4, then the coersion code\n> could easily assume that it was the correct coersion.\n\nI think this must be the correct explanation. Observe this experiment:\n\nregression=# create operator !! (procedure = int4fac, leftarg = int4);\nCREATE\nregression=# select 4.3 !!;\n ?column?\n----------\n 24\n(1 row)\n\nregression=# create operator !! (procedure = int2fac, leftarg = int2);\nCREATE\nregression=# select 4.3 !!;\nERROR: Unable to identify a postfix operator '!!' for type 'double precision'\n You may need to add parentheses or an explicit cast\nregression=#\n\nThe int2 and int8 factorial operators were new in 7.0. The example in\nthe docs is older --- its claim that there's only one factorial op in\nthe catalogs is clearly out of date. So I'd say we should change the\nexample. Have we got any other operators that only come in an int4\nflavor, and are likely to stay that way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 16:18:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug? " }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> ... Tom Lane is\n> probably the person who made those changes, and we should have him in\n> the discussion on whether the current behavior is appropriate. \n\n> Keep in mind that he is a mathematician, and I'll guess that he won't\n> have much patience with folks who expect a result for a factorial of a\n> fractional number ;)\n\nActually, I'm an engineer by training, not a mathematician --- either\ncamp will tell you there's a big difference ;-)\n\nI have no objection to adding a \"float8 !\" operator using the\ngamma-based definition, if someone felt like doing it. 
But even if we\ndid, that would not fix the example in typeconv.sgml; indeed it would\nrender the example completely wrong with respect to the point it was\noriginally written to make. We need an operator that exists only for\nint4 to demonstrate implicit coercion. Unfortunately, I see no\ncandidate for one in the current catalogs. Has anyone got another idea\nabout how to replace this example with a correct one?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 17:36:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial doc bug? " } ]
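[Editor's note] Patrick's gamma-based definition from this thread is easy to verify numerically. A small sketch (plain math, not PostgreSQL code) confirming that exp(lgamma(x+1)) reproduces both the integer case 4! = 24 and his claimed 4.3! ≈ 38.078:

```python
import math

def gamma_factorial(x):
    # x! generalized via the gamma function: x! = Gamma(x + 1),
    # computed through log-gamma for numerical stability.
    return math.exp(math.lgamma(x + 1.0))

print(round(gamma_factorial(4), 6))    # 24.0 for the integer case
print(round(gamma_factorial(4.3), 3))  # ~38.078, matching the thread
```

For integer arguments this agrees with the ordinary factorial (up to floating-point rounding), so a hypothetical "float8 !" operator defined this way would extend, not contradict, the int4 one.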
[ { "msg_contents": "Hi,\n\nI'm going to add a new function \"pg_client_encoding\" returning the\ncurrent client side encoding name. I know there is a similar\nfunctionality already there in PostgreSQL (show client_encoding) but\nit's pain to handle notice message by a program.\n\nAlso note that JDBC driver and maybe some other APIs use\ngetdatabaseencoding, but I think it's not adequate for FE APIs to know\nactual encoding passed to FE side, since an encoding conversion might\nbe made in BE side. For example, if PGCLIENTENCODING is set to SJIS\nbefore starting postmaster, the actual encoding passed to FE would be\nSJIS even the database encoding is EUC_JP.\n\nComments?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 13:46:28 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "pg_client_encoding" }, { "msg_contents": "On Mon, Sep 10, 2001 at 01:46:28PM +0900, Tatsuo Ishii wrote:\n> Hi,\n> \n> I'm going to add a new function \"pg_client_encoding\" returning the\n> current client side encoding name. I know there is a similar\n> functionality already there in PostgreSQL (show client_encoding) but\n> it's pain to handle notice message by a program.\n> \n> Also note that JDBC driver and maybe some other APIs use\n> getdatabaseencoding, but I think it's not adequate for FE APIs to know\n> actual encoding passed to FE side, since an encoding conversion might\n> be made in BE side. 
For example, if PGCLIENTENCODING is set to SJIS\n> before starting postmaster, the actual encoding passed to FE would be\n> SJIS even the database encoding is EUC_JP.\n> \n> Comments?\n\n What about a common function like pg_show():\n\n SELECT pg_show('CLIENT_ENCODING');\n SELECT pg_show('SERVER_ENCODING');\n SELECT pg_show('DATESTYLE');\n\n that returns the same result as the standard 'SHOW' command, but not as a NOTICE?\nA lot of code for this function can be shared with the current SHOW routines.\nI'm sure maintainers of non-libpq clients (like JDBC) will be happy with it. \n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 10 Sep 2001 09:40:32 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "> Tatsuo,\n> \n> Did you ever commit this new function? I just tried a 'select \n> pg_client_encoding()' and it told me that there was no such function. \n> This was on sources that I pulled and built two days ago.\n> \n> I was planning on changing the JDBC code to use this function instead of \n> getdatabaseencoding().\n\nSorry for the delay. I have just added pg_client_encoding() which\nreturns the client-side encoding name.\n\n> Also, what names will this new function return (the old character set \n> names like getdatabaseencoding still does, or the new names)?\n\nThe \"old\" ones. To make sure, here is the list of encoding names\ncurrently supported. 
\n\nencoding\twhat pg_client_encoding/\talias\n\t\tgetdatabaseencoding\n\t\treturns\n----------------------------------------------------------------\nASCII\t\tSQL_ASCII\nUTF-8\t\tUNICODE\t\t\t\tUTF_8\nMULE-INTERNAL\tMULE_INTERNAL\nISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\nISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\nISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\nISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\nISO-8859-5\tISO_8859_5\nISO-8859-6\tISO_8859_6\nISO-8859-7\tISO_8859_7\nISO-8859-8\tISO_8859_8\nISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\nISO-8859-10\tISO_8859_10\t\t\tLATIN6\nISO-8859-13\tISO_8859_13\t\t\tLATIN7\nISO-8859-14\tISO_8859_14\t\t\tLATIN8\nISO-8859-15\tISO_8859_15\t\t\tLATIN9\nISO-8859-16\tISO_8859_16\nEUC-JP\t\tEUC_JP\nEUC-CN\t\tEUC_CN\nEUC-KR\t\tEUC_KR\nEUC-TW\t\tEUC_TW\nShift_JIS\tSJIS\t\t\t\tSHIFT_JIS\nBig5\t\tBIG5\nWindows1250\tWIN1250\nWindows1251\tWIN\nKOI8-R\t\tKOI8\t\t\t\tKOI8R\nIBM866\t\tALT\n\n> thanks,\n> --Barry\n> \n> \n> \n> Tatsuo Ishii wrote:\n> \n> > Hi,\n> > \n> > I'm going to add a new function \"pg_client_encoding\" returning the\n> > current client side encoding name. I know there is a similar\n> > functionality already there in PostgreSQL (show client_encoding) but\n> > it's pain to handle notice message by a program.\n> > \n> > Also note that JDBC driver and maybe some other APIs use\n> > getdatabaseencoding, but I think it's not adequate for FE APIs to know\n> > actual encoding passed to FE side, since an encoding conversion might\n> > be made in BE side. 
For example, if PGCLIENTENCODING is set to SJIS\n> > before starting postmaster, the actual encoding passed to FE would be\n> > SJIS even the database encoding is EUC_JP.\n> > \n> > Comments?\n> > --\n> > Tatsuo Ishii\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> > \n> \n> \n", "msg_date": "Fri, 12 Oct 2001 11:22:32 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> encoding\twhat pg_client_encoding/\talias\n> \t\tgetdatabaseencoding\n> \t\treturns\n> ----------------------------------------------------------------\n> ASCII\t\tSQL_ASCII\n> UTF-8\t\tUNICODE\t\t\t\tUTF_8\n> MULE-INTERNAL\tMULE_INTERNAL\n> ISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\n> ISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\n> ISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\n> ISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\n> ISO-8859-5\tISO_8859_5\n> ISO-8859-6\tISO_8859_6\n> ISO-8859-7\tISO_8859_7\n> ISO-8859-8\tISO_8859_8\n> ISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\n> ISO-8859-10\tISO_8859_10\t\t\tLATIN6\n> ISO-8859-13\tISO_8859_13\t\t\tLATIN7\n> ISO-8859-14\tISO_8859_14\t\t\tLATIN8\n> ISO-8859-15\tISO_8859_15\t\t\tLATIN9\n> ISO-8859-16\tISO_8859_16\n\nWhy aren't you using LATINx for (some of) these as well?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 13 Oct 2001 20:13:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "> > ASCII\t\tSQL_ASCII\n> > UTF-8\t\tUNICODE\t\t\t\tUTF_8\n> > MULE-INTERNAL\tMULE_INTERNAL\n> > ISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\n> > ISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\n> > ISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\n> > ISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\n> > ISO-8859-5\tISO_8859_5\n> > ISO-8859-6\tISO_8859_6\n> > ISO-8859-7\tISO_8859_7\n> > 
ISO-8859-8\tISO_8859_8\n> > ISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\n> > ISO-8859-10\tISO_8859_10\t\t\tLATIN6\n> > ISO-8859-13\tISO_8859_13\t\t\tLATIN7\n> > ISO-8859-14\tISO_8859_14\t\t\tLATIN8\n> > ISO-8859-15\tISO_8859_15\t\t\tLATIN9\n> > ISO-8859-16\tISO_8859_16\n> \n> Why aren't you using LATINx for (some of) these as well?\n\nIf LATIN6 to 9 are well defined in the SQL or some other standards, I\nwould not object using them. I just don't have enough confidence.\nFor ISO-8859-5 to 8, and 16, I don't see well defined standards.\n--\nTatsuo Ishii\n\n", "msg_date": "Sun, 14 Oct 2001 11:13:45 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "* Tatsuo Ishii <t-ishii@sra.co.jp> [011014 16:05]:\n> > > ASCII\t\tSQL_ASCII\n> > > UTF-8\t\tUNICODE\t\t\t\tUTF_8\n> > > MULE-INTERNAL\tMULE_INTERNAL\n> > > ISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\n> > > ISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\n> > > ISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\n> > > ISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\n> > > ISO-8859-5\tISO_8859_5\n> > > ISO-8859-6\tISO_8859_6\n> > > ISO-8859-7\tISO_8859_7\n> > > ISO-8859-8\tISO_8859_8\n> > > ISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\n> > > ISO-8859-10\tISO_8859_10\t\t\tLATIN6\n> > > ISO-8859-13\tISO_8859_13\t\t\tLATIN7\n> > > ISO-8859-14\tISO_8859_14\t\t\tLATIN8\n> > > ISO-8859-15\tISO_8859_15\t\t\tLATIN9\n> > > ISO-8859-16\tISO_8859_16\n> > \n> > Why aren't you using LATINx for (some of) these as well?\n> \n> If LATIN6 to 9 are well defined in the SQL or some other standards, I\n> would not object using them. I just don't have enough confidence.\n> For ISO-8859-5 to 8, and 16, I don't see well defined standards.\n\nISO-8859-16 *is* LATIN10, I just don't have the reference to prove it\n(I can look for it, if you want to).\n\nISO-8859-5 to 8 aren't latin scripts. From memory, 5 is cyrillic, 6 is\narabic, 7 is greek, 8 is ??? 
(hebrew ?)...\n\nSo it would make sense to add LATIN10, still :)\n\nPatrice\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n", "msg_date": "Sun, 14 Oct 2001 16:59:41 +0200", "msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>", "msg_from_op": false, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "> * Tatsuo Ishii <t-ishii@sra.co.jp> [011014 16:05]:\n> > > > ASCII\t\tSQL_ASCII\n> > > > UTF-8\t\tUNICODE\t\t\t\tUTF_8\n> > > > MULE-INTERNAL\tMULE_INTERNAL\n> > > > ISO-8859-1\tLATIN1\t\t\t\tISO_8859_1\n> > > > ISO-8859-2\tLATIN2\t\t\t\tISO_8859_2\n> > > > ISO-8859-3\tLATIN3\t\t\t\tISO_8859_3\n> > > > ISO-8859-4\tLATIN4\t\t\t\tISO_8859_4\n> > > > ISO-8859-5\tISO_8859_5\n> > > > ISO-8859-6\tISO_8859_6\n> > > > ISO-8859-7\tISO_8859_7\n> > > > ISO-8859-8\tISO_8859_8\n> > > > ISO-8859-9\tLATIN5\t\t\t\tISO_8859_9\n> > > > ISO-8859-10\tISO_8859_10\t\t\tLATIN6\n> > > > ISO-8859-13\tISO_8859_13\t\t\tLATIN7\n> > > > ISO-8859-14\tISO_8859_14\t\t\tLATIN8\n> > > > ISO-8859-15\tISO_8859_15\t\t\tLATIN9\n> > > > ISO-8859-16\tISO_8859_16\n> > > \n> > > Why aren't you using LATINx for (some of) these as well?\n> > \n> > If LATIN6 to 9 are well defined in the SQL or some other standards, I\n> > would not object using them. I just don't have enough confidence.\n> > For ISO-8859-5 to 8, and 16, I don't see well defined standards.\n> \n> ISO-8859-16 *is* LATIN10, I just don't have the reference to prove it\n> (I can look for it, if you want to).\n> \n> ISO-8859-5 to 8 aren't latin scripts. From memory, 5 is cyrillic, 6 is\n> arabic, 7 is greek, 8 is ??? (hebrew ?)...\n> \n> So it would make sense to add LATIN10, still :)\n\nIf you were sure ISO-8859-16 == LATIN10, I could add it.\n\nOk, here is the modified encoding table (column1 is the standard name,\n2 is our \"official\" name, and 3 is alias). 
If there's no objection, I\nwill change them.\n\nASCII\t\tSQL_ASCII\nUTF-8\t\tUNICODE\t\tUTF_8\nMULE-INTERNAL\tMULE_INTERNAL\nISO-8859-1\tLATIN1\t\tISO_8859_1\nISO-8859-2\tLATIN2\t\tISO_8859_2\nISO-8859-3\tLATIN3\t\tISO_8859_3\nISO-8859-4\tLATIN4\t\tISO_8859_4\nISO-8859-5\tISO_8859_5\nISO-8859-6\tISO_8859_6\nISO-8859-7\tISO_8859_7\nISO-8859-8\tISO_8859_8\nISO-8859-9\tLATIN5\t\tISO_8859_9\nISO-8859-10\tLATIN6\t\tISO_8859_10\nISO-8859-13\tLATIN7\t\tISO_8859_13\nISO-8859-14\tLATIN8\t\tISO_8859_14\nISO-8859-15\tLATIN9\t\tISO_8859_15\nISO-8859-16\tLATIN10\t\tISO_8859_16\n", "msg_date": "Mon, 15 Oct 2001 10:05:20 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_client_encoding" }, { "msg_contents": "Done.\n\n> Ok, here is the modified encoding table (column1 is the standard name,\n> 2 is our \"official\" name, and 3 is alias). If there's no objection, I\n> will change them.\n> \n> ASCII\t\tSQL_ASCII\n> UTF-8\t\tUNICODE\t\tUTF_8\n> MULE-INTERNAL\tMULE_INTERNAL\n> ISO-8859-1\tLATIN1\t\tISO_8859_1\n> ISO-8859-2\tLATIN2\t\tISO_8859_2\n> ISO-8859-3\tLATIN3\t\tISO_8859_3\n> ISO-8859-4\tLATIN4\t\tISO_8859_4\n> ISO-8859-5\tISO_8859_5\n> ISO-8859-6\tISO_8859_6\n> ISO-8859-7\tISO_8859_7\n> ISO-8859-8\tISO_8859_8\n> ISO-8859-9\tLATIN5\t\tISO_8859_9\n> ISO-8859-10\tLATIN6\t\tISO_8859_10\n> ISO-8859-13\tLATIN7\t\tISO_8859_13\n> ISO-8859-14\tLATIN8\t\tISO_8859_14\n> ISO-8859-15\tLATIN9\t\tISO_8859_15\n> ISO-8859-16\tLATIN10\t\tISO_8859_16\n", "msg_date": "Tue, 16 Oct 2001 19:10:57 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_client_encoding" } ]
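[Editor's note] The LATINn ↔ ISO-8859-n correspondence settled on in this thread is the same one most codec libraries use. As an illustration only (this is Python's codec registry, not PostgreSQL behaviour), the Latin-1 alias normalizes to the ISO name, and LATIN9/ISO-8859-15 differs from Latin-1 in a few positions such as the euro sign:

```python
import codecs

# 'latin1' and 'iso-8859-1' name the same codec; the registry
# reports the canonical ISO name for both spellings.
assert codecs.lookup("latin1").name == "iso8859-1"
assert codecs.lookup("iso-8859-1").name == "iso8859-1"

# ISO-8859-15 (LATIN9 in the table above) replaces a few Latin-1
# positions, e.g. the euro sign lands at 0xA4:
assert "€".encode("iso8859-15") == b"\xa4"
assert "é".encode("iso8859-15") == b"\xe9"
```
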
[ { "msg_contents": "In comment.sgml:\n\nCOMMENT ON AGGREGATE my_aggregate (double precision) IS 'Computes\nsample variance';\n\nthis raises an error. However, \n\nCOMMENT ON AGGREGATE my_aggregate double precision IS 'Computes\nsample variance';\n\nworks, but the syntax looks strange. Should we fix the program or docs?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 15:40:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "COMMENT ON" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> In comment.sgml:\n>\n> COMMENT ON AGGREGATE my_aggregate (double precision) IS 'Computes\n> sample variance';\n>\n> this raises an error. However,\n>\n> COMMENT ON AGGREGATE my_aggregate double precision IS 'Computes\n> sample variance';\n>\n> works, but the syntax looks strange. Should we fix the program or docs?\n\nI vote for fixing the program.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 11 Sep 2001 16:41:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: COMMENT ON" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> COMMENT ON AGGREGATE my_aggregate double precision IS 'Computes\n>> sample variance';\n>> \n>> works, but the syntax looks strange. Should we fix the program or docs?\n\n> I vote for fixing the program.\n\nIf we fix this, we should also change DROP AGGREGATE, which also uses\nthe paren-less syntax. (I think the COMMENT ON syntax was modeled on\nDROP.)\n\nI'd be in favor of changing, but we do need to maintain consistency.\n\nAnother issue is that pg_dump knows about using both of these\ncommands... 
we'll have a compatibility problem if we don't continue\nto accept the old syntax for awhile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Sep 2001 11:50:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COMMENT ON " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> COMMENT ON AGGREGATE my_aggregate double precision IS 'Computes\n> sample variance';\n> \n> works but looks strange syntax. Should we fix the program or docs?\n\n>> I vote for fixing the program.\n\n> If we fix this, we should also change DROP AGGREGATE, which also uses\n> the paren-less syntax. (I think the COMMENT ON syntax was modeled on\n> DROP.)\n\n> I'd be in favor of changing, but we do need to maintain consistency.\n\n> Another issue is that pg_dump knows about using both of these\n> commands... we'll have a compatibility problem if we don't continue\n> to accept the old syntax for awhile.\n\nAll fixed ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2001 16:56:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COMMENT ON " } ]
[ { "msg_contents": "I found a non-existent option \"-list\" described in the doc of\nlibpgtcl's pg_result procedure. Shall we remove it from the docs?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 10 Sep 2001 16:59:51 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "pg_result -list" }, { "msg_contents": "> I found a non-existent option \"-list\" described in the doc of\n> libpgtcl's pg_result procedure. Shall we remove it from the docs?\n\nYes, removed. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 12:30:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_result -list" } ]
[ { "msg_contents": "I believe LOCK TABLE <table> IN EXCLUSIVE MODE should block everything but\nselects, but it locks for the entire transaction I think. Maybe in tcl you\ncould create your own locking using global variables. If the spin lock code\nis available to user functions you might be able to use that.\nAlternatively, inside a plpgsql function, could you use something like this:\n\nINSERT INTO ex_tbl (a,b,pk) SELECT var1 AS a,var2 AS b,var3 AS pk WHERE NOT\nEXISTS (SELECT * FROM ex_tbl WHERE pk=var3) LIMIT 1;\nGET DIAGNOSTICS rc = ROW_COUNT;\n\nwhere pk is the primary key of ex_tbl.\nif rc=0 then you'd know the primary key already existed and if rc=1 then it\nwould have inserted successfully\n- Stuart\n\n\"Haroldo Stenger\" wrote:\n\n> \"Matthew T. O'Connor\" wrote:\n> > \n> > > A solution could be to query for the existence of the PK, just before\n> the\n> > > insertion. But there is a little span between the test and the\n> > > insertion, where another insertion from another transaction could void\n> > > the existence test. Any clever ideas on how to solve this? Using\n> > > triggers maybe? Other solutions?\n> > >\n> > \n> > All you need to do is use a sequence. If you set the sequence to be the\n> > primary key with a default value of nextval(seq_name) then you will\n> never\n> > have a collision. Alternately if you need to know that number before you\n> > start inserting you can select nextval(seq_name) before you insert and\n> use\n> > that. By the way the datatype serial automates exactly what I\n> described.\n> \n> Yes, but there are situations where a sequenced PK isn't what is needed.\n> Imagine a DW app, where composed PKs such as (ClientNum, Year, Month,\n> ArticleNum) in a table which has ArticleQty as a secondary field are\n> used, in order to consolidate detail records from other tables. 
There,\n> the processing cycle goes like checking for the existance of the PK, if\n> it exists, add ArticleQtyDetail to ArticleQty, and update; and if it\n> doesn't exist, insert the record with ArticleQtyDetail as the starting\n> value of ArticleQty. See it? Then, if between the \"select from\" and the\n> \"insert into\", other process in the system (due to parallel processing\n> for instance) inserts a record with the same key, then the first\n> transaction would cancel, forcing redoing of all the processing. So,\n> sort of atomicity of the check?update:insert operation is needed. How\n> can that be easily implemented using locks and triggers for example?\n> \n> Regards,\n> Haroldo.\n", "msg_date": "Mon, 10 Sep 2001 13:09:15 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: Abort state on duplicated PKey in transactions" } ]
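[Editor's note] Stuart's INSERT ... SELECT ... WHERE NOT EXISTS pattern can be tried outside PostgreSQL too. A minimal sketch using SQLite as a stand-in engine (table and column names follow the thread's made-up ex_tbl example), with the cursor's rowcount playing the role of GET DIAGNOSTICS ... ROW_COUNT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ex_tbl (pk INTEGER PRIMARY KEY, a TEXT, b TEXT)")

def insert_if_absent(conn, pk, a, b):
    # The row is inserted only when no row with the same pk exists yet;
    # rowcount reports how many rows the INSERT actually added.
    cur = conn.execute(
        "INSERT INTO ex_tbl (pk, a, b) "
        "SELECT ?, ?, ? "
        "WHERE NOT EXISTS (SELECT 1 FROM ex_tbl WHERE pk = ?)",
        (pk, a, b, pk),
    )
    return cur.rowcount  # 1 = inserted, 0 = key already present

assert insert_if_absent(conn, 3, "x", "y") == 1    # first insert succeeds
assert insert_if_absent(conn, 3, "x2", "y2") == 0  # duplicate is skipped
```

Note this only shows the single-statement check-then-insert shape; whether it is race-free under concurrent writers depends on the engine's isolation rules, which is exactly what the thread is debating for PostgreSQL.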
[ { "msg_contents": "The x = NULL hack keeps biting people. Innocent people should not be\nexposed to incorrect behaviour because of (supposed) MS Access breakage.\nI strongly urge that we do one of the following:\n\n1) Provide a tunable knob to turn this on (cf. KSQO)\n\n2) Confine this to the ODBC driver somehow (which could be done via #1)\n\nActually, last time we discussed this there was some confusion whether\nAccess actually had the bug in question. That might be worth figuring\nout.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 10 Sep 2001 16:24:20 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "x = NULL" }, { "msg_contents": "> The x = NULL hack keeps biting people. Innocent people should not be\n> exposed to incorrect behaviour because of (supposed) MS Access breakage.\n> I strongly urge that we do one of the following:\n>\n> 1) Provide a tunable knob to turn this on (cf. KSQO)\n>\n> 2) Confine this to the ODBC driver somehow (which could be done via #1)\n>\n> Actually, last time we discussed this there was some confusion whether\n> Access actually had the bug in question. That might be worth figuring\n> out.\n>\n\nA while back I tested Oracle and MSSQL7 for this -- neither support it. See:\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1021527\n\nI just checked MS Access 2000 -- it also returns no records on x = NULL\nversus the correct answer with x IS NULL, at least for a simple query. IIRC,\nsomeone mentioned that the original issue was limited to the use of filtered\nforms in Access, or something like that. 
But ISTM, that if neither Oracle\nnor even MSSQL support the syntax, then PostgreSQL should not either.\n\n-- Joe\n\n\n", "msg_date": "Mon, 10 Sep 2001 09:13:14 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: x = NULL" }, { "msg_contents": "I'm getting tired of this, so unless someone can present a reason not to,\nI'll implement a GUC parameter to turn this off -- and turn it off by\ndefault.\n\nI wrote:\n\n> The x = NULL hack keeps biting people. Innocent people should not be\n> exposed to incorrect behaviour because of (supposed) MS Access breakage.\n> I strongly urge that we do one of the following:\n>\n> 1) Provide a tunable knob to turn this on (cf. KSQO)\n>\n> 2) Confine this to the ODBC driver somehow (which could be done via #1)\n>\n> Actually, last time we discussed this there was some confusion whether\n> Access actually had the bug in question. That might be worth figuring\n> out.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 18 Sep 2001 19:57:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: x = NULL" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I'm getting tired of this, so unless someone can present a reason not to,\n> I'll implement a GUC parameter to turn this off -- and turn it off by\n> default.\n\nYou'll have to push the switch-driven transformation into analyze.c ---\nit is not okay for gram.y to look at GUC parameters. But as long as you\ndo it correctly, I'm for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 00:14:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: x = NULL " } ]
[ { "msg_contents": "\nwill do.\n\n\n\n> Jim Buttafuoco writes:\n> \n> > I am working a some patches to the code and I noticed that \"pg_dump\n-C\n> > database\" doesn't provide the database location information in the\ndump\n> > file. Is this correct?\n> \n> Your observation is correct, but the behaviour is not. Feel free to\n> send a patch.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Mon, 10 Sep 2001 16:25:07 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: pg_dump -C option" }, { "msg_contents": "Jim Buttafuoco writes:\n\n> will do.\n\nWhile you're at it, at least the encoding parameter should be saved as\nwell. Take a peek at what pg_dumpall saves.\n\n>\n>\n>\n> > Jim Buttafuoco writes:\n> >\n> > > I am working a some patches to the code and I noticed that \"pg_dump\n> -C\n> > > database\" doesn't provide the database location information in the\n> dump\n> > > file. Is this correct?\n> >\n> > Your observation is correct, but the behaviour is not. 
Feel free to\n> > send a patch.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to\n> majordomo@postgresql.org\n> >\n> >\n>\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 11 Sep 2001 16:40:25 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump -C option" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Have pg_dump -C dump database location and encoding\n\t information\n\n> Jim Buttafuoco writes:\n> \n> > will do.\n> \n> While you're at it, at least the encoding parameter should be saved as\n> well. Take a peek at what pg_dumpall saves.\n> \n> >\n> >\n> >\n> > > Jim Buttafuoco writes:\n> > >\n> > > > I am working a some patches to the code and I noticed that \"pg_dump\n> > -C\n> > > > database\" doesn't provide the database location information in the\n> > dump\n> > > > file. Is this correct?\n> > >\n> > > Your observation is correct, but the behaviour is not. Feel free to\n> > > send a patch.\n> > >\n> > > --\n> > > Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> > >\n> > >\n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to\n> > majordomo@postgresql.org\n> > >\n> > >\n> >\n> >\n> >\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 Oct 2001 12:35:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump -C option" } ]
[ { "msg_contents": "I have heard from Tom Lane that he will be out of town until Wednesday. \nAs with any volunteer project, we can't hold people to dates and\nschedules. I know he is not done with everything he wants to do before\nbeta, so we have to decide whether we should push ahead with beta now or\nwait for him to return. On compromise would be to push ahead with beta\nnow and let Tom slip things in after beta begins.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Sep 2001 22:49:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta timing" }, { "msg_contents": "Hi Bruce,\n\nI reckon we should wait until he returns. This way people testing the\nbeta's get to try the new features Tom will add when he gets back, and\ndon't have to wait for the next version of the Beta.\n\ni.e. better testing.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> I have heard from Tom Lane that he will be out of town until Wednesday.\n> As with any volunteer project, we can't hold people to dates and\n> schedules. I know he is not done with everything he wants to do before\n> beta, so we have to decide whether we should push ahead with beta now or\n> wait for him to return. On compromise would be to push ahead with beta\n> now and let Tom slip things in after beta begins.\n> \n> Comments?\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 11 Sep 2001 13:23:33 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "I have done ALTER TABLE / ADD PRIMARY KEY at home now, and is currently\nbeing testing in the 10 mins a day I have free to code on Postgres. It\nshould only need another bit of testing before it's beta-worthy.\n\nGiven that adding primary keys is something that cannot be done at all with\npostgres at the moment, without hacking catalogs - you might want to wait\nfor it...\n\nPlus, given the activity on the list at the moment, it seems that there are\na fair few bugs left!\n\nChris\n\n> I have heard from Tom Lane that he will be out of town until Wednesday.\n> As with any volunteer project, we can't hold people to dates and\n> schedules. I know he is not done with everything he wants to do before\n> beta, so we have to decide whether we should push ahead with beta now or\n> wait for him to return. On compromise would be to push ahead with beta\n> now and let Tom slip things in after beta begins.\n>\n> Comments?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 11 Sep 2001 11:43:33 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> I have heard from Tom Lane that he will be out of town until Wednesday. \n> As with any volunteer project, we can't hold people to dates and\n> schedules. I know he is not done with everything he wants to do before\n> beta, so we have to decide whether we should push ahead with beta now or\n> wait for him to return. On compromise would be to push ahead with beta\n> now and let Tom slip things in after beta begins.\n> \n> Comments?\n\nI'm going to commit changes for this:\n\n* Reject character sequences those are not valid in their charset\n\nSo I'll be with the postpone of starting of beta.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 11 Sep 2001 12:49:55 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "\nWait until everyone is ready/finished with their existing projects ...\nthis past week has thrown alot of turmoil into several lives that wasn't\nentirely unexpected, but sad nonetheless ...\n\nLet's do a poll on Friday evening for who has stuff outstanding left, give\nit until Monday for ppl to pop their heads up, and then try and deal with\nMonday as a Beta Start ... we aren't in a rush ...\n\nOn Mon, 10 Sep 2001, Bruce Momjian wrote:\n\n> I have heard from Tom Lane that he will be out of town until Wednesday.\n> As with any volunteer project, we can't hold people to dates and\n> schedules. I know he is not done with everything he wants to do before\n> beta, so we have to decide whether we should push ahead with beta now or\n> wait for him to return. 
On compromise would be to push ahead with beta\n> now and let Tom slip things in after beta begins.\n>\n> Comments?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 11 Sep 2001 07:51:02 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> I have heard from Tom Lane that he will be out of town until Wednesday.\n\nYou may recall that I also have a substantial amount of work on\ndate/time issues which I would like to get in for beta. That is partway\ndone already, but not finished today.\n\nI'm guessing that an additional week (or perhaps more) will be necessary\nfor everyone (and I would certainly like that too).\n\n - Thomas\n", "msg_date": "Tue, 11 Sep 2001 16:38:11 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "(cough)\n\nCould someone look at my 'select from cursor foo' patch...?\n\ntnx\nOn Tue, 11 Sep 2001, Marc G. Fournier wrote:\n\n> \n> Wait until everyone is ready/finished with their existing projects ...\n> this past week has thrown alot of turmoil into several lives that wasn't\n> entirely unexpected, but sad nonetheless ...\n> \n> Let's do a poll on Friday evening for who has stuff outstanding left, give\n> it until Monday for ppl to pop their heads up, and then try and deal with\n> Monday as a Beta Start ... 
we aren't in a rush ...\n> \n> On Mon, 10 Sep 2001, Bruce Momjian wrote:\n> \n> > I have heard from Tom Lane that he will be out of town until Wednesday.\n> > As with any volunteer project, we can't hold people to dates and\n> > schedules. I know he is not done with everything he wants to do before\n> > beta, so we have to decide whether we should push ahead with beta now or\n> > wait for him to return. On compromise would be to push ahead with beta\n> > now and let Tom slip things in after beta begins.\n> >\n> > Comments?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Tue, 11 Sep 2001 13:50:20 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> (cough)\n> \n> Could someone look at my 'select from cursor foo' patch...?\n\nTom Lane has claimed that, plus the EXPLAIN patch. That's why they are\nstuck in the patch queue. He has said he will take care of those and I\nam sure he will.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 14:53:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> \n> Wait until everyone is ready/finished with their existing projects ...\n> this past week has thrown alot of turmoil into several lives that wasn't\n> entirely unexpected, but sad nonetheless ...\n> \n> Let's do a poll on Friday evening for who has stuff outstanding left, give\n> it until Monday for ppl to pop their heads up, and then try and deal with\n> Monday as a Beta Start ... we aren't in a rush ...\n\nOK, so we are clear at least until Monday. I will on vacation starting\nthis Sunday until October 16th. I will prepare the HISTORY file before\nI leave, and will run pgindent/jpgindent after beta starts while I am on\nvacation. Actually, if we don't do final before October 16, I can do it\nwhen I return.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 14:56:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> > (cough)\n> > \n> > Could someone look at my 'select from cursor foo' patch...?\n> \n> Tom Lane has claimed that, plus the EXPLAIN patch. That's why they are\n> stuck in the patch queue. He has said he will take care of those and I\n> am sure he will.\n\nI should have reported to you that the patch was stuck in the queue\nwaiting his review. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 16:04:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta timing" }, { "msg_contents": "On Tue, 11 Sep 2001, Bruce Momjian wrote:\n\n> >\n> > Wait until everyone is ready/finished with their existing projects ...\n> > this past week has thrown alot of turmoil into several lives that wasn't\n> > entirely unexpected, but sad nonetheless ...\n> >\n> > Let's do a poll on Friday evening for who has stuff outstanding left, give\n> > it until Monday for ppl to pop their heads up, and then try and deal with\n> > Monday as a Beta Start ... we aren't in a rush ...\n>\n> OK, so we are clear at least until Monday. I will on vacation starting\n> this Sunday until October 16th. I will prepare the HISTORY file before\n> I leave, and will run pgindent/jpgindent after beta starts while I am on\n> vacation. Actually, if we don't do final before October 16, I can do it\n> when I return.\n\nFigure that we'll be looking at Nov 1st or later for release, so you have\nloads of time to relax and get back :)\n\n\n", "msg_date": "Tue, 11 Sep 2001 17:34:43 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Beta timing" }, { "msg_contents": "> On Tue, 11 Sep 2001, Bruce Momjian wrote:\n> \n> > >\n> > > Wait until everyone is ready/finished with their existing projects ...\n> > > this past week has thrown alot of turmoil into several lives that wasn't\n> > > entirely unexpected, but sad nonetheless ...\n> > >\n> > > Let's do a poll on Friday evening for who has stuff outstanding left, give\n> > > it until Monday for ppl to pop their heads up, and then try and deal with\n> > > Monday as a Beta Start ... we aren't in a rush ...\n> >\n> > OK, so we are clear at least until Monday. I will on vacation starting\n> > this Sunday until October 16th. 
I will prepare the HISTORY file before\n> > I leave, and will run pgindent/jpgindent after beta starts while I am on\n> > vacation. Actually, if we don't do final before October 16, I can do it\n> > when I return.\n> \n> Figure that we'll be looking at Nov 1st or later for release, so you have\n> loads of time to relax and get back :)\n\nThat still give us two weeks to test the pgindent, which seems like\nenough time. I know Tom wants it to be run for this relase, and I know\nthe jdbc folks want it too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 17:36:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta timing" } ]
[ { "msg_contents": "Hello,\n\nThe following are proposed fixes to jdbc.sgml (line numbers are for the 7.1.3\ndoc). Comments?\n\njdbc.sgml\n\n[invalid column name in a SELECT statement]\n\nlines 579\n\nx PreparedStatement ps = con.prepareStatement(\"SELECT oid FROM images WHERE name=?\");\n\no PreparedStatement ps = con.prepareStatement(\"SELECT imgoid FROM images WHERE imgname=?\");\n\n\n[the modifier in the document is different from the one in the source]\n\nlines 1280\norg.postgresql.geometric.PGcircle\n\nx public double radius\n\no double radius // in the source, here\n\n\n[invalid return type]\n\nlines 1996\norg.postgresql.largeobject.LargeObject#read()\n\nx public void read(byte buf[],\n\no public int read(byte buf[],\n\n\n[the description of the argument types is incorrect]\n\nlines 2419\na constructor of org.postgresql.util.Serialize\n\nx public Serialize(Connection c,\n String type) throws SQLException\n\no public Serialize(org.postgresql.Connection c,\n String type) throws SQLException\n\nlines 2462, 2504\norg.postgresql.util.Serialize#create()\n\nx public static void create(Connection con,\n Object o) throws SQLException\n\no public static void create(org.postgresql.Connection con,\n Object o) throws SQLException\n\nlines 2518\norg.postgresql.util.Serialize#create()\n\nx public static void create(Connection con,\n Class o) throws SQLException\n\no public static void create(org.postgresql.Connection con,\n Class o) throws SQLException\n\n\n[Cannot access the page]\n\nlines 2910\n\nx See John Dumas's Java Crypt page for the original source.\n\n http://www.zeh.com/local/jfd/crypt.html\n\n(Sorry, I can't find a replacement page.)\n\nThanks.\n----\nHiroyuki Yatabe(yatabe@sra.co.jp)\nSoftware Research Associates, Inc.\n", "msg_date": "Tue, 11 Sep 2001 12:52:56 +0900", "msg_from": "Hiroyuki Yatabe <yatabe@sra.co.jp>", "msg_from_op": true, "msg_subject": "Proposals for jdbc.sgml(in 7.1.3 doc)" }, { "msg_contents": "Thanks. 
I am attaching the patch I applied.\n\n> Hello,\n> \n> Followings are proposed fixes to jdbc.sgml(line numbers are for 7.1.3\n> doc). Comments?\n> \n> jdbc.sgml\n> \n> [invalid column's name in a SELECT statement]\n> \n> lines 579\n> \n> x PreparedStatement ps = con.prepareStatement(\"SELECT oid FROM images WHERE name=?\");\n> \n> o PreparedStatement ps = con.prepareStatement(\"SELECT imgoid FROM images WHERE imgname=?\");\n> \n> \n> [the modifier in the document is different from the one in the source]\n> \n> lines 1280\n> org.postgresql.geometric.PGcircle\n> \n> x public double radius\n> \n> o double radius // in the source, here\n> \n> \n> [invalid return type]\n> \n> lines 1996\n> org.postgresql.largeobject.LargeObject#read()\n> \n> x public void read(byte buf[],\n> \n> o public int read(byte buf[],\n> \n> \n> [the discription of arguments type is incorrectly]\n> \n> lines 2419\n> a constructor of org.postgresql.util.Serialize\n> \n> x public Serialize(Connection c,\n> String type) throws SQLException\n> \n> o public Serialize(org.postgresql.Connection c,\n> String type) throws SQLException\n> \n> lines 2462, 2504\n> org.postgresql.util.Seriarize#create()\n> \n> x public static void create(Connection con,\n> Object o) throws SQLException\n> \n> o public static void create(org.postgresql.Connection con,\n> Object o) throws SQLException\n> \n> lines 2518\n> org.postgresql.util.Seriarize#create()\n> \n> x public static void create(Connection con,\n> Class o) throws SQLException\n> \n> o public static void create(org.postgresql.Connection con,\n> Class o) throws SQLException\n> \n> \n> [Cannot access to the page]\n> \n> lines 2910\n> \n> x See John Dumas's Java Crypt page for the original source.\n> \n> http://www.zeh.com/local/jfd/crypt.html\n> \n> (Sorry, I can't find a replacement page.)\n> \n> Thanks.\n> ----\n> Hiroyuki Yatabe(yatabe@sra.co.jp)\n> Software Research Associates, Inc.\n> \n> ---------------------------(end of 
broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/jdbc.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/jdbc.sgml,v\nretrieving revision 1.22\ndiff -c -r1.22 jdbc.sgml\n*** doc/src/sgml/jdbc.sgml\t2001/09/10 21:58:46\t1.22\n--- doc/src/sgml/jdbc.sgml\t2001/09/12 15:46:13\n***************\n*** 576,587 ****\n <classname>Statement</classname> class can equally be used.)\n \n <programlisting>\n! PreparedStatement ps = con.prepareStatement(\"SELECT oid FROM images WHERE name=?\");\n ps.setString(1, \"myimage.gif\");\n ResultSet rs = ps.executeQuery();\n if (rs != null) {\n while(rs.next()) {\n! InputStream is = rs.getBinaryInputStream(1);\n // use the stream in some way here\n is.close();\n }\n--- 576,587 ----\n <classname>Statement</classname> class can equally be used.)\n \n <programlisting>\n! PreparedStatement ps = con.prepareStatement(\"SELECT imgoid FROM images WHERE imgname=?\");\n ps.setString(1, \"myimage.gif\");\n ResultSet rs = ps.executeQuery();\n if (rs != null) {\n while(rs.next()) {\n! InputStream is = rs.getBinaryStream(1);\n // use the stream in some way here\n is.close();\n }\n***************\n*** 1277,1283 ****\n \n This is the center point\n \n! public double radius\n \n This is the radius\n \n--- 1277,1283 ----\n \n This is the center point\n \n! double radius\n \n This is the radius\n \n***************\n*** 1993,1999 ****\n \n <listitem>\n <synopsis>\n! public void read(byte buf[],\n int off,\n int len) throws SQLException\n </synopsis>\n--- 1993,1999 ----\n \n <listitem>\n <synopsis>\n! 
public int read(byte buf[],\n int off,\n int len) throws SQLException\n </synopsis>\n***************\n*** 2416,2422 ****\n \n Constructors\n \n! public Serialize(Connection c,\n String type) throws SQLException\n \n This creates an instance that can be used to serialize \n--- 2416,2422 ----\n \n Constructors\n \n! public Serialize(org.postgresql.Connection c,\n String type) throws SQLException\n \n This creates an instance that can be used to serialize \n***************\n*** 2459,2465 ****\n Throws: SQLException\n on error\n \n! public static void create(Connection con,\n Object o) throws SQLException\n \n This method is not used by the driver, but it creates a \n--- 2459,2465 ----\n Throws: SQLException\n on error\n \n! public static void create(org.postgresql.Connection con,\n Object o) throws SQLException\n \n This method is not used by the driver, but it creates a \n***************\n*** 2501,2507 ****\n Throws: SQLException\n on error\n \n! public static void create(Connection con,\n Object o) throws SQLException\n \n This method is not used by the driver, but it creates a \n--- 2501,2507 ----\n Throws: SQLException\n on error\n \n! public static void create(org.postgresql.Connection con,\n Object o) throws SQLException\n \n This method is not used by the driver, but it creates a \n***************\n*** 2907,2915 ****\n Contains static methods to encrypt and compare passwords with Unix \n encrypted passwords.\n \n! See John Dumas's Java Crypt page for the original source.\n \n! http://www.zeh.com/local/jfd/crypt.html\n \n Methods\n \n--- 2907,2915 ----\n Contains static methods to encrypt and compare passwords with Unix \n encrypted passwords.\n \n! See John Dumas's Java Crypt page for the original source. \n \n! 
(Invalid URL) http://www.zeh.com/local/jfd/crypt.html\n \n Methods", "msg_date": "Wed, 12 Sep 2001 11:48:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposals for jdbc.sgml(in 7.1.3 doc)" } ]
[ { "msg_contents": "I think that we have concluded that a bigger executable with the\nunicode conversion functionality does not have any performance\npenalty. So I would like to remove the --enable-unicode-conversion option\nso that it is always enabled if --enable-multibyte is specified.\nAny objection?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 11 Sep 2001 13:34:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "enable-unicode-conversion option?" }, { "msg_contents": "> I think that we have concluded that a bigger executable with the\n> unicode conversion functionality does not have any performance\n> penalty. So I would like to remove the --enable-unicode-conversion option\n> so that it is always enabled if --enable-multibyte is specified.\n> Any objection?\n\nMakes sense to me. We already have locale and multibyte. No need for a\nUnicode one too if we can do it automatically.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Sep 2001 01:08:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: enable-unicode-conversion option?" }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> I think that we have concluded that a bigger executable with the\n> unicode conversion functionality does not have any performance\n> penalty. So I would like to remove the --enable-unicode-conversion option\n> so that it is always enabled if --enable-multibyte is specified.\n> Any objection?\n\nWill there be #defines one can set manually to get a smaller executable?\n\nI'm contemplating porting pg to PocketPC/WinCE platform and there the \nsize does matter. 
And as WinCE is pure unicode platform there should be \nno need for 'conversions'.\n\n--------------------\nHannu\n", "msg_date": "Tue, 11 Sep 2001 10:25:25 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: enable-unicode-conversion option?" }, { "msg_contents": "Hannu Krosing writes:\n\n> I'm contemplating porting pg to PocketPC/WinCE platform and there the\n> size does matter. And as WinCE is pure unicode platform there should be\n> no need for 'conversions'.\n\nThe Unicode support is about 1 MB on disk. If you want to run a server\nand you don't have the extra 1 MB then you've got problems.\n\nPlus, you might have clients connecting that are not pure Unicode.\n\nOf course I don't completely understand the setup you have in mind. Is\nWinCE POSIX-compatible?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 11 Sep 2001 16:17:22 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: enable-unicode-conversion option?" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Hannu Krosing writes:\n> \n> > I'm contemplating porting pg to PocketPC/WinCE platform and there the\n> > size does matter. And as WinCE is pure unicode platform there should be\n> > no need for 'conversions'.\n> \n> The Unicode support is about 1 MB on disk. If you want to run a server\n> and you don't have the extra 1 MB then you've got problems.\n\nA server does not necessarily mean \"A Huge DataWarehousing Server\", it\nmay \nbe just a fancy addressbook with some growth potential.\n\n> Plus, you might have clients connecting that are not pure Unicode.\n\nI may refuse to serve them ;)\n\n> Of course I don't completely understand the setup you have in mind. Is\n> WinCE POSIX-compatible?\n\nno. 
it is an embedded os most compatible to Win32 (an OS found on some \nIntel x86 based PC's ;) so it would be much easier if a non-cygwin Win32 \nport was done first.\n\nit is run on Compaqs iPAQ and other handheld devices, where memory is\nstill\na bit problem (16MB of Flash ROM and 32-64 of RAM in standard\nconfigurations,\niPAQ has 200MHz StrongARM processor) While that may seem very little\nnow, I've \nrun postgres on much worse hardware only a few years ago (40MHz 486sx\nwith \n16Mb ram and 20Mb HDD)\n\nThe port, if ever done, will require much mucking about in internals and\nmay \nnot be possible inside the main source tree anyway so you should not\nworry too \nmuch if removing --enable-unicode-conversion would make some things much\neasier.\n\nThe clients will mostly connect from the same device and quite likely\none \nat a time. \n\nAs linux is already ported to iPAQ it may be easier to use Linux version\nof \npostgres there but even then space consuption is of some concern.\n\nOTOH it seems to be a good goal to have Postgres be able to run\neverywhere \nwhere Linux can ;)\n\n---------------\nHannu\n", "msg_date": "Tue, 11 Sep 2001 19:31:33 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: enable-unicode-conversion option?" }, { "msg_contents": "It seems to be resonable to leave #define UNICODE_CONVERSION somewhere\n(maybe in pg_config.h).\n--\nTatsuo Ishii\n\n> Peter Eisentraut wrote:\n> > \n> > Hannu Krosing writes:\n> > \n> > > I'm contemplating porting pg to PocketPC/WinCE platform and there the\n> > > size does matter. And as WinCE is pure unicode platform there should be\n> > > no need for 'conversions'.\n> > \n> > The Unicode support is about 1 MB on disk. 
If you want to run a server\n> > and you don't have the extra 1 MB then you've got problems.\n> \n> A server does not necessaryly mean \"A Huge DataWarehousing Server\", it\n> may \n> be just a fancy addressbook with some growth potential.\n> \n> > Plus, you might have clients connecting that are not pure Unicode.\n> \n> I may refuse to serve them ;)\n> \n> > Of course I don't completely understand the setup you have in mind. Is\n> > WinCE POSIX-compatible?\n> \n> no. it is an embedded os most compatible to Win32 (an OS found on some \n> Intel x86 based PC's ;) so it would be much easier if a non-cygwin Win32 \n> port was done first.\n> \n> it is run on Compaqs iPAQ and other handheld devices, where memory is\n> still\n> a bit problem (16MB of Flash ROM and 32-64 of RAM in standard\n> configurations,\n> iPAQ has 200MHz StrongARM processor) While that may seem very little\n> now, I've \n> run postgres on much worse hardware only a few years ago (40MHz 486sx\n> with \n> 16Mb ram and 20Mb HDD)\n> \n> The port, if ever done, will require much mucking about in internals and\n> may \n> not be possible inside the main source tree anyway so you should not\n> worry too \n> much if removing --enable-unicode-conversion would make some things much\n> easier.\n> \n> The clients will mostly connect from the same device and quite likely\n> one \n> at a time. \n> \n> As linux is already ported to iPAQ it may be easier to use Linux version\n> of \n> postgres there but even then space consuption is of some concern.\n> \n> OTOH it seems to be a good goal to have Postgres be able to run\n> everywhere \n> where Linux can ;)\n> \n> ---------------\n> Hannu\n> \n", "msg_date": "Wed, 12 Sep 2001 09:59:18 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: enable-unicode-conversion option?" 
}, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> It seems to be resonable to leave #define UNICODE_CONVERSION somewhere\n> (maybe in pg_config.h).\n\nThat's what I was after.\n\n--------------\nHannu\n", "msg_date": "Wed, 12 Sep 2001 12:23:19 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: enable-unicode-conversion option?" } ]
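The feature debated in the thread above is client-encoding to server-encoding translation, whose tables make the backend roughly 1 MB bigger. A minimal sketch of that idea, using Python's codec machinery as a stand-in for PostgreSQL's conversion functions (illustration only, not backend code):

```python
# Sketch of client-encoding conversion, the functionality behind
# --enable-unicode-conversion.  Python's codecs stand in for the backend's
# conversion tables; this is an illustration, not PostgreSQL code.

def to_server_encoding(client_bytes: bytes, client_enc: str, server_enc: str) -> bytes:
    """Re-encode bytes received from a client into the server's encoding."""
    return client_bytes.decode(client_enc).encode(server_enc)

# A Japanese string sent by an EUC_JP client to a UNICODE (UTF-8) server:
euc = "日本語".encode("euc_jp")
utf8 = to_server_encoding(euc, "euc_jp", "utf-8")
assert utf8 == "日本語".encode("utf-8")
```

On a pure-Unicode platform such as WinCE both encodings could be pinned to UTF-8, making the call a no-op, which is the build-time case the proposed `#define UNICODE_CONVERSION` would cover.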
[ { "msg_contents": "Hello,\n\nFollowing is a proposed fix to jdbc.sgml(line numbers are\nfor 7.1.3 doc). Comments?\n\n[using an undefined method of ResultSet]\n\nlines 564\n\nx InputStream is = rs.getBinaryInputStream(1);\n\no InputStream is = rs.getBinaryStream(1);\n\n\nThanks.\n----\nHiroyuki Yatabe(yatabe@sra.co.jp)\nSoftware Research Associates, Inc.\n", "msg_date": "Tue, 11 Sep 2001 14:10:57 +0900", "msg_from": "Hiroyuki Yatabe <yatabe@sra.co.jp>", "msg_from_op": true, "msg_subject": "A proposal for jdbc.sgml(in 7.1.3 doc)" }, { "msg_contents": "\nChange applied. Thanks.\n\n\n> Hello,\n> \n> Following is a proposed fix to jdbc.sgml(line numbers are\n> for 7.1.3 doc). Comments?\n> \n> [using an undefined method of ResultSet]\n> \n> lines 564\n> \n> x InputStream is = rs.getBinaryInputStream(1);\n> \n> o InputStream is = rs.getBinaryStream(1);\n> \n> \n> Thanks.\n> ----\n> Hiroyuki Yatabe(yatabe@sra.co.jp)\n> Software Research Associates, Inc.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 11:39:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [JDBC] A proposal for jdbc.sgml(in 7.1.3 doc)" } ]
[ { "msg_contents": "Hello,\n\nAttached patch is correction for 'doc/jdbc.sgml' of PostgreSQL 7.1.3.\n\nCorrection content:\n * I revised a mistake of type (copy and paste).\n * I revised multiplicity of description.\n\nPlease review,\n\n--\nRyouichi Matsuda", "msg_date": "Tue, 11 Sep 2001 20:41:24 +0900", "msg_from": "Ryouichi Matsuda <r-matuda@sra.co.jp>", "msg_from_op": true, "msg_subject": "Patch for doc/jdbc.sgml" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hello,\n> \n> Attached patch is correction for 'doc/jdbc.sgml' of PostgreSQL 7.1.3.\n> \n> Correction content:\n> * I revised a mistake of type (copy and paste).\n> * I revised multiplicity of description.\n> \n> Please review,\n> \n> --\n> Ryouichi Matsuda\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 00:30:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Patch for doc/jdbc.sgml" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> Hello,\n> \n> Attached patch is correction for 'doc/jdbc.sgml' of PostgreSQL 7.1.3.\n> \n> Correction content:\n> * I revised a mistake of type (copy and paste).\n> * I revised multiplicity of description.\n> \n> Please review,\n> \n> --\n> Ryouichi Matsuda\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 11:55:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Patch for doc/jdbc.sgml" } ]
[ { "msg_contents": "> > The x = NULL hack keeps biting people. Innocent people \n> should not be \n> > exposed to incorrect behaviour because of (supposed) MS Access \n> > breakage. I strongly urge that we do one of the following:\n> >\n> > 1) Provide a tunable knob to turn this on (cf. KSQO)\n> >\n> > 2) Confine this to the ODBC driver somehow (which could be done via \n> > #1)\n> >\n> > Actually, last time we discussed this there was some \n> confusion whether \n> > Access actually had the bug in question. That might be \n> worth figuring \n> > out.\n> >\n> \n> A while back I tested Oracle and MSSQL7 for this -- neither \n> support it. See: http://fts.postgresql.org/db/mw/msg.html?mid=1021527\n> \n> I just checked MS Access 2000 -- it also returns no records \n> on x = NULL versus the correct answer with x IS NULL, at \n> least for a simple query. IIRC, someone mentioned that the \n> original issue was limited to the use of filtered forms in \n> Access, or something like that. But ISTM, that if neither \n> Oracle nor even MSSQL support the syntax, then PostgreSQL \n> should not either.\n\nMSSQL supports =NULL syntax if you set ANSI_NULLS=OFF (can be set per\nconnection, and you can also specify which is default for a certain\ndatabase or installation).\nI believe that ANSI_NULLS=OFF was default in SQL Server <= 6.5, and\nANSI_NULLS=ON is default in >= 7.0.\n\nAnd yes, if you create a Query with Access, it uses IS NULL / IS NOT\nNULL. The issue was with filtered forms only.\n\n//Magnus\n", "msg_date": "Tue, 11 Sep 2001 14:15:20 +0200", "msg_from": "\"Magnus Hagander\" <mha@sollentuna.net>", "msg_from_op": true, "msg_subject": "Re: x = NULL" } ]
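The standard behaviour the posters above describe, where `x = NULL` matches no rows and `x IS NULL` is the correct test, can be checked against any conforming engine. Here is a hedged sketch using Python's bundled sqlite3 as a stand-in:

```python
import sqlite3

# SQL-standard NULL comparison: "x = NULL" evaluates to unknown, so it matches
# no rows; "x IS NULL" is the correct test.  sqlite3 (bundled with Python)
# follows the same rule as Oracle, MSSQL with ANSI_NULLS=ON, and PostgreSQL's
# default behaviour.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,)])

eq_null = con.execute("SELECT count(*) FROM t WHERE x = NULL").fetchone()[0]
is_null = con.execute("SELECT count(*) FROM t WHERE x IS NULL").fetchone()[0]
assert eq_null == 0   # '=' never matches NULL
assert is_null == 1   # IS NULL finds the row
```

The `x = NULL` hack under discussion makes PostgreSQL deviate from this rule, which is why the thread proposes confining it behind a knob or the ODBC driver.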
[ { "msg_contents": "I recently upgraded to 7.1.3. I was experimenting with a script to \nexport data from FoxPro into an SQL file and multiple data files. The \nSQL file creates the tables, indexes, foreign keys, etc, and calls the \nCOPY command to load the data from the appropriate data files.\n\nIt appears, and I could easily be mistaken, that the COPY command does \nnot allow NULLs into a timestamp field, even though the field is defined \nto accept nulls. Actually, it appears that the behavior of the COPY \ncommand changed as I believe it would accept nulls in the prior release \n7.1.2.\n\nIn any case, I'm using the COPY command WITH NULL AS '^N'. And the \ndatafile contains ^N in timestamp fields that could be NULL, but the \ncommand fails with an invalid timestamp error, referencing the first \nline that contains the '^N' null sequence.\n\nAny thoughts?\n\nThanks,\nDwayne\n\n\n\n", "msg_date": "Tue, 11 Sep 2001 11:39:49 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "COPY command WITH NULLs bug?" }, { "msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n> It appears, and I could easily be mistaken, that the COPY command does \n> not allow NULLs into a timestamp field, even though the field is defined \n> to accept nulls.\n\nNot sure what your problem is, but that's not it. Perhaps you've got a\nproblem with stray carriage returns (\\r\\n instead of \\n), or what you\nhave in the file isn't actually equal to what you specified as the WITH\nNULL string, or something else.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Sep 2001 12:22:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY command WITH NULLs bug? " } ]
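Tom's diagnosis above comes down to one rule: a field is treated as NULL only if its bytes equal the WITH NULL string exactly. A toy sketch of that matching rule (not PostgreSQL's actual COPY code) shows how a DOS-style line ending defeats it:

```python
# Sketch of why COPY ... WITH NULL AS '^N' can reject a file: the field's
# bytes must equal the null string exactly.  A DOS-style line ending leaves a
# stray \r on the last field, so '^N\r' no longer matches '^N'.  (This is an
# illustration of the matching rule, not PostgreSQL's actual COPY code.)

NULL_STRING = "^N"

def parse_copy_line(line: str, null_string: str = NULL_STRING):
    fields = line.rstrip("\n").split("\t")   # note: \r is NOT stripped
    return [None if f == null_string else f for f in fields]

unix_line = "1\t^N\n"
dos_line = "1\t^N\r\n"

assert parse_copy_line(unix_line) == ["1", None]      # null marker recognised
assert parse_copy_line(dos_line) == ["1", "^N\r"]     # stray \r defeats the match
```

The non-matching field then reaches the timestamp input routine as the literal text, which produces exactly the "invalid timestamp" error reported, referencing the first line containing the marker.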
[ { "msg_contents": "Hi all,\n\nAttached is a patch that adds support for specifying a location for\nindexes via the \"create database\" command.\n\nI believe this patch is complete, but it is my first .\n\n\nThanks\nJim", "msg_date": "Tue, 11 Sep 2001 14:23:12 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Index location patch for review" }, { "msg_contents": "> Hi all,\n> \n> Attached is a patch that adds support for specifying a location for\n> indexes via the \"create database\" command.\n> \n> I believe this patch is complete, but it is my first .\n\nThis patch allows index locations to be specified as different from data\nlocations. Is this a feature direction we want to go in? Comments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 10:26:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" }, { "msg_contents": "> > Attached is a patch that adds support for specifying a location for\n> > indexes via the \"create database\" command.\n> This patch allows index locations to be specified as different from data\n> locations. Is this a feature direction we want to go in? 
Comments?\n\nI have not looked at the patch, either implementation or proposed\nsyntax, but in general we certainly want to head in a direction where we\nhave full control over placement of storage resources.\n\n - Thomas\n", "msg_date": "Wed, 12 Sep 2001 15:10:12 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" }, { "msg_contents": "\nI am very new to this mailinglist so I apologize if I start talking early but\nI've been working as a sysadmin and that kind of problems for a long while\nnow and my suggestion is that it is a start but I think that we should aim a\nlittle higher than this and use something more like the Oracle approach\ninstead. Where they introduce an abstraction layer in the form of a\ntablespace. And this tablespace is then referenced from the create table or\ncreate index instead.\neg:\ntable -> tablespace -> path to physical storage\nindex -> tablespace -> path to physical storage\n\nAdvantages:\nChanges can be done to storage whithout need to change create scripts for db,\ntables and so on.\nDesigners can specify in which tablespace tables/indexes should reside based\non usage.\nSysadmins can work with tablespaces and change paths without changing\nanything in the database/table/index definitions.\n\nThe alternative is symlinks to distribute the load and that is not a pretty\nsight dba-wise.\n\nHope you can bare with me on this, since I think it is an very important\nissue.\nI'm unfortunately not a fast coder yet (but I'm getting faster :-) ). But I\ncould start writing a spec if someone is interrested.\n\nBruce Momjian wrote:\n\n> > Hi all,\n> >\n> > Attached is a patch that adds support for specifying a location for\n> > indexes via the \"create database\" command.\n> >\n> > I believe this patch is complete, but it is my first .\n>\n> This patch allows index locations to be specified as different from data\n> locations. 
Is this a feature direction we want to go in? Comments?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Wed, 12 Sep 2001 18:36:34 +0200", "msg_from": "Stefan Rindeskar <sr@globecom.net>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" } ]
[ { "msg_contents": "I have something like this:\n\nbilly=# EXPLAIN SELECT * from kursy where id_trasy=1 and\ndata_kursu=date('2001-12-12');\nNOTICE: QUERY PLAN:\nIndex Scan using pp on kursy (cost=0.00..51.55 rows=1 width=18)\n\nbilly=# EXPLAIN SELECT * from kursy where id_trasy=1 \nand data_kursu='2001-12-12'; \nNOTICE: QUERY PLAN:\nIndex Scan using pp on kursy (cost=0.00..2.02 rows=1 width=18)\n\nWhy the first expression is 25 times slower?\nI suppose, that planner thinks, that date('2001-12-12') is a dynamic\nvariable - is it true? I found this problem when i had to add date and\ninteger. Little \"iscachable\" function helped me, but I still don't know\nwhy it happened.\n\nCREATE FUNCTION date_sum(date,integer) returns date AS'\nBEGIN\n return $1+$2;\nEND;\n'LANGUAGE 'plpgsql' WITH (iscachable);\n\n\n", "msg_date": "Wed, 12 Sep 2001 14:22:18 +0200", "msg_from": "Tomasz Myrta <jasiek@lamer.pl>", "msg_from_op": true, "msg_subject": "dynamic-static date" }, { "msg_contents": "Tomasz Myrta <jasiek@lamer.pl> writes:\n> Why the first expression is 25 times slower?\n\nHard to say, when you haven't shown us the schema. (Column datatypes,\ndefinitions of available indexes, etc are all critical information for\nthis sort of question.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Sep 2001 13:38:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dynamic-static date " }, { "msg_contents": "Tom Lane wrote:\n> \n> Tomasz Myrta <jasiek@lamer.pl> writes:\n> > Why the first expression is 25 times slower?\n> \n> Hard to say, when you haven't shown us the schema. (Column datatypes,\n> definitions of available indexes, etc are all critical information for\n> this sort of question.)\nOK \nDon't panic with names, They are polish ;-)\n\n1. 
TABLES\ncreate table TRASY(\n id_trasy integer not null PRIMARY KEY,\n del date default '9999-12-31',\n nazwa varchar (80)\n);\n\ncreate table KURSY(\n id_kursu integer not null PRIMARY KEY,\n id_trasy integer not null references TRASY,\n data_kursu date not null,\n limit_miejsc smallint not null\n);\n\n2. INDEXES\n\n trasy | CREATE UNIQUE INDEX trasy_pkey ON trasy USING btree\n(id_trasy int4_ops)\n kursy | CREATE UNIQUE INDEX kursy_pkey ON kursy USING btree\n(id_kursu int4_ops)\n kursy | CREATE INDEX ind_kurs_ ON kursy USING btree (id_trasy\nint4_ops, data_kursu date_ops)\n\n3. TEST\n\nThis time kursy has less rows:\n\nsaik=# EXPLAIN SELECT * from kursy where id_trasy=1 and\nsaik-# data_kursu=date('2001-12-12');\nNOTICE: QUERY PLAN:\n\nIndex Scan using ind_kurs_ on kursy (cost=0.00..8.19 rows=1 width=14)\n\nEXPLAIN\nsaik=# EXPLAIN SELECT * from kursy where id_trasy=1 \nsaik-# and data_kursu='2001-12-12'; \nNOTICE: QUERY PLAN:\n\nIndex Scan using ind_kurs_ on kursy (cost=0.00..2.02 rows=1 width=14)\n\nI think that's all\n\nTomek\n\n", "msg_date": "Thu, 13 Sep 2001 23:18:02 +0200", "msg_from": "Tomasz Myrta <jasiek@lamer.pl>", "msg_from_op": true, "msg_subject": "dynamic-static date once again" }, { "msg_contents": "Tomasz Myrta <jasiek@lamer.pl> writes:\n> create table KURSY(\n> id_kursu integer not null PRIMARY KEY,\n> id_trasy integer not null references TRASY,\n> data_kursu date not null,\n> limit_miejsc smallint not null\n> );\n> CREATE INDEX ind_kurs_ ON kursy USING btree (id_trasy\n> int4_ops, data_kursu date_ops)\n\n> saik=# EXPLAIN SELECT * from kursy where id_trasy=1 and\n> saik-# data_kursu=date('2001-12-12');\n> NOTICE: QUERY PLAN:\n\n> Index Scan using ind_kurs_ on kursy (cost=0.00..8.19 rows=1 width=14)\n\n> EXPLAIN\n> saik=# EXPLAIN SELECT * from kursy where id_trasy=1 \n> saik-# and data_kursu='2001-12-12'; \n> NOTICE: QUERY PLAN:\n\n> Index Scan using ind_kurs_ on kursy (cost=0.00..2.02 rows=1 width=14)\n\nOkay, the reason for the difference in cost 
estimate (which you should\nnever mistake for reality, btw ;-)) is that the second example is using\nboth columns of the index, whereas the first example is using only the\nfirst index column --- the restriction data_kursu=date('2001-12-12')\nwill be checked explicitly at each row, not implemented as an indexscan\nbound.\n\nThe cause is precisely that date() is considered a noncachable function,\nand so the planner doesn't think that date('2001-12-12') is a constant.\nAnd the reason for that is that the date/time datatypes have a construct\ncalled 'current', which is indeed not a constant.\n\nI think we have agreed that 'current' is a Bad Idea and should be\neliminated from the date/time datatypes --- but until that happens,\nforcing the constant to be considered a constant is your only\nalternative. Write\n\tdate '2001-12-12'\nor\n\t'2001-12-12'::date\ninstead of writing date().\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 17:48:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dynamic-static date once again " }, { "msg_contents": "...\n> I think we have agreed that 'current' is a Bad Idea and should be\n> eliminated from the date/time datatypes...\n\nI've started purging it from the timestamp code I'm working on for 7.2.\nShould be gone by the start of beta...\n\n - Thomas\n", "msg_date": "Mon, 17 Sep 2001 06:26:31 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: dynamic-static date once again" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> I think we have agreed that 'current' is a Bad Idea and should be\n>> eliminated from the date/time datatypes...\n\n> I've started purging it from the timestamp code I'm working on for 7.2.\n\nOh good. 
Let's not forget to review the pg_proc entries after that\nhappens, to see which ones can safely be marked cachable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2001 10:11:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: dynamic-static date once again " } ]
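The planner behaviour discussed in this thread can be sketched as a toy constant-folder: a function call is collapsed to a plan-time constant (and thus usable as an index bound on the second index column) only when the function is marked cachable. The names below are hypothetical illustration, not the real planner:

```python
# Toy constant-folder illustrating the thread's planner issue.  A call is
# folded to a plan-time constant only if the function is marked cachable;
# date() is not, because in these releases it could return 'current'.
# Hypothetical names; this is not the PostgreSQL planner.

CACHABLE = {"date_sum"}   # e.g. the plpgsql wrapper created WITH (iscachable)

def fold(func, args):
    """Return a plan-time Const node if foldable, else None (evaluated per row)."""
    if func in CACHABLE:
        return ("Const", func, args)
    return None

assert fold("date_sum", ("2001-12-12", 1)) is not None  # both index columns usable
assert fold("date", ("2001-12-12",)) is None            # checked row by row instead
```

Writing the literal `date '2001-12-12'` sidesteps the question entirely, since a literal needs no folding at all.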
[ { "msg_contents": "Ciao,\nI had the need to exclude tables from the dump so I made this patch,\nI do something like\n\npg_dump -X \\\"Test_*\\\" -X \\\"Devel*\\\" test\n\nI'm not a C guru, but it work, the only thing I was unable to get rid\nof is the dump of sequences for that table,\n\nso I have to add -X tablename_id_seq\n\nIf you can suggest a way to work around it, I will try to fix it\n\nhope it can be useful to the project\n\nbye\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino\nAnagni FR\nItaly", "msg_date": "Wed, 12 Sep 2001 17:25:16 +0200", "msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>", "msg_from_op": true, "msg_subject": "pg_dump patch: Allow -X'exclude table from dump by pattern'" }, { "msg_contents": "Giuseppe Tanzilli - CSF writes:\n\n> Ciao,\n> I had the need to exclude tables from the dump so I made this patch,\n> I do something like\n>\n> pg_dump -X \\\"Test_*\\\" -X \\\"Devel*\\\" test\n\nWe already have an option -t to select the table name to dump. This could\nbe expanded to interpret the name as a pattern of some kind (RE or LIKE\npattern). If you want to work on that I think no one would object.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 12 Sep 2001 20:47:49 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump patch: Allow -X'exclude table from dump by" } ]
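The exclusion matching described in the patch above, shell globs such as `Test_*` tested against every table name, can be sketched with Python's fnmatch module (illustration only; pg_dump itself is C, and as the poster notes the real patch must also catch each excluded table's serial sequence, e.g. `tablename_id_seq`):

```python
import fnmatch

# Sketch of per-table exclusion: each -X argument is a shell-style glob
# tested against every table name.  Illustration of the matching only, not
# pg_dump's implementation.

def excluded(table: str, patterns) -> bool:
    return any(fnmatch.fnmatchcase(table, p) for p in patterns)

patterns = ["Test_*", "Devel*"]
tables = ["Test_orders", "Devel_scratch", "customers", "Test_orders_id_seq"]
kept = [t for t in tables if not excluded(t, patterns)]
assert kept == ["customers"]
```

Note that in this example the sequence name happens to match `Test_*` anyway; the awkward case raised in the thread is a sequence whose name does not match any of the given patterns.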
[ { "msg_contents": "> > Attached is a patch that adds support for specifying a location for\n> > indexes via the \"create database\" command.\n> > \n> > I believe this patch is complete, but it is my first .\n> \n> This patch allows index locations to be specified as \n> different from data locations. Is this a feature direction\n> we want to go in? Comments?\n\nHaving the table and index on separate drives can do wonders for i/o\nperformance. :)\n\ndarrenk\n\n", "msg_date": "Wed, 12 Sep 2001 11:42:44 -0400", "msg_from": "Darren King <DarrenK@Routescape.com>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review" } ]
[ { "msg_contents": "> > Attached is a patch that adds support for specifying a\n> > location for indexes via the \"create database\" command.\n> > \n> > I believe this patch is complete, but it is my first .\n> \n> This patch allows index locations to be specified as\n> different from data locations. Is this a feature direction\n> we want to go in? Comments?\n\nThe more general and \"standard\" way to go are TABLESPACEs.\nBut probably proposed feature will be compatible with\ntablespaces, when we'll got them: we could use new \"create\ndatabase\" syntax to specify default tablespace for indices.\nUnfortunately I removed message with patch, can you send it\nto me, Bruce?\n\nVadim\n", "msg_date": "Wed, 12 Sep 2001 09:58:20 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> The more general and \"standard\" way to go are TABLESPACEs.\n> But probably proposed feature will be compatible with\n> tablespaces, when we'll got them:\n\nWill it be? I'm afraid of creating a backwards-compatibility\nproblem for ourselves when it comes time to implement tablespaces.\n\nAt the very least I'd like to see some information demonstrating\nhow much benefit there is to this proposed patch, before we\nconsider whether to adopt it. If there's a significant performance\nbenefit to splitting a PG database along the table-vs-index divide,\nthen it's interesting as a short-term improvement ... 
but Jim didn't\neven make that assertion, let alone provide evidence to back it up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Sep 2001 13:54:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review " }, { "msg_contents": "> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> > The more general and \"standard\" way to go are TABLESPACEs.\n> > But probably proposed feature will be compatible with\n> > tablespaces, when we'll got them:\n> \n> Will it be? I'm afraid of creating a backwards-compatibility\n> problem for ourselves when it comes time to implement tablespaces.\n> \n> At the very least I'd like to see some information demonstrating\n> how much benefit there is to this proposed patch, before we\n> consider whether to adopt it. If there's a significant performance\n> benefit to splitting a PG database along the table-vs-index divide,\n> then it's interesting as a short-term improvement ... but Jim didn't\n> even make that assertion, let alone provide evidence to back it up.\n\nIf that is your only concern, I can tell you for sure that if the\nlocations are on different drives, there will be a performance benefit. \nIt is standard database practice to put indexes on different drives than\ndata. In fact, sometimes you want to put two tables that are frequently\njoined on separate drives.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 14:22:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" }, { "msg_contents": "...\n> At the very least I'd like to see some information demonstrating\n> how much benefit there is to this proposed patch, before we\n> consider whether to adopt it. 
If there's a significant performance\n> benefit to splitting a PG database along the table-vs-index divide,\n> then it's interesting as a short-term improvement ... but Jim didn't\n> even make that assertion, let alone provide evidence to back it up.\n\nClearly there can be a *storage management* benefit to having control\nover what gets put where, so this does not need to be justified strictly\non a performance basis.\n\nFor features like this, we will feel free to evolve them or\nrevolutionize them with further development, so I'm not worried about\nthe backward compatibility issue for cases like this.\n\nComments?\n\n - Thomas\n", "msg_date": "Wed, 12 Sep 2001 18:24:34 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" }, { "msg_contents": "> ...\n> > At the very least I'd like to see some information demonstrating\n> > how much benefit there is to this proposed patch, before we\n> > consider whether to adopt it. If there's a significant performance\n> > benefit to splitting a PG database along the table-vs-index divide,\n> > then it's interesting as a short-term improvement ... but Jim didn't\n> > even make that assertion, let alone provide evidence to back it up.\n> \n> Clearly there can be a *storage management* benefit to having control\n> over what gets put where, so this does not need to be justified strictly\n> on a performance basis.\n> \n> For features like this, we will feel free to evolve them or\n> revolutionize them with further development, so I'm not worried about\n> the backward compatibility issue for cases like this.\n> \n> Comments?\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 14:26:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review" } ]
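Stefan's proposed indirection from the index-location thread can be sketched as two small mappings: each object names a tablespace, and only the tablespace records a filesystem path, so storage can be moved without touching any table or index definition. This is a hypothetical structure for illustration, not PostgreSQL's eventual implementation:

```python
# Sketch of the tablespace abstraction proposed in the thread:
#   table/index -> tablespace -> path to physical storage
# Hypothetical names and layout; not PostgreSQL code.

tablespaces = {"fast_disk": "/vol1/pgdata", "big_disk": "/vol2/pgdata"}
objects = {("table", "kursy"): "big_disk", ("index", "ind_kurs_"): "fast_disk"}

def storage_path(kind: str, name: str) -> str:
    """Resolve an object's on-disk location through its tablespace."""
    return tablespaces[objects[(kind, name)]]

assert storage_path("index", "ind_kurs_") == "/vol1/pgdata"

# A sysadmin relocates the tablespace; no object definitions change:
tablespaces["fast_disk"] = "/vol3/pgdata"
assert storage_path("index", "ind_kurs_") == "/vol3/pgdata"
```

Contrast this with the patch under review, where the path is fixed at CREATE DATABASE time, and with the symlink workaround the thread calls unpleasant to administer.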
[ { "msg_contents": "New problems with CVSup. We should all upgrade asap, though I'm not sure\nof the current status of builds for non-FreeBSD machines. Marc, could we\npossibly install this on the postgresql.org machine(s)?\n\n - Thomas\n\n-----Original Message-----\nFrom: jdp@polstra.com [mailto:jdp@polstra.com]\nSent: Sunday, September 09, 2001 19:40 PM\nTo: nanbor@cs.wustl.edu\nSubject: HEADS UP: CVSup timestamp bug\n\n\nThis morning a bug was discovered in most versions of CVSup up to\nand including SNAP_16_1c. The bug causes all newly-updated files to\nreceive incorrect timestamps. Usually the files receive timestamps\nfrom early in 1970. This bug has been present for a very long time,\nbut it only began to have an effect when the Unix representation of\nthe date and time passed 1,000,000,000. That occurred on 9 September\n2001 at 01:46:40 UTC. Yes, other people had Y2K bugs, but I managed\nto produce an S1G bug.\n\nI have fixed the bug and have released a new snapshot of CVSup,\nSNAP_16_1d. I have also created binary packages for FreeBSD-4.x which\ncan be installed using \"pkg_add\". For information about updating your\nCVSup installation, look here:\n\n http://people.freebsd.org/~jdp/s1g/\n\nTo fix the bug, both the client and the server need to be upgraded to\nSNAP_16_1d. The FreeBSD mirror site maintainers have been working\nfeverishly to upgrade their installations. Many of them are already\nupgraded, and the rest will be upgraded soon. Meanwhile, all CVSup\nusers should upgrade their CVSup installations.\n\nI apologize for the inconvenience caused by this bug, and thank you\nin advance for your patience.\n\nJohn Polstra\n", "msg_date": "Wed, 12 Sep 2001 17:51:10 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "[Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" }, { "msg_contents": "\n\nGot it upgraded on the cvsup.postgresql.org server ... 
still have to do\nthe other servers ...\n\nOn Wed, 12 Sep 2001, Thomas Lockhart wrote:\n\n> New problems with CVSup. We should all upgrade asap, though I'm not sure\n> of the current status of builds for non-FreeBSD machines. Marc, could we\n> possibly install this on the postgresql.org machine(s)?\n>\n> - Thomas\n>\n> -----Original Message-----\n> From: jdp@polstra.com [mailto:jdp@polstra.com]\n> Sent: Sunday, September 09, 2001 19:40 PM\n> To: nanbor@cs.wustl.edu\n> Subject: HEADS UP: CVSup timestamp bug\n>\n>\n> This morning a bug was discovered in most versions of CVSup up to\n> and including SNAP_16_1c. The bug causes all newly-updated files to\n> receive incorrect timestamps. Usually the files receive timestamps\n> from early in 1970. This bug has been present for a very long time,\n> but it only began to have an effect when the Unix representation of\n> the date and time passed 1,000,000,000. That occurred on 9 September\n> 2001 at 01:46:40 UTC. Yes, other people had Y2K bugs, but I managed\n> to produce an S1G bug.\n>\n> I have fixed the bug and have released a new snapshot of CVSup,\n> SNAP_16_1d. I have also created binary packages for FreeBSD-4.x which\n> can be installed using \"pkg_add\". For information about updating your\n> CVSup installation, look here:\n>\n> http://people.freebsd.org/~jdp/s1g/\n>\n> To fix the bug, both the client and the server need to be upgraded to\n> SNAP_16_1d. The FreeBSD mirror site maintainers have been working\n> feverishly to upgrade their installations. Many of them are already\n> upgraded, and the rest will be upgraded soon. Meanwhile, all CVSup\n> users should upgrade their CVSup installations.\n>\n> I apologize for the inconvenience caused by this bug, and thank you\n> in advance for your patience.\n>\n> John Polstra\n>\n\n", "msg_date": "Sat, 15 Sep 2001 22:51:51 -0400 (EDT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" }, { "msg_contents": "> Got it upgraded on the cvsup.postgresql.org server ... still have to do\n> the other servers ...\n\nI'm hopelessly confused on what servers we have, and whether that one is\nnew, old, online, offline, being built, or being decommissioned. Can I\nuse this machine (or virtual machine) for cvsup now?\n\n - Thomas\n", "msg_date": "Mon, 17 Sep 2001 06:28:51 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" }, { "msg_contents": "On Mon, 17 Sep 2001, Thomas Lockhart wrote:\n\n> > Got it upgraded on the cvsup.postgresql.org server ... still have to do\n> > the other servers ...\n>\n> I'm hopelessly confused on what servers we have, and whether that one is\n> new, old, online, offline, being built, or being decommissioned. Can I\n> use this machine (or virtual machine) for cvsup now?\n\nyes, but due to the CVSROOT move yesterday, its only as current as before\nthe move ... I just have to change around its pointers this morning ...\n\nRight now, we have:\n\nanoncvs.postgresql.org\n\t== cvsup.postgresql.org\n\t\t- same machine, brand new\ncvs.postgresql.org\n\t== www.postgresql.org\n\t== mail.postgresql.org\n\t== ssh/login server\n\t\t- same machine\n\nrsync.postgresql.org\n\t== ftp.postgresql.org\n\t== primary www server\n\t\t- old server, slowly being migrated between the above two\n\t\t machines (rsync -> anoncvs, ftp/primary -> cvs)\n\n\n", "msg_date": "Mon, 17 Sep 2001 08:02:32 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" }, { "msg_contents": "I am still unable to update my cvs tree! 
What server, username, password,\ncvsroot and module do I need to use?\n\nThanks,\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Monday, 17 September 2001 8:03 PM\n> To: Thomas Lockhart\n> Cc: Hackers List\n> Subject: Re: [HACKERS] [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup\n> timestamp bug]]\n>\n>\n> On Mon, 17 Sep 2001, Thomas Lockhart wrote:\n>\n> > > Got it upgraded on the cvsup.postgresql.org server ... still\n> have to do\n> > > the other servers ...\n> >\n> > I'm hopelessly confused on what servers we have, and whether that one is\n> > new, old, online, offline, being built, or being decommissioned. Can I\n> > use this machine (or virtual machine) for cvsup now?\n>\n> yes, but due to the CVSROOT move yesterday, its only as current as before\n> the move ... I just have to change around its pointers this morning ...\n>\n> Right now, we have:\n>\n> anoncvs.postgresql.org\n> \t== cvsup.postgresql.org\n> \t\t- same machine, brand new\n> cvs.postgresql.org\n> \t== www.postgresql.org\n> \t== mail.postgresql.org\n> \t== ssh/login server\n> \t\t- same machine\n>\n> rsync.postgresql.org\n> \t== ftp.postgresql.org\n> \t== primary www server\n> \t\t- old server, slowly being migrated between the above two\n> \t\t machines (rsync -> anoncvs, ftp/primary -> cvs)\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 18 Sep 2001 10:39:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" }, { "msg_contents": "> Right now, we have:\n\nOK, these are three *physically distinct* machines with some aliases\nattached to them? Or are virtual hosts involved too?? 
I need a site map\nfrom someone since nothing but my home directory is in the place they\nused to be (actually, even that moved but I *do* know how to find my\nhome directory ;). I've found some stuff, but no doc areas and no ftp\nareas.\n\nHelp!! (At your convenience, of course; I *know* you are doing a lot of\nwork on this ;)\n\n - Thomas\n\ngolem> host cvsup.postgresql.org\ncvsup.postgresql.org is a nickname for rs.PostgreSQL.org\nrs.PostgreSQL.org has address 64.39.15.238\nrs.PostgreSQL.org has address 64.39.15.238\ngolem> host anoncvs.postgresql.org\nanoncvs.postgresql.org is a nickname for rs.PostgreSQL.org\nrs.PostgreSQL.org has address 64.39.15.238\nrs.PostgreSQL.org has address 64.39.15.238\ngolem> host cvs.postgresql.org\ncvs.postgresql.org is a nickname for mail.postgresql.org\nmail.postgresql.org has address 216.126.85.28\nmail.postgresql.org has address 216.126.85.28\ngolem> host mail.postgresql.org\nmail.postgresql.org has address 216.126.85.28\ngolem> host www.postgresql.org\nwww.postgresql.org is a nickname for rs.postgresql.org\nrs.postgresql.org has address 64.39.15.238\nrs.postgresql.org has address 64.39.15.238\ngolem> host ftp.postgresql.org\nftp.postgresql.org is a nickname for postgresql.org\npostgresql.org has address 216.126.84.28\npostgresql.org has address 216.126.84.28\ngolem> host rsync.postgresql.org\nrsync.postgresql.org is a nickname for rs.postgresql.org\nrs.postgresql.org has address 64.39.15.238\nrs.postgresql.org has address 64.39.15.238\n\n> anoncvs.postgresql.org\n> == cvsup.postgresql.org\n> - same machine, brand new\n> cvs.postgresql.org\n> == www.postgresql.org\n> == mail.postgresql.org\n> == ssh/login server\n> - same machine\n> \n> rsync.postgresql.org\n> == ftp.postgresql.org\n> == primary www server\n> - old server, slowly being migrated between the above two\n> machines (rsync -> anoncvs, ftp/primary -> cvs)\n", "msg_date": "Wed, 19 Sep 2001 14:34:01 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", 
"msg_from_op": true, "msg_subject": "Re: [Fwd: [Fwd: [tao-users] FW: HEADS UP: CVSup timestamp bug]]" } ]
[ { "msg_contents": "I agree that groups of objects in separate data storage areas are needed\nand that is what I am trying to get to. Don't you think that Postgresql\nwith locations/files is the same as Oracle tablespaces. I don't think\nwe want to invent our own filesystem (which is what a tablespace really\nis...).\n\nJim\n\n\n\n> > > Attached is a patch that adds support for specifying a\n> > > location for indexes via the \"create database\" command.\n> > > \n> > > I believe this patch is complete, but it is my first .\n> > \n> > This patch allows index locations to be specified as\n> > different from data locations. Is this a feature direction\n> > we want to go in? Comments?\n> \n> The more general and \"standard\" way to go are TABLESPACEs.\n> But probably proposed feature will be compatible with\n> tablespaces, when we'll got them: we could use new \"create\n> database\" syntax to specify default tablespace for indices.\n> Unfortunately I removed message with patch, can you send it\n> to me, Bruce?\n> \n> Vadim\n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 14:22:02 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review" } ]
[ { "msg_contents": "\njust change the work tablespace below to location and that is exactly\nwhat this patch is trying to do. You can think of the LOCATION and\nINDEX_LOCATION provided to the create database command as the default\nstorage locations for these objects. In the future, I want to enable\nthe DBA to specify LOCATIONS any object just like Oracle. I am also\nplanning on a pg_locations table and \"create location\" command which\nwill do what the current initlocation script does and more.\n\n\nJim\n\n\n> \n> I am very new to this mailinglist so I apologize if I start talking\nearly but\n> I've been working as a sysadmin and that kind of problems for a long\nwhile\n> now and my suggestion is that it is a start but I think that we should\naim a\n> little higher than this and use something more like the Oracle\napproach\n> instead. Where they introduce an abstraction layer in the form of a\n> tablespace. And this tablespace is then referenced from the create\ntable or\n> create index instead.\n> eg:\n> table -> tablespace -> path to physical storage\n> index -> tablespace -> path to physical storage\n> \n> Advantages:\n> Changes can be done to storage whithout need to change create scripts\nfor db,\n> tables and so on.\n> Designers can specify in which tablespace tables/indexes should reside\nbased\n> on usage.\n> Sysadmins can work with tablespaces and change paths without changing\n> anything in the database/table/index definitions.\n> \n> The alternative is symlinks to distribute the load and that is not a\npretty\n> sight dba-wise.\n> \n> Hope you can bare with me on this, since I think it is an very\nimportant\n> issue.\n> I'm unfortunately not a fast coder yet (but I'm getting faster :-) ).\nBut I\n> could start writing a spec if someone is interrested.\n> \n> Bruce Momjian wrote:\n> \n> > > Hi all,\n> > >\n> > > Attached is a patch that adds support for specifying a location \nfor\n> > > indexes via the \"create database\" command.\n> > >\n> > > I 
believe this patch is complete, but it is my first .\n> >\n> > This patch allows index locations to be specified as different from\ndata\n> > locations. Is this a feature direction we want to go in? Comments?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n> >\n> > ---------------------------(end of\nbroadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 14:25:54 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review" } ]
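The table -> tablespace -> path indirection proposed above can be sketched in a few lines. Every name below is hypothetical, invented purely to illustrate the point that storage can be repointed without touching any table or index definition; none of it is real PostgreSQL interface:

```python
# Hypothetical sketch of the proposed indirection layer.
tablespaces = {"app_data": "/disk2/pgdata", "app_idx": "/disk3/pgidx"}
objects = {"orders": "app_data", "orders_pkey": "app_idx"}

def storage_path(obj):
    """Resolve an object's physical directory through its tablespace."""
    return tablespaces[objects[obj]]

print(storage_path("orders_pkey"))   # /disk3/pgidx

# A sysadmin can later move the tablespace to a new disk without
# changing a single object definition -- the dba-friendly property
# argued for above:
tablespaces["app_idx"] = "/disk4/pgidx"
print(storage_path("orders_pkey"))   # /disk4/pgidx
```

The alternative mentioned in the thread, hand-maintained symlinks, bakes the physical path into each object instead of resolving it through one mapping.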
[ { "msg_contents": "> > The more general and \"standard\" way to go are TABLESPACEs.\n> > But probably proposed feature will be compatible with\n> > tablespaces, when we'll got them:\n> \n> Will it be? I'm afraid of creating a backwards-compatibility\n> problem for ourselves when it comes time to implement tablespaces.\n\nAs I said, INDEX_LOCATION in CREATE DATABASE could mean location\nof default tablespace for indices in future and one will be able\nto override tablespace for particular index with TABLESPACE\nclause in CREATE INDEX command.\n\n> At the very least I'd like to see some information demonstrating\n> how much benefit there is to this proposed patch, before we\n> consider whether to adopt it. If there's a significant performance\n> benefit to splitting a PG database along the table-vs-index divide,\n> then it's interesting as a short-term improvement ... but Jim didn't\n> even make that assertion, let alone provide evidence to back it up.\n\nAgreed. He mentioned significant performance difference but it would\nbe great to see results of pgbench tests with scaling factor of >= 10.\nJim?\n\nAlso, after reviewing patch I have to say that it will NOT work\nwith WAL. Jim, please do not name index' dir as \"<TBL_NODE>_index\".\nInstead, just use different TBL_NODE for indices (different number).\nIt's not good to put if(reln->rd_rel->relkind == RELKIND_INDEX)\nstuff into storage manager - only two numbers (tblnode & relnode)\nmust be used to identify file, no any other logical information\ntotally unrelated to storage issues.\n\nVadim\n", "msg_date": "Wed, 12 Sep 2001 12:07:46 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
[ { "msg_contents": "Vadim,\n\nI don't understand the WAL issue below, can you explain. The dir name\nis the same name as the database with _index added to it. This is how\nthe current datpath stuff works. I really just copied the datpath code\nto get this patch to work...\n\nAlso I have been running this patch (both 7.1.3 and 7.2devel) against\nsome of my companies applications. I have loaded a small database 10G\ndata and 15G indexes both with and without the patch. There seems to be\nbetween 5% and 10% performance gain doing most common db commands\n(selects, selects with joins and inserts). The system is a DUAL P3 733\nwith 3 IDE disks. One for PGDATA, second for APPDATA and third for\nAPPIDX. As you can see I have seperated WAL files, GLOBAL, Application\ndata and application indexes over 3 disks. Our production systems have\naround 50k queries/day ( not including data loads), so I believe that\nwhen this patch get put into production, with 20 disks and 10 database\nthe performance increase should go up.\n\n\nI should also add, that I have been working on the second part of this\npatch, which will allow tables and indexes to be put into LOCATIONS\nalso. I am going planning on having a PG_LOCATIONS table and\nCREATE|DROP|ALTER location SQL command instead of the initlocation shell\nscript we currently have. The only thing stopping me now is 7.2 testing\nI am planning on doing once the beta begins and problems adding a\nlocation column to the pg_class table with the necessary support code in\nheap.c...\n\n\n\nThanks for all the comments (keep them comming)\n\nJim\n\n> > > The more general and \"standard\" way to go are TABLESPACEs.\n> > > But probably proposed feature will be compatible with\n> > > tablespaces, when we'll got them:\n> > \n> > Will it be? 
I'm afraid of creating a backwards-compatibility\n> > problem for ourselves when it comes time to implement tablespaces.\n> \n> As I said, INDEX_LOCATION in CREATE DATABASE could mean location\n> of default tablespace for indices in future and one will be able\n> to override tablespace for particular index with TABLESPACE\n> clause in CREATE INDEX command.\n> \n> > At the very least I'd like to see some information demonstrating\n> > how much benefit there is to this proposed patch, before we\n> > consider whether to adopt it. If there's a significant performance\n> > benefit to splitting a PG database along the table-vs-index divide,\n> > then it's interesting as a short-term improvement ... but Jim didn't\n> > even make that assertion, let alone provide evidence to back it up.\n> \n> Agreed. He mentioned significant performance difference but it would\n> be great to see results of pgbench tests with scaling factor of >= 10.\n> Jim?\n> \n> Also, after reviewing patch I have to say that it will NOT work\n> with WAL. Jim, please do not name index' dir as \"<TBL_NODE>_index\".\n> Instead, just use different TBL_NODE for indices (different number).\n> It's not good to put if(reln->rd_rel->relkind == RELKIND_INDEX)\n> stuff into storage manager - only two numbers (tblnode & relnode)\n> must be used to identify file, no any other logical information\n> totally unrelated to storage issues.\n> \n> Vadim\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 15:32:42 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
[ { "msg_contents": "> I don't understand the WAL issue below, can you explain. The dir name\n> is the same name as the database with _index added to it. This is how\n> the current datpath stuff works. I really just copied the datpath\n> code to get this patch to work...\n\nAt the time of after crash recovery WAL is not able to read relation\ndescription from catalog and so only relfilenode is provided for\nstorage manager in relation structure (look backend/access/transam/\nxlogutils.c:XLogOpenRelation). Well, we could add Index/Table\nfile type identifier to RmgrData (rmgr.c in the same dir) to set\nrelkind in relation structure, but I don't see any reason to\ndo so when we can just use different tblnode number for indices and\nname index dirs just like other dirs under 'base' named - ie\nonly tblnode number is used for dir names, without any additions\nunrelated to storage issues.\n\nVadim\n", "msg_date": "Wed, 12 Sep 2001 12:54:25 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
[ { "msg_contents": "> Also I have been running this patch (both 7.1.3 and 7.2devel) against\n> some of my companies applications. I have loaded a small database 10G\n\nWe are not familiar with your applications. It would be better to see\nresults of test suit available to the community. pgbench is first to\ncome in mind. Such tests would be more valuable.\n\nVadim\n", "msg_date": "Wed, 12 Sep 2001 13:05:37 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
[ { "msg_contents": "I could also symlink all index files back to the tblnode directory?\n\n\n\n> > I don't understand the WAL issue below, can you explain. The dir\nname\n> > is the same name as the database with _index added to it. This is\nhow\n> > the current datpath stuff works. I really just copied the datpath\n> > code to get this patch to work...\n> \n> At the time of after crash recovery WAL is not able to read relation\n> description from catalog and so only relfilenode is provided for\n> storage manager in relation structure (look backend/access/transam/\n> xlogutils.c:XLogOpenRelation). Well, we could add Index/Table\n> file type identifier to RmgrData (rmgr.c in the same dir) to set\n> relkind in relation structure, but I don't see any reason to\n> do so when we can just use different tblnode number for indices and\n> name index dirs just like other dirs under 'base' named - ie\n> only tblnode number is used for dir names, without any additions\n> unrelated to storage issues.\n> \n> Vadim\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 16:27:49 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
[ { "msg_contents": "Here is my pgbench results. As you can see the I am getting 2X tps with\nthe 2 directories. I believe this is a BIG win for Postgresql if we can\nfigure out the WAL recovery issues.\n\n\nCan someone other than me apply the patch and verify the pgbench\nresults.\n\n\nMy hardward setup is a dual processor P3/733 running Redhat 7.1 with 512\nmegs of memory. The postgresql.conf file is the installed version with\nNO changes.\n\nJim\n\n\ntemplate1=# create database one_dir with location='PGDATA1';\ntemplate1=# create database two_dir with location='PGDATA1'\nindex_location='PGIDX1';\nfor X in 1 2 3 4 5 6 7 8 9 10 \ndo\n\tpgbench -i -s 10 one_dir >>one_dir.log\n\tpgbench -i -s 10 two_dir >>two_dir.log\ndone\n\nbash-2.04$ grep 'excluding' one_dir.log\ntps = 44.319306(excluding connections establishing)\ntps = 34.641020(excluding connections establishing)\ntps = 50.516889(excluding connections establishing)\ntps = 52.747039(excluding connections establishing)\ntps = 16.203821(excluding connections establishing)\ntps = 36.902861(excluding connections establishing)\ntps = 52.511769(excluding connections establishing)\ntps = 53.479882(excluding connections establishing)\ntps = 54.599429(excluding connections establishing)\ntps = 36.780419(excluding connections establishing)\ntps = 48.048279(excluding connections establishing)\n\nbash-2.04$ grep 'excluding' two_dir.log\ntps = 58.739049(excluding connections establishing)\ntps = 100.259270(excluding connections establishing)\ntps = 103.156166(excluding connections establishing)\ntps = 110.829358(excluding connections establishing)\ntps = 111.929690(excluding connections establishing)\ntps = 106.840118(excluding connections establishing)\ntps = 101.563159(excluding connections establishing)\ntps = 102.877060(excluding connections establishing)\ntps = 103.784717(excluding connections establishing)\ntps = 53.056309(excluding connections establishing)\ntps = 73.842428(excluding connections establishing)\n\n\n> 
> Also I have been running this patch (both 7.1.3 and 7.2devel)\nagainst\n> > some of my companies applications. I have loaded a small database\n10G\n> \n> We are not familiar with your applications. It would be better to see\n> results of test suit available to the community. pgbench is first to\n> come in mind. Such tests would be more valuable.\n> \n> Vadim\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 16:58:02 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review " } ]
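Logs in the format shown above can be summarised with a few lines of Python (an after-the-fact editorial illustration, not something run in the thread), which makes the one_dir vs two_dir comparison easier to eyeball than raw grep output:

```python
import re

def mean_tps(log_text: str) -> float:
    """Average the 'excluding connections' tps figures from pgbench output."""
    tps = [float(m) for m in re.findall(r"tps = ([\d.]+)\(excluding", log_text)]
    return sum(tps) / len(tps)

# First two lines of the one_dir.log output quoted above:
sample = """tps = 44.319306(excluding connections establishing)
tps = 34.641020(excluding connections establishing)"""
print(round(mean_tps(sample), 2))  # 39.48
```

Run over both full logs, the same helper reproduces the roughly 2x gap Jim reports.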
[ { "msg_contents": "I help run a job database and have a table of search records. I want\na query that will return the top 10 jobs by search frequency. I'm\nfamiliar with ORDER BY and LIMIT, so I basically need this:\n\nGiven a table search_records:\njob_num\n-------\n1\n2\n2\n3\n4\n4\n4\n\nI want a query that will return:\njob_num | count\n--------+------\n1 |1\n2 |2\n3 |1\n4 |3\n\nI tried\n\nselect distinct job_num, (select count(*) from search_records j where\nj.job_num=k.job_num) from search_records k\n\nbut it is horribly slow (it takes several minutes on a table of about\n25k rows!). I assume it scans the entire table for every job_num in\norder to count the number of occurences of that job_num, taking order\nn^2 time. Since I can easily use job_num as an index (being integers\nfrom 0 to roughly 400 so far) I could just do a \"select * from\nsearch_records\" and do the counting in PHP (our HTML pre-processor) in\norder n time. However, I don't know how to do an order n*log(n) sort\nin PHP, just n^2, so there would still be an efficiency problem.\nI have Postgresql 7.0.3.\nHelp is of course greatly appreciated.\n", "msg_date": "12 Sep 2001 15:16:39 -0700", "msg_from": "adamcrume@hotmail.com (Adam)", "msg_from_op": true, "msg_subject": "count of occurences" }, { "msg_contents": "HACKERS: see the end of this message about a possible optimisation for\nORDER BY+LIMIT cases (the normal use of LIMIT?)\n\nAdam wrote:\n> \n> I help run a job database and have a table of search records. I want\n> a query that will return the top 10 jobs by search frequency. 
I'm\n> familiar with ORDER BY and LIMIT, so I basically need this:\n> \n> Given a table search_records:\n> job_num\n> -------\n> 1\n> 2\n> 2\n> 3\n> 4\n> 4\n> 4\n> \n> I want a query that will return:\n> job_num | count\n> --------+------\n> 1 |1\n> 2 |2\n> 3 |1\n> 4 |3\n> \n> I tried\n> \n> select distinct job_num, (select count(*) from search_records j where\n> j.job_num=k.job_num) from search_records k\n> \n> but it is horribly slow (it takes several minutes on a table of about\n> 25k rows!). I assume it scans the entire table for every job_num in\n> order to count the number of occurences of that job_num, taking order\n> n^2 time. Since I can easily use job_num as an index (being integers\n> from 0 to roughly 400 so far) I could just do a \"select * from\n> search_records\" and do the counting in PHP (our HTML pre-processor) in\n> order n time. However, I don't know how to do an order n*log(n) sort\n> in PHP, just n^2, so there would still be an efficiency problem.\n> I have Postgresql 7.0.3.\n> Help is of course greatly appreciated.\n\nI have not tried it but how about:-\n\nselect job_num from\n(select job_num, count(*) as c from search_records group by job_num)\norder by c limit 10;\n\nI am not sure if count(*) would work in this context, if not try count()\non some field that is in every record. \n\n\nIf you can be sure that the top 10 will have at least a certain\nthreshold of searches (perhaps >1!) 
then it MIGHT be faster, due to less\ndata being sorted for the outer selects order by, (experiment) to do:-\n\nselect job_num from\n(select job_num, count(*) as c from search_records group by job_num\nHAVING c>1)\norder by c limit 10;\n\nIt would depend on how efficient the ORDER BY and LIMIT work together.\n(The ORDER BY could build a list of LIMIT n items and just replace items\nin that list...a lot more efficient both of memory and comparisons than\nbuilding the full list and then keeping the top n)\n\nHACKERS: If it does not do this it might be a usefull optimisation. \nThere would probably need to be a cutoff limit on whether to apply this\nmethod or sort and keep n. Also for LIMIT plus OFFSET it would need to\nbuild a list of the the total of the LIMIT and OFFSET figures.\n\n-- \nThis is the identity that I use for NewsGroups. Email to \nthis will just sit there. If you wish to email me replace\nthe domain with knightpiesold . co . uk (no spaces).\n", "msg_date": "Thu, 13 Sep 2001 13:03:54 +0100", "msg_from": "\"Thurstan R. McDougle\" <trmcdougle@my-deja.com>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "On Thu, Sep 13, 2001 at 01:03:54PM +0100, Thurstan R. McDougle wrote:\n> It would depend on how efficient the ORDER BY and LIMIT work together.\n> (The ORDER BY could build a list of LIMIT n items and just replace items\n> in that list...a lot more efficient both of memory and comparisons than\n> building the full list and then keeping the top n)\n\nThere is some of this already. In the output of EXPLAIN you see two numbers.\nThe first is the estimated time toget the first tuple, the second is to get\nall the tuples.\n\nWhen LIMIT is applied, the estimated total cost is adjusted based on the\nnumber of rows. 
So with a small number of tuples the planner will favour\nplans that get tuples early even if the total cost would be larger.\n\n> HACKERS: If it does not do this it might be a usefull optimisation. \n> There would probably need to be a cutoff limit on whether to apply this\n> method or sort and keep n. Also for LIMIT plus OFFSET it would need to\n> build a list of the the total of the LIMIT and OFFSET figures.\n\nThe problem is that it sometimes doesn't help as much as you'd expect. If\nyou see a Sort stage in the plan, that means that everything below that has\nto be completly calculated.\n\nThe only solution is to use a sorted index to avoid the sort step, if\npossible.\n\nHTH,\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Magnetism, electricity and motion are like a three-for-two special offer:\n> if you have two of them, the third one comes free.\n", "msg_date": "Fri, 14 Sep 2001 00:02:31 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "Adam, \ntry this \nselect distinct job_num, count (job_num) from search_record \n group by job_num ; \ndon't worry about an ordered list - group by does it for you. \nRegards, Christoph \n> \n> HACKERS: see the end of this message about a possible optimisation for\n> ORDER BY+LIMIT cases (the normal use of LIMIT?)\n> \n> Adam wrote:\n> > \n> > I help run a job database and have a table of search records. I want\n> > a query that will return the top 10 jobs by search frequency. 
I'm\n> > familiar with ORDER BY and LIMIT, so I basically need this:\n> > \n> > Given a table search_records:\n> > job_num\n> > -------\n> > 1\n> > 2\n> > 2\n> > 3\n> > 4\n> > 4\n> > 4\n> > \n> > I want a query that will return:\n> > job_num | count\n> > --------+------\n> > 1 |1\n> > 2 |2\n> > 3 |1\n> > 4 |3\n> > \n> > I tried\n> > \n> > select distinct job_num, (select count(*) from search_records j where\n> > j.job_num=k.job_num) from search_records k\n> > \n> > but it is horribly slow (it takes several minutes on a table of about\n> > 25k rows!). I assume it scans the entire table for every job_num in\n> > order to count the number of occurences of that job_num, taking order\n> > n^2 time. Since I can easily use job_num as an index (being integers\n> > from 0 to roughly 400 so far) I could just do a \"select * from\n> > search_records\" and do the counting in PHP (our HTML pre-processor) in\n> > order n time. However, I don't know how to do an order n*log(n) sort\n> > in PHP, just n^2, so there would still be an efficiency problem.\n> > I have Postgresql 7.0.3.\n> > Help is of course greatly appreciated.\n> \n> I have not tried it but how about:-\n> \n> select job_num from\n> (select job_num, count(*) as c from search_records group by job_num)\n> order by c limit 10;\n> \n> I am not sure if count(*) would work in this context, if not try count()\n> on some field that is in every record. \n> \n> \n> If you can be sure that the top 10 will have at least a certain\n> threshold of searches (perhaps >1!) 
then it MIGHT be faster, due to less\n> data being sorted for the outer selects order by, (experiment) to do:-\n> \n> select job_num from\n> (select job_num, count(*) as c from search_records group by job_num\n> HAVING c>1)\n> order by c limit 10;\n> \n> It would depend on how efficient the ORDER BY and LIMIT work together.\n> (The ORDER BY could build a list of LIMIT n items and just replace items\n> in that list...a lot more efficient both of memory and comparisons than\n> building the full list and then keeping the top n)\n> \n> HACKERS: If it does not do this it might be a usefull optimisation. \n> There would probably need to be a cutoff limit on whether to apply this\n> method or sort and keep n. Also for LIMIT plus OFFSET it would need to\n> build a list of the the total of the LIMIT and OFFSET figures.\n> \n> -- \n> This is the identity that I use for NewsGroups. Email to \n> this will just sit there. If you wish to email me replace\n> the domain with knightpiesold . co . uk (no spaces).\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Thu, 13 Sep 2001 14:44:54 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "Martijn van Oosterhout wrote:\n> \n> On Thu, Sep 13, 2001 at 01:03:54PM +0100, Thurstan R. McDougle wrote:\n> > It would depend on how efficient the ORDER BY and LIMIT work together.\n> > (The ORDER BY could build a list of LIMIT n items and just replace items\n> > in that list...a lot more efficient both of memory and comparisons than\n> > building the full list and then keeping the top n)\n> \n> There is some of this already. 
In the output of EXPLAIN you see two numbers.\n> The first is the estimated time toget the first tuple, the second is to get\n> all the tuples.\n> \n> When LIMIT is applied, the estimated total cost is adjusted based on the\n> number of rows. So with a small number of tuples the planner will favour\n> plans that get tuples early even if the total cost would be larger.\n> \n> > HACKERS: If it does not do this it might be a usefull optimisation.\n> > There would probably need to be a cutoff limit on whether to apply this\n> > method or sort and keep n. Also for LIMIT plus OFFSET it would need to\n> > build a list of the the total of the LIMIT and OFFSET figures.\n> \n> The problem is that it sometimes doesn't help as much as you'd expect. If\n> you see a Sort stage in the plan, that means that everything below that has\n> to be completly calculated.\n> \n> The only solution is to use a sorted index to avoid the sort step, if\n> possible.\n\n\nWhat I am talking about is WHEN the sort is required we could make the\nsort more efficient as inserting into a SHORT ordered list should be\nbetter than building a BIG list and sorting it, then only keeping a\nsmall part of the list.\n\nIn the example in question there would be perhaps 400 records, but only\n10 are needed. From the questions on these lists it seems quite common\nfor only a very low proportion of the records to be required (less then\n10%/upto 100 typically), in these cases it would seem to be a usefull\noptimisation.\n\n\n> \n> HTH,\n> --\n> Martijn van Oosterhout <kleptog@svana.org>\n> http://svana.org/kleptog/\n> > Magnetism, electricity and motion are like a three-for-two special offer:\n> > if you have two of them, the third one comes free.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nThis is the identity that I use for NewsGroups. Email to \nthis will just sit there. 
If you wish to email me replace\nthe domain with knightpiesold . co . uk (no spaces).\n", "msg_date": "Thu, 13 Sep 2001 17:38:56 +0100", "msg_from": "\"Thurstan R. McDougle\" <trmcdougle@my-deja.com>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "On Thu, Sep 13, 2001 at 05:38:56PM +0100, Thurstan R. McDougle wrote:\n> What I am talking about is WHEN the sort is required we could make the\n> sort more efficient as inserting into a SHORT ordered list should be\n> better than building a BIG list and sorting it, then only keeping a\n> small part of the list.\n\nFor a plain SORT, it would be possible. Anything to avoid materialising the\nentire table in memory. Unfortunately it won't help if there is a GROUP\nafterwards because the group can't really know when to stop.\n\nBut yes, if you had LIMIT<SORT<...>> you could do that. I can't imagine it\nwould be too hard to arrange.\n\n> In the example in question there would be perhaps 400 records, but only\n> 10 are needed. From the questions on these lists it seems quite common\n> for only a very low proportion of the records to be required (less than\n> 10%/up to 100 typically), in these cases it would seem to be a useful\n> optimisation.\n\nSay you have a query:\n\nselect id, count(*) from test group by id order by count desc limit 10;\n\nThis becomes:\n\nLIMIT < SORT < GROUP < SORT < test > > > >\n\nThe inner sort would still have to scan the whole table, unless you have an\nindex on id. 
In that case your optimisation would be cool.\n\nHave I got it right now?\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Magnetism, electricity and motion are like a three-for-two special offer:\n> if you have two of them, the third one comes free.\n", "msg_date": "Fri, 14 Sep 2001 16:38:00 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "Sorry about the size of this message!, it covers several optimisation\nareas.\n\nYes we are talking about a limited situation of ORDER BY (that does not\nmatch the GROUP BY order) plus LIMIT, but one that is easy to identify.\n\nIt also has the advantage that the number to be LIMITed will 9 times out\nof 10 be known at query plan time (as LIMIT seems to mostly be used with\na constant), so making it an optimization that rarely needs to estimate.\n\nIt could even be tested for at the query run stage rather than the query\nplan stage in those cases where the limit is not known in advance,\nalthough that would make the explain less accurate. Probably for\nplanning we should just assume that if a LIMIT is present that it is\nlikely to be for a smallish number. The planner currently estimates\nthat 10% of the tuples will be returned in these cases.\n\nThe level up to which building a shorter list is better than a sort and\nkeep/discard should be evaluated. It would perhaps depend on what\nproportion the LIMIT is of the estimated set returned by the GROUP BY.\nOne point to note is that, IIRC, an ordered lists efficiency drops\nfaster than that of most decent sorts once the data must be paged\nto/from disk.\n\nFor larger LIMITs we should still get some benefits if we get the first\nLIMIT items, then sort just them and compare each new item against the\nlowest item in this list. 
Maybe form a batch of new items, then merge\nthe new batch in to produce a new LIMIT n long sorted list and repeat. \nOne major advantage to this is that as the new batch is being fetched we\nno longer need to keep the existing list in ram. I should think that\neach new batch should be no longer than we can fit in ram or the amount\nof ram that is best to enable an efficient list merge phase.\n\nWe could kick into this second mode when the LIMIT exceeds the cutoff\nor available ram.\n\nI have just noticed while looking through the planner and executor that\nthe 'unique' node (SELECT DISTINCT [ON]) comes between the sort and\nlimit nodes and is run separately. Would it not be more efficient, in\nthe normal case of distinct on ORDER BY order (or start of ORDER BY),\nfor uniqueness to be handled within the sorting as these routines\nare already comparing the tuples? Also if the unique node is separate\nthen it makes the merging of sort and limit impossible if DISTINCT is\npresent.\nHowever there is still the case of distinct where a sort is not\nrequested, needed (index scan instead?) or is not suitable for the\ndistinct, so a separate distinct node executor is still required.\n\nTaking all these into account it seems that quite a lot of code would\nneed changing to implement this optimisation. Specifically the SORT,\nUNIQUE and LIMIT nodes (and their planners) and the sort utils would\nneed a separate variant and the current nodes would need altering.\nIt is a pity as I would expect fairly large benefits in those cases of\nLIMITing to a small subset of a large dataset.\n\nMartijn van Oosterhout wrote:\n> \n> On Thu, Sep 13, 2001 at 05:38:56PM +0100, Thurstan R. 
McDougle wrote:\n> > What I am talking about is WHEN the sort is required we could make the\n> > sort more efficient as inserting into a SHORT ordered list should be\n> > better than building a BIG list and sorting it, then only keeping a\n> > small part of the list.\n> \n> For a plain SORT, it would be possible. Anything to avoid materialising the\n> entire table in memory. Unfortunatly it won't help if there is a GROUP\n> afterwards because the group can't really know when to stop.\n> \n> But yes, if you had LIMIT<SORT<...>> you could do that. I can't imagine it\n> would be too hard to arrange.\n> \n> > In the example in question there would be perhaps 400 records, but only\n> > 10 are needed. From the questions on these lists it seems quite common\n> > for only a very low proportion of the records to be required (less then\n> > 10%/upto 100 typically), in these cases it would seem to be a usefull\n> > optimisation.\n> \n> Say you have a query:\n> \n> select id, count(*) from test group by id order by count desc limit 10;\n> \n> This becomes:\n> \n> LIMIT < SORT < GROUP < SORT < test > > > >\n> \n> The inner sort would still have to scan the whole table, unless you have an\n> index on id. In that case your optimisation would be cool.\n> \n> Have I got it right now?\n> \n> --\n> Martijn van Oosterhout <kleptog@svana.org>\n> http://svana.org/kleptog/\n> > Magnetism, electricity and motion are like a three-for-two special offer:\n> > if you have two of them, the third one comes free.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nThis is the identity that I use for NewsGroups. Email to \nthis will just sit there. If you wish to email me replace\nthe domain with knightpiesold . co . uk (no spaces).\n", "msg_date": "Fri, 14 Sep 2001 13:21:30 +0100", "msg_from": "\"Thurstan R. 
McDougle\" <trmcdougle@my-deja.com>", "msg_from_op": false, "msg_subject": "Re: count of occurences PLUS optimisation" }, { "msg_contents": "You're just missing 'group by', and a little\nsimplicity.\n\nTry this:\n\nselect job_num, count(job_num) as frequency\nfrom search_records \ngroup by job_num\norder by frequency desc \nlimit 10;\n\nHave fun,\n\nAndrew Gould\n\n--- Adam <adamcrume@hotmail.com> wrote:\n> I help run a job database and have a table of search\n> records. I want\n> a query that will return the top 10 jobs by search\n> frequency. I'm\n> familiar with ORDER BY and LIMIT, so I basically\n> need this:\n> \n> Given a table search_records:\n> job_num\n> -------\n> 1\n> 2\n> 2\n> 3\n> 4\n> 4\n> 4\n> \n> I want a query that will return:\n> job_num | count\n> --------+------\n> 1 |1\n> 2 |2\n> 3 |1\n> 4 |3\n> \n> I tried\n> \n> select distinct job_num, (select count(*) from\n> search_records j where\n> j.job_num=k.job_num) from search_records k\n> \n> but it is horribly slow (it takes several minutes on\n> a table of about\n> 25k rows!). I assume it scans the entire table for\n> every job_num in\n> order to count the number of occurences of that\n> job_num, taking order\n> n^2 time. Since I can easily use job_num as an\n> index (being integers\n> from 0 to roughly 400 so far) I could just do a\n> \"select * from\n> search_records\" and do the counting in PHP (our HTML\n> pre-processor) in\n> order n time. However, I don't know how to do an\n> order n*log(n) sort\n> in PHP, just n^2, so there would still be an\n> efficiency problem.\n> I have Postgresql 7.0.3.\n> Help is of course greatly appreciated.\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n__________________________________________________\nTerrorist Attacks on U.S. 
- How can you help?\nDonate cash, emergency relief information\nhttp://dailynews.yahoo.com/fc/US/Emergency_Information/\n", "msg_date": "Fri, 14 Sep 2001 11:40:00 -0700 (PDT)", "msg_from": "Andrew Gould <andrewgould@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: count of occurences" } ]
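The "short ordered list" idea discussed in the thread above — keep only the LIMIT n best entries while scanning, instead of sorting every group and discarding most of them — is a standard bounded top-k selection. The sketch below illustrates it in Python, well outside the backend: the ~400 job numbers, ~25k rows and LIMIT of 10 mirror the figures quoted in the thread, and `heapq` merely stands in for the hypothetical LIMIT-aware sort node the posters are proposing.

```python
import heapq
import random

def top_k_by_count(keys, k):
    """Count occurrences of each key and return the k most frequent.

    heapq.nlargest keeps only a k-sized heap while scanning, which is the
    'replace items in a short list' strategy from the thread: roughly
    O(n log k) comparisons and O(k) extra memory, instead of building and
    sorting the full list of groups and discarding all but k of them.
    """
    counts = {}
    for key in keys:
        counts[key] = counts.get(key, 0) + 1
    return heapq.nlargest(k, counts.items(), key=lambda item: item[1])

# ~25k search records over ~400 job numbers, as in the original question
random.seed(42)
rows = [random.randrange(400) for _ in range(25000)]
top10 = top_k_by_count(rows, 10)
```

PostgreSQL 7.x did not do this inside the executor — that is the point of the thread; a bounded top-N sort of exactly this kind was eventually added to PostgreSQL in 8.3.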
[ { "msg_contents": "Moving the test to a system with SCSI disks gave different results. \nThere is NO difference between having the indexes on the same disk or \ndifferent disk with the data while running pgbench. So I leave it up to\nyou guys as to include the patch or not. I do believe that even if\nperformance doesn't increase, this patch has a lot of other benefits for\nadmins.\n\nLet me know\nJim\n\n\n> Here are my pgbench results. As you can see I am getting 2X tps\nwith\n> the 2 directories. I believe this is a BIG win for Postgresql if we\ncan\n> figure out the WAL recovery issues.\n> \n> \n> Can someone other than me apply the patch and verify the pgbench\n> results.\n> \n> \n> My hardware setup is a dual processor P3/733 running Redhat 7.1 with\n512\n> megs of memory. The postgresql.conf file is the installed version with\n> NO changes.\n> \n> Jim\n> \n> \n> template1=# create database one_dir with location='PGDATA1';\n> template1=# create database two_dir with location='PGDATA1'\n> index_location='PGIDX1';\n> for X in 1 2 3 4 5 6 7 8 9 10 \n> do\n> \tpgbench -i -s 10 one_dir >>one_dir.log\n> \tpgbench -i -s 10 two_dir >>two_dir.log\n> done\n> \n> bash-2.04$ grep 'excluding' one_dir.log\n> tps = 44.319306(excluding connections establishing)\n> tps = 34.641020(excluding connections establishing)\n> tps = 50.516889(excluding connections establishing)\n> tps = 52.747039(excluding connections establishing)\n> tps = 16.203821(excluding connections establishing)\n> tps = 36.902861(excluding connections establishing)\n> tps = 52.511769(excluding connections establishing)\n> tps = 53.479882(excluding connections establishing)\n> tps = 54.599429(excluding connections establishing)\n> tps = 36.780419(excluding connections establishing)\n> tps = 48.048279(excluding connections establishing)\n> \n> bash-2.04$ grep 'excluding' two_dir.log\n> tps = 58.739049(excluding connections establishing)\n> tps = 100.259270(excluding connections establishing)\n> tps = 
103.156166(excluding connections establishing)\n> tps = 110.829358(excluding connections establishing)\n> tps = 111.929690(excluding connections establishing)\n> tps = 106.840118(excluding connections establishing)\n> tps = 101.563159(excluding connections establishing)\n> tps = 102.877060(excluding connections establishing)\n> tps = 103.784717(excluding connections establishing)\n> tps = 53.056309(excluding connections establishing)\n> tps = 73.842428(excluding connections establishing)\n> \n> \n> > > Also I have been running this patch (both 7.1.3 and 7.2devel)\n> against\n> > > some of my companies applications. I have loaded a small database\n> 10G\n> > \n> > We are not familiar with your applications. It would be better to\nsee\n> > results of test suit available to the community. pgbench is first to\n> > come in mind. Such tests would be more valuable.\n> > \n> > Vadim\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> > \n> \n> \n> \n\n\n", "msg_date": "Wed, 12 Sep 2001 20:18:46 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review (more pgbench results)" }, { "msg_contents": "> Moving the test to a system with SCSI disks gave different results. \n> There is NO difference between having the indexes on the same disk or \n> different disk with the data while running pgbench. So I leave it up to\n> you guys as to include the patch or not. I do believe that even if\n> performance doesn't increase, this patch as alot of other benefits for\n> admins.\n\nI bet it is the SCSI tagged queueing that is making up for the same disk\nperformance. Agreed administration is enough of a need to add the\nfeature.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 20:49:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index location patch for review (more pgbench results)" } ]
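The "2X tps" figure quoted in this thread is easy to sanity-check: averaging the eleven tps values Jim reported for each layout (these are the earlier, pre-SCSI runs quoted in his message; the SCSI runs described at the top of the thread showed no difference) reproduces the roughly 2x ratio.

```python
# tps figures copied verbatim from the pgbench logs quoted in this thread
# (dual P3/733, Red Hat 7.1, stock postgresql.conf)
one_dir = [44.319306, 34.641020, 50.516889, 52.747039, 16.203821, 36.902861,
           52.511769, 53.479882, 54.599429, 36.780419, 48.048279]
two_dir = [58.739049, 100.259270, 103.156166, 110.829358, 111.929690,
           106.840118, 101.563159, 102.877060, 103.784717, 53.056309,
           73.842428]

mean_one = sum(one_dir) / len(one_dir)   # ~43.7 tps: data and indexes together
mean_two = sum(two_dir) / len(two_dir)   # ~93.4 tps: indexes on a second disk
speedup = mean_two / mean_one            # ~2.1x, matching the claim
```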
[ { "msg_contents": "Hi dear people,\n\n(My condolences to all affected by terrorist acts in US)\n\nAs I have been telling for a while, GeneXus database rapid application\ndeveloping tool, will now add to its set of four databases supported,\nPostgreSQL. I think this is Great News (C) ;-) Why? Because GeneXus has a great\ndeal of already developed applications, which can switch from one database to\nanother seamlessly, and now PostgreSQL, can become a target. I predict that in\ntwo years, 20% of GeneXus developed projects, will run on PostgreSQL. This means\na lot of opportunities for PG consultants, and specialists.\n\nI need feedback from you. I need to know if you are as excited as I am about\nthis. \n\nPlease, don't stay silent if you want to say something. At least, email me\npersonally.\n\nThanks indeed.\n\nRegards,\nHaroldo.\n", "msg_date": "Wed, 12 Sep 2001 23:34:10 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": true, "msg_subject": "Need feedback: GeneXus will support PostgreSQL" }, { "msg_contents": "What is it? 8) is it middleware? Is it pre-built applications? I'm confused!\n\n-r\n\nAt 11:34 PM 9/12/01 -0500, Haroldo Stenger wrote:\n\n>Hi dear people,\n>\n>(My condolences to all affected by terrorist acts in US)\n>\n>As I have been telling for a while, GeneXus database rapid application\n>developing tool, will now add to its set of four databases supported,\n>PostgreSQL. I think this is Great News (C) ;-) Why? Because GeneXus has a \n>great\n>deal of already developed applications, which can switch from one database to\n>another seamlessly, and now PostgreSQL, can become a target. I predict that in\n>two years, 20% of GeneXus developed projects, will run on PostgreSQL. This \n>means\n>a lot of opportunities for PG consultants, and specialists.\n>\n>I need feedback from you. I need to know if you are as excited as I am about\n>this.\n>\n>Please, don't stay silent if you want to say something. 
At least, email me\n>personally.\n>\n>Thanks indeed.\n>\n>Regards,\n>Haroldo.\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Thu, 13 Sep 2001 01:34:42 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: Need feedback: GeneXus will support PostgreSQL" }, { "msg_contents": "On Thursday 13 September 2001 15:34, Ryan Mahoney wrote:\n> What is it? 8) is it middleware? Is it pre-built applications? I'm\n> confused!\n\nIt is sort of an application framework and builder.\nIMHO not very useful, as it does not support any real world programming \nlanguage apart from Java (supports C#, Java, RPG, COBOL, C/SQL, Visual \nBasic, Visual Foxpro, Visual Studio.Net) and only a couple of non-Microsoft \nplatforms. Their claims of language & platform independence are simply not \ntrue.\n\nHorst\n", "msg_date": "Thu, 13 Sep 2001 19:03:34 +1000", "msg_from": "Horst Herb <hherb@malleenet.net.au>", "msg_from_op": false, "msg_subject": "Re: Need feedback: GeneXus will support PostgreSQL" }, { "msg_contents": "On Thu, 13 Sep 2001, Horst Herb wrote:\n\n> On Thursday 13 September 2001 15:34, Ryan Mahoney wrote:\n> > What is it? 8) is it middleware? Is it pre-built applications? I'm\n> > confused!\n> \n> It is sort of an application framework and builder.\n> IMHO not very useful, as it does not support any real world programming \n> language apart from Java (supports C#, Java, RPG, COBOL, C/SQL, Visual \n> Basic, Visual Foxpro, Visual Studio.Net) and only a couple of non-Microsoft \n> platforms. Their claims of language & platform independence are simply not \n> true.\n\nI don't know, pretty much any framework/builder that wants to work with\npostgres is probably a good thing. 
I haven't read through most of their\ninformation yet, so I'll have to reserve judgement on anything else...\n\n\n", "msg_date": "Thu, 13 Sep 2001 09:32:30 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Need feedback: GeneXus will support PostgreSQL" }, { "msg_contents": "Ryan Mahoney wrote:\n> \n> What is it? 8) is it middleware? Is it pre-built applications? I'm confused!\n\nGenexus is a VRAD tool, database oriented. You code in a mix of graphical forms,\nand plain code. The code combines logic paradigm, event oriented programming,\nand imperative code. There is a very powerful command (for each/enfor) which is\na loop over records in the DB, with a given order, a given start and a given\nstop points, and added restrictions. Nested for eachs are allowed, making it\neasy to do grouping of records and totals. There are four main programming\ntemplates \"Transactions\", \"Work Panels\", \"Procedures\" and \"Reports\". Once the\napp is modelled using these elements, a working prototype can be generated from\nit, in a language out of VB, VFox, Java, C, COBOL, RPG, XBase, C#. Regarding the\ndatabases with which the app will interact, one can choose between Oracle, MS\nSQL Server, Informix, and IBM DB2. And now, PostgreSQL. Along the programming\nprocess, the database is automatically created, and normalized (3rd normal\nform).\n\nThen the full working app is generated in the target language.\n\nI'm the main fan of GNU/Linux, FreeBSD, and PostgreSQL, within the GeneXus\ncommunity. I'm the one responsible of persuading GeneXus' people of the urge of\nthis recent decision. 
Another cool initiative is www.gxopen.com.uy, which is a\nwebsite devoted to open source GeneXus-made projects.\n\nThanks.\n\nRegards,\nHaroldo.\n", "msg_date": "Thu, 13 Sep 2001 14:47:57 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": true, "msg_subject": "Re: Need feedback: GeneXus will support PostgreSQL" }, { "msg_contents": "(On-topic since GeneXus' people decision of supporting PG.)\n\nHorst Herb wrote:\n> \n> On Thursday 13 September 2001 15:34, Ryan Mahoney wrote:\n> > What is it? 8) is it middleware? Is it pre-built applications? I'm\n> > confused!\n> \n> It is sort of an application framework and builder.\n> IMHO not very useful, as it does not support any real world programming\n> language apart from Java (supports C#, Java, RPG, COBOL, C/SQL, Visual\n> Basic, Visual Foxpro, Visual Studio.Net) and only a couple of non-Microsoft\n> platforms. Their claims of language & platform independence are simply not\n> true.\n\nHmm.. Used to be truer than now. But people like me who deeply understand what\ncan be accomplished with GeneXus, and fully believe in Open Source, are changing\nthe point of focus of future versions of GeneXus. Most of the current focus is\nMS related. But recent decision to support PostgreSQL, as the fifth database\nshows renewal. Also, I would add that C/SQL is as a good thing as Java depending\non what must be built.\n\nRegards,\nHaroldo.\n", "msg_date": "Thu, 13 Sep 2001 14:58:18 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": true, "msg_subject": "Re: Need feedback: GeneXus will support PostgreSQL" } ]
[ { "msg_contents": "In our PHP app, we are also forced to parse error messages to get that kind\nof information. Register my vote for error codes (Tom Lane style...)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Haller Christoph\n> Sent: Thursday, 13 September 2001 7:18 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] ERROR: Cannot insert a duplicate key into a unique\n> index\n>\n>\n> [HACKERS] ERROR: Cannot insert a duplicate key into a unique index\n>\n> I'm working on a C code application using loads of\n> insert commands.\n> It is essential to distinguish between an error\n> coming from a misformed command or other fatal\n> reasons and a duplicate key.\n> In either case, the PQresultStatus() returns\n> PGRES_FATAL_ERROR\n> I can check PQresultErrorMessage() for the\n> error message above, but then I have to rely\n> on this string never be changed.\n> This is no good programming style.\n> Does anybody have another, better idea or is\n> there at least a header file available, where\n> all the error messages can be found?\n>\n> Regards, Christoph\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Thu, 13 Sep 2001 18:31:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: ERROR: Cannot insert a duplicate key into a unique index" }, { "msg_contents": "[HACKERS] ERROR: Cannot insert a duplicate key into a unique index\n\nI'm working on a C code application using loads of \ninsert commands. \nIt is essential to distinguish between an error \ncoming from a misformed command or other fatal \nreasons and a duplicate key. 
\nIn either case, the PQresultStatus() returns \nPGRES_FATAL_ERROR\nI can check PQresultErrorMessage() for the \nerror message above, but then I have to rely \non this string never be changed. \nThis is no good programming style. \nDoes anybody have another, better idea or is \nthere at least a header file available, where \nall the error messages can be found? \n\nRegards, Christoph\n", "msg_date": "Thu, 13 Sep 2001 11:17:46 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "ERROR: Cannot insert a duplicate key into a unique index" }, { "msg_contents": "Error codes would be excellent!\n\n-r\n\n\n>In our PHP app, we are also forced to parse error messages to get that kind\n>of information. Register my vote for error codes (Tom Lane style...)\n>\n>Chris\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Thu, 13 Sep 2001 11:26:46 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: ERROR: Cannot insert a duplicate key into a" } ]
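Until the backend returns machine-readable error codes, the only workaround is the one the posters in this thread describe: match the text from PQresultErrorMessage() and accept that it may change between releases. A minimal sketch of that admittedly fragile approach follows — the message fragments are the 7.x-era wordings quoted in this thread, and the category names are invented for illustration:

```python
import re

# 7.x-era message fragments quoted in this thread. These are NOT stable
# identifiers -- they can change between releases, which is precisely the
# problem that real error codes would solve.
_ERROR_PATTERNS = [
    ("duplicate_key",
     re.compile(r"Cannot insert a duplicate key into a unique index")),
    ("not_null",
     re.compile(r"Fail to add null value in not null attribute (\S+)")),
    ("referential_integrity",
     re.compile(r"referential integrity violation")),
]

def classify_error(message):
    """Map a backend error string to a coarse category ('unknown' if no match)."""
    for name, pattern in _ERROR_PATTERNS:
        if pattern.search(message):
            return name
    return "unknown"
```

The same table-of-patterns shape appears in the Python application code quoted later in this digest; pinning exact strings like this is exactly the argument being made for error codes.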
[ { "msg_contents": "> Moving the test to a system with SCSI disks gave different results. \n> There is NO difference between having the indexes on the same disk or \n> different disk with the data while running pgbench. So I \n> leave it up to you guys as to include the patch or not. I do believe\n> that even if performance doesn't increase, this patch as alot of other\n> benefits for admins.\n\nAgreed.\n\nVadim\n", "msg_date": "Thu, 13 Sep 2001 10:01:58 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Index location patch for review (more pgbench resul" } ]
[ { "msg_contents": "I would like to do the following: Put the dynamically loadable shared\nobjects for the language handlers (language handlers being the only shared\nobjects we install by default) into a private subdirectory\n'$(libdir)/postgresql'. The default directory where the backend looks for\nshared objects will point at this location.\n\nAdvantages: Cleaner file system layout, keeps things out of $(libdir).\n\nWe can install the contrib modules into that directory without cluttering\nthe file system. Then we can eliminate hard-coding the path to the contrib\nmodules because they will be found in the directory by default.\n\nThird-party extension modules can put their files into this directory\nwithout worries about file system clashes. (I would encourage third-party\nproducts to use their own installation hierarchy, because otherwise\nthey'll have the same problems as we have with Perl, but it is a\nconvenient solution nevertheless.)\n\nThere should not be any compatibility problems because these files aren't\naccessed directly. The binary packages already do this, because of some\nof these concerns.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 14 Sep 2001 01:47:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Proposed installation dir change" } ]
[ { "msg_contents": "Count me in for error codes. You can just see part of the code i'm using to\ndeal with the problem (some of the error messages changed from 7.0 to 7.1 --\ni had to fix that):\n\n def parseError(self, errval):\n # first compile all the exceptions. Ideally we don't have to compile\n # all of them first, but this makes the code that much readable, and\n # speed is not important in the error case.\n re_refInt = re.compile('.*referential integrity.*')\n re_dupKey = re.compile('.*duplicate key.*')\n re_nullAttr = re.compile('.*Fail to add null value in not null\nattribute (.*)')\n re_nonInt = re.compile('.*pg_atoi: error in \"(.*)\": can\\'t parse.*')\n re_nonBool = re.compile('.*Bad boolean external representation\n\\'(.*)\\'')\n re_nonIP = re.compile('.*invalid INET value.*')\n re_nonTimestamp = re.compile('.*Bad timestamp external\nrepresentation \\'(.*)\\'')\n re_invalidAttribute = re.compile('.*Relation \\'(.*)\\' does not have\nattribute \\'(.*)\\'.*')\n re_deadlockDetected = re.compile('.*Deadlock detected.*')\n\n # various errors captured from posgres\n # ERROR: Relation 'users' does not have attribute 'blah'\n # ERROR: <unnamed> referential integrity violation - key in\nmanagedservers stillreferenced from managedserverstatus\n #\n # ERROR: Bad boolean external representation '3.4.5.6'\n # ERROR: invalid INET value '3456' ## for ip\n # ERROR: ExecAppend: Fail to add null value in not null attribute\nmystr\n # ERROR: Relation 'users' does not have attribute 'blah'\n # pg_atoi: error in \"hello\": can't parse \"hello\"\n # ERROR: Bad timestamp external representation 'fdsa'\n # ERROR: Deadlock detected.\n\n\n-----Original Message-----\nFrom: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\nSent: Thursday, September 13, 2001 3:31 AM\nTo: Haller Christoph; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] ERROR: Cannot insert a duplicate key into a\nunique index\n\n\nIn our PHP app, we are also forced to parse error messages to 
get that kind\nof information. Register my vote for error codes (Tom Lane style...)\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Haller Christoph\n> Sent: Thursday, 13 September 2001 7:18 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] ERROR: Cannot insert a duplicate key into a unique\n> index\n>\n>\n> [HACKERS] ERROR: Cannot insert a duplicate key into a unique index\n>\n> I'm working on a C code application using loads of\n> insert commands.\n> It is essential to distinguish between an error\n> coming from a misformed command or other fatal\n> reasons and a duplicate key.\n> In either case, the PQresultStatus() returns\n> PGRES_FATAL_ERROR\n> I can check PQresultErrorMessage() for the\n> error message above, but then I have to rely\n> on this string never be changed.\n> This is no good programming style.\n> Does anybody have another, better idea or is\n> there at least a header file available, where\n> all the error messages can be found?\n>\n> Regards, Christoph\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n", "msg_date": "Thu, 13 Sep 2001 17:25:59 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: Cannot insert a duplicate key into a unique" } ]
[ { "msg_contents": "Great Bridge ceased operation and is not going to support PostgreSQL\n(because of lack of an investor)\n\nIn these days of economic downturn, recession and world-wide economic\ndepression...(and even the looming war) I am wondering\nhow the MySQL team is finding time to support and develop duplicate SQL\nserver products...\n\nI am NOT FINDING time even to fully understand every line of PostgreSQL\nsource code and use all the capabilities of PostgreSQL!!!\n\nI hope the MySQL team will drop the development and Jump into PostgreSQL\ndevelopment. Pgsql is going to be the only sql server to\nrun the WORLD ECONOMY smoothly.. There is no time to support and develop\ntwo duplicate products!! PostgreSQL is a very advanced SQL server,\nmore advanced than mysql.\n\nIf they (mysql developers) have a lot of time to waste, I can give them\nplenty of work at my home!!\n\n", "msg_date": "Fri, 14 Sep 2001 01:57:34 GMT", "msg_from": "peace_flower <\"alavoor[AT]\"@yahoo.com>", "msg_from_op": true, "msg_subject": "Where do they find the time??? Great Bridge closed now!!!??" }, { "msg_contents": "peace_flower <\"alavoor[AT]\"@yahoo.com> writes:\n> I hope the MySQL team will drop the development and Jump into PostgreSQL\n> development. Pgsql is going to be the only sql server to\n> run the WORLD ECONOMY smoothly.. There is no time to support and develop\n> two duplicate products!! PostgreSQL is a very advanced SQL server,\n> more advanced than mysql.\n\nWhat a coincidence. I was about to say the exact opposite. 
Obviously,\nPostgreSQL isn't the one true database and everyone should Jump into MySQL.\nIt is easier to type.\n\nI hope your post was meant as a joke because it was hilarious.\n-- \nmatthew rice <matt@starnix.com> starnix inc.\ntollfree: 1-87-pro-linux thornhill, ontario, canada\nhttp://www.starnix.com professional linux services & products\n", "msg_date": "14 Sep 2001 12:06:20 -0400", "msg_from": "Matthew Rice <matt@starnix.com>", "msg_from_op": false, "msg_subject": "Re: Where do they find the time??? Great Bridge closed now!!!??" }, { "msg_contents": "> What a coincidence. I was about to say the exact opposite. Obviously,\n> PostgreSQL isn't the one true database and everyone should Jump\n> into MySQL.\n> It is easier to type.\n>\n> I hope your post was meant as a joke because it was hilarious.\n\nThese days MySQL is less of a database and more of an SQL interface to about\n10 different database backend products...\n\nChris\n\n", "msg_date": "Mon, 17 Sep 2001 09:53:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Where do they find the time??? Great Bridge closed\n\tnow!!!??" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > What a coincidence. I was about to say the exact opposite. 
Obviously,\n> > PostgreSQL isn't the one true database and everyone should Jump\n> > into MySQL.\n> > It is easier to type.\n> >\n> > I hope your post was meant as a joke because it was hilarious.\n> \n> These days MySQL is less of a database and more of an SQL interface to about\n> 10 different database backend products...\n\nOn that note, maybe we should write a wrapper function so it becomes a\nfrontend interface for PostgreSQL!\n\n;-)\n\n+ Justin\n\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 17 Sep 2001 13:25:12 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Where do they find the time??? Great Bridge " }, { "msg_contents": "Heh, don't laugh. That would actually be a great product, considering all \nthose pre-written PHP and PERL scripts for MySQL. 
I'm the SysAdmin of an \nISP, and I have a lot of conversations with our web clients like:\n\nDo you support MySQL?\n\nNo, we use Postgres, it's got better features.\n\nYeah, but I found this script on the web, but it only supports MySQL, and \nI'm to lazy/stupid/cheap to convert it to Postgres.\n\nOf course, I'm too lazy/cheap to admin two different database servers, so \nthe clients are out of luck unless they want to pay extra for MySQL (no one \nhas as of yet).\n\n\nAt 12:25 AM 9/17/01, Justin Clift wrote:\n>Christopher Kings-Lynne wrote:\n>[snip]\n> > These days MySQL is less of a database and more of an SQL interface to \n> about\n> > 10 different database backend products...\n>\n>On that note, maybe we should write a wrapper function so it becomes a\n>frontend interface for PostgreSQL!\n>\n>;-)\n>\n>+ Justin\n>\n> >\n> > Chris\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>--\n>\"My grandfather once told me that there are two kinds of people: those\n>who work and those who take the credit. He told me to try to be in the\n>first group; there was less competition there.\"\n> - Indira Gandhi\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html", "msg_date": "Mon, 17 Sep 2001 01:47:45 -0300", "msg_from": "Charles Tassell <ctassell@isn.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Where do they find the time??? Great" }, { "msg_contents": "I wonder if it would really be possible to do?\n\n+ Justin\n\n\nCharles Tassell wrote:\n> \n> Heh, don't laugh. That would actually be a great product, considering\n> all those pre-written PHP and PERL scripts for MySQL. 
I'm the\n> SysAdmin of an ISP, and I have a lot of conversations with our web\n> clients like:\n> \n> Do you support MySQL?\n> \n> No, we use Postgres, it's got better features.\n> \n> Yeah, but I found this script on the web, but it only supports MySQL,\n> and I'm to lazy/stupid/cheap to convert it to Postgres.\n> \n> Of course, I'm too lazy/cheap to admin two different database servers,\n> so the clients are out of luck unless they want to pay extra for MySQL\n> (no one has as of yet).\n> \n> At 12:25 AM 9/17/01, Justin Clift wrote:\n> \n> > Christopher Kings-Lynne wrote:\n> > [snip]\n> > > These days MySQL is less of a database and more of an SQL\n> > interface to about\n> > > 10 different database backend products...\n> >\n> > On that note, maybe we should write a wrapper function so it becomes\n> > a\n> > frontend interface for PostgreSQL!\n> >\n> > ;-)\n> >\n> > + Justin\n> >\n> > >\n> > > Chris\n> > >\n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to\n> > majordomo@postgresql.org\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people:\n> > those\n> > who work and those who take the credit. He told me to try to be in\n> > the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 17 Sep 2001 14:51:10 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Where do they find the time??? 
GreatBridge" }, { "msg_contents": "Hey,\n\n>I wonder if it would really be possible to do?\n\nSure is.\n\nFor php there's Pear (or I've written a DB wrapper, easily changed for \ndatabases)\nPerl - already has DBI / DBD, no point redoing that.\nPython - no idea, never used it.\n\nThe dumps would be a bit different but, something to work on.\n\nOf course, it depends on the script you're looking at, whether it's worth \nre-writing from scratch or trying to convert it.\n\n\n-----------------\n Chris Smith\nhttp://www.squiz.net/\n\n", "msg_date": "Mon, 17 Sep 2001 14:58:56 +1000", "msg_from": "Chris <csmith@squiz.net>", "msg_from_op": false, "msg_subject": "interfacing multiple db's (was Re: Where do they find the\n\ttime??? GreatBridge)" }, { "msg_contents": "On Mon, Sep 17, 2001 at 02:51:10PM +1000, Justin Clift wrote:\n> I wonder if it would really be possible to do?\n\nWell, it seems to me you could do it at several levels:\n\n1. Have a proxy running on whatever port MySQL uses that simply translates\nthe queries coming in and emulates the protocol. Problem is, you'd have to\npossibly translate function names, etc. I don't think there'd be any real\nSQL constructs they support but we don't. You can always ignore anything not\nsupported unless it really has a material effect.\n\n2. Support multiple grammars, configurable by database. Then you could\nsupport:\n\nCREATE DATABASE dummy EMULATING mysql;\n\nCreate a table of all the functions in mysql with a loadable module that\ndefines them all. Would make lots of people *really* happy. You could\nemulate every database under the sun. That'd be an excellent marketing\npoint.\n\n\"PostgreSQL is much more powerful than MySQL. It even has an emulation mode\nso all your existing MySQL programs will run without changes.\"\n\nThe thing is, I don't even think this would be too hard to do. A bit of time\nto setup maybe. 
You could define a function pg_parser which is the parser\nfor this database.\n\nOh, you'd also have to support different client communication protocols. I\nhave no idea how hard that would be.\n\nWould also need to provide a per connection override so that psql and\npg_dump could be used without having to rewrite them.\n\n3. Similar to proxy but built into the database. Don't like this. Too many\nlevels of parsing. Seems the wrong place.\n\n4. Supply a MySQL perl module that does the converting. May be easy to do\nbut only covers one application at a time. More robust as a separate\napplication but would probably have to do complete parsing to give complete\nsupport.\n\nProbably others but I can't think of any right now.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> Magnetism, electricity and motion are like a three-for-two special offer:\n> if you have two of them, the third one comes free.\n", "msg_date": "Mon, 17 Sep 2001 15:40:41 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Where do they find the time??? GreatBridge" } ]
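The proxy idea in point 1 above comes down to textual rewriting of incoming queries. As a minimal sketch of that flavour of work (not code from the thread; the function name and the single backquote-to-double-quote rule are illustrative assumptions, and a real proxy would also have to leave backquotes inside string literals alone and translate function names):

```c
#include <assert.h>
#include <string.h>

/* Rewrite MySQL backquoted identifiers (`name`) into SQL-standard
 * double-quoted identifiers ("name").  Copies src into dst, swapping
 * each backquote for a double quote, and returns dst. */
char *rewrite_backquotes(const char *src, char *dst)
{
    char *out = dst;

    for (; *src; src++)
        *out++ = (*src == '`') ? '"' : *src;
    *out = '\0';
    return dst;
}
```

This handles only one of the many MySQL-isms such a proxy would meet; it is meant to show why Martijn expected the job to be mostly mechanical.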
[ { "msg_contents": "I wanted to extract foreign keys from the postgresql database related\nto each of the tables.. I tried to use the getImportedKeys and\ngetExportedKeys of java.sql.DatabaseMetaData... But it didn't give any\nexpected results... So can anyone tell me how to query the system\ncatalogs to extract this info??\nThanx\nJiby\n", "msg_date": "13 Sep 2001 22:56:16 -0700", "msg_from": "jiby@intelesoftech.com (jiby george)", "msg_from_op": true, "msg_subject": "querying system catalogs to extract foreign keys" }, { "msg_contents": "On 13 Sep 2001 22:56:16 -0700, you wrote:\n>I tried to use the getImportedKeys and getExportedKeys of \n>java.sql.DatabaseMetaData... But it didn't give any expected \n>results... \n\nThis is probably a limitation or bug in the JDBC driver. Please\npost details of your problem on pgsql-jdbc@postgresql.org. E.g.\nwhat results did you get, and what did you not get?\n\n>So can anyone tell me how to query the system\n>catalogs to extract this info??\n\nThe system catalogs are documented on\nhttp://www.postgresql.org/idocs/index.php?catalogs.html\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Fri, 14 Sep 2001 20:38:01 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: querying system catalogs to extract foreign keys" }, { "msg_contents": "Hi,\n\nIn addition to this, Joel Burton's paper regarding Hacking the\nReferential Integrity tables gives very good insight into how to find\nout exactly what you're looking for, and the final example of SQL code\nat the end of the article will work as is :\n\nhttp://techdocs.postgresql.org/techdocs/hackingreferentialintegrity.php\n\nModified code to show what you want :\n\nSELECT c.relname as \"Trigger Table\",\nsubstr(f.proname, 9) as \"Trigger Function\",\nt.tgconstrname as \"Constraint Name\",\nc2.relname as \"Constraint Table\",\nt.tgdeferrable as \"Deferrable?\",\nt.tginitdeferred as \"Initially Deferred?\",\nt.tgargs as 
\"Trigger Arguments\"\nFROM pg_trigger t,\npg_class c,\npg_class c2,\npg_proc f\nWHERE t.tgrelid=c.oid\nAND t.tgconstrrelid=c2.oid\nAND tgfoid=f.oid\nAND t.tgenabled = 't'\nAND tgname ~ '^RI_'\nORDER BY t.oid;\n\nNote the \"Trigger Arguments\" (bytea) column is where you look to find\nout the fields involved in the RI trigger.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nRene Pijlman wrote:\n> \n> On 13 Sep 2001 22:56:16 -0700, you wrote:\n> >I tried to use the getImportedKeys and getExportedKeys of\n> >java.sql.DatabaseMetaData... But it didn't give any expected\n> >results...\n> \n> This is probably a limitation or bug in the JDBC driver. Please\n> post details of your problem on pgsql-jdbc@postgresql.org. E.g.\n> what results did you get, and what did you not get?\n> \n> >So can anyone tell me how to query the system\n> >catalogs to extract this info??\n> \n> The system catalogs are documented on\n> http://www.postgresql.org/idocs/index.php?catalogs.html\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 16 Sep 2001 15:03:34 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: querying system catalogs to extract foreign keys" }, { "msg_contents": "Or if you download WebPG (phpPgAdmin) it includes code for retrieving\nforeign keys.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Justin Clift\n> Sent: Sunday, 16 September 2001 1:04 PM\n> To: Rene Pijlman\n> Cc: jiby george; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] querying system catalogs to extract foreign keys\n>\n>\n> Hi,\n>\n> In addition to this, Joel Burton's paper regarding Hacking the\n> Referential Integrity tables gives very good insight into how to find\n> out exactly what you're looking for, and the final example of SQL code\n> at the end of the article will work as is :\n>\n> http://techdocs.postgresql.org/techdocs/hackingreferentialintegrity.php\n>\n> Modified code to show what you want :\n>\n> SELECT c.relname as \"Trigger Table\",\n> substr(f.proname, 9) as \"Trigger Function\",\n> t.tgconstrname as \"Constraint Name\",\n> c2.relname as \"Constraint Table\",\n> t.tgdeferrable as \"Deferrable?\",\n> t.tginitdeferred as \"Initially Deferred?\",\n> t.tgargs as \"Trigger Arguments\"\n> FROM pg_trigger t,\n> pg_class c,\n> pg_class c2,\n> pg_proc f\n> WHERE t.tgrelid=c.oid\n> AND t.tgconstrrelid=c2.oid\n> AND tgfoid=f.oid\n> AND t.tgenabled = 't'\n> AND tgname ~ '^RI_'\n> ORDER BY t.oid;\n>\n> Note the \"Trigger Arguments\" (bytea) column is where you look to find\n> out the fields involved in the RI trigger.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Rene Pijlman wrote:\n> >\n> > On 13 Sep 2001 22:56:16 -0700, you wrote:\n> > >I tried to use the getImportedKeys and getExportedKeys of\n> > 
>java.sql.DatabaseMetaData... But it didn't give any expected\n> > >results...\n> >\n> > This is probably a limitation or bug in the JDBC driver. Please\n> > post details of your problem on pgsql-jdbc@postgresql.org. E.g.\n> > what results did you get, and what did you not get?\n> >\n> > >So can anyone tell me how to query the system\n> > >catalogs to extract this info??\n> >\n> > The system catalogs are documented on\n> > http://www.postgresql.org/idocs/index.php?catalogs.html\n> >\n> > Regards,\n> > René Pijlman <rene@lab.applinet.nl>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Mon, 17 Sep 2001 10:11:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: querying system catalogs to extract foreign keys" } ]
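Since tgargs comes back as a single value, client code still has to split it apart. A rough sketch of that step (not code from the thread; the function name is invented, and the NUL-separated layout of constraint name, FK table, PK table, match type, then pairs of column names is as described for the RI triggers of this era, so treat it as illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Split a tgargs-style buffer of NUL-terminated strings into items[].
 * For an RI trigger the items are: constraint name, FK table, PK table,
 * match type, then (FK column, PK column) pairs.  Returns item count. */
int split_tgargs(const char *buf, size_t buflen, const char *items[], int max)
{
    int     n = 0;
    size_t  i = 0;

    while (i < buflen && n < max)
    {
        items[n++] = buf + i;           /* next item starts here */
        while (i < buflen && buf[i] != '\0')
            i++;                        /* scan to its terminating NUL */
        i++;                            /* and step over it */
    }
    return n;
}
```

Pairing items[4] and items[5] (and so on) gives the FK/PK column list the original poster was after.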
[ { "msg_contents": "Hi\n\nI saw in TODO\nCLIENTS\n* Add XML interface: psql, pg_dump, COPY, separate server\nand there's some code for JDBC in contrib/retep directory.\n\nAre there any plans to add xml support to postgresql, to return\nrows formatted in XML, for a start? not only from psql, but\nfrom everywhere (e.g. php)\n\nThanks!\n-- \nMarius Andreiana\n--\nYou don't have to go to jail for helping your neighbour\nhttp://www.gnu.org/philosophy/\n\n", "msg_date": "14 Sep 2001 14:45:29 +0300", "msg_from": "Marius Andreiana <marius@wdg.ro>", "msg_from_op": true, "msg_subject": "XML support?" } ]
[ { "msg_contents": "> Hi\n> \n> I saw in TODO\n> CLIENTS\n> * Add XML interface: psql, pg_dump, COPY, separate server\n> and there's some code for JDBC in contrib/retep directory.\n> \n> Are there any plans to add xml support to postgresql, to return\n> rows formatted in XML, for a start? not only from psql, but\n> from everywhere (e.g. php)\n\nI am (as time allows) adding to the xml under contrib/retep, but I don't know \nof anyone else working on it.\n\nAdding xml support to psql shouldn't be too difficult (it has html support \nalready), and there is the ResultSet->XML stuff under contrib/retep.\n\nPeter\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Fri, 14 Sep 2001 08:27:37 -0400 (EDT)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": true, "msg_subject": "None" } ]
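Whichever client ends up doing the rows-to-XML conversion, the column values have to be escaped before being wrapped in elements. A minimal sketch of that prerequisite (the function name is invented, and only the three mandatory characters are handled; attribute values would also need quotes escaped):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Escape &, < and > in a column value so it can be emitted as XML
 * element content, e.g. inside <column>...</column>.  Writes the
 * escaped text to out, which must be large enough. */
void xml_escape(const char *in, char *out)
{
    for (; *in; in++)
    {
        if (*in == '&')
            out += sprintf(out, "&amp;");
        else if (*in == '<')
            out += sprintf(out, "&lt;");
        else if (*in == '>')
            out += sprintf(out, "&gt;");
        else
            *out++ = *in;
    }
    *out = '\0';
}
```

With escaping in place, the per-row work reduces to printing one element per column, which is why adding an XML output mode alongside psql's existing HTML mode looked straightforward.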
[ { "msg_contents": "Hi,\nI think Tom Lane is right as always. My postgresql server was configured with --enable-locale option and it works perfect with Turkish stuff. However I could not find a solution to the problem below.\nAny hint?\nThanks and Regards\nErol\n\n<eroloz@esg.com.tr> writes:\n>> I get an error when the following command executed;\n>> /usr/local/pgsql/bin/pg_dump trollandtoad > trollandtoad.out\n>>\n>>SET TRANSACTION command failed. Explanation from backend: 'ERROR: Bad TRANSACTION\n>>ISOLATION LEVEL (serializable)\n\n>Hmm. It would seem that strcasecmp() on your platform reports that the\n>strings \"SERIALIZABLE\" and \"serializable\" are not equal. A locale\n>problem perhaps?\n>\n\t\t\t>regards, tom lane\n", "msg_date": "Fri, 14 Sep 2001 16:13:55 +0300", "msg_from": "\"=?iso-8859-9?B?RXJvbCDWeg==?=\" <eroloz@esg.com.tr>", "msg_from_op": true, "msg_subject": "pg_dump error - LOCALIZATION PROBLEM" }, { "msg_contents": "Erol Öz writes:\n\n> I think Tom Lane is right as always. My postgresql server was\n> configured with --enable-locale option and it works perfect with\n> Turkish stuff. 
However I could not find a solution to the problem\n> below.\n\nUntested, but try this:\n\nEdit src/backend/commands/variable.c, look for the function\nparse_XactIsoLevel(). Change the code that looks like this:\n\n if (strcasecmp(value, \"SERIALIZABLE\") == 0)\n XactIsoLevel = XACT_SERIALIZABLE;\n else if (strcasecmp(value, \"COMMITTED\") == 0)\n XactIsoLevel = XACT_READ_COMMITTED;\n\ninto:\n\n if (strcmp(value, \"serializable\") == 0)\n XactIsoLevel = XACT_SERIALIZABLE;\n else if (strcmp(value, \"committed\") == 0)\n XactIsoLevel = XACT_READ_COMMITTED;\n\nRecompile and install.\n\n> <eroloz@esg.com.tr> writes:\n> >> I get an error when the following command executed;\n> >> /usr/local/pgsql/bin/pg_dump trollandtoad > trollandtoad.out\n> >>\n> >>SET TRANSACTION command failed. Explanation from backend: 'ERROR: Bad TRAN=\n> >>SACTION ISOLATION LEVEL (serializable)\n>\n> >Hmm. It would seem that strcasecmp() on your platform reports that the\n> >strings \"SERIALIZABLE\" and \"serializable\" are not equal. A locale\n> >problem perhaps?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 14 Sep 2001 17:12:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Untested, but try this:\n\n> Edit src/backend/commands/variable.c, look for the function\n> parse_XactIsoLevel(). Change the code that looks like this:\n\n> if (strcasecmp(value, \"SERIALIZABLE\") == 0)\n> XactIsoLevel = XACT_SERIALIZABLE;\n> else if (strcasecmp(value, \"COMMITTED\") == 0)\n> XactIsoLevel = XACT_READ_COMMITTED;\n\n> into:\n\n> if (strcmp(value, \"serializable\") == 0)\n> XactIsoLevel = XACT_SERIALIZABLE;\n> else if (strcmp(value, \"committed\") == 0)\n> XactIsoLevel = XACT_READ_COMMITTED;\n\nHmm. 
Given that we expect the lexer to have downcased any unquoted\nwords, this seems like a workable solution --- where else are we using\nstrcasecmp() unnecessarily?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 17:57:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM " }, { "msg_contents": "Tom Lane writes:\n\n> Hmm. Given that we expect the lexer to have downcased any unquoted\n> words, this seems like a workable solution --- where else are we using\n> strcasecmp() unnecessarily?\n\nI've identified several other such places. However, in reality we have to\nconsider every single strcasecmp() call suspicious. In many places an\nASCII-only alternative is needed or the code needs to be rewritten.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 01:04:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM " }, { "msg_contents": ">> Hmm. Given that we expect the lexer to have downcased any unquoted\n>> words, this seems like a workable solution --- where else are we using\n>> strcasecmp() unnecessarily?\n\nWait a minute --- I spoke too quickly. The lexer's behavior is to\ndowncase unquoted identifiers in a *locale sensitive* fashion --- it\nuses isupper() and tolower(). We concluded that that was correct for\nidentifiers according to SQL99, whereas keyword matching should not be\nlocale-dependent. See the comments for ScanKeywordLookup.\n\n> I've identified several other such places. However, in reality we have to\n> consider every single strcasecmp() call suspicious. 
In many places an\nASCII-only alternative is needed or the code needs to be rewritten.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 01:04:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM " }, { "msg_contents": ">> Hmm. Given that we expect the lexer to have downcased any unquoted\n>> words, this seems like a workable solution --- where else are we using\n>> strcasecmp() unnecessarily?\n\nWait a minute --- I spoke too quickly. The lexer's behavior is to\ndowncase unquoted identifiers in a *locale sensitive* fashion --- it\nuses isupper() and tolower(). We concluded that that was correct for\nidentifiers according to SQL99, whereas keyword matching should not be\nlocale-dependent. See the comments for ScanKeywordLookup.\n\n> I've identified several other such places. However, in reality we have to\n> consider every single strcasecmp() call suspicious. In many places an\n> ASCII-only alternative is needed or the code needs to be rewritten.\n\nI think our problems are worse than that: once the identifier has been\nthrough a locale-dependent case conversion we really have a problem\nmatching it to an ASCII string. The only real solution may be to\nrequire *all* keywords to be matched in the lexer, and forbid strcmp()\nmatching in later phases entirely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 19:18:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM " }, { "msg_contents": " hi,\n\n I have also seen the same problem. But there is another problem related\nwith locale.\nThe function MIN is translated into mın ( in Turkish locale support) and\npostgres gives an\nerror message as follows:\nFunction 'mın(int8)' does not exist .\n\n But when I use \"LIKE\" , postgres does the operations correctly. I don't\nknow the internals of postgres,\nbut I want to solve this problem somehow?\n Thanks in advance.\n\n\nTom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Untested, but try this:\n>\n> > Edit src/backend/commands/variable.c, look for the function\n> > parse_XactIsoLevel(). Change the code that looks like this:\n>\n> > if (strcasecmp(value, \"SERIALIZABLE\") == 0)\n> > XactIsoLevel = XACT_SERIALIZABLE;\n> > else if (strcasecmp(value, \"COMMITTED\") == 0)\n> > XactIsoLevel = XACT_READ_COMMITTED;\n>\n> > into:\n>\n> > if (strcmp(value, \"serializable\") == 0)\n> > XactIsoLevel = XACT_SERIALIZABLE;\n> > else if (strcmp(value, \"committed\") == 0)\n> > XactIsoLevel = XACT_READ_COMMITTED;\n>\n> Hmm. 
Given that we expect the lexer to have downcased any unquoted\n> words, this seems like a workable solution --- where else are we using\n> strcasecmp() unnecessarily?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Mon, 17 Sep 2001 13:00:50 +0300", "msg_from": "Burak Bilen <bilen@metu.edu.tr>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - LOCALIZATION PROBLEM" }, { "msg_contents": "Tom Lane writes:\n\n> I think our problems are worse than that: once the identifier has been\n> through a locale-dependent case conversion we really have a problem\n> matching it to an ASCII string. The only real solution may be to\n> require *all* keywords to be matched in the lexer, and forbid strcmp()\n> matching in later phases entirely.\n\nThere are several classes of strcasecmp() misuse:\n\n1. Using strcasecmp() on strings that are guaranteed to be lower case,\nbecause the parser has assigned to the variable one of a finite set of\nliteral strings. See CREATE SEQUENCE, commands/sequence.c for example.\n\n2. Using strcasecmp() on strings that were parsed as keywords. See CREATE\nOPERATOR, CREATE AGGREGATE, CREATE TYPE, commands/define.c.\n\n3. Using strcasecmp() on the values of GUC variables.\n\n4. Using strcasecmp() for parsing configuration files or other things with\nseparate syntax rules. 
See libpq/hba.c for reading the recode table.\n\nFor #1, strcasecmp is just a waste.\n\nFor #2, we should export parts of ScanKeywordLookup as a generic function,\nperhaps \"normalize_identifier\", and then we can replace\n\n strcasecmp(var, \"expected_value\")\n\nwith\n\n strcmp(normalize_identifier(var), \"expected_value\")\n\nFor #3, it's not quite clear, because the string value could have been\ncreated by an identifier or a string constant, so it's either #2 or #4.\n\nFor #4, we need some ASCII-only strcasecmp version.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 21:07:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] pg_dump error - LOCALIZATION PROBLEM " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> 2. Using strcasecmp() on strings that were parsed as keywords. See CREATE\n> OPERATOR, CREATE AGGREGATE, CREATE TYPE, commands/define.c.\n\nBut the real point is that they were parsed as identifiers, *not*\nkeywords, and therefore have already been through a locale-dependent\ncase conversion. (Look at what happens in scan.l after\nScanKeywordLookup fails.) Unless we can undo or short-circuit that,\nit won't help to apply a correct ASCII-only comparison.\n\nPossibly we should change the parser's Ident node type to carry both the\nraw string and the downcased-as-identifier string. The latter would\nserve the existing needs, the former could be used for keyword matching.\n\n> For #2, we should export parts of ScanKeywordLookup as a generic function,\n> perhaps \"normalize_identifier\", ...\n> For #4, we need some ASCII-only strcasecmp version.\n\nI think these are the same thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2001 15:39:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] pg_dump error - LOCALIZATION PROBLEM " } ]
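The "ASCII-only strcasecmp version" wanted for case #4 is simple to sketch. The following is a hand-rolled illustration (the function name is invented, not necessarily what ended up in the tree): it folds only A-Z, so the comparison cannot be derailed by locales like tr_TR, where tolower('I') yields the dotless ı instead of 'i'.

```c
#include <assert.h>

/* Case-insensitive compare that folds only ASCII A-Z, giving the same
 * answer in every locale (unlike strcasecmp(), whose behavior follows
 * the current LC_CTYPE). */
int ascii_strcasecmp(const char *a, const char *b)
{
    for (;; a++, b++)
    {
        unsigned char ca = (unsigned char) *a;
        unsigned char cb = (unsigned char) *b;

        if (ca >= 'A' && ca <= 'Z')
            ca += 'a' - 'A';
        if (cb >= 'A' && cb <= 'Z')
            cb += 'a' - 'A';
        if (ca != cb || ca == '\0')
            return (int) ca - (int) cb;
    }
}
```

Under such a function, "SERIALIZABLE" and "serializable" compare equal even in a Turkish locale, which is exactly the failure the pg_dump report started from.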
[ { "msg_contents": "Are the table structures of the System Tables changed often? Have they\nchanged from v7.1.1 and v7.1.2?\n\nPeter\n-- \n+---------------------------\n| Data Architect\n| your data; how you want it\n| http://www.codebydesign.com\n+---------------------------\n", "msg_date": "Fri, 14 Sep 2001 08:00:35 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "System Tables" }, { "msg_contents": "Peter Harvey writes:\n\n> Are the table structures of the System Tables changed often?\n\nOnly between major releases (if necessary).\n\n> Have they changed from v7.1.1 and v7.1.2?\n\nNo.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 15 Sep 2001 00:45:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: System Tables" } ]
[ { "msg_contents": "I started getting these error messages\n\nwebunl=> \\dt \nNOTICE: AllocSetFree: detected write past chunk end in \nTransactionCommandContext 3a4608 pqReadData() -- backend closed the channel \nunexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request. \nThe connection to the server was lost. \nAttempting reset: Failed. \n!>\n\nThe logs on the first times today I had these problems said this:\n\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-1] DEBUG: \nquery: SELECT c.relname as \"Name\", 'table'::text as \"Type\", u.usename as \n\"Owner\"\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-2] FROM \npg_class c, pg_user u\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-3] WHERE \nc.relowner = u.usesysid AND c.relkind = 'r'\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-4] AND \nc.relname !~ '^pg_'\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-5] UNION\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-6] SELECT \nc.relname as \"Name\", 'table'::text as \"Type\", NULL as \"Owner\"\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-7] FROM \npg_class c\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-8] WHERE \nc.relkind = 'r'\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-9] AND \nnot exists (select 1 from pg_user where usesysid = c.relowner)\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-10] AND \nc.relname !~ '^pg_'\nSep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-11] ORDER \nBY \"Name\"\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [13] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3a4608\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [14] NOTICE: \nAllocSetFree: detected write past chunk end in 
TransactionCommandContext \n3a4608\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [15] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3a4608\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [16] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3a4608\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [17] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3aadf0\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [18] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3aadf0\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [19] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3aadf0\nSep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [20] NOTICE: \nAllocSetFree: detected write past chunk end in TransactionCommandContext \n3aadf0\n\nAny idea? Some databases are screwed up\n\nSaludos... :-)\n\n-- \nPorqué usar una base de datos relacional cualquiera,\nsi podés usar PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Fri, 14 Sep 2001 17:05:41 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": true, "msg_subject": "chunk size problem" } ]
[ { "msg_contents": "All,\n\nJust wondering what is the status of this patch. Is seems from comments\nthat people like the idea. I have also looked in the archives for other\npeople looking for this kind of feature and have found alot of interest.\n\nIf you think it is a good idea for 7.2, let me know what needs to be\nchanged and I will work on it this weekend.\n\nThanks\nJim\n\n\n\n", "msg_date": "Fri, 14 Sep 2001 17:33:42 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Status of index location patch" }, { "msg_contents": "> Just wondering what is the status of this patch. Is seems from comments\n> that people like the idea. I have also looked in the archives for other\n> people looking for this kind of feature and have found alot of interest.\n> \n> If you think it is a good idea for 7.2, let me know what needs to be\n> changed and I will work on it this weekend.\n\nJust change index' dir naming as was already discussed.\n\nVadim\n\n\n", "msg_date": "Fri, 14 Sep 2001 23:00:24 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Status of index location patch" }, { "msg_contents": "On Saturday 15 September 2001 03:03, Jim Buttafuoco wrote:\n> Just wondering what is the status of this patch. Is seems from comments\n> that people like the idea. I have also looked in the archives for other\n> people looking for this kind of feature and have found alot of interest.\n\nCan we have a web based tracking system for patch tracking? I am ready to \nvolunteer. I may be of some help.\n\n Shridhar\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 15 Sep 2001 17:41:36 +0530", "msg_from": "Chamanya <chamanya@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Status of index location patch" } ]
[ { "msg_contents": "I have written some triggers which will call some procedures.\n\nI am looking for some way wherein I can edit these procedures\n\nIs there any way to do so?\n\nregards,\njoseph\n\n\n\n\n\n\n\n\n \nI have written some triggers which will call some \nprocedures.\n \nI am looking for some way wherein I can edit \nthese procedures\n \nIs there any way to do so?\n \nregards,\njoseph", "msg_date": "Fri, 14 Sep 2001 16:53:20 -0700", "msg_from": "\"francis\" <francis@inapp.com>", "msg_from_op": true, "msg_subject": "Trigger - Editing Procedures" } ]
[ { "msg_contents": "Sorry, I forgot to repost in hackers.\nStephan Szabo wrote:\n> \n> > We are running into a situation where psql is aborting the transaction\n> > when one call returns an error. Is there a way to continue on with\n> > transaction or at least save what has already happened (like an Oracle\n> > Save Point)?\n> \n> Not yet, although savepoints have been talked about. (maybe 7.3?)\n\nNew SMGR opens the way to this *very important* feature. Vadim?\n\nAnother proposal to solve this was recently proposed, not using WAL. Bruce?\n\nI think this issue is rasing in the lists frequently enough, as for giving it\ntop priority. Maybe is isn't so easy. Comments?\n\nThanks\n\nRegards,\nHaroldo.", "msg_date": "Sat, 15 Sep 2001 12:26:07 -0500", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": true, "msg_subject": "[Fwd: [ADMIN] Transaction Aborting on sql call failure]" } ]
[ { "msg_contents": "Vadim,\n\nI guess I am still confused...\n\nIn dbcommands.c resolve_alt_dbpath() takes the db oid as a argument. \nThis number is used to \"find\" the directory where the data files live. \nAll the patch does is put the indexes into a \"db oid\"_index directory\ninstead of \"db oid\"\n\n\nThis is for tables snprintf(ret, len, \"%s/base/%u\", prefix, dboid);\nThis is for indexes snprintf(ret, len, \"%s/base/%u_index\", prefix,\ndboid);\n\nAnd in catalog.c\ntables: sprintf(path, \"%s/base/%u/%u\", DataDir, rnode.tblNode,\nrnode.relNode);\nindexes: sprintf(path, \"%s/base/%u_index/%u\", DataDir,\n rnode.tblNode,rnode.relNode);\n\nCan you explain how I would get the tblNode for an existing database\nindex files if it doesn't have the same OID as the database entry in\npg_databases.\n\nJim\n\n\n> > Just wondering what is the status of this patch. Is seems from\ncomments\n> > that people like the idea. I have also looked in the archives for\nother\n> > people looking for this kind of feature and have found alot of\ninterest.\n> > \n> > If you think it is a good idea for 7.2, let me know what needs to be\n> > changed and I will work on it this weekend.\n> \n> Just change index' dir naming as was already discussed.\n> \n> Vadim\n> \n> \n\n\n", "msg_date": "Sat, 15 Sep 2001 13:54:39 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Status of index location patch" }, { "msg_contents": "> Can you explain how I would get the tblNode for an existing database\n> index files if it doesn't have the same OID as the database entry in\n> pg_databases.\n\nWell, keeping in mind future tablespace implementation I would\nadd tblNode to pg_class and in pg_databases I'd have\ndefaultTblNode and indexTblNode.\nIf it's too late to do for 7.2 then let's wait till 7.3.\n\nVadim\n\n\n", "msg_date": "Sat, 15 Sep 2001 12:40:51 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, 
"msg_subject": "Re: Status of index location patch" }, { "msg_contents": "\nJim, do you have an updated patch that you would like applied for 7.3?\n\n\n---------------------------------------------------------------------------\n\nJim Buttafuoco wrote:\n> Vadim,\n> \n> I guess I am still confused...\n> \n> In dbcommands.c resolve_alt_dbpath() takes the db oid as a argument. \n> This number is used to \"find\" the directory where the data files live. \n> All the patch does is put the indexes into a \"db oid\"_index directory\n> instead of \"db oid\"\n> \n> \n> This is for tables snprintf(ret, len, \"%s/base/%u\", prefix, dboid);\n> This is for indexes snprintf(ret, len, \"%s/base/%u_index\", prefix,\n> dboid);\n> \n> And in catalog.c\n> tables: sprintf(path, \"%s/base/%u/%u\", DataDir, rnode.tblNode,\n> rnode.relNode);\n> indexes: sprintf(path, \"%s/base/%u_index/%u\", DataDir,\n> rnode.tblNode,rnode.relNode);\n> \n> Can you explain how I would get the tblNode for an existing database\n> index files if it doesn't have the same OID as the database entry in\n> pg_databases.\n> \n> Jim\n> \n> \n> > > Just wondering what is the status of this patch. Is seems from\n> comments\n> > > that people like the idea. I have also looked in the archives for\n> other\n> > > people looking for this kind of feature and have found alot of\n> interest.\n> > > \n> > > If you think it is a good idea for 7.2, let me know what needs to be\n> > > changed and I will work on it this weekend.\n> > \n> > Just change index' dir naming as was already discussed.\n> > \n> > Vadim\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 13:05:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Status of index location patch" }, { "msg_contents": "Bruce,\n\nI stopped all work on this since people seemed confused about the \ntablespace/location words. I don't think enough of the \"core\" team likes \nthis idea. Am I wrong here? Did I explain the patch good enough? \n\nPlease let me know, I still am planning on doing it for internal use. I \nwould prefer that it was a standard feature. If you think I should still \npursue this, let me know what I need to do to get it off the ground.\n\nThanks for your help\nJim\n\n\n\n> Jim, do you have an updated patch that you would like applied for 7.3?\n> \n> ---------------------------------------------------------------------------\n> \n> Jim Buttafuoco wrote:\n> > Vadim,\n> > \n> > I guess I am still confused...\n> > \n> > In dbcommands.c resolve_alt_dbpath() takes the db oid as a argument. \n> > This number is used to \"find\" the directory where the data files live. \n> > All the patch does is put the indexes into a \"db oid\"_index directory\n> > instead of \"db oid\"\n> > \n> > \n> > This is for tables snprintf(ret, len, \"%s/base/%u\", prefix, dboid);\n> > This is for indexes snprintf(ret, len, \"%s/base/%u_index\", prefix,\n> > dboid);\n> > \n> > And in catalog.c\n> > tables: sprintf(path, \"%s/base/%u/%u\", DataDir, rnode.tblNode,\n> > rnode.relNode);\n> > indexes: sprintf(path, \"%s/base/%u_index/%u\", DataDir,\n> > rnode.tblNode,rnode.relNode);\n> > \n> > Can you explain how I would get the tblNode for an existing database\n> > index files if it doesn't have the same OID as the database entry in\n> > pg_databases.\n> > \n> > Jim\n> > \n> > \n> > > > Just wondering what is the status of this patch. Is seems from\n> > comments\n> > > > that people like the idea. 
I have also looked in the archives for\n> > other\n> > > > people looking for this kind of feature and have found alot of\n> > interest.\n> > > > \n> > > > If you think it is a good idea for 7.2, let me know what needs to be\n> > > > changed and I will work on it this weekend.\n> > > \n> > > Just change index' dir naming as was already discussed.\n> > > \n> > > Vadim\n> > > \n> > > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n\n", "msg_date": "Sun, 3 Mar 2002 15:34:36 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Status of index location patch" }, { "msg_contents": "Jim Buttafuoco wrote:\n> Bruce,\n> \n> I stopped all work on this since people seemed confused about the \n> tablespace/location words. I don't think enough of the \"core\" team likes \n> this idea. Am I wrong here? Did I explain the patch good enough? \n> \n> Please let me know, I still am planning on doing it for internal use. I \n> would prefer that it was a standard feature. If you think I should still \n> pursue this, let me know what I need to do to get it off the ground.\n\nI think ideally we need to work in the direction outlined in the TODO\nlist link for tablespaces. We clearly need this functionality, but I\nthink there is a perception we need to do it right the first time. 
I\ndon't see your work as that far away from where we want to be so I\nencourage you to read the link next to tablespaces and see what parts\nyou like and what parts you don't.\n\nThe usual steps are to outline the functionality you want to add, then\nwe can discuss implementation. It is a feature we very badly need.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 3 Mar 2002 15:46:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Status of index location patch" } ]
[ { "msg_contents": "Yes that is exactly what I am going to do for 7.3 (had trouble adding\ntblNode to pg_class so I stopped for now...)\n\n\n> > Can you explain how I would get the tblNode for an existing database\n> > index files if it doesn't have the same OID as the database entry\nin\n> > pg_databases.\n> \n> Well, keeping in mind future tablespace implementation I would\n> add tblNode to pg_class and in pg_databases I'd have\n> defaultTblNode and indexTblNode.\n> If it's too late to do for 7.2 then let's wait till 7.3.\n> \n> Vadim\n> \n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Sat, 15 Sep 2001 16:01:04 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: Status of index location patch" } ]
[ { "msg_contents": "I can bet that open-source code SQL server like PostgreSQL is bomb-proof\nand even in case of nuclear war world-wide,\nthe source code of PostgreSQL will be very safe at some point on the\nplanet and can easily be distributed and multiplied rapidly.\n\nThe reason being internet is nuclear-bomb proof storage center and\nPostgreSQL source code is stored on internet..\n\nThis is a point why large enterprises should consider PostgreSQL.\n\nInternet SQL server like PostgreSQL will gain rapid momentum\nworld-wide....\n\n\n", "msg_date": "Sat, 15 Sep 2001 20:18:23 GMT", "msg_from": "peace_flower <\"alavoor[AT]\"@yahoo.com>", "msg_from_op": true, "msg_subject": "NewYork Bombing: SQL server bomb proof!!" }, { "msg_contents": "Not to mention that PostgreSQL is WAY ahead on technology.\n\npeace_flower wrote:\n\n> I can bet that open-source code SQL server like PostgreSQL is bomb-proof\n> and even in case of nuclear war world-wide,\n> the source code of PostgreSQL will be very safe at some point on the\n> planet and can easily be distributed and multiplied rapidly.\n>\n> The reason being internet is nuclear-bomb proof storage center and\n> PostgreSQL source code is stored on internet..\n>\n> This is a point why large enterprises should consider PostgreSQL.\n>\n> Internet SQL server like PostgreSQL will gain rapid momentum\n> world-wide....\n\n", "msg_date": "Sat, 15 Sep 2001 22:17:03 GMT", "msg_from": "Bryon Lape <blape@grey-net.com>", "msg_from_op": false, "msg_subject": "Re: NewYork Bombing: SQL server bomb proof!!" 
}, { "msg_contents": "This is probably the worst post I have seen in a newsgroup ever.\nUsing this tragedy so promote a product is disgusting.\nYou are not doing the product you are promoting a favor with this.\n\nI will not comment on the technical content of this post.\n\nSerge\n\n\n", "msg_date": "Sun, 16 Sep 2001 15:54:29 -0400", "msg_from": "Serge Rielau <srielau@ca.ibm.com>", "msg_from_op": false, "msg_subject": "Re: NewYork Bombing: SQL server bomb proof!!" }, { "msg_contents": "As someone that had to wait until 9:30 Tuesday night to find out if friends\nin the pentagon were okay (and lucky enough to get that call) I can say your\nposting is so offensive as to induce illness. This puts you in the same\ncategory as the scum that are stealing victims SSNs and creating fake\ncharities. Do you think anyone would EVER deal with someone that has this\nattitude towards the deaths of thousands?? crawl back under your rock and\nrot, you f*ing bastard!\npeace_flower <\"alavoor[AT]\"@yahoo.com> wrote in message\nnews:3BA3B7A7.FD644810@yahoo.com...\n> I can bet that open-source code SQL server like PostgreSQL is bomb-proof\n> and even in case of nuclear war world-wide,\n> the source code of PostgreSQL will be very safe at some point on the\n> planet and can easily be distributed and multiplied rapidly.\n>\n> The reason being internet is nuclear-bomb proof storage center and\n> PostgreSQL source code is stored on internet..\n>\n> This is a point why large enterprises should consider PostgreSQL.\n>\n> Internet SQL server like PostgreSQL will gain rapid momentum\n> world-wide....\n>\n>\n\n\n", "msg_date": "Mon, 17 Sep 2001 07:51:40 -0400", "msg_from": "\"Chris Boyle\" <cboyle@no.spam.hargray.com>", "msg_from_op": false, "msg_subject": "Re: NewYork Bombing: SQL server bomb proof!!" }, { "msg_contents": "Alavoor\n\nYou have historically posted ludicrous messages from a variety of email\naddresses which go to prove nothing more than your idiocy. 
This message\nhowever plumbs depths of sickness to which I had hoped comp.databases.*\nwould not be subjected. You sir are a disgrace.\n\nI remain glad to see that your arguments remain as daft as ever so that\nthere is no hope of anyone taking you seriously.\n\n--\nNiall Litchfield\nOracle DBA\nAudit Commission UK\n\"peace_flower\" <\"alavoor[AT]\"@yahoo.com> wrote in message\nnews:3BA3B7A7.FD644810@yahoo.com...\n> I can bet that open-source code SQL server like PostgreSQL is bomb-proof\n> and even in case of nuclear war world-wide,\n> the source code of PostgreSQL will be very safe at some point on the\n> planet and can easily be distributed and multiplied rapidly.\n>\n> The reason being internet is nuclear-bomb proof storage center and\n> PostgreSQL source code is stored on internet..\n>\n> This is a point why large enterprises should consider PostgreSQL.\n>\n> Internet SQL server like PostgreSQL will gain rapid momentum\n> world-wide....\n>\n>\n\n\n\n\n", "msg_date": "Mon, 24 Sep 2001 12:39:56 +0100", "msg_from": "\"Niall Litchfield\" <n-litchfield@audit-commission.gov.uk>", "msg_from_op": false, "msg_subject": "Re: NewYork Bombing: SQL server bomb proof!!" }, { "msg_contents": "peace_flower wrote:\n\n> I can bet that open-source code SQL server like PostgreSQL is bomb-proof\n> and even in case of nuclear war world-wide,\n> the source code of PostgreSQL will be very safe at some point on the\n> planet and can easily be distributed and multiplied rapidly.\n>\n> The reason being internet is nuclear-bomb proof storage center and\n> PostgreSQL source code is stored on internet..\n>\n> This is a point why large enterprises should consider PostgreSQL.\n>\n> Internet SQL server like PostgreSQL will gain rapid momentum\n> world-wide....\n\nRemind me to care after it is over.\n\nIsn't there some usenet group like alt.fantasy.after.wwIII where you can\npost?\n\nOh and thank you for posting this to every usenet group you could spell.\n\nDaniel A. 
Morgan\n\n", "msg_date": "Mon, 24 Sep 2001 17:37:40 -0700", "msg_from": "\"Daniel A. Morgan\" <Daniel.Morgan@attws.com>", "msg_from_op": false, "msg_subject": "Re: NewYork Bombing: SQL server bomb proof!!" } ]
[ { "msg_contents": "Hello,\nI have strange problem with seeing uncommited changes that were made inside trigger.\n\ni have function func1 that inserts one record into table tab1.\nThere is after trigger on that table tab1 which in turns \ninserts several records in another table tab2.\nThen in func1 i am trying to access new records from table \ntab2 and i get nothing. I do not see these records.\nIs it documented behaviour, i couldn't find anything in docs.\n\nThanks\n\n", "msg_date": "Sat, 15 Sep 2001 16:28:08 -0400", "msg_from": "Vlad Seryakov <vlad@crystalballinc.com>", "msg_from_op": true, "msg_subject": "Trigger commited problem" } ]
[ { "msg_contents": "\njust curious, is there any reason why a plperl RPM package isn't included\nwith the \"official\" distribution (from postgres website)?\n\nNo incredible deal just to build it myself, just wondering.\n\n-rchit\n", "msg_date": "Sat, 15 Sep 2001 15:43:29 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "plperl rpm package" } ]
[ { "msg_contents": "This patch warns about oid/xid wraparound during VACUUM. Apply the part\npeople consider appropriate. I may not be around before beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.195\ndiff -c -r1.195 vacuum.c\n*** src/backend/commands/vacuum.c\t2001/05/25 15:45:32\t1.195\n--- src/backend/commands/vacuum.c\t2001/06/13 21:01:37\n***************\n*** 17,22 ****\n--- 17,23 ----\n #include <fcntl.h>\n #include <unistd.h>\n #include <time.h>\n+ #include <limits.h>\n #include <sys/time.h>\n #include <sys/types.h>\n #include <sys/file.h>\n***************\n*** 159,166 ****\n static bool enough_space(VacPage vacpage, Size len);\n static void init_rusage(VacRUsage *ru0);\n static char *show_rusage(VacRUsage *ru0);\n \n- \n /*\n * Primary entry point for VACUUM and ANALYZE commands.\n */\n--- 160,167 ----\n static bool enough_space(VacPage vacpage, Size len);\n static void init_rusage(VacRUsage *ru0);\n static char *show_rusage(VacRUsage *ru0);\n+ static void check_limits(void);\n \n /*\n * Primary entry point for VACUUM and ANALYZE commands.\n */\n***************\n*** 236,241 ****\n--- 237,243 ----\n \n \t/* clean up */\n \tvacuum_shutdown();\n+ \tcheck_limits();\n }\n \n /*\n***************\n*** 2645,2648 ****\n--- 2647,2674 ----\n \t\t\t (int) (ru1.tv.tv_usec - ru0->tv.tv_usec) / 10000);\n \n \treturn result;\n+ }\n+ \n+ /*\n+ *\tcheck if we are near OID or XID wraparound\n+ */\n+ static void check_limits(void)\n+ {\n+ \tOid nextOid;\n+ \n+ \t/* If we are 75% to the limit, warn the user */\n+ \tif (GetCurrentTransactionId() > UINT_MAX - UINT_MAX / 4)\n+ \t\telog(NOTICE,\"You are 
%.0f%% toward the limit for transaction ids.\\n\"\n+ \t\t\t\"\\t Dumping your databases, running initdb, and reloading will reset\\n\"\n+ \t\t\t\"\\t the transaction id counter.\",\n+ \t\t\tGetCurrentTransactionId() / (float)UINT_MAX * 100);\n+ \n+ \t/* If we are 75% to the limit, warn the user */\n+ \tGetNewObjectId(&nextOid);\n+ \tif (nextOid > OID_MAX - OID_MAX / 4)\n+ \t\telog(NOTICE,\"You are %.0f%% toward the limit for object ids.\\n\"\n+ \t\t\t\"\\t If you are not using object ids as primary keys, dumping your\\n\"\n+ \t\t\t\"\\t databases, running initdb, and reloading will reset\\n\"\n+ \t\t\t\"\\t the oid counter.\",\n+ \t\t\t(float)nextOid / OID_MAX * 100);\n }", "msg_date": "Sun, 16 Sep 2001 00:25:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Warning about oid/xid wraparound" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This patch warns about oid/xid wraparound during VACUUM. Apply the part\n> people consider appropriate. I may not be around before beta.\n\nNone of it is appropriate anymore...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 01:03:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warning about oid/xid wraparound " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > This patch warns about oid/xid wraparound during VACUUM. Apply the part\n> > people consider appropriate. I may not be around before beta.\n> \n> None of it is appropriate anymore...\n\nOID wraparound isn't a problem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 16 Sep 2001 01:05:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Warning about oid/xid wraparound" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> None of it is appropriate anymore...\n\n> OID wraparound isn't a problem?\n\nWell, it could be a problem if an app is relying on uniqueness of OIDs\nwithout having installed an unique index on OIDs. However, I do not\nthink it is the business of the backend to issue nuisance warnings that\nwill come out whether an app is using unsafe practices or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 01:22:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warning about oid/xid wraparound " } ]
[ { "msg_contents": "\nThis will most likely screw some ppl up, and fix others ...\n\nCVSROOT has now moved to the new machine, finally ... and I've cleaned up\npathing ... and CVS_RSH=ssh now works again too ...\n\nSo, now CVSROOT is accessible as:\n\n:pserver:<userid>@cvs.postgresql.org:/cvsroot\n\n-or-\n\n:ext:<userid>@cvs.postgresql.org:/cvsroot\n\n\t- where CVS_RSH is set to ssh\n\nNow, I don't imagine it being *that* simple to move it over, so please let\nme know if anyone sees any errors on commits or stuff like that ...\n\nFor those with already checked out repositories, from everything I've\nread, all you have to do is change the value of the CVS/Root file to point\nto the new Root ...\n\nThis, I'm also figuring, is going to fix the email's going out as me for\n-committers ...\n\nanoncvs.postgresql.org is going to be out of sync until, most likely,\ntomorrow, for anyone trying to use that ... anoncvs is *no longer*\navailable through the main cvs repository either ...\n\n\n\n", "msg_date": "Sun, 16 Sep 2001 13:05:45 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Major change to CVS effective immediately ..." }, { "msg_contents": "----- Original Message ----- \nFrom: Marc G. Fournier <scrappy@hub.org>\nSent: Sunday, September 16, 2001 1:05 PM\n\n\n> Now, I don't imagine it being *that* simple to move it over, so please let\n> me know if anyone sees any errors on commits or stuff like that ...\n\nCVSweb seems to be screwed up.\nIt gives an error:\n\n------------8<------------\nError\nError: $CVSROOT not found!\nThe server on which the CVS tree lives is probably down. Please try again in a few minutes. \n------------8<------------\n\nwhen I access it with my browser.\n\nSerguei\n\n", "msg_date": "Sun, 16 Sep 2001 13:20:08 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." 
}, { "msg_contents": "\nCVSWeb is going to be broken for a day or two, while Vince and I work out\nsome issues as regards moving the main www site over to the same server\n... but thanks for pointing it out, as I hadn't thought about it ..\n\nOn Sun, 16 Sep 2001, Serguei Mokhov wrote:\n\n> ----- Original Message -----\n> From: Marc G. Fournier <scrappy@hub.org>\n> Sent: Sunday, September 16, 2001 1:05 PM\n>\n>\n> > Now, I don't imagine it being *that* simple to move it over, so please let\n> > me know if anyone sees any errors on commits or stuff like that ...\n>\n> CVSweb seems to be screwed up.\n> It gives an error:\n>\n> ------------8<------------\n> Error\n> Error: $CVSROOT not found!\n> The server on which the CVS tree lives is probably down. Please try again in a few minutes.\n> ------------8<------------\n>\n> when I access it with my browser.\n>\n> Serguei\n>\n>\n\n", "msg_date": "Sun, 16 Sep 2001 13:37:00 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "----- Original Message ----- \nFrom: Marc G. Fournier <scrappy@hub.org>\nSent: Sunday, September 16, 2001 1:37 PM\n\n> CVSWeb is going to be broken for a day or two, while Vince and I work out\n> some issues as regards moving the main www site over to the same server\n> ... but thanks for pointing it out, as I hadn't thought about it ..\n\nIt's just because the web interface is the only way for me for now\nI can access the CVS repository. Otherwise, I wouldn't have noticed the problem\nmost likely...\n\nSerguei\n\n", "msg_date": "Sun, 16 Sep 2001 13:54:10 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "Marc G. 
Fournier writes:\n\n> Now, I don't imagine it being *that* simple to move it over, so please let\n> me know if anyone sees any errors on commits or stuff like that ...\n\ncvs commit\n ...\ncvs server: failed to create lock directory for \\\n `/cvsroot/pgsql/doc/src/sgml' (/cvsroot/pgsql/doc/src/sgml/#cvs.lock): \\\n Permission denied\ncvs server: lock failed - giving up\ncvs [server aborted]: lock failed - giving up\n\n> For those with already checked out repositories, from everything I've\n> read, all you have to do is change the value of the CVS/Root file to point\n> to the new Root ...\n\nCVS/Repository as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 16 Sep 2001 20:01:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "On Sun, 16 Sep 2001, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Now, I don't imagine it being *that* simple to move it over, so please let\n> > me know if anyone sees any errors on commits or stuff like that ...\n>\n> cvs commit\n> ...\n> cvs server: failed to create lock directory for \\\n> `/cvsroot/pgsql/doc/src/sgml' (/cvsroot/pgsql/doc/src/sgml/#cvs.lock): \\\n> Permission denied\n> cvs server: lock failed - giving up\n> cvs [server aborted]: lock failed - giving up\n\nOkay, everything looks well to me on the server itself ... first stupid\nquestion, what is the IP of cvs.postgresql.org for you? it should be\n85.28, but you might still have the old one cached ...\n\n\n", "msg_date": "Sun, 16 Sep 2001 14:07:59 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Major change to CVS effective immediately ..." 
}, { "msg_contents": ">> For those with already checked out repositories, from everything I've\n>> read, all you have to do is change the value of the CVS/Root file to point\n>> to the new Root ...\n\n> CVS/Repository as well.\n\nWouldn't you have to apply this change in *every* /CVS subdirectory of\nthe tree? A fresh checkout seems easier.\n\n\nHowever, I find that the permissions problem stymies a checkout too...\nand yes, my DNS cache is up to date:\n\n$ nslookup cvs.postgresql.org\nServer: localhost\nAddress: 127.0.0.1\n\nNon-authoritative answer:\nName: mail.postgresql.org\nAddress: 216.126.85.28\nAliases: cvs.postgresql.org\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 15:35:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ... " }, { "msg_contents": "Tom Lane writes:\n\n> Wouldn't you have to apply this change in *every* /CVS subdirectory of\n> the tree? A fresh checkout seems easier.\n\nI just ran this on my tree:\n\nfind -name Root -exec perl -pi -e 's,:pserver:([a-z]+)\\@postgresql.org:/home/projects/pgsql/cvsroot,:pserver:\\1\\@cvs.postgresql.org:/cvsroot,' '{}' ';'\nfind -name Repository -exec perl -pi -e 's,/home/projects/pgsql/cvsroot/(.*),/cvsroot/\\1,' '{}' ';'\n\nand it looks like it didn't botch.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 16 Sep 2001 22:02:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ... " }, { "msg_contents": "Thus spake Marc G. Fournier\n> This will most likely screw some ppl up, and fix others ...\n> \n> CVSROOT has now moved to the new machine, finally ... and I've cleaned up\n> pathing ... 
and CVS_RSH=ssh now works again too ...\n> \n> So, now CVSROOT is accessible as:\n> \n> :pserver:<userid>@cvs.postgresql.org:/cvsroot\n> \n> -or-\n> \n> :ext:<userid>@cvs.postgresql.org:/cvsroot\n> \n> \t- where CVS_RSH is set to ssh\n\nNone of this helps me. I still can't get into that system. Can you\nplease check this and get me back in. If I can't get in I will have\nto move PyGreSQL back to my own CVS repository and I think it is good\nfor both projects to leave it where it is. Will you be in today? I\ncan call you later to discuss this and hopefully resolve it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 18 Sep 2001 07:25:52 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "* \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n|\n| anoncvs.postgresql.org is going to be out of sync until, most likely,\n| tomorrow, for anyone trying to use that ... anoncvs is *no longer*\n| available through the main cvs repository either ...\n\nIs anoncvs.postgresql.org working yet ? \n\nI just tried :\n\n# cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/home/projects/pgsql/cvsroot login\n\nand \n\n# cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login\n\nwith \"postgresql\" as password. In both cases the response was this :\n\n\ncvs [login aborted]: authorization failed: server anoncvs.postgresql.org rejected access\n\nregards, \n\n Gunnar\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "18 Sep 2001 15:24:53 +0200", "msg_from": "Gunnar Rønning <gunnar@polygnosis.com>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." 
}, { "msg_contents": "\n\n\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\n\nOn 18 Sep 2001, Gunnar Rønning wrote:\n\n> * \"Marc G. Fournier\" <scrappy@hub.org> wrote:\n> |\n> | anoncvs.postgresql.org is going to be out of sync until, most likely,\n> | tomorrow, for anyone trying to use that ... anoncvs is *no longer*\n> | available through the main cvs repository either ...\n>\n> Is anoncvs.postgresql.org working yet ?\n>\n> I just tried :\n>\n> # cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/home/projects/pgsql/cvsroot login\n>\n> and\n>\n> # cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/cvsroot login\n>\n> with \"postgresql\" as password. In both cases the response was this :\n>\n>\n> cvs [login aborted]: authorization failed: server anoncvs.postgresql.org rejected access\n>\n> regards,\n>\n> Gunnar\n>\n> --\n> Gunnar Rønning - gunnar@polygnosis.com\n> Senior Consultant, Polygnosis AS, http://www.polygnosis.com/\n>\n\n", "msg_date": "Wed, 19 Sep 2001 10:14:44 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "\ncan you ssh into cvs.postgresql.org?\n\nOn Tue, 18 Sep 2001, D'Arcy J.M. Cain wrote:\n\n> Thus spake Marc G. Fournier\n> > This will most likely screw some ppl up, and fix others ...\n> >\n> > CVSROOT has now moved to the new machine, finally ... and I've cleaned up\n> > pathing ... and CVS_RSH=ssh now works again too ...\n> >\n> > So, now CVSROOT is accessible as:\n> >\n> > :pserver:<userid>@cvs.postgresql.org:/cvsroot\n> >\n> > -or-\n> >\n> > :ext:<userid>@cvs.postgresql.org:/cvsroot\n> >\n> > \t- where CVS_RSH is set to ssh\n>\n> None of this helps me. I still can't get into that system. Can you\n> please check this and get me back in. If I can't get in I will have\n> to move PyGreSQL back to my own CVS repository and I think it is good\n> for both projects to leave it where it is. 
Will you be in today? I\n> can call you later to discuss this and hopefully resolve it.\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n\n", "msg_date": "Wed, 19 Sep 2001 10:15:27 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "On Wed, Sep 19, 2001 at 10:14:44AM -0400, Marc G. Fournier wrote:\n> \n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot\n\nWhile trying a cvs update, I get\n\n? ChangeLogs/libecpg.so.3.1.1\n? ChangeLogs/HTML\n? ChangeLogs/GTAGS\n? ChangeLogs/GPATH\n? ChangeLogs/GRTAGS\n? ChangeLogs/GSYMS\n? ChangeLogs/libpqpp.h\ncannot create_adm_p /tmp/cvs-serv27285/ChangeLogs\n\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 19 Sep 2001 18:39:16 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "Thus spake Marc G. Fournier\n> can you ssh into cvs.postgresql.org?\n\nYes! I could not do that before. Did you fix something?\n\nI will be sending some PyGreSQL changes over shortly. Thanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 19 Sep 2001 14:48:18 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." 
}, { "msg_contents": "While checking out TOT pgsql today onto an HFS+ file system (case-preserving, case-insensitive), I hit the following CVS conflict:\n\npgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n\nHFS+ can not store two differerent files in a path that differs only by case.\n\nMac OS X users will be grateful if you can find a way to rename one of these files.\n\n-pmb\n\n\n", "msg_date": "Wed, 19 Sep 2001 12:41:10 -0700", "msg_from": "Peter Bierman <bierman@apple.com>", "msg_from_op": false, "msg_subject": "Case sensitive file names" }, { "msg_contents": "Peter Bierman writes:\n\n> While checking out TOT pgsql today onto an HFS+ file system (case-preserving, case-insensitive), I hit the following CVS conflict:\n>\n> pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n>\n> HFS+ can not store two differerent files in a path that differs only by case.\n\nRemove both of these files and update again. The files were recently\nrenamed to have a consistent case-ness.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 19 Sep 2001 22:47:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Case sensitive file names" }, { "msg_contents": "At 10:47 PM +0200 9/19/01, Peter Eisentraut wrote:\n>Peter Bierman writes:\n>\n>> While checking out TOT pgsql today onto an HFS+ file system (case-preserving, case-insensitive), I hit the following CVS conflict:\n>>\n>> pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n>>\n>> HFS+ can not store two differerent files in a path that differs only by case.\n>\n>Remove both of these files and update again. 
The files were recently\n>renamed to have a consistent case-ness.\n\n\nThis was from an anoncvs HEAD/TOT checkout I did into an empty directory less than an hour ago.\n\nIf it's already been fixed (yay!), the fix isn't at anoncvs yet.\n\n-pmb\n\n\n", "msg_date": "Wed, 19 Sep 2001 14:03:04 -0700", "msg_from": "Peter Bierman <bierman@apple.com>", "msg_from_op": false, "msg_subject": "Re: Case sensitive file names" }, { "msg_contents": "----- Original Message -----\nFrom: Peter Bierman <bierman@apple.com>\nSent: Wednesday, September 19, 2001 3:41 PM\n\n> While checking out TOT pgsql today onto an HFS+ file system (case-preserving, case-insensitive), I hit the following CVS conflict:\n>\n> pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n\nI thought the latter was supposed to go (it was supposed to be renamed to the former, wasn't it?)\n\nS.\n\n", "msg_date": "Wed, 19 Sep 2001 17:03:10 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Case sensitive file names" }, { "msg_contents": "Peter Bierman:\n\n> While checking out TOT pgsql today onto an HFS+ file system\n(case-preserving, case-insensitive), I hit the following CVS conflict:\n>\n> pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map\npgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n>\n> HFS+ can not store two differerent files in a path that differs only by\ncase.\n>\n> Mac OS X users will be grateful if you can find a way to rename one of\nthese files.\n\n\nI had that problem today to -- I work under Cygwin on Windows 2000; NTFS is\nalso case-preserving but case-insensitive:\n\nU pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\ncvs checkout: move away pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map;\nit is in the way\nC pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map\n\n\n\nCheers,\n\nColin\n\n\n\n", "msg_date": "Wed, 19 Sep 2001 23:04:38 +0200", "msg_from": "\"Colin 't Hart\" 
<cthart@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Case sensitive file names" }, { "msg_contents": "Peter Bierman writes:\n\n> At 10:47 PM +0200 9/19/01, Peter Eisentraut wrote:\n> >Peter Bierman writes:\n> >\n> >> While checking out TOT pgsql today onto an HFS+ file system (case-preserving, case-insensitive), I hit the following CVS conflict:\n> >>\n> >> pgsql/src/backend/utils/mb/Unicode/utf8_to_alt.map pgsql/src/backend/utils/mb/Unicode/utf8_to_ALT.map\n> >>\n> >> HFS+ can not store two differerent files in a path that differs only by case.\n> >\n> >Remove both of these files and update again. The files were recently\n> >renamed to have a consistent case-ness.\n>\n>\n> This was from an anoncvs HEAD/TOT checkout I did into an empty directory less than an hour ago.\n\nIndeed, someone forgot to remove the old file. I just removed it a second\nago, so you should be fine now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 19 Sep 2001 23:34:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Case sensitive file names" }, { "msg_contents": "Peter Bierman <bierman@apple.com> writes:\n> If it's already been fixed (yay!), the fix isn't at anoncvs yet.\n\nI think there is some lag between the master CVS and anoncvs now.\nMarc, is that correct? How much lag?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 20:44:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "CVS vs anoncvs (was Re: Case sensitive file names)" }, { "msg_contents": "\nshould be four hours, but I haven't had a chance, with the newest\nworm/virus going around right now having killed our core router yesterday,\nto redirect the sync'ag with the new server ... 
will do that first thing\ntomorrow ...\n\nOn Wed, 19 Sep 2001, Tom Lane wrote:\n\n> Peter Bierman <bierman@apple.com> writes:\n> > If it's already been fixed (yay!), the fix isn't at anoncvs yet.\n>\n> I think there is some lag between the master CVS and anoncvs now.\n> Marc, is that correct? How much lag?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Wed, 19 Sep 2001 21:08:59 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: CVS vs anoncvs (was Re: Case sensitive file names)" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n\n> should be four hours, but I haven't had a chance, with the newest\n> worm/virus going around right now having killed our core router yesterday,\n> to redirect the sync'ag with the new server ... will do that first thing\n> tomorrow ...\n>\n> On Wed, 19 Sep 2001, Tom Lane wrote:\n>\n> > Peter Bierman <bierman@apple.com> writes:\n> > > If it's already been fixed (yay!), the fix isn't at anoncvs yet.\n> >\n> > I think there is some lag between the master CVS and anoncvs now.\n> > Marc, is that correct? How much lag?\n> >\n> > regards, tom lane\n> >\n\nIt's definitely more than 16 hours. I still can't see M. Meskes' commits\n(16:09 MEST, 10:09 EDT)\n\nWhile you're at it, could you please fix this error:\n\n~/pgsql-cvs/pgsql > cvs -z3 update -dP\ncannot create_adm_p /tmp/cvs-serv2966/ChangeLogs\nPermission denied\n\nfor i in `find -type d ! -name CVS ` ; do (cd $i ; cvs -z3 update -l -d )\ndone\ncvs server: Updating .\ncvs server: Updating .\n[.....]\n\nThis works somehow but is really ugly and bandwidth-wasting. 
This even occurs\nwith a fresh checkout:\n\n~/pgsql-cvs/tmp > cvs -d\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql/ChangeLogs\n\ncvs server: Updating pgsql/ChangeLogs\nU pgsql/ChangeLogs/ChangeLog-7.1-7.1.1\nU pgsql/ChangeLogs/ChangeLog-7.1RC1-to-7.1RC2\nU pgsql/ChangeLogs/ChangeLog-7.1RC2-to-7.1RC3\nU pgsql/ChangeLogs/ChangeLog-7.1RC3-to-7.1rc4\nU pgsql/ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3\nU pgsql/ChangeLogs/ChangeLog-7.1beta3-to-7.1beta4\nU pgsql/ChangeLogs/ChangeLog-7.1beta4-to-7.1beta5\nU pgsql/ChangeLogs/ChangeLog-7.1beta5-to-7.1beta6\nU pgsql/ChangeLogs/ChangeLog-7.1beta6-7.1RC1\nU pgsql/ChangeLogs/ChangeLog-7.1rc4-7.1\n~/pgsql-cvs/tmp > cd pgsql/\n~/pgsql-cvs/tmp/pgsql > cvs update\ncannot create_adm_p /tmp/cvs-serv4350/ChangeLogs\nPermission denied\n\nYours\n Christof\n\n\n", "msg_date": "Thu, 20 Sep 2001 08:25:57 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "anoncvs troubles (was Re: CVS vs anoncvs)" }, { "msg_contents": "> While you're at it, could you please fix this error:\n>\n> ~/pgsql-cvs/pgsql > cvs -z3 update -dP\n> cannot create_adm_p /tmp/cvs-serv2966/ChangeLogs\n> Permission denied\n\nInstead of checking out over your existing checkout, checkout to a new dir\nand there's no problem.\n\nChris\n\n", "msg_date": "Thu, 20 Sep 2001 15:31:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: anoncvs troubles (was Re: CVS vs anoncvs)" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n> > While you're at it, could you please fix this error:\n> >\n> > ~/pgsql-cvs/pgsql > cvs -z3 update -dP\n> > cannot create_adm_p /tmp/cvs-serv2966/ChangeLogs\n> > Permission denied\n>\n> Instead of checking out over your existing checkout, checkout to a new dir\n> and there's no problem.\n\nSorry, I want to update (only the differences cross the wire) or diff, not\ncheck out all again twice a day (which 
works).\n\nChristof\n\n\n", "msg_date": "Thu, 20 Sep 2001 12:46:13 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: anoncvs troubles (was Re: CVS vs anoncvs)" }, { "msg_contents": "\nOkay, its updated effective a few minutes ago ... and the upate should\nwork as well ...\n\nOn Thu, 20 Sep 2001, Christof Petig wrote:\n\n> \"Marc G. Fournier\" wrote:\n>\n> > should be four hours, but I haven't had a chance, with the newest\n> > worm/virus going around right now having killed our core router yesterday,\n> > to redirect the sync'ag with the new server ... will do that first thing\n> > tomorrow ...\n> >\n> > On Wed, 19 Sep 2001, Tom Lane wrote:\n> >\n> > > Peter Bierman <bierman@apple.com> writes:\n> > > > If it's already been fixed (yay!), the fix isn't at anoncvs yet.\n> > >\n> > > I think there is some lag between the master CVS and anoncvs now.\n> > > Marc, is that correct? How much lag?\n> > >\n> > > regards, tom lane\n> > >\n>\n> It's definitely more than 16 hours. I still can't see M. Meskes' commits\n> (16:09 MEST, 10:09 EDT)\n>\n> While you're at it, could you please fix this error:\n>\n> ~/pgsql-cvs/pgsql > cvs -z3 update -dP\n> cannot create_adm_p /tmp/cvs-serv2966/ChangeLogs\n> Permission denied\n>\n> for i in `find -type d ! -name CVS ` ; do (cd $i ; cvs -z3 update -l -d )\n> done\n> cvs server: Updating .\n> cvs server: Updating .\n> [.....]\n>\n> This works somehow but is really ugly and bandwidth-wasting. 
This even occurs\n> with a fresh checkout:\n>\n> ~/pgsql-cvs/tmp > cvs -d\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql/ChangeLogs\n>\n> cvs server: Updating pgsql/ChangeLogs\n> U pgsql/ChangeLogs/ChangeLog-7.1-7.1.1\n> U pgsql/ChangeLogs/ChangeLog-7.1RC1-to-7.1RC2\n> U pgsql/ChangeLogs/ChangeLog-7.1RC2-to-7.1RC3\n> U pgsql/ChangeLogs/ChangeLog-7.1RC3-to-7.1rc4\n> U pgsql/ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3\n> U pgsql/ChangeLogs/ChangeLog-7.1beta3-to-7.1beta4\n> U pgsql/ChangeLogs/ChangeLog-7.1beta4-to-7.1beta5\n> U pgsql/ChangeLogs/ChangeLog-7.1beta5-to-7.1beta6\n> U pgsql/ChangeLogs/ChangeLog-7.1beta6-7.1RC1\n> U pgsql/ChangeLogs/ChangeLog-7.1rc4-7.1\n> ~/pgsql-cvs/tmp > cd pgsql/\n> ~/pgsql-cvs/tmp/pgsql > cvs update\n> cannot create_adm_p /tmp/cvs-serv4350/ChangeLogs\n> Permission denied\n>\n> Yours\n> Christof\n>\n>\n>\n\n", "msg_date": "Thu, 20 Sep 2001 08:00:22 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: anoncvs troubles (was Re: CVS vs anoncvs)" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n\n> Okay, its updated effective a few minutes ago ... 
and the upate should\n> work as well ...\n\nShould ...\n\n~/pgsql-cvs/pgsql/src/interfaces/ecpg/preproc > cvs status preproc.y\ncvs server: failed to create lock directory for\n`/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc'\n(/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/#cvs.lock): Permission denied\n\ncvs server: failed to obtain dir lock in repository\n`/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc'\ncvs [server aborted]: read lock failed - giving up\n\n~/pgsql-cvs/pgsql > cvs update\ncannot create_adm_p /tmp/cvs-serv48812/ChangeLogs\nPermission denied\n\n~/pgsql-cvs/pgsql > cvs update -l\ncvs server: Updating .\ncvs server: failed to create lock directory for `/projects/cvsroot/pgsql'\n(/projects/cvsroot/pgsql/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository `/projects/cvsroot/pgsql'\ncvs [server aborted]: read lock failed - giving up\n\n... but it does not, yet.\n\nChristof\n\n\n", "msg_date": "Thu, 20 Sep 2001 16:38:09 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: anoncvs troubles (was Re: CVS vs anoncvs)" }, { "msg_contents": "On Wed, 19 Sep 2001, D'Arcy J.M. Cain wrote:\n\n> Thus spake Marc G. Fournier\n> > can you ssh into cvs.postgresql.org?\n>\n> Yes! I could not do that before. Did you fix something?\n\nNope :(\n\n\n", "msg_date": "Fri, 21 Sep 2001 08:16:59 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Major change to CVS effective immediately ..." }, { "msg_contents": "Thus spake Marc G. Fournier\n> > > can you ssh into cvs.postgresql.org?\n> >\n> > Yes! I could not do that before. Did you fix something?\n> \n> Nope :(\n\nJust weird. Oh well. All's well, etc.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 21 Sep 2001 08:21:36 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Major change to CVS effective immediately ..." } ]
[ { "msg_contents": "I am trying to use pg_dump as user root but using another username for the\ndatabase.\n\nI set USER=\"postgres\" and exported it. It still tries to login using the\nroot username. PGPASSWORD can be set and exported, is there a way I can do\nthis for the username?\n\nAlso, could it be possible for someone to program support in pg_dump to\nhave a -U flag/argument to set the username that pg_dump will use?\n\np.s. I'd like not to use -S and expect.\n\nThankyou.\n\n", "msg_date": "Mon, 17 Sep 2001 09:46:29 +1000 (EST)", "msg_from": "speedboy <speedboy@nomicrosoft.org>", "msg_from_op": true, "msg_subject": "pg_dump and -U flag." }, { "msg_contents": "nevermind, PGUSER is the variable\n\n", "msg_date": "Mon, 17 Sep 2001 09:55:27 +1000 (EST)", "msg_from": "speedboy <speedboy@nomicrosoft.org>", "msg_from_op": true, "msg_subject": "Re: pg_dump and -U flag." } ]
[ { "msg_contents": "This is a immediate order to COMPLETELY STOP and CEASE all development\nof MySQL database server.\nThis is due to rapid changes in the Global Economic conditions..\n\nYou do not have time to deal with one powerful SQL server like\nPostgreSQL, where do you have time to\ndeal with two types of SQL servers. Question is why have two??\n\nThere is no time to deal with two open source SQL servers. After doing\nresearch, it is recommended\nthat you stick with just PostgreSQL. With huge amount of efforts MySQL\ncan be brought closer to PostgreSQL\nlevel (and perhaps it may NEVER be possible to bring MySQL to the level\nof technology of PostgreSQL). Even\nif it is done it will be a waste of time..\n\nThe WORLD economy started taking nose dive for the last 2 years.\nLast year a mild global economic recession started which forced\nthousands of companies world-wide closing down.\nLast year millions of dot-com went bust.\n\nIt is predicted that there is a impending \"World-War-III\" like situation\nis developing in the middle-east and Afghanistan\nwhich may have significant effect in Asian and European countries.\nBut that may NOT have lot of economic effects on North/South American\ncountries like Brazil, USA, Canada, Mexico..\n\nNevertheless, overall economy of the globe will get the impact.\n\nAnd, hence drop off the MySQL now and migrate all your data to\nPostgreSQL..\n\nBy the way, PostgreSQL runs on all platforms - All unixes, linux, Apple\nMacintosh and MS Windows 98/NT/2000\n\n", "msg_date": "Mon, 17 Sep 2001 01:19:56 GMT", "msg_from": "peace_flower <\"alavoor[AT]\"@yahoo.com>", "msg_from_op": true, "msg_subject": "MySQL development MUST immdediately cease - Due to GlobalEconomic\n\tcondition.." }, { "msg_contents": "Huh???\n\nWhat's with this?\n\nCheerio,\nLink.\n\nAlways follow the doctor's prescription when taking medication. 
When in\ndoubt clarify with the doctor.\n\n\nAt 01:19 AM 9/17/01 GMT, peace_flower wrote:\n>This is a immediate order to COMPLETELY STOP and CEASE all development\n>of MySQL database server.\n>This is due to rapid changes in the Global Economic conditions..\n>\n>You do not have time to deal with one powerful SQL server like\n>PostgreSQL, where do you have time to\n\n\n", "msg_date": "Mon, 17 Sep 2001 22:45:12 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: MySQL development MUST immdediately cease - Due to" }, { "msg_contents": "\nQuick question... Am I the only person getting\nrather annoyed by these messages that have \nstarted coming through recently? Once was\nokay, but this is getting rediculous.\n\n\n", "msg_date": "Mon, 17 Sep 2001 08:11:00 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL development MUST immdediately cease - Due to" }, { "msg_contents": "I have a table that keeps permissions for other tables in the database.\nWhat I want to do is create a rule that will insert into the permissions\ntable a default permission whenever a table row is inserted. Within the\npermission table, I keep the default permissions to use for each table.\nI index these by using a table_id=0. So, the rule would need to get the\ndefault permission, and insert a new row into the permissions table. 
The\n(abbreviated) perm table would look something like this:\n\nCREATE TABLE perm (\nid\t\tSERIAL,\ntable_name\tvarchar(30),\ntable_id\tinteger,\npermission\tinteger\n)\n\nexample default settings for each table\n---------------------------------------\nINSERT INTO perm ('table1', 0, 1);\nINSERT INTO perm ('table2', 0, 1);\n\t.\n\t.\n\t.\n\nso, whenever a row in another table is inserted, I want to update the\nperm table with the default perm.\n\nI tried this rule:\n\nCREATE RULE insert_perm_table1 AS\n ON INSERT TO table1\nDO\n INSERT INTO perm (table_name, table_id, permission) \t\tSELECT\ntable_name, new.table1_id, permission\n \tFROM perm\n \tWHERE table_name='table1' and table_id=0;\n\n\n\nSo, basically I am taking the default entry, and substituting the\ntable_id of 0 for the new one, and then inserting. The rule executes,\nbut I get different table_ids for the 2 tables (table1 and perm). The\ntable1 entry has an 'table_id' of one greater than the perm table entry.\n\nAnyone have any idea why? Is there a better solution (triggers maybe)?\n\nthanks,\n\n--brett\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 17 Sep 2001 14:55:07 -0700", "msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>", "msg_from_op": false, "msg_subject": "RRules using existing data" }, { "msg_contents": "----- Original Message ----- \nFrom: peace_flower <alavoor@yahoo.com>\nSent: Sunday, September 16, 2001 9:19 PM\nSubject: [HACKERS] MySQL development MUST immdediately cease - Due to GlobalEconomic condition..\n\nDear Peace Flower,\n\nWould you mind me asking a question? Why do you post this\nkind of stuff here? Just to make everyone smile in here?\nWell, if so it's good thing when everyone smiles, but\na joke repeated more than once is, I'm sorry, stupidity.\nIf you're trying to promote PostgreSQL, then it's not\nquite right place to promote it. 
If bring down MySQL is\nyour intention, it's not the right place either. You seem to have\ngood pursuing and promotional writing skills, so why don't\nyou employ them in some article and post it somewhere in the news?\nThen you can post a link here to the article if it gets published,\nand you will get adequate critics from various sources. Maybe you\nwill get paid even. Just think of it.\n\nPeaceful and flowerful regards,\nSerguei\n\nPS: I'm sorry for the off-topic post.\n\n", "msg_date": "Mon, 17 Sep 2001 19:24:54 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "[OT] Re: MySQL development MUST immdediately cease - Due to\n\tGlobalEconomic condition.." }, { "msg_contents": "I agree. \nThis mailing list is not a forum to express \naversions to other software products, \nno matter how justified these aversions are. \n\n", "msg_date": "Tue, 18 Sep 2001 9:40:40 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: MySQL development MUST immdediately cease - Due to" }, { "msg_contents": "To whoever sent this posting (being Al Dev, or someone spoofing),\n\nPlease stop posting to the PostgreSQL mailing lists.\n\nYou are not helping PostgreSQL with your postings, instead you are\ninciting anger and hostility.\n\nCease and desist these postings immediately.\n\n\nJustin Clift\n\n\npeace_flower wrote:\n> \n> This is a immediate order to COMPLETELY STOP and CEASE all development\n> of MySQL database server.\n> This is due to rapid changes in the Global Economic conditions..\n> \n> You do not have time to deal with one powerful SQL server like\n> PostgreSQL, where do you have time to\n> deal with two types of SQL servers. Question is why have two??\n> \n> There is no time to deal with two open source SQL servers. After doing\n> research, it is recommended\n> that you stick with just PostgreSQL. 
With huge amount of efforts MySQL\n> can be brought closer to PostgreSQL\n> level (and perhaps it may NEVER be possible to bring MySQL to the level\n> of technology of PostgreSQL). Even\n> if it is done it will be a waste of time..\n> \n> The WORLD economy started taking nose dive for the last 2 years.\n> Last year a mild global economic recession started which forced\n> thousands of companies world-wide closing down.\n> Last year millions of dot-com went bust.\n> \n> It is predicted that there is a impending \"World-War-III\" like situation\n> is developing in the middle-east and Afghanistan\n> which may have significant effect in Asian and European countries.\n> But that may NOT have lot of economic effects on North/South American\n> countries like Brazil, USA, Canada, Mexico..\n> \n> Nevertheless, overall economy of the globe will get the impact.\n> \n> And, hence drop off the MySQL now and migrate all your data to\n> PostgreSQL..\n> \n> By the way, PostgreSQL runs on all platforms - All unixes, linux, Apple\n> Macintosh and MS Windows 98/NT/2000\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 18 Sep 2001 22:20:01 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL development MUST immdediately cease - Due to " } ]
[ { "msg_contents": "If removing artificial limitiations and improving 24/7 usability were\nconsiderations for 7.1 and 7.2, what will be the focus for 7.3?\n\nPersonally, I would like to see the SQL features completed, ie:\n\nADD PRIMARY KEY\nSET NULL / SET NOT NULL\nDROP COLUMN\nDROP PRIMARY KEY\nDROP UNIQUE\nDROP FOREIGN KEY\n\nI plan to work on all of these (except DROP COLUMN!), but my time is\nsomewhat limited these days.\n\nWhat will everyone else be doing, particularly now that Great Bridge is\ngone?\n\nChris\n\n", "msg_date": "Mon, 17 Sep 2001 09:57:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "7.3" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> If removing artificial limitiations and improving 24/7 usability were\n> considerations for 7.1 and 7.2, what will be the focus for 7.3?\n\nWhatever people want to work on. The above-stated descriptions of 7.1\nand 7.2 work aren't bad, but they are after-the-fact summaries of what\ngot done, not plans agreed on in advance. I learned awhile ago that\ntrying to herd this bunch of cats is pointless ;-)\n\nFWIW, I think that schemas and tablespaces will be near the top of my\nown priority list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Sep 2001 22:40:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 " } ]
[ { "msg_contents": "I just took a dreadful look at the RPMs. I've managed to build something\nthat resembles a 7.2 package, but there are a number of things that we\nshould talk about so this ends up being useful.\n\n* The {pgaccess} parameter doesn't do anything AFAICT. PgAccess is\ninstalled whenever Tk support is configured (which is correct, IMO).\nMaybe this is just a legacy item?\n\n* I removed the {plperl} parameter, because PL/Perl now builds by default\non Linux. Should plperl continue to stay its own package? I'd say yes,\nbecause you don't need in on every client.\n\n* Related to previous, the tcl package currently includes client and\nserver components. I'd say this is more useful as two separate packages.\n\n* Similar issues with PL/Python\n\n* Is libpgtcl.a supposed to be in the -libs package, while libpgtcl.so is\nin -tcl? What about libpgtcl.h? Currently, the -devel package has an\nimplicit dependency on Tcl, which should probably not be there.\n\n* How long do we want to keep the libpq.so.2.0 symlink?\n\n* I fail to understand the motivation behind the way the -contrib package\nis done. You seem to be installing the source code. I scrapped that and\ninstalled the binaries the way it was designed to be.\n\n* The -docs package is misleading. Maybe it should be called -docs-devel\nor something. However, I'm having a hard time understanding how people\ncan make use of this.\n\n* I request that rh-pgdump.sh and postgresql-dump be renamed to something\nthat conveys a semantic difference from pg_dump. Possibly, these files\nshould not be installed into /usr/bin if they're not general purpose.\n\n* What about the JDBC driver? 
I think the driver should be compiled in\nplace by whatever JDK the build system provides.\n\n* Start thinking about how to package National Language Support.\n\n* Lots of dependencies failing to be declared.\n\nThere are also a number of plain bug fixes that need to be integrated.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 05:22:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "7.2 RPMs" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n<snip>\n> * What about the JDBC driver? I think the driver should be compiled in\n> place by whatever JDK the build system provides.\n\nDon't know about the rest of this stuff, but I totally agree here. \nThere should be a dependency on Ant and some kind of JDK, and it should\ncompile the driver specifically. This way I reckon the user is\nWAY more likely to have something which works well for them.\n\nBumped into a problem with this just over a week ago.\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 17 Sep 2001 13:30:42 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> I just took a dreadful look at the RPMs. 
I've managed to build something\n> that resembles a 7.2 package, but there are a number of things that we\n> should talk about so this ends up being useful.\n> \n> * The {pgaccess} parameter doesn't do anything AFAICT. PgAccess is\n> installed whenever Tk support is configured (which is correct, IMO).\n> Maybe this is just a legacy item?\n\nFor 7.1.3, it does make a difference....\n\n%if %pgaccess\n\t# pgaccess installation\n\tpushd src/bin\n\tinstall -m 755 pgaccess/pgaccess $RPM_BUILD_ROOT/usr/bin\n\tmkdir -p $RPM_BUILD_ROOT/usr/share/pgsql/pgaccess\n\tinstall -m 644 pgaccess/main.tcl $RPM_BUILD_ROOT/usr/share/pgsql/pgaccess\n\ttar cf - pgaccess/lib pgaccess/images | tar xf - -C $RPM_BUILD_ROOT/usr/share/pgsql\n\tcp -a pgaccess/doc/html ../../doc/pgaccess\n\tcp pgaccess/demo/*.sql ../../doc/pgaccess\n\tpopd\n%endif\n\n(in addition to the actual package). The 7.2 build procedure might\ndiffer. It's still useful to have several packages, as it under some\ncircumstances it would be useful to have tk bindings but not ship\npgaccess. E.g. if you were to ship an Asian version, this segregation\nmight be useful.\n> \n> * I removed the {plperl} parameter, because PL/Perl now builds by default\n> on Linux. Should plperl continue to stay its own package? I'd say yes,\n> because you don't need in on every client.\n\nNot only that, but you very often don't want to build it. If you have\na static perl package, plperl can't be created. It will sort of work\non IA32, but bomb out elsewhere. Ideally, the configure process should\nfigure out this on it's own (you can't create dynamic extensions\nlinking in a static lib).\n\n> * Related to previous, the tcl package currently includes client and\n> server components. I'd say this is more useful as two separate packages.\n\nIt might, if you imply that tcl is useful at all ;).\n \n> * Similar issues with PL/Python\n\nSame issues wrt. static libraries as for perl, but could easily be\nseparated. 
\n \n> * Is libpgtcl.a supposed to be in the -libs package, while libpgtcl.so is\n> in -tcl? What about libpgtcl.h? Currently, the -devel package has an\n> implicit dependency on Tcl, which should probably not be there.\n> \n> * How long do we want to keep the libpq.so.2.0 symlink?\n\nA long time :)\n \n> * I fail to understand the motivation behind the way the -contrib package\n> is done. You seem to be installing the source code. I scrapped that and\n> installed the binaries the way it was designed to be.\n\nOften the only source of docs, but I wouldn't miss it. I've often\nwanted to kill this inconsistency myself.\n\n> * I request that rh-pgdump.sh and postgresql-dump be renamed to something\n> that conveys a semantic difference from pg_dump. Possibly, these files\n> should not be installed into /usr/bin if they're not general\n> purpose.\n\nThey are programs serving specific dumping purposes.\n \n> * What about the JDBC driver? I think the driver should be compiled in\n> place by whatever JDK the build system provides.\n\nMany build systems don't have a JDK, as there are no open (or even\ndistributable) JDKs.\n\n> * Start thinking about how to package National Language Support.\n\nLook at the find_lang macro.\n\n> * Lot's of dependencies failing to be declared.\n\nFor the finished packages, those are generated automatically. As for\nbuild dependencies, I'm unaware of any missing ones.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "17 Sep 2001 11:00:21 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Trond Eivind Glomsrød writes:\n\n> > * The {pgaccess} parameter doesn't do anything AFAICT. PgAccess is\n> > installed whenever Tk support is configured (which is correct, IMO).\n> > Maybe this is just a legacy item?\n>\n> For 7.1.3, it does make a difference....\n>\n> %if %pgaccess\n[...]\n> %endif\n>\n> (in addition to the actual package). 
The 7.2 build procedure might\n> differ. It's still useful to have several packages, as it under some\n> circumstances it would be useful to have tk bindings but not ship\n> pgaccess. E.g. if you were to ship an Asian version, this segregation\n> might be useful.\n\nGiven that pgtksh is rather small in size I don't know if that's worth the\ncontortions. However, note that pgaccess is still built if you turn on Tk\nbut turn off %pgaccess. (There was also a plan to make pgaccess use\npgtksh, but it's not happening for 7.2.)\n\n> Not only that, but you very often don't want to build it. If you have\n> a static perl package, plperl can't be created. It will sort of work\n> on IA32, but bomb out elsewhere. Ideally, the configure process should\n> figure out this on it's own (you can't create dynamic extensions\n> linking in a static lib).\n\nThere are provisions in the source for figuring this out automatically.\nCurrently, the only \"figuring\" it does is to allow it on Linux. (It is my\nunderstanding that it works on Linux independent of the CPU architecture.\nIn the past there have been problems with insufficient dynamic loader\nimplementations, but there is no principal design obstacle.)\n\nBut it would really be of advantage if distributors (i.e., you) supplied a\nshared libperl by default. There are at least two high profile users\n(PostgreSQL and Apache) running into this problem.\n\n> > * I request that rh-pgdump.sh and postgresql-dump be renamed to something\n> > that conveys a semantic difference from pg_dump. Possibly, these files\n> > should not be installed into /usr/bin if they're not general\n> > purpose.\n>\n> They are programs serving specific dumping purposes.\n\nMaybe they should be named to reflect these purposes? Currently,\npostgresql-dump is just another spelling of pg_dump, and rh-pgdump.sh\nconveys the meaning of \"Red Hat's (better/different) pg_dump\".\n\n> > * What about the JDBC driver? 
I think the driver should be compiled in\n> place by whatever JDK the build system provides.\n>\n> Many build systems don't have a JDK, as there are no open (or even\n> distributable) JDKs.\n\n From Red Hat I would have expected the answer \"use gcj\". ;-) (I don't\nknow how complete the class library is there, and Ant probably doesn't\nsupport it anyway.) However, two questions arise:\n\n* If the build system doesn't have a JDK, why do you need a JDBC driver?\n\n* There is currently no \"official\" source of PostgreSQL JDBC driver\nbinaries. So I don't know how you plan to obtain a precompiled jar\nwithout making it yourself.\n\n\nWell, do you have time to work on this and do you keep the RPM input files\nunder version control somewhere, so I can send some incremental patches?\nThe preliminary spec file patch is already the same size as the spec file.\n:-/\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 20:21:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "On Sunday 16 September 2001 11:22 pm, Peter Eisentraut wrote:\n> I just took a dreadful look at the RPMs. I've managed to build something\n> that resembles a 7.2 package, but there are a number of things that we\n> should talk about so this ends up being useful.\n\nFirst, thanks for taking a look. However, I don't think 'dreadful' is a \nreally appropriate word here. But I'll let it slide. RPM packaging in \ngeneral can be a dreadful experience -- and that's what I'm going to assume \nthat you meant.\n\nAnd, while your list is a usable list of things, the lack of addressing any \nof these items in no way prevents a package of 7.2 from being 'useful' for \nvarious degrees of usefulness.\n\n> * The {pgaccess} parameter doesn't do anything AFAICT. 
PgAccess is\n> installed whenever Tk support is configured (which is correct, IMO).\n> Maybe this is just a legacy item?\n\nAs I've not yet had the time to build any 7.2 RPMsets, I'll have to look. It \nmay very well be legacy if 7.2's makefiles do such a decision as to install \npgaccess and where to install it.\n\nBut, pgaccess!=tk_client_support and might not even be desired by a tk client \nuser.\n\n> * I removed the {plperl} parameter, because PL/Perl now builds by default\n> on Linux. Should plperl continue to stay its own package? I'd say yes,\n> because you don't need in on every client.\n\nPL/perl requires a dynamic load libperl.so -- which Red Hat doesn't ship. If \nconfigure can test for a dynamic libperl (which is being done in the makefile \nas of 7.1.3), then that's where that decision ought to be made. However, \nKarl DeBisschop can shed much light on this particular subject, and his \nopinion should be weighed, as he knows of some interesting situations.\n\nAs to it remaining a separate package -- absolutely. PL/perl is a \nserver-side package, while the perl client might be installed without a \nserver on its system. Don't want to force the server on a perl client \nmachine.\n\n> * Related to previous, the tcl package currently includes client and\n> server components. I'd say this is more useful as two separate packages.\n\n> * Similar issues with PL/Python\n\nI agree, and had planned on doing just this.\n\n> * Is libpgtcl.a supposed to be in the -libs package, while libpgtcl.so is\n> in -tcl? What about libpgtcl.h? Currently, the -devel package has an\n> implicit dependency on Tcl, which should probably not be there.\n\nHuh? The libs package is intended to be the base client libraries that all \nclients need. The tcl library is only needed by the tcl client. The \nlibpgtcl.a static lib is only needed in development, and doesn't depend upon \ntcl directly. Although I have yet to see a RedHat system without tcl \ninstalled.... 
The tcl package could, I guess, inherit libpgtcl.a from the \n_devel_ package (libpgtcl.a lives there, not in libs) without problems.\n\n> * How long do we want to keep the libpq.so.2.0 symlink?\n\nGood question. Trond's answer is a 'long time' -- When is the next major \nuprev in the library going to be? This needs to be researched -- the \nquestion that needs answering here is 'how many third-party packages (such as \nthe php postgresql interface; the DBI postgresql interface, etc) are going to \nbe broken by this?'\n\n> * I fail to understand the motivation behind the way the -contrib package\n> is done. You seem to be installing the source code. I scrapped that and\n> installed the binaries the way it was designed to be.\n\nContrib, to my eyes, is both an example set as well as useful stuff. As 7.1 \nwas the first set of RPM's with contrib compiled at all (previously, the \nentire contrib tree was packaged as source code for documentation!), having \nthe source as well enables examples -- however, I understand both sides of \nthat.\n\nHowever, I'm concerned about your wording 'the way it was designed to be' -- \nwould you mind explaining exactly what you meant (a copy of your spec file \nwill explain far better than any narrative could, BTW)?\n\n> * The -docs package is misleading. Maybe it should be called -docs-devel\n> or something. However, I'm having a hard time understanding how people\n> can make use of this.\n\n'docs-sgml' perhaps? Maybe they want to try their hand at using an SGML \neditor/publishing system to generate various docs formats? It was previously \npackaged as part of the main package and I split it out.\n\n> * I request that rh-pgdump.sh and postgresql-dump be renamed to something\n> that conveys a semantic difference from pg_dump. Possibly, these files\n> should not be installed into /usr/bin if they're not general purpose.\n\nHmmm. Any suggestions as to location and name? 
Might I suggest \n'kludge-to-get-around-postgresql-lack-of-upgradability' -- or is that too \ninflammatory? :-) However, I tend to agree -- /usr/bin might not be the best \nlocation for these scripts.\n\n> * What about the JDBC driver? I think the driver should be compiled in\n> place by whatever JDK the build system provides.\n\nGot an open source JDK suggestion? One that is _standard_ for the target \ndistributions?\n\n> * Start thinking about how to package National Language Support.\n\nExplain what you mean by this.\n\n> * Lot's of dependencies failing to be declared.\n\nMost dependencies are automatic and do not need declaration. Can you give a \nlist of undeclared dependencies that are not auto generated during the build \nthat are not part of a standard development system for building, and part of \na standard installation for run-time?\n\n> There are also a number of plain bug fixes that need to be integrated.\n\nOk. List, please?\n\nA copy of your initial spec file and patchset would also be useful.\n\nOnce again, thanks for the look-through. Your previous look-throughs were very \nuseful, and I appreciate them. And I'll go ahead and apologize if my \ncomments seem to be short and maybe even grumpy -- I just got done with an 80 \nhour week involving a 25 kilowatt AM broadcast transmitter installation, so \nI'm not at my best at the moment -- but I'm not intending to be short or \ngrumpy (although my wife might disagree.... :-)).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Sep 2001 14:23:05 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Trond Eivind Glomsrød writes:\n> \n> > > * The {pgaccess} parameter doesn't do anything AFAICT. 
PgAccess is\n> > > installed whenever Tk support is configured (which is correct, IMO).\n> > > Maybe this is just a legacy item?\n> >\n> > For 7.1.3, it does make a difference....\n> >\n> > %if %pgaccess\n> [...]\n> > %endif\n> >\n> > (in addition to the actual package). The 7.2 build procedure might\n> > differ. It's still useful to have several packages, as it under some\n> > circumstances it would be useful to have tk bindings but not ship\n> > pgaccess. E.g. if you were to ship an Asian version, this segregation\n> > might be useful.\n> \n> Given that pgtksh is rather small in size I don't know if that's worth the\n> contortions. However, note that pgaccess is still built if you turn on Tk\n> but turn off %pgaccess. (There was also a plan to make pgaccess use\n> pgtksh, but it's not happening for 7.2.)\n\nIt may be built, but at least you don't get the package... Personally,\nI wouldn't mind separating the database core from all the other things\nbundled with it (drivers, support programs). It seems a little cleaner.\n\n> > Not only that, but you very often don't want to build it. If you have\n> > a static perl package, plperl can't be created. It will sort of work\n> > on IA32, but bomb out elsewhere. Ideally, the configure process should\n> > figure out this on it's own (you can't create dynamic extensions\n> > linking in a static lib).\n> \n> There are provisions in the source for figuring this out automatically.\n> Currently, the only \"figuring\" it does is to allow it on Linux. (It is my\n> understanding that it works on Linux independent of the CPU architecture.\n> In the past there have been problems with insufficient dynamic loader\n> implementations, but there is no principal design obstacle.)\n\nNo. It works on IA32, but breaks elsewhere.\n \n> But it would really be of advantage if distributors (i.e., you) supplied a\n> shared libperl by default. 
There are at least two high profile users\n> (PostgreSQL and Apache) running into this problem.\n\nIt's not unlikely to happen for the next major series (we try hard to\nstay binary compatible within a series).\n \n> Maybe they should be named to reflect these purposes? Currently,\n> postgresql-dump is just another spelling of pg_dump, and rh-pgdump.sh\n> conveys the meaning of \"Red Hat's (better/different) pg_dump\".\n\nIt was basically \"doh, the existing dump script is very broken and we\nfreeze very soon\" a release or two ago. I think Lamar was the one who\ndiscovered it and I the one who wrote it rather quickly.\n \n> > > * What about the JDBC driver? I think the driver should be compiled in\n> > > place by whatever JDK the build system provides.\n> >\n> > Many build systems don't have a JDK, as there are no open (or even\n> > distributable) JDKs.\n> \n> From Red Hat I would have expected the answer \"use gcj\". ;-) (I don't\n> know how complete the class library is there, and Ant probably doesn't\n> support it anyway.) However, two questions arise:\n\ngcj is nice, but far from complete. It's also Java 1.1 without AWT,\nAFAIR, and most interesting stuff use 1.2 and up now.\n\n> * If the build system doesn't have a JDK, why do you need a JDBC\n> driver?\n\nThat you can use with gcj :). The main reason it's useful is that\nothers can install their own JDK, typically when running servlets or\nother server based Java apps.\n\n> Well, do you have time to work on this and do you keep the RPM input files\n> under version control somewhere, so I can send some incremental\n> patches?\n\nIf you send it, I can have a first look. As for time, that's something\nother people have. 
And when I have it, I don't have anything to use it\nfor either (maxed out with 5 weeks unused vacation now, but have no\nidea what to use most of it for)\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "17 Sep 2001 15:02:36 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "On Monday 17 September 2001 02:21 pm, Peter Eisentraut wrote:\n> Trond Eivind Glomsrød writes:\n>>\n\n> Given that pgtksh is rather small in size I don't know if that's worth the\n> contortions. However, note that pgaccess is still built if you turn on Tk\n> but turn off %pgaccess. (There was also a plan to make pgaccess use\n> pgtksh, but it's not happening for 7.2.)\n\nBuilt!=shipped in the RPMset. Lots of things are built -- but if it's not in \nthe %files list, it don't get packaged.\n\n> Maybe they should be named to reflect these purposes? Currently,\n> postgresql-dump is just another spelling of pg_dump, and rh-pgdump.sh\n> conveys the meaning of \"Red Hat's (better/different) pg_dump\".\n\nI've already suggested a name that fits the purpose.\n\n> * If the build system doesn't have a JDK, why do you need a JDBC driver?\n\nTo use a compiled bytecode java application built against our JDBC with a JRE?\n\n> * There is currently no \"official\" source of PostgreSQL JDBC driver\n> binaries. So I don't know how you plan to obtain a precompiled jar\n> without making it yourself.\n\nYes, we would have to build it now. However, the question still looms: \n_which_ JDK should be used to build it for maximum JVM/JRE compatibility for \nthe bytecode distribution? I've asked this question before, and no consensus \nwas reached.\n\n> Well, do you have time to work on this and do you keep the RPM input files\n> under version control somewhere, so I can send some incremental patches?\n\nI will have time shortly. 
\n\nIt has been discussed in the past on two separate occasions about putting \nthe spec file into CVS at postgresql.org, but, again, no consensus was \nreached and no action was taken by core to implement that. If I had to I \ncould set up my own CVS repository -- but I haven't needed to as yet.\n\nSend a patch to me and Trond against the last PGDG release specfile. If you \nchange the patchset, it needs to be included, as well as patches to any \nscripts distributed.\n\n> The preliminary spec file patch is already the same size as the spec file.\n\n??? That's pretty big. E-mail me and Trond your changes, please.\n\nWe're getting ready to go into beta, and I was getting ready to ramp up to \ndeal with 7.2beta RPMs anyway. This just quickens the issue.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Sep 2001 16:48:56 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Trond Eivind Glomsrød writes:\n\n> > There are provisions in the source for figuring this out automatically.\n> > Currently, the only \"figuring\" it does is to allow it on Linux. (It is my\n> > understanding that it works on Linux independent of the CPU architecture.\n> > In the past there have been problems with insufficient dynamic loader\n> > implementations, but there is no principal design obstacle.)\n>\n> No. It works on IA32, but breaks elsewhere.\n\nLibtool seems to think otherwise. And the people who provided the\npatches to libtool are the ones who should know best.\n\n> > But it would really be of advantage if distributors (i.e., you) supplied a\n> > shared libperl by default. 
There are at least two high profile users\n> > (PostgreSQL and Apache) running into this problem.\n>\n> It's not unlikely to happen for the next major series (we try hard to\n> stay binary compatible within a series).\n\nYou don't break binary compatibility by providing a shared library\nalongside a static one.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 23:36:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Trond Eivind Glomsrød writes:\n> \n> > > There are provisions in the source for figuring this out automatically.\n> > > Currently, the only \"figuring\" it does is to allow it on Linux. (It is my\n> > > understanding that it works on Linux independent of the CPU architecture.\n> > > In the past there have been problems with insufficient dynamic loader\n> > > implementations, but there is no principal design obstacle.)\n> >\n> > No. It works on IA32, but breaks elsewhere.\n> \n> Libtool seems to think otherwise. And the people who provided the\n> patches to libtool are the ones who should know best.\n\nDynamic code works on those platforms. What doesn't work is dlopen()\nof code not compiled with -fpic (which means extensions linking with\nstatic libraries). I've not seen libtool claim otherwise, but it would\nbe broken. Another can of worms is nsswitch inside glibc, which in\nsome circumstances will use a dynamic module in a statically linked\nprogram. \n \n> > > But it would really be of advantage if distributors (i.e., you) supplied a\n> > > shared libperl by default. 
There are at least two high profile users\n> > (PostgreSQL and Apache) running into this problem.\n> >\n> > It's not unlikely to happen for the next major series (we try hard to\n> > stay binary compatible within a series).\n> \n> You don't break binary compatibility by providing a shared library\n> alongside a static one.\n\nThis means backward as well... eg. perl packages for RHL 7.1 should run\non RHL 7 as well. Same for RHL 7.2, if we make such a release.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "17 Sep 2001 17:40:01 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "Lamar Owen writes:\n\n> And, while your list is a usable list of things, the lack of addressing any\n> of these items in no way prevents a package of 7.2 from being 'useful' for\n> various degrees of usefulness.\n\n...useful in the sense that the work I'm doing becomes useful.\n\n> > * Is libpgtcl.a supposed to be in the -libs package, while libpgtcl.so is\n> > in -tcl? What about libpgtcl.h? Currently, the -devel package has an\n> > implicit dependency on Tcl, which should probably not be there.\n>\n> Huh? The libs package is intended to be the base client libraries that all\n> clients need. The tcl library is only needed by the tcl client. The\n> libpgtcl.a static lib is only needed in development, and doesn't depend upon\n> tcl directly. Although I have yet to see a RedHat system without tcl\n> installed.... The tcl package could, I guess, inherit libpgtcl.a from the\n> _devel_ package (libpgtcl.a lives there, not in libs) without problems.\n\nMy interpretation of dependency is \"this file cannot be made use of unless\nthat package is installed\". Hence, libpgtcl.h and libpgtcl.a have a\ndependency on the tcl package and therefore the postgresql-devel package\nhas that same dependency. 
That is one thing, and other interpretations\nmay be valid.\n\nThe other thing is that no matter how you arrange it, libpgtcl.a and\nlibpgtcl.so should be in the same package. I placed them in -devel for\nnow since that is what you seemed to be intending anyway.\n\n> > * How long do we want to keep the libpq.so.2.0 symlink?\n>\n> Good question. Trond's answer is a 'long time' -- When is the next major\n> uprev in the library going to be?\n\n\"Never\" is my best guess.\n\n> Contrib, to my eyes, is both an example set as well as useful stuff.\n\nThat used to be sort of true. Currently, contrib is more \"useful stuff\"\nthan example. Examples are in the documentation and the tutorial\ndirectory.\n\n> However, I'm concerned about your wording 'the way it was designed to be' --\n> would you mind explaining exactly what you meant (a copy of your spec file\n> will explain far better than any narrative could, BTW)?\n\nI mean contrib is intended to be compiled, installed, and used.\n\n> 'docs-sgml' perhaps? Maybe they want to try their hand at using an SGML\n> editor/publishing system to generate various docs formats?\n\nDifficult without having a real source tree available. Plus, people that\nwant to work on documentation also have the option of getting the\npostgresql-docs-xxx.tar.gz source package that contains the documentation\nsources.\n\n> Hmmm. Any suggestions as to location and name? Might I suggest\n> 'kludge-to-get-around-postgresql-lack-of-upgradability' -- or is that too\n> inflammatory? :-)\n\nNo, but it's longer than the 14 characters that POSIX allows for file\nnames. ;-) But \"upgrade\" is a reasonable start.\n\n> > * What about the JDBC driver? I think the driver should be compiled in\n> > place by whatever JDK the build system provides.\n>\n> Got an open source JDK suggestion? One that is _standard_ for the target\n> distributions?\n\nThere is no standard C compiler in the target distributions either...\n\nYou don't need a standard JDK either. 
You want to build the driver to fit\nthe JDK that the distribution provides. If the distribution doesn't\nprovide Java support, you don't need a JDBC driver.\n\nNote that the choice of JDK is actually hidden from the build process.\nYou just need Ant, which comes in RPM form.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 17 Sep 2001 23:44:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "On Monday 17 September 2001 05:40 pm, Trond Eivind Glomsrød wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > You don't break binary compatibility by providing a shared library\n> > alongside a static one.\n\n> This mean backward as well... eg. perl packages for RHL 7.1 should run\n> on RHL 7 as well. Same for RHL 7.2, if we make such a release.\n\nDistributors' needs are very different from our needs, Peter. Maybe a \npotential Red Hat 8 can do such. However, the backwards compatibility issue \nis some rub.\n\nOur PGDG packages, OTOH, don't have to be limited in that way. Which is one \nreason you may want to start there, not the Red Hat package (which is close, \nbut not identical, to ours).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Sep 2001 17:52:00 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" }, { "msg_contents": "On Monday 17 September 2001 05:44 pm, Peter Eisentraut wrote:\n> ...useful in the sense that the work I'm doing becomes useful.\n\nOk. My mind is a little muddy right now, and different interpretations of \nwordings aren't coming easily. \n\n> The other thing is that no matter how you arrange it, libpgtcl.a and\n> libpgtcl.so should be in the same package. 
I placed them in -devel for\n> now since that is what you seemed to be intending anyway.\n\nYes, that is and was my intentions; I just missed the significance of what \nyou said. Of course, the .so and the .a need to be together in this instance \n(like the libpq.so/libpq.a instance as well, which was addressed earlier in \nthe 7.1.x RPMset cycle).\n\n> > Contrib, to my eyes, is both an example set as well as useful stuff.\n\n> That used to be sort of true. Currently, contrib is more \"useful stuff\"\n> than example. Examples are in the documentation and the tutorial\n> directory.\n\nThen the change is valid. \n\n> > However, I'm concerned about your wording 'the way it was designed to be'\n> > -- would you mind explaining exactly what you meant (a copy of your spec\n> > file will explain far better than any narrative could, BTW)?\n\n> I mean contrib is intended to be compiled, installed, and used.\n\nOk. I was more talking about location in the filesystem, but I get your \npoint.\n\n> > 'docs-sgml' perhaps? Maybe they want to try their hand at using an SGML\n> > editor/publishing system to generate various docs formats?\n\n> Difficult without having a real source tree available.\n\nHmmm. I've not tried to do anything with the SGML yet....\n\n> > Hmmm. Any suggestions as to location and name? Might I suggest\n> > 'kludge-to-get-around-postgresql-lack-of-upgradability' -- or is that too\n> > inflammatory? :-)\n\n> No, but it's longer than the 14 characters that POSIX allows for file\n> names. ;-) But \"upgrade\" is a reasonable start.\n\nBut we already had a pg_upgrade in the tarball. 'pg_migrate' perhaps? And \nit _is_ a kludge.\n\n> > > * What about the JDBC driver? I think the driver should be compiled in\n> > > place by whatever JDK the build system provides.\n> >\n> > Got an open source JDK suggestion? 
One that is _standard_ for the target\n> > distributions?\n\n> There is no standard C compiler in the target distributions either...\n\nGcc is the de facto linux distribution standard, and one can reasonably \nassume that a standard C compiler is present. The same is not true of JDK's, \nAFAIK.\n\n> Note that the choice of JDK is actually hidden from the build process.\n> You just need Ant, which comes in RPM form.\n\nHmmm. How does one get started with 'Ant' and a JDK? I personally don't use \nJava -- but heretofore it's been easy to get jars of the JDBC to package for \npeople who do use Java. Is a JDBC RPM package something people are actively \nusing? I _have_ received a few questions from people trying to use the JDBC \nRPM, so I think it is a useful thing to have.\n\nSomebody who knows Java: enlighten me on the portability or lack thereof of \nour distributed JDBC RPM's jar, please. If I can build a reasonably portable \njar of our JDBC, I'm willing to try.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 17 Sep 2001 18:18:13 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2 RPMs" } ]
[ { "msg_contents": "Greetings:\n\nI am (have attempted to) porting PostgreSQL 7.1.2 to Lynx RT OS (http://www.lynx.com). This OS does not support dynamic loading (on non-MIPS platforms). \nAfter successful build, I did a 'make check' and\nam getting the following errors. I would appreciate if anyone can point me toward the possible causes.\n\nRegards,\n-Chris\n\nHistory of changes:\n- renamed socket.h, pg_socket.h (to remove conflicts with <socket.h> which the compiler does not differentiate).\n- Changed 'yylval' in bootscanner.l to 'extern YYSTYPE yylval' to remove conflict with bootparse.c\n- and a few minor changes.\n\nThe errors are:\nFile: postmaster.log\n-----------------------------------------------------------------------------------------\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: reaping dead processes...\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: ServerLoop: handling reading 5\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: ServerLoop: handling reading 5\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: ServerLoop: handling writing 5\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: BackendStartup: pid 34 user postgres db template1 socket 5\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster child[34]: starting with (postgres -d5 -v131072 -p template1 )\nFindExec: found \"/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postgres\" using argv[0]\nDEBUG: connection: host=127.0.0.1 user=postgres database=template1\nDEBUG: InitPostgres\nDEBUG: _mdfd_blind_getseg: couldn't open /pg/postgresql-7.1.2/src/test/regress/./tmp_check/data/global/0: File or directory doesn't exist\nFATAL 1: cannot write block -1 of 0/0 blind: File or directory doesn't exist\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: 
exit(0)\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: reaping dead processes...\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: CleanupProc: pid 34 exited with status 0\nFast Shutdown request at Sun Sep 16 12:07:53 2001\nDEBUG: shutting down\nNOTICE: Please reconnect to the database system and repeat your query.to terminate your database system connection and exit.\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: reaping dead processes...\n/pg/postgresql-7.1.2/src/test/regress/./tmp_check/install//pg/bin/postmaster: Shutdown proc 12 exited with status 256\n--------------------------------------------------------------------------------------\n\n\n\n\n\n'make check' output\n.......................................................................................\n$ tail log/make.check.out\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I. -I../../src/include -D__NO_INCLUDE_WARN__ -DHAVE_SYS_SHM_H -DREFINT_VERBOSE -c autoinc.c -o autoinc.o\n/bin/sh ./pg_regress --debug --temp-install --top-builddir=../../.. --schedule=./parallel_schedule --multibyte=\n============== creating temporary installation ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 65432 with pid 88\n============== creating database \"regression\" ==============\npsql: FATAL 1: cannot write block -1 of 0/0 blind: File or directory doesn't exist\ncreatedb: database creation failed\npg_regress: createdb failed\n\n\n\n\n", "msg_date": "Sun, 16 Sep 2001 21:15:00 -0700", "msg_from": "\"Chris Hakimi\" <chakimi@sj.symbol.com>", "msg_from_op": true, "msg_subject": "Port of 7.1.2 to Lynx RT-OS" } ]
[ { "msg_contents": "Due to international instability, I have shortened my vacation and will\nremain in the US. Because of this, I _will_ be attending the upcoming\nOSDN database summit:\n\n http://www.osdn.com/conferences/osdb2/\n\nI had previously reported I could not attend.\n\nAlso, I will be around for more of the beta period. I should be out\nmost of this week and at the OSDN summit the next week but should return\nlate September to be involved in the beta process.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 17 Sep 2001 00:59:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "OSDN conference, vacation" } ]
[ { "msg_contents": "Hi All,\n\n(sorry for the .questions/.hackers crosspost)\n\nI'm a relatively new Linux / PostgreSQL user, and I've been scouring for a \nfew weeks now to find an answer to my problem -- bugging newsgroups with \nmy newbie questions is my last resort.\n\nI have a Postgres DB that I wish to have automagically pg_dumped to a \ndated text file and burned to a Multi-Session CD.\n\nThese are two separate issues (I can't get either to work properly) that I \nneed help with -- the scripted database dump and then the sequential cd \nburn.\n\nHere is the command I tried using:\n\npg_dump -C -d myDb > myDbSept.out\n\n\nThe problem is that I need a login / password to execute this command, and \nit is asking for a password at an echo prompt! I tried using the PGUSER / \nPGPASSWORD variables, but they didn't work (or, at least I don't know how \nto use them properly)\n\nAs for the cd-burning component, that's a whole other mess I'm going to \nhave to bug the linux.cd people about.\n\nCould someone please nudge me in the right direction, or if they've \nalready done something like this (with the burn), perhaps post up a sample \nof their script?\n\nThanks ever so much,\nRums.\n\nREMOVErumsAThomeDOTcom\n", "msg_date": "Mon, 17 Sep 2001 06:01:16 GMT", "msg_from": "Rums Dabs <REMOTErumsTHIS@home.com>", "msg_from_op": true, "msg_subject": "CD-RW Scheduled Database Backup..." } ]
[ { "msg_contents": "Hi all,\n\nI was just looking through libpq large object code an noticed what seemed\nto be a bug. The function lo_create() is declared as Oid yet it contains\nthe following code:\n\n if (conn->lobjfuncs == (PGlobjfuncs *) NULL)\n {\n if (lo_initialize(conn) < 0)\n return -1;\n }\n\nIf lo_initialize returns < 0, you have some pretty serious problems - out\nof memory, conn is invalid, etc. However, casting -1 to Oid returns what\nseems to be a valid Oid. Shouldn't it return InvalidOid?\n\nGavin\n\n\n\n", "msg_date": "Mon, 17 Sep 2001 18:37:49 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "lo_creat() bug" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> If lo_initialize returns < 0, you have some pretty serious problems - out\n> of memory, conn is invalid, etc. However, casting -1 to Oid returns what\n> seems to be a valid Oid. Shouldn't it return InvalidOid?\n\nYes, evidently so. Good catch!\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2001 10:32:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: lo_creat() bug " } ]
[ { "msg_contents": "I'd like to have a hot standby server for our main PSQL...\n\nI've seen that there are a number of products that might do the trick,\nMission Critical Linux (and their Convolu Cluster/IBM FC SAN), Usogres\nand DBBalancer.\n\nAll seem (at least 'on paper') to solve this problem. The Convolu seem\na little expensive (and won't provide any loadbalancing which might be\nneeded later), usogres seems promising, but like vise don't support load\nbalancing. DBBalancer is the most promising, but the coders feel the\ndifferent ports for read/write is a little ... annoying :)\n\nIs there any other products I missed? \n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\ngenetic munitions radar security Saddam Hussein 767 spy Soviet attack\nkilled SEAL Team 6 president smuggle NORAD Iran\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "17 Sep 2001 12:34:22 +0200", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Hot spare PSQL server" }, { "msg_contents": ">>>>> \"Justin\" == Justin Clift <justin@postgresql.org> writes:\n\n Justin> Hi, There's also \"PostgreSQL Replicator\", which is at :\n\n Justin> http://pgreplicator.sourceforge.net\n\nUnfortunatly this seems to be asynchronous replication (you manually\nhave to initiate the syncronization)... 
\n\nI'd like to have synchronous replication...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nexplosion Rule Psix spy Ortega Panama arrangements PLO genetic Uzi\nTreasury Peking Waco, Texas quiche jihad security\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "17 Sep 2001 14:13:31 +0200", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: Hot spare PSQL server" }, { "msg_contents": "> \n> \n> Unfortunatly this seems to be asynchronous replication (you manually\n> have to initiate the syncronization)... \n> \n> I'd like to have synchronous replication...\n\nThere is a synchronous replication project here...\n\nhttp://www.greatbridge.org/project/pgreplication/projdisplay.php\nWe currently have a \"working model\" based on PostgreSQL-6.4.2, and are \nworking on\nimplementing the ideas in PostgreSQL-7.1.3.\n\nDarren\n\n", "msg_date": "Mon, 17 Sep 2001 10:42:43 -0400", "msg_from": "Darren Johnson <darren.johnson@home.com>", "msg_from_op": false, "msg_subject": "Re: Hot spare PSQL server" }, { "msg_contents": "Thus spake Turbo Fredriksson\n> Justin> http://pgreplicator.sourceforge.net\n> \n> Unfortunatly this seems to be asynchronous replication (you manually\n> have to initiate the syncronization)... \n\nAnd it also uses the GNU license.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Mon, 17 Sep 2001 12:56:36 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. 
Cain)", "msg_from_op": false, "msg_subject": "Re: Hot spare PSQL server" }, { "msg_contents": ">>>>> \"Darren\" == Darren Johnson <darren.johnson@home.com> writes:\n\n >> Unfortunatly this seems to be asynchronous replication (you\n >> manually have to initiate the syncronization)... I'd like to\n >> have synchronous replication...\n\n Darren> There is a synchronous replication project here...\n\n Darren> http://www.greatbridge.org/project/pgreplication/projdisplay.php\n Darren> We currently have a \"working model\" based on\n Darren> PostgreSQL-6.4.2, and are working on implementing the\n Darren> ideas in PostgreSQL-7.1.3.\n\nThe site is very broken... Where can I find information on how it works etc?\nThe FAQ link leads to an empty page...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\n767 pits radar colonel bomb congress CIA critical $400 million in gold\nbullion counter-intelligence nuclear nitrate strategic Khaddafi killed\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "18 Sep 2001 09:47:03 +0200", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: Hot spare PSQL server" } ]
[ { "msg_contents": "Hi Friends,\n\nI'm just a beginner, partaking in a project.\n\nIs there any way to list or edit the user-defined\nprocedures, functions and triggers?\n\nI have already loaded them into the postgres database\nand trying to edit and list it.\n\n\n\nthanks,\nfrancis\n\n\n__________________________________________________\nTerrorist Attacks on U.S. - How can you help?\nDonate cash, emergency relief information\nhttp://dailynews.yahoo.com/fc/US/Emergency_Information/\n", "msg_date": "Mon, 17 Sep 2001 04:59:22 -0700 (PDT)", "msg_from": "Francis Joseph <fjoseph_at@yahoo.com>", "msg_from_op": true, "msg_subject": "triggers - functions listing URGENT" } ]
[ { "msg_contents": "I suppose that a user group would be more appropriate for this question,\nbut I think there is some potential for further development here.\n\nI want to port a program that makes extensive use of\nBindColumn/DefineColumn semantics of ODBC, as well as bulk copy\noperations. My problem is that the current ODBC driver doesn't support\nbulk copy and libpq doesn't support Bind/Define. What should I be\nusing, and will I have to develop this functionality for one of these\ninterfaces?\n\n--Kevin\n", "msg_date": "Mon, 17 Sep 2001 10:19:44 -0400", "msg_from": "Kevin <TenToThe8th@yahoo.com>", "msg_from_op": true, "msg_subject": "BindColumn and Bulk Copy" } ]
[ { "msg_contents": "I was looking at Bruce's book online and was wondering if somewhere, \nthere was a description of the benefits/costs of a single server hosting \nmultiple databases, implemented with...\n\n1) a single postmaster running an instance with multiple databases, or\n2) multiple postmasters (running on different ports), each with one \ndatabase.\n\nThanks\n\n", "msg_date": "Mon, 17 Sep 2001 10:54:06 -0400", "msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "multiple postmasters vs multiple databases" } ]
[ { "msg_contents": "I want to mention that the number of patches submitted has dropped off\ndramatically. Seems people are prepared for beta and we should start\nbeta as soon as we can. I think the current plan is Friday.\n\nAlso, I will be on vacation this week. Tom will apply any patches that\nlook good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 17 Sep 2001 11:59:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta time" }, { "msg_contents": "> I want to mention that the number of patches submitted has dropped off\n> dramatically. Seems people are prepared for beta and we should start\n> beta as soon as we can. I think the current plan is Friday.\n\nI'm doing a substantial amount of work on the date/time types. Not\ncertain it will be ready for Friday. Will know more by then, of course\n;)\n\n - Thomas\n", "msg_date": "Mon, 17 Sep 2001 20:13:08 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Beta time" }, { "msg_contents": "I spent an hour or two trying to get my ADD PRIMARY KEY patch to work but\nI'm beginning to think my code is suffering from bit rot. Basically, during\nthe iteration over the indices on the table, looking for other primary\nindices, none are found.\n\nI am checking the indexStruct->indisprimary field, but it always resolves to\nfalse. indisunique works fine. It is a trivial change to the ADD UNIQUE\ncode, but it doesn't work. Viewing the system catalogs and '\\d' both show\nthe indices as primary, but the SearchSysCache funtion believes that they\nare not.\n\nIs DefineIndex for primary indices broken or something?\n\nI have tried putting a CommandCounterIncrement() in there out of\ndesperation, but it does nothing. 
Does anyone have any ideas? Might have\nto leave it for 7.3 I guess.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Tuesday, 18 September 2001 12:00 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Beta time\n>\n>\n> I want to mention that the number of patches submitted has dropped off\n> dramatically. Seems people are prepared for beta and we should start\n> beta as soon as we can. I think the current plan is Friday.\n>\n> Also, I will be on vacation this week. Tom will apply any patches that\n> look good.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 18 Sep 2001 10:44:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta time" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I am checking the indexStruct->indisprimary field, but it always resolves to\n> false. indisunique works fine. It is a trivial change to the ADD UNIQUE\n> code, but it doesn't work. Viewing the system catalogs and '\\d' both show\n> the indices as primary, but the SearchSysCache funtion believes that they\n> are not.\n\nDoesn't make any sense to me either. 
Want to post your code?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Sep 2001 00:40:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "Hi has anyone tried Intel's compiler yet?\n\nhttp://developer.intel.com/software/products/eval/\n\nJust wondering what would happen.\n\nCheerio,\nLink.\n\n", "msg_date": "Tue, 18 Sep 2001 15:29:57 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Anyone tried compiling postgresql with the Intel compilers?" }, { "msg_contents": "Attached is the CONSTR_PRIMARY switch block from command.c. I've marked the\nproblem test with '@@'.\n\nBasically the patch all seems to work properly, except that it doesn't\nrecognise existing primarty keys. ie. You can go ALTER TABLE test ADD\nPRIMARY KEY(a) multiple times and it will keep adding a primary key. My ADD\nUNIQUE patch that has been committed is virtually identical, and has no such\nproblem.\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Tuesday, 18 September 2001 12:41 PM\n> To: Christopher Kings-Lynne\n> Cc: Bruce Momjian; PostgreSQL-development\n> Subject: Re: [HACKERS] Beta time\n>\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I am checking the indexStruct->indisprimary field, but it\n> always resolves to\n> > false. indisunique works fine. It is a trivial change to the\n> ADD UNIQUE\n> > code, but it doesn't work. Viewing the system catalogs and\n> '\\d' both show\n> > the indices as primary, but the SearchSysCache funtion believes\n> that they\n> > are not.\n>\n> Doesn't make any sense to me either. 
Want to post your code?\n>\n> \t\t\tregards, tom lane\n>", "msg_date": "Wed, 19 Sep 2001 11:07:32 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Attached is the CONSTR_PRIMARY switch block from command.c. I've marked the\n> problem test with '@@'.\n\nHmmm .... this code has got a number of problems, but I don't see why\n*that* would fail. Anyone?\n\nWhat I do see:\n\n1. Should not \"break\" out of loop over indexes after detecting a\nmatching non-primary-key index. This allows detection of the NOTICE\ncondition to distract you from detecting the ERROR condition on a\nlater index. I'd suggest issuing the NOTICE inside the loop, actually,\nand not breaking at all. (See also #4)\n\n2. What's with the \"if (keyno > 0)\"? That breaks detection of\neverything on indexes on system columns, eg OID. (Of course, the\n\"rel_attrs[keyno - 1]\" reference doesn't work for system columns,\nbut sticking your head in the sand is no answer.)\n\n3. pfree'ing iname at the bottom doesn't strike me as a good\nidea; isn't that possibly part of your input querytree?\n\n4. If you're going to be so pedantic as to issue a warning notice about\na duplicate non-primary index, it'd be polite to give the name of that\nindex. Else how shall I know which index you think I should drop?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 01:07:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "> 1. Should not \"break\" out of loop over indexes after detecting a\n> matching non-primary-key index. This allows detection of the NOTICE\n> condition to distract you from detecting the ERROR condition on a\n> later index. I'd suggest issuing the NOTICE inside the loop, actually,\n> and not breaking at all. (See also #4)\n\nOK.\n\n> 2. 
What's with the \"if (keyno > 0)\"? That breaks detection of\n> everything on indexes on system columns, eg OID. (Of course, the\n> \"rel_attrs[keyno - 1]\" reference doesn't work for system columns,\n> but sticking your head in the sand is no answer.)\n\nThis is code that I've borrowed from elsewhere. I'll have a good look at it\ntho.\n\n> 3. pfree'ing iname at the bottom doesn't strike me as a good\n> idea; isn't that possibly part of your input querytree?\n\nHmmm. OK. What about in the case where iname is null and I give it a\nmakeObjectName?\n\n> 4. If you're going to be so pedantic as to issue a warning notice about\n> a duplicate non-primary index, it'd be polite to give the name of that\n> index. Else how shall I know which index you think I should drop?\n\nI'll improve the messages. As for me being pedantic - that's a result of\nwhat you specified as the best behaviour should be when I posted on the\nlist!\n\nYou may also want to look at the CONSTR_UNIQUE block that's already been\ncommitted, as it may also have similar issues. Any fixes I make to PRIMARY,\nI will also fix in UNIQUE...\n\nCheers,\n\nChris\n\n", "msg_date": "Wed, 19 Sep 2001 13:32:13 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> 3. pfree'ing iname at the bottom doesn't strike me as a good\n>> idea; isn't that possibly part of your input querytree?\n\n> Hmmm. OK. What about in the case where iname is null and I give it a\n> makeObjectName?\n\nDon't worry about it. palloc'd space will be recovered anyway at end of\nstatement. 
It's really not worth the code space to pfree every little\nbit of space you might use, except in routines that could be executed\nmany times in a single query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Sep 2001 09:41:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "> 1. Should not \"break\" out of loop over indexes after detecting a\n> matching non-primary-key index. This allows detection of the NOTICE\n> condition to distract you from detecting the ERROR condition on a\n> later index. I'd suggest issuing the NOTICE inside the loop, actually,\n> and not breaking at all. (See also #4)\n\nI don't quite understand what you mean here?\n\n> 2. What's with the \"if (keyno > 0)\"? That breaks detection of\n> everything on indexes on system columns, eg OID. (Of course, the\n> \"rel_attrs[keyno - 1]\" reference doesn't work for system columns,\n> but sticking your head in the sand is no answer.)\n\nThat is that part of the code that I least understand, so I would\nappreciate it if someone took 1 minute and told me how this _should_ be\nwritten. Note that I used this code from the ADD FOREIGN KEY stuff.\n\n /* Look at key[i] in the index and check that it is over the same column\n as key[i] in the constraint. This is to differentiate between (a,b)\n and (b,a) */\n if (i < INDEX_MAX_KEYS && indexStruct->indkey[i] != 0)\n {\n int\t keyno = indexStruct->indkey[i];\n\n if (keyno > 0)\n {\n char *name = NameStr(rel_attrs[keyno - 1]->attname);\n if (strcmp(name, key->name) == 0) keys_matched++;\n }\n }\n\nI admit I was confused as to why it's keyno - 1??\n\n> 3. pfree'ing iname at the bottom doesn't strike me as a good\n> idea; isn't that possibly part of your input querytree?\n\nOK, gone.\n\n> 4. If you're going to be so pedantic as to issue a warning notice about\n> a duplicate non-primary index, it'd be polite to give the name of that\n> index. 
Else how shall I know which index you think I should drop?\n\nI was going to do this, but then realised all I had access to in the\nindexStruct was the oid of the index relation? What's the easiest way of\nretrieving the name of an index given it's oid, or the oid of it's\nrelation?\n\nOnce I've figured these probs out, I'll fix the ADD UNIQUE code as well.\n\nChris\n\n\n", "msg_date": "Sat, 22 Sep 2001 18:30:49 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta time " }, { "msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n>> I'd suggest issuing the NOTICE inside the loop, actually,\n>> and not breaking at all. (See also #4)\n\n> I don't quite understand what you mean here?\n\nJust do elog(NOTICE) inside the loop over indexes, rather than setting a\nflag to do it later. For that matter, I see no reason not to raise the\nelog(ERROR) condition inside the loop, rather than setting a flag to\ndo it later.\n\n> I admit I was confused as to why it's keyno - 1??\n\nTo make a zero-based C array index from the one-based attribute number.\nBut the problem with this code is it doesn't handle indexes on system\nattributes such as OID, which have negative attribute numbers and are\nnot shown in rel->rd_att->attrs. I'd be inclined to use get_attname,\nand not bother with looking into the relcache rd_att structure at all.\n\n> I was going to do this, but then realised all I had access to in the\n> indexStruct was the oid of the index relation? What's the easiest way of\n> retrieving the name of an index given it's oid, or the oid of it's\n> relation?\n\nget_rel_name\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Sep 2001 11:46:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta time " } ]
[ { "msg_contents": "Have a look into the system tables \npg_proc\npg_trigger\n\nHave a look into the documentation, \nrefer to System Catalogs\n\nWithin psql you get \nSystem Catalogs listed by \\dS\nspecific information about a table \nby i. e. \\d pg_proc\n\nRegards, Christoph \n", "msg_date": "Mon, 17 Sep 2001 16:06:37 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": true, "msg_subject": "Re: triggers procedures listing URGENT" } ]
[ { "msg_contents": "I'm following the instructions on the web site which says:\n-----------------------------------------------------------\nDo an initial login to the CVS server:\n\n$ cvs -d :pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot login\n\nYou will be prompted for a password; enter 'postgresql'. You should only\nneed to do this once, since the password will be saved in .cvspass in your\nhome directory.\n------------------------------------------------------------\nThis doesn't seem to work. Is there something else I should use instead?\n\nThanks.\n\nMark\n\n\n", "msg_date": "Mon, 17 Sep 2001 22:18:44 GMT", "msg_from": "\"news.grapid1.mi.home.com\" <nospam_mshead@mhsk.smhs.com>", "msg_from_op": true, "msg_subject": "CVS access problem" }, { "msg_contents": "\nThe location of the cvs repository recently changed. It is now\naccessible as\n\n\t:pserver:<userid>@cvs.postgresql.org:/cvsroot\n\nYour commandline for an initial login should be:\n\t\n\t$ cvs -d :pserver:anoncvs@cvs.postgresql.org:/cvsroot login\n\nArne Weiner.\n\n\n\"news.grapid1.mi.home.com\" wrote:\n> \n> I'm following the instructions on the web site which says:\n> -----------------------------------------------------------\n> Do an initial login to the CVS server:\n> \n> $ cvs -d :pserver:anoncvs@postgresql.org:/home/projects/pgsql/cvsroot login\n> \n> You will be prompted for a password; enter 'postgresql'. You should only\n> need to do this once, since the password will be saved in .cvspass in your\n> home directory.\n> ------------------------------------------------------------\n> This doesn't seem to work. Is there something else I should use instead?\n> \n> Thanks.\n> \n> Mark\n", "msg_date": "Tue, 18 Sep 2001 08:47:44 +0200", "msg_from": "Arne Weiner <aswr@gmx.de>", "msg_from_op": false, "msg_subject": "Re: CVS access problem" }, { "msg_contents": "> The location of the cvs repository recently changed. 
It is now\n> accessible as\n>\n> \t:pserver:<userid>@cvs.postgresql.org:/cvsroot\n>\n> Your commandline for an initial login should be:\n>\n> \t$ cvs -d :pserver:anoncvs@cvs.postgresql.org:/cvsroot login\n>\n> Arne Weiner.\n\nI'm trying that exact command line above but I get this:\n\nFatal error, aborting.\nanoncvs: no such user\ncvs login: authorization failed: server cvs.postgresql.org rejected access\nto /cvs\nroot for user anoncvs\n\nChris\n\n", "msg_date": "Tue, 18 Sep 2001 16:31:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: CVS access problem" }, { "msg_contents": "\nanoncvs.postgresql.org, not cvs.postgresql.org ... Arne posted the wrong\none :(\n\n\nOn Tue, 18 Sep 2001, Christopher Kings-Lynne wrote:\n\n> > The location of the cvs repository recently changed. It is now\n> > accessible as\n> >\n> > \t:pserver:<userid>@cvs.postgresql.org:/cvsroot\n> >\n> > Your commandline for an initial login should be:\n> >\n> > \t$ cvs -d :pserver:anoncvs@cvs.postgresql.org:/cvsroot login\n> >\n> > Arne Weiner.\n>\n> I'm trying that exact command line above but I get this:\n>\n> Fatal error, aborting.\n> anoncvs: no such user\n> cvs login: authorization failed: server cvs.postgresql.org rejected access\n> to /cvs\n> root for user anoncvs\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n", "msg_date": "Tue, 18 Sep 2001 08:14:03 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS access problem" }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> anoncvs.postgresql.org, not cvs.postgresql.org ... Arne posted the wrong\n> one :(\n\nWhoops! 
I'm sorry, I overlooked that!\n\nArne Weiner.\n", "msg_date": "Tue, 18 Sep 2001 18:28:19 +0200", "msg_from": "Arne Weiner <aswr@gmx.de>", "msg_from_op": false, "msg_subject": "Re: CVS access problem" } ]
[ { "msg_contents": "Hi Friends,\n\nI'm just a beginner, partaking in a project.\n\nIs there any way to list or edit the user-defined\nprocedures, functions and triggers?\n\nI have already loaded them into the postgres database\nand trying to edit and list it.\n\n\n\nthanks,\nfrancis\n", "msg_date": "Mon, 17 Sep 2001 18:06:47 -0700", "msg_from": "\"flash\" <fjoseph@flashmail.com>", "msg_from_op": true, "msg_subject": "triggers procedures listing URGENT" }, { "msg_contents": "Within psql \nexamine the system tables \npg_proc\npg_trigger\n\nRefer to the documentation \nChapter System Catalogs \n\nRegards, Christoph \n\nPS\nI was sending this mail yesterday \nto 'pgsql-hackers@postgresql.org' \nbut it seemed to disappear, \nat least I did not get it back. \nHas anybody experienced similar effects?\n\n", "msg_date": "Tue, 18 Sep 2001 11:05:06 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": false, "msg_subject": "Re: triggers procedures listing URGENT" } ]
[ { "msg_contents": "Would it be an idea to add timestamps to the PostgreSQL error/debug/notice\nlog?\n\nSometimes I would really like to know when an event has occurred!\n\nChris\n\n", "msg_date": "Tue, 18 Sep 2001 10:28:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Putting timestamps in PostgreSQL log" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Would it be an idea to add timestamps to the PostgreSQL error/debug/notice\n> log?\n\nAlready done, see log_timestamp entry in postgresql.conf.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2001 23:17:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Putting timestamps in PostgreSQL log " }, { "msg_contents": "On Lun 17 Sep 2001 23:28, Christopher Kings-Lynne wrote:\n> Would it be an idea to add timestamps to the PostgreSQL error/debug/notice\n> log?\n>\n> Sometimes I would really like to know when an event has occurred!\n\nUse syslog and you'll get timestamps in your log.\n\nSaludos... :-)\n\n-- \nPorqu� usar una base de datos relacional cualquiera,\nsi pod�s usar PostgreSQL?\n-----------------------------------------------------------------\nMart�n Marqu�s | mmarques@unl.edu.ar\nProgramador, Administrador, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Tue, 18 Sep 2001 18:22:15 -0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: Putting timestamps in PostgreSQL log" } ]
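As a sketch of the two suggestions above — the parameter names follow postgresql.conf, but the exact syslog values are assumptions to verify against the Administrator's Guide for your version:

```
# postgresql.conf
log_timestamp = true         # prefix each server log line with a timestamp

# alternatively, hand logging over to syslog, which adds its own timestamps
syslog = 2                   # assumption: 2 = log to syslog only
syslog_facility = 'LOCAL0'
```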
[ { "msg_contents": "I experienced that UNIONs in 7.1.1 are rather slow:\n\ntir=# explain (select nev from cikk) union (select tevekenyseg from log);\nNOTICE: QUERY PLAN:\n\nUnique (cost=667.63..687.18 rows=782 width=12)\n -> Sort (cost=667.63..667.63 rows=7817 width=12)\n -> Append (cost=0.00..162.17 rows=7817 width=12)\n -> Subquery Scan *SELECT* 1 (cost=0.00..28.16 rows=1316 width=12)\n -> Seq Scan on cikk (cost=0.00..28.16 rows=1316 width=12)\n -> Subquery Scan *SELECT* 2 (cost=0.00..134.01 rows=6501 width=12)\n -> Seq Scan on log (cost=0.00..134.01 rows=6501 width=12)\n\nOf course a simple SELECT is fast:\n\ntir=# explain select nev from cikk;\nNOTICE: QUERY PLAN:\n\nSeq Scan on cikk (cost=0.00..28.16 rows=1316 width=12)\n\n\nFor me it seems to be slow due to the sorting. Is this right?\nIs this normal at all? Is it possible to make it faster?\n\nTIA, Zoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Tue, 18 Sep 2001 15:52:28 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "slow UNIONing" }, { "msg_contents": "Kovacs,\n\n\nA 'union all' will be much faster than 'union'. 'union all' returns all \nresults from both queries, whereas 'union' will return all distinct \nrecords. The 'union' requires a sort and a merge to remove the \nduplicate values. 
Below are explain output for a union query and a \nunion all query.\n\nfiles=# explain\nfiles-# select dummy from test\nfiles-# union all\nfiles-# select dummy from test;\nNOTICE: QUERY PLAN:\n\nAppend (cost=0.00..40.00 rows=2000 width=12)\n -> Subquery Scan *SELECT* 1 (cost=0.00..20.00 rows=1000 width=12)\n -> Seq Scan on test (cost=0.00..20.00 rows=1000 width=12)\n -> Subquery Scan *SELECT* 2 (cost=0.00..20.00 rows=1000 width=12)\n -> Seq Scan on test (cost=0.00..20.00 rows=1000 width=12)\n\nEXPLAIN\nfiles=# explain\nfiles-# select dummy from test\nfiles-# union\nfiles-# select dummy from test;\nNOTICE: QUERY PLAN:\n\nUnique (cost=149.66..154.66 rows=200 width=12)\n -> Sort (cost=149.66..149.66 rows=2000 width=12)\n -> Append (cost=0.00..40.00 rows=2000 width=12)\n -> Subquery Scan *SELECT* 1 (cost=0.00..20.00 rows=1000 \nwidth=12)\n -> Seq Scan on test (cost=0.00..20.00 rows=1000 \nwidth=12)\n -> Subquery Scan *SELECT* 2 (cost=0.00..20.00 rows=1000 \nwidth=12)\n -> Seq Scan on test (cost=0.00..20.00 rows=1000 \nwidth=12)\n\nEXPLAIN\nfiles=#\n\n\nthanks,\n--Barry\n\n\nKovacs Zoltan wrote:\n\n> I experienced that UNIONs in 7.1.1 are rather slow:\n> \n> tir=# explain (select nev from cikk) union (select tevekenyseg from log);\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=667.63..687.18 rows=782 width=12)\n> -> Sort (cost=667.63..667.63 rows=7817 width=12)\n> -> Append (cost=0.00..162.17 rows=7817 width=12)\n> -> Subquery Scan *SELECT* 1 (cost=0.00..28.16 rows=1316 width=12)\n> -> Seq Scan on cikk (cost=0.00..28.16 rows=1316 width=12)\n> -> Subquery Scan *SELECT* 2 (cost=0.00..134.01 rows=6501 width=12)\n> -> Seq Scan on log (cost=0.00..134.01 rows=6501 width=12)\n> \n> Of course a simple SELECT is fast:\n> \n> tir=# explain select nev from cikk;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on cikk (cost=0.00..28.16 rows=1316 width=12)\n> \n> \n> For me it seems to be slow due to the sorting. Is this right?\n> Is this normal at all? 
Is it possible to make it faster?\n> \n> TIA, Zoltan\n> \n> \n\n\n", "msg_date": "Tue, 18 Sep 2001 19:30:26 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: slow UNIONing" }, { "msg_contents": "Kovacs Zoltan writes:\n\n> I experienced that UNIONs in 7.1.1 are rather slow:\n>\n> tir=# explain (select nev from cikk) union (select tevekenyseg from log);\n\nTry UNION ALL. Plain UNION will eliminate duplicates, so it becomes\nslower.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 19 Sep 2001 12:22:19 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: slow UNIONing" } ]
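The difference is easy to see on a trivial query: UNION must sort and de-duplicate the appended result, while UNION ALL simply concatenates it:

```
SELECT 1 AS n UNION ALL SELECT 1;   -- two rows: an Append and nothing more
SELECT 1 AS n UNION     SELECT 1;   -- one row: Sort + Unique on top of the Append
```

So UNION ALL is the right choice whenever duplicates are impossible or harmless.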
[ { "msg_contents": "Hi all,\n\nCan I use ecpg with large objects? All examples in documentation are for \nlibpq.\n\n Thanks\n\n Shridhar\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 19 Sep 2001 00:46:03 +0530", "msg_from": "Chamanya <chamanya@yahoo.com>", "msg_from_op": true, "msg_subject": "Large objects and ecpg" }, { "msg_contents": "On Wed, Sep 19, 2001 at 12:46:03AM +0530, Chamanya wrote:\n> Can I use ecpg with large objects? All examples in documentation are for \n> libpq.\n\nYes and no. Since ECPG uses libpq it should not be too difficult to use the\nlo functions too. But there is no way to use them via some EXEC SQL\nstatements.\n\nAny idea how these statements should look like? Would be a good idea to\nimplement.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 19 Sep 2001 08:41:43 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Large objects and ecpg" } ]
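One approach that does fit inside plain EXEC SQL statements is the server-side large-object helper functions, which can be issued like any other query (the OID and file paths below are hypothetical, and note that these paths are read and written by the *server* process, not the client):

```
-- import a file into a new large object; returns the new object's OID
SELECT lo_import('/tmp/image.jpg');

-- export an existing large object (hypothetical OID) back out to a file
SELECT lo_export(16405, '/tmp/image_copy.jpg');

-- remove a large object
SELECT lo_unlink(16405);
```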
[ { "msg_contents": "\n> I experienced that UNIONs in 7.1.1 are rather slow:\n> \n> tir=# explain (select nev from cikk) union (select \n> tevekenyseg from log);\n> NOTICE: QUERY PLAN:\n> \n> Unique (cost=667.63..687.18 rows=782 width=12)\n> -> Sort (cost=667.63..667.63 rows=7817 width=12)\n> -> Append (cost=0.00..162.17 rows=7817 width=12)\n> -> Subquery Scan *SELECT* 1 (cost=0.00..28.16 \n> rows=1316 width=12)\n> -> Seq Scan on cikk (cost=0.00..28.16 \n> rows=1316 width=12)\n> -> Subquery Scan *SELECT* 2 \n> (cost=0.00..134.01 rows=6501 width=12)\n> -> Seq Scan on log (cost=0.00..134.01 \n> rows=6501 width=12)\n> \n> Of course a simple SELECT is fast:\n> \n> tir=# explain select nev from cikk;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on cikk (cost=0.00..28.16 rows=1316 width=12)\n> \n> \n> For me it seems to be slow due to the sorting. Is this right?\n> Is this normal at all? Is it possible to make it faster?\n\nIf you know, that your result does not produce duplicates\n(which are filtered away with \"union\") you can use a \n\"union all\" which should be substantially faster, since it does \nnot need to sort.\n\nAndreas\n", "msg_date": "Wed, 19 Sep 2001 09:15:52 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: slow UNIONing" }, { "msg_contents": "> > For me it seems to be slow due to the sorting. Is this right?\n> > Is this normal at all? Is it possible to make it faster?\n> \n> If you know, that your result does not produce duplicates\n> (which are filtered away with \"union\") you can use a \n> \"union all\" which should be substantially faster, since it does \n> not need to sort.\n\nThank you to all who helped. I knew nothing about UNION ALL, but now it's\nOK. Regards, Zoltan\n\n", "msg_date": "Fri, 21 Sep 2001 08:50:57 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": false, "msg_subject": "Re: slow UNIONing" } ]
[ { "msg_contents": "I'm converting from a C-based program\nto PostgreSQL. There are some DBMS-\nspecific errors caught, e.g. deadlocks.\nNot only for this reason, but also from\ngeneral curiosity I'd like to know what\nkinds of ERRORS PostgreSQL is dealing with.\nWhich source code/header file is hiding\nthis information?\nRegards, Christoph\n\n", "msg_date": "Wed, 19 Sep 2001 11:36:51 METDST", "msg_from": "Haller Christoph <ch@rodos.fzk.de>", "msg_from_op": true, "msg_subject": "Where are the ERROR: Messages " } ]
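For what it's worth: in the sources of that era there is no central message catalog — error texts are written inline at each elog() call site, and the severity levels (DEBUG, NOTICE, ERROR, FATAL, ...) are defined in src/include/utils/elog.h. A grep over the tree is probably the quickest inventory; the paths below are assumptions relative to the top of the source tree:

```
$ grep -rn 'elog(ERROR' src/backend/ | less      # messages live at the call sites
$ grep -rn -i deadlock src/backend/storage/lmgr/ # where the deadlock texts should be
```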