[ { "msg_contents": "I'm an idiot - the form on the Users Lounge has the list.\n\nChris\n\n", "msg_date": "Tue, 20 Nov 2001 13:39:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Previous email about missing pgsql-sql" } ]
[ { "msg_contents": " P O S T G R E S Q L\n\n 7 . 2 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nFix geometry expected files\nComplete timestamp/current changes\nRemove beta1 directory from snapshot\nAlpha regression problem\n\nDocumentation Changes\n---------------------\nBuild manual pages\nComplete timestamp/current changes\nFix documentation build\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Nov 2001 10:53:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Open items" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> If I'd downloaded this thing over a decent DSL or cable modem \n> line, bzip2 would actually be a net loss in total \n> download + uncompress time.\n\nI think the download time is a lot more important to people than \nthe uncompression time. A savings of nearly 1.5 Megs is \nsignificant, no matter what type of line you are on. If we can \nshave off 1.5M for a 56K user, why not?\n\nMy runtime tests were also different:\n\nbzip -9: 8.959 real\nbzip -1: 7.473 real\ngzip -9: 1.491 real\n\nThat's not much of a difference, and (IMO) is more than offset \nby the smaller download size. Bandwidth should be a more \nimportant factor: after all, the next few steps (tar, \nconfigure, make) are going to make the unzipping seem fast \nin comparison. :)\n\nI'm not advocating *replacing* gzip with bzip2, but I do think \nwe should make it an option. It should not be that much \ntrouble.\n\nDigital signatures, on the other hand, are a lot more trouble \nbut are much more important than the gzip/bzip2 issue....\n\n\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200111201606\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBO/rG+LybkGcUlkrIEQJO8wCdGlZgyQUTYwLUMTrSwcmmnUx0nlYAn37H\nI6W1G8h+7jQIIiBTuHQeKQB7\n=PtZi\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Tue, 20 Nov 2001 16:07:22 -0500", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Re: beta3 " }, { "msg_contents": "\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n\n> > If I'd downloaded this thing over a decent DSL or cable modem \n> > line, bzip2 would actually be a net loss in total \n> > download + uncompress time.\n> \n> I think the download time is a lot more important to people than \n> the uncompression time. A savings of nearly 1.5 Megs is \n> significant, no matter what type of line you are on.\n\nAnd for CD space (I'd love bzipped binaries), ftp space, etc (not only\nfor mirrors, but for distributions shipping postgresql and mirrors\nthereeof.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "20 Nov 2001 16:44:54 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "On 20 Nov 2001, Trond Eivind [iso-8859-1] Glomsrød wrote:\n\n> \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n>\n> > > If I'd downloaded this thing over a decent DSL or cable modem\n> > > line, bzip2 would actually be a net loss in total\n> > > download + uncompress time.\n> >\n> > I think the download time is a lot more important to people than\n> > the uncompression time. A savings of nearly 1.5 Megs is\n> > significant, no matter what type of line you are on.\n>\n> And for CD space (I'd love bzipped binaries), ftp space, etc (not only\n> for mirrors, but for distributions shipping postgresql and mirrors\n> thereeof.\n\nHuh? You are advocating adding to the ftp space required? *raised\neyebrow*\n\n\n", "msg_date": "Tue, 20 Nov 2001 22:40:31 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: beta3" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> On 20 Nov 2001, Trond Eivind [iso-8859-1] Glomsrød wrote:\n> \n> > \"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> >\n> > > > If I'd downloaded this thing over a decent DSL or cable modem\n> > > > line, bzip2 would actually be a net loss in total\n> > > > download + uncompress time.\n> > >\n> > > I think the download time is a lot more important to people than\n> > > the uncompression time. A savings of nearly 1.5 Megs is\n> > > significant, no matter what type of line you are on.\n> >\n> > And for CD space (I'd love bzipped binaries), ftp space, etc (not only\n> > for mirrors, but for distributions shipping postgresql and mirrors\n> > thereeof.\n> \n> Huh? You are advocating adding to the ftp space required? *raised\n> eyebrow*\n\nReplace gz with bz2, and you'll save :) - also, bzipped files will\nallow others who only ship one instance (like us - we only need one\ntarball for the SRPM), and these will save space.\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "21 Nov 2001 10:34:50 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: beta3" } ]
[ { "msg_contents": "Back in August we had a discussion about whether SPI isn't broken in\nits handling of CommandCounterIncrement:\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1029236\nThat discussion tailed off without any agreement what to do, but I've\njust been reminded of the problem, and I now understand that there's\na separate and (I think) easily-fixable bug besides the larger issue.\nMoreover, this bug represents a nasty regression from 7.1, so I think\nwe *must* do something about it now.\n\nThe example I've just been looking at (courtesy of Patrick MacDonald)\nis\n\ncreate table table1 (name text);\n\ncreate or replace function failing () returns integer as '\n\n declare\n counter integer;\n thisRow record;\n tmpName1 text;\n tmpName2 text;\n\n begin\n \n select count(*) into counter from table1;\n\n raise notice ''First count is: %'', counter;\n\n tmpName1 = ''name'' || counter + 1;\n tmpName2 = ''name'' || counter + 2;\n\n -- insert two values\n insert into table1 values (tmpName1);\n insert into table1 values (tmpName2);\n\n -- loop through the values and display name\n for thisRow in select * from table1 \n loop\n raise notice ''Name :: %'', thisRow.name;\n end loop;\n\n select count(*) into counter from table1;\n\n raise notice ''New count is: %'', counter;\n \n -- the last name displayed should be the last\n -- name inserted\n raise notice ''Last name should be: %'', tmpName2;\n\n return counter;\n end;\n '\n language 'plpgsql';\n\n\nIn current sources, this function behaves correctly the first time you\ninvoke it (in a given backend), and incorrectly on subsequent uses:\n\nregression=# select failing();\nNOTICE: First count is: 0\nNOTICE: Name :: name1\nNOTICE: Name :: name2\nNOTICE: New count is: 2\nNOTICE: Last name should be: name2\n failing\n---------\n 2\n(1 row)\n\nregression=# select failing();\nNOTICE: First count is: 2\nNOTICE: Name :: name1\nNOTICE: Name :: name2\nNOTICE: Name :: name3\nNOTICE: New count is: 4\nNOTICE: Last name should be: name4\n failing\n---------\n 4\n(1 row)\n\nregression=#\n\nNote the failure to display \"Name :: name4\". The problem is that\nno CommandCounterIncrement happens between the second INSERT and\nthe FOR ... SELECT, so the inserted row is considered not yet visible.\n\nThis worked correctly in 7.1. It fails in current sources because\nplpgsql's FOR is now based on SPI cursor support, and there is no\nCommandCounterIncrement in SPI_cursor_open, whereas there is one\nin SPI_exec and SPI_execp. (But the first time through, a\nCommandCounterIncrement occurs as a side effect of query planning\nfor the SELECT, so the problem is masked.)\n\nI believe an appropriate fix is to add a CommandCounterIncrement\ncall near the start of SPI_cursor_open; this will make it behave\nsimilarly to the other SPI query-execution calls. I have verified\nthat this change fixes Patrick's example, as well as the first-call-\nvs-later-calls discrepancies in the examples I posted in August. And\nit doesn't break the regression tests.\n\nHowever, this quick fix does not address the larger issue of whether we\nneed to try to make the CommandCounterIncrement-ing behavior consistent\nbetween the cases where plpgsql has to generate a query plan and the\ncases where it's already got one cached. I think we're going to have\nto come back and revisit that in a future release cycle.\n\nSince this isn't a complete fix, I thought I'd better run it by\npgsql-hackers to see if anyone has a problem with it or sees a\nbetter quick fix. Any comments out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Nov 2001 17:41:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "SPI and CommandCounterIncrement, redux" } ]
[ { "msg_contents": "My friends, how i do to get a list of all rules in one table? Thanks!!!!\n\n", "msg_date": "Tue, 20 Nov 2001 20:56:55 -0300", "msg_from": "=?iso-8859-1?Q?F=E1bio_Santana?= <fabio3c@terra.com.br>", "msg_from_op": true, "msg_subject": "RULES" }, { "msg_contents": "On Tue, Nov 20, 2001 at 08:56:55PM -0300, Fábio Santana wrote:\n> My friends, how i do to get a list of all rules in one table? Thanks!!!!\n\nselect rulename,definition from pg_rules where tablename='YourTable';\n\n(Don't have any defined, so can't check)\n\nI have a similar question on triggers:\n\ncreate table a (\n id integer primary key\n);\n\ncreate table b (\n a_id integer references a(id) match full\n);\n\nselect * from pg_trigger where tgname ~* '^RI_';\n\nGives me 3 rows. They all contain the same tgargs. Is it therefore\nsufficient to select distinct tgnargs,tgargs if I just want to be able to\nrecreate the \"references... match full\" part of the create table statement?\n\nIt seems that the rows differ in\n\ntgtype\ttgrelid\t\ttgconstrrelid\ttgfoid\n 9\t\ttable a\t\ttable b \t\tRI_FKey_noaction_del\n17\t\ttable a\t\ttable b\t\t\tRI_FKey_noaction_upd\n21\t\ttable b\t\ttable a\t\t\tRI_FKey_check_ins\n\n9=row,delete, 17=row,update, 21=row,insert,update ?\n\nWhy are the first 2 constraints there? It seems to be the last one which\nsays \"If I insert,update table b, check it is a valid entry with table a\"\n\nIs that right?\n\nPatrick\n", "msg_date": "Wed, 21 Nov 2001 12:58:37 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: RULES" }, { "msg_contents": "<note that this is not really HACKERs type material, so I moved the\nresponse to the SQL list: I'm CCing Patrick directly, since I don't \nknow if he reads that list>\n\nOn Wed, Nov 21, 2001 at 12:58:37PM +0000, Patrick Welche wrote:\n> \n> create table a (\n> id integer primary key\n> );\n> \n> create table b (\n> a_id integer references a(id) match full\n> );\n> \n> select * from pg_trigger where tgname ~* '^RI_';\n> \n> Gives me 3 rows. They all contain the same tgargs. Is it therefore\n> sufficient to select distinct tgnargs,tgargs if I just want to be able to\n> recreate the \"references... match full\" part of the create table statement?\n> \n> It seems that the rows differ in\n> \n> tgtype\ttgrelid\t\ttgconstrrelid\ttgfoid\n> 9\t\ttable a\t\ttable b \t\tRI_FKey_noaction_del\n> 17\t\ttable a\t\ttable b\t\t\tRI_FKey_noaction_upd\n> 21\t\ttable b\t\ttable a\t\t\tRI_FKey_check_ins\n> \n> 9=row,delete, 17=row,update, 21=row,insert,update ?\n> \n> Why are the first 2 constraints there? It seems to be the last one which\n> says \"If I insert,update table b, check it is a valid entry with table a\"\n> \n> Is that right?\n\nAs far as it goes. Realize that a primary key <-> foreign key relationship\nis two way: it constrains the parent table as well as the child.\n\nConsider what happens if you have something like this:\n\ntest=# select * from a;\n id \n----\n 1\n 2\n 3\n 4\n(4 rows)\n\ntest=# select * from b;\n a_id \n------\n 1\n 1\n 3\n 3\n 2\n 1\n 3\n(7 rows)\n\ntest=# \n\nSo, what happens if you do:\n\ntest=# delete from a where id=4;\nDELETE 1\ntest=# delete from a where id=3;\nERROR: <unnamed> referential integrity violation - key in a still referenced from b\ntest=# update a set id=4 where id=3;\nERROR: <unnamed> referential integrity violation - key in a still referenced from b\n\nSince the key is still in use in b, it can't be deleted or modified in a.\nNote that if the key had been setup as a CASCADE, then modifying (or deleting)\nfrom a would effect b as well, as so:\n\ndrop table b;\n\ncreate table b ( a_id integer references a(id) match full ON UPDATE cascade);\n\n<fill with some data>\n\ntest=# select * from b;\n a_id \n------\n 3\n 3\n 1\n 1\n 2\n 3\n(6 rows)\n\ntest=# update a set id=4 where id=3;\nUPDATE 1\ntest=# select * from b;\n a_id \n------\n 1\n 1\n 2\n 4\n 4\n 4\n(6 rows)\n\nPretty cool, huh?\n\nRoss\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n", "msg_date": "Wed, 21 Nov 2001 09:57:31 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RULES" }, { "msg_contents": "\nOn Wed, 21 Nov 2001, Ross J. Reedstrom wrote:\n\nWant to pop in here.\n\n> On Wed, Nov 21, 2001 at 12:58:37PM +0000, Patrick Welche wrote:\n> >\n> > create table a (\n> > id integer primary key\n> > );\n> >\n> > create table b (\n> > a_id integer references a(id) match full\n> > );\n> >\n> > select * from pg_trigger where tgname ~* '^RI_';\n> >\n> > Gives me 3 rows. They all contain the same tgargs. Is it therefore\n> > sufficient to select distinct tgnargs,tgargs if I just want to be able to\n> > recreate the \"references... match full\" part of the create table statement?\n\nNot quite, you'll lose the referential action information if you don't\ninclude info out of tgfoids on the pk table's triggers and you'll lose the\ndeferment info if you don't pay attention to tgdeferrable and\ntginitdeferred. In your case you're not using those, but...\nThere have been messages in the past about how to get the reference\ninformation. You should be able to find a function or something in\nthe archives :)\n\n", "msg_date": "Wed, 21 Nov 2001 08:43:06 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RULES" } ]
[ { "msg_contents": "\nGuys,\nSomething seems to be broken with the time operators in 7.2b2:\n\nregress=# select '7:00'::time - '8:00'::time;\n ?column? \n----------\n 01:00\n(1 row)\n\nregress=# select '8:00'::time - '7:00'::time;\n ?column? \n----------\n -01:00\n(1 row)\n\nregress=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.2b2 on i586-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\nKind regards,\nManuel.\n", "msg_date": "20 Nov 2001 18:13:42 -0600", "msg_from": "Manuel Sugawara <masm@fciencias.unam.mx>", "msg_from_op": true, "msg_subject": "broken time operator?" }, { "msg_contents": "> Something seems to be broken with the time operators in 7.2b2:\n...\n\nHmm. You are right. Thanks for catching this; will be fixed in cvs\ntonight.\n\n - Thomas\n", "msg_date": "Wed, 21 Nov 2001 03:12:32 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: broken time operator?" }, { "msg_contents": "On Wed, Nov 21, 2001 at 03:12:32AM +0000, Thomas Lockhart wrote:\n> > Something seems to be broken with the time operators in 7.2b2:\n> ...\n> \n> Hmm. You are right. Thanks for catching this; will be fixed in cvs\n> tonight.\n\n dig in the ribs: is not this operation in regeression tests? :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 21 Nov 2001 09:23:44 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: broken time operator?" }, { "msg_contents": "> dig in the ribs: is not this operation in regeression tests? :-)\n\n:)\n\n - Thomas\n", "msg_date": "Wed, 21 Nov 2001 13:27:45 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: broken time operator?" } ]
[ { "msg_contents": "Bruce,\n\nPlease add followings to HISTORY:\n\nFormats with the correct number of columns for UNICODE in psql (Patrice)\nAdd ISO 8859-5,6,7,8 support (Tatsuo)\nOptimize LIKE/ILIKE when using single-byte encodings (Tatsuo)\n\nAlso I suggest we note somewhere that now octet_length(text) returns\nthe length BEFORE compression for an incompatible change.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Nov 2001 11:01:32 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "HISTORY addition" }, { "msg_contents": "> Bruce,\n> \n> Please add followings to HISTORY:\n> \n> Formats with the correct number of columns for UNICODE in psql (Patrice)\n> Add ISO 8859-5,6,7,8 support (Tatsuo)\n> Optimize LIKE/ILIKE when using single-byte encodings (Tatsuo)\n\nDone.\n\n> Also I suggest we note somewhere that now octet_length(text) returns\n> the length BEFORE compression for an incompatible change.\n\n octet_length(text_col) now returns non-compressed length (Tatsuo, Bruce)\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Nov 2001 21:19:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY addition" }, { "msg_contents": "> > Also I suggest we note somewhere that now octet_length(text) returns\n> > the length BEFORE compression for an incompatible change.\n> \n> octet_length(text_col) now returns non-compressed length (Tatsuo, Bruce)\n> \n> Done.\n\nThanks. But I did nothing for this. Please remove my name.\n\nBTW, are we going to make followings happen for 7.2?\n\n* Add octet_length_server() and octet_length_client() (Thomas, Tatsuo)\n* Make octet_length_client the same as octet_length()\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Nov 2001 11:35:06 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: HISTORY addition" }, { "msg_contents": "> > > Also I suggest we note somewhere that now octet_length(text) returns\n> > > the length BEFORE compression for an incompatible change.\n> > \n> > octet_length(text_col) now returns non-compressed length (Tatsuo, Bruce)\n> > \n> > Done.\n> \n> Thanks. But I did nothing for this. Please remove my name.\n\nBut I did very little code for it myself. Just copied from another\nfunction. Finding the problem was the majority of the work, and you did\nthat.\n\n> BTW, are we going to make followings happen for 7.2?\n> \n> * Add octet_length_server() and octet_length_client() (Thomas, Tatsuo)\n> * Make octet_length_client the same as octet_length()\n \nNo, we will leave octet_length() alone and revisit for 7.3. It is a low\npriority and adding these would require initdb, and it is too late in\nthe release cycle anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Nov 2001 21:39:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY addition" }, { "msg_contents": "> > Thanks. But I did nothing for this. Please remove my name.\n> \n> But I did very little code for it myself. Just copied from another\n> function. Finding the problem was the majority of the work, and you did\n> that.\n\nThanks.\n\n> > BTW, are we going to make followings happen for 7.2?\n> > \n> > * Add octet_length_server() and octet_length_client() (Thomas, Tatsuo)\n> > * Make octet_length_client the same as octet_length()\n> \n> No, we will leave octet_length() alone and revisit for 7.3. It is a low\n> priority and adding these would require initdb, and it is too late in\n> the release cycle anyway.\n\nOk.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Nov 2001 11:43:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: HISTORY addition" } ]
[ { "msg_contents": "Just a quick news.\n\ngmake[4]: Entering directory `/mnt/t-ishii/pgsql/src/interfaces/ecpg/preproc'\n/usr/vac/bin/xlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o variable.o variable.c\n 325 | // mmerror(ET_FATAL, \"No multilevel (more than 2) pointer supported %d\",pointer_len);\n a....................................................................................b..........\na - 1506-046 (S) Syntax error.\nb - 1506-099 (S) Unexpected argument.\ngmake[4]: *** [variable.o] Error 1\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Nov 2001 12:47:06 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "ecpg+AIX 5L compile failure with current" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Just a quick news.\n> gmake[4]: Entering directory `/mnt/t-ishii/pgsql/src/interfaces/ecpg/preproc'\n> /usr/vac/bin/xlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o variable.o variable.c\n> 325 | // mmerror(ET_FATAL, \"No multilevel (more than 2) pointer supported %d\",pointer_len);\n> a....................................................................................b..........\n> a - 1506-046 (S) Syntax error.\n> b - 1506-099 (S) Unexpected argument.\n\n\nDoes this compiler not like // comments? I thought we'd made a pass of\ncleaning those out recently, but obviously one's snuck into ecpg ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Nov 2001 23:38:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg+AIX 5L compile failure with current " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Just a quick news.\n> > gmake[4]: Entering directory `/mnt/t-ishii/pgsql/src/interfaces/ecpg/preproc'\n> > /usr/vac/bin/xlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o variable.o variable.c\n> > 325 | // mmerror(ET_FATAL, \"No multilevel (more than 2) pointer supported %d\",pointer_len);\n> > a....................................................................................b..........\n> > a - 1506-046 (S) Syntax error.\n> > b - 1506-099 (S) Unexpected argument.\n> \n> \n> Does this compiler not like // comments? I thought we'd made a pass of\n> cleaning those out recently, but obviously one's snuck into ecpg ...\n\nYes, // came in with:\n\n date: 2001/11/16 08:36:37; author: meskes; state: Exp; lines: +24 -7\n Committed again to add the missing files/patches.\n\nOf course, this was post pgindent. Fix applied, // -> /* */.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 00:03:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg+AIX 5L compile failure with current" }, { "msg_contents": "> > gmake[4]: Entering directory `/mnt/t-ishii/pgsql/src/interfaces/ecpg/preproc'\n> > /usr/vac/bin/xlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -I./../include -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSION=9 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/usr/local/pgsql/include\\\" -c -o variable.o variable.c\n> > 325 | // mmerror(ET_FATAL, \"No multilevel (more than 2) pointer supported %d\",pointer_len);\n> > a....................................................................................b..........\n> > a - 1506-046 (S) Syntax error.\n> > b - 1506-099 (S) Unexpected argument.\n> \n> \n> Does this compiler not like // comments?\n\nRight.\n\n> I thought we'd made a pass of\n> cleaning those out recently, but obviously one's snuck into ecpg ...\n", "msg_date": "Wed, 21 Nov 2001 15:57:25 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: ecpg+AIX 5L compile failure with current " }, { "msg_contents": "On Wed, Nov 21, 2001 at 12:03:23AM -0500, Bruce Momjian wrote:\n> Yes, // came in with:\n> \n> date: 2001/11/16 08:36:37; author: meskes; state: Exp; lines: +24 -7\n> Committed again to add the missing files/patches.\n\nSorry, I didn't check the patch I applied carefully enough.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 21 Nov 2001 10:27:24 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg+AIX 5L compile failure with current" } ]
[ { "msg_contents": "Attached is a patch to accept the smallest value of int8.\n\nThe smallest value -9223372036854775808 is rejected as follows:\n\n test=# create table test_int8 (val int8);\n CREATE\n test=# insert into test_int8 values (-9223372036854775807);\n INSERT 4026531936 1\n test=# insert into test_int8 values (-9223372036854775808);\n ERROR: int8 value out of range: \"-9223372036854775808\"\n test=#\n\nIndex: src/backend/utils/adt/int8.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/utils/adt/int8.c,v\nretrieving revision 1.35\ndiff -u -3 -p -r1.35 int8.c\n--- src/backend/utils/adt/int8.c\t2001/10/25 14:10:06\t1.35\n+++ src/backend/utils/adt/int8.c\t2001/11/21 05:35:25\n@@ -28,6 +28,12 @@\n \n #define MAXINT8LEN\t\t25\n \n+#ifndef INT64_MAX\n+#define INT64_MAX (0x7FFFFFFFFFFFFFFFLL)\n+#endif\n+#ifndef INT64_MIN\n+#define INT64_MIN (-INT64_MAX-1)\n+#endif\n #ifndef INT_MAX\n #define INT_MAX (0x7FFFFFFFL)\n #endif\n@@ -77,7 +83,7 @@ int8in(PG_FUNCTION_ARGS)\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n \twhile (*ptr && isdigit((unsigned char) *ptr))\t\t/* process digits */\n \t{\n-\t\tint64\t\tnewtmp = tmp * 10 + (*ptr++ - '0');\n+\t\tint64\t\tnewtmp = tmp * 10 - (*ptr++ - '0');\n \n \t\tif ((newtmp / 10) != tmp)\t\t/* overflow? */\n \t\t\telog(ERROR, \"int8 value out of range: \\\"%s\\\"\", str);\n@@ -86,7 +92,13 @@ int8in(PG_FUNCTION_ARGS)\n \tif (*ptr)\t\t\t\t\t/* trailing junk? */\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n \n-\tresult = (sign < 0) ? -tmp : tmp;\n+ if (sign < 0) {\n+\t\tresult = tmp;\n+\t} else {\n+\t\tif (tmp == INT64_MIN)\n+\t\t\telog(ERROR, \"int8 value out of range: \\\"%s\\\"\", str);\n+\t result = -tmp;\n+\t}\n \n \tPG_RETURN_INT64(result);\n }", "msg_date": "Wed, 21 Nov 2001 14:50:28 +0900 (JST)", "msg_from": "sugita@sra.co.jp", "msg_from_op": true, "msg_subject": "Rejection of the smallest int8" }, { "msg_contents": "sugita@sra.co.jp writes:\n> Attached is a patch to accept the smallest value of int8.\n\nThis has been proposed before. The problem with it is that it's\nnot portable: the C standard does not specify the direction of rounding\nof integer division when the dividend is negative. So the test\ninside the loop that tries to detect overflow would be likely to fail\non some machines.\n\nIf you can see a way around that, we're all ears ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 10:06:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rejection of the smallest int8 " }, { "msg_contents": "I said:\n>> Attached is a patch to accept the smallest value of int8.\n\n> This has been proposed before. The problem with it is that it's\n> not portable: the C standard does not specify the direction of rounding\n> of integer division when the dividend is negative.\n\nBTW, does anyone have a copy of the ANSI C standard to check this?\n\nI have a draft of C99, which says that truncation is towards 0\nregardless of the sign, but I think that this is something that was\ntightened up in C99; we can't rely on older compilers to follow it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 10:30:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rejection of the smallest int8 " }, { "msg_contents": "I said:\n> If you can see a way around that, we're all ears ...\n\nOf course there's always the brute-force solution:\n\n\tif (strcmp(ptr, \"-9223372036854775808\") == 0)\n\t return -9223372036854775808;\n\telse\n\t <<proceed with int8in>>\n\n(modulo some #ifdef hacking to attach the correct L or LL suffix to the\nconstant, but you get the idea)\n\nThis qualifies as pretty durn ugly, but might indeed be more portable\nthan any other alternative. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 12:54:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rejection of the smallest int8 " }, { "msg_contents": "Tom Lane writes:\n\n> This has been proposed before. The problem with it is that it's\n> not portable: the C standard does not specify the direction of rounding\n> of integer division when the dividend is negative. So the test\n> inside the loop that tries to detect overflow would be likely to fail\n> on some machines.\n>\n> If you can see a way around that, we're all ears ...\n\nUse strtoll/strtoull if available. They should be on \"most\" systems\nanyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 21 Nov 2001 23:13:58 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Rejection of the smallest int8 " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Use strtoll/strtoull if available. They should be on \"most\" systems\n> anyway.\n\nMph. The reason int8in is coded the way it is is to avoid having to\ndeal with strtoll configuration (does it exist? Is it the right thing?\nDon't forget Alphas, where int8 == long). We'd still need a fallback\nif it doesn't exist, so I'm not that excited about this answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 17:15:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rejection of the smallest int8 " }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\nSubject: Re: [PATCHES] Rejection of the smallest int8 \nDate: Wed, 21 Nov 2001 12:54:31 -0500\n\n;;; I said:\n;;; > If you can see a way around that, we're all ears ...\n;;; \n;;; Of course there's always the brute-force solution:\n;;; \n;;; \tif (strcmp(ptr, \"-9223372036854775808\") == 0)\n;;; \t return -9223372036854775808;\n;;; \telse\n;;; \t <<proceed with int8in>>\n;;; \n;;; (modulo some #ifdef hacking to attach the correct L or LL suffix to the\n;;; constant, but you get the idea)\n;;; \n;;; This qualifies as pretty durn ugly, but might indeed be more portable\n;;; than any other alternative. Comments?\n\nI made a new patch. Toward zero fault is fixed.\n\nKind regards,\n\n\nKenji Sugita\nsugita@sra.co.jp\n\nIndex: int8.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/utils/adt/int8.c,v\nretrieving revision 1.35\ndiff -u -3 -p -r1.35 int8.c\n--- int8.c\t2001/10/25 14:10:06\t1.35\n+++ int8.c\t2001/11/22 08:48:09\n@@ -28,6 +28,12 @@\n \n #define MAXINT8LEN\t\t25\n \n+#ifndef INT64_MAX\n+#define INT64_MAX (0x7FFFFFFFFFFFFFFFLL)\n+#endif\n+#ifndef INT64_MIN\n+#define INT64_MIN (-INT64_MAX-1)\n+#endif\n #ifndef INT_MAX\n #define INT_MAX (0x7FFFFFFFL)\n #endif\n@@ -62,6 +68,7 @@ int8in(PG_FUNCTION_ARGS)\n \tchar\t *ptr = str;\n \tint64\t\ttmp = 0;\n \tint\t\t\tsign = 1;\n+\tint64\t\tlimit;\n \n \t/*\n \t * Do our own scan, rather than relying on sscanf which might be\n@@ -75,18 +82,26 @@ int8in(PG_FUNCTION_ARGS)\n \t\tptr++;\n \tif (!isdigit((unsigned char) *ptr)) /* require at least one digit */\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n+\tif (sign < 0)\n+\t\tlimit = INT64_MIN;\n+\telse\n+\t\tlimit = -INT64_MAX;\n \twhile (*ptr && isdigit((unsigned char) *ptr))\t\t/* process digits */\n \t{\n-\t\tint64\t\tnewtmp = tmp * 10 + (*ptr++ - '0');\n+\t\tint\t\t\tdigit;\n \n-\t\tif ((newtmp / 10) != tmp)\t\t/* overflow? */\n+\t\tif (tmp < -(INT64_MAX/10))\t\t/* overflow? */\n+\t\t\telog(ERROR, \"int8 value out of range: \\\"%s\\\"\", str);\n+\t\tdigit = *ptr++ - '0';\n+\t\ttmp *= 10;\n+\t\tif (tmp < limit + digit)\n \t\t\telog(ERROR, \"int8 value out of range: \\\"%s\\\"\", str);\n-\t\ttmp = newtmp;\n+\t\ttmp -= digit;\n \t}\n \tif (*ptr)\t\t\t\t\t/* trailing junk? */\n \t\telog(ERROR, \"Bad int8 external representation \\\"%s\\\"\", str);\n \n-\tresult = (sign < 0) ? -tmp : tmp;\n+\tresult = (sign < 0) ? tmp : -tmp;\n \n \tPG_RETURN_INT64(result);\n }", "msg_date": "Thu, 22 Nov 2001 18:00:52 +0900 (JST)", "msg_from": "sugita@sra.co.jp", "msg_from_op": true, "msg_subject": "Re: Rejection of the smallest int8 " } ]
[ { "msg_contents": "I think we should remove doc/internals.ps from CVS. It is something\nthat belongs on the web site, not in CVS. It is 640k.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 01:13:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "internals.ps" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we should remove doc/internals.ps from CVS. It is something\n> that belongs on the web site, not in CVS. It is 640k.\n\nSince it is clearly not a *source* file, it does not belong in CVS.\nThe source for the document might belong in CVS, if we have it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 09:53:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: internals.ps " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we should remove doc/internals.ps from CVS. It is something\n> > that belongs on the web site, not in CVS. It is 640k.\n> \n> Since it is clearly not a *source* file, it does not belong in CVS.\n> The source for the document might belong in CVS, if we have it ...\n\nI moved it into the web cvs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 10:45:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: internals.ps" } ]
[ { "msg_contents": "\n> Note the failure to display \"Name :: name4\". The problem is that\n> no CommandCounterIncrement happens between the second INSERT and\n> the FOR ... SELECT, so the inserted row is considered not yet visible.\n\nI wonder why the insert does not do the CommandCounterIncrement,\nsince it is a statement that did modification and all subsequent \nstatements should see the effect ?\nIn other places this is the usual practice (CommandCounterIncrement\nafter \nmodification), no ?\nOr is this a stupid question.\n\nAndreas\n", "msg_date": "Wed, 21 Nov 2001 10:06:47 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: SPI and CommandCounterIncrement, redux" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> Note the failure to display \"Name :: name4\". The problem is that\n>> no CommandCounterIncrement happens between the second INSERT and\n>> the FOR ... SELECT, so the inserted row is considered not yet visible.\n\n> I wonder why the insert does not do the CommandCounterIncrement,\n> since it is a statement that did modification and all subsequent \n> statements should see the effect ?\n> In other places this is the usual practice (CommandCounterIncrement\n> after modification), no ?\n\nSPI's habit is to do CommandCounterIncrement *before* it starts a query,\nrather than after. I think this makes sense, since otherwise we'd have\nto do a CommandCounterIncrement at the start of every function call,\nwhether the function contained any subqueries or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 09:39:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SPI and CommandCounterIncrement, redux " } ]
[ { "msg_contents": "\n> The random_page_cost is changed because of an assumption that the\nbigger systems\n> will be more busy. The more busy a machine is doing I/O the lower the\ndifferential\n> between a sequential and random access. (\"sequential\" to the\napplication is less\n> likely sequential to the physical disk.)\n\nI think this reasoning is valid, but would we then not rather need\nsomething like\na scan_page_cost, that would need to be raised ? Or are the CPU costs so\nsmall,\nthat only the relation between scan and random counts ? \n\n> I'd like to open a debate about the benefit/cost of shared_buffers.\nThe question\n> is: \"Will postgres' management of shared buffers out perform O/S\ncache? Is there a\n> point of diminishing return on number of buffers? If so, what?\n\nI think the main point for PostgreSQL buffers is to account for \"dirty\"\npages.\nThis only because we use OS files and can thus rely on OS file caching.\nIf your application is update intensive, then you should have sufficient\nbuffers to hold most dirtied pages between checkpoints. Does that make\nsense ?\n\nAndreas\n", "msg_date": "Wed, 21 Nov 2001 11:38:56 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: postgresql.conf (Proposed settings)" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > The random_page_cost is changed because of an assumption that the\n> bigger systems\n> > will be more busy. The more busy a machine is doing I/O the lower the\n> differential\n> > between a sequential and random access. (\"sequential\" to the\n> application is less\n> > likely sequential to the physical disk.)\n> \n> I think this reasoning is valid, but would we then not rather need\n> something like\n> a scan_page_cost, that would need to be raised ? 
Or are the CPU costs so\n> small,\n> that only the relation between scan and random counts ?\n\n From what I can gather, \"sequential scan\" of next block is the base unit. All\nother measurements are based off this constant. What that constant is, I don't\nknow. \n\n> \n> > I'd like to open a debate about the benefit/cost of shared_buffers.\n> The question\n> > is: \"Will postgres' management of shared buffers out perform O/S\n> cache? Is there a\n> > point of diminishing return on number of buffers? If so, what?\n> \n> I think the main point for PostgreSQL buffers is to account for \"dirty\"\n> pages.\n> This only because we use OS files and can thus rely on OS file caching.\n> If your application is update intensive, then you should have sufficient\n> buffers to hold most dirtied pages between checkpoints. Does that make\n> sense ?\n\nActually this is something to think about. If Bruce implemented his removal of\ncaching for sequential scans of tables, then the postgres cache should out\nperform OS cache if you are doing sequential scans. I would also be curious\nabout the cost differential between reading a block vs scanning postgres' cache.\nGetting a block out of postgres' cache requires searching the cache, getting a\nblock for OS cache requires similar action within the OS, as well as, the OS\nhit, and the work required to put it in cache including the memory allocation.\n\nI will grant that the applications for which we are using Postgres use some\nvery odd queries, but I can see 80% cache hit rates if the statistics are\nright.\n", "msg_date": "Wed, 21 Nov 2001 06:58:10 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: postgresql.conf (Proposed settings)" } ]
[ { "msg_contents": "Hi,\n\nI am learning PostgreSQL programming, but it is still so difficult.\nLet me ask some of you who have better experience than me- in fact, I am a\nnovice!\n\nI am trying to access index structure by using user-defined functions, and\nas the first step,\nI wrote the following simple user-defined functions.\n\nPG_FUNCTION_INFO_V1(open_gist);\nPG_FUNCTION_INFO_V1(close_gist);\n\n/***************************************************************************/\nRelation open_gist(PG_FUNCTION_ARGS)\n{\n char *index_name = (char *) PG_GETARG_POINTER(0);\n elog(NOTICE, \"%s\\n\", index_name);\n return index_openr(index_name);\n}\n/***************************************************************************/\nvoid close_gist(PG_FUNCTION_ARGS)\n{\n Relation index_relation = (Relation) PG_GETARG_POINTER(0);\n index_close(index_relation);\n}\n\nThe problem is that I cannot understand the PostgreSQL's call-convention,\nthough I have gone through\nthe header file \"fmgr.h\".\nI tried to follow some examples as above, but it wouldn't work.\nI just got a garbage string on screen printed by elog(), when I execute\n\"select open_gist('myindex');\".\nSo, I tried to pass index name directly to index_openr(), that is\nindex_openr(\"myindex\"), then there was no problem.\nI think it is the problem about how to pass arguments.\n\nAnd do you think I can execute the above functions like this:\n\n select close_gist(open_gist('myindex'));\n\nMy question is whether the return data from open_gist() can be passed to\nclose_gist() or not.\nI mean, because data type \"Relation\" is just internal data type, not the\nbase data type of PostgreSQL,\nI am worried about the representation of return data type.\nDo I need to register \"Relation\" as user-defined data type as well?\n(When I create the two functions, I declared input and output data types to\nbe \"opaque\".)\n\nCould you advise me anything about that?\n\nCheers.\n\n\n\n\n\n", "msg_date": "Wed, 21 Nov 2001 13:19:39 
-0000", "msg_from": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk>", "msg_from_op": true, "msg_subject": "about call-convention in PostgreSQL programming" }, { "msg_contents": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk> writes:\n> Relation open_gist(PG_FUNCTION_ARGS)\n> {\n> char *index_name = (char *) PG_GETARG_POINTER(0);\n\nYou didn't say what datatype your function is declared to accept ...\nbut there is no Postgres datatype that is identical to a C string.\nYou have some conversion work to do if you want to, say, produce\na C string from a \"text\" input.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 10:00:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: about call-convention in PostgreSQL programming " } ]
[ { "msg_contents": "I committed some fixes to remove 'current' support from the date/time\ntypes. I *think* that these are now in cvs, but I was unable to see some\nof the changes (docs changes especially) when trying to get them back\nfrom the master cvsup server. I'm a bit worried that updating cvs from\ncvs.postgresql.org was not the right place to do it (but recall that it\nis, so ??).\n\nI did not update the date/time regression tests to reflect these recent\nchanges, having run out of time. Tom (or someone) can you please do that\nfor me?\n\nTIA, and have a good Thanksgiving. I'm out of town until Monday...\n\n - Thomas\n", "msg_date": "Wed, 21 Nov 2001 14:11:14 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Regression tests need updating..." }, { "msg_contents": "> I committed some fixes to remove 'current' support from the date/time\n> types. I *think* that these are now in cvs, but I was unable to see some\n> of the changes (docs changes especially) when trying to get them back\n> from the master cvsup server. I'm a bit worried that updating cvs from\n> cvs.postgresql.org was not the right place to do it (but recall that it\n> is, so ??).\n\nI think I see your changes now (I have not seen the commit messages\nyet, though).\n\n> I did not update the date/time regression tests to reflect these recent\n> changes, having run out of time. Tom (or someone) can you please do that\n> for me?\n> \n> TIA, and have a good Thanksgiving. I'm out of town until Monday...\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n", "msg_date": "Thu, 22 Nov 2001 00:05:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Regression tests need updating..." 
}, { "msg_contents": "Hi,\n\nGreatbridge had a webpage with some benchmarks that they arranged for.\nThe site is now gone, does anyone know if there is a copy of the\nbenchmarks, specifically the graphs.\n\nDave\n\n", "msg_date": "Wed, 21 Nov 2001 10:21:29 -0500", "msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Benchmarks" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I did not update the date/time regression tests to reflect these recent\n> changes, having run out of time. Tom (or someone) can you please do that\n> for me?\n\nOn it now.\n\nHave a good holiday ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 13:12:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests need updating... " }, { "msg_contents": "> Hi,\n> \n> Greatbridge had a webpage with some benchmarks that they arranged for.\n> The site is now gone, does anyone know if there is a copy of the\n> benchmarks, specifically the graphs.\n\nOh, I do have the graphs. Here they are.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026", "msg_date": "Fri, 23 Nov 2001 11:58:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Benchmarks" }, { "msg_contents": "Hi,\n\nI am running PostgreSQL 7.0.3 on a linux box.\n\nMy application uses java/persistence layer/jdbc which does the following\nwhen inserting a row\n\n1) gets a connection from the pool and gets a new id for the row using\nselect nextval('sequence');\n2) returns the connection to the pool\n3) gets another connection from the pool and inserts the data into the\nrow using the id retrieved in step 1\n4) returns the connection to the pool\n\nRecently we have had instances where the postmaster has 'crashed' and it\nappears that the sequence numbers were not written to disk\n\nAfter the postmaster restarts we have instances where there are rows in\nthe database with id's greater than the current sequence value. The\ninteresting part is that values are always different by 4?\n\nIn other words there is a row with an id 1004 and the sequence is\ncurrently 1000?\n\nWe are running with fsync on. 
One thing that has changed lately is we\nare running with quite a few buffers?\n\nAny insight would be appreciated,\n\nDave\n\n", "msg_date": "Fri, 23 Nov 2001 12:29:21 -0500", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Problems with sequences" }, { "msg_contents": "Tom, \n\nOk, we're long overdue for upgrading to 7.1.3 anyway, now I have a real\ngood excuse to do it.\n\nDave\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us] \nSent: Friday, November 23, 2001 2:22 PM\nTo: dave@fastcrypt.com\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Problems with sequences \n\n\n\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> I am running PostgreSQL 7.0.3 on a linux box.\n> Recently we have had instances where the postmaster has 'crashed' and \n> it appears that the sequence numbers were not written to disk After \n> the postmaster restarts we have instances where there are rows in the \n> database with id's greater than the current sequence value. The \n> interesting part is that values are always different by 4? In other \n> words there is a row with an id 1004 and the sequence is currently \n> 1000?\n\nIf you can reproduce this under 7.1.3 or later I'd be interested in\npursuing it. 
7.0 is rapidly attaining the status of \"ancient history\".\nQuite aside from plain old bug fixes, the existence of WAL in 7.1 would\nmake a huge difference in the possible causes of such a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Nov 2001 14:21:33 -0500", "msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "Re: Problems with sequences " }, { "msg_contents": "\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> I am running PostgreSQL 7.0.3 on a linux box.\n> Recently we have had instances where the postmaster has 'crashed' and it\n> appears that the sequence numbers were not written to disk\n> After the postmaster restarts we have instances where there are rows in\n> the database with id's greater than the current sequence value. The\n> interesting part is that values are always different by 4?\n> In other words there is a row with an id 1004 and the sequence is\n> currently 1000?\n\nIf you can reproduce this under 7.1.3 or later I'd be interested in\npursuing it. 7.0 is rapidly attaining the status of \"ancient history\".\nQuite aside from plain old bug fixes, the existence of WAL in 7.1 would\nmake a huge difference in the possible causes of such a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 14:22:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with sequences " }, { "msg_contents": "> > I did not update the date/time regression tests to reflect these recent\n> > changes, having run out of time. Tom (or someone) can you please do that\n> > for me?\n> On it now.\n\nThanks! Not sure if there is more current news; I'm wading through > 700\nmessages and answering as I go :(\n\n - Thomas\n", "msg_date": "Mon, 26 Nov 2001 17:22:36 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Regression tests need updating..." } ]
[ { "msg_contents": "hello, what is wrong?\n\npostgresql 7.1.3\nsuse linux 7.1\n\nconfigure postgresql:\n./configure --prefix=$DIR_DEST --exec-prefix=$DIR_DEST \n\t\t--enable-local --enable-multibyte \n\t\t--enable-odbc --enable-syslog --with-java\n\ninit database:\n$DIR_DEST/bin/initdb --pgdata=$DIR_DB01 --encoding=LATIN1\n\ncheck LANG-variable:\npostgres@server:/app/pgsql/bin > echo $LANG\nde_DE \n\nand that's the result:\n\n\tmy_db=# select lower('�������');\n\t lower\n\t---------\n\t �������\n\t(1 row) \n\nuppercase umlauts are not converted.\n\n\nthanks\nfrank\n", "msg_date": "Wed, 21 Nov 2001 15:15:22 +0100", "msg_from": "=?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de>", "msg_from_op": true, "msg_subject": "upper and lower doesn't work with german umlaut?" }, { "msg_contents": "=?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de> writes:\n> uppercase umlauts are not converted.\n\nDid you run initdb with the correct LANG environment?\n\nTo check, run contrib/pg_controldata and see what it says about\nLC_COLLATE and LC_CTYPE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 11:43:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german umlaut? " }, { "msg_contents": "\"K�nig, Frank\" writes:\n\n> ./configure --prefix=$DIR_DEST --exec-prefix=$DIR_DEST\n> \t\t--enable-local --enable-multibyte\n ^^^^^\nlocale\n\n> \t\t--enable-odbc --enable-syslog --with-java\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 23 Nov 2001 22:57:54 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: upper and lower doesn't work with german umlaut?" 
}, { "msg_contents": "> \"K?nig, Frank\" writes:\n> \n> > ./configure --prefix=$DIR_DEST --exec-prefix=$DIR_DEST\n> > \t\t--enable-local --enable-multibyte\n> ^^^^^\n> locale\n> \n> > \t\t--enable-odbc --enable-syslog --with-java\n\nAnd why does configure ignore flags it doesn't support. I sure don't\nlike that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Nov 2001 20:56:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: upper and lower doesn't work with german umlaut?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> And why does configure ignore flags it doesn't support. I sure don't\n> like that.\n\nComplain to the GNU people. I've always considered this a serious bug\nin autoconf, but they steadfastly maintain it's a feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 22:45:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upper and lower doesn't work with german umlaut? " }, { "msg_contents": "\nTom Lane a écrit :\n\n> =?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de> writes:\n> > uppercase umlauts are not converted.\n>\n> Did you run initdb with the correct LANG environment?\n>\n> To check, run contrib/pg_controldata and see what it says about\n> LC_COLLATE and LC_CTYPE.\n\nI have relatively the same problem! Where can I find the executable \"contrib\"\n?\nThanks!\n\n\n", "msg_date": "Tue, 27 Nov 2001 09:42:56 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german umlaut?" 
}, { "msg_contents": "\nOn Tue, 27 Nov 2001 Vincent.Gaboriau@answare.fr wrote:\n\n>\n> Tom Lane a �crit :\n>\n> > =?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de> writes:\n> > > uppercase umlauts are not converted.\n> >\n> > Did you run initdb with the correct LANG environment?\n> >\n> > To check, run contrib/pg_controldata and see what it says about\n> > LC_COLLATE and LC_CTYPE.\n>\n> I have relativily the same problem! Where can I find the executable \"contrib\"\n> ?\n\ncontrib refers to the contrib directory at the top level of the source\ntree.\n\n\n", "msg_date": "Tue, 27 Nov 2001 07:43:28 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "\n\nStephan Szabo a �crit :\n\n> On Tue, 27 Nov 2001 Vincent.Gaboriau@answare.fr wrote:\n>\n> >\n> > Tom Lane a �crit :\n> >\n> > > =?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de> writes:\n> > > > uppercase umlauts are not converted.\n> > >\n> > > Did you run initdb with the correct LANG environment?\n> > >\n> > > To check, run contrib/pg_controldata and see what it says about\n> > > LC_COLLATE and LC_CTYPE.\n> >\n> > I have relativily the same problem! 
Where can I find the executable \"contrib\"\n> > ?\n>\n> contrib refers to the contrib directory at the top level of the source\n> tree.\n>\n\nThe sources of postgres have been deleted, so I can't find this \"contrib\"!\nSo I search for another way to know if postgres has been installed with the locale\nconfiguration.\nCan anyone help me?\n\n\n", "msg_date": "Tue, 27 Nov 2001 17:08:31 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "On Tue, 27 Nov 2001 Vincent.Gaboriau@answare.fr wrote:\n\n>\n>\n> Stephan Szabo a écrit :\n>\n> > On Tue, 27 Nov 2001 Vincent.Gaboriau@answare.fr wrote:\n> >\n> > >\n> > > Tom Lane a écrit :\n> > >\n> > > > =?ISO-8859-1?Q?=22K=F6nig=2C_Frank=22?= <Frank.Koenig@rossmann.de> writes:\n> > > > > uppercase umlauts are not converted.\n> > > >\n> > > > Did you run initdb with the correct LANG environment?\n> > > >\n> > > > To check, run contrib/pg_controldata and see what it says about\n> > > > LC_COLLATE and LC_CTYPE.\n> > >\n> > > I have relatively the same problem! 
Where can I find the executable \"contrib\"\n> > > ?\n> >\n> > contrib refers to the contrib directory at the top level of the source\n> > tree.\n> >\n>\n> The sources of postgres have been deleted, so I can't find this \"contrib\"!\n> So I surch another way to know if postgres have been installed with the local\n> configuration.\n> Can anyone help me?\n\nYou might be able to get it by looking through the\n<data dir>/global/pg_control file, but it's a binary file so you'll have\nto search for it.\n\n\n", "msg_date": "Tue, 27 Nov 2001 08:57:05 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 2001 November 27 11:08 am, Vincent.Gaboriau@answare.fr wrote:\n\n> > > > To check, run contrib/pg_controldata and see what it says about\n> > > > LC_COLLATE and LC_CTYPE.\n> > >\n> > > I have relativily the same problem! Where can I find the executable\n> > > \"contrib\" ?\n> >\n> > contrib refers to the contrib directory at the top level of the source\n> > tree.\n>\n> The sources of postgres have been deleted, so I can't find this \"contrib\"!\n\nUnder debian linux, it is\n/usr/lib/postgresql/bin/pg_controldata\nYou should be able to find it quickly using\n% locate pg_controldata\nAlternately, if your system isn't properly maintained (and, unless you've got \nsome seriously nice hardware, this will take some time: grab a coffee while \nyou wait...), try \n% find / -name 'pg_controldata' -print 2> /dev/null\n\n- -- \nAndrew G. 
Hammond mailto:drew@xyzzy.dhs.org http://xyzzy.dhs.org/~drew/\n56 2A 54 EF 19 C0 3B 43 72 69 5B E3 69 5B A1 1F 613-389-5481\n5CD3 62B0 254B DEB1 86E0 8959 093E F70A B457 84B1\n\"To blow recursion you must first blow recur\" -- me\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niEYEARECAAYFAjwDzroACgkQCT73CrRXhLGCDgCfcWiJQWBl25oDh1qkgliH9hZz\nORIAnAtcWijMWTZXtWKWiPUorVz82GXU\n=9Tyz\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 27 Nov 2001 12:34:50 -0500", "msg_from": "\"Andrew G. Hammond\" <drew@xyzzy.dhs.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "> > The sources of postgres have been deleted, so I can't find this \"contrib\"!\n> > So I surch another way to know if postgres have been installed with the local\n> > configuration.\n> > Can anyone help me?\n>\n> You might be able to get it by looking through the\n> <data dir>/global/pg_control file, but it's a binary file so you'll have\n> to search for it.\n\nI had found it, but I don't know speak fluent binary language ;-)\nDoes a way exist to \"decompile\" it or to get informations on it?\n\nThanks.\n\n\n", "msg_date": "Wed, 28 Nov 2001 09:37:18 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "Vincent.Gaboriau@answare.fr writes:\n>> You might be able to get it by looking through the\n>> <data dir>/global/pg_control file, but it's a binary file so you'll have\n>> to search for it.\n\n> I had found it, but I don't know speak fluent binary language ;-)\n> Does a way exist to \"decompile\" it or to get informations on it?\n\nIf you can't be troubled to compile up pg_controldata, then you'll\nhave to resort to good old od:\n\n$ od -c pg_control\n0000000 314 201 030 267 255 u 344 277 \\0 \\0 \\0 G 013 355 p 253\n0000020 \\0 \\0 \\0 004 < 004 ) 006 \\0 \\0 \\0 
\\0 \\0 \\0 \\0 8\n0000040 \\0 \\0 \\0 \\0 7 026 e 210 \\0 \\0 \\0 \\0 7 026 D h\n0000060 \\0 \\0 \\0 \\0 7 026 e 210 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n0000100 \\0 \\0 \\0 \\t \\0 001 357 235 \\0 017 017 354 < 004 ) 004\n0000120 \\0 \\0 \\0 \\0 002 \\0 \\0 C \\0 \\0 \\0 \\0 \\0 \\0 \\0\n0000140 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n*\n0000320 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 C \\0 \\0 \\0 \\0 \\0 \\0 \\0\n0000340 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n*\n0020000\n\nThe LC_COLLATE and LC_CTYPE locale strings should be the last nonzero\nthings in the file --- they're both \"C\" in this example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 10:11:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german " }, { "msg_contents": "\n\nTom Lane a �crit :\n\n> Vincent.Gaboriau@answare.fr writes:\n> >> You might be able to get it by looking through the\n> >> <data dir>/global/pg_control file, but it's a binary file so you'll have\n> >> to search for it.\n>\n> > I had found it, but I don't know speak fluent binary language ;-)\n> > Does a way exist to \"decompile\" it or to get informations on it?\n>\n> If you can't be troubled to compile up pg_controldata, then you'll\n> have to resort to good old od:\n>\n> $ od -c pg_control\n> 0000000 314 201 030 267 255 u 344 277 \\0 \\0 \\0 G 013 355 p 253\n> 0000020 \\0 \\0 \\0 004 < 004 ) 006 \\0 \\0 \\0 \\0 \\0 \\0 \\0 8\n> 0000040 \\0 \\0 \\0 \\0 7 026 e 210 \\0 \\0 \\0 \\0 7 026 D h\n> 0000060 \\0 \\0 \\0 \\0 7 026 e 210 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> 0000100 \\0 \\0 \\0 \\t \\0 001 357 235 \\0 017 017 354 < 004 ) 004\n> 0000120 \\0 \\0 \\0 \\0 002 \\0 \\0 C \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> 0000140 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> *\n> 0000320 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 C \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> 0000340 \\0 \\0 \\0 \\0 \\0 \\0 \\0 
\\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> *\n> 0020000\n>\n> The LC_COLLATE and LC_CTYPE locale strings should be the last nonzero\n> things in the file --- they're both \"C\" in this example.\n>\n\nThanks to Tom Lane!\nBut I have This in my pg_control file:\n\n# od -c pg_control\n0000000 \\0 \\0 \\0 \\0 001 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\b \\0 \\0 \\0\n0000020 � 006 004 < 004 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 002 \\0\n0000040 � � � \\v \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n0000060 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n*\n0020000\n#\n\nSo no LC_COLLATE and LC_TYPE in the pg_control. But these variables are set in\nthe user environement!\n\n", "msg_date": "Wed, 28 Nov 2001 17:29:51 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "Vincent.Gaboriau@answare.fr writes:\n> But I have This in my pg_control file:\n\n> # od -c pg_control\n> 0000000 \\0 \\0 \\0 \\0 001 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\b \\0 \\0 \\0\n> 0000020 � 006 004 < 004 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 002 \\0\n> 0000040 � � � \\v \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> 0000060 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> *\n> 0020000\n> #\n\nEr ... *what* version did you say you were running? 
That doesn't look\nlike a 7.1 pg_control to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 11:35:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german " }, { "msg_contents": "\n\nTom Lane a �crit :\n\n> Vincent.Gaboriau@answare.fr writes:\n> > But I have This in my pg_control file:\n>\n> > # od -c pg_control\n> > 0000000 \\0 \\0 \\0 \\0 001 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\b \\0 \\0 \\0\n> > 0000020 � 006 004 < 004 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 002 \\0\n> > 0000040 � � � \\v \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> > 0000060 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0 \\0\n> > *\n> > 0020000\n> > #\n>\n> Er ... *what* version did you say you were running? That doesn't look\n> like a 7.1 pg_control to me.\n\nNo, I have an 7.0.2 on Linux Mandrake 7.2.\n\n", "msg_date": "Wed, 28 Nov 2001 17:44:50 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "Vincent.Gaboriau@answare.fr writes:\n>> Er ... *what* version did you say you were running? That doesn't look\n>> like a 7.1 pg_control to me.\n\n> No, I have an 7.0.2 on Linux Mandrake 7.2.\n\nTime to update then. 7.0 doesn't freeze the LC_COLLATE setting at\ninitdb, which means that you can corrupt your indexes by starting the\npostmaster with different LC settings at different times. Which is\ndepressingly easy to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 12:13:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german " }, { "msg_contents": "\n\nTom Lane a �crit :\n\n> Vincent.Gaboriau@answare.fr writes:\n> >> Er ... *what* version did you say you were running? 
That doesn't look\n> >> like a 7.1 pg_control to me.\n>\n> > No, I have a 7.0.2 on Linux Mandrake 7.2.\n>\n> Time to update then. 7.0 doesn't freeze the LC_COLLATE setting at\n> initdb, which means that you can corrupt your indexes by starting the\n> postmaster with different LC settings at different times. Which is\n> depressingly easy to do.\n\nOk! And does the new version correct my character sequence error?\n\n\n", "msg_date": "Thu, 29 Nov 2001 10:01:45 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "Vincent.Gaboriau@answare.fr writes:\n> Ok! And does the new version correct my character sequence error?\n\nWhich was ...?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 09:24:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german " }, { "msg_contents": "\n\nTom Lane wrote:\n\n> Vincent.Gaboriau@answare.fr writes:\n> > Ok! And does the new version correct my character sequence error?\n>\n> Which was ...?\n\nRefer to \" [ADMIN] character sequence problem\"\n\nThanks for your help.\n\nregards, Vincent.\n\n", "msg_date": "Thu, 29 Nov 2001 15:43:20 +0100", "msg_from": "Vincent.Gaboriau@answare.fr", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german" }, { "msg_contents": "> Vincent.Gaboriau@answare.fr writes:\n>> Ok! And does the new version correct my character sequence error?\n\n> Which was ...?\n\nOh, never mind (for some reason my first search for your previous\nmessages didn't turn up anything).\n\nI'm not sure. The LIKE queries you were complaining of didn't look like\nthey could use an index anyway, so index corruption wouldn't explain\nmisbehavior there. 
But I'd recommend updating from 7.0 to 7.1\nregardless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 10:06:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] upper and lower doesn't work with german " }, { "msg_contents": "Hello,\n\nUsing vacuum on a table is very useful to compress the space of a table.\nHowever I noticed the index also grows very fast. Does anyone have a way to\ncompress the index? I know if I dump the database, destroy the original\ndatabase, then create a new one with the same name, and restore the dump\nfile, that will dramatically reduce the space the database has taken. This\nis not a good way for a live database that is driven by a web application,\nsince it means I have to shut down the web application service during this\ncleanup.\n\nThanks,\n\nBangh\n\n", "msg_date": "Thu, 29 Nov 2001 10:06:02 -0600", "msg_from": "bangh <banghe@baileylink.net>", "msg_from_op": false, "msg_subject": "Vacuum" }, { "msg_contents": "bangh <banghe@baileylink.net> writes:\n> However I noticed the index also grows very fast. Does anyone have a way to\n> compress the index?\n\nREINDEX, or just drop and recreate the indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 13:30:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum " }, { "msg_contents": "On Thu, 29 Nov 2001, Tom Lane wrote:\n\n> bangh <banghe@baileylink.net> writes:\n> > However I noticed the index also grows very fast. 
Does anyone have a way to\n> > compress the index?\n>\n> REINDEX, or just drop and recreate the indexes.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\nOkay, after some more testing and cussing/discussing with myself, I have\ndecided that my \"reindex\" perl script is ready for public consumption.\n\nTo use the attached script, you will need to put your username, password\nand hostname into the program (just look for the comment). If you run it\nwithout any parameters, it will give you some simple instructions. I use\nit something like:\n\n\t$ fixtable.pl -I -t relevance kids\n\nThe program will then dump out SQL code to re-create ALL indexes (-I) for\nthe 'relevance' table (-t) in my 'kids' database. You can feed the output\nof the script to 'psql', or look at it first (I am paranoid) and then\ncut-n-paste it to 'psql' yourself.\n\nThe SQL code creates a new index for each index on a table, drops the\noriginal index, renames the new index to the old name, and then repeats\nfor the next index on the table. This means that the user you login as\nwhen you run 'psql' must own the indexes (otherwise the DROP INDEX fails).\nIf the drop fails for any reason, you will end up with TWO (2) identical\nindexes which will almost certainly hurt performance on inserts. Also,\nyou need to have enough free space to have a second copy of your largest\nindex on the table or the CREATE INDEX will fail.\n\nNOTE: This script DOES NOT DO ANYTHING with permissions because I don't\nuse them. If someone would like to give me some SQL code that will return\nthe old permissions, and set them on the new index, I would be happy to\nadd this functionality.\n\nFinally. I am providing this script because I am a nice guy. If your\nmachine explodes or any table gets injured by your use of this script I\ncannot and will not be held responsible. 
I wrote this script for my own\nuse, and the only time it has ever failed for me was when I had duplicated\nvalues in a UNIQUE index field. Once I corrected the data corruption, the\nscript worked correctly. So, at least at first, I recommend that you look\nat the SQL that is generated, and make sure you understand what it is\ntrying to do BEFORE you use it.\n\n- brian\n\nWm. Brian McCane | Life is full of doors that won't open\nSearch http://recall.maxbaud.net/ | when you knock, equally spaced amid those\nUsenet http://freenews.maxbaud.net/ | that open when you don't want them to.\nAuction http://www.sellit-here.com/ | - Roger Zelazny \"Blood of Amber\"", "msg_date": "Thu, 29 Nov 2001 14:13:13 -0600 (CST)", "msg_from": "Brian McCane <bmccane@mccons.net>", "msg_from_op": false, "msg_subject": "Re: Vacuum " } ]
[ { "msg_contents": "Currently, if PG has a single-argument function named after its result\ntype, the function is assumed to represent a valid implicit type\ncoercion. For example, I can do\n\nregression=# select now() || now();\n ?column?\n------------------------------------------------------------\n 2001-11-21 15:12:13.226482-052001-11-21 15:12:13.226482-05\n(1 row)\n\nbecause there is a function text(timestamp) returning text, and\nthis function gets invoked implicitly.\n\nIt strikes me that this is a bad idea, and will get a lot worse as we\nadd more conversion functions. With enough implicit coercions one will\nnever be entirely sure how the system will interpret a mixed-datatype\nexpression. Nonetheless, having more conversion functions seems like a\ngood idea --- I think there should be a numeric-to-text conversion\nfunction, for example, and someone was just complaining in pggeneral\nabout the lack of a boolean-to-text coercion. The problem is that\nthere's no way to create a conversion function without inducing an\nimplicit coercion path (unless you give the function a nonobvious name).\n\nWhat I'd like to suggest (for 7.3 or later) is adding a boolean column\nto pg_proc that indicates \"can be implicit coercion\". A function for\nwhich this is not set can be used as an explicitly requested coercion,\nbut not an implicit one. 
Thus it'd be possible to define text(boolean)\nand allow it to be called explicitly, without creating an implicit\ncoercion path and thereby losing a lot of type safety.\n\nI have not gone over the existing implicit coercions to see which ones\nI like and don't like, but I think a good first cut at maintaining\nsanity would be to disable any cross-type-category implicit coercions.\nThus, for example, int4 to float8 seems like an okay implicit coercion,\nbut not int4 to text.\n\nNote that this would cause the system to reject some things it accepts\nnow, for example:\n\nregression=# select 45::int4 || 66::int4;\n ?column?\n----------\n 4566\n(1 row)\n\nThis sort of thing would need explicit coercions to text under my proposal.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 15:24:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Implicit coercions need to be reined in" }, { "msg_contents": "\nOn Wed, 21 Nov 2001, Tom Lane wrote:\n\n> It strikes me that this is a bad idea, and will get a lot worse as we\n> add more conversion functions. With enough implicit coercions one will\n> never be entirely sure how the system will interpret a mixed-datatype\n> expression. Nonetheless, having more conversion functions seems like a\n> good idea --- I think there should be a numeric-to-text conversion\n> function, for example, and someone was just complaining in pggeneral\n> about the lack of a boolean-to-text coercion. The problem is that\n> there's no way to create a conversion function without inducing an\n> implicit coercion path (unless you give the function a nonobvious name).\n>\n> What I'd like to suggest (for 7.3 or later) is adding a boolean column\n> to pg_proc that indicates \"can be implicit coercion\". A function for\n> which this is not set can be used as an explicitly requested coercion,\n> but not an implicit one. 
Thus it'd be possible to define text(boolean)\n> and allow it to be called explicitly, without creating an implicit\n> coercion path and thereby losing a lot of type safety.\n\nI think something of this sort is a good thing. :) It's a bit of a pain in\none way, but it makes understanding what's going on simpler, especially\ngiven things like the person who was getting text cut off at 31 or\nwhatever characters due to the fact that there was an implicit coercion\nthrough name.\n\n", "msg_date": "Wed, 21 Nov 2001 12:58:03 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Tom Lane writes:\n\n> What I'd like to suggest (for 7.3 or later) is adding a boolean column\n> to pg_proc that indicates \"can be implicit coercion\".\n\nGreat! I've always wished for this sort of thing.\n\nI think from this there are only a few more steps to the full SQL99\nspecification of user-defined casts. In particular it'd involve two\nboolean flags: one that means \"this is a cast function\" (the function is\nused internally for a CAST() invocation), and one for allowing it to be\nused for automatic conversion. 
We currently don't have the first one on\nthe radar because we use the function name and argument types as the\nindicator.\n\nOne more thing is that instead of looking up a cast function from type A\nto type B as \"returns A, is named A, takes arg B\" it could be looked up as\n\"returns A, takes arg B, is a cast function\", which would generalize this\nwhole thing a bit more.\n\nReferences: SQL99 Part 2 11.49, 11.52 (I haven't read the whole thing,\nbut I think it's the idea.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 22 Nov 2001 17:29:39 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> One more thing is that instead of looking up a cast function from type A\n> to type B as \"returns A, is named A, takes arg B\" it could be looked up as\n> \"returns A, takes arg B, is a cast function\", which would generalize this\n> whole thing a bit more.\n\nI thought about that, but it would require keeping an extra index on\npg_proc (there's no efficient way to search by result type now). Not\nclear that it's worth it, especially considering that there's really\nonly one reasonable name for a cast function anyway. What else would\nyou want to call it than the name of the destination type?\n\nAlso, this convention assures that there's at most *one* cast function\nfor any type coercion A to B. If the name could be anything then we'd\nneed some auxiliary mechanism to prevent conflicting cast functions\nfrom being declared. 
Seems like a lot of mechanism for a dubious goal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 12:03:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "Awhile back I suggested adding a boolean column to pg_proc to control\nwhich type coercion functions could be invoked implicitly, and which\nwould need an explicit cast:\nhttp://archives.postgresql.org/pgsql-hackers/2001-11/msg00803.php\nThere is a relevant bug report #484 showing the dangers of too many\nimplicit coercion paths:\nhttp://archives.postgresql.org/pgsql-bugs/2001-10/msg00108.php\n\nI have added such a column as part of the pg_proc changes I'm currently\ndoing to migrate aggregates into pg_proc. So it's now time to debate\nthe nitty-gritty: exactly which coercion functions should not be\nimplicitly invokable anymore?\n\nMy first-cut attempt at this is shown by the two printouts below.\nThe first cut does not allow any implicit coercions to text from types\nthat are not in the text category, which seems a necessary rule to me\n--- the above-cited bug report shows why free coercions to text are\ndangerous. However, it turns out that several of the regression\ntests fail with this rule; see the regression diffs below.\n\nShould I consider these regression tests wrong, and correct them?\nIf not, how can we limit implicit coercions to text enough to avoid\nthe problems illustrated by bug #484?\n\nAnother interesting point is that I allowed implicit coercions from\nfloat8 to numeric; this is necessary to avoid breaking cases like\n\tinsert into foo(numeric_col) values(12.34);\nsince the constant will be initially typed as float8. However, because\nI didn't allow the reverse coercion implicitly, this makes numeric\n\"more preferred\" than float8. 
Thus, for example,\n\tselect '12.34'::numeric + 12.34;\nwhich draws a can't-resolve-operator error in 7.2, is resolved as\nnumeric addition with these changes. Is this a good thing, or not?\nWe could preserve the can't-resolve behavior by marking numeric->float8\nas an allowed implicit coercion, but that seems ugly. I'm not sure we\ncan do a whole lot better without some more wide-ranging revisions of\nthe way we handle untyped numeric literals (as in past proposals to\ninvent an UNKNOWNNUMERIC pseudo-type).\n\nAlso, does anyone have any other nits to pick with this classification\nof which coercions are implicitly okay? I've started with a fairly\ntough approach of disallowing most implicit coercions, but perhaps this\ngoes too far.\n\n\t\t\tregards, tom lane\n\nCoercions allowed implicitly:\n\n oid | result | input | prosrc \n------+-------------+-------------+-----------------------\n 860 | bpchar | char | char_bpchar\n 408 | bpchar | name | name_bpchar\n 861 | char | bpchar | bpchar_char\n 944 | char | text | text_char\n 312 | float4 | float8 | dtof\n 236 | float4 | int2 | i2tof\n 318 | float4 | int4 | i4tof\n 311 | float8 | float4 | ftod\n 235 | float8 | int2 | i2tod\n 316 | float8 | int4 | i4tod\n 482 | float8 | int8 | i8tod\n 314 | int2 | int4 | i4toi2\n 714 | int2 | int8 | int82\n 313 | int4 | int2 | i2toi4\n 480 | int4 | int8 | int84\n 754 | int8 | int2 | int28\n 481 | int8 | int4 | int48\n 1177 | interval | reltime | reltime_interval\n 1370 | interval | time | time_interval\n 409 | name | bpchar | bpchar_name\n 407 | name | text | text_name\n 1400 | name | varchar | text_name\n 1742 | numeric | float4 | float4_numeric\n 1743 | numeric | float8 | float8_numeric\n 1782 | numeric | int2 | int2_numeric\n 1740 | numeric | int4 | int4_numeric\n 1781 | numeric | int8 | int8_numeric\n 946 | text | char | char_text\n 406 | text | name | name_text\n 2046 | time | timetz | timetz_time\n 2023 | timestamp | abstime | abstime_timestamp\n 2024 | timestamp | date | 
date_timestamp\n 2027 | timestamp | timestamptz | timestamptz_timestamp\n 1173 | timestamptz | abstime | abstime_timestamptz\n 1174 | timestamptz | date | date_timestamptz\n 2028 | timestamptz | timestamp | timestamp_timestamptz\n 2047 | timetz | time | time_timetz\n 1401 | varchar | name | name_text\n(38 rows)\n\nCoercions that will require explicit CAST, ::type, or typename(x) syntax\n(NB: in 7.2 all of these would have been allowed implicitly):\n\n oid | result | input | prosrc\n------+-------------+-------------+------------------------------------------\n 2030 | abstime | timestamp | timestamp_abstime\n 1180 | abstime | timestamptz | timestamptz_abstime\n 1480 | box | circle | circle_box\n 1446 | box | polygon | poly_box\n 1714 | cidr | text | text_cidr\n 1479 | circle | box | box_circle\n 1474 | circle | polygon | poly_circle\n 1179 | date | abstime | abstime_date\n 748 | date | text | text_date\n 2029 | date | timestamp | timestamp_date\n 1178 | date | timestamptz | timestamptz_date\n 1745 | float4 | numeric | numeric_float4\n 839 | float4 | text | text_float4\n 1746 | float8 | numeric | numeric_float8\n 838 | float8 | text | text_float8\n 1713 | inet | text | text_inet\n 238 | int2 | float4 | ftoi2\n 237 | int2 | float8 | dtoi2\n 1783 | int2 | numeric | numeric_int2\n 818 | int2 | text | text_int2\n 319 | int4 | float4 | ftoi4\n 317 | int4 | float8 | dtoi4\n 1744 | int4 | numeric | numeric_int4\n 819 | int4 | text | text_int4\n 483 | int8 | float8 | dtoi8\n 1779 | int8 | numeric | numeric_int8\n 1289 | int8 | text | text_int8\n 1263 | interval | text | text_interval\n 1541 | lseg | box | box_diagonal\n 767 | macaddr | text | text_macaddr\n 817 | oid | text | text_oid\n 1447 | path | polygon | poly_path\n 1534 | point | box | box_center\n 1416 | point | circle | circle_center\n 1532 | point | lseg | lseg_center\n 1533 | point | path | path_center\n 1540 | point | polygon | poly_center\n 1448 | polygon | box | box_poly\n 1544 | polygon | circle | select 
polygon(12, $1)\n 1449 | polygon | path | path_poly\n 1200 | reltime | int4 | int4reltime\n 1194 | reltime | interval | interval_reltime\n 749 | text | date | date_text\n 841 | text | float4 | float4_text\n 840 | text | float8 | float8_text\n 730 | text | inet | network_show\n 113 | text | int2 | int2_text\n 112 | text | int4 | int4_text\n 1288 | text | int8 | int8_text\n 1193 | text | interval | interval_text\n 752 | text | macaddr | macaddr_text\n 114 | text | oid | oid_text\n 948 | text | time | time_text\n 2034 | text | timestamp | timestamp_text\n 1192 | text | timestamptz | timestamptz_text\n 939 | text | timetz | timetz_text\n 1364 | time | abstime | select time(cast($1 as timestamp without time zone))\n 1419 | time | interval | interval_time\n 837 | time | text | text_time\n 1316 | time | timestamp | timestamp_time\n 2022 | timestamp | text | text_timestamp\n 1191 | timestamptz | text | text_timestamptz\n 938 | timetz | text | text_timetz\n 1388 | timetz | timestamptz | timestamptz_timetz\n 1619 | varchar | int4 | int4_text\n 1623 | varchar | int8 | int8_text\n(66 rows)\n\n\nRegression failures with this set of choices (I've edited the output to\nremove diffs that are merely consequences of the actual failures):\n\n*** ./expected/char.out\tMon May 21 12:54:46 2001\n--- ./results/char.out\tWed Apr 10 11:48:16 2002\n***************\n*** 18,23 ****\n--- 18,25 ----\n -- any of the following three input formats are acceptable \n INSERT INTO CHAR_TBL (f1) VALUES ('1');\n INSERT INTO CHAR_TBL (f1) VALUES (2);\n+ ERROR: column \"f1\" is of type 'character' but expression is of type 'integer'\n+ \tYou will need to rewrite or cast the expression\n INSERT INTO CHAR_TBL (f1) VALUES ('3');\n -- zero-length char \n INSERT INTO CHAR_TBL (f1) VALUES ('');\n\n*** ./expected/varchar.out\tMon May 21 12:54:46 2001\n--- ./results/varchar.out\tWed Apr 10 11:48:17 2002\n***************\n*** 7,12 ****\n--- 7,14 ----\n -- any of the following three input formats are acceptable \n 
INSERT INTO VARCHAR_TBL (f1) VALUES ('1');\n INSERT INTO VARCHAR_TBL (f1) VALUES (2);\n+ ERROR: column \"f1\" is of type 'character varying' but expression is of type 'integer'\n+ \tYou will need to rewrite or cast the expression\n INSERT INTO VARCHAR_TBL (f1) VALUES ('3');\n -- zero-length char \n INSERT INTO VARCHAR_TBL (f1) VALUES ('');\n\n*** ./expected/strings.out\tFri Jun 1 13:49:17 2001\n--- ./results/strings.out\tWed Apr 10 11:49:29 2002\n***************\n*** 137,147 ****\n (1 row)\n \n SELECT POSITION(5 IN '1234567890') = '5' AS \"5\";\n! 5 \n! ---\n! t\n! (1 row)\n! \n --\n -- test LIKE\n -- Be sure to form every test as a LIKE/NOT LIKE pair.\n--- 137,145 ----\n (1 row)\n \n SELECT POSITION(5 IN '1234567890') = '5' AS \"5\";\n! ERROR: Function 'pg_catalog.position(unknown, int4)' does not exist\n! \tUnable to identify a function that satisfies the given argument types\n! \tYou may need to add explicit typecasts\n --\n -- test LIKE\n -- Be sure to form every test as a LIKE/NOT LIKE pair.\n\n*** ./expected/alter_table.out\tFri Apr 5 12:03:45 2002\n--- ./results/alter_table.out\tWed Apr 10 11:51:06 2002\n***************\n*** 363,374 ****\n CREATE TEMP TABLE FKTABLE (ftest1 varchar);\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n -- As should this\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n DROP TABLE pktable;\n- NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n- NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n PRIMARY KEY(ptest1, ptest2));\n--- 363,376 ----\n CREATE TEMP TABLE FKTABLE (ftest1 varchar);\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n NOTICE: ALTER TABLE will 
create implicit trigger(s) for FOREIGN KEY check(s)\n+ ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n+ \tYou will have to retype this query using an explicit cast\n -- As should this\n ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);\n NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n+ ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n+ \tYou will have to retype this query using an explicit cast\n DROP TABLE pktable;\n DROP TABLE fktable;\n CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n PRIMARY KEY(ptest1, ptest2));\n\n*** ./expected/rules.out\tThu Mar 21 10:24:35 2002\n--- ./results/rules.out\tWed Apr 10 11:51:11 2002\n***************\n*** 1026,1037 ****\n 'Al Bundy',\n 'epoch'::text\n );\n UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n SELECT * FROM shoelace_log;\n sl_name | sl_avail | log_who | log_when \n! ------------+----------+----------+--------------------------\n! sl7 | 6 | Al Bundy | Thu Jan 01 00:00:00 1970\n! (1 row)\n \n CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n DO INSTEAD\n--- 1026,1038 ----\n 'Al Bundy',\n 'epoch'::text\n );\n+ ERROR: column \"log_when\" is of type 'timestamp without time zone' but expression is of type 'text'\n+ \tYou will need to rewrite or cast the expression\n UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n SELECT * FROM shoelace_log;\n sl_name | sl_avail | log_who | log_when \n! ---------+----------+---------+----------\n! (0 rows)\n \n CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n DO INSTEAD\n\n*** ./expected/foreign_key.out\tWed Mar 6 01:10:56 2002\n--- ./results/foreign_key.out\tWed Apr 10 11:51:17 2002\n***************\n*** 733,747 ****\n -- because varchar=int does exist\n CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable);\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n DROP TABLE FKTABLE;\n! 
NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n -- As should this\n CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable(ptest1));\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n DROP TABLE FKTABLE;\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n DROP TABLE PKTABLE;\n -- Two columns, two tables\n CREATE TABLE PKTABLE (ptest1 int, ptest2 text, PRIMARY KEY(ptest1, ptest2));\n--- 733,749 ----\n -- because varchar=int does exist\n CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable);\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n+ ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n+ \tYou will have to retype this query using an explicit cast\n DROP TABLE FKTABLE;\n! ERROR: table \"fktable\" does not exist\n -- As should this\n CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable(ptest1));\n NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n+ ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n+ \tYou will have to retype this query using an explicit cast\n DROP TABLE FKTABLE;\n! 
ERROR: table \"fktable\" does not exist\n DROP TABLE PKTABLE;\n -- Two columns, two tables\n CREATE TABLE PKTABLE (ptest1 int, ptest2 text, PRIMARY KEY(ptest1, ptest2));\n\n*** ./expected/domain.out\tWed Mar 20 13:34:37 2002\n--- ./results/domain.out\tWed Apr 10 11:51:23 2002\n***************\n*** 111,116 ****\n--- 111,118 ----\n create domain ddef2 oid DEFAULT '12';\n -- Type mixing, function returns int8\n create domain ddef3 text DEFAULT 5;\n+ ERROR: Column \"ddef3\" is of type text but default expression is of type integer\n+ \tYou will need to rewrite or cast the expression\n create sequence ddef4_seq;\n create domain ddef4 int4 DEFAULT nextval(cast('ddef4_seq' as text));\n create domain ddef5 numeric(8,2) NOT NULL DEFAULT '12.12';\n", "msg_date": "Wed, 10 Apr 2002 13:08:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "Tom,\n\nMy feeling is that this change as currently scoped will break a lot of \nexisting apps. Especially the case where people are using where clauses \nof the form: bigintcolumn = '999' to get a query to use the index on \na column of type bigint.\n\nthanks,\n--Barry\n\n\nTom Lane wrote:\n> Awhile back I suggested adding a boolean column to pg_proc to control\n> which type coercion functions could be invoked implicitly, and which\n> would need an explicit cast:\n> http://archives.postgresql.org/pgsql-hackers/2001-11/msg00803.php\n> There is a relevant bug report #484 showing the dangers of too many\n> implicit coercion paths:\n> http://archives.postgresql.org/pgsql-bugs/2001-10/msg00108.php\n> \n> I have added such a column as part of the pg_proc changes I'm currently\n> doing to migrate aggregates into pg_proc. 
So it's now time to debate\n> the nitty-gritty: exactly which coercion functions should not be\n> implicitly invokable anymore?\n> \n> My first-cut attempt at this is shown by the two printouts below.\n> The first cut does not allow any implicit coercions to text from types\n> that are not in the text category, which seems a necessary rule to me\n> --- the above-cited bug report shows why free coercions to text are\n> dangerous. However, it turns out that several of the regression\n> tests fail with this rule; see the regression diffs below.\n> \n> Should I consider these regression tests wrong, and correct them?\n> If not, how can we limit implicit coercions to text enough to avoid\n> the problems illustrated by bug #484?\n> \n> Another interesting point is that I allowed implicit coercions from\n> float8 to numeric; this is necessary to avoid breaking cases like\n> \tinsert into foo(numeric_col) values(12.34);\n> since the constant will be initially typed as float8. However, because\n> I didn't allow the reverse coercion implicitly, this makes numeric\n> \"more preferred\" than float8. Thus, for example,\n> \tselect '12.34'::numeric + 12.34;\n> which draws a can't-resolve-operator error in 7.2, is resolved as\n> numeric addition with these changes. Is this a good thing, or not?\n> We could preserve the can't-resolve behavior by marking numeric->float8\n> as an allowed implicit coercion, but that seems ugly. I'm not sure we\n> can do a whole lot better without some more wide-ranging revisions of\n> the way we handle untyped numeric literals (as in past proposals to\n> invent an UNKNOWNNUMERIC pseudo-type).\n> \n> Also, does anyone have any other nits to pick with this classification\n> of which coercions are implicitly okay? 
I've started with a fairly\n> tough approach of disallowing most implicit coercions, but perhaps this\n> goes too far.\n> \n> \t\t\tregards, tom lane\n> \n> Coercions allowed implicitly:\n> \n> oid | result | input | prosrc \n> ------+-------------+-------------+-----------------------\n> 860 | bpchar | char | char_bpchar\n> 408 | bpchar | name | name_bpchar\n> 861 | char | bpchar | bpchar_char\n> 944 | char | text | text_char\n> 312 | float4 | float8 | dtof\n> 236 | float4 | int2 | i2tof\n> 318 | float4 | int4 | i4tof\n> 311 | float8 | float4 | ftod\n> 235 | float8 | int2 | i2tod\n> 316 | float8 | int4 | i4tod\n> 482 | float8 | int8 | i8tod\n> 314 | int2 | int4 | i4toi2\n> 714 | int2 | int8 | int82\n> 313 | int4 | int2 | i2toi4\n> 480 | int4 | int8 | int84\n> 754 | int8 | int2 | int28\n> 481 | int8 | int4 | int48\n> 1177 | interval | reltime | reltime_interval\n> 1370 | interval | time | time_interval\n> 409 | name | bpchar | bpchar_name\n> 407 | name | text | text_name\n> 1400 | name | varchar | text_name\n> 1742 | numeric | float4 | float4_numeric\n> 1743 | numeric | float8 | float8_numeric\n> 1782 | numeric | int2 | int2_numeric\n> 1740 | numeric | int4 | int4_numeric\n> 1781 | numeric | int8 | int8_numeric\n> 946 | text | char | char_text\n> 406 | text | name | name_text\n> 2046 | time | timetz | timetz_time\n> 2023 | timestamp | abstime | abstime_timestamp\n> 2024 | timestamp | date | date_timestamp\n> 2027 | timestamp | timestamptz | timestamptz_timestamp\n> 1173 | timestamptz | abstime | abstime_timestamptz\n> 1174 | timestamptz | date | date_timestamptz\n> 2028 | timestamptz | timestamp | timestamp_timestamptz\n> 2047 | timetz | time | time_timetz\n> 1401 | varchar | name | name_text\n> (38 rows)\n> \n> Coercions that will require explicit CAST, ::type, or typename(x) syntax\n> (NB: in 7.2 all of these would have been allowed implicitly):\n> \n> oid | result | input | prosrc\n> 
------+-------------+-------------+------------------------------------------\n> 2030 | abstime | timestamp | timestamp_abstime\n> 1180 | abstime | timestamptz | timestamptz_abstime\n> 1480 | box | circle | circle_box\n> 1446 | box | polygon | poly_box\n> 1714 | cidr | text | text_cidr\n> 1479 | circle | box | box_circle\n> 1474 | circle | polygon | poly_circle\n> 1179 | date | abstime | abstime_date\n> 748 | date | text | text_date\n> 2029 | date | timestamp | timestamp_date\n> 1178 | date | timestamptz | timestamptz_date\n> 1745 | float4 | numeric | numeric_float4\n> 839 | float4 | text | text_float4\n> 1746 | float8 | numeric | numeric_float8\n> 838 | float8 | text | text_float8\n> 1713 | inet | text | text_inet\n> 238 | int2 | float4 | ftoi2\n> 237 | int2 | float8 | dtoi2\n> 1783 | int2 | numeric | numeric_int2\n> 818 | int2 | text | text_int2\n> 319 | int4 | float4 | ftoi4\n> 317 | int4 | float8 | dtoi4\n> 1744 | int4 | numeric | numeric_int4\n> 819 | int4 | text | text_int4\n> 483 | int8 | float8 | dtoi8\n> 1779 | int8 | numeric | numeric_int8\n> 1289 | int8 | text | text_int8\n> 1263 | interval | text | text_interval\n> 1541 | lseg | box | box_diagonal\n> 767 | macaddr | text | text_macaddr\n> 817 | oid | text | text_oid\n> 1447 | path | polygon | poly_path\n> 1534 | point | box | box_center\n> 1416 | point | circle | circle_center\n> 1532 | point | lseg | lseg_center\n> 1533 | point | path | path_center\n> 1540 | point | polygon | poly_center\n> 1448 | polygon | box | box_poly\n> 1544 | polygon | circle | select polygon(12, $1)\n> 1449 | polygon | path | path_poly\n> 1200 | reltime | int4 | int4reltime\n> 1194 | reltime | interval | interval_reltime\n> 749 | text | date | date_text\n> 841 | text | float4 | float4_text\n> 840 | text | float8 | float8_text\n> 730 | text | inet | network_show\n> 113 | text | int2 | int2_text\n> 112 | text | int4 | int4_text\n> 1288 | text | int8 | int8_text\n> 1193 | text | interval | interval_text\n> 752 | text | macaddr | 
macaddr_text\n> 114 | text | oid | oid_text\n> 948 | text | time | time_text\n> 2034 | text | timestamp | timestamp_text\n> 1192 | text | timestamptz | timestamptz_text\n> 939 | text | timetz | timetz_text\n> 1364 | time | abstime | select time(cast($1 as timestamp without time zone))\n> 1419 | time | interval | interval_time\n> 837 | time | text | text_time\n> 1316 | time | timestamp | timestamp_time\n> 2022 | timestamp | text | text_timestamp\n> 1191 | timestamptz | text | text_timestamptz\n> 938 | timetz | text | text_timetz\n> 1388 | timetz | timestamptz | timestamptz_timetz\n> 1619 | varchar | int4 | int4_text\n> 1623 | varchar | int8 | int8_text\n> (66 rows)\n> \n> \n> Regression failures with this set of choices (I've edited the output to\n> remove diffs that are merely consequences of the actual failures):\n> \n> *** ./expected/char.out\tMon May 21 12:54:46 2001\n> --- ./results/char.out\tWed Apr 10 11:48:16 2002\n> ***************\n> *** 18,23 ****\n> --- 18,25 ----\n> -- any of the following three input formats are acceptable \n> INSERT INTO CHAR_TBL (f1) VALUES ('1');\n> INSERT INTO CHAR_TBL (f1) VALUES (2);\n> + ERROR: column \"f1\" is of type 'character' but expression is of type 'integer'\n> + \tYou will need to rewrite or cast the expression\n> INSERT INTO CHAR_TBL (f1) VALUES ('3');\n> -- zero-length char \n> INSERT INTO CHAR_TBL (f1) VALUES ('');\n> \n> *** ./expected/varchar.out\tMon May 21 12:54:46 2001\n> --- ./results/varchar.out\tWed Apr 10 11:48:17 2002\n> ***************\n> *** 7,12 ****\n> --- 7,14 ----\n> -- any of the following three input formats are acceptable \n> INSERT INTO VARCHAR_TBL (f1) VALUES ('1');\n> INSERT INTO VARCHAR_TBL (f1) VALUES (2);\n> + ERROR: column \"f1\" is of type 'character varying' but expression is of type 'integer'\n> + \tYou will need to rewrite or cast the expression\n> INSERT INTO VARCHAR_TBL (f1) VALUES ('3');\n> -- zero-length char \n> INSERT INTO VARCHAR_TBL (f1) VALUES ('');\n> \n> *** 
./expected/strings.out\tFri Jun 1 13:49:17 2001\n> --- ./results/strings.out\tWed Apr 10 11:49:29 2002\n> ***************\n> *** 137,147 ****\n> (1 row)\n> \n> SELECT POSITION(5 IN '1234567890') = '5' AS \"5\";\n> ! 5 \n> ! ---\n> ! t\n> ! (1 row)\n> ! \n> --\n> -- test LIKE\n> -- Be sure to form every test as a LIKE/NOT LIKE pair.\n> --- 137,145 ----\n> (1 row)\n> \n> SELECT POSITION(5 IN '1234567890') = '5' AS \"5\";\n> ! ERROR: Function 'pg_catalog.position(unknown, int4)' does not exist\n> ! \tUnable to identify a function that satisfies the given argument types\n> ! \tYou may need to add explicit typecasts\n> --\n> -- test LIKE\n> -- Be sure to form every test as a LIKE/NOT LIKE pair.\n> \n> *** ./expected/alter_table.out\tFri Apr 5 12:03:45 2002\n> --- ./results/alter_table.out\tWed Apr 10 11:51:06 2002\n> ***************\n> *** 363,374 ****\n> CREATE TEMP TABLE FKTABLE (ftest1 varchar);\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> -- As should this\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> DROP TABLE pktable;\n> - NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n> - NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"fktable\"\n> DROP TABLE fktable;\n> CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n> PRIMARY KEY(ptest1, ptest2));\n> --- 363,376 ----\n> CREATE TEMP TABLE FKTABLE (ftest1 varchar);\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> + ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n> + \tYou will have to retype this query using an explicit cast\n> -- As should this\n> ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) 
references pktable(ptest1);\n> NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> + ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n> + \tYou will have to retype this query using an explicit cast\n> DROP TABLE pktable;\n> DROP TABLE fktable;\n> CREATE TEMP TABLE PKTABLE (ptest1 int, ptest2 text,\n> PRIMARY KEY(ptest1, ptest2));\n> \n> *** ./expected/rules.out\tThu Mar 21 10:24:35 2002\n> --- ./results/rules.out\tWed Apr 10 11:51:11 2002\n> ***************\n> *** 1026,1037 ****\n> 'Al Bundy',\n> 'epoch'::text\n> );\n> UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n> SELECT * FROM shoelace_log;\n> sl_name | sl_avail | log_who | log_when \n> ! ------------+----------+----------+--------------------------\n> ! sl7 | 6 | Al Bundy | Thu Jan 01 00:00:00 1970\n> ! (1 row)\n> \n> CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n> DO INSTEAD\n> --- 1026,1038 ----\n> 'Al Bundy',\n> 'epoch'::text\n> );\n> + ERROR: column \"log_when\" is of type 'timestamp without time zone' but expression is of type 'text'\n> + \tYou will need to rewrite or cast the expression\n> UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7';\n> SELECT * FROM shoelace_log;\n> sl_name | sl_avail | log_who | log_when \n> ! ---------+----------+---------+----------\n> ! (0 rows)\n> \n> CREATE RULE shoelace_ins AS ON INSERT TO shoelace\n> DO INSTEAD\n> \n> *** ./expected/foreign_key.out\tWed Mar 6 01:10:56 2002\n> --- ./results/foreign_key.out\tWed Apr 10 11:51:17 2002\n> ***************\n> *** 733,747 ****\n> -- because varchar=int does exist\n> CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> DROP TABLE FKTABLE;\n> ! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n> ! 
NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n> -- As should this\n> CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable(ptest1));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> DROP TABLE FKTABLE;\n> ! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n> ! NOTICE: DROP TABLE implicitly drops referential integrity trigger from table \"pktable\"\n> DROP TABLE PKTABLE;\n> -- Two columns, two tables\n> CREATE TABLE PKTABLE (ptest1 int, ptest2 text, PRIMARY KEY(ptest1, ptest2));\n> --- 733,749 ----\n> -- because varchar=int does exist\n> CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> + ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n> + \tYou will have to retype this query using an explicit cast\n> DROP TABLE FKTABLE;\n> ! ERROR: table \"fktable\" does not exist\n> -- As should this\n> CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable(ptest1));\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)\n> + ERROR: Unable to identify an operator '=' for types 'character varying' and 'integer'\n> + \tYou will have to retype this query using an explicit cast\n> DROP TABLE FKTABLE;\n> ! 
ERROR: table \"fktable\" does not exist\n> DROP TABLE PKTABLE;\n> -- Two columns, two tables\n> CREATE TABLE PKTABLE (ptest1 int, ptest2 text, PRIMARY KEY(ptest1, ptest2));\n> \n> *** ./expected/domain.out\tWed Mar 20 13:34:37 2002\n> --- ./results/domain.out\tWed Apr 10 11:51:23 2002\n> ***************\n> *** 111,116 ****\n> --- 111,118 ----\n> create domain ddef2 oid DEFAULT '12';\n> -- Type mixing, function returns int8\n> create domain ddef3 text DEFAULT 5;\n> + ERROR: Column \"ddef3\" is of type text but default expression is of type integer\n> + \tYou will need to rewrite or cast the expression\n> create sequence ddef4_seq;\n> create domain ddef4 int4 DEFAULT nextval(cast('ddef4_seq' as text));\n> create domain ddef5 numeric(8,2) NOT NULL DEFAULT '12.12';\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Wed, 10 Apr 2002 22:16:34 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> My feeling is that this change as currently scoped will break a lot of \n> existing apps. Especially the case where people are using where clauses \n> of the form: bigintcolumn = '999' to get a query to use the index on \n> a column of type bigint.\n\nEh? That case will not change behavior in the slightest, because\nthere's no type conversion --- the literal is interpreted as the target\ntype to start with.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 01:20:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "OK. My mistake. In looking at the regression failures in your post, I \nthought I saw errors being reported of this type. 
My bad.\n\n--Barry\n\nTom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> \n>>My feeling is that this change as currently scoped will break a lot of \n>>existing apps. Especially the case where people are using where clauses \n>>of the form: bigintcolumn = '999' to get a query to use the index on \n>>a column of type bigint.\n> \n> \n> Eh? That case will not change behavior in the slightest, because\n> there's no type conversion --- the literal is interpreted as the target\n> type to start with.\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Wed, 10 Apr 2002 22:39:51 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> OK. My mistake. In looking at the regression failures in your post, I \n> thought I saw errors being reported of this type. My bad.\n\nWell, although that particular case isn't a problem, I am sure that this\nchange will break some existing applications --- the question is how\nmany, and do we feel that they're all poorly coded?\n\nI suspect that the main thing that will cause issues is removal of\nimplicit coercions to text. For example, in 7.2 and before you can do\n\ntest72=# select 'At the tone, the time will be ' || now();\n ?column?\n-------------------------------------------------------------\n At the tone, the time will be 2002-04-11 11:49:27.309181-04\n(1 row)\n\nsince there is an implicit timestamp->text coercion; or in a less\nplausible example,\n\ntest72=# select 123 || (33.0/7.0);\n ?column?\n---------------------\n 1234.71428571428571\n(1 row)\n\nWith my proposed changes, both of these examples will require explicit\ncasts. 
The latter case might not bother people but I'm sure that\nsomeone out there is using code much like the former case.\n\nSince I didn't see an immediate batch of squawks, I think I will go\nahead and commit what I have; we can always revisit the implicit-allowed\nflag settings later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 11:57:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "...\n> Since I didn't see an immediate batch of squawks, I think I will go\n> ahead and commit what I have; we can always revisit the implicit-allowed\n> flag settings later.\n\nSquawk. But I haven't had time to look at the full ramifications of your\nproposed change, so it is inappropriate to comment, right?\n\nWe have never been in complete agreement on the optimal behavior for\ntype coersion, but it seems that most users are blissfully ignorant of\nthe potential downsides of the current behavior. Another way to phrase\nthat would be to say that it actually does the right thing in the vast\nmajority of cases out in the field.\n\nWe'll probably both agree that it would be nice to avoid *hard coded*\nrules of any kind for this, but do you share my concern that moving this\nto a database table-driven set of rules will affect performance too\nmuch?\n\n - Thomas\n", "msg_date": "Thu, 11 Apr 2002 09:12:33 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> We have never been in complete agreement on the optimal behavior for\n> type coersion, but it seems that most users are blissfully ignorant of\n> the potential downsides of the current behavior. 
Another way to phrase\n> that would be to say that it actually does the right thing in the vast\n> majority of cases out in the field.\n\nCould be; we probably see more complaints about the lack of any coercion\npath for particular cases than about inappropriate implicit coercions.\nBut we do see a fair number of the latter. (And in the cases where I've\nresisted adding more coercions, it was precisely because I thought it'd\nbe dangerous to allow them implicitly --- that concern goes away once\nwe can mark a coercion function as not implicitly invokable.)\n\n> We'll probably both agree that it would be nice to avoid *hard coded*\n> rules of any kind for this, but do you share my concern that moving this\n> to a database table-driven set of rules will affect performance too\n> much?\n\nAFAICT the performance cost is negligible: find_coercion_function has to\nlook at the pg_proc row anyway. The relevant change looks like\n\n \t\t\t\t\t\t PointerGetDatum(oid_array),\n \t\t\t\t\t\t ObjectIdGetDatum(typnamespace));\n! \tif (!HeapTupleIsValid(ftup))\n! \t{\n! \t\tReleaseSysCache(targetType);\n! \t\treturn InvalidOid;\n! \t}\n! \t/* Make sure the function's result type is as expected, too */\n! \tpform = (Form_pg_proc) GETSTRUCT(ftup);\n! \tif (pform->prorettype != targetTypeId)\n \t{\n \t\tReleaseSysCache(ftup);\n- \t\tReleaseSysCache(targetType);\n- \t\treturn InvalidOid;\n \t}\n! \tfuncid = ftup->t_data->t_oid;\n! \tReleaseSysCache(ftup);\n \tReleaseSysCache(targetType);\n \treturn funcid;\n }\n--- 711,734 ----\n \t\t\t\t\t\t Int16GetDatum(nargs),\n \t\t\t\t\t\t PointerGetDatum(oid_array),\n \t\t\t\t\t\t ObjectIdGetDatum(typnamespace));\n! 
\tif (HeapTupleIsValid(ftup))\n \t{\n+ \t\tForm_pg_proc pform = (Form_pg_proc) GETSTRUCT(ftup);\n+ \n+ \t\t/* Make sure the function's result type is as expected */\n+ \t\tif (pform->prorettype == targetTypeId && !pform->proretset &&\n+ \t\t\t!pform->proisagg)\n+ \t\t{\n+ \t\t\t/* If needed, make sure it can be invoked implicitly */\n+ \t\t\tif (isExplicit || pform->proimplicit)\n+ \t\t\t{\n+ \t\t\t\t/* Okay to use it */\n+ \t\t\t\tfuncid = ftup->t_data->t_oid;\n+ \t\t\t}\n+ \t\t}\n \t\tReleaseSysCache(ftup);\n \t}\n! \n \tReleaseSysCache(targetType);\n \treturn funcid;\n }\n\n\nI do not see any reason not to install the mechanism; we can fine-tune\nthe actual pg_class.proimplicit settings as we get experience with them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 12:30:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " }, { "msg_contents": "Since it seems we still want to debate this a little, I've modified the\ninitial set of implicit-coercion-allowed flags to allow silent coercions\nfrom the standard datatypes to text. This un-breaks most of the\nregression tests that were failing before. I still want to debate the\nwisdom of allowing this, but there's no point in changing the regress\ntests until we're agreed.\n\nAn interesting breakage that remained was that the foreign_key tests\nwere assuming a \"text = integer\" comparison would fail, while a\n\"varchar = integer\" comparison would succeed ... which is not only\npretty bogus in itself, but becomes even more so when you notice that\nthere isn't a varchar = integer operator. 
Apparently, because we had\nimplicit coercions in *both* directions between text and integer,\nthe system couldn't figure out how to resolve text = integer; but\nsince there was an int->varchar and no varchar->int coercion, it\nwould resolve varchar = integer as varchar = integer::varchar.\n\nWith the attached settings, both cases are accepted as doing text =\nint::text. I'm not convinced that this is a step forward; I'd prefer\nto see explicit coercion needed to cross type categories. But that's\nthe matter for debate.\n\nThe lines marked XXX are the ones that I enabled since yesterday, and\nwould like to disable again:\n\n implicit | result | input | prosrc\n----------+-------------+-------------+--------------------------------------\n no | abstime | timestamp | timestamp_abstime\n no | abstime | timestamptz | timestamptz_abstime\n no | box | circle | circle_box\n no | box | polygon | poly_box\n yes | bpchar | char | char_bpchar\n yes | bpchar | name | name_bpchar\n yes | char | text | text_char\n no | cidr | text | text_cidr\n no | circle | box | box_circle\n no | circle | polygon | poly_circle\n no | date | abstime | abstime_date\n no | date | text | text_date\n no | date | timestamp | timestamp_date\n no | date | timestamptz | timestamptz_date\n yes | float4 | float8 | dtof\n yes | float4 | int2 | i2tof\n yes | float4 | int4 | i4tof\n no | float4 | numeric | numeric_float4\n no | float4 | text | text_float4\n yes | float8 | float4 | ftod\n yes | float8 | int2 | i2tod\n yes | float8 | int4 | i4tod\n yes | float8 | int8 | i8tod\n no | float8 | numeric | numeric_float8\n no | float8 | text | text_float8\n no | inet | text | text_inet\n no | int2 | float4 | ftoi2\n no | int2 | float8 | dtoi2\n yes | int2 | int4 | i4toi2\n yes | int2 | int8 | int82\n no | int2 | numeric | numeric_int2\n no | int2 | text | text_int2\n no | int4 | float4 | ftoi4\n no | int4 | float8 | dtoi4\n yes | int4 | int2 | i2toi4\n yes | int4 | int8 | int84\n no | int4 | numeric | 
numeric_int4\n no | int4 | text | text_int4\n no | int8 | float8 | dtoi8\n yes | int8 | int2 | int28\n yes | int8 | int4 | int48\n no | int8 | numeric | numeric_int8\n no | int8 | text | text_int8\n yes | interval | reltime | reltime_interval\n no | interval | text | text_interval\n yes | interval | time | time_interval\n no | lseg | box | box_diagonal\n no | macaddr | text | text_macaddr\n yes | name | bpchar | bpchar_name\n yes | name | text | text_name\n yes | name | varchar | text_name\n yes | numeric | float4 | float4_numeric\n yes | numeric | float8 | float8_numeric\n yes | numeric | int2 | int2_numeric\n yes | numeric | int4 | int4_numeric\n yes | numeric | int8 | int8_numeric\n no | oid | text | text_oid\n no | path | polygon | poly_path\n no | point | box | box_center\n no | point | circle | circle_center\n no | point | lseg | lseg_center\n no | point | path | path_center\n no | point | polygon | poly_center\n no | polygon | box | box_poly\n no | polygon | circle | select polygon(12, $1)\n no | polygon | path | path_poly\n no | reltime | int4 | int4reltime\n no | reltime | interval | interval_reltime\n yes | text | char | char_text\n XXX | text | date | date_text\n XXX | text | float4 | float4_text\n XXX | text | float8 | float8_text\n no | text | inet | network_show\n XXX | text | int2 | int2_text\n XXX | text | int4 | int4_text\n XXX | text | int8 | int8_text\n XXX | text | interval | interval_text\n no | text | macaddr | macaddr_text\n yes | text | name | name_text\n no | text | oid | oid_text\n XXX | text | time | time_text\n XXX | text | timestamp | timestamp_text\n XXX | text | timestamptz | timestamptz_text\n XXX | text | timetz | timetz_text\n no | time | abstime | select time(cast($1 as timestamp without time zone))\n no | time | interval | interval_time\n no | time | text | text_time\n no | time | timestamp | timestamp_time\n yes | time | timetz | timetz_time\n yes | timestamp | abstime | abstime_timestamp\n yes | timestamp | date | 
date_timestamp\n no | timestamp | text | text_timestamp\n yes | timestamp | timestamptz | timestamptz_timestamp\n yes | timestamptz | abstime | abstime_timestamptz\n yes | timestamptz | date | date_timestamptz\n no | timestamptz | text | text_timestamptz\n yes | timestamptz | timestamp | timestamp_timestamptz\n no | timetz | text | text_timetz\n yes | timetz | time | time_timetz\n no | timetz | timestamptz | timestamptz_timetz\n no | varchar | int4 | int4_text\n no | varchar | int8 | int8_text\n yes | varchar | name | name_text\n(103 rows)\n\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Apr 2002 16:23:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in " } ]
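The practical effect of the flag table above, using Tom's own examples from earlier in the thread (a sketch only — the error wording is illustrative and the final flag settings were still under debate):

```sql
-- accepted in 7.2 via the implicit timestamptz -> text coercion:
SELECT 'At the tone, the time will be ' || now();
-- if that coercion is marked non-implicit, the cast must be written out:
SELECT 'At the tone, the time will be ' || now()::text;

-- from the regression diffs: POSITION(5 IN '1234567890') no longer
-- resolves; quoting the argument lets it be read as a string directly:
SELECT POSITION('5' IN '1234567890');

-- unaffected either way: a quoted literal compared against a typed column
-- is resolved as that column's type from the start, so no coercion occurs:
--   ... WHERE bigintcolumn = '999'
```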
[ { "msg_contents": "\"Mariusz Czułada\" <manieq@idea.net.pl> writes:\n> Is it possible (and safe) to move database files to other location while\n> server is working?\n\nNo. Shut down the postmaster first.\n\n> But had to shutdown server first. So my question (and\n> suggestion) is to consider:\n> ALTER DATABASE <dbname> { OFFLINE [ { WAIT | IMMEDIATE }] | ONLINE };\n\nOf course, you have this ability now on an installation-wide basis\nwith the available postmaster shutdown options. It's difficult to get\nexcited about expending the work to make this doable on a per-database\nbasis, mainly because I foresee multi-database installations getting\nmuch less popular once we implement SQL schemas. Lots of schemas in\none (user) database per installation will become the norm, I think.\nIn that scenario a per-database shutdown option will be useless.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 19:05:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Taking databases offline " }, { "msg_contents": "> > But had to shutdown server first. So my question (and\n> > suggestion) is to consider:\n> > ALTER DATABASE <dbname> { OFFLINE [ { WAIT | IMMEDIATE }] | ONLINE };\n> \n> Of course, you have this ability now on an installation-wide basis\n> with the available postmaster shutdown options. It's difficult to get\n> excited about expending the work to make this doable on a per-database\n> basis, mainly because I foresee multi-database installations getting\n> much less popular once we implement SQL schemas. Lots of schemas in\n> one (user) database per installation will become the norm, I think.\n> In that scenario a per-database shutdown option will be useless.\n\nYou can shut database access with pg_hba.conf. I would edit\npg_hba.conf, shutdown postmaster to flush all pages, then start up with\ndatabase inactive. 
You can then re-enable access to the database later\nwith pg_hba.conf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Nov 2001 19:26:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Taking databases offline" }, { "msg_contents": "Hi!\n\nIs it possible (and safe) to move database files to other location while\nserver is working? I had to move one to another volume/path and 'sym-linked'\nit to old dir. But had to shutdown server first. So my question (and\nsuggestion) is to consider:\n\nALTER DATABASE <dbname> { OFFLINE [ { WAIT | IMMEDIATE }] | ONLINE };\n\nTaking db offline with IMMEDIATE disconnects all sessions, with WAIT\noption... waits for all sessions to disconnect, with no options - fails if\nanyone is connected to database; then file system actions are allowed (like\nin my case). After ALTER... ONLINE normal use of database is restored.\n\nWaiting for comments,\n\nMariusz Czulada\n\n\n", "msg_date": "Wed, 21 Nov 2001 23:17:20 -0800", "msg_from": "\"Mariusz Czułada\" <manieq@idea.net.pl>", "msg_from_op": true, "msg_subject": "Taking databases offline" } ]
[ { "msg_contents": "Hi,\n\nRecently I was crawling through the INSTALL\nfile and didn't find the --enable-nls option anywhere.\nShouldn't it be mentioned there? Somewhere like before\nor after --enable-debug? Was it documented at all?\nAnd, perhaps, this file might have some other options\nmissing - not a good thing!\n\nMay I propose smth like this for the forthcoming\nbeta if it is right place to put this in\n(one might want to tweak the English, of course):\n\n--------------------------8<-----------------------\n --enable-nls\n \n Enables Native Language Support (NLS) for error messages emitted\n by the psql, libpq, pg_dump, and postgres components. Currently\n supported languages are Czech (some of psql), French (some of\n psql), German (psql, pg_dump, libpq, some of postgres), Russian\n (psql, some of pg_dump, libpq, some of postgres), Simplified and\n Traditional Chinese (all?), and Swedish (some of psql). If you \n don't see your language here or no translations available for\n the desired component, and you're willing to contribute, see\n http://webmail.postgresql.org/~petere/nls.php for details.\n--------------------------8<-----------------------\n\n\n--\nSerguei A. Mokhov\n \n\n", "msg_date": "Wed, 21 Nov 2001 20:23:24 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Mention NLS option in INSTALL " }, { "msg_contents": "> Recently I was crawling through the INSTALL\n> file and didn't find the --enable-nls option anywhere.\n\nIt is mentioned in the sgml sources. 
Looks like INSTALL hasn't\nbeen regenerated lately.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Nov 2001 20:46:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mention NLS option in INSTALL " }, { "msg_contents": "----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Wednesday, November 21, 2001 8:46 PM\n\n> > Recently I was crawling through the INSTALL\n> > file and didn't find the --enable-nls option anywhere.\n> \n> It is mentioned in the sgml sources. Looks like INSTALL hasn't\n> been regenerated lately.\n\nAh, OK. Sorry.\nThat's the documentation issue you're currenly fighting\nwith, guys. I see. I just was a bit surprised.\n\n-s\n\n", "msg_date": "Wed, 21 Nov 2001 21:21:42 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Re: Mention NLS option in INSTALL " } ]
[ { "msg_contents": "Hi!\n\nIs it possible (and safe) to move database files to other location while\nserver is working? I had to move one to another volume/path and 'sym-linked'\nit to old dir. But had to shutdown server first. So my question (and\nsuggestion) is to consider:\n\nALTER DATABASE <dbname> { OFFLINE [ { WAIT | IMMEDIATE }] | ONLINE };\n\nTaking db offline with IMMEDIATE disconnects all sessions, with WAIT\noption... waits for all sessions to disonnect, with no options - fails if\nanyone is connected to database; then file system actions are allowed (like\nin my case). After ALTER... ONLINE normal use of database is restored.\n\nWaitng for comments,\n\nMariusz Czulada\n\n\n", "msg_date": "Wed, 21 Nov 2001 23:22:57 -0800", "msg_from": "=?iso-8859-2?Q?Mariusz_Czu=B3ada?= <manieq@wp.pl>", "msg_from_op": true, "msg_subject": "Taking databases offline" } ]
[ { "msg_contents": "Is it possible to specify WHERE (on a filesystem) a certain database\nshould be located?\n\nThis would be nice to have when doing mirroring (with rServ for example).\nThat way i can replicate/mirror a database on a different disk...\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nUzi Nazi Rule Psix $400 million in gold bullion iodine pits AK-47 Cuba\nQaddafi subway tritium toluene NORAD Delta Force KGB\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "22 Nov 2001 10:31:04 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Database mirroring" }, { "msg_contents": "On Thu, Nov 22, 2001 at 10:31:04AM +0100, Turbo Fredriksson wrote:\n> Is it possible to specify WHERE (on a filesystem) a certain database\n> should be located?\n> \n> This would be nice to have when doing mirroring (with rServ for example).\n> That way i can replicate/mirror a database on a different disk...\n\n See docs about CREATE DATABASE statement and things pertinent to\n this process.\n\nCREATE DATABASE name\n [ WITH [ LOCATION = 'dbpath' ]\n ^^^^^^^^^^^^^^^^^^^^^\n [ TEMPLATE = template ]\n [ ENCODING = encoding ] ]\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 22 Nov 2001 11:13:41 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Database mirroring" }, { "msg_contents": "Quoting Karel Zak <zakkr@zf.jcu.cz>:\n\n> On Thu, Nov 22, 2001 at 10:31:04AM +0100, Turbo Fredriksson wrote:\n> > Is it possible to specify WHERE (on a filesystem) a certain database\n> 
> should be located?\n> > \n> > This would be nice to have when doing mirroring (with rServ for example).\n> > That way i can replicate/mirror a database on a different disk...\n> \n> See docs about CREATE DATABASE statement and things pertinent to\n> this process.\n> \n> CREATE DATABASE name\n> [ WITH [ LOCATION = 'dbpath' ]\n> ^^^^^^^^^^^^^^^^^^^^^\n> [ TEMPLATE = template ]\n> [ ENCODING = encoding ] ]\n\nCool, thanx!\n\n\nNow my idea to replicate with rServ got a whole new meaning.\n\nCould anyone make it easy for me, and give me some examples on how to use rServ\nto syncronize/replicate to a remote database?\n\nI successfully tried rServ to replicate to a db on the same host (using the\nexamples).\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nSouth Africa counter-intelligence cryptographic Khaddafi class\nstruggle FSF Panama Uzi 767 tritium Saddam Hussein critical president\nAK-47 munitions\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "22 Nov 2001 12:04:32 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "Re: Database mirroring" }, { "msg_contents": "did yo take a look a the replication projects ?:\nhttp://www.erserver.com/\nhttp://pgreplicator.sourceforge.net/\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\nOn 22 Nov 2001 12:04:32 +0100\nTurbo Fredriksson <turbo@bayour.com> was typing:\n\n> Could anyone make it easy for me, and give me some examples on how to use rServ\n> to syncronize/replicate to a remote database?\n> \n> I successfully tried rServ to replicate to a db on the same host (using the\n> examples).\n", "msg_date": "Wed, 28 Nov 2001 19:08:08 +0100", "msg_from": 
"Jaume Teixi <teixi@6tems.com>", "msg_from_op": false, "msg_subject": "Re: Database mirroring" } ]
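Spelling out Karel's CREATE DATABASE ... WITH LOCATION answer for the 7.x series (a sketch: the directory and the PGDATA2 name are examples, and by default the location is given as an environment variable known to the postmaster, with the directory prepared beforehand using initlocation):

```sql
-- beforehand, as the postgres user:
--   initlocation /mnt/disk2/pgdata
--   export PGDATA2=/mnt/disk2/pgdata     -- before starting the postmaster
CREATE DATABASE mirrordb WITH LOCATION = 'PGDATA2';
```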
[ { "msg_contents": "Hi,\nWe ( me and my teammate ) try to write a graphical client in Java.\nWe made our first stable version ( pgInhaler.ifrance.com for ones who want\nto try it ...) and we need some JDBC features for next version :\n - catch EXPLAIN plan\n - cancel QUERY\nIs it possible ?\n\nSuggestions and fellings about this project will be welcome.\n\nNicolas.\n\n", "msg_date": "Thu, 22 Nov 2001 10:47:15 +0100", "msg_from": "\"Nicolas Verger\" <nicolas@verger.net>", "msg_from_op": true, "msg_subject": "JDBC improvements" }, { "msg_contents": "Nicolas,\n\nAFAIK you should be able to get the EXPLAIN plan from the jdbc driver. \nAll INFO messages from the backend are treated as warnings by the jdbc \ndriver. If you do a getWarnings() you should be able to get the explain \nplan information.\n\nCancel query is on the jdbc todo list. However I don't know of anyone \nthat plans to implement it. So a patch that adds that functionality \nwould be welcome.\n\nthanks,\n--Barry\n\n\nNicolas Verger wrote:\n\n> Hi,\n> We ( me and my teammate ) try to write a graphical client in Java.\n> We made our first stable version ( pgInhaler.ifrance.com for ones who want\n> to try it ...) 
and we need some JDBC features for next version :\n> - catch EXPLAIN plan\n> - cancel QUERY\n> Is it possible ?\n> \n> Suggestions and feelings about this project will be welcome.\n> \n> Nicolas.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n", "msg_date": "Mon, 26 Nov 2001 09:46:55 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC improvements" }, { "msg_contents": "> -----Message d'origine-----\n> De : Barry Lind [mailto:barry@xythos.com]\n> Nicolas,\n>\n> AFAIK you should be able to get the EXPLAIN plan from the jdbc driver.\n> All INFO messages from the backend are treated as warnings by the jdbc\n> driver. If you do a getWarnings() you should be able to get the explain\n> plan information.\n>\n> Cancel query is on the jdbc todo list. However I don't know of anyone\n> that plans to implement it. So a patch that adds that functionality\n> would be welcome.\n>\n> thanks,\n> --Barry\n>\n>\n\nOk, the getWarnings() method works on the Connection but not on the Statement\nnor ResultSet... Why ?\nI look in the source and it's just a little patch to do it.\nI'm not used in GPL licence so if I make this patch what may I do with it ?\n\nI watched for cancel query too, and I may work on it too ...\n\nNicolas\n\n\n", "msg_date": "Fri, 30 Nov 2001 18:02:30 +0100", "msg_from": "\"Nicolas Verger\" <nicolas@verger.net>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] JDBC improvements" }, { "msg_contents": "Nicolas,\n\nIf you have a patch you want to submit, send it to the pgsql-patches \nmail list and cc the pgsql-jdbc mail list. 
Given that we are very near \nthe release of 7.2, it likely won't make it into 7.2 unfortunately.\n\nthanks,\n--Barry\n\n\nNicolas Verger wrote:\n\n>>-----Message d'origine-----\n>>De : Barry Lind [mailto:barry@xythos.com]\n>>Nicolas,\n>>\n>>AFAIK you should be able to get the EXPLAIN plan from the jdbc driver.\n>>All INFO messages from the backend are treated as warnings by the jdbc\n>>driver. If you do a getWarnings() you should be able to get the explain\n>>plan information.\n>>\n>>Cancel query is on the jdbc todo list. However I don't know of anyone\n>>that plans to implement it. So a patch that adds that functionality\n>>would be welcome.\n>>\n>>thanks,\n>>--Barry\n>>\n>>\n>>\n> \n> Ok, the getWarnings() method works on the Connection but not on the Statment\n> nor ResultSet... Why ?\n> I look in the source and it's just a little patch to do it.\n> I'm not used in GPL licence so if I make this patch what may I do with it ?\n> \n> I watched for cancel query too, and I may work on it too ...\n> \n> Nicolas\n> \n> \n> \n\n\n", "msg_date": "Fri, 30 Nov 2001 11:03:44 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC improvements" }, { "msg_contents": "Nicolas,\n\nJust send your patch to the list and we will include it in version 7.3\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Nicolas Verger\nSent: Friday, November 30, 2001 12:02 PM\nTo: Barry Lind\nCc: Psql-Hackers; pgsql-jdbc@postgresql.org\nSubject: Re: [JDBC] [HACKERS] JDBC improvements\n\n\n> -----Message d'origine-----\n> De : Barry Lind [mailto:barry@xythos.com]\n> Nicolas,\n>\n> AFAIK you should be able to get the EXPLAIN plan from the jdbc driver.\n\n> All INFO messages from the backend are treated as warnings by the jdbc\n\n> driver. 
If you do a getWarnings() you should be able to get the \n> explain plan information.\n>\n> Cancel query is on the jdbc todo list. However I don't know of anyone\n\n> that plans to implement it. So a patch that adds that functionality \n> would be welcome.\n>\n> thanks,\n> --Barry\n>\n>\n\nOk, the getWarnings() method works on the Connection but not on the\nStatment nor ResultSet... Why ? I look in the source and it's just a\nlittle patch to do it. I'm not used in GPL licence so if I make this\npatch what may I do with it ?\n\nI watched for cancel query too, and I may work on it too ...\n\nNicolas\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "Mon, 3 Dec 2001 13:45:10 -0500", "msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC improvements" }, { "msg_contents": "> > Cancel query is on the jdbc todo list. However I don't know of anyone\n> > that plans to implement it. So a patch that adds that functionality\n> > would be welcome.\n> >\n> > thanks,\n> > --Barry\n> >\n> >\n> \n> Ok, the getWarnings() method works on the Connection but not on the Statment\n> nor ResultSet... Why ?\n> I look in the source and it's just a little patch to do it.\n> I'm not used in GPL licence so if I make this patch what may I do with it ?\n> \n> I watched for cancel query too, and I may work on it too ...\n\nWe are BSD license. Is that what you meant? Sure, send it over to jdbc\nlist or patches list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Dec 2001 13:45:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC improvements" }, { "msg_contents": "> > Ok, the getWarnings() method works on the Connection but not on\n> the Statment\n> > nor ResultSet... Why ?\n> > I look in the source and it's just a little patch to do it.\n> > I'm not used in GPL licence so if I make this patch what may I\n> do with it ?\n> >\n> > I watched for cancel query too, and I may work on it too ...\n>\n> We are BSD license. Is that what you meant? Sure, send it over to jdbc\n> list or patches list.\n\nOk, so I send the first patch. It correct the propagation of the SQLWarnings\nto the Statement and the ResultSet\n\nChange are :\nAdd method addWarnings(SQLWarning) into org.postgresql.ResultSet\nAdd method addWarning(String) into org.postgresql.Statement\nModify method execute() into org.postgresql.core.QueryExecutor\n\t- Clear the warning of the current statement before process the query\n\t- Set the new warnings to the statement too\n\t- Add the statement warning to the ResultSet when the query is processed", "msg_date": "Fri, 7 Dec 2001 10:45:05 +0100", "msg_from": "\"Nicolas Verger\" <nicolas@verger.net>", "msg_from_op": true, "msg_subject": "Patch : Re: JDBC improvements" }, { "msg_contents": "\nI will keep this and apply for 7.3. Thanks.\n\n\n---------------------------------------------------------------------------\n\n> > > Ok, the getWarnings() method works on the Connection but not on\n> > the Statment\n> > > nor ResultSet... Why ?\n> > > I look in the source and it's just a little patch to do it.\n> > > I'm not used in GPL licence so if I make this patch what may I\n> > do with it ?\n> > >\n> > > I watched for cancel query too, and I may work on it too ...\n> >\n> > We are BSD license. Is that what you meant? 
Sure, send it over to jdbc\n> > list or patches list.\n> \n> Ok, so I send the first patch. It correct the propagation of the SQLWarnings\n> to the Statement and the ResultSet\n> \n> Change are :\n> Add method addWarnings(SQLWarning) into org.postgresql.ResultSet\n> Add method addWarning(String) into org.postgresql.Statement\n> Modify method execute() into org.postgresql.core.QueryExecutor\n> \t- Clear the warning of the current statement before process the query\n> \t- Set the new warnings to the statement too\n> \t- Add the statement warning to the ResultSet when the query is processed\n> \n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Dec 2001 21:12:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch : Re: JDBC improvements" }, { "msg_contents": "hi all,\n\nwhat is the java.sql matching type for an psql \"interval\" type ? 
And what\nflavor of getXXX method of Resultset is used to retrieve a data of such a\ntype ?\n\nThanx for your help\n\n--hermann\n\n", "msg_date": "Fri, 14 Dec 2001 10:50:54 +0100", "msg_from": "\"Hermann RANGAMANA\" <hrangamana@primagendys.fr>", "msg_from_op": false, "msg_subject": "psql interval type" }, { "msg_contents": "\nJust send over the patch and we will add it for 7.3.\n\n---------------------------------------------------------------------------\n\nNicolas Verger wrote:\n> > -----Message d'origine-----\n> > De : Barry Lind [mailto:barry@xythos.com]\n> > Nicolas,\n> >\n> > AFAIK you should be able to get the EXPLAIN plan from the jdbc driver.\n> > All INFO messages from the backend are treated as warnings by the jdbc\n> > driver. If you do a getWarnings() you should be able to get the explain\n> > plan information.\n> >\n> > Cancel query is on the jdbc todo list. However I don't know of anyone\n> > that plans to implement it. So a patch that adds that functionality\n> > would be welcome.\n> >\n> > thanks,\n> > --Barry\n> >\n> >\n> \n> Ok, the getWarnings() method works on the Connection but not on the Statment\n> nor ResultSet... Why ?\n> I look in the source and it's just a little patch to do it.\n> I'm not used in GPL licence so if I make this patch what may I do with it ?\n> \n> I watched for cancel query too, and I may work on it too ...\n> \n> Nicolas\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 19:26:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: JDBC improvements" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNicolas Verger wrote:\n> > > Ok, the getWarnings() method works on the Connection but not on\n> > the Statment\n> > > nor ResultSet... Why ?\n> > > I look in the source and it's just a little patch to do it.\n> > > I'm not used in GPL licence so if I make this patch what may I\n> > do with it ?\n> > >\n> > > I watched for cancel query too, and I may work on it too ...\n> >\n> > We are BSD license. Is that what you meant? Sure, send it over to jdbc\n> > list or patches list.\n> \n> Ok, so I send the first patch. It correct the propagation of the SQLWarnings\n> to the Statement and the ResultSet\n> \n> Change are :\n> Add method addWarnings(SQLWarning) into org.postgresql.ResultSet\n> Add method addWarning(String) into org.postgresql.Statement\n> Modify method execute() into org.postgresql.core.QueryExecutor\n> \t- Clear the warning of the current statement before process the query\n> \t- Set the new warnings to the statement too\n> \t- Add the statement warning to the ResultSet when the query is processed\n> \n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 19:31:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch : Re: JDBC improvements" } ]
[ { "msg_contents": "\n> With DELETE FROM foo, let's suppose you have 10 pages in the table.\nTo\n> modify page 1, you write to page 11\n\nBut what with the indexes ? They would all need to be modified\naccordingly.\nIf you did something like chaining, then before long all tuples would be\n\nchained, even those that were not touched.\n\nIf you really want to avoid the page writes to WAL, imho the best way\nwould be \nto revive the original PG page design where the physical position of\nslots in a \nheap page where only changed by vacuum.\n\nThen, a heap page that was only partly written would only be a problem\niff \nthe hardware wrote wrong data, not if it only skipped part of the write.\n\nReasonable hardware does detect such corrupted pages.\nE.g. on AIX if you reduce the PG pagesize to 4k, an only partly written\npage \nthat stays undetected can be ruled out.\n\nThen you would only need to write index pages to WAL, but not heap\npages.\n\nMaybe a better idea would be to only conditionally write pages to WAL if\nslot \npositions changed. In the \"delete\" example heap slot positions certainly\ndo \nnot need to change. \nTo be extra safe it would probably be necessary to not split tuple\nheaders\n(at least the xact info) across physical pages. Then it would also be\nsafe to \nuse a pg pagesize that is a multiple of the physical page size.\n\nor so ? ...\nAndreas\n", "msg_date": "Thu, 22 Nov 2001 12:18:12 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)" } ]
[ { "msg_contents": "\n> Thus, for example, int4 to float8 seems like an okay implicit\ncoercion,\n> but not int4 to text.\n\nint4 to text is imho one of the most important implicit coercions of\nall.\nStill, a field to mark an implicit coercion function is probably a good\nthing. \n\nImho a numeric to text implicit coercion would also be great :-) \n\nI come from a db where all coercions are possible implicitly,\nthis has not been a problem as long as there is a way to overrule.\n\nAndreas\n", "msg_date": "Thu, 22 Nov 2001 14:23:36 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Implicit coercions need to be reined in" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I come from a db where all coercions are possible implicitly,\n> this has not been a problem as long as there is a way to overrule.\n\nYeah, but how rich was its type structure compared to Postgres'?\n\nIt might indeed be safe/reasonable to allow implicit coercions to\ntext from all other types. I'm not sure. 
I am sure that if any\ndatatype coercion one could possibly want is available implicitly,\nit's going to be very difficult to predict the system's behavior.\nIn fact, this would probably make the default behavior appear to\nhave *fewer* automatic coercions not more: anytime there wasn't\nan exact type match, the parser would have too many alternatives\nand would be unable to select a unique function or operator candidate\nfrom among those it could reach by means of implicit coercions.\nWe've seen some reports of such problems already, and it'll get worse\nas we add implicit coercions.\n\nOf course, you could always turn on the \"can be implicit coercion\"\nflag for whichever pg_proc entries you really wanted ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 11:46:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implicit coercions need to be reined in " } ]
[ { "msg_contents": "Is anyone looking at what happens with rserv and tables without OIDs?\n", "msg_date": "Thu, 22 Nov 2001 09:05:12 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "create table .... without OID, and rserv" } ]
[ { "msg_contents": "I'm trying to set up rServ to replicate my own database. I do everything\nlike the InitRservTest script does.\n\nThing is, it don't seem possible to replicate MULTIPLE columns in a table...\n\n----- s n i p -----\n[turbo@barrabas turbo]$ MasterAddTable --host=localhost --user=postgres test actions actactionid\n[turbo@barrabas turbo]$ MasterAddTable --host=localhost --user=postgres test actions srvserverid\nERROR: CreateTrigger: trigger _rserv_trigger_t_ already defined on relation actions\n----- s n i p -----\n\n\nWhere did i go wrong?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@bayour.com\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Gothenburg/Sweden\n\nNORAD SDI colonel kibo domestic disruption Albanian supercomputer\nSoviet Panama [Hello to all my fans in domestic surveillance] KGB\nOrtega plutonium congress 767\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n", "msg_date": "22 Nov 2001 15:45:57 +0100", "msg_from": "Turbo Fredriksson <turbo@bayour.com>", "msg_from_op": true, "msg_subject": "rServ and multiple columns in db to replicate" } ]
[ { "msg_contents": "\nSimple, really ... using v7.2b3 that hasn't been released yet ...\n\ntraf_stats=# select EXTRACT(WEEK FROM TIMESTAMP runtime) from hourly_stats;\nERROR: parser: parse error at or near \"runtime\"\ntraf_stats=# \\d hourly_stats\n Table \"hourly_stats\"\n Column | Type | Modifiers\n------------+-----------------------------+-----------\n from_ip | inet |\n to_ip | inet |\n port | integer |\n bytes | bigint |\n runtime | timestamp(6) with time zone |\n no_records | integer |\nIndexes: hourly_from_ip,\n hourly_to_ip\n\n\n", "msg_date": "Thu, 22 Nov 2001 10:02:44 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Can't \"EXTRACT\" from a field?" }, { "msg_contents": "On 22 Nov 2001 at 10:02 (-0500), Marc G. Fournier wrote:\n| \n| Simple, really ... using v7.2b3 that hasn't been released yet ...\n| \n| traf_stats=# select EXTRACT(WEEK FROM TIMESTAMP runtime) from hourly_stats;\n| ERROR: parser: parse error at or near \"runtime\"\n\nThe following works for me (on 7.2b3).\n\n create table test( id serial, tid timestamp default now() );\n select extract(week from tid) from test;\n\n\ngram.y has\n extract_list: extract_arg FROM a_expr\n\nwhich appears to be in keeping with the sql99 def.\n part2- <extract expression> ::=\n part2: EXTRACT <left paren> <extract field>\n part2- FROM <extract source> <right paren>\n\n\nI don't know if there was ever any other format for extract(), but\nthings look normal from here.\n\n\nbtw Marc, can you help me in getting archives of the various lists?\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 22 Nov 2001 10:57:31 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Can't \"EXTRACT\" from a field?" 
}, { "msg_contents": "On Thu, 22 Nov 2001, Brent Verner wrote:\n\n> On 22 Nov 2001 at 10:02 (-0500), Marc G. Fournier wrote:\n> |\n> | Simple, really ... using v7.2b3 that hasn't been released yet ...\n> |\n> | traf_stats=# select EXTRACT(WEEK FROM TIMESTAMP runtime) from hourly_stats;\n> | ERROR: parser: parse error at or near \"runtime\"\n>\n> The following works for me (on 7.2b3).\n>\n> create table test( id serial, tid timestamp default now() );\n> select extract(week from tid) from test;\n>\n>\n> gram.y has\n> extract_list: extract_arg FROM a_expr\n>\n> which appears to be in keeping with the sql99 def.\n> part2- <extract expression> ::=\n> part2: EXTRACT <left paren> <extract field>\n> part2- FROM <extract source> <right paren>\n>\n>\n> I don't know if there was ever any other format for extract(), but\n> things look normal from here.\n\nya, I hadn't clued in until fighting with it some more that if the fieldis\nalready a timestamp, yyou don't have to put it in as 'EXTRACT(WEEK FROM\nTIMESTAMP tid) :(\n\n\n> btw Marc, can you help me in getting archives of the various lists?\n\nthey are all at archives.postgresql.org ... no?\n\n\n", "msg_date": "Thu, 22 Nov 2001 11:18:56 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Can't \"EXTRACT\" from a field?" }, { "msg_contents": "On 22 Nov 2001 at 11:18 (-0500), Marc G. Fournier wrote:\n| On Thu, 22 Nov 2001, Brent Verner wrote:\n| \n| > On 22 Nov 2001 at 10:02 (-0500), Marc G. Fournier wrote:\n| > |\n| > | Simple, really ... 
using v7.2b3 that hasn't been released yet ...\n| > |\n| > | traf_stats=# select EXTRACT(WEEK FROM TIMESTAMP runtime) from hourly_stats;\n| > | ERROR: parser: parse error at or near \"runtime\"\n| >\n| > The following works for me (on 7.2b3).\n| >\n| > create table test( id serial, tid timestamp default now() );\n| > select extract(week from tid) from test;\n| >\n| >\n| > gram.y has\n| > extract_list: extract_arg FROM a_expr\n| >\n| > which appears to be in keeping with the sql99 def.\n| > part2- <extract expression> ::=\n| > part2: EXTRACT <left paren> <extract field>\n| > part2- FROM <extract source> <right paren>\n| >\n| >\n| > I don't know if there was ever any other format for extract(), but\n| > things look normal from here.\n| \n| ya, I hadn't clued in until fighting with it some more that if the fieldis\n| already a timestamp, yyou don't have to put it in as 'EXTRACT(WEEK FROM\n| TIMESTAMP tid) :(\n\nI didn't even know what the args to extract were, which is how I ended\nup in gram.y...\n\nnote: extract() is correctly not listed as a function, but it doesn't \n have any '\\h' help available. Is this a TODO kind-of-thing? If\n it is I can try to add it sometime later today. There is not\nnote2: This documentation is incorrect as of last night's cvs.\n an example given in the docs is.\n SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40');\n I can go ahead and update func.sgml if noone else is already\n getting it.\n\n| > btw Marc, can you help me in getting archives of the various lists?\n| \n| they are all at archives.postgresql.org ... no?\n\nNot that I see. archives. points at www2.us.\n\nI ftp'd around every postgresql.org (and hub.org) anonftp server\nI could find, and sent an 'index' command to majordomo, and it has\n/some/ files listed, but I could not ever retreive those files...\n\nI'm really looking forward to having locally (mutt!!) searchable\ndocs...\n\nDid you ever get the documentation issue worked out? 
Is there anything\nI can do to help with that? I did get man.tar and postgres.tar built\nfrom cvs on my debian box.\n\nThanks,\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 22 Nov 2001 11:46:13 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Can't \"EXTRACT\" from a field?" }, { "msg_contents": "On 22 Nov 2001 at 11:18 (-0500), Marc G. Fournier wrote:\n| \n| ya, I hadn't clued in until fighting with it some more that if the fieldis\n| already a timestamp, yyou don't have to put it in as 'EXTRACT(WEEK FROM\n| TIMESTAMP tid) :(\n\nOk, scratch my previous email WRT the sgml docs being wrong...\n\nSomething is strange, tho.\n\nbrent=# select extract( week from timestamp ('2001-02-06 20:38:40'::timestamp)>\nERROR: parser: parse error at or near \"'\"\nbrent=# select extract( week from \"timestamp\" ('2001-02-06 20:38:40'::timestam>\n date_part \n-----------\n 6\n(1 row)\n\nbrent=# select extract( week from timestamp ('2001-02-06 20:38:40') );\nERROR: parser: parse error at or near \"'\"\nbrent=# select extract( week from timestamp '2001-02-06 20:38:40'::timestamp );\n date_part \n-----------\n 6\n(1 row)\n\nbrent=# select extract( week from timestamp '2001-02-06 20:38:40' );\n date_part \n-----------\n 6\n(1 row)\n\nNotice:\n timestamp( type ) => fail\n \"timestamp\"( type ) => OK\n timestamp type => OK\n timestamp column_of_type => fail [1] Marc's original observation.\n\n\ncan't help any more...\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Thu, 22 Nov 2001 12:22:15 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Can't \"EXTRACT\" from a field?" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> note: extract() is correctly not listed as a function, but it doesn't \n> have any '\\h' help available. Is this a TODO kind-of-thing? If\n> it is I can try to add it sometime later today.\n\nThere is no mechanism for keeping track of help entries for functions,\nonly statements.\n\n> note2: This documentation is incorrect as of last night's cvs.\n> an example given in the docs is.\n> SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40');\n\nThis example is fine. Try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 12:50:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can't \"EXTRACT\" from a field? " }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Something is strange, tho.\n\nThe construct you're poking around the edges of here is\n\n\t\ttype-name literal-string\n\nwhich is an SQLish typed constant. (Or more accurately, it's Thomas'\ngeneralization of some type-specific constant syntaxes that appear in\nSQL92. AFAIK the spec itself doesn't claim this is a type-universal\nconstruction.)\n\nCasting something other than a string literal requires different, more\nexplicit syntax; eg, CAST(foo AS type), foo::type, or if the type name\nis allowable as a function name type(foo) will work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 20:33:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can't \"EXTRACT\" from a field? " } ]
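[Editor's note: for anyone sanity-checking the date_part values in the thread above — PostgreSQL's EXTRACT(WEEK FROM ...) yields the ISO-8601 week number, which java.time can reproduce client-side. A small sketch; the class and method names are mine, not part of any PostgreSQL tooling.]

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class IsoWeekCheck {
    // ISO-8601 week-of-year, matching what EXTRACT(WEEK FROM ...)
    // returned for the timestamps tried in the thread.
    static int isoWeek(LocalDate d) {
        return d.get(WeekFields.ISO.weekOfWeekBasedYear());
    }

    public static void main(String[] args) {
        // The thread's example: '2001-02-06 20:38:40' gave date_part 6.
        System.out.println(isoWeek(LocalDate.of(2001, 2, 6)));
    }
}
```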
[ { "msg_contents": "\nOkay, PeterE got the appropriate scripts setup, and I just ran the\npackaging script, which appears to have all gone great ...\n\nAs usual, I want to give it ~24hrs to perculate through the mirrors before\ndoing a larger announce, but if anyone would like to take a quick test\nthrough and make sure the packaging isn't missing anything, the tar files\nare avaialble at:\n\n\tftp://ftp.postgresql.org/pub/beta\n\nLet us know if there are any problems and we'll re-package as appropriate\n...\n\n\n", "msg_date": "Thu, 22 Nov 2001 12:57:32 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.2b3 packaged, but not announced beyond here yet ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> if anyone would like to take a quick test\n> through and make sure the packaging isn't missing anything, the tar files\n> are avaialble at:\n> \tftp://ftp.postgresql.org/pub/beta\n\nWhy do all the CVS $Header$ lines in this tarball look like\n\n$Header: /projects/cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n\nand not\n\n$Header: /cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n\nwhich is what I see in CVS checkout (as well as earlier beta tarballs)?\n\nThis makes it *real* painful to diff the tarball against my local\ncheckout, which is what I usually do to validate a tarball.\n\nI think it might be okay other than that, but it's hard to tell...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 21:19:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ... " }, { "msg_contents": "> \"Marc G. 
Fournier\" <scrappy@hub.org> writes:\n> > if anyone would like to take a quick test\n> > through and make sure the packaging isn't missing anything, the tar files\n> > are avaialble at:\n> > \tftp://ftp.postgresql.org/pub/beta\n> \n> Why do all the CVS $Header$ lines in this tarball look like\n> \n> $Header: /projects/cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n> \n> and not\n> \n> $Header: /cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n> \n> which is what I see in CVS checkout (as well as earlier beta tarballs)?\n> \n> This makes it *real* painful to diff the tarball against my local\n> checkout, which is what I usually do to validate a tarball.\n> \n> I think it might be okay other than that, but it's hard to tell...\n\nFor some strange reason, anoncvs uses /projects/cvsroot while committers\ncvs uses just /cvsroot. I am sure the problem is that he is pulling\nfrom anoncvs and not from cvs. My guess is that you will have to pull\nout $header lines before doing the diff. Yes, a pain.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Nov 2001 21:32:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ..." }, { "msg_contents": "\nI know what the problem is, and am currently working on getting it fixed,\nright Vince? :)\n\n\nOn Thu, 22 Nov 2001, Tom Lane wrote:\n\n> \"Marc G. 
Fournier\" <scrappy@hub.org> writes:\n> > if anyone would like to take a quick test\n> > through and make sure the packaging isn't missing anything, the tar files\n> > are avaialble at:\n> > \tftp://ftp.postgresql.org/pub/beta\n>\n> Why do all the CVS $Header$ lines in this tarball look like\n>\n> $Header: /projects/cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n>\n> and not\n>\n> $Header: /cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n>\n> which is what I see in CVS checkout (as well as earlier beta tarballs)?\n>\n> This makes it *real* painful to diff the tarball against my local\n> checkout, which is what I usually do to validate a tarball.\n>\n> I think it might be okay other than that, but it's hard to tell...\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Thu, 22 Nov 2001 21:34:33 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am sure the problem is that he is pulling\n> from anoncvs and not from cvs.\n\nIf so, could I request that future tarballs be pulled from logged-in\ncvs? It's unlikely that anybody but the committers circle will do\nsuch diffing, so you may as well make it easier for us rather than\nother people.\n\n(Actually an even better answer would be to make anoncvs and committers\ncvs show the same path, but if that's not practical I won't argue.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 21:35:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ... " }, { "msg_contents": "On Thursday 22 November 2001 12:57 pm, Marc G. 
Fournier wrote:\n> As usual, I want to give it ~24hrs to perculate through the mirrors before\n> doing a larger announce, but if anyone would like to take a quick test\n> through and make sure the packaging isn't missing anything, the tar files\n> are avaialble at:\n\n> \tftp://ftp.postgresql.org/pub/beta\n\n> Let us know if there are any problems and we'll re-package as appropriate\n\n[back on-list]\n\nAfter much help from Marc, and a good tarball (apparently), we have RPMs. \nRegression does the expected thing on RedHat 7.2 (locale settings prevent a \ncomplete PASS -- the diffs are attached for those who are curious).\n\nRPMs for testing are at \nftp.postgresql.org/pub/binary/beta/RPMS/{SRPMS|redhat-7.2}\n\nMany thanks to Marc and PeterE for the man pages and the html docs being \nbuilt again, and a special thank you to Peter for the initial patchset that \nallowed a smooth build relatively quickly. Oh, and Marc, the man pages and \nthe html docs ARE being built properly for the RPMset's consumption.\n\nPlease test this RPMset if you do RPM work, but also consider them BETA. \nThat of course means that 'rpm -Fvh' on your production server is not \nrecommended.\n\nBe sure, if upgrading, to dump out your whole database system first, as the \nmigration tools are a little finicky. This is, after all, a major version \nupgrade.\n\nI'll make a more public announcement after Marc makes a more public 7.2b3 \nannouncement.\n\nDevelopers who are a member of group 'pgsql' on cvs.postgresql.org and who \nwant to build RPMsets for other distributions/architectures, feel free to do \nso and upload into that tree, setting up your own subdir. Group write is and \nshould be set appropriately. Yes, Thomas, that means you and Mandrake \nwhichever you're running. 
:-)\n\nOthers who wish to build RPMs for non-redhat-7.2 architectures, contact me \noff-list with locations and details.\n\nNote that due to the possibility of third-party Trojan RPMsets, those who can \nrebuild from the src rpm should do so for testing. Signed binary RPMs for \ndistributions will likely be generated by those distributors once we have a \nfinal release. I will also likely sign a final RPM release, with my public \nkey being available on the ftp site.\n\nEnjoy! (If I sound cheery, well, there's some sort of chemical in turkey meat \nthat, if consumed in a large enough quantity, makes you very cheerful -- \nalmost jolly. :-)).\n\nAnd, as today is Thanksgiving in the 'States, I'll just add to this \nannouncement my THANK YOU to all the developers, contributors, testers, and \nusers of this magnificent RDBMS we call PostgreSQL.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11", "msg_date": "Thu, 22 Nov 2001 23:08:36 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ..." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Regression does the expected thing on RedHat 7.2 (locale settings prevent a \n> complete PASS -- the diffs are attached for those who are curious).\n\nThis seems rather broken, seeing as how \"make check\" takes pains to\ncreate a C-locale temporary installation. How are you managing to\ndefeat that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 23:35:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ... " }, { "msg_contents": "On Thu, 22 Nov 2001, Marc G. Fournier wrote:\n\n>\n> I know what the problem is, and am currently working on getting it fixed,\n> right Vince? :)\n\nDepends, you talking about the dns entry or the header line? 
I can\nfix the dns entry, but not the header line.\n\n>\n>\n> On Thu, 22 Nov 2001, Tom Lane wrote:\n>\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > if anyone would like to take a quick test\n> > > through and make sure the packaging isn't missing anything, the tar files\n> > > are avaialble at:\n> > > \tftp://ftp.postgresql.org/pub/beta\n> >\n> > Why do all the CVS $Header$ lines in this tarball look like\n> >\n> > $Header: /projects/cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n> >\n> > and not\n> >\n> > $Header: /cvsroot/pgsql/GNUmakefile.in,v 1.23 2001/11/21 23:19:25 momjian Exp $\n> >\n> > which is what I see in CVS checkout (as well as earlier beta tarballs)?\n> >\n> > This makes it *real* painful to diff the tarball against my local\n> > checkout, which is what I usually do to validate a tarball.\n> >\n> > I think it might be okay other than that, but it's hard to tell...\n> >\n> > \t\t\tregards, tom lane\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Nov 2001 23:39:42 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet" }, { "msg_contents": "On Thursday 22 November 2001 11:35 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Regression does the expected thing on RedHat 7.2 
(locale settings prevent\n> > a complete PASS -- the diffs are attached for those who are curious).\n\n> This seems rather broken, seeing as how \"make check\" takes pains to\n> create a C-locale temporary installation. How are you managing to\n> defeat that?\n\nBy running the regression tests in a binary-only installation. I am of the \nopinion that people might want to run regression, or have the regression \ndatabase, or see the regression queries as examples, on a non-development \nmachine -- one with no make. I myself, as part of the burn-in of new \ndatabase servers, run multiple regression tests on the soon-to-be production \nserver -- and I _never_ install compilers, make, or associated packages on \nproduction servers.\n\nSo I prebuild the regression binaries, etc, and put them into a subpackage \ncalled test -- then, the user just cd's to /usr/lib/pgsql/test/regress, makes \nsure postmaster is running, su's to postgres, and runs ./pg_regress with the \nright scheduling options. No source tree required. When done with the tests, \nrpm -e postgresql-test frees up the 4MB of space taken.\n\nISTM that the regression tests should be locale-agnostic, or be able to force \na specific locale without requiring a make. It's not a big deal, as long as \nyou know what to expect, though. So, the regression results I just posted \nshould be considered normal for the binary-only regression test on a locale \nof en_US. In fact, our regression tests could even be used to find broken \nlocales -- is there even a test for locale?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Nov 2001 10:26:54 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ..." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n>> This seems rather broken, seeing as how \"make check\" takes pains to\n>> create a C-locale temporary installation. 
How are you managing to\n>> defeat that?\n\n> By running the regression tests in a binary-only installation.\n\nRemind me to pay no attention whatsoever to regression test failure\nreports coming from people who use the RPM installation.\n\nI regard this setup as worse than useless, because it is guaranteed\nto cause regression failures that most people will not know how to\ninterpret. We will be getting lots of complaints, and I for one am\nnot going to waste my time scanning them closely to see if there is\nanything real there.\n\n> ISTM that the regression tests should be locale-agnostic,\n\nThey are, when used as intended.\n\nIn reality, it's extremely questionable that the regression tests\nwill tell anything useful to a person who's installing prebuilt\nplatform-specific binaries. The builder of the binaries should have\nrun the regress tests. So I'm not sure what the point is --- but I am\nsure that this setup will cause a lot more problems than it solves.\nI recommend forgetting about postgresql-test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 11:08:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ... " }, { "msg_contents": "On Friday 23 November 2001 11:08 am, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > By running the regression tests in a binary-only installation.\n\n> Remind me to pay no attention whatsoever to regression test failure\n> reports coming from people who use the RPM installation.\n\nI typically field those anyway.\n\n> I regard this setup as worse than useless, because it is guaranteed\n> to cause regression failures that most people will not know how to\n> interpret. 
We will be getting lots of complaints, and I for one am\n> not going to waste my time scanning them closely to see if there is\n> anything real there.\n\nAs this has been in the RPMset for, let's see, TWO YEARS now, I don't think \nit's going to be that big of a problem. The locale deal has been there that \nlong -- and is something I can see a mile away -- which of course means that \nI'll check those reports myself, and only will pass along the parts that are \nreal failures.\n\nThe discussion in a separate thread about being able to specify the collation \nin queries sounds like it would solve a good deal of the problem, by \nspecifying the C collation in the regression queries. The other failures \ninvolve a currency symbol, the presence of which still confuses me to a \ncertain extent.\n\n> They are, when used as intended.\n\nAnd it is my contention that the 'make check' method of running regression \ntesting is broken in itself, by assuming a source tree, by assuming a \ndevelopment platform, and by assuming that regression testing is a \ndeveloper-only activity. I disagree with all those assumptions -- our \ndocumentation even has stated (and may still state) that the regression \ndatabase and the regression queries are a good source of examples -- both for \nthe queries, and for the datamodels.\n\nBut those three assumptions run deep -- particularly the first two, which \nunderlie the whole package's philosophy in more than one area.\n\nI'm just providing what I believe is a useful feature that has been requested \nby RPM, non-development-platform, users.\n\nIf I must, I can patch the expected outputs to match the default en_US locale \n-- but I am very loathe to do that! 
After all, the en_US locale may not be \nwhat the user is running.\n\n> In reality, it's extremely questionable that the regression tests\n> will tell anything useful to a person who's installing prebuilt\n> platform-specific binaries.\n\nIn reality, I think they are more useful than that to the enduser.\n\n> The builder of the binaries should have\n> run the regress tests.\n\nI run them both in their 'intended' way as well as in the final prebuilt way \nbefore releasing any RPMsets. There have been a couple of times that I \nskipped that step, usually with bad results, but it is one of the things I \njust do, now.\n\n> So I'm not sure what the point is --- but I am\n> sure that this setup will cause a lot more problems than it solves.\n\nAs the test subpackage has existed for two years, I would contend that the \npackage hasn't caused quite as many problems as you prophesy.\n\n> I recommend forgetting about postgresql-test.\n\nIf the consensus of the steering committee is that we shouldn't distribute \nthe postgresql-test subpackage prebuilt, from ftp.postgresql.org, I will \ndisable it in the build. The possibility to build it will still be \navailable: it would just not be built by default.\n\nAgain, I have fielded questions about RPM issues for over two years, now, and \nI fully intend to continue doing so. The standard 'failures' will be fully \ndocumented in the README.rpm-dist (the fact that there are known failures is \nalready documented) in the RPMset, which is where someone who wants to run \nthe tests is going to have to go for information on running the tests \nthemselves. 
So, Tom, while I understand your concerns, I just want to make \nsure it is realized that this is not a new thing, and that I volunteer to \nfield those results.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Nov 2001 12:56:09 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ..." }, { "msg_contents": "On Saturday 24 November 2001 03:08, Tom Lane wrote:\n\n> In reality, it's extremely questionable that the regression tests\n> will tell anything useful to a person who's installing prebuilt\n> platform-specific binaries. The builder of the binaries should have\n> run the regress tests. So I'm not sure what the point is --- but I am\n> sure that this setup will cause a lot more problems than it solves.\n> I recommend forgetting about postgresql-test.\n\nTom,\n\nI have problems interpreting your statement.\nI assumed that the purpose of a regression test is to verify that software \nis doing what it is supposed to do on the system it as installed.\n\nIf I install a binary package, you say that I cannot rely on the \nregression test? What does the builder of the package know about the \nparticularities of my system? How else could I verify the functionality of \nthe installed software?\n\nEither the regression test is construed in a way that it works with any \ninstallation, or users should be advised NOT to use binary packages at all \nand only install in a way that they can verify the installation with the \nregression test. How would you think any user would trust his/her \ninstallation in a production environment otherwise?\n\nDid I misunderstand your statement or is my trust in PostgreSQL binary \ninstallations unjustified?\n\nHorst\n", "msg_date": "Sat, 24 Nov 2001 10:23:28 +1100", "msg_from": "Horst Herb <hherb@malleenet.net.au>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packaged, but not announced beyond here yet ..." 
}, { "msg_contents": "Horst Herb <hherb@malleenet.net.au> writes:\n> I assumed that the purpose of a regression test is to verify that software \n> is doing what it is supposed to do on the system it as installed.\n\nSo it is. What Lamar is proposing to provide as part of the RPMs is not\nusable for that purpose, at least not easily. That's why I'm not happy.\n\nA possible solution is to give pg_regress.sh a third behavior in which\nit relies on an already-installed set of binaries and library files,\nbut still initdb's a temporary data area and starts a test postmaster\ntherein. That would allow controlling the locale issue in the tested\ndatabase. I'm not sure if this can be done easily enough to make it\na viable answer for 7.2, though. It's a tad late in the beta cycle\nto be contemplating any major surgery on pg_regress.\n\nPeter, any thoughts about how hard that might be?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 18:55:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "RPMs and regression tests (was Re: v7.2b3 packaged...)" }, { "msg_contents": "On Friday 23 November 2001 06:55 pm, Tom Lane wrote:\n> Horst Herb <hherb@malleenet.net.au> writes:\n> > I assumed that the purpose of a regression test is to verify that\n> > software is doing what it is supposed to do on the system it as\n> > installed.\n\n> So it is. What Lamar is proposing to provide as part of the RPMs is not\n> usable for that purpose, at least not easily. That's why I'm not happy.\n\nIt's not being proposed; it's existing behavior for two years. I'm not happy \nwith there being failures in the tests, either -- but I _have_ documented the \nfact of failures.\n\n> A possible solution is to give pg_regress.sh a third behavior in which\n> it relies on an already-installed set of binaries and library files,\n> but still initdb's a temporary data area and starts a test postmaster\n> therein. \n\nThis would be helpful. 
While I understand the desire to be able to test a \nset of binaries prior to installation (and support such a possibility), this \nbehavior should be the default, not the other way around. I shouldn't need \nmake to run regression.\n\nAnd I'm willing to do the work. Unless Peter just wants to do it, I'll see \nwhat I can get working for the 7.3 cycle.\n\nI don't think it will be too hard, from a quick look at make check and \nfriends.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Nov 2001 19:33:44 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: RPMs and regression tests (was Re: v7.2b3 packaged...)" }, { "msg_contents": "Tom Lane writes:\n\n> A possible solution is to give pg_regress.sh a third behavior in which\n> it relies on an already-installed set of binaries and library files,\n> but still initdb's a temporary data area and starts a test postmaster\n> therein.\n\nThis sounds like a good plan, but I refuse to even think about it further\nso I don't get any ideas about trying it for 7.2. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 25 Nov 2001 23:19:45 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: RPMs and regression tests (was Re: v7.2b3 packaged...)" } ]
[ { "msg_contents": "Hi\n\nI examined multibyte psql.exe and libpq.dll yesterday\naccording to the bug report from Hiroshi Saito.\nAt first the compilation didn't work at all and I found\nthe following in src/win32.mak though it's irrelevant to\nthe bug report.\n\nALL:\n cd include\n if not exist pg_config.h copy pg_config.h.win32 pg_config.h\n\nCurrently I'm working under cygwin environment mainly and\nthe above script doesn't work well after the configure of\ncygwin (un??)fortunately. It's pretty painful and risky to\nchange pg_config.h.win32 and the original pg_config.h.\nHow about including pg_config.h.win32 in pg_config.h e.g.\n\n#ifdef WIN32\n#include \"pg_config.h.win32\"\n#else\n.\n.\n\n?\n\nComments ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 23 Nov 2001 10:39:12 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "pg_config.h.win32" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> It's pretty painful and risky to\n> change pg_config.h.win32 and the original pg_config.h.\n> How about including pg_config.h.win32 in pg_config.h e.g.\n\nWhat? I thought pg_config.h.win32 was a substitute pg_config.h\nto use for native-windows builds (where you can't run autoconf).\nUnder cygwin I think you should just run configure and expect it\nto produce a correct pg_config.h by itself. If it doesn't, we\nneed to fix the configure script.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 21:13:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config.h.win32 " }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Must I double the source tree for cygwin and native\n> Windows ? I'm using the common source tree for ODBC\n> and there's no problem for compilation.\n\nWhat? 
I don't quite follow what the problem is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 21:38:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config.h.win32 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > It's pretty painful and risky to\n> > change pg_config.h.win32 and the original pg_config.h.\n> > How about including pg_config.h.win32 in pg_config.h e.g.\n> \n> What? I thought pg_config.h.win32 was a substitute pg_config.h\n> to use for native-windows builds (where you can't run autoconf).\n> Under cygwin I think you should just run configure and expect it\n> to produce a correct pg_config.h by itself. If it doesn't, we\n> need to fix the configure script.\n\nMust I double the source tree for cygwin and native\nWindows ? I'm using the common source tree for ODBC\nand there's no problem for compilation. It seems\npretty absurd to have the whole source tree only\nfor native-windows clients but it's pretty annoying\nto download only necessary files.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 23 Nov 2001 11:38:43 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_config.h.win32" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Must I double the source tree for cygwin and native\n> > Windows ? I'm using the common source tree for ODBC\n> > and there's no problem for compilation.\n> \n> What? I don't quite follow what the problem is.\n\npg_config.h generated under cygwin doesn't fit\nin with native-windows. 
Of course pg_config.h.win32\ndoesn't fit in with cygwin either.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 23 Nov 2001 11:48:28 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_config.h.win32" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> pg_config.h generated under cygwin doesn't fit\n> in with native-windows. Of cource pg_config.h.win32\n> doesn't fit in with cygwin either.\n\nShould it? I'm quite content to regard them as separate platforms.\nI don't expect LinuxPPC and Mac OSX to share a pg_config.h, even\nthough I run them on the same box.\n\nI'm also not seeing why #including one file in the other would make\nthat problem better?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 21:53:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config.h.win32 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > pg_config.h generated under cygwin doesn't fit\n> > in with native-windows. Of cource pg_config.h.win32\n> > doesn't fit in with cygwin either.\n> \n> Should it? I'm quite content to regard them as separate platforms.\n> I don't expect LinuxPPC and Mac OSX to share a pg_config.h, even\n> though I run them on the same box.\n\nYes but the current native-windows stuff doesn't seem\nvaluable to keep a separate source tree.\n\n> \n> I'm also not seeing why #including one file in the other would make\n> that problem better?\n\n??? I'm changing my pg_config.h locally in reality\nlike as follows.\n\n#ifdef WIN32\n#include \"pg_config.h.win32\"\n#else\n.\n. (the original content of pg_config.h)\n.\n#endif /* WIN32 */\n\nThere's no problem to compile under both cygwin\nand native-windows. 
It's much safer than replacing\npg_config.h.win32 and the original pg_config.h each\ntime.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 23 Nov 2001 13:15:04 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_config.h.win32" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> I'm also not seeing why #including one file in the other would make\n>> that problem better?\n\n> ??? I'm changing my pg_config.h locally in reality\n> like as follows.\n\n> #ifdef WIN32\n> #include \"pg_config.h.win32\"\n> #else\n> .\n> . (the original content of pg_config.h)\n> .\n> #endif /* WIN32 */\n\nHmm. This would only help if you'd successfully run configure (so that\npg_config.h has been created from pg_config.h.in) and then you wanted to\nuse that same source tree to compile for native Windows, *without* doing\na \"make distclean\" which is normally considered a required step before\nusing a source tree to build for a different platform.\n\nThat seems like an ugly way to proceed. Even if it works today, I would\nnot want to make a commitment that it will continue to work in future.\nI'd rather not uglify pg_config.h.in for all platforms in service of\na one-platform-pair hack that I'd call unsupported anyway...\n\nOut of curiosity, how much difference is there between pg_config.h built\non Cygwin/Windows and pg_config.h.win32? Obviously the one is a lot\nbigger, but if you simply used the Cygwin pg_config.h to build the stuff\nthat works on native Windows, how far is it from working?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 14:34:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config.h.win32 " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> I'm also not seeing why #including one file in the other would make\n> >> that problem better?\n> \n> > ??? 
I'm changing my pg_config.h locally in reality\n> > like as follows.\n> \n> > #ifdef WIN32\n> > #include \"pg_config.h.win32\"\n> > #else\n> > .\n> > . (the original content of pg_config.h)\n> > .\n> > #endif /* WIN32 */\n> \n> Hmm. This would only help if you'd successfully run configure (so that\n> pg_config.h has been created from pg_config.h.in) and then you wanted to\n> use that same source tree to compile for native Windows, *without* doing\n> a \"make distclean\" which is normally considered a required step before\n> using a source tree to build for a different platform.\n> \n> That seems like an ugly way to proceed. Even if it works today, I would\n> not want to make a commitment that it will continue to work in future.\n> I'd rather not uglify pg_config.h.in for all platforms in service of\n> a one-platform-pair hack that I'd call unsupported anyway...\n\nI see. I'm not responsible for libpq.dll and psql.exe anyway.\nSomeone would check it as before.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 26 Nov 2001 09:25:14 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: pg_config.h.win32" } ]
[ { "msg_contents": ">From the gcc 3.0.2 release notes:\n\n\"The poorly documented extension that allowed string constants in C, C++ and\nObjective C to contain unescaped newlines has been deprecated and may be\nremoved in a future version. Programs using this extension may be fixed in\nseveral ways: the bare newline may be replaced by \\n, or preceded by \\n\\, or\nstring concatenation may be used with the bare newline preceded by \\n\" and \"\nplaced at the start of the next line. \"\n\nI seem to recall that I came across a fair bit of this kind of thing in the\npsql source code at least. I wonder if it should be fixed to be\ncompliant...\n\nChris\n\n", "msg_date": "Fri, 23 Nov 2001 11:50:49 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Unescaped new lines in postgres" }, { "msg_contents": "\nCan you show an example?\n\n---------------------------------------------------------------------------\n\n> >From the gcc 3.0.2 release ntoes:\n> \n> \"The poorly documented extension that allowed string constants in C, C++ and\n> Objective C to contain unescaped newlines has been deprecated and may be\n> removed in a future version. Programs using this extension may be fixed in\n> several ways: the bare newline may be replaced by \\n, or preceded by \\n\\, or\n> string concatenation may be used with the bare newline preceded by \\n\" and \"\n> placed at the start of the next line. \"\n> \n> I seem to recall that I came across a fair bit of this kind of thing in the\n> psql source code at least. 
I wonder if it should be fixed to be\n> compliant...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Nov 2001 23:26:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unescaped new lines in postgres" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> From the gcc 3.0.2 release ntoes:\n> \"The poorly documented extension that allowed string constants in C, C++ and\n> Objective C to contain unescaped newlines has been deprecated and may be\n> removed in a future version.\n\nNot everyone uses gcc. One would think that this sort of thing would be\nsufficiently unportable to have been weeded out of PG long ago, irrespective\nof what gcc does or doesn't do.\n\nWhile I didn't try searching through the whole source tree, a quick grep\nthrough psql didn't find any instances of such coding...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Nov 2001 23:42:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unescaped new lines in postgres " }, { "msg_contents": ">From heap.c:\n\n /*\n * sanity checks\n */\n if (relname && !allow_system_table_mods &&\n IsSystemRelationName(relname) && IsNormalProcessingMode())\n elog(ERROR, \"invalid relation name \\\"%s\\\"; \"\n \"the 'pg_' name prefix is reserved for system\ncatalogs\",\n relname);\n\nI guess this is a slightly different example than what the issue is below.\nMaybe I'm wrong and the above code is still legal. 
Even so, seems a bit\ndodgy to me...\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 23 November 2001 12:26 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] Unescaped new lines in postgres\n>\n>\n>\n> Can you show an example?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> > >From the gcc 3.0.2 release ntoes:\n> >\n> > \"The poorly documented extension that allowed string constants\n> in C, C++ and\n> > Objective C to contain unescaped newlines has been deprecated and may be\n> > removed in a future version. Programs using this extension may\n> be fixed in\n> > several ways: the bare newline may be replaced by \\n, or\n> preceded by \\n\\, or\n> > string concatenation may be used with the bare newline preceded\n> by \\n\" and \"\n> > placed at the start of the next line. \"\n> >\n> > I seem to recall that I came across a fair bit of this kind of\n> thing in the\n> > psql source code at least. I wonder if it should be fixed to be\n> > compliant...\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Fri, 23 Nov 2001 12:53:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Unescaped new lines in postgres" }, { "msg_contents": "> >From heap.c:\n> \n> /*\n> * sanity checks\n> */\n> if (relname && !allow_system_table_mods &&\n> IsSystemRelationName(relname) && IsNormalProcessingMode())\n> elog(ERROR, \"invalid relation name \\\"%s\\\"; \"\n> \"the 'pg_' name prefix is reserved for system\n> catalogs\",\n> relname);\n> \n> I guess this is a slightly different example than what the issue is below.\n> Maybe I'm wrong and the above code is still legal. Even so, seems a bit\n> dodgy to me...\n\nYes, I have seen this before. It looked pretty nifty when I saw it, and\nI have not seen it used in any other code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Nov 2001 23:55:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unescaped new lines in postgres" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> From heap.c:\n> /*\n> * sanity checks\n> */\n> if (relname && !allow_system_table_mods &&\n> IsSystemRelationName(relname) && IsNormalProcessingMode())\n> elog(ERROR, \"invalid relation name \\\"%s\\\"; \"\n> \"the 'pg_' name prefix is reserved for system\n> catalogs\",\n> relname);\n\n> I guess this is a slightly different example than what the issue is below.\n> Maybe I'm wrong and the above code is still legal.\n\nThis is legal ANSI C. The spec says that one of the later phases of the\npreprocessor is\n\n 6. 
Adjacent string literal tokens are concatenated.\n\nI have an old book on C portability that says that this behavior\nactually predates the ANSI spec, but wasn't universal in pre-ANSI\ncompilers.\n\nIt's also legal per spec to write\n\n\t\t\t\"invalid relation name \\\"%s\\\"; \\\nthe 'pg_' name prefix is reserved for system catalogs\",\n\n(note the backslash before the newline) but this strikes me as worse\nstyle since you now have a critical dependency on (a) lack of trailing\nwhitespace on the first line and (b) lack of leading whitespace on the\nsecond.\n\nWhat the gcc release note is about is\n\n\t\t\t\"invalid relation name \\\"%s\\\"; \nthe 'pg_' name prefix is reserved for system catalogs\",\n\nUnescaped newlines in string literals are specifically forbidden by\nthe spec, but apparently gcc takes them anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 11:23:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unescaped new lines in postgres " } ]
[ { "msg_contents": "I've searched several faqs and news groups, tried using different\n./configure options. Tried tweaking src/Makefile.shlib as described in\nseveral usenet posts that I've found on dejanews.com. I've tried\nfollowing the instructions in the SCO specific faq and I can't get past\nthe following error message: \n\n\"fd.c\", line 286: error: undefined symbol: NOFILE\n\nI'm wondering if there are any fixes\\patches\\hacks that work reliably,\nsince I can't seem to find anything that works, or interpret and\nimplement what I've read into something that works. If anyone has any\n\"definitive\" instructions, or perhaps can give a more complete\ndescription of what the problem is and what needs to be done to solve\nit, perhaps a \"to-do list\". Although I haven't looked at the code yet I\ndid see a couple of people mention that they tweaked fd.c, however I\ncouldn't find any listings of what they tweaked.\n\nAny help is greatly appreciated, I don't know if anyone has volunteered\ntheir SCO OpenServer 5.0.6 as a guinea pig, but I'm open to that as an\noption.\n\nThank you in advance,\n\tLevi Senft\n", "msg_date": "Fri, 23 Nov 2001 07:27:56 GMT", "msg_from": "Levi Senft <levisenft@home.com>", "msg_from_op": true, "msg_subject": "PostGreSQL 7.1.3 & SCO OpenServer 5.0.6 problems" }, { "msg_contents": "At 07.27 23/11/01 +0000, Levi Senft wrote:\n\n>I've searched several faqs and news groups, tried using different\n>./configure options. Tried tweaking src/Makefile.shlib as described in\n>several usenet posts that I've found on dejanews.com. I've tried\n>following the instructions in the SCO specific faq and I can't get past\n>the following error message:\n>\n>\"fd.c\", line 286: error: undefined symbol: NOFILE\n>\n>I'm wondering if there are any fixes\\patches\\hacks that work reliably,\n>since I can't seem to find anything that works, or interpret and\n>implement what I've read into something that works. 
If anyone has any\n>\"definitive\" instructions, or perhaps can give a more complete\n>description of what the problem is and what needs to be done to solve\n>it, perhaps a \"to-do list\". Although I haven't looked at the code yet I\n>did see a couple of people mention that they tweaked fd.c, however I\n>couldn't find any listings of what they tweaked.\n>\n>Any help is greatly appreciated, I don't know if anyone has volunteered\n>their SCO OpenServer 5.0.6 as a guinea pig, but I'm open to that as an\n>option.\n\nI've modified the include/port/sco.h with\n\n#ifndef NOFILE\n#define NOFILE NOFILES_MIN\n#endif\n\n\nRoberto Fichera.\n\n", "msg_date": "Fri, 23 Nov 2001 14:46:29 +0100", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostGreSQL 7.1.3 & SCO OpenServer 5.0.6 problems" }, { "msg_contents": "Levi Senft <levisenft@home.com> writes:\n> \"fd.c\", line 286: error: undefined symbol: NOFILE\n\nThrow in a default definition, eg\n\n#ifndef NOFILE\n#define NOFILE 50\n#endif\n\nThis is fixed in current sources, btw, so you might just want to install\n7.2 beta...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 11:37:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostGreSQL 7.1.3 & SCO OpenServer 5.0.6 problems " } ]
[ { "msg_contents": "Hi, has anyone started to translate the backend messages into\nHungarian? If not, I can start it.\n\nZoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Fri, 23 Nov 2001 11:24:45 +0100 (CET)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "hu.po" }, { "msg_contents": "----- Original Message ----- \nFrom: Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>\nSent: Friday, November 23, 2001 5:24 AM\n\n> Hi, has anyone started to translate the backend messages into\n> Hungarian? If not, I can start it.\n\nYou should've CC'ed this to -general also, me thinks.\n\n", "msg_date": "Sat, 24 Nov 2001 12:52:12 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: hu.po" } ]
[ { "msg_contents": "\nOkay, now that Vince and I have the DNS re-sorted out, and I'm pulling\nfrom the right source, I've rebuilt the packages ...\n\nTom, can you check them over now and let me know if they look okay?\nHeader appears right now (/cvsroot vs /projects/cvsroot) ...\n\nI'll do an announce later on tonight unless there are any noticeable\nproblems with it ...\n\n\n", "msg_date": "Fri, 23 Nov 2001 08:35:58 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.2b3 packages rebuilt ..." }, { "msg_contents": "On Friday 23 November 2001 08:35 am, Marc G. Fournier wrote:\n> I'll do an announce later on tonight unless there are any noticeable\n> problems with it\n\nRPMs of 7.2b3-the-second built and uploaded. No noticeable difference here in \nthe process.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Nov 2001 10:54:59 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packages rebuilt ..." }, { "msg_contents": "> \n> Okay, now that Vince and I have the DNS re-sorted out, and I'm pulling\n> from the right source, I've rebuilt the packages ...\n> \n> Tom, can you check them over now and let me know if they look okay?\n> Header appears right now (/cvsroot vs /projects/cvsroot) ...\n\nI assume that /projects/cvsroot is still needed by anoncvs and /cvsroot\nfor login cvs? I need to know for cvs.sgml.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Nov 2001 12:10:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packages rebuilt ..." }, { "msg_contents": "\"Marc G. 
Fournier\" <scrappy@hub.org> writes:\n> Tom, can you check them over now and let me know if they look okay?\n> Header appears right now (/cvsroot vs /projects/cvsroot) ...\n\nLooks good now. Thanks for fixing the path discrepancy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2001 12:29:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packages rebuilt ... " }, { "msg_contents": "\nnothing has changed as far as the docs are concerned ... :)\n\nOn Fri, 23 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > Okay, now that Vince and I have the DNS re-sorted out, and I'm pulling\n> > from the right source, I've rebuilt the packages ...\n> >\n> > Tom, can you check them over now and let me know if they look okay?\n> > Header appears right now (/cvsroot vs /projects/cvsroot) ...\n>\n> I assume that /projects/cvsroot is still needed by anoncvs and /cvsroot\n> for login cvs? I need to know for cvs.sgml.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Fri, 23 Nov 2001 14:35:17 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.2b3 packages rebuilt ..." }, { "msg_contents": "On Fri, 23 Nov 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Tom, can you check them over now and let me know if they look okay?\n> > Header appears right now (/cvsroot vs /projects/cvsroot) ...\n>\n> Looks good now. Thanks for fixing the path discrepancy.\n\nNo probs ... I figure Peter worked hard to make sure everything was\nbuilding properly with the docs, best to make sure that its easy for us to\ncheck/test the package before we send it out to the general populace ...\n\n\n", "msg_date": "Fri, 23 Nov 2001 14:36:28 -0500 (EST)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.2b3 packages rebuilt ... " }, { "msg_contents": "Marc,\n\nSeems there is a problem with anoncvs:\n\ncvs update: authorization failed: server anoncvs.postgresql.org rejected access to /projects/cvsroot for user anoncvs\n\nOleg\n\n\nOn Fri, 23 Nov 2001, Marc G. Fournier wrote:\n\n>\n> nothing has changed as far as the docs are concerned ... :)\n>\n> On Fri, 23 Nov 2001, Bruce Momjian wrote:\n>\n> > >\n> > > Okay, now that Vince and I have the DNS re-sorted out, and I'm pulling\n> > > from the right source, I've rebuilt the packages ...\n> > >\n> > > Tom, can you check them over now and let me know if they look okay?\n> > > Header appears right now (/cvsroot vs /projects/cvsroot) ...\n> >\n> > I assume that /projects/cvsroot is still needed by anoncvs and /cvsroot\n> > for login cvs? I need to know for cvs.sgml.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 24 Nov 2001 21:46:55 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: v7.2b3 packages rebuilt ..." 
}, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Seems there is a problem with anoncvs:\n> cvs update: authorization failed: server anoncvs.postgresql.org rejected access to /projects/cvsroot for user anoncvs\n\nYeah, I'm seeing the same:\n\n$ cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n(Logging in to anoncvs@anoncvs.postgresql.org)\nCVS password:\t\t\t<-- entered empty password here\ncvs [login aborted]: authorization failed: server anoncvs.postgresql.org rejected access\n$\n\nanoncvs.postgresql.org is resolving as 64.49.215.8, in case that\nmatters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 21:40:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "anoncvs busted (was Re: v7.2b3 packages rebuilt ...)" } ]
[ { "msg_contents": "I've found, compiled and tested the libpq++ c++ program examples. I've even\nmodified them to work with my database. Now I want to take an existing\nprogram written in c++ and add the postgres classes. I also want to do this\nin my working directory, not the samples directory. The make file with the\nexamples is pretty intense. Does anyone have a simpler version that will\nset the appropriate paths for the includes? I naturally want to use the\nheader files included with postgres, not copy them around. \n", "msg_date": "Fri, 23 Nov 2001 15:19:24 GMT", "msg_from": "shield123321@hotmail.com", "msg_from_op": true, "msg_subject": "Setting up MAKE file for Postgres and C++/Newbie question" }, { "msg_contents": "A make file can be very simple indeed. These days makefiles have become\nalmost incomprehensible to all but a few!\nIf you don't want to be flexible and portable in the extreme, a simple\nmakefile for the current directory for any project goes like this:\n--------------------------------------\nINC=\"/pathFor/Other/Includes\"\n\nfoo : foo.o\n gcc -o foo foo.o\nfoo.o : foo.c\n gcc -c foo.c -I $(INC);\n--------------------------------------\nSome years ago I wrote a program makemake which created makefiles based on\nthe contents of the current dir.\nIt was for a DOS based system but should work anywhere (with perhaps a\ncouple of tweaks).\nI will see if I can dig it out.\nHope this helps.\nLee Crampton\n\n<shield123321@hotmail.com> wrote in message\nnews:0OtL7.43114$gQ1.17272737@news1.elmhst1.il.home.com...\n> I've found, compiled and tested the libpq++ c++ program examples. I've\neven\n> modified them to work with my database. Now I want to take an existing\n> program written in c++ and add the postgres classes. I also want to do\nthis\n> in my working directory, not the samples directory. The make file with\nthe\n> examples is pretty intense. Does anyone have a simpler version that will\n> set the appropriate paths for the includes? 
I naturally want to use the\n> header files included with postgres, not copy them around.\n\n\n", "msg_date": "Sun, 25 Nov 2001 11:54:20 -0000", "msg_from": "\"Lee Crampton\" <lee@avbrief.com>", "msg_from_op": false, "msg_subject": "Re: Setting up MAKE file for Postgres and C++/Newbie question" } ]
[ { "msg_contents": "Just a thought, when using ORACLE there are a lot of system views that\nenable a DBA to look at how many resources are being used by the ORACLE\nsystem. With this type of info the DBA is able to determine if he needs\nto adjust the 100+'s of configuration options in the init.ora file. I\nhave many Postgres systems with db's from 10MB to 900GB and shared memory\nfrom 128MB to 512MB. As far as I can tell there is no way for a\nPostgres DB to determine how much of the memory is being used. I would\nlike to see some type of tool (either a program and/or system views)\nthat would tell the DBA how the shared memory is being used. With this\ntype of info, the DBA can tune the memory better.\n\nWhat do you think?\nJim\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > ... However, I feel we should have *some*\n> > data points before we commit to a number that, as you say, most\nusers will\n> > implicitly be stuck with.\n> \n> Data points would be a good thing. I will freely admit that I have\nmade\n> no measurements to back up my opinion :-(\n> \n> > As for sort memory, I have no idea why this isn't much larger by\ndefault.\n> \n> The problem with sort memory is that you don't know what the\nmultiplier\n> is for it. SortMem is per sort/hash/whatever plan step, which means\n> that not only might one backend be consuming several times SortMem on\n> a complex query, but potentially all MaxBackend backends might be\ndoing\n> the same. 
In practice that seems like a pretty unlikely scenario, but\n> surely you should figure *some* function of SortMem * MaxBackends as\n> the number you need to compare to available RAM.\n> \n> The present 512K default is on the small side for current hardware,\n> no doubt, but that doesn't mean we should crank it up without thought.\n> We just recently saw a trouble report from someone who had pushed it\n> to the moon and found out the hard way not to do that.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n", "msg_date": "Fri, 23 Nov 2001 11:28:49 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: postgresql.conf (Proposed settings) " } ]
[ { "msg_contents": "I just discovered an issue related to one of the bytea changes in 7.2.\n\nWhen creating a linked table in MS Access, bytea columns get mapped to \n\"OLE Object\" as a datatype, and this type is not able to be indexed. \nPreviously, it was not possible to create an index on a bytea column, so \nthis was not an issue, but now you can.\n\nTrying to create a link to a table with an index on a bytea column, you \nget an error: \"Invalid field definition 'bytea_field_name' in \ndefinition of index or relationship\". I confirmed that the index was in \nfact the issue by dropping it and then successfully creating the link.\n\nAny thoughts on how to get around this, short of dropping indexes on all \nbytea columns? Is there any way to suppress bytea column indexes from \nview to the ODBC driver?\n\nThanks,\n\nJoe\n\n\n\n", "msg_date": "Fri, 23 Nov 2001 21:09:45 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "bytea/ODBC/MSAccess issue" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> When creating a linked table in MS Access, bytea columns get mapped to \n> \"OLE Object\" as a datatype, and this type is not able to be indexed. \n\nCould we make our ODBC driver map bytea to some datatype that Access\ndoesn't choke on?\n\nSooner or later our response to this sort of problem will have to be\n\"MS Access will be first against the wall when the revolution comes\".\nIn the meantime I'm willing to entertain marginal hacks in ODBC...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 00:42:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bytea/ODBC/MSAccess issue " }, { "msg_contents": "Tom Lane wrote:\n\n> Joe Conway <joseph.conway@home.com> writes:\n> \n>>When creating a linked table in MS Access, bytea columns get mapped to \n>>\"OLE Object\" as a datatype, and this type is not able to be indexed. 
\n>>\n> \n> Could we make our ODBC driver map bytea to some datatype that Access\n> doesn't choke on?\n> \n\n\nI did a bit of testing, and it seems that the MS Access \"Memo\" datatype \n(which PostgreSQL TEXT maps to) can handle zero bytes. The help page has \nthis to say:\n\n\"Up to 65,535 characters. (If the Memo field is manipulated through DAO \nand only text and numbers [not binary data] will be stored in it, then \nthe size of the Memo field is limited by the size of the database.)\"\n\nSo it seems to indicate it will work for bytea up to 65K in size. I \nguess it would be ideal if we could control the mapping through the ODBC \nsettings dialog (i.e. either map bytea to Memo or OLE Object as options).\n\nI've never hacked on ODBC before, but I guess I'll take a look. Is the \nrelease cycle for ODBC tied to that of the main distribution? If so, I \nguess this is a 7.3 fix.\n\n> Sooner or later our response to this sort of problem will have to be\n> \"MS Access will be first against the wall when the revolution comes\".\n\n\n:)\n\n\n> In the meantime I'm willing to entertain marginal hacks in ODBC...\n> \n\nI don't use MS Access very often, and could easily do without this \nchange myself, but it is a likely source of complaints after 7.2 is \nreleased if nothing is done about it.\n\nJoe\n\n", "msg_date": "Sat, 24 Nov 2001 11:03:28 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: bytea/ODBC/MSAccess issue" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Joe Conway <joseph.conway@home.com> writes:\n> > When creating a linked table in MS Access, bytea columns get mapped to \n> > \"OLE Object\" as a datatype, and this type is not able to be indexed. \n> \n> Could we make our ODBC driver map bytea to some datatype that Access\n> doesn't choke on?\n\nIIRC our ODBC driver maps bytea to SQL_VARBINARY which is\nable to be indexed in MS Access. 
Probably Joe is changing the\n*Max Varchar* option > 255. \nCurrently the mapping of our driver is\n SQL_VARBINARY (the length <= 255) \t<---> bytea\n SQL_LONGVARBINARY (the length can be > 255) <---> lo\n.\nMS Access couldn't handle the binary type index > 255 bytes.\nPostgreSQL hasn't been able to have indexes on bytea until\nquite recently and bytea is unavailable for LO. \nEvery application has its limitation.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 26 Nov 2001 06:34:49 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: bytea/ODBC/MSAccess issue " }, { "msg_contents": "Hiroshi Inoue wrote:\n\n>>-----Original Message-----\n>>From: Tom Lane\n>>\n>>Joe Conway <joseph.conway@home.com> writes:\n>>\n>>>When creating a linked table in MS Access, bytea columns get mapped to \n>>>\"OLE Object\" as a datatype, and this type is not able to be indexed. \n>>>\n>>Could we make our ODBC driver map bytea to some datatype that Access\n>>doesn't choke on?\n>>\n> \n> IIRC our ODBC driver maps bytea to SQL_VARBINARY which is\n> able to be indexed in MS Access. Probably Joe is changing the\n> *Max Varchar* option > 255. \n> Currently the mapping of our driver is\n> SQL_VARBINARY (the length <= 255) \t<---> bytea\n> SQL_LONGVARBINARY (the length can be > 255) <---> lo\n> .\n> MS Access couldn't handle the binary type index > 255 bytes.\n> PostgreSQL hasn't been able to have indexes on bytea until\n> quite recently and bytea is unavailable for LO. \n> Every application has its limitation.\n> \n> regards,\n> Hiroshi Inoue\n\nThanks for the reply, Hiroshi. This advice worked, and after a little\nresearch I see (as you said) that the limitation is with MS Access -- so\nnot much we can do :(\n\nJoe\n\n", "msg_date": "Sun, 25 Nov 2001 23:19:54 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: bytea/ODBC/MSAccess issue" } ]
[ { "msg_contents": "We had a discussion in late September about deprecating the postmaster's\n-o switch in the 7.2 documentation, with an eye to removing it in 7.3:\nsee thread starting at\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1036935\n\nIt wasn't entirely clear to me whether everyone had agreed to the idea\nof marking -o deprecated in the 7.2 docs, so here's your chance to\nobject if you think it's a bad idea...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 16:24:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "> We had a discussion in late September about deprecating the postmaster's\n> -o switch in the 7.2 documentation, with an eye to removing it in 7.3:\n> see thread starting at\n> \thttp://fts.postgresql.org/db/mw/msg.html?mid=1036935\n> \n> It wasn't entirely clear to me whether everyone had agreed to the idea\n> of marking -o deprecated in the 7.2 docs, so here's your chance to\n> object if you think it's a bad idea...\n\nIs there no overlap in the postgres/postmaster flags?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 16:58:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there no overlap in the postgres/postmaster flags?\n\nYeah, there is. That's why we need to give people warning of what\nwe intend to break. 
See prior thread.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 17:00:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there no overlap in the postgres/postmaster flags?\n> \n> Yeah, there is. That's why we need to give people warning of what\n> we intend to break. See prior thread.\n\nSo we just mention it is going away, but there are duplicates so they\ncan't start removing -o yet?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 17:03:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So we just mention it is going away, but there are duplicates so they\n> can't start removing -o yet?\n\nWell, we'd have to give a table of recommended translations, eg\n\n\t-o '-S n'\t=>\t--sort-mem=n\n\nThe translations all exist already, but we've got to start telling\npeople to quit using the old switches, or we'll never be able to clean\nthem up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 17:10:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? 
" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So we just mention it is going away, but there are duplicates so they\n> > can't start removing -o yet?\n> \n> Well, we'd have to give a table of recommended translations, eg\n> \n> \t-o '-S n'\t=>\t--sort-mem=n\n> \n> The translations all exist already, but we've got to start telling\n> people to quit using the old switches, or we'll never be able to clean\n> them up.\n\nYes, now I remember. Do we support long command-line options on all\nplatforms? I don't think so.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 17:12:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, now I remember. Do we support long command-line options on all\n> platforms? I don't think so.\n\nGUC variable assignment works on all platforms.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 17:14:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, now I remember. Do we support long command-line options on all\n> > platforms? I don't think so.\n> \n> GUC variable assignment works on all platforms.\n\nOK, but when we recommend, we had better tell them to start using GUC\nand not long command-line options _unless_ long options are supported on\ntheir platform. Without that, there will be confusion.\n\nOf course, now that we are recommending GUC, all the flags become\nuseless. Kind of a circular arguement here. 
:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 17:18:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, but when we recommend, we had better tell them to start using GUC\n> and not long command-line options _unless_ long options are supported on\n> their platform. Without that, there will be confusion.\n\nThis is entirely irrelevant, because the postmaster and backend don't\nhave any long options (except GUC variables which work anyway).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 17:22:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, but when we recommend, we had better tell them to start using GUC\n> > and not long command-line options _unless_ long options are supported on\n> > their platform. Without that, there will be confusion.\n> \n> This is entirely irrelevant, because the postmaster and backend don't\n> have any long options (except GUC variables which work anyway).\n\nOh, I see. We don't use long options for postmaster/postgres, just the\n-c option to set a GUC value. Got it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 17:27:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So we just mention it is going away, but there are duplicates so they\n> > can't start removing -o yet?\n> \n> Well, we'd have to give a table of recommended translations, eg\n> \n> \t-o '-S n'\t=>\t--sort-mem=n\n\nThis is the part that threw me off. I see in the postmaster docs under\n-c:\n On some systems it is also possible to equivalently\n use GNU-style\t long\toptions in the form\n --name=value.\n\nso we would have to recommend '-c sort-mem=n.'\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 17:47:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This is the part that threw me off. I see in the postmaster docs under\n> -c:\n> On some systems it is also possible to equivalently\n> use GNU-style\t long\toptions in the form\n> --name=value.\n\n> so we would have to recommend '-c sort-mem=n.'\n\n--sort-mem works, period. Read the code.\n\nThat part of the docs is in error, evidently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Nov 2001 18:01:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > This is the part that threw me off. 
I see in the postmaster docs under\n> > -c:\n> > On some systems it is also possible to equivalently\n> > use GNU-style\t long\toptions in the form\n> > --name=value.\n> \n> > so we would have to recommend '-c sort-mem=n.'\n> \n> --sort-mem works, period. Read the code.\n> \n> That part of the docs is in error, evidently.\n\nDocs updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Nov 2001 19:15:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Tom Lane writes:\n\n> We had a discussion in late September about deprecating the postmaster's\n> -o switch in the 7.2 documentation, with an eye to removing it in 7.3:\n\nI'm not sure this notice would be utterly helpful since we have no\nconcrete plans for any of the other options that need to be merged. The\nbest we can say right now is that \"all 'postgres' options might change or\ndisappear soon\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 25 Nov 2001 23:29:57 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > This is the part that threw me off. I see in the postmaster docs under\n> > > -c:\n> > > On some systems it is also possible to equivalently\n> > > use GNU-style\t long\toptions in the form\n> > > --name=value.\n> >\n> > > so we would have to recommend '-c sort-mem=n.'\n> >\n> > --sort-mem works, period. 
Read the code.\n> >\n> > That part of the docs is in error, evidently.\n>\n> Docs updated.\n\nPlease change it back.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 25 Nov 2001 23:30:57 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Tom Lane writes:\n\n> --sort-mem works, period. Read the code.\n>\n> That part of the docs is in error, evidently.\n\nNo it's not, unfortunately. BSD versions of getopt, including the one we\nship as replacement, have a bug that considers any argument that starts\nwith '--' to be equivalent with '--' (which means end of options).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 25 Nov 2001 23:31:06 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> No it's not, unfortunately. BSD versions of getopt, including the one we\n> ship as replacement, have a bug that considers any argument that starts\n> with '--' to be equivalent with '--' (which means end of options).\n\nUgh. Nonetheless, that doesn't equate to \"you need GNU getopt to use\nthis\". Can we be more specific about whether it works or not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 17:55:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? 
" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> We had a discussion in late September about deprecating the postmaster's\n>> -o switch in the 7.2 documentation, with an eye to removing it in 7.3:\n\n> I'm not sure this notice would be utterly helpful since we have no\n> concrete plans for any of the other options that need to be merged.\n\nI was just planning to recommend not using -o, in favor of the already-\nexisting alternatives (-o -F => -F, -o -S n => --sort-mem=n, etc).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 17:58:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > No it's not, unfortunately. BSD versions of getopt, including the one we\n> > ship as replacement, have a bug that considers any argument that starts\n> > with '--' to be equivalent with '--' (which means end of options).\n> \n> Ugh. Nonetheless, that doesn't equate to \"you need GNU getopt to use\n> this\". Can we be more specific about whether it works or not?\n\nOK, I just checked BSD/OS, and see in the docs:\n\n The getopt() function returns -1 when the argument list is exhausted, or\n a non-recognized option is encountered. The interpretation of options\n in the argument list may be cancelled by the option `--' (double dash)\n which causes getopt() to signal the end of argument processing and\n returns -1. 
When all options have been processed (i.e., up to the first\n non-option argument), getopt() returns -1.\n\nHowever, I also see:\n\n Option arguments are allowed to begin with ``-''; this is reasonable but\n reduces the amount of error checking possible.\n\nI see in postmaster.c:\n\n while ((opt = getopt(argc, argv, \"A:a:B:b:c:D:d:Fh:ik:lm:MN:no:p:Ss-:\")) != \n ^\nAnd I started the postmaster using:\n\n ./bin/postmaster -B 2000 -i $DEBUG --sort-mem=60\n\nso while the documentation says \"--\" ends arguments, it appears if you\nspecify \"-\" to getopt, it will honor it and not end the argument list. \n\nBecause this is identical on BSD/OS and FreeBSD, I assume all the BSD's\nare the same. Peter, was there a specific failure of \"--\" options that\nyou remember?\n\nI will be glad to put the docs back to warning about \"--\" options if is\nindeed true, or perhaps we can be more specific.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Nov 2001 19:50:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> And I started the postmaster using:\n> ./bin/postmaster -B 2000 -i $DEBUG --sort-mem=60\n\nBut\n\n(a) did the sort_mem setting \"take\"?\n\n(b) can you put more than one variable setting in there and have\nthem all take?\n\nI tried\n\n\tpostmaster ... 
--enable_hashjoin=false --enable-mergejoin=false\n\nand then verified\n\nregression=# show enable_hashjoin;\nNOTICE: enable_hashjoin is off\nSHOW VARIABLE\nregression=# show enable_mergejoin;\nNOTICE: enable_mergejoin is off\nSHOW VARIABLE\n\nso it works okay on HPUX.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 19:55:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> No it's not, unfortunately. BSD versions of getopt, including the one we\n> ship as replacement, have a bug that considers any argument that starts\n> with '--' to be equivalent with '--' (which means end of options).\n\nI believe we could trivially fix our substitute version: at line 74 of\nsrc/utils/getopt.c,\n\n\t\tif (place[1] && *++place == '-')\n\nshould be\n\n\t\tif (place[1] && *++place == '-' && place[1] == '\\0')\n\nIt might still not work on older BSDen, but there's no reason it\nshouldn't work on platforms where we use our own code.\n\nA slightly more aggressive answer would be to use our own code always,\nor to test for brokenness of the system getopt in configure and use our\nown code if so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 20:04:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > And I started the postmaster using:\n> > ./bin/postmaster -B 2000 -i $DEBUG --sort-mem=60\n> \n> But\n> \n> (a) did the sort_mem setting \"take\"?\n> \n\nSure did. I tried a sort value too low and it complained. \n\n> (b) can you put more than one variable setting in there and have\n> them all take?\n\nYes. 
In your test:\n\n ./bin/postmaster -B 2000 -i $DEBUG --enable_hashjoin=false --enable-mergejoin=false\n\nI get from a restarted postmaster:\n\n\ttest=> show enable_hashjoin;\n\tNOTICE: enable_hashjoin is off\n\tSHOW VARIABLE\n\ttest=> show enable_mergejoin;\n\tNOTICE: enable_mergejoin is off\n\tSHOW VARIABLE\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Nov 2001 20:44:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> (a) did the sort_mem setting \"take\"?\n\n> Sure did. I tried a sort value too low and it complained. \n\nOkay, so the original bug is fixed on your version of BSD. (Which\nis what, again?)\n\nI looked a bit at configure and realized that we have no configure\ntest that causes src/utils/getopt.c to be selected. Apparently,\nthe *only* platform where src/utils/getopt.c is used is native WIN32,\nso the \"--foo\" bug in it is irrelevant to the postmaster anyway.\nBut I'm still inclined to fix the bug.\n\nIt would be good to try to get a reading on whether there are any\ncurrent BSD distros that still have the getopt bug. But what I'm\ninclined to do is note under the description of \"--foo\" that there\nare a few older platforms where it won't work and you have to use -c,\nrather than writing the docs on the assumption that -c is what most\npeople need to use.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 20:50:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? 
" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> (a) did the sort_mem setting \"take\"?\n> \n> > Sure did. I tried a sort value too low and it complained. \n> \n> Okay, so the original bug is fixed on your version of BSD. (Which\n> is what, again?)\n\nI am using BSD/OS 4.2. Because the other BSD's mention \"--\" as\nsupported, I assume they are OK too. Perhaps our BSD getopt() is an\nolder version.\n\nIn digging, I see this comment in the getopt.c BSD/OS source:\n\n * Pick up from where we left off, or start anew if necessary.\n * When starting on a new argument, check for \"-\" and \"--\".\n * Compatibility exception: a lone \"-\" is considered an option\n * if the option string includes \"-\".\n\nI also see new code handling \"--\" However, I don't see this message or\ncode in the NetBSD getopt.c source. I do see this NetBSD commit message\nfrom January, 1999:\n\n 1003.2-92 specifies the string \"--\" to be recognized as the option list\n delimiter as opposed to any string merely beginning with '-''-'; change\n to match the standard. From Simon J. Gerraty <sjg@quick.com.au> in PR\n lib/6762.\n\nso it looks like each BSD fixed it in their own way. Looking at\nFreeBSD, I don't see any commit message describing the fix. If I\ncompare the NetBSD, FreeBSD, and our own getopt sources, I see this\naddition in NetBSD which appears to be the fix mentioned in the NetBSD\ncommit message above:\n\n if (place[1] && *++place == '-' /* found \"--\" */\n --> && place[1] == '\\0') {\n\nAnd to confirm that, I see at:\n\n http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/lib/libc/stdlib/getopt.c.diff?r1=1.12&r2=1.13\n\nthis exact change:\n\n- if (place[1] && *++place == '-') { /* found \"--\" */\n+ if (place[1] && *++place == '-' /* found \"--\" */\n+ && place[1] == '\\0') {\n\nSo, every BSD has fixed it themselves, and we should probably apply the\nabove fix to our own copy, or just grab NetBSD's. 
Also, because FreeBSD\ndoesn't have this fix, we should ask them to add it, and perhaps add a\nconfigure test to see if getopt \"--\" works on this platform.\n\n\n> I looked a bit at configure and realized that we have no configure\n> test that causes src/utils/getopt.c to be selected. Apparently,\n> the *only* platform where src/utils/getopt.c is used is native WIN32,\n> so the \"--foo\" bug in it is irrelevant to the postmaster anyway.\n> But I'm still inclined to fix the bug.\n> \n> It would be good to try to get a reading on whether there are any\n> current BSD distros that still have the getopt bug. But what I'm\n> inclined to do is note under the description of \"--foo\" that there\n> are a few older platforms where it won't work and you have to use -c,\n> rather than writing the docs on the assumption that -c is what most\n> people need to use.\n\nAgreed, though we may want to hard-code using our own, fixed getopt()\nfor FreeBSD.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Nov 2001 21:32:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> (a) did the sort_mem setting \"take\"?\n> \n> > Sure did. I tried a sort value too low and it complained. \n> \n> Okay, so the original bug is fixed on your version of BSD. (Which\n> is what, again?)\n\nI just ran a test on postgresql.org. 
There is no SysV support so I\ncan't initdb, but I see:\n\n $ postmaster -D x\n postmaster does not find the database system.\n Expected to find it in the PGDATA directory \"x\",\n but unable to open file \"x/global/pg_control\": No such file or directory\n\n $ postmaster -c sort-mem=30 -D x\n postmaster does not find the database system.\n Expected to find it in the PGDATA directory \"x\",\n but unable to open file \"x/global/pg_control\": No such file or directory\n\n $ postmaster --sort-mem=30 -D x\n postmaster: invalid argument -- -D\n Try 'postmaster --help' for more information.\n\nNotice that the last attempt has the -D after a \"--\" options, rather\nthan after a \"-c\" options. Sure looks like FreeBSD doesn't have the\nfix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Nov 2001 21:39:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "> Tom Lane writes:\n> \n> > Ugh. Nonetheless, that doesn't equate to \"you need GNU getopt to use\n> > this\".\n> \n> ... which is not what it used to say either.\n> \n> > Can we be more specific about whether it works or not?\n> \n> I reported the bug on 2000-11-06 to this list and added the -c option\n> briefly afterwards. The bug caused initdb to fail on whatever FreeBSD\n> version postgresql.org was running at the time.\n> \n> I agree that we could alter the \"works on some platforms\" to \"might not\n> work on some platforms\".\n\nI just checked and OpenBSD doesn't have the fix either. 
Questions:\n\n1) should we patch our getopt.c with that single-line fix?\n2) should we force our own getopt on FreeBSD and OpenBSD for 7.2?\n3) how should we update the docs for this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 14:15:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Tom Lane writes:\n\n> Ugh. Nonetheless, that doesn't equate to \"you need GNU getopt to use\n> this\".\n\n... which is not what it used to say either.\n\n> Can we be more specific about whether it works or not?\n\nI reported the bug on 2000-11-06 to this list and added the -c option\nbriefly afterwards. The bug caused initdb to fail on whatever FreeBSD\nversion postgresql.org was running at the time.\n\nI agree that we could alter the \"works on some platforms\" to \"might not\nwork on some platforms\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 26 Nov 2001 20:17:29 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I just checked and OpenBSD doesn't have the fix either. Questions:\n\n> 1) should we patch our getopt.c with that single-line fix?\n\nFixing our getopt seems like a no-brainer: yes.\n\n> 2) should we force our own getopt on FreeBSD and OpenBSD for 7.2?\n\nThis is more debatable. If it were earlier in the beta cycle I'd say\nyes, but at this point I don't think we should take any risks for what\nis after all a cosmetic issue. 
Put it on TODO for 7.3, instead.\n\n> 3) how should we update the docs for this?\n\n\"--name=val may not work on some platforms, if not use -c name=val\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 14:25:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I just checked and OpenBSD doesn't have the fix either. Questions:\n> \n> > 1) should we patch our getopt.c with that single-line fix?\n> \n> Fixing our getopt seems like a no-brainer: yes.\n\nDone.\n\n> > 2) should we force our own getopt on FreeBSD and OpenBSD for 7.2?\n> \n> This is more debatable. If it were earlier in the beta cycle I'd say\n> yes, but at this point I don't think we should take any risks for what\n> is after all a cosmetic issue. Put it on TODO for 7.3, instead.\n\nAdded to TODO:\n\n * Use our own getopt() for FreeBSD/OpenBSD to allow --xxx flags (Bruce)\n\n> > 3) how should we update the docs for this?\n> \n> \"--name=val may not work on some platforms, if not use -c name=val\".\n\nOK, I will add that it doesn't work on FreeBSD and OpenBSD,\nspecifically. This way, if it doesn't work on other platforms,\nhopefully we will hear about it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 14:32:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" } ]
[ { "msg_contents": "According to incidents.org, a new worm that infects MS SQL servers\nis currently spreading fast, and it's being used to launch distributed\ndenial-of-service attacks against various sites: see\nhttp://www.incidents.org/diary/diary.php?id=82\n\nThe security flaw that the worm exploits is not, um, deep.  It seems\nthat Microsoft ships MS SQL with a default system-admin account having\nthe fixed name \"sa\" and no password.  If that hasn't been changed,\nanyone can do anything they want using the server machine.\n\nWhile Microsoft's carelessness about security is (justly) infamous,\nI'm not as inclined to say \"Redmond is a bunch of bozos\" as \"there\nbut for the grace of God go we\".  This is a heads-up that security\nissues *do* matter, even for databases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 00:20:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Security note: MS SQL is current worm vector" }, { "msg_contents": "This may impact Sybase ASE installations as well. AFAIR Sybase uses system\naccount sa and no password.\n\ndali\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Tom Lane\nSent: Sunday, 25 November 2001 18:20\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Security note: MS SQL is current worm vector\n\n\nAccording to incidents.org, a new worm that infects MS SQL servers is\ncurrently spreading fast, and it's being used to launch distributed\ndenial-of-service attacks against various sites: see\nhttp://www.incidents.org/diary/diary.php?id=82\n\nThe security flaw that the worm exploits is not, um, deep.  It seems\nthat Microsoft ships MS SQL with a default system-admin account having\nthe fixed name \"sa\" and no password. 
If that hasn't been changed,\nanyone can do anything they want using the server machine.\n\nWhile Microsoft's carelessness about security is (justly) infamous, I'm\nnot as inclined to say \"Redmond is a bunch of bozos\" as \"there but for\nthe grace of God go we\". This is a heads-up that security issues *do*\nmatter, even for databases.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Sun, 25 Nov 2001 21:35:02 +1300", "msg_from": "\"Dalibor Andzakovic\" <dali@dali.net.nz>", "msg_from_op": false, "msg_subject": "Re: Security note: MS SQL is current worm vector" }, { "msg_contents": "Yeah, by default Postgresql ships practically without any access controls.\n\nFortunately most self compiled Postgresql installations don't have remote\naccess enabled (I have long assumed that on most Unix or Unixlike systems\nlocal users = root users, so postgresql's lack of local user security by\ndefault isn't that big an issue).\n\nI have no experience with prepackaged Postgresql installations.\n\nAnyway most DB installations should be behind firewalls. That said many\nmicrosoft users may not even know they have a DB installation, let alone\nthat they need to set a password ;).\n\nCheerio,\nLink.\n\nAt 12:20 AM 11/25/01 -0500, Tom Lane wrote:\n>According to incidents.org, a new worm that infects MS SQL servers\n>is currently spreading fast, and it's being used to lauch distributed\n>denial-of-service attacks against various sites: see\n>http://www.incidents.org/diary/diary.php?id=82\n>\n>The security flaw that the worm exploits is not, um, deep. It seems\n>that Microsoft ships MS SQL with a default system-admin account having\n>the fixed name \"sa\" and no password. 
If that hasn't been changed,\n>anyone can do anything they want using the server machine.\n>\n>While Microsoft's carelessness about security is (justly) infamous,\n>I'm not as inclined to say \"Redmond is a bunch of bozos\" as \"there\n>but for the grace of God go we\". This is a heads-up that security\n>issues *do* matter, even for databases.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>\n\n", "msg_date": "Sun, 25 Nov 2001 16:35:52 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Security note: MS SQL is current worm vector" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Yeah, by default Postgresql ships practically without any access controls.\n\nIt ain't *that* bad. The default configuration is \"no remote access,\nperiod\", even if you give -i in the postmaster switches. True, there\nare no local access controls by default, but unless someone ignores\nthe instructions and runs the postmaster as \"bin\" or another\nquasi-privileged user, there's no way I can see to use the database to\nbreak into root. (Barring site security holes, which could be exploited\nby any local user anyway.)\n\nMS SQL's problem is that any remote attacker who can reach the machine\nby TCP is instantly root, or whatever the equivalent concept is on NT.\nIf you don't have the server port firewalled you're a sitting duck.\n\nI do wonder whether we shouldn't list \"think about your access controls\"\nas an explicit step in the installation instructions or server startup\ninstructions. 
The default configuration is definitely uncool on\nmultiuser machines, but a novice might not find that out till too late.\n\t \n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 12:13:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Security note: MS SQL is current worm vector " }, { "msg_contents": "On Sunday 25 November 2001 18:13, Tom Lane wrote:\n> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > Yeah, by default Postgresql ships practically without any access\n> > controls.\n>\n(...)\n> I do wonder whether we shouldn't list \"think about your access controls\"\n> as an explicit step in the installation instructions or server startup\n> instructions. The default configuration is definitely uncool on\n> multiuser machines, but a novice might not find that out till too late.\n\nIt might be worth explicitly mentioning the following:\n\n1) use initdb with the -W option, so that a superuser password\n is set during db initialisation and before the server is started;\n2) before starting the server change the appropriate settings\n in pg_hba.conf from 'trusted' to 'password' (or whatever other\n authentication system is to be used).\n\nParticularly the point about initdb with -W isn't mentioned\nin the \"7.1 Administrator's Guide\" (section 3.2, 'Creating\na database cluster'), which is probably the first port of call \nfor many first time admin/users.\n\nFollowing these steps should exclude any possibility\nof even local users gaining uncontrolled access to the\nbackend. 
(Motto: \"Never Trust Anyone\" ;-)\n\nYours\n\nIan Barwick\n", "msg_date": "Sun, 25 Nov 2001 19:17:44 +0100", "msg_from": "Ian Barwick <barwick@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Security note: MS SQL is current worm vector" }, { "msg_contents": "On Sunday 25 November 2001 03:35 am, Lincoln Yeoh wrote:\n> Fortunately most self compiled Postgresql installations don't have remote\n> access enabled (I have long assumed that on most Unix or Unixlike systems\n> local users = root users, so postgresql's lack of local user security by\n> default isn't that big an issue).\n\n> I have no experience with prepackaged Postgresql installations.\n\nThe RPMset ships with TCP/IP socket listening off by default.  I've had more \nquestions on 'why isn't it turned on by default like it was in 7.0' than any \nother single subject.  To all who asked -- _this_ is why.\n\nHowever, since postmaster doesn't start or run as root, a compromise of \npostmaster isn't going to result in catastrophic remote root.  At worst your \ndatabase is compromised -- which is bad, but not as bad as your machine being \na stepping-stone for a DDoS.\n\nThis is, IMHO, one of the worst things about NT 'services' -- they have \nentirely too many rights in the filesystem.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sun, 25 Nov 2001 19:55:28 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Security note: MS SQL is current worm vector" }, { "msg_contents": "> On Sunday 25 November 2001 18:13, Tom Lane wrote:\n> > Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > > Yeah, by default Postgresql ships practically without any access\n> > > controls.\n> >\n> (...)\n> > I do wonder whether we shouldn't list \"think about your access controls\"\n> > as an explicit step in the installation instructions or server startup\n> > instructions. 
The default configuration is definitely uncool on\n> > multiuser machines, but a novice might not find that out till too late.\n> \n> It might be worth explicitly mentioning the following:\n> \n> 1) use initdb with the -W option, so that a superuser password\n> is set during db initialisation and before the server is started;\n\nI have added documentation for the -W flag. You can see it at:\n\n\thttp://216.55.132.35/main/writings/pgsql/sgml/creating-cluster.html\n\n\n> 2) before starting the server change the appropriate settings\n> in pg_hba.conf from 'trusted' to 'password' (or whatever other\n> authentication system is to be used).\n\nAlso mentioned.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Dec 2001 15:50:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Security note: MS SQL is current worm vector" } ]
[ { "msg_contents": "Hello,\n\nI need to use a notify in rules with where conditions. In 7.1.3, I found a\nbug and fixed it. Then I downloaded version 7.2 to verify the bug\nand to send in the patch, but the behavior had been changed.\n\ntest=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n\nThe tables:\n\ntest=# \\\\d rd\n Table \"rd\"\n Column | Type | Modifiers\n--------+----------+-----------\n a | smallint |\n b | smallint |\n c | text |\n\n Table \"ri\"\n Column | Type | Modifiers\n--------+----------+--------------\n d | smallint |\n e | text |\n a | smallint | default 1000\n b | smallint | default 2000\n\nWhen I try to create the rule:\n\nCREATE RULE upd_rd\nAS ON\nUPDATE TO ri\nWHERE\n NEW.a IS NOT NULL \n\tAND NEW.b IS NOT NULL\nDO (\n notify foo;\n update rd set a=new.a,b=new.b\n where a=old.a and b=old.b;\n)\n\nthe output was:\n\npsql:sql/rd_ri/bug_upd.sql:11: pqReadData() -- backend closed the\nchannel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\npsql:sql/rd_ri/bug_upd.sql:11: connection to server was lost\n\n\nI fixed it with the attached patch and the rule works well. 
But in 7.2b2 i\ncan't do it...\n\nWith the same rule, when i when I try to create it in 7.2b3 the output\nwas:\n\ntest=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2b3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\npsql:sql/rd_ri/bug_upd.sql:11: ERROR: Rules with WHERE conditions may\nonly have SELECT, INSERT, UPDATE, or DELETE actions\n\n\nWhy was changed?\nIs it possible to maintain the previous behavior with my correction?\n\nThanks in advance,\nSergio", "msg_date": "Sun, 25 Nov 2001 19:21:44 -0300", "msg_from": "Sergio Pili <sergiop@sinectis.com.ar>", "msg_from_op": true, "msg_subject": "Change in rule behavior?" }, { "msg_contents": "Sergio Pili <sergiop@sinectis.com.ar> writes:\n> psql:sql/rd_ri/bug_upd.sql:11: ERROR: Rules with WHERE conditions may\n> only have SELECT, INSERT, UPDATE, or DELETE actions\n> Why was changed?\n> Is it possible to maintain the previous behavior with my correction?\n\nBecause it didn't work. Your patch might prevent a coredump, but it\ndoesn't make the rule work correctly. See the comment on the error\nmessage:\n\n /*\n * We cannot support utility-statement actions (eg NOTIFY)\n * with nonempty rule WHERE conditions, because there's no way\n * to make the utility action execute conditionally.\n */\n if (top_subqry->commandType == CMD_UTILITY &&\n stmt->whereClause != NULL)\n elog(ERROR, \"Rules with WHERE conditions may only have SELECT, INSERT, UPDATE, or DELETE actions\");\n\nWhat actually happened in your patched 7.1 was that the NOTIFY message\nwas sent regardless of whether the WHERE condition was true or not.\nIf you want that behavior, it's easy enough to get: put the NOTIFY in a\nseparate rule that has no WHERE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 18:17:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Change in rule behavior? 
" }, { "msg_contents": "> What actually happened in your patched 7.1 was that the NOTIFY message\n> was sent regardless of whether the WHERE condition was true or not.\n> If you want that behavior, it's easy enough to get: put the NOTIFY in a\n> separate rule that has no WHERE.\n\n\nOh!, you are all right of course.\n\nIn fact I not discovered this change with the Notify but with the\nDeactivate Rule...\nWe proposed this command and you responded that you needed an example.\nWe send it and we don't receive answer. I take advantage to wonder for\nthat topic... \n\nUsing the Deactivate Rule that we propose, doesn't care that the same\none is executed regardless of whether the WHERE condition was true or\nnot, but that would be already a detail of the new command and not\nsomething general as me it proposed.\n\nThanks and regards\n\nSergio.\n", "msg_date": "Mon, 26 Nov 2001 07:53:48 -0300", "msg_from": "Sergio Pili <sergiop@sinectis.com.ar>", "msg_from_op": true, "msg_subject": "Re: Change in rule behavior?" } ]
[ { "msg_contents": "Hello,\n\npg_get_ruledef cannot read the following rule:\n\ntest=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2b3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n\nWith the following tables:\n\ntest=# \\d rd\n Table \"rd\"\n Column | Type | Modifiers\n--------+----------+-----------\n a | smallint |\n b | smallint |\n c | text |\n\n Table \"ri\"\n Column | Type | Modifiers\n--------+----------+--------------\n d | smallint |\n e | text |\n a | smallint | default 1000\n b | smallint | default 2000\n\n\nI create the following rule:\n\nCREATE RULE ins_rd\nAS ON INSERT TO ri\nWHERE NEW.a IS NOT NULL \n\tAND NEW.b IS NOT NULL\nDO\n INSERT INTO rd (a,b)\n select distinct new.a,new.b\n\n\nThe rule works well. But when i select pg_rules:\n\ntest=# select * from pg_rules;\nERROR: Invalid attnum 3 for rangetable entry *SELECT*\n\n\nThanks in advance\n\nSergio.\n", "msg_date": "Sun, 25 Nov 2001 19:24:16 -0300", "msg_from": "Sergio Pili <sergiop@sinectis.com.ar>", "msg_from_op": true, "msg_subject": "Bug in pg_get_ruledef?" }, { "msg_contents": "Sergio Pili <sergiop@sinectis.com.ar> writes:\n> test=# select * from pg_rules;\n> ERROR: Invalid attnum 3 for rangetable entry *SELECT*\n\nProblem confirmed here. Will look at it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2001 18:05:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_get_ruledef? 
" }, { "msg_contents": "Sergio Pili <sergiop@sinectis.com.ar> writes:\n> pg_get_ruledef cannot read the following rule:\n\nFix committed --- many thanks for the report!\n\nAttached is the patch against current sources, if you need it.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/utils/adt/ruleutils.c.orig\tMon Nov 19 14:51:20 2001\n--- src/backend/utils/adt/ruleutils.c\tSun Nov 25 19:18:32 2001\n***************\n*** 769,775 ****\n--- 769,788 ----\n \t\tappendStringInfo(buf, \" WHERE \");\n \n \t\tqual = stringToNode(ev_qual);\n+ \n+ \t\t/*\n+ \t\t * We need to make a context for recognizing any Vars in the qual\n+ \t\t * (which can only be references to OLD and NEW). Use the rtable\n+ \t\t * of the first query in the action list for this purpose.\n+ \t\t */\n \t\tquery = (Query *) lfirst(actions);\n+ \n+ \t\t/*\n+ \t\t * If the action is INSERT...SELECT, OLD/NEW have been pushed\n+ \t\t * down into the SELECT, and that's what we need to look at.\n+ \t\t * (Ugly kluge ... try to fix this when we redesign querytrees.)\n+ \t\t */\n+ \t\tquery = getInsertSelectQuery(query, NULL);\n \n \t\tcontext.buf = buf;\n \t\tcontext.namespaces = makeList1(&dpns);\n", "msg_date": "Sun, 25 Nov 2001 19:30:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_get_ruledef? " } ]
[ { "msg_contents": "We needed to install 7.2b3 on Irix 6.5.13 with MIPSpro Compilers:\nVersion 7.30\nwhen installing as we installed 7.1.3, it is with-template=irix5 adding\nto irix5 these lines:\n CC=cc\n CFLAGS='-n32 -O2 -r12000'\n LDFLAGS='-n32 -O2 -r12000'\n\n where -o2:\n\n Turns on extensive optimization. The\noptimizations at\n this level are generally conservative, in the\nsense that\n they are virtually always beneficial, provide\n improvements commensurate to the compile time\nspent to\n achieve them, and avoid changes which affect such\nthings\n as floating point accuracy.\n\n -r12000 specifies the processor.\n\n and -n32: Generates a (new) 32-bit object.\n\n it is necessary to use cc because gcc gives assembler errors\n\nWhen installing 7.2b3 we receive after gmake:\n len = offsetof(PgStat_MsgTabpurge, m_tableid[msg.m_nentries])\nmust have a constant value\n\nI think it's because definition in stddef.h is incompatible with usage\n stdef.h:\n #if defined(_COMPILER_VERSION) && (_COMPILER_VERSION >= 400)\n #define offsetof(t, memb) ((size_t)__INTADDR__(&(((t\n*)0)->memb)))\n #else\n #define offsetof(s, m) (size_t)(&(((s *)0)->m))\n #endif\n\nwe managed to install forcing postgres to redefine offsetof, it is\nremoving #ifndef offsetof\nafter that it installed of course giving warning in all redefinition,\nbut it already works.\n\nHope it helps\n\n\n", "msg_date": "Mon, 26 Nov 2001 11:35:13 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "installing 7.2b3 on IRIX 6.5.13" }, { "msg_contents": "[ redirected from pgsql-admin ]\n\nLuis Amigo <lamigo@atc.unican.es> writes:\n> When installing 7.2b3 we receive after gmake:\n> len = offsetof(PgStat_MsgTabpurge, m_tableid[msg.m_nentries])\n> must have a constant value\n\nIndeed, a quick look in the C spec says that offsetof is required to\nhave a constant value, so this coding is unportable. I've repaired\nit in CVS. 
Thanks for the report!\n\nIt's not clear to me whether we should change template/irix5 or not.\nIt sounds like gcc is misinstalled on your machine, but that doesn't\nnecessarily mean that no one is using gcc successfully on IRIX, so\nI don't want to force CC=cc. Possibly this would make sense:\n\nif test \"$GCC\" = yes ; then\n CFLAGS=\"-O2\"\nelse\n CFLAGS=\"-n32 -O2 -r12000\"\n LDFLAGS=\"-n32 -O2 -r12000\"\nfi\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 17:39:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13 " }, { "msg_contents": "Tom Lane writes:\n\n> It's not clear to me whether we should change template/irix5 or not.\n> It sounds like gcc is misinstalled on your machine, but that doesn't\n> necessarily mean that no one is using gcc successfully on IRIX, so\n> I don't want to force CC=cc.\n\nOne of these days I'm going to write this down somewhere: GCC + Irix +\nPostgreSQL does not work -- until proven otherwise and/or GCC is fixed.\n(The reason that the assembly fails is unrelated to this bug; it's just\nthat no one ever bothered to work on it because there's no use anyway.)\nI was going to suggest myself someday that we force CC=cc, but it should\nbe done in configure.in (near line 274) and not in the template file.\n\n> Possibly this would make sense:\n>\n> if test \"$GCC\" = yes ; then\n> CFLAGS=\"-O2\"\n> else\n> CFLAGS=\"-n32 -O2 -r12000\"\n> LDFLAGS=\"-n32 -O2 -r12000\"\n> fi\n\nWe've had successful reports for Irix in the past, so I don't think the -n\nand -r flags are strictly necessary -- at least I'd like to see more\ninformation regarding them. What makes -n32 and -r12000 better than, say,\n-n64 and -r6000?\n\nThe -O2 seems okay. 
Its lack is probably a remnant from the old fmgr\ntimes.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 27 Nov 2001 16:36:04 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13 " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> One of these days I'm going to write this down somewhere: GCC + Irix +\n> PostgreSQL does not work -- until proven otherwise and/or GCC is fixed.\n\nOh. Yup, that should be documented or enforced by configure.\n\n> I was going to suggest myself someday that we force CC=cc, but it should\n> be done in configure.in (near line 274) and not in the template file.\n\nMakes sense to me to do it in configure.in. Shall we go ahead and put\nthat in?\n\n> We've had successful reports for Irix in the past, so I don't think the -n\n> and -r flags are strictly necessary -- at least I'd like to see more\n> information regarding them. What makes -n32 and -r12000 better than, say,\n> -n64 and -r6000?\n\nLuis' followup indicated that -r wasn't really needed. Not sure about -n.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 10:38:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13 " }, { "msg_contents": "Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > One of these days I'm going to write this down somewhere: GCC + Irix +\n> > PostgreSQL does not work -- until proven otherwise and/or GCC is fixed.\n>\n> Oh. Yup, that should be documented or enforced by configure.\n>\n> > I was going to suggest myself someday that we force CC=cc, but it should\n> > be done in configure.in (near line 274) and not in the template file.\n>\n> Makes sense to me to do it in configure.in. 
Shall we go ahead and put\n> that in?\n>\n> > We've had successful reports for Irix in the past, so I don't think the -n\n> > and -r flags are strictly necessary -- at least I'd like to see more\n> > information regarding them. What makes -n32 and -r12000 better than, say,\n> > -n64 and -r6000?\n>\n> Luis' followup indicated that -r wasn't really needed. Not sure about -n.\n>\n> regards, tom lane\n\nsorry for following to tom (a mouse mistake)\nIn our case -n32 (force new 32 bit object(maybe -o32 in old platforms)) was\nnecessary because with 64 bit object we were getting linking errors.\n-rx is processor instruction set, it is not necessary but I recommend it for\nbetter performance.\n\n\n\n", "msg_date": "Tue, 27 Nov 2001 17:56:10 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13" }, { "msg_contents": "Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > One of these days I'm going to write this down somewhere: GCC + Irix +\n> > PostgreSQL does not work -- until proven otherwise and/or GCC is fixed.\n>\n> Oh. Yup, that should be documented or enforced by configure.\n>\n> > I was going to suggest myself someday that we force CC=cc, but it should\n> > be done in configure.in (near line 274) and not in the template file.\n>\n> Makes sense to me to do it in configure.in. Shall we go ahead and put\n> that in?\n>\n> > We've had successful reports for Irix in the past, so I don't think the -n\n> > and -r flags are strictly necessary -- at least I'd like to see more\n> > information regarding them. What makes -n32 and -r12000 better than, say,\n> > -n64 and -r6000?\n>\n> Luis' followup indicated that -r wasn't really needed. 
Not sure about -n.\n>\n> regards, tom lane\n\nsorry for replying only to tom (a mouse mistake)\nIn our case -n32 (force new 32 bit object(maybe -o32 in old platforms)) was\nnecessary because with 64 bit object we were getting linking errors.\n-rx is processor instruction set, it is not necessary but I recommend it for\nbetter performance.\n\nI've been this afternoon search why it fails with gcc and I think I know the\nanswer:\nfrom sgi.com\ngcc vs. cc\n\n Code that runs fine when compiled with SGI cc and\ndoesn't run when compiled with gcc\n might be calling one of the following functions:\n\n inet_ntoa, inet_lnaof, inet_netof,\ninet_makeaddr, semctl\n\n (there may be others). These are functions that get\npassed or return structs that are smaller than\n 16 bytes but not 8 bytes long. gcc and SGI cc are\nincompatible in the way they pass these\n structs so compiling with gcc and linking with the SGI\nlibc.so (which was compiled with\n the SGI cc) is likely to cause these problems. Note that\nthis problem is pretty rare since such\n functions are not widely used. This may be considered a\nbug in gcc but is too involved to fix\n\nthis is fixed in gcc libs libgcc.a I'm not sure if it works cause I can't drop\nthe database just now maybe tomorrow,\nIf I can test it tomorrow I'll tell you\nHope it helps\n best wishes\n\n\n", "msg_date": "Tue, 27 Nov 2001 18:55:27 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13" }, { "msg_contents": "Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > One of these days I'm going to write this down somewhere: GCC + Irix +\n> > PostgreSQL does not work -- until proven otherwise and/or GCC is fixed.\n>\n> Oh. 
Yup, that should be documented or enforced by configure.\n>\n> > I was going to suggest myself someday that we force CC=cc, but it should\n> > be done in configure.in (near line 274) and not in the template file.\n>\n> Makes sense to me to do it in configure.in. Shall we go ahead and put\n> that in?\n>\n> > We've had successful reports for Irix in the past, so I don't think the -n\n> > and -r flags are strictly necessary -- at least I'd like to see more\n> > information regarding them. What makes -n32 and -r12000 better than, say,\n> > -n64 and -r6000?\n>\n> Luis' followup indicated that -r wasn't really needed. Not sure about -n.\n>\n> regards, tom lane\n\nI forgot to say but sure you know that inet_ntoa is widely used but only\ncritical is:\n./postmaster/postmaster.c:2036: host_addr =\ninet_ntoa(port->raddr.in.sin_addr);\nsorry for repeating\nregards\n\n\n", "msg_date": "Tue, 27 Nov 2001 18:57:42 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": true, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> In our case -n32 (force new 32 bit object(maybe -o32 in old platforms)) was\n> necessary because with 64 bit object we were getting linking errors.\n\nIs it worth trying to find and fix the cause of that? On other\nplatforms that can build either 32- or 64-bit executables (eg, HPUX 11)\nPostgres works in either mode. So I think it's probably just some\nminor tweak needed for IRIX. It'd be better to let the user choose\nwhich he wants than to force it in the template.\n\n> -rx is processor instruction set, it is not necessary but I recommend it for\n> better performance.\n\nAgain, it doesn't seem that we should try to guess the right value in\nthe template (unless IRIX has some command that the template script\ncould execute to get the right value?). 
The user can always specify\nthe CFLAGS he wants configure to use.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 10:48:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: installing 7.2b3 on IRIX 6.5.13 " } ]
[ { "msg_contents": "\nGood evening ...\n\n\tOn the 23rd of November, the PostgreSQL Global Development Group\npackaged up and released v7.2b3 of our upcoming release.\n\n\tAlthough primarily a documentation and packaging related release,\nthere are several fixes for bugs reported in v7.2b2.\n\n\tFor a complete listing of changes since v7.2b2, as well as the\nnewest source code, please go to:\n\n\t\tftp://ftp.postgresql.org/pub/beta\n\n\tRPMs for v7.2b3 are also available at:\n\n\t\tftp://ftp.postgresql.org/pub/binary/beta\n\n\tBug reports, as always, should be directed to\npgsql-bugs@postgresql.org, and the severity of all bugs reported will\ndetermine whether we move to the release cycle, or do another Beta, so we\nencourage as many administrators as possible to test this current release.\n\n\nMarc G. Fournier\nCoordinator, PGDG\n\n\n", "msg_date": "Mon, 26 Nov 2001 08:25:47 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "PostgreSQL v7.2b3 Released" } ]
[ { "msg_contents": "This patch mark datatype txtidx as 'extended' storage type.\nThanks.\n\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Mon, 26 Nov 2001 16:42:11 +0300", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Pls, apply patch fot contrib/tsearch" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> This patch mark datatype txtidx as 'extended' storage type.\n> Thanks.\n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 12:45:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pls, apply patch fot contrib/tsearch" } ]
[ { "msg_contents": "Just donwloaded and built 7.2beta3 on Compaq Tru64 (Digital Unix) 4.0 on\nAlpha architecture and now all regression tests are passed.\n\ntest=# SELECT version();\n version\n----------------------------------------------------------------\n PostgreSQL 7.2b3 on alphaev67-dec-osf4.0g, compiled by cc -std\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Mon, 26 Nov 2001 16:36:38 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "7.2beta3 on Digital Alpha" }, { "msg_contents": "> Just donwloaded and built 7.2beta3 on Compaq Tru64 (Digital Unix) 4.0 on\n> Alpha architecture and now all regression tests are passed.\n\nThanks! Has anyone tested on more recent versions of Tru64 (I'm sure\nthey will work) and/or with gcc on this platform? We had reports\ncovering 4.0, 5.0, and two compilers for the previous release...\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 03:56:57 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> Thanks! Has anyone tested on more recent versions of Tru64 (I'm sure\n> they will work) and/or with gcc on this platform? We had reports\n> covering 4.0, 5.0, and two compilers for the previous release...\n\nAll regression tests passed also with gcc version 2.95.1 19990816\n(release) on alphaev67-dec-osf4.0g\n\n-- \nAlessio F. 
Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Wed, 28 Nov 2001 12:00:49 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha" }, { "msg_contents": "> All regression tests passed also with gcc version 2.95.1 19990816\n> (release) on alphaev67-dec-osf4.0g\n\nThanks. I'll note both compilers...\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 14:57:52 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha" } ]
[ { "msg_contents": "This is a patch that was posted some time ago to pgsql-patches and\nno one has commented on it.\n\nIt adds something that JDBC has that is not present in libpq (see below).\nIs it OK for inclusion?\n\nRegards to all and thanks for your time,\nFernando\n\n\n-------- Original Message --------\nFrom: Fernando Nasser <fnasser@redhat.com>\nSubject: [PATCHES] Libpq support for precision and scale\nTo: pgsql-patches@postgresql.org\n\nSome programs like utilities, IDEs, etc., frequently need to know the\nprecision and scale of the result fields (columns). Unfortunately\nlibpq does not have such routines yet (JDBC does).\n\nLiam and I created a few ones that do the trick, as inspired by the\nJDBC code. The functions are:\n\nchar *PQftypename(const PGresult *res, int field_num);\n\nReturns the type name (not the name of the column, as PQfname do).\n\n\nint PQfprecision(const PGresult *res, int field_num);\nint PQfscale(const PGresult *res, int field_num);\n\nReturn Scale and Precision of the type respectively.\n\n\nMost programs won't need this information and may not be willing\nto pay the overhead for metadata retrieval. Thus, we added\nan alternative function to be used instead of PQexec if one\nwishes extra metadata to be retrieved along with the query\nresults:\n\nPGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n\nIt provides the same functionality and it is used in exactly the\nsame way as PQexec but it includes extra metadata about the result\nfields. 
After this cal, it is possible to obtain the precision,\nscale and type name for each result field.\n\n\nThe PQftypename function returns the internal PostgreSQL type name.\nAs some programs may prefer something more user friendly than the\ninternal type names, we've thrown in a conversion routine as well:\n\nchar *PQtypeint2ext(const char *intname);\n\nThis routine converts from the internal type name to a more user\nfriendly type name convention.\n\n\nMore details are in the patch to the SGML documentation that is \npart of the patch (attached).\n\n\n--\nLiam Stewart <liams@redhat.com>\nFernando Nasser <fnasser@redhat.com>", "msg_date": "Mon, 26 Nov 2001 14:13:34 -0500", "msg_from": "Fernando Nasser <fnasser@cygnus.com>", "msg_from_op": true, "msg_subject": "[Fwd: [PATCHES] Libpq support for precision and scale]" }, { "msg_contents": "Fernando Nasser <fnasser@cygnus.com> writes:\n> This is a patch that was posted some time ago to pgsql-patches and\n> no one has commented on it.\n> It adds something that JDBC has that is not present in libpq (see below).\n> Is it OK for inclusion?\n\nHere are some comments ...\n\n> int PQfprecision(const PGresult *res, int field_num);\n> int PQfscale(const PGresult *res, int field_num);\n\n> Return Scale and Precision of the type respectively.\n\nThese seem okay, but I don't like the API detail that \"0 is returned if\ninformation is not available\". 0 is a valid result, at least for\nPQfscale. I would recommend returning -1. If you really want to\ndistinguish bad parameters from non-numeric datatype, then return -1\nand -2 for those two cases.\n\n> Most programs won't need this information and may not be willing\n> to pay the overhead for metadata retrieval. 
Thus, we added\n> an alternative function to be used instead of PQexec if one\n> wishes extra metadata to be retrieved along with the query\n> results:\n\n> PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n\nThis strikes me as very ugly, and unnecessary, and inefficient since\nit retrieves metadata for all columns even though the client might\nonly need to know about some of them. An even worse problem is that\nit'll fail entirely with a multi-query query string.\n\nWhat I think would be cleaner would be to do the metadata queries\non-the-fly as needed. With the caching that you already have in there,\non-the-fly queries wouldn't be any less efficient.\n\nBut to do a metadata query we must have access to the connection.\nWe could handle it two ways:\n\n1. Add a PGconn parameter to the querying functions.\n\n2. Make use of the PGconn link that's stored in PGresults, and\nspecify that these functions can only be used on PGresults that\ncame from a still-open connection.\n\nI think I prefer the first, since it makes it more visible to the\nprogrammer that queries may get executed. But it's a judgment call\nprobably; I could see an argument for the second as well. Any comments,\nanyone?\n\n> The PQftypename function returns the internal PostgreSQL type name.\n> As some programs may prefer something more user friendly than the\n> internal type names, we've thrown in a conversion routine as well:\n> char *PQtypeint2ext(const char *intname);\n> This routine converts from the internal type name to a more user\n> friendly type name convention.\n\nThis seems poorly designed. Pass it the type OID and typmod, both of\nwhich are readily available from a PQresult without extra computation.\nThat will let you call the backend's format_type ... 
of course you'll\nneed a PGconn too for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 12:12:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "\nTom Lane wrote:\n> \n> Fernando Nasser <fnasser@cygnus.com> writes:\n> > This is a patch that was posted some time ago to pgsql-patches and\n> > no one has commented on it.\n> > It adds something that JDBC has that is not present in libpq (see\nbelow).\n> > Is it OK for inclusion?\n> \n> Here are some comments ...\n> \n\nThanks.\n\n> > int PQfprecision(const PGresult *res, int field_num);\n> > int PQfscale(const PGresult *res, int field_num);\n> \n> > Return Scale and Precision of the type respectively.\n> \n> These seem okay, but I don't like the API detail that \"0 is returned if\n> information is not available\". 0 is a valid result, at least for\n> PQfscale. I would recommend returning -1. If you really want to\n> distinguish bad parameters from non-numeric datatype, then return -1\n> and -2 for those two cases.\n> \n\nThis seems to be the libpq convention. On calls such as PQfsize and\nPQfmod, for instance, zero is a valid result and is also returned if\nthe information is not available.\n\nPlease note that we did not make this convention -- our original version\ndid return -1. But we decided that following a different rule for these\ntwo routines was even more confusing. And change the return convention\nfor the whole set of functions at this point seems out of the question.\n\nP.S.: Maybe whoever originally designed the libpq interface was trying\nto accomplish some sort of \"soft fail\" by returning zero. Just a guess\nof course.\n\n\n> > Most programs won't need this information and may not be willing\n> > to pay the overhead for metadata retrieval. 
Thus, we added\n> > an alternative function to be used instead of PQexec if one\n> > wishes extra metadata to be retrieved along with the query\n> > results:\n> \n> > PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \n> This strikes me as very ugly, and unnecessary, and inefficient since\n> it retrieves metadata for all columns even though the client might\n> only need to know about some of them. \n\nThis part I would not worry about. The new routines are for result sets\n(not arbitrary columns) so the fields present in it have already been\npre-selected. Also, this kind of information is useful for tools as\nthey don't know beforehand what the fields will be. In all cases\nwe can think of, the tool will always want metadata about all the\nfields.\n\n\n> An even worse problem is that\n> it'll fail entirely with a multi-query query string.\n> \n\nThis is a bummer. But I see no solution for this besides documenting\nthe restriction in the manual. If I am not mistaken we already have\nthe limitation of returning just the last result anyway (we just\ncollect the error messages).\n\n\n> What I think would be cleaner would be to do the metadata queries\n> on-the-fly as needed. With the caching that you already have in there,\n> on-the-fly queries wouldn't be any less efficient.\n> \n> But to do a metadata query we must have access to the connection.\n> We could handle it two ways:\n> \n> 1. Add a PGconn parameter to the querying functions.\n> \n\nThe problem is that results may be kept longer than connections\n(see below). The current solution did not require the connection\nas the metadata is for the result set, not tables.\n\nThe PGconn parameter would be reasonable for retrieving metadata\nabout table columns, for instance.\n\n\n> 2. 
Make use of the PGconn link that's stored in PGresults, and\n> specify that these functions can only be used on PGresults that\n> came from a still-open connection.\n> \n\nThat field has been deprecated (see comments in the source code) \nbecause a result may be kept even after the connection is closed.\n\n\n> I think I prefer the first, since it makes it more visible to the\n> programmer that queries may get executed. But it's a judgment call\n> probably; I could see an argument for the second as well. Any comments,\n> anyone?\n> \n\nIt would have to be the former (to avoid the stale pointer problem).\n\nBut requiring a connection adds a restriction to the use of this info\nand makes it have a different life span than the object it refers to\n(a PGresult), which is very weird.\n\n\n> > The PQftypename function returns the internal PostgreSQL type name.\n> > As some programs may prefer something more user friendly than the\n> > internal type names, we've thrown in a conversion routine as well:\n> > char *PQtypeint2ext(const char *intname);\n> > This routine converts from the internal type name to a more user\n> > friendly type name convention.\n> \n> This seems poorly designed. Pass it the type OID and typmod, both of\n> which are readily available from a PQresult without extra computation.\n> That will let you call the backend's format_type ... of course you'll\n> need a PGconn too for that.\n> \n\nRequiring the PGconn is bad. But we still could have a PQFtypeExt()\nreturning the \"external\" type if people prefer it that way.\nWe thought that this should be kept as an explicit conversion\noperation to make clear the distinction of what the backend knows\nabout and this outside world view of things.\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. 
E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Wed, 28 Nov 2001 10:15:38 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> These seem okay, but I don't like the API detail that \"0 is returned if\n>> information is not available\".\n\n> This seems to be the libpq convention. On calls such as PQfsize and\n> PQfmod, for instance, zero is a valid result and is also returned if\n> the information is not available.\n\nI don't think zero is (or ever will be) a valid PQfsize result. It was\nnot a valid PQfmod result at the time the routine was written, either,\nalthough I think that with Thomas' recent changes it might be possible\nto see a zero typmod for some of the datetime types. On the other hand\n-1 is a very common valid result for both PQfsize and PQfmod, so these\nroutines *would* have been broken on day one if they had returned -1.\n\nI don't think consistency with other routines that have different ranges\nof valid results is an adequate argument for making an API that's broken\nby design.\n\n> P.S.: Maybe whoever originally designed the libpq interface was trying\n> to accomplish some sort of \"soft fail\" by returning zero.\n\nNo, they were picking a value that couldn't be mistaken for a valid\nresult. At the time, anyway.\n\n\n>> 2. Make use of the PGconn link that's stored in PGresults, and\n>> specify that these functions can only be used on PGresults that\n>> came from a still-open connection.\n\n> That field has been deprecated (see comments in the source code) \n\nI know; I wrote those comments. But I'd be willing to un-deprecate it\nif it seemed the most convenient API for the inquiry functions would\nrequire it. 
On the whole though I think passing a PGconn to the\nmetadata inquiry functions would be the right way to go about this.\nNote that there isn't any fundamental reason to require that it be the\nsame PGconn that was used to acquire the PGresult. Any connection to\nthe same database would do fine. (In fact, for standard types, any\nconnection to a database of the same PG version would do fine...)\n\nAnyone else have an opinion?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 11:18:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nFernando Nasser wrote:\n> This is a patch that was posted some time ago to pgsql-patches and\n> no one has commented on it.\n> \n> It adds something that JDBC has that is not present in libpq (see below).\n> Is it OK for inclusion?\n> \n> Regards to all and thanks for your time,\n> Fernando\n> \n> \n> -------- Original Message --------\n> From: Fernando Nasser <fnasser@redhat.com>\n> Subject: [PATCHES] Libpq support for precision and scale\n> To: pgsql-patches@postgresql.org\n> \n> Some programs like utilities, IDEs, etc., frequently need to know the\n> precision and scale of the result fields (columns). Unfortunately\n> libpq does not have such routines yet (JDBC does).\n> \n> Liam and I created a few ones that do the trick, as inspired by the\n> JDBC code. 
The functions are:\n> \n> char *PQftypename(const PGresult *res, int field_num);\n> \n> Returns the type name (not the name of the column, as PQfname does).\n> \n> \n> int PQfprecision(const PGresult *res, int field_num);\n> int PQfscale(const PGresult *res, int field_num);\n> \n> Return the precision and scale of the type, respectively.\n> \n> \n> Most programs won't need this information and may not be willing\n> to pay the overhead for metadata retrieval. Thus, we added\n> an alternative function to be used instead of PQexec if one\n> wishes extra metadata to be retrieved along with the query\n> results:\n> \n> PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \n> It provides the same functionality and it is used in exactly the\n> same way as PQexec but it includes extra metadata about the result\n> fields. After this call, it is possible to obtain the precision,\n> scale and type name for each result field.\n> \n> \n> The PQftypename function returns the internal PostgreSQL type name.\n> As some programs may prefer something more user friendly than the\n> internal type names, we've thrown in a conversion routine as well:\n> \n> char *PQtypeint2ext(const char *intname);\n> \n> This routine converts from the internal type name to a more user\n> friendly type name convention.\n> \n> \n> More details are in the patch to the SGML documentation that is \n> part of the patch (attached).\n> \n> \n> --\n> Liam Stewart <liams@redhat.com>\n> Fernando Nasser <fnasser@redhat.com>\n\n> Index: fe-connect.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.180\n> diff -c -p -r1.180 fe-connect.c\n> *** fe-connect.c\t2001/11/05 17:46:37\t1.180\n> --- fe-connect.c\t2001/11/07 19:00:35\n> *************** makeEmptyPGconn(void)\n> *** 1849,1854 ****\n> --- 1849,1855 ----\n> #ifdef USE_SSL\n> \tconn->allow_ssl_try = TRUE;\n> #endif\n> + \tconn->typecache = 
NULL;\n> \n> \t/*\n> \t * The output buffer size is set to 8K, which is the usual size of\n> *************** freePGconn(PGconn *conn)\n> *** 1891,1896 ****\n> --- 1892,1898 ----\n> \tif (!conn)\n> \t\treturn;\n> \tpqClearAsyncResult(conn);\t/* deallocate result and curTuple */\n> + \tpqTypeCacheClear(conn);\t\t/* free all type cache entries */\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tSSL_free(conn->ssl);\n> Index: fe-exec.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.113\n> diff -c -p -r1.113 fe-exec.c\n> *** fe-exec.c\t2001/10/25 05:50:13\t1.113\n> --- fe-exec.c\t2001/11/07 19:00:35\n> *************** char\t *const pgresStatus[] = {\n> *** 48,53 ****\n> --- 48,54 ----\n> static void pqCatenateResultError(PGresult *res, const char *msg);\n> static void saveErrorResult(PGconn *conn);\n> static PGresult *prepareAsyncResult(PGconn *conn);\n> + static PGresult *pqExec(PGconn *conn, const char *query, int metadata);\n> static int\taddTuple(PGresult *res, PGresAttValue * tup);\n> static void parseInput(PGconn *conn);\n> static void handleSendFailure(PGconn *conn);\n> *************** static int\tgetRowDescriptions(PGconn *co\n> *** 55,60 ****\n> --- 56,63 ----\n> static int\tgetAnotherTuple(PGconn *conn, int binary);\n> static int\tgetNotify(PGconn *conn);\n> static int\tgetNotice(PGconn *conn);\n> + static char *pqTypeCacheGet(PGconn *conn, Oid typenum);\n> + static void pqTypeCachePut(PGconn *conn, Oid typenum, char *typename);\n> \n> /* ---------------\n> * Escaping arbitrary strings to get valid SQL strings/identifiers.\n> *************** addTuple(PGresult *res, PGresAttValue * \n> *** 609,614 ****\n> --- 612,678 ----\n> \treturn TRUE;\n> }\n> \n> + /* Cache of the correspondence between type Oids and\n> + * type names. 
Without it too many queries can be made to\n> + * retrieve this same information from the catalog over and over.\n> + */\n> + \n> + static char *\n> + pqTypeCacheGet(PGconn *conn, Oid typenum)\n> + {\n> + \tchar *typename = NULL;\n> + \tPGtypecache *tc = conn->typecache;\n> + \n> + \t/* Look for type Oid. */\n> + \twhile (tc != NULL)\n> + \t{\n> + \t\tif (tc->typenum == typenum)\n> + \t\t{\n> + \t\t\ttypename = tc->typename;\n> + \t\t\tbreak;\n> + \t\t}\n> + \t\telse\n> + \t\t\ttc = tc->next;\n> + \t}\n> + \treturn typename;\n> + }\n> + \n> + static void\n> + pqTypeCachePut(PGconn *conn, Oid typenum, char *typename)\n> + {\n> + \tPGtypecache *typetocache;\n> + \n> + \ttypetocache = (PGtypecache *) malloc(sizeof(PGtypecache));\n> + \tif (typetocache == NULL)\n> + \t{\n> + \t\tfprintf(stderr, \"pqTypeCachePut: malloc failed.\\n\");\n> + \t\treturn;\n> + \t}\n> + \t\n> + \ttypetocache->typenum = typenum;\n> + \ttypetocache->typename = strdup(typename);\n> + \ttypetocache->next = conn->typecache;\n> + \tconn->typecache = typetocache;\n> + }\n> + \n> + void\n> + pqTypeCacheClear(PGconn *conn)\n> + {\n> + \tPGtypecache *tc;\n> + \tPGtypecache *ntc;\n> + \n> + \t/* Free all tcache entries (and typenames). */\n> + \ttc = conn->typecache;\n> + \tconn->typecache = NULL;\n> + \twhile (tc != NULL)\n> + \t{\n> + \t\tif (tc->typename)\n> + \t\t\tfree(tc->typename);\n> + \t\tntc = tc->next;\n> + \t\tfree(tc);\n> + \t\ttc = ntc;\n> + \t}\n> + }\n> \n> /*\n> * PQsendQuery\n> *************** PQgetResult(PGconn *conn)\n> *** 1277,1301 ****\n> \treturn res;\n> }\n> \n> \n> /*\n> ! * PQexec\n> *\t send a query to the backend and package up the result in a PGresult\n> *\n> * If the query was not even sent, return NULL; conn->errorMessage is set to\n> * a relevant message.\n> * If the query was sent, a new PGresult is returned (which could indicate\n> * either success or failure).\n> * The user is responsible for freeing the PGresult via PQclear()\n> * when done with it.\n> */\n> \n> ! 
PGresult *\n> ! PQexec(PGconn *conn, const char *query)\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> \tbool\t\tsavedblocking;\n> \n> \t/*\n> \t * we assume anyone calling PQexec wants blocking behaviour, we force\n> --- 1341,1381 ----\n> \treturn res;\n> }\n> \n> + PGresult *\n> + PQexec(PGconn *conn, const char *query)\n> + {\n> + \t/* Don't get metadata. */\n> + \treturn pqExec (conn, query, 0 /* no metadata */);\n> + }\n> \n> + PGresult *\n> + PQexecIncludeMetadata(PGconn *conn, const char *query)\n> + {\n> + \t/* Get metadata as well. */\n> + \treturn pqExec (conn, query, 1 /* with metadata */);\n> + }\n> + \n> /*\n> ! * pqExec\n> *\t send a query to the backend and package up the result in a PGresult\n> *\n> * If the query was not even sent, return NULL; conn->errorMessage is set to\n> * a relevant message.\n> * If the query was sent, a new PGresult is returned (which could indicate\n> * either success or failure).\n> + * If it is called with metadata == 1, the metadata about the column \n> + * results will be obtained and saved in the PGresult.\n> * The user is responsible for freeing the PGresult via PQclear()\n> * when done with it.\n> */\n> \n> ! static PGresult *\n> ! 
pqExec(PGconn *conn, const char *query, int metadata)\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> \tbool\t\tsavedblocking;\n> + \tint\t\t\ti;\n> \n> \t/*\n> \t * we assume anyone calling PQexec wants blocking behaviour, we force\n> *************** PQexec(PGconn *conn, const char *query)\n> *** 1363,1368 ****\n> --- 1443,1501 ----\n> \n> \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> \t\treturn NULL;\n> + \n> + \t/*\n> + \t * If metadata is requested and everything is well, loop through\n> + \t * the result fields grabbing the required information.\n> + \t */\n> + \n> + if (metadata && (lastResult->numAttributes > 0))\n> + \t\tfor (i = 0; i < lastResult->numAttributes; i++)\n> + \t\t{\n> + \t\t\tOid typenum;\n> + \t\t\tPGresult *result;\n> + \t\t\tchar *tempname;\n> + \t\t\tstatic char query[] = \"select typname from pg_type where oid = %lu\";\n> + \t\t\tchar *fullquery;\n> + \n> + \t\t\tif ((typenum = lastResult->attDescs[i].typid) == 0)\n> + \t\t\t\tcontinue;\n> + \n> + \t\t\t/* Look up the cache for the type name. */\n> + \t\t\ttempname = pqTypeCacheGet(conn, typenum);\n> + \n> + \t\t\t/* If it is a type that we still don't know the name,\n> + \t\t\t query for the type name and store it in the cache. 
*/\n> + \t\tif (tempname == NULL)\n> + \t\t\t{\n> + \t\t\t\tfullquery = malloc (sizeof(query)\n> + \t\t\t\t\t\t\t\t\t+ 20 /* if Oids become 64 bits */);\n> + \t\t\t\tif (fullquery == NULL)\n> + \t\t\t\t {\n> + \t\t\t\t fprintf(stderr, \"pqExec: malloc failed.\\n\");\n> + \t\t\t\t\t return NULL;\n> + \t\t\t\t }\n> + \t\t\t\t/* If the typename was not in the cache, query the catalog\n> + \t \t\t\t and add it to the cache */\n> + \t\t\t\tsnprintf(fullquery, sizeof(query) + 20, query, typenum);\n> + \t\t\t\tresult = PQexec(conn, fullquery);\n> + \t\t\t\tfree(fullquery);\n> + \t\t\t\tif (!result || PQresultStatus(result) != PGRES_TUPLES_OK)\n> + \t\t\t\t{\n> + \t\t\t\t\tPQclear(result);\n> + \t\t\t\t\tcontinue;\n> + \t\t\t\t}\n> + \t\t\t\tif (PQntuples(result) != 1 || PQnfields(result) != 1) {\n> + \t\t\t\t\tPQclear(result);\n> + \t\t\t\t\tcontinue;\n> + \t\t\t\t}\n> + \t\t\t\tpqTypeCachePut(conn, typenum, PQgetvalue(result, 0, 0));\n> + \t\t\t\ttempname = pqTypeCacheGet(conn, typenum);\n> + \t\t\t}\n> + \n> + \t\t\tlastResult->attDescs[i].atttypname = strdup(tempname);\n> + \t\t}\n> + \n> \treturn lastResult;\n> \n> errout:\n> *************** PQftype(const PGresult *res, int field_n\n> *** 2104,2109 ****\n> --- 2237,2253 ----\n> \t\treturn InvalidOid;\n> }\n> \n> + char *\n> + PQftypeName(const PGresult *res, int field_num)\n> + {\n> + \tif (!check_field_number(res, field_num))\n> + \t\treturn NULL;\n> + \tif (res->attDescs)\n> + \t\treturn res->attDescs[field_num].atttypname;\n> + \telse\n> + \t\treturn NULL;\n> + }\n> + \n> int\n> PQfsize(const PGresult *res, int field_num)\n> {\n> *************** PQfmod(const PGresult *res, int field_nu\n> *** 2124,2129 ****\n> --- 2268,2330 ----\n> \t\treturn res->attDescs[field_num].atttypmod;\n> \telse\n> \t\treturn 0;\n> + }\n> + \n> + int\n> + PQfprecision(const PGresult *res, int field_num)\n> + {\n> + \tint mod;\n> + \tchar *type;\n> + \n> + \tif ((type = PQftypeName(res, field_num)) == NULL)\n> + \t\treturn 0;\n> + \tmod = 
PQfmod(res, field_num);\n> + \n> + \tif (strcmp(type, \"numeric\") == 0)\n> + \t\treturn ((0xFFFF0000) & mod) >> 16;\n> + \telse if (strcmp(type, \"int2\") == 0)\n> + \t\treturn 5;\n> + \telse if (strcmp(type, \"int4\") == 0)\n> + \t\treturn 10;\n> + \telse if (strcmp(type, \"int8\") == 0)\n> + \t\treturn 19; /* It would be 20 if it was unsigned. */\n> + \telse if (strcmp(type, \"float4\") == 0)\n> + \t\treturn 6;\n> + \telse if (strcmp(type, \"float8\") == 0)\n> + \t\treturn 15;\n> + \telse if (strcmp(type, \"varchar\") == 0 ||\n> + \t\t\t strcmp(type, \"bpchar\") == 0 ||\n> + \t\t\t strcmp(type, \"char\") == 0)\n> + \t\treturn mod - 4;\n> + \telse if (strcmp(type, \"varbit\") == 0 ||\n> + \t\t\t strcmp(type, \"bit\") == 0)\n> + \t\treturn mod;\n> + \n> + \treturn -1;\n> + }\n> + \n> + int\n> + PQfscale(const PGresult *res, int field_num)\n> + {\n> + \tint mod;\n> + \tchar *type;\n> + \n> + \tif ((type = PQftypeName(res, field_num)) == NULL)\n> + \t\treturn 0;\n> + \tmod = PQfmod(res, field_num);\n> + \n> + \tif (strcmp(type, \"numeric\") == 0)\n> + \t\treturn ((0x0000FFFF) & mod) - 4;\n> + \telse if (strcmp(type, \"int2\") == 0 ||\n> + \t\t\t strcmp(type, \"int4\") == 0 ||\n> + \t\t\t strcmp(type, \"int8\") == 0)\n> + \t\treturn 0;\n> + \telse if (strcmp(type, \"float4\") == 0)\n> + \t\treturn -1;\n> + \telse if (strcmp(type, \"float8\") == 0)\n> + \t\treturn -1;\n> + \t\t\n> + \treturn -1;\n> }\n> \n> char *\n> Index: fe-misc.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.60\n> diff -c -p -r1.60 fe-misc.c\n> *** fe-misc.c\t2001/11/05 17:46:37\t1.60\n> --- fe-misc.c\t2001/11/07 19:00:35\n> *************** WSSE_GOODEXIT:\n> *** 896,898 ****\n> --- 896,974 ----\n> }\n> \n> #endif\n> + \n> + char *\n> + PQinternal2common(const char *intname)\n> + {\n> + \tstatic char *typename;\n> + \n> + \tif (intname == NULL)\n> + \t\treturn NULL;\n> + \n> + 
\tif (strcmp(intname, \"int8\") == 0)\n> + \t\ttypename = \"bigint\";\n> + \telse if (strcmp(intname, \"bit\") == 0)\n> + \t\ttypename = \"bit\";\n> + \telse if (strcmp(intname, \"varbit\") == 0)\n> + \t\ttypename = \"varbit\";\t/* bit varying */\n> + \telse if (strcmp(intname, \"bool\") == 0)\n> + \t\ttypename = \"boolean\";\n> + \telse if (strcmp(intname, \"box\") == 0)\n> + \t\ttypename = \"box\";\n> + \telse if (strcmp(intname, \"bpchar\") == 0)\n> + \t\ttypename = \"char\";\t\t/* character */\n> + \telse if (strcmp(intname, \"varchar\") == 0)\n> + \t\ttypename = \"varchar\";\t/* character varying */\n> + \telse if (strcmp(intname, \"cidr\") == 0)\n> + \t\ttypename = \"cidr\";\n> + \telse if (strcmp(intname, \"circle\") == 0)\n> + \t\ttypename = \"circle\";\n> + \telse if (strcmp(intname, \"date\") == 0)\n> + \t\ttypename = \"date\";\n> + \telse if (strcmp(intname, \"float8\") == 0)\n> + \t\ttypename = \"double precision\";\n> + \telse if (strcmp(intname, \"inet\") == 0)\n> + \t\ttypename = \"inet\";\n> + \telse if (strcmp(intname, \"int4\") == 0)\n> + \t\ttypename = \"integer\";\n> + \telse if (strcmp(intname, \"interval\") == 0)\n> + \t\ttypename = \"interval\";\n> + \telse if (strcmp(intname, \"line\") == 0)\n> + \t\ttypename = \"line\";\n> + \telse if (strcmp(intname, \"lseg\") == 0)\n> + \t\ttypename = \"lseg\";\n> + \telse if (strcmp(intname, \"macaddr\") == 0)\n> + \t\ttypename = \"macaddr\";\n> + \telse if (strcmp(intname, \"decimal\") == 0)\n> + \t\ttypename = \"numeric\";\n> + \telse if (strcmp(intname, \"numeric\") == 0)\n> + \t\ttypename = \"numeric\";\n> + \telse if (strcmp(intname, \"oid\") == 0)\n> + \t\ttypename = \"oid\";\n> + \telse if (strcmp(intname, \"path\") == 0)\n> + \t\ttypename = \"path\";\n> + \telse if (strcmp(intname, \"point\") == 0)\n> + \t\ttypename = \"point\";\n> + \telse if (strcmp(intname, \"polygon\") == 0)\n> + \t\ttypename = \"polygon\";\n> + \telse if (strcmp(intname, \"float4\") == 0)\n> + \t\ttypename = \"real\";\n> + 
\telse if (strcmp(intname, \"int2\") == 0)\n> + \t\ttypename = \"smallint\";\n> + \telse if (strcmp(intname, \"serial\") == 0)\n> + \t\ttypename = \"serial\";\n> + \telse if (strcmp(intname, \"text\") == 0)\n> + \t\ttypename = \"text\";\n> + \telse if (strcmp(intname, \"time\") == 0)\n> + \t\ttypename = \"time\";\n> + \telse if (strcmp(intname, \"time with time zone\") == 0)\n> + \t\ttypename = \"time with time zone\";\n> + \telse if (strcmp(intname, \"timestamp\") == 0)\n> + \t\ttypename = \"timestamp\";\n> + \telse if (strcmp(intname, \"timestamp with time zone\") == 0)\n> + \t\ttypename = \"timestamp with time zone\";\n> + \telse\n> + \t\ttypename = NULL;\n> + \n> + \treturn typename;\n> + }\n> Index: libpq-fe.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.79\n> diff -c -p -r1.79 libpq-fe.h\n> *** libpq-fe.h\t2001/11/05 17:46:37\t1.79\n> --- libpq-fe.h\t2001/11/07 19:00:35\n> *************** extern\t\t\"C\"\n> *** 256,261 ****\n> --- 256,262 ----\n> \n> /* Simple synchronous query */\n> \textern PGresult *PQexec(PGconn *conn, const char *query);\n> + \textern PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \textern PGnotify *PQnotifies(PGconn *conn);\n> \textern void PQfreeNotify(PGnotify *notify);\n> \n> *************** extern\t\t\"C\"\n> *** 303,315 ****\n> \textern char *PQfname(const PGresult *res, int field_num);\n> \textern int\tPQfnumber(const PGresult *res, const char *field_name);\n> \textern Oid\tPQftype(const PGresult *res, int field_num);\n> \textern int\tPQfsize(const PGresult *res, int field_num);\n> \textern int\tPQfmod(const PGresult *res, int field_num);\n> \textern char *PQcmdStatus(PGresult *res);\n> \textern char *PQoidStatus(const PGresult *res);\t\t/* old and ugly */\n> \textern Oid\tPQoidValue(const PGresult *res);\t\t/* new and improved */\n> ! \textern char *PQcmdTuples(PGresult *res);\n> ! 
\textern char *PQgetvalue(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetlength(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetisnull(const PGresult *res, int tup_num, int field_num);\n> \n> --- 304,319 ----\n> \textern char *PQfname(const PGresult *res, int field_num);\n> \textern int\tPQfnumber(const PGresult *res, const char *field_name);\n> \textern Oid\tPQftype(const PGresult *res, int field_num);\n> + \textern char\t*PQftypeName(const PGresult *res, int field_num);\n> \textern int\tPQfsize(const PGresult *res, int field_num);\n> \textern int\tPQfmod(const PGresult *res, int field_num);\n> + \textern int\tPQfprecision(const PGresult *res, int field_num);\n> + \textern int\tPQfscale(const PGresult *res, int field_num);\n> \textern char *PQcmdStatus(PGresult *res);\n> \textern char *PQoidStatus(const PGresult *res);\t\t/* old and ugly */\n> \textern Oid\tPQoidValue(const PGresult *res);\t\t/* new and improved */\n> ! \textern char\t*PQcmdTuples(PGresult *res);\n> ! 
\textern char\t*PQgetvalue(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetlength(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetisnull(const PGresult *res, int tup_num, int field_num);\n> \n> *************** extern\t\t\"C\"\n> *** 371,376 ****\n> --- 375,383 ----\n> /* Get encoding id from environment variable PGCLIENTENCODING */\n> \textern int\tPQenv2encoding(void);\n> \n> + \t/* Convert internal type name to common type name */\n> + \textern char\t*PQinternal2common(const char *intname);\n> + \t\n> #ifdef __cplusplus\n> }\n> #endif\n> Index: libpq-int.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.44\n> diff -c -p -r1.44 libpq-int.h\n> *** libpq-int.h\t2001/11/05 17:46:38\t1.44\n> --- libpq-int.h\t2001/11/07 19:00:35\n> *************** union pgresult_data\n> *** 75,88 ****\n> \tchar\t\tspace[1];\t\t/* dummy for accessing block as bytes */\n> };\n> \n> ! /* Data about a single attribute (column) of a query result */\n> \n> typedef struct pgresAttDesc\n> {\n> ! \tchar\t *name;\t\t\t/* type name */\n> \tOid\t\t\ttypid;\t\t\t/* type id */\n> \tint\t\t\ttyplen;\t\t\t/* type size */\n> \tint\t\t\tatttypmod;\t\t/* type-specific modifier info */\n> }\tPGresAttDesc;\n> \n> /* Data for a single attribute of a single tuple */\n> --- 75,91 ----\n> \tchar\t\tspace[1];\t\t/* dummy for accessing block as bytes */\n> };\n> \n> ! /* Data about a single attribute (column) of a query result.\n> ! * The type name is only available if PQexecIncludeMetadata() was used.\n> ! */\n> \n> typedef struct pgresAttDesc\n> {\n> ! 
\tchar\t *name;\t\t\t/* column name */\n> \tOid\t\t\ttypid;\t\t\t/* type id */\n> \tint\t\t\ttyplen;\t\t\t/* type size */\n> \tint\t\t\tatttypmod;\t\t/* type-specific modifier info */\n> + \tchar\t *atttypname;\t\t/* type name */\n> }\tPGresAttDesc;\n> \n> /* Data for a single attribute of a single tuple */\n> *************** typedef struct pgLobjfuncs\n> *** 191,196 ****\n> --- 194,208 ----\n> \tOid\t\t\tfn_lo_write;\t/* OID of backend function LOwrite\t\t*/\n> }\tPGlobjfuncs;\n> \n> + /* Entry in the cache of the correspondence between type Oids and type names.\n> + */\n> + typedef struct pgTypeCache\n> + {\n> + \tOid\t\t\t\t\ttypenum;\t/* OID of type\t\t*/\n> + \tchar\t\t\t *typename;\t/* name of type\t\t*/\n> + \tstruct pgTypeCache *next;\t\t/* next cache entry\t*/\n> + }\t\t\tPGtypecache;\n> + \n> /* PGconn stores all the state data associated with a single connection\n> * to a backend.\n> */\n> *************** struct pg_conn\n> *** 240,245 ****\n> --- 252,258 ----\n> \tchar\t\tcryptSalt[2];\t/* password salt received from backend */\n> \tPGlobjfuncs *lobjfuncs;\t\t/* private state for large-object access\n> \t\t\t\t\t\t\t\t * fns */\n> + \tPGtypecache *typecache;\t\t/* cached types for this connection. 
*/\n> \n> \t/* Buffer for data received from backend and not yet processed */\n> \tchar\t *inBuffer;\t\t/* currently allocated buffer */\n> *************** extern void pqSetResultError(PGresult *r\n> *** 305,310 ****\n> --- 318,324 ----\n> extern void *pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary);\n> extern char *pqResultStrdup(PGresult *res, const char *str);\n> extern void pqClearAsyncResult(PGconn *conn);\n> + extern void pqTypeCacheClear(PGconn *conn);\n> \n> /* === in fe-misc.c === */\n> \n> \n\n> Index: libpq.sgml\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\n> retrieving revision 1.72\n> diff -c -p -r1.72 libpq.sgml\n> *** libpq.sgml\t2001/09/13 15:55:23\t1.72\n> --- libpq.sgml\t2001/11/07 19:06:52\n> *************** PGresult *PQexec(PGconn *conn,\n> *** 728,733 ****\n> --- 728,748 ----\n> \t <function>PQerrorMessage</function> to get more information about the error.\n> </para>\n> </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQexecIncludeMetadata</function>\n> + Submit a query to the server and wait for the result;\n> + include extra metadata about the result fields.\n> + \t This makes available information such as the type name,\n> + \t precision and scale for each field in the result.\n> + <synopsis>\n> + PGresult *PQexecIncludeMetadata(PGconn *conn,\n> + const char *query);\n> + </synopsis>\n> + Used the same way as PQexec().\n> + </para>\n> + </listitem>\n> </itemizedlist>\n> \n> <para>\n> *************** You can query the system table <literal>\n> *** 964,969 ****\n> --- 979,986 ----\n> the name and properties of the various data types. 
The <acronym>OID</acronym>s\n> of the built-in data types are defined in <filename>src/include/catalog/pg_type.h</filename>\n> in the source tree.\n> + The function <function>PQftypename</function> can be used to retrieve the\n> + type name if the result was obtained via <function>PQexecIncludeMetadata</function>.\n> </para>\n> </listitem>\n> \n> *************** extracts data from a <acronym>BINARY</ac\n> *** 1010,1015 ****\n> --- 1027,1126 ----\n> </para>\n> </listitem>\n> </itemizedlist>\n> + \n> + <para>\n> + The following functions only produce meaningful results if \n> + <function>PQexecIncludeMetadata</function> was used\n> + (as opposed to <function>PQexec</function>).\n> + </para>\n> + \n> + <itemizedlist>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQftypename</function>\n> + Returns the name of the column type as a string.\n> + Field indices start at 0.\n> + <synopsis>\n> + char *PQftypename(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the name of the column type as a string.\n> + Copy the string if needed -- do not modify, free()\n> + or assume its persistence. The internal type name is\n> + returned; use PQtypeint2ext() to convert to a more SQL-ish style.\n> + \t NULL is returned if the field type name is not available.\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQfprecision</function>\n> + Returns the precision of the field\n> + associated with the given field index.\n> + Field indices start at 0.\n> + <synopsis>\n> + int PQfprecision(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the precision of the field\n> + associated with the given field index.\n> + \t For numeric types (INTEGER, FLOAT, etc.), PQfprecision returns the\n> + \t number of decimal digits in the specified field. 
For character and bit\n> + \t string types, such as VARCHAR and BIT, PQfprecision returns the\n> + \t maximum number of characters/bits allowed in the specified field.\n> + \t PQfprecision returns 0 if precision information is not available and\n> + \t -1 if precision is not applicable to the field in question. The latter\n> + \t will be the case if the type of the field is POINT, for example. \n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQfscale</function>\n> + Returns the scale of the field\n> + associated with the given field index.\n> + Field indices start at 0.\n> + <synopsis>\n> + int PQfscale(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the scale of the field\n> + associated with the given field index.\n> + \t PQfscale returns the scale of the field associated with the given\n> + \t field index. Scale is the number of digits after the decimal point,\n> + \t so this function is useful only with fields that are of a numeric\n> + \t type (INTEGER, FLOAT, NUMERIC, etc.). -1 is returned if scale is not\n> + \t applicable to the field type. 0 is returned if scale information is\n> + \t not available. 
\n> + </para>\n> + </listitem>\n> + </itemizedlist>\n> + \n> + <para>\n> + Use the function below to convert internal type names (like the\n> + ones returned by <function>PQftypename</function>) into something\n> + more user-friendly.\n> + </para>\n> + \n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <function>PQtypeint2ext</function>\n> + Converts an internal type name into a SQL-ish\n> + type name.\n> + <synopsis>\n> + char *PQtypeint2ext(const char *intname);\n> + </synopsis>\n> + \t Converts an internal type name into a SQL-ish\n> + type name.\n> + NULL is returned if the internal type is not recognized\n> + (which will be the case if the type is a UDT).\n> + </para>\n> + </listitem>\n> + \n> + </itemizedlist>\n> + \n> </sect2>\n> \n> <sect2 id=\"libpq-exec-select-values\">\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 18:29:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [PATCHES] Libpq support for precision and scale]" }, { "msg_contents": "\nSorry, I see later comments questioning the patch. 
Please review and\nresubmit:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n\n---------------------------------------------------------------------------\n\nFernando Nasser wrote:\n> This is a patch that was posted some time ago to pgsql-patches and\n> no one has commented on it.\n> \n> It adds something that JDBC has that is not present in libpq (see below).\n> Is it OK for inclusion?\n> \n> Regards to all and thanks for your time,\n> Fernando\n> \n> \n> -------- Original Message --------\n> From: Fernando Nasser <fnasser@redhat.com>\n> Subject: [PATCHES] Libpq support for precision and scale\n> To: pgsql-patches@postgresql.org\n> \n> Some programs like utilities, IDEs, etc., frequently need to know the\n> precision and scale of the result fields (columns). Unfortunately\n> libpq does not have such routines yet (JDBC does).\n> \n> Liam and I created a few ones that do the trick, as inspired by the\n> JDBC code. The functions are:\n> \n> char *PQftypename(const PGresult *res, int field_num);\n> \n> Returns the type name (not the name of the column, as PQfname does).\n> \n> \n> int PQfprecision(const PGresult *res, int field_num);\n> int PQfscale(const PGresult *res, int field_num);\n> \n> Return the precision and scale of the type, respectively.\n> \n> \n> Most programs won't need this information and may not be willing\n> to pay the overhead for metadata retrieval. Thus, we added\n> an alternative function to be used instead of PQexec if one\n> wishes extra metadata to be retrieved along with the query\n> results:\n> \n> PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \n> It provides the same functionality and it is used in exactly the\n> same way as PQexec but it includes extra metadata about the result\n> fields. 
After this call, it is possible to obtain the precision,\n> scale and type name for each result field.\n> \n> \n> The PQftypename function returns the internal PostgreSQL type name.\n> As some programs may prefer something more user friendly than the\n> internal type names, we've thrown in a conversion routine as well:\n> \n> char *PQtypeint2ext(const char *intname);\n> \n> This routine converts from the internal type name to a more user\n> friendly type name convention.\n> \n> \n> More details are in the patch to the SGML documentation that is \n> part of the patch (attached).\n> \n> \n> --\n> Liam Stewart <liams@redhat.com>\n> Fernando Nasser <fnasser@redhat.com>\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 18:34:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [PATCHES] Libpq support for precision and scale]" }, { "msg_contents": "\nSorry, this patch has been rejected. Please continue discussion on the\nhackers list. Thank you. I think we do need this functionality\nsomehow.\n\n\n---------------------------------------------------------------------------\n\nFernando Nasser wrote:\n> This is a patch that was posted some time ago to pgsql-patches and\n> no one has commented on it.\n> \n> It adds something that JDBC has that is not present in libpq (see below).\n> Is it OK for inclusion?\n> \n> Regards to all and thanks for your time,\n> Fernando\n> \n> \n> -------- Original Message --------\n> From: Fernando Nasser <fnasser@redhat.com>\n> Subject: [PATCHES] Libpq support for precision and scale\n> To: pgsql-patches@postgresql.org\n> \n> Some programs like utilities, IDEs, etc., frequently need to know the\n> precision and scale of the result fields (columns). 
Unfortunately\n> libpq does not have such routines yet (JDBC does).\n> \n> Liam and I created a few ones that do the trick, as inspired by the\n> JDBC code. The functions are:\n> \n> char *PQftypename(const PGresult *res, int field_num);\n> \n> Returns the type name (not the name of the column, as PQfname do).\n> \n> \n> int PQfprecision(const PGresult *res, int field_num);\n> int PQfscale(const PGresult *res, int field_num);\n> \n> Return Scale and Precision of the type respectively.\n> \n> \n> Most programs won't need this information and may not be willing\n> to pay the overhead for metadata retrieval. Thus, we added\n> an alternative function to be used instead of PQexec if one\n> wishes extra metadata to be retrieved along with the query\n> results:\n> \n> PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \n> It provides the same functionality and it is used in exactly the\n> same way as PQexec but it includes extra metadata about the result\n> fields. After this cal, it is possible to obtain the precision,\n> scale and type name for each result field.\n> \n> \n> The PQftypename function returns the internal PostgreSQL type name.\n> As some programs may prefer something more user friendly than the\n> internal type names, we've thrown in a conversion routine as well:\n> \n> char *PQtypeint2ext(const char *intname);\n> \n> This routine converts from the internal type name to a more user\n> friendly type name convention.\n> \n> \n> More details are in the patch to the SGML documentation that is \n> part of the patch (attached).\n> \n> \n> --\n> Liam Stewart <liams@redhat.com>\n> Fernando Nasser <fnasser@redhat.com>\n\n> Index: fe-connect.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.180\n> diff -c -p -r1.180 fe-connect.c\n> *** fe-connect.c\t2001/11/05 17:46:37\t1.180\n> --- fe-connect.c\t2001/11/07 19:00:35\n> 
*************** makeEmptyPGconn(void)\n> *** 1849,1854 ****\n> --- 1849,1855 ----\n> #ifdef USE_SSL\n> \tconn->allow_ssl_try = TRUE;\n> #endif\n> + \tconn->typecache = NULL;\n> \n> \t/*\n> \t * The output buffer size is set to 8K, which is the usual size of\n> *************** freePGconn(PGconn *conn)\n> *** 1891,1896 ****\n> --- 1892,1898 ----\n> \tif (!conn)\n> \t\treturn;\n> \tpqClearAsyncResult(conn);\t/* deallocate result and curTuple */\n> + \tpqTypeCacheClear(conn);\t\t/* free all type cache entries */\n> #ifdef USE_SSL\n> \tif (conn->ssl)\n> \t\tSSL_free(conn->ssl);\n> Index: fe-exec.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.113\n> diff -c -p -r1.113 fe-exec.c\n> *** fe-exec.c\t2001/10/25 05:50:13\t1.113\n> --- fe-exec.c\t2001/11/07 19:00:35\n> *************** char\t *const pgresStatus[] = {\n> *** 48,53 ****\n> --- 48,54 ----\n> static void pqCatenateResultError(PGresult *res, const char *msg);\n> static void saveErrorResult(PGconn *conn);\n> static PGresult *prepareAsyncResult(PGconn *conn);\n> + static PGresult *pqExec(PGconn *conn, const char *query, int metadata);\n> static int\taddTuple(PGresult *res, PGresAttValue * tup);\n> static void parseInput(PGconn *conn);\n> static void handleSendFailure(PGconn *conn);\n> *************** static int\tgetRowDescriptions(PGconn *co\n> *** 55,60 ****\n> --- 56,63 ----\n> static int\tgetAnotherTuple(PGconn *conn, int binary);\n> static int\tgetNotify(PGconn *conn);\n> static int\tgetNotice(PGconn *conn);\n> + static char *pqTypeCacheGet(PGconn *conn, Oid typenum);\n> + static void pqTypeCachePut(PGconn *conn, Oid typenum, char *typename);\n> \n> /* ---------------\n> * Escaping arbitrary strings to get valid SQL strings/identifiers.\n> *************** addTuple(PGresult *res, PGresAttValue * \n> *** 609,614 ****\n> --- 612,678 ----\n> \treturn TRUE;\n> }\n> \n> + /* Cache of the correspondence 
between type Oids and\n> + * type names. Without it too many queries can be made to\n> + * retrieve this same information from the catalog over and over.\n> + */\n> + \n> + static char *\n> + pqTypeCacheGet(PGconn *conn, Oid typenum)\n> + {\n> + \tchar *typename = NULL;\n> + \tPGtypecache *tc = conn->typecache;\n> + \n> + \t/* Look for type Oid. */\n> + \twhile (tc != NULL)\n> + \t{\n> + \t\tif (tc->typenum == typenum)\n> + \t\t{\n> + \t\t\ttypename = tc->typename;\n> + \t\t\tbreak;\n> + \t\t}\n> + \t\telse\n> + \t\t\ttc = tc->next;\n> + \t}\n> + \treturn typename;\n> + }\n> + \n> + static void\n> + pqTypeCachePut(PGconn *conn, Oid typenum, char *typename)\n> + {\n> + \tPGtypecache *typetocache;\n> + \n> + \ttypetocache = (PGtypecache *) malloc(sizeof(PGtypecache));\n> + \tif (typetocache == NULL)\n> + \t{\n> + \t\tfprintf(stderr, \"pqTypeCachePut: malloc failed.\\n\");\n> + \t\treturn;\n> + \t}\n> + \t\n> + \ttypetocache->typenum = typenum;\n> + \ttypetocache->typename = strdup(typename);\n> + \ttypetocache->next = conn->typecache;\n> + \tconn->typecache = typetocache;\n> + }\n> + \n> + void\n> + pqTypeCacheClear(PGconn *conn)\n> + {\n> + \tPGtypecache *tc;\n> + \tPGtypecache *ntc;\n> + \n> + \t/* Free all tcache entries (and typenames). */\n> + \ttc = conn->typecache;\n> + \tconn->typecache = NULL;\n> + \twhile (tc != NULL)\n> + \t{\n> + \t\tif (tc->typename)\n> + \t\t\tfree(tc->typename);\n> + \t\tntc = tc->next;\n> + \t\tfree(tc);\n> + \t\ttc = ntc;\n> + \t}\n> + }\n> \n> /*\n> * PQsendQuery\n> *************** PQgetResult(PGconn *conn)\n> *** 1277,1301 ****\n> \treturn res;\n> }\n> \n> \n> /*\n> ! 
* PQexec\n> *\t send a query to the backend and package up the result in a PGresult\n> *\n> * If the query was not even sent, return NULL; conn->errorMessage is set to\n> * a relevant message.\n> * If the query was sent, a new PGresult is returned (which could indicate\n> * either success or failure).\n> * The user is responsible for freeing the PGresult via PQclear()\n> * when done with it.\n> */\n> \n> ! PGresult *\n> ! PQexec(PGconn *conn, const char *query)\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> \tbool\t\tsavedblocking;\n> \n> \t/*\n> \t * we assume anyone calling PQexec wants blocking behaviour, we force\n> --- 1341,1381 ----\n> \treturn res;\n> }\n> \n> + PGresult *\n> + PQexec(PGconn *conn, const char *query)\n> + {\n> + \t/* Don't get metadata. */\n> + \treturn pqExec (conn, query, 0 /* no metadata */);\n> + }\n> \n> + PGresult *\n> + PQexecIncludeMetadata(PGconn *conn, const char *query)\n> + {\n> + \t/* Get metadata as well. */\n> + \treturn pqExec (conn, query, 1 /* with metadata */);\n> + }\n> + \n> /*\n> ! * pqExec\n> *\t send a query to the backend and package up the result in a PGresult\n> *\n> * If the query was not even sent, return NULL; conn->errorMessage is set to\n> * a relevant message.\n> * If the query was sent, a new PGresult is returned (which could indicate\n> * either success or failure).\n> + * If it is called with metadata == 1, the metadata about the column \n> + * results will be obtained and saved in the PGresult.\n> * The user is responsible for freeing the PGresult via PQclear()\n> * when done with it.\n> */\n> \n> ! static PGresult *\n> ! 
pqExec(PGconn *conn, const char *query, int metadata)\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> \tbool\t\tsavedblocking;\n> + \tint\t\t\ti;\n> \n> \t/*\n> \t * we assume anyone calling PQexec wants blocking behaviour, we force\n> *************** PQexec(PGconn *conn, const char *query)\n> *** 1363,1368 ****\n> --- 1443,1501 ----\n> \n> \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> \t\treturn NULL;\n> + \n> + \t/*\n> + \t * If metadata is requested and everything is well, loop through\n> + \t * the result fields grabing the required information.\n> + \t */\n> + \n> + if (metadata && (lastResult->numAttributes > 0))\n> + \t\tfor (i = 0; i < lastResult->numAttributes; i++)\n> + \t\t{\n> + \t\t\tOid typenum;\n> + \t\t\tPGresult *result;\n> + \t\t\tchar *tempname;\n> + \t\t\tstatic char query[] = \"select typname from pg_type where oid = %lu\";\n> + \t\t\tchar *fullquery;\n> + \n> + \t\t\tif ((typenum = lastResult->attDescs[i].typid) == 0)\n> + \t\t\t\tcontinue;\n> + \n> + \t\t\t/* Look up the cache for the type name. */\n> + \t\t\ttempname = pqTypeCacheGet(conn, typenum);\n> + \n> + \t\t\t/* If it is a type that we still don't know the name,\n> + \t\t\t query for the type name and store it in the cache. 
*/\n> + \t\tif (tempname == NULL)\n> + \t\t\t{\n> + \t\t\t\tfullquery = malloc (sizeof(query)\n> + \t\t\t\t\t\t\t\t\t+ 20 /* if Oids become 64 bits */);\n> + \t\t\t\tif (fullquery == NULL)\n> + \t\t\t\t {\n> + \t\t\t\t fprintf(stderr, \"pqExec: malloc failed.\\n\");\n> + \t\t\t\t\t return NULL;\n> + \t\t\t\t }\n> + \t\t\t\t/* If the typename was not in the cache, query the catalog\n> + \t \t\t\t and add it to the cache */\n> + \t\t\t\tsnprintf(fullquery, sizeof(query) + 20, query, typenum);\n> + \t\t\t\tresult = PQexec(conn, fullquery);\n> + \t\t\t\tfree(fullquery);\n> + \t\t\t\tif (!result || PQresultStatus(result) != PGRES_TUPLES_OK)\n> + \t\t\t\t{\n> + \t\t\t\t\tPQclear(result);\n> + \t\t\t\t\tcontinue;\n> + \t\t\t\t}\n> + \t\t\t\tif (PQntuples(result) != 1 || PQnfields(result) != 1) {\n> + \t\t\t\t\tPQclear(result);\n> + \t\t\t\t\tcontinue;\n> + \t\t\t\t}\n> + \t\t\t\tpqTypeCachePut(conn, typenum, PQgetvalue(result, 0, 0));\n> + \t\t\t\ttempname = pqTypeCacheGet(conn, typenum);\n> + \t\t\t}\n> + \n> + \t\t\tlastResult->attDescs[i].atttypname = strdup(tempname);\n> + \t\t}\n> + \n> \treturn lastResult;\n> \n> errout:\n> *************** PQftype(const PGresult *res, int field_n\n> *** 2104,2109 ****\n> --- 2237,2253 ----\n> \t\treturn InvalidOid;\n> }\n> \n> + char *\n> + PQftypeName(const PGresult *res, int field_num)\n> + {\n> + \tif (!check_field_number(res, field_num))\n> + \t\treturn NULL;\n> + \tif (res->attDescs)\n> + \t\treturn res->attDescs[field_num].atttypname;\n> + \telse\n> + \t\treturn NULL;\n> + }\n> + \n> int\n> PQfsize(const PGresult *res, int field_num)\n> {\n> *************** PQfmod(const PGresult *res, int field_nu\n> *** 2124,2129 ****\n> --- 2268,2330 ----\n> \t\treturn res->attDescs[field_num].atttypmod;\n> \telse\n> \t\treturn 0;\n> + }\n> + \n> + int\n> + PQfprecision(const PGresult *res, int field_num)\n> + {\n> + \tint mod;\n> + \tchar *type;\n> + \n> + \tif ((type = PQftypeName(res, field_num)) == NULL)\n> + \t\treturn 0;\n> + \tmod = 
PQfmod(res, field_num);\n> + \n> + \tif (strcmp(type, \"numeric\") == 0)\n> + \t\treturn ((0xFFFF0000) & mod) >> 16;\n> + \telse if (strcmp(type, \"int2\") == 0)\n> + \t\treturn 5;\n> + \telse if (strcmp(type, \"int4\") == 0)\n> + \t\treturn 10;\n> + \telse if (strcmp(type, \"int8\") == 0)\n> + \t\treturn 19; /* It would be 20 if it was unsigned. */\n> + \telse if (strcmp(type, \"float4\") == 0)\n> + \t\treturn 6;\n> + \telse if (strcmp(type, \"float8\") == 0)\n> + \t\treturn 15;\n> + \telse if (strcmp(type, \"varchar\") == 0 ||\n> + \t\t\t strcmp(type, \"bpchar\") == 0 ||\n> + \t\t\t strcmp(type, \"char\") == 0)\n> + \t\treturn mod - 4;\n> + \telse if (strcmp(type, \"varbit\") == 0 ||\n> + \t\t\t strcmp(type, \"bit\") == 0)\n> + \t\treturn mod;\n> + \n> + \treturn -1;\n> + }\n> + \n> + int\n> + PQfscale(const PGresult *res, int field_num)\n> + {\n> + \tint mod;\n> + \tchar *type;\n> + \n> + \tif ((type = PQftypeName(res, field_num)) == NULL)\n> + \t\treturn 0;\n> + \tmod = PQfmod(res, field_num);\n> + \n> + \tif (strcmp(type, \"numeric\") == 0)\n> + \t\treturn ((0x0000FFFF) & mod) - 4;\n> + \telse if (strcmp(type, \"int2\") == 0 ||\n> + \t\t\t strcmp(type, \"int4\") == 0 ||\n> + \t\t\t strcmp(type, \"int8\") == 0)\n> + \t\treturn 0;\n> + \telse if (strcmp(type, \"float4\") == 0)\n> + \t\treturn -1;\n> + \telse if (strcmp(type, \"float8\") == 0)\n> + \t\treturn -1;\n> + \t\t\n> + \treturn -1;\n> }\n> \n> char *\n> Index: fe-misc.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.60\n> diff -c -p -r1.60 fe-misc.c\n> *** fe-misc.c\t2001/11/05 17:46:37\t1.60\n> --- fe-misc.c\t2001/11/07 19:00:35\n> *************** WSSE_GOODEXIT:\n> *** 896,898 ****\n> --- 896,974 ----\n> }\n> \n> #endif\n> + \n> + char *\n> + PQinternal2common(const char *intname)\n> + {\n> + \tstatic char *typename;\n> + \n> + \tif (intname == NULL)\n> + \t\treturn NULL;\n> + \n> + 
\tif (strcmp(intname, \"int8\") == 0)\n> + \t\ttypename = \"bigint\";\n> + \telse if (strcmp(intname, \"bit\") == 0)\n> + \t\ttypename = \"bit\";\n> + \telse if (strcmp(intname, \"varbit\") == 0)\n> + \t\ttypename = \"varbit\";\t/* bit varying */\n> + \telse if (strcmp(intname, \"bool\") == 0)\n> + \t\ttypename = \"boolean\";\n> + \telse if (strcmp(intname, \"box\") == 0)\n> + \t\ttypename = \"box\";\n> + \telse if (strcmp(intname, \"bpchar\") == 0)\n> + \t\ttypename = \"char\";\t\t/* character */\n> + \telse if (strcmp(intname, \"varchar\") == 0)\n> + \t\ttypename = \"varchar\";\t/* character varying */\n> + \telse if (strcmp(intname, \"cidr\") == 0)\n> + \t\ttypename = \"cidr\";\n> + \telse if (strcmp(intname, \"circle\") == 0)\n> + \t\ttypename = \"circle\";\n> + \telse if (strcmp(intname, \"date\") == 0)\n> + \t\ttypename = \"date\";\n> + \telse if (strcmp(intname, \"float8\") == 0)\n> + \t\ttypename = \"double precision\";\n> + \telse if (strcmp(intname, \"inet\") == 0)\n> + \t\ttypename = \"inet\";\n> + \telse if (strcmp(intname, \"int4\") == 0)\n> + \t\ttypename = \"integer\";\n> + \telse if (strcmp(intname, \"interval\") == 0)\n> + \t\ttypename = \"interval\";\n> + \telse if (strcmp(intname, \"line\") == 0)\n> + \t\ttypename = \"line\";\n> + \telse if (strcmp(intname, \"lseg\") == 0)\n> + \t\ttypename = \"lseg\";\n> + \telse if (strcmp(intname, \"macaddr\") == 0)\n> + \t\ttypename = \"macaddr\";\n> + \telse if (strcmp(intname, \"decimal\") == 0)\n> + \t\ttypename = \"numeric\";\n> + \telse if (strcmp(intname, \"numeric\") == 0)\n> + \t\ttypename = \"numeric\";\n> + \telse if (strcmp(intname, \"oid\") == 0)\n> + \t\ttypename = \"oid\";\n> + \telse if (strcmp(intname, \"path\") == 0)\n> + \t\ttypename = \"path\";\n> + \telse if (strcmp(intname, \"point\") == 0)\n> + \t\ttypename = \"point\";\n> + \telse if (strcmp(intname, \"polygon\") == 0)\n> + \t\ttypename = \"polygon\";\n> + \telse if (strcmp(intname, \"float4\") == 0)\n> + \t\ttypename = \"real\";\n> + 
\telse if (strcmp(intname, \"int2\") == 0)\n> + \t\ttypename = \"smallint\";\n> + \telse if (strcmp(intname, \"serial\") == 0)\n> + \t\ttypename = \"serial\";\n> + \telse if (strcmp(intname, \"text\") == 0)\n> + \t\ttypename = \"text\";\n> + \telse if (strcmp(intname, \"time\") == 0)\n> + \t\ttypename = \"time\";\n> + \telse if (strcmp(intname, \"time with time zone\") == 0)\n> + \t\ttypename = \"time with time zone\";\n> + \telse if (strcmp(intname, \"timestamp\") == 0)\n> + \t\ttypename = \"timestamp\";\n> + \telse if (strcmp(intname, \"timestamp with time zone\") == 0)\n> + \t\ttypename = \"timestamp with time zone\";\n> + \telse\n> + \t\ttypename = NULL;\n> + \n> + \treturn typename;\n> + }\n> Index: libpq-fe.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.79\n> diff -c -p -r1.79 libpq-fe.h\n> *** libpq-fe.h\t2001/11/05 17:46:37\t1.79\n> --- libpq-fe.h\t2001/11/07 19:00:35\n> *************** extern\t\t\"C\"\n> *** 256,261 ****\n> --- 256,262 ----\n> \n> /* Simple synchronous query */\n> \textern PGresult *PQexec(PGconn *conn, const char *query);\n> + \textern PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> \textern PGnotify *PQnotifies(PGconn *conn);\n> \textern void PQfreeNotify(PGnotify *notify);\n> \n> *************** extern\t\t\"C\"\n> *** 303,315 ****\n> \textern char *PQfname(const PGresult *res, int field_num);\n> \textern int\tPQfnumber(const PGresult *res, const char *field_name);\n> \textern Oid\tPQftype(const PGresult *res, int field_num);\n> \textern int\tPQfsize(const PGresult *res, int field_num);\n> \textern int\tPQfmod(const PGresult *res, int field_num);\n> \textern char *PQcmdStatus(PGresult *res);\n> \textern char *PQoidStatus(const PGresult *res);\t\t/* old and ugly */\n> \textern Oid\tPQoidValue(const PGresult *res);\t\t/* new and improved */\n> ! \textern char *PQcmdTuples(PGresult *res);\n> ! 
\textern char *PQgetvalue(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetlength(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetisnull(const PGresult *res, int tup_num, int field_num);\n> \n> --- 304,319 ----\n> \textern char *PQfname(const PGresult *res, int field_num);\n> \textern int\tPQfnumber(const PGresult *res, const char *field_name);\n> \textern Oid\tPQftype(const PGresult *res, int field_num);\n> + \textern char\t*PQftypeName(const PGresult *res, int field_num);\n> \textern int\tPQfsize(const PGresult *res, int field_num);\n> \textern int\tPQfmod(const PGresult *res, int field_num);\n> + \textern int\tPQfprecision(const PGresult *res, int field_num);\n> + \textern int\tPQfscale(const PGresult *res, int field_num);\n> \textern char *PQcmdStatus(PGresult *res);\n> \textern char *PQoidStatus(const PGresult *res);\t\t/* old and ugly */\n> \textern Oid\tPQoidValue(const PGresult *res);\t\t/* new and improved */\n> ! \textern char\t*PQcmdTuples(PGresult *res);\n> ! 
\textern char\t*PQgetvalue(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetlength(const PGresult *res, int tup_num, int field_num);\n> \textern int\tPQgetisnull(const PGresult *res, int tup_num, int field_num);\n> \n> *************** extern\t\t\"C\"\n> *** 371,376 ****\n> --- 375,383 ----\n> /* Get encoding id from environment variable PGCLIENTENCODING */\n> \textern int\tPQenv2encoding(void);\n> \n> + \t/* Convert internal type name to common type name */\n> + \textern char\t*PQinternal2common(const char *intname);\n> + \t\n> #ifdef __cplusplus\n> }\n> #endif\n> Index: libpq-int.h\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.44\n> diff -c -p -r1.44 libpq-int.h\n> *** libpq-int.h\t2001/11/05 17:46:38\t1.44\n> --- libpq-int.h\t2001/11/07 19:00:35\n> *************** union pgresult_data\n> *** 75,88 ****\n> \tchar\t\tspace[1];\t\t/* dummy for accessing block as bytes */\n> };\n> \n> ! /* Data about a single attribute (column) of a query result */\n> \n> typedef struct pgresAttDesc\n> {\n> ! \tchar\t *name;\t\t\t/* type name */\n> \tOid\t\t\ttypid;\t\t\t/* type id */\n> \tint\t\t\ttyplen;\t\t\t/* type size */\n> \tint\t\t\tatttypmod;\t\t/* type-specific modifier info */\n> }\tPGresAttDesc;\n> \n> /* Data for a single attribute of a single tuple */\n> --- 75,91 ----\n> \tchar\t\tspace[1];\t\t/* dummy for accessing block as bytes */\n> };\n> \n> ! /* Data about a single attribute (column) of a query result.\n> ! * The type name is only available if PQexecIncludeMetadata() was used.\n> ! */\n> \n> typedef struct pgresAttDesc\n> {\n> ! 
\tchar\t *name;\t\t\t/* column name */\n> \tOid\t\t\ttypid;\t\t\t/* type id */\n> \tint\t\t\ttyplen;\t\t\t/* type size */\n> \tint\t\t\tatttypmod;\t\t/* type-specific modifier info */\n> + \tchar\t *atttypname;\t\t/* type name */\n> }\tPGresAttDesc;\n> \n> /* Data for a single attribute of a single tuple */\n> *************** typedef struct pgLobjfuncs\n> *** 191,196 ****\n> --- 194,208 ----\n> \tOid\t\t\tfn_lo_write;\t/* OID of backend function LOwrite\t\t*/\n> }\tPGlobjfuncs;\n> \n> + /* Entry in the cache of the correspondence between type Oids and type names.\n> + */\n> + typedef struct pgTypeCache\n> + {\n> + \tOid\t\t\t\t\ttypenum;\t/* OID of type\t\t*/\n> + \tchar\t\t\t *typename;\t/* name of type\t\t*/\n> + \tstruct pgTypeCache *next;\t\t/* name of type\t\t*/\n> + }\t\t\tPGtypecache;\n> + \n> /* PGconn stores all the state data associated with a single connection\n> * to a backend.\n> */\n> *************** struct pg_conn\n> *** 240,245 ****\n> --- 252,258 ----\n> \tchar\t\tcryptSalt[2];\t/* password salt received from backend */\n> \tPGlobjfuncs *lobjfuncs;\t\t/* private state for large-object access\n> \t\t\t\t\t\t\t\t * fns */\n> + \tPGtypecache *typecache;\t\t/* cached types for this connection. 
*/\n> \n> \t/* Buffer for data received from backend and not yet processed */\n> \tchar\t *inBuffer;\t\t/* currently allocated buffer */\n> *************** extern void pqSetResultError(PGresult *r\n> *** 305,310 ****\n> --- 318,324 ----\n> extern void *pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary);\n> extern char *pqResultStrdup(PGresult *res, const char *str);\n> extern void pqClearAsyncResult(PGconn *conn);\n> + extern void pqTypeCacheClear(PGconn *conn);\n> \n> /* === in fe-misc.c === */\n> \n> \n\n> Index: libpq.sgml\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\n> retrieving revision 1.72\n> diff -c -p -r1.72 libpq.sgml\n> *** libpq.sgml\t2001/09/13 15:55:23\t1.72\n> --- libpq.sgml\t2001/11/07 19:06:52\n> *************** PGresult *PQexec(PGconn *conn,\n> *** 728,733 ****\n> --- 728,748 ----\n> \t <function>PQerrorMessage</function> to get more information about the error.\n> </para>\n> </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQexecIncludeMetadata</function>\n> + Submit a query to the server and wait for the result;\n> + include extra metadata about the result fields.\n> + \t This makes available information such as the type name,\n> + \t precision and scale for each field in the result.\n> + <synopsis>\n> + PGresult *PQexecIncludeMetadata(PGconn *conn,\n> + const char *query);\n> + </synopsis>\n> + Used the same way as PQexec().\n> + </para>\n> + </listitem>\n> </itemizedlist>\n> \n> <para>\n> *************** You can query the system table <literal>\n> *** 964,969 ****\n> --- 979,986 ----\n> the name and properties of the various data types. 
The <acronym>OID</acronym>s\n> of the built-in data types are defined in <filename>src/include/catalog/pg_type.h</filename>\n> in the source tree.\n> + The function <function>PQftypename</function> can be used to retrieve the\n> + type name if the result was obtained via <function>PQexecIncludeMetadata</function>.\n> </para>\n> </listitem>\n> \n> *************** extracts data from a <acronym>BINARY</ac\n> *** 1010,1015 ****\n> --- 1027,1126 ----\n> </para>\n> </listitem>\n> </itemizedlist>\n> + \n> + <para>\n> + The following functions only produce meaningful results if \n> + <function>PQexecIncludeMetadata</function> was used\n> + (as opposed to <function>PQexec</function>).\n> + </para>\n> + \n> + <itemizedlist>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQftypename</function>\n> + Returns the name of the column type as a string.\n> + Field indices start at 0.\n> + <synopsis>\n> + char *PQftypename(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the name of the column type as a string.\n> + Copy the string if needed -- do not modify, free()\n> + or assume its persistence. The internal type name is\n> + returned; use PQtypeint2ext() to convert to a more SQL-ish style.\n> + \t NULL is returned if the field type name is not availble.\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQfprecision</function>\n> + Returns the precision of the field\n> + associated with the given field index.\n> + Field indices start at 0.\n> + <synopsis>\n> + int PQfprecision(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the precision of the field\n> + associated with the given field index.\n> + \t For numeric types (INTEGER, FLOAT, etc.), PQfprecision returns the\n> + \t number of decimal digits in the specified field. 
For character and bit\n> + \t string types, such as VARCHAR and BIT, PQfprecision returns the\n> + \t maximum number of characters/bits allowed in the specified field.\n> + \t PQfprecision returns 0 if precision information is not available and\n> + \t -1 if precision is not applicable to the field in question. The latter\n> + \t will be the case if the type of the field is POINT, for example. \n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQfscale</function>\n> + Returns the scale of the field\n> + associated with the given field index.\n> + Field indices start at 0.\n> + <synopsis>\n> + int PQfscale(const PGresult *res,\n> + int field_index);\n> + </synopsis>\n> + \t Returns the scale of the field\n> + associated with the given field index.\n> + \t PQfscale returns the scale of the field associated with the given\n> + \t field index. Scale is the number of digits after the decimal point,\n> + \t so this function is useful only with fields that are of a numeric\n> + \t type (INTEGER, FLOAT, NUMERIC, etc.). -1 is returned if scale is not\n> + \t applicable to the field type. 0 is returned if scale information is\n> + \t not available. 
\n> + </para>\n> + </listitem>\n> + </itemizedlist>\n> + \n> + <para>\n> + Use the function below to convert internal type names (like the\n> + ones returned by <function>PQftypename</function>) into something\n> + more user-friendly.\n> + </para>\n> + \n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <function>PQtypeint2ext</function>\n> + Converts an internal type name into a SQL-ish\n> + type name.\n> + <synopsis>\n> + char *PQtypeint2ext(const char **intname);\n> + </synopsis>\n> + \t Converts an internal type name into a SQL-ish\n> + type name.\n> + NULL is returned if the internal type is not recognized\n> + (which will be the case if the type is a UDT).\n> + </para>\n> + </listitem>\n> + \n> + </itemizedlist>\n> + \n> </sect2>\n> \n> <sect2 id=\"libpq-exec-select-values\">\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 4 Mar 2002 14:57:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [PATCHES] Libpq support for precision and scale]" }, { "msg_contents": "\nI have reviewed this patch and it clearly has features I would like to get\ninto 7.3. We have been pushing too much type knowledge into apps and\nthis will give people a libpq solution that we can manage. Here are my\ncomments.\n\n> > These seem okay, but I don't like the API detail that \"0 is returned if\n> > information is not available\". 0 is a valid result, at least for\n> > PQfscale. I would recommend returning -1. 
If you really want to\n> > distinguish bad parameters from non-numeric datatype, then return -1\n> > and -2 for those two cases.\n> > \n> \n> This seems to be the libpq convention. On calls such as PQfsize and\n> PQfmod, for instance, zero is a valid result and is also returned if\n> the information is not available.\n> \n> Please note that we did not make this convention -- our original version\n> did return -1. But we decided that following a different rule for these\n> two routines was even more confusing. And change the return convention\n> for the whole set of functions at this point seems out of the question.\n> \n> P.S.: Maybe whoever originally designed the libpq interface was trying\n> to accomplish some sort of \"soft fail\" by returning zero. Just a guess\n> of course.\n\nI think the problem stems from the fact that some of our functions\nlegitimately can return -1, so zero was chosen as a failure code, while\nothers use -1 for failure. In fact, Tom mentioned that there are now\nsome types that have a valid atttypmod of 0 (timestamp?) meaning we may\nhave a problem there anyway. Any ideas on how to fix it?\n\nIn hindsight, we should have defined a macro equal to -2 and report that\nas the failure return for all functions that need it.\n\n\n> > > Most programs won't need this information and may not be willing\n> > > to pay the overhead for metadata retrieval. Thus, we added\n> > > an alternative function to be used instead of PQexec if one\n> > > wishes extra metadata to be retrieved along with the query\n> > > results:\n> > \n> > > PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> > \n> > This strikes me as very ugly, and unnecessary, and inefficient since\n> > it retrieves metadata for all columns even though the client might\n> > only need to know about some of them. \n> \n> This part I would not worry about. 
The new routines are for result sets\n> (not arbitrary columns) so the fields present in it have already been\n> pre-selected. Also, this kind of information is useful for tools as\n> they don't know beforehand what the fields will be. In all cases\n> we can think of, the tool will always want metadata about all the\n> fields.\n\n\nI hesitate to add another PQexec function. That could complicate the\nAPI.\n\n> > An even worse problem is that\n> > it'll fail entirely with a multi-query query string.\n> > \n> \n> This is a bummer. But I see no solution for this besides documenting\n> the restriction in the manual. If I am not mistaken we already have\n> the limitation of returning just the last result anyway (we just\n> collect the error messages).\n> \n> \n> > What I think would be cleaner would be to do the metadata queries\n> > on-the-fly as needed. With the caching that you already have in there,\n> > on-the-fly queries wouldn't be any less efficient.\n> > \n> > But to do a metadata query we must have access to the connection.\n> > We could handle it two ways:\n> > \n> > 1. Add a PGconn parameter to the querying functions.\n> > \n> \n> The problem is that results may be kept longer than connections\n> (see below). The current solution did not require the connection\n> as the metadata is for the result set, not tables.\n> \n> The PGconn parameter would be reasonable for retrieving metadata\n> about table columns, for instance.\n\n\nI think this is the way to go. We just require the connection be valid.\nIf it isn't, we throw an error. I don't see this as a major restriction.\nIn fact, it would be interesting to just call this function\nautomatically when someone requests type info.\n\n\n> > 2. 
Make use of the PGconn link that's stored in PGresults, and\n> > specify that these functions can only be used on PGresults that\n> > came from a still-open connection.\n> > \n> \n> That field has been deprecated (see comments in the source code) \n> because a result may be kept even after the connection is closed.\n> \n> \n> > I think I prefer the first, since it makes it more visible to the\n> > programmer that queries may get executed. But it's a judgment call\n> > probably; I could see an argument for the second as well. Any comments,\n> > anyone?\n> > \n> \n> It would have to be the former (to avoid the stale pointer problem).\n> \n> But requiring a connection adds a restriction to the use of this info\n> and makes it have a different life span than the object it refers to\n> (a PGresult), which is very weird.\n\nYes, but how often is this going to happen? If we can throw a reliable\nerror message when it happens, it seems quite safe. \"If you are going to\nget type info, keep the connection open so we can get it.\"\n\n> > > The PQftypename function returns the internal PostgreSQL type name.\n> > > As some programs may prefer something more user friendly than the\n> > > internal type names, we've thrown in a conversion routine as well:\n> > > char *PQtypeint2ext(const char *intname);\n> > > This routine converts from the internal type name to a more user\n> > > friendly type name convention.\n> > \n> > This seems poorly designed. Pass it the type OID and typmod, both of\n> > which are readily available from a PQresult without extra computation.\n> > That will let you call the backend's format_type ... of course you'll\n> > need a PGconn too for that.\n> > \n> \n> Requiring the PGconn is bad. 
But we still could have a PQFtypeExt()\n> returning the \"external\" type if people prefer it that way.\n> We thought that this should be kept as an explicit conversion\n> operation to make clear the distinction of what the backend knows\n> about and this outside world view of things.\n\nIf they want more info about the result set, keeping the connection open\nso we can get that information seems perfectly logical. If we put it in\nthe manual in its own section as MetaData functions, and mention they\nneed a valid connection to work, I think it will be clear to everyone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 6 Mar 2002 23:09:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have reviewed this patch and clearly has features I would like to get\n> into 7.3. We have been pushing too much type knowledge into apps and\n> this will give people a libpq solution that we can manage. Here are my\n> comments.\n> \n\nWe definitively want this to go into 7.3. I am planning on update\nthis patch next week.\n\n\n> > > These seem okay, but I don't like the API detail that \"0 is returned if\n> > > information is not available\". 0 is a valid result, at least for\n> > > PQfscale. I would recommend returning -1. If you really want to\n> > > distinguish bad parameters from non-numeric datatype, then return -1\n> > > and -2 for those two cases.\n> > >\n> >\n> > This seems to be the libpq convention. On calls such as PQfsize and\n> > PQfmod, for instance, zero is a valid result and is also returned if\n> > the information is not available.\n> >\n> > Please note that we did not make this convention -- our original version\n> > did return -1. 
But we decided that following a different rule for these\n> > two routines was even more confusing. And change the return convention\n> > for the whole set of functions at this point seems out of the question.\n> >\n> > P.S.: Maybe whoever originally designed the libpq interface was trying\n> > to accomplish some sort of \"soft fail\" by returning zero. Just a guess\n> > of course.\n> \n> I think the problem stems from the fact that some of our functions\n> legitimately can return -1, so zero was chosen as a failure code, while\n> others use -1 for failure. In fact, Tom mentioned that there are now\n> some types that have a valid atttypmod of 0 (timestamp?) meaning we may\n> have a problem there anyway. Any ideas on how to fix it?\n> \n\nWe have agreed to change the error return code to -2. It will be in the\nREPOST of the patch next week.\n\n> In hindsight, we should have defined a macro equal to -2 and report that\n> as the failure return for all functions that need it.\n> \n\nNote that -2 is a valid result for some other functions :-(\n\nThere is no way of picking a value that works for all. Maybe these\nfunctions should just be returning a value and setting some global\n'libpqerr' variable that had to be set to assure the result was valid.\nAnyway, it is too late for that now as backwards compatibility makes\nit difficult to change the API that much.\n\n\n> > > > Most programs won't need this information and may not be willing\n> > > > to pay the overhead for metadata retrieval. 
Thus, we added\n> > > > an alternative function to be used instead of PQexec if one\n> > > > wishes extra metadata to be retrieved along with the query\n> > > > results:\n> > >\n> > > > PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> > >\n> > > This strikes me as very ugly, and unnecessary, and inefficient since\n> > > it retrieves metadata for all columns even though the client might\n> > > only need to know about some of them.\n> >\n> > This part I would not worry about. The new routines are for result sets\n> > (not arbitrary columns) so the fields present in it have already been\n> > pre-selected. Also, this kind of information is useful for tools as\n> > they don't know beforehand what the fields will be. In all cases\n> > we can think of, the tool will always want metadata about all the\n> > fields.\n> \n> I hesitate to add another PQexec function. That could complicate the\n> API.\n> \n\nWe have agreed to add another call to set a flag for including the\nmetadata on the PQexec call (which would make it work like the\nPQexecIncludeMetadata described above). It will be in the REPOST patch.\n\nQuestion: should it affect only the next PQexec(), or should we require\nthe user to reset it?\n\nHow do we call it? I am thinking of PQsetIncludeMetadata()\nName suggestions for this call are welcome.\n\n> > > An even worse problem is that\n> > > it'll fail entirely with a multi-query query string.\n> > >\n> >\n> > This is a bummer. But I see no solution for this besides documenting\n> > the restriction in the manual. If I am not mistaken we already have\n> > the limitation of returning just the last result anyway (we just\n> > collect the error messages).\n> >\n> >\n> > > What I think would be cleaner would be to do the metadata queries\n> > > on-the-fly as needed. 
With the caching that you already have in there,\n> > > on-the-fly queries wouldn't be any less efficient.\n> > >\n> > > But to do a metadata query we must have access to the connection.\n> > > We could handle it two ways:\n> > >\n> > > 1. Add a PGconn parameter to the querying functions.\n> > >\n> >\n> > The problem is that results may be kept longer than connections\n> > (see below). The current solution did not require the connection\n> > as the metadata is for the result set, not tables.\n> >\n> > The PGconn parameter would be reasonable for retrieving metadata\n> > about table columns, for instance.\n> \n> I think this is the way to go. We just require the connection be valid.\n> If it isn't, we throw an error. I don't see this as a major restriction.\n> In fact, it would be interesting to just call this function\n> automatically when someone requests type info.\n> \n\nSorry but we disagree on this one. The metadata is related (part of)\na result, which is a different object, with a different life span.\nThere is no way to be certain that the connection is still around\nand no reliable way of testing for it. If the conn field is a\ndangling pointer any check based on it depends on that heap memory\nnot being reallocated already. Well, we could keep a list of results\nassociated with a connection and go cleaning the conn pointers in it \n_if_ the user uses PQfinish() properly. A little bit dangerous IMO.\n\nI would stick with Tom Lane's decision of deprecating pconn and leave\nthe metadata independent of it.\n\n\n> > > 2. 
Make use of the PGconn link that's stored in PGresults, and\n> > > specify that these functions can only be used on PGresults that\n> > > came from a still-open connection.\n> > >\n> >\n> > That field has been deprecated (see comments in the source code)\n> > because a result may be kept even after the connection is closed.\n> >\n> >\n> > > I think I prefer the first, since it makes it more visible to the\n> > > programmer that queries may get executed. But it's a judgment call\n> > > probably; I could see an argument for the second as well. Any comments,\n> > > anyone?\n> > >\n> >\n> > It would have to be the former (to avoid the stale pointer problem).\n> >\n> > But requiring a connection adds a restriction to the use of this info\n> > and makes it have a different life span than the object it refers to\n> > (a PGresult), which is very weird.\n> \n> Yes, but how often is this going to happen? If we can throw a reliable\n> error message when it happens, it seems quite safe. \"If you are going to\n> get type info, keep the connection open so we can get it.\"\n> \n\nThere is no reliable way of detecting this error (see above).\n\n\n> > > > The PQftypename function returns the internal PostgreSQL type name.\n> > > > As some programs may prefer something more user friendly than the\n> > > > internal type names, we've thrown in a conversion routine as well:\n> > > > char *PQtypeint2ext(const char *intname);\n> > > > This routine converts from the internal type name to a more user\n> > > > friendly type name convention.\n> > >\n> > > This seems poorly designed. Pass it the type OID and typmod, both of\n> > > which are readily available from a PQresult without extra computation.\n> > > That will let you call the backend's format_type ... of course you'll\n> > > need a PGconn too for that.\n> > >\n> >\n> > Requiring the PGconn is bad. 
But we still could have a PQFtypeExt()\n> > returning the \"external\" type if people prefer it that way.\n> > We thought that this should be kept as an explicit conversion\n> > operation to make clear the distinction of what the backend knows\n> > about and this outside world view of things.\n> \n> If they want more info about the result set, keeping the connection open\n> so we can get that information seems perfectly logical. If we put it in\n> the manual in its own section as MetaData functions, and mention they\n> need a valid connection to work, I think it will be clear to everyone.\n> \n\nThis may be possible for this specific conversion routine. The\nadvantage\nis that we don't need to keep this translation in the clients (so we\ndon't\nhave to track changes etc). I will take a look into this possibility.\nWe would have conn as a parameter for this call though (will not use the\ndangling pointer inside the result). \n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Thu, 07 Mar 2002 20:34:50 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "Fernando Nasser wrote:\n> Bruce Momjian wrote:\n> > \n> > I have reviewed this patch and clearly has features I would like to get\n> > into 7.3. We have been pushing too much type knowledge into apps and\n> > this will give people a libpq solution that we can manage. Here are my\n> > comments.\n> > \n> \n> We definitively want this to go into 7.3. I am planning on update\n> this patch next week.\n\nGreat.\n\n> > I hesitate to add another PQexec function. That could complicate the\n> > API.\n> > \n> \n> We have agreed to add another call to set a flag for including the\n> metadata on the PQexec call (which would make it work like the\n> PQexecIncludeMetadata described above). 
It will be in the REPOST patch.\n> \n> Question: should it affect only the next PQexec(), or should we require\n> the user to reset it?\n> \n> How do we call it? I am thinking of PQsetIncludeMetadata()\n> Name suggestions for this call are welcome.\n\nUh, is it more efficient to do the setting before the query is called? \nIf so, I guess it would remain active until you turn it off. That\nseems the clearest. I like the separate function to turn it on.\n\n> > > The PGconn parameter would be reasonable for retrieving metadata\n> > > about table columns, for instance.\n> > \n> > I think this is the way to go. We just require the connection be valid.\n> > If it isn't, we throw an error. I don't see this as a major restriction.\n> > In fact, it would be interesting to just call this function\n> > automatically when someone requests type info.\n> > \n> \n> Sorry but we disagree on this one. The metadata is related (part of)\n> a result, which is a different object, with a different life span.\n> There is no way to be certain that the connection is still around\n> and no reliable way of testing for it. If the conn field is a\n> dangling pointer any check based on it depends on that heap memory\n> not being reallocated already. Well, we could keep a list of results\n> associated with a connection and go cleaning the conn pointers in it \n> _if_ the user uses PQfinish() properly. A little bit dangerous IMO.\n> \n> I would stick with Tom Lane's decision of deprecating pconn and leave\n> the metadata independent of it.\n\nOh, no reliable way to determine the error; that is bad.\n\nDoes your new PQsetIncludeMetadata() eliminate the need for the\nconnection pointer? If so, it is clearly better than the connection\nparameter as you suggest. I guess I am getting confused.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 20:46:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> We have agreed to add another call to set a flag for including the\n> metadata on the PQexec call (which would make it work like the\n> PQexecIncludeMetadata described above). It will be in the REPOST patch.\n\nThat works for me. Among other things, it solves the problem where the\ncode that wants the metadata is a layer or two above the place that's\nactually issuing PQexec. Setting a persistent option in the PGconn\nobject gets around the difficulty that the caller of PQexec may not know\nmetadata will be wanted later.\n\n> Question: should it affect only the next PQexec(), or should we require\n> the user to reset it?\n\nIt should be persistent till reset, see above.\n\n> An even worse problem is that\n> it'll fail entirely with a multi-query query string.\n\nI'm still quite unhappy about this; it more or less destroys the layer\nindependence mentioned above. Please think harder. Perhaps it could\nbe set up so that metadata is only collected for the last result of a\nquery string, after you determine that there are no more results?\nWhich is still not great, but better than failing outright with\nmulti-query strings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Mar 2002 20:46:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale " }, { "msg_contents": "BTW, I also had a bunch of concerns having to do with odd-seeming\nchoices about what information would be wired into libpq and what\nwould be retrieved at runtime from the backend. I don't recall the\ndetails at the moment, but I want to raise a flag that that is still\nan issue for me. 
I'd like to see some explicit design decisions\nabout what information will be treated in which way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Mar 2002 20:53:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale " }, { "msg_contents": "Tom Lane wrote:\n> BTW, I also had a bunch of concerns having to do with odd-seeming\n> choices about what information would be wired into libpq and what\n> would be retrieved at runtime from the backend. I don't recall the\n> details at the moment, but I want to raise a flag that that is still\n> an issue for me. I'd like to see some explicit design decisions\n> about what information will be treated in which way.\n\nI noticed that too, and looked into it. I didn't see any hard-wired\noids (at least that I remember), but I did see cases where the\nscale/precision results had to be accessed based on the specific type\ninvolved, e.g. NUMERIC. I don't see a way around this, and in fact most\npeople are doing this type-specific stuff in their apps. The only\nother solution I can see is adding a backend function that does these\ntype-specific manipulations and returns them to the client. This seems\nquite attractive, especially considering changes in internal type\nrepresentations between releases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 7 Mar 2002 20:57:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > We have agreed to add another call to set a flag for including the\n> > metadata on the PQexec call (which would make it work like the\n> > PQexecIncludeMetadata described above). It will be in the REPOST patch.\n> \n> That works for me. Among other things, it solves the problem where the\n> code that wants the metadata is a layer or two above the place that's\n> actually issuing PQexec. Setting a persistent option in the PGconn\n> object gets around the difficulty that the caller of PQexec may not know\n> metadata will be wanted later.\n> \n> > Question: should it affect only the next PQexec(), or should we require\n> > the user to reset it?\n> \n> It should be persistent till reset, see above.\n> \n\nAgreed.\n\n\n> > An even worse problem is that\n> > it'll fail entirely with a multi-query query string.\n> \n> I'm still quite unhappy about this; it more or less destroys the layer\n> independence mentioned above. Please think harder. Perhaps it could\n> be set up so that metadata is only collected for the last result of a\n> query string, after you determine that there are no more results?\n> Which is still not great, but better than failing outright with\n> multi-query strings.\n> \n\nI will look into doing like you suggest.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. 
E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Thu, 07 Mar 2002 21:02:25 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" }, { "msg_contents": "\nShould this capability be added some day?\n\n---------------------------------------------------------------------------\n\nFernando Nasser wrote:\n> \n> Tom Lane wrote:\n> > \n> > Fernando Nasser <fnasser@cygnus.com> writes:\n> > > This is a patch that was posted some time ago to pgsql-patches and\n> > > no one has commented on it.\n> > > It adds something that JDBC has that is not present in libpq (see\n> below).\n> > > Is it OK for inclusion?\n> > \n> > Here are some comments ...\n> > \n> \n> Thanks.\n> \n> > > int PQfprecision(const PGresult *res, int field_num);\n> > > int PQfscale(const PGresult *res, int field_num);\n> > \n> > > Return Scale and Precision of the type respectively.\n> > \n> > These seem okay, but I don't like the API detail that \"0 is returned if\n> > information is not available\". 0 is a valid result, at least for\n> > PQfscale. I would recommend returning -1. If you really want to\n> > distinguish bad parameters from non-numeric datatype, then return -1\n> > and -2 for those two cases.\n> > \n> \n> This seems to be the libpq convention. On calls such as PQfsize and\n> PQfmod, for instance, zero is a valid result and is also returned if\n> the information is not available.\n> \n> Please note that we did not make this convention -- our original version\n> did return -1. But we decided that following a different rule for these\n> two routines was even more confusing. And change the return convention\n> for the whole set of functions at this point seems out of the question.\n> \n> P.S.: Maybe whoever originally designed the libpq interface was trying\n> to accomplish some sort of \"soft fail\" by returning zero. 
Just a guess\n> of course.\n> \n> \n> > > Most programs won't need this information and may not be willing\n> > > to pay the overhead for metadata retrieval. Thus, we added\n> > > an alternative function to be used instead of PQexec if one\n> > > wishes extra metadata to be retrieved along with the query\n> > > results:\n> > \n> > > PGresult *PQexecIncludeMetadata(PGconn *conn, const char *query);\n> > \n> > This strikes me as very ugly, and unnecessary, and inefficient since\n> > it retrieves metadata for all columns even though the client might\n> > only need to know about some of them. \n> \n> This part I would not worry about. The new routines are for result sets\n> (not arbitrary columns) so the fields present in it have already been\n> pre-selected. Also, this kind of information is useful for tools as\n> they don't know beforehand what the fields will be. In all cases\n> we can think of, the tool will always want metadata about all the\n> fields.\n> \n> \n> > An even worse problem is that\n> > it'll fail entirely with a multi-query query string.\n> > \n> \n> This is a bummer. But I see no solution for this besides documenting\n> the restriction in the manual. If I am not mistaken we already have\n> the limitation of returning just the last result anyway (we just\n> collect the error messages).\n> \n> \n> > What I think would be cleaner would be to do the metadata queries\n> > on-the-fly as needed. With the caching that you already have in there,\n> > on-the-fly queries wouldn't be any less efficient.\n> > \n> > But to do a metadata query we must have access to the connection.\n> > We could handle it two ways:\n> > \n> > 1. Add a PGconn parameter to the querying functions.\n> > \n> \n> The problem is that results may be kept longer than connections\n> (see below). 
The current solution did not require the connection\n> as the metadata is for the result set, not tables.\n> \n> The PGconn parameter would be reasonable for retrieving metadata\n> about table columns, for instance.\n> \n> \n> > 2. Make use of the PGconn link that's stored in PGresults, and\n> > specify that these functions can only be used on PGresults that\n> > came from a still-open connection.\n> > \n> \n> That field has been deprecated (see comments in the source code) \n> because a result may be kept even after the connection is closed.\n> \n> \n> > I think I prefer the first, since it makes it more visible to the\n> > programmer that queries may get executed. But it's a judgment call\n> > probably; I could see an argument for the second as well. Any comments,\n> > anyone?\n> > \n> \n> It would have to be the former (to avoid the stale pointer problem).\n> \n> But requiring a connection adds a restriction to the use of this info\n> and makes it have a different life span than the object it refers to\n> (a PGresult), which is very weird.\n> \n> \n> > > The PQftypename function returns the internal PostgreSQL type name.\n> > > As some programs may prefer something more user friendly than the\n> > > internal type names, we've thrown in a conversion routine as well:\n> > > char *PQtypeint2ext(const char *intname);\n> > > This routine converts from the internal type name to a more user\n> > > friendly type name convention.\n> > \n> > This seems poorly designed. Pass it the type OID and typmod, both of\n> > which are readily available from a PQresult without extra computation.\n> > That will let you call the backend's format_type ... of course you'll\n> > need a PGconn too for that.\n> > \n> \n> Requiring the PGconn is bad. 
But we still could have a PQFtypeExt()\n> returning the \"external\" type if people prefer it that way.\n> We thought that this should be kept as an explicit conversion\n> operation to make clear the distinction of what the backend knows\n> about and this outside world view of things.\n> \n> \n> \n> -- \n> Fernando Nasser\n> Red Hat Canada Ltd. E-Mail: fnasser@redhat.com\n> 2323 Yonge Street, Suite #300\n> Toronto, Ontario M4P 2C9\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 25 Aug 2002 16:29:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Libpq support for precision and scale" } ]
[ { "msg_contents": "I did some \"benchmarks\" to check whether --enable-locale with LC_ALL=C is\njust as fast as --disable-locale, to possibly justify making locale\nsupport the default. This test only covers locale-aware comparisons,\nwhich seems to be the critical aspect for all intents and purposes.\n\nI loaded a table of a single text column with 454240 rows of English\nwords. The table had a size of 21.5 MB. The values were explicitly\nde-sorted, but the order was the same across all test runs. Then I ran\nSELECT * FROM test ORDER BY 1; and timed the wall-clock response time a\nfew times. All configuration parameters were left at the default.\n\nThe averaged results follow. Some logarithmic buffering cleverness\nappeared to surface, but the results are still distinct enough to be\nuseful.\n\nno locale:\t58s\nlocale=C:\t78s\t(ca. 33% slower)\nlocale=en_US:\t118s\t(ca. 100% slower)\n\nThis confused me, because in my C library a strcoll() call with locale=C\nis handed to strcmp() quite directly. A look into varlena.c:varstr_cmp()\nshows that the locale-aware path does some extra copying because there is\nno strncoll() function we can use with non-terminated strings.\n\nFor testing's sake I replaced the two palloc() calls in that function with\nalloca(), which is presumably the fastest possible memory allocator.\nResult:\n\nlocale=C,alloca:\t67s\t(ca. 15% slower)\n\nThis shows that we're wasting quite a bit of time allocating memory --\nprobably not only in this place. I'm pretty sure that the majority of the\nrest of the gap comes from the memcpy() operations. 
Not that there's a\nwhole lot we can do about either of these things.\n\nHowever, I feel that we could reasonably cope with this situation by\nreplacing\n\n#ifdef USE_LOCALE\n/* locale-aware code */\n#else\n/* non-locale code */\n#endif\n\nwith\n\nif (locale_is_not_C)\n{\n /* locale-ware code */\n}\nelse\n{\n /* non-locale code */\n}\n\nThis practice should have minuscule impact, and it's probably the plan for\nthe multibyte side of things as well.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 26 Nov 2001 20:15:33 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Locale timings" }, { "msg_contents": "This is a common way of doing things inside glibc, and the happy result is that \nif you really want to build a non-locale-aware system, you can use a \ncompile-time option that replaces the \"locale_is_not_C\" test with a constant. \nIt makes for more maintainable code because there's less chance for bitrot in \nthe usual case.\n\nM\n\nPeter Eisentraut wrote:\n\n> I did some \"benchmarks\" to check whether --enable-locale with LC_ALL=C is\n> just as fast as --disable-locale, to possibly justify making locale\n> support the default. This test only covers locale-aware comparisons,\n> which seems to be the critical aspect for all intents and purposes.\n> \n> I loaded a table of a single text column with 454240 rows of English\n> words. The table had a size of 21.5 MB. The values were explicitly\n> de-sorted, but the order was the same across all test runs. Then I ran\n> SELECT * FROM test ORDER BY 1; and timed the wall-clock response time a\n> few times. All configuration parameters were left at the default.\n> \n> The averaged results follow. Some logarithmic buffering cleverness\n> appeared to surface, but the results are still distinct enough to be\n> useful.\n> \n> no locale:\t58s\n> locale=C:\t78s\t(ca. 33% slower)\n> locale=en_US:\t118s\t(ca. 
100% slower)\n> \n> This confused me, because in my C library a strcoll() call with locale=C\n> is handed to strcmp() quite directly. A look into varlena.c:varstr_cmp()\n> shows that the locale-aware path does some extra copying because there is\n> no strncoll() function we can use with non-terminated strings.\n> \n> For testing's sake I replaced the two palloc() calls in that function with\n> alloca(), which is presumably the fastest possible memory allocator.\n> Result:\n> \n> locale=C,alloca:\t67s\t(ca. 15% slower)\n> \n> This shows that we're wasting quite a bit of time allocating memory --\n> probably not only in this place. I'm pretty sure that the majority of the\n> rest of the gap comes from the memcpy() operations. Not that there's a\n> whole lot we can do about either of these things.\n> \n> However, I feel that we could reasonably cope with this situation by\n> replacing\n> \n> #ifdef USE_LOCALE\n> /* locale-aware code */\n> #else\n> /* non-locale code */\n> #endif\n> \n> with\n> \n> if (locale_is_not_C)\n> {\n> /* locale-ware code */\n> }\n> else\n> {\n> /* non-locale code */\n> }\n> \n> This practice should have minuscule impact, and it's probably the plan for\n> the multibyte side of things as well.\n> \n> \n\n\n", "msg_date": "Mon, 26 Nov 2001 14:41:31 -0500", "msg_from": "Michael Tiemann <tiemann@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Locale timings" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> However, I feel that we could reasonably cope with this situation by\n> replacing\n\n> #ifdef USE_LOCALE\n> /* locale-aware code */\n> #else\n> /* non-locale code */\n> #endif\n\n> with\n\n> if (locale_is_not_C)\n> {\n> /* locale-ware code */\n> }\n> else\n> {\n> /* non-locale code */\n> }\n\n> This practice should have minuscule impact,\n\nExcept perhaps on the size of the executable ;-). 
However, USE_LOCALE\naffects little enough stuff that that's probably not a big objection.\n\nA possibly more serious objection is whether there are still any systems\nout there that don't *have* locale support, and will give us build\nerrors on <locale.h> or strcoll() or some such. It looks like\n<locale.h> is required by ANSI C, so I think we can get away with this;\ndoes anyone still care about, eg, SunOS 4.1.x? (One could imagine\nproviding a stub implementation that equates strcoll() to strcmp() and\nso forth, if anyone still does care.)\n\nOne nice point is that this will actually make things faster for the\ncase of locale-enabled-code-running-in-C-locale, which I suspect is not\nuncommon, particularly for RPM users.\n\nI'm for it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 19:21:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Locale timings " } ]
[ { "msg_contents": "I was wondering if we should disable the writing of pre-page images into\nWAL if the user has turned off fsync?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 14:22:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Pre-page images in WAL" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was wondering if we should disable the writing of pre-page images into\n> WAL if the user has turned off fsync?\n\nI'm worried about what vulnerabilities that would create.\n\nHistorically we've always defined \"fsync off\" to mean \"I trust my\nkernel, hardware, and power supply ... but not necessarily Postgres\nitself\". In a Postgres crash, even with fsync off, you are not supposed\nto lose any committed transactions, so long as the kernel and hardware\nstay up.\n\nIn the brave new world of WAL, Postgres does not flush dirty buffers to\ndisk at transaction commit, relying on WAL to clean up if a database or\nsystem failure occurs. If we don't log page images to WAL then I think\nthere's a hole here wherein a Postgres crash can lose data even though\nno failure of the surrounding OS occurs. Maybe it's safe, but I'm not\nconvinced.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 01:52:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pre-page images in WAL " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was wondering if we should disable the writing of pre-page images into\n> > WAL if the user has turned off fsync?\n> \n> I'm worried about what vulnerabilities that would create.\n> \n> Historically we've always defined \"fsync off\" to mean \"I trust my\n> kernel, hardware, and power supply ... 
but not necessarily Postgres\n> itself\". In a Postgres crash, even with fsync off, you are not supposed\n> to lose any committed transactions, so long as the kernel and hardware\n> stay up.\n> \n> In the brave new world of WAL, Postgres does not flush dirty buffers to\n> disk at transaction commit, relying on WAL to clean up if a database or\n> system failure occurs. If we don't log page images to WAL then I think\n> there's a hole here wherein a Postgres crash can lose data even though\n> no failure of the surrounding OS occurs. Maybe it's safe, but I'm not\n> convinced.\n\nI understand. The only reason I mentioned this is because I thought\npre-page WAL was only to cache partially written disk pages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 01:53:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Pre-page images in WAL" } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> Seems there is a problem with anoncvs:\n> cvs update: authorization failed: server \n> anoncvs.postgresql.org rejected access to \n> /projects/cvsroot for user anoncvs\n\nProblems like this (and similar problems with the website) \nseem to be occuring rather regularly. Perhaps someone could \nwrite a small expect script that would check things out \nhourly and email the correct person? (and run manually \nafter any changes). Seems this might catch the problem before \npeople start complaining via email lists.\n\nPerhaps even have something that once a day goes through the \nentire motion (surf, download, extract, compile, test) \nand makes sure everything works from A to Z.\n\n\nJust a thought,\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200111261433\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPAKZk7ybkGcUlkrIEQI2EwCgh1xPqjxI9j8K21LZgt4AmU/uS7EAoPma\ngWLyH6zp2CkdgVhlBQjGqrtV\n=FDl2\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Mon, 26 Nov 2001 19:36:37 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "cvs/website/make problems" }, { "msg_contents": "\nLet me know once you have this written, and we can look at implementing it\n...\n\nMore seriously, this is the result of some *very* drastic *physical* moves\nover the past month that are just starting to settle down ... 
once we have\nit all settled out, then we pretty much go back to 'business as usual',\nbut its one of the reasons why the Beta, and Release, are being delayed as\nthey are, cause we don't want the release to go out without everything\nelse stabilized ...\n\nOn Mon, 26 Nov 2001, Greg Sabino Mullane wrote:\n\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> > Seems there is a problem with anoncvs:\n> > cvs update: authorization failed: server\n> > anoncvs.postgresql.org rejected access to\n> > /projects/cvsroot for user anoncvs\n>\n> Problems like this (and similar problems with the website)\n> seem to be occuring rather regularly. Perhaps someone could\n> write a small expect script that would check things out\n> hourly and email the correct person? (and run manually\n> after any changes). Seems this might catch the problem before\n> people start complaining via email lists.\n>\n> Perhaps even have something that once a day goes through the\n> entire motion (surf, download, extract, compile, test)\n> and makes sure everything works from A to Z.\n>\n>\n> Just a thought,\n> Greg Sabino Mullane\n> greg@turnstep.com\n> PGP Key: 0x14964AC8 200111261433\n>\n> -----BEGIN PGP SIGNATURE-----\n> Comment: http://www.turnstep.com/pgp.html\n>\n> iQA/AwUBPAKZk7ybkGcUlkrIEQI2EwCgh1xPqjxI9j8K21LZgt4AmU/uS7EAoPma\n> gWLyH6zp2CkdgVhlBQjGqrtV\n> =FDl2\n> -----END PGP SIGNATURE-----\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 26 Nov 2001 16:56:01 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: cvs/website/make problems" } ]
[ { "msg_contents": "> > > 3) how should we update the docs for this?\n> > \n> > \"--name=val may not work on some platforms, if not use -c name=val\".\n> \n> OK, I will add that it doesn't work on FreeBSD and OpenBSD,\n> specifically. This way, if it doesn't work on other platforms,\n> hopefully we will hear about it.\n\nOK, new text is:\n\n\tThe <option>--</> option will not work on FreeBSD or OpenBSD.\n\tUse <option>-c</> instead. This should be fixed in\n\t<productname>PostgreSQL</productname> 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 14:41:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Call for objections: deprecate postmaster -o switch?" } ]
[ { "msg_contents": "Is this behavior intended in the backend? The problem is that when you \ncreate a rule on an object that calls a stored function and invoke that \nrule on an insert/update/delete statement your insert/update/delete \nstatement will now return a query result to the front end over the FE/BE \nprotocol. (I am not sure this is the exact scenario, but something \nsimilar). This means that the user now needs to perform an \nexecuteQuery() call when using these insert/update/delete statements in \nJDBC because the JDBC driver isn't able to accept a query response when \nissuing an insert/update/delete call.\n\nthanks,\n--Barry\n\n\n\n-------- Original Message --------\nSubject: Re: CallableStatements\nDate: Mon, 26 Nov 2001 12:14:32 -0800 (PST)\nFrom: Stuart Robinson <stuart@zapata.org>\nTo: Rene Pijlman <rene@lab.applinet.nl>\nCC: <pgsql-jdbc@postgresql.org>\n\nThere are various circumstances where you might want to call a stored\nprocedure with an executeUpdate method. For example, let's suppose you\nhave a view that combines a couple of tables and you want an application\nyou're building to be able to write to it. Since views are read-only, you\nwould create a rule that intercepts the inserts and updates and fires off\na stored procedure instead. Since the application is doing an insert or an\nupdate, it will use executeUpdate, but the stored procedure will have to\nuse select and return a result, causing the application to error out.\n\n-Stuart\n\nOn Mon, 26 Nov 2001, Rene Pijlman wrote:\n\n > On Mon, 26 Nov 2001 10:40:52 -0800 (PST), you wrote:\n > >But if you use the executeUpdate method, you'll get an error, because it\n > >isn't expecting a result, no? So, how do you call a stored procedure \nusing\n > >executeUpdate?\n >\n > You don't. In the current implementation you need to use a\n > SELECT statement. 
Why is that a problem?\n >\n > Regards,\n > René Pijlman <rene@lab.applinet.nl>\n >\n\n-- \nStuart Robinson [stuart@zapata.org]\nhttp://www.nerdindustries.com\nhttp://www.tzeltal.org\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n", "msg_date": "Mon, 26 Nov 2001 13:01:37 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "insert/update/delete statements returning a query response" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> Is this behavior intended in the backend? The problem is that when you \n> create a rule on an object that calls a stored function and invoke that \n> rule on an insert/update/delete statement your insert/update/delete \n> statement will now return a query result to the front end over the FE/BE \n> protocol. (I am not sure this is the exact senerio, but something \n> similar).\n\nIf the rule adds SELECT operations to the basic statement then those\nSELECT(s) will return results to the frontend. I think this is\nappropriate, perhaps even necessary for some applications of rules.\n\n> This means that the user now needs to perform a \n> executeQuery() call when using these insert/update/delete statements in \n> JDBC because the JDBC driver isn't able to accept a query response when \n> issuing a insert/update/delete call.\n\nI would regard that as a JDBC bug: it should be able to accept a query\nresponse at any time. It shouldn't have preconceived ideas about what\na given query will produce.\n\nIt probably would be a good idea to add some kind of \"CALL\" or \"PERFORM\"\nstatement to the backend, having the same semantics as SELECT except\nthat the query result is discarded instead of being shipped to the\nclient. However, this is largely syntactic sugar with maybe a tiny\nbit of performance-improvement rationale. 
JDBC should be able to cope\nwith all the cases that libpq does, and libpq handles this scenario\nwith aplomb.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 18:36:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: insert/update/delete statements returning a query response " }, { "msg_contents": "What is the FE/BE protocol? (I did a Google search and found references to\nit, but no definitions or explanations.) Thanks. (And apologies if this is\na stupid RTFM sort of question.)\n\n-Stuart\n\nOn Mon, 26 Nov 2001, Barry Lind wrote:\n\n> Is this behavior intended in the backend? The problem is that when you\n> create a rule on an object that calls a stored function and invoke that\n> rule on an insert/update/delete statement your insert/update/delete\n> statement will now return a query result to the front end over the FE/BE\n> protocol. (I am not sure this is the exact senerio, but something\n> similar). This means that the user now needs to perform a\n> executeQuery() call when using these insert/update/delete statements in\n> JDBC because the JDBC driver isn't able to accept a query response when\n> issuing a insert/update/delete call.\n>\n> thanks,\n> --Barry\n\n", "msg_date": "Mon, 26 Nov 2001 16:36:56 -0800 (PST)", "msg_from": "Stuart Robinson <stuart@zapata.org>", "msg_from_op": false, "msg_subject": "Re: insert/update/delete statements returning a query response" }, { "msg_contents": "Doesn't PL/pgSQL already support a PERFORM statement?\n\n-Stuart\n\nOn Mon, 26 Nov 2001, Tom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> > Is this behavior intended in the backend? The problem is that when you\n> > create a rule on an object that calls a stored function and invoke that\n> > rule on an insert/update/delete statement your insert/update/delete\n> > statement will now return a query result to the front end over the FE/BE\n> > protocol. 
(I am not sure this is the exact senerio, but something\n> > similar).\n>\n> If the rule adds SELECT operations to the basic statement then those\n> SELECT(s) will return results to the frontend. I think this is\n> appropriate, perhaps even necessary for some applications of rules.\n>\n> > This means that the user now needs to perform a\n> > executeQuery() call when using these insert/update/delete statements in\n> > JDBC because the JDBC driver isn't able to accept a query response when\n> > issuing a insert/update/delete call.\n>\n> I would regard that as a JDBC bug: it should be able to accept a query\n> response at any time. It shouldn't have preconceived ideas about what\n> a given query will produce.\n>\n> It probably would be a good idea to add some kind of \"CALL\" or \"PERFORM\"\n> statement to the backend, having the same semantics as SELECT except\n> that the query result is discarded instead of being shipped to the\n> client. However, this is largely syntactic sugar with maybe a tiny\n> bit of performance-improvement rationale. JDBC should be able to cope\n> with all the cases that libpq does, and libpq handles this scenario\n> with aplomb.\n>\n> \t\t\tregards, tom lane\n>\n\n-- \nStuart Robinson [stuart@zapata.org]\nhttp://www.nerdindustries.com\nhttp://www.tzeltal.org\n\n", "msg_date": "Mon, 26 Nov 2001 16:39:54 -0800 (PST)", "msg_from": "Stuart Robinson <stuart@zapata.org>", "msg_from_op": false, "msg_subject": "Re: insert/update/delete statements returning a query" }, { "msg_contents": "Stuart Robinson <stuart@zapata.org> writes:\n> Doesn't PL/pgSQL already support a PERFORM statement?\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 20:05:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: insert/update/delete statements returning a query response " }, { "msg_contents": "Stuart,\n\nFE/BE = Frontend/Backend protocol. 
It is the over the wire protocol \nPostgres uses to talk to clients (jdbc, odbc, libpq, etc.).\n\nIt is documented in the Developers Guide, there is a chapter titled \n\"Frontend/Backend Protocol\".\n\nthanks,\n--Barry\n\nStuart Robinson wrote:\n\n> What is the FE/BE protocol? (I did a Google search and found references to\n> it, but no definitions or explanations.) Thanks. (And apologies if this is\n> a stupid RTFM sort of question.)\n> \n> -Stuart\n> \n> On Mon, 26 Nov 2001, Barry Lind wrote:\n> \n> \n>>Is this behavior intended in the backend? The problem is that when you\n>>create a rule on an object that calls a stored function and invoke that\n>>rule on an insert/update/delete statement your insert/update/delete\n>>statement will now return a query result to the front end over the FE/BE\n>>protocol. (I am not sure this is the exact senerio, but something\n>>similar). This means that the user now needs to perform a\n>>executeQuery() call when using these insert/update/delete statements in\n>>JDBC because the JDBC driver isn't able to accept a query response when\n>>issuing a insert/update/delete call.\n>>\n>>thanks,\n>>--Barry\n>>\n> \n\n\n", "msg_date": "Mon, 26 Nov 2001 18:16:47 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: insert/update/delete statements returning a query response" }, { "msg_contents": "OK. I will fix the jdbc driver in 7.3 to handle this case. \nUnfortunately since the JDBC spec doesn't let me return anything other \nthan a row count for inserts/updates/deletes I will just be discarding \nthe query result.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>Is this behavior intended in the backend? 
The problem is that when you \n>>create a rule on an object that calls a stored function and invoke that \n>>rule on an insert/update/delete statement your insert/update/delete \n>>statement will now return a query result to the front end over the FE/BE \n>>protocol. (I am not sure this is the exact senerio, but something \n>>similar).\n>>\n> \n> If the rule adds SELECT operations to the basic statement then those\n> SELECT(s) will return results to the frontend. I think this is\n> appropriate, perhaps even necessary for some applications of rules.\n> \n> \n>>This means that the user now needs to perform a \n>>executeQuery() call when using these insert/update/delete statements in \n>>JDBC because the JDBC driver isn't able to accept a query response when \n>>issuing a insert/update/delete call.\n>>\n> \n> I would regard that as a JDBC bug: it should be able to accept a query\n> response at any time. It shouldn't have preconceived ideas about what\n> a given query will produce.\n> \n> It probably would be a good idea to add some kind of \"CALL\" or \"PERFORM\"\n> statement to the backend, having the same semantics as SELECT except\n> that the query result is discarded instead of being shipped to the\n> client. However, this is largely syntactic sugar with maybe a tiny\n> bit of performance-improvement rationale. JDBC should be able to cope\n> with all the cases that libpq does, and libpq handles this scenario\n> with aplomb.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n", "msg_date": "Mon, 26 Nov 2001 18:20:03 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] insert/update/delete statements returning a query\n\tresponse" } ]
[ { "msg_contents": "Someone on IRC complained about these lines in the server log:\n\nDEBUG: database system was shut down at 2001-11-26 16:44:06 EST\nDEBUG: checkpoint record is at 0/1361168\nDEBUG: redo record is at 0/1361168; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 4484; next oid: 147882\nDEBUG: database system is ready\n\nShould they be there, and should they say DEBUG?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 16:45:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "DEBUG lines in server log" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Someone on IRC complained about these lines in the server log:\n> DEBUG: database system was shut down at 2001-11-26 16:44:06 EST\n> DEBUG: checkpoint record is at 0/1361168\n> DEBUG: redo record is at 0/1361168; undo record is at 0/0; shutdown TRUE\n> DEBUG: next transaction id: 4484; next oid: 147882\n> DEBUG: database system is ready\n\n> Should they be there, and should they say DEBUG?\n\nThey look like DEBUG messages to me ;-)\n\nI would certainly not want the postmaster to start up completely\nsilently; the \"database system is ready\" message should absolutely not\nbe removed. The \"when shut down\" output also seems moderately useful\nto admins. The middle three lines are perhaps not useful in normal\noperation but I can see them being pretty important in a failure\nscenario, so I'm not that much in a hurry to take them out either.\n\nWhat exactly was the complaint, or rationale for complaining?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2001 18:44:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DEBUG lines in server log " } ]
[ { "msg_contents": "I see the following in TODO:\n\n* -Allow international error message support and add error codes[elog](Peter E)\n\nIn my understanding, adding error codes has not been done yet. Is this\ncorrect?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Nov 2001 11:51:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "TODO" }, { "msg_contents": "> I see the following in TODO:\n> \n> * -Allow international error message support and add error codes[elog](Peter E)\n> \n> In my understanding, adding error codes has not been done yet. Is this\n> correct?\n\nCorrect. I have split the item into two separate items and marked only\nthe first one as done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Nov 2001 23:11:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: TODO" } ]
[ { "msg_contents": "\n> Is this behavior intended in the backend? The problem is that when\nyou \n> create a rule on an object that calls a stored function and invoke\nthat \n> rule on an insert/update/delete statement your insert/update/delete \n> statement will now return a query result to the front end over the\nFE/BE \n> protocol.\n\nSince this behavior is essential to the rule system, imho the actual\nsource\nof problems is that PostgreSQL does not have \"real stored procedures\"\n==\nfunctions that do not have a return value or set (C lingo: void\nfunc_a(x)).\n\nThe usual view rule that needs enhanced processing intelligence would\nthen call a stored procedure and not a function.\n\nThe easy way out would be to write rules with instead actions that\ncall insert/update/delete statements directly. This often works\nfor the more common cases.\n\nAndreas\n", "msg_date": "Tue, 27 Nov 2001 10:14:46 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: insert/update/delete statements returning a query response" } ]
[ { "msg_contents": "\n> In the brave new world of WAL, Postgres does not flush dirty buffers\nto\n> disk at transaction commit, relying on WAL to clean up if a database\nor\n> system failure occurs. If we don't log page images to WAL then I\nthink\n> there's a hole here wherein a Postgres crash can lose data even though\n> no failure of the surrounding OS occurs. Maybe it's safe, but I'm not\n> convinced.\n\nIt should be (imho *is*) safe for heap pages, but not for index pages.\nThe reasoning for heap pages is, that the WAL code (pre page image)\nalready\ncoped with future heap page versions.\n\nI would not expect it to work for index pages, since one \"feature\" of\nthe page images was to \"undo\" future index operations.\n\nAndreas\n", "msg_date": "Tue, 27 Nov 2001 11:04:42 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Pre-page images in WAL " } ]
[ { "msg_contents": "Good news, 7.2b3 works fine on i386 OBSD 3.0\n\nBad news, 7.2b3 is giving random regression test failures on sparc OBSD\n3.0. I have made the necessary kernel tweaks, but am still getting\nrandom failures:\n\n path ... FAILED\n$ more results/path.out\npsql: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\n\nAny guesses where to look?\n\nThanks all,\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Tue, 27 Nov 2001 07:21:22 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "OpenBSD results for 7.2b3" }, { "msg_contents": "Sorry about the false alarms, 7.2b3 checks out on i386/sparc obsd 3.0.\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Tue, 27 Nov 2001 07:41:53 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: OpenBSD results for 7.2b3" }, { "msg_contents": "> Sorry about the false alarms, 7.2b3 checks out on i386/sparc obsd 3.0.\n\nGreat. Thanks for the report.\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 04:16:35 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: OpenBSD results for 7.2b3" } ]
[ { "msg_contents": "ulimits never got changed. re-running tests, sorry.\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Tue, 27 Nov 2001 07:22:41 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Re: OpenBSD results for 7.2b3 (oops!)" } ]
[ { "msg_contents": "Cross-platform, cross-database, ERD based, data architecting tool.\n\nScreenshots\n\nhttp://www.thekompany.com/products/dataarchitect/screenshots.php3\n\nProduct Information\n\nhttp://www.thekompany.com/products/dataarchitect/\n\n--\nPeter Harvey\n", "msg_date": "Tue, 27 Nov 2001 09:44:26 -0800", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "ANN: Data Architect 2.0" } ]
[ { "msg_contents": "As most of you probably know, my former employer Great Bridge folded a\ncouple months ago. I am happy to announce that I've accepted a position\nwith Red Hat Inc, working in their database group. I expect to be able\nto continue full-time involvement with PostgreSQL. (Or should I say\n\"Red Hat Database\" now?) I'm excited about this opportunity, and\nbelieve that it will work out in the best interests of the Postgres\ncommunity as well as of the company.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 15:21:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Announcement: I've joined Red Hat" }, { "msg_contents": "Tom Lane wrote:\n\n>As most of you probably know, my former employer Great Bridge folded a\n>couple months ago. I am happy to announce that I've accepted a position\n>with Red Hat Inc, working in their database group. I expect to be able\n>to continue full-time involvement with PostgreSQL. (Or should I say\n>\"Red Hat Database\" now?) \n>\nCongratulations! It really seems like a smart (although expected) move \nfrom RH :)\n\n>I'm excited about this opportunity, and\n>believe that it will work out in the best interests of the Postgres\n>community as well as of the company.\n>\nWill there be any other \"acquisitions\" by RH ?\n\nThem getting Cygnus of all things available also seemed like a really \nbright idea so I\nhave quite high expectations for them.\n\nPS. Is it postgres community or postgreSQL community ? ;)\n\n-------------------------\n\nHannu\n\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 01:48:11 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "Best of luck to you!\n\n\tJeff\n\nOn Tuesday 27 November 2001 12:21 pm, you wrote:\n> As most of you probably know, my former employer Great Bridge folded a\n> couple months ago. 
I am happy to announce that I've accepted a position\n> with Red Hat Inc, working in their database group. I expect to be able\n> to continue full-time involvement with PostgreSQL. (Or should I say\n> \"Red Hat Database\" now?) I'm excited about this opportunity, and\n> believe that it will work out in the best interests of the Postgres\n> community as well as of the company.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 27 Nov 2001 13:02:53 -0800", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "On Tue, 27 Nov 2001, Tom Lane wrote:\n\n> As most of you probably know, my former employer Great Bridge folded a\n> couple months ago. I am happy to announce that I've accepted a position\n> with Red Hat Inc, working in their database group. I expect to be able\n> to continue full-time involvement with PostgreSQL. (Or should I say\n> \"Red Hat Database\" now?) 
I'm excited about this opportunity, and\n> believe that it will work out in the best interests of the Postgres\n> community as well as of the company.\n\nCongrats Tom!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 27 Nov 2001 16:27:16 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "On Tuesday 27 November 2001 03:21 pm, Tom Lane wrote:\n> As most of you probably know, my former employer Great Bridge folded a\n> couple months ago. I am happy to announce that I've accepted a position\n> with Red Hat Inc, working in their database group. I expect to be able\n> to continue full-time involvement with PostgreSQL. (Or should I say\n> \"Red Hat Database\" now?) I'm excited about this opportunity, and\n> believe that it will work out in the best interests of the Postgres\n> community as well as of the company.\n\nWoof!\n\nGlad to hear it, Tom. Well, that certainly eased some of the potential \nconcerns about the direction of RHDB in my mind, at least.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 27 Nov 2001 18:01:22 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "Fantastic!\nI was worried about RedHat becoming the \"Microsoft of Linux,\" but every step\nthey have taken seems to indicate they wish to take the high road. 
\n\nGood luck.\n\n\nTom Lane wrote:\n> \n> As most of you probably know, my former employer Great Bridge folded a\n> couple months ago. I am happy to announce that I've accepted a position\n> with Red Hat Inc, working in their database group. I expect to be able\n> to continue full-time involvement with PostgreSQL. (Or should I say\n> \"Red Hat Database\" now?) I'm excited about this opportunity, and\n> believe that it will work out in the best interests of the Postgres\n> community as well as of the company.\n", "msg_date": "Tue, 27 Nov 2001 20:48:14 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "Wish you and all of PostgreSQL hackers out there the best :)\n\nwarm regards\nAndy\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: <pgsql-hackers@postgresql.org>\nSent: Wednesday, November 28, 2001 3:21 AM\nSubject: [HACKERS] Announcement: I've joined Red Hat\n\n\n> As most of you probably know, my former employer Great Bridge folded a\n> couple months ago. I am happy to announce that I've accepted a position\n> with Red Hat Inc, working in their database group. I expect to be able\n> to continue full-time involvement with PostgreSQL. (Or should I say\n> \"Red Hat Database\" now?) I'm excited about this opportunity, and\n> believe that it will work out in the best interests of the Postgres\n> community as well as of the company.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Wed, 28 Nov 2001 09:10:34 +0700", "msg_from": "\"Andy Samuel\" <andysamuel@geocities.com>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat" } ]
[ { "msg_contents": "\nI am looking into some performance issues with an application I have. I \nwant to do some testing to see how much overhead TOAST adds to my \napplication. I have a table that performs a similar function to the \npg_largeobject table. I have noticed that pg_largeobject doesn't have \ntoast enabled (i.e. reltoastrelid is 0). However when I create my table \nit always gets a value for reltoastrelid. Since pg_largeobject is \ncreated without toast, I am assuming this is intentional and that for \ncertain classes of tables it may make sense not to toast the tuples. \nWhich makes sense because inserting into the toast table will involve \nextra disk IOs and if the tuple would have fit into the base table these \nextra IOs could be avoided.\n\nSo how do I create a table without toast enabled? I have looked through \nthe docs for 'create table' and didn't see anything that indicates this \nis possible. Is there some undocumented syntax?\n\nthanks,\n--Barry\n\n\n", "msg_date": "Tue, 27 Nov 2001 12:25:04 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "How to turn off TOAST on a table/column" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> So how do I create a table without toast enabled?\n\nUnless you want to muck with the backend code, the only way to create\na table that has no toast table attached is to declare columns that\nthe backend can prove to itself will never add up to more than BLCKSZ\nspace per tuple. For example, use varchar(n) not text. (If you've got\nMULTIBYTE enabled then that doesn't work either, since the n is\nmeasured in characters not bytes.)\n\nHowever, the mere existence of a toast table doesn't cost anything\n(except for some increase of the time for CREATE TABLE). What you\nprobably really want to do is turn on and off the *use* of the toast\ntable. Which you can do by mucking with the attstorage attributes of\nthe table columns. 
I don't think anyone's gotten round to providing\na nice clean ALTER TABLE interface, but a quick\n\nUPDATE pg_attribute SET attstorage = 'p'\nWHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')\nAND attnum > 0\n\nwould suffice to disable toasting of all columns in 'mytable'.\n\nSee src/include/pg_attribute.h for documentation of the allowed values\nfor attstorage.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 15:52:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to turn off TOAST on a table/column " }, { "msg_contents": "In article <3C03F6A0.1010702@xythos.com>, \"Barry Lind\" <barry@xythos.com>\nwrote:\n\n> So how do I create a table without toast enabled? I have looked through\n> the docs for 'create table' and didn't see anything that indicates this\n> is possible. Is there some undocumented syntax?\n>\n\nOne of the additions in my \"TOAST slicing\" patch is the provision of\nALTER TABLE x ALTER COLUMN y SET STORAGE {EXTERNAL | PLAIN | EXTENDED |\nMAIN} -but, in short, it sets attstorage (in pg_attribute) for the\nappropriate column, which has the effect of forcing certain TOAST\nbehaviour. \n\nFor the moment, you could just set attstorage to 'e' which will turn off\ncompression (but still use the second table), or 'm' which will turn off the use\nof the external table for that attribute (or 'p' which will do both). Note that\nfixed-length types must have attstorage 'p' (by fixed length I mean things\nlike integer etc. 
-- varchar(n) is a variable-length type.)\n\nObviously all the usual caveats about altering system catalogs apply...\n\nRegards\n\nJohn\n-- \nJohn Gray\nAzuli IT http://www.azuli.co.uk +44 121 693 3397\njgray@azuli.co.uk\n", "msg_date": "Tue, 27 Nov 2001 21:14:41 +0000", "msg_from": "\"John Gray\" <jgray@azuli.co.uk>", "msg_from_op": false, "msg_subject": "Re: How to turn off TOAST on a table/column" }, { "msg_contents": "On Tue, Nov 27, 2001 at 03:52:27PM -0500, Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > So how do I create a table without toast enabled?\n> \n> Unless you want to muck with the backend code, the only way to create\n> a table that has no toast table attached is to declare columns that\n> the backend can prove to itself will never add up to more than BLCKSZ\n> space per tuple. For example, use varchar(n) not text. (If you've got\n> MULTIBYTE enabled then that doesn't work either, since the n is\n> measure in characters not bytes.)\n> \n> However, the mere existence of a toast table doesn't cost anything\n> (except for some increase of the time for CREATE TABLE). What you\n> probably really want to do is turn on and off the *use* of the toast\n> table. Which you can do by mucking with the attstorage attributes of\n> the table columns. I don't think anyone's gotten round to providing\n> a nice clean ALTER TABLE interface, but a quick\n> \n> UPDATE pg_attribute SET attstorage = 'p'\n> WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')\n> AND attnum > 0\n> \n> would suffice to disable toasting of all columns in 'mytable'.\n\nThis would reimpose the max-tuple limit on that table, would it not?\nSo trying to store 'too large' a text would error? Definitely one for\nthe regression tests, once we've got that ALTER TABLE interface.\n\n> \n> See src/include/pg_attribute.h for documentation of the allowed values\n> for attstorage.\n \nThis needs to get into the admin docs. 
I suppose it's also waiting on the\nALTER TABLE interface.\n\nRoss\n\n", "msg_date": "Tue, 27 Nov 2001 16:02:00 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to turn off TOAST on a table/column" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n>> would suffice to disable toasting of all columns in 'mytable'.\n\n> This would reimpose the max-tuple limit on that table, would it not?\n> So trying to store 'too large' a text would error?\n\nRight. Presumably, that's what Barry wants to test. In practice the\nother values are more likely to be useful (for toastable datatypes\nthat is).\n\n>> See src/include/pg_attribute.h for documentation of the allowed values\n>> for attstorage.\n \n> This needs to get into the admin docs. I suppose it's also waiting on the\n> ALTER TABLE interface.\n\nYeah. Right now it's too easy to shoot yourself in the foot (for\nexample, you mustn't set attstorage to anything but 'p' for a\nnon-varlena datatype). So we haven't wanted to document the\nUPDATE-pg_attribute approach.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 17:08:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to turn off TOAST on a table/column " }, { "msg_contents": "Ross J. Reedstrom wrote:\n> On Tue, Nov 27, 2001 at 03:52:27PM -0500, Tom Lane wrote:\n> > Barry Lind <barry@xythos.com> writes:\n> > > So how do I create a table without toast enabled?\n> >\n> > UPDATE pg_attribute SET attstorage = 'p'\n> > WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')\n> > AND attnum > 0\n> >\n> > would suffice to disable toasting of all columns in 'mytable'.\n>\n> This would reimpose the max-tuple limit on that table, would it not?\n> So trying to store 'too large' a text would error? 
Definitely one for\n> the regression tests, once we've got that ALTER TABLE interface.\n\n Yes, it would.\n\n>\n> >\n> > See src/include/pg_attribute.h for documentation of the allowed values\n> > for attstorage.\n>\n> This needs to get into the admin docs. I suppose it's also waiting on the\n> ALTER TABLE interface.\n\n One thing I'd like to add is that people should not be too\n surprised if turning off toast slows down their\n application.\n\n One nice side effect of toast is that it's often exactly those\n fields you don't use in the where clause that get toasted. Now\n while a query is executed and the tuples travel through the\n system, from the heap through the filters, in and out of\n sort, getting merged and joined, and some of them later\n thrown away, you don't need these attributes. If toasted,\n more tuples with the key fields fit into the blocks, so\n you'll get better cache hit rates and less disk IO. The\n sort sets will be a lot smaller, and more sorts can be done\n completely in memory without temp files. The huge attributes\n will only be pulled if the client wants them, and then at the\n time the result is sent to the client, by the type output\n function. And if you update a row and don't touch the\n toasted attribute, the value never gets read from the disk,\n nor does it get updated.\n\n Just to give a few reasons why I like toast. One day I will\n implement a real BLOB datatype - but probably name it poptart\n :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 5 Dec 2001 17:38:06 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to turn off TOAST on a table/column" } ]
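The thread above can be condensed into a short sketch. The catalog UPDATE is Tom Lane's suggestion as posted; the per-column UPDATE and the column name `payload` are illustrative additions, and the ALTER TABLE form comes from John Gray's then-unmerged patch, so it may not exist in a given build:

```sql
-- Tom's approach: force plain (inline, uncompressed) storage for all
-- user columns of "mytable" by editing pg_attribute directly.
-- Caveat from the thread: fixed-length types must stay at 'p';
-- only varlena types may use the other settings.
UPDATE pg_attribute SET attstorage = 'p'
WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')
  AND attnum > 0;

-- Per-column variant, using the attstorage codes discussed above
-- (documented in src/include/pg_attribute.h):
--   'p' plain     no compression, no out-of-line storage
--   'm' main      compression allowed, but kept in the base table
--   'e' external  out-of-line storage allowed, no compression
--   'x' extended  both compression and out-of-line storage
UPDATE pg_attribute SET attstorage = 'e'
WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')
  AND attname = 'payload';   -- hypothetical column name

-- John Gray's proposed (not-yet-merged) syntax for the same thing:
-- ALTER TABLE mytable ALTER COLUMN payload SET STORAGE EXTERNAL;
```

As the posters note, all the usual caveats about altering system catalogs directly apply.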
[ { "msg_contents": "\nDoes this mean there will be two versions of PostgreSQL?\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us] \nSent: Tuesday, November 27, 2001 12:21 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] Announcement: I've joined Red Hat\n\n\nAs most of you probably know, my former employer Great Bridge folded a\ncouple months ago. I am happy to announce that I've accepted a position\nwith Red Hat Inc, working in their database group. I expect to be able to\ncontinue full-time involvement with PostgreSQL. (Or should I say \"Red Hat\nDatabase\" now?) I'm excited about this opportunity, and believe that it\nwill work out in the best interests of the Postgres community as well as of\nthe company.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Tue, 27 Nov 2001 13:19:59 -0800", "msg_from": "Khoa Do <kdo@stratacare.com>", "msg_from_op": true, "msg_subject": "Re: Announcement: I've joined Red Hat" }, { "msg_contents": "Khoa Do <kdo@stratacare.com> writes:\n> Does this mean there will be two versions of PostgreSQL?\n\nIf you mean a code fork, no. Red Hat hasn't forked any of the other\nopen-source projects they contribute to; why would they start with\nPostgres?\n\nIt seems entirely possible that at any given time, what Red Hat is\nselling as RHDB might be ahead of, behind, or different from the\nlatest community release of Postgres, just because of differing\nrelease cycles, features not merged yet, or whatever. But there's\nno intention to cause a fork.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 16:27:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Announcement: I've joined Red Hat " } ]
[ { "msg_contents": "Hi,\n\n I just stumbled over a very hard to reproduce error. Running\n a \"VACUUM ANALYZE <table>\" concurrently to a database heavy\n under load caused a SELECT ... FOR UPDATE with full primary\n key qualification to return multiple results from that table.\n The table contains only a few rows which receive ton's of\n updates.\n\n System is 7.2B3 under Linux.\n\n Will try to produce a test case, just to let ppl know early.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 27 Nov 2001 19:45:29 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": true, "msg_subject": "Possible bug in new VACUUM code" }, { "msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> I just stumbled over a very hard to reproduce error. Running\n> a \"VACUUM ANALYZE <table>\" concurrently to a database heavy\n> under load caused a SELECT ... FOR UPDATE with full primary\n> key qualification to return multiple results from that table.\n\nUrgh. But I am not sure that you should be pointing the finger at\nVACUUM. It doesn't move any tuples around (at least not in ctid\nterms), so how could it possibly produce multiple tuple images in\nanother scan? My early bet is that the problem is not directly\nrelated to VACUUM.\n\nPost as soon as you have any more info ... 
remember there are lots\nof eyeballs out here ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 23:27:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug in new VACUUM code " }, { "msg_contents": "Jan Wieck wrote:\n> \n> Hi,\n> \n> I just stumbled over a very hard to reproduce error. Running\n> a \"VACUUM ANALYZE <table>\" concurrently to a database heavy\n> under load caused a SELECT ... FOR UPDATE with full primary\n> key qualification to return multiple results from that table.\n> The table contains only a few rows which receive ton's of\n> updates.\n> \n> System is 7.2B3 under Linux.\n> \n> Will try to produce a test case, just to let ppl know early.\n\nCan you post an explain plan for the query along with a \"\\d\" of the table?\n", "msg_date": "Wed, 28 Nov 2001 06:58:25 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Possible bug in new VACUUM code" } ]
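Since Jan's test case was still forthcoming, the following is only an illustrative sketch of the kind of concurrent workload he describes (a small table receiving tons of updates); the table, column, and key value are invented for the example:

```sql
-- Session A: a small, heavily updated table with a primary key.
CREATE TABLE hot_rows (id integer PRIMARY KEY, hits integer NOT NULL);
INSERT INTO hot_rows VALUES (1, 0);
-- ...then run this repeatedly, concurrently:
UPDATE hot_rows SET hits = hits + 1 WHERE id = 1;

-- Session B: concurrently, the new 7.2 non-blocking vacuum:
VACUUM ANALYZE hot_rows;

-- Session C: full primary-key qualification, so at most one row
-- should ever come back; Jan reported seeing multiple rows here.
BEGIN;
SELECT * FROM hot_rows WHERE id = 1 FOR UPDATE;
COMMIT;
```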
[ { "msg_contents": "Hi guys,\n\nThis came across the phpPgAdmin list, and I'm reposting it here in case it\nis actually true...? If it is, is it a Postgres or a Debian package issue?\n\nChris\n\n-----Original Message-----\nFrom: phppgadmin-devel-admin@lists.sourceforge.net\n[mailto:phppgadmin-devel-admin@lists.sourceforge.net]On Behalf Of Guilherme\nBarile\nSent: Wednesday, 28 November 2001 3:58 AM\nTo: phpPgAdmin-devel@lists.sourceforge.net\nSubject: [ppa-dev] Severe bug in debian - phppgadmin opens up databases for\nanyone!\n\n\nDebian comes with a severe configuration fault in postgresql ... in\npg_hba.conf, it uses TRUST as the default authentication method (from\nlocalhost) ... as phpPgAdmin runs on localhost, anyone can login without a\npassword.\n\nThere are DOZENS of sites out there running without any security! And this\nis terrible! If I weren't a very nice person and simply didn't change\nanything (I could, as postgres is superuser and I can log as it).\nHere's how to fix it (on debian, don't know if any other distribution is\naffected):\nlog in as postgres\nrun psql\ncheck the pg_shadow table (SELECT * FROM pg_shadow;)\nsee if everyone has a password (especially user postgres)\n\nAfter setting all the passwords, edit /etc/postgres/pg_hba.conf to match the\nfollowing lines:\n\nlocal all password\nhost all 127.0.0.1 255.0.0.0 password\n\nThen it will require a password.\nAlso, If you wish to block connections from the internet, add this also:\n\nhost all 0.0.0.0 0.0.0.0 reject\n\nPlease put this on the page or together with PhpPgAdmin's documentation.\n(Search google.com with \"phppgadmin local:5432\" and check for yourself ...\nlogin as postgres and type anything as password!)\n\n\nThank you very much for your attention (Please be kind and reply)\n\nGuilherme Barile\nInfoage Web Solutions\nSao Paulo - SP - Brazil\n\n", "msg_date": "Wed, 28 Nov 2001 09:31:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": 
true, "msg_subject": "FW: [ppa-dev] Severe bug in debian - phppgadmin opens up databases\n\tfor anyone!" }, { "msg_contents": "\nThis is a known problem. I just updated the documentation today to\nstress that local users have full access to any database by default, and\nthat initdb -W and changing pg_hba.conf to password/md5 are the best\nways to fix this.\n\n---------------------------------------------------------------------------\n\n> Hi guys,\n> \n> This came across the phpPgAdmin list, and I'm reposting it here in case it\n> is actually true...? If it is, is it a Postgres or a Debian package issue?\n> \n> Chris\n> \n> -----Original Message-----\n> From: phppgadmin-devel-admin@lists.sourceforge.net\n> [mailto:phppgadmin-devel-admin@lists.sourceforge.net]On Behalf Of Guilherme\n> Barile\n> Sent: Wednesday, 28 November 2001 3:58 AM\n> To: phpPgAdmin-devel@lists.sourceforge.net\n> Subject: [ppa-dev] Severe bug in debian - phppgadmin opens up databases for\n> anyone!\n> \n> \n> Debian comes with a severe configuration fault in postgresql ... in\n> pg_hba.conf, it uses TRUST as the default authentication method (from\n> localhost) ... as phpPgAdmin runs on localhost, anyone can login without a\n> password.\n> \n> There are DOZENS of sites out there running without any security! And this\n> is terrible! 
If I weren't a very nice person and simply didn't change\n> anything (I could, as postgres is superuser and I can log as it).\n> Here's how to fix it (on debian, don't know if any other distribution is\n> affected):\n> log in as postgres\n> run psql\n> check the pg_shadow table (SELECT * FROM pg_shadow;)\n> see if everyone has a password (especially user postgres)\n> \n> After setting all the passwords, edit /etc/postgres/pg_hba.conf to match the\n> following lines:\n> \n> local all password\n> host all 127.0.0.1 255.0.0.0 password\n> \n> Then it will require a password.\n> Also, If you wish to block connections from the internet, add this also:\n> \n> host all 0.0.0.0 0.0.0.0 reject\n> \n> Please put this on the page or together with PhpPgAdmin's documentation.\n> (Search google.com with \"phppgadmin local:5432\" and check for yourself ...\n> login as postgres and type anything as password!)\n> \n> \n> Thank you very much for your attention (Please be kind and reply)\n> \n> Guilherme Barile\n> Infoage Web Solutions\n> Sao Paulo - SP - Brazil\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 20:50:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> This came across the phpPgAdmin list, and I'm reposting it here in case it\n> is actually true...? 
If it is, is it a Postgres or a Debian package issue?\n\nThe default installation is indeed insecure with respect to other local\nusers; you don't want to use TRUST auth method on a multi-user box. We\nneed to document that more prominently. But the default install is not\ninsecure w.r.t. to outside connections, because it doesn't allow any.\nIn particular, this advice is horsepucky:\n\n> Also, If you wish to block connections from the internet, add this also:\n> host all 0.0.0.0 0.0.0.0 reject\n\nbecause that will happen anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2001 23:31:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up\n\tdatabases for anyone!" }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > This came across the phpPgAdmin list, and I'm reposting it here in case it\n> > is actually true...? If it is, is it a Postgres or a Debian package issue?\n> \n> The default installation is indeed insecure with respect to other local\n> users; you don't want to use TRUST auth method on a multi-user box. We\n> need to document that more prominently. But the default install is not\n> insecure w.r.t. to outside connections, because it doesn't allow any.\n> In particular, this advice is horsepucky:\n\nLet me tell you what bothers me about our default install. If some\nsoftware installed all its data files in a world-writable directory, we\nwould consider it a security hole. But because we are Internet-enabled,\nand because our insecurity is only local, it seems OK to people.\n\nI am not sure about a solution, but I am shocked we haven't been beaten\nup about this more often.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 00:35:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... But because we are Internet-enabled,\n> and because our insecurity is only local, it seems OK to people.\n\nIt's not that it's \"okay\", it's that we haven't got any good\nalternatives. Password auth sucks from a convenience point of view\n(or even from a possibility point of view, for scripts; don't forget\nthe changes that you yourself recently applied to guarantee that a\nscript *cannot* supply a password to psql). Ident auth doesn't work,\nor isn't secure, in a lot of cases. Kerberos, well, not a lot to\noffer there either. What else do you want to make the default?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 01:08:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up\n\tdatabases for anyone!" }, { "msg_contents": "> It's not that it's \"okay\", it's that we haven't got any good\n> alternatives. Password auth sucks from a convenience point of view\n> (or even from a possibility point of view, for scripts; don't forget\n> the changes that you yourself recently applied to guarantee that a\n> script *cannot* supply a password to psql). Ident auth doesn't work,\n> or isn't secure, in a lot of cases. Kerberos, well, not a lot to\n> offer there either. What else do you want to make the default?\n\nSomeone, somewhere, please help us. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 01:11:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> ... But because we are Internet-enabled,\n>> and because our insecurity is only local, it seems OK to people.\n>\n>It's not that it's \"okay\", it's that we haven't got any good\n>alternatives. Password auth sucks from a convenience point of view\n>(or even from a possibility point of view, for scripts; don't forget\n>the changes that you yourself recently applied to guarantee that a\n>script *cannot* supply a password to psql). Ident auth doesn't work,\n>\n\nAck. We can't send in passwords to psql anymore? :(\n\nIs there a safe way to send username and password to psql?\n\nCheerio,\nLink.\n\n", "msg_date": "Wed, 28 Nov 2001 18:06:09 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n> >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> ... But because we are Internet-enabled,\n> >> and because our insecurity is only local, it seems OK to people.\n> >\n> >It's not that it's \"okay\", it's that we haven't got any good\n> >alternatives. Password auth sucks from a convenience point of view\n> >(or even from a possibility point of view, for scripts; don't forget\n> >the changes that you yourself recently applied to guarantee that a\n> >script *cannot* supply a password to psql). Ident auth doesn't work,\n> >\n> \n> Ack. We can't send in passwords to psql anymore? 
:(\n> \n> Is there a safe way to send username and password to psql?\n\nsmbclient does it via a file which must be 0600, but I don't know if \npsql has anything like that.\n\n-----------\nHannu\n", "msg_date": "Wed, 28 Nov 2001 17:03:47 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n>> ... Password auth sucks from a convenience point of view\n>> (or even from a possibility point of view, for scripts; don't forget\n>> the changes that you yourself recently applied to guarantee that a\n>> script *cannot* supply a password to psql).\n\n> Ack. We can't send in passwords to psql anymore? :(\n\nWell, Bruce, you were the one that was hot to make that /dev/tty change.\nTime to defend it.\n\n> Is there a safe way to send username and password to psql?\n\nIf you want to put those things in a script, you could still do\n\n\texport PGUSER=whatever\n\texport PGPASSWORD=whatever\n\tpsql ...\n\nThis would actually work a lot better than other ways for cases such\nas doing pg_dumpall, where you'd otherwise need to supply the password\nmultiple times.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 10:17:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > Is there a safe way to send username and password to psql?\n> \n> If you want to put those things in a script, you could still do\n> \n> \texport PGUSER=whatever\n> \texport PGPASSWORD=whatever\n> \tpsql ...\n\nBut this way the password ends up in the environment, which on many\nsystems is visible to other processes/users (via /proc or the 'ps'\ncommand). 
Might as well use \"trust\"...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 11:17:35 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> But this way the password ends up in the environment, which on many\n> systems is visible to other processes/users (via /proc or the 'ps'\n> command).\n\nYour *environment* is visible to other users? Geez, what a broken\nsystem ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 11:23:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Doug McNaught <doug@wireboard.com> writes:\n> > But this way the password ends up in the environment, which on many\n> > systems is visible to other processes/users (via /proc or the 'ps'\n> > command).\n> \n> Your *environment* is visible to other users? Geez, what a broken\n> system ...\n\nTrue on Solaris (/usr/ucb/ps -eax) at least. Other systems too I'm\npretty sure. I thought that Linux let you do it but I just checked\nand /proc/<pid>/environ is mode 0400...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 11:46:20 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "> At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n> >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> ... 
But because we are Internet-enabled,\n> >> and because our insecurity is only local, it seems OK to people.\n> >\n> >It's not that it's \"okay\", it's that we haven't got any good\n> >alternatives. Password auth sucks from a convenience point of view\n> >(or even from a possibility point of view, for scripts; don't forget\n> >the changes that you yourself recently applied to guarantee that a\n> >script *cannot* supply a password to psql). Ident auth doesn't work,\n> >\n> \n> Ack. We can't send in passwords to psql anymore? :(\n> \n> Is there a safe way to send username and password to psql?\n\nThe standard way I know of is to use 'expect' and wrap your psql call\naround that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:40:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n> >> ... Password auth sucks from a convenience point of view\n> >> (or even from a possibility point of view, for scripts; don't forget\n> >> the changes that you yourself recently applied to guarantee that a\n> >> script *cannot* supply a password to psql).\n> \n> > Ack. We can't send in passwords to psql anymore? 
:(\n> \n> Well, Bruce, you were the one that was hot to make that /dev/tty change.\n> Time to defend it.\n\nHey, if people want it back, it is easy to do.\n\nMy only goal was to make psql consistent with other applications that\nrequire passwords.\n\n> > Is there a safe way to send username and password to psql?\n> \n> If you want to put those things in a script, you could still do\n> \n> \texport PGUSER=whatever\n> \texport PGPASSWORD=whatever\n> \tpsql ...\n> \n> This would actually work a lot better than other ways for cases such\n> as doing pg_dumpall, where you'd otherwise need to supply the password\n> multiple times.\n\nWhat about 'ps -e' that shows all environment variables? This is in\nsome ways worse than piping the password into psql. At least there was\nsome chance that they were using 'cat' from a file with the proper\npermissions. WIth PGPASSWORD, there is no way to restrict who can see\nit via 'ps -e'.\n\nSeems we shouldn't allow PGPASSWORD either.\n\nThe idea of allowing the password to be stored in a file with 600\npermissions seems quite standard. CVS does this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:46:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Is there a safe way to send username and password to psql?\n\n> The standard way I know of is to use 'expect' and wrap your psql call\n> around that.\n\nDidn't you break that by making psql read the password from /dev/tty?\nOr can 'expect' take control of /dev/tty?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 13:48:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Is there a safe way to send username and password to psql?\n> \n> > The standard way I know of is to use 'expect' and wrap your psql call\n> > around that.\n> \n> Didn't you break that by making psql read the password from /dev/tty?\n> Or can 'expect' take control of /dev/tty?\n\nExpect communicates with the process via pseudo-ttys, so it works fine. \nAlso, if it can't read /dev/tty, it will read from stdin. At least that\nis that standard way to do it.\n\ngetpass() docs said:\n\n The getpass() function displays a prompt to, and reads in a password\n from, /dev/tty. If this file is not accessible, getpass displays the\n prompt on the standard error output and reads from the standard input.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:50:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> > At 01:08 AM 11/28/01 -0500, Tom Lane wrote:\n> >> ... Password auth sucks from a convenience point of view\n> >> (or even from a possibility point of view, for scripts; don't forget\n> >> the changes that you yourself recently applied to guarantee that a\n> >> script *cannot* supply a password to psql).\n> \n> > Ack. We can't send in passwords to psql anymore? :(\n> \n> Well, Bruce, you were the one that was hot to make that /dev/tty change.\n> Time to defend it.\n\nOK, I remember now. The issue was how to handle:\n\t\n\tcat file | psql test\n\nIn previous releases, you _had_ to have the password as the first line\nin file. In the current code, if you are running from a terminal, you\nsupply the password from the keyboard. If you are running from a batch\njob that has no terminal (/dev/tty), you must have the password as the\nfirst line in the file.\n\nPeople were complaining about the old behavior.\n\nI modeled the changes after the BSD getpass(), which I assume is the\nstandard behavior on most unixes.\n\nIt would be nice to extend .psqlrc to allow storage of passwords, but\nthat is only read by psql and not by all libpq applications. Not sure\nhow to handle this.\n\nI will document the security problem with PGPASSWORD and add a TODO item\nto remove it in 7.3. Is that OK with everyone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 14:13:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will document the security problem with PGPASSWORD and add a TODO item\n> to remove it in 7.3. Is that OK with everyone?\n\nI don't think we should remove it. Documenting that using it is a\nsecurity risk on some platforms seems a good idea, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 14:42:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The idea of allowing the password to be stored in a file with 600\n> permissions seems quite standard. CVS does this.\n\nSeems it would be nice if psql could accept a switch along the lines of\n\t--password-is-in-file filename\nand go off to read the password from the named file (which we hope is\nsecured correctly).\n\nOr take it a little further: what about defining a PGPASSWORDFILE\nenvironment variable that libpq would consult, before or instead of\nPGPASSWORD? That would give us the same feature for free across all\nlibpq-using apps, not only psql. Exposing a file name in the\nenvironment is not a security risk, I hope.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 14:55:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will document the security problem with PGPASSWORD and add a TODO item\n> > to remove it in 7.3. Is that OK with everyone?\n> \n> I don't think we should remove it. 
Documenting that using it is a\n> security risk on some platforms seems a good idea, however.\n\nOK, new text is:\n\n\t<envar>PGPASSWORD</envar>\n\tsets the password used if the backend demands password\n\tauthentication. This is not recommended because the password can\n\tbe read by others using <command>ps -e</command>.\n\nI am unsure if Linux has this problem but it seems most other Unix's do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:00:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The idea of allowing the password to be stored in a file with 600\n> > permissions seems quite standard. CVS does this.\n> \n> Seems it would be nice if psql could accept a switch along the lines of\n> \t--password-is-in-file filename\n> and go off to read the password from the named file (which we hope is\n> secured correctly).\n\nWe can check security of the file if we wish.\n\n> Or take it a little further: what about defining a PGPASSWORDFILE\n> environment variable that libpq would consult, before or instead of\n> PGPASSWORD? That would give us the same feature for free across all\n> libpq-using apps, not only psql. Exposing a file name in the\n> environment is not a security risk, I hope.\n\nYes, seems like a good idea. Seems we may need both. Either we allow\nmultiple host/password combinations in the file or we need a psql flag,\nbut then again, a psql flag doesn't cover the other interfaces. 
We\ncould require they use one password file per host.\n\nAdded to TODO:\n\n * Add PGPASSWORDFILE password file capability to libpq and psql flag\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:13:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> OK, new text is:\n> \n> \t<envar>PGPASSWORD</envar>\n> \tsets the password used if the backend demands password\n> \tauthentication. This is not recommended because the password can\n> \tbe read by others using <command>ps -e</command>.\n\nJust a nit--the 'e' option is for Berkeley-style ps (/usr/ucb/ps on\nSolaris). SysV ps doesn't have an equivalent from what I can see,\n(though I may have missed it) and '-e' does something totally\ndifferent. \n\n> I am unsure if Linux has this problem but it seems most other Unix's do.\n\nModern versions (of Linux) don't seem to--you can see the env for your\nprocesses but not for others'.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 15:23:57 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > OK, new text is:\n> > \n> > \t<envar>PGPASSWORD</envar>\n> > \tsets the password used if the backend demands password\n> > \tauthentication. This is not recommended because the password can\n> > \tbe read by others using <command>ps -e</command>.\n> \n> Just a nit--the 'e' option is for Berkeley-style ps (/usr/ucb/ps on\n> Solaris). 
SysV ps doesn't have an equivalent from what I can see,\n> (though I may have missed it) and '-e' does something totally\n> different. \n\nYes, I debated that one. I wanted to mention the environment issue\nwithout being verbose. I believe 'ps e', without the dash, does show\nenvironment, doesn't it?\n\n> \n> > I am unsure if Linux has this problem but it seems most other Unix's do.\n> \n> Modern versions (of Linux) don't seem to--you can see the env for your\n> processes but not for others'.\n\nIf Linux doesn't have this problem, I should mention it is a problem on\n_some_ platforms.\n\nNew text is:\n\n\t<envar>PGPASSWORD</envar>\n\tsets the password used if the backend demands password\n\tauthentication. This is not recommended because the password can\n\tbe read by others using <command>ps e</command> on some\n\tplatforms.\n\nI am glad to continue revising it until we are all happy. I throw these\ntexts out so people can make comments and improve upon it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:28:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > OK, new text is:\n> > \n> > \t<envar>PGPASSWORD</envar>\n> > \tsets the password used if the backend demands password\n> > \tauthentication. This is not recommended because the password can\n> > \tbe read by others using <command>ps -e</command>.\n> \n> Just a nit--the 'e' option is for Berkeley-style ps (/usr/ucb/ps on\n> Solaris). SysV ps doesn't have an equivalent from what I can see,\n> (though I may have missed it) and '-e' does something totally\n> different. 
\n\nOK, I reread what you said and I see now I am not going to get away with\nbeing brief. Text is now:\n\n\t<envar>PGPASSWORD</envar>\n\tsets the password used if the backend demands password\n\tauthentication. This is not recommended because the password can\n\tbe read by others using a <command>ps</command> environment flag\n\ton some platforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:35:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> OK, I reread what you said and I see now I am not going to get away with\n> being brief. Text is now:\n> \n> \t<envar>PGPASSWORD</envar>\n> \tsets the password used if the backend demands password\n> \tauthentication. This is not recommended because the password can\n> \tbe read by others using a <command>ps</command> environment flag\n> \ton some platforms.\n\nLooks good to me. I think the 'ps' options are the most annoying\ndifference between SysV and BSD Unices. Linux 'ps' tries to be both\nby acting BSDish if you supply options without a dash, and SysVish if\nyou use one. Ambitious, but it's bitten me once or twice...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 15:44:29 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Tom Lane writes:\n\n> It's not that it's \"okay\", it's that we haven't got any good\n> alternatives. 
Password auth sucks from a convenience point of view\n> (or even from a possibility point of view, for scripts; don't forget\n> the changes that you yourself recently applied to guarantee that a\n> script *cannot* supply a password to psql). Ident auth doesn't work,\n> or isn't secure, in a lot of cases. Kerberos, well, not a lot to\n> offer there either. What else do you want to make the default?\n\nunix_socket_permissions = 0700\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 28 Nov 2001 21:48:02 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Tom Lane writes:\n> \n> > It's not that it's \"okay\", it's that we haven't got any good\n> > alternatives. Password auth sucks from a convenience point of view\n> > (or even from a possibility point of view, for scripts; don't forget\n> > the changes that you yourself recently applied to guarantee that a\n> > script *cannot* supply a password to psql). Ident auth doesn't work,\n> > or isn't secure, in a lot of cases. Kerberos, well, not a lot to\n> > offer there either. What else do you want to make the default?\n> \n> unix_socket_permissions = 0700\n\nInteresting.\n\nThis means only the super-user can connect. Hmm, seems like an\ninteresting idea. You have to _open_ up the database to other users on\nyour local system. If you are running a server, you are the only person\nso there is no downside.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:53:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > OK, I reread what you said and I see now I am not going to get away with\n> > being brief. Text is now:\n> > \n> > \t<envar>PGPASSWORD</envar>\n> > \tsets the password used if the backend demands password\n> > \tauthentication. This is not recommended because the password can\n> > \tbe read by others using a <command>ps</command> environment flag\n> > \ton some platforms.\n> \n> Looks good to me. I think the 'ps' options are the most annoying\n> difference between SysV and BSD Unices. Linux 'ps' tries to be both\n> by acting BSDish if you supply options without a dash, and SysVish if\n> you use one. Ambitious, but it's bitten me once or twice...\n\nI dealt with this in pgmonitor. Got it working by trying different\ncombinations and checking the result.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 15:54:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > OK, new text is:\n> >\n> > \t<envar>PGPASSWORD</envar>\n> > \tsets the password used if the backend demands password\n> > \tauthentication. 
This is not recommended because the password can\n> > \tbe read by others using <command>ps -e</command>.\n> >\n> > I am unsure if Linux has this problem but it seems most other Unix's do.\n> \n> Please qualify \"most\".\n\nI thought SysV had an environment switch, but in researching I see it\ndoesn't. Seems it came from BSD, so it is probably not most.\n\nCurrent SGML comment says \"some platforms\".\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 16:03:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, new text is:\n>\n> \t<envar>PGPASSWORD</envar>\n> \tsets the password used if the backend demands password\n> \tauthentication. This is not recommended because the password can\n> \tbe read by others using <command>ps -e</command>.\n>\n> I am unsure if Linux has this problem but it seems most other Unix's do.\n\nPlease qualify \"most\".\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 28 Nov 2001 22:08:19 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "On Wed, 28 Nov 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >\n> > > OK, new text is:\n> > >\n> > > \t<envar>PGPASSWORD</envar>\n> > > \tsets the password used if the backend demands password\n> > > \tauthentication. This is not recommended because the password can\n> > > \tbe read by others using <command>ps -e</command>.\n> >\n> > Just a nit--the 'e' option is for Berkeley-style ps (/usr/ucb/ps on\n> > Solaris). 
SysV ps doesn't have an equivalent from what I can see,\n> > (though I may have missed it) and '-e' does something totally\n> > different.\n>\n> Yes, I debated that one. I wanted to mention the environment issue\n> without being verbose. I believe 'ps e', without the dash, does show\n> environment, doesn't it?\n\nSome OSs, I know AIX for example, use SysV options if you have a dash, and\nBSD options if you don't. So ps -e does the SysV -e option, while ps e\ndoes the BSD -e option.\n\nTake care,\n\nBill\n\n", "msg_date": "Wed, 28 Nov 2001 14:04:31 -0800 (PST)", "msg_from": "Bill Studenmund <wrstuden@netbsd.org>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "On Wed, 28 Nov 2001, Tom Lane wrote:\n\n> >> Is there a safe way to send username and password to psql?\n>\n> > The standard way I know of is to use 'expect' and wrap your psql call\n> > around that.\n>\n> Didn't you break that by making psql read the password from /dev/tty?\n> Or can 'expect' take control of /dev/tty?\n\nExpect sets up a pty:\n\n$ tty\n/dev/pts/6\n$ cat script.exp\nspawn tty\nexpect eof\n$ expect -f script.exp\nspawn tty\n/dev/pts/5\n\nMatthew.\n\n", "msg_date": "Thu, 29 Nov 2001 11:30:06 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Tom Lane writes:\n> \n> > It's not that it's \"okay\", it's that we haven't got any good\n> > alternatives. Password auth sucks from a convenience point of view\n> > (or even from a possibility point of view, for scripts; don't forget\n> > the changes that you yourself recently applied to guarantee that a\n> > script *cannot* supply a password to psql). Ident auth doesn't work,\n> > or isn't secure, in a lot of cases. Kerberos, well, not a lot to\n> > offer there either. 
What else do you want to make the default?\n> \n> unix_socket_permissions = 0700\n\nAdded to TODO:\n\n* Allow secure single-user use without passwords using Unix socket permissions\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Dec 2001 16:23:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" } ]
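Tom's PGPASSWORDFILE suggestion in the thread above — a password file that libpq would consult, combined with Bruce's note that "we can check security of the file if we wish" — is the idea that later became libpq's password-file support. The sketch below is a hypothetical Python model of such a check, not libpq code; the host:port:database:user:password line format, the "*" wildcard, and the function name are illustrative assumptions for this example.

```python
import os
import stat


def read_password_file(path, host, port, db, user):
    """Return the password matching (host, port, db, user), or None.

    Hypothetical file format, one entry per line:
        host:port:database:user:password
    with "*" matching anything. The file is rejected outright if it
    is group- or world-accessible (i.e. not 0600-style permissions).
    """
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "password file %s must not be accessible by group/others" % path)
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if len(fields) != 5:
                continue  # skip malformed lines
            h, p, d, u, password = fields
            if (h in ("*", host) and p in ("*", port)
                    and d in ("*", db) and u in ("*", user)):
                return password
    return None
```

The essential point of the design is that the permission test happens before any line is read, so a carelessly shared file never yields a password at all.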
[ { "msg_contents": "Any chance the sequence docs could be updated to indicate sequence bounds\n(int4)?\nhttp://www.postgresql.org/idocs/index.php?sql-createsequence.html\n\nPerhaps note that int8 is coming?\n\nHow do user notes make it into the documentation proper.\n\n-AZ\n\n\n\n", "msg_date": "Tue, 27 Nov 2001 17:47:40 -0800", "msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>", "msg_from_op": true, "msg_subject": "Sequence docs" }, { "msg_contents": "> How do user notes make it into the documentation proper.\n\nAt the moment they don't. Afaik no one is taking those notes and folding\nthem into the docs where appropriate. It would be great if someone (or\nseveral people) want to do that, and it would be a nice project for\nanyone wanting to start contributing to PostgreSQL.\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 15:03:07 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "On Wed, 28 Nov 2001, Thomas Lockhart wrote:\n\n> > How do user notes make it into the documentation proper.\n>\n> At the moment they don't. Afaik no one is taking those notes and folding\n> them into the docs where appropriate. It would be great if someone (or\n> several people) want to do that, and it would be a nice project for\n> anyone wanting to start contributing to PostgreSQL.\n\nI was under the impression Bruce had already started (if not finished)\nthat.
After 7.2 is released and the final docs are imported the current\ndocnotes are going to be cleared and therefore no longer available.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 10:10:40 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> How do user notes make it into the documentation proper.\n\n> At the moment they don't. Afaik no one is taking those notes and folding\n> them into the docs where appropriate.\n\nI made a pass a week or two ago to incorporate notes where appropriate.\nI did not go through the JDBC or ODBC sections, however, not feeling\ncompetent to work on the documentation for those.\n\n> It would be great if someone (or\n> several people) want to do that, and it would be a nice project for\n> anyone wanting to start contributing to PostgreSQL.\n\nIt'd be real nice if someone could look through the JDBC and/or ODBC notes\nand submit documentation patches before 7.2 goes final.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 11:04:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence docs " }, { "msg_contents": "\"August Zajonc\" <junk-pgsql@aontic.com> writes:\n> Any chance the sequence docs could be updated to indicate sequence bounds\n> (int4)?\n\nDone for 7.2, see\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/sql-createsequence.html\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/functions-sequence.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 11:31:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence docs " }, { "msg_contents": "> http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-createsequence.html\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/functions-sequence.html\n\nThese lead me to an interesting question. Though it would take a while,\nwhat happens if you get to the max sequence number? It can't roll over...\nare you just SOL? (Just for some numbers, to get to 2b, it would be\n~1000/sec for only 25 days)\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Wed, 28 Nov 2001 12:21:21 -0500 (EST)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Sequence docs " }, { "msg_contents": "I'll work on the JDBC ones.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> \n>>>How do user notes make it into the documentation proper.\n>>>\n> \n>>At the moment they don't.
Afaik no one is taking those notes and folding\n>>them into the docs where appropriate.\n>>\n> \n> I made a pass a week or two ago to incorporate notes where appropriate.\n> I did not go through the JDBC or ODBC sections, however, not feeling\n> competent to work on the documentation for those.\n> \n> \n>>It would be great if someone (or\n>>several people) want to do that, and it would be a nice project for\n>>anyone wanting to start contributing to PostgreSQL.\n>>\n> \n> It'd be real nice if someone could look through the JDBC and/or ODBC notes\n> and submit documentation patches before 7.2 goes final.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Wed, 28 Nov 2001 09:30:03 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> These lead me to an interesting question. Though it would take a while,\n> what happens if you get to the max sequence number?\n\nIf you didn't say CYCLE, nextval() starts giving errors.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 13:00:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence docs " }, { "msg_contents": "> On Wed, 28 Nov 2001, Thomas Lockhart wrote:\n> \n> > > How do user notes make it into the documentation proper.\n> >\n> > At the moment they don't. Afaik no one is taking those notes and folding\n> > them into the docs where appropriate. It would be great if someone (or\n> > several people) want to do that, and it would be a nice project for\n> > anyone wanting to start contributing to PostgreSQL.\n> \n> I was under the impression Bruce had already started (if not finished)\n> that.
After 7.2 is released and the final docs are imported the current\n> docnotes are going to be cleared and therefore no longer available.\n\nI don't know anything about those notes. Sorry. I am adding stuff from\nemail comments.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:36:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "On Wed, 28 Nov 2001, Bruce Momjian wrote:\n\n> > On Wed, 28 Nov 2001, Thomas Lockhart wrote:\n> >\n> > > > How do user notes make it into the documentation proper.\n> > >\n> > > At the moment they don't. Afaik no one is taking those notes and folding\n> > > them into the docs where appropriate. It would be great if someone (or\n> > > several people) want to do that, and it would be a nice project for\n> > > anyone wanting to start contributing to PostgreSQL.\n> >\n> > I was under the impression Bruce had already started (if not finished)\n> > that. After 7.2 is released and the final docs are imported the current\n> > docnotes are going to be cleared and therefore no longer available.\n>\n> I don't know anything about those notes. Sorry.
I am adding stuff from\n> email comments.\n\nI thought it was you, but Tom Lane just mentioned having gone thru it.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 14:46:02 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "Docs have been updated to address the JDBC related notes. Actually most \nwere already addressed by previous doc cleanups.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> \n>>>How do user notes make it into the documentation proper.\n>>>\n> \n>>At the moment they don't.
Afaik no one is taking those notes and folding\n>>them into the docs where appropriate.\n>>\n> \n> I made a pass a week or two ago to incorporate notes where appropriate.\n> I did not go through the JDBC or ODBC sections, however, not feeling\n> competent to work on the documentation for those.\n> \n> \n>>It would be great if someone (or\n>>several people) want to do that, and it would be a nice project for\n>>anyone wanting to start contributing to PostgreSQL.\n>>\n> \n> It'd be real nice if someone could look through the JDBC and/or ODBC notes\n> and submit documentation patches before 7.2 goes final.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Wed, 28 Nov 2001 21:36:40 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> Docs have been updated to address the JDBC related notes. Actually most \n> were already addressed by previous doc cleanups.\n\nExcellent. Thanks!\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 00:40:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence docs " }, { "msg_contents": "On Wed, 28 Nov 2001, Barry Lind wrote:\n\n> Docs have been updated to address the JDBC related notes. Actually most\n> were already addressed by previous doc cleanups.\n\nHas anyone looked at the ODBC docs and docnotes?\n\n\n\n>\n> thanks,\n> --Barry\n>\n> Tom Lane wrote:\n>\n> > Thomas Lockhart <lockhart@fourpalms.org> writes:\n> >\n> >>>How do user notes make it into the documentation proper.\n> >>>\n> >\n> >>At the moment they don't.
Afaik no one is taking those notes and folding\n> >>them into the docs where appropriate.\n> >>\n> >\n> > I made a pass a week or two ago to incorporate notes where appropriate.\n> > I did not go through the JDBC or ODBC sections, however, not feeling\n> > competent to work on the documentation for those.\n> >\n> >\n> >>It would be great if someone (or\n> >>several people) want to do that, and it would be a nice project for\n> >>anyone wanting to start contributing to PostgreSQL.\n> >>\n> >\n> > It'd be real nice if someone could look through the JDBC and/or ODBC notes\n> > and submit documentation patches before 7.2 goes final.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> >\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 29 Nov 2001 06:13:51 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Sequence docs" } ]
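Tom's answer in the sequence thread — in this era a sequence is int4-bounded (default MAXVALUE 2147483647), and without CYCLE, nextval() starts giving errors once the bound is exhausted, while a CYCLE sequence wraps to its minimum — can be illustrated with a small toy model. This is only a sketch of the documented semantics; the class, its defaults, and the exception type are invented for the example and are not PostgreSQL code.

```python
class Sequence:
    """Toy model of sequence bounds and nextval()/CYCLE semantics."""

    INT4_MAX = 2147483647  # default upper bound for an int4 sequence

    def __init__(self, start=1, increment=1, minvalue=1,
                 maxvalue=None, cycle=False):
        self.increment = increment
        self.minvalue = minvalue
        self.maxvalue = self.INT4_MAX if maxvalue is None else maxvalue
        self.cycle = cycle
        self._next = start

    def nextval(self):
        val = self._next
        if val > self.maxvalue:
            if not self.cycle:
                # Without CYCLE, exhausting the sequence is an error.
                raise OverflowError("sequence reached MAXVALUE")
            val = self.minvalue  # with CYCLE, wrap around to the minimum
        self._next = val + self.increment
        return val
```

Brandon's back-of-the-envelope number checks out under this model: at ~1000 nextval() calls per second, 2^31 values last roughly 2.1 million seconds, i.e. about 25 days.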
[ { "msg_contents": "I have not recorded successful tests for the following platforms (I've\nincluded the names of folks who reported for the last release, in case\nthey are still able to test or want to hand it off to someone else):\n\nAIX Gilles Darold\nBeOS Cyril Velter\nHPUX (have 10.20 from Tom; anyone tested 11.0 or higher?)\nIRIX Robert Bruccoleri\nLinux/arm Mark Knox\nLinux/s390 Neale Ferguson\nLinux/sparc Ryan Kirkpatrick\nNetBSD/arm32 Patrick Welche\nNetBSD/m68k Henry Hotz (not tested since 7.0; obsolete platform?)\nNetBSD/PPC Henry Hotz\nNetBSD/VAX Tom I. Helbekkmo\nNetBSD/x86 Giles Lean\nQNX (did I get a definitive report for 4.x already?)\nSCO Unixware Larry Rosenman\nSolaris/x86 Mathijs Brands\nSunOS Tatsuo Ishii (old release; still relevant?)\nWindows/Cygwin Jason Tishler\nWindows/native Magnus Hagander (clients only)\n\n\nThe following platforms have been reported as successfully running\nPostgreSQL 7.2:\n\nBSD/OS Bruce\nFreeBSD Chris Kings-Lynne\nLinux/Alpha Tom\nLinux/MIPS Hisao Shibuya\nLinux/PPC Tom\nLinux/x86 Thomas (and many others ;)\nMacOS-X Tom\nNetBSD/Alpha Thomas Thai\nOpenBSD/sparc Brandon Palmer\nOpenBSD/x86 Brandon Palmer\nSolaris/sparc Andrew Sullivan\nTru64 Alessio Bragadini (anyone tested 5.0 or higher?)\n\n\nHave I missed any success reports? Have I left out any platforms known\nto be good or bad (or which *should* be reported)? Sorry if I've missed\nanything; over time there is a lot flying around for which I'm not sure\nthe result (AIX and QNX come to mind for examples).\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 05:01:08 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Call for platform testing" }, { "msg_contents": "> Have I missed any success reports? Have I left out any platforms known\n> to be good or bad (or which *should* be reported)?
Sorry if I've missed\n> anything; over time there is a lot flying around for which I'm not sure\n> the result (AIX and QNX come to mind for examples).\n\nQNX6 will fail in 7.2.X. We have patches for 7.3. I believe AIX is\nfine thanks to work done a few weeks ago by Tatsuo and others. However,\nI am not totally sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 00:29:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> QNX6 will fail in 7.2.X. We have patches for 7.3. I believe AIX is\n> fine thanks to work done a few weeks ago by Tatsuo and others. However,\n> I am not totally sure.\n\nI have been troubled by 7.2b3/AIX 5L combo. The parallel regression\ntest hangs in the middle of the test. It seems backend goes into an\nidle state and nothing happens after that.\n\nI have been on a business trip till this week end. I will start to dig\ninto more next week.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 28 Nov 2001 15:03:29 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "I've upgraded to OpenUNIX 8, and all is well. \n\nIt probably passes on UW 7 as well, but I can no longer test it.
\nLER\n\n* Thomas Lockhart <lockhart@fourpalms.org> [011127 23:06]:\n> I have not recorded successful tests for the following platforms (I've\n> included the names of folks who reported for the last release, in case\n> they are still able to test or want to hand it off to someone else):\n> \n> AIX Gilles Darold\n> BeOS Cyril Velter\n> HPUX (have 10.20 from Tom; anyone tested 11.0 or higher?)\n> IRIX Robert Bruccoleri\n> Linux/arm Mark Knox\n> Linux/s390 Neale Ferguson\n> Linux/sparc Ryan Kirkpatrick\n> NetBSD/arm32 Patrick Welche\n> NetBSD/m68k Henry Hotz (not tested since 7.0; obsolete platform?)\n> NetBSD/PPC Henry Hotz\n> NetBSD/VAX Tom I. Helbekkmo\n> NetBSD/x86 Giles Lean\n> QNX (did I get a definitive report for 4.x already?)\n> SCO Unixware Larry Rosenman\n> Solaris/x86 Mathijs Brands\n> SunOS Tatsuo Ishii (old release; still relevant?)\n> Windows/Cygwin Jason Tishler\n> Windows/native Magnus Hagander (clients only)\n> \n> \n> The following platforms have been reported as successfully running\n> PostgreSQL 7.2:\n> \n> BSD/OS Bruce\n> FreeBSD Chris Kings-Lynne\n> Linux/Alpha Tom\n> Linux/MIPS Hisao Shibuya\n> Linux/PPC Tom\n> Linux/x86 Thomas (and many others ;)\n> MacOS-X Tom\n> NetBSD/Alpha Thomas Thai\n> OpenBSD/sparc Brandon Palmer\n> OpenBSD/x86 Brandon Palmer\n> Solaris/sparc Andrew Sullivan\n> Tru64 Alessio Bragadini (anyone tested 5.0 or higher?)\n> \n> \n> Have I missed any success reports? Have I left out any platforms known\n> to be good or bad (or which *should* be reported)?
Sorry if I've missed\n> anything; over time there is a lot flying around for which I'm not sure\n> the result (AIX and QNX come to mind for examples).\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 28 Nov 2001 07:17:23 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> I've upgraded to OpenUNIX 8, and all is well.\n> It probably passes on UW 7 as well, but I can no longer test it.\n\nI'm not familiar with SCO products. Is OU 8 the successor to UW 7, or is\nit a different product line? Do I need two entries in the list? I'm\nstill not clear on the differences between that and OpenServer 5 (other\nthan version of course) and whether that should (still) be listed as\n\"untested and possibly unsupported\".\n\nHmm. Looking at Caldera's web site, it lists all three as separate\nproducts :(\n\nIf these product lines are feature-compatible, could we just list \"SCO\"\nas supported, and in the comments mention \"tested on OpenUnix 8\"?\n\nAnyone else running other variants of SCO?\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 14:30:46 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "* Thomas Lockhart <lockhart@fourpalms.org> [011128 08:28]:\n> > I've upgraded to OpenUNIX 8, and all is well.\n> > It probably passes on UW 7 as well, but I can no longer test it.\n> \n> I'm not familiar with SCO products. Is OU 8 the successor to UW 7, or is\n> it a different product line? Do I need two entries in the list? 
I'm\n> still not clear on the differences between that and OpenServer 5 (other\n> than version of course) and whether that should (still) be listed as\n> \"untested and possibly unsupported\".\nOU8 is UW 7 + LKP (Linux Kernel Personality) + Bug Fixes.\n\nOSR5 is the old Sco 3.2 stuff, and is a totally different\nbeast/codeset.\n\n\n> \n> Hmm. Looking at Caldera's web site, it lists all three as separate\n> products :(\n> \n> If these product lines are feature-compatible, could we just list \"SCO\"\n> as supported, and in the comments mention \"tested on OpenUnix 8\"?\nI would POSSIBLY combine OU8 and UW7, but NOT OSR5. \n> \n> Anyone else running other variants of SCO?\n> \n> - Thomas\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 28 Nov 2001 09:25:55 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "On Wed, Nov 28, 2001 at 05:01:08AM +0000, Thomas Lockhart allegedly wrote:\n> Solaris/x86 Mathijs Brands\n\nUnfortunately I don't have a Solaris/x86 development box anymore :(\n\nCheers,\n\nMathijs\n", "msg_date": "Wed, 28 Nov 2001 16:27:24 +0100", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> OU8 is UW 7 + LKP (Linux Kernel Personality) + Bug Fixes.\n\nOK, I'll just do one entry for both UW and OU. Love those name changes\n:(\n\n> OSR5 is the old Sco 3.2 stuff, and is a totally different\n> beast/codeset.\n\nMaybe I'll just remove mention of it altogether. 
Either it is\nsupportable because it is Unix-y and somewhat compatible with other SCO\nproducts, or it isn't, but it isn't worth guessing at either way unless\nsomeone can test it.\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 16:32:42 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> > OU8 is UW 7 + LKP (Linux Kernel Personality) + Bug Fixes.\n> \n> OK, I'll just do one entry for both UW and OU. Love those name changes\n> :(\n\nIf the first name doesn't succeed, try, try again. :-)\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 14:17:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "OK, the list is getting smaller fast (thanks for all of the responses!).\nHere is what I have left currently (note the addition of NetBSD/sparc,\nwhich I had inadvertently omitted from the first list):\n\nBeOS            Cyril Velter\nLinux/arm       Mark Knox\nLinux/s390      Neale Ferguson\nNetBSD/arm32    Patrick Welche\nNetBSD/m68k     Bill Studenmund (will test)\nNetBSD/sparc    Matthew Green (left off the first list)\nNetBSD/VAX      Tom I.
Helbekkmo\nQNX (did I get a definitive report for 4.x already?)\nSunOS Tatsuo Ishii (old release; still relevant?)\nWindows/Cygwin Daniel Horak (OK in serial test, trouble with parallel\ntest?)\nWindows/native Magnus Hagander (clients only)\n\nI'm curious about the Windows/Cygwin trouble, and if others see it too,\nsince the -cygwin mailing list seems to report happiness with 7.1.3.\n\nTatsuo, do you think that we should still track SunOS explicitly?\n\nAre there any other platforms we are running on?\n\n - Thomas\n\n\nThe following platforms have been reported as successfully running\nPostgreSQL 7.2:\n\nAIX Andreas Zeugswetter (Tatsuo working on 5L?)\nBSD/OS Bruce\nFreeBSD Chris Kings-Lynne\nHPUX Tom (anyone tested 11.0 or higher?)\nIRIX Luis Amigo\nLinux/Alpha Tom\nLinux/MIPS Hisao Shibuya\nLinux/PPC Tom\nLinux/sparc Doug McNaught\nLinux/x86 Thomas (and many others ;)\nMacOS-X Gavin Sherry\nNetBSD/Alpha Thomas Thai\nNetBSD/PPC Bill Studenmund\nNetBSD/x86 Bill Studenmund\nOpenBSD/sparc Brandon Palmer\nOpenBSD/x86 Brandon Palmer\nSCO OpenUnix Larry Rosenman\nSolaris/sparc Andrew Sullivan\nSolaris/x86 Martin Renters\nTru64 Alessio Bragadini (trouble with 5.1?)\n", "msg_date": "Thu, 29 Nov 2001 03:39:04 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Thomas Lockhart writes:\n\n> > OSR5 is the old Sco 3.2 stuff, and is a totally different\n> > beast/codeset.\n>\n> Maybe I'll just remove mention of it altogether. 
Either it is\n> supportable because it is Unix-y and somewhat compatible with other SCO\n> products, or it isn't, but it isn't worth guessing at either way unless\n> someone can test it.\n\nThis seems to be a little drastic, just because one company ships two\nseparate products and everyone is confused about why they would do that.\nQuite a few people do use OpenServer (successfully, after a few patches\nagainst 7.1 which made it into 7.2), but if no one ends up submitting a\ntest result it will simply stay in the unsupported area.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 29 Nov 2001 17:05:06 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "On Wed, 28 Nov 2001, Larry Rosenman wrote:\n\n> I've upgraded to OpenUNIX 8, and all is well. \n> \n> It probably passes on UW 7 as well, but I can no longer test it. \n7.2b3 does run ok on uw711 (regression test shows no error)\n\n[snip]\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 29 Nov 2001 22:39:15 +0100", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> 7.2b3 does run ok on uw711 (regression test shows no error)\n\nThanks! I'll update the list.\n\n - Thomas\n", "msg_date": "Fri, 30 Nov 2001 14:12:29 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" } ]
[ { "msg_contents": "Hi Everybody.\nI am writing an application in java and want to create a databse with business logic (In a lot of functions) remotely. For that purpose I would like to get some parameters from user. these are \n\n\n1. Database Name (e.g. anaconda)\n2. Super user (e.g. postgres)\n3. Super user passwd. (e.g. postgres)\n4. Server IP (e.g 192.168.5.127)\n5. Server port (Default 5432).\n6. Datapath(e.g. /usr/local/Mydata)\n\nI am connecting to template1 using postgres-jdbc interface. I am using sql command Create Database. When I am giving command as follows\n\nCreate database anaconda With Location = 'usr/local/Mydata'\n\nIt gives error that absolute path is not allowed.\nPostgres Help says that to allow absolute path u have to install postgreSQL with some gamke option but it is security risk. If I want to give this application to anyone than it may be not suitable to reinstall postgreSQL. Help also says that that u have to put environment variable to avoid this error. (using commands export/initlocation in same server process.). \nBut since I want to create DB remotely than How can i set environment variable from different m/c (as m/c may be windows 95/98/NT and user not knows anything about telnet). I want to so all things from my application.\n\nIs there any different way to implement this. \nAny suggestion(s) may help me.\n\nRegards\nDinesh Parikh\nNew Delhi\n\n\n\n\n\n\n\nHi Everybody.\nI am writing an application in java and want to \ncreate a databse with business logic (In a lot of functions) remotely. For that \npurpose I would like to get some parameters from user. these are \n \n \n1. Database Name (e.g. \nanaconda)\n2. Super user (e.g. \npostgres)\n3. Super user passwd. \n(e.g. postgres)\n4. Server IP (e.g \n192.168.5.127)\n5. Server port (Default \n5432).\n6. Datapath(e.g. \n/usr/local/Mydata)\n \nI am connecting to template1 using postgres-jdbc \ninterface. I am using sql command Create \nDatabase. 
When I am giving command as \nfollows\n \nCreate database anaconda \nWith Location = 'usr/local/Mydata'\n \nIt gives error that absolute path is not \nallowed.\nPostgres Help says that to allow absolute path u \nhave to install postgreSQL with some gamke option but it is security risk. If I \nwant to give this application to anyone than it may be not suitable to reinstall \npostgreSQL. Help also says that that u have to put environment variable to avoid \nthis error. (using commands export/initlocation in same server \nprocess.). \nBut since I want to create DB remotely than \nHow can i set environment variable from different m/c (as m/c may be windows \n95/98/NT and user not knows anything about telnet). I want to so all things from \nmy application.\n \nIs there any different way to implement this. \n\nAny suggestion(s) may help me.\n \nRegards\nDinesh Parikh\nNew Delhi", "msg_date": "Wed, 28 Nov 2001 16:40:58 +0530", "msg_from": "\"Dinesh Parikh\" <dineshp@newgen.co.in>", "msg_from_op": true, "msg_subject": "Creating Postgres DB from Remote Machine" } ]
[ { "msg_contents": "\n> QNX6 will fail in 7.2.X. We have patches for 7.3. I believe AIX is\n> fine thanks to work done a few weeks ago by Tatuso and others.\nHowever,\n> I am not totally sure.\n\n7.2b3 passes \"make check\" on AIX 4.3 with both xlc and gcc.\n1 Rounding difference (last digit) and 3 0 vs -0 in the geometry tests \nwhen using gcc (both gcc and xlc show -0s, only different ones).\n\nThe PG_FUNCTION_INFO_V1 macro produces an annoying warning, that I\ncannot \ninterpret:\n\nxlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -DREFINT_VERBOSE -I.\n-I../../src/include -I/us\nr/local/include -c -o autoinc.o autoinc.c\n 8 | extern Pg_finfo_record * pg_finfo_autoinc (void);\nPg_finfo_record * pg_finfo_a\nutoinc (void) { static Pg_finfo_record my_finfo = { 1 }; return\n&my_finfo; };\n \n........................................................................\n......\n........................................................................\n....a\na - 1506-137 (E) Declaration must declare at least one declarator, tag,\nor the members of\nan enumeration.\n\nAndreas\n", "msg_date": "Wed, 28 Nov 2001 12:25:15 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> 7.2b3 passes \"make check\" on AIX 4.3 with both xlc and gcc.\n> 1 Rounding difference (last digit) and 3 0 vs -0 in the geometry tests\n> when using gcc (both gcc and xlc show -0s, only different ones).\n\nThanks Andreas. 
Tatsuo is continuing testing on AIX 5.x, so I'll\ncontinue to list 4.3 as supported for now.\n\n - Thomas\n", "msg_date": "Wed, 28 Nov 2001 14:46:17 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> The PG_FUNCTION_INFO_V1 macro produces an annoying warning, that I\n> cannot interpret:\n\n> xlc -O2 -qmaxmem=16384 -qsrcmsg -qlonglong -DREFINT_VERBOSE -I.\n> -I../../src/include -I/usr/local/include -c -o autoinc.o autoinc.c\n> 8 | extern Pg_finfo_record * pg_finfo_autoinc (void);\n> Pg_finfo_record * pg_finfo_a\n> utoinc (void) { static Pg_finfo_record my_finfo = { 1 }; return\n> &my_finfo; };\n \n> ........................................................................\n> ......\n> ........................................................................\n> ....a\n> a - 1506-137 (E) Declaration must declare at least one declarator, tag,\n> or the members of\n> an enumeration.\n\nIt's not so much the macro as the semicolon after it. I get \"Empty\ndeclaration\" warnings from HP's cc for those lines myself. Kind of\nannoying, but not writing the semicolon in the source sounds uglier.\n\nIs it worth adding a dummy declaration to the macro just to shut up\nthese compilers? We could probably make the macro produce bogus\nextern declarations, say PG_FUNCTION_INFO_V1(foo) produces\n\nextern Pg_finfo_record * pg_finfo_foo (void);\nPg_finfo_record * pg_finfo_foo (void)\n{\n static Pg_finfo_record my_finfo = { 1 };\n return &my_finfo;\n}\nextern int pg_finfo_foo_dummy\n\nwhich would satisfy even the most pedantic compiler ... unless it\nchose to warn about unreferenced extern declarations, but I don't\nthink any do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 10:25:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing " } ]
[ { "msg_contents": "Hi all,\n\nI'm novice in PostgreSQL.\nI want to obtain current week number.\nUnder PHP I try:\n\n$query = \"SELECT EXTRACT(WEEK FROM NOW)\";\n$result= pg_exec($dbconn,$query);\n\nwhere: $dbconn - successful conection id\nBut this give me an error message.\nCan somebody help me?\nSorry for my english.\n\nThanks in advice.\n\nWodzu\n\n\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 13:09:48 +0100", "msg_from": "\"Wodzu\" <wodzu@wodzus.prv.pl>", "msg_from_op": true, "msg_subject": "Week number" }, { "msg_contents": "On Wed, 2001-11-28 at 13:09, Wodzu wrote:\n> I'm novice in PostgreSQL.\n\nThen pgsql-novice is the correct list to ask this kind of question.\n> I want to obtain current week number.\n> Under PHP I try:\n> \n> $query = \"SELECT EXTRACT(WEEK FROM NOW)\";\n> $result= pg_exec($dbconn,$query);\n\nselect date_part('week', CURRENT_TIMESTAMP);\n\nMarkus Bertheau", "msg_date": "28 Nov 2001 15:13:43 +0100", "msg_from": "Markus Bertheau <twanger@bluetwanger.de>", "msg_from_op": false, "msg_subject": "Re: Week number" }, { "msg_contents": "\nBoth\n\nSELECT extract(week FROM now());\n\nand\n\nSELECT extract(week FROM CURRENT_TIMESTAMP);\n\nwork as expected. 
I don't know which is the *accepted* SQL 92\nidiom, but I would bet that it isn't 'now().'\n\nJason\n\n\"Wodzu\" <wodzu@wodzus.prv.pl> writes:\n\n> Hi all,\n> \n> I'm novice in PostgreSQL.\n> I want to obtain current week number.\n> Under PHP I try:\n> \n> $query = \"SELECT EXTRACT(WEEK FROM NOW)\";\n> $result= pg_exec($dbconn,$query);\n> \n> where: $dbconn - successful conection id\n> But this give me an error message.\n> Can somebody help me?\n> Sorry for my english.\n> \n> Thanks in advice.\n> \n> Wodzu\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "28 Nov 2001 09:45:09 -0700", "msg_from": "Jason Earl <jason.earl@simplot.com>", "msg_from_op": false, "msg_subject": "Re: Week number" }, { "msg_contents": "In 7.1.x, try:\n\nSELECT EXTRACT (WEEK FROM CURRENT_DATE);\n\nin older postgres maybe this:\n\nSELECT EXTRACT (WEEK FROM DATE 'today');\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Wodzu\n> Sent: Wednesday, 28 November 2001 8:10 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Week number\n> \n> \n> Hi all,\n> \n> I'm novice in PostgreSQL.\n> I want to obtain current week number.\n> Under PHP I try:\n> \n> $query = \"SELECT EXTRACT(WEEK FROM NOW)\";\n> $result= pg_exec($dbconn,$query);\n> \n> where: $dbconn - successful conection id\n> But this give me an error message.\n> Can somebody help me?\n> Sorry for my english.\n> \n> Thanks in advice.\n> \n> Wodzu\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 29 Nov 2001 09:43:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Week number" }, { "msg_contents": "> Then pgsql-novice is the correct list to ask 
this kind of question.\n> > I want to obtain current week number.\n> > Under PHP I try:\n> >\n> > $query = \"SELECT EXTRACT(WEEK FROM NOW)\";\n> > $result= pg_exec($dbconn,$query);\n>\n> select date_part('week', CURRENT_TIMESTAMP);\n\nI'll just point out that using date_part isn't ANSI SQL, you should use the\nEXTRACT function, and the CURRENT_DATE, CURRENT_TIME and CURRENT_TIMESTAMP\nvariables.\n\nChris\n\n", "msg_date": "Thu, 29 Nov 2001 09:47:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Week number" } ]
[ { "msg_contents": "AFAIR (Postgres 6.5 - 7.1.3):\n\ncd /usr/src/postgresql-X.X.X/src/interfaces/libpq++/examples\n[postgres@xxx examples]$ /usr/local/pgsql/bin/psql\nPassword:\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npostgres=# \\i testlibpq2.sql\nCREATE\nCREATE\npsql:testlibpq2.sql:5: ERROR: parser: parse error at or near \"\"\npsql:testlibpq2.sql:5: ERROR: parser: parse error at or near \"]\"\npostgres=# \\q\n[postgres@komuna examples]$ cat testlibpq2.sql\nCREATE TABLE TBL1 (i int4);\n\nCREATE TABLE TBL2 (i int4);\n\nCREATE RULE r1 AS ON INSERT TO TBL1 DO [INSERT INTO TBL2 values (new.i);\nNOTIFY TBL2];\n[postgres@komuna examples]$\n\nWhat is wrong?\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 15:07:51 +0100", "msg_from": "\"Albert Bartoszko\" <albertb@nt.kegel.com.pl>", "msg_from_op": true, "msg_subject": "Rules" }, { "msg_contents": "\"Albert Bartoszko\" <albertb@nt.kegel.com.pl> writes:\n> [ src/interfaces/libpq++/examples/testlibpq2.sql fails ]\n> What is wrong?\n\nAs best I can tell, psql is falling down on the job: it's not treating\nsquare brackets as something to be matched up, as it does with\nparentheses. It ships this command to the backend as two separate\nqueries. 
Trying it with psql -e shows what's happening:\n\nregression=# CREATE RULE r1 AS ON INSERT TO TBL1 DO [INSERT INTO TBL2 values (new.i); NOTIFY TBL2];\nCREATE RULE r1 AS ON INSERT TO TBL1 DO [INSERT INTO TBL2 values (new.i);\nERROR: parser: parse error at or near \"\"\n NOTIFY TBL2];\nERROR: parser: parse error at or near \"]\"\nregression=#\n\nWhile this clearly ought to be fixed, I think it's a bit late in the\ncycle to consider fixing it for 7.2, especially seeing as how no\nfunctionality is lost (multi-rule actions work fine if you put\nparentheses rather than square brackets around them).\n\nFor 7.3, we should either fix psql or remove the option to use square\nbrackets in rule action lists. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 16:49:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Rules " }, { "msg_contents": "On Wed, 28 Nov 2001, Albert Bartoszko wrote:\n\n> AFAIR (Postgres 6.5 - 7.1.3):\n>\n> cd /usr/src/postgresql-X.X.X/src/interfaces/libpq++/examples\n> [postgres@xxx examples]$ /usr/local/pgsql/bin/psql\n> Password:\n> Welcome to psql, the PostgreSQL interactive terminal.\n>\n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? 
for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n>\n> postgres=# \\i testlibpq2.sql\n> CREATE\n> CREATE\n> psql:testlibpq2.sql:5: ERROR: parser: parse error at or near \"\"\n> psql:testlibpq2.sql:5: ERROR: parser: parse error at or near \"]\"\n> postgres=# \\q\n> [postgres@komuna examples]$ cat testlibpq2.sql\n> CREATE TABLE TBL1 (i int4);\n>\n> CREATE TABLE TBL2 (i int4);\n>\n> CREATE RULE r1 AS ON INSERT TO TBL1 DO [INSERT INTO TBL2 values (new.i);\n> NOTIFY TBL2];\n\nIt works for me with () rather than [], although both are in the help.\n\n\n", "msg_date": "Wed, 28 Nov 2001 13:56:18 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Rules" }, { "msg_contents": "> For 7.3, we should either fix psql or remove the option to use square\n> brackets in rule action lists. Comments anyone?\n\nDo we really need square brackets?\n\n\n\nJust my 2c,\nMarten.\n\n", "msg_date": "Wed, 28 Nov 2001 23:52:14 +0100", "msg_from": "=?iso-8859-1?Q?M=E5rten_Gustafsson?= <marten@ditt.as>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Rules " }, { "msg_contents": "Tom Lane writes:\n\n> For 7.3, we should either fix psql or remove the option to use square\n> brackets in rule action lists. Comments anyone?\n\nI'd certainly be for removing this since no one could ever really have\nused this, but this feature seems so unusual, perhaps there was a reason\nfor it?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 29 Nov 2001 20:04:15 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Rules " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> For 7.3, we should either fix psql or remove the option to use square\n>> brackets in rule action lists. 
Comments anyone?\n\n> I'd certainly be for removing this since no one could ever really have\n> used this, but this feature seems so unusual, perhaps there was a reason\n> for it?\n\nMy guess is that the syntax predates psql, or at least predates psql's\ncurrent ideas about splitting up queries.
It's possible that someone\n> out there is entering square-bracket-delimited rule lists via a non-psql\n> interface, but I doubt it. I'd vote for elimination of the syntax,\n> I think.\n\n You're right. The [] syntax was used in Postgres 4.2 and well\n supported by monitor(1).\n\n Adding the parens as alternative was - uhm - a workaround for\n not beeing able to get square brackets through psql at some\n point, but I'm not sure if that was in 6.4 or 6.5.\n\n Anyway, let's rip out the square brackets.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 5 Dec 2001 17:58:02 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Rules" } ]
[ { "msg_contents": "> MacOS 10.1.1 passes.\n\nThanks Gavin!\n\n - Thomas\n\n> ======================\n> All 79 tests passed.\n> ======================\n> \n> [localhost:~/postgresql-7.2b3] swm% uname -a\n> Darwin localhost 5.1 Darwin Kernel Version 5.1: Tue Oct 30 00:06:34 PST\n> 2001; root:xnu/xnu-201.5.obj~1/RELEASE_PPC Power Macintosh powerpc\n", "msg_date": "Wed, 28 Nov 2001 14:44:59 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" } ]
[ { "msg_contents": "\n> Doug McNaught <doug@wireboard.com> writes:\n> > But this way the password ends up in the environment, which on many\n> > systems is visible to other processes/users (via /proc or the 'ps'\n> > command).\n> \n> Your *environment* is visible to other users? Geez, what a broken\n> system ...\n\nTry \"ps axewww\" ? Doesn't work on your platform ? \nWorks on AIX, Linux?, ...\n\nAndreas\n", "msg_date": "Wed, 28 Nov 2001 17:48:51 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up " }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n\n> > Doug McNaught <doug@wireboard.com> writes:\n> > > But this way the password ends up in the environment, which on many\n> > > systems is visible to other processes/users (via /proc or the 'ps'\n> > > command).\n> >\n> > Your *environment* is visible to other users? Geez, what a broken\n> > system ...\n>\n> Try \"ps axewww\" ?
Doesn't work on your platform ?\n> Works on AIX, Linux?, ...\n\nLinux Debian Unstable (updated 1 week ago).\n\nFor a non-root user, only her processes' environment appears.\n(and /proc/*/environ permissions are 400, the user being the process owner)\n\nFor root, all processes' environment is shown.\n\nAntonio\n\n\n>\n>\n> Andreas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Wed, 28 Nov 2001 18:17:04 +0100", "msg_from": "Antonio Fiol =?iso-8859-1?Q?Bonn=EDn?= <fiol@w3ping.com>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "> \n> > Doug McNaught <doug@wireboard.com> writes:\n> > > But this way the password ends up in the environment, which on many\n> > > systems is visible to other processes/users (via /proc or the 'ps'\n> > > command).\n> > \n> > Your *environment* is visible to other users? Geez, what a broken\n> > system ...\n> \n> Try \"ps axewww\" ? Doesn't work on your platform ? \n> Works on AIX, Linux?, ...\n\nWorks on BSD/OS too, so I assume it works on all the BSD's.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:47:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> Zeugswetter Andreas SB SD wrote:\n> \n> > > Doug McNaught <doug@wireboard.com> writes:\n> > > > But this way the password ends up in the environment, which on many\n> > > > systems is visible to other processes/users (via /proc or the 'ps'\n> > > > command).\n> > >\n> > > Your *environment* is visible to other users?
Geez, what a broken\n> > > system ...\n> >\n> > Try \"ps axewww\" ? Doesn't work on your platform ?\n> > Works on AIX, Linux?, ...\n> \n> Linux Debian Unstable (updated 1 week ago).\n> \n> For a non-root user, only her processes' environment appears.\n> (and /proc/*/environ permissions are 400, the user being the process owner)\n> \n> For root, all processes' environment is shown.\n\nOn BSD/OS, it doesn't matter what user you are. You can see the\nenvironment of all processes.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 13:48:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" }, { "msg_contents": "> > Try \"ps axewww\" ? Doesn't work on your platform ?\n> > Works on AIX, Linux?, ...\n>\n> Linux Debian Unstable (updated 1 week ago).\n>\n> For a non-root user, only her processes' environment appears.\n> (and /proc/*/environ permissions are 400, the user being the\n> process owner)\n>\n> For root, all processes' environment is shown.\n>\n> Antonio\n\nI've tried it on FreeBSD and it seems an unprivileged user can only see his\nor her own environmental variables, it doesn't show variables for any other\nuser.\n\nChris\n", "msg_date": "Thu, 29 Nov 2001 09:58:58 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens up" }, { "msg_contents": "> > > Try \"ps axewww\" ?
Doesn't work on your platform ?\n> > > Works on AIX, Linux?, ...\n> >\n> > Linux Debian Unstable (updated 1 week ago).\n> >\n> > For a non-root user, only her processes' environment appears.\n> > (and /proc/*/environ permissions are 400, the user being the\n> > process owner)\n> >\n> > For root, all processes' environment is shown.\n> >\n> > Antonio\n> \n> I've tried it on FreeBSD and it seems an unprivlileged user can only see his\n> or her own environmental variables, it doesn't show variables for any other\n> user.\n\nYes, I see that now. Seems maybe my OS is the only one that isn't fixed\nyet. :-(\n\nAnyway, I based my dislike of passwords in the environment on prior\npractice of other programs. I knew one of the reasons it isn't used is\nbecause of 'ps', but there is also the issue of the passwords passed to\nsubprocesses, across 'su' calls, and into 'core' files. It just seems\nlike a bad practice.\n\nPasswords stored in a file, though not ideal, seems more secure, are\nused by cvs and a few other programs, and allow us to define a format\nthat can be used to store different user/host/password combinations in\nthe same file, if we wish.\n\nOf course, given that most OS's don't have the 'ps' environment problem,\nmaybe we have to keep PGPASSWORD around. It is up to the group. Do\npeople want me to change my wording of the option in the SGML sources?\n\n <envar>PGPASSWORD</envar>\n sets the password used if the backend demands password\n authentication. This is not recommended because the password can\n be read by others using a <command>ps</command> environment flag\n on some platforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Nov 2001 11:37:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" } ]
[ { "msg_contents": "> Windows/Cygwin Jason Tishler\n\nOnly the \"privileges\" test failed when running \"serial_schedule\". When\nrunning \"parallel_schedule\" there are random failures with errors\n\"cannot access some file with a table/index/...\". So I think that\nPostgreSQL is OK, but the Cygwin/Windows layer is not. We should try to\nfind what's wrong when multiple backends are running.\n\nsystem:\nCYGWIN_NT-5.0 CERT 1.3.5(0.47/3/2) 2001-11-13 23:16 i686 unknown\n\nwith cygipc 1.11\n\n\n\t\t\tDan\n", "msg_date": "Wed, 28 Nov 2001 17:51:24 +0100", "msg_from": "=?US-ASCII?Q?Horak_Daniel?= <horak@sit.plzen-city.cz>", "msg_from_op": true, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": ">\n\nRegression test on Irix 6.5.13 installed with MIPSPro 7.30\n\n=======================\n 2 of 79 tests failed.\n=======================\ngeometry and join", "msg_date": "Wed, 28 Nov 2001 18:12:11 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "regression test on irix 6.5.12 with MIPSpro Compilers: Version 7.3.1.2m\n(this platform is 8-r10000 ip25 processors, previous is 8 -r12000 ip35 processors)\n\n=======================\n 2 of 79 tests failed.\n=======================\nagain join and geometry(it also failed in postgres 7.1.3)", "msg_date": "Wed, 28 Nov 2001 18:55:37 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "7.2b3 compiles and passes all tests on Linux/Sparc64. I updated the\nregression test database. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n\n", "msg_date": "28 Nov 2001 13:24:04 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> again join and geometry(it also failed in postgres 7.1.3)\n\nThe join discrepancy is probably an artifact of your local qsort()\nbehaving slightly differently for equal keys than everyone else's.\nCurious that it only affects these few queries, though.\n\nGeometry differences in the low-order digits are par for the course.\nI wouldn't even stop to think about it, except that I see that you\nare already using a platform-specific geometry comparison file\n(geometry-irix.out). Some digging in the CVS logs and mail archives\nshows that that file was submitted by Pete Forman in Oct 2000, and\nhe was using \n\n> Architecture (example: Intel Pentium) : SGI MIPS 8000\n>\n> Operating System (example: Linux 2.0.26 ELF) : IRIX 6.5.5m\n>\n> PostgreSQL version (example: PostgreSQL-7.1): PostgreSQL-7.1\n>\n> Compiler used (example: gcc 2.8.0): MIPSPro 7.3\n\nNot clear at this point if the differences in your results are due to\nhardware, library version, or compiler version differences.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 13:34:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n> > again join and geometry(it also failed in postgres 7.1.3)\n>\n> The join discrepancy is probably an artifact of your local qsort()\n> behaving slightly differently for equal keys than everyone else's.\n> Curious that it only affects these few queries, though.\n>\n> Geometry differences in the low-order digits are par for the course.\n> I wouldn't even stop to think about it, except that I see that you\n> are already using a platform-specific geometry comparison file\n> 
(geometry-irix.out). Some digging in the CVS logs and mail archives\n> shows that that file was submitted by Pete Forman in Oct 2000, and\n> he was using\n>\n> > Architecture (example: Intel Pentium) : SGI MIPS 8000\n> >\n> > Operating System (example: Linux 2.0.26 ELF) : IRIX 6.5.5m\n> >\n> > PostgreSQL version (example: PostgreSQL-7.1): PostgreSQL-7.1\n> >\n> > Compiler used (example: gcc 2.8.0): MIPSPro 7.3\n>\n> Not clear at this point if the differences in your results are due to\n> hardware, library version, or compiler version differences.\n>\n> regards, tom lane\n\nI've reviewed the results, join differences seem to be conceptual\ndifferences, expected result gives higher value to NULL that 0\n(personally I think 0 is bigger than NULL).\nGeometry differences are architectural round up differences (I think\nthat MIPS 10000 and 12000 may have corrected previous errors or might\nbe a different point-of-view about rounding)\n\n\n", "msg_date": "Wed, 28 Nov 2001 20:01:30 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> regression test on irix 6.5.12 with MIPSpro Compilers: Version 7.3.1.2m\n> (this platform is 8-r10000 ip25 processors, previous is 8 -r12000 ip35 processors)\n> \n> =======================\n> 2 of 79 tests failed.\n> =======================\n> again join and geometry(it also failed in postgres 7.1.3)\n\nGot both reports. 
Counts as good from my pov...\n\nThanks!\n\n - Thomas\n", "msg_date": "Thu, 29 Nov 2001 00:09:21 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> > regression test on irix 6.5.12 with MIPSpro Compilers: Version 7.3.1.2m\n> > (this platform is 8-r10000 ip25 processors, previous is 8 -r12000 ip35 processors)\n> >\n> > =======================\n> > 2 of 79 tests failed.\n> > =======================\n> > again join and geometry(it also failed in postgres 7.1.3)\n>\n> Got both reports. Counts as good from my pov...\n>\n> Thanks!\n>\n> - Thomas\n\nI also count them as good but I think that someone may change expected, cause two\nplatforms with different OS, architecture and compilers are giving same result.\n\n", "msg_date": "Thu, 29 Nov 2001 08:58:01 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Tom Lane wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n> > again join and geometry(it also failed in postgres 7.1.3)\n>\n> The join discrepancy is probably an artifact of your local qsort()\n> behaving slightly differently for equal keys than everyone else's.\n> Curious that it only affects these few queries, though.\n>\n> Geometry differences in the low-order digits are par for the course.\n> I wouldn't even stop to think about it, except that I see that you\n> are already using a platform-specific geometry comparison file\n> (geometry-irix.out). 
Some digging in the CVS logs and mail archives\n> shows that that file was submitted by Pete Forman in Oct 2000, and\n> he was using\n>\n> > Architecture (example: Intel Pentium) : SGI MIPS 8000\n> >\n> > Operating System (example: Linux 2.0.26 ELF) : IRIX 6.5.5m\n> >\n> > PostgreSQL version (example: PostgreSQL-7.1): PostgreSQL-7.1\n> >\n> > Compiler used (example: gcc 2.8.0): MIPSPro 7.3\n>\n> Not clear at this point if the differences in your results are due to\n> hardware, library version, or compiler version differences.\n>\n> regards, tom lane\n\nI've been searching for info that could explain round-up difference in\ngeometry.\nI believe i've found a reasonable explanation, there is a architecture\ndisroupt between old MIPS3 FPUs\nand newer MIPS4 FPUs.(so I'm almost sure that is the explanation).\nI would like to know if is there anyone else running on IRIX, so we can\ndiscuss these and other results\nfrom another point of view.\nThanks for all your heavy working.\nbest wishes.\n\n", "msg_date": "Thu, 29 Nov 2001 20:45:02 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> I've been searching for info that could explain round-up difference in\n> geometry.\n> I believe i've found a reasonable explanation, there is a architecture\n> disroupt between old MIPS3 FPUs\n> and newer MIPS4 FPUs.(so I'm almost sure that is the explanation).\n> I would like to know if is there anyone else running on IRIX, so we can\n> discuss these and other results from another point of view.\n\nI sent mail earlier to Pete Forman, who submitted the existing\ngeometry-irix comparison file. He hasn't responded yet.\n\nIf Pete's lost interest then I'd be willing to adopt your results as\nthe standard geometry-irix file. 
If he's still interested then we'd\nneed to figure out a way for the resultmap mechanism to distinguish\nolder and newer MIPS hardware in order to accommodate two sets of\nIRIX geometry results. Got any idea how to do that? (Does config.guess\nproduce any relevant info?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 15:31:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n>\n> > I believe i've found a reasonable explanation, there is a architecture\n> > disroupt between old MIPS3 FPUs\n> > and newer MIPS4 FPUs.(so I'm almost sure that is the explanation).\n>\n> If Pete's lost interest then I'd be willing to adopt your results as\n> the standard geometry-irix file. If he's still interested then we'd\n> need to figure out a way for the resultmap mechanism to distinguish\n> older and newer MIPS hardware in order to accommodate two sets of\n> IRIX geometry results. Got any idea how to do that? 
(Does config.guess\n> produce any relevant info?)\n>\n> regards, tom lane\n\nFirst I have to apologize, I said MIPS3 and MIPS4 architecture, MIPS is a\nprocessor instruction set, not an architecture, and r8000 CPUs have the MIPS4\ninstruction set.\nI should have said old 8010 FPUs and newer 10010 and 12010 FPUs.\nI think there are only three possible reasons for the differences:\n1.- FPUs architecture.\n2.- compiler specific optimization( It's only possible if Pete turned on\nextensive optimization for 8000), because we are compiling with\nfull-compatibility options -rx directives optimize cache and memory use but\ndoes not affect processors.)\n3.-If regress is sending binary data to disk and later retrieving the data\nand writing output files, there \"might\" be rounding up differences between\nwhat was written and what was read.\n\nI still believe this difference is made by FPUs\nconfig.guess only says mips-sgi-irix6.5\nI think we can make two results choosing FPUs from hinv\nbut we don't have another r8000 to test Pete's results nor another r10000 or\n12000 to test our results.\nI would try to test in our 4 r10000 IP27 to be sure that our results are\nright.\n\nregards\n Luis Amigo\n\n", "msg_date": "Fri, 30 Nov 2001 09:40:56 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Luis Amigo writes:\n\n> I still believe this difference is made by FPUs\n> config.guess only says mips-sgi-irix6.5\n\n
If your uname -m gives any clue about the processor type we might want to\n> create a patch for config.guess.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n\nuname -m returns IP25 what is SGI platform, it's about cache sizes and\ninternal net issues, I'm not sure that would work, I think it's more\nreliable to do:\ncongrio 39% hinv -t cpu\nCPU: MIPS R10000 Processor Chip Revision: 2.5\nCPU: MIPS R10000 Processor Chip Revision: 2.5\nCPU: MIPS R10000 Processor Chip Revision: 2.5\nCPU: MIPS R10000 Processor Chip Revision: 2.5\nCPU: MIPS R10000 Processor Chip Revision: 2.6\nCPU: MIPS R10000 Processor Chip Revision: 2.6\nCPU: MIPS R10000 Processor Chip Revision: 2.6\nCPU: MIPS R10000 Processor Chip Revision: 2.6\nor\ncongrio 40% hinv -t fpu\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.5\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nFPU: MIPS R10010 Floating Point Chip Revision: 2.6\nas you wish\nbest regards\n\n\n", "msg_date": "Mon, 03 Dec 2001 09:53:19 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> uname -m returns IP25 what is SGI platform, it's about cache sizes and\n> internal net issues, I'm not sure that would work, I think it's more\n> reliable to do:\n> congrio 39% hinv -t cpu\n> congrio 40% hinv -t fpu\n\nI'm not eager to add such a platform-dependent test to the resultmap\ncode, especially not when the issue is just the geometry results; we\nalready know that the long-term answer there must be to display fewer\ndigits in the test output. Also, we don't know what this program will\nreport for older MIPS boxes anyway. 
(Pete Forman hasn't responded to my\nping, so I'm inclined to assume that he's not actively tracking Postgres\ndevelopment anymore; we can't expect help from him to check it on the\nolder boxes.)\n\nI'd be willing to just commit the newer-hardware IRIX results as\ngeometry-irix for 7.2. Somebody's got to put up with a discrepancy,\nand it may as well be the older boxes.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 10:57:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> Luis Amigo <lamigo@atc.unican.es> writes:\n> > uname -m returns IP25 what is SGI platform, it's about cache sizes and\n> > internal net issues, I'm not sure that would work, I think it's more\n> > reliable to do:\n> > congrio 39% hinv -t cpu\n> > congrio 40% hinv -t fpu\n>\n> I'm not eager to add such a platform-dependent test to the resultmap\n> code, especially not when the issue is just the geometry results; we\n> already know that the long-term answer there must be to display fewer\n> digits in the test output. Also, we don't know what this program will\n> report for older MIPS boxes anyway. (Pete Forman hasn't responded to my\n> ping, so I'm inclined to assume that he's not actively tracking Postgres\n> development anymore; we can't expect help from him to check it on the\n> older boxes.)\n>\n> I'd be willing to just commit the newer-hardware IRIX results as\n> geometry-irix for 7.2. 
Somebody's got to put up with a discrepancy,\n> and it may as well be the older boxes.\n>\n> Comments?\n>\n> regards, tom lane\n\nI'm completely with you, I don't think that would be useful to maintain such\nfirmware dependencies, but I think that it might be useful to document this\nand problems with gcc, now we all remember this issue, but in two months\nall will be forgotten.\nWe had here a lot of trouble installing our first 7.1.3 postgres because it\nwasn't documented that gcc simply doesn't run. I swear that when I had time\nI will try to compile with gcc and new libgcc links to try to fix this\nproblem.\n\n", "msg_date": "Mon, 03 Dec 2001 17:06:26 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "> I almost forget:\n\nbest wishes and regards\n\n", "msg_date": "Mon, 03 Dec 2001 17:07:38 +0100", "msg_from": "Luis Amigo <lamigo@atc.unican.es>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing" }, { "msg_contents": "Luis Amigo <lamigo@atc.unican.es> writes:\n> I'm completely with you, I don't think that would be useful to maintain such\n> firmware dependencies, but I think that it might be useful to document this\n> and problems with gcc, now we all remember this issue, but in two months\n> all will be forgotten.\n> We had here a lot of trouble installing our first 7.1.3 postgres because it\n> wasn't documented that gcc simply doesn't run. I swear that when I had time\n> I will try to compile with gcc and new libgcc links to try to fix this\n> problem.\n\nWould you like to make up a FAQ_IRIX document similar to the ones we\nhave for some other platforms (see, eg, doc/FAQ_HPUX)? 
It doesn't need\nto be long or fancy, just cover the things you wish you'd known sooner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 11:08:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Call for platform testing " } ]
[ { "msg_contents": "I was thinking of re-designing my database schema to use a SERIAL\nvalue as an indentification across tables (i.e. as a foreign key).\nI've been playing with some example tables and have found the\nfollowing behavior from SERIAL:\n\n(1) I think SERIAL is defined as an int4. However, the upper bound\nseems to be 2^31 - 1 (217483647) not 2^32 - 1. I suppose this is\nbecause a generic int4 should have one bit for the sign\n(negative/positive). However, shouldn't SERIAL always be a positive\nnumber? Would it be correct to make it some kind of unsigned int4\ninstead?\n\n(2) The SERIAL number increases even if the transaction was aborted\n(e.g. if a repeated tuple were trying to be inserted into a unique\ntable, the transaction fails, but the SERIAL gets incremented).\n\tI was hoping that VACUUM VERBOSE ANALYZE would somehow reclaim the\nlost SERIAL indicies. So, for example, if I had the table:\n\ndb02=# select * from center_out order by id;\n subject | arm | target | rep | id \n---------+-----+--------+-----+------------\n F | L | 1 | 1 | 1\n F | L | 1 | 2 | 3\n F | L | 10 | 2 | 4\n F | L | 100 | 2 | 100001\n F | L | 100 | 3 | 10000002\n F | L | 500 | 3 | 2110000001\n F | L | 501 | 3 | 2147483646\n F | L | 502 | 3 | 2147483647\n(8 rows)\n\nthen a VACUUM VERBOSE ANALYZE would do the following:\n\ndb02=# select * from center_out order by id;\n subject | arm | target | rep | id \n---------+-----+--------+-----+------------\n F | L | 1 | 1 | 1\n F | L | 1 | 2 | 2\n F | L | 10 | 2 | 3\n F | L | 100 | 2 | 4\n F | L | 100 | 3 | 5\n F | L | 500 | 3 | 6\n F | L | 501 | 3 | 7\n F | L | 502 | 3 | 8\n(8 rows)\n\nI figure that I should never reach 2^31 - 1 transaction per table even\nwith many aborted ones; however, I think these would be nice changes.\n\nComments?\n\n-Tony\n", "msg_date": "28 Nov 2001 13:30:46 -0800", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": true, "msg_subject": "Questions about SERIAL type" }, { "msg_contents": "reina@nsi.edu 
(Tony Reina) writes:\n\n> I was thinking of re-designing my database schema to use a SERIAL\n> value as an indentification across tables (i.e. as a foreign key).\n> I've been playing with some example tables and have found the\n> following behavior from SERIAL:\n> \n> (1) I think SERIAL is defined as an int4. However, the upper bound\n> seems to be 2^31 - 1 (217483647) not 2^32 - 1. I suppose this is\n> because a generic int4 should have one bit for the sign\n> (negative/positive). However, shouldn't SERIAL always be a positive\n> number? Would it be correct to make it some kind of unsigned int4\n> instead?\n\nI don't think PG (or the SQL standard) has any concept of unsigned\nnumbers. Besides, you can have sequences that have negative values at\nsome points, and even decrement rather than increment. Some folks may \nrely on this behavior.\n\n> (2) The SERIAL number increases even if the transaction was aborted\n> (e.g. if a repeated tuple were trying to be inserted into a unique\n> table, the transaction fails, but the SERIAL gets incremented).\n> \tI was hoping that VACUUM VERBOSE ANALYZE would somehow reclaim the\n> lost SERIAL indicies. So, for example, if I had the table:\n\nHow would this work? Would the DB have to go through all tables\nlooking for REFERENCES constraints and update those rows referring to\na renumbered key? What if you had a referencing column without a\nREFERENCES constraint? What if you had some kind of data external to\nthe database that relied on those primary keys staying the same? Not\npractical IMHO.\n\n> I figure that I should never reach 2^31 - 1 transaction per table even\n> with many aborted ones; however, I think these would be nice changes.\n\nWhat's going to happen AFAIK is that 64-bit sequences will be\navailable. It's unlikely that overflow will be an issue with\nthose... ;)\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. 
Jackson, 1863\n", "msg_date": "28 Nov 2001 17:31:03 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI'm not 100% sure that you actually want this. The main reason I say this is\nthat in most cases I use sequence numbers is to do forign-key relationships. \n\nIf you change sequence numbers on rows in a table, unless all tables that\nuse that sequence number are also modified, then the relationship between\ntables that rely on the sequence number is lost. If for any reason the\nsequence number is used externally, (not usually a good idea, but sometimes\nit is) then that relationship is also lost.\n\nAnd for argument sake, lets assume that we know each location a sequence\nnumber is referenced, so you can make the changes everywhere. (And that these\nnumbers aren't used for other things like order-numbers that need to appear\nin a string format and printed/referenced later) That means that the database\nneeds to be off-line during this access. So the modifications to Vacuum to\nmake it less intrusive to users while its occuring is now lost. \n\nI don't think this is a good idea... (Also, does 7.2 have an 8 byte sequence\nnumber (serial8) anyways? So isn't this problem moot?)\n\n\nOn 28-Nov-2001 Tony Reina wrote:\n> I was thinking of re-designing my database schema to use a SERIAL\n> value as an indentification across tables (i.e. as a foreign key).\n> I've been playing with some example tables and have found the\n> following behavior from SERIAL:\n> \n> (1) I think SERIAL is defined as an int4. However, the upper bound\n> seems to be 2^31 - 1 (217483647) not 2^32 - 1. I suppose this is\n> because a generic int4 should have one bit for the sign\n> (negative/positive). However, shouldn't SERIAL always be a positive\n> number? 
Would it be correct to make it some kind of unsigned int4\n> instead?\n> \n> (2) The SERIAL number increases even if the transaction was aborted\n> (e.g. if a repeated tuple were trying to be inserted into a unique\n> table, the transaction fails, but the SERIAL gets incremented).\n> I was hoping that VACUUM VERBOSE ANALYZE would somehow reclaim the\n> lost SERIAL indicies. So, for example, if I had the table:\n> \n> db02=# select * from center_out order by id;\n> subject | arm | target | rep | id \n> ---------+-----+--------+-----+------------\n> F | L | 1 | 1 | 1\n> F | L | 1 | 2 | 3\n> F | L | 10 | 2 | 4\n> F | L | 100 | 2 | 100001\n> F | L | 100 | 3 | 10000002\n> F | L | 500 | 3 | 2110000001\n> F | L | 501 | 3 | 2147483646\n> F | L | 502 | 3 | 2147483647\n> (8 rows)\n> \n> then a VACUUM VERBOSE ANALYZE would do the following:\n> \n> db02=# select * from center_out order by id;\n> subject | arm | target | rep | id \n> ---------+-----+--------+-----+------------\n> F | L | 1 | 1 | 1\n> F | L | 1 | 2 | 2\n> F | L | 10 | 2 | 3\n> F | L | 100 | 2 | 4\n> F | L | 100 | 3 | 5\n> F | L | 500 | 3 | 6\n> F | L | 501 | 3 | 7\n> F | L | 502 | 3 | 8\n> (8 rows)\n> \n> I figure that I should never reach 2^31 - 1 transaction per table even\n> with many aborted ones; however, I think these would be nice changes.\n> \n> Comments?\n> \n> -Tony\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8BW1siysnOdCML0URAjFqAJ9RJk25zXl/mjhJmjC5tsf4bkj7EQCeNpph\nPcrtIXqceZLqdkDOyfAcq84=\n=MqDe\n-----END PGP SIGNATURE-----\n", 
"msg_date": "Wed, 28 Nov 2001 16:04:12 -0700 (MST)", "msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "Doug McNaught wrote:\n\n> I don't think PG (or the SQL standard) has any concept of unsigned\n> numbers. Besides, you can have sequences that have negative values at\n> some points, and even decrement rather than increment. Some folks may\n> rely on this behavior.\n\nWhen I tried setting the current value to -200 I got an error that the\nnumber was outside of the proper range.\n\ndb02=# create table test (id SERIAL);\nNOTICE: CREATE TABLE will create implicit sequence 'test_id_seq' for SERIAL\ncolumn 'test.id'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_id_key' for\ntable 'test'\nCREATE\n\ndb02=# select setval('test_id_seq', -200);\nERROR: test_id_seq.setval: value -200 is out of bounds (1,2147483647)\n\nSo I'm not sure how people would be using negative values. It looks like from\nthe documentation that the SERIAL type always increments by 1 so I'm not sure\nhow they could use decrementing values. Unless, of course, they've changed\nthe source code to do this. Perhaps I'm missing something here in the\ndocumentation (using PG 7.1.3, maybe 7.2beta has changed this?).\n\n\n> How would this work? Would the DB have to go through all tables\n> looking for REFERENCES constraints and update those rows referring to\n> a renumbered key? What if you had a referencing column without a\n> REFERENCES constraint? What if you had some kind of data external to\n> the database that relied on those primary keys staying the same? Not\n> practical IMHO.\n>\n\nYes, it would have to do this which may be time consuming and possibly\nimpractical. However, the VACUUM ANALYZE is doing an aweful lot of processing\non the tables and the indicies already.\n\nHowever, perhaps the other thing to do is to not increment the SERIAL value\non an aborted transaction. 
I'm not sure why serial has to be incremented if\nthe transaction fails. Of course, this won't take care of unused SERIAL\nnumbers when DELETEs occur.\n\nI'm not sure about other database schemas which depend on the SERIAL values\nremaining the same for external consistency. You could still use an OID in\nthat case I should think instead of SERIAL (?)\n\n>\n> > I figure that I should never reach 2^31 - 1 transaction per table even\n> > with many aborted ones; however, I think these would be nice changes.\n>\n> What's going to happen AFAIK is that 64-bit sequences will be\n> available. It's unlikely that overflow will be an issue with\n> those... ;)\n>\n\nThat will definitely make overflow unlikely. Perhaps I'm just being too\nparanoid that somehow I'll get to the point where my SERIAL value is maxed\nout but I have large gaps from DELETED/UPDATED/ABORTED transactions.\n\n-Tony\n\ndb02=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 15:06:00 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n\n> Doug McNaught wrote:\n> \n> > I don't think PG (or the SQL standard) has any concept of unsigned\n> > numbers. Besides, you can have sequences that have negative values at\n> > some points, and even decrement rather than increment. Some folks may\n> > rely on this behavior.\n> \n> When I tried setting the current value to -200 I got an error that the\n> number was outside of the proper range.\n\nYou need to specify MINVALUE, MAXVALUE and INCREMENT explicitly to\nplay tricks like that. See the docs for CREATE SEQUENCE. 
\n\nIf you need it, you should be able to create a sequence that uses the\nwhole range from -2^31 to 2^31-1 with proper arguments to CREATE\nSEQUENCE.\n\n> So I'm not sure how people would be using negative values. It looks like from\n> the documentation that the SERIAL type always increments by 1 so I'm not sure\n> how they could use decrementing values. Unless, of course, they've changed\n> the source code to do this. Perhaps I'm missing something here in the\n> documentation (using PG 7.1.3, maybe 7.2beta has changed this?).\n\nYou didn't read the right part of the docs. ;) See CREATE SEQUENCE\nin the SQL reference. \n\n> > How would this work? Would the DB have to go through all tables\n> > looking for REFERENCES constraints and update those rows referring to\n> > a renumbered key? What if you had a referencing column without a\n> > REFERENCES constraint? What if you had some kind of data external to\n> > the database that relied on those primary keys staying the same? Not\n> > practical IMHO.\n> >\n> \n> Yes, it would have to do this which may be time consuming and possibly\n> impractical. However, the VACUUM ANALYZE is doing an aweful lot of processing\n> on the tables and the indicies already.\n\nI'd be more concerned about the hairiness and maintainability of the\nresulting code, actually. ;)\n\n> However, perhaps the other thing to do is to not increment the SERIAL value\n> on an aborted transaction. I'm not sure why serial has to be incremented if\n> the transaction fails. Of course, this won't take care of unused SERIAL\n> numbers when DELETEs occur.\n\nThe reason we don't do it this way is that the sequence object would\nhave to be locked for the duration of every transaction that used it.\nYou'd get a lot of contention on that lock and a big slowdown of the\nwhole system. And as you say it wouldn't address the DELETE issue. \n\n> I'm not sure about other database schemas which depend on the SERIAL values\n> remaining the same for external consistency. 
You could still use an OID in\n> that case I should think instead of SERIAL (?)\n\nThat's worse if anything. ;) \n\n> > > I figure that I should never reach 2^31 - 1 transactions per table even\n> > > with many aborted ones; however, I think these would be nice changes.\n> >\n> > What's going to happen AFAIK is that 64-bit sequences will be\n> > available. It's unlikely that overflow will be an issue with\n> > those... ;)\n> >\n> \n> That will definitely make overflow unlikely. Perhaps I'm just being too\n> paranoid that somehow I'll get to the point where my SERIAL value is maxed\n> out but I have large gaps from DELETED/UPDATED/ABORTED transactions.\n\nSeriously, I wouldn't worry about it, unless you're incrementing\nthousands of times a second, in which case you're in trouble for a lot \nof other reasons...\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 18:06:45 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "On 28 Nov 2001, Tony Reina wrote:\n\n> I was thinking of re-designing my database schema to use a SERIAL\n> value as an identification across tables (i.e. as a foreign key).\n> I've been playing with some example tables and have found the\n> following behavior from SERIAL:\n>\n> (1) I think SERIAL is defined as an int4. However, the upper bound\n\nIIRC in 7.2, there are 8-byte sequences and a serial8 pseudotype that\nprobably uses a signed int8.\n\n> (2) The SERIAL number increases even if the transaction was aborted\n> (e.g. if a repeated tuple were trying to be inserted into a unique\n> table, the transaction fails, but the SERIAL gets incremented).\n\nYeah, the tradeoff was made to go for the concurrency advantage. 
If\nyou need to roll back the sequence value when a rollback is performed, you'd\nneed to wait until it's happened before the next insert would be able\nto get the sequence value.\n\n> \tI was hoping that VACUUM VERBOSE ANALYZE would somehow reclaim the\n> lost SERIAL indices. So, for example, if I had the table:\n\nIck. That sounds really ugly to me. That seems to be outside what\nthe system can reasonably be expected to handle. It'd be difficult\nto determine the full set of in-database dependencies (say triggers\nthat do their own sort of integrity checks, views, functions, etc\nthat may join this field to another table) and probably impossible\nto determine out of database ones (printed material, etc...).\n\n", "msg_date": "Wed, 28 Nov 2001 15:14:31 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nOn 28-Nov-2001 G. Anthony Reina wrote:\n> However, perhaps the other thing to do is to not increment the SERIAL value\n> on an aborted transaction. I'm not sure why serial has to be incremented if\n> the transaction fails. Of course, this won't take care of unused SERIAL\n> numbers when DELETEs occur.\n\nI thought it's incremented since the sequence is outside of the transaction. 
\nThat way, if multiple clients are doing inserts using the sequence, one\ndoesn't have to wait for the other transactions to end before they get a lock\non the sequence.\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE8BXVOiysnOdCML0URApJnAJ9Z43xFgJRevgoNIQEGYkwkxbAAJACbBopF\nN3slqHoAxPq7HkcDaI7FMsY=\n=r9mw\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 28 Nov 2001 16:37:50 -0700 (MST)", "msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n\n> Now that I look at the CREATE SEQUENCE documentation, it appears to\n> have a CYCLE flag which wraps the sequence around if it were to\n> reach the MAXVALUE. Does anyone know if it wraps around to the next\n> unused value? Or, if an index already exists at SERIAL value =\n> MINVALUE, then will the INSERT get an error about duplicate\n> insertions?\n\nSERIAL columns get a unique index defined, so you'd get an error. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "28 Nov 2001 18:38:54 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "Doug McNaught wrote:\n\n> \"G. Anthony Reina\" <reina@nsi.edu> writes:\n>\n>\n> You need to specify MINVALUE, MAXVALUE and INCREMENT explicitly to\n> play tricks like that. See the docs for CREATE SEQUENCE.\n>\n>\n\nAhhh. I found it now. I was looking at the documentation from an older on-line\nversion of Bruce's book and didn't see the MAXVALUE, MINVALUE stuff. 
I guess the\ndefault for a serial column is MINVALUE 1, MAXVALUE 2^31-1, INCREMENT +1.\n\n\n> The reason we don't do it this way is that the sequence object would\n> have to be locked for the duration of every transaction that used it.\n> You'd get a lot of contention on that lock and a big slowdown of the\n> whole system. And as you say it wouldn't address the DELETE issue.\n>\n\nOkay, yes I can see the lock problem now. That makes sense.\n\n>\n>\n> >\n> > That will definitely make overflow unlikely. Perhaps I'm just being too\n> > paranoid that somehow I'll get to the point where my SERIAL value is maxed\n> > out but I have large gaps from DELETED/UPDATED/ABORTED transactions.\n>\n> Seriously, I wouldn't worry about it, unless you're incrementing\n> thousands of times a second, in which case you're in trouble for a lot\n> of other reasons...\n>\n>\n\nI figured that I was just being overly cautious. 2^31 transactions is quite a lot.\nWith the move to int8 the point should be moot.\n\np.s.\n\nNow that I look at the CREATE SEQUENCE documentation, it appears to have a CYCLE\nflag which wraps the sequence around if it were to reach the MAXVALUE. Does anyone\nknow if it wraps around to the next unused value? Or, if an index already exists\nat SERIAL value = MINVALUE, then will the INSERT get an error about duplicate\ninsertions?\n\n-Tony\n\n\n", "msg_date": "Wed, 28 Nov 2001 15:42:47 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" }, { "msg_contents": "----- Original Message ----- \nFrom: Doug McNaught <doug@wireboard.com>\nSent: Wednesday, November 28, 2001 5:31 PM\n\n> > I figure that I should never reach 2^31 - 1 transactions per table even\n> > with many aborted ones; however, I think these would be nice changes.\n> \n> What's going to happen AFAIK is that 64-bit sequences will be\n> available. It's unlikely that overflow will be an issue with\n> those... 
;)\n\n\"640K ought to be enough for everyone!\"\n Gill Bates.\n\nNo offense, just an association :)\n\n", "msg_date": "Sat, 1 Dec 2001 13:38:15 -0500", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Questions about SERIAL type" } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> authentication. This is not recommended because the password can\n> be read by others using <command>ps -e</command>.\n\n[snip long discussion on the shortcoming of 'ps'...]\n\nCan't we just say something generic like:\n\n\"because on some (inferior) platforms other user's \nenvironment variables can be read.\"?\n\n(Ok, without the \"inferior\"...mmmm..Linux :)\n\n\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200111281755\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPAVrmLybkGcUlkrIEQLGiwCg9bMUAXXFSj5z8Cu6aPV+IhyWAPQAniSo\nZP5v6C4w2nJLi0+dbpuXUBBy\n=Bl6c\n-----END PGP SIGNATURE-----\n\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 22:57:47 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opens" } ]
[ { "msg_contents": "Just tried on Tru64 5.1 using the Compaq C compiler. Failed the float8,\noidjoin and random tests. \n\n*** ./expected/float8-fp-exception.out Thu Mar 30 02:46:00 2000\n--- ./results/float8.out Wed Nov 28 19:29:33 2001\n***************\n*** 237,243 ****\n \n -- test for over- and underflow \n INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n! ERROR: Input '10e400' is out of range for float8\n INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n ERROR: Input '-10e400' is out of range for float8\n INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n--- 237,243 ----\n \n -- test for over- and underflow \n INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n! ERROR: Bad float8 input format '10e400'\n INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n ERROR: Input '-10e400' is out of range for float8\n INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n\nOIDJOIN had this...\n! psql: server closed the connection unexpectedly\n! This probably means the server terminated abnormally\n! before or while processing the request.\n\n--- ./results/random.out Wed Nov 28 19:33:03 2001\n***************\n*** 25,31 ****\n GROUP BY random HAVING count(random) > 1;\n random | count \n --------+-------\n! (0 rows)\n \n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n--- 25,32 ----\n GROUP BY random HAVING count(random) > 1;\n random | count \n --------+-------\n! 119 | 2\n! (1 row)\n \n SELECT random FROM RANDOM_TBL\n WHERE random NOT BETWEEN 80 AND 120;\n\n======================================================================\n\n\n\n\nI can post the whole regression.out file if someone needs it.\n\nJim\n\n\n\n\n> Just donwloaded and built 7.2beta3 on Compaq Tru64 (Digital Unix) 4.0\non\n> Alpha architecture and now all regression tests are passed.\n> \n> test=# SELECT version();\n> version\n> ----------------------------------------------------------------\n> PostgreSQL 7.2b3 on alphaev67-dec-osf4.0g, compiled by cc -std\n> \n> -- \n> Alessio F. 
Bragadini\t\talessio@albourne.com\n> APL Financial Services\t\thttp://village.albourne.com\n> Nicosia, Cyprus\t\t \tphone: +357-22-755750\n> \n> \"It is more complicated than you think\"\n> \t\t-- The Eighth Networking Truth from RFC 1925\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n\n", "msg_date": "Wed, 28 Nov 2001 20:35:04 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha" }, { "msg_contents": "> Just tried on Tru64 5.1 using the Compaq C compiler. Failed the float8,\n> oidjoin and random tests.\n\nThe float8 is certainly OK.\n\nThe random test fails, uh, randomly, should succeed sometimes on your\nplatform, so is probably OK if you try again.\n\nBut the oidjoin test is certainly broken. What optimization are you\ncompiling with? Does it work better if you turn optimization off??\n\n - Thomas\n", "msg_date": "Thu, 29 Nov 2001 03:03:03 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha" }, { "msg_contents": "NetBSD used to fail on these tests too, until they implemented soft-float.\n\nOn Wed, 28 Nov 2001, Jim Buttafuoco wrote:\n\n> Date: Wed, 28 Nov 2001 20:35:04 -0500\n> From: Jim Buttafuoco <jim@buttafuoco.net>\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] 7.2beta3 on Digital Alpha\n> \n> Just tried on Tru64 5.1 using the Compaq C compiler. Failed the float8,\n> oidjoin and random tests. \n> \n> *** ./expected/float8-fp-exception.out Thu Mar 30 02:46:00 2000\n> --- ./results/float8.out Wed Nov 28 19:29:33 2001\n> ***************\n> *** 237,243 ****\n> \n> -- test for over- and underflow \n> INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n> ! 
ERROR: Input '10e400' is out of range for float8\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n> ERROR: Input '-10e400' is out of range for float8\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n> --- 237,243 ----\n> \n> -- test for over- and underflow \n> INSERT INTO FLOAT8_TBL(f1) VALUES ('10e400');\n> ! ERROR: Bad float8 input format '10e400'\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');\n> ERROR: Input '-10e400' is out of range for float8\n> INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');\n> \n> OIDJOIN had this...\n> ! psql: server closed the connection unexpectedly\n> ! This probably means the server terminated abnormally\n> ! before or while processing the request.\n> \n> --- ./results/random.out Wed Nov 28 19:33:03 2001\n> ***************\n> *** 25,31 ****\n> GROUP BY random HAVING count(random) > 1;\n> random | count \n> --------+-------\n> ! (0 rows)\n> \n> SELECT random FROM RANDOM_TBL\n> WHERE random NOT BETWEEN 80 AND 120;\n> --- 25,32 ----\n> GROUP BY random HAVING count(random) > 1;\n> random | count \n> --------+-------\n> ! 119 | 2\n> ! (1 row)\n> \n> SELECT random FROM RANDOM_TBL\n> WHERE random NOT BETWEEN 80 AND 120;\n> \n> ======================================================================\n> \n> \n> \n> \n> I can post the whole regression.out file if someone needs it.\n> \n> Jim\n> \n> \n> \n> \n> > Just donwloaded and built 7.2beta3 on Compaq Tru64 (Digital Unix) 4.0\n> on\n> > Alpha architecture and now all regression tests are passed.\n> > \n> > test=# SELECT version();\n> > version\n> > ----------------------------------------------------------------\n> > PostgreSQL 7.2b3 on alphaev67-dec-osf4.0g, compiled by cc -std\n> > \n> > -- \n> > Alessio F. 
Bragadini\t\talessio@albourne.com\n> > APL Financial Services\t\thttp://village.albourne.com\n> > Nicosia, Cyprus\t\t \tphone: +357-22-755750\n> > \n> > \"It is more complicated than you think\"\n> > \t\t-- The Eighth Networking Truth from RFC 1925\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Wed, 28 Nov 2001 21:04:23 -0600 (CST)", "msg_from": "\"Thomas T. Thai\" <tom@minnesota.com>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha" }, { "msg_contents": "\"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> OIDJOIN had this...\n> ! psql: server closed the connection unexpectedly\n> ! This probably means the server terminated abnormally\n> ! before or while processing the request.\n\nThis is bad. There should be a core dump file --- can you provide\na debugger backtrace from it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 00:50:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha " } ]
[ { "msg_contents": "Thomas,\n\nsee my reply to the original email. Tru64 5.1 fails float8, oidjoin\nand random...\n\nJim\n\n\n> > Just downloaded and built 7.2beta3 on Compaq Tru64 (Digital Unix)\n4.0 on\n> > Alpha architecture and now all regression tests are passed.\n> \n> Thanks! Has anyone tested on more recent versions of Tru64 (I'm sure\n> they will work) and/or with gcc on this platform? We had reports\n> covering 4.0, 5.0, and two compilers for the previous release...\n> \n> - Thomas\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n", "msg_date": "Wed, 28 Nov 2001 20:37:18 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha" } ]
[ { "msg_contents": "I have this problem in 7.1.3 - I can't confirm at the moment if it exists in\n7.2.\n\nI have already granted the 'au-dietclub' user delete and insert permissions\non the users_flags table at this point:\n\naustralia=> delete from users_flags;\nDELETE 0\naustralia=> delete from users_flags where user_id=1;\nERROR: users_flags: Permission denied.\naustralia=> \\connect - chriskl\nYou are now connected as new user chriskl.\naustralia=# grant select on users_flags to \"au-dietclub\";\nCHANGE\naustralia=# \\connect - au-dietclub\nYou are now connected as new user au-dietclub.\naustralia=> delete from users_flags where user_id=1;\nDELETE 0\n\nWhy do I get a permission denied when I qualify the DELETE statement???\n\nChris\n\n", "msg_date": "Thu, 29 Nov 2001 10:59:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Unusual permissions behaviour" }, { "msg_contents": "> I have this problem in 7.1.3 - I can't confirm at the moment if\n> it exists in 7.2.\n>\n> I have already granted the 'au-dietclub' user delete and insert\n> permissions on the users_flags table at this point:\n>\n> australia=> delete from users_flags;\n> DELETE 0\n> australia=> delete from users_flags where user_id=1;\n> ERROR: users_flags: Permission denied.\n> australia=> \\connect - chriskl\n> You are now connected as new user chriskl.\n> australia=# grant select on users_flags to \"au-dietclub\";\n> CHANGE\n> australia=# \\connect - au-dietclub\n> You are now connected as new user au-dietclub.\n> australia=> delete from users_flags where user_id=1;\n> DELETE 0\n>\n> Why do I get a permission denied when I qualify the DELETE statement???\n>\n> Chris\n>\n\n\nThe schema:\n\nCREATE TABLE \"users_flags\" (\n \"user_id\" integer NOT NULL REFERENCES users_users(user_id) ON DELETE\nCASCADE,\n \"flag_id\" integer NOT NULL REFERENCES medidiets_flags(flag_id) ON DELETE\nCASCADE,\n Primary Key (\"user_id\", 
\"flag_id\")\n);\nCREATE INDEX \"users_flags_flag_id_idx\" on \"users_flags\" using btree (\n\"flag_id\" \"\nint4_ops\" );\n\n", "msg_date": "Thu, 29 Nov 2001 11:03:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Unusual permissions behaviour" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Why do I get a permission denied when I qualify the DELETE statement???\n\nIIRC, you need SELECT permission to reference the values of any fields\nof the table. If you don't have SELECT permission, the table should\nbe write-only to you; you shouldn't be able to learn things about its\ncontents by doing stuff like\n\n\tbegin;\n\tdelete from foo where col = 1;\n\t-- observe # rows deleted\n\trollback;\n\t-- now I know whether there is a row with col = 1\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2001 23:57:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unusual permissions behaviour " } ]
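Tom's explanation in the thread above boils down to: a qualified DELETE reads column values in its WHERE clause, so the role needs SELECT in addition to DELETE. A minimal grant covering the case shown (table and role names as in the thread):

```sql
-- DELETE alone covers only unqualified deletes; any WHERE clause
-- referencing the table's columns additionally requires SELECT.
GRANT SELECT, INSERT, DELETE ON users_flags TO "au-dietclub";
```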
[ { "msg_contents": "OK, the list is getting smaller fast (thanks for all of the responses!).\nHere is what I have left currently (note the addition of NetBSD/sparc,\nwhich I had inadvertently omitted from the first list):\n\nBeOS Cyril Velter\nLinux/arm Mark Knox\nLinux/s390 Neale Ferguson\nNetBSD/arm32 Patrick Welche\nNetBSD/m68k Bill Studenmund (will test)\nNetBSD/sparc Matthew Green (left off the first list)\nNetBSD/VAX Tom I. Helbekkmo\nQNX (did I get a definitive report for 4.x already?)\nSunOS Tatsuo Ishii (old release; still relevant?)\nWindows/Cygwin Daniel Horak (OK in serial test, trouble with parallel\ntest?)\nWindows/native Magnus Hagander (clients only)\n\nI'm curious about the Windows/Cygwin trouble, and if others see it too,\nsince the -cygwin mailing list seems to report happiness with 7.1.3.\n\nTatsuo, do you think that we should still track SunOS explicitly?\n\nAre there any other platforms we are running on?\n\n - Thomas\n\n\nThe following platforms have been reported as successfully running\nPostgreSQL 7.2:\n\nAIX Andreas Zeugswetter (Tatsuo working on 5L?)\nBSD/OS Bruce\nFreeBSD Chris Kings-Lynne\nHPUX Tom (anyone tested 11.0 or higher?)\nIRIX Luis Amigo\nLinux/Alpha Tom\nLinux/MIPS Hisao Shibuya\nLinux/PPC Tom\nLinux/sparc Doug McNaught\nLinux/x86 Thomas (and many others ;)\nMacOS-X Gavin Sherry\nNetBSD/Alpha Thomas Thai\nNetBSD/PPC Bill Studenmund\nNetBSD/x86 Bill Studenmund\nOpenBSD/sparc Brandon Palmer\nOpenBSD/x86 Brandon Palmer\nSCO OpenUnix Larry Rosenman\nSolaris/sparc Andrew Sullivan\nSolaris/x86 Martin Renters\nTru64 Alessio Bragadini (trouble with 5.1?)\n", "msg_date": "Thu, 29 Nov 2001 03:54:15 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Second call for platform testing" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> The following platforms have been reported as successfully running\n> PostgreSQL 7.2:\n> \n> AIX Andreas Zeugswetter (Tatsuo working on 5L?)\n> BSD/OS 
Bruce\n> FreeBSD Chris Kings-Lynne\n> HPUX Tom (anyone tested 11.0 or higher?)\n\n\nAre you still looking for HPUX 11.0+ ? I can arrange for access to one \nif we still need it (gcc though, I don't have access to HP's compiler).\n\n-- Joe\n\n", "msg_date": "Wed, 28 Nov 2001 21:02:45 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "\n I just tested the BeOS port (and updated the regression test database).\n\n Two little tweaks are needed to make it compile and install :\n\n * int8/uint8 and int64/uint64 are not detected properly by configure\n(they are declared in SupportDefs.h on beos).\n * data dir is created with group and world access, preventing\npostmaster from starting (a simple chmod og-rwx will do)\n\n The regression test pass without any error.\n\n cyril\n\n> OK, the list is getting smaller fast (thanks for all of the responses!).\n> Here is what I have left currently (note the addition of NetBSD/sparc,\n> which I had inadvertently omitted from the first list):\n>\n> BeOS Cyril Velter\n> Linux/arm Mark Knox\n> Linux/s390 Neale Ferguson\n> NetBSD/arm32 Patrick Welche\n> NetBSD/m68k Bill Studenmund (will test)\n> NetBSD/sparc Matthew Green (left off the first list)\n> NetBSD/VAX Tom I. 
Helbekkmo\n> QNX (did I get a definitive report for 4.x already?)\n> SunOS Tatsuo Ishii (old release; still relevant?)\n> Windows/Cygwin Daniel Horak (OK in serial test, trouble with parallel\n> test?)\n> Windows/native Magnus Hagander (clients only)\n>\n> I'm curious about the Windows/Cygwin trouble, and if others see it too,\n> since the -cygwin mailing list seems to report happiness with 7.1.3.\n>\n> Tatsuo, do you think that we should still track SunOS explicitly?\n>\n> Are there any other platforms we are running on?\n>\n> - Thomas\n>\n>\n> The following platforms have been reported as successfully running\n> PostgreSQL 7.2:\n>\n> AIX Andreas Zeugswetter (Tatsuo working on 5L?)\n> BSD/OS Bruce\n> FreeBSD Chris Kings-Lynne\n> HPUX Tom (anyone tested 11.0 or higher?)\n> IRIX Luis Amigo\n> Linux/Alpha Tom\n> Linux/MIPS Hisao Shibuya\n> Linux/PPC Tom\n> Linux/sparc Doug McNaught\n> Linux/x86 Thomas (and many others ;)\n> MacOS-X Gavin Sherry\n> NetBSD/Alpha Thomas Thai\n> NetBSD/PPC Bill Studenmund\n> NetBSD/x86 Bill Studenmund\n> OpenBSD/sparc Brandon Palmer\n> OpenBSD/x86 Brandon Palmer\n> SCO OpenUnix Larry Rosenman\n> Solaris/sparc Andrew Sullivan\n> Solaris/x86 Martin Renters\n> Tru64 Alessio Bragadini (trouble with 5.1?)\n\n", "msg_date": "Thu, 29 Nov 2001 11:25:52 +0100", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> Are you still looking for HPUX 11.0+ ? I can arrange for access to one\n> if we still need it (gcc though, I don't have access to HP's compiler).\n\nYes, that would be great. 
10.20 is pretty old afaik...\n\n - Thomas\n", "msg_date": "Thu, 29 Nov 2001 12:52:41 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> I just tested the BeOS port (and updated the regression test database).\n> Two little tweaks are needed to make it compile and install :\n> * int8/uint8 and int64/uint64 are not detected properly by configure\n> (they are declared in SupportDefs.h on beos).\n> * data dir is created with group and world access, preventing\n> postmaster from starting (a simple chmod og-rwx will do)\n> The regression test pass without any error.\n\nGreat! Can the int8 detection be helped with a patch to get the right\ninclude file?\n\n - Thomas\n", "msg_date": "Thu, 29 Nov 2001 13:07:16 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "SCO OpenUNIX should be Caldera OpenUNIX....\n\nThanks,\nLER\n\n* Thomas Lockhart <lockhart@fourpalms.org> [011128 22:07]:\n> OK, the list is getting smaller fast (thanks for all of the responses!).\n> Here is what I have left currently (note the addition of NetBSD/sparc,\n> which I had inadvertently omitted from the first list):\n> \n> BeOS Cyril Velter\n> Linux/arm Mark Knox\n> Linux/s390 Neale Ferguson\n> NetBSD/arm32 Patrick Welche\n> NetBSD/m68k Bill Studenmund (will test)\n> NetBSD/sparc Matthew Green (left off the first list)\n> NetBSD/VAX Tom I. 
Helbekkmo\n> QNX (did I get a definitive report for 4.x already?)\n> SunOS Tatsuo Ishii (old release; still relevant?)\n> Windows/Cygwin Daniel Horak (OK in serial test, trouble with parallel\n> test?)\n> Windows/native Magnus Hagander (clients only)\n> \n> I'm curious about the Windows/Cygwin trouble, and if others see it too,\n> since the -cygwin mailing list seems to report happiness with 7.1.3.\n> \n> Tatsuo, do you think that we should still track SunOS explicitly?\n> \n> Are there any other platforms we are running on?\n> \n> - Thomas\n> \n> \n> The following platforms have been reported as successfully running\n> PostgreSQL 7.2:\n> \n> AIX Andreas Zeugswetter (Tatsuo working on 5L?)\n> BSD/OS Bruce\n> FreeBSD Chris Kings-Lynne\n> HPUX Tom (anyone tested 11.0 or higher?)\n> IRIX Luis Amigo\n> Linux/Alpha Tom\n> Linux/MIPS Hisao Shibuya\n> Linux/PPC Tom\n> Linux/sparc Doug McNaught\n> Linux/x86 Thomas (and many others ;)\n> MacOS-X Gavin Sherry\n> NetBSD/Alpha Thomas Thai\n> NetBSD/PPC Bill Studenmund\n> NetBSD/x86 Bill Studenmund\n> OpenBSD/sparc Brandon Palmer\n> OpenBSD/x86 Brandon Palmer\n> SCO OpenUnix Larry Rosenman\n> Solaris/sparc Andrew Sullivan\n> Solaris/x86 Martin Renters\n> Tru64 Alessio Bragadini (trouble with 5.1?)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 29 Nov 2001 07:30:53 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> SCO OpenUNIX should be Caldera OpenUNIX....\n\nGot it. 
Thanks.\n\n - Thomas\n", "msg_date": "Thu, 29 Nov 2001 14:02:46 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> > I just tested the BeOS port (and updated the regression test\ndatabase).\n> > Two little tweaks are needed to make it compile and install :\n> > * int8/uint8 and int64/uint64 are not detected properly by\nconfigure\n> > (they are declared in SupportDefs.h on beos).\n> > * data dir is created with group and world access, preventing\n> > postmaster from starting (a simple chmod og-rwx will do)\n> > The regression test pass without any error.\n>\n> Great! Can the int8 detection be helped with a patch to get the right\n> include file?\n\n I think it can, but I'm not familiar at all with configure script. If\nsomeone can point me where the change should be made, I'll make and test a\npatch.\n\n What modification should be made to configure.in to make it include\nSupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n\n cyril\n\n\n\n\n\n", "msg_date": "Thu, 29 Nov 2001 16:55:51 +0100", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> What modification should be made to configure.in to make it include\n> SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n\nThis looks like a bit of a pain. We're currently using AC_CHECK_SIZEOF\nto make those probes, apparently because it's the closest standard macro\nto what we want. But it's not close enough. It doesn't include\nanything except <stdio.h>. I don't think we can fix this except by\nmaking our own macro. 
Peter, any opinion about how to do it?\n\n(If we do make our own macro, we could change the output to be something\nmore reasonable, like #defining HAVE_UINT8 rather than claiming\n\"sizeof uint8 = 0\", which is sure to confuse people.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 13:29:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Cyril VELTER writes:\n\n>\n> I just tested the BeOS port (and updated the regression test database).\n>\n> Two little tweaks are needed to make it compile and install :\n>\n> * int8/uint8 and int64/uint64 are not detected properly by configure\n> (they are declared in SupportDefs.h on beos).\n\nWell, that was the end of that bright idea. There's no way to tell\nAC_CHECK_SIZEOF what header files to consider. We need to put back the\n#ifdef __BEOS__, it seems.\n\n> * data dir is created with group and world access, preventing\n> postmaster from starting (a simple chmod og-rwx will do)\n\nWhat does the shell do with the umask call in initdb?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 29 Nov 2001 20:03:41 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> \"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> > What modification should be made to configure.in to make it include\n> > SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n> \n> This looks like a bit of a pain. We're currently using AC_CHECK_SIZEOF\n> to make those probes, apparently because it's the closest standard macro\n> to what we want. But it's not close enough. It doesn't include\n> anything except <stdio.h>. I don't think we can fix this except by\n> making our own macro. Peter, any opinion about how to do it?\n\nI was wondering where that bizarre fopen()/fprintf() in the test code\ncame from. 
It looked strange. Now I know why; it came from autoconf.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Nov 2001 14:12:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> > * data dir is created with group and world access, preventing\n> > postmaster from starting (a simple chmod og-rwx will do)\n>\n> What does the shell do with the umask call in initdb?\n\n I will look at it (I'm not on a beos box right now), but as BeOS is\nnot a true multiuser system, it is probably at fault here.\n\n cyril\n\n\n", "msg_date": "Thu, 29 Nov 2001 20:51:32 +0100", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Thomas Lockhart wrote:\n\n>>Are you still looking for HPUX 11.0+ ? I can arrange for access to one\n>>if we still need it (gcc though, I don't have access to HP's compiler).\n>>\n> \n> Yes, that would be great. 10.20 is pretty old afaik...\n> \n> - Thomas\n> \n> \n\n\nI ran into a problem on HPUX 11 right off with:\n\n===============================\nconfigure --enable-debug\n.\n.\n.\nchecking for struct sockaddr_un... yes\nchecking for int timezone... yes\nchecking types of arguments for accept()... 
configure: error: could not \ndetermine argument types\n\n\n===============================\n\nI won't pretend to be very knowledgable about HPUX or configure, but it \nlooks like the following in configure is where it dies:\n\n===============================\n\nelse\n for ac_cv_func_accept_arg1 in 'int' 'unsigned int'; do\n for ac_cv_func_accept_arg2 in 'struct sockaddr *' 'const struct \nsockaddr *' 'void *'; do\n for ac_cv_func_accept_arg3 in 'int' 'size_t' 'socklen_t' \n'unsigned int' 'void'; do\n cat > conftest.$ac_ext <<EOF\n\n=======================================\n\nand here's what the HPUX man page says:\n\n=======================================\naccept(2)\n\n NAME\n accept - accept a connection on a socket\n\n SYNOPSIS\n #include <sys/socket.h>\n\n AF_CCITT only\n #include <x25/x25addrstr.h>\n\n int accept(int s, void *addr, int *addrlen);\n\n _XOPEN_SOURCE_EXTENDED only (UNIX 98)\n int accept(int s, struct sockaddr *addr, socklen_t *addrlen);\n\n=======================================\n\nso it looks like configure expects the third argument to be (int), when \non HPUX 11 the third argument is (int *).\n\nAny ideas what I'm doing wrong?\n\n-- Joe\n\n", "msg_date": "Thu, 29 Nov 2001 12:07:28 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> I ran into a problem on HPUX 11 right off with:\n\nThat's odd, considering that someone else reported 7.1 worked fine on\nHPUX 11, and we haven't changed this configure test since then.\n\n> so it looks like configure expects the third argument to be (int), when \n> on HPUX 11 the third argument is (int *).\n\nNo, the macro knows that the third argument is pointer-to-something;\nit's trying to figure out pointer-to-what.\n\n> Any ideas what I'm doing wrong?\n\nWhat had configure selected so far for compiler, cflags, etc? Maybe\nit's blowing the game at that stage. 
Or maybe the test is failing to\ninclude all the needed include files to define these datatypes.\n\nIn general, the config.log output is a good thing to post when puzzled\nby configure problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 16:16:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> What had configure selected so far for compiler, cflags, etc? Maybe\n> it's blowing the game at that stage. Or maybe the test is failing to\n> include all the needed include files to define these datatypes.\n> \n> In general, the config.log output is a good thing to post when puzzled\n> by configure problems.\n\nOK. Figured out a couple of pieces to this. First, although I was told \nwe did not have HP's compiler, it is actually there. Second, since gcc \nwas freshly installed, it wasn't in my PATH. So the error I reported \nlast time is actually from the bundled compiler.\n\nSo I added /opt/gcc/bin to my PATH, and got a different error once gcc \nwas detected.\n\nAttached are both config.log files. Sorry for being so clueless here, \nbut I haven't done much on HPUX before.\n\nJoe", "msg_date": "Thu, 29 Nov 2001 15:06:21 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> ... So the error I reported \n> last time is actually from the bundled compiler.\n\nWhich, unless HP has reconsidered since HPUX 10, is a deliberately\ncrippled K&R (no ANSI features) compiler. 
It's no wonder the configure\nrun failed; I guess failing right at this point is happenstance, though\nit could not have gotten further since this test depends on ANSI\nprototypes.\n\n> configure:1361: checking whether the C compiler (cc -Ae +O2 ) works\n> configure:1377: cc -Ae -o conftest +O2 conftest.c 1>&5\n> (Bundled) cc: warning 480: The -A option is available only with the C/ANSI C product; ignored.\n> (Bundled) cc: warning 480: The +O2 option is available only with the C/ANSI C product; ignored.\n\nIt's a shame configure hides all these useful messages down inside\nconfig.log.\n\nI wonder whether we shouldn't make the bare \"does it work\" test include\nverification that ANSI-style prototypes are supported.\n\n> So I added /opt/gcc/bin to my PATH, and got a different error once gcc \n> was detected.\n\n> configure:1243: checking whether the C compiler (gcc ) works\n> configure:1259: gcc -o conftest conftest.c 1>&5\n> as: warning 2: Unknown option \"--traditional-format\" ignored.\n> as: \"/var/tmp/cceTg0Kd.s\", line 15: error 1052: Directive name not recognized - NSUBSPA\n\nThis looks like your gcc installation is broken: specifically, I'll bet\nit's trying to use HP's assembler rather than gas. You really want the\nGNU toolchain (binutils package) underneath gcc. 
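One quick way to see which assembler a gcc installation will hand its output to (a diagnostic sketch, not something from the thread): gcc's real `-print-prog-name` option reports the program it would invoke for a given tool name.

```shell
# If this prints a path inside the GNU binutils installation rather
# than HP's /usr/ccs/bin/as, gcc is using gas.  Guarded so it is a
# no-op on machines without gcc.
if command -v gcc >/dev/null 2>&1; then
    gcc -print-prog-name=as
fi
```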
The gcc-to-HP-as\nconfiguration has never worked for me (though I have to admit I haven't\ntried in a long time).\n\nIn the meantime, try it with HP's real compiler (set CC=cc for configure).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 18:26:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> OK, the list is getting smaller fast (thanks for all of the responses!).\n\nI got a regression test result from Hiroshi Saito on\n UNIX_System_V ews4800 4.2MP 4.2 R4000 r4000.\n\n=======================\n 10 of 79 tests failed.\n========================\n\n int8 ... FAILED\n abstime ... FAILED\n tinterval ... FAILED\ntest geometry ... FAILED\ntest horology ... FAILED\n create_index ... FAILED\ntest sanity_check ... FAILED\ntest select ... FAILED\n subselect ... FAILED\n union ... FAILED\n\nSeems INT64_IS_BUSTED, old PST ... etc and\n\n*** ./expected/create_index.out\tTue Aug 28 08:23:34 JST 2001\n--- ./results/create_index.out\tFri Nov 30 00:28:22 JST 2001\n***************\n*** 35,44 ****\n--- 35,47 ----\n --\n CREATE INDEX onek2_u1_prtl ON onek2 USING btree(unique1 int4_ops)\n \twhere unique1 < 20 or unique1 > 980;\n+ ERROR: AllocSetFree: cannot find block containing chunk 4f64f0\n CREATE INDEX onek2_u2_prtl ON onek2 USING btree(unique2 int4_ops)\n \twhere stringu1 < 'B';\n+ ERROR: AllocSetFree: cannot find block containing chunk 4f6390\n CREATE INDEX onek2_stu1_prtl ON onek2 USING btree(stringu1 name_ops)\n \twhere onek2.stringu1 >= 'J' and onek2.stringu1 < 'K';\n+ ERROR: AllocSetFree: cannot find block containing chunk 4f6740\n --\n -- RTREE\n -- \n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 30 Nov 2001 10:03:56 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Has someone thought about getting a sourceforge account for 
postgres or\nsomething? Even if we don't use their CVS and facilities, I wonder if we\ncan use their compile farm for platform testing?\n\nActually, maybe the phpPgAdmin admins could test it for us on the compile\nfarm, as they should have access...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Cyril VELTER\n> Sent: Thursday, 29 November 2001 6:26 PM\n> To: PostgreSQL Hackers Mailing List\n> Cc: Bruno G. Albuquerque\n> Subject: Re: [HACKERS] Second call for platform testing\n>\n>\n>\n> I just tested the BeOS port (and updated the regression test\n> database).\n>\n> Two little tweaks are needed to make it compile and install :\n>\n> * int8/uint8 and int64/uint64 are not detected properly\n> by configure\n> (they are declared in SupportDefs.h on beos).\n> * data dir is created with group and world access, preventing\n> postmaster from starting (a simple chmod og-rwx will do)\n>\n> The regression test pass without any error.\n>\n> cyril\n>\n> > OK, the list is getting smaller fast (thanks for all of the responses!).\n> > Here is what I have left currently (note the addition of NetBSD/sparc,\n> > which I had inadvertently omitted from the first list):\n> >\n> > BeOS Cyril Velter\n> > Linux/arm Mark Knox\n> > Linux/s390 Neale Ferguson\n> > NetBSD/arm32 Patrick Welche\n> > NetBSD/m68k Bill Studenmund (will test)\n> > NetBSD/sparc Matthew Green (left off the first list)\n> > NetBSD/VAX Tom I. 
Helbekkmo\n> > QNX (did I get a definitive report for 4.x already?)\n> > SunOS Tatsuo Ishii (old release; still relevant?)\n> > Windows/Cygwin Daniel Horak (OK in serial test, trouble with parallel\n> > test?)\n> > Windows/native Magnus Hagander (clients only)\n> >\n> > I'm curious about the Windows/Cygwin trouble, and if others see it too,\n> > since the -cygwin mailing list seems to report happiness with 7.1.3.\n> >\n> > Tatsuo, do you think that we should still track SunOS explicitly?\n> >\n> > Are there any other platforms we are running on?\n> >\n> > - Thomas\n> >\n> >\n> > The following platforms have been reported as successfully running\n> > PostgreSQL 7.2:\n> >\n> > AIX Andreas Zeugswetter (Tatsuo working on 5L?)\n> > BSD/OS Bruce\n> > FreeBSD Chris Kings-Lynne\n> > HPUX Tom (anyone tested 11.0 or higher?)\n> > IRIX Luis Amigo\n> > Linux/Alpha Tom\n> > Linux/MIPS Hisao Shibuya\n> > Linux/PPC Tom\n> > Linux/sparc Doug McNaught\n> > Linux/x86 Thomas (and many others ;)\n> > MacOS-X Gavin Sherry\n> > NetBSD/Alpha Thomas Thai\n> > NetBSD/PPC Bill Studenmund\n> > NetBSD/x86 Bill Studenmund\n> > OpenBSD/sparc Brandon Palmer\n> > OpenBSD/x86 Brandon Palmer\n> > SCO OpenUnix Larry Rosenman\n> > Solaris/sparc Andrew Sullivan\n> > Solaris/x86 Martin Renters\n> > Tru64 Alessio Bragadini (trouble with 5.1?)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Fri, 30 Nov 2001 09:30:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I got a regression test result from Hiroshi Saito on\n> UNIX_System_V ews4800 4.2MP 4.2 R4000 r4000.\n\n> Seems INT64_IS_BUSTED, old PST ... 
etc and\n\n> *** ./expected/create_index.out\tTue Aug 28 08:23:34 JST 2001\n> --- ./results/create_index.out\tFri Nov 30 00:28:22 JST 2001\n> ***************\n> *** 35,44 ****\n> --- 35,47 ----\n> --\n> CREATE INDEX onek2_u1_prtl ON onek2 USING btree(unique1 int4_ops)\n> \twhere unique1 < 20 or unique1 > 980;\n> + ERROR: AllocSetFree: cannot find block containing chunk 4f64f0\n> CREATE INDEX onek2_u2_prtl ON onek2 USING btree(unique2 int4_ops)\n> \twhere stringu1 < 'B';\n> + ERROR: AllocSetFree: cannot find block containing chunk 4f6390\n> CREATE INDEX onek2_stu1_prtl ON onek2 USING btree(stringu1 name_ops)\n> \twhere onek2.stringu1 >= 'J' and onek2.stringu1 < 'K';\n> + ERROR: AllocSetFree: cannot find block containing chunk 4f6740\n\nInteresting. Something nonportable in the partial index support,\nperhaps?\n\nWould you ask him to compile with debug support, set a breakpoint at\nelog(), and get a backtrace from the point of the error? It'll probably\nwork to execute any one of those commands by hand in the regression\ndatabase, so he doesn't need to try to attach to a regression testing\nbackend on the fly. Just start psql, attach gdb to its backend, set the\nbreakpoint, and then run the create index command.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 20:46:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Has someone thought about getting a sourceforge account for postgres or\n> something? Even if we don't use their CVS and facilities, I wonder if we\n> can use their compile farm for platform testing?\n\nHardly need a project account. I have a personal account at\nsourceforge, and have used it (fairly recently, even) to run tests on\ntheir compile farm. They don't presently have all that wide a range of\nstuff available however. 
Let's see ....\n\n lqqqqqqqqChoose compile farm server...qqqqqqqqqk\n x A. [x86] Linux 2.2 (Debian 2.2) x\n x C. [x86] FreeBSD (4.3-RELEASE) x\n x x\n x G. [Alpha] Linux 2.2 (Debian 2.2) x\n x x\n x I. [PPC - G4] MacOS X 10.1 #1 **NEW** x\n x J. [PPC - G4] MacOS X 10.1 #2 **NEW** x\n x K. [PPC - RS/6000] Linux 2.2 (Debian 2.2) x\n x x\n x L. [Sparc - Ultra60] Linux 2.2 (Debian 2.2) x\n x M. [Sparc - R220] Sun Solaris (8) #1 x\n x N. [Sparc - R220] Sun Solaris (8) #2 x\n x O. [Sparc - R220] Sun Solaris (8) #3 x\n x P. [Sparc - R220] Sun Solaris (8) #4 x\n x x\n x Exit x\n mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj\n\nWith the possible exception of the Linux/Alpha box, there's nothing\nthere that we don't have covered already, is there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 20:54:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Fair enough.\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 30 November 2001 9:55 AM\n> To: Christopher Kings-Lynne\n> Cc: PostgreSQL Hackers Mailing List\n> Subject: Re: [HACKERS] Second call for platform testing \n> \n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Has someone thought about getting a sourceforge account for postgres or\n> > something? Even if we don't use their CVS and facilities, I \n> wonder if we\n> > can use their compile farm for platform testing?\n> \n> Hardly need a project account. I have a personal account at\n> sourceforge, and have used it (fairly recently, even) to run tests on\n> their compile farm. They don't presently have all that wide a range of\n> stuff available however. Let's see ....\n> \n> lqqqqqqqqChoose compile farm server...qqqqqqqqqk\n> x A. [x86] Linux 2.2 (Debian 2.2) x\n> x C. [x86] FreeBSD (4.3-RELEASE) x\n> x x\n> x G. [Alpha] Linux 2.2 (Debian 2.2) x\n> x x\n> x I. 
[PPC - G4] MacOS X 10.1 #1 **NEW** x\n> x J. [PPC - G4] MacOS X 10.1 #2 **NEW** x\n> x K. [PPC - RS/6000] Linux 2.2 (Debian 2.2) x\n> x x\n> x L. [Sparc - Ultra60] Linux 2.2 (Debian 2.2) x\n> x M. [Sparc - R220] Sun Solaris (8) #1 x\n> x N. [Sparc - R220] Sun Solaris (8) #2 x\n> x O. [Sparc - R220] Sun Solaris (8) #3 x\n> x P. [Sparc - R220] Sun Solaris (8) #4 x\n> x x\n> x Exit x\n> mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj\n> \n> With the possible exception of the Linux/Alpha box, there's nothing\n> there that we don't have covered already, is there?\n> \n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Fri, 30 Nov 2001 10:10:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n>>configure:1243: checking whether the C compiler (gcc ) works\n>>configure:1259: gcc -o conftest conftest.c 1>&5\n>>as: warning 2: Unknown option \"--traditional-format\" ignored.\n>>as: \"/var/tmp/cceTg0Kd.s\", line 15: error 1052: Directive name not recognized - NSUBSPA\n>>\n> \n> This looks like your gcc installation is broken: specifically, I'll bet\n> it's trying to use HP's assembler rather than gas. You really want the\n> GNU toolchain (binutils package) underneath gcc. The gcc-to-HP-as\n> configuration has never worked for me (though I have to admit I haven't\n> tried in a long time).\n> \n> In the meantime, try it with HP's real compiler (set CC=cc for configure).\n\nAs usual Tom, you're right on the mark!\n\nThe HP compiler on this machine is in fact the K&R/non-ANSI bundled \ncompiler. Unfortunately, the real compiler is not available -- i.e. we \nhave not purchased it.\n\nI did get binutils installed though, and was able to run configure and \ngmake. However, 'gmake check' seems to hang at:\n\nparallel group (20 tests): lseg point\n\nI have a copy of gdb which has not been installed yet. 
Tomorrow I'll ask \nour unix admin to install it for me so I can drill down to the problem \ncode. Any other thoughts in the meantime?\n\nFWIW, the server is a new A Class HP:\n$ uname -a\nHP-UX csdoap2 B.11.00 A 9000/800 594720518 two-user license\n\nThanks,\n\nJoe\n\n\n\n\n\n", "msg_date": "Thu, 29 Nov 2001 18:46:01 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> I did get binutils installed though, and was able to run configure and \n> gmake. However, 'gmake check' seems to hang at:\n\n> parallel group (20 tests): lseg point\n\nThat one's in the FAQ ;-). Evidently HP still hasn't fixed that sh bug\nas of 11.0. Should work if you use ksh instead, viz\n\n\tgmake SHELL=/bin/ksh check\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 22:44:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> That one's in the FAQ ;-). Evidently HP still hasn't fixed that sh bug\n> as of 11.0. Should work if you use ksh instead, viz\n> \n> \tgmake SHELL=/bin/ksh check\n> \n\nCorrect again (not that I ever doubted you)! Sorry, I should have known \nto look at the platform FAQ.\n\nAnd after all that, the results:\n\n====================================================\n 2 of 79 tests failed, 1 of these failures ignored.\n====================================================\n\ngeometry and random failed. The diffs are attached. The geometry diffs \nall look to be 0 vs -0.\n\nAgain, for reference:\n$ model\n9000/800/A500-44\n$ uname -a\nHP-UX csdoap2 B.11.00 A 9000/800\n\nJoe", "msg_date": "Thu, 29 Nov 2001 22:34:43 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "...\n> geometry and random failed. 
The diffs are attached. The geometry diffs\n> all look to be 0 vs -0.\n> $ model\n> 9000/800/A500-44\n> $ uname -a\n> HP-UX csdoap2 B.11.00 A 9000/800\n\nGreat! Thanks for testing; I'll update the list...\n\n - Thomas\n", "msg_date": "Fri, 30 Nov 2001 14:08:49 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> geometry and random failed. The diffs are attached. The geometry diffs \n> all look to be 0 vs -0.\n\nOkay, it looks like HP has implemented -0 finally. Try the other\ngeometry comparison files to see if any match;\ngeometry-positive-zeros.out is evidently only good for HPUX 10.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 09:51:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> Okay, it looks like HP has implemented -0 finally. Try the other\n> geometry comparison files to see if any match;\n> geometry-positive-zeros.out is evidently only good for HPUX 10.\n> \n\nA little experimentation shows that \"geometry-solaris-precision.out\" is \na perfect match. Two questions:\n\n1. Do we use \"geometry-solaris-precision.out\", or clone it with a more \nappropriate name like \"geometry-hpux11.out\" or \n\"geometry-hpux-precision.out\"?\n\n2. The current resultmap says: \"geometry/hppa=geometry-positive-zeros\".\nWhat do we modify to so that it differentiates between HPUX11 and \nHPUX10? 
I see it's related to the config.guess output and the pg_regress \nscript, but it's not clear to me where to make a change.\n\nJoe\n\n", "msg_date": "Fri, 30 Nov 2001 13:45:32 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I got a regression test result from Hiroshi Saito on\n> > UNIX_System_V ews4800 4.2MP 4.2 R4000 r4000.\n> \n> > Seems INT64_IS_BUSTED, old PST ... etc and\n> \n> > *** ./expected/create_index.out\tTue Aug 28 08:23:34 JST 2001\n> > --- ./results/create_index.out\tFri Nov 30 00:28:22 JST 2001\n> > ***************\n> > *** 35,44 ****\n> > --- 35,47 ----\n> > --\n> > CREATE INDEX onek2_u1_prtl ON onek2 USING btree(unique1 int4_ops)\n> > \twhere unique1 < 20 or unique1 > 980;\n> > + ERROR: AllocSetFree: cannot find block containing chunk 4f64f0\n> > CREATE INDEX onek2_u2_prtl ON onek2 USING btree(unique2 int4_ops)\n> > \twhere stringu1 < 'B';\n> > + ERROR: AllocSetFree: cannot find block containing chunk 4f6390\n> > CREATE INDEX onek2_stu1_prtl ON onek2 USING btree(stringu1 name_ops)\n> > \twhere onek2.stringu1 >= 'J' and onek2.stringu1 < 'K';\n> > + ERROR: AllocSetFree: cannot find block containing chunk 4f6740\n> \n> Interesting. Something nonportable in the partial index support,\n> perhaps?\n> \n> Would you ask him to compile with debug support, set a breakpoint at\n> elog(), and get a backtrace from the point of the error?\n\nUnfortunately he doesn't seem to be able to do it at once\nthough he would like to do it. 
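For whenever he gets the chance, the backtrace procedure Tom outlines amounts to the following session (a sketch only; the PID, install path, and prompts are illustrative):

```
$ psql regression
$ ps ax | grep postgres              # in a second shell: find that backend's PID
$ gdb /usr/local/pgsql/bin/postgres 12345
(gdb) break elog
(gdb) continue
-- back in the psql session, reproduce the failure:
regression=# CREATE INDEX onek2_u1_prtl ON onek2 USING btree(unique1 int4_ops)
regression-#     where unique1 < 20 or unique1 > 980;
(gdb) backtrace
```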
If he is ready he may reply\nto this thread directly.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Sat, 1 Dec 2001 07:49:09 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> A little experimentation shows that \"geometry-solaris-precision.out\" is \n> a perfect match. Two questions:\n\nExcellent.\n\n> 1. Do we use \"geometry-solaris-precision.out\", or clone it with a more \n> appropriate name like \"geometry-hpux11.out\" or \n> \"geometry-hpux-precision.out\"?\n\nI'd use it as-is. Not worth the trouble to rename it, and we definitely\ndon't want to carry around two copies.\n\n> 2. The current resultmap says: \"geometry/hppa=geometry-positive-zeros\".\n> What do we modify to so that it differentiates between HPUX11 and \n> HPUX10? I see it's related to the config.guess output and the pg_regress \n> script, but it's not clear to me where to make a change.\n\nLooks like it's got to be\n\ngeometry/hppa-hp-hpux9=geometry-positive-zeros\ngeometry/hppa-hp-hpux10=geometry-positive-zeros\ngeometry/hppa-hp-hpux11=geometry-solaris-precision\n\nUgh, but ...\n\nPlease check that's OK on 11, and I'll double check it on 10.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 18:06:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Tom Lane wrote:\n\n> Looks like it's got to be\n> \n> geometry/hppa-hp-hpux9=geometry-positive-zeros\n> geometry/hppa-hp-hpux10=geometry-positive-zeros\n> geometry/hppa-hp-hpux11=geometry-solaris-precision\n> \n> Ugh, but ...\n> \n> Please check that's OK on 11, and I'll double check it on 10.\n> \n\nI get:\n\ntest geometry ... FAILED\ntest horology ... 
trouble\n============== shutting down postmaster ==============\n\n=======================\n 1 of 37 tests failed.\n=======================\n\nLooks like geometry.out gets used.\n\nTried again with:\ngeometry/hppa2.0w-hp-hpux11.00=geometry-solaris-precision\n\nand get:\n\n======================\n All 79 tests passed.\n======================\n\nSo maybe it should be:\ngeometry/hppa.*-hp-hpux11.*=geometry-solaris-precision\n\nyup:\n======================\n All 79 tests passed.\n======================\n\nDo you want a patch?\n\nJoe\n\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 16:01:21 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> So maybe it should be:\n> geometry/hppa.*-hp-hpux11.*=geometry-solaris-precision\n\nMmm, probably so.\n\n> Do you want a patch?\n\nI can take it from here. Thanks!\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 19:35:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "> OK, the list is getting smaller fast (thanks for all of the responses!).\n> Here is what I have left currently (note the addition of NetBSD/sparc,\n> which I had inadvertently omitted from the first list):\n> \n> BeOS Cyril Velter\n> Linux/arm Mark Knox\n> Linux/s390 Neale Ferguson\n> NetBSD/arm32 Patrick Welche\n> NetBSD/m68k Bill Studenmund (will test)\n> NetBSD/sparc Matthew Green (left off the first list)\n> NetBSD/VAX Tom I. Helbekkmo\n> QNX (did I get a definitive report for 4.x already?)\n> SunOS Tatsuo Ishii (old release; still relevant?)\n\nI tried 7.2b3 on SunOS 4.1.4 with gcc 2.7.1. 
It gives a comile error:\n\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include\n -c pmsignal.c -o pmsignal.o\npmsignal.c:38: parse error before `*'\npmsignal.c:38: warning: data definition has no type or storage class\npmsignal.c: In function `PMSignalInit':\npmsignal.c:47: `sig_atomic_t' undeclared (first use this function)\npmsignal.c:47: (Each undeclared identifier is reported only once\npmsignal.c:47: for each function it appears in.)\npmsignal.c:47: parse error before `)'\n--\nTatsuo Ishii\n", "msg_date": "Sat, 01 Dec 2001 22:26:09 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I tried 7.2b3 on SunOS 4.1.4 with gcc 2.7.1. It gives a comile error:\n\n> pmsignal.c:47: `sig_atomic_t' undeclared (first use this function)\n\nsig_atomic_t is required by ANSI C to be declared in <signal.h>.\nIs it declared anywhere in the SunOS system headers?\n\nI suppose we could create a configure test that detects this,\nbut I wonder how long we want to go on supporting platforms whose\nheaders aren't even minimally ANSI-compliant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Dec 2001 12:25:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": ">Linux/arm Mark Knox\n\nHad a look at 7.2b3 and sadly it's failing several tests. I saw several \n\"ERROR: PGSTAT: Creation of DB hash table failed\" which I haven't seen before.\n\nGeometry fails as usual due to some minor rounding. The others are \ncompletely wrong. I'm afraid I don't have much time to look at this right \nnow (sorry) but I've attached the regression output and diffs if anyone \nwants to check them.\n\n(Note: any responses should probably be addressed to me directly as well as \nto the list.. I will likely miss them otherwise. 
Thanks.)\n\n\n __ .--------.\n |==|| | -( Mark 'segfault' Knox )-\n |==||________|\n |::| __====__`. .'`. \"Unix *is* user-friendly.. it's just\n |__|/::::::::\\ ~ (_) picky about its friends.\"\n\nparallel group (13 tests): char name text boolean int2 oid varchar int8 float4 int4 float8 bit numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... ok\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... FAILED\ntest strings ... ok\ntest numerology ... ok\nparallel group (20 tests): point lseg box path circle time polygon timetz date interval reltime abstime comments tinterval inet timestamp type_sanity timestamptz opr_sanity oidjoins\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timetz ... ok\n timestamp ... ok\n timestamptz ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... FAILED\ntest horology ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers inherit constraints create_misc create_index\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... ok\n create_index ... ok\n inherit ... ok\ntest create_view ... ok\ntest sanity_check ... FAILED\ntest errors ... ok\ntest select ... FAILED\nparallel group (16 tests): select_distinct_on select_into select_distinct select_having transactions random union subselect case arrays select_implicit portals join aggregates hash_index btree_index\n select_into ... ok\n select_distinct ... ok\n select_distinct_on ... ok\n select_implicit ... ok\n select_having ... ok\n subselect ... ok\n union ... ok\n case ... ok\n join ... 
ok\n aggregates ... ok\n transactions ... ok\n random ... ok\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest privileges ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 alter_table rules foreign_key select_views\n select_views ... FAILED\n alter_table ... FAILED\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\nparallel group (3 tests): limit temp plpgsql\n limit ... ok\n plpgsql ... ok\n temp ... ok\n\n*** ./expected/numeric.out\tFri Apr 7 15:17:42 2000\n--- ./results/numeric.out\tSat Dec 1 21:22:26 2001\n***************\n*** 489,494 ****\n--- 489,495 ----\n CREATE UNIQUE INDEX num_exp_log10_idx ON num_exp_log10 (id);\n CREATE UNIQUE INDEX num_exp_power_10_ln_idx ON num_exp_power_10_ln (id);\n VACUUM ANALYZE num_exp_add;\n+ ERROR: PGSTAT: Creation of DB hash table failed\n VACUUM ANALYZE num_exp_sub;\n VACUUM ANALYZE num_exp_div;\n VACUUM ANALYZE num_exp_mul;\n\n======================================================================\n\n*** ./expected/geometry.out\tFri Sep 28 03:59:52 2001\n--- ./results/geometry.out\tSat Dec 1 21:23:06 2001\n***************\n*** 127,133 ****\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! | (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140473)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n--- 127,133 ----\n | (-5,-12) | [(10,-10),(-3,-4)] | (-1.60487804878049,-4.64390243902439)\n | (10,10) | [(10,-10),(-3,-4)] | (2.39024390243902,-6.48780487804878)\n | (0,0) | [(-1000000,200),(300000,-40)] | (0.0028402365895872,15.384614860264)\n! 
| (-10,0) | [(-1000000,200),(300000,-40)] | (-9.99715942258202,15.3864610140472)\n | (-3,4) | [(-1000000,200),(300000,-40)] | (-2.99789812267519,15.3851688427303)\n | (5.1,34.5) | [(-1000000,200),(300000,-40)] | (5.09647083221496,15.3836744976925)\n | (-5,-12) | [(-1000000,200),(300000,-40)] | (-4.99494420845634,15.3855375281616)\n***************\n*** 150,160 ****\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186548,72.7106781186548),(-69.7106781186548,-68.7106781186548)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932738)\n | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559642)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! | (170.710678118655,70.7106781186548),(29.2893218813452,-70.7106781186548)\n (6 rows)\n \n -- translation\n--- 150,160 ----\n six | box \n -----+----------------------------------------------------------------------------\n | (2.12132034355964,2.12132034355964),(-2.12132034355964,-2.12132034355964)\n! | (71.7106781186547,72.7106781186547),(-69.7106781186547,-68.7106781186547)\n! | (4.53553390593274,6.53553390593274),(-2.53553390593274,-0.535533905932737)\n | (3.12132034355964,4.12132034355964),(-1.12132034355964,-0.121320343559642)\n | (107.071067811865,207.071067811865),(92.9289321881345,192.928932188135)\n! 
| (170.710678118655,70.7106781186547),(29.2893218813453,-70.7106781186547)\n (6 rows)\n \n -- translation\n***************\n*** 443,454 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359078377e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718156754e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077235131e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887966),(-3.33012701896897,0.500000000081028))\n! 
| ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.59807621137373),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048617))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! | ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239385585e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n--- 443,454 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! 
| ((-3,0),(-2.59807621135076,1.50000000000442),(-1.49999999999116,2.59807621135842),(1.53102359017709e-11,3),(1.50000000001768,2.59807621134311),(2.59807621136607,1.4999999999779),(3,-3.06204718035418e-11),(2.59807621133545,-1.50000000003094),(1.49999999996464,-2.59807621137373),(-4.59307077053127e-11,-3),(-1.5000000000442,-2.5980762113278),(-2.59807621138138,-1.49999999995138))\n | ((-99,2),(-85.6025403783588,52.0000000001473),(-48.9999999997054,88.602540378614),(1.00000000051034,102),(51.0000000005893,88.6025403781036),(87.6025403788692,51.9999999992634),(101,1.99999999897932),(87.6025403778485,-48.0000000010313),(50.9999999988214,-84.6025403791243),(0.999999998468976,-98),(-49.0000000014732,-84.6025403775933),(-85.6025403793795,-47.9999999983795))\n! | ((-4,3),(-3.33012701891794,5.50000000000737),(-1.49999999998527,7.3301270189307),(1.00000000002552,8),(3.50000000002946,7.33012701890518),(5.33012701894346,5.49999999996317),(6,2.99999999994897),(5.33012701889242,0.499999999948437),(3.49999999994107,-1.33012701895622),(0.999999999923449,-2),(-1.50000000007366,-1.33012701887967),(-3.33012701896897,0.500000000081028))\n! | ((-2,2),(-1.59807621135076,3.50000000000442),(-0.499999999991161,4.59807621135842),(1.00000000001531,5),(2.50000000001768,4.59807621134311),(3.59807621136607,3.4999999999779),(4,1.99999999996938),(3.59807621133545,0.499999999969062),(2.49999999996464,-0.598076211373729),(0.999999999954069,-1),(-0.500000000044197,-0.598076211327799),(-1.59807621138138,0.500000000048616))\n | ((90,200),(91.3397459621641,205.000000000015),(95.0000000000295,208.660254037861),(100.000000000051,210),(105.000000000059,208.66025403781),(108.660254037887,204.999999999926),(110,199.999999999898),(108.660254037785,194.999999999897),(104.999999999882,191.339745962088),(99.9999999998469,190),(94.9999999998527,191.339745962241),(91.3397459620621,195.000000000162))\n! 
| ((0,0),(13.3974596216412,50.0000000001473),(50.0000000002946,86.602540378614),(100.00000000051,100),(150.000000000589,86.6025403781036),(186.602540378869,49.9999999992634),(200,-1.02068239345139e-09),(186.602540377848,-50.0000000010313),(149.999999998821,-86.6025403791243),(99.999999998469,-100),(49.9999999985268,-86.6025403775933),(13.3974596206205,-49.9999999983795))\n (6 rows)\n \n -- convert the circle to an 8-point polygon\n***************\n*** 456,467 ****\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359078377e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718156754e-11),(2.12132034353258,-2.12132034358671),(-4.59307077235131e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181134),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181134),(200,-1.02068239385585e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n--- 456,467 ----\n FROM CIRCLE_TBL;\n six | polygon \n -----+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n! | ((-3,0),(-2.12132034355423,2.12132034356506),(1.53102359017709e-11,3),(2.12132034357588,2.1213203435434),(3,-3.06204718035418e-11),(2.12132034353258,-2.12132034358671),(-4.59307077053127e-11,-3),(-2.12132034359753,-2.12132034352175))\n! | ((-99,2),(-69.7106781184743,72.7106781188352),(1.00000000051034,102),(71.710678119196,72.7106781181135),(101,1.99999999897932),(71.7106781177526,-68.7106781195569),(0.999999998468976,-98),(-69.7106781199178,-68.7106781173917))\n | ((-4,3),(-2.53553390592372,6.53553390594176),(1.00000000002552,8),(4.5355339059598,6.53553390590567),(6,2.99999999994897),(4.53553390588763,-0.535533905977846),(0.999999999923449,-2),(-2.53553390599589,-0.535533905869586))\n | ((-2,2),(-1.12132034355423,4.12132034356506),(1.00000000001531,5),(3.12132034357588,4.1213203435434),(4,1.99999999996938),(3.12132034353258,-0.121320343586707),(0.999999999954069,-1),(-1.12132034359753,-0.121320343521752))\n | ((90,200),(92.9289321881526,207.071067811884),(100.000000000051,210),(107.07106781192,207.071067811811),(110,199.999999999898),(107.071067811775,192.928932188044),(99.9999999998469,190),(92.9289321880082,192.928932188261))\n! 
| ((0,0),(29.2893218815257,70.7106781188352),(100.00000000051,100),(170.710678119196,70.7106781181135),(200,-1.02068239345139e-09),(170.710678117753,-70.7106781195569),(99.999999998469,-100),(29.2893218800822,-70.7106781173917))\n (6 rows)\n \n --\n***************\n*** 503,513 ****\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+-------------------\n! | <(100,0),100> | (5.1,34.5) | 0.976531926977965\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044151\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n--- 503,513 ----\n WHERE (p1.f1 <-> c1.f1) > 0\n ORDER BY distance, circle, point using <<;\n twentyfour | circle | point | distance \n! ------------+----------------+------------+------------------\n! | <(100,0),100> | (5.1,34.5) | 0.97653192697797\n | <(1,2),3> | (-3,4) | 1.47213595499958\n | <(0,0),3> | (-3,4) | 2\n! | <(100,0),100> | (-3,4) | 3.07764064044152\n | <(100,0),100> | (-5,-12) | 5.68348972285122\n | <(1,3),5> | (-10,0) | 6.40175425099138\n | <(1,3),5> | (10,10) | 6.40175425099138\n\n======================================================================\n\n*** ./expected/sanity_check.out\tMon Aug 27 19:23:34 2001\n--- ./results/sanity_check.out\tSat Dec 1 21:25:17 2001\n***************\n*** 1,4 ****\n--- 1,5 ----\n VACUUM;\n+ ERROR: PGSTAT: Creation of DB hash table failed\n --\n -- sanity check, if we don't have indices the test will take years to\n -- complete. But skip TOAST relations since they will have varying\n***************\n*** 21,26 ****\n--- 22,28 ----\n hash_name_heap | t\n hash_txt_heap | t\n ihighway | t\n+ inet_tbl | t\n num_exp_add | t\n num_exp_div | t\n num_exp_ln | t\n***************\n*** 59,63 ****\n shighway | t\n tenk1 | t\n tenk2 | t\n! 
(49 rows)\n \n--- 61,65 ----\n shighway | t\n tenk1 | t\n tenk2 | t\n! (50 rows)\n \n\n======================================================================\n\n*** ./expected/select.out\tMon Jul 16 01:07:00 2001\n--- ./results/select.out\tSat Dec 1 21:25:20 2001\n***************\n*** 211,232 ****\n -- so ANALYZE first.\n --\n ANALYZE onek2;\n --\n -- awk '{if($1<10){print $0;}else{next;}}' onek.data | sort +0n -1\n --\n SELECT onek2.* WHERE onek2.unique1 < 10;\n unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 \n ---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+---------\n! 0 | 998 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | AAAAAA | KMBAAA | OOOOxx\n 1 | 214 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 3 | BAAAAA | GIAAAA | OOOOxx\n 2 | 326 | 0 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 4 | 5 | CAAAAA | OMAAAA | OOOOxx\n 3 | 431 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 6 | 7 | DAAAAA | PQAAAA | VVVVxx\n- 4 | 833 | 0 | 0 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 8 | 9 | EAAAAA | BGBAAA | HHHHxx\n 5 | 541 | 1 | 1 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 10 | 11 | FAAAAA | VUAAAA | HHHHxx\n- 6 | 978 | 0 | 2 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 12 | 13 | GAAAAA | QLBAAA | OOOOxx\n 7 | 647 | 1 | 3 | 7 | 7 | 7 | 7 | 7 | 7 | 7 | 14 | 15 | HAAAAA | XYAAAA | VVVVxx\n 8 | 653 | 0 | 0 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 16 | 17 | IAAAAA | DZAAAA | HHHHxx\n! 
9 | 49 | 1 | 1 | 9 | 9 | 9 | 9 | 9 | 9 | 9 | 18 | 19 | JAAAAA | XBAAAA | HHHHxx\n (10 rows)\n \n --\n--- 211,233 ----\n -- so ANALYZE first.\n --\n ANALYZE onek2;\n+ ERROR: PGSTAT: Creation of DB hash table failed\n --\n -- awk '{if($1<10){print $0;}else{next;}}' onek.data | sort +0n -1\n --\n SELECT onek2.* WHERE onek2.unique1 < 10;\n unique1 | unique2 | two | four | ten | twenty | hundred | thousand | twothousand | fivethous | tenthous | odd | even | stringu1 | stringu2 | string4 \n ---------+---------+-----+------+-----+--------+---------+----------+-------------+-----------+----------+-----+------+----------+----------+---------\n! 9 | 49 | 1 | 1 | 9 | 9 | 9 | 9 | 9 | 9 | 9 | 18 | 19 | JAAAAA | XBAAAA | HHHHxx\n 1 | 214 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 3 | BAAAAA | GIAAAA | OOOOxx\n 2 | 326 | 0 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 4 | 5 | CAAAAA | OMAAAA | OOOOxx\n 3 | 431 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 6 | 7 | DAAAAA | PQAAAA | VVVVxx\n 5 | 541 | 1 | 1 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 10 | 11 | FAAAAA | VUAAAA | HHHHxx\n 7 | 647 | 1 | 3 | 7 | 7 | 7 | 7 | 7 | 7 | 7 | 14 | 15 | HAAAAA | XYAAAA | VVVVxx\n 8 | 653 | 0 | 0 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 16 | 17 | IAAAAA | DZAAAA | HHHHxx\n! 4 | 833 | 0 | 0 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 8 | 9 | EAAAAA | BGBAAA | HHHHxx\n! 6 | 978 | 0 | 2 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 12 | 13 | GAAAAA | QLBAAA | OOOOxx\n! 0 | 998 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | AAAAAA | KMBAAA | OOOOxx\n (10 rows)\n \n --\n***************\n*** 266,290 ****\n WHERE onek2.unique1 > 980;\n unique1 | stringu1 \n ---------+----------\n! 981 | TLAAAA\n! 982 | ULAAAA\n 983 | VLAAAA\n- 984 | WLAAAA\n- 985 | XLAAAA\n- 986 | YLAAAA\n- 987 | ZLAAAA\n- 988 | AMAAAA\n 989 | BMAAAA\n 990 | CMAAAA\n 991 | DMAAAA\n! 992 | EMAAAA\n 993 | FMAAAA\n 994 | GMAAAA\n! 995 | HMAAAA\n! 996 | IMAAAA\n! 997 | JMAAAA\n! 998 | KMAAAA\n! 
999 | LMAAAA\n (19 rows)\n \n SELECT two, stringu1, ten, string4\n--- 267,291 ----\n WHERE onek2.unique1 > 980;\n unique1 | stringu1 \n ---------+----------\n! 997 | JMAAAA\n! 995 | HMAAAA\n! 999 | LMAAAA\n 983 | VLAAAA\n 989 | BMAAAA\n+ 986 | YLAAAA\n+ 996 | IMAAAA\n+ 982 | ULAAAA\n+ 992 | EMAAAA\n 990 | CMAAAA\n 991 | DMAAAA\n! 984 | WLAAAA\n! 981 | TLAAAA\n! 998 | KMAAAA\n 993 | FMAAAA\n 994 | GMAAAA\n! 988 | AMAAAA\n! 987 | ZLAAAA\n! 985 | XLAAAA\n (19 rows)\n \n SELECT two, stringu1, ten, string4\n\n======================================================================\n\n*** ./expected/select_views.out\tSat Oct 13 13:41:11 2001\n--- ./results/select_views.out\tSat Dec 1 21:27:40 2001\n***************\n*** 6,342 ****\n name | thepath | cname \n ------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------\n Access Rd 25 | [(-121.9283,37.894),(-121.9283,37.9)] | Oakland\n Agua Fria Creek | [(-121.9254,37.922),(-121.9281,37.889)] | Oakland\n Andrea Cir | [(-121.733218,37.88641),(-121.733286,37.90617)] | Oakland\n Apricot Lane | [(-121.9471,37.401),(-121.9456,37.392)] | Oakland\n- Arlington Dr | [(-121.8802,37.408),(-121.8807,37.394)] | Oakland\n- Arlington Road | [(-121.7957,37.898),(-121.7956,37.906)] | Oakland\n- Arroyo Las Positas | [(-121.7973,37.997),(-121.7957,37.005)] | Oakland\n- Arroyo Seco | [(-121.7073,37.766),(-121.6997,37.729)] | Oakland\n- Calaveras Creek | [(-121.8203,37.035),(-121.8207,37.931)] | Oakland\n- Corriea Way | [(-121.9501,37.402),(-121.9505,37.398)] | Oakland\n- Cowing Road | [(-122.0002,37.934),(-121.9772,37.782)] | 
Oakland\n- Driscoll Road | [(-121.9482,37.403),(-121.948451,37.39995)] | Oakland\n- Enos Way | [(-121.7677,37.896),(-121.7673,37.91)] | Oakland\n- Fairview Ave | [(-121.999,37.428),(-121.9863,37.351)] | Oakland\n- I- 580 | [(-121.9322,37.989),(-121.9243,37.006),(-121.9217,37.014)] | Oakland\n- I- 580 | [(-122.018,37.019),(-122.0009,37.032),(-121.9787,37.983),(-121.958,37.984),(-121.9571,37.986)] | Oakland\n- I- 580 Ramp | [(-121.8521,37.011),(-121.8479,37.999),(-121.8476,37.999),(-121.8456,37.01),(-121.8455,37.011)] | Oakland\n- I- 580 Ramp | [(-121.8743,37.014),(-121.8722,37.999),(-121.8714,37.999)] | Oakland\n- I- 580 Ramp | [(-121.9043,37.998),(-121.9036,37.013),(-121.902632,37.0174),(-121.9025,37.018)] | Oakland\n- I- 580 Ramp | [(-121.9368,37.986),(-121.936483,37.98832),(-121.9353,37.997),(-121.93504,37.00035),(-121.9346,37.006),(-121.933764,37.00031),(-121.9333,37.997),(-121.9322,37.989)] | Oakland\n- I- 580/I-680 Ramp | ((-121.9207,37.988),(-121.9192,37.016)) | Oakland\n- I- 680 | ((-121.939,37.15),(-121.9387,37.145),(-121.9373,37.125),(-121.934242,37.07643),(-121.933886,37.0709),(-121.9337,37.068),(-121.933122,37.06139),(-121.932736,37.05698),(-121.93222,37.05108),(-121.931844,37.04678),(-121.930113,37.027),(-121.926829,37),(-121.9265,37.998),(-121.9217,37.96),(-121.9203,37.949),(-121.9184,37.934)) | Oakland\n- I- 680 | [(-121.9101,37.715),(-121.911269,37.74682),(-121.9119,37.764),(-121.9124,37.776),(-121.9174,37.905),(-121.9194,37.957),(-121.9207,37.988)] | Oakland\n- I- 680 | [(-121.9184,37.934),(-121.917,37.913),(-121.9122,37.83),(-121.9052,37.702)] | Oakland\n- I- 680 Ramp | [(-121.8833,37.376),(-121.8833,37.392),(-121.883,37.4),(-121.8835,37.402),(-121.8852,37.422)] | Oakland\n- I- 680 Ramp | [(-121.92,37.438),(-121.9218,37.424),(-121.9238,37.408),(-121.9252,37.392)] | Oakland\n- I- 680 Ramp | [(-121.9238,37.402),(-121.9234,37.395),(-121.923,37.399)] | Oakland\n- I- 880 | 
((-121.9669,37.075),(-121.9663,37.071),(-121.9656,37.065),(-121.9618,37.037),(-121.95689,37),(-121.948,37.933)) | Oakland\n- I- 880 | [(-121.948,37.933),(-121.9471,37.925),(-121.9467,37.923),(-121.946,37.918),(-121.9452,37.912),(-121.937,37.852)] | Oakland\n- Johnson Dr | [(-121.9145,37.901),(-121.915,37.877)] | Oakland\n- Juniper St | [(-121.7823,37.897),(-121.7815,37.9)] | Oakland\n- Las Positas Road | [(-121.764488,37.99199),(-121.75569,37.02022)] | Oakland\n- Livermore Ave | [(-121.7687,37.448),(-121.769,37.375)] | Oakland\n- Livermore Ave | [(-121.772719,37.99085),(-121.7728,37.001)] | Oakland\n- Mission Blvd | [(-121.918886,37),(-121.9194,37.976),(-121.9198,37.975)] | Oakland\n- Mission Blvd | [(-122.0006,37.896),(-121.9989,37.88)] | Oakland\n- Navajo Ct | [(-121.8779,37.901),(-121.8783,37.9)] | Oakland\n- Paseo Padre Pkwy | [(-122.0021,37.639),(-121.996,37.628)] | Oakland\n- Pimlico Dr | [(-121.8616,37.998),(-121.8618,37.008)] | Oakland\n- Rosedale Ct | [(-121.9232,37.9),(-121.924,37.897)] | Oakland\n- Saginaw Ct | [(-121.8803,37.898),(-121.8806,37.901)] | Oakland\n- Sp Railroad | [(-121.893564,37.99009),(-121.897,37.016)] | Oakland\n- Sp Railroad | [(-121.9565,37.898),(-121.9562,37.9)] | Oakland\n- State Hwy 84 | [(-121.9565,37.898),(-121.956589,37.89911),(-121.9569,37.903),(-121.956,37.91),(-121.9553,37.919)] | Oakland\n- Sunol Ridge Trl | [(-121.9419,37.455),(-121.9345,37.38)] | Oakland\n- Tassajara Creek | [(-121.87866,37.98898),(-121.8782,37.015)] | Oakland\n- Theresa Way | [(-121.7289,37.906),(-121.728,37.899)] | Oakland\n- Tissiack Way | [(-121.920364,37),(-121.9208,37.995)] | Oakland\n- Vallecitos Road | [(-121.8699,37.916),(-121.8703,37.891)] | Oakland\n- Warm Springs Blvd | [(-121.933956,37),(-121.9343,37.97)] | Oakland\n- Welch Creek Road | [(-121.7695,37.386),(-121.7737,37.413)] | Oakland\n- Whitlock Creek | [(-121.74683,37.91276),(-121.733107,37)] | Oakland\n- 1st St | [(-121.75508,37.89294),(-121.753581,37.90031)] | Oakland\n Apricot Lane | 
[(-121.9471,37.401),(-121.9456,37.392)] | Oakland\n Arden Road | [(-122.0978,37.177),(-122.1,37.177)] | Oakland\n Arlington Dr | [(-121.8802,37.408),(-121.8807,37.394)] | Oakland\n Arroyo Las Positas | [(-121.7973,37.997),(-121.7957,37.005)] | Oakland\n Ash St | [(-122.0408,37.31),(-122.04,37.292)] | Oakland\n- Bridgepointe Dr | [(-122.0514,37.305),(-122.0509,37.299)] | Oakland\n- Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Oakland\n- Calaveras Creek | [(-121.8203,37.035),(-121.8207,37.931)] | Oakland\n- Celia St | [(-122.0611,37.3),(-122.0616,37.299)] | Oakland\n- Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Oakland\n- Corriea Way | [(-121.9501,37.402),(-121.9505,37.398)] | Oakland\n- Crystaline Dr | [(-121.925856,37),(-121.925869,37.00527)] | Oakland\n- Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Oakland\n- Driscoll Road | [(-121.9482,37.403),(-121.948451,37.39995)] | Oakland\n- Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | Oakland\n- Fairview Ave | [(-121.999,37.428),(-121.9863,37.351)] | Oakland\n- Hesperian Blvd | [(-122.097,37.333),(-122.0956,37.31),(-122.0946,37.293)] | Oakland\n- I- 580 | [(-121.727,37.074),(-121.7229,37.093),(-121.722301,37.09522),(-121.721001,37.10005),(-121.7194,37.106),(-121.7188,37.109),(-121.7168,37.12),(-121.7163,37.123),(-121.7145,37.127),(-121.7096,37.148),(-121.707731,37.1568),(-121.7058,37.166),(-121.7055,37.168),(-121.7044,37.174),(-121.7038,37.172),(-121.7037,37.172),(-121.7027,37.175),(-121.7001,37.181),(-121.6957,37.191),(-121.6948,37.192),(-121.6897,37.204),(-121.6697,37.185)] | Oakland\n- I- 580 | [(-121.9322,37.989),(-121.9243,37.006),(-121.9217,37.014)] | Oakland\n- I- 580 | [(-122.018,37.019),(-122.0009,37.032),(-121.9787,37.983),(-121.958,37.984),(-121.9571,37.986)] | Oakland\n- I- 580 Ramp | [(-121.8521,37.011),(-121.8479,37.999),(-121.8476,37.999),(-121.8456,37.01),(-121.8455,37.011)] | Oakland\n- I- 580 Ramp | 
[(-121.8743,37.014),(-121.8722,37.999),(-121.8714,37.999)] | Oakland\n- I- 580 Ramp | [(-121.9043,37.998),(-121.9036,37.013),(-121.902632,37.0174),(-121.9025,37.018)] | Oakland\n- I- 580 Ramp | [(-121.9368,37.986),(-121.936483,37.98832),(-121.9353,37.997),(-121.93504,37.00035),(-121.9346,37.006),(-121.933764,37.00031),(-121.9333,37.997),(-121.9322,37.989)] | Oakland\n- I- 580/I-680 Ramp | ((-121.9207,37.988),(-121.9192,37.016)) | Oakland\n- I- 680 | ((-121.939,37.15),(-121.9387,37.145),(-121.9373,37.125),(-121.934242,37.07643),(-121.933886,37.0709),(-121.9337,37.068),(-121.933122,37.06139),(-121.932736,37.05698),(-121.93222,37.05108),(-121.931844,37.04678),(-121.930113,37.027),(-121.926829,37),(-121.9265,37.998),(-121.9217,37.96),(-121.9203,37.949),(-121.9184,37.934)) | Oakland\n- I- 680 Ramp | [(-121.8833,37.376),(-121.8833,37.392),(-121.883,37.4),(-121.8835,37.402),(-121.8852,37.422)] | Oakland\n- I- 680 Ramp | [(-121.92,37.438),(-121.9218,37.424),(-121.9238,37.408),(-121.9252,37.392)] | Oakland\n- I- 680 Ramp | [(-121.9238,37.402),(-121.9234,37.395),(-121.923,37.399)] | Oakland\n- I- 880 | ((-121.9669,37.075),(-121.9663,37.071),(-121.9656,37.065),(-121.9618,37.037),(-121.95689,37),(-121.948,37.933)) | Oakland\n- I- 880 | [(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Oakland\n- I- 880 | [(-122.0831,37.312),(-122.0819,37.296),(-122.081,37.285),(-122.0786,37.248),(-122.078,37.24),(-122.077642,37.23496),(-122.076983,37.22567),(-122.076599,37.22026),(-122.076229,37.21505),(-122.0758,37.209)] | Oakland\n- I- 880 Ramp | [(-122.0019,37.301),(-122.002,37.293)] | Oakland\n- I- 880 Ramp | [(-122.0041,37.313),(-122.0018,37.315),(-122.0007,37.315),(-122.0005,37.313),(-122.0002,37.308),(-121.9995,37.289)] | Oakland\n- I- 880 Ramp | 
[(-122.0041,37.313),(-122.0038,37.308),(-122.0039,37.284),(-122.0013,37.287),(-121.9995,37.289)] | Oakland\n- I- 880 Ramp | [(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Oakland\n- I- 880 Ramp | [(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Oakland\n- I- 880 Ramp | [(-122.085,37.34),(-122.0801,37.316),(-122.081,37.285)] | Oakland\n- I- 880 Ramp | [(-122.085,37.34),(-122.0866,37.316),(-122.0819,37.296)] | Oakland\n- Kildare Road | [(-122.0968,37.016),(-122.0959,37)] | Oakland\n- Las Positas Road | [(-121.764488,37.99199),(-121.75569,37.02022)] | Oakland\n- Livermore Ave | [(-121.7687,37.448),(-121.769,37.375)] | Oakland\n- Livermore Ave | [(-121.772719,37.99085),(-121.7728,37.001)] | Oakland\n- Mildred Ct | [(-122.0002,37.388),(-121.9998,37.386)] | Oakland\n- Miramar Ave | [(-122.1009,37.025),(-122.099089,37.03209)] | Oakland\n- Mission Blvd | [(-121.918886,37),(-121.9194,37.976),(-121.9198,37.975)] | Oakland\n- Moores Ave | [(-122.0087,37.301),(-122.0094,37.292)] | Oakland\n- Oakridge Road | [(-121.8316,37.049),(-121.828382,37)] | Oakland\n- Paseo Padre Pkwy | [(-121.9143,37.005),(-121.913522,37)] | Oakland\n- Periwinkle Road | [(-122.0451,37.301),(-122.044758,37.29844)] | Oakland\n- Pimlico Dr | [(-121.8616,37.998),(-121.8618,37.008)] | Oakland\n- Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Oakland\n- Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | Oakland\n- Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Oakland\n- Sp Railroad | [(-121.893564,37.99009),(-121.897,37.016)] | Oakland\n- Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Oakland\n- Stanton Ave | [(-122.100392,37.0697),(-122.099513,37.06052)] | Oakland\n- Sunol Ridge Trl | [(-121.9419,37.455),(-121.9345,37.38)] | Oakland\n- Tassajara Creek | [(-121.87866,37.98898),(-121.8782,37.015)] | Oakland\n- Thackeray Ave | [(-122.072,37.305),(-122.0715,37.298)] | Oakland\n- Tissiack Way | [(-121.920364,37),(-121.9208,37.995)] | 
Oakland\n- Warm Springs Blvd | [(-121.933956,37),(-121.9343,37.97)] | Oakland\n- Welch Creek Road | [(-121.7695,37.386),(-121.7737,37.413)] | Oakland\n- Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Oakland\n- Whitlock Creek | [(-121.74683,37.91276),(-121.733107,37)] | Oakland\n Avenue 134th | [(-122.1823,37.002),(-122.1851,37.992)] | Oakland\n Avenue 140th | [(-122.1656,37.003),(-122.1691,37.988)] | Oakland\n B St | [(-122.1749,37.451),(-122.1743,37.443)] | Oakland\n Bancroft Ave | [(-122.15714,37.4242),(-122.156,37.409)] | Oakland\n Bancroft Ave | [(-122.1643,37.523),(-122.1631,37.508),(-122.1621,37.493)] | Oakland\n Birch St | [(-122.1617,37.425),(-122.1614,37.417)] | Oakland\n Birch St | [(-122.1673,37.509),(-122.1661,37.492)] | Oakland\n Blacow Road | [(-122.0179,37.469),(-122.0167,37.465)] | Oakland\n Broadmore Ave | [(-122.095,37.522),(-122.0936,37.497)] | Oakland\n- Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Oakland\n- C St | [(-122.1768,37.46),(-122.1749,37.435)] | Oakland\n- Cameron Ave | [(-122.1316,37.502),(-122.1327,37.481)] | Oakland\n- Cedar Blvd | [(-122.0282,37.446),(-122.0265,37.43)] | Oakland\n- Chapman Dr | [(-122.0421,37.504),(-122.0414,37.498)] | Oakland\n- Charles St | [(-122.0255,37.505),(-122.0252,37.499)] | Oakland\n- Cherry St | [(-122.0437,37.42),(-122.0434,37.413)] | Oakland\n- Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Oakland\n- Coliseum Way | [(-122.2001,37.47),(-122.1978,37.516)] | Oakland\n- Cull Canyon Road | [(-122.0536,37.435),(-122.0499,37.315)] | Oakland\n- D St | [(-122.1811,37.505),(-122.1805,37.497)] | Oakland\n- Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Oakland\n- Driftwood Dr | [(-122.0109,37.482),(-122.0113,37.477)] | Oakland\n- E St | [(-122.1832,37.505),(-122.1826,37.498),(-122.182,37.49)] | Oakland\n- Eden Ave | [(-122.1143,37.505),(-122.1142,37.491)] | Oakland\n- Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | 
Oakland\n- Gading Road | [(-122.0801,37.343),(-122.08,37.336)] | Oakland\n- Harris Road | [(-122.0659,37.372),(-122.0675,37.363)] | Oakland\n- Hegenberger Exwy | [(-122.1946,37.52),(-122.1947,37.497)] | Oakland\n- Herrier St | [(-122.1943,37.006),(-122.1936,37.998)] | Oakland\n- Hesperian Blvd | [(-122.097,37.333),(-122.0956,37.31),(-122.0946,37.293)] | Oakland\n- I- 580 | [(-122.1108,37.023),(-122.1101,37.02),(-122.108103,37.00764),(-122.108,37.007),(-122.1069,37.998),(-122.1064,37.994),(-122.1053,37.982),(-122.1048,37.977),(-122.1032,37.958),(-122.1026,37.953),(-122.1013,37.938),(-122.0989,37.911),(-122.0984,37.91),(-122.098,37.908)] | Oakland\n- I- 580 | [(-122.1543,37.703),(-122.1535,37.694),(-122.1512,37.655),(-122.1475,37.603),(-122.1468,37.583),(-122.1472,37.569),(-122.149044,37.54874),(-122.1493,37.546),(-122.1501,37.532),(-122.1506,37.509),(-122.1495,37.482),(-122.1487,37.467),(-122.1477,37.447),(-122.1414,37.383),(-122.1404,37.376),(-122.1398,37.372),(-122.139,37.356),(-122.1388,37.353),(-122.1385,37.34),(-122.1382,37.33),(-122.1378,37.316)] | Oakland\n- I- 580 Ramp | [(-122.1086,37.003),(-122.1068,37.993),(-122.1066,37.992),(-122.1053,37.982)] | Oakland\n- I- 580 Ramp | [(-122.1414,37.383),(-122.1407,37.376),(-122.1403,37.372),(-122.139,37.356)] | Oakland\n- I- 880 | [(-122.0219,37.466),(-122.0205,37.447),(-122.020331,37.44447),(-122.020008,37.43962),(-122.0195,37.432),(-122.0193,37.429),(-122.0164,37.393),(-122.010219,37.34771),(-122.0041,37.313)] | Oakland\n- I- 880 | [(-122.0375,37.632),(-122.0359,37.619),(-122.0358,37.616),(-122.034514,37.60409),(-122.031876,37.57965),(-122.031193,37.57332),(-122.03016,37.56375),(-122.02943,37.55698),(-122.028689,37.54929),(-122.027833,37.53908),(-122.025979,37.51698),(-122.0238,37.491)] | Oakland\n- I- 880 | 
[(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Oakland\n- I- 880 | [(-122.0978,37.528),(-122.096,37.496),(-122.0931,37.453),(-122.09277,37.4496),(-122.090189,37.41442),(-122.0896,37.405),(-122.085,37.34)] | Oakland\n- I- 880 | [(-122.1755,37.185),(-122.1747,37.178),(-122.1742,37.173),(-122.1692,37.126),(-122.167792,37.11594),(-122.16757,37.11435),(-122.1671,37.111),(-122.1655,37.1),(-122.165169,37.09811),(-122.1641,37.092),(-122.1596,37.061),(-122.158381,37.05275),(-122.155991,37.03657),(-122.1531,37.017),(-122.1478,37.98),(-122.1407,37.932),(-122.1394,37.924),(-122.1389,37.92),(-122.1376,37.91)] | Oakland\n- I- 880 Ramp | [(-122.0236,37.488),(-122.0231,37.458),(-122.0227,37.458),(-122.0223,37.452),(-122.0205,37.447)] | Oakland\n- I- 880 Ramp | [(-122.0238,37.491),(-122.0215,37.483),(-122.0211,37.477),(-122.0205,37.447)] | Oakland\n- I- 880 Ramp | [(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Oakland\n- I- 880 Ramp | [(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Oakland\n- I- 880 Ramp | [(-122.085,37.34),(-122.0801,37.316),(-122.081,37.285)] | Oakland\n- I- 880 Ramp | [(-122.085,37.34),(-122.0866,37.316),(-122.0819,37.296)] | Oakland\n- Kaiser Dr | [(-122.067163,37.47821),(-122.060402,37.51961)] | Oakland\n- La Playa Dr | [(-122.1039,37.545),(-122.101,37.493)] | Oakland\n- Locust St | [(-122.1606,37.007),(-122.1593,37.987)] | Oakland\n- Logan Ct | [(-122.0053,37.492),(-122.0061,37.484)] | Oakland\n- Magnolia St | [(-122.0971,37.5),(-122.0962,37.484)] | Oakland\n- Mattos Dr | [(-122.0005,37.502),(-122.000898,37.49683)] | Oakland\n- Maubert Ave | [(-122.1114,37.009),(-122.1096,37.995)] | Oakland\n- McClure Ave | [(-122.1431,37.001),(-122.1436,37.998)] | Oakland\n- Medlar Dr | [(-122.0627,37.378),(-122.0625,37.375)] | Oakland\n- National Ave | 
[(-122.1192,37.5),(-122.1281,37.489)] | Oakland\n- Newark Blvd | [(-122.0352,37.438),(-122.0341,37.423)] | Oakland\n- Portsmouth Ave | [(-122.1064,37.315),(-122.1064,37.308)] | Oakland\n- Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Oakland\n- Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | Oakland\n- Redwood Road | [(-122.1493,37.98),(-122.1437,37.001)] | Oakland\n- Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Oakland\n- Skyline Blvd | [(-122.1738,37.01),(-122.1714,37.996)] | Oakland\n- Skyline Dr | [(-122.0277,37.5),(-122.0284,37.498)] | Oakland\n- Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Oakland\n- Sp Railroad | [(-122.137792,37.003),(-122.1365,37.992),(-122.131257,37.94612)] | Oakland\n- Sp Railroad | [(-122.1947,37.497),(-122.193328,37.4848)] | Oakland\n- State Hwy 84 | [(-122.0671,37.426),(-122.07,37.402),(-122.074,37.37),(-122.0773,37.338)] | Oakland\n- State Hwy 92 | [(-122.1085,37.326),(-122.1095,37.322),(-122.1111,37.316),(-122.1119,37.313),(-122.1125,37.311),(-122.1131,37.308),(-122.1167,37.292),(-122.1187,37.285),(-122.12,37.28)] | Oakland\n- State Hwy 92 Ramp | [(-122.1086,37.321),(-122.1089,37.315),(-122.1111,37.316)] | Oakland\n- Tennyson Road | [(-122.0891,37.317),(-122.0927,37.317)] | Oakland\n- Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Oakland\n- Willimet Way | [(-122.0964,37.517),(-122.0949,37.493)] | Oakland\n- Wisconsin St | [(-122.1994,37.017),(-122.1975,37.998),(-122.1971,37.994)] | Oakland\n- 100th Ave | [(-122.1657,37.429),(-122.1647,37.432)] | Oakland\n- 107th Ave | [(-122.1555,37.403),(-122.1531,37.41)] | Oakland\n- 85th Ave | [(-122.1877,37.466),(-122.186,37.476)] | Oakland\n- 89th Ave | [(-122.1822,37.459),(-122.1803,37.471)] | Oakland\n- 98th Ave | [(-122.1568,37.498),(-122.1558,37.502)] | Oakland\n- 98th Ave | [(-122.1693,37.438),(-122.1682,37.444)] | Oakland\n- Allen Ct | [(-122.0131,37.602),(-122.0117,37.597)] | Berkeley\n- Alvarado Niles Road 
| [(-122.0325,37.903),(-122.0316,37.9)] | Berkeley\n- Arizona St | [(-122.0381,37.901),(-122.0367,37.898)] | Berkeley\n- Avenue 134th | [(-122.1823,37.002),(-122.1851,37.992)] | Berkeley\n- Avenue 140th | [(-122.1656,37.003),(-122.1691,37.988)] | Berkeley\n- Avenue D | [(-122.298,37.848),(-122.3024,37.849)] | Berkeley\n Broadway | [(-122.2409,37.586),(-122.2395,37.601)] | Berkeley\n Buckingham Blvd | [(-122.2231,37.59),(-122.2214,37.606)] | Berkeley\n Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Berkeley\n California St | [(-122.2032,37.005),(-122.2016,37.996)] | Berkeley\n Campus Dr | [(-122.1704,37.905),(-122.1678,37.868),(-122.1671,37.865)] | Berkeley\n Carson St | [(-122.1846,37.9),(-122.1843,37.901)] | Berkeley\n Cedar St | [(-122.3011,37.737),(-122.2999,37.739)] | Berkeley\n Central Ave | [(-122.2343,37.602),(-122.2331,37.595)] | Berkeley\n Champion St | [(-122.214,37.991),(-122.2147,37.002)] | Berkeley\n Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Berkeley\n Coliseum Way | [(-122.2113,37.626),(-122.2085,37.592),(-122.2063,37.568)] | Berkeley\n Cornell Ave | [(-122.2956,37.925),(-122.2949,37.906),(-122.2939,37.875)] | Berkeley\n Creston Road | [(-122.2639,37.002),(-122.2613,37.986),(-122.2602,37.978),(-122.2598,37.973)] | Berkeley\n Crow Canyon Creek | [(-122.043,37.905),(-122.0368,37.71)] | Berkeley\n Cull Creek | [(-122.0624,37.875),(-122.0582,37.527)] | Berkeley\n Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Berkeley\n Deering St | [(-122.2146,37.904),(-122.2126,37.897)] | Berkeley\n Dimond Ave | [(-122.2167,37.994),(-122.2162,37.006)] | Berkeley\n Donna Way | [(-122.1333,37.606),(-122.1316,37.599)] | Berkeley\n Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | Berkeley\n Euclid Ave | [(-122.2671,37.009),(-122.2666,37.987)] | Berkeley\n Foothill Blvd | [(-122.2414,37.9),(-122.2403,37.893)] | Berkeley\n Fountain St | [(-122.2306,37.593),(-122.2293,37.605)] | Berkeley\n Grizzly Peak Blvd | 
[(-122.2213,37.638),(-122.2127,37.581)] | Berkeley\n Grove Way | [(-122.0643,37.884),(-122.062679,37.89162),(-122.061796,37.89578),(-122.0609,37.9)] | Berkeley\n Herrier St | [(-122.1943,37.006),(-122.1936,37.998)] | Berkeley\n Hesperian Blvd | [(-122.1132,37.6),(-122.1123,37.586)] | Berkeley\n I- 580 | [(-122.1108,37.023),(-122.1101,37.02),(-122.108103,37.00764),(-122.108,37.007),(-122.1069,37.998),(-122.1064,37.994),(-122.1053,37.982),(-122.1048,37.977),(-122.1032,37.958),(-122.1026,37.953),(-122.1013,37.938),(-122.0989,37.911),(-122.0984,37.91),(-122.098,37.908)] | Berkeley\n I- 580 | [(-122.1543,37.703),(-122.1535,37.694),(-122.1512,37.655),(-122.1475,37.603),(-122.1468,37.583),(-122.1472,37.569),(-122.149044,37.54874),(-122.1493,37.546),(-122.1501,37.532),(-122.1506,37.509),(-122.1495,37.482),(-122.1487,37.467),(-122.1477,37.447),(-122.1414,37.383),(-122.1404,37.376),(-122.1398,37.372),(-122.139,37.356),(-122.1388,37.353),(-122.1385,37.34),(-122.1382,37.33),(-122.1378,37.316)] | Berkeley\n I- 580 | [(-122.2197,37.99),(-122.22,37.99),(-122.222092,37.99523),(-122.2232,37.998),(-122.224146,37.99963),(-122.2261,37.003),(-122.2278,37.007),(-122.2302,37.026),(-122.2323,37.043),(-122.2344,37.059),(-122.235405,37.06427),(-122.2365,37.07)] | Berkeley\n I- 580 Ramp | [(-122.093241,37.90351),(-122.09364,37.89634),(-122.093788,37.89212)] | Berkeley\n I- 580 Ramp | [(-122.0934,37.896),(-122.09257,37.89961),(-122.0911,37.906)] | Berkeley\n I- 580 Ramp | [(-122.0941,37.897),(-122.0943,37.902)] | Berkeley\n I- 580 Ramp | [(-122.096,37.888),(-122.0962,37.891),(-122.0964,37.9)] | Berkeley\n I- 580 Ramp | [(-122.101,37.898),(-122.1005,37.902),(-122.0989,37.911)] | Berkeley\n I- 580 Ramp | [(-122.1086,37.003),(-122.1068,37.993),(-122.1066,37.992),(-122.1053,37.982)] | Berkeley\n I- 880 | 
[(-122.0375,37.632),(-122.0359,37.619),(-122.0358,37.616),(-122.034514,37.60409),(-122.031876,37.57965),(-122.031193,37.57332),(-122.03016,37.56375),(-122.02943,37.55698),(-122.028689,37.54929),(-122.027833,37.53908),(-122.025979,37.51698),(-122.0238,37.491)] | Berkeley\n I- 880 | [(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Berkeley\n I- 880 | [(-122.1365,37.902),(-122.1358,37.898),(-122.1333,37.881),(-122.1323,37.874),(-122.1311,37.866),(-122.1308,37.865),(-122.1307,37.864),(-122.1289,37.851),(-122.1277,37.843),(-122.1264,37.834),(-122.1231,37.812),(-122.1165,37.766),(-122.1104,37.72),(-122.109695,37.71094),(-122.109,37.702),(-122.108312,37.69168),(-122.1076,37.681)] | Berkeley\n I- 880 | [(-122.1755,37.185),(-122.1747,37.178),(-122.1742,37.173),(-122.1692,37.126),(-122.167792,37.11594),(-122.16757,37.11435),(-122.1671,37.111),(-122.1655,37.1),(-122.165169,37.09811),(-122.1641,37.092),(-122.1596,37.061),(-122.158381,37.05275),(-122.155991,37.03657),(-122.1531,37.017),(-122.1478,37.98),(-122.1407,37.932),(-122.1394,37.924),(-122.1389,37.92),(-122.1376,37.91)] | Berkeley\n I- 880 | [(-122.2214,37.711),(-122.2202,37.699),(-122.2199,37.695),(-122.219,37.682),(-122.2184,37.672),(-122.2173,37.652),(-122.2159,37.638),(-122.2144,37.616),(-122.2138,37.612),(-122.2135,37.609),(-122.212,37.592),(-122.2116,37.586),(-122.2111,37.581)] | Berkeley\n I- 880 | [(-122.2707,37.975),(-122.2693,37.972),(-122.2681,37.966),(-122.267,37.962),(-122.2659,37.957),(-122.2648,37.952),(-122.2636,37.946),(-122.2625,37.935),(-122.2617,37.927),(-122.2607,37.921),(-122.2593,37.916),(-122.258,37.911),(-122.2536,37.898),(-122.2432,37.858),(-122.2408,37.845),(-122.2386,37.827),(-122.2374,37.811)] | Berkeley\n I- 880 Ramp | [(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Berkeley\n I- 880 Ramp | 
[(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Berkeley\n I- 880 Ramp | [(-122.1029,37.61),(-122.1013,37.587),(-122.0999,37.569)] | Berkeley\n I- 880 Ramp | [(-122.1379,37.891),(-122.1383,37.897),(-122.1377,37.902)] | Berkeley\n I- 880 Ramp | [(-122.1379,37.931),(-122.137597,37.92736),(-122.1374,37.925),(-122.1373,37.924),(-122.1369,37.914),(-122.1358,37.905),(-122.1365,37.908),(-122.1358,37.898)] | Berkeley\n I- 880 Ramp | [(-122.2536,37.898),(-122.254,37.902)] | Berkeley\n Jackson St | [(-122.0845,37.6),(-122.0842,37.606)] | Berkeley\n Joyce St | [(-122.0792,37.604),(-122.0774,37.581)] | Berkeley\n Keeler Ave | [(-122.2578,37.906),(-122.2579,37.899)] | Berkeley\n Laguna Ave | [(-122.2099,37.989),(-122.2089,37)] | Berkeley\n Lakehurst Cir | [(-122.284729,37.89025),(-122.286096,37.90364)] | Berkeley\n Lakeshore Ave | [(-122.2586,37.99),(-122.2556,37.006)] | Berkeley\n Linden St | [(-122.2867,37.998),(-122.2864,37.008)] | Berkeley\n Locust St | [(-122.1606,37.007),(-122.1593,37.987)] | Berkeley\n Marin Ave | [(-122.2741,37.894),(-122.272,37.901)] | Berkeley\n Martin Luther King Jr Way | [(-122.2712,37.608),(-122.2711,37.599)] | Berkeley\n Maubert Ave | [(-122.1114,37.009),(-122.1096,37.995)] | Berkeley\n McClure Ave | [(-122.1431,37.001),(-122.1436,37.998)] | Berkeley\n Miller Road | [(-122.0902,37.645),(-122.0865,37.545)] | Berkeley\n Mission Blvd | [(-122.0006,37.896),(-121.9989,37.88)] | Berkeley\n Oakland Inner Harbor | [(-122.2625,37.913),(-122.260016,37.89484)] | Berkeley\n Oneil Ave | [(-122.076754,37.62476),(-122.0745,37.595)] | Berkeley\n Parkridge Dr | [(-122.1438,37.884),(-122.1428,37.9)] | Berkeley\n Parkside Dr | [(-122.0475,37.603),(-122.0443,37.596)] | Berkeley\n Paseo Padre Pkwy | [(-122.0021,37.639),(-121.996,37.628)] | Berkeley\n Pearl St | [(-122.2383,37.594),(-122.2366,37.615)] | Berkeley\n Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Berkeley\n Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | 
Berkeley\n Redding St | [(-122.1978,37.901),(-122.1975,37.895)] | Berkeley\n Redwood Road | [(-122.1493,37.98),(-122.1437,37.001)] | Berkeley\n Roca Dr | [(-122.0335,37.609),(-122.0314,37.599)] | Berkeley\n Sacramento St | [(-122.2799,37.606),(-122.2797,37.597)] | Berkeley\n Saddle Brook Dr | [(-122.1478,37.909),(-122.1454,37.904),(-122.1451,37.888)] | Berkeley\n San Andreas Dr | [(-122.0609,37.9),(-122.0614,37.895)] | Berkeley\n Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Berkeley\n Shattuck Ave | [(-122.2686,37.904),(-122.2686,37.897)] | Berkeley\n Shoreline Dr | [(-122.2657,37.603),(-122.2648,37.6)] | Berkeley\n Skyline Blvd | [(-122.1738,37.01),(-122.1714,37.996)] | Berkeley\n Skywest Dr | [(-122.1161,37.62),(-122.1123,37.586)] | Berkeley\n Southern Pacific Railroad | [(-122.3002,37.674),(-122.2999,37.661)] | Berkeley\n Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Berkeley\n Sp Railroad | [(-122.0914,37.601),(-122.087,37.56),(-122.086408,37.5551)] | Berkeley\n Sp Railroad | [(-122.137792,37.003),(-122.1365,37.992),(-122.131257,37.94612)] | Berkeley\n State Hwy 123 | [(-122.3004,37.986),(-122.2998,37.969),(-122.2995,37.962),(-122.2992,37.952),(-122.299,37.942),(-122.2987,37.935),(-122.2984,37.924),(-122.2982,37.92),(-122.2976,37.904),(-122.297,37.88),(-122.2966,37.869),(-122.2959,37.848),(-122.2961,37.843)] | Berkeley\n State Hwy 13 | [(-122.1797,37.943),(-122.179871,37.91849),(-122.18,37.9),(-122.179023,37.86615),(-122.1787,37.862),(-122.1781,37.851),(-122.1777,37.845),(-122.1773,37.839),(-122.177,37.833)] | Berkeley\n State Hwy 238 | ((-122.098,37.908),(-122.0983,37.907),(-122.099,37.905),(-122.101,37.898),(-122.101535,37.89711),(-122.103173,37.89438),(-122.1046,37.892),(-122.106,37.89)) | Berkeley\n State Hwy 238 Ramp | [(-122.1288,37.9),(-122.1293,37.895),(-122.1296,37.906)] | Berkeley\n Stuart St | [(-122.2518,37.6),(-122.2507,37.601),(-122.2491,37.606)] | Berkeley\n Tupelo Ter | [(-122.059087,37.6113),(-122.057021,37.59942)] | 
Berkeley\n West Loop Road | [(-122.0576,37.604),(-122.0602,37.586)] | Berkeley\n Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Berkeley\n Wisconsin St | [(-122.1994,37.017),(-122.1975,37.998),(-122.1971,37.994)] | Berkeley\n Wp Railroad | [(-122.254,37.902),(-122.2506,37.891)] | Berkeley\n 19th Ave | [(-122.2366,37.897),(-122.2359,37.905)] | Berkeley\n 5th St | [(-122.296,37.615),(-122.2953,37.598)] | Berkeley\n 82nd Ave | [(-122.1695,37.596),(-122.1681,37.603)] | Berkeley\n! Ada St | [(-122.2487,37.398),(-122.2496,37.401)] | Lafayette\n! California St | [(-122.2032,37.005),(-122.2016,37.996)] | Lafayette\n! Capricorn Ave | [(-122.2176,37.404),(-122.2164,37.384)] | Lafayette\n! Chambers Dr | [(-122.2004,37.352),(-122.1972,37.368)] | Lafayette\n! Chambers Lane | [(-122.2001,37.359),(-122.1975,37.371)] | Lafayette\n! Champion St | [(-122.214,37.991),(-122.2147,37.002)] | Lafayette\n! Coolidge Ave | [(-122.2007,37.058),(-122.1992,37.06)] | Lafayette\n! Creston Road | [(-122.2639,37.002),(-122.2613,37.986),(-122.2602,37.978),(-122.2598,37.973)] | Lafayette\n! Dimond Ave | [(-122.2167,37.994),(-122.2162,37.006)] | Lafayette\n! Edgewater Dr | [(-122.201,37.379),(-122.2042,37.41)] | Lafayette\n! Euclid Ave | [(-122.2671,37.009),(-122.2666,37.987)] | Lafayette\n! Heartwood Dr | [(-122.2006,37.341),(-122.1992,37.338)] | Lafayette\n! Hollis St | [(-122.2885,37.397),(-122.289,37.414)] | Lafayette\n! I- 580 | [(-122.2197,37.99),(-122.22,37.99),(-122.222092,37.99523),(-122.2232,37.998),(-122.224146,37.99963),(-122.2261,37.003),(-122.2278,37.007),(-122.2302,37.026),(-122.2323,37.043),(-122.2344,37.059),(-122.235405,37.06427),(-122.2365,37.07)] | Lafayette\n! I- 80 | ((-122.2937,37.277),(-122.3016,37.262)) | Lafayette\n! I- 80 | ((-122.2962,37.273),(-122.3004,37.264)) | Lafayette\n! I- 80 Ramp | [(-122.2962,37.413),(-122.2959,37.382),(-122.2951,37.372)] | Lafayette\n! I- 880 Ramp | [(-122.2771,37.002),(-122.278,37)] | Lafayette\n! 
Indian Way | [(-122.2066,37.398),(-122.2045,37.411)] | Lafayette\n! Laguna Ave | [(-122.2099,37.989),(-122.2089,37)] | Lafayette\n! Lakeshore Ave | [(-122.2586,37.99),(-122.2556,37.006)] | Lafayette\n! Linden St | [(-122.2867,37.998),(-122.2864,37.008)] | Lafayette\n! Mandalay Road | [(-122.2322,37.397),(-122.2321,37.403)] | Lafayette\n! Proctor Ave | [(-122.2267,37.406),(-122.2251,37.386)] | Lafayette\n! Sheridan Road | [(-122.2279,37.425),(-122.2253,37.411),(-122.2223,37.377)] | Lafayette\n! State Hwy 13 | [(-122.2049,37.2),(-122.20328,37.17975),(-122.1989,37.125),(-122.198078,37.11641),(-122.1975,37.11)] | Lafayette\n! State Hwy 13 Ramp | [(-122.2244,37.427),(-122.223,37.414),(-122.2214,37.396),(-122.2213,37.388)] | Lafayette\n! State Hwy 24 | [(-122.2674,37.246),(-122.2673,37.248),(-122.267,37.261),(-122.2668,37.271),(-122.2663,37.298),(-122.2659,37.315),(-122.2655,37.336),(-122.265007,37.35882),(-122.264443,37.37286),(-122.2641,37.381),(-122.2638,37.388),(-122.2631,37.396),(-122.2617,37.405),(-122.2615,37.407),(-122.2605,37.412)] | Lafayette\n! Taurus Ave | [(-122.2159,37.416),(-122.2128,37.389)] | Lafayette\n! 14th St | [(-122.299,37.147),(-122.3,37.148)] | Lafayette\n! 
5th St | [(-122.278,37),(-122.2792,37.005),(-122.2803,37.009)] | Lafayette\n 98th Ave | [(-122.2001,37.258),(-122.1974,37.27)] | Lafayette\n (333 rows)\n \n--- 6,342 ----\n name | thepath | cname \n ------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------\n Access Rd 25 | [(-121.9283,37.894),(-121.9283,37.9)] | Oakland\n+ Ada St | [(-122.2487,37.398),(-122.2496,37.401)] | Lafayette\n Agua Fria Creek | [(-121.9254,37.922),(-121.9281,37.889)] | Oakland\n+ Allen Ct | [(-122.0131,37.602),(-122.0117,37.597)] | Berkeley\n+ Alvarado Niles Road | [(-122.0325,37.903),(-122.0316,37.9)] | Berkeley\n Andrea Cir | [(-121.733218,37.88641),(-121.733286,37.90617)] | Oakland\n Apricot Lane | [(-121.9471,37.401),(-121.9456,37.392)] | Oakland\n Apricot Lane | [(-121.9471,37.401),(-121.9456,37.392)] | Oakland\n Arden Road | [(-122.0978,37.177),(-122.1,37.177)] | Oakland\n+ Arizona St | [(-122.0381,37.901),(-122.0367,37.898)] | Berkeley\n+ Arlington Dr | [(-121.8802,37.408),(-121.8807,37.394)] | Oakland\n Arlington Dr | [(-121.8802,37.408),(-121.8807,37.394)] | Oakland\n+ Arlington Road | [(-121.7957,37.898),(-121.7956,37.906)] | Oakland\n+ Arroyo Las Positas | [(-121.7973,37.997),(-121.7957,37.005)] | Oakland\n Arroyo Las Positas | [(-121.7973,37.997),(-121.7957,37.005)] | Oakland\n+ Arroyo Seco | [(-121.7073,37.766),(-121.6997,37.729)] | Oakland\n Ash St | [(-122.0408,37.31),(-122.04,37.292)] | Oakland\n Avenue 134th | [(-122.1823,37.002),(-122.1851,37.992)] | Oakland\n+ Avenue 134th | [(-122.1823,37.002),(-122.1851,37.992)] | Berkeley\n Avenue 140th | 
[(-122.1656,37.003),(-122.1691,37.988)] | Oakland\n+ Avenue 140th | [(-122.1656,37.003),(-122.1691,37.988)] | Berkeley\n+ Avenue D | [(-122.298,37.848),(-122.3024,37.849)] | Berkeley\n B St | [(-122.1749,37.451),(-122.1743,37.443)] | Oakland\n Bancroft Ave | [(-122.15714,37.4242),(-122.156,37.409)] | Oakland\n Bancroft Ave | [(-122.1643,37.523),(-122.1631,37.508),(-122.1621,37.493)] | Oakland\n Birch St | [(-122.1617,37.425),(-122.1614,37.417)] | Oakland\n Birch St | [(-122.1673,37.509),(-122.1661,37.492)] | Oakland\n Blacow Road | [(-122.0179,37.469),(-122.0167,37.465)] | Oakland\n+ Bridgepointe Dr | [(-122.0514,37.305),(-122.0509,37.299)] | Oakland\n Broadmore Ave | [(-122.095,37.522),(-122.0936,37.497)] | Oakland\n Broadway | [(-122.2409,37.586),(-122.2395,37.601)] | Berkeley\n Buckingham Blvd | [(-122.2231,37.59),(-122.2214,37.606)] | Berkeley\n+ Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Oakland\n+ Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Oakland\n Butterfield Dr | [(-122.0838,37.002),(-122.0834,37.987)] | Berkeley\n+ C St | [(-122.1768,37.46),(-122.1749,37.435)] | Oakland\n+ Calaveras Creek | [(-121.8203,37.035),(-121.8207,37.931)] | Oakland\n+ Calaveras Creek | [(-121.8203,37.035),(-121.8207,37.931)] | Oakland\n California St | [(-122.2032,37.005),(-122.2016,37.996)] | Berkeley\n+ California St | [(-122.2032,37.005),(-122.2016,37.996)] | Lafayette\n+ Cameron Ave | [(-122.1316,37.502),(-122.1327,37.481)] | Oakland\n Campus Dr | [(-122.1704,37.905),(-122.1678,37.868),(-122.1671,37.865)] | Berkeley\n+ Capricorn Ave | [(-122.2176,37.404),(-122.2164,37.384)] | Lafayette\n Carson St | [(-122.1846,37.9),(-122.1843,37.901)] | Berkeley\n+ Cedar Blvd | [(-122.0282,37.446),(-122.0265,37.43)] | Oakland\n Cedar St | [(-122.3011,37.737),(-122.2999,37.739)] | Berkeley\n+ Celia St | [(-122.0611,37.3),(-122.0616,37.299)] | Oakland\n Central Ave | [(-122.2343,37.602),(-122.2331,37.595)] | Berkeley\n+ Chambers Dr | 
[(-122.2004,37.352),(-122.1972,37.368)] | Lafayette\n+ Chambers Lane | [(-122.2001,37.359),(-122.1975,37.371)] | Lafayette\n Champion St | [(-122.214,37.991),(-122.2147,37.002)] | Berkeley\n+ Champion St | [(-122.214,37.991),(-122.2147,37.002)] | Lafayette\n+ Chapman Dr | [(-122.0421,37.504),(-122.0414,37.498)] | Oakland\n+ Charles St | [(-122.0255,37.505),(-122.0252,37.499)] | Oakland\n+ Cherry St | [(-122.0437,37.42),(-122.0434,37.413)] | Oakland\n+ Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Oakland\n+ Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Oakland\n Claremont Pl | [(-122.0542,37.995),(-122.0542,37.008)] | Berkeley\n+ Coliseum Way | [(-122.2001,37.47),(-122.1978,37.516)] | Oakland\n Coliseum Way | [(-122.2113,37.626),(-122.2085,37.592),(-122.2063,37.568)] | Berkeley\n+ Coolidge Ave | [(-122.2007,37.058),(-122.1992,37.06)] | Lafayette\n Cornell Ave | [(-122.2956,37.925),(-122.2949,37.906),(-122.2939,37.875)] | Berkeley\n+ Corriea Way | [(-121.9501,37.402),(-121.9505,37.398)] | Oakland\n+ Corriea Way | [(-121.9501,37.402),(-121.9505,37.398)] | Oakland\n+ Cowing Road | [(-122.0002,37.934),(-121.9772,37.782)] | Oakland\n Creston Road | [(-122.2639,37.002),(-122.2613,37.986),(-122.2602,37.978),(-122.2598,37.973)] | Berkeley\n+ Creston Road | [(-122.2639,37.002),(-122.2613,37.986),(-122.2602,37.978),(-122.2598,37.973)] | Lafayette\n Crow Canyon Creek | [(-122.043,37.905),(-122.0368,37.71)] | Berkeley\n+ Crystaline Dr | [(-121.925856,37),(-121.925869,37.00527)] | Oakland\n+ Cull Canyon Road | [(-122.0536,37.435),(-122.0499,37.315)] | Oakland\n Cull Creek | [(-122.0624,37.875),(-122.0582,37.527)] | Berkeley\n+ D St | [(-122.1811,37.505),(-122.1805,37.497)] | Oakland\n+ Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Oakland\n+ Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Oakland\n Decoto Road | [(-122.0159,37.006),(-122.016,37.002),(-122.0164,37.993)] | Berkeley\n Deering St | 
[(-122.2146,37.904),(-122.2126,37.897)] | Berkeley\n Dimond Ave | [(-122.2167,37.994),(-122.2162,37.006)] | Berkeley\n+ Dimond Ave | [(-122.2167,37.994),(-122.2162,37.006)] | Lafayette\n Donna Way | [(-122.1333,37.606),(-122.1316,37.599)] | Berkeley\n+ Driftwood Dr | [(-122.0109,37.482),(-122.0113,37.477)] | Oakland\n+ Driscoll Road | [(-121.9482,37.403),(-121.948451,37.39995)] | Oakland\n+ Driscoll Road | [(-121.9482,37.403),(-121.948451,37.39995)] | Oakland\n+ E St | [(-122.1832,37.505),(-122.1826,37.498),(-122.182,37.49)] | Oakland\n+ Eden Ave | [(-122.1143,37.505),(-122.1142,37.491)] | Oakland\n+ Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | Oakland\n+ Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | Oakland\n Eden Creek | [(-122.022037,37.00675),(-122.0221,37.998)] | Berkeley\n+ Edgewater Dr | [(-122.201,37.379),(-122.2042,37.41)] | Lafayette\n+ Enos Way | [(-121.7677,37.896),(-121.7673,37.91)] | Oakland\n Euclid Ave | [(-122.2671,37.009),(-122.2666,37.987)] | Berkeley\n+ Euclid Ave | [(-122.2671,37.009),(-122.2666,37.987)] | Lafayette\n+ Fairview Ave | [(-121.999,37.428),(-121.9863,37.351)] | Oakland\n+ Fairview Ave | [(-121.999,37.428),(-121.9863,37.351)] | Oakland\n Foothill Blvd | [(-122.2414,37.9),(-122.2403,37.893)] | Berkeley\n Fountain St | [(-122.2306,37.593),(-122.2293,37.605)] | Berkeley\n+ Gading Road | [(-122.0801,37.343),(-122.08,37.336)] | Oakland\n Grizzly Peak Blvd | [(-122.2213,37.638),(-122.2127,37.581)] | Berkeley\n Grove Way | [(-122.0643,37.884),(-122.062679,37.89162),(-122.061796,37.89578),(-122.0609,37.9)] | Berkeley\n+ Harris Road | [(-122.0659,37.372),(-122.0675,37.363)] | Oakland\n+ Heartwood Dr | [(-122.2006,37.341),(-122.1992,37.338)] | Lafayette\n+ Hegenberger Exwy | [(-122.1946,37.52),(-122.1947,37.497)] | Oakland\n+ Herrier St | [(-122.1943,37.006),(-122.1936,37.998)] | Oakland\n Herrier St | [(-122.1943,37.006),(-122.1936,37.998)] | Berkeley\n+ Hesperian Blvd | 
[(-122.097,37.333),(-122.0956,37.31),(-122.0946,37.293)] | Oakland\n+ Hesperian Blvd | [(-122.097,37.333),(-122.0956,37.31),(-122.0946,37.293)] | Oakland\n Hesperian Blvd | [(-122.1132,37.6),(-122.1123,37.586)] | Berkeley\n+ Hollis St | [(-122.2885,37.397),(-122.289,37.414)] | Lafayette\n+ I- 580 | [(-121.727,37.074),(-121.7229,37.093),(-121.722301,37.09522),(-121.721001,37.10005),(-121.7194,37.106),(-121.7188,37.109),(-121.7168,37.12),(-121.7163,37.123),(-121.7145,37.127),(-121.7096,37.148),(-121.707731,37.1568),(-121.7058,37.166),(-121.7055,37.168),(-121.7044,37.174),(-121.7038,37.172),(-121.7037,37.172),(-121.7027,37.175),(-121.7001,37.181),(-121.6957,37.191),(-121.6948,37.192),(-121.6897,37.204),(-121.6697,37.185)] | Oakland\n+ I- 580 | [(-121.9322,37.989),(-121.9243,37.006),(-121.9217,37.014)] | Oakland\n+ I- 580 | [(-121.9322,37.989),(-121.9243,37.006),(-121.9217,37.014)] | Oakland\n+ I- 580 | [(-122.018,37.019),(-122.0009,37.032),(-121.9787,37.983),(-121.958,37.984),(-121.9571,37.986)] | Oakland\n+ I- 580 | [(-122.018,37.019),(-122.0009,37.032),(-121.9787,37.983),(-121.958,37.984),(-121.9571,37.986)] | Oakland\n+ I- 580 | [(-122.1108,37.023),(-122.1101,37.02),(-122.108103,37.00764),(-122.108,37.007),(-122.1069,37.998),(-122.1064,37.994),(-122.1053,37.982),(-122.1048,37.977),(-122.1032,37.958),(-122.1026,37.953),(-122.1013,37.938),(-122.0989,37.911),(-122.0984,37.91),(-122.098,37.908)] | Oakland\n I- 580 | [(-122.1108,37.023),(-122.1101,37.02),(-122.108103,37.00764),(-122.108,37.007),(-122.1069,37.998),(-122.1064,37.994),(-122.1053,37.982),(-122.1048,37.977),(-122.1032,37.958),(-122.1026,37.953),(-122.1013,37.938),(-122.0989,37.911),(-122.0984,37.91),(-122.098,37.908)] | Berkeley\n+ I- 580 | 
[(-122.1543,37.703),(-122.1535,37.694),(-122.1512,37.655),(-122.1475,37.603),(-122.1468,37.583),(-122.1472,37.569),(-122.149044,37.54874),(-122.1493,37.546),(-122.1501,37.532),(-122.1506,37.509),(-122.1495,37.482),(-122.1487,37.467),(-122.1477,37.447),(-122.1414,37.383),(-122.1404,37.376),(-122.1398,37.372),(-122.139,37.356),(-122.1388,37.353),(-122.1385,37.34),(-122.1382,37.33),(-122.1378,37.316)] | Oakland\n I- 580 | [(-122.1543,37.703),(-122.1535,37.694),(-122.1512,37.655),(-122.1475,37.603),(-122.1468,37.583),(-122.1472,37.569),(-122.149044,37.54874),(-122.1493,37.546),(-122.1501,37.532),(-122.1506,37.509),(-122.1495,37.482),(-122.1487,37.467),(-122.1477,37.447),(-122.1414,37.383),(-122.1404,37.376),(-122.1398,37.372),(-122.139,37.356),(-122.1388,37.353),(-122.1385,37.34),(-122.1382,37.33),(-122.1378,37.316)] | Berkeley\n I- 580 | [(-122.2197,37.99),(-122.22,37.99),(-122.222092,37.99523),(-122.2232,37.998),(-122.224146,37.99963),(-122.2261,37.003),(-122.2278,37.007),(-122.2302,37.026),(-122.2323,37.043),(-122.2344,37.059),(-122.235405,37.06427),(-122.2365,37.07)] | Berkeley\n+ I- 580 | [(-122.2197,37.99),(-122.22,37.99),(-122.222092,37.99523),(-122.2232,37.998),(-122.224146,37.99963),(-122.2261,37.003),(-122.2278,37.007),(-122.2302,37.026),(-122.2323,37.043),(-122.2344,37.059),(-122.235405,37.06427),(-122.2365,37.07)] | Lafayette\n+ I- 580 Ramp | [(-121.8521,37.011),(-121.8479,37.999),(-121.8476,37.999),(-121.8456,37.01),(-121.8455,37.011)] | Oakland\n+ I- 580 Ramp | [(-121.8521,37.011),(-121.8479,37.999),(-121.8476,37.999),(-121.8456,37.01),(-121.8455,37.011)] | Oakland\n+ I- 580 Ramp | [(-121.8743,37.014),(-121.8722,37.999),(-121.8714,37.999)] | Oakland\n+ I- 580 Ramp | [(-121.8743,37.014),(-121.8722,37.999),(-121.8714,37.999)] | Oakland\n+ I- 580 Ramp | [(-121.9043,37.998),(-121.9036,37.013),(-121.902632,37.0174),(-121.9025,37.018)] | Oakland\n+ I- 580 Ramp | [(-121.9043,37.998),(-121.9036,37.013),(-121.902632,37.0174),(-121.9025,37.018)] | Oakland\n+ I- 580 
Ramp | [(-121.9368,37.986),(-121.936483,37.98832),(-121.9353,37.997),(-121.93504,37.00035),(-121.9346,37.006),(-121.933764,37.00031),(-121.9333,37.997),(-121.9322,37.989)] | Oakland\n+ I- 580 Ramp | [(-121.9368,37.986),(-121.936483,37.98832),(-121.9353,37.997),(-121.93504,37.00035),(-121.9346,37.006),(-121.933764,37.00031),(-121.9333,37.997),(-121.9322,37.989)] | Oakland\n I- 580 Ramp | [(-122.093241,37.90351),(-122.09364,37.89634),(-122.093788,37.89212)] | Berkeley\n I- 580 Ramp | [(-122.0934,37.896),(-122.09257,37.89961),(-122.0911,37.906)] | Berkeley\n I- 580 Ramp | [(-122.0941,37.897),(-122.0943,37.902)] | Berkeley\n I- 580 Ramp | [(-122.096,37.888),(-122.0962,37.891),(-122.0964,37.9)] | Berkeley\n I- 580 Ramp | [(-122.101,37.898),(-122.1005,37.902),(-122.0989,37.911)] | Berkeley\n+ I- 580 Ramp | [(-122.1086,37.003),(-122.1068,37.993),(-122.1066,37.992),(-122.1053,37.982)] | Oakland\n I- 580 Ramp | [(-122.1086,37.003),(-122.1068,37.993),(-122.1066,37.992),(-122.1053,37.982)] | Berkeley\n+ I- 580 Ramp | [(-122.1414,37.383),(-122.1407,37.376),(-122.1403,37.372),(-122.139,37.356)] | Oakland\n+ I- 580/I-680 Ramp | ((-121.9207,37.988),(-121.9192,37.016)) | Oakland\n+ I- 580/I-680 Ramp | ((-121.9207,37.988),(-121.9192,37.016)) | Oakland\n+ I- 680 | ((-121.939,37.15),(-121.9387,37.145),(-121.9373,37.125),(-121.934242,37.07643),(-121.933886,37.0709),(-121.9337,37.068),(-121.933122,37.06139),(-121.932736,37.05698),(-121.93222,37.05108),(-121.931844,37.04678),(-121.930113,37.027),(-121.926829,37),(-121.9265,37.998),(-121.9217,37.96),(-121.9203,37.949),(-121.9184,37.934)) | Oakland\n+ I- 680 | ((-121.939,37.15),(-121.9387,37.145),(-121.9373,37.125),(-121.934242,37.07643),(-121.933886,37.0709),(-121.9337,37.068),(-121.933122,37.06139),(-121.932736,37.05698),(-121.93222,37.05108),(-121.931844,37.04678),(-121.930113,37.027),(-121.926829,37),(-121.9265,37.998),(-121.9217,37.96),(-121.9203,37.949),(-121.9184,37.934)) | Oakland\n+ I- 680 | 
[(-121.9101,37.715),(-121.911269,37.74682),(-121.9119,37.764),(-121.9124,37.776),(-121.9174,37.905),(-121.9194,37.957),(-121.9207,37.988)] | Oakland\n+ I- 680 | [(-121.9184,37.934),(-121.917,37.913),(-121.9122,37.83),(-121.9052,37.702)] | Oakland\n+ I- 680 Ramp | [(-121.8833,37.376),(-121.8833,37.392),(-121.883,37.4),(-121.8835,37.402),(-121.8852,37.422)] | Oakland\n+ I- 680 Ramp | [(-121.8833,37.376),(-121.8833,37.392),(-121.883,37.4),(-121.8835,37.402),(-121.8852,37.422)] | Oakland\n+ I- 680 Ramp | [(-121.92,37.438),(-121.9218,37.424),(-121.9238,37.408),(-121.9252,37.392)] | Oakland\n+ I- 680 Ramp | [(-121.92,37.438),(-121.9218,37.424),(-121.9238,37.408),(-121.9252,37.392)] | Oakland\n+ I- 680 Ramp | [(-121.9238,37.402),(-121.9234,37.395),(-121.923,37.399)] | Oakland\n+ I- 680 Ramp | [(-121.9238,37.402),(-121.9234,37.395),(-121.923,37.399)] | Oakland\n+ I- 80 | ((-122.2937,37.277),(-122.3016,37.262)) | Lafayette\n+ I- 80 | ((-122.2962,37.273),(-122.3004,37.264)) | Lafayette\n+ I- 80 Ramp | [(-122.2962,37.413),(-122.2959,37.382),(-122.2951,37.372)] | Lafayette\n+ I- 880 | ((-121.9669,37.075),(-121.9663,37.071),(-121.9656,37.065),(-121.9618,37.037),(-121.95689,37),(-121.948,37.933)) | Oakland\n+ I- 880 | ((-121.9669,37.075),(-121.9663,37.071),(-121.9656,37.065),(-121.9618,37.037),(-121.95689,37),(-121.948,37.933)) | Oakland\n+ I- 880 | [(-121.948,37.933),(-121.9471,37.925),(-121.9467,37.923),(-121.946,37.918),(-121.9452,37.912),(-121.937,37.852)] | Oakland\n+ I- 880 | [(-122.0219,37.466),(-122.0205,37.447),(-122.020331,37.44447),(-122.020008,37.43962),(-122.0195,37.432),(-122.0193,37.429),(-122.0164,37.393),(-122.010219,37.34771),(-122.0041,37.313)] | Oakland\n+ I- 880 | [(-122.0375,37.632),(-122.0359,37.619),(-122.0358,37.616),(-122.034514,37.60409),(-122.031876,37.57965),(-122.031193,37.57332),(-122.03016,37.56375),(-122.02943,37.55698),(-122.028689,37.54929),(-122.027833,37.53908),(-122.025979,37.51698),(-122.0238,37.491)] | Oakland\n I- 880 | 
[(-122.0375,37.632),(-122.0359,37.619),(-122.0358,37.616),(-122.034514,37.60409),(-122.031876,37.57965),(-122.031193,37.57332),(-122.03016,37.56375),(-122.02943,37.55698),(-122.028689,37.54929),(-122.027833,37.53908),(-122.025979,37.51698),(-122.0238,37.491)] | Berkeley\n+ I- 880 | [(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Oakland\n+ I- 880 | [(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Oakland\n I- 880 | [(-122.0612,37.003),(-122.0604,37.991),(-122.0596,37.982),(-122.0585,37.967),(-122.0583,37.961),(-122.0553,37.918),(-122.053635,37.89475),(-122.050759,37.8546),(-122.05,37.844),(-122.0485,37.817),(-122.0483,37.813),(-122.0482,37.811)] | Berkeley\n+ I- 880 | [(-122.0831,37.312),(-122.0819,37.296),(-122.081,37.285),(-122.0786,37.248),(-122.078,37.24),(-122.077642,37.23496),(-122.076983,37.22567),(-122.076599,37.22026),(-122.076229,37.21505),(-122.0758,37.209)] | Oakland\n+ I- 880 | [(-122.0978,37.528),(-122.096,37.496),(-122.0931,37.453),(-122.09277,37.4496),(-122.090189,37.41442),(-122.0896,37.405),(-122.085,37.34)] | Oakland\n I- 880 | [(-122.1365,37.902),(-122.1358,37.898),(-122.1333,37.881),(-122.1323,37.874),(-122.1311,37.866),(-122.1308,37.865),(-122.1307,37.864),(-122.1289,37.851),(-122.1277,37.843),(-122.1264,37.834),(-122.1231,37.812),(-122.1165,37.766),(-122.1104,37.72),(-122.109695,37.71094),(-122.109,37.702),(-122.108312,37.69168),(-122.1076,37.681)] | Berkeley\n+ I- 880 | 
[(-122.1755,37.185),(-122.1747,37.178),(-122.1742,37.173),(-122.1692,37.126),(-122.167792,37.11594),(-122.16757,37.11435),(-122.1671,37.111),(-122.1655,37.1),(-122.165169,37.09811),(-122.1641,37.092),(-122.1596,37.061),(-122.158381,37.05275),(-122.155991,37.03657),(-122.1531,37.017),(-122.1478,37.98),(-122.1407,37.932),(-122.1394,37.924),(-122.1389,37.92),(-122.1376,37.91)] | Oakland\n I- 880 | [(-122.1755,37.185),(-122.1747,37.178),(-122.1742,37.173),(-122.1692,37.126),(-122.167792,37.11594),(-122.16757,37.11435),(-122.1671,37.111),(-122.1655,37.1),(-122.165169,37.09811),(-122.1641,37.092),(-122.1596,37.061),(-122.158381,37.05275),(-122.155991,37.03657),(-122.1531,37.017),(-122.1478,37.98),(-122.1407,37.932),(-122.1394,37.924),(-122.1389,37.92),(-122.1376,37.91)] | Berkeley\n I- 880 | [(-122.2214,37.711),(-122.2202,37.699),(-122.2199,37.695),(-122.219,37.682),(-122.2184,37.672),(-122.2173,37.652),(-122.2159,37.638),(-122.2144,37.616),(-122.2138,37.612),(-122.2135,37.609),(-122.212,37.592),(-122.2116,37.586),(-122.2111,37.581)] | Berkeley\n I- 880 | [(-122.2707,37.975),(-122.2693,37.972),(-122.2681,37.966),(-122.267,37.962),(-122.2659,37.957),(-122.2648,37.952),(-122.2636,37.946),(-122.2625,37.935),(-122.2617,37.927),(-122.2607,37.921),(-122.2593,37.916),(-122.258,37.911),(-122.2536,37.898),(-122.2432,37.858),(-122.2408,37.845),(-122.2386,37.827),(-122.2374,37.811)] | Berkeley\n+ I- 880 Ramp | [(-122.0019,37.301),(-122.002,37.293)] | Oakland\n+ I- 880 Ramp | [(-122.0041,37.313),(-122.0018,37.315),(-122.0007,37.315),(-122.0005,37.313),(-122.0002,37.308),(-121.9995,37.289)] | Oakland\n+ I- 880 Ramp | [(-122.0041,37.313),(-122.0038,37.308),(-122.0039,37.284),(-122.0013,37.287),(-121.9995,37.289)] | Oakland\n+ I- 880 Ramp | [(-122.0236,37.488),(-122.0231,37.458),(-122.0227,37.458),(-122.0223,37.452),(-122.0205,37.447)] | Oakland\n+ I- 880 Ramp | [(-122.0238,37.491),(-122.0215,37.483),(-122.0211,37.477),(-122.0205,37.447)] | Oakland\n+ I- 880 Ramp | 
[(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Oakland\n+ I- 880 Ramp | [(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Oakland\n I- 880 Ramp | [(-122.059,37.982),(-122.0577,37.984),(-122.0612,37.003)] | Berkeley\n+ I- 880 Ramp | [(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Oakland\n+ I- 880 Ramp | [(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Oakland\n I- 880 Ramp | [(-122.0618,37.011),(-122.0631,37.982),(-122.0585,37.967)] | Berkeley\n+ I- 880 Ramp | [(-122.085,37.34),(-122.0801,37.316),(-122.081,37.285)] | Oakland\n+ I- 880 Ramp | [(-122.085,37.34),(-122.0801,37.316),(-122.081,37.285)] | Oakland\n+ I- 880 Ramp | [(-122.085,37.34),(-122.0866,37.316),(-122.0819,37.296)] | Oakland\n+ I- 880 Ramp | [(-122.085,37.34),(-122.0866,37.316),(-122.0819,37.296)] | Oakland\n I- 880 Ramp | [(-122.1029,37.61),(-122.1013,37.587),(-122.0999,37.569)] | Berkeley\n I- 880 Ramp | [(-122.1379,37.891),(-122.1383,37.897),(-122.1377,37.902)] | Berkeley\n I- 880 Ramp | [(-122.1379,37.931),(-122.137597,37.92736),(-122.1374,37.925),(-122.1373,37.924),(-122.1369,37.914),(-122.1358,37.905),(-122.1365,37.908),(-122.1358,37.898)] | Berkeley\n I- 880 Ramp | [(-122.2536,37.898),(-122.254,37.902)] | Berkeley\n+ I- 880 Ramp | [(-122.2771,37.002),(-122.278,37)] | Lafayette\n+ Indian Way | [(-122.2066,37.398),(-122.2045,37.411)] | Lafayette\n Jackson St | [(-122.0845,37.6),(-122.0842,37.606)] | Berkeley\n+ Johnson Dr | [(-121.9145,37.901),(-121.915,37.877)] | Oakland\n Joyce St | [(-122.0792,37.604),(-122.0774,37.581)] | Berkeley\n+ Juniper St | [(-121.7823,37.897),(-121.7815,37.9)] | Oakland\n+ Kaiser Dr | [(-122.067163,37.47821),(-122.060402,37.51961)] | Oakland\n Keeler Ave | [(-122.2578,37.906),(-122.2579,37.899)] | Berkeley\n+ Kildare Road | [(-122.0968,37.016),(-122.0959,37)] | Oakland\n+ La Playa Dr | [(-122.1039,37.545),(-122.101,37.493)] | Oakland\n Laguna Ave | [(-122.2099,37.989),(-122.2089,37)] | Berkeley\n+ Laguna Ave | 
[(-122.2099,37.989),(-122.2089,37)] | Lafayette\n Lakehurst Cir | [(-122.284729,37.89025),(-122.286096,37.90364)] | Berkeley\n Lakeshore Ave | [(-122.2586,37.99),(-122.2556,37.006)] | Berkeley\n+ Lakeshore Ave | [(-122.2586,37.99),(-122.2556,37.006)] | Lafayette\n+ Las Positas Road | [(-121.764488,37.99199),(-121.75569,37.02022)] | Oakland\n+ Las Positas Road | [(-121.764488,37.99199),(-121.75569,37.02022)] | Oakland\n Linden St | [(-122.2867,37.998),(-122.2864,37.008)] | Berkeley\n+ Linden St | [(-122.2867,37.998),(-122.2864,37.008)] | Lafayette\n+ Livermore Ave | [(-121.7687,37.448),(-121.769,37.375)] | Oakland\n+ Livermore Ave | [(-121.7687,37.448),(-121.769,37.375)] | Oakland\n+ Livermore Ave | [(-121.772719,37.99085),(-121.7728,37.001)] | Oakland\n+ Livermore Ave | [(-121.772719,37.99085),(-121.7728,37.001)] | Oakland\n+ Locust St | [(-122.1606,37.007),(-122.1593,37.987)] | Oakland\n Locust St | [(-122.1606,37.007),(-122.1593,37.987)] | Berkeley\n+ Logan Ct | [(-122.0053,37.492),(-122.0061,37.484)] | Oakland\n+ Magnolia St | [(-122.0971,37.5),(-122.0962,37.484)] | Oakland\n+ Mandalay Road | [(-122.2322,37.397),(-122.2321,37.403)] | Lafayette\n Marin Ave | [(-122.2741,37.894),(-122.272,37.901)] | Berkeley\n Martin Luther King Jr Way | [(-122.2712,37.608),(-122.2711,37.599)] | Berkeley\n+ Mattos Dr | [(-122.0005,37.502),(-122.000898,37.49683)] | Oakland\n+ Maubert Ave | [(-122.1114,37.009),(-122.1096,37.995)] | Oakland\n Maubert Ave | [(-122.1114,37.009),(-122.1096,37.995)] | Berkeley\n+ McClure Ave | [(-122.1431,37.001),(-122.1436,37.998)] | Oakland\n McClure Ave | [(-122.1431,37.001),(-122.1436,37.998)] | Berkeley\n+ Medlar Dr | [(-122.0627,37.378),(-122.0625,37.375)] | Oakland\n+ Mildred Ct | [(-122.0002,37.388),(-121.9998,37.386)] | Oakland\n Miller Road | [(-122.0902,37.645),(-122.0865,37.545)] | Berkeley\n+ Miramar Ave | [(-122.1009,37.025),(-122.099089,37.03209)] | Oakland\n+ Mission Blvd | [(-121.918886,37),(-121.9194,37.976),(-121.9198,37.975)] | 
Oakland\n+ Mission Blvd | [(-121.918886,37),(-121.9194,37.976),(-121.9198,37.975)] | Oakland\n+ Mission Blvd | [(-122.0006,37.896),(-121.9989,37.88)] | Oakland\n Mission Blvd | [(-122.0006,37.896),(-121.9989,37.88)] | Berkeley\n+ Moores Ave | [(-122.0087,37.301),(-122.0094,37.292)] | Oakland\n+ National Ave | [(-122.1192,37.5),(-122.1281,37.489)] | Oakland\n+ Navajo Ct | [(-121.8779,37.901),(-121.8783,37.9)] | Oakland\n+ Newark Blvd | [(-122.0352,37.438),(-122.0341,37.423)] | Oakland\n Oakland Inner Harbor | [(-122.2625,37.913),(-122.260016,37.89484)] | Berkeley\n+ Oakridge Road | [(-121.8316,37.049),(-121.828382,37)] | Oakland\n Oneil Ave | [(-122.076754,37.62476),(-122.0745,37.595)] | Berkeley\n Parkridge Dr | [(-122.1438,37.884),(-122.1428,37.9)] | Berkeley\n Parkside Dr | [(-122.0475,37.603),(-122.0443,37.596)] | Berkeley\n+ Paseo Padre Pkwy | [(-121.9143,37.005),(-121.913522,37)] | Oakland\n+ Paseo Padre Pkwy | [(-122.0021,37.639),(-121.996,37.628)] | Oakland\n Paseo Padre Pkwy | [(-122.0021,37.639),(-121.996,37.628)] | Berkeley\n Pearl St | [(-122.2383,37.594),(-122.2366,37.615)] | Berkeley\n+ Periwinkle Road | [(-122.0451,37.301),(-122.044758,37.29844)] | Oakland\n+ Pimlico Dr | [(-121.8616,37.998),(-121.8618,37.008)] | Oakland\n+ Pimlico Dr | [(-121.8616,37.998),(-121.8618,37.008)] | Oakland\n+ Portsmouth Ave | [(-122.1064,37.315),(-122.1064,37.308)] | Oakland\n+ Proctor Ave | [(-122.2267,37.406),(-122.2251,37.386)] | Lafayette\n+ Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Oakland\n+ Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Oakland\n Railroad Ave | [(-122.0245,37.013),(-122.0234,37.003),(-122.0223,37.993)] | Berkeley\n+ Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | Oakland\n+ Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | Oakland\n Ranspot Dr | [(-122.0972,37.999),(-122.0959,37)] | Berkeley\n Redding St | [(-122.1978,37.901),(-122.1975,37.895)] | Berkeley\n+ Redwood Road | 
[(-122.1493,37.98),(-122.1437,37.001)] | Oakland\n Redwood Road | [(-122.1493,37.98),(-122.1437,37.001)] | Berkeley\n Roca Dr | [(-122.0335,37.609),(-122.0314,37.599)] | Berkeley\n+ Rosedale Ct | [(-121.9232,37.9),(-121.924,37.897)] | Oakland\n Sacramento St | [(-122.2799,37.606),(-122.2797,37.597)] | Berkeley\n Saddle Brook Dr | [(-122.1478,37.909),(-122.1454,37.904),(-122.1451,37.888)] | Berkeley\n+ Saginaw Ct | [(-121.8803,37.898),(-121.8806,37.901)] | Oakland\n San Andreas Dr | [(-122.0609,37.9),(-122.0614,37.895)] | Berkeley\n+ Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Oakland\n+ Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Oakland\n Santa Maria Ave | [(-122.0773,37),(-122.0773,37.98)] | Berkeley\n Shattuck Ave | [(-122.2686,37.904),(-122.2686,37.897)] | Berkeley\n+ Sheridan Road | [(-122.2279,37.425),(-122.2253,37.411),(-122.2223,37.377)] | Lafayette\n Shoreline Dr | [(-122.2657,37.603),(-122.2648,37.6)] | Berkeley\n+ Skyline Blvd | [(-122.1738,37.01),(-122.1714,37.996)] | Oakland\n Skyline Blvd | [(-122.1738,37.01),(-122.1714,37.996)] | Berkeley\n+ Skyline Dr | [(-122.0277,37.5),(-122.0284,37.498)] | Oakland\n Skywest Dr | [(-122.1161,37.62),(-122.1123,37.586)] | Berkeley\n Southern Pacific Railroad | [(-122.3002,37.674),(-122.2999,37.661)] | Berkeley\n+ Sp Railroad | [(-121.893564,37.99009),(-121.897,37.016)] | Oakland\n+ Sp Railroad | [(-121.893564,37.99009),(-121.897,37.016)] | Oakland\n+ Sp Railroad | [(-121.9565,37.898),(-121.9562,37.9)] | Oakland\n+ Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Oakland\n+ Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Oakland\n Sp Railroad | [(-122.0734,37.001),(-122.0734,37.997)] | Berkeley\n Sp Railroad | [(-122.0914,37.601),(-122.087,37.56),(-122.086408,37.5551)] | Berkeley\n+ Sp Railroad | [(-122.137792,37.003),(-122.1365,37.992),(-122.131257,37.94612)] | Oakland\n Sp Railroad | [(-122.137792,37.003),(-122.1365,37.992),(-122.131257,37.94612)] | Berkeley\n+ Sp Railroad | 
[(-122.1947,37.497),(-122.193328,37.4848)] | Oakland\n+ Stanton Ave | [(-122.100392,37.0697),(-122.099513,37.06052)] | Oakland\n State Hwy 123 | [(-122.3004,37.986),(-122.2998,37.969),(-122.2995,37.962),(-122.2992,37.952),(-122.299,37.942),(-122.2987,37.935),(-122.2984,37.924),(-122.2982,37.92),(-122.2976,37.904),(-122.297,37.88),(-122.2966,37.869),(-122.2959,37.848),(-122.2961,37.843)] | Berkeley\n State Hwy 13 | [(-122.1797,37.943),(-122.179871,37.91849),(-122.18,37.9),(-122.179023,37.86615),(-122.1787,37.862),(-122.1781,37.851),(-122.1777,37.845),(-122.1773,37.839),(-122.177,37.833)] | Berkeley\n+ State Hwy 13 | [(-122.2049,37.2),(-122.20328,37.17975),(-122.1989,37.125),(-122.198078,37.11641),(-122.1975,37.11)] | Lafayette\n+ State Hwy 13 Ramp | [(-122.2244,37.427),(-122.223,37.414),(-122.2214,37.396),(-122.2213,37.388)] | Lafayette\n State Hwy 238 | ((-122.098,37.908),(-122.0983,37.907),(-122.099,37.905),(-122.101,37.898),(-122.101535,37.89711),(-122.103173,37.89438),(-122.1046,37.892),(-122.106,37.89)) | Berkeley\n State Hwy 238 Ramp | [(-122.1288,37.9),(-122.1293,37.895),(-122.1296,37.906)] | Berkeley\n+ State Hwy 24 | [(-122.2674,37.246),(-122.2673,37.248),(-122.267,37.261),(-122.2668,37.271),(-122.2663,37.298),(-122.2659,37.315),(-122.2655,37.336),(-122.265007,37.35882),(-122.264443,37.37286),(-122.2641,37.381),(-122.2638,37.388),(-122.2631,37.396),(-122.2617,37.405),(-122.2615,37.407),(-122.2605,37.412)] | Lafayette\n+ State Hwy 84 | [(-121.9565,37.898),(-121.956589,37.89911),(-121.9569,37.903),(-121.956,37.91),(-121.9553,37.919)] | Oakland\n+ State Hwy 84 | [(-122.0671,37.426),(-122.07,37.402),(-122.074,37.37),(-122.0773,37.338)] | Oakland\n+ State Hwy 92 | [(-122.1085,37.326),(-122.1095,37.322),(-122.1111,37.316),(-122.1119,37.313),(-122.1125,37.311),(-122.1131,37.308),(-122.1167,37.292),(-122.1187,37.285),(-122.12,37.28)] | Oakland\n+ State Hwy 92 Ramp | [(-122.1086,37.321),(-122.1089,37.315),(-122.1111,37.316)] | Oakland\n Stuart St | 
[(-122.2518,37.6),(-122.2507,37.601),(-122.2491,37.606)] | Berkeley\n+ Sunol Ridge Trl | [(-121.9419,37.455),(-121.9345,37.38)] | Oakland\n+ Sunol Ridge Trl | [(-121.9419,37.455),(-121.9345,37.38)] | Oakland\n+ Tassajara Creek | [(-121.87866,37.98898),(-121.8782,37.015)] | Oakland\n+ Tassajara Creek | [(-121.87866,37.98898),(-121.8782,37.015)] | Oakland\n+ Taurus Ave | [(-122.2159,37.416),(-122.2128,37.389)] | Lafayette\n+ Tennyson Road | [(-122.0891,37.317),(-122.0927,37.317)] | Oakland\n+ Thackeray Ave | [(-122.072,37.305),(-122.0715,37.298)] | Oakland\n+ Theresa Way | [(-121.7289,37.906),(-121.728,37.899)] | Oakland\n+ Tissiack Way | [(-121.920364,37),(-121.9208,37.995)] | Oakland\n+ Tissiack Way | [(-121.920364,37),(-121.9208,37.995)] | Oakland\n Tupelo Ter | [(-122.059087,37.6113),(-122.057021,37.59942)] | Berkeley\n+ Vallecitos Road | [(-121.8699,37.916),(-121.8703,37.891)] | Oakland\n+ Warm Springs Blvd | [(-121.933956,37),(-121.9343,37.97)] | Oakland\n+ Warm Springs Blvd | [(-121.933956,37),(-121.9343,37.97)] | Oakland\n+ Welch Creek Road | [(-121.7695,37.386),(-121.7737,37.413)] | Oakland\n+ Welch Creek Road | [(-121.7695,37.386),(-121.7737,37.413)] | Oakland\n West Loop Road | [(-122.0576,37.604),(-122.0602,37.586)] | Berkeley\n+ Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Oakland\n+ Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Oakland\n Western Pacific Railroad Spur | [(-122.0394,37.018),(-122.0394,37.961)] | Berkeley\n+ Whitlock Creek | [(-121.74683,37.91276),(-121.733107,37)] | Oakland\n+ Whitlock Creek | [(-121.74683,37.91276),(-121.733107,37)] | Oakland\n+ Willimet Way | [(-122.0964,37.517),(-122.0949,37.493)] | Oakland\n+ Wisconsin St | [(-122.1994,37.017),(-122.1975,37.998),(-122.1971,37.994)] | Oakland\n Wisconsin St | [(-122.1994,37.017),(-122.1975,37.998),(-122.1971,37.994)] | Berkeley\n Wp Railroad | [(-122.254,37.902),(-122.2506,37.891)] | Berkeley\n+ 100th Ave | 
[(-122.1657,37.429),(-122.1647,37.432)] | Oakland\n+ 107th Ave | [(-122.1555,37.403),(-122.1531,37.41)] | Oakland\n+ 14th St | [(-122.299,37.147),(-122.3,37.148)] | Lafayette\n 19th Ave | [(-122.2366,37.897),(-122.2359,37.905)] | Berkeley\n+ 1st St | [(-121.75508,37.89294),(-121.753581,37.90031)] | Oakland\n+ 5th St | [(-122.278,37),(-122.2792,37.005),(-122.2803,37.009)] | Lafayette\n 5th St | [(-122.296,37.615),(-122.2953,37.598)] | Berkeley\n 82nd Ave | [(-122.1695,37.596),(-122.1681,37.603)] | Berkeley\n! 85th Ave | [(-122.1877,37.466),(-122.186,37.476)] | Oakland\n! 89th Ave | [(-122.1822,37.459),(-122.1803,37.471)] | Oakland\n! 98th Ave | [(-122.1568,37.498),(-122.1558,37.502)] | Oakland\n! 98th Ave | [(-122.1693,37.438),(-122.1682,37.444)] | Oakland\n 98th Ave | [(-122.2001,37.258),(-122.1974,37.27)] | Lafayette\n (333 rows)\n \n\n======================================================================\n\n*** ./expected/alter_table.out\tSat Nov 3 22:08:11 2001\n--- ./results/alter_table.out\tSat Dec 1 21:26:15 2001\n***************\n*** 98,128 ****\n -- it might not. Therefore, vacuum first.\n --\n VACUUM ANALYZE tenk1;\n ALTER TABLE tenk1 RENAME TO ten_k;\n -- 20 values, sorted \n SELECT unique1 FROM ten_k WHERE unique1 < 20;\n unique1 \n ---------\n! 0\n! 1\n! 2\n! 3\n 4\n! 5\n 6\n! 7\n! 8\n 9\n! 10\n! 11\n! 12\n 13\n! 14\n! 15\n! 16\n! 17\n! 18\n 19\n (20 rows)\n \n -- 20 values, sorted \n--- 98,129 ----\n -- it might not. Therefore, vacuum first.\n --\n VACUUM ANALYZE tenk1;\n+ ERROR: PGSTAT: Creation of DB hash table failed\n ALTER TABLE tenk1 RENAME TO ten_k;\n -- 20 values, sorted \n SELECT unique1 FROM ten_k WHERE unique1 < 20;\n unique1 \n ---------\n! 18\n! 15\n 4\n! 2\n! 1\n 6\n! 14\n 9\n! 8\n! 5\n! 3\n 13\n! 12\n 19\n+ 17\n+ 11\n+ 7\n+ 10\n+ 16\n+ 0\n (20 rows)\n \n -- 20 values, sorted \n***************\n*** 262,272 ****\n SELECT unique1 FROM tenk1 WHERE unique1 < 5;\n unique1 \n ---------\n! 0\n! 1\n 2\n 3\n! 
4\n (5 rows)\n \n -- FOREIGN KEY CONSTRAINT adding TEST\n--- 263,273 ----\n SELECT unique1 FROM tenk1 WHERE unique1 < 5;\n unique1 \n ---------\n! 4\n 2\n+ 1\n 3\n! 0\n (5 rows)\n \n -- FOREIGN KEY CONSTRAINT adding TEST\n\n======================================================================", "msg_date": "Sat, 01 Dec 2001 21:09:51 -0500", "msg_from": "Mark Knox <segfault@hardline.org>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Tom Lane writes:\n\n> \"Cyril VELTER\" <cyril.velter@libertysurf.fr> writes:\n> > What modification should be made to configure.in to make it include\n> > SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n>\n> This looks like a bit of a pain. We're currently using AC_CHECK_SIZEOF\n> to make those probes, apparently because it's the closest standard macro\n> to what we want. But it's not close enough. It doesn't include\n> anything except <stdio.h>. I don't think we can fix this except by\n> making our own macro. 
Peter, any opinion about how to do it?\n\nOK, I bit the bullet and made up a whole new macro for type testing.\nThose who can't get at a new snapshot can try the attached patch.\n\nI have included <stdio.h> because that appears to effect the definition of\nthe types in question on AIX; apparently it pulls in <inttypes.h> somehow.\n\nPlease check this on AIX and BeOS.\n\n-- \nPeter Eisentraut peter_e@gmx.net", "msg_date": "Sun, 2 Dec 2001 12:46:31 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "\n NetBSD/sparc Matthew Green (left off the first list)\n\n\n`gmake check' gets:\n\n\t======================\n\t All 79 tests passed.\n\t======================\n\non both netbsd/sparc & netbsd/sparc64.\n\n\n.mrg.\n", "msg_date": "Mon, 03 Dec 2001 02:32:15 +1100", "msg_from": "matthew green <mrg@eterna.com.au>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "> `gmake check' gets:\n> All 79 tests passed.\n> on both netbsd/sparc & netbsd/sparc64.\n\nGreat. Thanks!\n\n - Thomas\n", "msg_date": "Mon, 03 Dec 2001 17:14:21 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> >Linux/arm Mark Knox\n> Had a look at 7.2b3 and sadly it's failing several tests. I saw several\n> \"ERROR: PGSTAT: Creation of DB hash table failed\" which I haven't seen before.\n> Geometry fails as usual due to some minor rounding. The others are\n> completely wrong. I'm afraid I don't have much time to look at this right\n> now (sorry) but I've attached the regression output and diffs if anyone\n> wants to check them.\n\nMost (or at least some; didn't count them up) of these look like\nordering differences on results which are not guaranteed to be ordered,\nso are OK. 
\n\nThe DB hash table trouble looks significant, but I don't know about that\narea of the code.\n\nAnyone?\n\n - Thomas\n", "msg_date": "Mon, 03 Dec 2001 17:42:51 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I tried 7.2b3 on SunOS 4.1.4 with gcc 2.7.1. It gives a comile error:\n\nI've added a configure test for sig_atomic_t, if you want to try again\nwith CVS tip.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 12:52:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Mark Knox <segfault@hardline.org> writes:\n> Had a look at 7.2b3 and sadly it's failing several tests. I saw several \n> \"ERROR: PGSTAT: Creation of DB hash table failed\" which I haven't seen before.\n\nUgh. If you don't have time to dig into that, can you provide a login\non your machine for someone else to look at it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 13:36:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "Mark Knox <segfault@hardline.org> writes:\n>> Had a look at 7.2b3 and sadly it's failing several tests. I saw several \n>> \"ERROR: PGSTAT: Creation of DB hash table failed\" which I haven't seen before.\n\nThat error is coming from the following ugly coding:\n\n *dbhash = hash_create(\"Databases hash\", PGSTAT_DB_HASH_SIZE, &hash_ctl,\n HASH_ELEM | HASH_FUNCTION | mcxt_flags);\n if (pgStatDBHash == NULL)\n\t/* raise error */\n\nAFAICT dbhash always points at the static variable pgStatDBHash, so the\ncode is not quite incorrect, though it's certainly trouble waiting to\nhappen as soon as someone changes things so that dbhash might point\nelsewhere. 
What I'm wondering is if your compiler is missing the\npotential for aliasing and is emitting code that loads pgStatDBHash\nbefore the store through dbhash occurs. Does it help if you change\nthe second line (line 2094 in src/backend/postmaster/pgstat.c) to:\n\n if (*dbhash == NULL)\n\nI'm going to commit this change in CVS anyway, but I'm wondering if it\nexplains your problem or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 14:01:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "On Mon, Dec 03, 2001 at 02:01:30PM -0500, Tom Lane wrote:\n> That error is coming from the following ugly coding:\n> \n> *dbhash = hash_create(\"Databases hash\", PGSTAT_DB_HASH_SIZE, &hash_ctl,\n> HASH_ELEM | HASH_FUNCTION | mcxt_flags);\n> if (pgStatDBHash == NULL)\n> \t/* raise error */\n\nInteresting...\n\n> AFAICT dbhash always points at the static variable pgStatDBHash, so the\n> code is not quite incorrect, though it's certainly trouble waiting to\n> happen as soon as someone changes things so that dbhash might point\n> elsewhere. What I'm wondering is if your compiler is missing the\n\nPossibly.. it's not the most recent by any means, but it generally behaves\nwell. Unfortunately, nobody is maintaining the arm gcc toolchain anymore (as\nfar as I know).\n\n> before the store through dbhash occurs. Does it help if you change\n> the second line (line 2094 in src/backend/postmaster/pgstat.c) to:\n> \n> if (*dbhash == NULL)\n> \n> I'm going to commit this change in CVS anyway, but I'm wondering if it\n> explains your problem or not.\n\nI'll give it a shot later tonight and let you know. Thanks for having a\nlook.\n\n-- \n __ .--------. \n |==|| | -( Mark 'segfault' Knox )-\n |==||________|\n |::| __====__`. .'`. \"Unix *is* user-friendly.. 
it's just\n |__|/::::::::\\ ~ (_) picky about its friends.\"\n", "msg_date": "Mon, 3 Dec 2001 16:58:59 -0500", "msg_from": "Mark Knox <markk@pixin.net>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I tried 7.2b3 on SunOS 4.1.4 with gcc 2.7.1. It gives a comile error:\n> \n> I've added a configure test for sig_atomic_t, if you want to try again\n> with CVS tip.\n\nThanks. But compiling fails again:\n\nformatting.c: In function `NUM_processor':\nformatting.c:4143: invalid operands to binary +\nformatting.c:4153: invalid operands to binary +\nformatting.c: In function `float4_to_char':\nformatting.c:4667: warning: assignment makes integer from pointer without a cast\nformatting.c: In function `float8_to_char':\nformatting.c:4746: warning: assignment makes integer from pointer without a cast\ngmake[4]: *** [formatting.o] Error 1\ngmake[4]: Leaving directory `/mnt2/tmp/pgsql/src/backend/utils/adt'\ngmake[3]: *** [adt-recursive] Error 2\ngmake[3]: Leaving directory `/mnt2/tmp/pgsql/src/backend/utils'\ngmake[2]: *** [utils-recursive] Error 2\ngmake[2]: Leaving directory `/mnt2/tmp/pgsql/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/mnt2/tmp/pgsql/src'\ngmake: *** [all] Error 2\n\nIt seems the code assumes sprintf always returns an integer (this is\nnot true for SunOS). I will fix it in more portable way.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 04 Dec 2001 15:15:58 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "On Tue, Dec 04, 2001 at 03:15:58PM +0900, Tatsuo Ishii wrote:\n\n> It seems the code assumes sprintf always returns an integer (this is\n> not true for SunOS). 
I will fix it in more portable way.\n\n It's interesting OS :-) I see \"CONFORMING TO\" part of sprintf manual\n and it's ANSI C and ISO/IEC function....\n\n Is anywhere (URL) summary of difference between basic C functions in \n basic OS?\n\n Thanks for fix Tatsuo.\n\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 4 Dec 2001 10:22:20 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> >\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > I got a regression test result from Hiroshi Saito on\n> > > UNIX_System_V ews4800 4.2MP 4.2 R4000 r4000.\n> >\n> > > Seems INT64_IS_BUSTED, old PST ... etc and\n> >\n> > > *** ./expected/create_index.out Tue Aug 28 08:23:34 JST 2001\n> > > --- ./results/create_index.out Fri Nov 30 00:28:22 JST 2001\n> > > ***************\n> > > *** 35,44 ****\n> > > --- 35,47 ----\n> > > --\n> > > CREATE INDEX onek2_u1_prtl ON onek2 USING btree(unique1 int4_ops)\n> > > where unique1 < 20 or unique1 > 980;\n> > > + ERROR: AllocSetFree: cannot find block containing chunk 4f64f0\n> > > CREATE INDEX onek2_u2_prtl ON onek2 USING btree(unique2 int4_ops)\n> > > where stringu1 < 'B';\n> > > + ERROR: AllocSetFree: cannot find block containing chunk 4f6390\n> > > CREATE INDEX onek2_stu1_prtl ON onek2 USING btree(stringu1 name_ops)\n> > > where onek2.stringu1 >= 'J' and onek2.stringu1 < 'K';\n> > > + ERROR: AllocSetFree: cannot find block containing chunk 4f6740\n> >\n> > Interesting. 
Something nonportable in the partial index support,\n> > perhaps?\n> >\n> > Would you ask him to compile with debug support, set a breakpoint at\n> > elog(), and get a backtrace from the point of the error?\n> \n> Unfortunately he doesn't seem to be able to do it at once\n> though he would like to do it. If he is ready he may reply\n> to this thread directly.\n\nSorry for my late answer.\nHe has been working with the platform but hasn't yet\nidentify the cause. Unfortunately it seems too late\nfor 7.2 release.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 07 Dec 2001 18:08:32 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> He has been working with the platform but hasn't yet\n> identify the cause. Unfortunately it seems too late\n> for 7.2 release.\n\nNot necessarily ... we're still arguing what to do about the time\nprecision issue, and even if 7.2fc1 were out, I think a portability bug\nwould be cause for an update. Please encourage him to pursue it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Dec 2001 09:24:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "\n>change\n>the second line (line 2094 in src/backend/postmaster/pgstat.c) to:\n>\n> if (*dbhash == NULL)\n>\n>I'm going to commit this change in CVS anyway, but I'm wondering if it\n>explains your problem or not.\n\n\nYes it does, Tom! Nice work! I apologize for the delay in testing.. \nChristmas and all.\n\nI guess I can report a success for 7.2b3 on Linux/arm.\n\n\n __ .--------.\n |==|| | -( Mark 'segfault' Knox )-\n |==||________|\n |::| __====__`. .'`. \"Unix *is* user-friendly.. 
it's just\n |__|/::::::::\\ ~ (_) picky about its friends.\"\n\n", "msg_date": "Mon, 10 Dec 2001 22:21:55 -0500", "msg_from": "Mark Knox <segfault@hardline.org>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "> >I'm going to commit this change in CVS anyway, but I'm wondering if it\n> >explains your problem or not.\n>\n>\n> Yes it does, Tom! Nice work! I apologize for the delay in testing..\n> Christmas and all.\n>\n> I guess I can report a success for 7.2b3 on Linux/arm.\n\nI've just got hold of a FreeBSD/alpha box - hopefully I'll be able to test\nby the end of the week...\n\nChris\n\n", "msg_date": "Tue, 11 Dec 2001 12:01:40 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing " }, { "msg_contents": "> I guess I can report a success for 7.2b3 on Linux/arm.\n\nGreat! Thanks for the work; I'll update the list...\n\n - Thomas\n", "msg_date": "Tue, 11 Dec 2001 06:47:13 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" } ]
[ { "msg_contents": "Is pg_dump supposed to work against prior version databases? I was\nmoving data from an old server (running 7.2devel from several months\nago) to a new server (running 7.2b3), using pg_dump on the *new* server,\nand got the following message:\n\n# pg_dump -h 172.16.1.84 -U postgres -t el_names -i lt_lcat > el_names.dmp\nPassword: <types in password>\npg_dump: query to obtain list of tables failed: ERROR: Attribute\n'relhasoids' not found\n\nNot that I need to do this, but it was convenient and I thought it was\nsupposed to work. Is this a problem, or am I trying to do something\nunsupported?\n\nOn a side note, I was *very* happy when I was able to load a table with\n~40 million rows about 30 minutes (compared to about a day+ on the old\nhardware, Red Hat 6.2, and early 7.2devel Postgres). And that was \nwithout changing the default postgresql.conf, because I forgot to do it \nbefore I started the copy ;-). Memory usage according to 'top' never \nexceeded about 5 MB.\n\nAlso worth noting, I installed successfully from an RPM which I built\nfrom the source RPM.\n\nThank you to *everyone* involved in getting PostgreSQL to where it is \ntoday! It is truly an awesome product.\n\n-- Joe\n\n\n", "msg_date": "Wed, 28 Nov 2001 20:48:39 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> Is pg_dump supposed to work against prior version databases?\n\nSince 7.1.1 we've tried to make it do so.\n\n> # pg_dump -h 172.16.1.84 -U postgres -t el_names -i lt_lcat > el_names.dmp\n> Password: <types in password>\n> pg_dump: query to obtain list of tables failed: ERROR: Attribute\n> 'relhasoids' not found\n\n<scratches head> Odd. 
There's only one query in pg_dump that touches\nrelhasoids, and it's set up to only be used when \"remoteVersion >= 70200\".\nIt seems to work here, too: I can dump from a 7.1 or even 7.0 server\nwith current pg_dump. Would you burrow in there and see what's screwing\nup the version-check code?\n\nBTW, what happens if you leave off -i? It shouldn't be necessary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 00:36:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments " }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> Is pg_dump supposed to work against prior version databases? I was\n> moving data from an old server (running 7.2devel from several months\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> ago) to a new server (running 7.2b3), using pg_dump on the *new* server,\n> and got the following message:\n\n> # pg_dump -h 172.16.1.84 -U postgres -t el_names -i lt_lcat > el_names.dmp\n> Password: <types in password>\n> pg_dump: query to obtain list of tables failed: ERROR: Attribute\n> 'relhasoids' not found\n\nOh, never mind: I see it. pg_dump's test is on whether the server calls\nitself 7.2 or not. You've evidently got a copy from back before the\nchanges to make OIDs optional. The versioning code is not set up to\ndeal with intermediate development versions, so it gets it wrong about\nwhat query to use. You'll need to use the pg_dump of the same vintage\nas the 7.2devel server.\n\nYou're a brave man to be putting production data on CVS-tip servers.\nI wouldn't recommend it ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 00:46:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments " }, { "msg_contents": "Tom Lane wrote:\n\n> Joe Conway <joseph.conway@home.com> writes:\n> \n>>Is pg_dump supposed to work against prior version databases? 
I was\n>>moving data from an old server (running 7.2devel from several months\n>>\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n>>ago) to a new server (running 7.2b3), using pg_dump on the *new* server,\n>>and got the following message:\n>>\n> \n>># pg_dump -h 172.16.1.84 -U postgres -t el_names -i lt_lcat > el_names.dmp\n>>Password: <types in password>\n>>pg_dump: query to obtain list of tables failed: ERROR: Attribute\n>>'relhasoids' not found\n>>\n> \n> Oh, never mind: I see it. pg_dump's test is on whether the server calls\n> itself 7.2 or not. You've evidently got a copy from back before the\n> changes to make OIDs optional. The versioning code is not set up to\n> deal with intermediate development versions, so it gets it wrong about\n> what query to use. You'll need to use the pg_dump of the same vintage\n> as the 7.2devel server.\n> \n> You're a brave man to be putting production data on CVS-tip servers.\n> I wouldn't recommend it ;-)\n> \n> \t\t\tregards, tom lane\n> \n\n\nThanks, but I'm not that brave. This has been my proof-of-concept server \nfor the past 6 or so months. The new server is intended to become a \nproduction, relatively high volume data collection server on our factory \nfloor. I've finally been successful at selling our upper management on \nusing PostgreSQL in lieu of commercial databases for this purpose (we \nalready have multiple instances of brand O and brand M). In fact, given \nthe current economy, they're enthusiastic about it ;-)\n\nIn any case, I've already moved the data, just wanted to report the \npossible issue. It sounds like it shouldn't affect any but the highly \nadventurous!\n\nBTW, after your first reply, I started to load the new pg_dump into gdb \nand discovered it had no debug symbols (recall I installed from RPM). Is \nthere a way to install the RPM with additional configure options without \nrebuilding it? 
Is there any significant downside (performance or \notherwise) to having --enable-debug on a production server?\n\n-- Joe\n\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 22:17:50 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "Joe Conway <joseph.conway@home.com> writes:\n> BTW, after your first reply, I started to load the new pg_dump into gdb \n> and discovered it had no debug symbols (recall I installed from RPM). Is \n> there a way to install the RPM with additional configure options without \n> rebuilding it?\n\nDon't know; certainly you'd have to recompile, but I dunno if you have\nto modify the source RPM or not. Lamar?\n\n> Is there any significant downside (performance or \n> otherwise) to having --enable-debug on a production server?\n\nIf you're compiling with gcc then I believe the only cost is the disk\nfootprint of the debug info. On some other compilers, --enable-debug \ndisables most compiler optimizations, which can mean a significant\nspeed penalty. We currently have the following in the installation\nguide:\n\n --enable-debug\n\n Compiles all programs and libraries with debugging\n symbols. This means that you can run the programs through a\n debugger to analyze problems. This enlarges the size of the\n installed executables considerably, and on non-GCC compilers it\n usually also disables compiler optimization, causing\n slowdowns. However, having the symbols available is extremely\n helpful for dealing with any problems that may\n arise. Currently, this option is considered of marginal value\n for production installations, but you should have it on if you\n are doing development work or running a beta version.\n\n --enable-cassert\n\n Enables assertion checks in the server, which test for many\n \"can't happen\" conditions. This is invaluable for code\n development purposes, but the tests slow things down a\n little. 
Also, having the tests turned on won't necessarily\n enhance the stability of your server! The assertion checks are\n not categorized for severity, and so what might be a relatively\n harmless bug will still lead to server restarts if it triggers\n an assertion failure. Currently, this option is not\n recommended for production use, but you should have it on for\n development work or when running a beta version.\n\nPerhaps \"marginal value\" is too lukewarm an assessment, at least for\ngcc users. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 01:27:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments " }, { "msg_contents": "Tom Lane wrote:\n\n> Perhaps \"marginal value\" is too lukewarm an assessment, at least for\n> gcc users. Comments?\n> \n\nWell, if I have problems on my production machine, and I post a question \nhere, the first thing I'll be asked for is a backtrace, preferably with \nsymbols, right ;-)\n\nSo ISTM, that if there is no penalty except (a relatively trivial amount \nof) disk space with gcc, then that ought to be the preferred \nconfiguration for gcc. The \"value\" for a production installation is \nsignificant if it helps get a problem fixed faster.\n\n-- Joe\n\n", "msg_date": "Wed, 28 Nov 2001 22:42:24 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "On Thursday 29 November 2001 01:27 am, Tom Lane wrote:\n> Joe Conway <joseph.conway@home.com> writes:\n> > BTW, after your first reply, I started to load the new pg_dump into gdb\n> > and discovered it had no debug symbols (recall I installed from RPM). Is\n> > there a way to install the RPM with additional configure options without\n> > rebuilding it?\n\n> Don't know; certainly you'd have to recompile, but I dunno if you have\n> to modify the source RPM or not. Lamar?\n\nHmmm. 
You know, it would be a good idea, IMHO, to enable debugging symbols in \nthe beta RPMs anyway. So, I will do that for the next beta (or release \ncandidate -- although, with that report from Jan, I wonder what the next \nrelease will be).\n\nIn the meantime, Joe, if you can rebuild the RPM from source, here's what to \ndo:\n1.)\trpm -i the source RPM.\n2.)\tIf you've never built from a source RPM before, see \n/usr/share/docs/postgresql-7.2b3/README.rpm-dist for some more information.\n3.)\tEdit the spec file (on Red Hat, that would be in /usr/src/redhat/SPECS, \nname of 'postgresql.spec'), adding the following line near the top:\n%define __os_install_post /usr/lib/rpm/brp-compress\n4.)\tAdd the configure option. The rpm build process by default runs strip on \nthe binaries.....\n5.)\trpm -ba postgresql.spec, wait a few minutes for the build, and pick up \nyour RPMs in /usr/src/redhat/RPMS/i386.\n\nOr you can just wait until I upload a set with that line in it..... :-)\n\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 29 Nov 2001 19:59:04 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "On Thursday 29 November 2001 07:59 pm, Lamar Owen wrote:\n> Or you can just wait until I upload a set with that line in it..... :-)\n\nSaid set is now being uploaded to ftp.postgresql.org. Look for the 0.3PGDG \nbinary set in /pub/binary/beta/RPMS/redhat-7.2, and the source in \n......./SRPMS\n\nMy my, --enable-debug and disabling stripping in the build sure does inflate \nthe package size :-O\n\nFor reference: The entire binary set for 7.2b3-0.2PGDG weighs in at 7,036KB. \nThe debug-enabled 7.2b3-0.3PGDG set weighs in at 11,240KB. 
The biggest \nincrease is in the postgresql-contrib package, which increases from 1,192KB \nto 2,940KB.\n\nThis build also has --enable-cassert.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 29 Nov 2001 20:48:04 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "Lamar Owen wrote:\n\n> Hmmm. You know, it would be a good idea, IMHO, to enable debugging symbols in \n> the beta RPMs anyway. So, I will do that for the next beta (or release \n> candidate -- although, with that report from Jan, I wonder what the next \n> release will be).\n> \n> In the meantime, Joe, if you can rebuild the RPM from source, here's what to \n> do:\n> 1.)\trpm -i the source RPM.\n> 2.)\tIf you've never built from a source RPM before, see \n> /usr/share/docs/postgresql-7.2b3/README.rpm-dist for some more information.\n> 3.)\tEdit the spec file (on Red Hat, that would be in /usr/src/redhat/SPECS, \n> name of 'postgresql.spec'), adding the following line near the top:\n> %define __os_install_post /usr/lib/rpm/brp-compress\n> 4.)\tAdd the configure option. The rpm build process by default runs strip on \n> the binaries.....\n> 5.)\trpm -ba postgresql.spec, wait a few minutes for the build, and pick up \n> your RPMs in /usr/src/redhat/RPMS/i386.\n> \n> Or you can just wait until I upload a set with that line in it..... :-)\n> \n\n\nThanks for the instructions, Lamar -- that helps quite a bit. I'll give \nthis a try next week.\n\nJoe\n\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 17:50:15 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "On Friday 30 November 2001 08:50 pm, Joe Conway wrote:\n> Lamar Owen wrote:\n> > Or you can just wait until I upload a set with that line in it..... :-)\n\n> Thanks for the instructions, Lamar -- that helps quite a bit. 
I'll give\n> this a try next week.\n\nWhile you're certainly welcome to do the work yourself, you are also equally \nwelcome to download the latest RPMset, which has debugging symbols enabled, \nAFAIK.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 1 Dec 2001 21:34:30 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" }, { "msg_contents": "Lamar Owen wrote:\n\n>>Lamar Owen wrote:\n> While you're certainly welcome to do the work yourself, you are also equally \n> welcome to download the latest RPMset, which has debugging symbols enabled, \n> AFAIK.\n> \n\n\nThanks, I saw that, but I'd like to know how to do these kinds of \nadjustments to RPMs anyway. I normally prefer to install from source \ntarballs, or even CVS, but for production servers our network admins get \na warm fuzzy from RPMs. It gives them more confidence they can reproduce \nthe server setup if they need to after I hit the Lotto and retire ;-)\n\nJoe\n\n\n\n\n", "msg_date": "Sat, 01 Dec 2001 22:30:04 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: 7.2b3 pg_dump, general 7.2b3 comments" } ]
[ { "msg_contents": "In looking at some performance issues (I was trying to look at the \noverhead of toast) I found that large insert statements were very slow.\n\nMy test case involved reading a file (6M file in my tests) and inserting \n it into the database into a \"largeobject\" like table defined as follows:\n\ncreate table tblob1 (filename text, lastbyte integer, data bytea);\n\nThe first test program read the file 8000 bytes at a time and inserted \nthem into the above table until the entire file was inserted. This test \nprogram used a regular insert statement to do the inserting: (insert \ninto tblob1 values (?,?,?))\n\nFor three runs of this test the average time to insert the 6M file into \nthe database in 8000 byte rows (which ended up being 801 rows inserted \ninto the table) was: 17.803 seconds\n\nThe second test read the file in 8000 byte chunks just like the first \nprogram but it used a function to do the insert and called the function \nvia the FastPath API. The function was:\n\nCREATE FUNCTION BYTEA_WRITE (TEXT, INTEGER, BYTEA) RETURNS INTEGER\nas '\nBEGIN\n INSERT INTO TBLOB1 VALUES ($1, $2, $3);\nRETURN 1;\nEND;'\nlanguage 'plpgsql'\n\nFor three runs of this test the average time to insert the 6M file into \nthe database in 8000 byte parts was: 2.645\n\n\nThus using the insert statement was almost an order of magnitude slower \nthan using the function (17.803 sec vs. 2.645 sec).\n\nReading the data back from the server via a standard select statement \ntakes on average: 1.674 seconds.\n\nI tried to run gprof to see where the time was going, but for some \nreason the gprof output on my gmon.out file doesn't have any timing \ninformation (all times are reported as 0.0) and I haven't been able to \nfigure out why yet. So I don't know what is taking up the bulk of the \ntime (I suspect it is either the decoding of the bytea data which the \nFastpath function call avoids, or the parser which needs to parse 801 8K \nSQL statements vs. 
the function which has to parse one 100 byte statement.)\n\nI have attached the two test programs (they are in java and use jdbc) \nand a SQL script that creates the table and function.\n\nthanks,\n--Barry", "msg_date": "Wed, 28 Nov 2001 22:19:13 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Performance problem with large insert statements" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> In looking at some performance issues (I was trying to look at the \n> overhead of toast) I found that large insert statements were very slow.\n> ...\n> I tried to run gprof to see where the time was going, but for some \n> reason the gprof output on my gmon.out file doesn't have any timing \n> information (all times are reported as 0.0) and I haven't been able to \n> figure out why yet.\n\nThat seems to be a common disease among Linuxen; dunno why. gprof\nworks fine for me on HPUX. I got around to reproducing this today,\nand what I find is that the majority of the backend time is going into\nsimple scanning of the input statement:\n\nEach sample counts as 0.01 seconds.\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 31.24 11.90 11.90 _mcount\n 19.51 19.33 7.43 10097 0.74 1.06 base_yylex\n 7.48 22.18 2.85 21953666 0.00 0.00 appendStringInfoChar\n 5.88 24.42 2.24 776 2.89 2.89 pglz_compress\n 4.36 26.08 1.66 21954441 0.00 0.00 pq_getbyte\n 3.57 27.44 1.36 7852141 0.00 0.00 addlit\n 3.26 28.68 1.24 1552 0.80 0.81 scanstr\n 2.84 29.76 1.08 779 1.39 7.18 pq_getstring\n 2.31 30.64 0.88 10171 0.09 0.09 _doprnt\n 2.26 31.50 0.86 776 1.11 1.11 byteain\n 2.07 32.29 0.79 msquadloop\n 1.60 32.90 0.61 7931430 0.00 0.00 memcpy\n 1.18 33.35 0.45 chunks\n 1.08 33.76 0.41 46160 0.01 0.01 strlen\n 1.08 34.17 0.41 encore\n 1.05 34.57 0.40 8541 0.05 0.05 XLogInsert\n 0.89 34.91 0.34 appendStringInfo\n\n60% of the call graph time is accounted for by these two areas:\n\nindex % time self children called 
name\n 7.43 3.32 10097/10097 yylex [14]\n[13] 41.0 7.43 3.32 10097 base_yylex [13]\n 1.36 0.61 7852141/7852141 addlit [28]\n 1.24 0.01 1552/1552 scanstr [30]\n 0.02 0.03 3108/3108 ScanKeywordLookup [99]\n 0.00 0.02 2335/2335 yy_get_next_buffer [144]\n 0.02 0.00 776/781 strtol [155]\n 0.00 0.01 777/3920 MemoryContextStrdup [108]\n 0.00 0.00 1/1 base_yy_create_buffer [560]\n 0.00 0.00 4675/17091 isupper [617]\n 0.00 0.00 1556/1556 yy_get_previous_state [671]\n 0.00 0.00 779/779 yywrap [706]\n 0.00 0.00 1/2337 base_yy_load_buffer_state [654]\n-----------------------------------------------\n 1.08 4.51 779/779 pq_getstr [17]\n[18] 21.4 1.08 4.51 779 pq_getstring [18]\n 2.85 0.00 21953662/21953666 appendStringInfoChar [20]\n 1.66 0.00 21954441/21954441 pq_getbyte [29]\n-----------------------------------------------\n\nWhile we could probably do a little bit to speed up pg_getstring and its\nchildren, it's not clear that we can do anything about yylex, which is\nflex output code not handmade code, and is probably well-tuned already.\n\nBottom line: feeding huge strings through the lexer is slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 22:49:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance problem with large insert statements " }, { "msg_contents": "Tom,\n\nThanks for looking into this. What you found is about what I expected. \n What this means to me is that there should be a new todo item along \nthe lines:\n * allow binding query args over FE/BE protocol\n\nGenerally what makes the statement large is the values being \ninserted/updated or being specified in the where clause. If it were \npossible from the client declare a cursor with bind variables (i.e. 
$1, \n$2, ...), then later bind in the values and execute the cursor, it \nshould be possible to work around this problem.\n\nFor example the Oracle SQL*Net protocol (Oracle's equivalent to \npostgres' FE/BE) has seven distinct operations that can be done on a \ncursor from the client: open, bind, describe, parse, execute, fetch, \nclose. This perhaps is a little bit more control than you need, but you \nget the general idea.\n\nCurrently the FE/BE protocol has two modes for executing queries: 1) \nsend the entire query as a string and get back the entire result set in \none go, or 2) use cursors in sql, which still sends the entire query in \none string, but allows you to get back a subset of the results. It \nwould be nice to have even more control than either of these two options \noffer.\n\nIt seems to me that the server has all the functionality to do this, \nbecause plpgsql supports it in 7.2. It just can't be done from \nthe client.\n\nI have been contemplating creating some SPI functions that could be \ncalled from the client that would implement this type of functionality \nand then enhancing the JDBC driver to use them instead of the regular \nquery execution of the FE/BE protocol. (i.e. one function to \"declare\" \na cursor with $1 placeholders, a second function to bind values and \nexecute the cursor, and then using standard fetch sql statement to get \nresults, therefore it could be done without changing the actual \nprotocol). This could also support the client explicitly \"caching\" \ncommonly used cursors and just rebinding/reexecuting them to avoid \nhaving to reparse/replan commonly used queries.\n\nIf this all isn't too off the wall, I would like to see something along \nthese lines added to the todo list and perhaps this email thread along \nwith. 
At a minimum I would like to hear others opinions on the subject.\n\nthanks,\n--Barry\n\n\n\n\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>In looking at some performance issues (I was trying to look at the \n>>overhead of toast) I found that large insert statements were very slow.\n>>...\n>>I tried to run gprof to see where the time was going, but for some \n>>reason the gprof output on my gmon.out file doesn't have any timing \n>>information (all times are reported as 0.0) and I haven't been able to \n>>figure out why yet.\n>>\n> \n> That seems to be a common disease among Linuxen; dunno why. gprof\n> works fine for me on HPUX. I got around to reproducing this today,\n> and what I find is that the majority of the backend time is going into\n> simple scanning of the input statement:\n> \n> Each sample counts as 0.01 seconds.\n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 31.24 11.90 11.90 _mcount\n> 19.51 19.33 7.43 10097 0.74 1.06 base_yylex\n> 7.48 22.18 2.85 21953666 0.00 0.00 appendStringInfoChar\n> 5.88 24.42 2.24 776 2.89 2.89 pglz_compress\n> 4.36 26.08 1.66 21954441 0.00 0.00 pq_getbyte\n> 3.57 27.44 1.36 7852141 0.00 0.00 addlit\n> 3.26 28.68 1.24 1552 0.80 0.81 scanstr\n> 2.84 29.76 1.08 779 1.39 7.18 pq_getstring\n> 2.31 30.64 0.88 10171 0.09 0.09 _doprnt\n> 2.26 31.50 0.86 776 1.11 1.11 byteain\n> 2.07 32.29 0.79 msquadloop\n> 1.60 32.90 0.61 7931430 0.00 0.00 memcpy\n> 1.18 33.35 0.45 chunks\n> 1.08 33.76 0.41 46160 0.01 0.01 strlen\n> 1.08 34.17 0.41 encore\n> 1.05 34.57 0.40 8541 0.05 0.05 XLogInsert\n> 0.89 34.91 0.34 appendStringInfo\n> \n> 60% of the call graph time is accounted for by these two areas:\n> \n> index % time self children called name\n> 7.43 3.32 10097/10097 yylex [14]\n> [13] 41.0 7.43 3.32 10097 base_yylex [13]\n> 1.36 0.61 7852141/7852141 addlit [28]\n> 1.24 0.01 1552/1552 scanstr [30]\n> 0.02 0.03 3108/3108 ScanKeywordLookup [99]\n> 0.00 0.02 2335/2335 
yy_get_next_buffer [144]\n> 0.02 0.00 776/781 strtol [155]\n> 0.00 0.01 777/3920 MemoryContextStrdup [108]\n> 0.00 0.00 1/1 base_yy_create_buffer [560]\n> 0.00 0.00 4675/17091 isupper [617]\n> 0.00 0.00 1556/1556 yy_get_previous_state [671]\n> 0.00 0.00 779/779 yywrap [706]\n> 0.00 0.00 1/2337 base_yy_load_buffer_state [654]\n> -----------------------------------------------\n> 1.08 4.51 779/779 pq_getstr [17]\n> [18] 21.4 1.08 4.51 779 pq_getstring [18]\n> 2.85 0.00 21953662/21953666 appendStringInfoChar [20]\n> 1.66 0.00 21954441/21954441 pq_getbyte [29]\n> -----------------------------------------------\n> \n> While we could probably do a little bit to speed up pg_getstring and its\n> children, it's not clear that we can do anything about yylex, which is\n> flex output code not handmade code, and is probably well-tuned already.\n> \n> Bottom line: feeding huge strings through the lexer is slow.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n", "msg_date": "Mon, 03 Dec 2001 20:31:35 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: Performance problem with large insert statements" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> It seems to me that the server has all the functionality to do this, \n> because plpgsql supports it in 7.2. It just isn't can't be done from \n> the client.\n\nTrue. Basically it's a shortcoming in the FE/BE protocol. There's\nsome other work that'd have to be done, but extending the protocol\nwould be the main bit.\n\n> I have been contemplating creating some SPI functions that could be \n> called from the client that would implement this type of functionality \n> and then enhancing the JDBC driver to use them instead of the regular \n> query execution of the FE/BE protocol.\n\nInteresting as a proof-of-concept hack, but I'd sure not want to see\nit shipped as a production solution. 
For one thing, the existing\nfastpath protocol is itself too broken to encourage more widespread\nuse of. (See the various comments in fastpath.c.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 23:55:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance problem with large insert statements " }, { "msg_contents": "> Tom,\n> \n> Thanks for looking into this. What you found is about what I expected. \n> What this means to me is that there should be a new todo item along \n> the lines:\n> * allow binding query args over FE/BE protocol\n\nOK, I have added this to the TODO list in the cache section because it\nis really just a special case of cached query plans where some constants\nchange. I know it is a little more than that, but the basic issues are\nthe same.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 00:24:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance problem with large insert statements" } ]
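Tom's profile in this thread shows most of the backend time going into base_yylex and pq_getstring, i.e. into scanning the statement text itself. The standalone Python sketch below illustrates why that text gets so big for a plain INSERT: every payload byte of a bytea value has to be escaped into the SQL string, then scanned back out by the server. The escaping rules here only approximate 7.x-era client conventions and are an assumption, as is the table shape, which echoes Barry's test; this is not PostgreSQL code.

```python
# Rough model of client-side escaping of a bytea value into a quoted SQL
# literal: printable ASCII passes through, quotes/backslashes are escaped,
# everything else becomes a \\ooo octal escape (details approximate).
def escape_bytea(data):
    out = []
    for b in data:
        if b == 0x27:                 # single quote -> \'
            out.append("\\'")
        elif b == 0x5C:               # backslash, doubled for literal + bytea
            out.append("\\\\\\\\")
        elif 32 <= b <= 126:          # printable ASCII passes through
            out.append(chr(b))
        else:                         # non-printable -> \\ooo octal escape
            out.append("\\\\%03o" % b)
    return "".join(out)

chunk = bytes(range(256)) * 32        # one ~8 KB chunk, as in the test program
literal = escape_bytea(chunk)
sql = "insert into tblob1 values ('f', 0, '%s')" % literal

# The statement the lexer has to chew through is several times larger
# than the data it actually carries; a bound parameter would ship the
# bytes (near-)verbatim instead.
print(len(chunk), len(sql))
```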
[ { "msg_contents": "Is anyone doing any work on RServ these days?\n\nI just tried to set it up. I have bits of it working, and I can probably cobble\nsomething together, but it really looks like someone put a lot of thought into\nthe trigger, but all the support code is pretty incomplete.\n", "msg_date": "Thu, 29 Nov 2001 07:30:44 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "RServ" } ]
[ { "msg_contents": "Hi,\nis it a bug?\n\ncreate table test (t1 int4, t2 int4);\ninsert into test values (1,1);\ninsert into test values (2,2);\ninsert into test values (3,1);\ninsert into test values (4,1);\ninsert into test values (4,1);\ninsert into test values (4,null);\n\n\n\n select * from test where t1 not in (select t2 from test);\n0 rows\n select * from test where t1 not in (select null);\n0 rows\n\nIf I delete the row with null value it works as expected.\nThe IN clause works as expected with or without null row.\n\nsorry for my english.\nbye\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino \nAnagni FR\nItaly\n\n\n\n", "msg_date": "Thu, 29 Nov 2001 17:25:06 +0100", "msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>", "msg_from_op": true, "msg_subject": "select NOT IN with NULL bug on 7.2b3" }, { "msg_contents": "On Thu, 29 Nov 2001, Giuseppe Tanzilli - CSF wrote:\n\n> Hi,\n> is it a bug?\n>\n> create table test (t1 int4, t2 int4);\n> insert into test values (1,1);\n> insert into test values (2,2);\n> insert into test values (3,1);\n> insert into test values (4,1);\n> insert into test values (4,1);\n> insert into test values (4,null);\n>\n>\n>\n> select * from test where t1 not in (select t2 from test);\n> 0 rows\n> select * from test where t1 not in (select null);\n> 0 rows\n>\n> If I delete the row with null value it works as expected.\n> The IN clause works as expected with or without null row.\n\nI think this falls into the nulls are painful category of\ntrivalued logic.\n\nIIRC:\nWhen you ask for t1 not in (subselect)\nyou get : not(t1 in (subselect)) -> not(t1 =ANY (subselect))\n -> for each row of subselect does t1 = t2 (in your case)\n * if true for any row, the in returns true (not in returns false)\n * if false for every row, the in returns false (not in - true)\n * otherwise, the in returns unknown (not in - 
also unknown).\nBasically with a NULL, you can say that a row is there definitively\nbut not that a row is not there since you don't know if the 3 equals\nthat NULL or not (same for the 4s).\n\n\n", "msg_date": "Thu, 29 Nov 2001 08:58:54 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: select NOT IN with NULL bug on 7.2b3" }, { "msg_contents": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com> writes:\n> is it a bug?\n\nNo, it isn't. See past discussions about the semantics of NOT IN and\nNULL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2001 12:04:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: select NOT IN with NULL bug on 7.2b3 " } ]
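Stephan's expansion of NOT IN can be played out mechanically. The following Python sketch models SQL three-valued logic, with None standing in for NULL/unknown; it is an illustration of the semantics discussed in the thread, not PostgreSQL's implementation.

```python
# Toy model of three-valued logic behind "t1 NOT IN (subselect)",
# following the expansion to NOT(t1 = ANY (subselect)).
def sql_eq(a, b):
    if a is None or b is None:
        return None                    # any comparison with NULL is unknown
    return a == b

def sql_in(value, rows):
    """value IN (rows), i.e. value = ANY (rows), under three-valued logic."""
    saw_unknown = False
    for r in rows:
        eq = sql_eq(value, r)
        if eq is True:
            return True                # one definite match decides it
        if eq is None:
            saw_unknown = True
    return None if saw_unknown else False

def sql_not(v):
    return None if v is None else not v

t2 = [1, 2, 1, 1, 1, None]             # column t2 from the example table

# WHERE keeps a row only when the condition is definitely True; every
# NOT IN result here is False or unknown (None), hence "0 rows".
results = {t1: sql_not(sql_in(t1, t2)) for t1 in (1, 2, 3, 4)}
print(results)   # {1: False, 2: False, 3: None, 4: None}
```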
[ { "msg_contents": "\n> What modification should be made to configure.in to make it\ninclude\n> SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n\nIs SupportDefs.h actually (probably implicitly) included by the\nPostgreSQL\nsource ? Because if it is not, PostgreSQL is quite happy not finding\nthem in \nconfigure.\n\nNot finding them is only a problem if you get redefines during\ncompilation\n(and if your compiler then treats that as fatal).\n\nAndreas\n", "msg_date": "Thu, 29 Nov 2001 17:27:50 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": "> \n> > What modification should be made to configure.in to make it\n> include\n> > SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n> \n> Is SupportDefs.h actually (probably implicitly) included by the\n> PostgreSQL\n> source ? Because if it is not, PostgreSQL is quite happy not finding\n> them in \n> configure.\n> \n> Not finding them is only a problem if you get redefines during\n> compilation\n> (and if your compiler then treats that as fatal).\n\nGood point. Also, a mixed-case include file is quite unusual. One idea\nif you can't get it working is to either include \"SupportDefs.h\" for\nBeOS in configure.in and c.h, or check the top of SupportDefs.h. Often\nthere is an #ifdef at the top to prevent the file from being included\ntwice. 
If you define that in c.h for BeOS, the include SupportDefs.h\nwill never be used and we can use our own defines for uint8/uint64.\n\nFYI, this test was added for AIX in recent weeks because it had similar\ntrouble.\n\nHere is the little snippet of code from configure to test for uint8:\n\t\n\t#include \"confdefs.h\"\n\t#include <stdio.h>\n\tmain()\n\t{\n\t FILE *f=fopen(\"conftestval\", \"w\");\n\t if (!f) exit(1);\n\t fprintf(f, \"%d\\n\", sizeof(uint8));\n\t exit(0);\n\t}\n\nNow, we would normally modify configure.in, but you can play with\nconfigure until you get it to work, let us know, and we can modify\nconfigure.in, run autoconf, make any needed additions to c.h, and get it\ninto CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 29 Nov 2001 11:51:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" }, { "msg_contents": ">> What modification should be made to configure.in to make it include\n>> SupportDefs.h when testing for int8 uint8 int64 and uint64 size ?\n\n>Is SupportDefs.h actually (probably implicitly) included by the\n>PostgreSQL\n>source ? Because if it is not, PostgreSQL is quite happy not finding\n>them in\n>configure.\n\n>Not finding them is only a problem if you get redefines during\n>compilation\n>(and if your compiler then treats that as fatal).\n\nSupportDefs.h is conditionaly included in c.h so everything is Ok when I\ncompile the backend, but when configure try to figure out the size of int8\n... 
they are not defined.\n\n Maybe SupportDefs.h should be included from another file?\n\n\n cyril\n\n", "msg_date": "Thu, 29 Nov 2001 19:40:45 +0100", "msg_from": "\"Cyril VELTER\" <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: Second call for platform testing" } ]
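For comparison, the answers the conftest program in this thread is probing for can be asked from Python's ctypes. This only shows the expected byte widths of the fixed-size integer types on a typical platform; it does not exercise BeOS's SupportDefs.h, which is the actual sticking point above.

```python
# The configure probe compiles a tiny C program that prints sizeof(uint8)
# etc.; ctypes exposes equivalent fixed-width types directly.
import ctypes

sizes = {
    "int8": ctypes.sizeof(ctypes.c_int8),
    "uint8": ctypes.sizeof(ctypes.c_uint8),
    "int64": ctypes.sizeof(ctypes.c_int64),
    "uint64": ctypes.sizeof(ctypes.c_uint64),
}
print(sizes)   # {'int8': 1, 'uint8': 1, 'int64': 8, 'uint64': 8}
```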
[ { "msg_contents": "The known ones are postmaster and one backend per connection. There is a\nservice process forked by postmaster periodically. What does it do? Does the\nsystem use separate processes to detect and recollect freed buffers?\n\n\n", "msg_date": "Thu, 29 Nov 2001 12:26:36 -0500", "msg_from": "\"xin\" <shenxin@sympatico.ca>", "msg_from_op": true, "msg_subject": "How many processes running on the server side?" }, { "msg_contents": "\"xin\" <shenxin@sympatico.ca> writes:\n> The known ones are postmaster and one backend per connection. There is a\n> service process forked by postmaster periodically. What does it do?\n\nCheckpoint, probably. In 7.2 the checkpointer should identify itself\nvia ps status.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 13:27:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How many processes running on the server side? " } ]
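The process inventory in this exchange (one postmaster, one backend per connection, plus the periodically forked checkpointer) can be summarized with a toy bookkeeping model. This is purely illustrative Python, with no real fork() and none of PostgreSQL's actual internal names; it only makes the "which processes exist" answer concrete.

```python
# Toy model: a postmaster that "forks" one backend per connection and a
# periodic checkpointer. Fake pids come from a counter.
import itertools

class ToyPostmaster:
    def __init__(self):
        self._pids = itertools.count(1000)        # fake pid allocator
        self.processes = {next(self._pids): "postmaster"}

    def _fork(self, role):
        pid = next(self._pids)
        self.processes[pid] = role
        return pid

    def accept_connection(self, client):
        # One backend process per client connection.
        return self._fork("backend: %s" % client)

    def checkpoint(self):
        # Periodic helper; in 7.2 it labels itself in ps status.
        return self._fork("checkpointer")

pm = ToyPostmaster()
for client in ("app1", "app2", "psql"):
    pm.accept_connection(client)
pm.checkpoint()

roles = sorted(pm.processes.values())
print(roles)   # ['backend: app1', 'backend: app2', 'backend: psql', 'checkpointer', 'postmaster']
```

Three connections plus the housekeeping processes give five OS processes in total, which matches the description in the thread.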
[ { "msg_contents": "Hello.\n\nI'm using postgres 7.1. I have an easy question...\n\nI want to create a primary key constraint on an existing table. The\ndocumentation says I can't. Please confirm. If this is true... How can I\nrename the existing table so I can create the new one and copy the data?\n\nThank you\n\nLigia\n\n\n", "msg_date": "Thu, 29 Nov 2001 17:22:22 -0600", "msg_from": "\"Ligia Pimentel\" <lmpimentel@yahoo.com>", "msg_from_op": true, "msg_subject": "An easy question about creating a primary key" }, { "msg_contents": "On Thu, 29 Nov 2001, Ligia Pimentel wrote:\n\n> Hello.\n>\n> I'm using postgres 7.1. I have an easy question...\n>\n> I want to create a primary key constraint on an existing table. The\n> documentation says I can't. Please confirm. If this is true... How can I\n> rename the existing table so I can create the new one and copy the data?\n\nI believe that's correct for 7.1 at least. You can rename tables using\nALTER TABLE (alter table <table> rename to <newtable>). If the column(s)\nare marked not null already, you may be able to just get away with\ncreating a unique index on the column(s) named \"<table>_pkey\"\n\n", "msg_date": "Mon, 3 Dec 2001 11:16:14 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: An easy question about creating a primary key" }, { "msg_contents": "Ligia,\n\n> I want to create a primary key constraint on an existing table. The\n> documentation says I can't. Please confirm. If this is true... 
How\n> can I\n> rename the existing table so I can create the new one and copy the\n> data?\n\nFYI, this question is more appropriate for the NOVICE list.\n\nYou would use the same method that you use to drop and recreate the\ntable for other reasons:\n\nCREATE TABLE tablea_temp AS\nSELECT * FROM tablea;\n\nDROP TABLE tablea;\n\nCREATE TABLE tablea (\n primary_key SERIAL ...\n <snip>\n);\n\nINSERT INTO tablea (column list)\nSELECT (column list) FROM tablea_temp;\n\nAnd don't forget to re-build your indexes!\n\n-Josh Berkus\n\n\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n", "msg_date": "Mon, 03 Dec 2001 11:20:07 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: An easy question about creating a primary key" }, { "msg_contents": "Hi Ligia,\n\nI have submitted code for 7.2b3 that allows ADD UNIQUE after table creation,\nbut you'll have to wait until 7.3 for ADD PRIMARY KEY after table creation.\nWhat you can do however is something like this:\n\n1. Make sure the column you want to make a primary key is NOT NULL and there\nare no other PRIMARY KEYs on the table.\n\n2.\nBEGIN;\nCREATE UNIQUE INDEX blah ON table(field);\nUPDATE pg_index SET indisprimary=true WHERE indexrelid=(SELECT oid FROM\npg_class WHERE relname='blah');\nCOMMIT;\n\nNote that as far as postgres is concerned a UNIQUE, NOT NULL index is exactly\nthe same as a PRIMARY KEY index. 
All that the above catalog tweak does is\nactually mark the index as being primary in pg_dump, etc.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-sql-owner@postgresql.org\n> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Ligia Pimentel\n> Sent: Friday, 30 November 2001 7:22 AM\n> To: pgsql-sql@postgresql.org\n> Subject: [SQL] An easy question about creating a primary key\n>\n>\n> Hello.\n>\n> I'm using postgres 7.1. I have an easy question...\n>\n> I want to create a primary key constraint on an existing table. The\n> documentation says I can't . Please confirm. If this is true... How can I\n> rename the existing table so I can create the new one and copy the data?\n>\n> Thank you\n>\n> Ligia\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Tue, 4 Dec 2001 09:36:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: An easy question about creating a primary key" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I have submitted code for 7.2b3 that allows ADD UNIQUE after table creation,\n> but you'll have to wait until 7.3 for ADD PRIMARY KEY after table createion.\n\nI think you've forgotten your own work, Chris.\n\nregression=# create table foo (bar int not null);\nCREATE\nregression=# alter table foo add primary key (bar);\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index 'foo_pkey' for table 'foo'\nCREATE\nregression=#\n\nHaving to have marked the columns as \"not null\" from the beginning is a\npainful limitation, but it's not like the feature doesn't exist at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 21:12:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: An easy question about creating a primary key " }, { "msg_contents": "> I think you've forgotten 
your own work, Chris.\n>\n> regression=# create table foo (bar int not null);\n> CREATE\n> regression=# alter table foo add primary key (bar);\n> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n> 'foo_pkey' for table 'foo'\n> CREATE\n> regression=#\n\nBizarre. That patch was never committed. If you check\nsrc/backend/commands/command.c and search for 'CONSTR_' you'll notice that\nthe CONSTR_UNIQUE function I implemented is there, but CONSTR_PRIMARY is\ndefinitely not being handled. (I'm looking at the 7.2b2 source code)\n\nChris\n\n", "msg_date": "Tue, 4 Dec 2001 10:23:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> I think you've forgotten your own work, Chris.\n>> \n>> regression=# create table foo (bar int not null);\n>> CREATE\n>> regression=# alter table foo add primary key (bar);\n>> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n>> 'foo_pkey' for table 'foo'\n>> CREATE\n>> regression=#\n\n> Bizarre. That patch was never committed. If you check\n> src/backend/commands/command.c and search for 'CONSTR_' you'll notice that\n> the CONSTR_UNIQUE function I implemented is there, but CONSTR_PRIMARY is\n> definitely not being handled. (I'm looking at the 7.2b2 source code)\n\nHmm ... actually, I wonder whether that code in command.c isn't entirely\ndead code. I believe that as things stand, parser/analyze.c converts\nUNIQUE and PRIMARY constraints into CREATE INDEX statements; the\nconstraint nodes themselves never make it past the parser. 
It looks to\nme like command.c only needs to handle CHECK constraints and foreign-key\nconstraints, cf transformAlterTableStmt().\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 21:29:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "Now that I look at it, I think I made the relevant changes in the\nparser:\n\n2001-10-11 20:07 tgl\n\n\t* doc/src/sgml/ref/alter_table.sgml, src/backend/catalog/pg_type.c,\n\tsrc/backend/commands/command.c, src/backend/parser/analyze.c,\n\tsrc/backend/tcop/utility.c, src/include/commands/command.h,\n\tsrc/include/nodes/parsenodes.h,\n\tsrc/test/regress/expected/alter_table.out,\n\tsrc/test/regress/expected/foreign_key.out: Break\n\ttransformCreateStmt() into multiple routines and make\n\ttransformAlterStmt() use these routines, instead of having lots of\n\tduplicate (not to mention should-have-been-duplicate) code. Adding\n\ta column with a CHECK constraint actually works now, and the tests\n\tto reject unsupported DEFAULT and NOT NULL clauses actually fire\n\tnow. ALTER TABLE ADD PRIMARY KEY works, modulo having to have\n\tcreated the column(s) NOT NULL already.\n\nI was mainly interested in eliminating the inconsistencies in parse-time\nhandling of CREATE TABLE and ALTER TABLE, and the ensuing bugs mentioned\nin the commit log. I didn't think much about the possibility that I was\nobsoleting stuff in command.c, but maybe I did.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 21:40:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "You know, that would explain a lot. Since this was only done in October, I\nwouldn't have noticed it. 
And it explains why I couldn't get various\nchanges in my code to actually have any effect...\n\nOh well, it was a fun coding exercise ;) Feel free to remove the ADD UNIQUE\nstuff from command.c to see if the parser will handle it. However, your\ncommit message also just implies that all you fixed was ADD PRIMARY KEY???\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Tuesday, 4 December 2001 10:40 AM\n> To: Christopher Kings-Lynne\n> Cc: Ligia Pimentel; Hackers\n> Subject: Re: [SQL] An easy question about creating a primary key\n>\n>\n> Now that I look at it, I think I made the relevant changes in the\n> parser:\n>\n> 2001-10-11 20:07 tgl\n>\n> \t* doc/src/sgml/ref/alter_table.sgml, src/backend/catalog/pg_type.c,\n> \tsrc/backend/commands/command.c, src/backend/parser/analyze.c,\n> \tsrc/backend/tcop/utility.c, src/include/commands/command.h,\n> \tsrc/include/nodes/parsenodes.h,\n> \tsrc/test/regress/expected/alter_table.out,\n> \tsrc/test/regress/expected/foreign_key.out: Break\n> \ttransformCreateStmt() into multiple routines and make\n> \ttransformAlterStmt() use these routines, instead of having lots of\n> \tduplicate (not to mention should-have-been-duplicate) code. Adding\n> \ta column with a CHECK constraint actually works now, and the tests\n> \tto reject unsupported DEFAULT and NOT NULL clauses actually fire\n> \tnow. ALTER TABLE ADD PRIMARY KEY works, modulo having to have\n> \tcreated the column(s) NOT NULL already.\n>\n> I was mainly interested in eliminating the inconsistencies in parse-time\n> handling of CREATE TABLE and ALTER TABLE, and the ensuing bugs mentioned\n> in the commit log. 
I didn't think much about the possibility that I was\n> obsoleting stuff in command.c, but maybe I did.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 4 Dec 2001 10:50:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Oh well, it was a fun coding exercise ;) Feel free to remove the ADD UNIQUE\n> stuff from command.c to see if the parser will handle it. However, your\n> commit message also just implies that all you fixed was ADD PRIMARY KEY???\n\nIt says that because I thought that was all I was changing; I hadn't\nrealized the side-effects on ADD UNIQUE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 21:54:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "Josh Berkus wrote:\n> \n> \n> And don't forget to re-build your indexes!\n> \n\nIs this just VACUUM ANALYZE\nor is there another index\nrebuild command?\n\n-- \nKeith Gray\n\nTechnical Development Manager\nHeart Consulting Services P/L\nmailto:keith@heart.com.au\n", "msg_date": "Tue, 04 Dec 2001 16:16:36 +1100", "msg_from": "Keith Gray <keith@heart.com.au>", "msg_from_op": false, "msg_subject": "Re: creating a primary key" }, { "msg_contents": "\nIs this resolved?\n\n---------------------------------------------------------------------------\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Oh well, it was a fun coding exercise ;) Feel free to remove the ADD UNIQUE\n> > stuff from command.c to see if the parser will handle it. 
However, your\n> > commit message also just implies that all you fixed was ADD PRIMARY KEY???\n> \n> It says that because I thought that was all I was changing; I hadn't\n> realized the side-effects on ADD UNIQUE.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 28 Dec 2001 00:11:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is this resolved?\n\nYup, the no-longer-needed code is gone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Dec 2001 00:20:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key " }, { "msg_contents": "Note that although I added a regression test for ADD UNIQUE (and had some\nconfusion as it was Tom's error messages generated, not mine!), I didn't add\nan ADD PRIMARY KEY one. 
It should be done in 7.3 I guess.\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 28 December 2001 1:11 PM\n> To: Tom Lane\n> Cc: Christopher Kings-Lynne; Ligia Pimentel; Hackers\n> Subject: Re: [HACKERS] [SQL] An easy question about creating a primary\n> key\n>\n>\n>\n> Is this resolved?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > Oh well, it was a fun coding exercise ;) Feel free to remove\n> the ADD UNIQUE\n> > > stuff from command.c to see if the parser will handle it.\n> However, your\n> > > commit message also just implies that all you fixed was ADD\n> PRIMARY KEY???\n> >\n> > It says that because I thought that was all I was changing; I hadn't\n> > realized the side-effects on ADD UNIQUE.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n", "msg_date": "Wed, 2 Jan 2002 11:36:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [SQL] An easy question about creating a primary key" } ]
[ { "msg_contents": "I am not sure if this is the right list, but has anyone looked at\nhttp://www.osdl.org/ --> Open Source Development Lab? I don't know\nthat much about it, but it basically allows open source projects to\ntest on their hardware. It is geared towards bringing Linux projects\ninto the enterprise and carrier class levels. I thought this might be\na good thing for PostgreSQL. I would have gone ahead and added\nPostgreSQL as a project, but I thought it would probably be better if\nit was started by someone that is part of the core development team.\n\nJust a thought.\n\nI noticed that NuSphere MySQL is signed up ... not that it makes it\nworthwhile or not...just an observation.\n\n--brett\n", "msg_date": "29 Nov 2001 16:00:15 -0800", "msg_from": "brett_schwarz@yahoo.com (Brett Schwarz)", "msg_from_op": true, "msg_subject": "OSDL" } ]
[ { "msg_contents": "Hi!\n\nA few months ago I asked if anyone started working on PL/JAVA, the \nanswer was no. Now I started to write a java stored procedure language \nand environment for PostgreSQL. Some code is already working, and it is \ngetting interesting. So, I would like to ask you to write me your ideas, \nsuggestions, etc for this environment.\nThe source code will be available under GPL when it is worth \ndistributing it (this will take a while).\nthanks.\n\nLaszlo Hornyak\n\n", "msg_date": "Fri, 30 Nov 2001 10:14:12 +0100", "msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>", "msg_from_op": true, "msg_subject": "java stored procedures" }, { "msg_contents": "Laszlo,\n\nIn my mind it would be more useful if this code was under the same \nlicense as the rest of postgresql. That way it could become part of the \nproduct as opposed to always being a separate component. (Just like \nplpgsql, pltcl and the other procedural languages).\n\nthanks,\n--Barry\n\n\nLaszlo Hornyak wrote:\n\n> Hi!\n> \n> A few months ago I asked if anyone started working on PL/JAVA, the \n> answer was no. Now I started to write a java stored procedure language \n> and environment for PostgreSQL. Some code is already working, and it is \n> getting interesting. So, I would like to ask you to write me your ideas, \n> suggestions, etc for this environment.\n> The source code will be available under GPL when it is worth \n> distributing it (this will take a while).\n> thanks.\n> \n> Laszlo Hornyak\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n", "msg_date": "Mon, 03 Dec 2001 12:40:05 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Hi!\n\nI am rather lame in the licensing area. As much as I know, BSD license \nis more free than GPL. 
I think it is too early to think about licensing, \nbut it`s ok, you won :), when it will be ready (or it will seem to get \ncloser to a working thing, currently it looks more like an interesting \ntest), I will ask you if you want to distribute it with Postgres, and if \nyou say yes, the license will be the same as Postgresql`s license. \nAnyway, is this necessary when it is part of the distribution?\nIs this ok for you?\n\nthanks,\nLaszlo Hornyak\n\nps: still waiting for your ideas, suggestions, etc :) I am not a member \nof the mailing list, please write me directly!\n\nBarry Lind wrote:\n\n> Laszlo,\n>\n> In my mind it would be more useful if this code was under the same \n> license as the rest of postgresql. That way it could become part of \n> the product as opposed to always being a separate component. (Just \n> like plpgsql, pltcl and the other procedural languages).\n>\n> thanks,\n> --Barry\n>\n>\n\n\n", "msg_date": "Tue, 04 Dec 2001 10:12:30 +0100", "msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>", "msg_from_op": true, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Laszlo,\n\nI think it would help a lot if you could take a little time to write \ndown what your planned architecture for a pljava would be. It then \nbecomes much easier for myself and probably others reading these lists \nto make suggestions on ways to improve what you are planning (or \npossible problems with your strategy). Without knowing what exactly you \nare thinking of doing it is difficult to comment.\n\nBut let me try throwing out a few thoughts about how I think this should \nbe done.\n\nFirst question is how will the jvm be run? Since postgres is a \nmultiprocess implementation (i.e. each connection has a separate process \non the server) and since java is a multithreaded implementation (i.e. \none process supporting multiple threads), what should the pljava \nimplementation look like? 
I think there should be a single jvm process \nfor the entire db server that each postgresql process connects to \nthrough sockets/rmi. It will be too expensive to create a new jvm \nprocess for each postgresql connection (expensive in both terms of \nmemory and cpu, since the startup time for the jvm is significant and it \nrequires a lot of memory).\n\nHaving one jvm that all the postgres backend processes communicate with \nmakes the whole feature much more complicated, but is necessary in my \nopinion.\n\nThen the question becomes how does the jvm process interact with the \ndatabase since they are two different processes. You will need some \nsort of interprocess communication between the two to execute sql \nstatements. This could be accomplished by using the existing jdbc \ndriver. But the biggest problem here is getting the transaction \nsemantics right. How does a sql statement being run by a java stored \nprocedure get access to the same connection/transaction as the original \nclient? What you don't want happening is that sql issued in a stored \njava procedure executes in a different transaction than the caller, what \nwould rollback of the stored function call mean in that case?\n\nI am very interested in hearing what your plans are for pl/java. I \nthink this is a very difficult project, but one that would be very \nuseful and welcome.\n\nthanks,\n--Barry\n\n\n\n\nLaszlo Hornyak wrote:\n\n> Hi!\n> \n> I am rather lame in the licensing area. As much as I know, BSD license \n> is more free than GPL. I think it is too early to think about licensing, \n> but it`s ok, you won :), when it will be ready (or it will seem to get \n> closer to a working thing, currently it looks more like an interesting \n> test), I will ask you if you want to distribute it with Postgres, and if \n> you say yes, the license will be the same as Postgresql`s license. 
\n> Anyway, is this necessary when it is part of the distribution?\n> Is this ok for you?\n> \n> thanks,\n> Laszlo Hornyak\n> \n> ps: still waiting for your ideas, suggestions, etc :) I am not a member \n> of the mailing list, please write me directly!\n> \n> Barry Lind wrote:\n> \n>> Laszlo,\n>>\n>> In my mind it would be more useful if this code was under the same \n>> license as the rest of postgresql. That way it could become part of \n>> the product as opposed to always being a separate component. (Just \n>> like plpgsql, pltcl and the other procedural languages).\n>>\n>> thanks,\n>> --Barry\n>>\n>>\n> \n> \n\n\n", "msg_date": "Tue, 04 Dec 2001 08:44:50 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n\n> Having one jvm that all the postgres backend processes communicate with makes\n> the whole feature much more complicated, but is necessary in my opinion.\n\nAgreed. Also, the JVM is a multithreaded app, and running it inside a\nnon-threaded program (the backend) might cause problems. \n\n> Then the question becomes how does the jvm process interact with the database\n> since they are two different processes. You will need some sort of\n> interprocess communication between the two to execute sql statements. This\n> could be accomplished by using the existing jdbc driver. But the biggest\n> problem here is getting the transaction semantics right. How does a sql\n> statement being run by a java stored procedure get access to the same\n> connection/transaction as the original\n> client? 
What you don't want happening\n> is that sql issued in a stored java procedure executes in a different\n> transaction than the caller, what would rollback of the stored function call\n> mean in that case?\n\nI think you would have to expose the SPI layer to Java running in a\nseparate process, either using an RMI server written in C or a custom\nprotocol over a TCP socket (Java of course can't do Unix sockets).\nThis raises some thorny issues of authentication and security but I\ndon't think they're insurmountable. You could, for example, create a\ncryptographically strong \"cookie\" in the backend when a Java function\nis called. The cookie would be passed to the Java function when it\ngets invoked, and then must be passed back to the SPI layer in order\nfor the latter to accept the call. A bit clunky but should be safe as\nfar as I can see.\n\nThe cookie would be needed anyhow, I think, in order for the SPI layer \nto be able to find the transaction that the Java function was\noriginally invoked in.\n\nYou could make the SPI layer stuff look like a normal JDBC driver to\nuser code--PL/Perl does this kind of thing with the Perl DBI\ninterface.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "04 Dec 2001 12:58:47 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Hi!\n\nBarry Lind wrote:\n\n> Laszlo,\n>\n> I think it would help a lot if you could take a little time to write \n> down what your planned architecture for a pljava would be. It then \n> becomes much easier for myself and probably others reading these lists \n> to make suggestions on ways to improve what you are planning (or \n> possible problems with your strategy). Without knowing what exactly \n> you are thinking of doing it is difficult to comment. 
\n\n>\n>\n> But let me try throwing out a few thoughts about how I think this \n> should be done.\n>\n> First question is how will the jvm be run? Since postgres is a \n> multiprocess implementation (i.e. each connection has a separate \n> process on the server) and since java is a multithreaded \n> implementation (i.e. one process supporting multiple threads), what \n> should the pljava implementation look like? I think there should be a \n> single jvm process for the entire db server that each postgresql \n> process connects to through sockets/rmi. It will be too expensive to \n> create a new jvm process for each postgresql connection (expensive in \n> both terms of memory and cpu, since the startup time for the jvm is \n> significant and it requires a lot of memory). \n\nI absolutely agree. OK, it`s done.\n\nSo, a late-night-brainstorming here:\nWhat I would like to see in PL/JAVA is the object oriented features \nthat make postgresql nice. Creating a new table creates a new class on \nthe java side too. Instantiating an object of the newly created class \ninserts a row into the table. In postgresql tables can be inherited, and \nthis could be easily done by pl/java too. I think this would look nice.\nBut this is not the main feature. Why I would like to see a nice java \nprocedural language inside postgres is java`s advanced communication \nfeatures (I mean CORBA, jdbc, other protocols). This is the sugar in the \ncaffe.\n\nI am very far from features like this.\nPL/JAVA now:\n-there is a separate process running java (kaffe). this process creates \na sys v message queue, that holds requests. almost forgot, a shared \nmemory segment too. I didn`t find a better way to tell postgres the \ninformation about the java process.\n-the java request_handler function on the server side attaches to the \nshared memory, reads the key of the message queue, attaches to it, \nsends the data of the function, and a signal for the pl/java. 
after, it \nis waiting for a signal from the java thread.\n-when java thread receives the signal, it reads the message(s) from the \nqueue, and starts some actions. When done it tells postgres with a \nsignal that it is ready, and it can come for its results. This will be \nrewritten, see the problems below.\n-And postgres is running, while java is waiting for postgres to say \nsomething.\n\nThreading on the java process side is not done yet, ok, it is not that \nhard, I will write it, if it will be really necessary.\n\nThe problems, for now:\nI had a very simple system, that passed a very limited scale of argument \ntypes, with a very limited quantity of parameters (int, varchar, bool). \nPostgres has limits for the argument count too, but not for types. It \nhad too many limits, so I am working (or to tell the truth now only \nthinking) on a new type handling that fits the flexibility of \nPostgresql`s type flexibility. For this I will have to learn a lot about \nPostgres`s type system. This will be my program this weekend. 
(I would hope that the postgresql server processes would \nstart it when needed, as opposed to requiring that it be started \nseparately.) How does the jvm access these shared memory structures? \nSince there aren't any methods in the java API to do such things that I \nam aware of.\n\n> -the java request_handler function on the server side attaches to the \n> shared memory, reads the key of the message queue., attaches to it, \n> sends the data of the function, and a signal for the pl/java. after, it \n> is waiting for a signal from the java thread.\n\n\nI don't understand how you do this in java? I must not be understanding \n something correctly here.\n\n> -when java thread receives the signal, it reads the message(s) from the \n> queue, and starts some actions. When done it tells postgres with a \n> signal that it is ready, and it can come for its results. This will be \n> rewritten see below problems.\n\n\nAre signals the best way to accomplish this?\n\n> -And postgres is runing, while java is waiting for postgres to say \n> something.\n\n\nBut in reality if the postgres process is executing a stored function it \nneeds to wait for the result of that function call before continuing \ndoesn't it?\n\n> \n> Threading on the java process side is not done yet, ok, it is not that \n> hard, I will write it, if it will be realy neccessary.\n\n\nAgreed, this is important.\n\n> \n> The problems, for now:\n> I had a very simple system, that passed a very limited scale of argument \n> types, with a very limited quantity of parameters (int, varchar, bool). \n> Postgres has limits for the argument count too, but not for types. It \n> had too much limits, so I am working (or to tell the truth now only \n> thinking) on a new type handling that fits the felxibility of \n> Postgresql`s type flexibility. For this I will have to learn a lot about \n> Postgres`s type system. This will be my program this weekend. 
:)\n\n\nShouldn't this code use all or most of the logic found in the FE/BE \nprotocol? Why invent and code another mechanism to transfer data when \none already exists. (I will admit that the current FE/BE mechanism \nisn't the ideal choice, but it seems easier to reuse what exists for now \nand improve on it later).\n\n\n> \n> thanks,\n> Laszlo Hornyak\n> \n\nYou didn't mention how you plan to deal with the transaction symantics. \n So what happens when the pl/java function calls through jdbc back to \nthe server to insert some data? That should happen in the same \ntransaction as the caller correct?\n\nthanks,\n--Barry\n\n", "msg_date": "Tue, 04 Dec 2001 17:34:21 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Hi!\n\nBarry Lind wrote:\n\n> Does the mechanism you are planning support running any JVM? In my \n> opionion Kaffe isn't good enough to be widely useful. I think you \n> should be able to plugin whatever jvm is best on your platform, which \n> will likely be either the Sun or IBM JVMs.\n\nOk, I also had problems with caffe, but it may work. I like it becouse \nit is small (the source is about 6M). As much as I know Java VM`s has a \nsomewhat standard native interface called JNI. I use this to start the \nVM, and communicate with it. If you think I should change I will do it, \nbut it may take a long time to get the new VM. For then I have to run kaffe.\n\n> Also, can you explain this a little bit more. How does the jvm \n> process get started? (I would hope that the postgresql server \n> processes would start it when needed, as opposed to requiring that it \n> be started separately.) How does the jvm access these shared memory \n> structures? Since there aren't any methods in the java API to do such \n> things that I am aware of.\n\nJVM does not. 'the java process' does with simple posix calls. 
I use \ndebian potatoe, on any other posix system it should work, on any other \nsomewhat posix compatible system it may work, I am not sure...\n\n>\n> I don't understand how you do this in java? I must not be \n> understanding something correctly here.\n\nMy failure.\nThe 'java request_handler' is not a java function, it is the C \ncall_handler in the Postgres side, that is started when a function of \nlanguage 'pljava' is called.\nI made some failure in my previous mail. At home I named the pl/java \nlanguage pl/pizza (something that is not caffe, but well known enough \n:). The application has two running binaries:\n-pizza (which was called 'java process' last time) This is a small C \nprogram that uses JNI to start VM and call java methods.\n-plpizza.so the shared object that contains the call_handler function.\n\n\n>\n>\n>> -when java thread receives the signal, it reads the message(s) from \n>> the queue, and starts some actions. When done it tells postgres with \n>> a signal that it is ready, and it can come for its results. This will \n>> be rewritten see below problems.\n>\n>\n>\n> Are signals the best way to accomplish this? \n\nI don`t know if it is the best, it is the only way I know :)\nDo you know any other ways?\n\n>\n>\n>> -And postgres is runing, while java is waiting for postgres to say \n>> something.\n>\n> But in reality if the postgres process is executing a stored function \n> it needs to wait for the result of that function call before \n> continuing doesn't it? \n\nSurely, this is done. How could Postgres tell the result anyway ? :)\n\n>\n>>\n>> Threading on the java process side is not done yet, ok, it is not \n>> that hard, I will write it, if it will be realy neccessary.\n>\n> Agreed, this is important.\n>\n> Shouldn't this code use all or most of the logic found in the FE/BE \n> protocol? Why invent and code another mechanism to transfer data when \n> one already exists. 
(I will admit that the current FE/BE mechanism \n> isn't the ideal choice, but it seems easier to reuse what exists for \n> now and improve on it later). \n\nWell, I am relatively new to Postgres, and I don`t know these protocols. \nIn the weekend I will start to learn it, and in Sunday or Monday I maybe \nI will understand it, if not, next weekend..\n\n>\n> You didn't mention how you plan to deal with the transaction \n> symantics. So what happens when the pl/java function calls through \n> jdbc back to the server to insert some data? That should happen in \n> the same transaction as the caller correct? \n\nI don`t think this will be a problem, I have ideas for this. Idea mean: \nI know how I will start it, it may be good, or it may be fataly stupid \nidea, it will turn out when I tried it. Simply: The same way plpizza \ntells pizza the request, pizza can talk back to plpizza. This is planed \nto work with similar mechanism I described last time (shm+signals).\n\nMonday I will try to send a little pieces of code to make thing clear, ok?\n\nthanks,\nLaszlo Hornyak\n\n", "msg_date": "Wed, 05 Dec 2001 10:06:10 +0100", "msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>", "msg_from_op": true, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Laszlo,\n\nI have cc'ed the hackers mail list since that group of developers is \nprobably better able than I to make suggestions on the best interprocess \ncommunication mechanism to use for this. See \nhttp://archives2.us.postgresql.org/pgsql-general/2001-12/msg00092.php \nfor background on this thread.\n\nI also stopped cc'ing the general list, since this is getting too \ndetailed for most of the members on that list.\n\nNow to your mail:\n\nLaszlo Hornyak wrote:\n\n> Hi!\n> \n> Barry Lind wrote:\n> \n>> Does the mechanism you are planning support running any JVM? In my \n>> opionion Kaffe isn't good enough to be widely useful. 
I think you \n>> should be able to plugin whatever jvm is best on your platform, which \n>> will likely be either the Sun or IBM JVMs.\n> \n> \n> Ok, I also had problems with caffe, but it may work. I like it becouse \n> it is small (the source is about 6M). As much as I know Java VM`s has a \n> somewhat standard native interface called JNI. I use this to start the \n> VM, and communicate with it. If you think I should change I will do it, \n> but it may take a long time to get the new VM. For then I have to run \n> kaffe.\n> \n\n\nThis seems like a reasonable approach and should work across different \nJVMs. It would probably be a good experiment to try this with the Sun \nor IBM jvm at some point to verify. What I was afraid of was that you \nwere hacking the Kaffe code to perform the integration which would limit \nthis solution to only using Kaffe.\n\n\n>> Also, can you explain this a little bit more. How does the jvm \n>> process get started? (I would hope that the postgresql server \n>> processes would start it when needed, as opposed to requiring that it \n>> be started separately.) How does the jvm access these shared memory \n>> structures? Since there aren't any methods in the java API to do such \n>> things that I am aware of.\n> \n> \n> JVM does not. 'the java process' does with simple posix calls. I use \n> debian potatoe, on any other posix system it should work, on any other \n> somewhat posix compatible system it may work, I am not sure...\n> \n>>\n>> I don't understand how you do this in java? I must not be \n>> understanding something correctly here.\n> \n> \n> My failure.\n> The 'java request_handler' is not a java function, it is the C \n> call_handler in the Postgres side, that is started when a function of \n> language 'pljava' is called.\n> I made some failure in my previous mail. At home I named the pl/java \n> language pl/pizza (something that is not caffe, but well known enough \n> :). 
The application has two running binaries:\n> -pizza (which was called 'java process' last time) This is a small C \n> program that uses JNI to start VM and call java methods.\n> -plpizza.so the shared object that contains the call_handler function.\n> \n\n\nJust a suggestion: PL/J might be a good name, since as you probably \nknow it can't be called pl/java because of the trademark restrictions on \nthe word 'java'.\n\nI am a little concerned about the stability and complexity of having \nthis '-pizza' program be responsible for handling the calls on the java \nside. My concern is that this will need to be a multithreaded program \nsince multiple backends will concurrently be needing to interact with \nmultiple java threads through this one program. It might be simpler if \neach postgres process directly communicated to a java thread via a tcpip \nsocket. Then the \"-pizza\" program would only need to be responsible for \nstarting up the jvm and creating java threads and sockets for a postgres \nprocess (it would perform a similar role to postmaster for postgres \nclient connections).\n\n\n> \n>>\n>>\n>>> -when java thread receives the signal, it reads the message(s) from \n>>> the queue, and starts some actions. When done it tells postgres with \n>>> a signal that it is ready, and it can come for its results. This will \n>>> be rewritten see below problems.\n>>\n>>\n>>\n>>\n>> Are signals the best way to accomplish this? \n> \n> \n> I don`t know if it is the best, it is the only way I know :)\n> Do you know any other ways?\n> \n\n\nI don't know, but hopefully someone on the hackers list will chip in \nhere with a comment.\n\n\n>>\n>>>\n>>> Threading on the java process side is not done yet, ok, it is not \n>>> that hard, I will write it, if it will be realy neccessary.\n>>\n>>\n>> Agreed, this is important.\n>>\n>> Shouldn't this code use all or most of the logic found in the FE/BE \n>> protocol? 
Why invent and code another mechanism to transfer data when \n>> one already exists. (I will admit that the current FE/BE mechanism \n>> isn't the ideal choice, but it seems easier to reuse what exists for \n>> now and improve on it later). \n> \n> \n> Well, I am relatively new to Postgres, and I don`t know these protocols. \n> In the weekend I will start to learn it, and in Sunday or Monday I maybe \n> I will understand it, if not, next weekend..\n> \n>>\n>> You didn't mention how you plan to deal with the transaction \n>> symantics. So what happens when the pl/java function calls through \n>> jdbc back to the server to insert some data? That should happen in \n>> the same transaction as the caller correct? \n> \n> \n> I don`t think this will be a problem, I have ideas for this. Idea mean: \n> I know how I will start it, it may be good, or it may be fataly stupid \n> idea, it will turn out when I tried it. Simply: The same way plpizza \n> tells pizza the request, pizza can talk back to plpizza. This is planed \n> to work with similar mechanism I described last time (shm+signals).\n> \n\n\nOK, so the same backend process that called the function gets messaged \nto process the sql. This should work. However it means you will need a \nspecial version of the jdbc driver that uses this shm+signals \ncommunication mechanism instead of what the current jdbc driver does. \nThis is something I would be happy to help you with.\n\n\n", "msg_date": "Wed, 05 Dec 2001 09:32:19 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] java stored procedures" }, { "msg_contents": "Barry Lind wrote:\n\n>\n> I also stopped cc'ing the general list, since this is getting too \n> detailed for most of the members on that list.\n\nOk.\n\n> Now to your mail:\n>\n> This seems like a reasonable approach and should work across different \n> JVMs. 
It would probably be a good experiment to try this with the Sun \n> or IBM jvm at some point to verify. What I was afraid of was that you \n> were hacking the Kaffe code to perform the integration which would \n> limit this solution to only using Kaffe. \n\nI am sure they won't work the same way. I think I have a sun jdk 1.3.0-2, \nso I will try to port it soon. The IBM implementation must wait I think \nuntil January.\n\n>\n> Just a suggestion: PL/J might be a good name, since as you probably \n> know it can't be called pl/java because of the trademark restrictions \n> on the word 'java'. \n\nOk, you won, I do not read the licenses. From now its name is pl/j. \nIsn`t 'j' too short for the name of the process that runs java? :)\n\n>\n> I am a little concerned about the stability and complexity of having \n> this '-pizza' program be responsible for handling the calls on the \n> java side. My concern is that this will need to be a multithreaded \n> program since multiple backends will concurrently be needing to \n> interact with multiple java threads through this one program. It \n> might be simpler if each postgres process directly communicated to a \n> java thread via a tcpip socket. Then the \"-pizza\" program would only \n> need to be responsible for starting up the jvm and creating java \n> threads and sockets for a postgres process (it would perform a similar \n> role to postmaster for postgres client connections). \n\nWith good design we can solve stability problems. As much as I know, if \npostmaster dies, the postgres server becomes unavailable, this looks like the \nsame problem. I do not know if we really need sockets. Anyway, if 'j' \ndies, we can create a new one, and restart calculations. Some watchdog \nfunctionality...\nDoing things with sockets needs a lot of rework. 
It is the best time for \nthis, while there is not too much thing done.\n\n>>>\n>>>> -when java thread receives the signal, it reads the message(s) from \n>>>> the queue, and starts some actions. When done it tells postgres \n>>>> with a signal that it is ready, and it can come for its results. \n>>>> This will be rewritten see below problems.\n>>>\n>>> Are signals the best way to accomplish this? \n>>\n>> I don`t know if it is the best, it is the only way I know :)\n>> Do you know any other ways?\n>>\n> I don't know, but hopefully someone on the hackers list will chip in \n> here with a comment.\n\nAfter a first developement cycle (if my brain doesn`t burn down), the \nsignals can be replaced to a plugable communication interface I think. \nSo maybe we can use CORBA, or sockets, or something else. This will take \na lot of time.\n\n> OK, so the same backend process that called the function gets messaged \n> to process the sql. This should work. However it means you will need \n> a special version of the jdbc driver that uses this shm+signals \n> communication mechanism instead of what the current jdbc driver does. \n> This is something I would be happy to help you with.\n\n\nThis is kind of you. :)\nFor this, I will have to finish the protocol of communication. I have to \nlearn Postgres enough, so I am not sure this will be done this weekend. \nI have ideas, only time is needed to implement them or to recognize the \nfailures.\n\nThanks,\nLaszlo Hornyak\n\n\n", "msg_date": "Wed, 05 Dec 2001 20:39:28 +0100", "msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] java stored procedures" }, { "msg_contents": "* Barry Lind <barry@xythos.com> wrote:\n|\n| possible problems with your strategy). Without knowing what exactly\n| you are thinking of doing it is difficult to comment.\n\nAgreed. 
\n\n| Having one jvm that all the postgres backend processes communicate\n| with makes the whole feature much more complicated, but is necessary\n| in my opinion.\n\nI'm not quite sure if I agree here. Startup time is not an issue if you are \nusing connection pooling on the client and memory is cheap ;-) Having\na separate process would indeed introduce overhead where you don't want \nit. In the critical path when executing queries. \n\n| I am very interested in hearing what your plans are for pl/java. I\n| think this is a very difficult project, but one that would be very\n| useful and welcome.\n\nI would very much like to hear about the plans myself. \n\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n", "msg_date": "06 Dec 2001 13:32:42 +0100", "msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?=", "msg_from_op": false, "msg_subject": "Re: java stored procedures" }, { "msg_contents": "Hi!\n\nSorry, I have time only for short answers, it is company time :((.\n\nGunnar Rønning wrote:\n\n>* Barry Lind <barry@xythos.com> wrote:\n>|\n>| possible problems with your strategy). Without knowing what exactly\n>| you are thinking of doing it is difficult to comment.\n>\n>Agreed. \n>\nOk, I will try to bring the code here before Monday, or at least some \npieces. It is full of hardcoded constants from my development \nenvironment. :(\n\n\n>\n>| I am very interested in hearing what your plans are for pl/java. I\n>| think this is a very difficult project, but one that would be very\n>| useful and welcome.\n>\n>I would very much like to hear about the plans myself. \n>\nI do not see so big difficulties yet, am I so lame? 
It won`t be easy, \nreally, we should keep it simple, at least because of me.\n\n\nthanks,\nLaszlo Hornyak\n\n", "msg_date": "Thu, 06 Dec 2001 15:23:39 +0100", "msg_from": "Laszlo Hornyak <hornyakl@freemail.hu>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] java stored procedures" }, { "msg_contents": "Laszlo Hornyak wrote:\n\n>>\n>> | I am very interested in hearing what your plans are for pl/java. I\n>> | think this is a very difficult project, but one that would be very\n>> | useful and welcome.\n>>\n>> I would very much like to hear about the plans myself.\n>\n> I do not see so big difficulties yet, am I so lame? It won`t be easy, \n> really, we should keep it simple, at least because of me.\n\nLet me propose a very different approach to PL/J - use gcc-java and \nfigure out the problems\nwith (dynamic) compiling and dynamic linking.\n\nThis is an approach somewhat similar to .NET/C# that you first compile \nthings and then run instead\nof trying to do both at the same time ;)\n\nOracle /may/ be doing something similar with their java stored \nprocedures, as they claim these to be \\\"compiled\\\".\n\n-----------------\nHannu\n\n\n", "msg_date": "Thu, 06 Dec 2001 23:02:51 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] java stored procedures" } ]
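
The thread above sketches a shared-memory-plus-signals call protocol (the backend's call handler writes a request into a shm queue, signals the JVM process, and waits for the reply) but never pins down a concrete message format. The fragment below is a minimal illustrative sketch of what one fixed-layout queue slot could look like; the field layout, the names, and the two-argument limit are my own assumptions, not anything from the PL/J (pl/pizza) code discussed here:

```python
import struct

# Hypothetical fixed-size layout for one shared-memory queue slot:
#   uint32 function OID | uint32 argument count | two int64 argument slots.
# A real implementation would place this buffer in a shm segment and wake
# the peer with a signal (e.g. SIGUSR1); only the (de)serialization is shown.
SLOT_FMT = "=IIqq"
SLOT_SIZE = struct.calcsize(SLOT_FMT)

def pack_call(fn_oid, args):
    """Backend side: serialize a call request before signalling the JVM."""
    if len(args) > 2:
        raise ValueError("this sketch supports at most two arguments")
    padded = list(args) + [0] * (2 - len(args))
    return struct.pack(SLOT_FMT, fn_oid, len(args), *padded)

def unpack_call(slot):
    """JVM-process side: decode a request read back out of the queue slot."""
    fn_oid, argc, a0, a1 = struct.unpack(SLOT_FMT, slot)
    return fn_oid, [a0, a1][:argc]
```

The same slot can carry the reply on the return trip, which matches the "pizza can talk back to plpizza" round trip described in the thread.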
[ { "msg_contents": "While surfing through our web page I found some references about Postgres\n(the original Berkeley project) starting as Ingres. Now I wonder whether we\nor let's say the original Postgres project still used Ingres or parts\nthereof. \n\nThe original Postgres FAQ says\n\nQ. What is the connection between POSTGRES and University Ingres?\n\nA. There is none, aside from Prof. Stonebraker. There is no\n compatibility between the two software packages, and the research\n projects had differing objectives\n\nThis certainly sounds like these two are different projects by the same\nProf.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 30 Nov 2001 10:50:19 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "History question" }, { "msg_contents": "On Fri, Nov 30, 2001 at 10:50:19AM +0100, Michael Meskes wrote:\n> While surfing through our web page I found some references about Postgres\n> (the original Berkley project) starting as Ingres. Now I wonder whether we\n> or let's say the original Postgres project still used Ingres or parts\n> thereof. \n> \n> The original Postgres FAQ say\n> \n> Q. What is the connection between POSTGRES and University Ingres?\n> \n> A. There is none, aside from Prof. Stonebraker. There is no\n> compatibility between the two software packages, and the research\n> projects had differing objectives\n> \n> This certainly sounds like these two are different projects by the same\n> Prof.\n\nIngres - 1982 -- 1985\n - Michael Stonebraker and Eugene Wong at UC-Berkeley\n - Ingres = Interactive Graphics and Retrieval System\n - originally developed on PDP-11/45\n - original query language was QUEL\n\nPostgres - 1985(?) 
- 1994\n - based on Ingres\n - start with idea make Ingres more OO\n - the father was again Stonebraker \n\nPostgres95 - 1994-1995 \n - UC-Berkeley's students Jolly Chen and Andrew Yu\n\nMariposa - based on Postgres95\n - keynote was specific non realtime replication\n - alive this project still?\n\nPostgreSQL - summer 1996\n - OpenSource\n\nCompanies:\n\n * Ingres Corporation (set up Stonebraker?)\n * Robert Epstein from UC-Berkeley team set up Sybase\n * Paula Hawthorn from UC-Berkeley team set up \n Illustra Information Technologies Incorporated, now know as\n Informix\n\n If I know (from some resources on web) M. Stonebraker work with/on\n Ingres, Illustra and Informix.\n\n By the way on the world exist two original branchs of SQL DB where is\n possible found inspiration of all DB:\n \n * System-R now know as DB2 \n * Ingres (PostgreSQL, Informix)\n \n\n http://www.nap.edu/readingroom/books/far/ch6.html\n http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95.html\n http://db.cs.berkeley.edu/\n\n Do know some other good URL about DB history?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Fri, 30 Nov 2001 11:53:38 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "> While surfing through our web page I found some references about Postgres\n> (the original Berkley project) starting as Ingres. Now I wonder whether we\n> or let's say the original Postgres project still used Ingres or parts\n> thereof.\n\nIt doesn't, and never did. Not sure how that impression got started,\nother than some confusion over \"based on\" vs \"successor\".\n\nWhere exactly did you find this on the web site? 
We should rephrase\nit...\n\n> This certainly sounds like these two are different projects by the same\n> Prof.\n\nRight.\n\n - Thomas\n", "msg_date": "Fri, 30 Nov 2001 13:51:55 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "On Fri, 30 Nov 2001, Thomas Lockhart wrote:\n\n> > While surfing through our web page I found some references about Postgres\n> > (the original Berkley project) starting as Ingres. Now I wonder whether we\n> > or let's say the original Postgres project still used Ingres or parts\n> > thereof.\n>\n> It doesn't, and never did. Not sure how that impression got started,\n> other than some confusion over \"based on\" vs \"successor\".\n>\n> Where exactly did you find this on the web site? We should rephrase\n> it...\n\nThe history paper (the formatting was messed up with the copy/paste):\n\n\n\n The History of PostgreSQL Development\n\n by Bruce Momjian\n\n\n\nPostgreSQL is the most advanced open-source database\nserver. It is Object-Relational(ORDBMS), and\nis supported by a team of Internet developers. PostgreSQL\nbegan as Ingres, developed at the University of\nCalifornia at Berkeley(1977-1985). The Ingres code was\ntaken and enhanced by Relational\nTechnologies/Ingres Corporation, which produced one of the\nfirst commercially successful relational\ndatabase servers. (Ingres Corp. was later purchased by\nComputer Associates.) Also at Berkeley, Michael\nStonebraker led a team to develop an object-relational\ndatabase server called Postgres(1986-1994). The\nPostgres code was taken by Illustra and developed into a\ncommercial product. (Illustra was later purchased\nby Informix and integrated into Informix's Universal\nServer.) Two Berkeley graduate students, Jolly Chen\nand Andrew Yu, added SQL capabilities to Postgres, and\ncalled it Postgres95(1994-1995). 
They left\nBerkeley, but Jolly continued maintaining Postgres95,\nwhich had an active mailing list.\n\n\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 09:25:05 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "On Fri, Nov 30, 2001 at 01:51:55PM +0000, Thomas Lockhart wrote:\n> Where exactly did you find this on the web site? We should rephrase\n> it...\n\nVince already posted this.\n\n> > This certainly sounds like these two are different projects by the same\n> > Prof.\n> \n> Right.\n\nYes, and that's how I remember it.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Fri, 30 Nov 2001 17:15:08 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: History question" }, { "msg_contents": "> On Fri, 30 Nov 2001, Thomas Lockhart wrote:\n> \n> > > While surfing through our web page I found some references about Postgres\n> > > (the original Berkley project) starting as Ingres. Now I wonder whether we\n> > > or let's say the original Postgres project still used Ingres or parts\n> > > thereof.\n> >\n> > It doesn't, and never did. Not sure how that impression got started,\n> > other than some confusion over \"based on\" vs \"successor\".\n> >\n> > Where exactly did you find this on the web site? 
We should rephrase\n> > it...\n> \n> The history paper (the formatting was messed up with the copy/paste):\n\nYes, no code went from Ingres to Postgres. However, Stonebraker was the\nsame, and the team was probably similar. The text in the first\nchapter of my book is a little clearer, calling Ingres an ancestor of\nPostgres. There is a link between them, it is just hard to clearly\nspecify it without going into all sorts of contortions in the text. I\nwonder if we should remove the old history article and put the first\nchapter of my book in there instead. It is the same content, but\nupdated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 12:38:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "...\n> Yes, no code went from Ingres to Postgres.\n\nRight. Trying to link Ingres with Postgres is a bit of a stretch. How\nabout linking the team and leaving it at that? Postgres was in many ways\na clean break to try some new ideas, not an evolutionary development\n(witness the first implementation in lisp, which afaik was not part of\nthe Ingres code base).\n\nIs the book content copyrighted differently from the currently posted\ncontent? If so, perhaps someone would like to just update the content...\n\n - Thomas\n", "msg_date": "Fri, 30 Nov 2001 17:56:16 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "On Fri, 30 Nov 2001, Thomas Lockhart wrote:\n\n> ...\n> > Yes, no code went from Ingres to Postgres.\n>\n> Right. Trying to link Ingres with Postgres is a bit of a stretch. How\n> about linking the team and leaving it at that? 
Postgres was in many ways\n> a clean break to try some new ideas, not an evolutionary development\n> (witness the first implementation in lisp, which afaik was not part of\n> the Ingres code base).\n>\n> Is the book content copyrighted differently from the currently posted\n> content? If so, perhaps someone would like to just update the content...\n\nEither way it's Bruce's text (both book and stuff on the website) so\nI'll let you guys figure that out. BTW, there's a reference that\nDaemon News also has a copy of that in their archives. Anyway I'm\nin class the rest of the day so whatever gets decided I'll get it\ntomorrow.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 13:02:29 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "> ...\n> > Yes, no code went from Ingres to Postgres.\n> \n> Right. Trying to link Ingres with Postgres is a bit of a stretch. How\n> about linking the team and leaving it at that? Postgres was in many ways\n> a clean break to try some new ideas, not an evolutionary development\n> (witness the first implementation in lisp, which afaik was not part of\n> the Ingres code base).\n\nHere is the book text. Is that clearer. In fact, this paragraph was\nworked on just to clarify the relationship. Yes, I agree it is a\nstretch, but to ignore University Ingres seemed wrong too:\n\n POSTGRESQL'S ancestor was Ingres, developed at the University of\nCalifornia at Berkeley (1977-1985). 
The Ingres code was later enhanced\nby Relational Technologies/Ingres Corporation, 6.1 which produced one of\nthe first commercially successful relational database servers. Also at\nBerkeley, Michael Stonebraker led a team to develop an\nobject-relational database server called Postgres (1986-1994). Illustra \n6.2 took the Postgres code and developed it into a commercial product.\n\n> Is the book content copyrighted differently from the currently posted\n> content? If so, perhaps someone would like to just update the content...\n\nUh, of course the book is on the web site, but I am unsure about have it\nchanged because it wouldn't match the book. We can change what is there\nnow because that doesn't match the book anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 13:06:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: History question" }, { "msg_contents": "> On Fri, 30 Nov 2001, Thomas Lockhart wrote:\n> \n> > ...\n> > > Yes, no code went from Ingres to Postgres.\n> >\n> > Right. Trying to link Ingres with Postgres is a bit of a stretch. How\n> > about linking the team and leaving it at that? Postgres was in many ways\n> > a clean break to try some new ideas, not an evolutionary development\n> > (witness the first implementation in lisp, which afaik was not part of\n> > the Ingres code base).\n> >\n> > Is the book content copyrighted differently from the currently posted\n> > content? If so, perhaps someone would like to just update the content...\n> \n> Either way it's Bruce's text (both book and stuff on the website) so\n> I'll let you guys figure that out. BTW, there's a reference that\n> Daemon News also has a copy of that in their archives. 
Anyway I'm\n> in class the rest of the day so whatever gets decided I'll get it\n> tomorrow.\n\nOK, I have updated the PostgreSQL history article to match my book,\nwhich mentions Ingres as the \"ancestor\" of PostgreSQL, developed at\nBerkeley too.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 00:00:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: History question" } ]
[ { "msg_contents": "\n> Of course, given that most OS's don't have the 'ps' environment\nproblem,\n> maybe we have to keep PGPASSWORD around. It is up to the group. Do\n> people want me to change my wording of the option in the SGML sources?\n> \n> <envar>PGPASSWORD</envar>\n> sets the password used if the backend demands password\n> authentication. This is not recommended because the password can\n> be read by others using a <command>ps</command> environment flag\n> on some platforms.\n\nI think the wording is good. I would keep supporting the envar.\n\nWhat exactly speaks against a commandline switch, that gets hidden\nwith the postmaster argv trick, and a similar notice as for PGPASSWORD.\nFor me, this would be the most convenient form of supplying a password\n(if I used db side passwords :-).\n\nAndreas\n", "msg_date": "Fri, 30 Nov 2001 12:06:32 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup" }, { "msg_contents": "> \n> > Of course, given that most OS's don't have the 'ps' environment\n> problem,\n> > maybe we have to keep PGPASSWORD around. It is up to the group. Do\n> > people want me to change my wording of the option in the SGML sources?\n> > \n> > <envar>PGPASSWORD</envar>\n> > sets the password used if the backend demands password\n> > authentication. This is not recommended because the password can\n> > be read by others using a <command>ps</command> environment flag\n> > on some platforms.\n> \n> I think the wording is good. 
I would keep supporting the envar.\n> \n> What exactly speaks against a commandline switch, that gets hidden\n> with the postmaster argv trick, and a similar notice as for PGPASSWORD.\n> For me, this would be the most convenient form of supplying a password\n> (if I used db side passwords :-).\n\nWe can hide it but it will be visible for a short period, and many\noperating systems either don't allow us to modify the ps args or have\nways of circumventing custom ps display, i.e. it doesn't show updated ps\ndisplay if the process is swapped out because ps can't get to the\nuser-space definitions of the custom args.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 12:45:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We can hide it but it will be visible for a short period, and many\n> operating systems either don't allow us to modify the ps args or have\n> ways of circumventing custom ps display, i.e. it doesn't show updated ps\n> display if the process is swapped out because ps can't get to the\n> user-space definitions of the custom args.\n\nYes, passwords in command-line arguments are *way* too dangerous.\n\nI had always thought that environment vars were secure, though, and was\nsurprised to learn that there are Unix variants wherein they're not.\n\nI still like the idea of arguments and/or env vars that give the name\nof a file in which to look for the password, however. Perhaps the file\ncontents could be along the lines of\n\n\tusername\thost\tpassword\n\nand libpq would look for a line matching the PGUSER and PGHOST values it\nalready has. 
(compare the usage of .netrc, .cvspass, etc). Maybe there\ncould even be a default assumption that we look in \"$HOME/.pgpass\",\nwithout having to be told? Or is that too Unix-centric?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 12:55:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We can hide it but it will be visible for a short period, and many\n> > operating systems either don't allow us to modify the ps args or have\n> > ways of circumventing custom ps display, i.e. it doesn't show updated ps\n> > display if the process is swapped out because ps can't get to the\n> > user-space definitions of the custom args.\n> \n> Yes, passwords in command-line arguments are *way* too dangerous.\n> \n> I had always thought that environment vars were secure, though, and was\n> surprised to learn that there are Unix variants wherein they're not.\n> \n> I still like the idea of arguments and/or env vars that give the name\n> of a file in which to look for the password, however. Perhaps the file\n> contents could be along the lines of\n> \n> \tusername\thost\tpassword\n> \n> and libpq would look for a line matching the PGUSER and PGHOST values it\n> already has. (compare the usage of .netrc, .cvspass, etc). Maybe there\n\nYes, this is more powerful than the environment variable anyway. We\nonly have to decide how to specify missing fields. Asterisk?\n\n> could even be a default assumption that we look in \"$HOME/.pgpass\",\n> without having to be told? Or is that too Unix-centric?\n\nYou mean like we do for .psqlrc. Good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 12:58:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> username\thost\tpassword\n\n> Yes, this is more powerful than the environment variable anyway. We\n> only have to decide how to specify missing fields. Asterisk?\n\nUh, *what* missing fields? It's not clear to me that there's any value\nin a wild-card entry in this thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 13:06:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> username\thost\tpassword\n> \n> > Yes, this is more powerful than the environment variable anyway. We\n> > only have to decide how to specify missing fields. Asterisk?\n> \n> Uh, *what* missing fields? It's not clear to me that there's any value\n> in a wild-card entry in this thing.\n\nI think so. What if you password is the same for all hosts? Wouldn't\nyou want:\n\n> >> username\t*\tpassword\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 13:09:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think so. What if you password is the same for all hosts?\n\nWe should encourage that? 
I don't think so ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 13:11:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We can hide it but it will be visible for a short period, and many\n> > operating systems either don't allow us to modify the ps args or have\n> > ways of circumventing custom ps display, i.e. it doesn't show updated ps\n> > display if the process is swapped out because ps can't get to the\n> > user-space definitions of the custom args.\n> \n> Yes, passwords in command-line arguments are *way* too dangerous.\n> \n> I had always thought that environment vars were secure, though, and was\n> surprised to learn that there are Unix variants wherein they're not.\n> \n> I still like the idea of arguments and/or env vars that give the name\n> of a file in which to look for the password, however. Perhaps the file\n> contents could be along the lines of\n> \n> \tusername\thost\tpassword\n> \n> and libpq would look for a line matching the PGUSER and PGHOST values it\n> already has. (compare the usage of .netrc, .cvspass, etc). Maybe there\n> could even be a default assumption that we look in \"$HOME/.pgpass\",\n> without having to be told? Or is that too Unix-centric?\n\nTODO updated:\n\n* Add PGPASSWORDFILE environment variable or ~/.pgpass to store\n user/host/password combinations\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 27 Dec 2001 22:30:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [ppa-dev] Severe bug in debian - phppgadmin opensup" } ]
[ { "msg_contents": "Configuration:\n Windows 2000 Server\n cygwin 2.78.2.9\n PostgreSQL 7.1.3\n psqlODBC 7.1.8\n pgAdmin II 1.1.66\n\nBug:\n Capital letters cannot be used in column names used in foreign key\nconstraints\n\n All Smalls succeeds:\n\n -- Table: significance\n CREATE TABLE \"significance\" (\n \"significanceid\" int4 NOT NULL,\n \"desc\" varchar(255),\n CONSTRAINT \"pk_significance\" PRIMARY KEY (\"significanceid\"));\n\n -- Table: primaryword\n CREATE TABLE \"primaryword\" (\n \"exerciseid\" int4 NOT NULL,\n \"significanceid\" int4 NOT NULL,\n CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n(significanceid) REFERENCES \"significance\" (significanceid) );\n\n With just the foreign table name capitalized, it also succeeds:\n -- Table: Significance\n CREATE TABLE \"Significance\" (\n \"significanceid\" int4 NOT NULL,\n \"desc\" varchar(255),\n CONSTRAINT \"pk_significance\" PRIMARY KEY (\"significanceid\"));\n\n -- Table: primaryword\n CREATE TABLE \"primaryword\" (\n \"exerciseid\" int4 NOT NULL,\n \"significanceid\" int4 NOT NULL,\n CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n(significanceid) REFERENCES \"Significance\" (significanceid) );\n\n Capitalizing just the foreign column name fails with what seems to be an\nincorrect error:\n -- Table: significance\n CREATE TABLE \"significance\" (\n \"Significanceid\" int4 NOT NULL,\n \"desc\" varchar(255),\n CONSTRAINT \"pk_significance\" PRIMARY KEY (\"Significanceid\"));\n\n -- Table: primaryword\n CREATE TABLE \"primaryword\" (\n \"exerciseid\" int4 NOT NULL,\n \"significanceid\" int4 NOT NULL,\n CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n(significanceid) REFERENCES \"significance\" (Significanceid) );\n\n Fails with error\n Description: Error while executing the query;\n Error: UNIQUE 
constraint matching given keys for refernced table\n\"significance\" not found\n\n\n Capitalizing just the child column name fails :\n -- Table: Significance\n CREATE TABLE \"significance\" (\n \"significanceid\" int4 NOT NULL,\n \"desc\" varchar(255),\n CONSTRAINT \"pk_significance\" PRIMARY KEY (\"significanceid\"));\n\n -- Table: primaryword\n CREATE TABLE \"primaryword\" (\n \"exerciseid\" int4 NOT NULL,\n \"Significanceid\" int4 NOT NULL,\n CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n(Significanceid) REFERENCES \"significance\" (significanceid) );\n\n\n With the following error:\n Description: Error while executing the query;\n Error: Columns referenced in foreign key constraint not found\n\n\nI could not get foreign keys to succeed if there were any caps in the column\nnames, although caps in primary key constraints seems to work just fine.\n\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 10:23:02 -0600", "msg_from": "\"Mike Smialek\" <_ike_mialek@hotmail.com>", "msg_from_op": true, "msg_subject": "case sensititvity bug in foreign keys on cygwin" }, { "msg_contents": "\"Mike Smialek\" <_ike_mialek@hotmail.com> writes:\n> Capitalizing just the foreign column name fails with what seems to be an\n> incorrect error:\n> -- Table: significance\n> CREATE TABLE \"significance\" (\n> \"Significanceid\" int4 NOT NULL,\n> \"desc\" varchar(255),\n> CONSTRAINT \"pk_significance\" PRIMARY KEY (\"Significanceid\"));\n\n> -- Table: primaryword\n> CREATE TABLE \"primaryword\" (\n> \"exerciseid\" int4 NOT NULL,\n> \"significanceid\" int4 NOT NULL,\n> CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n> CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n> (significanceid) REFERENCES \"significance\" (Significanceid) );\n ^^^^^^^^^^^^^^\n> Fails with error\n> Description: Error while executing the query;\n> Error: UNIQUE constraint matching given keys for refernced table\n> \"significance\" not 
found\n\nI see no bug here. You didn't quote the foreign key column name, thus\nit got folded to lowercase.\n\nIt might be nice if the error message explicitly identified the key\ncolumns being sought, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Dec 2001 13:32:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: case sensititvity bug in foreign keys on cygwin " }, { "msg_contents": "Mike Smialek wrote:\n> \n> Configuration:\n> Windows 2000 Server\n> cygwin 2.78.2.9\n> PostgreSQL 7.1.3\n> psqlODBC 7.1.8\n> pgAdmin II 1.1.66\n> \n> Bug:\n> Capital letters cannot be used in column names used in foreign key\n> constraints\n> \n> All Smalls succeeds:\n\n[snip]\n\n> Capitalizing just the foreign column name fails with what seems to be an\n> incorrect error:\n> -- Table: significance\n> CREATE TABLE \"significance\" (\n> \"Significanceid\" int4 NOT NULL,\n> \"desc\" varchar(255),\n> CONSTRAINT \"pk_significance\" PRIMARY KEY (\"Significanceid\"));\n> \n> -- Table: primaryword\n> CREATE TABLE \"primaryword\" (\n> \"exerciseid\" int4 NOT NULL,\n> \"significanceid\" int4 NOT NULL,\n> CONSTRAINT \"pk_primaryword\" PRIMARY KEY (\"exerciseid\"),\n> CONSTRAINT \"fk_primaryword_significance\" FOREIGN KEY\n> (significanceid) REFERENCES \"significance\" (Significanceid) );\n\nYou aren't double quoting the column name Significanceid\nin the foreign key contraint clauses. Why ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 04 Dec 2001 09:34:37 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: case sensititvity bug in foreign keys on cygwin" } ]
[ { "msg_contents": "\n> > > This certainly sounds like these two are different projects by the\nsame\n> > > Prof.\n> > \n> > Right.\n> \n> Yes, and that's how I remember it.\n\nIIRC Postgres started out with the same [but extended] query language as\nIngres.\nSo I think they are not completely unrelated.\n\nAndreas\n", "msg_date": "Fri, 30 Nov 2001 17:44:08 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: History question" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> IIRC Postgres started out with the same [but extended] query language as\n> Ingres.\n> So I think they are not completely unrelated.\n\nQUEL and POSTQUEL are the two query languages in question, I believe.\nNever having studied either, I have no clue how closely they are\nrelated.\n\nIn any case, I'm pretty sure that Postgres was a completely new\nimplementation that used none of the Ingres code. The no-overwrite\nstorage management scheme was definitely a new concept in Postgres.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 12:29:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: History question " }, { "msg_contents": "> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > IIRC Postgres started out with the same [but extended] query language as\n> > Ingres.\n> > So I think they are not completely unrelated.\n> \n> QUEL and POSTQUEL are the two query languages in question, I believe.\n> Never having studied either, I have no clue how closely they are\n> related.\n\nThe languages were almost identical, except for the object creation\nextensions.\n\n\n> In any case, I'm pretty sure that Postgres was a completely new\n> implementation that used none of the Ingres code. The no-overwrite\n> storage management scheme was definitely a new concept in Postgres.\n\nNone shared. 
I have the Ingres code here and it is pretty small. \nEntire source is only 1.7 megs uncompressed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 13:03:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: History question" } ]
[ { "msg_contents": "The docs suggest that this:\n\n /* Get number of tuples in relation */\n ret = SPI_exec(\"select count(*) from ttest\", 0);\n\n if (ret < 0)\n elog(NOTICE, \"trigf (fired %s): SPI_exec returned %d\",\nwhen, ret);\n\n i = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc,\n1, &isnull);\n elog (NOTICE, \"trigf(fired %s): there are %d tuples in ttest\",\nwhen, i);\n\n\nWhen it should be:\n\n pi = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc,\n1, &isnull);\n elog (NOTICE, \"trigf(fired %s): there are %d tuples in ttest\",\nwhen, *pi);\n\n\n\n\n", "msg_date": "Fri, 30 Nov 2001 12:09:43 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Docbug, SPI_getbinval, triger example" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> The docs suggest that this:\n> /* Get number of tuples in relation */\n> ret = SPI_exec(\"select count(*) from ttest\", 0);\n\nMph. The example *used* to be right, but is not as of 7.2, because\ncount() now returns int8 which is pass-by-reference. Your proposed\nfix isn't quite right either (you'd have noticed the difference between\n*int and *int8 on a big-endian machine ;-)). Probably we should change\nthe example to read\n\n\ti = (int) DatumGetInt64(SPI_getbinval(...));\n\nAlternatively the example query could be changed to\n\n ret = SPI_exec(\"select count(*)::int4 from ttest\", 0);\n\nso as to avoid the backend version dependency. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 13:03:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Docbug, SPI_getbinval, triger example " } ]
[ { "msg_contents": "I don't seem to get the locale/LIKE warning message during initdb, even\nthough pg_controldata shows a non-C locale being used.\n\nI also don't see an \"invalid LC_COLLATE setting\" when setting LC_COLLATE\nto nonsense, though I don't know if that ever was the case.\n\nWill look further...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 30 Nov 2001 19:11:27 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "initdb + locale problem" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't seem to get the locale/LIKE warning message during initdb, even\n> though pg_controldata shows a non-C locale being used.\n\nI'll bet a nickel that initdb is redirecting the message to /dev/null.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 14:40:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: initdb + locale problem " } ]
[ { "msg_contents": "Now that we've gone through nearly one development cycle with national\nlanguage support, I'd like to bring up a number of issues concerning the\nstyle of the backend error messages that make life difficult, but probably\nnot only for the translators but for users as well. Not all of these are\nstrictly translation issues, but the message catalogs make for a good\noverview of what's going on.\n\nI hope we can work through all of these during the next development\nperiod.\n\nPrefixing messages with command names\n-------------------------------------\n\nFor instance,\n\n| CREATE DATABASE: permission denied\n\nThis \"command: message\" style is typical for command-line programs and\nit's pretty useful there if you run many commands in a pipe. The same\nusefulness could probably be derived if you run many SQL commands in a\nfunction. (Just \"permission denied\" would be very confusing there!)\n\nIf we want to use that style we should make it consistent and we should\nautomate it. Via the \"command tag\" mechanism we already know what command\nwe're executing, so we can automatically prefix all messages that way. It\ncould even be switched off by the user if it's deemed annoying.\n\n\nPrefixing messages with function names\n--------------------------------------\n\nThe function names obviously have no meaning to the user. It is claimed\nthat they allow a developer to locate the place the error was raised\nfaster, but that's only half the truth. Firstly, this whole thing doesn't\nwork if the displayed name of the function was actually passed in from\nelsewhere. Then it takes you three times longer to locate the source\nbecause you *think* you know where it was but it's not there. Secondly,\na developer doesn't have the need to locate every error. 
For instance,\n\n| ExecAppend: rejected due to CHECK constraint foo\n\nThere's no need to debug anything there, it's a perfectly normal use\nsituation.\n\nI think the key here is to distinguish error messages that are perfectly\nnormal user-level events from messages which really represent an \"assert\nfailure, but continue gracefully\", such as\n\n| initGISTstate: numberOfAttributes %d > %d\n\nThe latter could better be written something like\n\n| BETTER_BE_TRUE(index->rd_att->natts > INDEX_MAX_KEYS);\n\nwe could lead to an error message in the style of an assert failure,\nincluding the expression in question and file and line information (and\nbug reporting suggestions). This way the developer doesn't have to write\nany message text at all but still gets much better information to locate\nthe source. (E.g., note that the tested variable isn't even called\n\"numberOfAttributes\".)\n\nThe exact API could be tuned to include some other information such as an\ninformal message, but I think something along these lines needs to be\nworked out.\n\n\nQuoting\n-------\n\nWhich is better:\n\nfunction '%s' not found\nfunction \"%s\" not found\nfunction %s not found\n\nI think two kinds of quotes is looking messy. Personal suggestion:\ndouble quotes.\n\n\nCapitalization and punctuation\n------------------------------\n\nWhich one?\n\nERROR: Permission denied.\nERROR: Permission denied\nERROR: permission denied\n\nI have personally used the GNU-recommended way which is the third, mostly\njust because it is *some* standardized way. I don't have a strong feeling\nabout the initial capitalization, but I'm against the periods except when\nthe message is really a sentence and it contains some other punctuation\n(commas, etc.) 
or the message consists of more than one sentence.\n\n\nGrammatical structure and choice of words\n-----------------------------------------\n\nThere are many others besides the above choices:\n\nERROR: Permission was denied.\nERROR: You don't have permission to do <task>.\nERROR: Permission to do <task> was denied.\nERROR: <task>: Permission denied\n\nIn other cases there's a sea of possibilities:\n\ncouldn't find THING\ncan't find THING\nTHING wasn't found\nunable to locate THING\nlookup of THING failed\nTHING doesn't exist\n\nStrictly speaking, there are at least four different meanings among those\nsix messages, yet they're used mostly randomly.\n\nThere are a number of things to think about here: active vs passive, can\nvs could, complete sentence vs telegram style, use of colons, addressing\nthe user with \"you [cannot...]\".\n\nAnd please let's not have the program talk in the \"I\"-form (\"I have rolled\nback the current transaction ...\").\n\n\n\nMore esoteric discussions are also possible, but I'm going to postpone\nthose. ;-) However, I think it's worth working on this and perhaps\nputting together a \"manual of style\" that applies to all parts of\nPostgreSQL. This would significantly improve the overall perceived\nquality. 
Some projects like KDE, GNU, and GCC have teams that discuss\nthese kinds of things and it's definitely showing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 30 Nov 2001 19:12:16 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Backend error message style issues" }, { "msg_contents": "> And please let's not have the program talk in the \"I\"-form (\"I have rolled\n> back the current transaction ...\").\n\nBut what about my favorite?\n\n\t_bt_getstackbuf: my bits moved right off the end of the world!\n\n:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 13:20:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backend error message style issues" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Now that we've gone through nearly one development cycle with national\n> language support, I'd like to bring up a number of issues concerning the\n> style of the backend error messages that make life difficult, but probably\n> not only for the translators but for users as well. Not all of these are\n> strictly translation issues, but the message catalogs make for a good\n> overview of what's going on.\n\nFor what its worth, Oracle 8 ships with an error.txt file which\ndictates the message standards to which their products comply.\nRoughly:\n\nSize Of Message:\n---------------\n\nCannot exceed 76 characters, even when embedded format specifiers\nare apart of the message. Only \nstart-up and system-dependent messages can exceed 76 characters.\n\nSimple Language:\n---------------\n\nUse non-cryptic messages and overly technical language.\n\nUpper vs. 
Lowercase:\n-------------------\n\nUse uppercase for commands and keywords, lowercase for message\nwording, including the first letter (which agrees with your use,\nPeter).\n\nCommands, Keywords, Parameter Values:\n------------------------------------\n\nWhen possible, give the command, keyword and parameters used in the\nmessage. \n\nBAD: The relation could not be created\nGOOD: CREATE TABLE failed for table \"foo\" because the disk is full\n\nPeriod:\n------\n\nDo not end messages with a period (also agrees with your\nconclusion).\n\nNumbers:\n-------\n\nDon't enclose numbers with special characters. For example:\n\nBAD: rows returned by subquery (3) exceeded limit (1)\nGOOD: the subquery returned 3 rows exceeding the 1 row limit\n\nQuotes:\n------\n\nDon't use single or double quotes to emphasize a text variable or\ncommand\n\nSingle Quotes:\n-------------\n\nNever use them.\n\nDouble Quotes:\n-------------\n\nAlways and only use them to identify database objects. \n\nBAD: Unable to drop table employees\nGOOD: DROP TABLE of \"employees\" failed due to referential integrity\nconstraints\n\nEllipses:\n--------\n\nDon't use them. \n\nBAD: Unable to drop column mascarm.employees.salary\nGOOD: ALTER TABLE was unable to drop the column \"salary\" table\n\"employees\" schema \"mascarm\"\n\nParentheses:\n-----------\n\nAlways and only use parentheses for constraint names\n\nBAD: not null constraint ri_employees violated\nGOOD: not null constraint (ri_employees) violated\n\nBrackets:\n--------\n\nAlways and only used for program arguments\n\nGrammar:\n-------\n\nUse complete sentences whenever possible without the trailing\nperiod. Don't use multiple sentences. Use the active voice. Don't\nuse an aggressive tone.\n\nStyle:\n-----\n\nMake positive suggestions. Explain what is invalid and what is\nvalid.\n\nExample:\n\nBAD: file name invalid\nBETTER: COPY failed because the file name was too long\n\nRoutine names:\n-------------\n\nDo not use routine names in messages. 
Again, agrees with you, Peter.\n\nFWIW, \n\nMike Mascari\nmascarm@mascari.com\n", "msg_date": "Fri, 30 Nov 2001 14:19:09 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Backend error message style issues" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I hope we can work through all of these during the next development\n> period.\n\nToo bad we didn't do it *before* doing a lot of translation work :-(.\n\nYes, I agree that a pass of rationalizing the error messages would be\nuseful. Might want to think about that old bugaboo, error codes,\nwhile we're at it. Also the perennial complaint that \"ERROR\" and\n\"DEBUG\" macros tend to conflict with other things. As long as we're\ngoing to touch many/all of the elog() calls, couldn't we try to clean\nup all these issues?\n\n> Which is better:\n\n> function '%s' not found\n> function \"%s\" not found\n> function %s not found\n\nGiven that 'x' and \"x\" mean very different things in SQL, I think that\nthe first form is just plain wrong when an identifier is involved.\nUnfortunately a lot of older code uses that style. I've tried to use\ndouble quotes in new messages, but have restrained myself from wholesale\nchanges of existing messages.\n\n> More esoteric discussions are also possible, but I'm going to postpone\n> those. ;-) However, I think it's worth working on this and perhaps\n> putting together a \"manual of style\" that applies to all parts of\n> PostgreSQL. This would significantly improve the overall perceived\n> quality.\n\nSounds like a plan to me: put together a style guide first, and then\nmake a pass through the code to try to implement it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 14:49:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backend error message style issues " } ]
[ { "msg_contents": "\nDoes anyone have anything outstanding that holds off an RC1?\n\nSeen alot of changes over the past week, but mostly in the docs ...\nhaven't seen anything that I think is 'major' that would require a Beta4,\nanyone else?\n\n\n", "msg_date": "Fri, 30 Nov 2001 14:58:52 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "RC1 on Monday?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Does anyone have anything outstanding that holds off an RC1?\n> Seen alot of changes over the past week, but mostly in the docs ...\n> haven't seen anything that I think is 'major' that would require a Beta4,\n> anyone else?\n\nThe only thing I'm concerned about is Jan's report from Tuesday:\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1020754\n\nIf that's real it seems like a release-stopper; but he hasn't come back\nwith any more info, not even enough to let other people try to reproduce\nit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 15:32:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday? " }, { "msg_contents": "> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Does anyone have anything outstanding that holds off an RC1?\n> > Seen alot of changes over the past week, but mostly in the docs ...\n> > haven't seen anything that I think is 'major' that would require a Beta4,\n> > anyone else?\n> \n> The only thing I'm concerned about is Jan's report from Tuesday:\n> http://fts.postgresql.org/db/mw/msg.html?mid=1020754\n> \n> If that's real it seems like a release-stopper; but he hasn't come back\n> with any more info, not even enough to let other people try to reproduce\n> it.\n\nHow are we on the ports? We still have BeOS issues, right? 
HPUX looks\nOK, I think, because it was a compiler problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 16:02:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How are we on the ports? We still have BeOS issues, right? HPUX looks\n> OK, I think, because it was a compiler problem.\n\nI think Peter is working on the uint8-configuration issue.\n\nHPUX is okay except we might want to use a different geometry regression\nfile for HPUX 11. Waiting for Conway to get back on that. IRIX\ngeometry may need tweaking too.\n\nWe're apparently not there yet on getting libpq to report socket error\nmessage texts on all flavors of Windows. However, it's no worse than\nit was in 7.1, and better on at least some flavors. I'm willing to call\nit done for 7.2, and see what we can do better in 7.3.\n\nUnresolved report of porting problem on R4000 (from Hiroshi).\n\nSome docs issues still to be cleaned up; the biggest one being that\nThomas still has committed no documentation for the timestamp/interval\nprecision features.\n\nNone of the above look like RC1 stoppers to me, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 16:11:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How are we on the ports? We still have BeOS issues, right? HPUX looks\n> > OK, I think, because it was a compiler problem.\n> \n> I think Peter is working on the uint8-configuration issue.\n> \n> HPUX is okay except we might want to use a different geometry regression\n> file for HPUX 11. 
Waiting for Conway to get back on that. IRIX\n> geometry may need tweaking too.\n> \n> We're apparently not there yet on getting libpq to report socket error\n> message texts on all flavors of Windows. However, it's no worse than\n> it was in 7.1, and better on at least some flavors. I'm willing to call\n> it done for 7.2, and see what we can do better in 7.3.\n> \n> Unresolved report of porting problem on R4000 (from Hiroshi).\n> \n> Some docs issues still to be cleaned up; the biggest one being that\n> Thomas still has committed no documentation for the timestamp/interval\n> precision features.\n> \n> None of the above look like RC1 stoppers to me, though.\n\nWell, seeing as we aren't supposed to be doing _any_ tweeking after RC1\nunless we can help it, it seems these may delay things. \n(Docs/regression we can tweek.) If we can get the BeOS and R4000 stuff\ndone, or decide we don't want to get it done for 7.2, I think we are a\ngo.\n\nI guess that was my issue, that we do have a few ports in flux, and it\nwould be nice to have these all nailed down by Monday, and the\ndocs/regression if possible.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 16:18:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday?" }, { "msg_contents": "...\n> Some docs issues still to be cleaned up; the biggest one being that\n> Thomas still has committed no documentation for the timestamp/interval\n> precision features.\n\nHmm. Thought I had done something on that. Anyway, I'm wading through\nthe docs to prepare hardcopy, which always finds some markup or wording\nchanges too. 
RC1 is not the release though, so it doesn't seem to me to\nbe necessary to wait for.\n\n - Thomas\n", "msg_date": "Sat, 01 Dec 2001 02:56:59 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday?" }, { "msg_contents": "> ...\n> > Some docs issues still to be cleaned up; the biggest one being that\n> > Thomas still has committed no documentation for the timestamp/interval\n> > precision features.\n> \n> Hmm. Thought I had done something on that. Anyway, I'm wading through\n> the docs to prepare hardcopy, which always finds some markup or wording\n> changes too. RC1 is not the release though, so it doesn't seem to me to\n> be necessary to wait for.\n\nYes, I sure thought that was done. Maybe Tom had something else in\nmind.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 30 Nov 2001 22:49:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I sure thought that was done. Maybe Tom had something else in\n> mind.\n\nI'm talking about timestamp(n), interval(n), current_timestamp(n),\netc etc. If that stuff is documented, I don't see where.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 22:56:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday? " }, { "msg_contents": "Tom Lane wrote:\n> \"Marc G. 
Fournier\" <scrappy@hub.org> writes:\n> > Does anyone have anything outstanding that holds off an RC1?\n> > Seen alot of changes over the past week, but mostly in the docs ...\n> > haven't seen anything that I think is 'major' that would require a Beta4,\n> > anyone else?\n>\n> The only thing I'm concerned about is Jan's report from Tuesday:\n> http://fts.postgresql.org/db/mw/msg.html?mid=1020754\n>\n> If that's real it seems like a release-stopper; but he hasn't come back\n> with any more info, not even enough to let other people try to reproduce\n> it.\n\n I was pretty sure that I was awake when I saw it. But I'm not\n able to reproduce it any more. So for the moment I take\n anything back and claim the opposite.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 4 Dec 2001 17:19:00 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: RC1 on Monday?" } ]
[ { "msg_contents": "will do and report...\n\n> \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > OIDJOIN had this...\n> > ! psql: server closed the connection unexpectedly\n> > ! This probably means the server terminated abnormally\n> > ! before or while processing the request.\n> \n> This is bad. There should be a core dump file --- can you provide\n> a debugger backtrace from it?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n", "msg_date": "Fri, 30 Nov 2001 15:27:21 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha " } ]
[ { "msg_contents": "I used the default configuration. just did a ./configure && gmake &&\ngmake check\n\n> > Just tried on Tru64 5.1 using the Compaq C compiler. Failed the\nfloat8,\n> > oidjoin and random tests.\n> \n> The float8 is certainly OK.\n> \n> The random test fails, uh, randomly, should succeed sometimes on your\n> platform, so is probably OK if you try again.\n> \n> But the oidjoin test is certainly broken. What optimization are you\n> compiling with? Does it work better if you turn optimization off??\n> \n> - Thomas\n> \n> \n\n\n", "msg_date": "Fri, 30 Nov 2001 15:28:22 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha" } ]
[ { "msg_contents": "my error, the workstation I was compiling on had max_proc_per_user set\nto 64. Changed to 1024 and now just float8 and random fail. It looks\nlike these errors are ok \n\nJim\n\n\n> will do and report...\n> \n> > \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > > OIDJOIN had this...\n> > > ! psql: server closed the connection unexpectedly\n> > > ! This probably means the server terminated abnormally\n> > > ! before or while processing the request.\n> > \n> > This is bad. There should be a core dump file --- can you provide\n> > a debugger backtrace from it?\n> > \n> > \t\t\tregards, tom lane\n> > \n> > \n> \n> \n> \n\n\n", "msg_date": "Fri, 30 Nov 2001 18:20:22 -0500", "msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>", "msg_from_op": true, "msg_subject": "Re: 7.2beta3 on Digital Alpha " }, { "msg_contents": "\"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> my error, the workstation I was compiling on had max_proc_per_user set\n> to 64.\n\nAh-hah. We probably ought to document that as a possible source of\nfailure in the parallel regression tests. I think the present script\nwill start as many as 20 parallel tests, which means 60 parallel\nprocesses under your userid (a backend, a psql, and probably an owning\nshell process for each psql). Add in the postmaster, your own shell,\na couple random make subprocesses, and you got trouble.\n\n> Changed to 1024 and now just float8 and random fail. It looks\n> like these errors are ok \n\nYou should find that random fails only, um, randomly. If it fails\nevery time then there's a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Nov 2001 18:26:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2beta3 on Digital Alpha " } ]
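Tom's arithmetic above (up to 20 parallel tests at roughly 3 processes each, plus the postmaster, make subprocesses, and shells) can be turned into a quick pre-flight check before running `gmake check`. A minimal sketch in Python; note the per-test figure is the thread's estimate rather than a documented constant, and `resource.RLIMIT_NPROC` is not exposed on every platform:

```python
import resource

def procs_needed(parallel_tests=20, per_test=3, overhead=8):
    """Estimate processes 'gmake check' may run under one userid.

    per_test=3 follows the thread's estimate (a backend, a psql, and
    the shell owning each psql); overhead covers the postmaster, make
    subprocesses, and your own shell.  Rough assumptions, not
    documented constants.
    """
    return parallel_tests * per_test + overhead

need = procs_needed()
try:
    soft, _hard = resource.getrlimit(resource.RLIMIT_NPROC)
except (AttributeError, ValueError):
    soft = None  # RLIMIT_NPROC is not available on every platform

if soft not in (None, resource.RLIM_INFINITY) and soft < need:
    print(f"per-user limit {soft} < ~{need}: parallel regression tests may fail")
else:
    print(f"~{need} processes expected; limit looks adequate")
```

On a system like the Tru64 box in the thread, the equivalent knob is `max_proc_per_user`; the point is only that the limit must comfortably exceed the estimate.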
[ { "msg_contents": "I hope I'm wrong but I just noticed that since 7.0.x the \npostgresql log format has changed to much worse -\nIt now misses such important things as process id ,\ntransaction id, timestamp and query id. \n\nThe two first ones are needed to just make sense of multiple \nbackend interactions - i.e. who is blocking who, which backend \ndid not do commit/rollback, etc\n\nthe last ones are needed for getting statistics about query \nexecution times\n\nOr are there now better ways to get at that information ?\n\n---------------\nHannu\n", "msg_date": "Sat, 01 Dec 2001 13:57:25 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "How to get back 7.0.x log format" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I hope I'm wrong but I just noticed that since 7.0.x the \n> postgresql log format has changed to much worse -\n> It now misses such important things as process id ,\n> transaction id, timestamp and query id. \n\nHave you turned on the appropriate flags in postgresql.conf?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Dec 2001 12:20:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to get back 7.0.x log format " }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>I hope I'm wrong but I just noticed that since 7.0.x the \n>>postgresql log format has changed to much worse -\n>>It now misses such important things as process id ,\n>>transaction id, timestamp and query id. 
\n>>\n>\n>Have you turned on the appropriate flags in postgresql.conf?\n>\nThanks, found most of them now !\n\nI had turned on _all_ debug_* flags that I found there but somehow\nmissed the log_* ones ;(\n\nNow all I am missing is some timing statistics -\n * how long did planning/optimizing take\n * how long did execution take\n\nI guess I will be able to extract most of that info when I enable all of the \nfollowing\n\ndebug_print_query = true\ndebug_print_parse = true\ndebug_print_rewritten = true\ndebug_print_plan = true\n\nBut I will probably still not get the execution end time (or even better \nwhen\nwere first and last tuple delivered).\n\nIs there a way to get these ?\n\nI vaguely remember that 7.0.x logged both \"query start\" and \"query end\" \ntimes, no ?\n\n----------------------------\nHannu\n\n\n", "msg_date": "Sun, 02 Dec 2001 04:06:27 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: How to get back 7.0.x log format" }, { "msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>I vaguely remember that 7.0.x logged both \"query start\" and \"query end\" \n>>times, no ?\n>>\n>\n>I do not think that any logging ability got removed between 7.0 and 7.1.\n>\nIIRC It was possible to determine how long a _certain_ query took in 7.0.x.\n\n>\n>\t\t\tregards, tom lane\n>\nOk, I found the statistics flags too, but how am I to tell which stats \ngo to which query ?\n\nI think that having backend pid in QUERY STATISTICS should be enough to \nhelp out.\n\n2001-12-02 04:12:39 [7369] DEBUG: query: select count(*) from item \ni1,item i2;\n2001-12-02 04:12:43 [7311] DEBUG: query: select count(*) from item;\nQUERY STATISTICS\n! system usage stats:\n! 0.005394 elapsed 0.010000 user 0.000000 system sec\n! [0.040000 user 0.020000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/2 [282/197] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 
0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit \nrate = 100.00%\n! Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\nQUERY STATISTICS\n! system usage stats:\n! 21.127781 elapsed 7.470000 user 0.010000 system sec\n! [7.480000 user 0.050000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 15/6 [267/196] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! postgres usage stats:\n! Shared blocks: 8 read, 0 written, buffer hit \nrate = 99.84%\n! Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\n\nI can't even figure this out by time elapsed, as I don't know when the \nstats were written\n\n-------------------------------\nHannu\n\n\n", "msg_date": "Sun, 02 Dec 2001 04:21:18 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: How to get back 7.0.x log format" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I vaguely remember that 7.0.x logged both \"query start\" and \"query end\" \n> times, no ?\n\nI do not think that any logging ability got removed between 7.0 and 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Dec 2001 21:07:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to get back 7.0.x log format " } ]
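The pid tags in the log excerpt above are exactly what makes interleaved sessions attributable. A small parser sketch, assuming the `timestamp [pid] DEBUG: query:` line shape shown in the quoted log (the shape depends on which log options are enabled, so the regex is an assumption):

```python
import re

# Matches lines like: 2001-12-02 04:12:39 [7369] DEBUG: query: select ...
# Assumed format; whitespace after "DEBUG:" varies, hence \s+.
QUERY_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<pid>\d+)\]\s+DEBUG:\s+query:\s+(?P<sql>.*)$"
)

def queries_by_pid(lines):
    """Group logged queries by backend pid, preserving order per backend."""
    grouped = {}
    for line in lines:
        m = QUERY_RE.match(line)
        if m:
            grouped.setdefault(int(m.group("pid")), []).append(
                (m.group("ts"), m.group("sql"))
            )
    return grouped

sample = [
    "2001-12-02 04:12:39 [7369] DEBUG:  query: select count(*) from item i1, item i2;",
    "2001-12-02 04:12:43 [7311] DEBUG:  query: select count(*) from item;",
]
for pid, queries in queries_by_pid(sample).items():
    print(pid, queries)
```

Matching a later QUERY STATISTICS block back to its query would additionally need the same pid tag on the statistics lines, which is the point Hannu is making above.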
[ { "msg_contents": "My ISP changed my IP address last night. I knew the change was coming,\nbut I didn't realize it would be last night.\n\nI will not start getting email until tonight, and my web pages may be\noffline for a few days. I will let you know when everything is back in\nplace.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 1 Dec 2001 14:24:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "My machine is down" }, { "msg_contents": "> My ISP changed my IP address last night. I knew the change was coming,\n> but I didn't realize it would be last night.\n> \n> I will not start getting email until tonight, and my web pages may be\n> offline for a few days. I will let you know when everything is back in\n> place.\n\nI am back online. My web site is still not reachable until the .us name\nserver updates and DNS timesout. I have updated the TODO list so the\nlinks will work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Dec 2001 10:53:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: My machine is down" } ]
[ { "msg_contents": "If you want to get to the SGML build I do, you can still get to it\nusing:\n\n\thttp://216.55.132.35/main/writings/pgsql/sgml/\n\nAlso, my email is queued up at my ISP so I will not lose any mail.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 1 Dec 2001 16:04:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "My machine is down" } ]
[ { "msg_contents": "I was wondering if there was a way to log the \"explain\" of a query. Not the\nfull plan mind you, just what would be the result of an \"explain.\" That in\nconjunction with query statistics would be quite helpful (and resemble some of\nthe typical logging one can do with Oracle.)\n", "msg_date": "Sun, 02 Dec 2001 01:38:26 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Yet more logging questions" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I was wondering if there was a way to log the \"explain\" of a query.\n\nBeing a NOTICE, it *is* logged.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 02 Dec 2001 10:11:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Yet more logging questions " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I was wondering if there was a way to log the \"explain\" of a query.\n> \n> Being a NOTICE, it *is* logged.\n\nSorry, I didn't make my intentions clear. I mean as a function of normal query\noperation.\n\nThe debug_print_plan is VERY verbose. Is there an option which does the\nequivalent of an EXPLAIN on a submitted query?\n\nFor instance, I would like to be able to do something like this:\n\ndebug_print_query = true\t\ndebug_print_explain = true\t#my own fictional directive\nshow_query_stats = true\n", "msg_date": "Sun, 02 Dec 2001 10:45:13 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Yet more logging questions" } ]
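Tom's point that EXPLAIN output arrives as a NOTICE means the plans already land in the server log and can be filtered back out. A sketch under the assumption that a plan appears as a `NOTICE:  QUERY PLAN:` line, a blank line, the plan text, and a closing blank line (the exact layout varies by release):

```python
def extract_plans(log_lines):
    """Collect EXPLAIN plan blocks from server-log lines.

    Assumed layout: 'NOTICE:  QUERY PLAN:', a blank line, the plan
    text, then a blank line ending the block.
    """
    plans, current = [], None
    for line in log_lines:
        if line.startswith("NOTICE:") and "QUERY PLAN" in line:
            current = []            # start a new plan block
            plans.append(current)
        elif current is not None:
            if line.strip():
                current.append(line.rstrip())
            elif current:           # blank line after plan text closes it
                current = None
    return plans

sample_log = [
    "NOTICE:  QUERY PLAN:",
    "",
    "Seq Scan on item  (cost=0.00..20.00 rows=1000 width=4)",
    "",
    "2001-12-02 10:11:27 [7311] DEBUG:  query: select * from item;",
]
print(extract_plans(sample_log))
```

This gives roughly the effect of the hypothetical `debug_print_explain` directive mlw sketches above, as post-processing rather than a server option.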