[ { "msg_contents": "Despite of advertized support of Unicode to other charset conversion,\nPostgreSQL-7.1 reports that Conversion of UNICODE to KOI8 is not\nsupported. Same for WIN, ALT and other charsets.\n\nAs I found out, it was simply forgotten to add these charsets to list\nof 8-bit charsets which should be converted. May be becouse their maps\nare stored in another directory on ftp.unicode.org (see VENDORS/MicroSoft\nfor cp1251 and cp866 maps, and somewhere else for KOI8-R.TXT. At least all\nthose maps are included in the catdoc distribution)\n\nAttached patch fixes this problem. It adds script UCS_to_cyrillic.pl\ninto src/backend/utils/mb/Unicode directory. Mapping of the PostgreSQL\ncharset names to filenames (as they appear in catdoc distribution, i.e.\nlowercased) is hardcoded into script. It is almost exact copy of\nUCS_to_iso script, with only file and constant names changed.\n\nGenerated maps are included in the patch, as they are included in the\nsource tarball, and maps are omitted, becouse they are removed by\nmake distclean\n\nfile src/backend/mb/conv.c is modified\nto include new maps and provide appropriate conversion functions\n\n\n\n-- \nVictor Wagner\t\t\tvitus@ice.ru\nChief Technical Officer\t\tOffice:7-(095)-748-53-88\nCommuniware.Net \t\tHome: 7-(095)-135-46-61\nhttp://www.communiware.net http://www.ice.ru/~vitus", "msg_date": "Thu, 26 Apr 2001 20:51:25 +0400 (MSD)", "msg_from": "Victor Wagner <vitus@ice.ru>", "msg_from_op": true, "msg_subject": "Cyrillic to UNICODE conversion" }, { "msg_contents": "Thanks for the fixes. 
I have committed your patches and they should\nappear in 7.1.1.\n\nBTW, I have not added cp1251.txt cp866.txt koi8-r.txt, since they\ncome from Unicode.org and are not permitted to re-distribute.\n--\nTatsuo Ishii\n\nFrom: Victor Wagner <vitus@ice.ru>\nSubject: [PATCHES] Cyrillic to UNICODE conversion\nDate: Thu, 26 Apr 2001 20:51:25 +0400 (MSD)\nMessage-ID: <Pine.LNX.4.30.0104262041500.9539-101000@party.ice.ru>\n\n> \n> Despite of advertized support of Unicode to other charset conversion,\n> PostgreSQL-7.1 reports that Conversion of UNICODE to KOI8 is not\n> supported. Same for WIN, ALT and other charsets.\n> \n> As I found out, it was simply forgotten to add these charsets to list\n> of 8-bit charsets which should be converted. May be becouse their maps\n> are stored in another directory on ftp.unicode.org (see VENDORS/MicroSoft\n> for cp1251 and cp866 maps, and somewhere else for KOI8-R.TXT. At least all\n> those maps are included in the catdoc distribution)\n> \n> Attached patch fixes this problem. It adds script UCS_to_cyrillic.pl\n> into src/backend/utils/mb/Unicode directory. Mapping of the PostgreSQL\n> charset names to filenames (as they appear in catdoc distribution, i.e.\n> lowercased) is hardcoded into script. 
It is almost exact copy of\n> UCS_to_iso script, with only file and constant names changed.\n> \n> Generated maps are included in the patch, as they are included in the\n> source tarball, and maps are omitted, becouse they are removed by\n> make distclean\n> \n> file src/backend/mb/conv.c is modified\n> to include new maps and provide appropriate conversion functions\n> \n> \n> \n> -- \n> Victor Wagner\t\t\tvitus@ice.ru\n> Chief Technical Officer\t\tOffice:7-(095)-748-53-88\n> Communiware.Net \t\tHome: 7-(095)-135-46-61\n> http://www.communiware.net http://www.ice.ru/~vitus\n", "msg_date": "Sun, 29 Apr 2001 16:29:52 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Cyrillic to UNICODE conversion" }, { "msg_contents": "On Sun, 29 Apr 2001, Tatsuo Ishii wrote:\n\n> From: Tatsuo Ishii <t-ishii@sra.co.jp>\n> Subject: Re: [PATCHES] Cyrillic to UNICODE conversion\n> X-Mailer: Mew version 1.94.2 on Emacs 20.7 / Mule 4.1\n> [iso-2022-jp] (^[$B0*^[(B)\n>\n> Thanks for the fixes. I have committed your patches and they should\n> appear in 7.1.1.\n>\n> BTW, I have not added cp1251.txt cp866.txt koi8-r.txt, since they\n> come from Unicode.org and are not permitted to re-distribute.\n\nIt is not true for koi8-r.txt. 
At least one which is included into catdoc\ndistribution I've made myself from RFC1483, and only afterward it has\nappear on unicode.org, and Chernov's KOI8 pages.\n\n But anyway, if anybody\nis able to get them from unicode.org, why bother.\n--\nVictor Wagner\t\t\tvitus@ice.ru\nChief Technical Officer\t\tOffice:7-(095)-748-53-88\nCommuniware.Net \t\tHome: 7-(095)-135-46-61\nhttp://www.communiware.net http://www.ice.ru/~vitus\n\n", "msg_date": "Sun, 29 Apr 2001 13:15:07 +0400 (MSD)", "msg_from": "Victor Wagner <vitus@ice.ru>", "msg_from_op": true, "msg_subject": "Re: Cyrillic to UNICODE conversion" }, { "msg_contents": "> > BTW, I have not added cp1251.txt cp866.txt koi8-r.txt, since they\n> > come from Unicode.org and are not permitted to re-distribute.\n> \n> It is not true for koi8-r.txt. At least one which is included into catdoc\n> distribution I've made myself from RFC1483, and only afterward it has\n> appear on unicode.org, and Chernov's KOI8 pages.\n\nOh, I didn't know that.\n\n> But anyway, if anybody\n> is able to get them from unicode.org, why bother.\n\nAgreed.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 29 Apr 2001 19:48:38 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Cyrillic to UNICODE conversion" } ]
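The thread above revolves around the Unicode.org vendor mapping files (KOI8-R.TXT, CP1251.TXT, CP866.TXT) that scripts like UCS_to_cyrillic.pl consume. Those files share a simple layout: two hex columns (charset byte, Unicode code point) with `#` comments. As a rough illustration of that parsing step — the function name and sample lines here are ours, not from the patch — a minimal Python sketch:

```python
def parse_mapping(lines):
    """Parse Unicode.org-style mapping lines (e.g. from KOI8-R.TXT) into
    a {charset_byte: unicode_codepoint} dict, skipping comments and
    entries without a Unicode column."""
    table = {}
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        fields = line.split()
        if len(fields) < 2:                   # unmapped code point
            continue
        code, ucs = (int(f, 16) for f in fields[:2])
        table[code] = ucs
    return table

sample = [
    "# KOI8-R to Unicode (illustrative lines)",
    "0x80\t0x2500\t# BOX DRAWINGS LIGHT HORIZONTAL",
    "0xC1\t0x0430\t# CYRILLIC SMALL LETTER A",
]
print(parse_mapping(sample))
```

A generator script would then emit the resulting pairs as C array initializers for conv.c, which is essentially what the UCS_to_* Perl scripts do.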
[ { "msg_contents": "ok for serials, now i can extract from psql (\\d tablename).\n\nBut i'm not able to extract foreign keys from the schema.\n\n\n\n>From: Joel Burton <jburton@scw.org>\n>To: \"V. M.\" <txian@hotmail.com>\n>CC: pgsql-hackers@postgresql.org\n>Subject: Re: unanswered: Schema Issue\n>Date: Thu, 26 Apr 2001 13:51:26 -0400 (EDT)\n>\n>On Thu, 26 Apr 2001, V. M. wrote:\n>\n> >\n> > I want to extract tables schema information, i've looked at\n> > src/bin/psql/describe.c but i cannot determine the datatype\n> > 'serial' and\n> > 'references' from pg_*, i understand that triggers are generated for\n> > serial\n> > and references, so how i can understand from my perl application the\n> > full\n> > schema ?\n>\n>SERIALs are just integers (int4). They don't use a trigger, but use a\n>sequence\n>as a default value.\n>\n>REFERENCES are not a type of data, but a foreign key/primary key\n>relationship. There's still a data type (int, text, etc.)\n>\n>You can derive schema info from the system catalogs. Use psql with -E for\n>examples, or look in the Developer Manual.\n>\n>HTH,\n>\n>--\n>Joel Burton <jburton@scw.org>\n>Director of Information Systems, Support Center of Washington\n>\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Thu, 26 Apr 2001 20:04:14 +0200", "msg_from": "\"V. M.\" <txian@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: unanswered: Schema Issue" }, { "msg_contents": "On Thu, 26 Apr 2001, V. M. wrote:\n\n> ok for serials, now i can extract from psql (\\d tablename).\n> \n> But i'm not able to extract foreign keys from the schema.\n\nYes you can. 
Read my tutorial on Referential Integrity in the top section\nat techdocs.postgresql.org.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 26 Apr 2001 14:42:31 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: unanswered: Schema Issue" } ]
[ { "msg_contents": "read it,\nbut i can determine only the related tables and not the fields of these \ntables that are related.\n\nvalter\n\n>From: Joel Burton <jburton@scw.org>\n>To: \"V. M.\" <txian@hotmail.com>\n>CC: pgsql-hackers@postgresql.org\n>Subject: [HACKERS] Re: unanswered: Schema Issue\n>Date: Thu, 26 Apr 2001 14:42:31 -0400 (EDT)\n>\n>On Thu, 26 Apr 2001, V. M. wrote:\n>\n> > ok for serials, now i can extract from psql (\\d tablename).\n> >\n> > But i'm not able to extract foreign keys from the schema.\n>\n>Yes you can. Read my tutorial on Referential Integrity in the top section\n>at techdocs.postgresql.org.\n>\n>--\n>Joel Burton <jburton@scw.org>\n>Director of Information Systems, Support Center of Washington\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Thu, 26 Apr 2001 21:12:14 +0200", "msg_from": "\"V. M.\" <txian@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Re: unanswered: Schema Issue" } ]
[ { "msg_contents": "perhaps adding t.tgargs to your view enable me to extract parameters\nthat are the related fields\n---------------------------------------\n\n\nCREATE VIEW dev_ri\n AS\n SELECT ***** t.tgargs **** , t.oid as trigoid,\n c.relname as trig_tbl,\n t.tgfoid,\n f.proname as trigfunc,\n t.tgenabled,\n t.tgconstrname,\n c2.relname as const_tbl,\n t.tgdeferrable,\n t.tginitdeferred\n FROM pg_trigger t,\n pg_class c,\n pg_class c2,\n pg_proc f\n WHERE t.tgrelid=c.oid\n AND t.tgconstrrelid=c2.oid\n AND tgfoid=f.oid\n AND tgname ~ '^RI_'\n ORDER BY t.oid;\n\n\na tgargs example is:\n\nfk_provincie_id_paesi_id_provin\\000paesi\\000province\\000UNSPECIFIED\\000id_provincia\\000id\\000\n\nfirst field (fk_provincie_id_paesi_id_provin) is constraint name, and i can \nunderstand that: paesi(id_provincia) references provincia(id).\n\nvalter\n\n\n\n\n>From: Joel Burton <jburton@scw.org>\n>To: \"V. M.\" <txian@hotmail.com>\n>CC: pgsql-hackers@postgresql.org\n>Subject: [HACKERS] Re: unanswered: Schema Issue\n>Date: Thu, 26 Apr 2001 14:42:31 -0400 (EDT)\n>\n>On Thu, 26 Apr 2001, V. M. wrote:\n>\n> > ok for serials, now i can extract from psql (\\d tablename).\n> >\n> > But i'm not able to extract foreign keys from the schema.\n>\n>Yes you can. Read my tutorial on Referential Integrity in the top section\n>at techdocs.postgresql.org.\n>\n>--\n>Joel Burton <jburton@scw.org>\n>Director of Information Systems, Support Center of Washington\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Thu, 26 Apr 2001 21:24:02 +0200", "msg_from": "\"V. M.\" <txian@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Re: unanswered: Schema Issue" }, { "msg_contents": "On Thu, 26 Apr 2001, V. M. 
wrote:\n\n(moving this conversation back to pgsql-general, followups to there)\n\n> perhaps adding t.tgargs to your view enable me to extract parameters\n> that are the related fields\n\nAt SCW, we use a naming convention for RI triggers, to allow\nus to easily extract that, and deal with error messages.\n\nWe use:\n\nCREATE TABLE p (id INT);\n\nCREATE TABLE c (id INT CONSTRAINT c__ref_id REFERENCES p);\n\nThis allows us at a glance to see in error messages what field of what\ntable we were referencing. In an Access front end, we can trap this\nerror message to a nice statement like \"You're trying to change a value in\nthe table \"c\", using information in table \"p\", \"id\", but...\")\n\nIf you don't have this, yes, you can look at in\nthe tgargs, but, given that its a bytea field, it's hard to\nprogrammatically dig anything out of it.\n\nHTH,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 26 Apr 2001 15:32:47 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: unanswered: Schema Issue" } ]
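The tgargs layout shown in the thread is NUL-separated: constraint name, referencing (FK) table, referenced (PK) table, match type, then alternating FK/PK column names. Using the exact example value quoted above, a sketch of unpacking it programmatically (the function name is ours; this assumes the tgargs bytes decode as plain ASCII, which holds for the example):

```python
def parse_tgargs(raw: bytes):
    """Split an RI trigger's tgargs into (constraint name, FK table,
    PK table, match type, [(fk_column, pk_column), ...])."""
    fields = raw.split(b"\x00")[:-1]     # trailing NUL leaves an empty tail
    fields = [f.decode("ascii") for f in fields]
    constraint, fk_table, pk_table, match = fields[:4]
    cols = fields[4:]
    pairs = list(zip(cols[0::2], cols[1::2]))  # (fk_column, pk_column)
    return constraint, fk_table, pk_table, match, pairs

# The example value from the thread:
raw = (b"fk_provincie_id_paesi_id_provin\x00paesi\x00province\x00"
       b"UNSPECIFIED\x00id_provincia\x00id\x00")
print(parse_tgargs(raw))
```

This recovers the reading given in the thread: paesi(id_provincia) references province(id), which is the information the schema-extraction tool was after.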
[ { "msg_contents": "When I create a table\n\ncreate table test (a bit(4));\n\nand insert a value\n\ninsert into test values (b'1000000001');\n\nthe zpbit_in() function gets an atttypmod (arg 2 (of 2)) of -1. Is there\nsomewhere the system needs to be told that \"this type uses the atttypmod\nfield\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 26 Apr 2001 23:09:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "zpbit_in does not receive correct atttypmod" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> When I create a table\n> create table test (a bit(4));\n> and insert a value\n> insert into test values (b'1000000001');\n> the zpbit_in() function gets an atttypmod (arg 2 (of 2)) of -1. Is there\n> somewhere the system needs to be told that \"this type uses the atttypmod\n> field\"?\n\nNo. I don't believe this is broken, either --- the call is coming from\nmake_const() which has no reason to try to coerce the constant to a\nparticular length. Coercion to the target column width will happen\nlater (quite a bit later), when zpbit() is called. See\ncoerce_type_typmod in parse_coerce.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Apr 2001 18:35:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: zpbit_in does not receive correct atttypmod " } ]
[ { "msg_contents": "> Is there any discussion before I submit the patch to -patches?\n\nSince we can, or should be able to, run postgres as a backend to ldap,\nthis seems to give a wonderfully circular system (which probably works\njust fine). Just a comment...\n\n - Thomas\n", "msg_date": "Thu, 26 Apr 2001 23:03:51 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: PAM Authentication for PostgreSQL..." }, { "msg_contents": "\nA couple {days,weeks} ago, someone sent an email to one of the pgsql-*\nlists asking if anybody had thought about implementing the glue to use PAM\nas authentication method for PostgreSQL. Having thought about being able\nto easily drop in various external authentication agents, I've been\nthinking about using PAM for PostgreSQL for a while... The recent thread\ninspired me, and I have now finished (and tested - imagine that :) the\ncode.\n\nI vaguely remember there were a few points brought up for discussion\nduring the short thread - unfortunately I was unable to find it in the\narchives (the search somehow seems not to be working (anymore)) - and I\ndeleted all but one email - the one from Peter:\n\n> Peter Eisentraut writes:\n> Konstantinos Agouros writes:\n> > I would really like to be able to use external authentication-methods\n> > (the password not the itself) to avoid setting up pass- words.\n> \n> What particular method that does not use passwords are you interested in?\n\nI think is question should be read as \"... to avoid having to set up local\npostgresql passwords.\"\n\n... Imagine the following scenario:\n\nRelatively large enterprise (6000+ employees), where several departments\nhave a need to use databases of various kinds. (Currently, unfortunately,\nall Access Shared filesystem databases... 
Yuk.)\n\nNice shiny PostgreSQL server sitting in the corner with lots of\n(currently) free disk space on it - places where, through ODBC, we could\nstuff the data from all these access databases, and 1) get them off the\nnetwork (and off IPX), and 2) central repository that is easy to back up,\nadministrate, etc...\n\nNow, it would be annoying to have to maintain local passwords for\nPostgreSQL for all of the X number of users who will be having tablespace\non this server. This would be an excellent place for PAM, in cooperation\nwith something like pam_ldap - the module that lets PAM authenticate into\nLDAP (which, in our case, sits on top of NDS, and contains all the\nuser/etc information.)\n\nI have several other examples where this could come in handy (Oddly\nenough, most of them involving LDAP... imagine that. :)\n\n\nIs there any discussion before I submit the patch to -patches?\n\n\n -Dominic\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n\n\n\n", "msg_date": "Thu, 26 Apr 2001 18:37:46 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "PAM Authentication for PostgreSQL..." } ]
[ { "msg_contents": "Hi,\nThere's a report of startup recovery failure in Japan.\nRedo done but ...\nUnfortunately I have no time today.\n\nregards,\nHiroshi Inoue\n\nKAMI wrote:\n> \n> \n> DEBUG: database system shutdown was interrupted at 2001-04-26 22:15:00 JST\n> DEBUG: CheckPoint record at (1, 3923829232)\n> DEBUG: Redo record at (1, 3923829232); Undo record at (0, 0); Shutdown TRUE\n> DEBUG: NextTransactionId: 7473265; NextOid: 2550911\n> DEBUG: database system was not properly shut down; automatic recovery in\n> progress...\n> DEBUG: redo starts at (1, 3923829296)\n> DEBUG: ReadRecord: record with zero len at (1, 3923880136)\n> DEBUG: redo done at (1, 3923880100)\n> FATAL 2: XLogFlush: request is not satisfied\n> postmaster: Startup proc 4228 exited with status 512 - abort\n", "msg_date": "Fri, 27 Apr 2001 13:07:38 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "7.1 startup recovery failure" }, { "msg_contents": "> There's a report of startup recovery failure in Japan.\n> Redo done but ...\n> Unfortunately I have no time today.\n\nPlease ask to start up with wal_debug = 1...\n\nVadim\n\n\n", "msg_date": "Thu, 26 Apr 2001 21:18:40 -0700", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: 7.1 startup recovery failure" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> There's a report of startup recovery failure in Japan.\n>\n>> DEBUG: redo done at (1, 3923880100)\n>> FATAL 2: XLogFlush: request is not satisfied\n>> postmaster: Startup proc 4228 exited with status 512 - abort\n\nIs this person using 7.1 release, or a beta/RC version? 
That looks\njust like the last WAL bug Vadim fixed before final ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Apr 2001 08:46:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1 startup recovery failure " }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> > There's a report of startup recovery failure in Japan.\n> > Redo done but ...\n> > Unfortunately I have no time today.\n> \n> Please ask to start up with wal_debug = 1...\n> \n\nIsn't it very difficult for dbas to leave the\ncorrupted database as it is ?\nISTM we could hardly expect to get the log with\nwal_debug = 1 unless we automatically force the\nlog in case of recovery failures.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 01 May 2001 12:02:32 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: 7.1 startup recovery failure" }, { "msg_contents": "Corrupted or not, after a crash take a snapshot of the data tree\nbefore firing it back up again. 
Doesn't take that much time\n(especially with a netapp filer) and it allows for a virtually\nunlimited number of attempts to solve the trouble or debug.\n\n--\nRod Taylor\n BarChord Entertainment Inc.\n----- Original Message -----\nFrom: \"Hiroshi Inoue\" <Inoue@tpf.co.jp>\nTo: \"Vadim Mikheev\" <vmikheev@sectorbase.com>\nCc: \"pgsql-hackers\" <pgsql-hackers@postgresql.org>\nSent: Monday, April 30, 2001 11:02 PM\nSubject: Re: [HACKERS] 7.1 startup recovery failure\n\n\n> Vadim Mikheev wrote:\n> >\n> > > There's a report of startup recovery failure in Japan.\n> > > Redo done but ...\n> > > Unfortunately I have no time today.\n> >\n> > Please ask to start up with wal_debug = 1...\n> >\n>\n> Isn't it very difficult for dbas to leave the\n> corrupted database as it is ?\n> ISTM we could hardly expect to get the log with\n> wal_debug = 1 unless we automatically force the\n> log in case of recovery failures.\n>\n> regards,\n> Hiroshi Inoue\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n", "msg_date": "Mon, 30 Apr 2001 23:12:06 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: 7.1 startup recovery failure" }, { "msg_contents": "* Rod Taylor <rbt@barchord.com> [010430 22:10] wrote:\n> Corrupted or not, after a crash take a snapshot of the data tree\n> before firing it back up again. Doesn't take that much time\n> (especially with a netapp filer) and it allows for a virtually\n> unlimited number of attempts to solve the trouble or debug.\n> \n\nYou run your database over NFS? They must be made of steel. :)\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! 
http://magazine.daemonnews.org/\n", "msg_date": "Tue, 1 May 2001 03:07:17 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: 7.1 startup recovery failure" } ]
[ { "msg_contents": "How does 7.1 work now with the vacuum and all?\n\nDoes it go for indexes by default, even when i haven't run a vacuum at all?\nDoes vacuum lock up postgres? It says the analyze part shouldn't, but how's\nthat for all of the vacuum?\n\nAn 7.0.3 db we have here we are forced to run vacuum every hour to get an\nacceptable speed, and while doing that vacuum (5-10 minutes) it totaly\nblocks our application that's mucking with the db.\n\nJust curious\n\nMagnus Naeslund\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 06:10:44 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "7.1 vacuum" }, { "msg_contents": "* Magnus Naeslund(f) <mag@fbab.net> [010426 21:17] wrote:\n> How does 7.1 work now with the vacuum and all?\n> \n> Does it go for indexes by default, even when i haven't run a vacuum at all?\n> Does vacuum lock up postgres? It says the analyze part shouldn't, but how's\n> that for all of the vacuum?\n> \n> An 7.0.3 db we have here we are forced to run vacuum every hour to get an\n> acceptable speed, and while doing that vacuum (5-10 minutes) it totaly\n> blocks our application that's mucking with the db.\n\nhttp://people.freebsd.org/~alfred/vacfix/\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nInstead of asking why a piece of software is using \"1970s technology,\"\nstart asking why software is ignoring 30 years of accumulated wisdom.\n", "msg_date": "Thu, 26 Apr 2001 23:54:39 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: 7.1 vacuum" }, { "msg_contents": "Alfred Perlstein wrote:\n> \n> * Magnus Naeslund(f) <mag@fbab.net> [010426 21:17] wrote:\n> > How does 7.1 work now with the vacuum and all?\n> >\n> > Does it go for indexes by default, even when i haven't run a vacuum at all?\n> > Does vacuum lock up postgres? 
It says the analyze part shouldn't, but how's\n> > that for all of the vacuum?\n> >\n> > An 7.0.3 db we have here we are forced to run vacuum every hour to get an\n> > acceptable speed, and while doing that vacuum (5-10 minutes) it totaly\n> > blocks our application that's mucking with the db.\n> \n> http://people.freebsd.org/~alfred/vacfix/\n\nWhat's the deal with vacuum lazy in 7.1? I was looking forward to it. It was\nnever clear whether or not you guys decided to put it in.\n\nIf it is in as a feature, how does one use it?\nIf it is a patch, how does one get it?\nIf it is neither a patch nor an existing feature, has development stopped?\n", "msg_date": "Fri, 27 Apr 2001 08:54:43 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: 7.1 vacuum" }, { "msg_contents": "* mlw <markw@mohawksoft.com> [010427 05:50] wrote:\n> Alfred Perlstein wrote:\n> > \n> > * Magnus Naeslund(f) <mag@fbab.net> [010426 21:17] wrote:\n> > > How does 7.1 work now with the vacuum and all?\n> > >\n> > > Does it go for indexes by default, even when i haven't run a vacuum at all?\n> > > Does vacuum lock up postgres? It says the analyze part shouldn't, but how's\n> > > that for all of the vacuum?\n> > >\n> > > An 7.0.3 db we have here we are forced to run vacuum every hour to get an\n> > > acceptable speed, and while doing that vacuum (5-10 minutes) it totaly\n> > > blocks our application that's mucking with the db.\n> > \n> > http://people.freebsd.org/~alfred/vacfix/\n> \n> What's the deal with vacuum lazy in 7.1? I was looking forward to it. 
It was\n> never clear whether or not you guys decided to put it in.\n> \n> If it is in as a feature, how does one use it?\n> If it is a patch, how does one get it?\n\nIf you actually download and read the enclosed READMEs it's pretty\nclear.\n\n> If it is neither a patch nor an existing feature, has development stopped?\n\nI have no idea, I haven't been tracking postgresql all that much \nsince leaving the place where we contracted that work.\n\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nRepresent yourself, show up at BABUG http://www.babug.org/\n", "msg_date": "Fri, 27 Apr 2001 05:55:17 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: 7.1 vacuum" }, { "msg_contents": "> > What's the deal with vacuum lazy in 7.1? I was looking forward to it. It was\n> > never clear whether or not you guys decided to put it in.\n> > \n> > If it is in as a feature, how does one use it?\n> > If it is a patch, how does one get it?\n> \n> If you actually download and read the enclosed READMEs it's pretty\n> clear.\n> \n> > If it is neither a patch nor an existing feature, has development stopped?\n> \n> I have no idea, I haven't been tracking postgresql all that much \n> since leaving the place where we contracted that work.\n\nVadim never got to merging into the 7.1 tree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 11:28:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: 7.1 vacuum" } ]
[ { "msg_contents": "> If you are familiar with cddb (actually freedb.org) I am taking that data in\n> putting it into postgres. The steps are: (pseudo code)\n> \n> select nextval('cdid_seq');\n> \n> begin;\n> \n> insert into titles (...) values (...);\n> \n> for(i=0; i < tracks; i++)\n> \tinsert into tracks (...) values (...);\n> \n> commit;\n> \n> \n> When running stand alone on my machine, it will hovers around 130 full CDs per\n> second. When I start two processes it drops to fewer than 100 inserts per\n> second. When I add another, it drops even more. The results I posted with\n> pgbench pretty much showed what I was seeing in my program.\n\nThe above is a typical example of an application that will lose performance\nwhen perfomed in parallel as long as the bottleneck is the db. The only way to make \nabove behave better when done in parallel is a \"fragmented\" tracks table. \nThe chance that two concurrent clients insert into the same table file needs to be \nlowered, since above suffers from lock contention. Remember that for the non blocking \nlock PostgreSQL currently uses the fastest possible approach optimized in assembler.\n\nA valid design in PostgreSQL would involve n tracks tables tracks_1 .. tracks_n\na union all view \"tracks\" and some on insert and on update rules. Unfortunalely there\nis currently no way to optimize the select with a select rule, that is based on the given where \nclause. Nor would the optimizer regard any applicable check constraints for the union all\nquery. Thus if you don't have separate disks for the tracks_n's you will loose performance \non select.\n\nWhen not doing the above, your best chance is to tweak the single inserter case,\nsince that will be fastest.\n\nAndreas\n", "msg_date": "Fri, 27 Apr 2001 12:40:23 +0200", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: scaling multiple connections" } ]
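The fragmented-table scheme suggested above (n tables tracks_1 .. tracks_n behind a UNION ALL view) mainly needs a deterministic way to route each insert to a fragment, so that concurrent loaders rarely contend on the same table file. A minimal sketch of such a router — the fragment count, table naming, column names, and the modulo-by-key choice are our assumptions, not from the thread:

```python
N_FRAGMENTS = 4  # number of tracks_1 .. tracks_N tables (assumed)

def fragment_for(cdid: int) -> str:
    """Pick a tracks_* fragment by key, so concurrent loaders spread
    their inserts across distinct table files."""
    return f"tracks_{cdid % N_FRAGMENTS + 1}"

def insert_sql(cdid: int, trackno: int, title: str) -> str:
    # Placeholder-style SQL aimed at the chosen fragment; a real client
    # would pass (cdid, trackno, title) as bound parameters.
    return (f"INSERT INTO {fragment_for(cdid)} "
            f"(cdid, trackno, title) VALUES (%s, %s, %s)")

print(fragment_for(7), fragment_for(8))
```

Keying the routing on the CD id keeps all tracks of one CD in one fragment, so the per-CD insert loop from the original post still runs against a single table inside its transaction.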
[ { "msg_contents": "As mentioned on -hackers, I added the neccesary code to PostgreSQL to\nallow for authentication through PAM... Attached is the patch.\n\n\n -Dominic\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/", "msg_date": "Fri, 27 Apr 2001 10:39:10 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Patch to include PAM support..." }, { "msg_contents": "\nSaved for 7.2 in the patches mailbox.\n\n> \n> As mentioned on -hackers, I added the neccesary code to PostgreSQL to\n> allow for authentication through PAM... Attached is the patch.\n> \n> \n> -Dominic\n> \n> -- \n> Dominic J. Eidson\n> \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n> -------------------------------------------------------------------------------\n> http://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 13:41:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "\nIs there any interest in PAM support? If not, I will reject this patch.\n\n> \n> As mentioned on -hackers, I added the neccesary code to PostgreSQL to\n> allow for authentication through PAM... Attached is the patch.\n> \n> \n> -Dominic\n> \n> -- \n> Dominic J. Eidson\n> \"Baruk Khazad! 
Khazad ai-menu!\" - Gimli\n> -------------------------------------------------------------------------------\n> http://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 11:56:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian writes:\n\n> Is there any interest in PAM support? If not, I will reject this patch.\n\nSure there is.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 12 Jun 2001 18:24:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > Is there any interest in PAM support? If not, I will reject this patch.\n> \n> Sure there is.\n\nOK, care to give a thumbs up on the patch?\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nIt is enabled with a compile-time option.\n\nTom objected to it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 12:27:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there any interest in PAM support? If not, I will reject this patch.\n\nWe have seen multiple requests for PAM support, so there's interest out\nthere. But IIRC, I had some serious concerns about this proposed\nimplementation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 12:32:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there any interest in PAM support? If not, I will reject this patch.\n> \n> We have seen multiple requests for PAM support, so there's interest out\n> there. But IIRC, I had some serious concerns about this proposed\n> implementation.\n\nI know there was concerns about blocking but is that problem any more so\nthan other interfaces we already support?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 12:35:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I know there was concerns about blocking but is that problem any more so\n> than other interfaces we already support?\n\nWe don't need to make it worse. We've already had trouble reports about\npostmaster hangups with broken IDENT servers; PAM will hugely expand the\nscope of potential troubles. Can you say \"denial of service\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 12:44:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... 
" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I know there was concerns about blocking but is that problem any more so\n> > than other interfaces we already support?\n> \n> We don't need to make it worse. We've already had trouble reports about\n> postmaster hangups with broken IDENT servers; PAM will hugely expand the\n> scope of potential troubles. Can you say \"denial of service\"?\n\nDoes it really? You are saying PAM can make \"denial of service\" attacks\neven easier than ident? \n\nIf it is the same risk, I think it is OK, but if it is worse, I see your\npoint. (I don't know much about PAM except it allows authentication.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 12:55:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "On Tue, Jun 12, 2001 at 11:56:12AM -0400, Bruce Momjian allegedly wrote:\n> Is there any interest in PAM support? If not, I will reject this patch.\n\nYeah, I would be interested. It would enable me to pull passwords from\nLDAP, which should enable our helpdesk to support PostgreSQL better\n(change requests for passwords now go via sysadmin, while all other\npasswords (ftp, staging, accounts) are stored in a central repository).\n\nHowever, to me it's more of a handy featured than a critical one.\n\nRegards,\n\nMathijs\n-- \n\"Books constitute capital.\" \n Thomas Jefferson \n", "msg_date": "Tue, 12 Jun 2001 19:09:57 +0200", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." 
}, { "msg_contents": "Bruce Momjian writes:\n\n> OK, care to give a thumbs up on the patch?\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n From static inspection I have some doubts about whether this patch would\noperate correctly. The way it is implemented is that if the backend is\ninstructed to use PAM authentication it pretends to the frontend that\npassword authentication is going on. This would probably work correctly\nif your PAM setup is that you require exactly one password from the user.\nBut if the PAM setup does not require a password (Kerberos, rhosts\nmodules?) it would involve a useless exchange (and possibly prompt) for a\npassword. More importantly, though, if the PAM configuration requires\nmore than one password (perhaps the password is due to be changed), this\nimplementation will fail (to authenticate).\n\nDominic, any comments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 12 Jun 2001 19:12:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "On Tue, 12 Jun 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I know there was concerns about blocking but is that problem any more so\n> > > than other interfaces we already support?\n> > \n> > We don't need to make it worse. We've already had trouble reports about\n> > postmaster hangups with broken IDENT servers; PAM will hugely expand the\n> > scope of potential troubles. Can you say \"denial of service\"?\n> \n> Does it really? You are saying PAM can make \"denial of service\" attacks\n> even easier than ident? \n\nIf anything, then \"possibly as easy as ident\" - but that's a worst case\nscenario. And the reason for that is because they both potentially use\noutside server/services. 
PAM doesn't _have_ to authenticate into external\ndevices, the LDAP example is just an example from my/our situation. You\ncould use PAM to authenticate into the local system password file, and/or\nuse it to create user limits (only 3 connections per user, as an example)\n\n> If it is the same risk, I think it is OK, but if it is worse, I see your\n> point. (I don't know much about PAM except it allows authentication.)\n\nMy apologies if PAM has somehow been equated to \"remote server\nauthentication piece\" - there is a lot more to PAM than the ability to\neasily do remote authentication.\n\nhttp://www.kernel.org/pub/linux/libs/pam/whatispam.html\nhttp://www.kernel.org/pub/linux/libs/pam/FAQ\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Tue, 12 Jun 2001 12:19:59 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "On Tue, 12 Jun 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n> > OK, care to give a thumbs up on the patch?\n> >\n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> From static inspection I have some doubts about whether this patch would\n> operate correctly. The way it is implemented is that if the backend is\n> instructed to use PAM authentication it pretends to the frontend that\n> password authentication is going on. 
This would probably work correctly\n\nCorrect - this was to save code duplication - since the frontend steps for\npassword authentication are the same, whether you're authenticating to\nglobal/pg_pwd, or handing off the username/password processing to PAM.\n\n> if your PAM setup is that you require exactly one password from the user.\n> But if the PAM setup does not require a password (Kerberos, rhosts\n> modules?) it would involve a useless exchange (and possibly prompt) for a\n\nThis works fine - if it doesn't require a password, it won't get to the\n\"password prompt\" step inside the conversation function, and ends up just\nreturning \"success\".\n\n> password. More importantly, though, if the PAM configuration requires\n> more than one password (perhaps the password is due to be changed), this\n> implementation will fail (to authenticate).\n\nTypical use of a database is from a non-interactive interface (script,\napplication, et al), where you aren't given the ability to enter a second\npassword in the first place. Granted, this could be implemented - but my\ngoal was to emulate the existing libpq authentication process (which only\nallows for the transmission of one password for all (the one?) of the\nexisting authentication methods that utilize passwords.\n\nIn all of the other remote authentication pieces that I have worked\nwith/used (radius, tacacs, etc) - if your password is in need to be\nchanged and/or expired - your authentication just fails.\n\n> Dominic, any comments?\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Tue, 12 Jun 2001 12:29:04 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "\"Dominic J. 
Eidson\" <sauron@the-infinite.org> writes:\n> My apologies if PAM has somehow been equated to \"remote server\n> authentication piece\" - there is a lot more to PAM than the abillity to\n> easily do remote authentication.\n\nRight. Part of the reason I'm concerned is that if we support PAM,\nthen we don't *know* exactly what it is we are buying into or which\nauthentication protocol will be used. This doesn't bother me as long\nas any PAM-induced failure is confined to the connection trying to use\na particular PAM auth mechanism. But it does bother me if such a problem\ncan cause denial of service for all clients.\n\nWe have this problem already with IDENT, and we know we need to fix it.\nI'm just saying that we'd better fix it before we add PAM support, not\nafter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 13:40:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> \"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n> > My apologies if PAM has somehow been equated to \"remote server\n> > authentication piece\" - there is a lot more to PAM than the abillity to\n> > easily do remote authentication.\n> \n> Right. Part of the reason I'm concerned is that if we support PAM,\n> then we don't *know* exactly what it is we are buying into or which\n> authentication protocol will be used. This doesn't bother me as long\n> as any PAM-induced failure is confined to the connection trying to use\n> a particular PAM auth mechanism. But it does bother me if such a problem\n> can cause denial of service for all clients.\n> \n> We have this problem already with IDENT, and we know we need to fix it.\n> I'm just saying that we'd better fix it before we add PAM support, not\n> after.\n\nIt is has the same problems as IDENT, and it doesn't add any new\nproblems, and it meets people's needs, why not add it? 
When we get\nIDENT fixed we can fix PAM at the same time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 13:59:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Dominic J. Eidson writes:\n\n> > if your PAM setup is that you require exactly one password from the user.\n> > But if the PAM setup does not require a password (Kerberos, rhosts\n> > modules?) it would involve a useless exchange (and possibly prompt) for a\n>\n> This works fine - if it doesn't require a password, it won't get to the\n> \"password prompt\" step inside the conversation function, and ends up just\n> returning \"success\".\n\nIn the patch I'm looking at, the conversation function doesn't do any\nactual \"prompting\", it looks at the password that has previously been\nobtained by way of the password packet exchange. If no password is\nrequired, the password is never looked at, but still obtained. That by\nitself causes psql to print a password prompt.\n\nPerhaps this could work: In the switch in be_recvauth(), you call the\npam_authenticate() and friends and if the sequence passes you report back\n\"OK\". In the conversation function -- if it gets called -- send a\npassword packet and store the answer packet. 
You might have to play some\ntricks here to obtain the answer packet, though.\n\n> In all of the other remote authentication pieces that I have worked\n> with/used (radius, tacacs, etc) - if your password is in need to be\n> changed and/or expired - your authentication just fails.\n\nAlright.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 12 Jun 2001 20:16:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If it has the same problems as IDENT, and it doesn't add any new\n> problems, and it meets people's needs, why not add it?\n\nBecause (a) it greatly increases the scope of the vulnerability,\nand (b) it adds more code that will need to be rewritten to fix the\nproblem. I want to fix the blocking problem first, then solicit a\nPAM patch that fits into the rewritten postmaster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 14:23:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If it has the same problems as IDENT, and it doesn't add any new\n> > problems, and it meets people's needs, why not add it?\n> \n> Because (a) it greatly increases the scope of the vulnerability,\n\nHow? It is just a new authentication method with the same problems as\nour current ones.\n\n> and (b) it adds more code that will need to be rewritten to fix the\n> problem. I want to fix the blocking problem first, then solicit a\n> PAM patch that fits into the rewritten postmaster.\n\nThis seems to fit into the \"wait for the perfect fix\" solution which I\ndon't think applies here. There is no saying that a PAM patch will even\nbe around once we get the rest working. 
\n\nBasically, we have some people who want it. Now we need to hear from\npeople who don't want it. I have a \"no\" from Tom and a \"yes\" from\n\"Peter E\" (and the author).\n\nWe need more votes to decide.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 14:31:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Because (a) it greatly increases the scope of the vulnerability,\n\n> How? It is just a new authentication method with the same problems as\n> our current ones.\n\nNo, it is not *a* new authentication method, it is an open interface\nthat could be plugged into almost anything. We need the top-level\npostmaster process to be absolutely reliable; plugging into \"almost\nanything\" is not conducive to reliability.\n\nBesides, an hour ago you were ready to reject this patch for lack of\ninterest. Why are you suddenly so eager to ignore the risks and apply\nit anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 14:44:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Because (a) it greatly increases the scope of the vulnerability,\n> \n> > How? It is just a new authentication method with the same problems as\n> > our current ones.\n> \n> No, it is not *a* new authentication method, it is an open interface\n> that could be plugged into almost anything. 
We need the top-level\n> postmaster process to be absolutely reliable; plugging into \"almost\n> anything\" is not conducive to reliability.\n\nBut isn't that the responsibility of the administrator? They are\nalready responsible for the IDENT servers they use. Isn't this the same\nthing.\n\n\n> Besides, an hour ago you were ready to reject this patch for lack of\n> interest. Why are you suddenly so eager to ignore the risks and apply\n> it anyway?\n\nBecause some have now said they want it and I do not see the _new_ risks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 14:57:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But isn't that the responsibility of the administrator? They are\n> already responsible for the IDENT servers they use.\n\nDoesn't stop us from getting questions/complaints when it doesn't work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 15:01:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But isn't that the responsibility of the administrator? They are\n> > already responsible for the IDENT servers they use.\n> \n> Doesn't stop us from getting questions/complaints when it doesn't work.\n\nWe are not getting flooded with IDENT problem reports, are we?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 12 Jun 2001 15:02:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> ... More importantly, though, if the PAM configuration requires\n> more than one password (perhaps the password is due to be changed), this\n> implementation will fail (to authenticate).\n\nI *think* that the FE protocol will support more than one round of\npassword challenge, although given the lack of any way for the PAM\nmodule to direct what prompt is given, that is unlikely to work\npleasantly.\n\nThe larger issue is how a PAM auth method of unknown characteristics\nis going to fit into our existing FE/BE protocol. It would seem to me\nthat a protocol extension will be required. Lying to the frontend about\nwhat is happening is very unlikely to prove workable in the long run.\nWhat if the selected PAM auth method requires the client side to respond\nin some special way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 15:07:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "Tom Lane writes:\n\n> The larger issue is how a PAM auth method of unknown characteristics\n> is going to fit into our existing FE/BE protocol. It would seem to me\n> that a protocol extension will be required. Lying to the frontend about\n> what is happening is very unlikely to prove workable in the long run.\n> What if the selected PAM auth method requires the client side to respond\n> in some special way?\n\nThe interaction that a PAM stack can initiate is limited to prompting for\none or more values and getting strings as an answer. 
The PAM-using\napplication registers a \"conversation function\" callback, which is\nresponsible for issuing the prompt and getting at the data in an\napplication-specific manner. Ideally, the libpq protocol and API would be\nextended to support this generality, but based on Dominic's comments the\npassword exchange would work to support the useful subset of this\nfunctionality without any protocol or API changes.\n\nMost of the time, PAM is used as a wrapper around some password database\nlike NIS or LDAP (or maybe even PostgreSQL).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 12 Jun 2001 22:02:52 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The interaction that a PAM stack can initiate is limited to prompting for\n> one or more values and getting strings as an answer.\n\nWe could do that full-up, if only the FE/BE protocol included a prompt\nstring in the outgoing password request. However, given the difficulty\nof reprogramming clients to cope with multiple password challenges,\nyou're probably right that handling the single-password case without\nany protocol or client API change is the wiser course.\n\nHowever, I'm still quite concerned about letting the postmaster ignore\nits other clients while it's executing a PAM auth cycle that will\ninvoke who-knows-what processing. What's your take on that point?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Jun 2001 16:26:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> The interaction that a PAM stack can initiate is limited to prompting for\n> one or more values and getting strings as an answer. 
The PAM-using\n> application registers a \"conversation function\" callback, which is\n> responsible for issuing the prompt and getting at the data in an\n> application-specific manner. Ideally, the libpq protocol and API would be\n> extended to support this generality, but based on Dominic's comments the\n> password exchange would work to support the useful subset of this\n> functionality without any protocol or API changes.\n> \n> Most of the time, PAM is used as a wrapper around some password database\n> like NIS or LDAP (or maybe even PostgreSQL).\n\nWe now have enough \"yes\" votes to apply this patch. I will give another\nday for comments on the patch's contents.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Jun 2001 10:08:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Bruce Momjian writes:\n\n> This seems to fit into the \"wait for the perfect fix\" solution which I\n> don't think applies here. There is no saying that a PAM patch will even\n> be around once we get the rest working.\n>\n> Basically, we have some people who want it. Now we need to hear from\n> people who don't want it. I have a \"no\" from Tom and a \"yes\" from\n> \"Peter E\" (and the author).\n\nNot in the current form.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 13 Jun 2001 17:18:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Basically, we have some people who want it. Now we need to hear from\n>> people who don't want it. 
I have a \"no\" from Tom and a \"yes\" from\n>> \"Peter E\" (and the author).\n\n> Not in the current form.\n\nI think Peter's main objection was that it'd always prompt for a\npassword whether needed or not.\n\nCould we change the PAM code so that it tries to run the PAM auth cycle\nimmediately on receipt of a connection request? If it gets a callback\nfor a password, it abandons the PAM conversation, sends off a password\nrequest packet, and then tries again when the password comes back.\n\nOf course, this would be hugely simpler if the work were being done in\na dedicated forked child of the postmaster ;-) ;-) ... just send the\nrequest packet when PAM asks for a password, and sleep till it comes\nback.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jun 2001 11:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support... " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Basically, we have some people who want it. Now we need to hear from\n> >> people who don't want it. I have a \"no\" from Tom and a \"yes\" from\n> >> \"Peter E\" (and the author).\n> \n> > Not in the current form.\n> \n> I think Peter's main objection was that it'd always prompt for a\n> password whether needed or not.\n\nOK, let's let Dominic work on that. Now that there is a strong chance\nthat the patch will be applied, I am sure he can try it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Jun 2001 11:33:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Basically, we have some people who want it. 
Now we need to hear from\n> >> people who don't want it. I have a \"no\" from Tom and a \"yes\" from\n> >> \"Peter E\" (and the author).\n> \n> > Not in the current form.\n> \n> I think Peter's main objection was that it'd always prompt for a\n> password whether needed or not.\n\nI am on IRC with the author now and he is working on it. It is actually\npretty nice to be on IRC while working on a patch because you can ask\nquestions and stuff. Channel #postgresql on EfNet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 13 Jun 2001 11:54:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to include PAM support..." }, { "msg_contents": "On Wed, 13 Jun 2001, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Basically, we have some people who want it. Now we need to hear from\n> >> people who don't want it. I have a \"no\" from Tom and a \"yes\" from\n> >> \"Peter E\" (and the author).\n> \n> > Not in the current form.\n> \n> I think Peter's main objection was that it'd always prompt for a\n> password whether needed or not.\n\nOkay, after many months of lurking, I've finally set aside some time this\nlast week to actually finish up the code. (It's been mostly-merged/working\nsince about a week after Tom sent the mail I'm replying to - but then my\nemployer decided it would be good for us (read: me) to finish working on a\nproject which has consumed 99% of any programming motivation I could\nmuster.\n\n> Could we change the PAM code so that it tries to run the PAM auth cycle\n> immediately on receipt of a connection request? 
If it gets a callback\n> for a password, it abandons the PAM conversation, sends off a password\n> request packet, and then tries again when the password comes back.\n\nI am attempting to do this in a way that's relatively elegant, and the\ncode should get sent to -patches tomorrow sometime, after I've had time\nto do some testing.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Fri, 24 Aug 2001 20:46:07 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Patch to include PAM support... " }, { "msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n>> Could we change the PAM code so that it tries to run the PAM auth cycle\n>> immediately on receipt of a connection request? If it gets a callback\n>> for a password, it abandons the PAM conversation, sends off a password\n>> request packet, and then tries again when the password comes back.\n\n> I am attempting to do this in a way that's relatively elegant, and the\n> code should get sent to -patches tomorrow sometime, after I've had time\n> to do some testing.\n\nI think that the main objection to the original form of the PAM patch\nwas that it would lock up the postmaster until the client responded.\nHowever, that is *not* a concern any longer, since the current code\nforks first and authenticates after. Accordingly, you shouldn't be\ncomplexifying the PAM code to avoid waits.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Aug 2001 00:47:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] Patch to include PAM support... " }, { "msg_contents": "On Sat, 25 Aug 2001, Tom Lane wrote:\n\n> \"Dominic J. 
Eidson\" <sauron@the-infinite.org> writes:\n> >> Could we change the PAM code so that it tries to run the PAM auth cycle\n> >> immediately on receipt of a connection request? If it gets a callback\n> >> for a password, it abandons the PAM conversation, sends off a password\n> >> request packet, and then tries again when the password comes back.\n> \n> > I am attempting to do this in a way that's relatively elegant, and the\n> > code should get sent to -patches tomorrow sometime , after I've had time\n> > to do some testing.\n> \n> I think that the main objection to the original form of the PAM patch\n> was that it would lock up the postmaster until the client responded.\n> However, that is *not* a concern any longer, since the current code\n> forks first and authenticates after. Accordingly, you shouldn't be\n> complexifying the PAM code to avoid waits.\n\nThe complexity comes from getting PAM to only send a password request to\nthe frontend if the PAM authentication needs a password, and not\notherwise. As I'd mentioned to Bruce before, I think PAM authentication\nshould be treated like password authentication - if there's a potential\nthat a password might be required, request a password, whether it's needed\nor not. But PeterE asked that it only request a password if a password is\nneeded, so I'm fighting to get it to do exactly that.\n\n(I already knew auth is done in the backend, and therefor can be blocking :)\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Sat, 25 Aug 2001 00:16:49 -0500 (CDT)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] Patch to include PAM support... " } ]
[ { "msg_contents": "Sorry for the delay in the response. It took be a day to get \neverything upgraded to 7.1. To restate the problem - in 7.0 with \nGEQO enabled, a 15-way join took 10 seconds. With GEQO disabled it \ntook 18 seconds. 7.1 out of the box took only 2 seconds! I was amazed \nand shocked at this damned impressive improvement in planning \nspeed....until I actually used the explicit JOIN syntax described in \n11.2. Instanteous results! Instantaneous.....\n\nThanks a bunch,\n(still in shock)\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tTom Lane [SMTP:tgl@sss.pgh.pa.us]\nSent:\tWednesday, April 25, 2001 12:42 PM\nTo:\tmascarm@mascari.com\nCc:\t'pgsql-hackers@postgresql.org'\nSubject:\tRe: [HACKERS] Any optimizations to the join code in 7.1?\n\nMike Mascari <mascarm@mascari.com> writes:\n> I have a particular query which performs a 15-way join;\n\nYou should read\nhttp://www.postgresql.org/devel-corner/docs/postgres/explicit-join \ns.html\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Apr 2001 11:46:47 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Any optimizations to the join code in 7.1? " }, { "msg_contents": "\nYou can thank Tom Lane for most/all of our optimization improvements.\n\n> Sorry for the delay in the response. It took be a day to get \n> everything upgraded to 7.1. To restate the problem - in 7.0 with \n> GEQO enabled, a 15-way join took 10 seconds. With GEQO disabled it \n> took 18 seconds. 7.1 out of the box took only 2 seconds! I was amazed \n> and shocked at this damned impressive improvement in planning \n> speed....until I actually used the explicit JOIN syntax described in \n> 11.2. Instanteous results! 
Instantaneous.....\n> \n> Thanks a bunch,\n> (still in shock)\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> -----Original Message-----\n> From:\tTom Lane [SMTP:tgl@sss.pgh.pa.us]\n> Sent:\tWednesday, April 25, 2001 12:42 PM\n> To:\tmascarm@mascari.com\n> Cc:\t'pgsql-hackers@postgresql.org'\n> Subject:\tRe: [HACKERS] Any optimizations to the join code in 7.1?\n> \n> Mike Mascari <mascarm@mascari.com> writes:\n> > I have a particular query which performs a 15-way join;\n> \n> You should read\n> http://www.postgresql.org/devel-corner/docs/postgres/explicit-join \n> s.html\n> \n> \t\t\tregards, tom lane\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 13:42:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Any optimizations to the join code in 7.1?" }, { "msg_contents": "> ... 7.1 out of the box took only 2 seconds! I was amazed\n> and shocked at this damned impressive improvement in planning\n> speed....until I actually used the explicit JOIN syntax described in\n> 11.2. Instanteous results! Instantaneous.....\n\nBut it is possible, under many circumstances, for query optimization to\nbe a benefit for a many-table query. The docs indicate that explicit\njoin syntax bypasses that, even for inner joins, so you may find that\nthis syntax is a net loss in performance depending on the query and your\nchoice of table order.\n\nPresumably we will be interested in making these two forms of inner join\nequivalent in behavior in a future release. 
Tom, what are the\nimpediments we might encounter in doing this?\n\n - Thomas\n", "msg_date": "Sat, 28 Apr 2001 01:49:09 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Any optimizations to the join code in 7.1?" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> But it is possible, under many circumstances, for query optimization to\n> be a benefit for a many-table query. The docs indicate that explicit\n> join syntax bypasses that, even for inner joins, so you may find that\n> this syntax is a net loss in performance depending on the query and your\n> choice of table order.\n\n> Presumably we will be interested in making these two forms of inner join\n> equivalent in behavior in a future release. Tom, what are the\n> impediments we might encounter in doing this?\n\nI don't think there are any real technical problems in the way; it's\nsimply an implementation choice not to treat INNER JOIN the same as an\nimplicit join list. I did it that way in 7.1 mainly as a flyer, to see\nhow many people would think it's a feature vs. how many think it's a\nbug. The votes aren't all in yet, but here we have Mike apparently\npretty pleased with it, while I recall at least one other person who\nwas not happy with the 7.1 behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 11:22:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Any optimizations to the join code in 7.1? " } ]
[ { "msg_contents": "Hi,\n\nFirstly, the attached patch implements archiving of off-\nline redo logs, via the wal_archive_dir GUC option. It\nbuilds and appears to work (though it looks like guc-file.l\nhas some problems with unquoted strings containing slashes).\n\n\nTODO: handle EXDEV from link/rename, and copy rather\nthan renaming.\n\n\nClearly this isn't a lot of use at the moment, but what I'd\nreally like would be a way to implement what our (Oracle)\nDBA calls \"managed recovery\".\n\nEssentially, the standby database is opened in read-only\nmode (since PG seems to lack this, having it not open at\nall should suffice :). and archived redo logs are copied\nover from the live database (we do it via rsync, every 5\nminutes) and rolled forward.\n\n(Note: for what it's worth, we're using this because\nOracle's Advanced Replication is too unstable.)\n\n\nIs there an easy way to do this? I suppose that while\nthere isn't a readonly option, it might be best done with\nan external tool, not unlike resetxlog.\n\nWhat are the plans for replication in 7.2 (assuming that\nis what's next)? The rserv stuff looks neat, but rather\nintricate. A cheap, out-of-band replication system would\nmake me very happy.\n\nMatthew.", "msg_date": "Fri, 27 Apr 2001 17:03:00 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Archived redo logs / Managed recovery mode?" } ]
[ { "msg_contents": "\nMorning all ...\n\n\tI'm going to do a broader announcement in a couple of days, but\nOleg and his gang have just finished setting up their Mailing List\nSearching software ...\n\n\tIf you go to fts.postgresql.org, it is like night->day as far as\nthe old searching is concerned ...\n\n\tWe have some more configuration work to do on it, to improve\nperformance, but if anyone has used the old interface, performance now is\nsuch that even without much tuning on the backend, you no longer have time\nfor coffee between searches :)\n\n\tTry it, let us know of any bugs, and we'll do a bigger\nannouncement in a couple of days to the rest of the community ...\n\n\tVince, can you fix the search links to point to this, as far as\nthe mailing list searches are concerned? docs are still in udmsearch for\nnow ...\n\n\t*Major* thanks to Oleg and his group for making this available\nto the community ... now searching is a useful function :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 27 Apr 2001 13:44:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "The new, the improved ... 
FTS Searching of Mailing List Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, The Hermit Hacker wrote:\n\n>\n> Morning all ...\n>\n> \tI'm going to do a broader announcement in a couple of days, but\n> Oleg and his gang have just finished setting up their Mailing List\n> Searching software ...\n>\n> \tIf you go to fts.postgresql.org, it is like night->day as far as\n> the old searching is concerned ...\n>\n> \tWe have some more configuration work to do on it, to improve\n> performance, but if anyone has used the old interface, performance now is\n> such that even without much tuning on the backend, you no longer have time\n> for coffee between searches :)\n>\n> \tTry it, let us know of any bugs, and we'll do a bigger\n> announcement in a couple of days to the rest of the community ...\n>\n> \tVince, can you fix the search links to point to this, as far as\n> the mailing list searches are concerned? docs are still in udmsearch for\n> now ...\n>\n> \t*Major* thanks to Oleg and his group for making this available\n> to the community ... now searching is a useful function :)\n\nIt *is* alot quicker. I did a search for \"scrappy\" on All Lists and\nit came back in 0.151 secs. But it only found 104 matches, have you\nbeen that quiet Marc?\n\nI'll add it over the weekend.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 12:55:30 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: The new, the improved ... 
FTS Searching of Mailing List Archives" }, { "msg_contents": "\nActually, default appears to be the last month worth of messages ... check\nyour date range :)\n\n\nOn Fri, 27 Apr 2001, Vince Vielhaber wrote:\n\n> On Fri, 27 Apr 2001, The Hermit Hacker wrote:\n>\n> >\n> > Morning all ...\n> >\n> > \tI'm going to do a broader announcement in a couple of days, but\n> > Oleg and his gang have just finished setting up their Mailing List\n> > Searching software ...\n> >\n> > \tIf you go to fts.postgresql.org, it is like night->day as far as\n> > the old searching is concerned ...\n> >\n> > \tWe have some more configuration work to do on it, to improve\n> > performance, but if anyone has used the old interface, performance now is\n> > such that even without much tuning on the backend, you no longer have time\n> > for coffee between searches :)\n> >\n> > \tTry it, let us know of any bugs, and we'll do a bigger\n> > announcement in a couple of days to the rest of the community ...\n> >\n> > \tVince, can you fix the search links to point to this, as far as\n> > the mailing list searches are concerned? docs are still in udmsearch for\n> > now ...\n> >\n> > \t*Major* thanks to Oleg and his group for making this available\n> > to the community ... now searching is a useful function :)\n>\n> It *is* alot quicker. I did a search for \"scrappy\" on All Lists and\n> it came back in 0.151 secs. But it only found 104 matches, have you\n> been that quiet Marc?\n>\n> I'll add it over the weekend.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 27 Apr 2001 14:07:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, The Hermit Hacker wrote:\n\n>\n> Actually, default appears to be the last month worth of messages ... check\n> your date range :)\n\nI did, I just find it hard to believe that *you* of all people were\nthat quiet!  I did some other searches since then for things like 7.1\nwhere I knew I'd get alot of hits.  Very impressive speeds!\n\n\n>\n>\n> On Fri, 27 Apr 2001, Vince Vielhaber wrote:\n>\n> > On Fri, 27 Apr 2001, The Hermit Hacker wrote:\n> >\n> > >\n> > > Morning all ...\n> > >\n> > > \tI'm going to do a broader announcement in a couple of days, but\n> > > Oleg and his gang have just finished setting up their Mailing List\n> > > Searching software ...\n> > >\n> > > \tIf you go to fts.postgresql.org, it is like night->day as far as\n> > > the old searching is concerned ...\n> > >\n> > > \tWe have some more configuration work to do on it, to improve\n> > > performance, but if anyone has used the old interface, performance now is\n> > > such that even without much tuning on the backend, you no longer have time\n> > > for coffee between searches :)\n> > >\n> > > \tTry it, let us know of any bugs, and we'll do a bigger\n> > > announcement in a couple of days to the rest of the community ...\n> > >\n> > > \tVince, can you fix the search links to point to this, as far as\n> > > the mailing list searches are concerned? docs are still in udmsearch for\n> > > now ...\n> > >\n> > > \t*Major* thanks to Oleg and his group for making this available\n> > > to the community ... now searching is a useful function :)\n> >\n> > It *is* alot quicker. 
I did a search for \"scrappy\" on All Lists and\n> > it came back in 0.151 secs.  But it only found 104 matches, have you\n> > been that quiet Marc?\n> >\n> > I'll add it over the weekend.\n> >\n> > Vince.\n> > --\n> > ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> > 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> > Online Campground Directory http://www.camping-usa.com\n> > Online Giftshop Superstore http://www.cloudninegifts.com\n> > ==========================================================================\n> >\n> >\n> >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 13:14:17 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, Vince Vielhaber wrote:\n\n> It *is* alot quicker. I did a search for \"scrappy\" on All Lists and\n> it came back in 0.151 secs. 
But it only found 104 matches, have you\n> been that quiet Marc?\n\nI got 3604 messages for the period from 1995 to now.\n      ^^^^^\n\tOleg\n\n>\n> I'll add it over the weekend.\n>\n> Vince.\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 27 Apr 2001 20:24:59 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Re: The new, the improved ... FTS Searching of Mailing\n\tList Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, The Hermit Hacker wrote:\n\n>\n> Morning all ...\n>\n> \tI'm going to do a broader announcement in a couple of days, but\n> Oleg and his gang have just finished setting up their Mailing List\n\nAll work was done by Teodor Sigaev (teodor@stack.net) and me (oleg@sai.msu.su)\nas real-life application which utilize our work on GiST for 7.1\nThere is a room for improving performance - partial sorting\nand better support of gist indexes by optimizer. We hope core developers\ncould help us.\n\n>\n> \t*Major* thanks to Oleg and his group for making this available\n> to the community ... now searching is a useful function :)\n\nWe need to write some doc, FAQ. Also, we plan to add indexing of\npostgres documentation, so people could search in mailing list archive\nand docs.\n\n>\n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 27 Apr 2001 20:36:43 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "> \tVince, can you fix the search links to point to this, as far as\n> the mailing list searches are concerned? docs are still in udmsearch for\n> now ...\n> \n> \t*Major* thanks to Oleg and his group for making this available\n> to the community ... now searching is a useful function :)\n\nAnd something to add to 7.2!\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 13:44:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, Bruce Momjian wrote:\n\n> > \tVince, can you fix the search links to point to this, as far as\n> > the mailing list searches are concerned? docs are still in udmsearch for\n> > now ...\n> >\n> > \t*Major* thanks to Oleg and his group for making this available\n> > to the community ... 
now searching is a useful function :)\n>\n> And something to add to 7.2!\n\nHuh?  *raised eyebrow*  This is a standalone application that they've\ndonated to the project ... nothing that can be added to any of our\ndistributions ...\n\n", "msg_date": "Fri, 27 Apr 2001 15:11:45 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "> On Fri, 27 Apr 2001, Bruce Momjian wrote:\n> \n> > > \tVince, can you fix the search links to point to this, as far as\n> > > the mailing list searches are concerned? docs are still in udmsearch for\n> > > now ...\n> > >\n> > > \t*Major* thanks to Oleg and his group for making this available\n> > > to the community ... now searching is a useful function :)\n> >\n> > And something to add to 7.2!\n> \n> Huh?  *raised eyebrow*  This is a standalone application that they've\n> donated to the project ... nothing that can be added to any of our\n> distributions ...\n\nIsn't the text indexing something that can go into the distribution?\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 14:20:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "On Fri, 27 Apr 2001, Bruce Momjian wrote:\n\n> > On Fri, 27 Apr 2001, Bruce Momjian wrote:\n> >\n> > > > \tVince, can you fix the search links to point to this, as far as\n> > > > the mailing list searches are concerned? docs are still in udmsearch for\n> > > > now ...\n> > > >\n> > > > \t*Major* thanks to Oleg and his group for making this available\n> > > > to the community ... 
now searching is a useful function :)\n> > >\n> > > And something to add to 7.2!\n> >\n> > Huh?  *raised eyebrow*  This is a standalone application that they've\n> > donated to the project ... nothing that can be added to any of our\n> > distributions ...\n>\n> Isn't the text indexing something that can go into the distribution?\n\nto the best of my knowledge, everything they had for public consumption\nwas added to v7.1, but Oleg would be better for that ... to get\nfts.postgresql.org, there was nothing special I had to do as far as the\nbackend was concerned *shrug*\n\n\n", "msg_date": "Fri, 27 Apr 2001 15:44:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "At 03:44 PM 27-04-2001 -0300, The Hermit Hacker wrote:\n>On Fri, 27 Apr 2001, Bruce Momjian wrote:\n>\n>> > On Fri, 27 Apr 2001, Bruce Momjian wrote:\n>> >\n>> > Huh?  *raised eyebrow*  This is a standalone application that they've\n>> > donated to the project ... nothing that can be added to any of our\n>> > distributions ...\n>>\n>> Isn't the text indexing something that can go into the distribution?\n>\n>to the best of my knowledge, everything they had for public consumption\n>was added to v7.1, but Oleg would be better for that ... to get\n>fts.postgresql.org, there was nothing special I had to do as far as the\n>backend was concerned *shrug*\n\n<featurerequest>\nWell if stuff like that ends up in Postgresql would it be possible to index\nLIKE '%xxx%' searches? That way all people have to do is create the\nrelevant index and use a fts_ops or something, and voila LIKE '%xxx%'\nsearches become faster, with maybe some performance+disk space hit for\ninserts.\n\nWould something like that be difficult to implement? I'm not sure how\nfunction+fts index would work either.\n\nI hope FTS for postgresql doesn't start looking like Oracle's\nContext/Intermedia... 
Proprietary interfaces == \"lock in\" == \"ick\".\n</featurerequest>\n\nCheerio,\nLink.\n\n", "msg_date": "Sat, 28 Apr 2001 09:29:35 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: The new, the improved ... FTS Searching of Mailing List Archives" }, { "msg_contents": "> <featurerequest>\n> Well if stuff like that ends up in Postgresql would it be possible to index\n> LIKE '%xxx%' searches? That way all people have to do is create the\n> relevant index and use a fts_ops or something, and voila LIKE '%xxx%'\n> searches become faster, with maybe some performance+disk space hit for\n> inserts.\n> \n> Would something like that be difficult to implement? I'm not sure how\n> function+fts index would work either.\n> \n> I hope FTS for postgresql doesn't start looking like Oracle's\n> Context/Intermedia... Proprietary interfaces == \"lock in\" == \"ick\".\n> </featurerequest>\n\nThis is what I was hoping...  Something to make it automatic.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 21:45:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: The new, the improved ... FTS Searching of Mailing\n\tList Archives" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > I hope FTS for postgresql doesn't start looking like Oracle's\n> > Context/Intermedia... Proprietary interfaces == \"lock in\" == \"ick\".\n> > </featurerequest>\n> \n> This is what I was hoping...  Something to make it automatic.\n> \n\nWell, I would love to see full text indexing with some understanding of the\nlanguage in question as well - I believe this is what Oracles product is\nabout and not just simple regexp matching. I want to be able to do regexps,\nsynonyms, soundex, etc. 
And I guess there is no standard for specifying\nthese different kind of searches. \n\nregards, \n\n\tGunnar \n", "msg_date": "03 May 2001 16:55:29 +0200", "msg_from": "Gunnar R|nning <gunnar@candleweb.no>", "msg_from_op": false, "msg_subject": "Re: Re: The new,\n\tthe improved ... FTS Searching of Mailing List Archives" } ]
[ { "msg_contents": "> > There's a report of startup recovery failure in Japan.\n> >\n> >> DEBUG: redo done at (1, 3923880100)\n> >> FATAL 2: XLogFlush: request is not satisfied\n> >> postmaster: Startup proc 4228 exited with status 512 - abort\n> \n> Is this person using 7.1 release, or a beta/RC version? That looks\n> just like the last WAL bug Vadim fixed before final ...\n\nNo, it doesn't. That bug was related to cases when there is no room\non last log page for startup checkpoint. ~5k is free in this case.\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 10:00:30 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: 7.1 startup recovery failure " }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > There's a report of startup recovery failure in Japan.\n> > >\n> > >> DEBUG: redo done at (1, 3923880100)\n> > >> FATAL 2: XLogFlush: request is not satisfied\n> > >> postmaster: Startup proc 4228 exited with status 512 - abort\n> >\n> > Is this person using 7.1 release, or a beta/RC version? That looks\n> > just like the last WAL bug Vadim fixed before final ...\n> \n> No, it doesn't. That bug was related to cases when there is no room\n> on last log page for startup checkpoint. ~5k is free in this case.\n> \n\nI haven't gotten any reply from him yet.\nMany people are on vacation now in Japan.\nProbably we couldn't expect too much of him.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 30 Apr 2001 13:32:21 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: 7.1 startup recovery failure" } ]
[ { "msg_contents": "\nOver the past few months there've been a number of requests for an\ninteractive type documentation setup like the folks at php.net have.\nThe first version of it is now online and ready for testing. You can\nalso search the docs, but the search isn't that exotic - but since\nthere are fewer than 500 pages of documentation the crude ILIKE search\nI'm doing will suffice for now. So check it out, beat it up, and if it\nseems to work ok I'll move it to the main site and clear out the notes\nthat are currently in it (when you read them you'll know why). They're\navailable at:\n\n   http://odbc.postgresql.org/docs/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 13:31:58 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "While we're on the subject of searches..." }, { "msg_contents": "Hi all,\n\nThe problem:\n\nI do a large bulk copy once a day (100,000 records of Radius data),\ntearing down indices, truncating a large table that contains summary\ninformation, and rebuilding everything after the copy.  Over the course\nof this operation, I can generate up to 1.5 gigs of WAL data in\npg_xlog.  Sometimes (like just now), I will run out of disk space and\nthe postmaster will crash.  I try to restart it, and it errors out. 
\nThen I delete all the WAL logs, try to restart, and (surprise) it errors\nout again.\n\nI tried to set some of the of the WAL parameters in postgres.conf like\nso:\n\nwal_buffers = 4 # min 4\nwal_files = 8 # range 0-64\nwal_sync_method = fdatasync # fsync or fdatasync or open_sync or\nopen_datasync \n\nbut I get 24+ separate files.\n\nI would like to recover without an initdb, but if that isn't possible, I\nwould definitely like to avoid this problem in the future.\n\nThanks to all\n", "msg_date": "Fri, 27 Apr 2001 11:16:22 -0700", "msg_from": "webb sprague <wsprague@o1.com>", "msg_from_op": false, "msg_subject": "WAL Log using all my disk space!" }, { "msg_contents": "  Over the past few months there've been a number of requests for an\n   interactive type documentation setup like the folks at php.net have.\n\nGreat to add to the documentation, but I hope the PostgreSQL project\ndoesn't take it so far as to make the primary documentation\ninteractive.  A well-thought out, coherent document is _much_ more\nuseful than the skads of random tips that characterize some other\nprojects. 
The current document is very well-written (though perhaps\nincomplete).  I would hate to see that decline in quality.\n\nCheers,\nBrook\n", "msg_date": "Fri, 27 Apr 2001 13:56:50 -0600 (MDT)", "msg_from": "Brook Milligan <brook@biology.nmsu.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] While we're on the subject of searches..." }, { "msg_contents": "On Fri, 27 Apr 2001, Brook Milligan wrote:\n\n>    Over the past few months there've been a number of requests for an\n>    interactive type documentation setup like the folks at php.net have.\n>\n> Great to add to the documentation, but I hope the PostgreSQL project\n> doesn't take it so far as to make the primary documentation\n> interactive.  A well-thought out, coherent document is _much_ more\n> useful than the skads of random tips that characterize some other\n> projects.  The current document is very well-written (though perhaps\n> incomplete).  I would hate to see that decline in quality.\n\nI wouldn't want that either.  If anything I'd like to see some of the\ntips and/or clarifications put into the regular docs.  Something that\nmay appear well thought out and clear to many, may still be confusing\nto others.  If this doc enhancement can help someone writing the docs\nmake them clearer then it's definitely desirable.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 16:50:26 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] While we're on the subject of searches..." }, { "msg_contents": "Vince Vielhaber (vev@michvhf.com) napisał(a):\n\n> that are currently in it (when you read them you'll know why). They're\n> available at:\n> \n> http://odbc.postgresql.org/docs/\nWere You thinking about connecting it with some mailing list - kind of\n\"note-announce\". It may be interesting to read all new notes to pg docs.\n\nmazek\n\n-- \nDon't tell me how hard you work. Tell me how much you get done.\n-- James J. Ling\n", "msg_date": "Fri, 27 Apr 2001 23:37:48 +0200", "msg_from": "Marcin Mazurek <M.Mazurek@poznan.multinet.pl>", "msg_from_op": false, "msg_subject": "Re: While we're on the subject of searches..." }, { "msg_contents": "On Fri, 27 Apr 2001, Marcin Mazurek wrote:\n\n> Vince Vielhaber (vev@michvhf.com) napisał(a):\n>\n> > that are currently in it (when you read them you'll know why). 
They're\n> > available at:\n> >\n> > http://odbc.postgresql.org/docs/\n> Were You thinking about connecting it with some mailing list - kind of\n> \"note-announce\". It may be interesting to read all new notes to pg docs.\n\nCan't say that I had. It's food for thought tho.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 17:59:18 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: While we're on the subject of searches..." } ]
[ { "msg_contents": "\nAs Tom's mentioned the other day, we're looking at doing up v7.1.1 on\nTuesday, and starting in on v7.2 ...\n\nDoes anyone have any outstanding fixes for v7.1.x that they want to see in\n*before* we do this release?  Any points unresolved that anyone knows\nabout that we need to look at?\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 27 Apr 2001 15:22:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": ">\n> Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> *before* we do this release?  Any points unresolved that anyone knows\n> about that we need to look at?\n\nIs there a list of what IS getting changed?  Can this be posted somewhere\nor is the changelist enough?\n\n- Brandon\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Fri, 27 Apr 2001 15:32:28 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "The Hermit Hacker wrote:\n>\n> As Tom's mentioned the other day, we're looking at doing up v7.1.1 on\n> Tuesday, and starting in on v7.2 ...\n>\n> Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> *before* we do this release?  Any points unresolved that anyone knows\n> about that we need to look at?\n\n    The  RI-oddness  thing.  Tom  objected to my first trial and\n    hasn't responded to my last reply yet. Well, and noone  else\n    lost  a single word where I'd expected at least a *shrug* or\n    two.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 27 Apr 2001 15:18:19 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "> Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> *before* we do this release? Any points unresolved that anyone knows\n> about that we need to look at?\n\nNothing serious, but I would like to apply a patch to allow IDENT\nstrings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. We\naccept those for date_part(), which is what EXTRACT() is translated to\nby the parser, and it seems to be a reasonable to the standard.\n\n - Thomas\n", "msg_date": "Fri, 27 Apr 2001 23:35:04 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "> Nothing serious, but I would like to apply a patch to allow IDENT\n> strings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. We\n> accept those for date_part(), which is what EXTRACT() is translated to\n> by the parser, and it seems to be a reasonable to the standard.\n\n... reasonable *extension* to the standard.\n", "msg_date": "Sat, 28 Apr 2001 01:04:34 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "Thomas Lockhart writes:\n\n> Nothing serious, but I would like to apply a patch to allow IDENT\n> strings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. 
We\n> accept those for date_part(), which is what EXTRACT() is translated to\n> by the parser, and it seems to be a reasonable to the standard.\n\nBut why does that need to go into 7.1.1?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sun, 29 Apr 2001 00:30:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "> > Nothing serious, but I would like to apply a patch to allow IDENT\n> > strings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. We\n> > accept those for date_part(), which is what EXTRACT() is translated to\n> > by the parser, and it seems to be a reasonable to the standard.\n> But why does that need to go into 7.1.1?\n\nDoes not \"need to\". But it is non-invasive, extremely low risk, gets the\nbehavior to match the docs, and gets it off my desk and into the main\ntree.\n\n - Thomas\n", "msg_date": "Mon, 30 Apr 2001 15:02:08 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Nothing serious, but I would like to apply a patch to allow IDENT\n> strings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. We\n> accept those for date_part(), which is what EXTRACT() is translated to\n> by the parser, and it seems to be a reasonable to the standard.\n\n>> But why does that need to go into 7.1.1?\n\n> Does not \"need to\". But it is non-invasive, extremely low risk, gets the\n> behavior to match the docs, and gets it off my desk and into the main\n> tree.\n\nIf the current behavior does not match the docs then it qualifies as a\nbug fix ;-). 
I have no objections to this one.\n\nThomas, what do you think of the persistent reports of date conversion\nproblems at DST boundaries, eg, Ayal Leibowitz's report today in\npgsql-bugs? I cannot reproduce any such problem --- and my local\ntimezone database claims that MET DST transitions are the last week of\nMarch, never the first week of April, anyway. There's something funny\ngoing on there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 12:03:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: v7.1.1 branched and released on Tuesday ... " }, { "msg_contents": "Thomas Lockhart writes:\n\n> > > Nothing serious, but I would like to apply a patch to allow IDENT\n> > > strings (e.g. 'hour') to be accepted by the SQL92 EXTRACT() function. We\n> > > accept those for date_part(), which is what EXTRACT() is translated to\n> > > by the parser, and it seems to be a reasonable to the standard.\n> > But why does that need to go into 7.1.1?\n>\n> Does not \"need to\". But it is non-invasive, extremely low risk, gets the\n> behavior to match the docs, and gets it off my desk and into the main\n> tree.\n\nHehe, match the docs? The docs used to be perfectly accurate until you\nchanged them.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 30 Apr 2001 18:12:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "> Thomas, what do you think of the persistent reports of date conversion\n> problems at DST boundaries, eg, Ayal Leibowitz's report today in\n> pgsql-bugs? I cannot reproduce any such problem --- and my local\n> timezone database claims that MET DST transitions are the last week of\n> March, never the first week of April, anyway. There's something funny\n> going on there.\n\nYes. 
I tried the example on 7.0.2 (and 7.1) and could not get it to\nmisbehave. I was guessing that it involves string->date conversion,\nwhich may pass through timestamp to get there, but it looks like there\nis an explicit text->date conversion function so time zone should just\nnever be involved. Really!\n\n - Thomas\n", "msg_date": "Mon, 30 Apr 2001 17:58:54 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "> Hehe, match the docs? The docs used to be perfectly accurate until you\n> changed them.\n\n;)\n", "msg_date": "Mon, 30 Apr 2001 17:59:34 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> *before* we do this release? Any points unresolved that anyone knows\n> about that we need to look at?\n\nFWIW, I've finished committing all the bug fixes I have pending.\n\nThere are several worrisome unresolved bug reports, but AFAIK none are\nfor reproducible conditions, and I don't think we can make any more\nprogress on them without more information. I doubt we should hold up\nthe 7.1.1 release while waiting to see if we get any.\n\nWe do have that not-quite-done QNX4 port patch in hand. Perhaps we\nshould give Bernd another day to respond to the comments on that and\nsqueeze it into 7.1.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 18:41:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ... " }, { "msg_contents": "> The Hermit Hacker <scrappy@hub.org> writes:\n> > Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> > *before* we do this release? 
Any points unresolved that anyone knows\n> > about that we need to look at?\n> \n> FWIW, I've finished committing all the bug fixes I have pending.\n> \n> There are several worrisome unresolved bug reports, but AFAIK none are\n> for reproducible conditions, and I don't think we can make any more\n> progress on them without more information. I doubt we should hold up\n> the 7.1.1 release while waiting to see if we get any.\n> \n> We do have that not-quite-done QNX4 port patch in hand. Perhaps we\n> should give Bernd another day to respond to the comments on that and\n> squeeze it into 7.1.1.\n\nThere will surely be a 7.1.2. I vote against waiting for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 30 Apr 2001 19:03:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> We do have that not-quite-done QNX4 port patch in hand. Perhaps we\n>> should give Bernd another day to respond to the comments on that and\n>> squeeze it into 7.1.1.\n\n> There will surely be a 7.1.2. I vote against waiting for it.\n\nPossibly, but one hopes 7.1.2 will be a few months away ...\n\nGiven the triviality of the objections to Bernd's patch, I expect he can\nturn it around pretty quickly. I do not want to wait more than a day\nfor it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 19:12:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ... 
" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> Thomas, what do you think of the persistent reports of date conversion\n>> problems at DST boundaries, eg, Ayal Leibowitz's report today in\n>> pgsql-bugs? I cannot reproduce any such problem --- and my local\n>> timezone database claims that MET DST transitions are the last week of\n>> March, never the first week of April, anyway. There's something funny\n>> going on there.\n\n> Yes. I tried the example on 7.0.2 (and 7.1) and could not get it to\n> misbehave. I was guessing that it involves string->date conversion,\n> which may pass through timestamp to get there, but it looks like there\n> is an explicit text->date conversion function so time zone should just\n> never be involved. Really!\n\nI dug through the conversions involved (basically date_in and date_out).\nAFAICS the only place where timezone could possibly get involved is that\nDecodeDateTime attempts to derive a timezone for the given date/time.\nIt does this by calling mktime() (line 878 in datetime.c in current\nsources). If mktime() screws up and alters the tm->tm_mday field then\nwe'd see the reported behavior. I really don't see any other place that\nit could be happening.\n\nA platform-specific bug in mktime would do nicely to explain why we\ncan't reproduce the problem, too ... 
OTOH, it's hard to believe such a\nbug would have persisted across several RedHat releases, which seems to\nbe necessary to explain the reports.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 20:51:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "date conversion (was Re: Re: v7.1.1 branched and released on Tuesday\n\t...)" }, { "msg_contents": "> I dug through the conversions involved (basically date_in and date_out).\n> AFAICS the only place where timezone could possibly get involved is that\n> DecodeDateTime attempts to derive a timezone for the given date/time.\n> It does this by calling mktime() (line 878 in datetime.c in current\n> sources). If mktime() screws up and alters the tm->tm_mday field then\n> we'd see the reported behavior. I really don't see any other place that\n> it could be happening.\n\nYes. It is possible to call DecodeDateTime() so that it *never* tries to\nderive a time zone (call with the last argument set to NULL), but that\nalso causes it to reject date/time strings which have an explicit time\nzone. We certainly would want to accept something like\n\n select date('1993-04-02 04:05:06 PST');\n\n(even though for a date-only result it is overspecified), so calling\nwith NULL is not the right thing to do (I tried it, then realized the\nbad effect).\n\n> A platform-specific bug in mktime would do nicely to explain why we\n> can't reproduce the problem, too ... 
OTOH, it's hard to believe such a\n> bug would have persisted across several RedHat releases, which seems to\n> be necessary to explain the reports.\n\nIt is also hard to see how such a bug would not be similarly manifested\nin Mandrake, Debian, etc etc.\n\nFor this particular problem, I'd like to see the \"DateStyle\" setting,\nthe time zone setting, an example of the problem (does not require a\ntable, but just a date string conversion), and the output of \"zdump -v\"\nfor the timezone in question.\n\nI'm not sure how to handle date/time bug reports which are not\nreproducible on our machines. Certainly date/time issues are the most\nproblematic in terms of number of bug reports, but they are also\nprobably the most sensitive to machine configuration and user's\nlocation, so all in all I think the types are doing very well. I don't\nwant to sound complacent, but it is probably sufficient to fix\nreproducible problems to keep our date/time data types viable, and we\nare doing far more than that over time :)\n\n - Thomas\n", "msg_date": "Tue, 01 May 2001 00:55:55 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: date conversion (was Re: Re: v7.1.1 branched and released on\n\tTuesday ...)" }, { "msg_contents": "On Mon, 30 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Does anyone have any outstanding fixes for v7.1.x that they want to see in\n> > *before* we do this release? Any points unresolved that anyone knows\n> > about that we need to look at?\n>\n> FWIW, I've finished committing all the bug fixes I have pending.\n>\n> There are several worrisome unresolved bug reports, but AFAIK none are\n> for reproducible conditions, and I don't think we can make any more\n> progress on them without more information. I doubt we should hold up\n> the 7.1.1 release while waiting to see if we get any.\n>\n> We do have that not-quite-done QNX4 port patch in hand. 
Perhaps we\n> should give Bernd another day to respond to the comments on that and\n> squeeze it into 7.1.1.\n\nHow about I do another end of week release? Give Bernd until Friday to\nsort through the patch with everyone without it being rushed ...\n\n\n", "msg_date": "Mon, 30 Apr 2001 23:12:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ... " }, { "msg_contents": "Hi,\n\njust noticed Marc has restarted postmaster at db.hub.org and\nforgot to specify the -e option (European format!) for the backend. That's why\nfts.postgresql.org doesn't work properly. I've sent a message to him,\nbut it would probably be better if somebody could talk with Marc or\nrestart the corresponding postmaster at db.hub.org.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 1 May 2001 10:17:05 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Sorry, need to restart postmaster at db.hub.org (fts.postgresql.org)" }, { "msg_contents": "I extracted from Ayal the info that he was using timezone\n'Asia/Jerusalem'. 
That zone has the interesting property that\nthe DST transitions happen *at midnight*, not at a sane hour like 2AM.\nI suspect that that is triggering various & sundry bugs in older\nversions of mktime().\n\nOn a relatively recent Linux (LinuxPPC 2000/Q4) the worst misbehavior\nI can find is\n\n\tregression=# select timestamp('1993-04-02');\n\t timestamp \n\t------------------------\n\t 1993-04-02 01:00:00+03\n\t(1 row)\n\nwhich is about the best we can do, seeing as how midnight local time\njust plain does not exist on that date in that timezone.\n\nHowever on an older Linux (RedHat 5.1) I get:\n\n\tregression=# select timestamp('1993-04-02');\n\t timestamp \n\t------------------------\n\t 2027-04-11 17:45:25+03\n\t(1 row)\n\nwhich is a tad startling. Tracing through DecodeDateTime tells the\ntale:\n\n(gdb) s\n875 mktime(tm);\n(gdb) p *tm\n$2 = {tm_sec = 0, tm_min = 0, tm_hour = 0, tm_mday = 2, tm_mon = 3, \n tm_year = 93, tm_wday = 0, tm_yday = 0, tm_isdst = -1, \n tm_gmtoff = -1073745925, tm_zone = 0x81420c0 \"\\203�\\ff�E�\\001\"}\n(gdb) n\n876 tm->tm_year += 1900;\n(gdb) p *tm\n$3 = {tm_sec = 0, tm_min = 0, tm_hour = 0, tm_mday = 2, tm_mon = 3, \n tm_year = 93, tm_wday = 0, tm_yday = 0, tm_isdst = -1, \n tm_gmtoff = -1073745925, tm_zone = 0x81420c0 \"\\203�\\ff�E�\\001\"}\n(gdb) s\n877 tm->tm_mon += 1;\n(gdb) s\n880 *tzp = -(tm->tm_gmtoff); /* tm_gmtoff is\n\nOoops.\n\nI recommend that all uses of tm->tm_gmtoff from mktime() be guarded\nalong the lines of\n\tif (tm->tm_isdst >= 0)\n\t\tbelieve gmtoff\n\telse\n\t\tassume GMT\n\nHowever, this still does not account for the reported failure of date()\nsince that code path doesn't use the returned value of *tzp --- and\nindeed I get the right thing from select date('1993-04-02'), despite\nthe failure of mktime(). Probably the behavior of mktime() in this\nsituation varies across different glibc releases. 
Would some other\nfolk try\n\n\tset timezone to 'Asia/Jerusalem';\n\tselect timestamp('1993-04-02');\n\tselect date('1993-04-02');\n\nand report what you see?\n\nBTW, I also see\n\n\tregression=# select timestamp(date('1993-04-02'));\n\tERROR: Unable to convert date to tm\n\nwhich is just what you'd expect if mktime() fails for this input;\nI suppose there's nothing we can do about that except advise people\nto update to a less broken libc...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 May 2001 10:14:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: date conversion (was Re: Re: v7.1.1 branched and\n\treleased on Tuesday ...)" }, { "msg_contents": "> just noticed Marc has restarted postmaster at db.hub.org and\n> forget to specify -e option (European format!) for backend. That's why\n> fts.postgresql.org doesn't works properly. I've sent message to him\n> but probably better if somebody could talk with Marc or\n> restart corresponding postamaster at db.hub.rg.\n\nHmm. Might be a good time to consider using ISO time formats ;)\n\nCould I help with something in that regard?\n\n - Thomas\n", "msg_date": "Wed, 02 May 2001 04:26:18 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Sorry,\n\tneed to restart postmaster at db.hub.org (fts.postgresql.org)" }, { "msg_contents": "On Mon, Apr 30, 2001 at 07:12:16PM -0400, Tom Lane wrote:\n> \n> > There will surely be a 7.1.2. 
I vote against waiting for it.\n> \n> Possibly, but one hopes 7.1.2 will be a few months away ...\n\n\tIs there a chance for the %TYPE patch for PL/pgSQL to make it into\n7.1.2?\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \nJunior, quit playing with your floppy!\n", "msg_date": "Tue, 1 May 2001 23:02:09 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ..." }, { "msg_contents": "Roberto Mello <rmello@cc.usu.edu> writes:\n> \tIs there a chance for the %TYPE patch for PL/pgSQL to make it into\n> 7.1.2?\n\nWe are not in the habit of putting new features into dot-releases.\nI'd have to vote against this, particularly seeing that the patch\nin question is unreviewed and untested...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 01:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 branched and released on Tuesday ... " }, { "msg_contents": "> I extracted from Ayal the info that he was using timezone\n> 'Asia/Jerusalem'. That zone has the interesting property that\n> the DST transitions happen *at midnight*, not at a sane hour like 2AM.\n> I suspect that that is triggering various & sundry bugs in older\n> versions of mktime().\n\nAhhh! So \"GMT+2\" was just an approximation, eh? 
:/\n\n> On a relatively recent Linux (LinuxPPC 2000/Q4) the worst misbehavior\n> I can find is\n...\n> However on an older Linux (RedHat 5.1) I get:\n...\n> I recommend that all uses of tm->tm_gmtoff from mktime() be guarded\n> along the lines of\n> if (tm->tm_isdst >= 0)\n> believe gmtoff\n> else\n> assume GMT\n\nI'm not sure that tm_isdst == -1 is a legitimate indicator for mktime()\nfailure on all platforms; it indicates \"don't know\", but afaik there is\nno defined behavior for the rest of the fields in that case. Can we be\nassured that for all platforms the other fields are not damaged?\n\n> However, this still does not account for the reported failure of date()\n> since that code path doesn't use the returned value of *tzp --- and\n> indeed I get the right thing from select date('1993-04-02'), despite\n> the failure of mktime(). Probably the behavior of mktime() in this\n> situation varies across different glibc releases.\n...\n> which is just what you'd expect if mktime() fails for this input;\n> I suppose there's nothing we can do about that except advise people\n> to update to a less broken libc...\n\nNot sure how much code we should put in to guard for cases we can't even\ntest (RH 5.1 is pretty old).\n\n - Thomas\n", "msg_date": "Thu, 10 May 2001 13:22:32 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: date conversion (was Re: Re: v7.1.1 branched and\n\treleased on Tuesday ...)" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> I'm not sure that tm_isdst == -1 is a legitimate indicator for mktime()\n> failure on all platforms; it indicates \"don't know\", but afaik there is\n> no defined behavior for the rest of the fields in that case. 
Can we be\n> assured that for all platforms the other fields are not damaged?\n\nWe can't; further investigation showed that another form of the problem\nwas mktime() setting the y/m/d/h/m/s fields one hour earlier than what\nit was given --- ie, pass it 00:00:00 of a DST forward transition date,\nget back neither 00:00:00 nor 01:00:00 (either of which would be\nplausible) but 23:00:00 of the day before!\n\nWhat I did about this was to coalesce all of the three or four places\nthat use mktime just to probe for DST status into a single routine\n(DetermineLocalTimeZone) that is careful to pass mktime a copy of the\noriginal struct tm. No matter how brain dead the system mktime is,\nit can't screw up the other fields that way ;-). Then we trust\ntm_isdst and tm_gmtoff only if tm_isdst >= 0. Possibly we'll find\nthat it'd be a good idea to test also for return value == -1, but\nthe tm_isdst test seems to be sufficient for the known bug cases.\n\n> Not sure how much code we should put in to guard for cases we can't even\n> test (RH 5.1 is pretty old).\n\nYeah, but the above-described behavior is reported on RH 7.1 (by two\ndifferent people). I'm afraid we can't ignore that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 May 2001 09:55:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: date conversion (was Re: Re: v7.1.1 branched and\n\treleased on Tuesday ...)" } ]
[ { "msg_contents": "\nWe have discussed splitting the tree on May 1 to start 7.2 development. \nIf no one objects, we will stay with that schedule.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 14:46:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Split of tree on May 1" }, { "msg_contents": "On Fri, 27 Apr 2001, Bruce Momjian wrote:\n\n>\n> We have discussed splitting the tree on May 1 to start 7.2 development.\n> If no one objects, we will stay with that schedule.\n\nPlease see other thread where we are actually discussing this ... if you\nhave anything to add to that thread please do so ...\n\n\n", "msg_date": "Fri, 27 Apr 2001 18:59:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Split of tree on May 1" } ]
[ { "msg_contents": "> As Tom's mentioned the other day, we're looking at doing up v7.1.1 on\n> Tuesday, and starting in on v7.2 ...\n> \n> Does anyone have any outstanding fixes for v7.1.x that they\n> want to see in *before* we do this release? Any points unresolved\n> that anyone knows about that we need to look at?\n\nHiroshi reported a startup problem yesterday - we should fix this\nfor 7.1.1...\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 12:15:47 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: v7.1.1 branched and released on Tuesday ..." } ]
[ { "msg_contents": "Dominic,\n\nI like your idea. One of the benefits of SQLServer 2000 is that I can assign a\nrole in the database to an NT group. At that point, all I have to do is add\na user to the group to be able to access that database. Would your solution\ninclude this scenario? This lets me assign all the resources needed for an\napplication to a group. Then to let people access the application, all I\nhave to do is add them to a group in one spot.\n\nRyan.\n\n", "msg_date": "Fri, 27 Apr 2001 14:22:54 -0500", "msg_from": "\"Ryan M. Hager\" <rmhager@misource.com>", "msg_from_op": true, "msg_subject": "RE:PAM Authentication for PostgreSQL..." } ]
[ { "msg_contents": "> What's the deal with vacuum lazy in 7.1? I was looking\n> forward to it. It was never clear whether or not you guys\n> decided to put it in.\n> \n> If it is in as a feature, how does one use it?\n> If it is a patch, how does one get it?\n> If it is neither a patch nor an existing feature, has\n> development stopped?\n\nI still had no time to port it to 7.1 -:(\nI'll post message to -hackers when it will be ready.\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 13:52:18 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Re: 7.1 vacuum" } ]
[ { "msg_contents": "\nWAL was a difficult feature to add to 7.1. Currently, it is only used\nas a performance benefit, but I expect it will be used in the future to\nadd new features like:\n\n\tAdvanced Replication\n\tPoint-in-time recovery\n\tRow reuse without vacuum\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 17:03:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "WAL feature" }, { "msg_contents": "On Fri, 27 Apr 2001, Bruce Momjian wrote:\n\n>\n> WAL was a difficult feature to add to 7.1. Currently, it is only used\n> as a performance benefit, but I expect it will be used in the future to\n> add new features like:\n>\n> \tAdvanced Replication\n\nHow?\n\n> \tPoint-in-time recovery\n\nI thought that was understood from Vadim's explanations?\n\n> \tRow reuse without vacuum\n\nHow? Didn't even see these as being related ...\n\n", "msg_date": "Fri, 27 Apr 2001 18:58:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: WAL feature" }, { "msg_contents": "> On Fri, 27 Apr 2001, Bruce Momjian wrote:\n> \n> >\n> > WAL was a difficult feature to add to 7.1. Currently, it is only used\n> > as a performance benefit, but I expect it will be used in the future to\n> > add new features like:\n> >\n> > \tAdvanced Replication\n> \n> How?\n\nI guess other hosts could read the WAL to find out what changed.\n\n> \n> > \tPoint-in-time recovery\n> \n> I thought that was understood from Vadim's explanations?\n\nYes, I am just reiterating that WAL may be related to future new\nfeatures.\n\n> \n> > \tRow reuse without vacuum\n> \n> How? Didn't even see these as being related ...\n\nIt may be. 
Not sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 18:16:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL feature" }, { "msg_contents": "On Fri, 27 Apr 2001, Bruce Momjian wrote:\n\n> > How?\n>\n> I guess other hosts could read the WAL to find out what changed.\n\nI wonder if that would get around one of the issues I had brought up a\nways back concerning replication and stuff like sequences ...\n\n> > > \tRow reuse without vacuum\n> >\n> > How? Didn't even see these as being related ...\n>\n> It may be. Not sure.\n\nNeither am I ... Vadim seems to think so, so am curious as to how ...\n\n\n", "msg_date": "Fri, 27 Apr 2001 20:14:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: WAL feature" }, { "msg_contents": "> On Fri, 27 Apr 2001, Bruce Momjian wrote:\n> \n> > > How?\n> >\n> > I guess other hosts could read the WAL to find out what changed.\n> \n> I wonder if that would get around one of the issues I had brought up a\n> ways back concerning replication and stuff like sequences ...\n\nYep, WAL collects all database changes into one file. Easy to see how\nsome other host trying to replicate a different host would find the\nWAL contents valuable.\n\n> \n> > > > \tRow reuse without vacuum\n> > >\n> > > How? Didn't even see these as being related ...\n> >\n> > It may be. Not sure.\n> \n> Neither am I ... Vadim seems to think so, so am curious as to how ...\n\nI think my point is that WAL could prove to be very valuable in a number\nof areas, perhaps more areas than we know of right now. 
In fact, I\nthink one idea once we start 7.2 is to identify how we want to use WAL\nin the upcoming 7.2 features, make any needed WAL improvements, then\nstart adding features.\n\nWAL was tough to add, but there are probably a lot of nice things we\ncan do now that we have it.\n\nAlso, Vadim mentioned that WAL fixed btree corruption problems, which\nwas certainly important too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 19:20:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL feature" }, { "msg_contents": "What about incremental backup?\n\n\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote in the news message\nnews:200104272103.f3RL3ur23234@candle.pha.pa.us...\n>\n> WAL was a difficult feature to add to 7.1. Currently, it is only used\n> as a performance benefit, but I expect it will be used in the future to\n> add new features like:\n>\n> Advanced Replication\n> Point-in-time recovery\n> Row reuse without vacuum\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n\n", "msg_date": "Sat, 28 Apr 2001 12:59:42 +0400", "msg_from": "\"Sergey E. Volkov\" <sve@raiden.bancorp.ru>", "msg_from_op": false, "msg_subject": "Re: WAL feature" } ]
[ { "msg_contents": "`make depend' is broken in the CVS sources. I've only tested it when\nusing a build directory which is different from the source directory,\nbut frankly it looks broken anyhow.\n\nThis is what I get:\n\nmake -C backend depend\nmake[1]: Entering directory `/home/ian/pgsql-objdir/src/backend'\nfor i in access bootstrap catalog parser commands executor lib libpq main nodes optimizer port postmaster regex rewrite storage tcop utils; do make -C $i depend; done\nmake[2]: Entering directory `/home/ian/pgsql-objdir/src/backend/access'\nfor dir in common gist hash heap index nbtree rtree transam; do make -C $dir depend || exit; done\nmake[3]: Entering directory `/home/ian/pgsql-objdir/src/backend/access/common'\ngcc -MM -O2 -Wall -Wmissing-prototypes -Wmissing-declarations *.c >depend\ngcc: *.c: No such file or directory\ngcc: No input files\nmake[3]: *** [depend] Error 1\nmake[3]: *** Deleting file `depend'\nmake[3]: Leaving directory `/home/ian/pgsql-objdir/src/backend/access/common'\nmake[2]: *** [depend] Error 2\nmake[2]: Leaving directory `/home/ian/pgsql-objdir/src/backend/access'\nmake[2]: Entering directory `/home/ian/pgsql-objdir/src/backend/bootstrap'\ngcc -MM -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error *.c >depend\ngcc: *.c: No such file or directory\ngcc: No input files\nmake[2]: *** [depend] Error 1\nmake[2]: *** Deleting file `depend'\n\netc.\n\nMaking this change to src/backend/access/common/Makefile fixes the\nfirst error:\n\nIndex: Makefile\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/common/Makefile,v\nretrieving revision 1.19\ndiff -u -r1.19 Makefile\n--- Makefile\t2000/08/31 16:09:30\t1.19\n+++ Makefile\t2001/04/27 21:11:26\n@@ -21,7 +21,7 @@\n \t$(LD) $(LDREL) $(LDOUT) SUBSYS.o $(OBJS)\n \n dep depend:\n-\t$(CC) -MM $(CFLAGS) *.c >depend\n+\t$(CC) -MM $(CFLAGS) $(CPPFLAGS) $(srcdir)/*.c >depend\n \n clean: \n \trm -f 
SUBSYS.o $(OBJS)\n\n\n\nI can submit a patch to make a similar change to all Makefiles.\nBefore I do, is `make depend' still supported? Is there a better way?\n\nIan\n", "msg_date": "27 Apr 2001 14:12:35 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "`make depend' broken in CVS sources" }, { "msg_contents": "Ian Lance Taylor writes:\n\n> `make depend' is broken in the CVS sources.\n\n'make depend' doesn't exist anymore. Use configure --enable-depend.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 27 Apr 2001 23:46:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: `make depend' broken in CVS sources" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Ian Lance Taylor writes:\n>> `make depend' is broken in the CVS sources.\n\n> 'make depend' doesn't exist anymore. Use configure --enable-depend.\n\nHowever, the makefiles are still full of depend targets --- so someone\nwho hadn't read the configure docs with eagle eyes could be forgiven\nfor thinking that 'make depend' will do what it usually does in most\nother project trees.\n\nPerhaps in the next release the make targets should be renamed to\nsomething other than depend. At the very least, it'd be nice if\n'make depend' at the top level would emit a helpful error message\nrather than doing something obscure/wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 01:16:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: `make depend' broken in CVS sources " } ]
[ { "msg_contents": "> WAL was a difficult feature to add to 7.1. Currently, it is only used\n> as a performance benefit, but I expect it will be used in the future to\n\nNot only. Did you forget about btree stability?\nPartial disk writes?\n\n> add new features like:\n> \n> \tAdvanced Replication\n\nI'm for sure not fan of this.\n\n> \tPoint-in-time recovery\n> \tRow reuse without vacuum\n\nYes, it will help to remove uncommitted rows.\n\nAnd don't forget about SAVEPOINTs.\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 14:47:45 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL feature" }, { "msg_contents": "On Fri, 27 Apr 2001, Mikheev, Vadim wrote:\n\n> > \tRow reuse without vacuum\n>\n> Yes, it will help to remove uncommitted rows.\n\nSame question as I asked Bruce ... how? :) I wasn't trying to be\nfascisious(sp?) when I asked, I didn't realize the two were connected and\nam curious as to how :)\n\n", "msg_date": "Fri, 27 Apr 2001 20:12:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "RE: WAL feature" } ]
[ { "msg_contents": "I'm trying to use pg_dump to backup my tables one at a time from\nPostgres 7.0.3 (I'll upgrade to 7.1 in a few weeks). I'm getting a\nstrange error that I've never encountered before.\n\nThe backup call is: pg_dump db01 -t cell | gzip > cell.backup.gz\n\nThe error is : failed sanity check, table ro_ellipse was not found\n\nHowever, I wasn't even accessing table ro_ellipse. Plus, I've verified\nthat the table does exist and appears fine (I can select data from it).\nI vacuumed the db and even restarted the postmaster, but I still get\nthis weird warning.\n\nAnyone seen this before or know if this is a problem?\n\nThanks.\n-Tony\n\nPostgres 7.0.3 running on RH Linux 6.2 (Zoot), Pentium III/400 MHz, 512\nMeg RAM\n\n\n\n\n", "msg_date": "Fri, 27 Apr 2001 14:51:48 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "pg_dump Backup on 7.0.3 - Sanity error?" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> I'm trying to use pg_dump to backup my tables one at a time from\n> Postgres 7.0.3 (I'll upgrade to 7.1 in a few weeks). I'm getting a\n> strange error that I've never encountered before.\n> The error is : failed sanity check, table ro_ellipse was not found\n\nMost likely, you removed the user that owned ro_ellipse. Create a\nuser with the same usesysid shown as ro_ellipse's relowner, or else\nchange the relowner field to point at an extant user.\n\nI believe 7.1's pg_dump copes with this sort of thing more gracefully...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 00:16:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump Backup on 7.0.3 - Sanity error? " }, { "msg_contents": "Tom Lane wrote:\n\n> Most likely, you removed the user that owned ro_ellipse. 
Create a\n> user with the same usesysid shown as ro_ellipse's relowner, or else\n> change the relowner field to point at an extant user.\n>\n> I believe 7.1's pg_dump copes with this sort of thing more gracefully...\n>\n> regards, tom lane\n\nYes. I did delete that user. Thanks Tom. That makes sense.\n\n\n-Tony\n\n\n", "msg_date": "Mon, 30 Apr 2001 10:53:12 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: pg_dump Backup on 7.0.3 - Sanity error?" } ]
[ { "msg_contents": "> > > \tRow reuse without vacuum\n> >\n> > Yes, it will help to remove uncommitted rows.\n> \n> Same question as I asked Bruce ... how? :) I wasn't trying to be\n> fascisious(sp?) when I asked, I didn't realize the two were\n> connected and am curious as to how :)\n\nAfter implementing UNDO operation (we have only REDO now)\ntransactions will roll back their changes on abort and so\nfree space occupied by inserted rows.\n\nHow to re-use freed space (ie how to maintain information about\nblocks available for insertion of new rows) is another issue,\nof course, but anyway - space must be freed first.\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 17:20:14 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL feature" }, { "msg_contents": "On Fri, 27 Apr 2001, Mikheev, Vadim wrote:\n\n> > > > \tRow reuse without vacuum\n> > >\n> > > Yes, it will help to remove uncommitted rows.\n> >\n> > Same question as I asked Bruce ... how? :) I wasn't trying to be\n> > fascisious(sp?) when I asked, I didn't realize the two were\n> > connected and am curious as to how :)\n>\n> After implementing UNDO operation (we have only REDO now)\n> transactions will roll back their changes on abort and so\n> free space occupied by inserted rows.\n\nAhhh, okay, so this isn't reusing space on delete/update, so much as\navoiding writing to the table until the transaction is committed?\n\n\n", "msg_date": "Fri, 27 Apr 2001 21:41:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "RE: WAL feature" } ]
[ { "msg_contents": "> Yep, WAL collects all database changes into one file. Easy to see how\n> some other host trying to replicate a different host would find the\n> WAL contents valuable.\n\nUnfortunately, slave database(s) should be on the same platform\n(hardware+OS) to be able to use information about *physical*\nchanges in data files. Also, this would be still *async* replication.\nMaybe faster than rserv, maybe with less space requirements (no rserv'\nlog table), but maybe not.\n\nI believe that making efforts to implement (bi-directional) *sync*\nreplication would be more valuable.\n\nVadim\n", "msg_date": "Fri, 27 Apr 2001 17:41:43 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL feature" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > Yep, WAL collects all database changes into one file. Easy to see how\n> > some other host trying to replicate a different host would find the\n> > WAL contents valuable.\n> \n> Unfortunately, slave database(s) should be on the same platform\n> (hardware+OS) to be able to use information about *physical*\n> changes in data files. Also, this would be still *async* replication.\n> Maybe faster than rserv, maybe with less space requirements (no rserv'\n> log table), but maybe not.\n> \n> I believe that making efforts to implement (bi-directional) *sync*\n> replication would be more valuable.\n\nOr maybe a platform-neutral interface to the WAL file. Seems this\nwould fit a need.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 27 Apr 2001 20:44:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL feature" } ]
[ { "msg_contents": "1) I login as administrator (local) at our SQL 2000 server.\n2) change password for administrator\n3) reboot server\n4) login in as administrator (local) with new password\n5) receive message \"SQL Agent cannot start\" because \"login failure\"\n6) change password for administrator again back to previous one and reboot\n7) login as administrator (local) with previous password\n8) SQL Agent and server start as usual... however, when i check performance\nmonitor, CPU always 100% !!! performance is siginificantly degraded !!\n\n9) the only solution is to stop both SQL Server and SQL Agent and start both\nof them one by one manually.\n\nBill !! why you do this to me????\nAnyone can help?\n\n\n\n\n\n\n-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----\nhttp://www.newsfeeds.com - The #1 Newsgroup Service in the World!\n-----== Over 80,000 Newsgroups - 16 Different Servers! =-----\n", "msg_date": "Sat, 28 Apr 2001 09:53:14 +0800", "msg_from": "\"������������\" <balloon@atm.cyberec.com>", "msg_from_op": true, "msg_subject": "after changing login password (local), CPU always 100% !!" } ]
[ { "msg_contents": "(Haven't seen this mentioned on-list yet)\n\nI saw a report that Informix was selling its database business to IBM\nfor ~US$1B. Which would have IBM owning the remnants of Illustra, which\nwas based on University Postgres. Confirmation of the sale is at\nwww.informix.com :(\n\nOn a possibly related note, I notice that back in January 2000, IBM sued\nInformix over patent infringement for databases, distributed processing\nand data compression.\n\nHow the world changes!\n\n - Thomas\n", "msg_date": "Sat, 28 Apr 2001 02:25:26 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Informix?" } ]
[ { "msg_contents": "This patch adds support for %TYPE in CREATE FUNCTION argument and\nreturn types.\n\n%TYPE is already supported by PL/pgSQL when declaring variables.\nHowever, that does not help with the argument and return types in\nCREATE FUNCTION.\n\nUsing %TYPE makes it easier to write a function which is independent\nof the definition of a table. That is, minor changes to the types\nused in the table may not require changes to the function.\n\nFor example, this trivial function will work whenever `table' which\nhas columns named `name' and `value', no matter what the types of the\ncolumns are.\n\nCREATE FUNCTION lookup (table.name%TYPE)\n RETURNS table.value%TYPE\n AS 'select value from table where name = $1'\n LANGUAGE 'sql';\n\nThis patch includes changes to the testsuite and the documentation.\n\nThis work was sponsored by Zembu.\n\nIan\n\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.126\ndiff -p -u -r1.126 parsenodes.h\n--- src/include/nodes/parsenodes.h\t2001/03/23 04:49:56\t1.126\n+++ src/include/nodes/parsenodes.h\t2001/04/28 03:38:21\n@@ -945,6 +945,7 @@ typedef struct TypeName\n \tbool\t\tsetof;\t\t\t/* is a set? 
*/\n \tint32\t\ttypmod;\t\t\t/* type modifier */\n \tList\t *arrayBounds;\t/* array bounds */\n+\tchar\t *attrname;\t\t/* field name when using %TYPE */\n } TypeName;\n \n /*\nIndex: src/backend/parser/analyze.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/analyze.c,v\nretrieving revision 1.183\ndiff -p -u -r1.183 analyze.c\n--- src/backend/parser/analyze.c\t2001/03/22 06:16:15\t1.183\n+++ src/backend/parser/analyze.c\t2001/04/28 03:38:23\n@@ -27,6 +27,7 @@\n #include \"parser/parse_relation.h\"\n #include \"parser/parse_target.h\"\n #include \"parser/parse_type.h\"\n+#include \"parser/parse_expr.h\"\n #include \"rewrite/rewriteManip.h\"\n #include \"utils/builtins.h\"\n #include \"utils/fmgroids.h\"\n@@ -49,7 +50,10 @@ static Node *transformSetOperationTree(P\n static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);\n static Query *transformCreateStmt(ParseState *pstate, CreateStmt *stmt);\n static Query *transformAlterTableStmt(ParseState *pstate, AlterTableStmt *stmt);\n+static Node *transformTypeRefs(ParseState *pstate, Node *stmt);\n \n+static void transformTypeRefsList(ParseState *pstate, List *l);\n+static void transformTypeRef(ParseState *pstate, TypeName *tn);\n static List *getSetColTypes(ParseState *pstate, Node *node);\n static void transformForUpdate(Query *qry, List *forUpdate);\n static void transformFkeyGetPrimaryKey(FkConstraint *fkconstraint);\n@@ -230,6 +234,18 @@ transformStmt(ParseState *pstate, Node *\n \t\t\t\t\t\t\t\t\t\t\t (SelectStmt *) parseTree);\n \t\t\tbreak;\n \n+\t\t\t/*\n+\t\t\t * Convert use of %TYPE in statements where it is permitted.\n+\t\t\t */\n+\t\tcase T_ProcedureStmt:\n+\t\tcase T_CommentStmt:\n+\t\tcase T_RemoveFuncStmt:\n+\t\tcase T_DefineStmt:\n+\t\t\tresult = makeNode(Query);\n+\t\t\tresult->commandType = CMD_UTILITY;\n+\t\t\tresult->utilityStmt = transformTypeRefs(pstate, parseTree);\n+\t\t\tbreak;\n+\n 
\t\tdefault:\n \n \t\t\t/*\n@@ -2607,6 +2623,104 @@ transformAlterTableStmt(ParseState *psta\n \t}\n \tqry->utilityStmt = (Node *) stmt;\n \treturn qry;\n+}\n+\n+/* \n+ * Transform uses of %TYPE in a statement.\n+ */\n+static Node *\n+transformTypeRefs(ParseState *pstate, Node *stmt)\n+{\n+\tswitch (nodeTag(stmt))\n+\t{\n+\t\tcase T_ProcedureStmt:\n+\t\t{\n+\t\t\tProcedureStmt *ps = (ProcedureStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, ps->argTypes);\n+\t\t\ttransformTypeRef(pstate, (TypeName *) ps->returnType);\n+\t\t\ttransformTypeRefsList(pstate, ps->withClause);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_CommentStmt:\n+\t\t{\n+\t\t\tCommentStmt\t *cs = (CommentStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, cs->objlist);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_RemoveFuncStmt:\n+\t\t{\n+\t\t\tRemoveFuncStmt *rs = (RemoveFuncStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, rs->args);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_DefineStmt:\n+\t\t{\n+\t\t\tDefineStmt *ds = (DefineStmt *) stmt;\n+\t\t\tList\t *ele;\n+\n+\t\t\tforeach(ele, ds->definition)\n+\t\t\t{\n+\t\t\t\tDefElem\t *de = (DefElem *) lfirst(ele);\n+\n+\t\t\t\tif (de->arg != NULL\n+\t\t\t\t\t&& IsA(de->arg, TypeName))\n+\t\t\t\t{\n+\t\t\t\t\ttransformTypeRef(pstate, (TypeName *) de->arg);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t}\n+\t\tbreak;\n+\n+\t\tdefault:\n+\t\t\telog(ERROR, \"Unsupported type %d in transformTypeRefs\",\n+\t\t\t\t nodeTag(stmt));\n+\t\t\tbreak;\n+\t}\n+\n+\treturn stmt;\n+}\n+\n+/*\n+ * Transform uses of %TYPE in a list.\n+ */\n+static void\n+transformTypeRefsList(ParseState *pstate, List *l)\n+{\n+\tList\t *ele;\n+\n+\tforeach(ele, l)\n+\t{\n+\t\tif (IsA(lfirst(ele), TypeName))\n+\t\t\ttransformTypeRef(pstate, (TypeName *) lfirst(ele));\n+\t}\n+}\n+\n+/*\n+ * Transform a TypeName to not use %TYPE.\n+ */\n+static void\n+transformTypeRef(ParseState *pstate, TypeName *tn)\n+{\n+\tAttr *att;\n+\tNode *n;\n+\tVar\t *v;\n+\n+\tif (tn->attrname == NULL)\n+\t\treturn;\n+\tatt = 
makeAttr(tn->name, tn->attrname);\n+\tn = transformExpr(pstate, (Node *) att, EXPR_COLUMN_FIRST);\n+\tif (! IsA(n, Var))\n+\t\telog(ERROR, \"unsupported expression in %%TYPE\");\n+\tv = (Var *) n;\n+\ttn->name = typeidTypeName(v->vartype);\n+\ttn->typmod = v->vartypmod;\n+\ttn->attrname = NULL;\n }\n \n /* exported so planner can check again after rewriting, query pullup, etc */\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.221\ndiff -p -u -r2.221 gram.y\n--- src/backend/parser/gram.y\t2001/02/18 18:06:10\t2.221\n+++ src/backend/parser/gram.y\t2001/04/28 03:38:26\n@@ -192,7 +192,7 @@ static void doNegateFloat(Value *v);\n \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n \t\tselect_limit, opt_select_limit\n \n-%type <typnam>\tfunc_arg, func_return, aggr_argtype\n+%type <typnam>\tfunc_arg, func_return, func_type, aggr_argtype\n \n %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n \n@@ -2462,7 +2462,7 @@ func_args_list: func_arg\n \t\t\t\t{\t$$ = lappend($1, $3); }\n \t\t;\n \n-func_arg: opt_arg Typename\n+func_arg: opt_arg func_type\n \t\t\t\t{\n \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n@@ -2470,7 +2470,7 @@ func_arg: opt_arg Typename\n \t\t\t\t\t */\n \t\t\t\t\t$$ = $2;\n \t\t\t\t}\n-\t\t| Typename\n+\t\t| func_type\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t}\n@@ -2498,7 +2498,7 @@ func_as: Sconst\n \t\t\t\t{ \t$$ = makeList2(makeString($1), makeString($3)); }\n \t\t;\n \n-func_return: Typename\n+func_return: func_type\n \t\t\t\t{\n \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n@@ -2508,6 +2508,18 @@ func_return: Typename\n \t\t\t\t}\n \t\t;\n \n+func_type:\tTypename\n+\t\t\t\t{\n+\t\t\t\t\t$$ = 
$1;\n+\t\t\t\t}\n+\t\t| IDENT '.' ColId '%' TYPE_P\n+\t\t\t\t{\n+\t\t\t\t\t$$ = makeNode(TypeName);\n+\t\t\t\t\t$$->name = $1;\n+\t\t\t\t\t$$->typmod = -1;\n+\t\t\t\t\t$$->attrname = $3;\n+\t\t\t\t}\n+\t\t;\n \n /*****************************************************************************\n *\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.92\ndiff -p -u -r1.92 parse_expr.c\n--- src/backend/parser/parse_expr.c\t2001/03/22 03:59:41\t1.92\n+++ src/backend/parser/parse_expr.c\t2001/04/28 03:38:26\n@@ -939,6 +939,7 @@ parser_typecast_expression(ParseState *p\n char *\n TypeNameToInternalName(TypeName *typename)\n {\n+\tAssert(typename->attrname == NULL);\n \tif (typename->arrayBounds != NIL)\n \t{\n \nIndex: src/test/regress/input/create_function_2.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/create_function_2.source,v\nretrieving revision 1.12\ndiff -p -u -r1.12 create_function_2.source\n--- src/test/regress/input/create_function_2.source\t2000/11/20 20:36:54\t1.12\n+++ src/test/regress/input/create_function_2.source\t2001/04/28 03:38:27\n@@ -13,6 +13,12 @@ CREATE FUNCTION hobby_construct(text, te\n LANGUAGE 'sql';\n \n \n+CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n+ RETURNS hobbies_r.person%TYPE\n+ AS 'select person from hobbies_r where name = $1'\n+ LANGUAGE 'sql';\n+\n+\n CREATE FUNCTION equipment(hobbies_r)\n RETURNS setof equipment_r\n AS 'select * from equipment_r where hobby = $1.name'\nIndex: src/test/regress/input/misc.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/misc.source,v\nretrieving revision 1.14\ndiff -p -u -r1.14 misc.source\n--- src/test/regress/input/misc.source\t2000/11/20 
20:36:54\t1.14\n+++ src/test/regress/input/misc.source\t2001/04/28 03:38:28\n@@ -214,6 +214,7 @@ SELECT user_relns() AS user_relns\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n \n+SELECT hobbies_by_name('basketball');\n \n --\n -- check that old-style C functions work properly with TOASTed values\nIndex: src/test/regress/output/create_function_2.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/create_function_2.source,v\nretrieving revision 1.13\ndiff -p -u -r1.13 create_function_2.source\n--- src/test/regress/output/create_function_2.source\t2000/11/20 20:36:54\t1.13\n+++ src/test/regress/output/create_function_2.source\t2001/04/28 03:38:28\n@@ -9,6 +9,10 @@ CREATE FUNCTION hobby_construct(text, te\n RETURNS hobbies_r\n AS 'select $1 as name, $2 as hobby'\n LANGUAGE 'sql';\n+CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n+ RETURNS hobbies_r.person%TYPE\n+ AS 'select person from hobbies_r where name = $1'\n+ LANGUAGE 'sql';\n CREATE FUNCTION equipment(hobbies_r)\n RETURNS setof equipment_r\n AS 'select * from equipment_r where hobby = $1.name'\nIndex: src/test/regress/output/misc.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/misc.source,v\nretrieving revision 1.27\ndiff -p -u -r1.27 misc.source\n--- src/test/regress/output/misc.source\t2000/11/20 20:36:54\t1.27\n+++ src/test/regress/output/misc.source\t2001/04/28 03:38:28\n@@ -656,6 +656,12 @@ SELECT user_relns() AS user_relns\n (90 rows)\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n+SELECT hobbies_by_name('basketball');\n+ hobbies_by_name \n+-----------------\n+ joe\n+(1 row)\n+\n --\n -- check that old-style C functions work properly with TOASTed values\n --\nIndex: 
doc/src/sgml/ref/create_function.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/create_function.sgml,v\nretrieving revision 1.21\ndiff -p -u -r1.21 create_function.sgml\n--- doc/src/sgml/ref/create_function.sgml\t2000/12/25 23:15:26\t1.21\n+++ doc/src/sgml/ref/create_function.sgml\t2001/04/28 03:38:31\n@@ -58,10 +58,16 @@ CREATE FUNCTION <replaceable class=\"para\n <listitem>\n <para>\n \tThe data type(s) of the function's arguments, if any.\n-\tThe input types may be base or complex types, or\n-\t<firstterm>opaque</firstterm>.\n+\tThe input types may be base or complex types,\n+\t<firstterm>opaque</firstterm>, or the same as the type of an\n+\texisting column.\n \t<literal>Opaque</literal> indicates that the function\n \taccepts arguments of a non-SQL type such as <type>char *</type>.\n+\tThe type of a column is indicated using <replaceable\n+\tclass=\"parameter\">tablename</replaceable>.<replaceable\n+\tclass=\"parameter\">columnname</replaceable><literal>%TYPE</literal>;\n+\tusing this can sometimes help make a function independent from\n+\tchanges to the definition of a table.\n </para>\n </listitem>\n </varlistentry>\n@@ -72,7 +78,8 @@ CREATE FUNCTION <replaceable class=\"para\n \tThe return data type.\n \tThe output type may be specified as a base type, complex type, \n \t<option>setof type</option>,\n-\tor <option>opaque</option>.\n+\t<option>opaque</option>, or the same as the type of an\n+\texisting column.\n \tThe <option>setof</option>\n \tmodifier indicates that the function will return a set of items,\n \trather than a single item.\n", "msg_date": "27 Apr 2001 20:45:25 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "On Fri, Apr 27, 2001 at 08:45:25PM -0700, Ian Lance Taylor wrote:\n> This patch adds support for %TYPE in CREATE FUNCTION argument and\n> return 
types.\n> \n> %TYPE is already supported by PL/pgSQL when declaring variables.\n> However, that does not help with the argument and return types in\n> CREATE FUNCTION.\n> \n> Using %TYPE makes it easier to write a function which is independent\n> of the definition of a table. That is, minor changes to the types\n> used in the table may not require changes to the function.\n\n\tWow! This would be _very_ useful! It's something I wish PostgreSQL \nhad and I miss it every time I write functions and remember PL/SQL.\n\n\tThanks a lot Ian, I hope this one makes it in (hopefully for 7.1.1)\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \n(Explosive Tagline)\n", "msg_date": "Sat, 28 Apr 2001 08:55:32 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "> > Using %TYPE makes it easier to write a function which is independent\n> > of the definition of a table. That is, minor changes to the types\n> > used in the table may not require changes to the function.\n> \n> \tWow! This would be _very_ useful! It's something I wish PostgreSQL \n> had and I miss it every time I write functions and remember PL/SQL.\n> \n> \tThanks a lot Ian, I hope this one makes it in (hopefully for 7.1.1)\n\nSorry, only in 7.2. No new features in minor releases unless they are\nvery safe.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 28 Apr 2001 18:45:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "On Sat, Apr 28, 2001 at 06:45:39PM -0400, Bruce Momjian wrote:\n> \n> Sorry, only in 7.2. No new features in minor releases unless they are\n> very safe.\n\n\tSo how was that patch not safe?\n\tIt sure would make porting Oracle apps to PostgreSQL _much_ easier.\n\tHow far down the line is 7.2 (my guess is a few months away at least)? \nIs there a doc with what's planned for 7.2 somewhere? I know Jan Wieck\nmentioned improvements in the procedural languages.\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \nKeyboard not connected, press F1 to continue.\n", "msg_date": "Sun, 29 Apr 2001 11:28:48 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "> On Sat, Apr 28, 2001 at 06:45:39PM -0400, Bruce Momjian wrote:\n> > \n> > Sorry, only in 7.2. No new features in minor releases unless they are\n> > very safe.\n> \n> \tSo how was that patch not safe?\n> \tIt sure would make porting Oracle apps to PostgreSQL _much_ easier.\n> \tHow far down the line is 7.2 (my guess is a few months away at least)? \n> Is there a doc with what's planned for 7.2 somewhere? I know Jan Wieck\n> mentioned improvements in the procedural languages.\n\nThe TODO list has a list of things we think need doing. There is an\nUrgent section that I hope we can focus on for 7.2. We can't promise\nwhat will be in 7.2 because we don't know what people will volunteer to\nwork on. 
I would guess 7.2 is 4-6 months away, at least.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 19:33:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "\nSorry, looks like this patch has to be rejected because it can not\nhandle table changes.\n\n> This patch adds support for %TYPE in CREATE FUNCTION argument and\n> return types.\n> \n> %TYPE is already supported by PL/pgSQL when declaring variables.\n> However, that does not help with the argument and return types in\n> CREATE FUNCTION.\n> \n> Using %TYPE makes it easier to write a function which is independent\n> of the definition of a table. That is, minor changes to the types\n> used in the table may not require changes to the function.\n> \n> For example, this trivial function will work whenever `table' which\n> has columns named `name' and `value', no matter what the types of the\n> columns are.\n> \n> CREATE FUNCTION lookup (table.name%TYPE)\n> RETURNS table.value%TYPE\n> AS 'select value from table where name = $1'\n> LANGUAGE 'sql';\n> \n> This patch includes changes to the testsuite and the documentation.\n> \n> This work was sponsored by Zembu.\n> \n> Ian\n> \n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.126\n> diff -p -u -r1.126 parsenodes.h\n> --- src/include/nodes/parsenodes.h\t2001/03/23 04:49:56\t1.126\n> +++ src/include/nodes/parsenodes.h\t2001/04/28 03:38:21\n> @@ -945,6 +945,7 @@ typedef struct TypeName\n> \tbool\t\tsetof;\t\t\t/* is a set? 
*/\n> \tint32\t\ttypmod;\t\t\t/* type modifier */\n> \tList\t *arrayBounds;\t/* array bounds */\n> +\tchar\t *attrname;\t\t/* field name when using %TYPE */\n> } TypeName;\n> \n> /*\n> Index: src/backend/parser/analyze.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/analyze.c,v\n> retrieving revision 1.183\n> diff -p -u -r1.183 analyze.c\n> --- src/backend/parser/analyze.c\t2001/03/22 06:16:15\t1.183\n> +++ src/backend/parser/analyze.c\t2001/04/28 03:38:23\n> @@ -27,6 +27,7 @@\n> #include \"parser/parse_relation.h\"\n> #include \"parser/parse_target.h\"\n> #include \"parser/parse_type.h\"\n> +#include \"parser/parse_expr.h\"\n> #include \"rewrite/rewriteManip.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/fmgroids.h\"\n> @@ -49,7 +50,10 @@ static Node *transformSetOperationTree(P\n> static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);\n> static Query *transformCreateStmt(ParseState *pstate, CreateStmt *stmt);\n> static Query *transformAlterTableStmt(ParseState *pstate, AlterTableStmt *stmt);\n> +static Node *transformTypeRefs(ParseState *pstate, Node *stmt);\n> \n> +static void transformTypeRefsList(ParseState *pstate, List *l);\n> +static void transformTypeRef(ParseState *pstate, TypeName *tn);\n> static List *getSetColTypes(ParseState *pstate, Node *node);\n> static void transformForUpdate(Query *qry, List *forUpdate);\n> static void transformFkeyGetPrimaryKey(FkConstraint *fkconstraint);\n> @@ -230,6 +234,18 @@ transformStmt(ParseState *pstate, Node *\n> \t\t\t\t\t\t\t\t\t\t\t (SelectStmt *) parseTree);\n> \t\t\tbreak;\n> \n> +\t\t\t/*\n> +\t\t\t * Convert use of %TYPE in statements where it is permitted.\n> +\t\t\t */\n> +\t\tcase T_ProcedureStmt:\n> +\t\tcase T_CommentStmt:\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\tcase T_DefineStmt:\n> +\t\t\tresult = makeNode(Query);\n> +\t\t\tresult->commandType = CMD_UTILITY;\n> +\t\t\tresult->utilityStmt 
= transformTypeRefs(pstate, parseTree);\n> +\t\t\tbreak;\n> +\n> \t\tdefault:\n> \n> \t\t\t/*\n> @@ -2607,6 +2623,104 @@ transformAlterTableStmt(ParseState *psta\n> \t}\n> \tqry->utilityStmt = (Node *) stmt;\n> \treturn qry;\n> +}\n> +\n> +/* \n> + * Transform uses of %TYPE in a statement.\n> + */\n> +static Node *\n> +transformTypeRefs(ParseState *pstate, Node *stmt)\n> +{\n> +\tswitch (nodeTag(stmt))\n> +\t{\n> +\t\tcase T_ProcedureStmt:\n> +\t\t{\n> +\t\t\tProcedureStmt *ps = (ProcedureStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, ps->argTypes);\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) ps->returnType);\n> +\t\t\ttransformTypeRefsList(pstate, ps->withClause);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_CommentStmt:\n> +\t\t{\n> +\t\t\tCommentStmt\t *cs = (CommentStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, cs->objlist);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\t{\n> +\t\t\tRemoveFuncStmt *rs = (RemoveFuncStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, rs->args);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_DefineStmt:\n> +\t\t{\n> +\t\t\tDefineStmt *ds = (DefineStmt *) stmt;\n> +\t\t\tList\t *ele;\n> +\n> +\t\t\tforeach(ele, ds->definition)\n> +\t\t\t{\n> +\t\t\t\tDefElem\t *de = (DefElem *) lfirst(ele);\n> +\n> +\t\t\t\tif (de->arg != NULL\n> +\t\t\t\t\t&& IsA(de->arg, TypeName))\n> +\t\t\t\t{\n> +\t\t\t\t\ttransformTypeRef(pstate, (TypeName *) de->arg);\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"Unsupported type %d in transformTypeRefs\",\n> +\t\t\t\t nodeTag(stmt));\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\treturn stmt;\n> +}\n> +\n> +/*\n> + * Transform uses of %TYPE in a list.\n> + */\n> +static void\n> +transformTypeRefsList(ParseState *pstate, List *l)\n> +{\n> +\tList\t *ele;\n> +\n> +\tforeach(ele, l)\n> +\t{\n> +\t\tif (IsA(lfirst(ele), TypeName))\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) lfirst(ele));\n> +\t}\n> +}\n> +\n> 
+/*\n> + * Transform a TypeName to not use %TYPE.\n> + */\n> +static void\n> +transformTypeRef(ParseState *pstate, TypeName *tn)\n> +{\n> +\tAttr *att;\n> +\tNode *n;\n> +\tVar\t *v;\n> +\n> +\tif (tn->attrname == NULL)\n> +\t\treturn;\n> +\tatt = makeAttr(tn->name, tn->attrname);\n> +\tn = transformExpr(pstate, (Node *) att, EXPR_COLUMN_FIRST);\n> +\tif (! IsA(n, Var))\n> +\t\telog(ERROR, \"unsupported expression in %%TYPE\");\n> +\tv = (Var *) n;\n> +\ttn->name = typeidTypeName(v->vartype);\n> +\ttn->typmod = v->vartypmod;\n> +\ttn->attrname = NULL;\n> }\n> \n> /* exported so planner can check again after rewriting, query pullup, etc */\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.221\n> diff -p -u -r2.221 gram.y\n> --- src/backend/parser/gram.y\t2001/02/18 18:06:10\t2.221\n> +++ src/backend/parser/gram.y\t2001/04/28 03:38:26\n> @@ -192,7 +192,7 @@ static void doNegateFloat(Value *v);\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> \t\tselect_limit, opt_select_limit\n> \n> -%type <typnam>\tfunc_arg, func_return, aggr_argtype\n> +%type <typnam>\tfunc_arg, func_return, func_type, aggr_argtype\n> \n> %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> \n> @@ -2462,7 +2462,7 @@ func_args_list: func_arg\n> \t\t\t\t{\t$$ = lappend($1, $3); }\n> \t\t;\n> \n> -func_arg: opt_arg Typename\n> +func_arg: opt_arg func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2470,7 +2470,7 @@ func_arg: opt_arg Typename\n> \t\t\t\t\t */\n> \t\t\t\t\t$$ = $2;\n> \t\t\t\t}\n> -\t\t| Typename\n> +\t\t| func_type\n> \t\t\t\t{\n> \t\t\t\t\t$$ = $1;\n> \t\t\t\t}\n> @@ -2498,7 +2498,7 @@ func_as: Sconst\n> \t\t\t\t{ \t$$ = makeList2(makeString($1), makeString($3)); }\n> \t\t;\n> 
\n> -func_return: Typename\n> +func_return: func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2508,6 +2508,18 @@ func_return: Typename\n> \t\t\t\t}\n> \t\t;\n> \n> +func_type:\tTypename\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = $1;\n> +\t\t\t\t}\n> +\t\t| IDENT '.' ColId '%' TYPE_P\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeNode(TypeName);\n> +\t\t\t\t\t$$->name = $1;\n> +\t\t\t\t\t$$->typmod = -1;\n> +\t\t\t\t\t$$->attrname = $3;\n> +\t\t\t\t}\n> +\t\t;\n> \n> /*****************************************************************************\n> *\n> Index: src/backend/parser/parse_expr.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\n> retrieving revision 1.92\n> diff -p -u -r1.92 parse_expr.c\n> --- src/backend/parser/parse_expr.c\t2001/03/22 03:59:41\t1.92\n> +++ src/backend/parser/parse_expr.c\t2001/04/28 03:38:26\n> @@ -939,6 +939,7 @@ parser_typecast_expression(ParseState *p\n> char *\n> TypeNameToInternalName(TypeName *typename)\n> {\n> +\tAssert(typename->attrname == NULL);\n> \tif (typename->arrayBounds != NIL)\n> \t{\n> \n> Index: src/test/regress/input/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/create_function_2.source,v\n> retrieving revision 1.12\n> diff -p -u -r1.12 create_function_2.source\n> --- src/test/regress/input/create_function_2.source\t2000/11/20 20:36:54\t1.12\n> +++ src/test/regress/input/create_function_2.source\t2001/04/28 03:38:27\n> @@ -13,6 +13,12 @@ CREATE FUNCTION hobby_construct(text, te\n> LANGUAGE 'sql';\n> \n> \n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> +\n> +\n> CREATE 
FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/input/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/misc.source,v\n> retrieving revision 1.14\n> diff -p -u -r1.14 misc.source\n> --- src/test/regress/input/misc.source\t2000/11/20 20:36:54\t1.14\n> +++ src/test/regress/input/misc.source\t2001/04/28 03:38:28\n> @@ -214,6 +214,7 @@ SELECT user_relns() AS user_relns\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> \n> +SELECT hobbies_by_name('basketball');\n> \n> --\n> -- check that old-style C functions work properly with TOASTed values\n> Index: src/test/regress/output/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/create_function_2.source,v\n> retrieving revision 1.13\n> diff -p -u -r1.13 create_function_2.source\n> --- src/test/regress/output/create_function_2.source\t2000/11/20 20:36:54\t1.13\n> +++ src/test/regress/output/create_function_2.source\t2001/04/28 03:38:28\n> @@ -9,6 +9,10 @@ CREATE FUNCTION hobby_construct(text, te\n> RETURNS hobbies_r\n> AS 'select $1 as name, $2 as hobby'\n> LANGUAGE 'sql';\n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> CREATE FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/output/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/misc.source,v\n> retrieving revision 1.27\n> diff -p -u -r1.27 misc.source\n> --- 
src/test/regress/output/misc.source\t2000/11/20 20:36:54\t1.27\n> +++ src/test/regress/output/misc.source\t2001/04/28 03:38:28\n> @@ -656,6 +656,12 @@ SELECT user_relns() AS user_relns\n> (90 rows)\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> +SELECT hobbies_by_name('basketball');\n> + hobbies_by_name \n> +-----------------\n> + joe\n> +(1 row)\n> +\n> --\n> -- check that old-style C functions work properly with TOASTed values\n> --\n> Index: doc/src/sgml/ref/create_function.sgml\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/create_function.sgml,v\n> retrieving revision 1.21\n> diff -p -u -r1.21 create_function.sgml\n> --- doc/src/sgml/ref/create_function.sgml\t2000/12/25 23:15:26\t1.21\n> +++ doc/src/sgml/ref/create_function.sgml\t2001/04/28 03:38:31\n> @@ -58,10 +58,16 @@ CREATE FUNCTION <replaceable class=\"para\n> <listitem>\n> <para>\n> \tThe data type(s) of the function's arguments, if any.\n> -\tThe input types may be base or complex types, or\n> -\t<firstterm>opaque</firstterm>.\n> +\tThe input types may be base or complex types,\n> +\t<firstterm>opaque</firstterm>, or the same as the type of an\n> +\texisting column.\n> \t<literal>Opaque</literal> indicates that the function\n> \taccepts arguments of a non-SQL type such as <type>char *</type>.\n> +\tThe type of a column is indicated using <replaceable\n> +\tclass=\"parameter\">tablename</replaceable>.<replaceable\n> +\tclass=\"parameter\">columnname</replaceable><literal>%TYPE</literal>;\n> +\tusing this can sometimes help make a function independent from\n> +\tchanges to the definition of a table.\n> </para>\n> </listitem>\n> </varlistentry>\n> @@ -72,7 +78,8 @@ CREATE FUNCTION <replaceable class=\"para\n> \tThe return data type.\n> \tThe output type may be specified as a base type, complex type, \n> \t<option>setof type</option>,\n> -\tor 
<option>opaque</option>.\n> +\t<option>opaque</option>, or the same as the type of an\n> +\texisting column.\n> \tThe <option>setof</option>\n> \tmodifier indicates that the function will return a set of items,\n> \trather than a single item.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 28 May 2001 10:15:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Sorry, looks like this patch has to be rejected because it can not\n> handle table changes.\n\n> > This patch adds support for %TYPE in CREATE FUNCTION argument and\n> > return types.\n\nDoes anybody want to suggest how to handle table changes? Does\nanybody want to work with me to make this patch acceptable? Or is\nthis functionality of no interest to the Postgres development team?\n\nIan\n", "msg_date": "28 May 2001 15:47:24 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Sorry, looks like this patch has to be rejected because it can not\n> > handle table changes.\n> \n> > > This patch adds support for %TYPE in CREATE FUNCTION argument and\n> > > return types.\n> \n> Does anybody want to suggest how to handle table changes? Does\n> anybody want to work with me to make this patch acceptable? 
Or is\n> this functionality of no interest to the Postgres development team?\n\nI think the major problem was that our pg_proc table doesn't have any\nway of handling arg changes. In fact, we need a ALTER FUNCTION\ncapability first so we can recreate functions in place with the same\nOID. We may then be able to recreate the function on table change, but\nI think we will need this TODO item done also:\n\n\t* Add pg_depend table to track object dependencies\n\nSo it seems we need two items done first, then we would have the tools\nto properly implement this functionality.\n\nSo, yes, the functionality is desired, but it has to be done with the\nproper groundwork already in place.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 28 May 2001 21:13:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "At 03:47 PM 5/28/01 -0700, Ian Lance Taylor wrote:\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>\n>> Sorry, looks like this patch has to be rejected because it can not\n>> handle table changes.\n>\n>> > This patch adds support for %TYPE in CREATE FUNCTION argument and\n>> > return types.\n>\n>Does anybody want to suggest how to handle table changes? Does\n>anybody want to work with me to make this patch acceptable? Or is\n>this functionality of no interest to the Postgres development team?\n\nI don't know about the Postgres development team, but it is of great\ninterest to the OpenACS project team. 
We've got hundreds or perhaps\nthousands of PL/SQL procs and funcs in our code base that use this\nnotation and it would be very, very nice if we could use this construct\nin our PostgreSQL code base.\n\nI suspect any organization or project attempting to either migrate\nfrom Oracle to Postgres or trying to support both databases (as we\ndo at OpenACS) will find this very useful.\n\nWe're deep in the midst of our rewrite of the Ars Digita code base that\nwe've inherited so don't have any resources to offer to help solve the\nproblem. \n\nBut we can offer encouragement and appreciation!\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 29 May 2001 08:57:54 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think the major problem was that our pg_proc table doesn't have any\n> way of handling arg changes. In fact, we need a ALTER FUNCTION\n> capability first so we can recreate functions in place with the same\n> OID.\n\nActually that's the least of the issues. The real problem is that\nbecause of function overloading, myfunc(int4) and myfunc(int2) (for\nexample) are considered completely different functions. It is thus\nnot at all clear what should happen if I create myfunc(foo.f1%TYPE)\nand later alter the type of foo.f1 from int4 to int2. Does myfunc(int4)\nstop existing? 
What if a conflicting myfunc(int2) already exists?\nWhat happens to type-specific references to myfunc(int4) --- for\nexample, what if it's used as the implementation function for an\noperator declared on int4?\n\nWorrying about implementation issues is premature when you haven't\ngot an adequate definition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 May 2001 12:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think the major problem was that our pg_proc table doesn't have any\n> > way of handling arg changes. In fact, we need a ALTER FUNCTION\n> > capability first so we can recreate functions in place with the same\n> > OID.\n> \n> Actually that's the least of the issues. The real problem is that\n> because of function overloading, myfunc(int4) and myfunc(int2) (for\n> example) are considered completely different functions. It is thus\n> not at all clear what should happen if I create myfunc(foo.f1%TYPE)\n> and later alter the type of foo.f1 from int4 to int2. Does myfunc(int4)\n> stop existing? What if a conflicting myfunc(int2) already exists?\n> What happens to type-specific references to myfunc(int4) --- for\n> example, what if it's used as the implementation function for an\n> operator declared on int4?\n> \n> Worrying about implementation issues is premature when you haven't\n> got an adequate definition.\n\nIt's pretty easy to define what to do in each of the cases you\ndescribe. The options are: 1) leave the function unchanged; 2) alter\nthe function to use the new type; 3) define a copy of the function\nwith the new type. 
In cases 2 or 3 you have to consider whether there\nis already a function with the new type; if there is, you have to\neither: 23a) replace the new function; 23b) issue a NOTICE; 23c) issue\na NOTICE and drop the old function. In case 2 you also have to\nconsider whether something is using the old function; if there is, you\nhave to 2a) leave the old function there; 2b) issue a NOTICE while\ndropping the old function.\n\nI propose this: if a table definition changes, alter the function to\nuse the new type (choice 2). If there is already a function with the\nnew type, issue a NOTICE and drop the old function (choice 23b). If\nsomething is using the old function, issue a NOTICE while dropping the\nold function (choice 2b).\n\nOf course, this is made much easier if there is a pg_depends table\nwhich accurately records dependencies.\n\n\nI have a meta-point: the choices to be made here are not all that\ninteresting. They do have to be defined. But almost any definition\nis OK. Users are not going to routinely redefine tables with attached\nfunctions; when they do, they must be prepared to consider the\nconsequences. If anybody thinks that different choices should be made\nin this case, that is certainly fine with me.\n\nIf you agree with me on the meta-point, then this is just a quibble\nabout my original patch (which made choice 1 above). If you disagree\nwith me, I'd like to understand why.\n\nIan\n", "msg_date": "30 May 2001 10:06:06 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "> Of course, this is made much easier if there is a pg_depends table\n> which accurately records dependencies.\n\nYes, that was a nifty idea.\n\n> I have a meta-point: the choices to be made here are not all that\n> interesting. They do have to be defined. But almost any definition\n> is OK. 
Users are not going to routinely redefine tables with attached\n> functions; when they do, they must be prepared to consider the\n> consequences. If anybody thinks that different choices should be made\n> in this case, that is certainly fine with me.\n> \n> If you agree with me on the meta-point, then this is just a quibble\n> about my original patch (which made choice 1 above). If you disagree\n> with me, I'd like to understand why.\n\nI agree that having problems when a table is defined is acceptable. It\nis not like someone is _forced_ to use the feature.\n\nSo far that is three or four people who like the feature, and I have\nonly heard one opposed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 May 2001 13:14:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> I have a meta-point: the choices to be made here are not all that\n> interesting. They do have to be defined. But almost any definition\n> is OK.\n\nWell, that implicit assumption is exactly the one I was questioning;\n*is* it OK not to be very concerned about what the behavior is? ISTM\nthat how the system handles these cases will constrain the use of the\n%TYPE feature into certain pathways. The limitations arising from your\noriginal patch presumably don't matter for your intended use, but they\nmay nonetheless be surprising for people who try to use it differently.\n(We've seen cases before where someone does a quick-and-dirty feature\naddition that fails to act as other people expect it to.)\n\nI wanted to see a clear understanding of what the corner-case behavior\nis, and a consensus that that behavior is acceptable all 'round. 
If\nthe quick-and-dirty route will be satisfactory over the long run, fine;\nbut I don't much want to install a new feature that is immediately going\nto draw bug reports/upgrade requests/whatever you want to call 'em.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 May 2001 13:25:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION " }, { "msg_contents": "At 12:30 PM 5/30/01 -0400, Tom Lane wrote:\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I think the major problem was that our pg_proc table doesn't have any\n>> way of handling arg changes. In fact, we need a ALTER FUNCTION\n>> capability first so we can recreate functions in place with the same\n>> OID.\n>\n>Actually that's the least of the issues. The real problem is that\n>because of function overloading, myfunc(int4) and myfunc(int2) (for\n>example) are considered completely different functions. It is thus\n>not at all clear what should happen if I create myfunc(foo.f1%TYPE)\n>and later alter the type of foo.f1 from int4 to int2. Does myfunc(int4)\n>stop existing?\n\nWhat happens now with PL/pgSQL variables? Does it continue to point\nint4 as long as the backend stays alive, but switch in new backends\nas they come to life, the function gets called, and the body recompiled?\n\n(Compiled bytes are stored on a per-backend basis, right? Or wrong? 
:)\n\nThat's not particularly relevant to the parameter case other than to\npoint out that we may already have some weirdness in PL/pgSQL in\nthis regard.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 30 May 2001 10:48:20 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION " }, { "msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n\n> At 12:30 PM 5/30/01 -0400, Tom Lane wrote:\n> >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I think the major problem was that our pg_proc table doesn't have any\n> >> way of handling arg changes. In fact, we need a ALTER FUNCTION\n> >> capability first so we can recreate functions in place with the same\n> >> OID.\n> >\n> >Actually that's the least of the issues. The real problem is that\n> >because of function overloading, myfunc(int4) and myfunc(int2) (for\n> >example) are considered completely different functions. It is thus\n> >not at all clear what should happen if I create myfunc(foo.f1%TYPE)\n> >and later alter the type of foo.f1 from int4 to int2. Does myfunc(int4)\n> >stop existing?\n> \n> What happens now with PL/pgSQL variables? Does it continue to point\n> int4 as long as the backend stays alive, but switch in new backends\n> as they come to life, the function gets called, and the body recompiled?\n> \n> (Compiled bytes are stored on a per-backend basis, right? Or wrong? 
:)\n> \n> That's not particularly relevant to the parameter case other than to\n> point out that we may already have some weirdness in PL/pgSQL in\n> this regard.\n\nI assume you mean: what happens now with a PL/pgSQL variable which is\ndeclared using table.row%TYPE?\n\nAs you suspect, the answer is that any existing backend which has\nalready compiled the function will continue to use the old\ndefinition. Any new backend will recompile the function and get the\nnew definition.\n\nAs far as I can see in a quick look, there is currently no interface\nto direct PL/pgSQL that it must reparse a function. And there is no\nway for PL/pgSQL to register interest in table changes.\n\nIan\n", "msg_date": "30 May 2001 11:01:54 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Tom Lane wrote:\n> Ian Lance Taylor <ian@airs.com> writes:\n> > I have a meta-point: the choices to be made here are not all that\n> > interesting. They do have to be defined. But almost any definition\n> > is OK.\n>\n> Well, that implicit assumption is exactly the one I was questioning;\n> *is* it OK not to be very concerned about what the behavior is? ISTM\n> that how the system handles these cases will constrain the use of the\n> %TYPE feature into certain pathways. The limitations arising from your\n> original patch presumably don't matter for your intended use, but they\n> may nonetheless be surprising for people who try to use it differently.\n> (We've seen cases before where someone does a quick-and-dirty feature\n> addition that fails to act as other people expect it to.)\n\n IMHO the possible confusion added by supporting %TYPE in our\n utility statements is too high a risk.\n\n What most of those in favor of doing it right now want is an\n easy Oracle->PostgreSQL one-time porting path. 
Reasonable,\n but solvable with some external preprocessor/script too.\n\n I see that the currently discussed implementation adds more\n Oracle incompatibility than compatibility. This is because\n there are different times between the interpretation of %TYPE\n inside and outside of a procedure's body. Inside the PL/pgSQL\n declarations, it's parsed at each first call of a function\n per session, so there is at least some chance that changes\n propagate up (at reconnect time).\n\n But used in the utility statement to specify arguments,\n column types and the like, they are interpreted just once and\n stored as that in our catalog. We don't remember the\n original CREATE statement that created it. So even if we\n remember that this thing once depended on another, we don't\n know what to do if that other is altered.\n\n Thus, usage of %TYPE inside of a PL/pgSQL function is OK,\n because it behaves more or less as expected - at least\n after reconnecting. Using it outside IMHO isn't, because the\n type reference cannot be stored as that, but has to be\n resolved once and forever with possible code breakage if the\n referenced object's type changes. The kind of breakage could\n be extremely tricky and the code might appear to work but\n does the wrong thing internally (think about changing a\n column from DOUBLE to NUMERIC and assuming that everything\n working with this column is doing exact precision from now on\n - it might NOT).\n\n A \"No\" from here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 14:39:26 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> What most of those if favor for doing it right now want is an\n> easy Oracle->PostgreSQL one-time porting path. Reasonable,\n> but solveable with some external preprocessor/script too.\n\nCan you explain how an external preprocessor/script addresses the\nissue of %TYPE in a function definition? Presumably the preprocessor\nhas to translate %TYPE into some definite type when it creates the\nfunction. But how can a preprocessor address the issue of what to do\nwhen the table definition changes? There still has to be an entry in\npg_proc for the procedure. What happens to that entry when the table\nchanges?\n\nYou seem to be saying that %TYPE can be implemented via some other\nmechanism. That is fine with me, but how would that other mechanism\nwork? Why it would not raise the exact same set of issues?\n\nIan\n", "msg_date": "30 May 2001 12:22:30 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor wrote:\n> [...]\n> I propose this: if a table definition changes, alter the function to\n> use the new type (choice 2). If there is already a function with the\n> new type, issue a NOTICE and drop the old function (choice 23b). If\n> something is using the old function, issue a NOTICE while dropping the\n> old function (choice 2b).\n\n Altering a function definition in any language other than\n PL/pgSQL really scares me. 
What do you expect a \"C\" function\n declared to take a VARCHAR argument to do if you just change\n the pg_proc entry telling it now takes a NAME? I'd expect it\n to generate a signal 11 most of it's calls, and nothing\n really useful the other times.\n\n And you have no chance of limiting your implementation to\n functions defined in PL/pgSQL. It's a loadable PL so you\n don't even know the languages or handlers Oid at compile\n time.\n\n> If you agree with me on the meta-point, then this is just a quibble\n> about my original patch (which made choice 1 above). If you disagree\n> with me, I'd like to understand why.\n\n The possible SIGSEGV above. Please don't take it personally,\n I'm talking tech here, but it seems you forgot that PL/pgSQL\n is just *one* of many possible languages.\n\n And please forget about a chance to finally track all\n dependencies. You'll never be able to know if some PL/Tcl or\n PL/Python function/trigger uses that function. So not getting\n your NOTICE doesn't tell if really nothing broke. As soon as\n you tell me you can I'd implement PL/Forth or PL/Pascal -\n maybe PL/COBOL or PL/RPL (using an embedded HP48 emulator)\n just to tell \"you can't\" again :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 16:00:00 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> Altering a function definition in any language other than\n> PL/pgSQL really scares me. 
What do you expect a \"C\" function\n> declared to take a VARCHAR argument to do if you just change\n> the pg_proc entry telling it now takes a NAME? I'd expect it\n> to generate a signal 11 most of it's calls, and nothing\n> really useful the other times.\n\nGood point.\n\nThat brings me back to choice 1 in my original message: don't try to\nchange the function if the table definition changes.\n\nIn fact, it's possible to do better. A procedural language could\ndefine a hook to handle table definition changes. The Postgres\nbackend could define a way to register to receive notification of\ntable definition changes (this would essentially be an entry in a\ntable like the proposed pg_depends). The procedural language itself\ncould then handle the table changes by redefining the function or\nwhatever.\n\nWhen defining a function using %TYPE, the procedural language would be\nnotified that %TYPE was used. It could then record a dependency, if\nit was prepared to handle one.\n\nThis would permit PL/pgSQL to redefine the function defined using\n%TYPE if that seems desirable. It would also permit PL/pgSQL to\nbehave more reasonably with regard to variables defined using %TYPE.\n\nThis would also permit the C function handler to issue a NOTICE when a\nC function was defined using %TYPE and the table definition was\nchanged.\n\n> > If you agree with me on the meta-point, then this is just a quibble\n> > about my original patch (which made choice 1 above). If you disagree\n> > with me, I'd like to understand why.\n> \n> The possible SIGSEGV above. Please don't take it personally,\n> I'm talking tech here, but it seems you forgot that PL/pgSQL\n> is just *one* of many possible languages.\n\nActually, I don't see this as a disagreement about my meta-point.\nUsers who use %TYPE must watch out if they change a table definition.\nA SIGSEGV is just an extreme case.\n\n> And please forget about a chance to finally track all\n> dependencies. 
You'll never be able to know if some PL/Tcl or\n> PL/Python function/trigger uses that function. So not getting\n> your NOTICE doesn't tell if really nothing broke. As soon as\n> you tell me you can I'd implement PL/Forth or PL/Pascal -\n> maybe PL/COBOL or PL/RPL (using an embedded HP48 emulator)\n> just to tell \"you can't\" again :-)\n\nI don't entirely understand this. I can break the system just as\neasily using DROP FUNCTION. At some point, I think the programmer has\nto take responsibility.\n\n\nI return to the question of whether the Postgres development team is\ninterested in support for %TYPE. If the team is not interested, then\nI'm wasting my time. I'm seeing a no from you and Tom Lane, and a\nmaybe from Bruce Momjian.\n\nIan\n", "msg_date": "30 May 2001 13:13:44 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n>\n> > What most of those if favor for doing it right now want is an\n> > easy Oracle->PostgreSQL one-time porting path. Reasonable,\n> > but solveable with some external preprocessor/script too.\n>\n> Can you explain how an external preprocessor/script addresses the\n> issue of %TYPE in a function definition? Presumably the preprocessor\n> has to translate %TYPE into some definite type when it creates the\n> function. But how can a preprocessor address the issue of what to do\n> when the table definition changes? There still has to be an entry in\n> pg_proc for the procedure. What happens to that entry when the table\n> changes?\n>\n> You seem to be saying that %TYPE can be implemented via some other\n> mechanism. That is fine with me, but how would that other mechanism\n> work? 
Why it would not raise the exact same set of issues?\n\n What I (wanted to have) said is that the \"one-time porting\"\n can be solved by external preprocessing/translation of %TYPE\n into the resolved type at porting time. That is *porting*\n instead of making the target system emulate the original\n platform. You know, today you can run a mainframe application\n on an Intel architecture by running IBM's OS390 emulator\n under Linux - but is that porting?\n\n And I repeat what I've allways said over the past years. I\n don't feel the need for all the catalog mucking with most of\n the ALTER commands. Changing column types here and there,\n dropping and renaming columns and tables somewhere else and\n kicking the entire schema while holding data around during\n application coding doesn't have anything to do with\n development or software engineering. It's pure script-kiddy\n hacking or even worse quality. There seems to be no business\n process description, no data model or any other \"plan\", just\n this \"let's code around until something seems to work all of\n the sudden\". Where's the problem description, application\n spec, all the stuff the DB schema resulted from? Oh - it\n resulted from \"I need another column because I have this\n unexpected value I need to keep - and if there'll be more of\n them I can ALTER it to be an array\". Well, if that's what\n people consider \"development\", all they really need is\n\n ALTER n% OF SCHEMA AT RANDOM;\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 17:02:44 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> What I (wanted to have) said is that the \"one-time porting\"\n> can be solved by external preprocessing/translation of %TYPE\n> into the resolved type at porting time. That is *porting*\n> instead of making the target system emulate the original\n> platform. You know, today you can run a mainframe application\n> on an Intel architecture by running IBM's OS390 emulator\n> under Linux - but is that porting?\n\nAh. My personal interest is not in doing a straight port from Oracle\nto Postgres and never going back. I'm sure there are people\ninterested in that. Personally, I'm interested in supporting people\nwho want to use either Oracle or Postgres, or both, with the same\napplication.\n\n> And I repeat what I've allways said over the past years. I\n> don't feel the need for all the catalog mucking with most of\n> the ALTER commands. Changing column types here and there,\n> dropping and renaming columns and tables somewhere else and\n> kicking the entire schema while holding data around during\n> application coding doesn't have anything to do with\n> development or software engineering. It's pure script-kiddy\n> hacking or even worse quality. There seems to be no business\n> process description, no data model or any other \"plan\", just\n> this \"let's code around until something seems to work all of\n> the sudden\". Where's the problem description, application\n> spec, all the stuff the DB schema resulted from? 
Oh - it\n> resulted from \"I need another column because I have this\n> unexpected value I need to keep - and if there'll be more of\n> them I can ALTER it to be an array\". Well, if that's what\n> people consider \"development\", all they really need is\n> \n> ALTER n% OF SCHEMA AT RANDOM;\n\nIt is desirable to have some reasonable mechanism for changing the\nschema without requiring data to be dumped and reloaded. Otherwise it\nis very difficult to upgrade a system which needs to be up 24/7, such\nas many web sites today.\n\nIt is not acceptable for eBay to shut down their system for even just\na few hours for maintenance. Shouldn't it be possible for eBay to run\non top of Postgres?\n\nIan\n", "msg_date": "30 May 2001 14:22:38 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n>\n> > Altering a function definition in any language other than\n> > PL/pgSQL really scares me. What do you expect a \"C\" function\n> > declared to take a VARCHAR argument to do if you just change\n> > the pg_proc entry telling it now takes a NAME? I'd expect it\n> > to generate a signal 11 most of it's calls, and nothing\n> > really useful the other times.\n>\n> Good point.\n>\n> That brings me back to choice 1 in my original message: don't try to\n> change the function if the table definition changes.\n>\n> In fact, it's possible to do better. A procedural language could\n> define a hook to handle table definition changes. The Postgres\n> backend could define a way to register to receive notification of\n> table definition changes (this would essentially be an entry in a\n> table like the proposed pg_depends). 
The procedural language itself\n> could then handle the table changes by redefining the function or\n> whatever.\n>\n> When defining a function using %TYPE, the procedural language would be\n> notified that %TYPE was used. It could then record a dependency, if\n> it was prepared to handle one.\n\n When defining a function, there is absolutely no language\n dependant code invoked (except for 'sql'). So at the time\n you do the CREATE FUNCTION, the PL/pgSQL handler doesn't even\n get loaded. All the utility does is creating the pg_proc\n entry.\n\n When the analyzis of a query results in this pg_proc entries\n oid to appear in a Func node and that Func node get's hit\n during the queries execution, then the function manager will\n load the PL handler and call it.\n\n What you describe above is a general schema change callback\n entry point into a procedural language module. It get's\n called at CREATE/DROP FUNCTION and any other catalog change -\n right? And the backend loads all declared procedural language\n handlers at startup time so they can register themself for\n callback - right? Sound's more like a bigger project than a\n small grammar change.\n\n> This would permit PL/pgSQL to redefine the function defined using\n> %TYPE if that seems desirable. It would also permit PL/pgSQL to\n> behave more reasonably with regard to variables defined using %TYPE.\n\n Ah - so the CREATE FUNCTION utility doesn't create the\n pg_proc entry any more, but just calls some function in the\n PL handler doing all the job? Of course, one language might,\n while another uses the backward compatibility mode of the\n existing CREATE FUNCTION - that's neat. 
And since the general\n schema change callback informs one PL (the one that want's to\n get informed), every language could decide on it's own if\n it's better to create another overload function, drop the\n existing, modify the existing or just abort the transaction\n if it gets confused.\n\n> This would also permit the C function handler to issue a NOTICE when a\n> C function was defined using %TYPE and the table definition was\n> changed.\n\n Seems I missed some code changes in the past, so where's this\n new C function handler located and how does it work?\n\n> I return to the question of whether the Postgres development team is\n> interested in support for %TYPE. If the team is not interested, then\n> I'm wasting my time. I'm seeing a no from you and Tom Lane, and a\n> maybe from Bruce Momjian.\n\n I don't say we shouldn't have support for %TYPE. But if we\n have it, ppl will assume it tracks later schema changes, but\n with what I've seen so far it either could have severe side\n effects on other languages or just doesn't do it. A change\n like %TYPE support is a little too fundamental to get this\n quick yes/no decision just in a few days.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 May 2001 18:00:05 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "> > I return to the question of whether the Postgres development team is\n> > interested in support for %TYPE. If the team is not interested, then\n> > I'm wasting my time. 
I'm seeing a no from you and Tom Lane, and a\n> > maybe from Bruce Momjian.\n> \n> I don't say we shouldn't have support for %TYPE. But if we\n> have it, ppl will assume it tracks later schema changes, but\n> with what I've seen so far it either could have severe side\n> effects on other languages or just doesn't do it. A change\n> like %TYPE support is a little too fundamental to get this\n> quick yes/no decision just in a few days.\n\nCan't we just throw a NOTICE and let them do it. Seems harmless to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 30 May 2001 18:01:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "At 02:22 PM 5/30/01 -0700, Ian Lance Taylor wrote:\n\n>Ah. My personal interest is not in doing a straight port from Oracle\n>to Postgres and never going back. I'm sure there are people\n>interested in that. Personally, I'm interested in supporting people\n>who want to use either Oracle or Postgres, or both, with the same\n>application.\n\nWhich is what we're doing with the OpenACS toolkit. 
We can (and have,\nactually) stripped these out of the parameter lists but the resulting\nfunction definitions are less clear.\n\nEven with %TYPE we won't actually share datamodel sources, of course,\nbut the less that's different, the easier it is for folks to work\non the code.\n\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 30 May 2001 15:56:35 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> It is desirable to have some reasonable mechanism for changing the\n> schema without requiring data to be dumped and reloaded. Otherwise it\n> is very difficult to upgrade a system which needs to be up 24/7, such\n> as many web sites today.\n\n> It is not acceptable for eBay to shut down their system for even just\n> a few hours for maintenance. Shouldn't it be possible for eBay to run\n> on top of Postgres?\n\nWhat's that got to do with the argument at hand? On-the-fly schema\nchanges aren't free either; at the very least you have to lock down the\ntables involved while you change them. When the change cascades across\nmultiple tables and functions (if it doesn't, this feature is hardly\nof any use!), ISTM you still end up shutting down your operation for as\nlong as it takes to do the changes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 May 2001 20:37:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION " }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n\n> What you describe above is a general schema change callback\n> entry point into a procedural language module. It get's\n> called at CREATE/DROP FUNCTION and any other catalog change -\n> right? 
And the backend loads all declared procedural language\n> handlers at startup time so they can register themself for\n> callback - right? Sound's more like a bigger project than a\n> small grammar change.\n\nYes. But since it doesn't look like the small grammar change will get\ninto the sources, the bigger project appears to be needed.\n\n> I don't say we shouldn't have support for %TYPE. But if we\n> have it, ppl will assume it tracks later schema changes, but\n> with what I've seen so far it either could have severe side\n> effects on other languages or just doesn't do it. A change\n> like %TYPE support is a little too fundamental to get this\n> quick yes/no decision just in a few days.\n\nUnderstood. I don't need a quick yes/no decision on the patch--after\nall, I submitted it a month ago.\n\nWhat would help a lot, though, is some indication of whether this\npatch is of interest. Should I put the time into doing something\nalong the lines that I outlined? Would that get accepted? Or would I\nbe wasting my time, and should I just keep my much simpler patch as a\nlocal change?\n\nI've been doing the free software thing for over a decade, both as a\ncontributor and as a maintainer, with many different projects. For\nany given functionality, I've normally been able to say ``this would\nbe good'' or ``this would be bad'' or ``this would be too hard to\nmaintain'' or ``this is irrelevant, but it's OK if you do all the\nwork.'' I'm having trouble getting a feel for how Postgres\ndevelopment is done. 
In general, I would like to see a roadmap, and I\nwould like to see where Oracle compatibility falls on that roadmap.\nIn specific, I'm trying to understand what the feeling is about this\nparticular functionality.\n\nIan\n", "msg_date": "30 May 2001 17:58:40 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > It is desirable to have some reasonable mechanism for changing the\n> > schema without requiring data to be dumped and reloaded. Otherwise it\n> > is very difficult to upgrade a system which needs to be up 24/7, such\n> > as many web sites today.\n> \n> > It is not acceptable for eBay to shut down their system for even just\n> > a few hours for maintenance. Shouldn't it be possible for eBay to run\n> > on top of Postgres?\n> \n> What's that got to do with the argument at hand? On-the-fly schema\n> changes aren't free either; at the very least you have to lock down the\n> tables involved while you change them. When the change cascades across\n> multiple tables and functions (if it doesn't, this feature is hardly\n> of any use!), ISTM you still end up shutting down your operation for as\n> long as it takes to do the changes.\n\nThat's a lot better than a dump and restore.\n\nI was just responding to Jan's comments about ALTER statements. Jan's\ncomments didn't appear to have anything to do with %TYPE, and mine\ndidn't either. Apologies if I misunderstood.\n\nIan\n", "msg_date": "30 May 2001 18:01:19 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "On Wed, May 30, 2001 at 12:30:23PM -0400, Tom Lane wrote:\n> Actually that's the least of the issues. 
The real problem is that\n> because of function overloading, myfunc(int4) and myfunc(int2) (for\n> example) are considered completely different functions. It is thus\n> not at all clear what should happen if I create myfunc(foo.f1%TYPE)\n> and later alter the type of foo.f1 from int4 to int2. Does myfunc(int4)\n> stop existing? What if a conflicting myfunc(int2) already exists?\n> What happens to type-specific references to myfunc(int4) --- for\n> example, what if it's used as the implementation function for an\n> operator declared on int4?\n\nWould the idea of %TYPE being considered a \"default\" type, so it won't\nconflict with any more specific functions be out of the question?\n\nFor example, if I call myfunc(int4), it'll first check if there's a\nmyfunc(int4), then failing that, check if there's a myfunc(foo.bar%TYPE).\n\nUmm.. of course, there's no reason why it should search in that order,\nbecause checking for myfunc(foo.bar%TYPE) first would be just as valid,\nbut either way, it's a well defined semantic.\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Thu, 31 May 2001 14:04:57 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Ian Lance Taylor wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> > Ian Lance Taylor <ian@airs.com> writes:\n> > > It is desirable to have some reasonable mechanism for changing the\n> > > schema without requiring data to be dumped and reloaded. Otherwise it\n> > > is very difficult to upgrade a system which needs to be up 24/7, such\n> > > as many web sites today.\n> >\n> > > It is not acceptable for eBay to shut down their system for even just\n> > > a few hours for maintenance. Shouldn't it be possible for eBay to run\n> > > on top of Postgres?\n> >\n> > What's that got to do with the argument at hand? 
On-the-fly schema\n> > changes aren't free either; at the very least you have to lock down the\n> > tables involved while you change them. When the change cascades across\n> > multiple tables and functions (if it doesn't, this feature is hardly\n> > of any use!), ISTM you still end up shutting down your operation for as\n> > long as it takes to do the changes.\n>\n> That's a lot better than a dump and restore.\n\n Indeed.\n\n> I was just responding to Jan's comments about ALTER statements. Jan's\n> comments didn't appear to have anything to do with %TYPE, and mine\n> didn't either. Apologies if I misunderstood.\n\n That's what happens when ppl run out of arguments, and\n developers are human beeings too - unfortunately ;-}\n\n I think Bruce made a point in his other tread about imperfect\n fixes. This is of course no fix but a feature. Then again we\n have to think about \"imperfect features\" as well, and looking\n at the past (foreign key, PL/pgSQL itself and lztext - just\n to blame myself) I realize that I've not been that much of a\n perfectionist I claim to be in recent posts.\n\n And Bruce is right, the speed we demonstrated in gaining\n features wouldn't have been possible if we'd insisted on\n perfectionism all the time like we currently seem to do.\n\n I can understand Ian. Working for some time on a feature,\n posting a patch and watching it going down in the flames of\n discussion is frustrating. Even more frustrating is it if you\n asked for discussion before and nobody responded with more\n than a *shrug* - then when you've done the work the\n discussion starts.\n\n At least we know by now that we want to have that feature.\n And we know that we can't do it perfect now. Since we know\n that doing a halfhearted tracking could severely break other\n things, it's out of discussion. So the question we have to\n answer is if we accept the %TYPE syntax with immediate type\n resolution and delay the real fix until the FAQ's force\n someone to do it. 
It doesn't hurt as long as you don't use it\n AND expect it to do more than that. So a NOTICE at the\n actual usage, telling that x%TYPE for y got resolved to\n basetype z and will currently NOT follow later changes to x\n should do it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 31 May 2001 10:15:18 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Hi,\n\nI've been following this discussion with interest. As a member of the\nOpenACS community I'd like to see the %TYPE feature in PG ASAP. I also\nunderstand the reluctance of some of the PG team members in implementing\nsomething that is not anywhere near 'perfect'.\n\nI like Jans' (and Ian?) suggestion of ONLY doing resolution at create\ntime, as a full 'tracking-the-current-definition' seems to too tough for\nnow. I think it will be very acceptable to a lot of us out there to have\nto drop and re-create our own dependancies. A lot of times, the changes\nmay not require recoding of the function (except for languages like C).\nFor OpenACS, schema changes on production machines will mostly be managed\nby upgrade sql scripts. Although not 'perfect', having to drop and\nrecreate functions during upgrade are only minor problems.\n\n> AND expect it to do more than that. 
So a NOTICE at the\n> actual usage, telling that x%TYPE for y got resolved to\n> basetype z and will currently NOT follow later changes to x\n> should do it.\n\nSo if you could implement it like that, we will be VERY happy.\n\nRegards,\nPascal Scheffers\n\n\n", "msg_date": "Fri, 1 Jun 2001 08:15:39 +0200 (CEST)", "msg_from": "Pascal Scheffers <pascal@scheffers.net>", "msg_from_op": false, "msg_subject": "Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "I've been thinking about this, and I think the smartest way to implement\n%TYPE would be to have it as a special-case data type. So, the C\nrepresentation of it would be something like this:\n\nstruct PercentType {\n\tint datatype;\n\tvoid *data;\n};\n\nNote: I made the datatype field an int, but that may/may not be the\ncorrect datatype to use there.\n\nAnd basically, postgres can resolve at runtime what it should point to,\nand the code should have to deal with it, either via casting, or throwing\nan exception if it's unacceptable.\n\nOf course, there'd be a small overhead within the function, but it's a\nsmall price to pay for a robust implementation.\n\nAs for operator overloading, a decision must be made whether you search\nfor a more specific function first, or for a matching %TYPE.\n\nOf course, this may be too many special cases to be coded cleanly...\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Fri, 1 Jun 2001 23:11:13 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Re: Support for %TYPE in CREATE FUNCTION" }, { "msg_contents": "Michael Samuel <michael@miknet.net> writes:\n\n> I've been thinking about this, and I think the smartest way to implement\n> %TYPE would be to have it as a special-case data type. 
So, the C\n> representation of it would be something like this:\n> \n> struct PercentType {\n> \tint datatype;\n> \tvoid *data;\n> };\n> \n> Note: I made the datatype field an int, but that may/may not be the\n> correct datatype to use there.\n> \n> And basically, postgres can resolve at runtime what it should point to,\n> and the code should have to deal with it, either via casting, or throwing\n> an exception if it's unacceptable.\n> \n> Of course, there'd be a small overhead within the function, but it's a\n> small price to pay for a robust implementation.\n> \n> As for operator overloading, a decision must be made whether you search\n> for a more specific function first, or for a matching %TYPE.\n\nFunctions are stored in the pg_proc table. That table has 16 fields\nwhich hold the OIDs of the types of the arguments. When searching for\na function, the types of the parameters are used to search the table.\nWe would have to figure out a way to store the %TYPE field instead.\n\nPerhaps one approach would be to have a separate table which just held\n%TYPE entries. Then pg_proc could hold the OID of the row in that\ntable. The parser code which hooks up function calls with function\ndefinitions would have to recognize this case and convert the %TYPE\ninto the real type at that time. This would only be done if there was\nno exact match, so there would only be a performance penalty when\n%TYPE was used.\n\nThe code could be written such that a function which specified the\nexact type would always be chosen before a function which used %TYPE.\nHowever, a function which used %TYPE to specify the exact type would\nbe chosen before a function which specified a coerceable type.\n\nProbably several other places would have to be prepared to convert an\nentry in the new %TYPE table to an entry in the pg_type field. 
But\nthat could be encapsulated in a function.\n\nWhether this is of any interest or not, I don't know.\n\nIan\n", "msg_date": "01 Jun 2001 10:13:50 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": true, "msg_subject": "Re: Re: Support for %TYPE in CREATE FUNCTION" } ]
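Jan's closing suggestion in the thread above (resolve x%TYPE to its base type once, at CREATE FUNCTION time, and raise a NOTICE that later changes to the column will not be tracked) can be sketched in a few lines. This is a toy model: the in-memory catalog and the `create_function` helper are hypothetical stand-ins, not PostgreSQL's actual pg_proc machinery.

```python
import re

# Hypothetical in-memory "catalog": table -> {column: type}. This mirrors
# the immediate-resolution semantics discussed in the thread, nothing more.
catalog = {"emp": {"empno": "int4", "ename": "varchar"}}

def resolve_arg_type(decl, notices):
    """Resolve 'table.column%TYPE' to the column's current base type."""
    m = re.fullmatch(r"(\w+)\.(\w+)%TYPE", decl, flags=re.IGNORECASE)
    if not m:
        return decl  # already a plain type name, pass it through
    table, column = m.group(1), m.group(2)
    base = catalog[table][column]
    notices.append(
        f"NOTICE: {table}.{column}%TYPE resolved to {base}; "
        "later changes to the column will NOT be tracked"
    )
    return base

def create_function(name, arg_decls):
    """Record a pg_proc-like entry with %TYPE resolved at creation time."""
    notices = []
    resolved = [resolve_arg_type(d, notices) for d in arg_decls]
    return {"proname": name, "proargtypes": resolved}, notices

proc, notices = create_function("hire", ["emp.ename%TYPE", "int4"])
print(proc["proargtypes"])   # ['varchar', 'int4']

# Altering the column afterwards does not change the stored entry,
# which is exactly the limitation the NOTICE warns about:
catalog["emp"]["ename"] = "name"
print(proc["proargtypes"])   # ['varchar', 'int4']
```

The point of the sketch is the trade-off the thread settles on: the feature is usable immediately, and the NOTICE makes the lack of dependency tracking explicit instead of silent.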
[ { "msg_contents": "What would be nice, and I don't know how it would be done or what the \nsyntax would be, would be a feature that allows PostgreSQL to skip \nnot only the parsing stage, but the planning stage as well. Then, \nwhen the data has changed dramatically enough to warrant it, as you \npoint out, a command can be issued to 'refresh' the query plan. My \n15-way join has expanded to a 19-way join and is still instantaneous, \nalbeit on a very small set of data. Before 7.1, the query would \nsimply have taken far too long, and I would have had to denormalize \nthe database for performance purposes. With the explicit join syntax, \nit allows me to design the database 'the right way'. I basically used \nEXPLAIN SELECT... to determine the explicit join order, so as the \ndata changes, its something I'll have to do on occassion to ensure \ngood performance, but at least its now possible. :-)\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tThomas Lockhart [SMTP:lockhart@alumni.caltech.edu]\nSent:\tFriday, April 27, 2001 9:49 PM\nTo:\tmascarm@mascari.com; 'Tom Lane'\nCc:\t'pgsql-hackers@postgresql.org'\nSubject:\t[HACKERS] Re: Any optimizations to the join code in 7.1?\n\n> ... 7.1 out of the box took only 2 seconds! I was amazed\n> and shocked at this damned impressive improvement in planning\n> speed....until I actually used the explicit JOIN syntax described \nin\n> 11.2. Instanteous results! Instantaneous.....\n\nBut it is possible, under many circumstances, for query optimization \nto\nbe a benefit for a many-table query. The docs indicate that explicit\njoin syntax bypasses that, even for inner joins, so you may find that\nthis syntax is a net loss in performance depending on the query and \nyour\nchoice of table order.\n\nPresumably we will be interested in making these two forms of inner \njoin\nequivalent in behavior in a future release. 
Tom, what are the\nimpediments we might encounter in doing this?\n\n - Thomas\n\n---------------------------(end of broadcast)--------------------- \n------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Sat, 28 Apr 2001 02:08:47 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Re: Any optimizations to the join code in 7.1?" } ]
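The join thread above turns on why a 15-way join took so long to plan: exhaustive join-order search is combinatorial, while the explicit JOIN syntax in 7.1 pins a single order. The standard counting formulas (illustrative arithmetic only, not PostgreSQL's planner code) show how fast the search space grows:

```python
from math import factorial

def left_deep_orders(n):
    """Number of left-deep join orders for n relations: n!."""
    return factorial(n)

def bushy_trees(n):
    """Number of bushy join trees on n labeled relations: (2n-2)!/(n-1)!."""
    return factorial(2 * n - 2) // factorial(n - 1)

for n in (3, 6, 10, 15):
    print(f"{n:2d} relations: {left_deep_orders(n):>15,} left-deep, "
          f"{bushy_trees(n):,} bushy")
```

With 15 relations there are over 10**12 left-deep orders alone, which is why a planner that searches the space stalls, and why writing the joins explicitly (one fixed order, zero search) made Mike's query plan instantaneously at the cost of the author having to pick a good order by hand.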
[ { "msg_contents": "Hi,\n\nI'm playing with wal parameters and found that wal_sync_method =\nopen_sync enormously enhance the performance on my machine. Without it\n(using default fsync) I got only 90 tps at the best using pgbench (-s\n2). However if I set wal_sync_method = open_sync, I get ~200 tps. I\nhave checked PostgreSQL uses O_SYNC flag when it opens WAL log files\nusing strace. Can anybody tell me why? I am afraid this is just a\ndream:-) Linux kernel 2.2.17.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 28 Apr 2001 21:15:28 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "WAL performance with wal_sync_method = open_sync" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I'm playing with wal parameters and found that wal_sync_method =\n> open_sync enormously enhance the performance on my machine. Without it\n> (using default fsync) I got only 90 tps at the best using pgbench (-s\n> 2). However if I set wal_sync_method = open_sync, I get ~200 tps.\n\nWouldn't surprise me. The performance of the fsync method sucks on\nmy system (HPUX 10.20) as well. AFAICT HPUX and Linux 2.2.x are not\nvery smart about fsync on large files --- they scan all the kernel\ndisk buffers for the target file to find the dirty ones. O_SYNC\navoids this scanning.\n\nI hear Linux 2.4.* is smarter about doing fsync, so it probably has\nfsync as fast or faster than O_SYNC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 01:58:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL performance with wal_sync_method = open_sync " } ]
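Tatsuo's numbers (90 tps with fsync vs. ~200 tps with open_sync on Linux 2.2) come down to the two styles of synchronous write Tom describes. A minimal sketch of both, assuming a POSIX system where `os.O_SYNC` is available; the timings it prints are only illustrative and say nothing about a real WAL workload:

```python
import os
import tempfile
import time

payload = b"x" * 8192  # one 8 kB page per "commit", for illustration
tmpdir = tempfile.mkdtemp()

def timed(extra_flags, use_fsync, n=50):
    """Write n synchronous records using the given open(2) flags."""
    path = os.path.join(tmpdir, "wal")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | extra_flags, 0o600)
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, payload)
        if use_fsync:
            os.fsync(fd)  # kernel must locate and flush this file's dirty buffers
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(path)
    return elapsed

# Style 1: wal_sync_method = fsync (write, then fsync every record)
t_fsync = timed(0, use_fsync=True)

# Style 2: wal_sync_method = open_sync (each write returns only once on disk)
t_osync = timed(os.O_SYNC, use_fsync=False)

print(f"fsync: {t_fsync:.3f}s  open_sync: {t_osync:.3f}s")
os.rmdir(tmpdir)
```

On a kernel whose fsync scans all of a file's buffers (as Tom says HP-UX 10.20 and Linux 2.2 did), style 1 pays that scan on every commit, while O_SYNC skips it; on kernels with a smarter fsync the gap largely disappears, which matches the remark about Linux 2.4.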
[ { "msg_contents": "Slashdot just announced that SAP has released the souce for SAP DB under\nGPL. Not sure what this mean, or what people think, but I thought the\nhackers list might want to know.\n\nhttp://slashdot.org/developers/01/04/28/016220.shtml\n\nhttp://www.sap.com/solutions/technology/sapdb/develop/dev_sources.htm\n\nMatt\n", "msg_date": "Sat, 28 Apr 2001 12:17:58 -0500", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "SAPDB Open Souce" }, { "msg_contents": "Hi guys,\n\nI've used the open source SAPDB and the performance is pretty damned\nimpressive. However, 'open source' in application to it is somewhat\ndeceptive, since you have to make it with SAP's proprietary build\ntools/environment.\n\nIn my opinion, however, it would be worth closely auditing SAP DB to see\nwhat postgres can learn.\n\nGavin\n\nOn Sat, 28 Apr 2001, Matthew wrote:\n\n> Slashdot just announced that SAP has released the souce for SAP DB under\n> GPL. Not sure what this mean, or what people think, but I thought the\n> hackers list might want to know.\n> \n> http://slashdot.org/developers/01/04/28/016220.shtml\n> \n> http://www.sap.com/solutions/technology/sapdb/develop/dev_sources.htm\n> \n> Matt\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n", "msg_date": "Sun, 29 Apr 2001 12:17:16 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "> Hi guys,\n> \n> I've used the open source SAPDB and the performance is pretty damned\n> impressive. However, 'open source' in application to it is somewhat\n> deceptive, since you have to make it with SAP's proprietary build\n> tools/environment.\n> \n> In my opinion, however, it would be worth closely auditing SAP DB to see\n> what postgres can learn.\n\nI downloaded it. 
The directories are two characters in length, the\nfiles are numbers, and it is a mixture of C++, Python, and Pascal. Need\nI say more. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 28 Apr 2001 22:33:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. :-)\n\n1.) What is wrong with a mixture of C++, Python and Pascal? Nothing IMHO.\n\n2.) The directory structure is probably the consequence of the development\ntools (produced automatically). Such a structure can have advantages, too.\n\n3.) I left Germany 6 years ago. I don't know what happened in the meantime,\nbut at that time (and the past 10 years before that) virtually any major\nbusiness and the majority of large hospitals were running on SAP. AFAIK,\nsimilar in other european countries. Complaints about the horrendous price\nstructure (par with Oracle) - yes. Complaints about crappy user interfaces -\nyes. Complaints about arrogant support team - yes. But *no* complaints\nregarding data integrity, robustness, and almost no complaints regarding\nperformance.\n\nTherefore, I think it should not be disregarded too quickly. 
There is\ncertainly something to learn from it by studying it; that would be probably\nmore productive than using the same time just thinking about own design.\n(Maybe start looking at their developer manuals, which are *really* helpful\nif you want to develop something with SAP).\n\nI can't help it (as much as I admire Postgres, and as much as I like using\nit), but I always perceive a certain air of arrogance blowing from this\nlist - a feeling I don't get from other open source projects. I might be\nwrong here.\n\nRegards,\nHorst\n\n", "msg_date": "Sun, 29 Apr 2001 15:10:52 +1000", "msg_from": "\"Horst Herb\" <hherb@malleenet.net.au>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. :-)\n\nOK, I'll bite: you need to say more.\n\nWhat is it like at handling transactions? What sort of full-text indexing does it\nhave? Can I have transactions within transactions? What sort of tools are\navailable for managing database extents? How compliant is it with the various SQL\nstandards? How does performance compare with PostgreSQL and others? Does it have\nan extensible type system? Does it have an 'internal' language to compare with\nPL/SQL or PL/PGSQL? How well does it scale on SMP systems? Can I perform a single\nquery across multiple databases? What performance monitoring tools does it come\nwith?\n\nHell, in a statement like that you don't even indicate if those directories are\nso-named within the source code, or in an installed data environment. Whichever\nenvironment they do apply to, however, I'm sure there are good systems in place for\ndealing with them. 
And of course C++, Python and Pascal are all languages with\nplenty of proponents, so there's no problem with those.\n\nYour statement is so light on utility that it persuades me to download it for myself\nand try it - but that is presumably exactly the effect you were after, wasn't it?\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(21)635-694, Fax: +64(4)499-5596, Office: +64(4)499-2267xtn709\n", "msg_date": "Sun, 29 Apr 2001 19:44:27 +1200", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. :-)\n\nYes!\n\nYou, or better someone who knows SAP DB could tell if it's \"probably the most \ncomplete free database system available right now, with much more features \nthan interbase, mysql or postgresql\" as this guy Hemos says on Slashdot.\n\nI remember the same being said about Interbase when it was OSS'ed, but I \nstill stick to PostgreSQL.\n\nBut knowledge doesn't hurt, and as someone pointed out, you can't tell the \nquality of the software from the names of your source code.\nBut it sure makes it a lot more difficult to understand what's going on, I'll \ngrant you that :-)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Sun, 29 Apr 2001 10:14:18 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "> Bruce Momjian wrote: > > I downloaded it. 
The directories are\n> two characters in length, the > files are numbers, and it is a\n> mixture of C++, Python, and Pascal. Need > I say more. :-)\n> \n> OK, I'll bite: you need to say more.\n> \n> What is it like at handling transactions? What sort of full-text\n> indexing does it have? Can I have transactions within transactions?\n> What sort of tools are available for managing database extents?\n> How compliant is it with the various SQL standards? How does\n> performance compare with PostgreSQL and others? Does it have\n> an extensible type system? Does it have an 'internal' language\n> to compare with PL/SQL or PL/PGSQL? How well does it scale on\n> SMP systems? Can I perform a single query across multiple\n> databases? What performance monitoring tools does it come with?\n> \n> Hell, in a statement like that you don't even indicate if those\n> directories are so-named within the source code, or in an\n> installed data environment. Whichever environment they do apply\n> to, however, I'm sure there are good systems in place for dealing\n> with them. And of course C++, Python and Pascal are all languages\n> with plenty of proponents, so there's no problem with those.\n> \n> Your statement is so light on utility that it persuades me to\n> download it for myself and try it - but that is presumably\n> exactly the effect you were after, wasn't it?\n\nOK, basically, I couldn't figure out any of it. I am sure there are\nuseful things in there, but I can't figure out how to find any of them. \nHopefully others will be better at it than I am.\n\nAnd yes, it would be good for people to look over the code and see if\nthey can find valuable things in it.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 12:30:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "> You, or better someone who knows SAP DB could tell if it's \"probably the most \n> complete free database system available right now, with much more features \n> than interbase, mysql or postgresql\" as this guy Hemos says on Slashdot.\n> \n> I remember the same being said about Interbase when it was OSS'ed, but I \n> still stick to PostgreSQL.\n> \n> But knowledge doesn't hurt, and as someone pointed out, you can't tell the \n> quality of the software from the names of your source code.\n> But it sure makes it a lot more difficult to understand what's going on, I'll \n> grant you that :-)\n\nThat was my point. It is very hard to make sense of the code, at least\nfor me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 12:31:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "\n\nBruce Momjian schrieb:\n> \n> > Hi guys,\n> >\n> > I've used the open source SAPDB and the performance is pretty damned\n> > impressive. However, 'open source' in application to it is somewhat\n> > deceptive, since you have to make it with SAP's proprietary build\n> > tools/environment.\n> >\n> > In my opinion, however, it would be worth closely auditing SAP DB to see\n> > what postgres can learn.\n> \n> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. 
:-)\n> \n\n Well, I used to use PostgreSQL and Adabas / SAP-DB and after that it's\npretty much clear now, that there're not so many arguments for \nPostgreSQL. \n\n\n Marten\n", "msg_date": "Sun, 29 Apr 2001 20:11:07 +0100", "msg_from": "M.Feldtmann@t-online.de (Marten Feldtmann)", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "Horst Herb wrote:\n> > I downloaded it. The directories are two characters in length, the\n> > files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> > I say more. :-)\n>\n> 1.) What is wrong with a mixture of C++, Python and Pascal? Nothing IMHO.\n>\n> 2.) The directory structure is probably the consequence of the development\n> tools (produced automatically). Such a structure can have advantages, too.\n>\n> 3.) I left Germany 6 years ago. I don't know what happened in the meantime,\n> but at that time (and the past 10 years before that) virtually any major\n> business and the majority of large hospitals were running on SAP. AFAIK,\n> similar in other european countries. Complaints about the horrendous price\n> structure (par with Oracle) - yes. Complaints about crappy user interfaces -\n> yes. Complaints about arrogant support team - yes. But *no* complaints\n> regarding data integrity, robustness, and almost no complaints regarding\n> performance.\n\n Don't mix up SAP's application (R/3 today and R/2 before)\n with SAP DB. Most of the customers I've seen (and I've worked\n as an SAP R/3 base-consultant for the past 10 years) ran SAP\n R/3 on top of Oracle. So that's where the integrity and\n robustness came from. And I've got may complaints WRT\n performance - but fortunately our projects where usually\n located in the multi-$M range, so simply throwing bucks into\n iron worked.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 30 Apr 2001 08:41:53 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: SAPDB Open Souce" } ]
[ { "msg_contents": "Hi all,\n\nI'm rewriting my OLD crypt (Thanks to Henrique) C_fonction to version 0\nforms :\n\nI have this C file compiled OK to a shared library:\n\n/*\n*\n* Henrique Pantarotto (scanner@cepa.com.br)\n* Funcao para encriptar senhas (Function to encrypt passwords)\n* September 1999\n*\n* PS: Note that all crypted passwords are created with salt \"HP\" (my name\n* initials..) You can change that, or if you know C, you can do in a way\n* that it will pick two random characters (the way it should really be).\n*\n*/\n\n#include <strings.h>\n#include <unistd.h>\n\n#include <postgres.h>\n\ntext *post_crypt(text *user)\n{\n text *password;\n char * crypt();\n long now=time((long *) 0);\n int len;\n char salt[7]=\"PY\", *crypted;\n /*strcpy(salt,l64a(now));\n salt[3]='\\0'; */\n crypted=crypt(VARDATA(user),salt);\n len=strlen(crypted);\n password= palloc((int32) 13 + VARHDRSZ);\n VARATT_SIZEP(password)= (int32) VARHDRSZ + 13;\n memcpy(VARDATA(password),crypted,len);\n return password;\n}\n\ntext *sql_crypt(text *user,text *salt)\n{\n text *password;\n char * crypt(), *crypted;\n int len;\n char s[3];\n strncpy(s,VARDATA(salt),2);\n s[2]='\\0';\n crypted=crypt(VARDATA(user),s);\n len=strlen(crypted);\n password=palloc((int32) 13 + VARHDRSZ);\n VARATT_SIZEP(password)=(int32) 13 + VARHDRSZ;\n memcpy(VARDATA(password),crypted,len);\n return password;\n}\n\n\n/*\nCompile using something like this:\n\ngcc -I/home/postgres/postgresql-6.5.1/src/include -I/home/postgres/postgresql-6.5.1/src/backend -O2 -Wall -Wmissing-prototypes -fpic -I/home/postgres/postgresql-6.5.1/src/include -c -o encrypt.o encrypt.c\ngcc -shared -o encrypt.so encrypt.o\n\nAnd last, you create the trigger in PostgreSQL using this:\n\ncreate function encrypt(text)\nreturns text as '/usr/local/pgsql/lib/encrypt.so' language 'c';\n\nIf everything is okay, you'll probably have: select encrypt('secret') working\nand showing:\n\nencrypt\n------------\nHPK1Jt2NX21G.\n(1 row)\n*/\n\nI have defined 
to SQL function:\n\nCREATE FUNCTION post_crypt(text) RETURNS text AS 'xxxx/encrypt.so'\nCREATE FUNCTION sql_cypt(text,text) RETURNS text AS 'xxxx/encrypt.so';\n\nWHY on earth does\n\nSELECT post_crypt('test'),sql_crypt('test','PY') \nNOT GIVE the same result???\n\nPlease help, \n\nThis is most urgent (My customer can't use this function anymore); it\nworked OK with 7.0.3!!\n\nRegards,\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sat, 28 Apr 2001 23:51:45 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Struggling with c functions" }, { "msg_contents": "You actually almost have it right.\n\nYou are passing VARDATA(user) to crypt, this is wrong.\n\nYou must do something like this:\n\nint ulen = VARSIZE(user)-VARHDRSZ;\nchar utmp[ulen+]; // This works in newer GCC, cool.\nmemcpy(utmp,VARDATA(user), len);\nutmp[ulen]=0;\ncrypted=crypt(utmp,salt);\n\nStrings are not gurenteed to be NULL teminated.\n\n\nOlivier PRENANT wrote:\n> \n> Hi all,\n> \n> I'm rewriting my OLD crypt (Thanks to Henrique) C_fonction to version 0\n> forms :\n> \n> I have this C file compiled OK to a shared library:\n> \n> /*\n> *\n> * Henrique Pantarotto (scanner@cepa.com.br)\n> * Funcao para encriptar senhas (Function to encrypt passwords)\n> * September 1999\n> *\n> * PS: Note that all crypted passwords are created with salt \"HP\" (my name\n> * initials..) 
You can change that, or if you know C, you can do in a way\n> * that it will pick two random characters (the way it should really be).\n> *\n> */\n> \n> #include <strings.h>\n> #include <unistd.h>\n> \n> #include <postgres.h>\n> \n> text *post_crypt(text *user)\n> {\n> text *password;\n> char * crypt();\n> long now=time((long *) 0);\n> int len;\n> char salt[7]=\"PY\", *crypted;\n> /*strcpy(salt,l64a(now));\n> salt[3]='\\0'; */\n> crypted=crypt(VARDATA(user),salt);\n> len=strlen(crypted);\n> password= palloc((int32) 13 + VARHDRSZ);\n> VARATT_SIZEP(password)= (int32) VARHDRSZ + 13;\n> memcpy(VARDATA(password),crypted,len);\n> return password;\n> }\n> \n> text *sql_crypt(text *user,text *salt)\n> {\n> text *password;\n> char * crypt(), *crypted;\n> int len;\n> char s[3];\n> strncpy(s,VARDATA(salt),2);\n> s[2]='\\0';\n> crypted=crypt(VARDATA(user),s);\n> len=strlen(crypted);\n> password=palloc((int32) 13 + VARHDRSZ);\n> VARATT_SIZEP(password)=(int32) 13 + VARHDRSZ;\n> memcpy(VARDATA(password),crypted,len);\n> return password;\n> }\n> \n> /*\n> Compile using something like this:\n> \n> gcc -I/home/postgres/postgresql-6.5.1/src/include -I/home/postgres/postgresql-6.5.1/src/backend -O2 -Wall -Wmissing-prototypes -fpic -I/home/postgres/postgresql-6.5.1/src/include -c -o encrypt.o encrypt.c\n> gcc -shared -o encrypt.so encrypt.o\n> \n> And last, you create the trigger in PostgreSQL using this:\n> \n> create function encrypt(text)\n> returns text as '/usr/local/pgsql/lib/encrypt.so' language 'c';\n> \n> If everything is okay, you'll probably have: select encrypt('secret') working\n> and showing:\n> \n> encrypt\n> ------------\n> HPK1Jt2NX21G.\n> (1 row)\n> */\n> \n> I have defined to SQL function:\n> \n> CREATE FUNCTION post_crypt(text) RETURNS text AS 'xxxx/encrypt.so'\n> CREATE FUNCTION sql_cypt(text,text) RETURNS text AS 'xxxx/encrypt.so';\n> \n> WHY on earth does\n> \n> SELECT post_crypt('test'),sql_crypt('test','PY')\n> NOT GIVE the same result???\n> \n> Please help,\n> 
\n> This is most urgent (My customer can't use this function anymore); it\n> worked OK with 7.0.3!!\n> \n> Regards,\n> --\n> Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: ohp@pyrenet.fr\n> ------------------------------------------------------------------------------\n> Make your life a dream, make your dream a reality. (St Exupery)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 28 Apr 2001 21:22:51 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Struggling with c functions" }, { "msg_contents": "Many thanks to you!!!\n\nIt now works (did'nt realize that strings where not null\nterminated) stupid me!!!\n\nRegards,\nOn Sat, 28 Apr 2001, mlw wrote:\n\n> You actually almost have it right.\n> \n> You are passing VARDATA(user) to crypt, this is wrong.\n> \n> You must do something like this:\n> \n> int ulen = VARSIZE(user)-VARHDRSZ;\n> char utmp[ulen+]; // This works in newer GCC, cool.\n> memcpy(utmp,VARDATA(user), len);\n> utmp[ulen]=0;\n> crypted=crypt(utmp,salt);\n> \n> Strings are not gurenteed to be NULL teminated.\n> \n> \n> Olivier PRENANT wrote:\n> > \n> > Hi all,\n> > \n> > I'm rewriting my OLD crypt (Thanks to Henrique) C_fonction to version 0\n> > forms :\n> > \n> > I have this C file compiled OK to a shared library:\n> > \n> > /*\n> > *\n> > * Henrique Pantarotto (scanner@cepa.com.br)\n> > * Funcao para encriptar senhas (Function to encrypt passwords)\n> > * September 1999\n> > *\n> > * PS: Note that all crypted passwords are created with salt \"HP\" (my name\n> > * initials..) 
You can change that, or if you know C, you can do in a way\n> > * that it will pick two random characters (the way it should really be).\n> > *\n> > */\n> > \n> > #include <strings.h>\n> > #include <unistd.h>\n> > \n> > #include <postgres.h>\n> > \n> > text *post_crypt(text *user)\n> > {\n> > text *password;\n> > char * crypt();\n> > long now=time((long *) 0);\n> > int len;\n> > char salt[7]=\"PY\", *crypted;\n> > /*strcpy(salt,l64a(now));\n> > salt[3]='\\0'; */\n> > crypted=crypt(VARDATA(user),salt);\n> > len=strlen(crypted);\n> > password= palloc((int32) 13 + VARHDRSZ);\n> > VARATT_SIZEP(password)= (int32) VARHDRSZ + 13;\n> > memcpy(VARDATA(password),crypted,len);\n> > return password;\n> > }\n> > \n> > text *sql_crypt(text *user,text *salt)\n> > {\n> > text *password;\n> > char * crypt(), *crypted;\n> > int len;\n> > char s[3];\n> > strncpy(s,VARDATA(salt),2);\n> > s[2]='\\0';\n> > crypted=crypt(VARDATA(user),s);\n> > len=strlen(crypted);\n> > password=palloc((int32) 13 + VARHDRSZ);\n> > VARATT_SIZEP(password)=(int32) 13 + VARHDRSZ;\n> > memcpy(VARDATA(password),crypted,len);\n> > return password;\n> > }\n> > \n> > /*\n> > Compile using something like this:\n> > \n> > gcc -I/home/postgres/postgresql-6.5.1/src/include -I/home/postgres/postgresql-6.5.1/src/backend -O2 -Wall -Wmissing-prototypes -fpic -I/home/postgres/postgresql-6.5.1/src/include -c -o encrypt.o encrypt.c\n> > gcc -shared -o encrypt.so encrypt.o\n> > \n> > And last, you create the trigger in PostgreSQL using this:\n> > \n> > create function encrypt(text)\n> > returns text as '/usr/local/pgsql/lib/encrypt.so' language 'c';\n> > \n> > If everything is okay, you'll probably have: select encrypt('secret') working\n> > and showing:\n> > \n> > encrypt\n> > ------------\n> > HPK1Jt2NX21G.\n> > (1 row)\n> > */\n> > \n> > I have defined to SQL function:\n> > \n> > CREATE FUNCTION post_crypt(text) RETURNS text AS 'xxxx/encrypt.so'\n> > CREATE FUNCTION sql_cypt(text,text) RETURNS text AS 'xxxx/encrypt.so';\n> 
> \n> > WHY on earth does\n> > \n> > SELECT post_crypt('test'),sql_crypt('test','PY')\n> > NOT GIVE the same result???\n> > \n> > Please help,\n> > \n> > This is most urgent (My customer can't use this function anymore); it\n> > worked OK with 7.0.3!!\n> > \n> > Regards,\n> > --\n> > Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: ohp@pyrenet.fr\n> > ------------------------------------------------------------------------------\n> > Make your life a dream, make your dream a reality. (St Exupery)\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://www.postgresql.org/search.mpl\n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Sun, 29 Apr 2001 12:09:29 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Struggling with c functions" } ]
[ { "msg_contents": "Where do get a listing of what PQftype() can return to me?\n(that is what type the field/col has, need a list of Oid's i believe)\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n", "msg_date": "Sun, 29 Apr 2001 00:13:20 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "PQftype()" }, { "msg_contents": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net> writes:\n> Where do get a listing of what PQftype() can return to me?\n\nselect oid, typname from pg_type\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 00:31:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQftype() " }, { "msg_contents": "From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> \"Magnus Naeslund\\(f\\)\" <mag@fbab.net> writes:\n> > Where do get a listing of what PQftype() can return to me?\n> \n> select oid, typname from pg_type\n> \n> regards, tom lane\n\nDoes these change often?\nOr could i do like the ODBC driver, autogenerate a .h out of that table.\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n", "msg_date": "Tue, 1 May 2001 00:06:03 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: PQftype() " }, { "msg_contents": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net> writes:\n> Where do get a listing of what PQftype() can return to me?\n>> \n>> select oid, typname from pg_type\n\n> Does these change often?\n\nThe system type OIDs are stable. 
User-defined types would probably have\na new OID after a dump and reload.\n\n> Or could i do like the ODBC driver, autogenerate a .h out of that table.\n\nI would not recommend relying on compiled-in OID knowledge for any types\nother than the system-defined datatypes. If you expect to have to deal\nwith user-defined types, it's best to cache the results of pg_type\nlookups at the client end. You need not worry about OIDs changing\nduring a single client connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 18:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQftype() " }, { "msg_contents": "From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n[snip]\n>\n> The system type OIDs are stable. User-defined types would probably have\n> a new OID after a dump and reload.\n>\n> > Or could i do like the ODBC driver, autogenerate a .h out of that table.\n>\n> I would not recommend relying on compiled-in OID knowledge for any types\n> other than the system-defined datatypes. If you expect to have to deal\n> with user-defined types, it's best to cache the results of pg_type\n> lookups at the client end. You need not worry about OIDs changing\n> during a single client connection.\n>\n> regards, tom lane\n>\n\nOk, then i can use static thing for my application (for now atleast).\nThanks..\n\nMagnus\n\n", "msg_date": "Tue, 1 May 2001 00:13:49 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: PQftype() " } ]
[ { "msg_contents": "Have attempted to install DBI and DBD, but DBD �make test� fails.\nBelow is my complete procedure , starting from:\n\nAccessing PostgreSQL from the command line, (successful)\n creating a database, populating. (successful)\nExpanding DBI and DBD. (successful)\nInstalling DBI (seemingly successful)\nSet environments. (successful)\nInstall DBD. (error occur on �test�)\n\nSo now I�m hoping someone can help me out....\n\nI think I have included all the version information throughout the\nprocedure. \nif not please let me know.\n\n\nAny assistance would be greatly appreciated.\n\n\n\n\n#############LOG OF TELNET SESSION##############\n\nLinux/PPC 2000 Q4\nPackages current to December 25 2000\nKernel 2.2.18-4hpmac on a ppc\nlogin: ausit\nPassword:\nLast login: Sun Apr 29 06:15:07 from 192.168.168.50\n[ausit@pivot ausit]$ su\nPassword:\n\n#############ACCESSING POSTGRES##############\n\n[root@pivot ausit]# su postgres\nbash-2.04$ psql -V\npsql (PostgreSQL) 7.0.3\ncontains readline, history, multibyte support\nPortions Copyright (c) 1996-2000, PostgreSQL, Inc\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\nbash-2.04$ createdb post_demo\nCREATE DATABASE\nbash-2.04$ psql post_demo\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npost_demo=# \\d\nNo relations found.\npost_demo=# create table test_class (yeh int4, nah text);\nCREATE\npost_demo=# insert into test_class (yeh, nah) values (123,'one_two_three');\nINSERT 19082 1\npost_demo=# select * from test_class ;\n yeh | nah\n-----+---------------\n 123 | one_two_three\n(1 row)\n\npost_demo=# \\q\nbash-2.04$ exit\nexit\n[root@pivot ausit]# exit\nexit\n\n#############EXPANDING DBI AND DBD##############\n\n[ausit@pivot ausit]$ ls\nDesktop src\n[ausit@pivot ausit]$ cd src\n[ausit@pivot src]$ ll\ntotal 3208\n-rwxr-xr-x 1 ausit ausit 174080 Apr 29 02:26 DBD-Pg-0.98.tar\n-rwxr-xr-x 1 ausit ausit 632320 Apr 29 02:26 DBI-1.15.tar\n-rwxr-xr-x 1 ausit ausit 456331 Apr 29 02:26\napache-1.3.14-2.6.2.ppc.rpm\n-rwxr-xr-x 1 ausit ausit 110724 Apr 29 02:26\napache-devel-1.3.14-2.6.2.ppc.r\n-rwxr-xr-x 1 ausit ausit 473057 Apr 29 02:26\napache-manual-1.3.14-2.6.2.ppc.\n-rwxr-xr-x 1 ausit ausit 1083848 Apr 29 02:26\napache-ssl-1.3.12_1.40-1.ppc.rp\n-rwxr-xr-x 1 ausit ausit 18089 Apr 29 02:26\nperl-Mail-Sendmail-0.77-6.ppc.r\ndrwxr-xr-x 4 ausit ausit 4096 Apr 29 08:07 pg\n-rw-r--r-- 1 ausit ausit 284204 Apr 29 02:52\ntar-1.13.17-5.ppc.rpm\n[ausit@pivot src]$ tar xvf DBD-Pg-0.98.tar\nDBD-Pg-0.98/\nDBD-Pg-0.98/eg/\nDBD-Pg-0.98/eg/ApacheDBI.pl\nDBD-Pg-0.98/Changes\nDBD-Pg-0.98/MANIFEST\nDBD-Pg-0.98/Makefile.PL\nDBD-Pg-0.98/Pg.h\nDBD-Pg-0.98/Pg.pm\nDBD-Pg-0.98/Pg.xs\nDBD-Pg-0.98/README\nDBD-Pg-0.98/README.win32\nDBD-Pg-0.98/dbd-pg.pod\nDBD-Pg-0.98/dbdimp.c\nDBD-Pg-0.98/dbdimp.h\nDBD-Pg-0.98/test.pl\n[ausit@pivot src]$ tar xvf 
DBI-1.15.tar\nDBI-1.15/\nDBI-1.15/lib/\nDBI-1.15/lib/DBD/\nDBI-1.15/lib/DBD/ExampleP.pm\nDBI-1.15/lib/DBD/Proxy.pm\nDBI-1.15/lib/DBD/NullP.pm\nDBI-1.15/lib/DBD/Sponge.pm\nDBI-1.15/lib/DBD/Multiplex.pm\nDBI-1.15/lib/DBD/ADO.pm\nDBI-1.15/lib/DBI/\nDBI-1.15/lib/DBI/Shell.pm\nDBI-1.15/lib/DBI/FAQ.pm\nDBI-1.15/lib/DBI/ProxyServer.pm\nDBI-1.15/lib/DBI/DBD.pm\nDBI-1.15/lib/DBI/W32ODBC.pm\nDBI-1.15/lib/DBI/Format.pm\nDBI-1.15/lib/Bundle/\nDBI-1.15/lib/Bundle/DBI.pm\nDBI-1.15/lib/Win32/\nDBI-1.15/lib/Win32/DBIODBC.pm\nDBI-1.15/DBI.xs\nDBI-1.15/t/\nDBI-1.15/t/meta.t\nDBI-1.15/t/dbidrv.t\nDBI-1.15/t/examp.t\nDBI-1.15/t/subclass.t\nDBI-1.15/t/proxy.t\nDBI-1.15/t/basics.t\nDBI-1.15/t/shell.t\nDBI-1.15/Perl.xs\nDBI-1.15/DBIXS.h\nDBI-1.15/MANIFEST\nDBI-1.15/Driver.xst\nDBI-1.15/Changes\nDBI-1.15/dbipport.h\nDBI-1.15/Makefile.PL\nDBI-1.15/test.pl\nDBI-1.15/README\nDBI-1.15/dbd_xsh.h\nDBI-1.15/dbish.PL\nDBI-1.15/dbi_sql.h\nDBI-1.15/ToDo\nDBI-1.15/DBI.pm\nDBI-1.15/dbiproxy.PL\n[ausit@pivot src]$ ll\ntotal 3216\ndrwxr-xr-x 3 ausit ausit 4096 Apr 25 21:50 DBD-Pg-0.98\n-rwxr-xr-x 1 ausit ausit 174080 Apr 29 02:26 DBD-Pg-0.98.tar\ndrwxr-xr-x 4 ausit ausit 4096 Mar 31 00:57 DBI-1.15\n-rwxr-xr-x 1 ausit ausit 632320 Apr 29 02:26 DBI-1.15.tar\n-rwxr-xr-x 1 ausit ausit 456331 Apr 29 02:26\napache-1.3.14-2.6.2.ppc.rpm\n-rwxr-xr-x 1 ausit ausit 110724 Apr 29 02:26\napache-devel-1.3.14-2.6.2.ppc.r\n-rwxr-xr-x 1 ausit ausit 473057 Apr 29 02:26\napache-manual-1.3.14-2.6.2.ppc.\n-rwxr-xr-x 1 ausit ausit 1083848 Apr 29 02:26\napache-ssl-1.3.12_1.40-1.ppc.rp\n-rwxr-xr-x 1 ausit ausit 18089 Apr 29 02:26\nperl-Mail-Sendmail-0.77-6.ppc.r\ndrwxr-xr-x 4 ausit ausit 4096 Apr 29 08:07 pg\n-rw-r--r-- 1 ausit ausit 284204 Apr 29 02:52\ntar-1.13.17-5.ppc.rpm\n\n#############INSTALLING DBI ##############\n\n[ausit@pivot src]$ cd DBI-1.15\n[ausit@pivot DBI-1.15]$ su\nPassword:\n[root@pivot DBI-1.15]# perl Makefile.PL\n*** Note:\n The optional PlRPC-modules (RPC::PlServer etc) are not installed.\n If you want 
to use the DBD::Proxy driver and DBI::ProxyServer\n modules, then you'll need to install the RPC::PlServer, RPC::PlClient,\n Storable and Net::Daemon modules. The CPAN Bundle::DBI may help you.\n You can install them any time after installing the DBI.\n You do *not* need these modules for typical DBI usage.\n\nOptional modules are available from any CPAN mirror, in particular\n http://www.perl.com/CPAN/modules/by-module\n http://www.perl.org/CPAN/modules/by-module\n ftp://ftp.funet.fi/pub/languages/perl/CPAN/modules/by-module\n\nChecking if your kit is complete...\nLooks good\nWriting Makefile for DBI\n\n Remember to actually *read* the README file!\n Use 'make' to build the software (dmake or nmake on Windows).\n Then 'make test' to execute self tests.\n Then 'make install' to install the DBI and then delete this working\n directory before unpacking and building any DBD::* drivers.\n\n[root@pivot DBI-1.15]# make\nmkdir blib\nmkdir blib/lib\nmkdir blib/arch\nmkdir blib/arch/auto\nmkdir blib/arch/auto/DBI\nmkdir blib/lib/auto\nmkdir blib/lib/auto/DBI\nmkdir blib/man1\nmkdir blib/man3\ncp lib/DBI/W32ODBC.pm blib/lib/DBI/W32ODBC.pm\ncp lib/DBD/ExampleP.pm blib/lib/DBD/ExampleP.pm\ncp lib/DBI/Shell.pm blib/lib/DBI/Shell.pm\ncp lib/DBI/FAQ.pm blib/lib/DBI/FAQ.pm\ncp lib/DBI/ProxyServer.pm blib/lib/DBI/ProxyServer.pm\ncp lib/Bundle/DBI.pm blib/lib/Bundle/DBI.pm\ncp lib/DBD/Proxy.pm blib/lib/DBD/Proxy.pm\ncp lib/DBD/Multiplex.pm blib/lib/DBD/Multiplex.pm\ncp DBIXS.h blib/arch/auto/DBI/DBIXS.h\ncp dbd_xsh.h blib/arch/auto/DBI/dbd_xsh.h\ncp dbi_sql.h blib/arch/auto/DBI/dbi_sql.h\ncp lib/DBD/NullP.pm blib/lib/DBD/NullP.pm\ncp lib/DBD/Sponge.pm blib/lib/DBD/Sponge.pm\ncp lib/DBI/Format.pm blib/lib/DBI/Format.pm\ncp Driver.xst blib/arch/auto/DBI/Driver.xst\ncp lib/DBI/DBD.pm blib/lib/DBI/DBD.pm\ncp lib/Win32/DBIODBC.pm blib/lib/Win32/DBIODBC.pm\ncp DBI.pm blib/lib/DBI.pm\ncp dbipport.h blib/arch/auto/DBI/dbipport.h\ncp lib/DBD/ADO.pm blib/lib/DBD/ADO.pm\n/usr/bin/perl -p -e 
\"s/~DRIVER~/Perl/g\" < blib/arch/auto/DBI/Driver.xst >\nPerl.xsi\n/usr/bin/perl -I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503\n/usr/lib/perl5/5.00503/ExtUtils/xsubpp -typemap\n/usr/lib/perl5/5.00503/ExtUtils/typemap Perl.xs >xst\nmp.c && mv xstmp.c Perl.c\ncc -c -Dbool=char -DHAS_BOOL -O2 -fsigned-char -DVERSION=\\\"1.15\\\"\n-DXS_VERSION=\\\"1.15\\\" -fpic -I/usr/lib/perl5/5.00503/ppc-linux/CORE\n-DDBI_NO_THREADS Perl.c\nmake test/usr/bin/perl -I/usr/lib/perl5/5.00503/ppc-linux\n-I/usr/lib/perl5/5.00503 /usr/lib/perl5/5.00503/ExtUtils/xsubpp -typemap\n/usr/lib/perl5/5.00503/ExtUtils/typemap DBI\n.xs >xstmp.c && mv xstmp.c DBI.c\ncc -c -Dbool=char -DHAS_BOOL -O2 -fsigned-char -DVERSION=\\\"1.15\\\"\n-DXS_VERSION=\\\"1.15\\\" -fpic -I/usr/lib/perl5/5.00503/ppc-linux/CORE\n-DDBI_NO_THREADS DBI.c\nRunning Mkbootstrap for DBI ()\nchmod 644 DBI.bs\nLD_RUN_PATH=\"\" cc -o blib/arch/auto/DBI/DBI.so -shared -L/usr/local/lib\nDBI.o\nchmod 755 blib/arch/auto/DBI/DBI.so\ncp DBI.bs blib/arch/auto/DBI/DBI.bs\nchmod 644 blib/arch/auto/DBI/DBI.bs\n/usr/bin/perl -Iblib/arch -Iblib/lib -I/usr/lib/perl5/5.00503/ppc-linux\n-I/usr/lib/perl5/5.00503 dbiproxy.PL dbiproxy\nExtracted dbiproxy from dbiproxy.PL with variable substitutions.\nmkdir blib/script\ncp dbiproxy blib/script/dbiproxy\n/usr/bin/perl -I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503\n-MExtUtils::MakeMaker -e \"MY->fixin(shift)\" blib/script/dbiproxy\n/usr/bin/perl -Iblib/arch -Iblib/lib -I/usr/lib/perl5/5.00503/ppc-linux\n-I/usr/lib/perl5/5.00503 dbish.PL dbish\nExtracted dbish from dbish.PL with variable substitutions.\ncp dbish blib/script/dbish\n/usr/bin/perl -I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503\n-MExtUtils::MakeMaker -e \"MY->fixin(shift)\" blib/script/dbish\nManifying blib/man1/dbiproxy.1\nManifying blib/man3/DBI::W32ODBC.3\nManifying blib/man3/DBI::FAQ.3\nManifying blib/man3/DBI::Shell.3\nManifying blib/man3/DBI::Format.3\nManifying 
blib/man3/DBI::ProxyServer.3\nManifying blib/man3/Bundle::DBI.3\nManifying blib/man3/DBI::DBD.3\nManifying blib/man1/dbish.1\nManifying blib/man3/DBI.3\nManifying blib/man3/Win32::DBIODBC.3\nManifying blib/man3/DBD::Proxy.3\nManifying blib/man3/DBD::Multiplex.3\nManifying blib/man3/DBD::ADO.3\n[root@pivot DBI-1.15]# make test\nPERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/arch -Iblib/lib\n-I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503 -e 'use\nTest::Harness qw(&runtests $verbose); $verbose=0; ru\nntests @ARGV;' t/*.t\nt/basics............ok\nt/dbidrv............ok\nt/examp.............ok\nt/meta..............ok\nt/proxy.............skipping test on this platform\nt/shell.............ok\nt/subclass..........ok\nAll tests successful, 1 test skipped.\nFiles=7, Tests=183, 4 wallclock secs ( 3.28 cusr + 0.39 csys = 3.67 CPU)\nPERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/arch -Iblib/lib\n-I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503 test.pl\ntest.pl\nDBI test application $Revision: 10.5 $\nUsing /home/ausit/src/DBI-1.15/blib\nSwitch: DBI 1.15 by Tim Bunce, 1.15\nAvailable Drivers: ADO, ExampleP, Multiplex, Pg, Proxy\ndbi:ExampleP:: testing 5 sets of 20 connections:\nConnecting... 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nDisconnecting...\nConnecting... 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nDisconnecting...\nConnecting... 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nDisconnecting...\nConnecting... 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nDisconnecting...\nConnecting... 
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nDisconnecting...\nMade 100 connections in 0 wallclock secs ( 0.12 usr + 0.00 sys = 0.12\nCPU)\n\nTesting handle creation speed...\n5000 NullP statement handles cycled in 3.6 cpu+sys seconds (1388 per sec)\n\ntest.pl done\n[root@pivot DBI-1.15]# make install\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/DBIXS.h\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/dbd_xsh.h\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/dbi_sql.h\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/Driver.xst\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/dbipport.h\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/DBI.so\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/DBI.bs\n(unchanged)\nFiles found in blib/arch --> Installing files in blib/lib into architecture\ndependend library tree!\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/W32ODBC.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/Shell.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/FAQ.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/ProxyServer.pm\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/Format.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI/DBD.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/ExampleP.pm\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/Proxy.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/Multiplex.pm\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/NullP.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/Sponge.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBD/ADO.pm (unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/Bundle/DBI.pm (unchanged)\nSkipping 
/usr/lib/perl5/site_perl/5.005/ppc-linux/Win32/DBIODBC.pm\n(unchanged)\nSkipping /usr/lib/perl5/site_perl/5.005/ppc-linux/DBI.pm (unchanged)\nSkipping /usr/man/man1/dbiproxy.1 (unchanged)\nSkipping /usr/man/man1/dbish.1 (unchanged)\nInstalling /usr/lib/perl5/man/man3/DBI::W32ODBC.3\nInstalling /usr/lib/perl5/man/man3/DBI::FAQ.3\nInstalling /usr/lib/perl5/man/man3/DBI::Shell.3\nInstalling /usr/lib/perl5/man/man3/DBI::Format.3\nInstalling /usr/lib/perl5/man/man3/DBI::ProxyServer.3\nInstalling /usr/lib/perl5/man/man3/Bundle::DBI.3\nInstalling /usr/lib/perl5/man/man3/DBI::DBD.3\nInstalling /usr/lib/perl5/man/man3/DBI.3\nInstalling /usr/lib/perl5/man/man3/Win32::DBIODBC.3\nInstalling /usr/lib/perl5/man/man3/DBD::Proxy.3\nInstalling /usr/lib/perl5/man/man3/DBD::Multiplex.3\nInstalling /usr/lib/perl5/man/man3/DBD::ADO.3\nSkipping /usr/bin/dbiproxy (unchanged)\nSkipping /usr/bin/dbish (unchanged)\nWriting /usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI/.packlist\nAppending installation info to\n/usr/lib/perl5/5.00503/ppc-linux/perllocal.pod\n\n[root@pivot DBI-1.15]# cd ../DBD-Pg-0.98\n[root@pivot DBD-Pg-0.98]# su ausit\n[ausit@pivot DBD-Pg-0.98]$ ll\ntotal 184\n-rw-r--r-- 1 ausit ausit 9877 Apr 25 21:29 Changes\n-rw-r--r-- 1 ausit ausit 119 Sep 30 1999 MANIFEST\n-rw-r--r-- 1 ausit ausit 1594 Sep 30 1999 Makefile.PL\n-rw-r--r-- 1 ausit ausit 895 Jul 11 2000 Pg.h\n-rw-r--r-- 1 ausit ausit 35704 Apr 25 21:29 Pg.pm\n-rw-r--r-- 1 ausit ausit 14525 Jul 11 2000 Pg.xs\n-rw-r--r-- 1 ausit ausit 5591 Apr 25 21:29 README\n-rw-r--r-- 1 ausit ausit 2400 Jul 11 2000 README.win32\n-rw-r--r-- 1 ausit ausit 13810 Sep 30 1999 dbd-pg.pod\n-rw-r--r-- 1 ausit ausit 50355 Apr 25 21:29 dbdimp.c\n-rw-r--r-- 1 ausit ausit 2164 Apr 10 03:44 dbdimp.h\ndrwxr-xr-x 2 ausit ausit 4096 Apr 25 21:29 eg\n-rwxr-xr-x 1 ausit ausit 14402 Apr 21 07:01 test.pl\n\n#############SETTING POSTGRES ENVIRONMENTS ##############\n\n\n[ausit@pivot DBD-Pg-0.98]$ POSTGRES_INCLUDE=/usr/local/pgsql/include\n[ausit@pivot 
DBD-Pg-0.98]$ POSTGRES_LIB=/usr/local/pgsql/lib\n[ausit@pivot DBD-Pg-0.98]$ export POSTGRES_INCLUDE POSTGRES_LIB\n[ausit@pivot DBD-Pg-0.98]$ env\nPWD=/home/ausit/src/DBD-Pg-0.98\nPOSTGRES_INCLUDE=/usr/local/pgsql/include\nREMOTEHOST=192.168.168.50\nHOSTNAME=pivot\nQTDIR=/usr/lib/qt-2.2.1\nLESSOPEN=|/usr/bin/lesspipe.sh %s\nKDEDIR=/opt/kde2\nUSER=ausit\nLS_COLORS=\nMACHTYPE=powerpc-redhat-linux-gnu\nPOSTGRES_LIB=/usr/local/pgsql/lib\nMAIL=/var/spool/mail/ausit\nINPUTRC=/etc/inputrc\nBASH_ENV=/home/ausit/.bashrc\nLOGNAME=ausit\nSHLVL=3\nSHELL=/bin/bash\nUSERNAME=\nHOSTTYPE=powerpc\nOSTYPE=linux-gnu\nHISTSIZE=1000\nTERM=vt220\nHOME=/home/ausit\nPATH=/sbin:/usr/sbin:/sbin:/usr/sbin:/usr/kerberos/sbin:/sbin:/usr/sbin:/usr\n/kerberos/bin:/opt/kde2/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr\n/local/pgsql/bin:/usr/l\nib/qt-2.2.1/bin:/home/ausit/bin\n_=/usr/bin/env\n\n#############INSTALLING DBD ##############\n\n\n[ausit@pivot DBD-Pg-0.98]$ perl Makefile.PL\nConfiguring Pg\nRemember to actually read the README file !\nOS: linux\nUsing DBI 1.15 installed in\n/usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI\nChecking if your kit is complete...\nLooks good\nWriting Makefile for DBD::Pg\n[ausit@pivot DBD-Pg-0.98]$ make\nmkdir blib\nmkdir blib/lib\nmkdir blib/lib/DBD\nmkdir blib/arch\nmkdir blib/arch/auto\nmkdir blib/arch/auto/DBD\nmkdir blib/arch/auto/DBD/Pg\nmkdir blib/lib/auto\nmkdir blib/lib/auto/DBD\nmkdir blib/lib/auto/DBD/Pg\nmkdir blib/man3\ncp Pg.pm blib/lib/DBD/Pg.pm\ncp dbd-pg.pod blib/lib/DBD/dbd-pg.pod\n/usr/bin/perl -I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503\n/usr/lib/perl5/5.00503/ExtUtils/xsubpp -typemap\n/usr/lib/perl5/5.00503/ExtUtils/typemap Pg.xs >xstmp\n.c && mv xstmp.c Pg.c\ncc -c -I/usr/local/pgsql/include\n-I/usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI -Dbool=char -DHAS_BOOL\n-O2 -fsigned-char -DVERSION=\\\"0.98\\\" -DXS_VERSION=\\\"0.98\\\" -fpic\n -I/usr/lib/perl5/5.00503/ppc-linux/CORE Pg.c\ncc -c 
-I/usr/local/pgsql/include\n-I/usr/lib/perl5/site_perl/5.005/ppc-linux/auto/DBI -Dbool=char -DHAS_BOOL\n-O2 -fsigned-char -DVERSION=\\\"0.98\\\" -DXS_VERSION=\\\"0.98\\\" -fpic\n -I/usr/lib/perl5/5.00503/ppc-linux/CORE dbdimp.c\nRunning Mkbootstrap for DBD::Pg ()\nchmod 644 Pg.bs\nLD_RUN_PATH=\"/usr/local/pgsql/lib\" cc -o blib/arch/auto/DBD/Pg/Pg.so\n-shared -L/usr/local/lib Pg.o dbdimp.o -L/usr/local/pgsql/lib -lpq\nchmod 755 blib/arch/auto/DBD/Pg/Pg.so\ncp Pg.bs blib/arch/auto/DBD/Pg/Pg.bs\nchmod 644 blib/arch/auto/DBD/Pg/Pg.bs\nManifying blib/man3/DBD::Pg.3\nManifying blib/man3/DBD::dbd-pg.3\n[ausit@pivot DBD-Pg-0.98]$ make test\n\n\n#############DBD INSTALL FAILURE LOG ##############\n\n\n\nPERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/arch -Iblib/lib\n-I/usr/lib/perl5/5.00503/ppc-linux -I/usr/lib/perl5/5.00503 test.pl\nOS: linux\nUse of uninitialized value at test.pl line 53.\nDBI->data_sources .......... not ok:\nDBI->connect(dbname=template1) failed: FATAL 1: at test.pl line 59\nDBI->connect ............... not ok: FATAL 1: at test.pl line 59.\nmake: *** [test_dynamic] Error 255\n\n#############DBD INSTALL FAILURE LOG ##############\n\n\n############# PERL VERSION ##############\n\n\n\n[ausit@pivot DBD-Pg-0.98]$ perl -v\n\nThis is perl, version 5.005_03 built for ppc-linux\n\nCopyright 1987-1999, Larry Wall\n\nPerl may be copied only under the terms of either the Artistic License or\nthe\nGNU General Public License, which may be found in the Perl 5.0 source kit.\n\nComplete documentation for Perl, including FAQ lists, should be found on\nthis system using `man perl' or `perldoc perl'. If you have access to the\nInternet, point your browser at http://www.perl.com/, the Perl Home Page.\n\n[ausit@pivot DBD-Pg-0.98]$\n\n\n#############END OF TELNET SESSION ##############\n\n\n\nI have tried different users (including root)\nI have used different path ot env's. 
(/usr/lib/pgsql/ & /usr/include/pgsql)\n\n\n\nAgain, if anyone can help me out here, as I'm not having too much luck\nfinding the problem.", "msg_date": "Sat, 28 Apr 2001 22:55:07 GMT", "msg_from": "Raoul Callaghan <ausit@bigpond.net.au>", "msg_from_op": true, "msg_subject": "perlDBD::pg error! Please help (have all details in post)" } ]
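An editor's note on the failure log in the message above: `DBI->connect(dbname=template1) failed: FATAL 1:` means libpq reached the postmaster but the backend refused the session — typically because `make test` was run as a Unix user with no matching database user, or because client authentication rejected the connection. Since DBD::Pg's test.pl hardcodes `dbname=template1`, the usual lever is libpq's standard environment variables, which it honors regardless of the DSN. A minimal sketch follows; the variable names (PGHOST, PGPORT, PGUSER) are real libpq variables, but the values shown are illustrative assumptions, not taken from the thread:

```shell
# Sketch: steer DBD::Pg's test connection through libpq's standard
# environment variables. The values below are assumptions for illustration.
export PGHOST=localhost   # or leave unset to use the Unix-domain socket
export PGPORT=5432
export PGUSER=postgres    # must be a database user the server actually knows
echo "libpq will try $PGUSER@$PGHOST:$PGPORT"
```

If the connection still fails after this, the era-appropriate checks were whether the postmaster was started with `-i` (so it accepts TCP connections at all) and what pg_hba.conf allows for that host and database — neither is shown in the thread, so this is a hedged diagnosis rather than a confirmed fix.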
[ { "msg_contents": "", "msg_date": "Sat, 28 Apr 2001 21:28:47 -0700", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": true, "msg_subject": "SAP-DB" }, { "msg_contents": "On Sat, 28 Apr 2001, Don Baccus wrote:\n> I humbly suggest you don't write them off quite so quickly.\n> \n> SAP is, after all, a very successful company.\n\nSo is Microsoft.\n\nI got the between the lines part of Bruce's post and agree with him.\n\n-- \n| Matthew N. Dodd | '78 Datsun 280Z | '75 Volvo 164E | FreeBSD/NetBSD |\n| winter@jurai.net | 2 x '84 Volvo 245DL | ix86,sparc,pmax |\n| http://www.jurai.net/~winter | For Great Justice! | ISO8802.5 4ever |\n\n", "msg_date": "Sun, 29 Apr 2001 00:40:43 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <winter@jurai.net>", "msg_from_op": false, "msg_subject": "Re: SAP-DB" }, { "msg_contents": "Don Baccus wrote:\n> \n> Hi guys,\n> >\n> > I've used the open source SAPDB and the performance is pretty damned\n> > impressive. However, 'open source' in application to it is somewhat\n> > deceptive, since you have to make it with SAP's proprietary build\n> > tools/environment.\n> >\n> > In my opinion, however, it would be worth closely auditing SAP DB to see\n> > what postgres can learn.\n> \n> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. :-)\n> \n> I swore I'd never post to the hackers list again, but this is an amazing\n> statement by Bruce.\n> \n> Boy, the robustness of the software is determined by the number of characters\n> in the directory name?\n> \n> By the languages used?\n\n[Snip]\n\nMy guess is that Bruce was implying that the code was obfuscated. It is a\ncommon trick for closed source to be \"open\" but not really.\n\nI don't think it was any sort of technology snobbery. 
Far be it for me to\nsuggest an explanation to the words of others, that is just how I read it.\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 29 Apr 2001 13:12:08 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: SAP-DB" }, { "msg_contents": "> > I swore I'd never post to the hackers list again, but this is an amazing\n> > statement by Bruce.\n> > \n> > Boy, the robustness of the software is determined by the number of characters\n> > in the directory name?\n> > \n> > By the languages used?\n> \n> [Snip]\n> \n> My guess is that Bruce was implying that the code was obfuscated. It is a\n> common trick for closed source to be \"open\" but not really.\n> \n> I don't think it was any sort of technology snobbery. Far be it for me to\n> suggest an explanation to the words of others, that is just how I read it.\n\nI don't think they intentionally confused the code.\n\nThe real problem I see in that it was very hard for me to find anything\nin the code. I would be interested to see if others can find stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 13:32:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: SAP-DB" }, { "msg_contents": "> Have you considered that the development tools may\n> \n> be abstracting out the directory names in their development \n> environment?\n\nI never considered this, but it makes sense. I didn't try the\ndevelopment tools and went right to the code. I did find a web site\nthat described the two-letter directory names and their meanings so I\nthought that was all there was. 
Seems their development tools make the\ncode more understandable. Has anyone tried it?\n\n> I humbly suggest you don't write them off quite so quickly.\n\nI never wrote them off. I just couldn't figure any of it out.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 13:37:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SAP-DB" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010429 10:44] wrote:\n> > > I swore I'd never post to the hackers list again, but this is an amazing\n> > > statement by Bruce.\n> > > \n> > > Boy, the robustness of the software is determined by the number of characters\n> > > in the directory name?\n> > > \n> > > By the languages used?\n> > \n> > [Snip]\n> > \n> > My guess is that Bruce was implying that the code was obfuscated. It is a\n> > common trick for closed source to be \"open\" but not really.\n> > \n> > I don't think it was any sort of technology snobbery. Far be it for me to\n> > suggest an explanation to the words of others, that is just how I read it.\n> \n> I don't think they intentionally confused the code.\n> \n> The real problem I see in that it was very hard for me to find anything\n> in the code. I would be interested to see if others can find stuff.\n\nI think this is general problem in a lot of projects, you open up\nfoo.c and say... \"what the heck is this...\" after a few hours of\nstudying the source you finally figure out is something that does\nminiscule part X of massive part Y and by then you're too engrossed\nto write a little banner for the file or dir explaining what it's\nfor and incorrectly assume that even if you did, it wouldn't help\nthat user unless he went through the same painful steps that you\ndid.\n\nBeen there, done that.. 
er, actually, still there, mostly still\ndoing that. :)\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nhttp://www.egr.unlv.edu/~slumos/on-netbsd.html\n", "msg_date": "Sun, 29 Apr 2001 12:49:11 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re: SAP-DB" } ]
[ { "msg_contents": "Hi,\n\nI'm new to postgresql...but so far i love it...after working with companies \nthat have spent millions of dollars on the big \"O\" i must say pgsql is a \nbreath of fresh air...nice work to all who contributed.\n\nI'm looking at building a fault tolerant/failover database [hot standby] \nsystem using pgsql. I have setup and tested out DRBD and it works pretty \nwell...although no work seems to have been done on it for a while. I\nalso just found out about eRserver but have not tried it out yet.\n\nDoes anyone have suggestions/recommendations for a good system that can be \nused for a high volume commercially available application.\n\nThanks!\n\nShawn\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Sun, 29 Apr 2001 04:30:09 -0000", "msg_from": "\"special agent \"k\"\" <chickensoda@hotmail.com>", "msg_from_op": true, "msg_subject": "Building Fault Tolerant/Failover PGSQL Systems" }, { "msg_contents": "Thus spake special agent k \n> I'm new to postgresql...but so far i love it...after working with companies \n> that have spent millions of dollars on the big \"O\" i must say pgsql is a \n> breath of fresh air...nice work to all who contributed.\n> \n> I'm looking at building a fault tolerant/failover database [hot standby] \n> system using pgsql. I have setup and tested out DRBD and it works pretty \n> well...although no work seems to have been done on it for a while. I\n> also just found out about eRserver but have not tried it out yet.\n> \n> Does anyone have suggestions/recommendations for a good system that can be \n> used for a high volume commercially available application.\n\nSomething that I am looking at is setting up two high-end systems connected\nto an external RAID array. 
The RAID gives us storage redundancy and\neither system can take over and be the database server using the same files.\nOf course we will still do nightly backups but it would have to be quite\nthe catastrophe if we ever needed it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 29 Apr 2001 07:02:10 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Building Fault Tolerant/Failover PGSQL Systems" } ]
[ { "msg_contents": "I found it myself:\n\nSQL Features\n- SQL 92 entry level with several extensions\n- Oracle 7 compatibility mode\n\nKey benefits\n\n- Referential integrity (to be defined in CREATE TABLE or ALTER Table\n statement)\n- Stored procedures\n- After statement trigger (INSERT/UPDATE/DELETE) \n- Updateable views, although not every view is updateable\n- Datatype BOOLEAN\n- A number of functions including functions for date values\n- Maximum length of object names is 32 characters\n- Subtransactions\n- Sequences (number generator)\n- Roles (compositions of user authorizations that can be granted/revoked as a \nwhole)\n- Subselects that can be specified in the FROM clause of a query\n- Outer joins\n- Scrollable cursor\n-Temporary tables that will be destroyed if the application ends the session\n These tables can be created within a stored procedure and selected from \noutside this procedure. They are updateable, although there is no implicit \nupdate of the rows that created the temporary table.\n-Explicit and implicit locking on row level\n\nNOT supported features:\n- Collations\n- Result sets that are created within a stored procedure and fetched outside. \nThis feature is planned to be offered in one of the coming releases.\n Meanwhile, use temporary tables.\n- Multi version concurrency for OLTP\n It is available with the object extension of SAPDB only.\n- Hot stand by\n This feature is planned to be offered in one of the coming releases. \n\nEnterprise Features\n- The Microsoft cluster server is supported. 
For other systems scripts have \nto be written according to the failover solution of the system.\n- Online backup\n- Online expansion of the database\n- No explicit reorganization\n\n- Supported backup tools:\n-- ADSM + adint2\n Networker\n Netvault, HiBack\n (soon) Backint for Oracle\n Tools supporting this interface are: ARCserve, Backup Express, dbBRZ for \n R/3, DBVAULT, DoroStore, EASY_BASE, EMC, EPOCH, FDR/UPSTREAM, HIBACK, \n HSMS-CL Backint, NetBackup, NetVault, NetWorker, Omniback, Seagate Backup, \n SESAM, Solstice Backup, Sys-Save, TIME NAVIGATOR for R/3, \n Tivoli\n\nProgramming Interfaces\n- ODBC\n- C/C++ Precompiler (Embedded SQL)\n- JDBC\n- Perl DBI\n- Python\n- PHP\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Sun, 29 Apr 2001 10:27:11 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "SAP DB Featuers" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I found it myself:\n\nYes, this was on the Features web page.\n\n> SQL Features\n> - SQL 92 entry level with several extensions\n\n> - Oracle 7 compatibility mode\n\nYes, that was a major feature to me. I just couldn't find it in the\ncode.\n\n> - Updateable views, although not every view is updateable\n\nAlso nice.\n\n> Enterprise Features\n> - The Microsoft cluster server is supported. For other systems scripts have \n> to be written according to the failover solution of the system.\n\nThis seemed interesting. 
And I saw replication mentioned too.\n\n\n> - Supported backup tools:\n> -- ADSM + adint2\n> Networker\n> Netvault, HiBack\n> (soon) Backint for Oracle\n> Tools supporting this interface are: ARCserve, Backup Express, dbBRZ for \n> R/3, DBVAULT, DoroStore, EASY_BASE, EMC, EPOCH, FDR/UPSTREAM, HIBACK, \n> HSMS-CL Backint, NetBackup, NetVault, NetWorker, Omniback, Seagate Backup, \n> SESAM, Solstice Backup, Sys-Save, TIME NAVIGATOR for R/3, \n> Tivoli\n\nThat's a lot of backup tools. Can we grab that code somehow?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 12:50:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: SAP DB Featuers" } ]
[ { "msg_contents": "Hi guys (and girls),\n\nFirstly, I must say that everyone has been quite helpful to me while I've\nbeen migrating my database to PostgreSQL 7.1.\n\nOne feature I would like to see would be the ability to set a \"usage\" and\n\"idle\" threshold, so that tables automatically get vacuumed once they have\nhad more than X insert/deletes and there is less than Y load (however load\nmay be defined) on the database.\n\nWould this be a particularly hard feature to implement?\n\nCheers,\n\n--\nAlastair D'Silva (mob: 0413 485 733)\nNetworking Consultant\nNew Millennium Networking (web: http://www.newmillennium.net.au)\n\n", "msg_date": "Sun, 29 Apr 2001 23:24:54 +0800", "msg_from": "\"Alastair D'Silva\" <deece@newmillennium.net.au>", "msg_from_op": true, "msg_subject": "Self vacuuming" }, { "msg_contents": "Alastair D'Silva wrote:\n> \n> Hi guys (and girls),\n> \n> Firstly, I must say that everyone has been quite helpful to me while I've\n> been migrating my database to PostgreSQL 7.1.\n> \n> One feature I would like to see would be the ability to set a \"usage\" and\n> \"idle\" threshold, so that tables automatically get vacuumed once they have\n> had more than X insert/deletes and there is less than Y load (however load\n> may be defined) on the database.\n> \n> Would this be a particularly hard feature to implement?\n\nI would like not to see vacuuming required at all. I like the feature as a way\nto force compaction, but I would like to see dynamic block space reuse. This is\na far more complex thing to implement with variable length fields, but it is\nrealy the only way to do it.\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 29 Apr 2001 11:37:11 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Self vacuuming" } ]
[ { "msg_contents": "I have attached the original message and my reply. The person was\nasking how we could use SAP \"to see what postgres can learn\". My reply\nwas to say that I couldn't figure out how to learn anything from the\ncode. That was my only statement.\n\nI did not trash SAP DB. Seems using their development tools may make\nthe code much easier to understand. Hopefully someone will try.\n\n---------------------------------------------------------------------------\n\n\n> > Hi guys,\n> > \n> > I've used the open source SAPDB and the performance is pretty damned\n> > impressive. However, 'open source' in application to it is somewhat\n> > deceptive, since you have to make it with SAP's proprietary build\n> > tools/environment.\n> > \n> > In my opinion, however, it would be worth closely auditing SAP DB to see\n> > what postgres can learn.\n> \n> I downloaded it. The directories are two characters in length, the\n> files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> I say more. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 14:12:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SAPDB Open Souce" }, { "msg_contents": "I see one of my mistakes here. The person clearly said \"you have to\nmake it with SAP's proprietary build tools/environment.\" I didn't\nrealize you need the build tools/environment to meaningfully view the\ncode. Of course, as someone else stated, the build tools/environment\nhave been open-sourced.\n\nCan someone confirm that the build tools/environment makes the code \neasier to understand? That would be good, and make it easier for us to\nlearn from it.\n\n\n> I have attached the original message and my reply. 
The person was\n> asking how we could used SAP \"to see what postgres can learn\". My reply\n> was to say that I couldn't figure how how to learn anything from the\n> code. That was my only statement.\n> \n> I did not trash SAP DB. Seems using their development tools may make\n> the code much easier to understand. Hopefully someone will try.\n> \n> ---------------------------------------------------------------------------\n> \n> \n> > > Hi guys,\n> > > \n> > > I've used the open source SAPDB and the performance is pretty damned\n> > > impressive. However, 'open source' in application to it is somewhat\n> > > deceptive, since you have to make it with SAP's proprietary build\n> > > tools/environment.\n> > > \n> > > In my opinion, however, it would be worth closely auditing SAP DB to see\n> > > what postgres can learn.\n> > \n> > I downloaded it. The directories are two characters in length, the\n> > files are numbers, and it is a mixture of C++, Python, and Pascal. Need\n> > I say more. :-)\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 14:36:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SAPDB Open Souce" } ]
[ { "msg_contents": "Here is a general call for people to review other open-source database\nsoftware and report back on things PostgreSQL can learn from them.\n\nI can see Interbase, MySQL, and SAP DB as being three database that\nwould be worth researching. I am willing to assist anyone who wants to\ngive it a try. I have all the sources here myself. I even have old\nUniversity Ingres, Mariposa, and Postgres 4.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 15:58:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Learning from other open source databases" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> I can see Interbase, MySQL, and SAP DB as being three database that\n> would be worth researching. I am willing to assist anyone who wants to\n> give it a try. I have all the sources here myself. I even have old\n> University Ingres, Mariposa, and Postgres 4.2.\n\nThere's also the Shore data manager. 
While not a complete SQL database,\nI've wondered if it could actually be spliced into PostgreSQL, since the\nlicenses appear compatible.\n\nhttp://www.cs.wisc.edu/shore/\n\nKen Hirsch\n\n\n", "msg_date": "Mon, 30 Apr 2001 06:57:10 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Learning from other open source databases" }, { "msg_contents": "Bruce Momjian wrote:\n\n> Here is a general call for people to review other open-source database\n> software and report back on things PostgreSQL can learn from them.\n\ni don't know how much there is to learn since it doesn't seem as though \ndevelopment has been active in a few years, but there's also GNU SQL, \nwhich i had always hoped would develop into a useable system.\n\nhttp://www.ispras.ru/~kml/gss/index.html\n\n-tfo\n\n\n\n", "msg_date": "Tue, 01 May 2001 10:22:07 -0500", "msg_from": "\"Thomas F. O'Connell\" <tfo@monsterlabs.com>", "msg_from_op": false, "msg_subject": "Re: Learning from other open source databases" }, { "msg_contents": "On Sun, 29 Apr 2001 20:04:19 +0000 (UTC), pgman@candle.pha.pa.us\n(Bruce Momjian) wrote:\n\n>Here is a general call for people to review other open-source database\n>software and report back on things PostgreSQL can learn from them.\n>\n>I can see Interbase, MySQL, and SAP DB as being three database that\n>would be worth researching. I am willing to assist anyone who wants to\n>give it a try. I have all the sources here myself. I even have old\n>University Ingres, Mariposa, and Postgres 4.2.\n\nIdeas that could be used from other databases:\n\nDB2\nDB2 and others made their native API ODBC compatible so they can plug\nin anywhere. If PostgreSQL moved to an ODBC compatible API, PostgreSQL\ncould be a plug compatible replacement for DB2. 
As DB2 has 33% of the\ncommercial market, 1% ahead of Oracle, you will get more exposure and\nsupport from large corporations if you can replace DB2 for the smaller\nprojects that do not require DB2's multi-tier capabilities.\n\nCompanies writing applications for DB2 can instantly plug their\nsoftware in to open source environments.\n\nPeople without ODBC would continue using the native API then suddenly\nfind they are using odbc functions anyway.\n\nphpMyAdmin\nphpPgAdmin is based on phpMyAdmin but uses some complicated SQL to\nprovide the same views of databases. I think PostgreSQL should\ncontinue adding predefined views to the point where phpPgAdmin can use\nthe same simple SQL as phpMyAdmin because that covers a huge amount of\nwhat people write as soon as they have a few databases and lots of\ntables.\n\nNT\nMySQL and others install ODBC support as standard in NT. It is one of\nthe standard things to do on NT. Starting services, like Postmaster,\nas a service is another. \n\nphpPgAdmin\nRecommend phpPgAdmin as the interface instead of psql as phpPgAdmin is\nfar closer to what NT users already use. Even cheap little routers are\nnow using web interfaces instead of telnet because web interfaces make\nthe products accessible to about 100 times more people.\n\nDocumentation\nMySQL has it's documentation as one big install instead of 5 separate\ndocuments. Even if PostgreSQL just had one big index in to the 5\nseparate documents, that would help.\n\nPeter\n", "msg_date": "Sun, 13 May 2001 23:47:22 GMT", "msg_from": "peter@petermouldingBUTNOTSPAM.com", "msg_from_op": false, "msg_subject": "Re: Learning from other open source databases" } ]
[ { "msg_contents": "First off I just wanted to give a big 'thank you' to all the developers and contributors\nwho have made PostgreSQL what it is today. I haven't come across a single thing\nsince my first experience with it a few years ago that hasn't been corrected, sped\nup, or otherwise postively enhanced!\n\nIn working with 7.1 over the past couple weeks, I've noted the following mods may\nadd to usability and speed:\n\no v7.1 changed the database naming convention to be all numeric; I suggest having\n the DB engine create symbolic links when creating a new DB and subsequent tables.\n For instance, in creating a database 'foo' with table 'bar' the /path/to/pgsql/data/base\n folder will have a new folder named something like '18720'; this folder could also\n have a symbolic link to 'foo'. Then in the '18720' folder rather than just having\n numeric files for each table, pk, index, etc. there could be symbolic links following\n the naming convention 'bar', 'pk_foo_pkey', 'field1_foo_ukey', 'field2_foo_key'.\n\n Maybe this would work best as configurable flag that could be set during compilation or\n in the conf file.\n\no count() should use index scans for tables with a PK; scans would be on the PK index;\n even after running 'vacuum analyze' such a query still uses a sequential scan. For\n instance, \"select count(*) from bar\" and even \"select(pk_name) from bar\" both use\n sequential scans. Likewise, scans on fields with indexes should use the index.\n\n\nI hope this input is useful; keep up the excellent work,\n\nCasey Lyon\nSystems Engineer\nEarthcars.com, Inc\nwww.earthcars.com\ncasey@earthcars.com\n\n", "msg_date": "Sun, 29 Apr 2001 21:58:11 -0400", "msg_from": "Casey Lyon <casey@earthcars.com>", "msg_from_op": true, "msg_subject": "Thanks, naming conventions, and count()" }, { "msg_contents": "> First off I just wanted to give a big 'thank you' to all the\n> developers and contributors who have made PostgreSQL what it is\n> today. 
I haven't come across a single thing since my first\n> experience with it a few years ago that hasn't been corrected,\n> sped up, or otherwise postively enhanced!\n> \n> In working with 7.1 over the past couple weeks, I've noted the\n> following mods may add to usability and speed:\n> \n> o v7.1 changed the database naming convention to be all numeric;\n> I suggest having\n> the DB engine create symbolic links when creating a new DB\n> and subsequent tables. For instance, in creating a database\n> 'foo' with table 'bar' the /path/to/pgsql/data/base folder\n> will have a new folder named something like '18720'; this\n> folder could also have a symbolic link to 'foo'. Then in the\n> '18720' folder rather than just having numeric files for each\n> table, pk, index, etc. there could be symbolic links\n> following the naming convention 'bar', 'pk_foo_pkey',\n> 'field1_foo_ukey', 'field2_foo_key'.\n> \n> Maybe this would work best as configurable flag that could\n> be set during compilation or in the conf file.\n\nI think this is an excellent idea, and will add it to the TODO list. We\nagonized over moving to numeric names, and we couldn't think of a good\nway to allow administrators to know what table matched what files. The\nbig problem is that there is no good way to make the symlinks reliable\nbecause in a crash, the symlink could point to a table creation that got\nrolled back or the renaming of a table that got rolled back. I think\nsymlinks with some postmaster cleanup script that fixed bad symlinks\nwould be great for 7.2.\n\nI have added this to the TODO list. If someone objects, I will remove\nit:\n\n\t* Add tablename symlinks for numeric file names\n\n> \n> o count() should use index scans for tables with a PK; scans\n> would be on the PK index;\n> even after running 'vacuum analyze' such a query still uses\n> a sequential scan. 
For instance, \"select count(*) from bar\"\n> and even \"select(pk_name) from bar\" both use sequential scans.\n> Likewise, scans on fields with indexes should use the index.\n\nThe problem here is that now we don't have commit status in the index\nrows, so they have to check the heap for every row. One idea is to\nupdate the index status on an index scan, and if we can do that, we can\neasily use the index. However, the table scan is pretty quick.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 22:35:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "\ndoesn't this defeat the reasons for going to numerics? is there a reason\nwhy its such a difficult thing to do a SELECT oid on pg_database and\npg_class to get this information? that's what I've been doing when I need\nto know *shrug*\n\nOn Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> > First off I just wanted to give a big 'thank you' to all the\n> > developers and contributors who have made PostgreSQL what it is\n> > today. I haven't come across a single thing since my first\n> > experience with it a few years ago that hasn't been corrected,\n> > sped up, or otherwise postively enhanced!\n> >\n> > In working with 7.1 over the past couple weeks, I've noted the\n> > following mods may add to usability and speed:\n> >\n> > o v7.1 changed the database naming convention to be all numeric;\n> > I suggest having\n> > the DB engine create symbolic links when creating a new DB\n> > and subsequent tables. 
For instance, in creating a database\n> > 'foo' with table 'bar' the /path/to/pgsql/data/base folder\n> > will have a new folder named something like '18720'; this\n> > folder could also have a symbolic link to 'foo'. Then in the\n> > '18720' folder rather than just having numeric files for each\n> > table, pk, index, etc. there could be symbolic links\n> > following the naming convention 'bar', 'pk_foo_pkey',\n> > 'field1_foo_ukey', 'field2_foo_key'.\n> >\n> > Maybe this would work best as configurable flag that could\n> > be set during compilation or in the conf file.\n>\n> I think this is an excellent idea, and will add it to the TODO list. We\n> agonized over moving to numeric names, and we couldn't think of a good\n> way to allow administrators to know that table matched what files. The\n> big problem is that there is no good way to make the symlinks reliable\n> because in a crash, the symlink could point to a table creation that got\n> rolled back or the renaming of a table that got rolled back. I think\n> symlinks with some postmaster cleanup script that fixed bad symlinks\n> would be great for 7,2.\n>\n> I have added this to the TODO list. If someone objects, I will remove\n> it:\n>\n> \t* Add tablename symlinks for numeric file names\n>\n> >\n> > o count() should use index scans for tables with a PK; scans\n> > would be on the PK index;\n> > even after running 'vacuum analyze' such a query still uses\n> > a sequential scan. For instance, \"select count(*) from bar\"\n> > and even \"select(pk_name) from bar\" both use sequential scans.\n> > Likewise, scans on fields with indexes should use the index.\n>\n> The problem here is that now we don't have commit status in the index\n> rows, so they have to check the heap for every row. One idea is to\n> update the index status on an index scan, and if we can do that, we can\n> easily use the index. 
However, the table scan is pretty quick.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sun, 29 Apr 2001 23:40:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> \n> doesn't this defeat the reasons for going to numerics? is there a reason\n> why its such a difficult thing to do a SELECT oid on pg_database and\n> pg_class to get this information? that's what I've been doing when I need\n> to know *shrug*\n\nYes, but you can't do that if you can't start the database or can't\nconnect for some reason. If people don't think it is worthwhile, we can\ndelete the TODO item.\n\nFor example, when someone has trouble figuring out which directory is\nwhich database, they can just ls and look at the symlinks. Seems like a\nnice feature.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 22:44:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> > \n> > doesn't this defeat the reasons for going to numerics? 
is there a reason\n> > why its such a difficult thing to do a SELECT oid on pg_database and\n> > pg_class to get this information? that's what I've been doing when I need\n> > to know *shrug*\n> \n> Yes, but you can't do that if you can't start the database or can't\n> connect for some reason. If people don't think it is worthwhile, we can\n> delete the TODO item.\n> \n> For example, when someone has trouble figuring out which directory is\n> which database, they can just ls and look at the symlinks. Seems like a\n> nice feature.\n\nI will admit we are not getting flooded with problems due to the new\nnumeric file names like I thought we would, so maybe it is not worth the\nsymlinks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 22:51:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> >\n> > doesn't this defeat the reasons for going to numerics? is there a reason\n> > why its such a difficult thing to do a SELECT oid on pg_database and\n> > pg_class to get this information? that's what I've been doing when I need\n> > to know *shrug*\n>\n> Yes, but you can't do that if you can't start the database or can't\n> connect for some reason. If people don't think it is worthwhile, we can\n> delete the TODO item.\n\nOkay, what does being able to ls the directory give you if you can't start\nthe database? the only thing I do it for is to figure out whicih tables\nare taking up so much disk space, or which databases ...\n\n> For example, when someone has trouble figuring out which directory is\n> which database, they can just ls and look at the symlinks. 
Seems like\n> a nice feature.\n\nYa, but I thought that the reason for going numeric had to do with being\ntransaction safe ... something about being able to safely RENAME a table,\nif my recollection remotely comes close ... as soon as you start throwing\naround symlinks, do we break that once more? what about if someone wants\nto physically move a table to a separate file system, which is something\nthat has been suggested as a way around the fact that all files are in the\nsame subdirectory? You have a symlink to the symlink?\n\nI don't know the answers to these questions, which is why I'm asking them\n... if this is something safe to do, and doesn't break us again, then\nsounds like a good idea to me too ...\n\n", "msg_date": "Sun, 29 Apr 2001 23:56:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> On Sun, 29 Apr 2001, Bruce Momjian wrote:\n> \n> > >\n> > > doesn't this defeat the reasons for going to numerics? is there a reason\n> > > why it's such a difficult thing to do a SELECT oid on pg_database and\n> > > pg_class to get this information? that's what I've been doing when I need\n> > > to know *shrug*\n> >\n> > Yes, but you can't do that if you can't start the database or can't\n> > connect for some reason. If people don't think it is worthwhile, we can\n> > delete the TODO item.\n> \n> Okay, what does being able to ls the directory give you if you can't start\n> the database? the only thing I do it for is to figure out which tables\n> are taking up so much disk space, or which databases ...\n\nYes, it is just for admin convenience, and if you pull back a database\nfrom a tar backup, you can know which files are which without starting\nthe database.\n\n> \n> > For example, when someone has trouble figuring out which directory is\n> > which database, they can just ls and look at the symlinks. 
Seems like\n> > a nice feature.\n> \n> Ya, but I thought that the reason for going numeric had to do with being\n> transaction safe ... something about being able to safely RENAME a table,\n> if my recollection remotely comes close ... as soon as you start throwing\n> around symlinks, do we break that once more? what about if someone wants\n> to physically move a table to a seperate file system, which is something\n> that has been suggested as a way around the fact that all files are in the\n> same subdirectory? You have a symlink to the symlink?\n> \n> I don't know the answers to these questions, which is why I'm asking them\n> ... if this is something safe to do, and doesn't break us again, then\n> sounds like a good idea to me too ...\n\nI was suggesting the symlinks purely for admin convenience. The database\nwould use only the numeric names.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:00:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> > I don't know the answers to these questions, which is why I'm asking them\n> > ... if this is something safe to do, and doesn't break us again, then\n> > sounds like a good idea to me too ...\n>\n> I was suggesting the symlinks purely for admin convenience. The database\n> would use only the numeric names.\n\nExcept that the database would have to maintain those links ... now you've\ngiven something ppl are relying on being there, but, for some reason, a\nsymlink wasn't created, so they think their table doesn't exist?\n\nI can even think of a situation, as unlikely as it can be, where this\ncould happen ... run out of inodes on the file system ... 
last inode used\nby the table, no inode to stick the symlink onto ...\n\nit's a remote situation, but I've personally had it happen ...\n\nI'd personally prefer to see some text file created in the database\ndirectory itself that contains the mappings ... so that each time there is\na change, it just redumps that data to the text file ... less to maintain\noverall ...\n\n\n\n", "msg_date": "Mon, 30 Apr 2001 00:09:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> I can even think of a situation, as unlikely as it can be, where this\n> could happen ... run out of inodes on the file system ... last inode used\n> by the table, no inode to stick the symlink onto ...\n\n\nIf you run out of inodes, you are going to have much bigger problems\nthan symlinks. Sort file creation would fail too.\n\n> \n> it's a remote situation, but I've personally had it happen ...\n> \n> I'd personally prefer to see some text file created in the database\n> directory itself that contains the mappings ... so that each time there is\n> a change, it just redumps that data to the text file ... less to maintain\n> overall ...\n\nYes, I like that idea, but the problem is that it is hard to update just\none table in the file. You sort of have to update the entire file each\ntime a table changes. That is why I liked symlinks because they are\nper-table, but you are right that the symlink creation could fail\nbecause the new table file was never created or something, leaving the\nsymlink pointing to nothing. Not sure how to address this. Is there a\nway to update a flat file when a single table changes?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:12:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010429 20:14] wrote:\n\n> Yes, I like that idea, but the problem is that it is hard to update just\n> one table in the file. You sort of have to update the entire file each\n> time a table changes. That is why I liked symlinks because they are\n> per-table, but you are right that the symlink creation could fail\n> because the new table file was never created or something, leaving the\n> symlink pointing to nothing. Not sure how to address this. Is there a\n> way to update a flat file when a single table changes?\n\nSort of, if that flat file is in the form of:\n123456;\"tablename \"\n000033;\"another_table \"\n\nie, each line is a fixed length.\n\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Sun, 29 Apr 2001 20:17:28 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010429 20:14] wrote:\n> \n> > Yes, I like that idea, but the problem is that it is hard to update just\n> > one table in the file. You sort of have to update the entire file each\n> > time a table changes. That is why I liked symlinks because they are\n> > per-table, but you are right that the symlink creation could fail\n> > because the new table file was never created or something, leaving the\n> > symlink pointing to nothing. Not sure how to address this. 
Is there a\n> > way to update a flat file when a single table changes?\n> \n> Sort of, if that flat file is in the form of:\n> 123456;\"tablename \"\n> 000033;\"another_table \"\n> \n> ie, each line is a fixed length.\n> \n\nYea, after I posted, I realized that using a fixed length line would\nsolve the problem. The larger problem, though, I think, is concurrency.\nCan multiple backends update that single flat file reliably? I suppose\nthey could do append-only to the file, and you could grab the last\nentry, but again, sometimes it is rolled back, so I think there has to\nbe a way to clean it up.\n\nOf course, Tom or Vadim may come along and say this is a stupid idea,\nand we would be done discussing it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:20:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> > I can even think of a situation, as unlikely as it can be, where this\n> > could happen ... run out of inodes on the file system ... last inode used\n> > by the table, no inode to stick the symlink onto ...\n>\n>\n> If you run out of inodes, you are going to have much bigger problems\n> than symlinks. Sort file creation would fail too.\n>\n> >\n> > its a remote situation, but I've personally had it happen ...\n> >\n> > I'd personally prefer to see some text file created in the database\n> > directory itself that contains the mappings ... so that each time there is\n> > a change, it just redumps that data to the dext file ... less to maintain\n> > overall ...\n>\n> Yes, I like that idea, but the problem is that it is hard to update just\n> one table in the file. 
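A minimal sketch of how the fixed-length layout above allows patching a single entry with a seek instead of rewriting the whole file; the record width and the `write_rec` helper are illustrative assumptions only:

```python
import os

RECLEN = 40  # 6-digit filenode + ';' + quoted 30-char name + newline

def format_rec(filenode, name):
    # Pad the name so every record is exactly RECLEN bytes.
    return ('%06d;"%-30s"\n' % (filenode, name)).encode('ascii')

def write_rec(path, recno, filenode, name):
    # Because every record is exactly RECLEN bytes, record n starts at
    # byte n * RECLEN and can be overwritten in place.
    mode = 'r+b' if os.path.exists(path) else 'wb'
    with open(path, mode) as f:
        f.seek(recno * RECLEN)
        f.write(format_rec(filenode, name))
```

Renaming the table in slot 0 is then a single in-place write: `write_rec(path, 0, 123456, 'renamed')` touches only bytes 0..39 and leaves every other record alone (the concurrency worry raised in the thread still applies, of course).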
You sort of have to update the entire file each\n> time a table changes. That is why I liked symlinks because they are\n> per-table, but you are right that the symlink creation could fail\n> because the new table file was never created or something, leaving the\n> symlink pointing to nothing. Not sure how to address this. Is there a\n> way to update a flat file when a single table changes?\n\nWhy not just dump the whole file? That way, if a previous dump failed for\nwhatever reason, the new dump would correct that omission ...\n\nThen again, why not some sort of 'lsdb' command that looks at where it is\nand gives you info as appropriate?\n\nif in data/base, then do a connect to template1 using postgres so that you\ncan dump and parse the raw data from pg_database ... if in a directory,\nyou should be able to connect to that database in a similar way to grab\nthe contents of pg_class ...\n\nno server would need to be running for this to work, and if it was\nreadonly, it should be workable if a server is running, no?\n\n", "msg_date": "Mon, 30 Apr 2001 00:21:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> > Yes, I like that idea, but the problem is that it is hard to update just\n> > one table in the file. You sort of have to update the entire file each\n> > time a table changes. That is why I liked symlinks because they are\n> > per-table, but you are right that the symlink creation could fail\n> > because the new table file was never created or something, leaving the\n> > symlink pointing to nothing. Not sure how to address this. Is there a\n> > way to update a flat file when a single table changes?\n> \n> Why not just dump the whole file? That way, if a previous dump failed for\n> whatever reason, the new dump would correct that omission ...\n\nYes, you can do that, but it is only updated during a dump, right? 
\nMakes it hard to use during the day, no?\n\n> \n> Then again, why not some sort of 'lsdb' command that looks at where it is\n> and gives you info as appropriate?\n\n\nI want to do that for oid2name. I had the plan laid out, but never got\nto it.\n\n> \n> if in data/base, then do a connect to template1 using postgres so that you\n> can dump and parse the raw data from pg_database ... if in a directory,\n> you should be able to connect to that database in a similar way to grab\n> the contents of pg_class ...\n> \n> no server would need to be running for this to work, and if it was\n> readonly, it should be workable if a server is running, no?\n\nI think parsing the file contents is too hard. The database would have\nto be running and I would use psql.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:24:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "2 points:\n\n- I thought that a big part of the reason we got rid of filenames was\nso we would use arbitrary table / db names that were not restricted by\nthe file system / OS. Using links would then return this restriction.\n\n- What is the format for the table? Could we write a tool that can\nread the tables raw in case of a 'HARD' crash? One could then walk the\ntable by hand as needed. Can someone give me information on the schema\nfor the files? I'll take a look at it. There may also be a way to\nthen use WAL files to do some more serious recovery.\n\nThoughts?\n\n- brandon\n\n\n\n", "msg_date": "Sun, 29 Apr 2001 23:24:46 -0400", "msg_from": "\"B. 
Palmer\" <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "I could even see a utility that does a dump of this info into a flat file,\nentirely overwriting the file every time.\n\nThis would be quick to reference and usable in a meltdown scenario. Could\neasily be incorporated into vacuum and other db maintenance cron scripts.\n\n-Casey\n\n\nBruce Momjian wrote:\n\n>>> Yes, I like that idea, but the problem is that it is hard to update just\n>>> one table in the file. You sort of have to update the entire file each\n>>> time a table changes. That is why I liked symlinks because they are\n>>> per-table, but you are right that the symlink creation could fail\n>>> because the new table file was never created or something, leaving the\n>>> symlink pointing to nothing. Not sure how to address this. Is there a\n>>> way to update a flat file when a single table changes?\n>> \n>> Why not just dump the whole file? That way, if a previosu dump failed for\n>> whatever reason, the new dump would correct that omission ...\n> \n> \n> Yes, you can do that, but it is only updated during a dump, right? \n> Makes it hard to use during the day, no?\n> \n> \n>> Then again, why not some sort of 'lsdb' command that looks at where it is\n>> and gives you info as appropriate?\n> \n> \n> \n> I want to do that for oid2name. I had the plan layed out, but never got\n> to it.\n> \n> \n>> if in data/base, then do a connect to template1 using postgres so that you\n>> can dump and parse the raw data from pg_database ... if in a directory,\n>> you should be able to connect to that database in a similar way to grab\n>> the contents of pg_class ...\n>> \n>> no server would need to be running for this to work, and if it was\n>> readonly, it should be workable if a server is running, no?\n> \n> \n> I think parsing the file contents is too hard. 
The database would have\n> to be running and I would use psql.\n\n", "msg_date": "Sun, 29 Apr 2001 23:30:47 -0400", "msg_from": "Casey Lyon <casey@earthcars.com>", "msg_from_op": true, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "Bruce Momjian wrote:\n\n \n> The problem here is that now we don't have commit status in the index\n> rows, so they have to check the heap for every row. One idea is to\n> update the index status on an index scan, and if we can do that, we can\n> easily use the index. However, the table scan is pretty quick.\n\nIt certainly works quickly for smaller tables, however the 21.7 million\nrecord table I ran this on takes a touch longer as shown here:\n\ndatabase=# explain select count(*) from table;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=478056.20..478056.20 rows=1 width=0)\n -> Seq Scan on table (cost=0.00..423737.76 rows=21727376 width=0)\n\nEXPLAIN\n\nHowever I noted explain provides rows as part of its data; from what\nI've seen this loses precision over time or with large data imports,\nthough; at least until the table is vacuumed again.\n\n-Casey\n\n", "msg_date": "Sun, 29 Apr 2001 23:38:53 -0400", "msg_from": "Casey Lyon <casey@earthcars.com>", "msg_from_op": true, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> > > Yes, I like that idea, but the problem is that it is hard to update just\n> > > one table in the file. You sort of have to update the entire file each\n> > > time a table changes. That is why I liked symlinks because they are\n> > > per-table, but you are right that the symlink creation could fail\n> > > because the new table file was never created or something, leaving the\n> > > symlink pointing to nothing. Not sure how to address this. Is there a\n> > > way to update a flat file when a single table changes?\n> >\n> > Why not just dump the whole file? 
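One hedged refinement of the dump-the-whole-file approach: write to a temporary file and rename it into place, so a dump that dies halfway leaves the previous copy intact. The `dump_mapping` helper and the record format are assumptions for illustration; rename within one directory is atomic on POSIX filesystems:

```python
import os

def dump_mapping(path, mapping):
    """Atomically rewrite the whole filenode -> name file."""
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        for filenode in sorted(mapping):
            f.write('%d;"%s"\n' % (filenode, mapping[filenode]))
        f.flush()
        os.fsync(f.fileno())  # make sure the new copy hits disk first
    os.rename(tmp, path)      # atomic swap: readers see old or new, never half
```

A crashed dump leaves only a stale `.tmp` behind, and the next full dump simply overwrites it, which matches the "the new dump would correct that omission" behavior described above.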
That way, if a previosu dump failed for\n> > whatever reason, the new dump would correct that omission ...\n>\n> Yes, you can do that, but it is only updated during a dump, right?\n> Makes it hard to use during the day, no?\n>\n> >\n> > Then again, why not some sort of 'lsdb' command that looks at where it is\n> > and gives you info as appropriate?\n>\n>\n> I want to do that for oid2name. I had the plan layed out, but never got\n> to it.\n>\n> >\n> > if in data/base, then do a connect to template1 using postgres so that you\n> > can dump and parse the raw data from pg_database ... if in a directory,\n> > you should be able to connect to that database in a similar way to grab\n> > the contents of pg_class ...\n> >\n> > no server would need to be running for this to work, and if it was\n> > readonly, it should be workable if a server is running, no?\n>\n> I think parsing the file contents is too hard. The database would have\n> to be running and I would use psql.\n\nI don't know, I recovered someone's database using a \"raw\" connection ...\nwasn't that difficult once I figured out the format *shrug*\n\nthe following gets the oid,relname's for a database in the format:\n\necho \"select oid,relname from pg_class\" | postgres -L -D /usr/local/pgsql/data eceb | egrep \"oid|relname\"\n\nthen just parse the output using a simple perl script:\n\n 1: oid = \"163338\" (typeid = 26, len = 4, typmod = -1, byval = t)\n 2: relname = \"auth_info_uid_key\" (typeid = 19, len = 32, typmod = -1, byval = f)\n 1: oid = \"163341\" (typeid = 26, len = 4, typmod = -1, byval = t)\n 2: relname = \"auth_info_id\" (typeid = 19, len = 32, typmod = -1, byval = f)\n 1: oid = \"56082\" (typeid = 26, len = 4, typmod = -1, byval = t)\n 2: relname = \"auth_info\" (typeid = 19, len = 32, typmod = -1, byval = f)\n\nthe above won't work on a live database, did try that, so best is to test\nfor a connection first, and this would be a fall back ... 
but you'd at\nleast have a live *and* non live way of parsing the data *shrug*\n\n", "msg_date": "Mon, 30 Apr 2001 00:42:24 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "\n----- Original Message ----- \nFrom: Alfred Perlstein <bright@wintelcom.net>\nTo: Bruce Momjian <pgman@candle.pha.pa.us>\nCc: The Hermit Hacker <scrappy@hub.org>; Casey Lyon <casey@earthcars.com>; <pgsql-hackers@postgresql.org>\nSent: Sunday, April 29, 2001 11:17 PM\nSubject: Re: [HACKERS] Thanks, naming conventions, and count()\n\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [010429 20:14] wrote:\n> \n> > Yes, I like that idea, but the problem is that it is hard to update just\n> > one table in the file. You sort of have to update the entire file each\n> > time a table changes. That is why I liked symlinks because they are\n> > per-table, but you are right that the symlink creation could fail\n> > because the new table file was never created or something, leaving the\n> > symlink pointing to nothing. Not sure how to address this. Is there a\n> > way to update a flat file when a single table changes?\n> \n> Sort of, if that flat file is in the form of:\n> 123456;\"tablename \"\n> 000033;\"another_table \"\n> \n> ie, each line is a fixed length.\n\nWhat if have one such a line in separate file in one dir?\nThen there is no restriction on field length, you don't need\nto dump the file each time and maintain the real .symlinks.\n\nThe 'lsdb' command (courtesy of The Hermit Hacker :))\nwill assemble all of them together and will show the DBA\nwhere to look for a specific table.\nFile names can be your OIDs again, and just keep\ntable name inside the file. 
Keep these files under\na certain dir, and let the lsdb display them appropriately\nwhen needed.\n\nOr another idea is to create 'deferred' symlinks.\nThe (real) symlinks only created when DBA issues the 'lsdb'\ncommand and lists them, and this list is maintained only\nwhen the 'lsdb' is invoked....\n\nMaybe this sounds stupid, but just a thought... \n\nSerguei\n\n\n\n\n\n\n", "msg_date": "Sun, 29 Apr 2001 23:42:29 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> It certainly works quickly for smaller tables, however the 21.7 million\n> record table I ran this on takes a touch longer as shown here:\n> \n> database=# explain select count(*) from table;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=478056.20..478056.20 rows=1 width=0)\n> -> Seq Scan on table (cost=0.00..423737.76 rows=21727376 width=0)\n> \n> EXPLAIN\n> \n> However I noted explain provides rows as part of it's data; from what\n> I've seen this loses precision over time or with large data imports,\n> though; at least until the table is vacuumed again.\n\nI guess I was saying that an index scan could take longer because it has\nto walk the btree. However it only has one column of the table, so it\nmay be faster. I never measured the two, but the heap access needed for\nthe index scan currently is a performance killer. Sequential is faster\nthan all those random heap lookups from the index.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:43:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> > I think parsing the file contents is too hard. 
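The "simple perl script" Marc mentions for pairing up the standalone-backend output is easy to reconstruct; here is a hedged Python equivalent, written against the sample `oid`/`relname` lines shown earlier in the thread:

```python
import re

# Matches lines like:  1: oid = "163338" (typeid = 26, ...)
FIELD_RE = re.compile(r'\d+:\s+(\w+)\s+=\s+"([^"]*)"')

def parse_backend_output(text):
    """Pair 'oid = ...' / 'relname = ...' lines from a standalone
    `postgres -L` session into (oid, relname) tuples."""
    pairs, oid = [], None
    for match in FIELD_RE.finditer(text):
        field, value = match.groups()
        if field == 'oid':
            oid = int(value)
        elif field == 'relname' and oid is not None:
            pairs.append((oid, value))
            oid = None
    return pairs
```

Feeding it the two sample lines from Marc's message yields `[(163338, 'auth_info_uid_key')]`.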
The database would have\n> > to be running and I would use psql.\n> \n> I don't know, I recovered someone's database using a \"raw\" connection ...\n> wasn't that difficult once I figured out the format *shrug*\n> \n> the following gets the oid,relname's for a database in the format:\n> \n> echo \"select oid,relname from pg_class\" | postgres -L -D /usr/local/pgsql/data eceb | egrep \"oid|relname\"\n> \n> then just parse the output using a simple perl script:\n> \n> 1: oid = \"163338\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info_uid_key\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> 1: oid = \"163341\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info_id\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> 1: oid = \"56082\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info\" (typeid = 19, len = 32, typmod = -1, byval = f)\n\nOh, you did a direct postgres backend connect. Yes, that will work\nfine. Good idea if the postmaster is down. I originally thought you\nmeant reading the pg_class file raw. Of course, that would be really\nhard because there is no way to know what numeric file is pg_class!\n\nActually, seems it is always 1259. I see this in\ninclude/catalog/pg_class.h:\n\n\tDATA(insert OID = 1259 ( pg_class 83 PGUID 0 1259 0 0 0 0 f f r\n\t22 0 0 0 0 0 f f f _null_ ));\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:46:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "If this isn't incorporated into a utility, it would certainly be prime\nfor inclusion for the yet-to-be-written chapter 11 of the PG Admin Manual\n\"Database Recovery.\"\n\nThanks for your responses, -Casey\n\n\nThe Hermit Hacker wrote:\n\n> On Sun, 29 Apr 2001, Bruce Momjian wrote:\n> \n> \n>>>> Yes, I like that idea, but the problem is that it is hard to update just\n>>>> one table in the file. You sort of have to update the entire file each\n>>>> time a table changes. That is why I liked symlinks because they are\n>>>> per-table, but you are right that the symlink creation could fail\n>>>> because the new table file was never created or something, leaving the\n>>>> symlink pointing to nothing. Not sure how to address this. Is there a\n>>>> way to update a flat file when a single table changes?\n>>> \n>>> Why not just dump the whole file? That way, if a previosu dump failed for\n>>> whatever reason, the new dump would correct that omission ...\n>> \n>> Yes, you can do that, but it is only updated during a dump, right?\n>> Makes it hard to use during the day, no?\n>> \n>> \n>>> Then again, why not some sort of 'lsdb' command that looks at where it is\n>>> and gives you info as appropriate?\n>> \n>> \n>> I want to do that for oid2name. I had the plan layed out, but never got\n>> to it.\n>> \n>> \n>>> if in data/base, then do a connect to template1 using postgres so that you\n>>> can dump and parse the raw data from pg_database ... 
if in a directory,\n>>> you should be able to connect to that database in a similar way to grab\n>>> the contents of pg_class ...\n>>> \n>>> no server would need to be running for this to work, and if it was\n>>> readonly, it should be workable if a server is running, no?\n>> \n>> I think parsing the file contents is too hard. The database would have\n>> to be running and I would use psql.\n> \n> \n> I don't know, I recovered someone's database using a \"raw\" connection ...\n> wasn't that difficult once I figured out the format *shrug*\n> \n> the following gets the oid,relname's for a database in the format:\n> \n> echo \"select oid,relname from pg_class\" | postgres -L -D /usr/local/pgsql/data eceb | egrep \"oid|relname\"\n> \n> then just parse the output using a simple perl script:\n> \n> 1: oid = \"163338\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info_uid_key\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> 1: oid = \"163341\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info_id\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> 1: oid = \"56082\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> 2: relname = \"auth_info\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> \n> the above won't work on a live database, did try that, so best is to test\n> for a connection first, and this would be a fall back ... 
but you'd at\n> least have a live *and* non live way of parsing the data *shrug*\n\n", "msg_date": "Sun, 29 Apr 2001 23:50:04 -0400", "msg_from": "Casey Lyon <casey@earthcars.com>", "msg_from_op": true, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "\nHere is what I suggested for oid2name to do with file names:\n\n---------------------------------------------------------------------------\n\nJust seems like a major pain; not worth the work.\n\nIf you do a ls and pipe it, here is what you would need to do:\n\n- find out where $PWD is\n- in that database (found from PID), for each file in the dir, look it\nup using oid2name\n- print that out\n\nproblems:\n- ls -l vs ls\n- column are different for differing OSs / filesystems\n- du will REALLY suck\n- what if the user tries to do \"ls /var/postgres/data/base/12364\" Will\nyou try to parse out the request? Ugh. no thanks.\n\nI also don't think people will have much reason to use the script.\noid2name will have little enough use, what use will the script have? Who\nknows.. I guess keep it on back burner till there is a demand.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 29 Apr 2001 23:54:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> big problem is that there is no good way to make the symlinks reliable\n> because in a crash, the symlink could point to a table creation that got\n> rolled back or the renaming of a table that got rolled back.\n\nYes. Have you already forgotten the very long discussion we had about\nthis some months back? 
There is no way to provide a reliable symlink\nmapping without re-introducing all the same problems that we went to\nnumeric filenames to avoid. Now if you want an *UNRELIABLE* symlink\nmapping, maybe we could talk about it ... but IMHO such a feature would\nbe worse than useless. Murphy's law says that the symlinks would be\nright often enough to mislead dbadmins into trusting them, and wrong\nexactly when it would do the most damage to trust them. 
The same goes\n> for other methods of unreliably exporting the name-to-number mapping,\n> such as dumping it into a flat file.\n> \n> We do need to document how to get the mapping (ie, select relfilenode,\n> relname from pg_class). But I really doubt that an automated method\n> for exporting the mapping would be worth the cycles it would cost,\n> even if it could be made reliable which it can't.\n\nPerhaps an external tool to rebuild the symlink state that could be\nrun on an offline database. But I'm sure you have more important\nthings to do. :)\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Sun, 29 Apr 2001 23:17:20 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, 29 Apr 2001, Bruce Momjian wrote:\n\n> > > I think parsing the file contents is too hard. The database would have\n> > > to be running and I would use psql.\n> >\n> > I don't know, I recovered someone's database using a \"raw\" connection ...\n> > wasn't that difficult once I figured out the format *shrug*\n> >\n> > the following gets the oid,relname's for a database in the format:\n> >\n> > echo \"select oid,relname from pg_class\" | postgres -L -D /usr/local/pgsql/data eceb | egrep \"oid|relname\"\n> >\n> > then just parse the output using a simple perl script:\n> >\n> > 1: oid = \"163338\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> > 2: relname = \"auth_info_uid_key\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> > 1: oid = \"163341\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> > 2: relname = \"auth_info_id\" (typeid = 19, len = 32, typmod = -1, byval = f)\n> > 1: oid = \"56082\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> > 2: relname = \"auth_info\" (typeid = 19, len = 32, typmod = -1, byval = f)\n>\n> Oh, you did a direct postgres backend connect. 
Yes, that will work\n> fine. Good idea if the postmaster is down. I originally thought you\n> meant reading the pg_class file raw. Of course, that would be really\n> hard because there is no way to know what numeric file is pg_class!\n\nBut would it work on a crashed database that won't come up or doesn't\nthe direct connect care about any other tables in this usage?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 30 Apr 2001 06:03:22 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n>> Oh, you did a direct postgres backend connect. Yes, that will work\n>> fine. Good idea if the postmaster is down. I originally thought you\n>> meant reading the pg_class file raw. Of course, that would be really\n>> hard because there is no way to know what numeric file is pg_class!\n\n> But would it work on a crashed database that won't come up\n\nNo.\n\nIt's not that hard to know \"which numeric file is pg_class\" --- that\ninfo has to be hard-wired in at some level. (The backends cannot learn\npg_class's own relfilenode number by examining its pg_class entry...)\n\nIt might be worth making a simple utility (could be based on Bryan\nWhite's pg_check) to grovel through the raw pg_class bits and extract\nrelfilenode info the hard way. 
You'd only need it in certain disaster\nscenarios, but when you did need it you'd need it bad.\n\nSo far we have not seen a report of a situation where this seemed to be\nuseful, so I'm not that excited about having it... WAL dump and\ninterrogation utilities are higher on my want list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 11:44:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count() " }, { "msg_contents": "> It might be worth making a simple utility (could be based on Bryan\n> White's pg_check) to grovel through the raw pg_class bits and extract\n> relfilenode info the hard way. You'd only need it in certain disaster\n> scenarios, but when you did need it you'd need it bad.\n> \n> So far we have not seen a report of a situation where this seemed to be\n> useful, so I'm not that excited about having it... WAL dump and\n> interrogation utilities are higher on my want list.\n\nOK, updated TODO item:\n\n\t* Add table name mapping for numeric file names\n\nI removed the symlink mention, and I agree it is low priority. No one is\nreally asking for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 30 Apr 2001 11:56:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > I can even think of a situation, as unlikely as it can be, where this\n> > could happen ... run out of inodes on the file system ... last inode used\n> > by the table, no inode to stick the symlink onto ...\n> \n> If you run out of inodes, you are going to have much bigger problems\n> than symlinks. 
Sort file creation would fail too.\n> \n> >\n> > its a remote situation, but I've personally had it happen ...\n> >\n> > I'd personally prefer to see some text file created in the database\n> > directory itself that contains the mappings ... so that each time there is\n> > a change, it just redumps that data to the dext file ... less to maintain\n> > overall ...\n> \n> Yes, I like that idea, but the problem is that it is hard to update just\n> one table in the file.\n\nwhy not have just one ever-growing file that is only appended to and\nthat has \nlines of form \n\nOID, type (DB/TABLE/INDEX/...), name, time\n\nso when you need tha actual info you grep for name and use tha last line\nwhose \nfile actually exists. Not too convenient but useful enough when you\nreally need it.\n\n-------------------\nHannu\n", "msg_date": "Wed, 02 May 2001 11:58:38 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "On Sun, Apr 29, 2001 at 08:17:28PM -0700, Alfred Perlstein wrote:\n> Sort of, if that flat file is in the form of:\n> 123456;\"tablename \"\n> 000033;\"another_table \"\n\nOr better yet, since the flat file is unlikely to be large, you could\njust do this dance:\n\n1) open file for reading\n2) flock() file exclusively, non-blocking.\n3) If 2 failed, sleep a bit, then go back to 1, otherwise open new file\n for writing\n4) Write out new file\n5) rename() the temp file over the new file\n6) close files, etc\n\nThat way, you'll never have the race of 2 programs trying to write the file\nat a time (therefore losing changes), and you get total atomicity of the\nwriting.\n\nYou could also do it with an open(O_EXCL) on a fixed temp file, instead of\nthe flock() call. The semantics should be the same.\n\nOf course, you could always fork() a child to handle this in the background,\nas it's hardly important to the running of the database. 
(Or if it is, it\ncan become part of the transaction, which means that at rename() time, there\nmust be no room for other failures, but it mustn't be too late to roll back)\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Wed, 2 May 2001 21:28:11 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" }, { "msg_contents": "> > Yes, I like that idea, but the problem is that it is hard to update just\n> > one table in the file.\n> \n> why not have just one ever-growing file that is only appended to and\n> that has \n> lines of form \n> \n> OID, type (DB/TABLE/INDEX/...), name, time\n> \n> so when you need tha actual info you grep for name and use tha last line\n> whose \n> file actually exists. Not too convenient but useful enough when you\n> really need it.\n\nYes, that is one idea, but it is hard to undo a change. You would have\nto write to the file only on a commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 10:56:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Thanks, naming conventions, and count()" } ]
[ { "msg_contents": "\nOkay, maybe this query isn't quite as simple as I think it is, but does\nthis raise any flags for anyone? How did I get into a COPY? It appears\nre-creatable, as I've done it twice so far ...\n\neceb=# select e.idnumber,e.password from egi e, auth_info a where e.idnumber != a.idnumber;\nBackend sent D message without prior T\nBackend sent D message without prior T\nBackend sent D message without prior T\nBackend sent D message without prior T\nBackend sent D message without prior T\nBackend sent D message without prior T\nBackend sent D message without prior T\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself.\n>> \\.\nUnknown protocol character 'J' read from backend. (The protocol character is the first character the backend sends in response to a query it receives).\nPQendcopy: resetting connection\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 30 Apr 2001 01:37:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.1 error ... SELECT converted to a COPY?" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Okay, maybe this query isn't quite as simple as I think it is, but does\n> this raise any flags for anyone? How did I get into a COPY? It appears\n> re-creatable, as I've done it twice so far ...\n\n> eceb=# select e.idnumber,e.password from egi e, auth_info a where e.idnumber != a.idnumber;\n> Backend sent D message without prior T\n> Backend sent D message without prior T\n\nAt a guess, you're running out of memory on the client side for the\nSELECT results (did you really want a not-equal rather than equal\nconstraint there!?) --- libpq tends not to cope with this too\ngracefully. Someone oughta fix that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 00:54:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1 error ... SELECT converted to a COPY? " }, { "msg_contents": "On Mon, 30 Apr 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Okay, maybe this query isn't quite as simple as I think it is, but does\n> > this raise any flags for anyone? How did I get into a COPY? It appears\n> > re-creatable, as I've done it twice so far ...\n>\n> > eceb=# select e.idnumber,e.password from egi e, auth_info a where e.idnumber != a.idnumber;\n> > Backend sent D message without prior T\n> > Backend sent D message without prior T\n>\n> At a guess, you're running out of memory on the client side for the\n> SELECT results (did you really want a not-equal rather than equal\n> constraint there!?)\n\nYup, want to figure out which ones are in the egi table that I hadn't\ntransfer'd over yet ... tried it with a NOT IN ( SELECT ... ) combination,\nbut an explain of that showed two sequential searches on the tables, so am\nworking on fixing that ...\n\n\n\n", "msg_date": "Mon, 30 Apr 2001 01:57:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.1 error ... SELECT converted to a COPY? " }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 30 Apr 2001, Tom Lane wrote:\n> \n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > Okay, maybe this query isn't quite as simple as I think it is, but does\n> > > this raise any flags for anyone? How did I get into a COPY? It appears\n> > > re-creatable, as I've done it twice so far ...\n> >\n> > > eceb=# select e.idnumber,e.password from egi e, auth_info a where e.idnumber != a.idnumber;\n> > > Backend sent D message without prior T\n> > > Backend sent D message without prior T\n> >\n> > At a guess, you're running out of memory on the client side for the\n> > SELECT results (did you really want a not-equal rather than equal\n> > constraint there!?)\n> \n> Yup, want to figure out which ones are in the egi table that I hadn't\n> transfer'd over yet ... tried it with a NOT IN ( SELECT ... ) combination,\n> but an explain of that showed two sequential searches on the tables,\n\ndid you do it as\n\nselect e.idnumber,e.password from egi e\n where e.idnumber not in (select idnumber from auth_info a where\ne.idnumber = a.idnumber)\n;\n\nto smarten up the optimizer about using a join ?\n\nI guess that it can be done using outer joins and testing the \"outer2\npart for IS NULL in 7.1\n\n-------------------\nHannu\n", "msg_date": "Wed, 02 May 2001 11:27:02 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: v7.1 error ... SELECT converted to a COPY?" } ]
[ { "msg_contents": "It would be very helpful if the COPY command could be expanded\nin order to provide positional parameters.\n\nI noticed that it didn't a while back and it can really hurt\nsomeone when they happen to try to use pg_dump to move data\nfrom one database to another database and they happened to\ncreate the feilds in the tables in different orders.\n\nBasically:\nCOPY \"webmaster\" FROM stdin;\n\ncould become:\nCOPY \"webmaster\" FIELDS \"id\", \"name\", \"ssn\" FROM stdin;\n\nthis way when sourcing it would know where to place the\nfeilds.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Mon, 30 Apr 2001 02:35:51 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": true, "msg_subject": "COPY commands could use an enhancement." }, { "msg_contents": "On Mon, 30 Apr 2001, Alfred Perlstein wrote:\n\n> Basically:\n> COPY \"webmaster\" FROM stdin;\n> \n> could become:\n> COPY \"webmaster\" FIELDS \"id\", \"name\", \"ssn\" FROM stdin;\n\nWe'd need some way of making field name dumping optional, because\none of the nice things about not having the field names appear is that\nI can dump, change the field names, and re-slurp in the old dump.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Mon, 30 Apr 2001 09:25:25 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement." }, { "msg_contents": "* Joel Burton <jburton@scw.org> [010430 06:26] wrote:\n> On Mon, 30 Apr 2001, Alfred Perlstein wrote:\n> \n> > Basically:\n> > COPY \"webmaster\" FROM stdin;\n> > \n> > could become:\n> > COPY \"webmaster\" FIELDS \"id\", \"name\", \"ssn\" FROM stdin;\n> \n> We'd need some way of making field name dumping optional, because\n> one of the nice things about not having the field names appear is that\n> I can dump, change the field names, and re-slurp in the old dump.\n\nOf course!\n\nI meant this as an additional option, not as a replacement.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Mon, 30 Apr 2001 06:29:23 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": true, "msg_subject": "Re: COPY commands could use an enhancement." }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> It would be very helpful if the COPY command could be expanded\n> in order to provide positional parameters.\n\nI think it's a bad idea to try to expand COPY into a full-tilt data\nimport/conversion utility, which is the direction that this sort of\nsuggestion is headed in. COPY is designed as a simple, fast, reliable,\nlow-overhead data transfer mechanism for backup and restore. The more\nwarts we add to it, the less well it will serve that purpose.\n\nExample: if we allow selective column import, what do we do with missing\ncolumns? Must COPY now be able to handle insertion of default-value\nexpressions?\n\nI think it'd be better to put effort into an external data translation\nutility that can deal with column selection, data reformatting, CR/LF\nconversion, and all those other silly little issues that come up when\nyou need to move data from one DBMS to another. Sure, we could make\nthe backend do some of this stuff, but it'd be more maintainable as a\nseparate program ... IMHO anyway. I think that pgaccess and pgadmin\nalready have some capability in this line, BTW.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 11:36:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010430 08:37] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > It would be very helpful if the COPY command could be expanded\n> > in order to provide positional parameters.\n> \n> I think it's a bad idea to try to expand COPY into a full-tilt data\n> import/conversion utility, which is the direction that this sort of\n> suggestion is headed in. COPY is designed as a simple, fast, reliable,\n> low-overhead data transfer mechanism for backup and restore. The more\n> warts we add to it, the less well it will serve that purpose.\n\nHonestly it would be hard for COPY to be any more less serving of\npeople's needs, it really makes sense for it to be able to parse\npositional paramters for both speed and correctness.\n\n> Example: if we allow selective column import, what do we do with missing\n> columns?\n\nWhat is already done, if you initiate a copy into a 5 column table\nusing only 4 columns of copy data the fifth is left empty.\n\n> Must COPY now be able to handle insertion of default-value\n> expressions?\n\nNo, copy should be what it is simple but at the same time useful\nenough for bulk transfer without painful contortions and fear\nof modifying tables.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nRepresent yourself, show up at BABUG http://www.babug.org/\n", "msg_date": "Mon, 30 Apr 2001 09:30:04 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": true, "msg_subject": "Re: COPY commands could use an enhancement." }, { "msg_contents": "> Alfred Perlstein <bright@wintelcom.net> writes:\n> > It would be very helpful if the COPY command could be expanded\n> > in order to provide positional parameters.\n> \n> I think it's a bad idea to try to expand COPY into a full-tilt data\n> import/conversion utility, which is the direction that this sort of\n> suggestion is headed in. COPY is designed as a simple, fast, reliable,\n> low-overhead data transfer mechanism for backup and restore. The more\n> warts we add to it, the less well it will serve that purpose.\n\nWhat is really cool is Informix's UNLOAD/LOAD commands. It combines\nCOPY with SELECT/INSERT:\n\n\tUNLOAD TO '/tmp/x'\n\tSELECT * FROM tab\n\nand LOAD is similar:\n\n\tLOAD FROM '/tmp/x'\n\tINSERT INTO TAB\n\nThis leverages SELECT and INSERT's column and WHERE capabilities to do\nalmost anything you want with flat files. I think it is superior to our\nCOPY.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 30 Apr 2001 13:25:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement." }, { "msg_contents": "Karen saw me importing data into a database using pgaccess.\n\nAgain, this could be useful to someone that it is not a \"superuser\". \nBut only superusers can use pgaccess. What a shame :-(\n\nFernando\n\nP.S.: pgaccess has a much more limited import facility - only text files\nand you can only change the delimiter. But it can be expanded.\n\n\nTom Lane wrote:\n> \n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > It would be very helpful if the COPY command could be expanded\n> > in order to provide positional parameters.\n> \n> I think it's a bad idea to try to expand COPY into a full-tilt data\n> import/conversion utility, which is the direction that this sort of\n> suggestion is headed in. COPY is designed as a simple, fast, reliable,\n> low-overhead data transfer mechanism for backup and restore. The more\n> warts we add to it, the less well it will serve that purpose.\n> \n> Example: if we allow selective column import, what do we do with missing\n> columns? Must COPY now be able to handle insertion of default-value\n> expressions?\n> \n> I think it'd be better to put effort into an external data translation\n> utility that can deal with column selection, data reformatting, CR/LF\n> conversion, and all those other silly little issues that come up when\n> you need to move data from one DBMS to another. Sure, we could make\n> the backend do some of this stuff, but it'd be more maintainable as a\n> separate program ... IMHO anyway. I think that pgaccess and pgadmin\n> already have some capability in this line, BTW.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Mon, 30 Apr 2001 13:28:10 -0400", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement." }, { "msg_contents": "On Mon, 30 Apr 2001, Tom Lane wrote:\n\n> I think it'd be better to put effort into an external data translation\n> utility that can deal with column selection, data reformatting, CR/LF\n> conversion, and all those other silly little issues that come up when\n> you need to move data from one DBMS to another. Sure, we could make\n> the backend do some of this stuff, but it'd be more maintainable as a\n> separate program ... IMHO anyway. I think that pgaccess and pgadmin\n> already have some capability in this line, BTW.\n\nReal conversion should happen in userland.\n\nHowever, allowing people to COPY in a different order does prevent a\nuserland tool from having to re-arrange a dump file. (Of course, really,\nwith perl, re-ordering a dump file should take more than a few lines\nanyway.)\n\nAre there any generalized tools for re-ordering delimited columns, without\nhaving to use sed/perl/regexes, etc.?\n\nIf people can point to some best practices/ideas, I'd be happy to turn\nthem into a HOWTO.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Mon, 30 Apr 2001 18:17:44 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " }, { "msg_contents": "At 11:36 30/04/01 -0400, Tom Lane wrote:\n>\n>COPY is designed as a simple, fast, reliable,\n>low-overhead data transfer mechanism for backup and restore. The more\n>warts we add to it, the less well it will serve that purpose.\n>\n\nDo you have a alternate suggestion as to how to solve the problems it has\nbacking up the regression DB?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 01 May 2001 11:31:48 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Do you have a alternate suggestion as to how to solve the problems it has\n> backing up the regression DB?\n\nOne possibility is to fix ALTER TABLE ADD COLUMN to maintain the same\ncolumn ordering in parents and children.\n\nCOPY with specified columns may in fact be the best way to deal with\nthat particular issue, if pg_dump is all we care about fixing. However\nthere are a bunch of things that have a problem with it, not only\npg_dump. See thread over in committers about functions and inheritance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 22:06:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " } ]
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 01 May 2001 11:31:48 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Do you have a alternate suggestion as to how to solve the problems it has\n> backing up the regression DB?\n\nOne possibility is to fix ALTER TABLE ADD COLUMN to maintain the same\ncolumn ordering in parents and children.\n\nCOPY with specified columns may in fact be the best way to deal with\nthat particular issue, if pg_dump is all we care about fixing. However\nthere are a bunch of things that have a problem with it, not only\npg_dump. See thread over in committers about functions and inheritance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Apr 2001 22:06:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY commands could use an enhancement. " } ]
[ { "msg_contents": "Hi all,\n\nNot sure if this is useful, but it might be good to file and reference\nsomewhere.\n\nRegards and best wishes,\n\nJustin Clift\n\n-------- Original Message --------\nSubject: Re: [GENERAL] Unisersal B-Tree\nDate: Mon, 30 Apr 2001 17:59:49 +0200\nFrom: Jörg Schulz <jschulz@sgbs.de>\nOrganization: Gebäudereinigung Schulz\nTo: \"Justin Clift\" <justin@postgresql.org>\nReferences: <9cb797$642$1@news.tht.net>\n<3AEB9B33.EA886A2B@postgresql.org> <001b01c0d143$b9b33f40$0600a8c0@opal>\n<3AED60D9.37A9028A@postgresql.org>\n\n> Do you mind if I forward this email to the pgsql-hackers@postgresql.org\n> mailing list?\n>\nOf cource you can forward it. Maybe you can correct my bad english :-)\n\nJörg Schulz\n\n----- Original Message -----\nFrom: \"Justin Clift\" <justin@postgresql.org>\nTo: \"Jörg Schulz\" <jschulz@sgbs.de>\nSent: Monday, April 30, 2001 2:55 PM\nSubject: Re: [GENERAL] Unisersal B-Tree\n\n\n> Hi Jörg,\n>\n> I know we have indices and sub-indices, but this also sounds\n> interesting.\n>\n> Do you mind if I forward this email to the pgsql-hackers@postgresql.org\n> mailing list?\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> Jörg Schulz wrote:\n> >\n> > Hi Cliff,\n> >\n> > I've read an article in the german magazine c't (2001/1 P174) about this\n> > new astonishing method. After I realized that none of the major commercial\n> > databases implement this for now (afaik there is only one database on the\n> > market \"Transbase HyperCube\" www.transaction.de), I thought it would be a great\n> > chance for an open source database. I even think it's a \"must have feature\"\n> > in the near future.\n> >\n> > But what is it about? It can dramatically speed up queries that run over more\n> > than one index. Think of a query like this:\n> >\n> > select a,b,c from table where ( a>min_a and a<max_a ) and ( b>min_b and b<max_b )\n> >\n> > In a conventional implementation you have two indexes on attributes a and b.\n> > But to run this query the database engine profits only from one index. It has\n> > to run through all the values of the other. This gets even worse if you use more\n> > constraints, and this scheme is typical for things like OLAP.\n> >\n> > With the new methode you add one UB-index that embraces a and b. And you run\n> > only once through this index.\n> >\n> > There are a number of papers available under mistral.in.tum.de that explain\n> > the basic concepts.\n> >\n> > Regards,\n> >\n> > Jörg Schulz\n> >\n> > ----- Original Message -----\n> > From: \"Justin Clift\" <justin@postgresql.org>\n> > To: \"JXrg Schulz\" <jschulz@sgbs.de>\n> > Sent: Sunday, April 29, 2001 6:40 AM\n> > Subject: Re: [GENERAL] Unisersal B-Tree\n> >\n> > > Hi Jörg,\n> > >\n> > > What advantages do they have?\n> > >\n> > > Regards and best wishes,\n> > >\n> > > Justin Clift\n> > >\n> > > JXrg Schulz wrote:\n> > > >\n> > > > Are there any plans to implement UB-Trees\n> > > > multidimensional indexes?\n> > > >\n> > > > Jörg Schulz\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://www.postgresql.org/search.mpl\n> > >\n> > > --\n> > > \"My grandfather once told me that there are two kinds of people: those\n> > > who work and those who take the credit. He told me to try to be in the\n> > > first group; there was less competition there.\"\n> > > - Indira Gandhi\n> > >\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n", "msg_date": "Tue, 01 May 2001 08:53:31 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [GENERAL] Unisersal B-Tree]" }, { "msg_contents": "> ... Think of a query like this:\n> \n> select a,b,c from table where ( a>min_a and a<max_a ) and ( b>min_b and b<max_b )\n> \n> In a conventional implementation you have two indexes on attributes a and b.\n> But to run this query the database engine profits only from one index. It has\n> to run through all the values of the other. This gets even worse if you use more\n> constraints, and this scheme is typical for things like OLAP.\n> \n> With the new methode you add one UB-index that embraces a and b. And you run\n> only once through this index.\n\nAnd this is different from a multicolumn btree index how?\n\nI looked at the referenced website when this message first went by,\nand was unhappy at the apparently proprietary nature of the technology\n(not to mention the excessive hype ratio). I lost interest ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 15:56:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [GENERAL] Unisersal B-Tree] " } ]
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n", "msg_date": "Tue, 01 May 2001 08:53:31 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [GENERAL] Unisersal B-Tree]" }, { "msg_contents": "> ... Think of a query like this:\n> \n> select a,b,c from table where ( a>min_a and a<max_a ) and ( b>min_b and b<max_b )\n> \n> In a conventional implementation you have two indexes on attributes a and b.\n> But to run this query the database engine profits only from one index. It has\n> to run through all the values of the other. This gets even worse if you use more\n> constraints, and this scheme is typical for things like OLAP.\n> \n> With the new methode you add one UB-index that embraces a and b. And you run\n> only once through this index.\n\nAnd this is different from a multicolumn btree index how?\n\nI looked at the referenced website when this message first went by,\nand was unhappy at the apparently proprietary nature of the technology\n(not to mention the excessive hype ratio). I lost interest ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 15:56:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [GENERAL] Unisersal B-Tree] " } ]
[ { "msg_contents": "In a very volatile table I have a list of actions which need to be\ncompleted by external systems -- somewhat like a queue. I'd like the\naction row to be locked until it's completed so that multiple\nprocesses chewing away at it won't try to complete the same external\naction (easy enough -- exclusive lock on the row).\n\nNow, ORDER BY random() requires a full sort on the table -- even with\nLIMIT 1 in place which makes this operation quite heavy. A plain\nselect attempts to grab the same row each time as the row is always at\nthe top. Doing an update (to flag it) shouldn't be necessary and\nisn't effective.\n\nIdeally in my case, I could do: SELECT * FROM junk WHERE 'ROW NOT\nLOCKED' LIMIT 1\n\nAnyway to fake this type of thing? I've thought about SET TRANSACTION\nISOLATION LEVEL READ UNCOMMITTED (does that exist?), and doing an\nupdate to a flag while it's locked. Of course, I remove the row after\nusing it so that doesn't really affect anything. What I do want\nthough is the action to become available again if something doesn't\ncomplete.\n--\nRod Taylor\n BarChord Entertainment Inc.", "msg_date": "Tue, 1 May 2001 11:07:42 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": true, "msg_subject": "SELECT WHERE 'NOT LOCKED'?" } ]
[ { "msg_contents": "Hello to all\nI need help in the following one for the following error\n\n\"PQGETVALUE: ERROR! tuple number 0 is out of range 0.. -1\n Segment violation\"\n\nThis happens when I make a pg_dump namebd > namebd.dump\n\nI work posgresql 7.0.3 with mandrake 7.2\n\nA thousand thank you\n\nAntonio Acu�a\n\n\n\n", "msg_date": "Tue, 1 May 2001 13:32:21 -0500", "msg_from": "\"Antonio Jose Acu���a Jimenez\" <aj_acuna@hotmail.com>", "msg_from_op": true, "msg_subject": "\"PQgetvalue: ERROR!" }, { "msg_contents": "\"Antonio Jose Acu�a Jimenez\" <aj_acuna@hotmail.com> writes:\n> I need help in the following one for the following error\n\n> \"PQGETVALUE: ERROR! tuple number 0 is out of range 0.. -1\n> Segment violation\"\n\nPre-7.1 versions of pg_dump are not very robust about situations like\nfunctions whose owner doesn't exist anymore, tables that refer to\nnonexistent datatypes, that sort of thing. The above is not enough\nto narrow down the problem, however. Try starting the postmaster with\n-d2 so that you can get a log of pg_dump's queries; then look to see\nwhat's the last query processed before it crashes. That should let you\nfigure out which database item has the dangling reference.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 14:50:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"PQgetvalue: ERROR! " } ]
[ { "msg_contents": "hello \nWould you tell me how many characters we can have as a string field?\nThanks a lot.\n", "msg_date": "Tue, 1 May 2001 15:11:31 -0400", "msg_from": "\"Rosie Sedghi\" <rosie@macadamian.com>", "msg_from_op": true, "msg_subject": "what is the limit for string" }, { "msg_contents": "On Tue, 1 May 2001, Rosie Sedghi wrote:\n\n> hello \n> Would you tell me how many characters we can have as a string field?\n> Thanks a lot.\n\nQuestions like this should be sent to pgsql-general or pgsql-novice.\n\nThere is no string field. There are CHAR, VARCHAR, TEXT, and a few other\nunusual text-type fields.\n\nLook at the section on data types in the User's Manual for info.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Fri, 4 May 2001 15:15:06 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: what is the limit for string" } ]
[ { "msg_contents": "Hi,\n\nProbably a fairly simple question:\n\nFor various reasons, I'd like to implement a readonly\nbackend option. I've been poking around the backend\ncode, and am slowly getting a feel for it, but I'm far\nfrom sure of the best way to implement such an option.\n\nThere are basically two parts that I'd like to hack:\n\n1. To block insert/update/delete/create/alter/drop\n queries with a helpful message.\n2. The ability to open the datafiles read-only (WAL\n included) so that they can be held on a read-only\n filesystem.\n\nThe latter should be doable, but will require some\neffort and testing to catch everything. But where in\nthe backend should be checks for the former go? I\nguess that the executor might be a better place than\nthe parser, but should I go even lower? There seemed\nno obvious way for ExecAppend (for example) to return\nfailure. Do I just do an elog(NOTICE) and ignore the\nquery?\n\nI would also like for a readonly backend to be able\nsafely to coexist with a writable one. Might this be\npossible, or are there cases where a backend executing\na readonly transaction needs write something?\n\nMatthew.\n\n", "msg_date": "Tue, 1 May 2001 23:10:45 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Readonly backend option" } ]
[ { "msg_contents": "I find it hard to believe this crew's been quiet all day...\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 1 May 2001 19:57:32 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "No Posts?" }, { "msg_contents": "> I find it hard to believe this crew's been quiet all day...\n\nMe too. I was absolutely certain that postgresql.org was hosed up ;)\n\n - Thomas\n", "msg_date": "Wed, 02 May 2001 04:22:21 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: No Posts?" }, { "msg_contents": "On Wed, 2 May 2001, Thomas Lockhart wrote:\n\n> > I find it hard to believe this crew's been quiet all day...\n>\n> Me too. I was absolutely certain that postgresql.org was hosed up ;)\n\nWe're having a weird problem with the server ... one that started on\nMonday when we added an 18gig drive to the machine, and persists since\nwe've taken it out ...\n\nfor some *odd* reason, the NIC appears to be getting \"wedged\" ... a simple\n'ifconfig down/up' appears to fix it ... we're still investigating ...\n\n", "msg_date": "Wed, 2 May 2001 04:03:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re: No Posts?" } ]
[ { "msg_contents": "Ever need the ability to send mail from within PostgreSQL? Want more\nfunctionality out of your database? Need to increase your customer\noutreach while decreasing your engaged involvement? pgMail is the answer!\n\npgMail is simply a stored function written in TCL which takes 4 arguments\nof type text (Who is it from, who is it to, subject, and body of message),\ncontacts the email server via TCL sockets, and transmits your email.\n\nWhen used with triggers, pgMail can automagically send email when various\ncolumns in records are updated, rows are inserted, or even deleted. For\ninstance, if you run a e-commerce website which must ship inventory,\nemails can be sent when product is ordered, and then when product is\nshipped.\n\nThis can be used for various closed installations which might want to\nhandle all email sending from the database layer if the application layer\nis mixed, firewalled, or possibly just slow.\n\nView and download this gem at http://pgmail.sourceforge.net/. Thanks!\n\nFair Winds and Following Seas,\n\nBranden R. Williams <brw@brw.net>\nhttp://www.brw.net/\n--\nPGP Key: http://www.brw.net/brw.asc\n\n\n", "msg_date": "Tue, 1 May 2001 21:22:15 -0500 (CDT)", "msg_from": "\"Branden R. Williams\" <brw@brw.net>", "msg_from_op": true, "msg_subject": "pgMail 1.1 Released." }, { "msg_contents": "On Tue, 1 May 2001, Branden R. Williams wrote:\n\n> Ever need the ability to send mail from within PostgreSQL? Want more\n> functionality out of your database? Need to increase your customer\n> outreach while decreasing your engaged involvement? pgMail is the answer!\n> \n> View and download this gem at http://pgmail.sourceforge.net/. Thanks!\n\nThis is useful. Thanks!\n\n(there was a pgsendmail contribution several months ago, but I could never\nget it to compile, and the author seemed to drop off the face of the\nearth).\n\nWould you consider adding this the techdocs PL/pgSQL Cookbook at\ntechdocs.postgresql.org? 
People might notice it there that wouldn't see it\nhere.\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 3 May 2001 14:45:22 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: pgMail 1.1 Released." } ]
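To make the announcement's trigger use case concrete, here is a hypothetical sketch in 7.x-era syntax. The function name `pgmail` and its four-text-argument signature (from, to, subject, body) are inferred from the announcement's description and should be checked against the package; the table, column names, and addresses are invented for illustration:

```sql
-- Hypothetical example: mail the customer whenever an order row appears.
CREATE FUNCTION notify_order() RETURNS opaque AS '
BEGIN
    PERFORM pgmail(''shop@example.com'', NEW.email,
                   ''Order received'', ''Your order has been logged.'');
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER order_mail AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE notify_order();
```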
[ { "msg_contents": "Hi all,\n\nIs there a simple way to change database owner without disrupting the\nsystem.\n\nThe problem is that my collegue is leaving the company and he owns one a\nthe custommer databases.\n\nHow can I owe it myself (I'm superuser already).\n\nMany thanks,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 2 May 2001 13:51:21 +0200", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "change database owner" } ]
[ { "msg_contents": "Hi,\n\n[ resent, because I didn't see it appear yesterday ]\n\nProbably a fairly simple question:\n\nFor various reasons, I'd like to implement a readonly\nbackend option. I've been poking around the backend\ncode, and am slowly getting a feel for it, but I'm far\nfrom sure of the best way to implement such an option.\n\nThere are basically two parts that I'd like to hack:\n\n1. To block insert/update/delete/create/alter/drop\n queries with a helpful message.\n2. The ability to open the datafiles read-only (WAL\n included) so that they can be held on a read-only\n filesystem.\n\nThe latter should be doable, but will require some\neffort and testing to catch everything. But where in\nthe backend should be checks for the former go? I\nguess that the executor might be a better place than\nthe parser, but should I go even lower? There seemed\nno obvious way for ExecAppend (for example) to return\nfailure. Do I just do an elog(NOTICE) and ignore the\nquery?\n\nI would also like for a readonly backend to be able\nsafely to coexist with a writable one. Might this be\npossible, or are there cases where a backend executing\na readonly transaction needs write something?\n\nMatthew.\n\n\n", "msg_date": "Wed, 2 May 2001 15:12:58 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Readonly backend option" } ]
[ { "msg_contents": "\nI am unable to connect to the cvsup server. Are there problems?\n\n.. Otto\n\nOtto Hirr\nOLAB Inc.\notto.hirr@olabinc.com\n503 / 617-6595\n\n", "msg_date": "Wed, 2 May 2001 07:32:18 -0700", "msg_from": "\"Otto A. Hirr, Jr.\" <otto.hirr@olabinc.com>", "msg_from_op": true, "msg_subject": "CVSUP - Problems?" } ]
[ { "msg_contents": "\nIn the TODO, in the Type section, someone has proposed as 'separate SERIAL\ntype', which I assume is a type distinct from the SERIAL mapping to int\nnextval... that exists now.\n\nCan anyone elaborate on why this would be useful? I'm curious.\n\nI've tried searching the lists, but I find too many references to common\nquestions about the current SERIAL pseudotype to come up with much.\n\nThanks!\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Wed, 2 May 2001 11:01:47 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": true, "msg_subject": "Question re: TODO Item 'separate SERIAL type'" } ]
[ { "msg_contents": "Hello! I am a Technical Recruiter with MIS Consultants in Toronto Canada and I desperately need to find 2 POSTGRES DBA's for our Toronto client on a 3-4 month renewable contract, open $$$ based on experience.\n\nHow do I go about finding these guys?\n\nAny help is much appreciated, thanks!\n\nJeff Vainio, B.Comm\n*Technical Recruiter*\nMIS Consultants\n905-305-1455 Ext.#203\n1-800-311-2828\nFax#905-305-0033\nJeffv@misconsult.com \nwww.misconsult.com\n3190 Steeles Ave. East, Suite#120\nMarkham, Ontario\nL3R 1G9\n>\n____________________\n>\n>THIS E-MAIL MESSAGE IS FOR THE ADDRESSED PERSON ONLY. INITIAL INTRODUCTION OF A >CANDIDATE VIA E-MAIL OR FAX IS CONSIDERED BY LAW TO BE REPRESENTATED BY MIS >Consultants. OUR FEE IS DUE & PAYABLE WHEN A CANDIDATE IS HIRED BY YOUR >FIRM, A SUBSIDIARY OR DIVISION OR ANY OTHER FIRMS YOU REFER OUR CANDIDATE TO >WITHIN A 12 MONTH PERIOD.\n\n\n", "msg_date": "Wed, 2 May 2001 11:30:51 -0400", "msg_from": "Jeff Vainio <jeffv@misconsult.com>", "msg_from_op": true, "msg_subject": "help!" }, { "msg_contents": "On Wed, 2 May 2001, Jeff Vainio wrote:\n\n> Hello! I am a Technical Recruiter with MIS Consultants in Toronto Canada and I desperately need to find 2 POSTGRES DBA's for our Toronto client on a 3-4 month renewable contract, open $$$ based on experience.\n> \n> How do I go about finding these guys?\n\nJeff --\n\nThe *-hackers list is not the appropriate forum for this. This is for\ndiscussion of developing PostgreSQl, and for kvetching about users, and\nthings like that. :-)\n\n. There's a web page at techdocs.postgresql.org about people looking for\nPG consultants.\n\n. Call Great Bridge (greatbridge.com) or PostgreSQL Inc. (pgsql.com); they\nboth are commercial companies providing PG support. They might be able to\nshake loose someone. PG Inc. is in Canada, so they might be a great bet.\n\n. A short message to pgsql-general would get everyone's attention. 
I'm not\nsure how people feel about these kind of notices, though -- so, keep it\nshort, and obviously titled. \"help!\", for instance, should become \"Seeking\nPostgreSQL DBAs in Toronto, Canada\"\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Fri, 4 May 2001 13:47:37 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: help!" } ]
[ { "msg_contents": "This may be a reported bug. 7.1beta4.\n\nI use user names mostly as numbers. E.g. 1050, 1060, 1092.\nSometimes I got strange result when I try to reconnect:\n\ntir=> \\c - 1022\nYou are now connected as new user 1022.\ntir=> select user;\n current_user\n--------------\n 1022\n(1 row)\n\n(It's OK.)\n\ntir=> \\c - 1060\nYou are now connected as new user 1060.\ntir=> select user;\n current_user\n--------------\n 1092\n(1 row)\n\nThis is the problematic point. Is this a solved bug?\n\nTIA, Zoltan\n\n", "msg_date": "Wed, 2 May 2001 18:40:30 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "\\c connects as another user instead I want in psql" }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> tir=> \\c - 1060\n> You are now connected as new user 1060.\n> tir=> select user;\n> current_user\n> --------------\n> 1092\n> (1 row)\n\nIs it possible that 1060 and 1092 have the same usesysid in pg_shadow?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 18:32:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql " }, { "msg_contents": "On Wed, 2 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > tir=> \\c - 1060\n> > You are now connected as new user 1060.\n> > tir=> select user;\n> > current_user\n> > --------------\n> > 1092\n> > (1 row)\n> \n> Is it possible that 1060 and 1092 have the same usesysid in pg_shadow?\nHmmm. That was the problem. Thanks! 
By the way, could you please define a\nunique constraint on column 'usesysid' in future in PostgreSQL?\n\nZoltan\n\n", "msg_date": "Thu, 3 May 2001 11:18:15 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: \\c connects as another user instead I want in psql " }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n>> Is it possible that 1060 and 1092 have the same usesysid in pg_shadow?\n\n> Hmmm. That was the problem. Thanks! By the way, could you please define a\n> unique constraint on column 'usesysid' in future in PostgreSQL?\n\nYup, there should be one (and one on usename, too). Not sure why it's\nbeen overlooked so far.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 09:07:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql " }, { "msg_contents": "Kovacs Zoltan writes:\n\n> By the way, could you please define a unique constraint on column\n> 'usesysid' in future in PostgreSQL?\n\nThe usesysid column will be removed and the oid column will be used\ninstead. That one tends to be unique, but an index will still be added.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 3 May 2001 16:02:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql " }, { "msg_contents": "> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> >> Is it possible that 1060 and 1092 have the same usesysid in pg_shadow?\n> \n> > Hmmm. That was the problem. Thanks! By the way, could you please define a\n> > unique constraint on column 'usesysid' in future in PostgreSQL?\n> \n> Yup, there should be one (and one on usename, too). 
Not sure why it's\n> been overlooked so far.\n\nTODO item has:\n\n\t* Add unique indexes to pg_shadow.usename and pg_shadow.usesysid\n\nI overlooked it long ago because there is no cache lookup on that\ncolumn.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 11:16:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": "> Kovacs Zoltan writes:\n> \n> > By the way, could you please define a unique constraint on column\n> > 'usesysid' in future in PostgreSQL?\n> \n> The usesysid column will be removed and the oid column will be used\n> instead. That one tends to be unique, but an index will still be added.\n\nReally? We are removing usesysid? Seems the admin will no longer be\nable to choose the users id, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 11:17:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": "Bruce Momjian writes:\n\n> Really? We are removing usesysid? 
Seems the admin will no longer be\n> able to choose the users id, right?\n\nNot that this was ever useful.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 3 May 2001 17:49:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Really? We are removing usesysid? Seems the admin will no longer be\n> > able to choose the users id, right?\n> \n> Not that this was ever useful.\n\n Except for re-adding users.\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 7 May 2001 10:42:41 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": ">>> Really? We are removing usesysid? Seems the admin will no longer be\n>>> able to choose the users id, right?\n>> \n>> Not that this was ever useful.\n\n> Except for re-adding users.\n\nYes. In theory, the correct answer to that is to add referential\nintegrity checks that prevent you from dropping a user that still\nowns any objects. In practice, this is impractical because users\nspan a whole database installation. 
We have no reasonable way to\ncheck whether the user owns objects in other databases that cannot\nbe seen from the DB where we are issuing the DROP USER command.\n\nTherefore, for the foreseeable future it will be important to be\nable to reverse a DROP USER command --- ie, recreate a user with\nthe same user identifier previously used.\n\nAfter thinking about that for awhile, I am inclined to change my\nprevious position: we should not switch over to using the OIDs of\npg_shadow rows as user identifiers. usesysid should continue to\nexist. Ditto for groups --- grosysid can't go away either.\n\nI think the original motivation for wanting to eliminate these columns\nwas that we need usesysid and grosysid to be distinct (can't use the\nsame ID for both a user and a group). Using OIDs as IDs would fix\nthat, but it's overkill. Wouldn't it be sufficient to use an\ninstallation-wide sequence object to assign new IDs for new users and\ngroups? We have no such animals at the present, but I see no reason\nwhy we couldn't make one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 11:11:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql " }, { "msg_contents": "> I think the original motivation for wanting to eliminate these columns\n> was that we need usesysid and grosysid to be distinct (can't use the\n> same ID for both a user and a group). Using OIDs as IDs would fix\n> that, but it's overkill. Wouldn't it be sufficient to use an\n> installation-wide sequence object to assign new IDs for new users and\n> groups? 
We have no such animals at the present, but I see no reason\n> why we couldn't make one.\n\nUpdated TODO to show both options:\n\n* Add unique indexes to pg_shadow.usename and pg_shadow.usesysid or\n switch to pg_shadow.oid as user id\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 14:40:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": "> After thinking about that for awhile, I am inclined to change my\n> previous position: we should not switch over to using the OIDs of\n> pg_shadow rows as user identifiers. usesysid should continue to\n> exist. Ditto for groups --- grosysid can't go away either.\n> \n> I think the original motivation for wanting to eliminate these columns\n> was that we need usesysid and grosysid to be distinct (can't use the\n> same ID for both a user and a group). Using OIDs as IDs would fix\n> that, but it's overkill. Wouldn't it be sufficient to use an\n> installation-wide sequence object to assign new IDs for new users and\n> groups? We have no such animals at the present, but I see no reason\n> why we couldn't make one.\n\nOne thing on the TODO list is to allow people to soecify OID's on\nINSERT. There is no reason we should disallow it, and it could come in\nhandy for fixing deleted rows.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 15:53:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\c connects as another user instead I want in psql" }, { "msg_contents": "In 7.1.1 the following statement doesn't work (backend closes\nimmediately):\n\nSELECT INTO var1, var2 col1, col2 FROM table WHERE conditions;\n\nIn 7.1 (final) this problem doesn't occur.\n\nWorkaround:\n\nvar1 := col1 FROM table WHERE conditions;\nvar2 := col2 FROM table WHERE conditions;\n\n(Of course I'd better not rewrite my 200K code of PLGSQL... :-)\n\nTIA, Zoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Mon, 14 May 2001 10:38:08 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "bug in PLPGSQL" }, { "msg_contents": "Kovacs Zoltan wrote:\n> \n> In 7.1.1 the following statement doesn't work (backend closes\n> immediately):\n> \n> SELECT INTO var1, var2 col1, col2 FROM table WHERE conditions;\n> \n> In 7.1 (final) this problem doesn't occur.\n> \n\nIt's a known bug.\nIf you in a hurry, please apply the latest change for\nsrc/pl/plpgsql/src/pl_exec.c by Tom.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 14 May 2001 18:48:24 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in PLPGSQL" }, { "msg_contents": "There are SELECT INTO statements which work properly. 
Here is an example\nwhich closes the backend:\n\nCREATE FUNCTION plpgsql_call_handler ( ) RETURNS opaque AS '/usr/local/pgsql-7.1.1/lib/plpgsql.so' LANGUAGE 'C';\nCREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql' HANDLER plpgsql_call_handler LANCOMPILER 'PL/pgSQL';\n\ncreate table foo(x int4, y int4);\n\ncreate function bugtest(int4) returns int4 as '\ndeclare\n _x int4;\n _y int4;\nbegin\n select into _x,_y\n\t\tx, y from foo where x = $1 limit 1;\n return x;\nend;\n' language 'plpgsql';\n\nselect bugtest(5);\n\nIf the WHERE clause doesn't contain any input parameters (i.e. $1), I\ndon't get into any trouble.\n\nZoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n", "msg_date": "Mon, 14 May 2001 11:52:38 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: bug in PLPGSQL" }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> In 7.1.1 the following statement doesn't work (backend closes\n> immediately):\n\n> SELECT INTO var1, var2 col1, col2 FROM table WHERE conditions;\n\nWould you mind providing a complete test case, so that we don't waste\ntime guessing at context?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 10:26:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in PLPGSQL " }, { "msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> If the WHERE clause doesn't contain any input parameters (i.e. 
$1), I\n> don't get into any trouble.\n\nIs this the known bug with failure if the SELECT returns zero rows?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 May 2001 10:41:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: bug in PLPGSQL " }, { "msg_contents": "On Mon, 14 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > If the WHERE clause doesn't contain any input parameters (i.e. $1), I\n> > don't get into any trouble.\n> \n> Is this the known bug with failure if the SELECT returns zero rows?\nYes, it is. I haven't known this bug yet, however I read the mailing lists\nfirst. I also tried your patch and it works now greatly. Thanks,\n\nZoltan\n\n", "msg_date": "Mon, 14 May 2001 18:13:07 +0200 (CEST)", "msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>", "msg_from_op": true, "msg_subject": "Re: Re: bug in PLPGSQL " } ]
[ { "msg_contents": "I was talking to a Linux user yesterday, and he said that performance\nusing the xfs file system is pretty bad. He believes it has to do with\nthe fact that fsync() on log-based file systems requires more writes.\n\nWith a standard BSD/ext2 file system, WAL writes can stay on the same\ncylinder to perform fsync. Is that true of log-based file systems?\n\nI know xfs and reiser are both log based. Do we need to be concerned\nabout PostgreSQL performance on these file systems? I use BSD FFS with\nsoft updates here, so it doesn't affect me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 13:35:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "New Linux xfs/reiser file systems" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010502 14:01] wrote:\n> I was talking to a Linux user yesterday, and he said that performance\n> using the xfs file system is pretty bad. He believes it has to do with\n> the fact that fsync() on log-based file systems requires more writes.\n> \n> With a standard BSD/ext2 file system, WAL writes can stay on the same\n> cylinder to perform fsync. Is that true of log-based file systems?\n> \n> I know xfs and reiser are both log based. Do we need to be concerned\n> about PostgreSQL performance on these file systems? I use BSD FFS with\n> soft updates here, so it doesn't affect me.\n\nThe \"problem\" with log based filesystems is that they most likely\ndo not know the consequences of a write so an fsync on a file may\nrequire double writing to both the log and the \"real\" portion of\nthe disk. 
They can also exhibit the problem that an fsync may\ncause all pending writes to require scheduling unless the log is\nconstructed on the fly rather than incrementally.\n\nThere was also the problem that was brought up recently that\ncertain versions (maybe all?) of Linux perform fsync() in a very\nnon-optimal manner, if the user is able to use the O_FSYNC option\nrather than fsync he may see a performance increase.\n\nBut his guess is probably nearly as good as mine. :)\n\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nhttp://www.egr.unlv.edu/~slumos/on-netbsd.html\n", "msg_date": "Wed, 2 May 2001 14:28:07 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> The \"problem\" with log based filesystems is that they most likely\n> do not know the consequences of a write so an fsync on a file may\n> require double writing to both the log and the \"real\" portion of\n> the disk. They can also exhibit the problem that an fsync may\n> cause all pending writes to require scheduling unless the log is\n> constructed on the fly rather than incrementally.\n\nYes, this double-writing is a problem. Suppose you have your WAL on a\nseparate drive. You can fsync() WAL with zero head movement. With a\nlog based file system, you need two head movements, so you have gone\nfrom zero movements to two.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 17:36:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010502 15:20] wrote:\n> > The \"problem\" with log based filesystems is that they most likely\n> > do not know the consequences of a write so an fsync on a file may\n> > require double writing to both the log and the \"real\" portion of\n> > the disk. They can also exhibit the problem that an fsync may\n> > cause all pending writes to require scheduling unless the log is\n> > constructed on the fly rather than incrementally.\n> \n> Yes, this double-writing is a problem. Suppose you have your WAL on a\n> separate drive. You can fsync() WAL with zero head movement. With a\n> log based file system, you need two head movements, so you have gone\n> from zero movements to two.\n\nIt may be worse depending on how the filesystem actually does\njournalling. I wonder if an fsync() may cause ALL pending\nmeta-data to be updated (even metadata not related to the \npostgresql files).\n\nDo you know if reiser or xfs have this problem?\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Wed, 2 May 2001 16:06:02 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> > Yes, this double-writing is a problem. Suppose you have your WAL on a\n> > separate drive. You can fsync() WAL with zero head movement. With a\n> > log based file system, you need two head movements, so you have gone\n> > from zero movements to two.\n> \n> It may be worse depending on how the filesystem actually does\n> journalling. 
I wonder if an fsync() may cause ALL pending\n> meta-data to be updated (even metadata not related to the \n> postgresql files).\n> \n> Do you know if reiser or xfs have this problem?\n\nI don't know, but the Linux user reported xfs was really slow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 20:18:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I was talking to a Linux user yesterday, and he said that performance\n> using the xfs file system is pretty bad. He believes it has to do with\n> the fact that fsync() on log-based file systems requires more writes.\n> \n> With a standard BSD/ext2 file system, WAL writes can stay on the same\n> cylinder to perform fsync. Is that true of log-based file systems?\n> \n> I know xfs and reiser are both log based. Do we need to be concerned\n> about PostgreSQL performance on these file systems? I use BSD FFS with\n> soft updates here, so it doesn't affect me.\n\nI did see poor performance on reiserfs, I have not as yet ventured into using\nxfs.\n\nI occurs to me that journalizing file systems will almost always be slower on\nan application such as postgres. The journalizing file system is trying to\nmaintain data integrity for an application which is also trying to maintain\ndata integrity. There will always be extra work involved.\n\nThis behavior raises the question about file system usage in Postgres. 
Many\ndatabases, such as Oracle, create table space files and operate directly on the\nraw blocks, bypassing the file system altogether.\n\nOn one hand, Postgres is easy to use and maintain because it cooperates with\nthe native file system, on the other hand it incurs the overhead of whatever\nsilliness the file system wants to do. \n\nI would bet it is a huge amount of work to use a \"table space\" system and no\none wants that. lol. However, it should be noted that a bit more control over\ndatabase layout would make some great performance improvements.\n\nThe ability to put indexes on a separate volume from data.\nThe ability to put different tables on different volumes.\nAnd so on.\n\nIn the short term, I think poor performance on a journalizing file system is to\nbe expected, unless there is an IOCTL to tell the FS to leave the files alone\n(and postgres calls it). A Linux HOWTO which informs people that certain file\nsystems will have performance issues and why should handle the problem.\n\nPerhaps we can convince the Linux community to create a \"dbfs\" which is a\nstripped down simple no nonsense file system designed for applications like\ndatabases?\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Thu, 03 May 2001 08:09:01 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "On Thu, 3 May 2001, mlw wrote:\n\n> I would bet it is a huge amount of work to use a \"table space\" system\n> and no one wants that.\n\n From some stracing of 7.1, the most common syscall issued by\npostgres is an lseek() to the end of the file, presumably to\nfind its length, which seems to happen up to about a dozen\ntimes per (pgbench) transaction.\n\nTablespaces would solve this (not that lseek is a particularly\nexpensive operation, of course).\n\n> Perhaps we can convince the Linux 
community to create a \"dbfs\" which\n> is a stripped down simple no nonsense file system designed for\n> applications like databases?\n\nSync-metadata ext2 should be fine. Filesystems fsck pretty\nquick when they contain only a few large files.\n\nOtherwise, something like \"smugfs\" (now obsolete) might do.\n\nMatthew.\n\n", "msg_date": "Thu, 3 May 2001 13:23:02 +0100 (BST)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> From some stracing of 7.1, the most common syscall issued by\n> postgres is an lseek() to the end of the file, presumably to\n> find its length, which seems to happen up to about a dozen\n> times per (pgbench) transaction.\n\n> Tablespaces would solve this (not that lseek is a particularly\n> expensive operation, of course).\n\nNo, they wouldn't; or at least they'd just create a different problem.\nThe reason for the lseek is that the file length may have changed since\nthe current backend last checked it. To avoid lseek we'd need some\nshared data structure that maintains the current length of every active\ntable, which would be a nuisance to maintain and probably a source of\ncontention delays.\n\n(Of course, such a data structure would just be the tip of the iceberg\nof what we'd have to maintain for ourselves if we couldn't depend on the\nkernel to do it for us. Reimplementing a filesystem doesn't strike me\nas a profitable use of our time.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 09:33:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "> > I know xfs and reiser are both log based. Do we need to be concerned\n> > about PostgreSQL performance on these file systems? 
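The lseek()-to-the-end pattern described above (re-checking a file's current length because another backend may have extended it) can be reproduced outside PostgreSQL; a minimal sketch against a hypothetical scratch file, not actual backend code:

```python
import os
import tempfile

# Scratch file standing in for a table's heap file (hypothetical
# stand-in; the real files live under the server's data directory).
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 8192)  # pretend one 8K page was appended

# The pattern strace shows: seek to the end to learn the current
# length, since another process may have extended the file meanwhile.
size = os.lseek(fd, 0, os.SEEK_END)

os.close(fd)
os.unlink(path)
```

The seek itself does no I/O beyond consulting the in-kernel inode, which is why it is cheap even when issued a dozen times per transaction.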
I use BSD FFS with\n> > soft updates here, so it doesn't affect me.\n> \n> I did see poor performance on reiserfs, I have not as yet ventured into using\n> xfs.\n> \n> It occurs to me that journalizing file systems will almost always be slower on\n> an application such as postgres. The journalizing file system is trying to\n> maintain data integrity for an application which is also trying to maintain\n> data integrity. There will always be extra work involved.\n\nYes, the problem is that extra work is required on PostgreSQL's part. \nLog-based file systems make sure all the changes get onto the disk in an\norderly way, but I believe it can delay what gets written to the drive. \nPostgreSQL wants to be sure all the data is on the disk, period. \nUnfortunately, the _orderly_ part makes the _fsync_ part do more work. \nBy going from ext2 to a log-based file system, we are getting _farther_\nfrom a raw device than if we just stayed with ext2.\n\next2 has serious problems with corrupt file systems after a crash, so I\nunderstand the need to move to another file system type. I have been\nwaiting for Linux to get a more modern file system. Unfortunately, the\nnew ones seem to be worse for PostgreSQL.\n\n> This behavior raises the question about file system usage in Postgres. 
In the old days\nthe SysV file system was pretty bad at i/o & fragmentation, so they used\nraw devices.\n\n> The ability to put indexes on a separate volume from data.\n> The ability to put different tables on different volumes.\n> And so on.\n\nWe certainly need that, but raw devices would not make this any easier,\nI think.\n\n> In the short term, I think poor performance on a journalizing file system is to\n> be expected, unless there is an IOCTL to tell the FS to leave the files alone\n> (and postgres calls it). A Linux HOWTO which informs people that certain file\n> systems will have performance issues and why should handle the problem.\n> \n> Perhaps we can convince the Linux community to create a \"dbfs\" which is a\n> stripped down simple no nonsense file system designed for applications like\n> databases?\n\nIt could become a serious problem as people start using reiser/xfs for\ntheir file systems and don't understand the performance problems. Even\nmore likely is that they will turn off fsync, thinking reiser doesn't\nneed it, when in fact, I think it does.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 11:41:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> > From some stracing of 7.1, the most common syscall issued by\n> > postgres is an lseek() to the end of the file, presumably to\n> > find its length, which seems to happen up to about a dozen\n> > times per (pgbench) transaction.\n> \n> > Tablespaces would solve this (not that lseek is a particularly\n> > expensive operation, of course).\n> \n> No, they wouldn't; or at least they'd just create a different problem.\n> The reason for the lseek is that the file length may have changed since\n> the current backend last checked it. To avoid lseek we'd need some\n> shared data structure that maintains the current length of every active\n> table, which would be a nuisance to maintain and probably a source of\n> contention delays.\n\nSeems we should cache the file lengths somehow. Not sure how to do it\nbecause our file system cache is local to each backend.\n\n\n> (Of course, such a data structure would just be the tip of the iceberg\n> of what we'd have to maintain for ourselves if we couldn't depend on the\n> kernel to do it for us. Reimplementing a filesystem doesn't strike me\n> as a profitable use of our time.)\n\nDitto. The database is complicated enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 11:42:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> > This behavior raises the question about file system usage in Postgres. 
Many\n> > databases, such as Oracle, create table space files and operate directly on the\n> > raw blocks, bypassing the file system altogether.\n>\n> OK, we have considered this, but frankly, the new, modern file systems\n> like FFS/softupdates have i/o rates near raw speed, with all the\n> advantages a file system gives us. I believe most commercial dbs are\n> moving away from raw devices and toward file systems. In the old days\n> the SysV file system was pretty bad at i/o & fragmentation, so they used\n> raw devices.\n\nI'm starting to like the idea of raw FS for a few reasons:\n\n1) Considering that postgresql now does WAL, the need for a logging FS\nfor the database doesn't seem as needed (is it needed at all?).\n\n2) Given the fact that postgresql is trying to support many OSs,\ndepending on, for example, XFS on a linux system will cause many\nproblems. What about solaris? How about BSD? Etc.. Using raw db MAY be\neasier than dealing with the problems that will arise from supporting\nmultiple filesystems.\n\nThat said, the ability to use the system's FS does have its advantages\n(backup, moving files, etc).\n\nJust some thoughts..\n\n- Brandon\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n\n", "msg_date": "Thu, 3 May 2001 14:20:50 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> > kernel to do it for us. Reimplementing a filesystem doesn't strike me\n> > as a profitable use of our time.)\n> Ditto. The database is complicated enough.\n\nMaybe some kind of recommendation would be a good thing. That is, if the 
That is, if the \nPostgreSQL community has enough knowledge.\n\nA section in the docs that discusses various file systems, so people can make \nan intelligent choice.\n\n-- \nKaare Rasmussen --Linux, games,-- Tlf: 3816 2582\nKaki Data tshirts, merchandise Fax: 3816 2501\nHowitzvej 75 Open 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Saturday 11.00-17.00 Email: kar@webline.dk\n", "msg_date": "Thu, 3 May 2001 21:07:17 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "On Thu, 3 May 2001, mlw wrote:\n\n> This behavior raises the question about file system usage in Postgres. Many\n> databases, such as Oracle, create table space files and operate directly on the\n> raw blocks, bypassing the file system altogether.\n> \n> On one hand, Postgres is easy to use and maintain because it cooperates with\n> the native file system, on the other hand it incurs the overhead of whatever\n> silliness the file system wants to do. \n\nIt is not *that* hard to write a 'postgresfs' but you have to look at\nthe problems it creates. One of the biggest problems facing sys admins of\nlarge sites is that the Oracle/DB2/etc DBA, having created the\npurpose-built database filesystem, has not allowed enough room for\ngrowth. Like I said, a basic file system is not difficult, but volume\nmanagement tools and the maintenance of the whole thing is. Currently,\npostgres administrators are not faced with such a problem.\n\nThere is, of course, the argument that pgfs need not be enforced. The\nproblem is that many people would probably use it so as to have a\n'superior' installation. 
This then entails the problems above, creating\nmore work for core developers.\n\nGavin\n\n", "msg_date": "Fri, 4 May 2001 09:37:14 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Just put a note in the installation docs that the place where the database\nis initialised to should be on a non-Reiser, non-XFS mount...\n\nChris\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of mlw\nSent: Thursday, 3 May 2001 8:09 PM\nTo: Bruce Momjian; Hackers List\nSubject: [HACKERS] Re: New Linux xfs/reiser file systems\n\n\nBruce Momjian wrote:\n>\n> I was talking to a Linux user yesterday, and he said that performance\n> using the xfs file system is pretty bad. He believes it has to do with\n> the fact that fsync() on log-based file systems requires more writes.\n>\n> With a standard BSD/ext2 file system, WAL writes can stay on the same\n> cylinder to perform fsync. Is that true of log-based file systems?\n>\n> I know xfs and reiser are both log based. Do we need to be concerned\n> about PostgreSQL performance on these file systems? I use BSD FFS with\n> soft updates here, so it doesn't affect me.\n\nI did see poor performance on reiserfs, I have not as yet ventured into\nusing\nxfs.\n\nIt occurs to me that journalizing file systems will almost always be slower\non\nan application such as postgres. The journalizing file system is trying to\nmaintain data integrity for an application which is also trying to maintain\ndata integrity. There will always be extra work involved.\n\nThis behavior raises the question about file system usage in Postgres. 
Many\ndatabases, such as Oracle, create table space files and operate directly on\nthe\nraw blocks, bypassing the file system altogether.\n\nOn one hand, Postgres is easy to use and maintain because it cooperates with\nthe native file system, on the other hand it incurs the overhead of whatever\nsilliness the file system wants to do.\n\nI would bet it is a huge amount of work to use a \"table space\" system and no\none wants that. lol. However, it should be noted that a bit more control\nover\ndatabase layout would make some great performance improvements.\n\nThe ability to put indexes on a separate volume from data.\nThe ability to put different tables on different volumes.\nAnd so on.\n\nIn the short term, I think poor performance on a journalizing file system is\nto\nbe expected, unless there is an IOCTL to tell the FS to leave the files\nalone\n(and postgres calls it). A Linux HOWTO which informs people that certain\nfile\nsystems will have performance issues and why should handle the problem.\n\nPerhaps we can convince the Linux community to create a \"dbfs\" which is a\nstripped down simple no nonsense file system designed for applications like\ndatabases?\n\n--\nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Fri, 4 May 2001 09:08:56 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "There might be a problem, but if no one mentions it to the maintainers of\nthose\nfs's, it will not get fixed...\n\nRegards\nJohn\n\n", "msg_date": "Fri, 4 May 2001 13:39:35 +1200", "msg_from": "<john@mwk.co.nz>", "msg_from_op": false, 
"msg_subject": "Reiser and XFS -- tell the maintainers" }, { "msg_contents": "> Just put a note in the installation docs that the place where the database\n> is initialised to should be on a non-Reiser, non-XFS mount...\n\nSure, we can do that now. What do we do when these are the default file\nsystems for Linux? We can tell them to create other types of file\nsystems, but that is a pretty big hurdle. I wonder if it would be\neasier to get reiser/xfs to make some modifications.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 21:42:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Well, arguably if you're setting up a database server then a reasonable DBA\nshould think about such things...\n\n(My 2c)\n\nChris\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Friday, 4 May 2001 9:42 AM\nTo: Christopher Kings-Lynne\nCc: mlw; Hackers List\nSubject: Re: [HACKERS] Re: New Linux xfs/reiser file systems\n\n\n> Just put a note in the installation docs that the place where the database\n> is initialised to should be on a non-Reiser, non-XFS mount...\n\nSure, we can do that now. What do we do when these are the default file\nsystems for Linux? We can tell them to create other types of file\nsystems, but that is a pretty big hurdle. I wonder if it would be\neasier to get reiser/xfs to make some modifications.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Fri, 4 May 2001 09:49:39 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> Well, arguably if you're setting up a database server then a reasonable DBA\n> should think about such things...\n\nYes, but people have trouble installing PostgreSQL. I can't imagine\nwalking them through a newfs.\n\n\n> \n> (My 2c)\n> \n> Chris\n> \n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, 4 May 2001 9:42 AM\n> To: Christopher Kings-Lynne\n> Cc: mlw; Hackers List\n> Subject: Re: [HACKERS] Re: New Linux xfs/reiser file systems\n> \n> \n> > Just put a note in the installation docs that the place where the database\n> > is initialised to should be on a non-Reiser, non-XFS mount...\n> \n> Sure, we can do that now. What do we do when these are the default file\n> systems for Linux? We can tell them to create other types of file\n> systems, but that is a pretty big hurdle. I wonder if it would be\n> easier to get reiser/xfs to make some modifications.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 21:55:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Just put a note in the installation docs that the place where the database\n> > is initialised to should be on a non-Reiser, non-XFS mount...\n> \n> Sure, we can do that now. What do we do when these are the default file\n> systems for Linux? We can tell them to create other types of file\n> systems, but that is a pretty big hurdle. I wonder if it would be\n> easier to get reiser/xfs to make some modifications.\n\n\nI have looked at Reiser, and I don't think it is a file system suited for very\nlarge files, or applications such as postgres. The Linux crowd should lobby\nagainst any such trend. It is ok for many moderately small files. ReiserFS\nwould be great for a cddb server, but poor for a database box.\n\nXFS is a real big file system project, I'd bet that there are file properties\nor management tools to tell it to leave directories and files alone. They\nshould have addressed that years ago.\n\nOne last mention..\n\nHaving better control over WHERE various files in a database are located can\nmake it easier to deal with these things.\n\nJust a thought. ;-)\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Thu, 03 May 2001 23:20:45 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "mlw wrote:\n\n>Bruce Momjian wrote:\n>\n>>>Just put a note in the installation docs that the place where the database\n>>>is initialised to should be on a non-Reiser, non-XFS mount...\n>>>\n>>Sure, we can do that now. What do we do when these are the default file\n>>systems for Linux? 
We can tell them to create other types of file\n>>systems, but that is a pretty big hurdle. I wonder if it would be\n>>easier to get reiser/xfs to make some modifications.\n>>\n>\n>\n>I have looked at Reiser, and I don't think it is a file system suited for very\n>large files, or applications such as postgres. The Linux crowd should lobby\n>against any such trend. It is ok for many moderately small files. ReiserFS\n>would be great for a cddb server, but poor for a database box.\n>\n>XFS is a real big file system project, I'd bet that there are file properties\n>or management tools to tell it to leave directories and files alone. They\n>should have addressed that years ago.\n>\n>One last mention..\n>\n>Having better control over WHERE various files in a database are located can\n>make it easier to deal with these things.\n>\nI think it's worth noting that Oracle has been petitioning the kernel \ndevelopers for better raw device support: in other words, the ability to \nwrite directly to the hard disk, bypassing the filesystem \naltogether. \n\nIf the db is going to assume the responsibility of disk write \nverification it seems reasonable to assume you might want to investigate \nthe raw disk i/o options.\n\nTelling your installers that a major performance gain is attainable by \ndoing so might be a start in the opposite direction. I've monitored a \nlot of discussions and from what I can gather, postgresql does its own \nset of journaling operations. I don't think that it's necessary for \nwrites to be double journalled anyway.\n\nAgain, just my two cents worth...", "msg_date": "Fri, 04 May 2001 02:09:23 -0500", "msg_from": "Thomas Swan <tswan@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "On Thu, May 03, 2001 at 11:41:24AM -0400, Bruce Momjian wrote:\n> ext2 has serious problems with corrupt file systems after a crash, so I\n> understand the need to move to another file system type. I have been\n> waiting for Linux to get a more modern file system. 
Unfortunately, the\n> new ones seem to be worse for PostgreSQL.\n\nIf you fsync() a directory in Linux, all the metadata within that directory\nwill be written out to disk.\n\nAs for filesystem corruption, I can say the e2fsck is among the best fsck\nprograms out there, and I've only ever had 1 occasion where I've lost any\ndata on an ext2 filesystem, and that was due to bad sectors causing me to\nlose the root directory. (Well, apart from human errors, but that doesn't\ncount)\n\n> OK, we have considered this, but frankly, the new, modern file systems\n> like FFS/softupdates have i/o rates near raw speed, with all the\n> advantages a file system gives us. I believe most commercial dbs are\n> moving away from raw devices and toward file systems. In the old days\n> the SysV file system was pretty bad at i/o & fragmentation, so they used\n> raw devices.\n\nAnd Solaris' 1/01 media has better support for O_DIRECT (?), which they claim\ngives you 93% of the speed of a raw device. (Or something like that; I read\nthis in marketing material a couple of months ago)\n\nRaw devices are designed to have filesystems on them. The only excuses for\nuserland tools accessing them, are fs-specific tools (eg. dump, fsck, etc),\nor for non-unix filesystem tools, where the unix VFS doesn't handle things\nproperly (hfstools).\n\n> > The ability to put indexes on a separate volume from data.\n> > The ability to put different tables on different volumes.\n> > And so on.\n> \n> We certainly need that, but raw devices would not make this any easier,\n> I think.\n\nIt would be cool if either at compile time or at database creation time, we\ncould specify a printf-like format for placing tables, indexes, etc.\n\n> It could become a serious problem as people start using reiser/xfs for\n> their file systems and don't understand the performance problems. 
Even\n> more likely is that they will turn off fsync, thinking reiser doesn't\n> need it, when in fact, I think it does.\n\nReiserFS only supports metadata logging. The performance slowdown must be\ndue to logging things like mtime or atime, because otherwise ReiserFS is a\nvery high performance FS. (Although, I admittedly haven't used it since it\nwas early in its development)\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Fri, 4 May 2001 21:35:34 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Michael Samuel wrote:\n> \n> ReiserFS only supports metadata logging. The performance slowdown must be\n> due to logging things like mtime or atime, because otherwise ReiserFS is a\n> very high performance FS. (Although, I admittedly haven't used it since it\n> was early in its development)\n\nThe way I understand it is that ReiserFS does not attempt to separate files at\nthe block level. Multiple files can live in the same disk block. This is cool\nif you have many small files, but the extra overhead for large files such as\nthose used by a database, is a bit much.\n\nI read some stuff about a year ago, and my impressions forced me to conclude\nthat ReiserFS was geared toward applications. Which is a pretty good thing for\napplications, but not for databases. \n\nI really think a simple low down dirty file system is just what the doctor\nordered for postgres.\n\nRemember, general purpose file systems must do for files what Postgres is\nalready doing for records. You will always have extra work. I am seriously\nthinking of trying a FAT32 as pg_xlog. 
I wonder if it will improve performance,\nor if there is just something fundamentally stupid about FAT32 that will make\nit worse?\n \n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Fri, 04 May 2001 08:02:17 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> I have looked at Reiser, and I don't think it is a file system suited for very\n> large files, or applications such as postgres.\n\nWhat's the problem with big files? ReiserFS v2 doesn't seem to support\nit, while v3 seems just fine (of the ondisk format)\n\nThat said, I'm certainly looking forward to xfs - I believe it will be\nthe most widely used of the current batch of journaling file systems\n(reiserfs, jfs, XFS and ext3, the latter mainly focusing on an easy\nmigration path for existing systems)\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "04 May 2001 09:33:07 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "On Fri, May 04, 2001 at 08:02:17AM -0400, mlw wrote:\n> The way I understand it is that ReiserFS does not attempt to separate files at\n> the block level. Multiple files can live in the same disk block. This is cool\n> if you have many small files, but the extra overhead for large files such as\n> those used by a database, is a bit much.\n\nIt should be at least as fast as other filesystems for large files. I suspect\nthat it would be faster in fact. The only catch is that the performance of\nreiserfs sucks when it gets past 85% or so full. 
(ext2 has similar problems)\n\nYou can read about all this stuff at http://www.namesys.com/\n\n> I really think a simple low down dirty file system is just what the doctor\n> ordered for postgres.\n\nTraditional BSD FFS or Solaris UFS is probably the best bet for postgres.\n\n> Remember, general purpose file systems must do for files what Postgres is\n> already doing for records. You will always have extra work. I am seriously\n> thinking of trying a FAT32 as pg_xlog. I wonder if it will improve performance,\n> or if there is just something fundamentally stupid about FAT32 that will make\n> it worse?\n\nWell, for a starters, file permissions...\n\nExt2 would kick arse over FAT32 for performance.\n\n-- \nMichael Samuel <michael@miknet.net>\n", "msg_date": "Fri, 4 May 2001 23:50:22 +1000", "msg_from": "Michael Samuel <michael@miknet.net>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n >> Well, arguably if you're setting up a database server then a\n >> reasonable DBA should think about such things...\n\n Bruce> Yes, but people have trouble installing PostgreSQL. I\n Bruce> can't imagine walking them through a newfs.\n\nIn most of linux-land, the DBA is probably also the sysadmin. In\nbigger shops, and those which currently run, say Oracle or Sybase, the\ntwo roles are separate. When they are separate, you don't have to\nwalk the DBA through it; he just walks over to the sysadmin and says\n\"I need X megabytes of space on a new Y filesystem.\"\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nrbroberts@acm.org Forest Hills, NY 11375\n", "msg_date": "04 May 2001 10:24:53 -0400", "msg_from": "Roland Roberts <roland@astrofoto.org>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "I got some information from Stephen Tweedie on this - please keep him\n\"Cc:\" as he's not on this list\n\n************************************************************************\nBruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> I was talking to a Linux user yesterday, and he said that performance\n> using the xfs file system is pretty bad. He believes it has to do with\n> the fact that fsync() on log-based file systems requires more writes.\n\n\nPerformance doing what? XFS has known performance problems doing\nunlinks and truncates, but not synchronous IO. The user should be\nusing fdatasync() for databases, btw, not fsync().\n\nFirst, XFS, ext3 and reiserfs are *NOT* log-based filesystems. They\nare journaling filesystems. They have a log, but they are not\nlog-based because they do not store data permanently in a log\nstructure. Berkeley LFS, Sprite and Spiralog are log-based\nfilesystems.\n\n> With a standard BSD/ext2 file system, WAL writes can stay on the same\n> cylinder to perform fsync. Is that true of log-based file systems?\n\nNot true on ext2 or BSD. Write-aheads are _usually_ close to the\ninode, but not always. For true log-based filesystems, writes are\nalways completely sequential, so the issue just goes away. For\njournaling filesystems, depending on the setup there may be a seek to\nthe journal involved, but some journaling filesystems can use a\nseparate disk for the journal so no seek is required.\n\n> I know xfs and reiser are both log based. Do we need to be concerned\n> about PostgreSQL performance on these file systems? 
I use BSD FFS with\n> soft updates here, so it doesn't affect me.\n\nA database normally preallocates its data files and then performs most\nof its writes using update-in-place. In such cases, fsync() is almost\nalways the wrong thing to be doing --- the data writes have changed\nnothing in the inode except for the timestamps, and there's no need to\nflush the timestamps to disk for every write. fdatasync() is\ndesigned for this --- if the only inode change is timestamps,\nfdatasync() will skip the seek to the inode and will only update the\ndata. If any significant inode fields have been changed, then a full\nflush is done.\n\nUsing fdatasync, most filesystems will incur no seeks for data flush,\nregardless of whether the filesystem is journaling or not.\n\nCheers,\n Stephen\n************************************************************************\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "04 May 2001 11:04:30 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> Sure, we can do that now. What do we do when these are the default file\n> systems for Linux? We can tell them to create other types of file\n\nWhat is a 'default file system' ? I know that until now, everybody is using \next2. But that's only because there hasn't been anything comparable. Now we \nsee ReiserFS, and my SuSE installation offers the choice. In the future, I \nbelieve that people can choose from ext2, ReiserFS, xfs, ext3 and maybe more.\n\n> systems, but that is a pretty big hurdle. I wonder if it would be\n> easier to get reiser/xfs to make some modifications.\n\nNo, I don't think it's a big hurdle. If you just want to play with \nPostgreSQL, you won't care. 
If you're serious, you'll repartition.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@webline.dk\n", "msg_date": "Fri, 4 May 2001 17:59:18 +0200", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> On Fri, May 04, 2001 at 08:02:17AM -0400, mlw wrote:\n> > The way I understand it is that ReiserFS does not attempt to separate files at\n> > the block level. Multiple files can live in the same disk block. This is cool\n> > if you have many small files, but the extra overhead for large files such as\n> > those used by a database, is a bit much.\n> \n> It should be at least as fast as other filesystems for large files. I suspect\n> that it would be faster in fact. The only catch is that the performance of\n> reiserfs sucks when it gets past 85% or so full. (ext2 has similar problems)\n\nThat is pretty standard for most modern file systems. They need that\nfree space to optimize.\n\n\n> \n> You can read about all this stuff at http://www.namesys.com/\n> \n> > I really think a simple low down dirty file system is just what the doctor\n> > ordered for postgres.\n> \n> Traditional BSD FFS or Solaris UFS is probably the best bet for postgres.\n\nThat is my opinion. BSD FFS seems to be general enough to give good\nperformance for a large scale of application needs. It is not as fast\nas XFS for streaming large files (media), and it doesn't optimize small\nfiles below the 1k size (fragments), and it does require fsck on reboot.\n\nHowever, looking at all those for PostgreSQL, the costs of the new Linux\nfile systems seem pretty high, especially considering our need for\nfsync().\n\nWhat I am really concerned about is when xfs/reiser become the default\nfile systems for Linux, and people complain about PostgreSQL\nperformance. 
And if we require special file systems, we lose some of\nour ability to easily grow. Because of ext2's problems with crash\nrecovery, who is going to want to put other data on that file system\nwhen they have xfs/reiser available? And boots are going to have to\nfsck that ext2 file system.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 12:48:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > Sure, we can do that now. What do we do when these are the default file\n> > systems for Linux? We can tell them to create other types of file\n> \n> What is a 'default file system' ? I know that until now, everybody is using \n> ext2. But that's only because there hasn't been anything comparable. Now we \n> see ReiserFS, and my SuSE installation offers the choice. In the future, I \n> believe that people can choose from ext2, ReiserFS, xfs, ext3 and maybe more.\n\nBut some day the default will be a log-based file system, and people\nwill have to hunt around to create a non-log based one.\n\n> > systems, but that is a pretty big hurdle. I wonder if it would be\n> > easier to get reiser/xfs to make some modifications.\n> \n> No, I don't think it's a big hurdle. If you just want to play with \n> PostgreSQL, you won't care. If you're serious, you'll repartition.\n\nYes, but we could get a reputation for slowness on these log-based file\nsystems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 12:50:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I got some information from Stephen Tweedie on this - please keep him\n> \"Cc:\" as he's not on this list\n> \n> ************************************************************************\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > I was talking to a Linux user yesterday, and he said that performance\n> > using the xfs file system is pretty bad. He believes it has to do with\n> > the fact that fsync() on log-based file systems requires more writes.\n> \n> \n> Performance doing what? XFS has known performance problems doing\n> unlinks and truncates, but not synchronous IO. The user should be\n> using fdatasync() for databases, btw, not fsync().\n\nThis is hugely helpful. In PostgreSQL 7.1, we do use fdatasync() by\ndefault if it is available on a platform.\n\n\n> First, XFS, ext3 and reiserfs are *NOT* log-based filesystems. They\n> are journaling filesystems. They have a log, but they are not\n> log-based because they do not store data permanently in a log\n> structure. Berkeley LFS, Sprite and Spiralog are log-based\n> filesystems.\n\nSorry, I get those mixed up.\n\n> > With a standard BSD/ext2 file system, WAL writes can stay on the same\n> > cylinder to perform fsync. Is that true of log-based file systems?\n> \n> Not true on ext2 or BSD. Write-aheads are _usually_ close to the\n> inode, but not always. For true log-based filesystems, writes are\n> always completely sequential, so the issue just goes away. For\n> journaling filesystems, depending on the setup there may be a seek to\n> the journal involved, but some journaling filesystems can use a\n> separate disk for the journal so no seek is required.\n> \n> > I know xfs and reiser are both log based. 
Do we need to be concerned\n> > about PostgreSQL performance on these file systems? I use BSD FFS with\n> > soft updates here, so it doesn't affect me.\n> \n> A database normally preallocates its data files and then performs most\n> of its writes using update-in-place. In such cases, fsync() is almost\n> always the wrong thing to be doing --- the data writes have changed\n> nothing in the inode except for the timestamps, and there's no need to\n> flush the timestamps to disk for every write. fdatasync() is\n> designed for this --- if the only inode change is timestamps,\n> fdatasync() will skip the seek to the inode and will only update the\n> data. If any significant inode fields have been changed, then a full\n> flush is done.\n\nWe do pre-allocate our log file space in chunks to avoid inode/block\nindex writes.\n\n> Using fdatasync, most filesystems will incur no seeks for data flush,\n> regardless of whether the filesystem is journaling or not.\n\nThanks. That is a big help. I wonder if people reporting performance\nproblems were using 7.0.3. We only added fdatasync() in 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 13:49:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Michael Samuel wrote:\n\n>\n> > Remember, general purpose file systems must do for files what Postgres is\n> > already doing for records. You will always have extra work. I am seriously\n> > thinking of trying a FAT32 as pg_xlog. 
I wonder if it will improve performance,\n> > or if there is just something fundamentally stupid about FAT32 that will make\n> > it worse?\n>\n> Well, for starters, file permissions...\n>\n> Ext2 would kick arse over FAT32 for performance.\n\nOK, I'll bite.\n\nIn a database environment where file creation is not such an issue, why would ext2\nbe faster?\n\nThe FAT file system has, AFAIK, very little overhead for file writes. It simply\nwrites the two FAT tables on file extension, and data. Depending on cluster size,\nthere is probably even less happening there.\n\nI don't think that anyone is saying that FAT is the answer in a production\nenvironment, but maybe we can do a comparison of various file systems and see if any\nperformance issues show up.\n\nI mentioned FAT only because I was thinking about how postgres would perform on a\nvery simple file system, one which bypasses most of the normal stuff a \"good\"\ngeneral purpose file system would do. While I was thinking this, it occurred to me\nthat FAT was about the cheesiest simple file system one could find, short of a ram\ndisk, and maybe we could use it to test the assumptions about performance impact of\nthe file system on postgres.\n\nJust a thought. If you know of some reason why ext2 would perform better in the\npostgres environment, I would love to hear why, I'm very curious.\n\n", "msg_date": "Fri, 04 May 2001 13:54:26 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Hi,\n\nOn Fri, May 04, 2001 at 01:49:54PM -0400, Bruce Momjian wrote:\n> > \n> > Performance doing what? XFS has known performance problems doing\n> > unlinks and truncates, but not synchronous IO. The user should be\n> > using fdatasync() for databases, btw, not fsync().\n> \n> This is hugely helpful. 
In PostgreSQL 7.1, we do use fdatasync() by\n> default if it is available on a platform.\n\nGood --- fdatasync is defined in SingleUnix, so it's probably safe to\nprobe for it and use it by default if it is there.\n\nThe 2.2 Linux kernel does not have fdatasync implemented, but glibc\nwill fall back to fsync if that's all that the kernel supports. 2.4\nimplements both with the required semantics.\n\n--Stephen\n", "msg_date": "Fri, 4 May 2001 19:03:05 +0100", "msg_from": "\"Stephen C. Tweedie\" <sct@redhat.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> Hi,\n> \n> On Fri, May 04, 2001 at 01:49:54PM -0400, Bruce Momjian wrote:\n> > > \n> > > Performance doing what? XFS has known performance problems doing\n> > > unlinks and truncates, but not synchronous IO. The user should be\n> > > using fdatasync() for databases, btw, not fsync().\n> > \n> > This is hugely helpful. In PostgreSQL 7.1, we do use fdatasync() by\n> > default if it is available on a platform.\n> \n> Good --- fdatasync is defined in SingleUnix, so it's probably safe to\n> probe for it and use it by default if it is there.\n> \n> The 2.2 Linux kernel does not have fdatasync implemented, but glibc\n> will fall back to fsync if that's all that the kernel supports. 2.4\n> implements both with the required semantics.\n\nOK, that is something we found too, that fdatasync() was there on some\nplatforms, but was really just an fsync(). I believe some HPUX\nplatforms had that.\n\nOK, so they need a 2.4 kernel to properly test performance of Reiser/xfs\nwith fdatasync().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 14:33:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "At 02:09 AM 5/4/01 -0500, Thomas Swan wrote:\n> I think it's worth noting that Oracle has been petitioning the\n> kernel developers for better raw device support: in other words,\n> the ability to write directly to the hard disk and bypassing the\n> filesystem all together. \n\nBut there could be other reasons why Oracle would want to do raw stuff.\n\n1) They have more things to sell - management modules/software. More\ntraining courses. Certified blahblahblah. More features in brochure.\n2) It just helps make things more proprietary. Think lock in.\n\nAll that for maybe 10% performance increase?\n\nI think it's more advantageous for Postgresql to keep the filesystem layer\nof abstraction, than to do away with it, and later reinvent certain parts\nof it along with new bugs.\n\nWhat would be useful is if one can specify where the tables, indexes, WAL\nand other files go. That feature would probably help improve performance\nfar more. 
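Concretely: in 7.1 the WAL already lives in its own pg_xlog directory under the data directory, so just that piece can be relocated with a symlink. A minimal sketch using throwaway demo paths (illustrative only, not a real cluster layout; on a real cluster, stop the postmaster first):

```python
import os
import shutil

# Sketch: move pg_xlog to another filesystem and leave a symlink behind.
# The paths are throwaway demo locations, not a real cluster.
pgdata = "/tmp/pgdata_demo"
fast = "/tmp/fastdisk_demo"   # stands in for a faster disk or RAM disk mount

for d in (pgdata, fast):
    shutil.rmtree(d, ignore_errors=True)
os.makedirs(os.path.join(pgdata, "pg_xlog"))
os.makedirs(fast)
# stand-in for an existing WAL segment
open(os.path.join(pgdata, "pg_xlog", "0000000000000000"), "w").close()

# relocate the directory, then symlink so the postmaster still finds it
shutil.move(os.path.join(pgdata, "pg_xlog"), os.path.join(fast, "pg_xlog"))
os.symlink(os.path.join(fast, "pg_xlog"), os.path.join(pgdata, "pg_xlog"))

print(os.listdir(os.path.join(pgdata, "pg_xlog")))   # the segment, via the link
```

Relocating individual table files the same way is more fragile, since the backend creates and unlinks those files itself; the pg_xlog directory is the safe unit to move.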
\n\nFor example: you could then stick the WAL on a battery backed up RAM disk.\nHow much total space does a WAL log need?\n\nA battery backed RAM disk might even be cheaper than Brand X RDBMS\nProprietary Feature #5.\n\nCheerio,\nLink.\n\n", "msg_date": "Sun, 06 May 2001 01:07:51 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 02:09 AM 5/4/01 -0500, Thomas Swan wrote:\n> > I think it's worth noting that Oracle has been petitioning the\n> > kernel developers for better raw device support: in other words,\n> > the ability to write directly to the hard disk and bypassing the\n> > filesystem all together.\n> \n> But there could be other reasons why Oracle would want to do raw stuff.\n> \n> 1) They have more things to sell - management modules/software. More\n> training courses. Certified blahblahblah. More features in brochure.\n> 2) It just helps make things more proprietary. Think lock in.\n> \n> All that for maybe 10% performance increase?\n> \n> I think it's more advantageous for Postgresql to keep the filesystem layer\n> of abstraction, than to do away with it, and later reinvent certain parts\n> of it along with new bugs.\n\nI just did a test of putting pg_xlog on a FAT file system, and my first rough\ntests (pgbench) show an approximate 20% performance increase over ext2 with\nfsync enabled.\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 05 May 2001 13:16:43 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n>> > Yes, this double-writing is a problem. Suppose you have your WAL on a\n>> > separate drive. You can fsync() WAL with zero head movement. 
With a\n>> > log based file system, you need two head movements, so you have gone\n>> > from zero movements to two.\n>> \n>> It may be worse depending on how the filesystem actually does\n>> journalling. I wonder if an fsync() may cause ALL pending\n>> meta-data to be updated (even metadata not related to the \n>> postgresql files).\n>> \n>> Do you know if reiser or xfs have this problem?\n\n> I don't know, but the Linux user reported xfs was really slow.\n\ni think this should be tested in more detail: i once tried this\nlightly (running pgbench against postgresql 7.1beta4) with\ndifferent filesystems: ext2, reiserfs and XFS and reproducibly\ni got about 15% better results running on XFS ... ok - it's\nnot a very big test, but i think it might be worth to really\ndo an a/b test before seeing it as a fact that postgresql is\nslow on XFS (and maybe reiserfs too ... but reiserfs has had\nperformance problems in certain situations anyway)\n\nXFS is a journaling fs, but it does all its work in a very\nclever way (delayed allocation etc.) - so usually you should\nunder normal conditions get decent performance out of it -\notherwise it might be worth sending a mail to the XFS\nmailinglist (reiserfs maybe ditto)\n\nt\n\n-- \nthomas graichen <tgr@spoiled.org> ... perfection is reached, not\nwhen there is no longer anything to add, but when there is no\nlonger anything to take away. 
--- antoine de saint-exupery\n", "msg_date": "Sat, 5 May 2001 21:41:25 +0200", "msg_from": "thomas graichen <list-pgsql.hackers@spoiled.org>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "At 01:16 PM 5/5/01 -0400, mlw wrote:\n>Lincoln Yeoh wrote:\n>> \n>> All that for maybe 10% performance increase?\n>> \n>> I think it's more advantageous for Postgresql to keep the filesystem layer\n>> of abstraction, than to do away with it, and later reinvent certain parts\n>> of it along with new bugs.\n>\n>I just did a test of putting pg_xlog on a FAT file system, and my first rough\n>tests (pgbench) show an approximate 20% performance increase over ext2 with\n>fsync enabled.\n\nOK. I slouch corrected :). It's more than 10%.\n\nHowever in the same message I did also say:\n>What would be useful is if one can specify where the tables, indexes, WAL\n>and other files go. That feature would probably help improve performance\n>far more. \n>\n>For example: you could then stick the WAL on a battery backed up RAM disk.\n>How much total space does a WAL log need?\n>\n>A battery backed RAM disk might even be cheaper than Brand X RDBMS\n>Proprietary Feature #5.\n\nAnd your experiments do help show that it is useful to be able to specify\nwhere things go, that putting just the WAL somewhere else makes things 20%\nfaster. So you don't have to put everything on a pgfs. Just the WAL on some\nother FS (even FAT32, ick ;) ).\n\n---\nOK we can do that with symlinks, but is there a PGSQL Recommended or\nStandard way to do it, so as to reduce administrative errors, and at least\nhelp improve consistency with multiadmin pgsql installations?\n\nThe WAL and DBs are in separate directories, so this makes things easy. But\nthe object names are now all numbers so that makes things a bit harder -\nand what to do with temp tables?\n\nWould it be good to have tables in one directory and indexes in another? 
Or\nmost people optimize on a specific table/index basis? Where does PGSQL do\nthe on-disk sorts?\n\nHow about naming the DB objects <object ID>.<object name>?\ne.g\n\n121575.testtable\n125575.testtableindex\n\n(or the other way round - name.OID - harder for DB, easier for admin?)\n\nThey'll still be unique, but now they're admin readable. Slower? e.g. at\nthat code point, pgsql no longer knows the object's name, and wants to\nrefer to everything by just numbers? \n\nI apologize if there was already a long discussion on this. I seem to\nrecall Bruce saying that the developers agonized over this.\n\nCheerio,\nLink.\n\n\n", "msg_date": "Sun, 06 May 2001 19:24:43 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> At 01:16 PM 5/5/01 -0400, mlw wrote:\n> >Lincoln Yeoh wrote:\n> >>\n> >> All that for maybe 10% performance increase?\n> >>\n> >> I think it's more advantageous for Postgresql to keep the filesystem layer\n> >> of abstraction, than to do away with it, and later reinvent certain parts\n> >> of it along with new bugs.\n> >\n> >I just did a test of putting pg_xlog on a FAT file system, and my first rough\n> >tests (pgbench) show an approximate 20% performance increase over ext2 with\n> >fsync enabled.\n> \n> OK. I slouch corrected :). It's more than 10%.\n> \n> However in the same message I did also say:\n> >What would be useful is if one can specify where the tables, indexes, WAL\n> >and other files go. 
That feature would probably help improve performance\n> >far more.\n> >\n> >For example: you could then stick the WAL on a battery backed up RAM disk.\n> >How much total space does a WAL log need?\n> >\n> >A battery backed RAM disk might even be cheaper than Brand X RDBMS\n> >Proprietary Feature #5.\n> \n> And your experiments do help show that it is useful to be able to specify\n> where things go, that putting just the WAL somewhere else makes things 20%\n> faster. So you don't have to put everything on a pgfs. Just the WAL on some\n> other FS (even FAT32, ick ;) ).\n\nSo you propose pgwalfs ? ;)\n\nIt may be much easier to implement than a full fs.\n\nHow hard would it be to let wal reside on a (raw) device ?\n\nIf we already pre-allocate a required number of fixed-size files would\nit be too \nhard to replace them with plain (raw) devices and test for possible\nperformance gains ?\n\n> \n> How about naming the DB objects <object ID>.<object name>?\n> e.g\n> \n> 121575.testtable\n> 125575.testtableindex\n> \n\nThis sure seems to be an elegant solution for the problem that seems to\nbe impossible \nto solve with symlinks and such. 
Even the IMHO hardest to solve problem\n- RENAME - can \nprobably be done in a transaction-safe manner by doing a\nlink(oid.<newname>) in the \nbeginning and selective unlink(oid.<newname/oldname>) at commit time.\n\n--------------------\nHannu\n", "msg_date": "Sun, 06 May 2001 14:04:28 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Lincoln Yeoh wrote:\n> >\n> > At 01:16 PM 5/5/01 -0400, mlw wrote:\n> > >Lincoln Yeoh wrote:\n> > >>\n> > >> All that for maybe 10% performance increase?\n> > >>\n> > >> I think it's more advantageous for Postgresql to keep the filesystem layer\n> > >> of abstraction, than to do away with it, and later reinvent certain parts\n> > >> of it along with new bugs.\n> > >\n> > >I just did a test of putting pg_xlog on a FAT file system, and my first rough\n> > >tests (pgbench) show an approximate 20% performance increase over ext2 with\n> > >fsync enabled.\n> >\n> > OK. I slouch corrected :). It's more than 10%.\n> >\n> > However in the same message I did also say:\n> > >What would be useful is if one can specify where the tables, indexes, WAL\n> > >and other files go. That feature would probably help improve performance\n> > >far more.\n> > >\n> > >For example: you could then stick the WAL on a battery backed up RAM disk.\n> > >How much total space does a WAL log need?\n> > >\n> > >A battery backed RAM disk might even be cheaper than Brand X RDBMS\n> > >Proprietary Feature #5.\n> >\n> > And your experiments do help show that it is useful to be able to specify\n> > where things go, that putting just the WAL somewhere else makes things 20%\n> > faster. So you don't have to put everything on a pgfs. Just the WAL on some\n> > other FS (even FAT32, ick ;) ).\n> \n> So you propose pgwalfs ? ;)\n\nI don't know about a \"pgwalfs\" too much work. 
I have had some time to grapple\nwith my feelings about FAT, and you know what? I don't hate the idea. I would,\nof course, like to look through the driver code and see if there are any\ntechnical reasons why it should be excluded.\n\nFAT is almost perfect for WAL, and if I can figure out how to get the \"base\"\ndirectory to get the same performance, I'd think about putting it there as\nwell.\n\nThe ReiserFS issues touched on some vague suspicions I had about fsync. Maybe\nI'm over reacting, but there are reasons why the oracles manage their own table\nspaces.\n\nBack to FAT. FAT is probably the most simple file system I can think of. As\nlong as it writes to disk when it gets synched, and doesn't lose things, it's\nperfect. Postgres maintains much of the coherency issues, there is no real\nproblem with permissions because it will be owned by the postgres super user,\netc. I would never suggest FAT as a general purpose file system, but, geez, as\na special purpose single user (postgres) it seems an ideal answer to what will\nbe an increasingly hard problem of advanced file systems.\n\nAside from a general, and well deserved, disdain for FAT, what are the\ntechnical \"cons\" of such a proposal? 
If we can get the Linux kernel (and other\nunices) to accept IOCTLs to direct space allocation, and/or write up a white\npaper on how to use this for postgres, why wouldn't it be a reasonable\nstrategy?\n\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 06 May 2001 08:53:56 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": ">Lincoln Yeoh wrote:\n>> \n>> >Lincoln Yeoh wrote:\n>> >For example: you could then stick the WAL on a battery backed up RAM disk.\n>> >How much total space does a WAL log need?\n>> >\n>> >A battery backed RAM disk might even be cheaper than Brand X RDBMS\n>> >Proprietary Feature #5.\n>> \n>> And your experiments do help show that it is useful to be able to specify\n>> where things go, that putting just the WAL somewhere else makes things 20%\n>> faster. So you don't have to put everything on a pgfs. Just the WAL on some\n>> other FS (even FAT32, ick ;) ).\n\nAt 02:04 PM 5/6/01 +0200, Hannu Krosing wrote:\n>So you propose pgwalfs ? ;)\n\nNah. I'm proposing the opposite in fact.\n\nI'm saying so far there appears to be no real need to come up with a\nspecial filesystem. Stick to using existing/future filesystems. Just make\nit easy and safe enough for DBA's to put the objects on whatever filesystem\nthey choose. So long as the O/S kernel/driver people support the hardware\nor filesystem, postgresql will take advantage of it with little if any\nextra work.\n\nIn fact as mlw's experiments show, you can put the WAL on FAT (FAT16?) for\na 20% performance increase. How much better would a raw device be? Would it\nreally be worth all that hassle? 
For instance if you need to resize the FAT\npartition, you could probably use fips, Partition Magic or some other cost\neffective solution - no need for pgsql developers or anybody to reinvent\nanything.\n\nMy proposed but untested idea is that you could get a significant\nperformance increase by putting the WAL on popular filesystems running on\nbattery backed RAM drives (or other special hardware). 128MB RAM should be\nenough for small setups? \n\nDon't know how much these things cost, but I believe that when you need the\nspeed, they'll be more worthwhile than a special proprietary filesystem.\n\nOk, just found:\nhttp://www.expressdata.com.au/Products/ProductsList.asp?SUPPLIER_NAME=PLATYP\nUS+TECHNOLOGY&SUBCATEGORY_NAME=QikDrive2#PRODUCTTITLE\n\nAUD$1,624.70 = USD843.06. Not cheap but not way out of reach. Haven't found\nother competing products yet. Must be somewhere.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 07 May 2001 00:02:38 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Even the IMHO hardest to solve problem\n> - RENAME - can \n> probably be done in a transaction-safe manner by doing a\n> link(oid.<newname>) in the \n> beginning and selective unlink(oid.<newname/oldname>) at commit time.\n\nNope. 
Consider\n\n\tbegin;\n\trename a to b;\n\trename b to a;\n\tend;\n\nAnd don't tell me you'll solve this by ignoring failures from link().\nThat's a recipe for losing your data...\n\nI would ask people who think they have a solution to please go back and\nreread the very long discussions we have had on this point in the past.\nNobody particularly likes numeric filenames, but there really isn't any\nother workable answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 12:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> OK we can do that with symlinks, but is there a PGSQL Recommended or\n> Standard way to do it, so as to reduce administrative errors, and at least\n> help improve consistency with multiadmin pgsql installations?\n\nNot yet. There should be support for this. See\ndoc/TODO.detail/tablespaces.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 12:05:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "At 12:03 PM 5/6/01 -0400, Tom Lane wrote:\n>Hannu Krosing <hannu@tm.ee> writes:\n>> Even the IMHO hardest to solve problem\n>> - RENAME - can \n>> probably be done in a transaction-safe manner by doing a\n>> link(oid.<newname>) in the \n>> beginning and selective unlink(oid.<newname/oldname>) at commit time.\n>\n>Nope. Consider\n>\n>\tbegin;\n>\trename a to b;\n>\trename b to a;\n>\tend;\n>\n>And don't tell me you'll solve this by ignoring failures from link().\n>That's a recipe for losing your data...\n>\n>I would ask people who think they have a solution to please go back and\n>reread the very long discussions we have had on this point in the past.\n>Nobody particularly likes numeric filenames, but there really isn't any\n>other workable answer.\n\nOK. 
Found one of the discussions at:\nhttp://postgresql.readysetnet.com/mhonarc/pgsql-hackers/2000-03/threads.html\n#00088\n\nConclusion calling stuff oid.relname doesn't really work. Sorry to have\nbrought it up again.\n\nAnother idea that's probably more messy than it's worth: \n\nMain object still called <oid> with a symlink called <oid.originalrelname>.\nDB really just uses <oid>.\n\nRename= adds symlink called <oid.newrelname>, doesn't remove symlinks\n(symlinks more for show!). \n\nCommitted drop table does what 7.1 does with the main oid entry. \n\nVacuum cleans up the symlinks leaving just a single valid one or zaps all\nif the table has been dropped. \n\nFor windows create empty files named oid.relname instead of symlinks.\nWindows will definitely like .verylongrelname extensions ;).\n\nKinda messy and kludgy. Throw in the performance reduction and Ick! \n\nI probably have to think harder :), maybe there's just no good way :(. \n\nAh well,\nLink.\n\n", "msg_date": "Mon, 07 May 2001 01:56:18 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Even the IMHO hardest to solve problem\n> > - RENAME - can\n> > probably be done in a transaction-safe manner by doing a\n> > link(oid.<newname>) in the\n> > beginning and selective unlink(oid.<newname/oldname>) at commit time.\n> \n> Nope. Consider\n> \n> begin;\n> rename a to b;\n> rename b to a;\n> end;\n> \n> And don't tell me you'll solve this by ignoring failures from link().\n> That's a recipe for losing your data...\n\nI guess link() failures can be safely ignored _as long as_ we check that \nwe have the right link after doing it. 
I can't see how it will lose\ndata.\n\n> I would ask people who think they have a solution to please go back and\n> reread the very long discussions we have had on this point in the past.\n\nI think I have now (No way to guarantee I have read _everything_ about\nit, \nbut I did hit about ~10 messages on oid_relname naming scheme).\n\nthe most serious objection seemed to be that we need to remember the \npostgres tablename while it would be much easier to use only oids .\n\nI guess we could hit some system limits here (running out of directory \nentries or reaching the maximum number of links to a file) but at least\non \nlinux i was able to make >10000 links to one file with no problems.\n\nnow that i think of it I have one concern - it would require extra work \nto use tablenames like \"/etc/passwd\" or others that use characters that\nare \nreserved in filenames which are ok to use in 7.1.\n\nhannu=# create table \"/etc/passwd\"(\nhannu(# login text,\nhannu(# uid int,\nhannu(# gid int\nhannu(# );\nCREATE\nhannu=# \\dt\n List of relations\n Name | Type | Owner \n-------------+-------+-------\n /etc/passwd | table | hannu\n\nSo if people start using names like these it will not be easy to go back\n;)\n\n> Nobody particularly likes numeric filenames, but there really isn't any\n> other workable answer.\n\nAt least we could put links on system relations, so it would be \neasier to find them. \n\nI guess one is not supposed to rename/drop system tables ?\n\n---------------------\nHannu\n", "msg_date": "Mon, 07 May 2001 10:12:32 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: TABLE RENAME/NUMERIC FILENAMES (Was: New Linux xfs/reiser\n\tfile systems)" } ]
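Tom's two-line counterexample can be checked directly at the filesystem level. Below is a minimal sketch (plain Python in a scratch directory; the oid 123 and the table names are invented, and this is not backend code) of Hannu's link(oid.&lt;newname&gt;) step: the second rename's link() call necessarily fails with EEXIST, because the link under the old name still exists from before the transaction.

```python
import os
import tempfile

def begin_rename(datadir, oid, old, new):
    """Hannu's scheme: at the start of the transaction, add a hard
    link under the new name; the old link is only removed at commit."""
    os.link(os.path.join(datadir, "%d.%s" % (oid, old)),
            os.path.join(datadir, "%d.%s" % (oid, new)))

datadir = tempfile.mkdtemp()
open(os.path.join(datadir, "123.a"), "wb").close()   # table "a", oid 123

begin_rename(datadir, 123, "a", "b")      # rename a to b: link() succeeds
try:
    begin_rename(datadir, 123, "b", "a")  # rename b to a: link() hits EEXIST
    link_failed = False
except FileExistsError:
    link_failed = True   # "123.a" still exists from before the transaction
```

Ignoring the EEXIST would only be safe after confirming that the existing 123.a is a link to the same inode (Hannu's "check that we have the right link" caveat); a collision with an unrelated file would otherwise go unnoticed.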
[ { "msg_contents": "hi...\n I'm configuring a web-based e-commerce site called pg_market that uses PHP\nand postgresql and I keep coming up with this error:\n\nWarning: PostgreSQL query failed: ERROR: Relation 'order_cntry' does not exist\nin /home/users/h/haresh/public_html/pgmarket-1.2.0/lib/dblib.inc.php on line 84\n\nCan't execute query\n\nSELECT name FROM order_cntry WHERE cntry_id = 974\n\nThis script cannot continue, terminating.\n\nThe table was created with the same user that is used to access the database\nin the script. I've tried everything from granting permissions to doing a dump\nand repost. Hope someone knows what might be the problem.\n", "msg_date": "Wed, 02 May 2001 15:23:17 -0400 (EDT)", "msg_from": "haresh@mail1.hub.org", "msg_from_op": true, "msg_subject": "strange table access error using PHP" } ]
[ { "msg_contents": "\nI am reading an interesting discussion about fsync() and disk flush on\nSlashdot. The discussion starts about log-based file systems, then\nmoves to disk fsync about 20% into the discussion.\n\nLook for:\n\n\tReal issue is HARD DRIVE CACHEs\n\nAll this discussion relates to WAL and our use of fsync().\n\nThere is also a mention of PostgreSQL:\n\n\tI just tried both about 3 weeks ago. I first tried reieserfs. Worked\n\tfine until I tried to load a postgresql database. Never finished after\n\trunning all night--it realy thrashed the drive (I also run a RAID level\n\t0 on the filesystem). Switched to xfs and everything ran great (db\n\tloaded in a few hours). Been running xfs ever since. I'd like to see xfs\n\tget put into the stock kernel along with reiserfs. It's possible my\n\tproblems with reiserfs have since been fixed.\n\nAlso, in reading the thread, it seems xfs is much more log-based than\nReiser, so we may only have WAL/fsync() performance problems on xfs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 16:03:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Fsync Slashdot discussion" } ]
[ { "msg_contents": "To avoid getting into states where a btree index is corrupt (or appears\nthat way), it is absolutely critical that the datatype provide a unique,\nconsistent sort order. In particular, the operators = <> < <= > >= had\nbetter all agree with each other and with the 3-way-comparison support\nfunction about the ordering of any two non-NULL data values.\n\nAfter tracing some Assert failures in the new planner statistics code\nI'm working on, I have realized that several of our existing datatypes\nfail to meet this fundamental requirement, and therefore are prone to\nserious misbehavior when trying to index \"weird\" values. In particular,\ntype NUMERIC does not return consistent results for comparisons\ninvolving \"NaN\" values, and several of the date/time types do not return\nconsistent results for comparisons involving \"INVALID\" values.\n(Example: numeric_cmp will assert that two NaNs are equal, whereas\nnumeric_eq will assert that they aren't. Worse, numeric_cmp will assert\nthat a NaN is equal to any non-NaN, too. The date/time routines avoid\nthe latter mistake but make the former one.)\n\nI am planning to fix this by ensuring that all these operations agree\non an (arbitrarily chosen) sort order for the \"weird\" values of these\ntypes. What I'm wondering about is whether to insert the fixes into\n7.1.1 or wait for 7.2. In theory changing the sort order might break\nexisting user indexes, and should therefore be avoided until an initdb\nis needed. But: any indexes that contain these values are likely broken\nalready, since in fact we don't have a well-defined sort order right now\nfor these values.\n\nA closely related problem is that the \"current time\" special value\nsupported by several of the date/time datatypes is inherently not\ncompatible with being indexed, since its sort order relative to\nordinary time values keeps changing. 
We had discussed removing this\nspecial case, and I think agreed to do so, but it hasn't happened yet.\n\nWhat I'm inclined to do is force consistency of the comparison operators\nnow (for 7.1.1) and then remove \"current time\" for 7.2, but perhaps it'd\nbe better to leave the whole can of worms alone until 7.2. Comments\nanyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 17:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Collation order for btree-indexable datatypes" }, { "msg_contents": "\n> I am planning to fix this by ensuring that all these operations agree\n> on an (arbitrarily chosen) sort order for the \"weird\" values of these\n> types. What I'm wondering about is whether to insert the fixes into\n> 7.1.1 or wait for 7.2. In theory changing the sort order might break\n> existing user indexes, and should therefore be avoided until an initdb\n> is needed. But: any indexes that contain these values are likely broken\n> already, since in fact we don't have a well-defined sort order right now\n> for these values.\n\n> What I'm inclined to do is force consistency of the comparison operators\n> now (for 7.1.1) and then remove \"current time\" for 7.2, but perhaps it'd\n> be better to leave the whole can of worms alone until 7.2. Comments\n> anyone?\n\nAssuming that the changes are reasonably safe, I think you're\nright. 
What parts of the changes would require an initdb, would new\nfunctions need to be added or the index ops need to change or would\nit be fixes to the existing functions (if the latter, wouldn't a recompile\nand dropping/recreating the indexes be enough?)\n\n", "msg_date": "Wed, 2 May 2001 15:57:23 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Collation order for btree-indexable datatypes" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> What parts of the changes would require an initdb, would new\n> functions need to be added or the index ops need to change or would\n> it be fixes to the existing functions (if the latter, wouldn't a recompile\n> and dropping/recreating the indexes be enough?)\n\nYes, dropping and recreating any user indexes that contain the problematic\nvalues would be sufficient to get you out of trouble. We don't need any\nsystem catalog changes for this, AFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 19:02:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Collation order for btree-indexable datatypes " }, { "msg_contents": "On Wed, 2 May 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > What parts of the changes would require an initdb, would new\n> > functions need to be added or the index ops need to change or would\n> > it be fixes to the existing functions (if the latter, wouldn't a recompile\n> > and dropping/recreating the indexes be enough?)\n> \n> Yes, dropping and recreating any user indexes that contain the problematic\n> values would be sufficient to get you out of trouble. We don't need any\n> system catalog changes for this, AFAICS.\n\nLooking back, I misread the original message. 
I thought you were saying\nthat it needed to wait for an initdb and so would be bad in a dot release,\nbut it was just the breaking of indexes thing, but since they're already\npretty broken, I don't see much of a loss by fixing it.\n\n\n", "msg_date": "Wed, 2 May 2001 16:05:49 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Collation order for btree-indexable datatypes " }, { "msg_contents": "> A closely related problem is that the \"current time\" special value\n> supported by several of the date/time datatypes is inherently not\n> compatible with being indexed, since its sort order relative to\n> ordinary time values keeps changing. We had discussed removing this\n> special case, and I think agreed to do so, but it hasn't happened yet.\n> \n> What I'm inclined to do is force consistency of the comparison operators\n> now (for 7.1.1) and then remove \"current time\" for 7.2, but perhaps it'd\n> be better to leave the whole can of worms alone until 7.2. Comments\n> anyone?\n\nComparing NaN/Invalid seems so off the beaten path that we would just\nwait for 7.2. That and no one has reported a problem with it so far.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 19:19:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Collation order for btree-indexable datatypes" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Comparing NaN/Invalid seems so off the beaten path that we would just\n> wait for 7.2. That and no one has reported a problem with it so far.\n\nDo you consider \"vacuum analyze\" on the regression database to be\noff the beaten path? 
How about creating an index on a numeric column\nthat contains NaNs, or a timestamp column that contains Invalid?\n\nUnless you believe these values are not being used in the field at all,\nthere's a problem. (And if you do believe that, you shouldn't be\nworried about my changing their behavior ;-))\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 21:32:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Collation order for btree-indexable datatypes " }, { "msg_contents": "If you feel strongly about it, go ahead. I didn't see any problem\nreports on it, and it seemed kind of iffy, so I thought we should hold\nit.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Comparing NaN/Invalid seems so off the beaten path that we would just\n> > wait for 7.2. That and no one has reported a problem with it so far.\n> \n> Do you consider \"vacuum analyze\" on the regression database to be\n> off the beaten path? How about creating an index on a numeric column\n> that contains NaNs, or a timestamp column that contains Invalid?\n> \n> Unless you believe these values are not being used in the field at all,\n> there's a problem. (And if you do believe that, you shouldn't be\n> worried about my changing their behavior ;-))\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 21:48:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Collation order for btree-indexable datatypes" } ]
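The consistency requirement in this thread is easy to state as code. Here is a toy sketch (plain Python on floats as a stand-in for NUMERIC, not the actual C routines; "NaN sorts after everything" is the same kind of arbitrarily chosen order Tom describes): when every boolean operator is derived from the single three-way comparator, the operators and the btree support function cannot disagree.

```python
import math

def numeric_cmp(a, b):
    """Three-way comparison imposing a total order: two NaNs compare
    equal, and a NaN sorts after every ordinary value (an arbitrary
    but fixed choice, so btree always has a well-defined slot)."""
    a_nan, b_nan = math.isnan(a), math.isnan(b)
    if a_nan and b_nan:
        return 0
    if a_nan:
        return 1
    if b_nan:
        return -1
    return (a > b) - (a < b)

# Deriving = <> < <= from the comparator makes inconsistency impossible.
def numeric_eq(a, b): return numeric_cmp(a, b) == 0
def numeric_ne(a, b): return numeric_cmp(a, b) != 0
def numeric_lt(a, b): return numeric_cmp(a, b) < 0
def numeric_le(a, b): return numeric_cmp(a, b) <= 0
```

With this structure, numeric_cmp can never claim two NaNs are equal while numeric_eq claims they are not, which is the kind of mismatch described above.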
[ { "msg_contents": "I have been using PostgreSQL and XFS file systems on SGI's for many\nyears, and PostgreSQL is fast. Dumping and loading 100GB of table\nfiles takes less than one day elapsed (provided there is no other\nactivity on that database -- large amounts of transactional activity\nwill slow things down). I always turn off fsync. Most of my experience has\nbeen with 6.5.3, although I've been using 7.1 and I don't see much\nof a difference in performance.\n\nI don't know if the Linux version of XFS is substantially different\nthan the SGI version, but XFS is a wonderful filesystem to use and\nadminister (at least on SGI's).\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n", "msg_date": "Wed, 2 May 2001 18:45:16 -0400 (EDT)", "msg_from": "bruc@stone.congenomics.com (Robert E. Bruccoleri)", "msg_from_op": true, "msg_subject": "XFS File systems and PostgreSQL" }, { "msg_contents": "bruc@stone.congenomics.com (Robert E. Bruccoleri) writes:\n\n> I have been using PostgreSQL and XFS file systems on SGI's for many\n> years, and PostgreSQL is fast. Dumping and loading 100GB of table\n> files takes less than one day elapsed (provided there is no other\n> activity on that database -- large amounts of transactional activity\n> will slow things down). I always turn off fsync. \n ^^^^^^^^^^^^^^^^^^^^^^^\n\nThen your performance numbers are largely useless for those of us that \nlike our data. 
;)\n\nThe point at issue is the performance of fsync() on journaling\nfilesystems...\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "02 May 2001 19:44:42 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "> I have been using PostgreSQL and XFS file systems on SGI's for many\n> years, and PostgreSQL is fast. Dumping and loading 100GB of table\n> files takes less than one day elapsed (provided there is no other\n> activity on that database -- large amounts of transactional activity\n> will slow things down). I always turn off fsync. Most of my experience has\n> been with 6.5.3, although I've been using 7.1 and I don't see much\n> of a difference in performance.\n> \n> I don't know if the Linux version of XFS is substantially different\n> than the SGI version, but XFS is a wonderful filesystem to use and\n> administer (at least on SGI's).\n\nYes, but you turn off fsync. It is with fsync on that XFS will be slow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 19:51:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "> bruc@stone.congenomics.com (Robert E. Bruccoleri) writes:\n> \n> > I have been using PostgreSQL and XFS file systems on SGI's for many\n> > years, and PostgreSQL is fast. 
Dumping and loading 100GB of table\n> > files takes less than one day elapsed (provided there is no other\n> > activity on that database -- large amounts of transactional activity\n> > will slow things down). I always turn off fsync. \n> ^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Then your performance numbers are largely useless for those of us that \n> like our data. ;)\n> \n> The point at issue is the performance of fsync() on journaling\n> filesystems...\n\nYes, the irony is that a journaling file system is being used to have\nfast, reliable restore after crash bootup, but with no fsync, the db is\nprobably hosed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 20:19:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Yes, the irony is that a journaling file system is being used to have\n> fast, reliable restore after crash bootup, but with no fsync, the db is\n> probably hosed.\n\nIt just struck me--is it necessarily true that we get the big\nperformance hit? \n\nOn a non-data-journaling FS (like ext3), since WAL files are\npreallocated (right?), a WAL sync shouldn't involve any metadata\nupdates. So we just write the WAL data to a (hopefully contiguous)\nchunk of data blocks.\n\nOn an FS that journals data AND metadata, fsync() can return once the\nupdates are committed to the log--it doesn't have to wait until the\nlog is back-flushed (or whatever you call it) to the main filesystem. \n\nThe above is theoretical, and I don't know enough about Reiser or XFS\nto know how they behave. 
\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "02 May 2001 20:27:35 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "Dear Bruce,\n> \n> Yes, the irony is that a journaling file system is being used to have\n> fast, reliable restore after crash bootup, but with no fsync, the db is\n> probably hosed.\n\nThere is no irony in these cases. In my systems, which are used for\nbioinformatics, the updating process is generally restartable. I\nnormally have lots of data to load or many records to change,\nand the quantities are much more than any reasonable\nsized transaction. Some jobs run for days. If I lose some data\nbecause of a crash, I just restart the jobs, and they'll delete some\nof the last data to be loaded, and then resume. Furthermore, the SGI's\nthat I run on are highly reliable, and they rarely crash. So, I might\nhave to clean up a big mess rarely (I've had one really big one in\ntwo years), but performance otherwise is really good. I should\nalso point out that most of my work has been with PostgreSQL 6.5.3\nwhich doesn't have the WAL.\n\nIf I have some time, I will try the experiment of loading a database\nof mine into PG 7.1 using -F or not and I'll report the timing.\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: bruc@acm.org |\n| P.O. 
Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n", "msg_date": "Wed, 2 May 2001 20:49:36 -0400 (EDT)", "msg_from": "bruc@stone.congenomics.com (Robert E. Bruccoleri)", "msg_from_op": true, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Yes, the irony is that a journaling file system is being used to have\n> > fast, reliable restore after crash bootup, but with no fsync, the db is\n> > probably hosed.\n> \n> It just struck me--is it necessarily true that we get the big\n> performance hit? \n> \n> On a non-data-journaling FS (like ext3), since WAL files are\n> preallocated (right?), a WAL sync shouldn't involve any metadata\n> updates. So we just write the WAL data to a (hopefully contiguous)\n> chunk of data blocks.\n> \n> On an FS that journals data AND metadata, fsync() can return once the\n> updates are committed to the log--it doesn't have to wait until the\n> log is back-flushed (or whatever you call it) to the main filesystem. \n> \n> The above is theoretical, and I don't know enough about Reiser or XFS\n> to know how they behave. \n\nTheoretically, yes, all these log-based file systems just log metadata\nchanges, not user data, so it should not affect it. I just don't know\nhow well fsync is implemented on these things.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 2 May 2001 20:54:21 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: XFS File systems and PostgreSQL" } ]
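Doug's preallocation argument can be illustrated with a short sketch (plain Python; the 8 KB segment size and file name are made up, and real WAL handling lives in the backend's C code): because the file reaches its final length before any record is written, later overwrites dirty only data blocks, so on a metadata-only journaling filesystem each fsync() leaves nothing for the journal.

```python
import os
import tempfile

SEGMENT_SIZE = 8192   # stand-in for PostgreSQL's much larger WAL segment

def preallocate(path, size=SEGMENT_SIZE):
    """Create the log file at its final size before any record is
    written; subsequent overwrites then change no file metadata."""
    with open(path, "wb") as f:
        f.write(b"\0" * size)
        f.flush()
        os.fsync(f.fileno())

def write_record(f, offset, record):
    """Overwrite in place inside the preallocated region, then force
    the dirty data blocks out with fsync()."""
    f.seek(offset)
    f.write(record)
    f.flush()
    os.fsync(f.fileno())
    return offset + len(record)

path = os.path.join(tempfile.mkdtemp(), "wal.0001")
preallocate(path)
with open(path, "r+b") as wal:
    end = write_record(wal, 0, b"commit record")
```

Note that the file's size never changes after preallocate(), which is why the per-record fsync() involves no length or block-allocation update.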
[ { "msg_contents": "Hi,\n\nThere seems to be a minor bug related to permissions. If you create a\ntable and grant permissions on that table to someone else, you lose your\nown permissions (note: do this as a non-dbadmin account):\n\n testdb=> create table tester ( test int4 );\n CREATE \n testdb=> insert into tester values ('1');\n INSERT 17109139 1\n testdb=> grant select on tester to someone;\n CHANGE\n testdb=> insert into tester values ('2');\n ERROR: tester: Permission denied.\n testdb=>\n\n>From postgres/sql-grant.htm:\n\n Description\n \n GRANT allows the creator of an object to give specific permissions to\n all users (PUBLIC) or to a certain user or group. Users other than\n the creator don't have any access permission unless the creator\n GRANTs permissions, after the object is created.\n \n Once a user has a privilege on an object, he is enabled to exercise\n that privilege. There is no need to GRANT privileges to the creator\n of an object, the creator automatically holds ALL privileges, and can\n also drop the object. \n\nIt's not behaving as documented (\"There is no need to GRANT privileges\nto the creator of an object\").\n\nThis is in postgresql-7.0.3, but it's possible this is fixed in a more\nrecent version - can someone try this and see what happens ?\n\nCheers,\n\nChris.\n", "msg_date": "Thu, 3 May 2001 11:54:26 +1000", "msg_from": "Chris Dunlop <chris@onthe.net.au>", "msg_from_op": true, "msg_subject": "Permissions problem" }, { "msg_contents": "Chris Dunlop <chris@onthe.net.au> writes:\n> There seems to be a minor bug related to permissions. If you create a\n> table and grant permissions on that table to someone else, you lose your\n> own permissions (note: do this as a non-dbadmin account):\n\nThis is fixed in 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 May 2001 23:46:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Permissions problem " } ]
[ { "msg_contents": "Hi,\n\nhere is an answer from Bettina Kemme - author of\nPostgres-R - a consistent replication engine based on Postgres 6.4\n(URL of paper - http://citeseer.nj.nec.com/330257.html).\nWhile I don't see much activity on the replication topic, I think it's\nworth discussing design issues with Bettina and co-ordinating\nour plans for a future release.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Wed, 02 May 2001 18:26:58 -0400\nFrom: Bettina Kemme <kemme@cs.mcgill.ca>\nTo: Oleg Bartunov <oleg@sai.msu.su>\nSubject: Re: Postgres-R question\n\nOleg,\n\nwe are currently working on producing a Postgres-R\nversion based on PostgreSQL 6.4 that can be released\nfor the public. Darren Johnson from sourcesoft institute is\nactually working on this version right now.\nSince PostgreSQL 7.1 has a quite different concurrency\ncontrol system than 6.4, upgrading Postgres-R to 7.1 is a bit more\ndifficult.\nOne of my students is going to work on this problem\nduring the summer (he is going to start next week, but at\nthe beginning he will have to do some reading to get familiar\nwith replication etc).\nWe would be very happy to cooperate with the PostgreSQL\ndevelopers and make the replication part a part of PostgreSQL\n(or at least an add-on feature).\nThe problem is that our approach requires, to a certain degree,\nchanging the underlying concurrency control method and\nthe transaction execution control.\nThat is, different to lazy replication schemes, it will\nnot be exclusively an add-on feature, but must be integrated into\nexisting code. 
Of course,\nwe are planning to design the system such that the user\n(or system administrator) can choose between a replicated\nand a non-replicated system. In the non-replicated system,\nsimply the original PostgreSQL will be executed.\n\nIn the next days, I am going to have a closer look at\nPostgreSQL 7.1 to figure out all the details. Again,\nwe are open for any kind of suggestions, support, cooperation\netc.\n\n\tBest\n\tBettina\n\nOleg Bartunov wrote:\n>\n> Hello,\n>\n> I found your paper about database replication and your implementation\n> based on Postgres. What's the status of your implementation ?\n> Is't available for downloading ? I'm postgres developer and interested\n> in database replication. Current version of PostgreSQL is 7.1 and\n> it has much more feature than that of 6.4.2 but it still lacks replication.\n> I think your contribution would be very useful for other users.\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n-- \n******************************************\nBettina Kemme\nSchool of Computer Science\nMcGill University Phone: +1 514 398 8930\nMontreal, Quebec Fax: +1 514 398 3883\nCanada E-mail: kemme@cs.mcgill.ca\nhttp://www.cs.mcgill.ca/~kemme\n\n", "msg_date": "Thu, 3 May 2001 10:37:09 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "PostgreSQL replication " } ]
[ { "msg_contents": "I have a table with a FK on itself: in fact a record may depend on\nanother record of the same table (\"pig's ear\" :-) I ran into a problem\ndumping/restoring using pg_dump, PostgreSQL 7.1.0.\n\nHere's a simplification of the table:\n\nprovo=# SELECT version();\n version\n--------------------------------------------------------------\n PostgreSQL 7.1 on alphaev67-dec-osf4.0f, compiled by cc -std\n\nprovo=# CREATE TABLE t1 (id serial, val text, ref integer references\nt1(id));\n\nprovo=# \\d t1\n Table \"t1\"\n Attribute | Type | Modifier\n-----------+---------+-----------------------------------------------\n id | integer | not null default nextval('\"t1_id_seq\"'::text)\n val | text |\n ref | integer |\nIndex: t1_id_key\n\nIf I perform some UPDATEs after the INSERTs, rows in the table are not\n\"ordered\" (in a physical sense) according to the serial id, as is obvious:\n\nprovo=# INSERT INTO t1 (val) VALUES ('A');\nINSERT 2361407 1\nprovo=# INSERT INTO t1 (val, ref) VALUES ('B', 1);\nINSERT 2361408 1\nprovo=# SELECT oid,* from t1;\n oid | id | val | ref\n---------+----+-----+-----\n 2361407 | 1 | A |\n 2361408 | 2 | B | 1\n(2 rows)\n \nprovo=# UPDATE t1 SET val = 'A+' WHERE id = 1;\nUPDATE 1\nprovo=# SELECT oid,* from t1;\n oid | id | val | ref\n---------+----+-----+-----\n 2361408 | 2 | B | 1\n 2361407 | 1 | A+ |\n(2 rows)\n\nNow, if I dump the table *in INSERT mode and only the data*, the\nordering makes it unusable:\n\n--\n-- Data for TOC Entry ID 5 (OID 2361370) TABLE DATA t1\n--\n \n\\connect - alessio\nINSERT INTO t1 (id,val,ref) VALUES (2,'B',1);\nINSERT INTO t1 (id,val,ref) VALUES (1,'A+',NULL);\n\nwhich fails since row '1' is not yet defined when row '2' is inserted. The\nFK check cannot be skipped because I want to put an older database's data\ninto a newer (compatible) structure, so the FK triggers and other\nconstraints are active. Is there a solution other than disabling the FK or\nediting (argh!) the dump?\n\nIt should work fine if rows would be dumped according to oid. 
Can this\nbe considered a bug?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 03 May 2001 12:27:31 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "A problem with pg_dump?" }, { "msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n> It should work fine if rows would be dumped according to oid. Can this\n> be considered a bug?\n\nNo; or at least, that solution would be equally buggy. It's not much\nharder than your given example to construct cases where dumping the rows\nin OID order would be wrong too (just takes some UPDATEs). In fact, I\ncould easily build a version of your table in which there is a circular\nchain of dependencies and so *no* dump order will work.\n\npg_dump scripts ordinarily turn off foreign-key checking while loading\ndata, and this sort of consideration is the reason why. It looks to me\nlike you may have some pre-release copy of pg_dump that gets this wrong\n(the comment format in your example is not exactly like current pg_dump,\nwhich seems suspicious). Try pg_dump -V to see what it says.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 10:33:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem with pg_dump? " }, { "msg_contents": "Tom Lane wrote:\n\n> It's not much\n> harder than your given example to construct cases where dumping the rows\n> in OID order would be wrong too (just takes some UPDATEs).\n\nYes, I figured out myself quickly. 
:-( \n\n> like you may have some pre-release copy of pg_dump that gets this wrong\n> (the comment format in your example is not exactly like current pg_dump,\n\nIt's pg_dump from 7.0, I am trying to move my 7.0 installation to a 7.1\ndatabase, including all the structure changes that we made in our\ndevelopment system. That's why I am stuck. Is it possible to use pg_dump\n7.1 on a 7.0 database?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 03 May 2001 17:50:15 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Re: A problem with pg_dump?" }, { "msg_contents": "> Is it possible to use pg_dump 7.1 on a 7.0 database?\n\nTried. Nope.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 03 May 2001 18:19:59 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Re: A problem with pg_dump?" }, { "msg_contents": "Alessio Bragadini <alessio@albourne.com> writes:\n>> Is it possible to use pg_dump 7.1 on a 7.0 database?\n\n> Tried. Nope.\n\nCurrent CVS pg_dump (grab the nightly snapshot if you don't use CVS,\nor wait for 7.1.1 in a day or two) is alleged to be able to work\nagainst a 7.0 database. Give it a try.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 11:32:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem with pg_dump? 
" }, { "msg_contents": "Tom Lane wrote:\n\n> Current CVS pg_dump (grab the nightly snapshot if you don't use CVS,\n> or wait for 7.1.1 in a day or two) is alleged to be able to work\n> against a 7.0 database. Give it a try.\n\nThat would be great, I had plans to wait for 7.1.1 anyway.\n\nThanks\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 03 May 2001 18:43:55 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "Re: A problem with pg_dump?" } ]
[ { "msg_contents": "\nOleg:\n I concur with this, and would like to collaborate as much as possible.\n\n[You may recall my messages from a couple weeks back. I'm looking at\nways of implementing 'concurrent' PostGres on a NUMA machine that has\nboth local memory and (a bit more costly) access to a shared memory].\n\n Bettina Kemme's method relies on fast, reliable and ordered global \ncommunication, which is easily implemented on my target system, so her \nmethods look a great fit. [In my case, the 'replication' would only happen \nat the level of the buffer cache in each node, since all nodes have access \nto a shared file system].\n\n\tPresently, per Tom Lane's suggestion, I'm\nlooking at a different approach, in which I keep the shared memory segment \nin shared memory and check-in/check-out the buffers into local memory at \nproper times (which Tom suggested at the LockBuffer boundaries).\nI'm open to suggestions/advice/...\n\n cheers\n Mauricio\n\n>From: Oleg Bartunov <oleg@sai.msu.su>\n>To: <pgsql-hackers@postgresql.org>\n>Subject: [HACKERS] PostgreSQL replication\n>Date: Thu, 3 May 2001 10:37:09 +0300 (GMT)\n>\n>Hi,\n>\n>here is an answer from Bettina Kemme - author of\n>Postgres-R - a consistent replication engine based on Postgres 6.4\n>(URL of paper - http://citeseer.nj.nec.com/330257.html).\n>While I don't see much activity on the replication topic I think it's\n>worth discussing design issues with Bettina and co-ordinating\n>our plans for future releases.\n>\n>\tRegards,\n>\t\tOleg\n>_____________________________________________________________\n>Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n>Sternberg Astronomical Institute, Moscow University (Russia)\n>Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n>phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>---------- Forwarded message ----------\n>Date: Wed, 02 May 2001 18:26:58 -0400\n>From: Bettina Kemme <kemme@cs.mcgill.ca>\n>To: Oleg Bartunov <oleg@sai.msu.su>\n>Subject: Re: Postgres-R 
question\n>\n>Oleg,\n>\n>we are currently working on producing a Postgres-R\n>version based on PostgreSQL 6.4 that can be released\n>for the public. Darren Johnson from sourcesoft institute is\n>actually working on this version right now.\n>Since PostgreSQL 7.1 has a quite different concurrency\n>control system than 6.4, upgrading Postgres-R to 7.1 is a bit more\n>difficult.\n>One of my students is going to work on this problem\n>during the summer (he is going to start next week, but at\n>the beginning he will have to do some reading to get familiar\n>with replicated etc).\n>We would be very happy to cooperate with the PostgreSQL\n>developers and make the replication part a part of PostgreSQL\n>(or at least an add-on feature).\n>The problem is that our approach requires, to a certain degree,\n>to change the underlying concurrency control method and\n>the transaction execution control.\n>That is, different to lazy replication schemes, it will\n>not be exclusively an add-on feature, but must be integrated into\n>existing code. Of course,\n>we are planning to design the system such that the user\n>(or system administrator) can choose between a replicated\n>and a non-replicated system. In the non-replicated system,\n>simply the original PostgreSQL will be executed.\n>\n>In the next days, I am going to have a closer look at\n>PostgreSQL 7.1 to figure out all the details. Again,\n>we are open for any kind of suggestions, support, cooperation\n>etc.\n>\n>\tBest\n>\tBettina\n>\n>Oleg Bartunov wrote:\n> >\n> > Hello,\n> >\n> > I found your paper about database replication and your implementation\n> > based on Postgres. What's the status of your implementation ?\n> > Is't available for downloading ? I'm postgres developer and interested\n> > in database replication. 
Current version of PostgreSQL is 7.1 and\n> > it has much more feature than that of 6.4.2 but it still lacks \n>replication.\n> > I think your contribution would be very useful for other users.\n> >\n> > Regards,\n> > Oleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>--\n>******************************************\n>Bettina Kemme\n>School of Computer Science\n>McGill University Phone: +1 514 398 8930\n>Montreal, Quebec Fax: +1 514 398 3883\n>Canada E-mail: kemme@cs.mcgill.ca\n>http://www.cs.mcgill.ca/~kemme\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Thu, 03 May 2001 08:20:05 -0500", "msg_from": "\"Mauricio Breternitz\" <mbjsql@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL replication" } ]
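Mauricio's check-in/check-out idea, copying a page from the (more costly) shared segment into node-local memory when the buffer lock is taken and writing it back on release, can be modelled with a toy sketch. All class and method names below are invented for illustration; this is not how the PostgreSQL buffer manager is actually structured:

```python
PAGE_SIZE = 8192  # PostgreSQL's default block size

class SharedSegment:
    """Toy stand-in for the NUMA machine's shared memory."""
    def __init__(self, npages):
        self.pages = {i: bytes(PAGE_SIZE) for i in range(npages)}

class Node:
    """One NUMA node with its own (fast) local memory."""
    def __init__(self, segment):
        self.segment = segment
        self.local = {}                      # checked-out local copies

    def lock_buffer(self, page):
        # "check out": copy shared -> local at the LockBuffer boundary
        self.local[page] = bytearray(self.segment.pages[page])
        return self.local[page]

    def unlock_buffer(self, page):
        # "check in": publish local changes back to the shared segment
        self.segment.pages[page] = bytes(self.local.pop(page))
```

With this discipline, whatever one node writes while holding the lock is visible to the next node that checks the page out, which is the consistency property the LockBuffer boundaries are meant to provide.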
[ { "msg_contents": "\n> >> Is it possible that 1060 and 1092 have the same usesysid \n> in pg_shadow?\n> \n> > Hmmm. That was the problem. Thanks! By the way, could you \n> please define a\n> > unique constraint on column 'usesysid' in future in PostgreSQL?\n> \n> Yup, there should be one (and one on usename, too). Not sure why it's\n> been overlooked so far.\n\nThe usesysid was originally intended to map pg users to unix accounts.\nI do not see why it should not be possible to map different pg users\nto a single unix account. The above imho stems from an improper use of this \ncolumn which needs to be fixed, not the column made unique.\n\nAndreas\n", "msg_date": "Thu, 3 May 2001 15:38:57 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: \\c connects as another user instead I want in psql " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> The usesysid was originally intended to map pg users to unix accounts.\n> I do not see why it should not be possible to map different pg users\n> to a single unix account. The above imho stems from an improper use of this \n> column which needs to be fixed, not the column made unique.\n\nNo. I'm not sure whether or not I believe the comment about Unix\naccounts; Postgres does not care about Unix accounts, and never has\nto my knowledge. But it has always used the usesysid as owner\nidentification for database objects (tables etc). If two different\nusers have the same usesysid then they are both the owner of these\nobjects; moreover they are interchangeable for permissions checks, too.\nThis is not a situation that has any practical use AFAICS.\n\nThere has been some talk of eliminating usesysid entirely in favor of\nusing the OID of the pg_shadow entry as the user's ID for ownership\nidentification. 
If that happens, we'd want a unique index on OID\ninstead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 09:51:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: \\c connects as another user instead I want in psql " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> No. I'm not sure whether or not I believe the comment about Unix\n> accounts; Postgres does not care about Unix accounts, and never has\n> to my knowledge. But it has always used the usesysid as owner\n> identification for database objects (tables etc). If two different\n> users have the same usesysid then they are both the owner of these\n> objects; moreover they are interchangeable for permissions checks, too.\n> This is not a situation that has any practical use AFAICS.\n\nOn Unix it is reasonable to have multiple users with the same user ID.\nYou do this when they play the same role, but it is useful to\ndistinguish them for logging purposes. They have different passwords,\nof course, and logging code uses getlogin() to get the login name they\nused.\n\nI can imagine something similar within Postgres, using triggers to\nrecord log information when changes are made.\n\nWhether this is a feature worth having, I don't know. But there is at\nleast one practical use.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 924: Good news from afar can bring you a welcome visitor.\n", "msg_date": "03 May 2001 11:05:17 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: AW: \\c connects as another user instead I want in psql" } ]
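The consequence Tom describes, that two pg_shadow entries sharing a usesysid become interchangeable for ownership and permission checks, is easy to see in a simplified model. The dicts below merely mimic the pg_shadow and pg_class catalogs for illustration; they are not the real structures:

```python
# Toy catalogs: pg_shadow maps usename -> usesysid; a table's owner is
# recorded only as a usesysid, so duplicate ids blur user identity.
pg_shadow = {"alice": 1060, "bob": 1060}   # duplicate usesysid (the bug)
pg_class = {"t1": 1060}                    # t1's ownership, by usesysid

def is_owner(usename, relname):
    """Ownership test keyed on usesysid, as the backend's checks are."""
    return pg_shadow[usename] == pg_class[relname]
```

Both users pass the ownership test for t1, so either can alter or drop it, which is why the thread calls for a unique constraint on usesysid (or for keying ownership on the pg_shadow row's OID instead).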
[ { "msg_contents": "Hi !!\n\nI was trying to get a very nice FREE graphical db tool called DbVisualizer \n(http://www.ideit.com/products/dbvis/) to work with Postgresql and I found \nout the following bug: if database has views then getTables() gets the null \npointer exception ('order by relname' makes the listing tree in \nDbVisualizer a lot useful !!)\n\nThis patch should propably be applied to the the jdbc1's \nDatabaseMetaData.java, too.\n\n[/tmp/postgresql-7.1/src/interfaces/jdbc/org/postgresql/jdbc2]$\n<ql/jdbc2]$ diff -u DatabaseMetaData.java.org DatabaseMetaData.java\n\n--- DatabaseMetaData.java.org\tWed May 02 22:52:25 2001\n+++ DatabaseMetaData.java\tWed May 02 23:07:19 2001\n@@ -1666,7 +1666,7 @@\n // Now take the pattern into account\n sql.append(\") and relname like '\");\n sql.append(tableNamePattern.toLowerCase());\n- sql.append(\"'\");\n+ sql.append(\"' order by relname\");\n\n // Now run the query\n r = connection.ExecSQL(sql.toString());\n@@ -1697,6 +1697,9 @@\n \tcase 'S':\n \t relKind = \"SEQUENCE\";\n \t break;\n+\tcase 'v':\n+\t relKind = \"VIEW\";\n+\t break;\n \tdefault:\n \t relKind = null;\n \t}\n@@ -1704,7 +1707,7 @@\n \ttuple[0] = null;\t\t// Catalog name\n \ttuple[1] = null;\t\t// Schema name\n \ttuple[2] = r.getBytes(1);\t// Table name\n-\ttuple[3] = relKind.getBytes();\t// Table type\n+\ttuple[3] = (relKind==null) ? null : relKind.getBytes();\t// Table type\n \ttuple[4] = remarks;\t\t// Remarks\n \tv.addElement(tuple);\n }\n\n\n-----\nhttp://www.ideit.com/products/dbvis/\n\n...\n\nDbVisualizer\nVersion: 2.0\nReleased: 2001-04-20\n\n\nThe #1 requested feature to ease editing table data is now supported!\nThe #2 requested feature to print graphs is now supported!\nRead the complete change log for all new features and enhancements!\n\n\nDbVisualizer is a cross platform database visualization and edit tool \nrelying 100% on the JDBC, Java Database Connectivity API's. 
DbVisualizer \nenables simultaneous connections to many different databases through JDBC \ndrivers available from a variety of vendors. Just point and click to browse \nthe structure of the database, characteristics of tables, etc. No matter if \nit's an enterprise database from Oracle or an open source product like \nInstantDB!\n\nAnd best of all -> it's FREE!\n-----\n\n\n", "msg_date": "Thu, 03 May 2001 18:32:28 +0300", "msg_from": "Panu Outinen <panu@vertex.fi>", "msg_from_op": true, "msg_subject": "A bug fix for JDBC's getTables() in Postgresql 7.1" }, { "msg_contents": "\nHi!\n \nI have a problem like this:\n \nEnvironment:\n \nThe database is postgresql v7.1, with locale-support\njdk is sun's jdk1.3.0_02\nand jdbc is the one which comes with postgres (type 2).\n \nBoth the database and the jdbc driver have been built by myself.\n \nOS: RH 6.2 based linux with 2.4.3 kernel, glibc 2.1.3.\n \nThe problem:\n \nThere is a database which contains fields (the field's type is 'text')\nwith the Scandinavian alphabet. (Especially öäåÖÄÅ (odiaeresis, adiaeresis,\naring, or in other words, oe, ae, and a with ring above it)).\n\n\nThe database has been installed, created and used with the\nLC_ALL=\"finnish\" and LANG=\"fi_FI\" environment variables in effect.\n \nOk, the problem:\n\nWhen I try to read those fields, I get question marks instead of those\n8-bit scandic chars. \n\nI have checked my java programs and the database. (in fact, the same\nproblem appears with postgres-7.1/src/interfaces/jdbc/example/psql.java).\nIn general, my java environment works fine with 8-bit chars, and with psql\n(not the java one) everything works fine with those fields with\n8-bit chars.\n\nSo my question is, am I doing something wrong, or is there a bug in the\npgsql-jdbc? \n\nIf this is a bug or you otherwise need help or more information, please\nlet me know. 
I will try to help as much as possible to hunt this one\ndown.\n\nIf I am doing something stupid, I would very likely to know it...\n\nBR, Jani\n\n---\nJani Averbach \n\n\n\n", "msg_date": "Thu, 3 May 2001 21:42:50 +0300 (EET DST)", "msg_from": "Jani Averbach <jaa@cc.jyu.fi>", "msg_from_op": false, "msg_subject": "A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars?" }, { "msg_contents": "Since Java uses unicode (ucs2) internally for all Strings, the jdbc code \nalways needs to do character set conversions for all strings it gets \nfrom the database. In the 7.0 drivers this was done by assuming the \ndatabase was using the same character set as the default on the client, \nwhich is incorrect for a number of reasons. In 7.1 the jdbc code asks \nthe database what character set it is using and does the conversion from \nthe server character set to the java unicode strings.\n\nNow it turns out that Postgres is a little lax in its character set \nsupport, so you can very easily insert char/varchar/text with values \nthat fall outside the range of valid values for a given character set \n(and psql doesn't really care either). However in java since we must do \ncharacter set conversion to unicode, it does make a difference and any \nvalues that were inserted that are incorrect with regards to the \ndatabase character set will be reported as ?'s in java.\n\nWith regards to your specific problem, my guess is that you haven't \ncreated you database with the proper character set for the data you are \nstoring in it. I am guessing you simply used the default SQL Acsii \ncharacter set for your created database and therefore only the first 127 \ncharacters are defined. 
Any characters above 127 will be returned by \njava as ?'s.\n\nIf this is the case you will need to recreate your database with the \nproper character set for the data you are storing in it and then \neverything should be fine.\n\nthanks,\n--Barry\n\nJani Averbach wrote:\n\n> Hi!\n> \n> I have a problem like that:\n> \n> Environment:\n> \n> The database is postgresql v7.1, with locale-support\n> jdk is sun's jdk1.3.0_02\n> and jdbc is that one which comes with postgres (type 2).\n> \n> Both database and jdbc driver has been build by myself.\n> \n> OS: RH 6.2 based linux with 2.4.3 kernel, glibc 2.1.3.\n> \n> The problem:\n> \n> There is a database which contains fields (the field's type is 'text')\n> with scandinavian alphabet. (Especially ������ (odiaeresis, adiaeresis,\n> aring, or in other words, oe, ae, and a with ring above it)).\n> \n> \n> The database has been installed, created and used under\n> LC_ALL=\"finnish\" and LANG=\"fi_FI\" environment variables in act.\n> \n> Ok, the problem:\n> \n> When I try to read those field, I get guestion marks instead of those\n> 8-bit scandic chars. \n> \n> I have been check my java programs and the database. (in fact, same\n> problem appears with postgres-7.1/src/interfaces/jdbc/example/psql.java).\n> In general, my java environment works fine with 8-bit chars, with psgl\n> (not the java one) there is everything very well with those fields with\n> 8-bit chars.\n> \n> So my question is, am I doing something wrong, or is there a bug in the\n> pgsql-jdbc? \n> \n> If this is a bug or you need otherwise help or more information, please\n> let me know. 
I will try to help as much as possible to hunt this one\n> down.\n> \n> If I am doing something stupid, I would very likely to know it...\n> \n> BR, Jani\n> \n> ---\n> Jani Averbach \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n", "msg_date": "Thu, 03 May 2001 16:59:22 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars?" }, { "msg_contents": "On Thu, 3 May 2001, Barry Lind wrote:\n\n> With regards to your specific problem, my guess is that you haven't \n> created you database with the proper character set for the data you are \n> storing in it. I am guessing you simply used the default SQL Acsii \n> character set for your created database and therefore only the first 127 \n> characters are defined. Any characters above 127 will be returned by \n> java as ?'s.\n> \n> If this is the case you will need to recreate your database with the \n> proper character set for the data you are storing in it and then \n> everything should be fine.\n> \n\nThanks, you are right!\n\nThe main problem was that I had not enabled the multibyte support for\ndatabase. (I believe fairytale and supposed that correct locale\nsetting will be enough.)\n\nSo my humble wish is that the instructions in the INSTALL file should be\ncorrected.\nBecause:\n\n --enable-multibyte\n \n Allows the use of multibyte character encodings. This is\n primarily for languages like Japanese, Korean, and Chinese. Read\n the Administrator's Guide for details.\n\nI think that this is a little bit missleading. \n\nThere is correct information in the Administrator's Guide, so I should\nhave to read the Guide, but but... 
The world would be much better place,\nif there is little mention in the installation instruction that this\nconcerns also 8-bit chars...\n\n\nBut anyway, it works now very fine, thanks!\n\nBR, Jani\n\n---\nJani Averbach\n\n", "msg_date": "Fri, 4 May 2001 13:42:10 +0300 (EET DST)", "msg_from": "Jani Averbach <jaa@cc.jyu.fi>", "msg_from_op": false, "msg_subject": "Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars?" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> With regards to your specific problem, my guess is that you haven't \n> created you database with the proper character set for the data you are \n> storing in it. I am guessing you simply used the default SQL Acsii \n> character set for your created database and therefore only the first 127 \n> characters are defined. Any characters above 127 will be returned by \n> java as ?'s.\n\nDoes this happen with a non-multibyte-compiled database? If so, I'd\nargue that's a serious bug in the JDBC code: it makes JDBC unusable\nfor non-ASCII 8-bit character sets, unless one puts up with the overhead\nof MULTIBYTE support.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 10:29:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars? " }, { "msg_contents": "On 04 May 2001 10:29:50 -0400, Tom Lane wrote:\n\n> > With regards to your specific problem, my guess is that you haven't \n> > created you database with the proper character set for the data you are \n> > storing in it. I am guessing you simply used the default SQL Acsii \n> > character set for your created database and therefore only the first 127 \n> > characters are defined. Any characters above 127 will be returned by \n> > java as ?'s.\n> \n> Does this happen with a non-multibyte-compiled database? 
If so, I'd\n> argue that's a serious bug in the JDBC code: it makes JDBC unusable\n> for non-ASCII 8-bit character sets, unless one puts up with the overhead\n> of MULTIBYTE support.\n\nI fought with this for a few days. The solution is to dump the database\nand create a new database with the correct encoding.\n\nMULTIBYTE is not neccesary I just set the type to LATIN1 and it works\nfine.\n\nQueries even work on accentuated caracters!!! \n\nI have a demo database for those interested\n\nCheers\n\nTony Grant\n\n\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nUltradev and PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "04 May 2001 17:36:07 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit)\n\tchars?" }, { "msg_contents": "Tony Grant <tony@animaproductions.com> writes:\n> On 04 May 2001 10:29:50 -0400, Tom Lane wrote:\n>> Does this happen with a non-multibyte-compiled database? If so, I'd\n>> argue that's a serious bug in the JDBC code: it makes JDBC unusable\n>> for non-ASCII 8-bit character sets, unless one puts up with the overhead\n>> of MULTIBYTE support.\n\n> I fought with this for a few days. The solution is to dump the database\n> and create a new database with the correct encoding.\n\n> MULTIBYTE is not neccesary I just set the type to LATIN1 and it works\n> fine.\n\nBut a non-MULTIBYTE backend doesn't even have the concept of \"setting\nthe encoding\" --- it will always just report SQL_ASCII.\n\nPerhaps what this really says is that it'd be better if the JDBC code\nassumed LATIN1 translations when the backend claims SQL_ASCII.\nCertainly, translating all high-bit-set characters to '?' 
is about as\nuselessly obstructionist a policy as I can think of...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 11:40:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars? " }, { "msg_contents": "On 04 May 2001 11:40:48 -0400, Tom Lane wrote:\n\n> > I fought with this for a few days. The solution is to dump the database\n> > and create a new database with the correct encoding.\n> \n> > MULTIBYTE is not neccesary I just set the type to LATIN1 and it works\n> > fine.\n> \n> But a non-MULTIBYTE backend doesn't even have the concept of \"setting\n> the encoding\" --- it will always just report SQL_ASCII.\n\nOK I just read the configure script for my backend - you guessed it\nmultibyte support and locale support compiled in there... So createdb -E\nLATIN1 works just fine =:-b\n \n> Perhaps what this really says is that it'd be better if the JDBC code\n> assumed LATIN1 translations when the backend claims SQL_ASCII.\n> Certainly, translating all high-bit-set characters to '?' is about as\n> uselessly obstructionist a policy as I can think of...\n\n\nI will be adding this snippet to my doc on techdocs in the French\nversion. It will save somebody a lot of head scratching.\n\nCheers\nTony Grant\n\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\n\n", "msg_date": "04 May 2001 17:58:41 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit)\n\tchars?" }, { "msg_contents": "On 04 May 2001 11:40:48 -0400, Tom Lane wrote:\n\n> \n> But a non-MULTIBYTE backend doesn't even have the concept of \"setting\n> the encoding\" --- it will always just report SQL_ASCII.\n\nWhat kind of error message does \"createdb -E LATIN1\" give on a non\nMULTIBYTE backend? 
\n\nMaybe there needs to be a note somewhere informing people from Europe\nthat they too need MULTIBYTE as an option at compile time. i.e. In a\nbright yellow box in the HTML docs...\n\nAnd in the Reference manual and man pages the -E option for createdb\nneeds a note to specify that it applies to MULTIBYTE backends only. \n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\n\n", "msg_date": "04 May 2001 18:29:57 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit)\n\tchars?" }, { "msg_contents": "Tony Grant <tony@animaproductions.com> writes:\n> What kind of error message does \"createdb -E LATIN1\" give on a non\n> MULTIBYTE backend? \n\n$ createdb -E LATIN1 foo\n/home/postgres/testversion/bin/createdb[143]: /home/postgres/testversion/bin/pg_encoding: not found.\ncreatedb: \"LATIN1\" is not a valid encoding name\n$\n\n> Maybe there needs to be a note somewhere informing people from Europe\n> that they too need MULTIBYTE as an option at compile time. i.e. In a\n> bright yellow box in the HTML docs...\n\nBut they *should not* need it, if they only want to use an 8-bit character\nset. Locale support should be enough. Or so I would think, anyway.\nI have to admit I have not looked very closely at the functionality\nthat's enabled by MULTIBYTE; is any of it really needed to deal with\nLATINn character sets?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 12:40:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars? 
" }, { "msg_contents": "\n\nTony Grant wrote:\n\n> On 04 May 2001 11:40:48 -0400, Tom Lane wrote:\n> \n>> But a non-MULTIBYTE backend doesn't even have the concept of \"setting\n>> the encoding\" --- it will always just report SQL_ASCII.\n> \n> \n> What kind of error message does \"createdb -E LATIN1\" give on a non\n> MULTIBYTE backend? \n> \n> Maybe there needs to be a note somewhere informing people from Europe\n> that they too need MULTIBYTE as an option at compile time. i.e. In a\n> bright yellow box in the HTML docs...\n> \n> And in the Reference manual and man pages the -E option for createdb\n> needs a note to specify that it applies to MULTIBYTE backends only. \n> \n> Cheers\n> \n> Tony Grant\n> \nThe errors you get are:\nfrom createdb-\n\n$ createdb -E LATIN1 testdb\n/usr/local/pgsql/bin/createdb: /usr/local/pgsql/bin/pg_encoding: No such \nfile or directory\ncreatedb: \"LATIN1\" is not a valid encoding name\n\nand from psql-\n\ntemplate1=# create database testdb with encoding = 'LATIN1';\nERROR: Multi-byte support is not enabled\n\nthanks,\n--Barry\n\n", "msg_date": "Fri, 04 May 2001 11:46:55 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit)\n\t\tchars?" }, { "msg_contents": "\nTom,\n\nI don't consider it a 'uselessly obstrucionist policy' for the client to \nuse the encoding the server says it is using :-) The jdbc code simply \nissues a 'select getdatabaseencoding()' and uses the value the server \ntells it to. I would place the blame more on the server for lying to \nthe client :-)\n\nI consider this a problem with the backend in that it requires multibyte \nsupport to be enabled to handle supporting even single byte character \nsets like LATIN1. 
(True it supports LATIN1 without multibyte, but it \ndoesn't correctly report to the client what character set the server is \nusing, so the client has know way of knowing if it should use LATIN1, \nLATIN2, or KOI8-R -- the character set of the data is an important piece \nof information for a client especially in java where some encoding needs \nto be used to convert to ucs2).\n\nNow it is an easy change in the jdbc code to use LATIN1 when the server \nreports SQL_ASCII, but I really dislike hardcoding support that only \nworks in english speaking countries and Western Europe. All this does \nis move the problem from being one that non-english countries have to \nbeing one where it is a non-english and non-western european problem \n(eg. Eastern Europe, Russia, etc.).\n\nIn the current jdbc code it is possible to override the character set \nthat is being used (by passing a 'charSet' parameter to the connection), \nso it is possible to use a different encoding than the database is \nreporting.\n\nfrom Connection.java:\n //Set the encoding for this connection\n //Since the encoding could be specified or obtained from the DB we \nuse the\n //following order:\n // 1. passed as a property\n // 2. value from DB if supported by current JVM\n // 3. default for JVM (leave encoding null)\n\nthanks,\n--Barry\n\n\nTom Lane wrote:\n\n> Tony Grant <tony@animaproductions.com> writes:\n> \n>> On 04 May 2001 10:29:50 -0400, Tom Lane wrote:\n>> \n>>> Does this happen with a non-multibyte-compiled database? If so, I'd\n>>> argue that's a serious bug in the JDBC code: it makes JDBC unusable\n>>> for non-ASCII 8-bit character sets, unless one puts up with the overhead\n>>> of MULTIBYTE support.\n>> \n>> I fought with this for a few days. 
The solution is to dump the database\n>> and create a new database with the correct encoding.\n> \n>> MULTIBYTE is not neccesary I just set the type to LATIN1 and it works\n>> fine.\n> \n> \n> But a non-MULTIBYTE backend doesn't even have the concept of \"setting\n> the encoding\" --- it will always just report SQL_ASCII.\n> \n> Perhaps what this really says is that it'd be better if the JDBC code\n> assumed LATIN1 translations when the backend claims SQL_ASCII.\n> Certainly, translating all high-bit-set characters to '?' is about as\n> uselessly obstructionist a policy as I can think of...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n", "msg_date": "Fri, 04 May 2001 12:20:51 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars?" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> Now it is an easy change in the jdbc code to use LATIN1 when the server \n> reports SQL_ASCII, but I really dislike hardcoding support that only \n> works in english speaking countries and Western Europe.\n\nWhat's wrong with that? It won't be any more broken for people who are\nnot really using LATIN1, and it will be considerably less broken for\nthose who are. Seems like a net win to me, even without making the\nobvious point about where the majority of Postgres users are.\n\nIt probably would be a good idea to allow the backend to store an\nindication of character set even when not compiled for MULTIBYTE,\nbut that's not the issue here. To me, the issue is whether JDBC\nmakes a reasonable effort not to munge data when presented with\na backend that claims to be using SQL_ASCII (which, let me remind\nyou, is the default setting). Converting high-bit-set characters\nto '?' is almost certainly NOT what the user wants you to do.\nConverting on the assumption of LATIN1 will make a lot of people\nhappy, and the people who aren't happy with it will certainly not\nbe happy with '?' 
conversion either.\n\n> All this does \n> is move the problem from being one that non-english countries have to \n> being one where it is a non-english and non-western european problem \n> (eg. Eastern Europe, Russia, etc.).\n\nNonsense.  The non-Western-European folks see broken behavior now\nanyway, unless they compile with MULTIBYTE and set an appropriate\nencoding.  How would this make their lives worse, or even different?\n\nI'm merely suggesting that the default behavior could be made useful\nto a larger set of people than it now is, without making things any\nworse for those that it's not useful to.\n\n\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 15:44:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars? " }, { "msg_contents": "\nI can see that I'm probably not going to win this argument, but I'll \ntake one more try. :-)\n\nThe basic issue I have is that the server is providing an API to the \nclient to get the character encoding for the database and that API can \nreport incorrect information to the client.  \n\nIf multibyte isn't enabled, getdatabaseencoding() always returns \nSQL_ASCII.  In my understanding SQL_ASCII = 7bit ascii (at least that \nis what the code in backend/utils/mb/conv.c is assuming).  But in \nreality SQL_ASCII means some unknown single byte character encoding.  \nBut if multibyte is enabled then SQL_ASCII means 7bit ascii.  And as far \nas I know there is no way for the client to know if multibyte is enabled \nor not.\n\nThus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \nsomething similar when in fact it doesn't know what the encoding is \n(i.e. when not compiled with multibyte).  That way users of this \nfunction on the client have a means of knowing whether the server means 7bit \nascii or not. 
(Alternatively, having some other function like \ngetmultibyteenabled(Y/N) would work as well, because using that value \nyou can then determine whether or not to trust the value of \ngetdatabaseencoding).\n\nI just don't like having an api that under some circumstances you can't \nrely on its returned value as being correct.\n\nthanks,\n--Barry\n\nPS. Note that if multibyte is enabled, the functionality that is being \ncomplained about here in the jdbc client is apparently ok for the server \nto do.  If you insert a value into a text column on a SQL_ASCII database \nwith multibyte enabled and that value contains 8bit characters, those \n8bit characters will be quietly replaced with a dummy character since \nthey are invalid for the SQL_ASCII 7bit character set.\n\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>> Now it is an easy change in the jdbc code to use LATIN1 when the server \n>> reports SQL_ASCII, but I really dislike hardcoding support that only \n>> works in english speaking countries and Western Europe.\n> \n> \n> What's wrong with that?  It won't be any more broken for people who are\n> not really using LATIN1, and it will be considerably less broken for\n> those who are.  Seems like a net win to me, even without making the\n> obvious point about where the majority of Postgres users are.\n> \n> It probably would be a good idea to allow the backend to store an\n> indication of character set even when not compiled for MULTIBYTE,\n> but that's not the issue here.  To me, the issue is whether JDBC\n> makes a reasonable effort not to munge data when presented with\n> a backend that claims to be using SQL_ASCII (which, let me remind\n> you, is the default setting).  Converting high-bit-set characters\n> to '?' is almost certainly NOT what the user wants you to do.\n> Converting on the assumption of LATIN1 will make a lot of people\n> happy, and the people who aren't happy with it will certainly not\n> be happy with '?' 
conversion either.\n> \n>> All this does \n>> is move the problem from being one that non-english countries have to \n>> being one where it is a non-english and non-western european problem \n>> (eg. Eastern Europe, Russia, etc.).\n> \n> \n> Nonsense. The non-Western-European folks see broken behavior now\n> anyway, unless they compile with MULTIBYTE and set an appropriate\n> encoding. How would this make their lives worse, or even different?\n> \n> I'm merely suggesting that the default behavior could be made useful\n> to a larger set of people than it now is, without making things any\n> worse for those that it's not useful to.\n> \n> \t\tregards, tom lane\n> \n> \n\n", "msg_date": "Fri, 04 May 2001 18:26:09 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit) chars?" }, { "msg_contents": "[ thread renamed and cross-posted to pghackers, since this isn't only\nabout JDBC anymore ]\n\nBarry Lind <barry@xythos.com> writes:\n> The basic issue I have it that the server is providing an API to the \n> client to get the character encoding for the database and that API can \n> report incorrect information to the client. \n\nI don't have any objection to changing the system so that even a\nnon-MULTIBYTE server can store and return encoding settings.\n(Presumably it should only accept encoding settings that correspond\nto single-byte encodings.) That can't happen before 7.2, however,\nas the necessary changes are a bit larger than I'd care to shoehorn\ninto a 7.1.* release.\n\n> Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \n> something similar when in fact it doesn't know what the encoding is \n> (i.e. when not compiled with multibyte).\n\nI have a philosophical difference with this: basically, I think that\nsince SQL_ASCII is the default value, you probably ought to assume that\nit's not too trustworthy. 
The software can *never* be said to KNOW what\nthe data encoding is; at most it knows what it's been told, and in the\ncase of a default it probably hasn't been told anything. I'd argue that\nSQL_ASCII should be interpreted in the way you are saying \"UNKNOWN\"\nought to be: ie, it's an unspecified 8-bit encoding (and from there\nit's not much of a jump to deciding to treat it as LATIN1, if you're\nforced to do conversion to Unicode or whatever). Certainly, seeing\nSQL_ASCII from the server is not license to throw away data, which is\nwhat JDBC is doing now.\n\n> PS. Note that if multibyte is enabled, the functionality that is being \n> complained about here in the jdbc client is apparently ok for the server \n> to do. If you insert a value into a text column on a SQL_ASCII database \n> with multibyte enabled and that value contains 8bit characters, those \n> 8bit characters will be quietly replaced with a dummy character since \n> they are invalid for the SQL_ASCII 7bit character set.\n\nI have not tried it, but if the backend does that then I'd argue that\nthat's a bug too. To my mind, a MULTIBYTE backend operating in\nSQL_ASCII encoding ought to behave the same as a non-MULTIBYTE backend:\ntransparent pass-through of characters with the high bit set. But I'm\nnot a multibyte guru. Comments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 May 2001 11:21:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "MULTIBYTE and SQL_ASCII (was Re: Re: A bug with pgsql 7.1/jdbc and\n\tnon-ascii (8-bit) chars?)" }, { "msg_contents": "> > Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \n> > something similar when in fact it doesn't know what the encoding is \n> > (i.e. when not compiled with multibyte).\n\nIs that ok for Java? 
I thought Java needs to know the encoding\nbeforehand so that it could convert to/from Unicode.\n\n> I have a philosophical difference with this: basically, I think that\n> since SQL_ASCII is the default value, you probably ought to assume that\n> it's not too trustworthy.  The software can *never* be said to KNOW what\n> the data encoding is; at most it knows what it's been told, and in the\n> case of a default it probably hasn't been told anything.  I'd argue that\n> SQL_ASCII should be interpreted in the way you are saying \"UNKNOWN\"\n> ought to be: ie, it's an unspecified 8-bit encoding (and from there\n> it's not much of a jump to deciding to treat it as LATIN1, if you're\n> forced to do conversion to Unicode or whatever).  Certainly, seeing\n> SQL_ASCII from the server is not license to throw away data, which is\n> what JDBC is doing now.\n> \n> > PS. Note that if multibyte is enabled, the functionality that is being \n> > complained about here in the jdbc client is apparently ok for the server \n> > to do.  If you insert a value into a text column on a SQL_ASCII database \n> > with multibyte enabled and that value contains 8bit characters, those \n> > 8bit characters will be quietly replaced with a dummy character since \n> > they are invalid for the SQL_ASCII 7bit character set.\n> \n> I have not tried it, but if the backend does that then I'd argue that\n> that's a bug too.\n\nI suspect the JDBC driver is responsible for the problem Barry has\nreported (I have tried to reproduce the problem using psql without\nsuccess).\n\nFrom interfaces/jdbc/org/postgresql/Connection.java:\n\n> if (dbEncoding.equals(\"SQL_ASCII\")) {\n> dbEncoding = \"ASCII\";\n\nBTW, even if the backend behaves like that, I don't think it's a\nbug. 
Since SQL_ASCII is nothing more than an ascii encoding.\n\n> To my mind, a MULTIBYTE backend operating in\n> SQL_ASCII encoding ought to behave the same as a non-MULTIBYTE backend:\n> transparent pass-through of characters with the high bit set.  But I'm\n> not a multibyte guru.  Comments anyone?\n\nIf you expect that behavior, I think the encoding name 'UNKNOWN' or\nsomething like that seems more appropriate.  (SQL_)ASCII is just an\nascii IMHO.\n--\nTatsuo Ishii\n", "msg_date": "Sun, 06 May 2001 16:47:11 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug\n\twith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "On 04 May 2001 15:44:23 -0400, Tom Lane wrote:\n\nBack from the weekend with sunburn (very important sign that it has stopped \nraining here on the west of Europe!!!!)\n\n> \n> > All this does \n> > is move the problem from being one that non-english countries have to \n> > being one where it is a non-english and non-western european problem \n> > (eg. Eastern Europe, Russia, etc.).\n> \n> Nonsense.  The non-Western-European folks see broken behavior now\n> anyway, unless they compile with MULTIBYTE and set an appropriate\n> encoding.  How would this make their lives worse, or even different?\n> \n> I'm merely suggesting that the default behavior could be made useful\n> to a larger set of people than it now is, without making things any\n> worse for those that it's not useful to.\n\nThis reminds me of e-mail software when I joined the net. 7 bit ASCII\nonly software made the use of accents impossible so we learnt to type\nwithout them or put up with garbage in our mail.\n\nI must agree with Tom here. There is a 256 character alphabet which is\nstandard in many languages. For North America, Spanish and French spring\nto mind. 
How are you going to build a common market if these two\nlanguages plus Brasilian Portugese are not supported in business\nsoftware?\n\nMultibyte is supported for other alphabets. This is already a wonderfull\nachievement for those concerned.\n\nThe standard backend should in my opinion support the LATIN alphabet. US\nASCII is a subset of that alphabet, it is not _the_ alphabet.\n\nThe JDBC and Java itself should also support the whole alphabet. All\nthis should be transparent for the programmer and the end user. Another\nbattle to be fought...\n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "07 May 2001 10:57:29 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug with pgsql 7.1/jdbc and non-ascii (8-bit)\n\tchars?" }, { "msg_contents": "Hello,\n\nFor those having problems compiling the 7.1 JDBC driver the following\nsnippet from the Apache site may be a clue. I got the 7.1 .jar file to\nbuild by setting JAVA_HOME and ANT_HOME before doing make\n\n\n> You have to set the enviroment variable JAVA_HOME. It must\n> point to your local JDK root directory. This is true, even if you use\n> JDK 1.2 or above, which normally don't need this setting. It is\n> used by Ant, the compilation software.\n\nCheers\n\nTony Grant\n\n-- \nRedHat Linux on Sony Vaio C1XD/S\nhttp://www.animaproductions.com/linux2.html\nMacromedia UltraDev with PostgreSQL\nhttp://www.animaproductions.com/ultra.html\n\n", "msg_date": "07 May 2001 15:52:34 +0200", "msg_from": "Tony Grant <tony@animaproductions.com>", "msg_from_op": false, "msg_subject": "Building JDBC in 7.1" }, { "msg_contents": "\n\nTatsuo Ishii wrote:\n\n>>> Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \n>>> something similar when in fact it doesn't know what the encoding is \n>>> (i.e. 
when not compiled with multibyte).\n\nThat is actually the original issue that started this thread.  If you \nwant the full thread see the jdbc mail archive list.  A user was \ncomplaining that when running on a database without multibyte enabled, \nthrough psql he could insert and retrieve 8bit characters, but in \njdbc the 8bit characters were converted to ?'s.\n\nI then explained why this was happening (db returns SQL_ASCII as the db \ncharacter set when not compiled with multibyte) so that character set is \nused to convert to unicode.\n\nTom suggested that it would make more sense for jdbc to use LATIN1 when \nthe database reported SQL_ASCII so that most users will see 'correct' \nbehavior in a non multibyte database.  Because currently you need to \nenable multibyte support in order to use 8bit characters with jdbc. \nJdbc could easily be changed to treat SQL_ASCII as LATIN1, but I don't \nfeel that is an appropriate solution for the reasons outlined in this \nthread (thus the suggestions for UNKNOWN, or the ability for the client \nto determine if multibyte is enabled or not).\n\n> \n>> I have a philosophical difference with this: basically, I think that\n>> since SQL_ASCII is the default value, you probably ought to assume that\n>> it's not too trustworthy.  The software can *never* be said to KNOW what\n>> the data encoding is; at most it knows what it's been told, and in the\n>> case of a default it probably hasn't been told anything.  I'd argue that\n>> SQL_ASCII should be interpreted in the way you are saying \"UNKNOWN\"\n>> ought to be: ie, it's an unspecified 8-bit encoding (and from there\n>> it's not much of a jump to deciding to treat it as LATIN1, if you're\n>> forced to do conversion to Unicode or whatever). 
Certainly, seeing\n>> SQL_ASCII from the server is not license to throw away data, which is\n>> what JDBC is doing now.\n>> \n>>> PS. Note that if multibyte is enabled, the functionality that is being \n>>> complained about here in the jdbc client is apparently ok for the server \n>>> to do. If you insert a value into a text column on a SQL_ASCII database \n>>> with multibyte enabled and that value contains 8bit characters, those \n>>> 8bit characters will be quietly replaced with a dummy character since \n>>> they are invalid for the SQL_ASCII 7bit character set.\n>> \n>> I have not tried it, but if the backend does that then I'd argue that\n>> that's a bug too.\n> \n> \n> I suspect the JDBC driver is responsible for the problem Burry has\n> reported (I have tried to reproduce the problem using psql without\n> success).\n> \n> >From interfaces/jdbc/org/postgresql/Connection.java:\n> \n>> if (dbEncoding.equals(\"SQL_ASCII\")) {\n>> dbEncoding = \"ASCII\";\n> \n> \n> BTW, even if the backend behaves like that, I don't think it's a\n> bug. Since SQL_ASCII is nothing more than an ascii encoding.\n\nI believe Tom's point is that if multibyte is not enabled this isn't \ntrue, since SQL_ASCII then means whatever character set the client wants \nto use against the server as the server really doesn't care what single \nbyte data is being inserted/selected from the database.\n\n> \n>> To my mind, a MULTIBYTE backend operating in\n>> SQL_ASCII encoding ought to behave the same as a non-MULTIBYTE backend:\n>> transparent pass-through of characters with the high bit set. But I'm\n>> not a multibyte guru. Comments anyone?\n> \n> \n> If you expect that behavior, I think the encoding name 'UNKNOWN' or\n> something like that seems more appropreate. 
(SQL_)ASCII is just an\n> ascii IMHO.\n\nI agree.\n\n> \n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n--Barry\n\n", "msg_date": "Mon, 07 May 2001 18:10:00 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug\n\twith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "> >>> Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \n> >>> something similar when in fact it doesn't know what the encoding is \n> >>> (i.e. when not compiled with multibyte).\n> >> \n> > \n> > Is that ok for Java? I thought Java needs to know the encoding\n> > beforehand so that it could convert to/from Unicode.\n> \n> That is actually the original issue that started this thread. If you \n> want the full thread see the jdbc mail archive list. A user was \n> complaining that when running on a database without multibyte enabled, \n> that through psql he could insert and retrieve 8bit characters, but in \n> jdbc the 8bit characters were converted to ?'s.\n\nStill I don't see what you are wanting in the JDBC driver if\nPostgreSQL would return \"UNKNOWN\" indicating that the backend is not\ncompiled with MULTIBYTE. Do you want exact the same behavior as prior\n7.1 driver? i.e. reading data from the PostgreSQL backend, assume its\nencoding default to the Java client (that is set by locale or\nsomething else) and convert it to UTF-8. 
If so, that would make sense\nto me...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 08 May 2001 11:02:49 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug\t\n\twith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "\n\nTatsuo Ishii wrote:\n\n>>>>> Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or \n>>>>> something similar when in fact it doesn't know what the encoding is \n>>>>> (i.e. when not compiled with multibyte).\n>>>> \n>>> Is that ok for Java? I thought Java needs to know the encoding\n>>> beforehand so that it could convert to/from Unicode.\n>> \n>> That is actually the original issue that started this thread. If you \n>> want the full thread see the jdbc mail archive list. A user was \n>> complaining that when running on a database without multibyte enabled, \n>> that through psql he could insert and retrieve 8bit characters, but in \n>> jdbc the 8bit characters were converted to ?'s.\n> \n> \n> Still I don't see what you are wanting in the JDBC driver if\n> PostgreSQL would return \"UNKNOWN\" indicating that the backend is not\n> compiled with MULTIBYTE. Do you want exact the same behavior as prior\n> 7.1 driver? i.e. reading data from the PostgreSQL backend, assume its\n> encoding default to the Java client (that is set by locale or\n> something else) and convert it to UTF-8. If so, that would make sense\n> to me...\n\nMy suggestion would be that if the jdbc client was able to determine if \nthe server character set was UNKNOWN (i.e. no multibyte) that it would \nthen use some appropriate default character set to perform conversions \nto UCS2 (LATIN1 would probably make the most sence as a default). The \njdbc driver would perform its existing behavior if the character set was \nSQL_ASCII and multibyte was enabled (i.e. 
only support 7bit characters \njust like the backend does).\n\nNote that the user is always able to override the character set used for \nconversion by setting the charSet property.\n\nTom also mentioned that it might be possible for the server to support \nsetting the character set for a database even when multibyte wasn't \nenabled. That would then allow clients like jdbc to get a value from \nnon-multibyte enabled servers that would be more meaningful than the \ncurrent SQL_ASCII. If this where done, then the 'UNKNOWN' hack would \nnot be necessary.\n\nthanks,\n--Barry\n\n> \n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n", "msg_date": "Mon, 07 May 2001 22:16:03 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug\t\n\twith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "\n\nPeter B. West wrote:\n\n> I'm not entirely sure of the situation here, although I have been\n> reading the thread as it has unwound. Given that I may not understand\n> the whole situation, my *philosophical* preference is NOT to build in\n> kludges which silently bypass the information which is being passed\n> around.\n> \n> Initially, I was getting wound up about Latin1 imperialism, but I\n> realised that, for SQL_ASCII encoding to work in 8-bit environments up\n> to now, users must be working in homogeneous encoding environments,\n> where 8 bits coming and going will always represent the same character. \n> In that case it doesn't matter how the character is represented\n> internally as long as the round-trip translation is consistent.\n> \n> How hard is it to change the single-byte character encoding of a\n> database? 
If that is currently difficult, why not provide a one-off\n> upgrade application which does just that, provided it is going from\n> SQL_ASCII to a single-byte encoding?\n\nIt is currently not possible to change the character encoding of a \ndatabase once created.  You can specify a character encoding for a newly \ncreated database only if multibyte is enabled.  The code hardcodes a \nvalue of 'SQL_ASCII' if multibyte is not enabled.  How difficult would \nit be to change this functionality is a question more appropriately \nanswered by others on the list (i.e. I don't know).\n\n> \n> Alternatively, add a compile switch that specifies an implicit 8-bit\n> encoding in which 8-bit SQL_ASCII values are to be understood?  I think\n> that the first solution should be as easy to implement, and would be a\n> lot cleaner.\n> \n> Peter\n> \nI agree that your first suggestion would be more desirable IMHO.\n\nthanks,\n--Barry\n\n> \n> Barry Lind wrote:\n> \n>> Tatsuo Ishii wrote:\n>> \n>>>>> Thus I would be happy if getdatabaseencoding() returned 'UNKNOWN' or\n>>>>> something similar when in fact it doesn't know what the encoding is\n>>>>> (i.e. when not compiled with multibyte).\n>>>> \n>>> Is that ok for Java?  I thought Java needs to know the encoding\n>>> beforehand so that it could convert to/from Unicode.\n>> \n>> That is actually the original issue that started this thread.  If you\n>> want the full thread see the jdbc mail archive list. 
A user was\n>> complaining that when running on a database without multibyte enabled,\n>> that through psql he could insert and retrieve 8bit characters, but in\n>> jdbc the 8bit characters were converted to ?'s.\n>> \n>> I then explained why this was happening (db returns SQL_ASCII as the db\n>> character set when not compiled with multibyte) so that character set is\n>> used to convert to unicode.\n>> \n>> Tom suggested that it would make more sense for jdbc to use LATIN1 when\n>> the database reported SQL_ASCII so that most users will see 'correct'\n>> behavior in a non multibyte database. Because currently you need to\n>> enable multibyte support in order to use 8bit characters with jdbc.\n>> Jdbc could easily be changed to treat SQL_ASCII as LATIN1, but I don't\n>> feel that is an appropriate solution for the reasons outlined in this\n>> thread (thus the suggestions for UNKNOWN, or the ability for the client\n>> to determine if multibyte is enabled or not).\n>> \n>>>> I have a philosophical difference with this: basically, I think that\n>>>> since SQL_ASCII is the default value, you probably ought to assume that\n>>>> it's not too trustworthy. The software can *never* be said to KNOW what\n>>>> the data encoding is; at most it knows what it's been told, and in the\n>>>> case of a default it probably hasn't been told anything. I'd argue that\n>>>> SQL_ASCII should be interpreted in the way you are saying \"UNKNOWN\"\n>>>> ought to be: ie, it's an unspecified 8-bit encoding (and from there\n>>>> it's not much of a jump to deciding to treat it as LATIN1, if you're\n>>>> forced to do conversion to Unicode or whatever). Certainly, seeing\n>>>> SQL_ASCII from the server is not license to throw away data, which is\n>>>> what JDBC is doing now.\n>>>> \n>>>>> PS. Note that if multibyte is enabled, the functionality that is being\n>>>>> complained about here in the jdbc client is apparently ok for the server\n>>>>> to do. 
If you insert a value into a text column on a SQL_ASCII database\n>>>>> with multibyte enabled and that value contains 8bit characters, those\n>>>>> 8bit characters will be quietly replaced with a dummy character since\n>>>>> they are invalid for the SQL_ASCII 7bit character set.\n>>>> \n>>>> I have not tried it, but if the backend does that then I'd argue that\n>>>> that's a bug too.\n>>> \n>>> \n>>> I suspect the JDBC driver is responsible for the problem Burry has\n>>> reported (I have tried to reproduce the problem using psql without\n>>> success).\n>>> \n>>> >From interfaces/jdbc/org/postgresql/Connection.java:\n>>> \n>>>> if (dbEncoding.equals(\"SQL_ASCII\")) {\n>>>> dbEncoding = \"ASCII\";\n>>> \n>>> \n>>> BTW, even if the backend behaves like that, I don't think it's a\n>>> bug. Since SQL_ASCII is nothing more than an ascii encoding.\n>> \n>> I believe Tom's point is that if multibyte is not enabled this isn't\n>> true, since SQL_ASCII then means whatever character set the client wants\n>> to use against the server as the server really doesn't care what single\n>> byte data is being inserted/selected from the database.\n>> \n>>>> To my mind, a MULTIBYTE backend operating in\n>>>> SQL_ASCII encoding ought to behave the same as a non-MULTIBYTE backend:\n>>>> transparent pass-through of characters with the high bit set. But I'm\n>>>> not a multibyte guru. Comments anyone?\n>>> \n>>> \n>>> If you expect that behavior, I think the encoding name 'UNKNOWN' or\n>>> something like that seems more appropreate. 
(SQL_)ASCII is just an\n>>> ascii IMHO.\n>> \n>> I agree.\n> \n\n", "msg_date": "Tue, 08 May 2001 14:14:46 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: MULTIBYTE and SQL_ASCII (was Re: [JDBC] Re: A bugwith\n\tpgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "> > Still I don't see what you are wanting in the JDBC driver if\n> > PostgreSQL would return \"UNKNOWN\" indicating that the backend is not\n> > compiled with MULTIBYTE. Do you want exact the same behavior as prior\n> > 7.1 driver? i.e. reading data from the PostgreSQL backend, assume its\n> > encoding default to the Java client (that is set by locale or\n> > something else) and convert it to UTF-8. If so, that would make sense\n> > to me...\n> \n> My suggestion would be that if the jdbc client was able to determine if \n> the server character set was UNKNOWN (i.e. no multibyte) that it would \n> then use some appropriate default character set to perform conversions \n> to UCS2 (LATIN1 would probably make the most sence as a default).  The \n> jdbc driver would perform its existing behavior if the character set was \n> SQL_ASCII and multibyte was enabled (i.e. only support 7bit characters \n> just like the backend does).\n>\n> Note that the user is always able to override the character set used for \n> conversion by setting the charSet property.\n\nI see. However I would say we could not change the current behavior\nof the backend until 7.2 is out. It is our policy that we would not\nadd/change existing functionalities while we are in the minor release\ncycle.\n\nWhat about doing like this:\n\n1. call pg_encoding_to_char(1)\t(actually any number except 0 is ok)\n\n2. 
if it returns \"SQL_ASCII\", then you could assume that MULTIBYTE is\nnot enabled.\n\nThis is pretty ugly, but should work.\n\n> Tom also mentioned that it might be possible for the server to support \n> setting the character set for a database even when multibyte wasn't \n> enabled.  That would then allow clients like jdbc to get a value from \n> non-multibyte enabled servers that would be more meaningful than the \n> current SQL_ASCII.  If this where done, then the 'UNKNOWN' hack would \n> not be necessary.\n\nTom's suggestion does not sound reasonable to me. If PostgreSQL is not\nbuilt with MULTIBYTE, then it means there would be no such idea\n\"encoding\" in PostgreSQL because there is no program to handle\nencodings. Thus it would be meaningless to assign an \"encoding\" to a\ndatabase if MULTIBYTE is not enabled.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 09 May 2001 10:23:05 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug\t\t\n\twith pgsql 7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> Tom also mentioned that it might be possible for the server to support \n>> setting the character set for a database even when multibyte wasn't \n>> enabled.  That would then allow clients like jdbc to get a value from \n>> non-multibyte enabled servers that would be more meaningful than the \n>> current SQL_ASCII.  If this where done, then the 'UNKNOWN' hack would \n>> not be necessary.\n\n> Tom's suggestion does not sound reasonable to me. If PostgreSQL is not\n> built with MULTIBYTE, then it means there would be no such idea\n> \"encoding\" in PostgreSQL because there is no program to handle\n> encodings. Thus it would be meaningless to assign an \"encoding\" to a\n> database if MULTIBYTE is not enabled.\n\nWhy? 
Without the MULTIBYTE code, the backend cannot perform character\nset translations --- but it's perfectly possible that someone might not\nneed translations. A lot of European sites are probably very happy\nas long as the server gives them back the same 8-bit characters they\nstored. But what they would like, if they have to deal with tools like\nJDBC, is to *identify* what character set they are storing data in, so\nthat their data will be correctly translated to Unicode or whatever.\nThe obvious way to do that is to allow them to set the value that\ngetdatabaseencoding() will return.\n\nEssentially, my point is that identifying the character set is useful\nto support outside-the-database character set conversions, whether or\nnot we have compiled the code for inside-the-database conversions.\nMoreover, the stored data certainly has some encoding, whether or not\nthe database contains code that knows enough to do anything useful about\nthe encoding. So it's not \"meaningless\" to be able to store and report\nan encoding value.\n\nI am not sure how much of the MULTIBYTE code would have to be activated\nto allow this, but surely it's only a small fraction of the complete\nfeature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 May 2001 22:40:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MULTIBYTE and SQL_ASCII (was Re: Re: A bug with pgsql\n\t7.1/jdbc and non-ascii (8-bit) chars?)" }, { "msg_contents": "Thanks. Patch applied to jdbc1 and jdbc2, and attached. 
I had already\npatched the ORDER BY as:\n\n sql.append(\"' order by relkind, relname\");\n\n\n> Hi !!\n> \n> I was trying to get a very nice FREE graphical db tool called DbVisualizer \n> (http://www.ideit.com/products/dbvis/) to work with Postgresql and I found \n> out the following bug: if database has views then getTables() gets the null \n> pointer exception ('order by relname' makes the listing tree in \n> DbVisualizer a lot useful !!)\n> \n> This patch should propably be applied to the the jdbc1's \n> DatabaseMetaData.java, too.\n> \n> [/tmp/postgresql-7.1/src/interfaces/jdbc/org/postgresql/jdbc2]$\n> <ql/jdbc2]$ diff -u DatabaseMetaData.java.org DatabaseMetaData.java\n> \n> --- DatabaseMetaData.java.org\tWed May 02 22:52:25 2001\n> +++ DatabaseMetaData.java\tWed May 02 23:07:19 2001\n> @@ -1666,7 +1666,7 @@\n> // Now take the pattern into account\n> sql.append(\") and relname like '\");\n> sql.append(tableNamePattern.toLowerCase());\n> - sql.append(\"'\");\n> + sql.append(\"' order by relname\");\n> \n> // Now run the query\n> r = connection.ExecSQL(sql.toString());\n> @@ -1697,6 +1697,9 @@\n> \tcase 'S':\n> \t relKind = \"SEQUENCE\";\n> \t break;\n> +\tcase 'v':\n> +\t relKind = \"VIEW\";\n> +\t break;\n> \tdefault:\n> \t relKind = null;\n> \t}\n> @@ -1704,7 +1707,7 @@\n> \ttuple[0] = null;\t\t// Catalog name\n> \ttuple[1] = null;\t\t// Schema name\n> \ttuple[2] = r.getBytes(1);\t// Table name\n> -\ttuple[3] = relKind.getBytes();\t// Table type\n> +\ttuple[3] = (relKind==null) ? 
null : relKind.getBytes();\t// Table type\n> \ttuple[4] = remarks;\t\t// Remarks\n> \tv.addElement(tuple);\n> }\n> \n> \n> -----\n> http://www.ideit.com/products/dbvis/\n> \n> ...\n> \n> DbVisualizer\n> Version: 2.0\n> Released: 2001-04-20\n> \n> \n> The #1 requested feature to ease editing table data is now supported!\n> The #2 requested feature to print graphs is now supported!\n> Read the complete change log for all new features and enhancements!\n> \n> \n> DbVisualizer is a cross platform database visualization and edit tool \n> relying 100% on the JDBC, Java Database Connectivity API's. DbVisualizer \n> enables simultaneous connections to many different databases through JDBC \n> drivers available from a variety of vendors. Just point and click to browse \n> the structure of the database, characteristics of tables, etc. No matter if \n> it's an enterprise database from Oracle or an open source product like \n> InstantDB!\n> \n> And best of all -> it's FREE!\n> -----\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/jdbc/org/postgresql/jdbc1/DatabaseMetaData.java\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/jdbc1/DatabaseMetaData.java,v\nretrieving revision 1.14\ndiff -c -r1.14 DatabaseMetaData.java\n*** src/interfaces/jdbc/org/postgresql/jdbc1/DatabaseMetaData.java\t2001/05/16 04:08:49\t1.14\n--- src/interfaces/jdbc/org/postgresql/jdbc1/DatabaseMetaData.java\t2001/05/16 16:36:21\n***************\n*** 1697,1702 ****\n--- 1697,1705 ----\n \tcase 'S':\n \t relKind = \"SEQUENCE\";\n \t break;\n+ \tcase 'v':\n+ \t relKind = \"VIEW\";\n+ \t break;\n \tdefault:\n \t relKind = null;\n \t}\n***************\n*** 1704,1710 ****\n \ttuple[0] = null;\t\t// Catalog name\n \ttuple[1] = null;\t\t// Schema name\n \ttuple[2] = r.getBytes(1);\t// Table name\t\n! \ttuple[3] = relKind.getBytes();\t// Table type\n \ttuple[4] = remarks;\t\t// Remarks\n \tv.addElement(tuple);\n }\n--- 1707,1713 ----\n \ttuple[0] = null;\t\t// Catalog name\n \ttuple[1] = null;\t\t// Schema name\n \ttuple[2] = r.getBytes(1);\t// Table name\t\n! \ttuple[3] = (relKind==null) ? 
null : relKind.getBytes();\t// Table type\n \ttuple[4] = remarks;\t\t// Remarks\n \tv.addElement(tuple);\n }\nIndex: src/interfaces/jdbc/org/postgresql/jdbc2/DatabaseMetaData.java\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/jdbc2/DatabaseMetaData.java,v\nretrieving revision 1.18\ndiff -c -r1.18 DatabaseMetaData.java\n*** src/interfaces/jdbc/org/postgresql/jdbc2/DatabaseMetaData.java\t2001/05/16 04:08:50\t1.18\n--- src/interfaces/jdbc/org/postgresql/jdbc2/DatabaseMetaData.java\t2001/05/16 16:36:25\n***************\n*** 1697,1702 ****\n--- 1697,1705 ----\n \tcase 'S':\n \t relKind = \"SEQUENCE\";\n \t break;\n+ \tcase 'v':\n+ \t relKind = \"VIEW\";\n+ \t break;\n \tdefault:\n \t relKind = null;\n \t}\n***************\n*** 1704,1710 ****\n \ttuple[0] = null;\t\t// Catalog name\n \ttuple[1] = null;\t\t// Schema name\n \ttuple[2] = r.getBytes(1);\t// Table name\n! \ttuple[3] = relKind.getBytes();\t// Table type\n \ttuple[4] = remarks;\t\t// Remarks\n \tv.addElement(tuple);\n }\n--- 1707,1713 ----\n \ttuple[0] = null;\t\t// Catalog name\n \ttuple[1] = null;\t\t// Schema name\n \ttuple[2] = r.getBytes(1);\t// Table name\n! \ttuple[3] = (relKind==null) ? null : relKind.getBytes();\t// Table type\n \ttuple[4] = remarks;\t\t// Remarks\n \tv.addElement(tuple);\n }", "msg_date": "Wed, 16 May 2001 12:41:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug fix for JDBC's getTables() in Postgresql 7.1" }, { "msg_contents": "For anyone looking for latest jar files; I have built the jars from the\nlatest code snapshot and they are available for download at\n\nhttp://jdbc.fastcrypt.com\n\nDave\n\n\n\n", "msg_date": "Thu, 17 May 2001 07:51:28 -0400", "msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Latest binaries" } ]
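The JDBC patch in the thread above fixes a NullPointerException: `relKind.getBytes()` was called even when `relKind` could be null for an unrecognized `relkind` letter, and it also adds the `'v'` (VIEW) case. A minimal C sketch of the same null-guard pattern — illustrative only; `rel_kind_name` and `rel_kind_or_default` are hypothetical helpers, not driver code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Map a pg_class.relkind letter to a display name; NULL for unknown kinds. */
const char *rel_kind_name(char relkind)
{
    switch (relkind)
    {
        case 'r': return "TABLE";
        case 'i': return "INDEX";
        case 'S': return "SEQUENCE";
        case 'v': return "VIEW";      /* the case the patch adds */
        default:  return NULL;        /* unknown kind: caller must check */
    }
}

/* Guarded use, mirroring `(relKind==null) ? null : relKind.getBytes()`. */
const char *rel_kind_or_default(char relkind, const char *fallback)
{
    const char *name = rel_kind_name(relkind);
    return (name == NULL) ? fallback : name;
}
```

The point is that the lookup's NULL result must be checked before use, exactly as the Java ternary does.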
[ { "msg_contents": "I am starting to package 7.1.1, and I see I did not brand 7.1 properly. \nI forgot the date in the HISTORY file, and didn't update register.txt. \nI will do all those now for 7.1.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 12:16:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Packaging 7.1.1" }, { "msg_contents": "Please,\n\napply a little patch:\n\n--- src/test/locale/test-ctype.c Tue Sep 1 08:40:33 1998\n+++ /u/megera/app/locale/test/test-ctype.c Fri Sep 15 19:12:06 2000\n@@ -39,7 +39,7 @@\n void\n describe_char(int c)\n {\n- char cp = c,\n+ unsigned char cp = c,\n up = toupper(c),\n lo = tolower(c);\n\n\n\tRegards,\n\n\t\tOleg\n\nOn Thu, 3 May 2001, Bruce Momjian wrote:\n\n> I am starting to package 7.1.1, and I see I did not brand 7.1 properly.\n> I forgot the date in the HISTORY file, and didn't update register.txt.\n> I will do all those now for 7.1.1.\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 3 May 2001 22:09:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "\nOK, Oleg, I am applying this on your word only. I don't understand its\npurpose, but you sent it with a 7.1.1 subject so I assume you want it in\nthere. 
This is not a critical area of our code.\n\n> Please,\n> \n> apply a little patch:\n> \n> --- src/test/locale/test-ctype.c Tue Sep 1 08:40:33 1998\n> +++ /u/megera/app/locale/test/test-ctype.c Fri Sep 15 19:12:06 2000\n> @@ -39,7 +39,7 @@\n> void\n> describe_char(int c)\n> {\n> - char cp = c,\n> + unsigned char cp = c,\n> up = toupper(c),\n> lo = tolower(c);\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> On Thu, 3 May 2001, Bruce Momjian wrote:\n> \n> > I am starting to package 7.1.1, and I see I did not brand 7.1 properly.\n> > I forgot the date in the HISTORY file, and didn't update register.txt.\n> > I will do all those now for 7.1.1.\n> >\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 15:20:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Packaging 7.1.1" } ]
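Oleg's one-line patch above changes `char cp = c` to `unsigned char cp = c`. The reason: on platforms where plain `char` is signed, a high-bit byte such as a KOI8-R Cyrillic letter becomes a negative value, and passing a negative value other than EOF to the `<ctype.h>` functions is undefined behavior. A standalone sketch of the safe pattern — `classify_byte` is a hypothetical helper, not the test-ctype.c code:

```c
#include <assert.h>
#include <ctype.h>

/* Classification results for one byte, kept in unsigned char so a
 * high-bit byte (e.g. 0xE1, a KOI8-R letter) stays in 0..255 instead
 * of going negative as it could in a plain signed char. */
typedef struct
{
    unsigned char cp;   /* the byte itself       */
    unsigned char up;   /* toupper() of the byte */
    unsigned char lo;   /* tolower() of the byte */
} byte_info;

byte_info classify_byte(int c)
{
    byte_info b;
    b.cp = (unsigned char) c;                      /* was: char cp = c; */
    b.up = (unsigned char) toupper((unsigned char) c);
    b.lo = (unsigned char) tolower((unsigned char) c);
    return b;
}
```

Casting the argument to `unsigned char` before calling `toupper`/`tolower` is the standard portable idiom for 8-bit locale data.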
[ { "msg_contents": " From my point of view, NULL is neither bigger, nor smaller, you can't\ncompare it with a number.\n\nSo it just comes at the end if you sort at all. \n\n(Perhaps you need to take a think about what NULL means in your data. Should\nNULL sort as if it's 0?, +infinity?, -infinity? if so why?)\n\n\nregards,\n\nPiers Scannell\nSystem Architect, GlobeCast France Telecom\nTel: +44 1707 667 228 Fax: +44 1707 667 206\n\n\n\n> -----Original Message-----\n> From: Marcin Zukowski [mailto:mz174771@students.mimuw.edu.pl]\n> Sent: 30 April 2001 16:30\n> To: Tom Lane\n> Cc: eru@mimuw.edu.pl; pgsql-bugs@postgresql.org\n> Subject: [BUGS] Found an example prooving bug\n> \n> \n> I found an example when postgres while executing the same \n> query uses null\n> values as sometimes bigger than everything and sometimes smaller.\n> And I think it's BAD.\n> Check this out:\n> \n> -------------------------------------------------------------------\n> \n> DROP TABLE NUTKI ;\n> CREATE TABLE NUTKI (\n> ID INT4 PRIMARY KEY,\n> VAL INT4,\n> REF INT4\n> );\n> CREATE INDEX NUTKI_VAL ON NUTKI(VAL);\n> CREATE INDEX NUTKI_KEY ON NUTKI(KEY);\n> INSERT INTO NUTKI VALUES(1,1,null);\n> INSERT INTO NUTKI VALUES(2,2,1);\n> INSERT INTO NUTKI VALUES(3,3,1);\n> INSERT INTO NUTKI VALUES(4,null,1);\n> INSERT INTO NUTKI VALUES(5,5,5);\n> INSERT INTO NUTKI VALUES(7,null,7);\n> INSERT INTO NUTKI VALUES(8,8,7);\n> SET ENABLE_INDEXSCAN TO ON ;\n> SET ENABLE_SEQSCAN TO OFF ;\n> SET ENABLE_SORT TO OFF;\n> SELECT * FROM NUTKI N1, NUTKI N2 WHERE N1.ID = N2.REF\n> ORDER BY N1.VAL DESC, N2.VAL;\n> \n> --------------------------------------------------------------\n> -----------\n> ( well, i think all the index creation and switches are not \n> necessary )\n> \n> The result is:\n> \n> id | val | ref | id | val | ref\n> ----+-----+-----+----+-----+-----\n> 5 | 5 | 5 | 5 | 5 | 5\n> 1 | 1 | | 2 | 2 | 1\n> 1 | 1 | | 3 | 3 | 1\n> 1 | 1 | | 4 | | 1\n> 7 | | 7 | 8 | 8 | 7\n> 7 | | 7 | 7 | | 7\n> \n> Tested 
on:\n> PostgreSQL 7.0.3 on i586-pc-linux-gnu, compiled by gcc egcs-2.91.66\n> \n> So, as you can see, values in 2nd column are sorted descending, with\n> null smaller than everything. In the 5th column, val's are sorted\n> ascending, with null BIGGER than everything.\n> I really think it's a bug.\n> Please let me know, what do you think about it, and please \n> make it go to\n> the pgsql-bugs, because my mails aren't accepted there. I \n> didn't get any\n> reply for my previous letter, and I don't know what to think.\n> \n> best regards,\n> \n> Marcin \n> \n> --\n> : Marcin Zukowski < eru@i.pl || eru@mimuw.edu.pl >\n> : \"The worst thing in life is that there's no background music\"\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Thu, 3 May 2001 17:20:58 +0100 ", "msg_from": "Piers Scannell <piers.scannell@globecastne.com>", "msg_from_op": true, "msg_subject": "RE: Found an example prooving bug" }, { "msg_contents": "> From my point of view, NULL is neither bigger, nor smaller, you can't\n> compare it with a number.\n> So it just comes at the end if you sort at all. \nWell, I know you can't compare null in, for example, WHERE clause. But if\nwe want to sort data in some way, I would like Postgres to behave in any,\nbut predictable, way. If last of the query execution steps is sorting,\nnull values are always at the end. And it would be OK, but, depending on \nthe query, values in database, and some options (like ENABLE_SORT), \nnull-values are sometimes at the beginning, because it uses order stored \nin index.\nAlso, for my bug-report Tom Lane replied with some details from SQL92\nspecs. And my last mail, with an example (I can write a less complex\none) shows that pgsql doesn't work the way SQL92 says. So, is it \ncompliant with SQL92 standard in this matter or is it not?
If it's not,\nshouldn't that be changed?\n> (Perhaps you need to take a think about what NULL means in your data. Should\n> NULL sort as if it's 0?, +infinity?, -infinity? if so why?)\nAs I wrote - any way. But fixed one.\nTo finish this problem - I've changed my program to use -infinity for null \nvalues (but I really don't like it :) ). I still think pgsql is not \ncompliant with SQL92, but I'm not the one to decide if it should be \nchanged. \n\nBest regards\n\nMarcin Zukowski\n\n", "msg_date": "Thu, 3 May 2001 20:32:39 +0200 (CEST)", "msg_from": "Marcin Zukowski <mz174771@students.mimuw.edu.pl>", "msg_from_op": false, "msg_subject": "RE: Found an example prooving bug" }, { "msg_contents": "> >From my point of view, NULL is neither bigger, nor smaller, you can't\n> compare it with a number.\n> \n> So it just comes at the end if you sort at all. \n> \n> (Perhaps you need to take a think about what NULL means in your data. Should\n> NULL sort as if it's 0?, +infinity?, -infinity? if so why?)\n\nWe have a TODO item:\n\n\t* Make NULL's come out at the beginning or end depending on the\n\t ORDER BY direction\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 15:21:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Found an example prooving bug" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We have a TODO item:\n\n> \t* Make NULL's come out at the beginning or end depending on the\n> \t ORDER BY direction\n\nThe tricky part of this is to know which direction you are talking\nabout, when all you've been given is an operator that might have an\narbitrary name (ie, not necessarily '<' or '>'). 
So it's not all that\nclear which end to put the NULLs at.\n\nActually, I've been messing around with that code in hopes of speeding\nup sorting a little bit. Up to now the sort comparison routines depend\non invoking the datatype's ordering operator '<', which they may have to\ndo twice; whereas if they invoked the datatype's btree 3-way comparator\nfunction there'd only be one function call and one underlying comparison\noperation. So I have code pending commit that tries to look up the\nassociated comparator function and use that instead, if there is one.\n\nHow's that relevant, you ask? Well, to make this work for both '<' and\n'>' (ie, ASC or DESC sort), the sort comparator has to distinguish which\nway it's sorting and negate the 3-way comparison result or not. It\nknows which case applies from the pg_amop entry that it found the\noperator in (ie, BTLessOp or BTGreaterOp). So for all btree-compatible\nsort operators, it would now be a pretty simple matter to make the NULLs\ncome out at the same end that a btree index scan would make them come\nout at. The semantics are defined by the system catalogs and we don't\nhave to depend on anything as klugy as looking at the operator name.\n\nThis still leaves us up in the air for sort operators that aren't linked\nto btree comparison routines. Would it be OK to punt for those, and\njust sort the NULLs at the end no matter what sort operator you mention?\nThere's no issue of getting different results for an indexscan vs\nexplicit sort plan in this situation, since there can't be any btree\nindex available...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 17:01:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Sort ordering of NULLs (was Re: Found an example prooving bug)" } ]
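Tom's scheme in the thread above — reuse one btree 3-way comparator for both sort directions by negating its result for DESC, while handling NULLs outside the negation so they always land at the same end — can be sketched in C. This is illustrative only, not the backend's tuplesort code:

```c
#include <assert.h>
#include <stddef.h>

/* A btree-style 3-way comparator: negative, zero, or positive. */
static int cmp3(int a, int b)
{
    return (a > b) - (a < b);
}

/* Compare two possibly-NULL int pointers. `descending` negates the
 * comparator result for non-NULL values only, so NULLs sort after
 * everything else regardless of the requested direction. */
int compare_nullable(const int *a, const int *b, int descending)
{
    if (a == NULL)
        return (b == NULL) ? 0 : 1;   /* NULLs after everything */
    if (b == NULL)
        return -1;
    return descending ? -cmp3(*a, *b) : cmp3(*a, *b);
}
```

Because the NULL tests sit outside the negation, ASC and DESC scans agree on where the NULLs go, which is exactly the consistency Marcin's report asks for.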
[ { "msg_contents": "I have completed branding 7.1.1. HISTORY file shows changes as:\n\n\tFix for numeric MODULO operator (Tom)\n\tpg_dump fixes (Philip)\n\treadline 4.2 fixes (Peter E)\n\tJOIN fixes (Tom)\n\tAIX, MSWIN, VAX,N32K fixes (Tom)\n\tMultibytes fixes (Tom)\n\tUnicode fixes (Tatsuo)\n\tOptimizer improvements (Tom)\n\tFix for whole tuples in functions (Tom)\n\tFix for pg_ctl and option strings with spaces (Peter E)\n\tODBC fixes (Hiroshi)\n\tEXTRACT can now take string argument (Thomas)\n\tPython fixes (Darcy)\n\nIs this OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 13:17:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Final stamp for 7.1.1" }, { "msg_contents": "Bruce Momjian writes:\n\n> I have completed branding 7.1.1. HISTORY file shows changes as:\n>\n> \tFix for numeric MODULO operator (Tom)\n> \tpg_dump fixes (Philip)\n\nShould probably point out that pg_dump is now able to dump 7.0 databases\nas well.\n\n> \treadline 4.2 fixes (Peter E)\n> \tJOIN fixes (Tom)\n> \tAIX, MSWIN, VAX,N32K fixes (Tom)\n> \tMultibytes fixes (Tom)\n> \tUnicode fixes (Tatsuo)\n> \tOptimizer improvements (Tom)\n> \tFix for whole tuples in functions (Tom)\n> \tFix for pg_ctl and option strings with spaces (Peter E)\n> \tODBC fixes (Hiroshi)\n> \tEXTRACT can now take string argument (Thomas)\n> \tPython fixes (Darcy)\n>\n> Is this OK?\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 3 May 2001 22:38:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Final stamp for 7.1.1" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > I have completed branding 7.1.1. 
HISTORY file shows changes as:\n> >\n> > \tFix for numeric MODULO operator (Tom)\n> > \tpg_dump fixes (Philip)\n> \n> Should probably point out that pg_dump is now able to dump 7.0 databases\n> as well.\n\nDone:\n\n\tpg_dump can dump 7.0 databases (Philip)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 16:39:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Final stamp for 7.1.1" } ]
[ { "msg_contents": "I'm starting to throw together a web site relating to postgresql\nreplication, trying to bring together the ideas we have thrown around so\nfar. If anyone has any good docs (on replication not relating to\npostgresql too), please send me the links.\n\nThanks.\n- Brandon\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Thu, 3 May 2001 14:46:04 -0400 (EDT)", "msg_from": "bpalmer <bpalmer@crimelabs.net>", "msg_from_op": true, "msg_subject": "Replication Docs.." }, { "msg_contents": "On Thu, 3 May 2001, bpalmer wrote:\n\n> I'm starting to throw together a web site relating to postgresql\n> replication, trying to bring together the ideas we have thrown around so\n> far. If anyone has any good docs (on replication not relating to\n> postgresql too), please send me the links.\n\nCollaborate w/Justin -- he has information about replication up at\ntechdocs.postgresql.org now.\n\nThanks,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 3 May 2001 16:17:47 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": false, "msg_subject": "Re: Replication Docs.." }, { "msg_contents": "> I'm starting to throw together a web site relating to postgresql\n> replication, trying to bring together the ideas we have thrown around so\n> far. If anyone has any good docs (on replication not relating to\n> postgresql too), please send me the links.\n\nYou may want to take a look at:\n\n\thttp://greatbridge.org/project/pgreplication/projdisplay.php\n\nand especially here:\n\n\thttp://www.greatbridge.org/genpage?replication_top\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 14:43:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication Docs.." } ]
[ { "msg_contents": "I have a table:\n\ncreate table forsamling (\n id SERIAL,\n for_id int4 unique not null,\n kund_flag int8 not null default 1,\n online smallint default 0,\n klar smallint default 0,\n);\n\ncreate index forsamling_idx on forsamling(for_id,online,klar,kund_flag);\n\nIt has about 1000 entries in this table...\n\nWhy doesn't it go by indexes when i search the smallints and int8s, but it\nworks with the integer SERIAL (SERIAL creates it's own index)?\n\nWhat can i do to make it go by index?\n\n/Magnus\n\nexplain select * from forsamling where klar = 1;\nNOTICE: QUERY PLAN:\n\nSeq Scan on forsamling (cost=0.00..23.50 rows=1 width=88)\n\n-----\n\nexplain select * from forsamling where kund_flag = 123;\nNOTICE: QUERY PLAN:\n\nSeq Scan on forsamling (cost=0.00..23.50 rows=1 width=88)\n\n\n-----\n\nexplain select * from forsamling where for_id = 123;\nNOTICE: QUERY PLAN:\n\nIndex Scan using forsamling_idx on forsamling (cost=0.00..2.01 rows=1\nwidth=88)\n\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n", "msg_date": "Thu, 3 May 2001 21:15:43 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Not scanning by index" }, { "msg_contents": "> explain select * from forsamling where klar = 1;\n\nTry SELECT * FROM forsampling WHERE klar = 1::int2\n\n-Mitch\n\n", "msg_date": "Thu, 3 May 2001 17:02:05 -0400", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: Not scanning by index" }, { "msg_contents": "\nOn Thu, 3 May 2001, Magnus Naeslund(f) wrote:\n\n> I have a table:\n> \n> create table forsamling (\n> id SERIAL,\n> for_id int4 unique not null,\n> kund_flag int8 not null default 1,\n> online smallint default 0,\n> klar smallint default 0,\n> );\n> \n> create index forsamling_idx on 
forsamling(for_id,online,klar,kund_flag);\n> \n> It has about 1000 entries in this table...\n> \n> Why doesn't it go by indexes when i search the smallints and int8s, but it\n> works with the integer SERIAL (SERIAL creates it's own index)?\n> \n> What can i do to make it go by index?\n\nTwo things I can think of that might help...\n\nFirst, the multi-column indexes aren't very useful for searching for\nthings not at the start of the index (ie, klar, etc...).\n\nSecond, there's a known problem with the other integer types because the\nint constant you're comparing against is assumed as an int4. You need\nto explicitly cast the constant to type of the column.\n\n", "msg_date": "Thu, 3 May 2001 14:29:27 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Not scanning by index" } ]
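The workaround Mitch and Stephan describe in the thread above is to cast the literal to the column's declared type (e.g. `klar = 1::int2`), because the 7.x planner treats a bare integer constant as `int4` and then will not consider the index on an `int2` or `int8` column. A sketch of building such a query string — the helper itself is hypothetical; the table and column names come from Magnus' example:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a lookup whose literal is cast to the column's declared type,
 * so the planner can match the constant against that column's index. */
int build_lookup_query(char *buf, size_t bufsize,
                       const char *column, const char *coltype, long value)
{
    return snprintf(buf, bufsize,
                    "SELECT * FROM forsamling WHERE %s = %ld::%s",
                    column, value, coltype);
}
```

The same idea applies to `kund_flag = 123::int8`; only the leading column(s) of the multi-column index can be used, so casting alone does not help for `klar` unless it is indexed on its own.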
[ { "msg_contents": "Hi, I'm using DBD::Pg version 0.98 with Postgres 7.1. I'm noticing\nthat quite often on an error, the $dbh->errstr method doesn't return\nthe full error. For example, if I have a table with a unique key\nconstraint:\n\nCREATE TABLE urls (\n url_id SERIAL PRIMARY KEY,\n msg_id integer NOT NULL REFERENCES msg_info(msg_id),\n url_link varchar(255) NOT NULL default ''\n);\n\nand I do this insert:\n\nINSERT INTO urls (msg_id,url_link) VALUES (9,'http://www.kcilink.com/');\n\nthe second time I insert it, I get this on the psql command line:\n\nERROR: Cannot insert a duplicate key into unique index urls_id_link\n\nHowever, if I use a perl module to do it, like this:\n\n my $sth = $dbh->prepare('INSERT INTO urls (msg_id,url_link) VALUES (?,?)');\n if ($sth and $sth->execute($msgid,$url)) {\n ($urlid) = $dbh->selectrow_array(\"SELECT currval('urls_url_id_seq')\");\n } else {\n print $dbh->errstr(),\"\\n\";\n }\n\nwhere $msgid and $url are the same values above, I get this output:\n\nERROR: Cannot i\n\nThis makes it a bit difficult to distinguish between a hard error and\nsimply a duplicate insert error, which I can handle in this app.\n\nAlso, at other times, I get just \"7\" as the error message rather than\nthe full message, making error reporting just a bit confusing. ;-)\n\nDo you know of any issues with $dbh->errstr that could be causing\nthis? Any workarounds I might try?\n", "msg_date": "Thu, 3 May 2001 15:24:31 -0400", "msg_from": "Vivek Khera <khera@kcilink.com>", "msg_from_op": true, "msg_subject": "DBD::Pg errstr method doesn't return full error messages" }, { "msg_contents": "On Thu, 3 May 2001 15:24:31 -0400, Vivek Khera wrote:\n> Hi, I'm using DBD::Pg version 0.98 with Postgres 7.1. 
I'm noticing\n> that quite often on an error, the $dbh->errstr method doesn't return\n> the full error.\n\nHere's a patch to DBD::Pg 0.98 which fixes this:\n\n--- dbdimp.c.orig Tue May 1 11:46:47 2001\n+++ dbdimp.c Tue May 1 11:55:26 2001\n@@ -72,18 +72,21 @@\n char *error_msg;\n {\n D_imp_xxh(h);\n- char *err, *src, *dst; \n+ char *err, *src, *dst, *end; \n int len = strlen(error_msg);\n \n- err = (char *)malloc(strlen(error_msg + 1));\n+ err = (char *)malloc(len + 1);\n if (!err) {\n return;\n }\n+ /* Remove trailing newlines, allowing for multi-line messages */\n+ for(end = error_msg + len; end > error_msg && end[-1] == '\\n'; --end);\n+ \n src = error_msg;\n dst = err;\n \n /* copy error message without trailing newlines */\n- while (*dst != '\\0' && *dst != '\\n') {\n+ while (src < end){\n *dst++ = *src++;\n }\n *dst = '\\0';\n\n-- \n\tPeter Haworth\tpmh@edison.ioppublishing.com\n\"A good messenger expects to get shot.\"\n\t--Larry Wall\n", "msg_date": "Fri, 4 May 2001 12:49:57 +0100", "msg_from": "\"Peter Haworth\" <pmh@edison.ioppublishing.com>", "msg_from_op": false, "msg_subject": "Re: DBD::Pg errstr method doesn't return full error\n messages" }, { "msg_contents": "Hello, \nI am having this little annoying problem with permissions - there is a\ntable, with 'select' permission, and then there is a view with ALL\npermissions on it.\nWhen I update the view, I get 'permission denied' at the source table, unless\nI'm using a user with the ability to create databases/users.\n\n>From logs:\n010504.12:27:31.774 [2179] ProcessQuery\n010504.12:27:31.779 [2179] query: SELECT oid FROM \"jednostka\" WHERE\n\"brcd\" = $1 FOR UPDATE OF \"jednostka\"\n010504.12:27:31.780 [2179] parser outputs:\n010504.12:27:31.784 [2179] ERROR: jednostka: Permission denied.\n\nbut select oid FROM \"jednostka\" where \"brcd\" = 222; works fine.
for\nupdate clause makes the difference.\n\nWhat's wrong?\n\n--\nxx xxx...\nDariusz Pietrzak\t\t\thttp://eyck.tinet.pl\n Eyck@irc.pl dariush@ajax.umcs.lublin.pl\n\n", "msg_date": "Fri, 4 May 2001 14:05:19 +0200 (CEST)", "msg_from": "Dariusz Pietrzak <dariush@ajax.umcs.lublin.pl>", "msg_from_op": false, "msg_subject": "Permissions and views." }, { "msg_contents": "Dariusz Pietrzak <dariush@ajax.umcs.lublin.pl> writes:\n> but select oid FROM \"jednostka\" where \"brcd\" = 222; works fine. for\n> update clause makes the difference.\n\nWhat PG version is this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 10:06:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Permissions and views. " }, { "msg_contents": "On Lun 07 May 2001 11:29, you wrote:\n> > What PG version is this?\n>\n> 7.0.2\n>\n> It is said that RULES are executed with rule's owner permissions, so how\n> is it possible that different users are getting different results?\n\nThis is not true. Rules are not executed with owner permission. The \npermission needed is over the relation on which the \nSELECT/INSERT/DELETE/UPDATE has been executed.\n\nAn example: If you have permission to read (SELECT) on a table, the rule over \nthat table, ON select will be executed, else NOT.\n\nSaludos... :-)\n\nP.D.: Check views and rules for more info.\n\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Mon, 7 May 2001 08:55:30 +0300", "msg_from": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: Permissions and views." 
}, { "msg_contents": "\n> What PG version is this?\n7.0.2\n\nIt is said that RULES are executed with rule's owner permissions, so how\nis it possible that different users are getting different results?\n\n\n", "msg_date": "Mon, 7 May 2001 10:29:11 +0200 (CEST)", "msg_from": "Dariusz Pietrzak <dariush@ajax.umcs.lublin.pl>", "msg_from_op": false, "msg_subject": "Re: Permissions and views. " }, { "msg_contents": "=?iso-8859-1?q?Mart=EDn=20Marqu=E9s?= <martin@bugs.unl.edu.ar> writes:\n>> It is said that RULES are executed with rule's owner permissions, so how\n>> is it possible that different users are getting different results?\n\n> This is not true. Rules are not executed with owner permission.\n\nYes they are. If you do something like\n\n\tINSERT INTO view ...\n\nwhich is rewritten by a rule into INSERT INTO someplace_else,\nthen there are two sets of permission checks applied: the original\ncaller must have insert rights on the view, and the rule owner must\nhave insert rights on \"someplace_else\".\n\nIn the case at hand, I'd expect that the owner of the rule issuing\nSELECT...FOR UPDATE would need to have select and update permission\non the target table.\n\nThere have been sundry bugs in this mechanism in various versions of\nPostgres, which is why I asked what version. But on reading over the\nthread, there's not really enough info to know whether the system\nis misbehaving or not. We'd need to see a more complete example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 10:44:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Permissions and views. " }, { "msg_contents": "\nHas this gotten back to the DBD Perl maintainers?\n\n\n> On Thu, 3 May 2001 15:24:31 -0400, Vivek Khera wrote:\n> > Hi, I'm using DBD::Pg version 0.98 with Postgres 7.1. 
I'm noticing\n> > that quite often on an error, the $dbh->errstr method doesn't return\n> > the full error.\n> \n> Here's a patch to DBD::Pg 0.98 which fixes this:\n> \n> --- dbdimp.c.orig Tue May 1 11:46:47 2001\n> +++ dbdimp.c Tue May 1 11:55:26 2001\n> @@ -72,18 +72,21 @@\n> char *error_msg;\n> {\n> D_imp_xxh(h);\n> - char *err, *src, *dst; \n> + char *err, *src, *dst, *end; \n> int len = strlen(error_msg);\n> \n> - err = (char *)malloc(strlen(error_msg + 1));\n> + err = (char *)malloc(len + 1);\n> if (!err) {\n> return;\n> }\n> + /* Remove trailing newlines, allowing for multi-line messages */\n> + for(end = error_msg + len; end > error_msg && end[-1] == '\\n'; --end);\n> + \n> src = error_msg;\n> dst = err;\n> \n> /* copy error message without trailing newlines */\n> - while (*dst != '\\0' && *dst != '\\n') {\n> + while (src < end){\n> *dst++ = *src++;\n> }\n> *dst = '\\0';\n> \n> -- \n> \tPeter Haworth\tpmh@edison.ioppublishing.com\n> \"A good messenger expects to get shot.\"\n> \t--Larry Wall\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 14:54:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::Pg errstr method doesn't return full error messages" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Has this gotten back to the DBD Perl maintainers?\n> \n\nyes, I just want to to apply some more patches,\nbefore releasing the next version.\n\nEdmund\n\n-- \nhttp://www.edmund-mergl.de\nfon: +49 700 edemergl\n", "msg_date": "Tue, 08 May 2001 21:20:35 +0200", "msg_from": "Edmund Mergl <e.mergl@bawue.de>", "msg_from_op": false, "msg_subject": "Re: DBD::Pg errstr method doesn't return full error \n messages" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Has this gotten back to the DBD Perl maintainers?\n> > \n> \n> yes, I just want to to apply some more patches,\n> before releasing the next version.\n\nEdmond, what about adding your code to the official PostgreSQL CVS\ndistribution? Seems like it would be a good idea and help folks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 May 2001 17:15:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::Pg errstr method doesn't return full error messages" }, { "msg_contents": "Hello,\n\tI want next :\n\n\ta) add constraint (primary and foreign) in existing table\n\tb) temporary disable constraint and enable later\n\nIs it possible in Postgresql ?\n\nHaris Peco\nsnpe@infosky.net\n", "msg_date": "Mon, 14 May 2001 16:35:47 +0200", "msg_from": "snpe <snpe@infosky.net>", "msg_from_op": false, "msg_subject": "Contraints in postgresql ?" 
}, { "msg_contents": "\nOn Mon, 14 May 2001, snpe wrote:\n\n> Hello,\n> \tI want next :\n> \n> \ta) add constraint (primary and foreign) in existing table\n> \tb) temporary disable constraint and enable later\n> \n> Is it possible in Postgresql ?\n\nSort of...\nYou can add foreign key constraints using ALTER TABLE ADD CONSTRAINT,\nand unique constraints with CREATE UNIQUE INDEX (primary keys are\neffectively unique constraints where all the columns are not null \n-- if you need to change the columns to not null that's a bit more\ninvolved). AFAIK, you can't really \"disable\" constraints, although\nyou can remove them (drop the index for unique. you have to manually\ndrop the triggers created by the foreign key constraint) and re-add them.\nHowever, if you've violated the constraint while it was gone, you\nwon't be able to re-add the constraint.\n\n\n", "msg_date": "Mon, 14 May 2001 11:42:45 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Contraints in postgresql ?" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Has this gotten back to the DBD Perl maintainers?\n> > \n> \n> yes, I just want to to apply some more patches,\n> before releasing the next version.\n\nEdmund, what do you think about moving DBD perl into our main CVS tree?\nSeems it would be a logical place for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 23:59:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DBD::Pg errstr method doesn't return full error\n messages" } ]
[ { "msg_contents": "I am running Postgresql 7.1 on a dedicated Redhat 7.0 box with 512meg ram \nand an IDE hard drive.\n\nAll day long queries that usually seem to execute instantaneously have been \ntaking up to 10 second to run! I generally have about 6 postmasters \nrunning, utilizing anywhere from 1% to 96% CPU utilization. Another server \nrunning Apache and PHP is performing all the queries. I am not using \npersistant connections. I vaccum daily, usually has little to no impact on \nthe server. The database is roughly 60 megs, there are no usually wide \ntables... the is one table (user tracking) that has about 200,000 rows. It \nis indexed.\n\nWe are running a proprietary e-commerce package. Right now I am getting \nroughly one query per second.\n\nAny input would be helpful! If you need additional info let me know.\n\nBTW, Thanks to Lamar for some great tips today!\n\nHere is the status from pg_ctl:\n/usr/local/pgsql/bin/postmaster '-d2' '-N' '48' '-B' '10000' '-i' '-D' \n'/usr/local/pgsql/data'\n\nHere is a sample from the log:\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT sale_price FROM ec_sale_prices WHERE sale_begins <= \nCURRENT_TIMESTAMP AND sale_ends >= CURRENT_TIMESTAMP AND p\nroduct_id = 137\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 958 exited with status 0\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing 5\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 959 user postgres db \npa_commerce socket 5\n/usr/local/pgsql/bin/postmaster child[959]: starting with (postgres -d2 \n-v131072 -p pa_commerce )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using 
argv[0]\nDEBUG: connection: host=216.239.233.44 user=postgres database=pa_commerce\nDEBUG: InitPostgres\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT web_user_id FROM pa_web_users WHERE session_id = \n'34978ae91facc5fc9abb8e21db609b4c'\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT web_user_id FROM pa_partner_user_map WHERE \nweb_user_id = 221256\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: UPDATE pa_partner_user_map SET last_visited = \nCURRENT_TIMESTAMP, partner_id = 'OmdxViUZtwA-*HFh50XeaHBc70n42b4iXA' WH\nERE web_user_id = 221256\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: StartTransactionCommand\nDEBUG: query: SELECT order_id FROM pa_orders WHERE web_user_id = 221256 \nAND order_state = 'in_basket'\nDEBUG: ProcessQuery\nDEBUG: CommitTransactionCommand\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading 5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing 5\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 960 user postgres db \npa_commerce socket 5\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 959 exited with status 0\n/usr/local/pgsql/bin/postmaster child[960]: starting with (postgres -d2 \n-v131072 -p pa_commerce )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Thu, 03 May 2001 21:07:42 +0100", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Extrordinarily Poor Performance...." }, { "msg_contents": "!! I haven't ran VACUUM ANALYZE since last night. 
Just ran it - \nperformance has improved significantly. I think I am going to have to run \nit hourly during this high traffic time. Postmasters are still utilizing \nabout 100% of the CPU. Is this normal? I am considering increasing the \nshmmax again.\n\nThanks for the help Mitch!\n\n-r\n\nAt 09:11 PM 5/3/01 -0400, Mitch Vincent wrote:\n\n> > persistant connections. I vaccum daily, usually has little to no impact\n>on\n>\n> You VACUUM ANALYZE too, don't you?\n>\n>-Mitch\n>\n>\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Thu, 03 May 2001 21:24:39 +0100", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: Extrordinarily Poor Performance...." 
}, { "msg_contents": "Here is some output from top...\n\n9:20pm up 40 min, 1 user, load average: 3.77, 3.12, 3.74\n41 processes: 36 sleeping, 5 running, 0 zombie, 0 stopped\nCPU states: 99.2% user, 0.7% system, 0.0% nice, 0.0% idle\nMem: 515664K av, 303712K used, 211952K free, 37476K shrd, 39552K buff\nSwap: 514068K av, 0K used, 514068K free 158980K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 1657 postgres 20 0 6820 6820 5712 R 33.4 1.3 0:12 postmaster\n 1671 postgres 20 0 6576 6576 5468 R 33.4 1.2 0:08 postmaster\n 1650 postgres 19 0 6952 6952 5848 R 32.6 1.3 0:16 postmaster\n 1444 postgres 0 0 1044 1044 844 R 0.5 0.2 0:02 top\n 1 root 0 0 540 540 476 S 0.0 0.1 0:06 init\n 2 root 0 0 0 0 0 SW 0.0 0.0 0:02 kflushd\n 3 root 0 0 0 0 0 SW 0.0 0.0 0:04 kupdate\n 4 root 0 0 0 0 0 SW 0.0 0.0 0:00 kpiod\n 5 root 0 0 0 0 0 SW 0.0 0.0 0:00 kswapd\n 6 root -20 -20 0 0 0 SW< 0.0 0.0 0:00 mdrecoveryd\n 373 root 0 0 836 836 700 S 0.0 0.1 0:00 syslogd\n 383 root 0 0 852 852 472 S 0.0 0.1 0:00 klogd\n 398 rpc 0 0 580 580 492 S 0.0 0.1 0:00 portmap\n 414 root 0 0 0 0 0 SW 0.0 0.0 0:00 lockd\n 415 root 0 0 0 0 0 SW 0.0 0.0 0:00 rpciod\n 425 rpcuser 0 0 832 832 720 S 0.0 0.1 0:00 rpc.statd\n 477 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n 484 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n 485 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n 488 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n 489 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n 496 daemon 0 0 580 580 504 S 0.0 0.1 0:00 atd\n 511 root 0 0 1040 1040 828 S 0.0 0.2 0:00 xinetd\n 520 root 0 0 1196 1196 1060 S 0.0 0.2 0:00 sshd\n 533 root 0 0 2008 2008 1652 R 0.0 0.3 0:00 sshd\n 542 lp 0 0 1112 1112 948 S 0.0 0.2 0:00 lpd\n 579 root 0 0 512 512 448 S 0.0 0.0 0:00 gpm\n 594 root 0 0 720 720 616 S 0.0 0.1 0:00 crond\n 679 xfs 0 0 4496 4496 808 S 0.0 0.8 0:00 xfs\n 704 ryan 0 0 1328 1328 1036 S 0.0 0.2 0:00 bash\n 752 postgres 0 0 2492 2492 2360 S 0.0 0.4 0:01 postmaster\n 778 root 0 0 444 444 380 S 
0.0 0.0 0:00 mingetty\n 779 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n 780 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n 781 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n 782 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n 783 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n 1279 root 0 0 1060 1060 856 S 0.0 0.2 0:00 su\n 1280 root 0 0 1388 1388 1076 S 0.0 0.2 0:00 bash\n 1369 root 0 0 1020 1020 828 S 0.0 0.1 0:00 su\n 1370 postgres 0 0 1308 1308 1028 S 0.0 0.2 0:00 bash\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Thu, 03 May 2001 21:27:57 +0100", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: Extrordinarily Poor Performance...." }, { "msg_contents": "oh btw, i completely forgot to mention the minor fixes to the linux init\nscripts i mentioned earlier (about 2 weeks ago) for things that perhaps\nshould be in the 7.1.1 release. (someone sent out a mail that they were\nbranching 7.1.1)\n\nAlso i never got a response on who actually packages those linux init\nscripts that appear in the RPM but not on the pgsql cvs tree. (i am also\ncurious on why it is different, and how the RPM is built).\n\n-rchit\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Thursday, May 03, 2001 9:16 AM\nTo: PostgreSQL-development\nSubject: [HACKERS] Packaging 7.1.1\n\n\nI am starting to package 7.1.1, and I see I did not brand 7.1 properly. \nI forgot the date in the HISTORY file, and didn't update register.txt. \nI will do all those now for 7.1.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Thu, 3 May 2001 16:15:39 -0700 ", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": false, "msg_subject": "RE: Packaging 7.1.1" }, { "msg_contents": "\nNot sure on their status. Are they listed on the outstanding patches\npage at the bottom of the developers page? Probably too late for 7.1.1\nnow.\n\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> oh btw, i completely forgot to mention the minor fixes to the linux init\n> scripts i mentioned earlier (about 2 weeks ago) for things that perhaps\n> should be in the 7.1.1 release. (someone sent out a mail that they were\n> branching 7.1.1)\n> \n> Also i never got a response on who actually packages those linux init\n> scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> curious on why it is different, and how the RPM is built).\n> \n> -rchit\n> \n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Thursday, May 03, 2001 9:16 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Packaging 7.1.1\n> \n> \n> I am starting to package 7.1.1, and I see I did not brand 7.1 properly. \n> I forgot the date in the HISTORY file, and didn't update register.txt. \n> I will do all those now for 7.1.1.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
 | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 19:18:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n\n> Also i never got a response on who actually packages those linux init\n> scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> curious on why it is different, and how the RPM is built).\n\nLamar Owen and I.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "03 May 2001 20:16:07 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Rachit Siamwalla wrote:\n> oh btw, i completely forgot to mention the minor fixes to the linux init\n> scripts i mentioned earlier (about 2 weeks ago) for things that perhaps\n> should be in the 7.1.1 release. (someone sent out a mail that they were\n> branching 7.1.1)\n\n> Also i never got a response on who actually packages those linux init\n> scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> curious on why it is different, and how the RPM is built).\n\nThat would be me. Before building and releasing 7.1.1 RPMs I will be\nreviewing the various bugs and changes planned for the 7.1.1 RPM.\n\nAs to why the RPM init script is different from the one packaged in the\nmain source tree -- I can make assumptions in the RPM set that the\nversion in the source tree cannot.\n\nAs to how the RPMs are built -- to answer that question sanely requires\nme to know how much experience you have with the whole RPM paradigm. \n'How is the RPM built?' is a multifaceted question. 
The obvious simple\nanswer is that I maintain:\n\t1.)\tA set of patches to make certain portions of the source\n\t\ttree 'behave' in the different environment of the RPMset;\n\t2.)\tThe initscript;\n\t3.)\tAny other ancilliary scripts and files;\n\t4.)\tA README.rpm-dist document that tries to adequately document\n\t\tboth the differences between the RPM build and the WHY of the\n\t\tdifferences, as well as useful RPM environment operations\n\t\t(like, using syslog, upgrading, getting postmaster to\n\t\tstart at OS boot, etc);\n\t5.)\tThe spec file that throws it all together. This is not a \n\t\ttrivial undertaking in a package of this size.\n\nI then download and build on as many different canonical distributions\nas I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on\nmy personal hardware. Occasionally I receive opportunity from certain\ncommercial enterprises such as Great Bridge and PostgreSQL Inc to build\non other distributions. \n\nI test the build by installing the resulting packages and running the\nregression tests. Once the build passes these tests, I upload to the\npostgresql.org ftp server and make a release announcement. I am also\nresponsible for maintaining the RPM download area on the ftp site.\n\nYou'll notice I said 'canonical' distributions above. That simply means\nthat the machine is as stock 'out of the box' as practical -- that is,\neverything (except select few programs) on these boxen are installed by\nRPM; only official Red Hat released RPMs are used (except in unusual\ncircumstances involving software that will not alter the build -- for\nexample, installing a newer non-RedHat version of the Dia diagramming\npackage is OK -- installing Python 2.1 on the box that has Python 1.5.2\ninstalled is not, as that alters the PostgreSQL build). The RPM as\nuploaded is built to as close to out-of-the-box pristine as is\npossible. 
Only the standard released 'official to that release'\ncompiler is used -- and only the standard official kernel is used as\nwell.\n\nFor a time I built on Mandrake for RedHat consumption -- no more. \nNonstandard RPM building systems are worse than useless. Which is not\nto say that Mandrake is useless! By no means is Mandrake useless --\nunless you are building Red Hat RPMs -- and Red Hat is useless if you're\ntrying to build Mandrake or SuSE RPMs, for that matter. But I would be\nfoolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to\nbuild for public consumption! :-)\n\nI _do_ attempt to make the _source_ RPM compatible with as many\ndistributions as possible -- however, since I have limited resources (as\na volunteer RPM maintainer) I am limited as to the amount of testing\nsaid build will get on other distributions, architectures, or systems. \n\nAnd, while I understand people's desire to immediately upgrade to the\nnewest version, realize that I do this as a side interest -- I have a\nregular, full-time job as a broadcast\nengineer/webmaster/sysadmin/Technical Director which occasionally\nprevents me from making timely RPM releases. This happened during the\nearly part of the 7.1 beta cycle -- but I believe I was pretty much on\nthe ball for the Release Candidates and the final release.\n\nI am working towards a more open RPM distribution -- I would dearly love\nto more fully document the process and put everything into CVS -- once I\nfigure out how I want to represent things such as the spec file in a CVS\nform. It makes no sense to maintain a changelog, for instance, in the\nspec file in CVS when CVS does a better job of changelogs -- I will need\nto write a tool to generate a real spec file from a CVS spec-source file\nthat would add version numbers, changelog entries, etc to the result\nbefore building the RPM. 
IOW, I need to rethink the process -- and then\ngo through the motions of putting my long RPM history into CVS one\nversion at a time so that version history information isn't lost.\n\nAs to why all these files aren't part of the source tree, well, unless\nthere was a large cry for it to happen, I don't believe it should. \nPostgreSQL is very platform-agnostic -- and I like that. Including the\nRPM stuff as part of the Official Tarball (TM) would, IMHO, slant that\nagnostic stance in a negative way. But maybe I'm too sensitive to\nthat. I'm not opposed to doing that if that is the consensus of the\ncore group -- and that would be a sneaky way to get the stuff into CVS\n:-). But if the core group isn't thrilled with the idea (and my\ninstinct says they're not likely to be), I am opposed to the idea -- not\nto keep the stuff to myself, but to not hinder the platform-neutral\nstance. IMHO, of course. \n\nOf course, there are many projects that DO include all the files\nnecessary to build RPMs from their Official Tarball (TM).\n\nBruce, should portions of that answer be part of the linux FAQ? I don't\nwant to have to write that too many times :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 03 May 2001 21:22:05 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Trond Eivind Glomsrød wrote:\n> \n> Rachit Siamwalla <rachit@ensim.com> writes:\n> \n> > Also i never got a response on who actually packages those linux init\n> > scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> > curious on why it is different, and how the RPM is built).\n> \n> Lamar Owen and I.\n\nEgads! I forgot to mention Trond! My apologies! (I'm being serious...)\n\nTrond, of Red Hat; Reinhard Max, of SuSE; and Thomas Lockhart, of\nPostgreSQL Inc (:-)) have all been major contributors to the RPM\ndistribution. 
Karl DeBisschop, Mike Mascari, and many others have\nprovided fixes and ideas as well.\n\nSorry guys -- I got caught up in the process and forgot the people! :-(\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 03 May 2001 21:26:11 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Thanks to input Bruce M., figured out my performance problems - had to do \nwith a few QUERIES FROM HELL! After running EXPLAIN a few times I fine \ntuned some of the worst ones, mostly over use of sub queries. Still \ncombing through my query log.\n\nGetting there...\n\n-r\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01", "msg_date": "Fri, 04 May 2001 02:33:43 +0100", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Extrordinarily Poor Performance.... RESOLUTION" }, { "msg_contents": "Ryan Mahoney wrote:\n> Any input would be helpful! If you need additional info let me know.\n \n> BTW, Thanks to Lamar for some great tips today!\n\nYou're more than welcome.\n\nI forgot a basic tip, which leads to a question:\nHow often are you running VACUUM ANALYZE?\n\nIf this were PostgreSQL 7.0.3, we could ask Alfred about his lazy vacuum\npatches, as they work as well for Red Hat 7 as they do for FreeBSD.\n\nPersonally, I look forward to the following note being placed into the\ndocs:\nVACUUM: deprecated. And the feature that makes that note possible.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 03 May 2001 21:35:10 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Extrordinarily Poor Performance...." 
}, { "msg_contents": "\nI hope you have those postmasters listening on different ports.\n\n> Here is some output from top...\n> \n> 9:20pm up 40 min, 1 user, load average: 3.77, 3.12, 3.74\n> 41 processes: 36 sleeping, 5 running, 0 zombie, 0 stopped\n> CPU states: 99.2% user, 0.7% system, 0.0% nice, 0.0% idle\n> Mem: 515664K av, 303712K used, 211952K free, 37476K shrd, 39552K buff\n> Swap: 514068K av, 0K used, 514068K free 158980K cached\n> \n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n> 1657 postgres 20 0 6820 6820 5712 R 33.4 1.3 0:12 postmaster\n> 1671 postgres 20 0 6576 6576 5468 R 33.4 1.2 0:08 postmaster\n> 1650 postgres 19 0 6952 6952 5848 R 32.6 1.3 0:16 postmaster\n> 1444 postgres 0 0 1044 1044 844 R 0.5 0.2 0:02 top\n> 1 root 0 0 540 540 476 S 0.0 0.1 0:06 init\n> 2 root 0 0 0 0 0 SW 0.0 0.0 0:02 kflushd\n> 3 root 0 0 0 0 0 SW 0.0 0.0 0:04 kupdate\n> 4 root 0 0 0 0 0 SW 0.0 0.0 0:00 kpiod\n> 5 root 0 0 0 0 0 SW 0.0 0.0 0:00 kswapd\n> 6 root -20 -20 0 0 0 SW< 0.0 0.0 0:00 mdrecoveryd\n> 373 root 0 0 836 836 700 S 0.0 0.1 0:00 syslogd\n> 383 root 0 0 852 852 472 S 0.0 0.1 0:00 klogd\n> 398 rpc 0 0 580 580 492 S 0.0 0.1 0:00 portmap\n> 414 root 0 0 0 0 0 SW 0.0 0.0 0:00 lockd\n> 415 root 0 0 0 0 0 SW 0.0 0.0 0:00 rpciod\n> 425 rpcuser 0 0 832 832 720 S 0.0 0.1 0:00 rpc.statd\n> 477 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n> 484 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n> 485 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n> 488 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n> 489 nobody 0 0 720 720 612 S 0.0 0.1 0:00 identd\n> 496 daemon 0 0 580 580 504 S 0.0 0.1 0:00 atd\n> 511 root 0 0 1040 1040 828 S 0.0 0.2 0:00 xinetd\n> 520 root 0 0 1196 1196 1060 S 0.0 0.2 0:00 sshd\n> 533 root 0 0 2008 2008 1652 R 0.0 0.3 0:00 sshd\n> 542 lp 0 0 1112 1112 948 S 0.0 0.2 0:00 lpd\n> 579 root 0 0 512 512 448 S 0.0 0.0 0:00 gpm\n> 594 root 0 0 720 720 616 S 0.0 0.1 0:00 crond\n> 679 xfs 0 0 4496 4496 808 S 0.0 0.8 0:00 xfs\n> 704 ryan 0 0 1328 
1328 1036 S 0.0 0.2 0:00 bash\n> 752 postgres 0 0 2492 2492 2360 S 0.0 0.4 0:01 postmaster\n> 778 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 779 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 780 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 781 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 782 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 783 root 0 0 444 444 380 S 0.0 0.0 0:00 mingetty\n> 1279 root 0 0 1060 1060 856 S 0.0 0.2 0:00 su\n> 1280 root 0 0 1388 1388 1076 S 0.0 0.2 0:00 bash\n> 1369 root 0 0 1020 1020 828 S 0.0 0.1 0:00 su\n> 1370 postgres 0 0 1308 1308 1028 S 0.0 0.2 0:00 bash\n\n> \n> ---\n> Outgoing mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 May 2001 21:45:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extrordinarily Poor Performance...." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> Ryan Mahoney wrote:\n> > Any input would be helpful! 
If you need additional info let me know.\n> \n> > BTW, Thanks to Lamar for some great tips today!\n> \n> You're more than welcome.\n> \n> I forgot a basic tip, which leads to a question:\n> How often are you running VACUUM ANALYZE?\n> \n> If this were PostgreSQL 7.0.3, we could ask Alfred about his lazy vacuum\n> patches, as they work as well for Red Hat 7 as they do for FreeBSD.\n\nPostgresql 7.0.3 from Red Hat Linux 7.1 should work just fine on Red\nHat Linux 7.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "03 May 2001 22:38:08 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Extrordinarily Poor Performance...." }, { "msg_contents": "Thus spake Ryan Mahoney\n> !! I haven't ran VACUUM ANALYZE since last night. Just ran it - \n> performance has improved significantly. I think I am going to have to run \n> it hourly during this high traffic time. Postmasters are still utilizing \n> about 100% of the CPU. Is this normal? I am considering increasing the \n> shmmax again.\n\nAlthough it isn't supposed to be necessary, I find that I have to dump and\nreload once in a while to keep performance hight.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 4 May 2001 08:10:28 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: Re: Extrordinarily Poor Performance...." }, { "msg_contents": "Trond Eivind Glomsrød wrote:\n> \n> Rachit Siamwalla <rachit@ensim.com> writes:\n> \n> > Also i never got a response on who actually packages those linux init\n> > scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> > curious on why it is different, and how the RPM is built).\n> \n> Lamar Owen and I.\n\nIs the current snapshot available? 
I have submitted fixes twice now for what I am fairly sure is a bug in the init script. At least one of the posts was the shortly after lamar posted the RC3 RPM. Yet the bug remained.\n\nThis is not a complaint -- you guys have put alot of effort into the RPMs and they are very solid IMHO. But I would like the chance to look at the RPM as it stands sometime before 7.1, as I have to customize the RPM yet again to distribute a working init script to our servers.\n\nHave you thought about a CVS store some place for the RPM files? \n\n-- \nKarl\n", "msg_date": "Fri, 04 May 2001 09:11:06 -0400", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> As to why all these files aren't part of the source tree, well, unless\n> there was a large cry for it to happen, I don't believe it should. \n> PostgreSQL is very platform-agnostic -- and I like that. Including the\n> RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that\n> agnostic stance in a negative way.\n\nSeems like that stuff should be in CVS somewhere ... if only so someone\nelse can pick up the ball if you get run over by a truck :-(.\n\nIf it's just a small amount of code, I don't see what the harm would be\nin including it in the regular distro, though we should talk about just\nwhere it should go. If it's a large amount of code then perhaps a\nseparate CVS project would be better, so that people who have no use for\nit don't end up pulling/downloading it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 10:36:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1 " }, { "msg_contents": "Tom Lane wrote:\n> Seems like that stuff should be in CVS somewhere ... if only so someone\n> else can pick up the ball if you get run over by a truck :-(.\n\nMy wife appreciates the sentiment :-). 
As it stands now, better\ndocumentation distributed in the source RPM would help greatly. \nEverything necessary to do the build and maintain the package is in the\nsource RPM as it stands now -- evidenced by the Linux distributors being\nable to take our source RPM, massage it to fit their particular system,\nand run with it. And I have a scad of history available in specfile\nform....\n\n> If it's just a small amount of code, I don't see what the harm would be\n> in including it in the regular distro, though we should talk about just\n> where it should go. If it's a large amount of code then perhaps a\n> separate CVS project would be better, so that people who have no use for\n> it don't end up pulling/downloading it.\n\nNot counting the JDBC jars, it's a hundred K or so uncompressed. The\nspec file is around 30k -- a small amount of code. \n\ncontrib/rpm-dist?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 04 May 2001 10:57:24 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> contrib/rpm-dist?\n\nContrib was my first thought also --- but on second thought, the RPM\npackaging support is hardly contrib-grade material. For a large\nproportion of our users it's a critical part of the distribution.\nSo, if we are going to have it in the CVS tree at all, I'd vote for\nputting it in the main tree.\n\nPerhaps src/rpm-tools/ or some such name.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 11:13:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1 " }, { "msg_contents": "Tom Lane wrote:\n> Contrib was my first thought also --- but on second thought, the RPM\n> packaging support is hardly contrib-grade material. 
For a large\n> proportion of our users it's a critical part of the distribution.\n> So, if we are going to have it in the CVS tree at all, I'd vote for\n> putting it in the main tree.\n\n> Perhaps src/rpm-tools/ or some such name.\n\nLet's see where the rest of core and hackers weighs in....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 04 May 2001 11:19:12 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Lamar Owen wrote:\n> \n> Tom Lane wrote:\n> > Seems like that stuff should be in CVS somewhere ... if only so someone\n> > else can pick up the ball if you get run over by a truck :-(.\n> \n> My wife appreciates the sentiment :-). As it stands now, better\n> documentation distributed in the source RPM would help greatly.\n> Everything necessary to do the build and maintain the package is in the\n> source RPM as it stands now -- evidenced by the Linux distributors being\n> able to take our source RPM, massage it to fit their particular system,\n> and run with it. And I have a scad of history available in specfile\n> form....\n> \n> > If it's just a small amount of code, I don't see what the harm would be\n> > in including it in the regular distro, though we should talk about just\n> > where it should go. If it's a large amount of code then perhaps a\n> > separate CVS project would be better, so that people who have no use for\n> > it don't end up pulling/downloading it.\n> \n> Not counting the JDBC jars, it's a hundred K or so uncompressed. The\n> spec file is around 30k -- a small amount of code.\n> \n> contrib/rpm-dist?\n\nSeems to work. But I would prefer to look at how other packaging schemes\nwork and come up with something that might be consistent and useful\nacross the board.\n\nFor starters, I'd make contrib/package/\n\nThen make an rpm subdirectory. Also a pkg directory for systems that use\npkgmk/pkginfo/pkgadd/pkgrm. 
If there's a way to make Debian packages play\nthe game, put them in as well. Then, if someone is packaging for a\nvariety of systems, there is at least the possibility of some small\namount of consistency.\n\nExtending things, you could have contrib/package/rpm/redhat for\nredhat-specific stuff. contrib/package/rpm/mandrake for Mandrake stuff.\nYou get the idea.\n\nAt that point, I could even imagine a contrib/mkpackage script that did some\nOS detection, and built whatever you wanted. That may be a little far\noff, but I think there is an important nugget in here. Tarballs are\ngreat for developers, but they are not that great for system\nadministrators with large installed bases. PostgreSQL builds are great\nfor the portability. The next logical step might in fact be to extend\nsome of that consistency to the package creation arena.\n\n-- \nKarl\n", "msg_date": "Fri, 04 May 2001 11:28:09 -0400", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Lamar Owen writes:\n\n> contrib/rpm-dist?\n\nA separate CVS module sounds like a better idea to me.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 4 May 2001 18:14:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > contrib/rpm-dist?\n> \n> Contrib was my first thought also --- but on second thought, the RPM\n> packaging support is hardly contrib-grade material. 
For a large\n> proportion of our users it's a critical part of the distribution.\n> So, if we are going to have it in the CVS tree at all, I'd vote for\n> putting it in the main tree.\n> \n> Perhaps src/rpm-tools/ or some such name.\n\nIt is platform-specific, which would seem to vote for /contrib.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 12:57:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Perhaps src/rpm-tools/ or some such name.\n\n> It is platform-specific, which would seem to vote for /contrib.\n\nHuh? By that logic, all of src/makefiles/, src/template/, and\nsrc/backend/port/, not to mention large chunks of the configure\nmechanism, belong in contrib. Shall we rip out all BSD support\nand move it to contrib?\n\ncontrib has never been about platform dependency in my mind; it's about\nwhether we consider something part of the project mainstream (in terms\nof code quality and our willingness to support it). RPM support isn't\ngoing away, and I'm willing to call it mainstream ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 13:17:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1 " }, { "msg_contents": "For various definitions of \"Platform\". Linux runs on a NUMBER of \nhardware platforms, and RPM is used by a LOT of LINUX distributions. 
\n\nI'd vote for src/rpm-tools/ if I had a vote.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler/\nPhone: +1 972 414 9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 US\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 5/4/01, 11:57:05 AM, Bruce Momjian <pgman@candle.pha.pa.us> wrote \nregarding Re: [HACKERS] Packaging 7.1.1:\n\n\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > contrib/rpm-dist?\n> >\n> > Contrib was my first thought also --- but on second thought, the RPM\n> > packaging support is hardly contrib-grade material. For a large\n> > proportion of our users it's a critical part of the distribution.\n> > So, if we are going to have it in the CVS tree at all, I'd vote for\n> > putting it in the main tree.\n> >\n> > Perhaps src/rpm-tools/ or some such name.\n\n> It is platform-specific, which would seem to vote for /contrib.\n\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n\n> http://www.postgresql.org/search.mpl\n", "msg_date": "Fri, 04 May 2001 17:51:40 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Karl DeBisschop writes:\n\n> PostgreSQL builds are great for the portability. The next logical step\n> might in fact be to extend some of that consistency to the package\n> creation arena.\n\nThis would have been cool in 1996. We would have evolved a large number\nof different packages along with the build system. 
But it didn't happen\nthis way and now most packages are sufficiently contorted in a number of\nways because of vendor requirements, different ideas of how an operating\nsystem is supposed to work, self-inflicted incompatibilities, and a number\nof other reasons, including not least importantly the desire to have\ncontrol over what ships in your system. All valid reasons, of course.\n\nIf we can work at, and succeed at, resolving most of these oddities, then\ntracking packages in the source tree might prove worthwhile. But as long\nas we're still required to keep track of what vendor has 'chkconfig' or what\nversion of what distribution has broken CFLAGS, to list some trivial\nthings, as long as the packages need to track anything but the development\nof PostgreSQL itself, this undertaking is going to become a problem.\n\nWhat would be worthwhile is setting up another cvs module so packages can\nbe developed and released at their own pace.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 4 May 2001 20:49:34 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Karl DeBisschop wrote:\n> \n> Trond Eivind Glomsrød wrote:\n> >\n> > Rachit Siamwalla <rachit@ensim.com> writes:\n> >\n> > > Also i never got a response on who actually packages those linux init\n> > > scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> > > curious on why it is different, and how the RPM is built).\n> >\n> > Lamar Owen and I.\n> \n> Is the current snapshot available? \n\nThe current snapshot is the 7.1-1 release as of this time.\n\n>I have submitted fixes twice now for what I am fairly sure is a bug in the init script. At least one of the posts was shortly after Lamar posted the RC3 RPM. Yet the bug remained.\n\nI thought I integrated that one, but I must not have. 
My apologies.\n \n> This is not a complaint -- you guys have put a lot of effort into the RPMs and they are very solid IMHO. But I would like the chance to look at the RPM as it stands sometime before 7.1, as I have to customize the RPM yet again to distribute a working init script to our servers.\n\nMail me the initscript as fixed. Put a [HACKERS] in the subject so it\ngoes to the right folder. The extant 7.1-1 RPMset is the last build I\nhave made.\n \n> Have you thought about a CVS store some place for the RPM files?\n\nYes. Discussion currently underway in HACKERS.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 04 May 2001 15:46:06 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Peter Eisentraut wrote:\n> What would be worthwhile is setting up another cvs module so packages can\n> be developed and released at their own pace.\n\nThis is an _excellent_ point, and one I had thought of before but had\nforgotten.\n\nFWIW, I have a project set up at greatbridge.org -- I just have to get\nmyself in gear and get it done.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 04 May 2001 15:49:58 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Karl DeBisschop writes:\n> \n> > PostgreSQL builds are great for the portability. The next logical step\n> > might in fact be to extend some of that consistency to the package\n> > creation arena.\n> \n> This would have been cool in 1996. We would have evolved a large number\n> of different packages along with the build system. 
But it didn't happen\n> this way and now most packages are sufficiently contorted in a number of\n> ways because of vendor requirements, different ideas of how an operating\n> system is supposed to work, self-inflicted incompatibilities, and a number\n> of other reasons, including not least importantly the desire to have\n> control over what ships in your system. All valid reasons, of course.\n> \n> If we can work at, and succeed at, resolving most of these oddities, then\n> tracking packages in the source tree might prove worthwhile. But as long\n> as we're still required to keep track of what vendor has 'chkconfig' or what\n> version of what distribution has broken CFLAGS, to list some trivial\n> things, as long as the packages need to track anything but the development\n> of PostgreSQL itself, this undertaking is going to become a problem.\n> \n> What would be worthwhile is setting up another cvs module so packages can\n> be developed and released at their own pace.\n\nI think on the biggest point we agree. Working with packagers and making\nthat job easier and more consistent is a good thing (so long as it does\nnot interfere with development on postgresql itself, of course).\n\nOn the projects I am involved with, however, my experience of what works\nhas been contrary to the tactics you suggest for reaching that goal. I\nfound it easiest to develop in close concert with packagers, and in my\ncase that meant hosting the various packaging scripts within the source\ntree. Of course that was for smaller projects with much less legacy than\npostgresql, so maybe it doesn't apply here.\n\nI still think it would be cool to download just the tarball from the\nsite and have a little 'mkpackage' script that I run on solaris to get\nthe canonical solaris packages, on Red Hat to get the canonical Red\nHat rpms, on FreeBSD to get the canonical port, etc. 
Maybe a ways off,\nbut an appealing end goal to me.\n\nIt would be even better if by unifying support of the packaging process,\nthe differences between installs would be limited to the requirements of\neach OS, and not be dictated by the personal whims of the packager. I\nknow Lamar and Oliver keep in close contact so their packages don't get\ntoo idiosyncratic. I'm advocating any process that helps extend that\nspirit across the board.\n\n-- \nKarl\n", "msg_date": "Sat, 05 May 2001 00:59:59 -0400", "msg_from": "Karl DeBisschop <karl@debisschop.net>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "> Of course, there are many projects that DO include all the files\n> necessary to build RPMs from their Official Tarball (TM).\n> \n> Bruce, should portions of that answer be part of the linux FAQ? I don't\n> want to have to write that too many times :-).\n\nI just had time to read that myself. Not sure about the Linux FAQ, but\nit seems the file should be linked to from the Linux FAQ so people can\nread this when needed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 14:52:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "\nI have added this to the developer's FAQ.\n\n---------------------------------------------------------------------------\n\n> Rachit Siamwalla wrote:\n> > oh btw, i completely forgot to mention the minor fixes to the linux init\n> > scripts i mentioned earlier (about 2 weeks ago) for things that perhaps\n> > should be in the 7.1.1 release. 
(someone sent out a mail that they were\n> > branching 7.1.1)\n> \n> > Also i never got a response on who actually packages those linux init\n> > scripts that appear in the RPM but not on the pgsql cvs tree. (i am also\n> > curious on why it is different, and how the RPM is built).\n> \n> That would be me. Before building and releasing 7.1.1 RPMs I will be\n> reviewing the various bugs and changes planned for the 7.1.1 RPM.\n> \n> As to why the RPM init script is different from the one packaged in the\n> main source tree -- I can make assumptions in the RPM set that the\n> version in the source tree cannot.\n> \n> As to how the RPMs are built -- to answer that question sanely requires\n> me to know how much experience you have with the whole RPM paradigm. \n> 'How is the RPM built?' is a multifaceted question. The obvious simple\n> answer is that I maintain:\n> \t1.)\tA set of patches to make certain portions of the source\n> \t\ttree 'behave' in the different environment of the RPMset;\n> \t2.)\tThe initscript;\n> \t3.)\tAny other ancillary scripts and files;\n> \t4.)\tA README.rpm-dist document that tries to adequately document\n> \t\tboth the differences between the RPM build and the WHY of the\n> \t\tdifferences, as well as useful RPM environment operations\n> \t\t(like, using syslog, upgrading, getting postmaster to\n> \t\tstart at OS boot, etc);\n> \t5.)\tThe spec file that throws it all together. This is not a \n> \t\ttrivial undertaking in a package of this size.\n> \n> I then download and build on as many different canonical distributions\n> as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on\n> my personal hardware. Occasionally I receive opportunity from certain\n> commercial enterprises such as Great Bridge and PostgreSQL Inc to build\n> on other distributions. \n> \n> I test the build by installing the resulting packages and running the\n> regression tests. 
Once the build passes these tests, I upload to the\n> postgresql.org ftp server and make a release announcement. I am also\n> responsible for maintaining the RPM download area on the ftp site.\n> \n> You'll notice I said 'canonical' distributions above. That simply means\n> that the machine is as stock 'out of the box' as practical -- that is,\n> everything (except select few programs) on these boxen are installed by\n> RPM; only official Red Hat released RPMs are used (except in unusual\n> circumstances involving software that will not alter the build -- for\n> example, installing a newer non-RedHat version of the Dia diagramming\n> package is OK -- installing Python 2.1 on the box that has Python 1.5.2\n> installed is not, as that alters the PostgreSQL build). The RPM as\n> uploaded is built to as close to out-of-the-box pristine as is\n> possible. Only the standard released 'official to that release'\n> compiler is used -- and only the standard official kernel is used as\n> well.\n> \n> For a time I built on Mandrake for RedHat consumption -- no more. \n> Nonstandard RPM building systems are worse than useless. Which is not\n> to say that Mandrake is useless! By no means is Mandrake useless --\n> unless you are building Red Hat RPMs -- and Red Hat is useless if you're\n> trying to build Mandrake or SuSE RPMs, for that matter. But I would be\n> foolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to\n> build for public consumption! :-)\n> \n> I _do_ attempt to make the _source_ RPM compatible with as many\n> distributions as possible -- however, since I have limited resources (as\n> a volunteer RPM maintainer) I am limited as to the amount of testing\n> said build will get on other distributions, architectures, or systems. 
\n> \n> And, while I understand people's desire to immediately upgrade to the\n> newest version, realize that I do this as a side interest -- I have a\n> regular, full-time job as a broadcast\n> engineer/webmaster/sysadmin/Technical Director which occasionally\n> prevents me from making timely RPM releases. This happened during the\n> early part of the 7.1 beta cycle -- but I believe I was pretty much on\n> the ball for the Release Candidates and the final release.\n> \n> I am working towards a more open RPM distribution -- I would dearly love\n> to more fully document the process and put everything into CVS -- once I\n> figure out how I want to represent things such as the spec file in a CVS\n> form. It makes no sense to maintain a changelog, for instance, in the\n> spec file in CVS when CVS does a better job of changelogs -- I will need\n> to write a tool to generate a real spec file from a CVS spec-source file\n> that would add version numbers, changelog entries, etc to the result\n> before building the RPM. IOW, I need to rethink the process -- and then\n> go through the motions of putting my long RPM history into CVS one\n> version at a time so that version history information isn't lost.\n> \n> As to why all these files aren't part of the source tree, well, unless\n> there was a large cry for it to happen, I don't believe it should. \n> PostgreSQL is very platform-agnostic -- and I like that. Including the\n> RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that\n> agnostic stance in a negative way. But maybe I'm too sensitive to\n> that. I'm not opposed to doing that if that is the consensus of the\n> core group -- and that would be a sneaky way to get the stuff into CVS\n> :-). But if the core group isn't thrilled with the idea (and my\n> instinct says they're not likely to be), I am opposed to the idea -- not\n> to keep the stuff to myself, but to not hinder the platform-neutral\n> stance. IMHO, of course. 
\n> \n> Of course, there are many projects that DO include all the files\n> necessary to build RPMs from their Official Tarball (TM).\n> \n> Bruce, should portions of that answer be part of the linux FAQ? I don't\n> want to have to write that too many times :-).\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 13:27:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "On Tuesday 27 November 2001 01:27 pm, Bruce Momjian wrote:\n> I have added this to the developer's FAQ.\n\n> > I then download and build on as many different canonical distributions\n> > as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on\n> > my personal hardware. Occasionally I receive opportunity from certain\n> > commercial enterprises such as Great Bridge and PostgreSQL Inc to build\n> > on other distributions.\n\nHmmm. Bruce, would it be possible to put a date on that entry, as that \nanswer has some fairly old information -- old as in last cycle. I currently \nam only able to build and test on Red Hat 7.2 -- and that is subject to \nchange as time goes on. Maybe, in the FAQ, where you say 'Written by Lamar \nOwen' you could expound that a little by adding 'on May 4 2001' (or whenever \nI actually wrote it)....\n\nBTW: I believe those new sections are very nice, even if I did write two of \nthem.... 
:-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 27 Nov 2001 19:26:21 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Developers FAQ (was:Re: Packaging 7.1.1)" }, { "msg_contents": "> On Tuesday 27 November 2001 01:27 pm, Bruce Momjian wrote:\n> > I have added this to the developer's FAQ.\n> \n> > > I then download and build on as many different canonical distributions\n> > > as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on\n> > > my personal hardware. Occasionally I receive opportunity from certain\n> > > commercial enterprises such as Great Bridge and PostgreSQL Inc to build\n> > > on other distributions.\n> \n> Hmmm. Bruce, would it be possible to put a date on that entry, as that \n> answer has some fairly old information -- old as in last cycle. I currently \n> am only able to build and test on Red Hat 7.2 -- and that is subject to \n> change as time goes on. Maybe, in the FAQ, where you say 'Written by Lamar \n> Owen' you could expound that a little by adding 'on May 4 2001' (or whenever \n> I actually wrote it)....\n> \n> BTW: I believe those new sections are very nice, even if I did write two of \n> them.... :-)\n\nOK, done. Dates are a good idea for these entries. I put the date in\nstandard Unix format at the top of each message.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 19:32:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Developers FAQ (was:Re: Packaging 7.1.1)" } ]
[ { "msg_contents": "\nI've hacked together a metaphone function from an existing metaphone\nimplementation and the example provided by the soundex() function in\ncontrib.\n\nI'd like to send this out to the general list, in the hopes that people\nwill find it useful, but I wanted to wave it in front of the -hackers and\n-patches people first, just to make sure that I haven't done anything\ndreadfully, awfully terrible. :-)\n\nI'm not an expert C coder, and this is really two programs, neither of\nwhich I wrote, glued together, with some mild editing by me. There are no\nmalloc() calls (no dynamic memory at all), nor anything terribly strange,\nbut this is my first C function I'm turning loose on the world.\n\nIf you have a chance, I'd appreciate any feedback/pointers.\n\nIf it looks good / If I don't hear otherwise, I'll send it out to\npgsql-announce and pgsql-general early next week.\n\nThanks!\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Thu, 3 May 2001 16:15:34 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": true, "msg_subject": "Metaphone function" } ]
[ { "msg_contents": "-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington", "msg_date": "Thu, 3 May 2001 16:16:49 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": true, "msg_subject": "Metaphone function attachment" }, { "msg_contents": "For those curious about what this actually does, README attached. It\nwill appear in 7.2. Seems similar to Soundex.\n\n> \n> \n> -- \n> Joel Burton <jburton@scw.org>\n> Director of Information Systems, Support Center of Washington\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nThis directory contains a module that implements the \"Metaphone\" code as\na PostgreSQL user-defined function. The Metaphone system is a method of\nmatching similar sounding names (or any words) to the same code. \n\nMetaphone was invented by Lawrence Philips as an improvement to the popular\nname-hashing routine, Soundex.\n\nThis metaphone code is from Michael Kuhn, and is detailed at\n http://aspell.sourceforge.net/metaphone/metaphone-kuhn.txt\n\nCode for this (including this help file!) was liberally borrowed from\nthe soundex() module for PostgreSQL.\n\nThere are two functions:\n metaphone(text) : returns hash of a name\n metaphone(text,int) : returns hash (maximum length of int) of name\n\n---\n\nTo install it, first configure the main source tree, then run make;\nmake install in this directory. 
Finally, load the function definition\nwith psql:\n\n psql -f PREFIX/share/contrib/metaphone.sql\n\nThe following are some usage examples:\n\nSELECT text_metaphone('hello world!');\nSELECT text_metaphone('hello world!', 4);\n\nCREATE TABLE s (nm text)\\g\n\ninsert into s values ('john')\\g\ninsert into s values ('joan')\\g\ninsert into s values ('wobbly')\\g\n\nselect * from s\nwhere text_metaphone(nm) = text_metaphone('john')\\g\n\nselect nm from s a, s b\nwhere text_metaphone(a.nm) = text_metaphone(b.nm)\nand a.oid <> b.oid\\g\n\nCREATE FUNCTION text_mp_eq(text, text) RETURNS bool AS\n'select text_metaphone($1) = text_metaphone($2)'\nLANGUAGE 'sql'\\g\n\nCREATE FUNCTION text_mp_lt(text,text) RETURNS bool AS\n'select text_metaphone($1) < text_metaphone($2)'\nLANGUAGE 'sql'\\g\n\nCREATE FUNCTION text_mp_gt(text,text) RETURNS bool AS\n'select text_metaphone($1) > text_metaphone($2)'\nLANGUAGE 'sql';\n\nCREATE FUNCTION text_mp_le(text,text) RETURNS bool AS\n'select text_metaphone($1) <= text_metaphone($2)'\nLANGUAGE 'sql';\n\nCREATE FUNCTION text_mp_ge(text,text) RETURNS bool AS\n'select text_metaphone($1) >= text_metaphone($2)'\nLANGUAGE 'sql';\n\nCREATE FUNCTION text_mp_ne(text,text) RETURNS bool AS\n'select text_metaphone($1) <> text_metaphone($2)'\nLANGUAGE 'sql';\n\nDROP OPERATOR #= (text,text)\\g\n\nCREATE OPERATOR #= (leftarg=text, rightarg=text, procedure=text_mp_eq,\ncommutator=text_mp_eq)\\g\n\nSELECT *\nFROM s\nWHERE text_mp_eq(nm,'pillsbury')\\g\n\nSELECT *\nfrom s\nwhere s.nm #= 'pillsbury';", "msg_date": "Thu, 3 May 2001 17:40:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Metaphone function attachment" }, { "msg_contents": "Joel Burton wrote:\n> \n> --\n> Joel Burton <jburton@scw.org>\n> Director of Information Systems, Support Center of Washington\n> \n> -------------------------------------------------------------------------------\n> Name: contrib-metaphone.tgz\n> 
 contrib-metaphone.tgz Type: unspecified type (APPLICATION/octet-stream)\n> Encoding: BASE64\n> \n> -------------------------------------------------------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nI have written a similar function, and it is in a library I wrote \"pgcontains\"\nwhich started out as a simple implementation of \"contains(...)\" but has grown\nWAY beyond its original scope.\n\nAnyway, Metaphone is great for doing some cool searches, but now that we are on\nthe subject of cool search functions, I have a few that may be useful and would\nbe glad to contribute. My only question is how? (and I'd have to write them for\n7.1, because they are still in 7.0.x format)\n\n\ncontains(...)\nA simple implementation of contains. Forces a table scan, but does some cool\nthings including phrase detection.\n\ndecode(...)\nSimilar to \"case\" but works as decode for oracle queries. In 7.1 and higher it\nshould be easy to make one function take a variable number of parameters. Right\nnow I have a stub for the most common numbers.\n\nstrip(...)\nStrips out all but alphanumeric characters and returns a lowercase string.\n\"Oops I did it again\" comes back as \"oopsididitagain.\" This is cool for lightly\nfuzzy searches.\n\nstriprev(...)\nLike strip, but reverses the string. Allows you to use an index for records\nwhich end in something. For instance: \"select * from table where field like\n'abc%'\" can use an index, whereas \"select * from table where field like\n'%xyz'\" will not. 
However, \"select * from table where striprev(field) like\nstriprev('xyz') || '%'\" can.\n\nExample:\ncdinfo=# select title, striprev(title) from ztitles where striprev(title) like\nstriprev('wall') || '%' limit 3;\n title | striprev\n--------------------------------------------+------------------------------------\n A Giggle Can Wiggle Its Way Through A Wall |\nllawahguorhtyawstielggiwnacelggiga\n Shadows On A Wall * | llawanoswodahs\n The Wall | llaweht\n(3 rows)\n\ncdinfo=# explain select title, striprev(title) from ztitles where\nstriprev(title) like striprev('wall') || '%' limit\n3;\nNOTICE: QUERY PLAN:\n \nLimit (cost=0.00..10.21 rows=3 width=12)\n -> Index Scan using f1 on ztitles (cost=0.00..7579.94 rows=2227 width=12)\n \nEXPLAIN\n\nint64_t *strtonumu(text, int4 base)\nConverts a string to a number with an arbitrary base. (Is there a function to\ndo this already?)\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Fri, 04 May 2001 08:02:52 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Metaphone function attachment" }, { "msg_contents": "Why not start a new project at greatbridge.org?\n\nI'd be happy to see metaphone() move in there, soundex() would make\nsense. I have a hashing algorithm that grabs the first letter off of\nwords, except for user-definable 'stop words', which we use to look for\nlikely organization name matches.\n\nThese could all fall under a project of PG string functions.\n\nI think, as little things in contrib/, it's easy for people to miss\nthese. With a project page, some discussion, etc. (& a place in contrib/),\nmore people would be able to use these.\n\nPG functions are one of the things that separates PG from MySQL (which has\nonly C UDFs, and IIRC, not on some platforms) and InterBase (which has\nplsql-like procedures, but functions can only be written in C). 
I think\nour functions are one of our strongest cases, and the more we can show\npeople examples of how to use them, and the larger our useful library, the\nmore we win.\n\nP.S. What exactly does contains() do?\n\n", "msg_date": "Fri, 4 May 2001 12:36:18 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": true, "msg_subject": "Re: Metaphone function attachment" }, { "msg_contents": "Joel Burton writes:\n\n> I think, as little things in contrib/, it's easy for people to miss\n> these. With a project page, some discussion, etc. (& a place in contrib/),\n> more people would be able to use these.\n\nMost of the extension functions and types in contrib should, in my mind,\neventually be moved into the core. contrib is a nice place for things\nthat we don't really know how/whether they work, but once we're confident\nabout the quality we might as well offer it by default.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 4 May 2001 19:36:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Metaphone function attachment" }, { "msg_contents": "On Fri, 4 May 2001, Peter Eisentraut wrote:\n\n> Joel Burton writes:\n> \n> > I think, as little things in contrib/, it's easy for people to miss\n> > these. With a project page, some discussion, etc. (& a place in contrib/),\n> > more people would be able to use these.\n> \n> Most of the extension functions and types in contrib should, in my mind,\n> eventually be moved into the core. contrib is a nice place for things\n> that we don't really know how/whether they work, but once we're confident\n> about the quality we might as well offer it by default.\n\nYeah, but things do seem to languish there for quite a while. 
(soundex(),\nfor instance, was in contrib when I first looked at PG).\n\nAlso, some things are in contrib/ that seem a bit out of date (I think\nthere was still some early RI stuff in there last time I went through it)\n\nI understand the need not to stuff PG full of *everything* -- and perhaps\nstuff like soundex(), metaphone(), etc., shouldn't go into the core *. But\nI think if we leave them in contrib/, after a while, it feels like there's\nan implied comment on the quality/soundness of the code.\n\nWould it work to have a different mechanism for distributing proven yet\nout-of-the-mainstream stuff, like soundex(), etc.\n\n\n* - soundex(), in particular, should go into the core, though. \nMany other DBs have it built in, so users could reasonably have the\nexpectation that we should have it.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n", "msg_date": "Fri, 4 May 2001 13:43:56 -0400 (EDT)", "msg_from": "Joel Burton <jburton@scw.org>", "msg_from_op": true, "msg_subject": "Re: Re: Metaphone function attachment" }, { "msg_contents": "\nselect * from table where contains(field, 'text string', 10) > 1 order by\nscore(10);\n\nContains returns a number based on an evaluation of the 'text string' against\nthe field. If the field has the words contained in 'text string' in it, it\nreturns a number based on some default assumptions. The assumptions are\nthings like points for the first occurrence of a word, and next occurrence of\nthe word. Points for words in the right order as specified in 'text string',\netc.\n\nWho do I contact at greatbridge?\n\n\nJoel Burton wrote:\n\n> Why not start a new project at greatbridge.org?\n>\n> I'd be happy to see metaphone() move in there, soundex() would make\n> sense. 
I have a hashing algorithm that grabs the first letter off of\n> words, except for user-definable 'stop words', which we use to look for\n> likely organization name matches.\n>\n> These could all fall under a project of PG string functions.\n>\n> I think, as little things in contrib/, it's easy for people to miss\n> these. With a project page, some discussion, etc. (& a place in contrib/),\n> more people would be able to use these.\n>\n> PG functions are one of the things that separates PG from MySQL (which has\n> only C UDFs, and IIRC, not on some platforms) and InterBase (which has\n> plsql-like procedures, but functions can only be written in C). I think\n> our functions are one of our strongest cases, and the more we can show\n> people examples of how to use them, and the larger our useful library, the\n> more we win.\n>\n> P.S. What exactly does contains() do?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Fri, 04 May 2001 14:01:06 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Metaphone function attachment" }, { "msg_contents": "> Yeah, but things do seem to languish there for quite a while. (soundex(),\n> for instance, was in contrib when I first looked at PG).\n> \n> Also, some things are in contrib/ that seem a bit out of date (I think\n> there was still some early RI stuff in there last time I went through it)\n> \n> I understand the need not to stuff PG full of *everything* -- and perhaps\n> stuff like soundex(), metaphone(), etc., shouldn't go into the core *. 
But\n> I think if we leave them in contrib/, after a while, it feels like there's\n> an implied comment on the quality/soundness of the code.\n> \n> Would it work to have a different mechanism for distributing proven yet\n> out-of-the-mainstream stuff, like soundex(), etc.\n> \n> \n> * - soundex(), in particular, should go into the core, though. \n> Many other DBs have it built in, so users could reasonably have the\n> expectation that we should have it.\n\nAdded to TODO:\n\n\t* Move some things from /contrib into main tree, like soundex \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 16:47:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Metaphone function attachmenty" }, { "msg_contents": "\nSure, send them over and we can put them in /contrib. 
Send them to the\npatches list, I think.\n\n\n> Joel Burton wrote:\n> > \n> > --\n> > Joel Burton <jburton@scw.org>\n> > Director of Information Systems, Support Center of Washington\n> > \n> > -------------------------------------------------------------------------------\n> > Name: contrib-metaphone.tgz\n> > contrib-metaphone.tgz Type: unspecified type (APPLICATION/octet-stream)\n> > Encoding: BASE64\n> > \n> > -------------------------------------------------------------------------------\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> I have written a similar function, and it is in a library I wrote \"pgcontains\"\n> which started out as a simple implementation of \"contains(...)\" but has grown\n> WAY beyond it's original scope.\n> \n> Anyway, Metaphone is great for doing some cool searches, but now that we are on\n> the subject of cool search functions, I have a few that may be useful and would\n> be glad to contribute. My only question is how? (and I'd have to write them for\n> 7.1, because they are still in 7.0.x format)\n> \n> \n> contains(...)\n> A simple implementation of contains. Forces a table scan, but does some cool\n> things including phrase detection.\n> \n> decode(...)\n> Similar to \"case\" but works as decode for oracle queries. In 7.1 and higher it\n> should be easy to make one function take a variable number of parameters. Right\n> now I have a stub for the most common numbers.\n> \n> strip(...)\n> Strips out all but alphanumeric characters and returns a lowercase string.\n> \"Oops I did it again\" comes back as \"oopsididitagain.\" This is cool for lightly\n> fuzzy searches.\n> \n> striprev(...)\n> Like strip, but reverses the string. Allows you to use an index for records\n> which end in something. For instance: \"select * from table where field like\n> 'abc%'\" can use an index, where as \"select * from table where field like\n> '%xyx'\" will not. 
However, \"select * from table where striprev(field) like\n> striprev('xyz') || '%'\" can.\n> \n> Example:\n> cdinfo=# select title, striprev(title) from ztitles where striprev(title) like\n> striprev('wall') || '%' limit 3;\n> title | striprev\n> --------------------------------------------+------------------------------------\n> A Giggle Can Wiggle Its Way Through A Wall |\n> llawahguorhtyawstielggiwnacelggiga\n> Shadows On A Wall * | llawanoswodahs\n> The Wall | llaweht\n> (3 rows)\n> \n> cdinfo=# explain select title, striprev(title) from ztitles where\n> striprev(title) like striprev('wall') || '%' limit\n> 3;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..10.21 rows=3 width=12)\n> -> Index Scan using f1 on ztitles (cost=0.00..7579.94 rows=2227 width=12)\n> \n> EXPLAIN\n> \n> int64_t *strtonumu(text, int4 base)\n> Converts a string to a number with an arbitrary base. (Is there a function to\n> do this already?)\n> \n> \n> -- \n> I'm not offering myself as an example; every life evolves by its own laws.\n> ------------------------\n> http://www.mohawksoft.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 11:58:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Metaphone function attachment" }, { "msg_contents": "\nAdded to our /contrib. This is a loadable module, so it clearly belongs\nin /contrib.\n\n\n> \n> \n> -- \n> Joel Burton <jburton@scw.org>\n> Director of Information Systems, Support Center of Washington\n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 19:00:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Metaphone function attachment" } ]
[ { "msg_contents": "\ncvsup -L 2 postgres.cvsup\nParsing supfile \"postgres.cvsup\"\nConnecting to postgresql.org\nCannot connect to postgresql.org: Connection refused\nWill retry at 22:31:23\n----------------------------------\n# This file represents the standard CVSup distribution file\n# for the PostgreSQL ORDBMS project\n# Modified by lockhart@alumni.caltech.edu 1997-08-28\n# - Point to my local snapshot source tree\n# - Pull the full CVS repository, not just the latest snapshot\n#\n# Defaults that apply to all the collections\n*default host=postgresql.org\n*default compress\n*default release=cvs\n*default delete use-rel-suffix\n# enable the following line to get the latest snapshot\n#*default tag=.\n# enable the following line to get whatever was specified above or by \ndefault\n# at the date specified below\n#*default date=97.08.29.00.00.00\n\n# base directory points to where CVSup will store its 'bookmarks' file(s)\n# will create subdirectory sup/\n#*default base=/opt/postgres # /usr/local/pgsql\n*default base=/usr/local/cvsroot\n\n# prefix directory points to where CVSup will store the actual \ndistribution(s)\n*default prefix=/usr/local/cvsroot\n\n# complete distribution, including all below\npgsql\n\n# individual distributions vs 'the whole thing'\n# pgsql-doc\n# pgsql-perl5\n# pgsql-src\n\n\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Thu, 03 May 2001 22:30:17 +0200", "msg_from": "\"V. M.\" <txian@hotmail.com>", "msg_from_op": true, "msg_subject": "CVSup not working!" }, { "msg_contents": "> cvsup -L 2 postgres.cvsup\n> Parsing supfile \"postgres.cvsup\"\n> Connecting to postgresql.org\n> Cannot connect to postgresql.org: Connection refused\n> Will retry at 22:31:23\n...\n\nMe too. Marc, could you take a peek at it? 
cvsupd seems to be gone or\nport blocked or ??\n\n - Thomas\n", "msg_date": "Fri, 04 May 2001 00:56:31 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: CVSup not working!" }, { "msg_contents": "\n\nmost odd ... it's set to start on reboot, but either it went down on its\nown, or didn't ... restarted now, let me know if it's not working ...\n\nOn Fri, 4 May 2001, Thomas Lockhart wrote:\n\n> > cvsup -L 2 postgres.cvsup\n> > Parsing supfile \"postgres.cvsup\"\n> > Connecting to postgresql.org\n> > Cannot connect to postgresql.org: Connection refused\n> > Will retry at 22:31:23\n> ...\n>\n> Me too. Marc, could you take a peek at it? cvsupd seems to be gone or\n> port blocked or ??\n>\n> - Thomas\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 4 May 2001 08:45:06 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVSup not working!" } ]
[ { "msg_contents": "Hi,\n\nI have noticed that it is possible to create duplicate CHECK (haven't tried\nother) constraints in 7.0.3 by doing something like this:\n\nCREATE TABLE \"test\" (\n \"a\" int4,\n CHECK (a < 400),\n CONSTRAINT \"$1\" CHECK (a > 5)\n);\n\nI was just fiddling around with trying to implement the 'DROP CONSTRAINT'\ncode (it's quite hard - don't wait up for me!) and it would seem to be a bad\nthing that it's possible to have two constraints with the same name in a\ntable.\n\nSurely there should be a UNIQUE (rcrelid, rcname) on pg_relcheck?, or at\nleast better checking in the CREATE TABLE code?\n\nChris\n\n", "msg_date": "Fri, 4 May 2001 09:48:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Duplicate constraint names in 7.0.3" }, { "msg_contents": "\nIf I read the spec correctly, table constraint names are supposed to be\nunique across a schema. So technically the constraint name should also\nnot conflict with the name of an fk constraint, or a unique index. In\naddition, generated constraint names are supposed to follow the same\nsyntax rules (which includes the uniqueness) which seems to imply that\nin cases like the below, that's not an error, and a different name should\nbe generated for the unnamed constraint. However, unnamed column\nconstraints seem to get the empty string as a name. \n\nI'd say don't worry about it for the purposes of drop constraint. :)\n\nOn Fri, 4 May 2001, Christopher Kings-Lynne wrote:\n\n> Hi,\n> \n> I have noticed that it is possible to create duplicate CHECK (haven't tried\n> other) constraints in 7.0.3 by doing something like this:\n> \n> CREATE TABLE \"test\" (\n> \"a\" int4,\n> CHECK (a < 400),\n> CONSTRAINT \"$1\" CHECK (a > 5)\n> );\n> \n> I was just fiddling around with trying to implement the 'DROP CONSTRAINT'\n> code (it's quite hard - don't wait up for me!) 
and it would seem to be a bad\n> thing that it's possible to have two constraints with the same name in a\n> table.\n> \n> Surely there should be a UNIQUE (rcrelid, rcname) on pg_relcheck?, or at\n> least better checking in the CREATE TABLE code?\n\n", "msg_date": "Thu, 3 May 2001 19:48:03 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate constraint names in 7.0.3" }, { "msg_contents": "I left it unsaid that, in fact, all constraint names should be unique.\nUnnamed column constraints as far as I can tell get a '$n' automatically\nassigned name.\n\nMaybe the create table function should process named constraints first, and\nthen the unnamed ones to prevent such a problem?\n\nChris\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:sszabo@megazone23.bigpanda.com]\nSent: Friday, 4 May 2001 10:48 AM\nTo: Christopher Kings-Lynne\nCc: Hackers\nSubject: Re: [HACKERS] Duplicate constraint names in 7.0.3\n\n\n\nIf I read the spec correctly, table constraint names are supposed to be\nunique across a schema. So technically the constraint name should also\nnot conflict with the name of an fk constraint, or a unique index. In\naddition, generated constraint names are supposed to follow the same\nsyntax rules (which includes the uniqueness) which seems to imply that\nin cases like the below, that's not an error, and a different name should\nbe generated for the unnamed constraint. However, unnamed column\nconstraints seem to get the empty string as a name.\n\nI'd say don't worry about it for the purposes of drop constraint. 
:)\n\nOn Fri, 4 May 2001, Christopher Kings-Lynne wrote:\n\n> Hi,\n>\n> I have noticed that it is possible to create duplicate CHECK (haven't\ntried\n> other) constraints in 7.0.3 by doing something like this:\n>\n> CREATE TABLE \"test\" (\n> \"a\" int4,\n> CHECK (a < 400),\n> CONSTRAINT \"$1\" CHECK (a > 5)\n> );\n>\n> I was just fiddling around with trying to implement the 'DROP CONSTRAINT'\n> code (it's quite hard - don't wait up for me!) and it would seem to be a\nbad\n> thing that it's possible to have two constraints with the same name in a\n> table.\n>\n> Surely there should be a UNIQUE (rcrelid, rcname) on pg_relcheck?, or at\n> least better checking in the CREATE TABLE code?\n\n", "msg_date": "Fri, 4 May 2001 11:17:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Duplicate constraint names in 7.0.3" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> If I read the spec correctly, table constraint names are supposed to be\n> unique across a schema.\n\nThat's what the spec says, but I doubt we should enforce it. For one\nthing, what do you do with inherited constraints? Invent a random name\nfor them? No thanks. The absolute limit of what I'd accept is\nconstraint name unique for a given table ... and even that seems like\nan unnecessary restriction.\n\n>> I was just fiddling around with trying to implement the 'DROP CONSTRAINT'\n>> code (it's quite hard - don't wait up for me!) 
and it would seem to be a bad\n>> thing that it's possible to have two constraints with the same name in a\n>> table.\n\nA reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\nconstraints named \"foo\" on the target table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 May 2001 23:24:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate constraint names in 7.0.3 " }, { "msg_contents": "On Thu, 3 May 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > If I read the spec correctly, table constraint names are supposed to be\n> > unique across a schema.\n> \n> That's what the spec says, but I doubt we should enforce it. For one\n> thing, what do you do with inherited constraints? Invent a random name\n> for them? No thanks. The absolute limit of what I'd accept is\n> constraint name unique for a given table ... and even that seems like\n> an unnecessary restriction.\n\nThe only thing I'd say is it might be confusing to people that some\nconstraint names must be unique (unique, primary key) and that others\ncan be duplicated (check, foreign key), not that all that many people \nprobably name their unique constraints.\n\n> >> I was just fiddling around with trying to implement the 'DROP CONSTRAINT'\n> >> code (it's quite hard - don't wait up for me!) 
and it would seem to be a bad\n> >> thing that it's possible to have two constraints with the same name in a\n> >> table.\n> \n> A reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\n> constraints named \"foo\" on the target table.\n\nDefinitely true if non-unique names are allowed.\n\n", "msg_date": "Thu, 3 May 2001 20:42:57 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate constraint names in 7.0.3 " }, { "msg_contents": "> A reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\n> constraints named \"foo\" on the target table.\n\nThen it should probably be a good thing to avoid the automatic generation of\nduplicate names? I might take a look at that, actually...\n\nChris\n\n", "msg_date": "Fri, 4 May 2001 12:33:22 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Duplicate constraint names in 7.0.3 " }, { "msg_contents": "OK,\n\nI have modified heap.c so that it won't automatically generate duplicate\nconstraint names.\n\nI have _not_ compiled this yet, as it's a bit of a pain for me cos I don't\nhave bison, etc. 
However, it looks good to me, and if someone else wants to\ntest it and then maybe think about if the patch is necessary that's fine by\nme.\n\nIf no-one wants to test it, I will eventually get around to testing it\nmyself.\n\nGiven that this is my first code patch for Postgres - I should treat it with\ncaution!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Friday, 4 May 2001 12:33 PM\n> To: Hackers\n> Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n>\n>\n> > A reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\n> > constraints named \"foo\" on the target table.\n>\n> Then it should probably be a good thing to avoid the automatic\n> generation of\n> duplicate names? I might take a look at that, actually...\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>", "msg_date": "Tue, 8 May 2001 11:47:14 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Duplicate constraint names in 7.0.3 " }, { "msg_contents": "\nCan you send a context diff please? Thanks.\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> OK,\n> \n> I have modifed heap.c so that it won't automatically generate duplicate\n> constraint names.\n> \n> I have _not_ compiled this yet, as it's a bit of a pain for me cos I don't\n> have bison, etc. 
However, it looks good to me, and if someone else wants to\n> test it and then maybe think about if the patch is necessary that's fine by\n> me.\n> \n> If no-one wants to test it, I will eventually get around to testing it\n> myself.\n> \n> Given that this is my first code patch for Postgres - I should treat it with\n> caution!\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > Kings-Lynne\n> > Sent: Friday, 4 May 2001 12:33 PM\n> > To: Hackers\n> > Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n> >\n> >\n> > > A reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\n> > > constraints named \"foo\" on the target table.\n> >\n> > Then it should probably be a good thing to avoid the automatic\n> > generation of\n> > duplicate names? I might take a look at that, actually...\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 00:41:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate constraint names in 7.0.3" }, { "msg_contents": "Assuming that generating a context diff is a matter of going 'cvs diff -c\nheap.c' then attached is a context diff.\n\nTo jog people's memories, it's intended to fix the problem with the\nfollowing code creating duplicate constraint names:\n\nCREATE TABLE \"test\" (\n \"a\" int4,\n CHECK (a < 400),\n CONSTRAINT \"$1\" CHECK (a > 5)\n);\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Tuesday, 8 May 2001 12:42 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] Duplicate constraint names in 7.0.3\n>\n>\n>\n> Can you send a context diff please? Thanks.\n>\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > OK,\n> >\n> > I have modifed heap.c so that it won't automatically generate duplicate\n> > constraint names.\n> >\n> > I have _not_ compiled this yet, as it's a bit of a pain for me\n> cos I don't\n> > have bison, etc. 
However, it looks good to me, and if someone\n> else wants to\n> > test it and then maybe think about if the patch is necessary\n> that's fine by\n> > me.\n> >\n> > If no-one wants to test it, I will eventually get around to testing it\n> > myself.\n> >\n> > Given that this is my first code patch for Postgres - I should\n> treat it with\n> > caution!\n> >\n> > Chris\n> >\n> > > -----Original Message-----\n> > > From: pgsql-hackers-owner@postgresql.org\n> > > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > > Kings-Lynne\n> > > Sent: Friday, 4 May 2001 12:33 PM\n> > > To: Hackers\n> > > Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n> > >\n> > >\n> > > > A reasonable interpretation of DROP CONSTRAINT \"foo\" is to\n> drop *all*\n> > > > constraints named \"foo\" on the target table.\n> > >\n> > > Then it should probably be a good thing to avoid the automatic\n> > > generation of\n> > > duplicate names? I might take a look at that, actually...\n> > >\n> > > Chris\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://www.postgresql.org/search.mpl\n> > >\n>\n> [ Attachment, skipping... ]\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>", "msg_date": "Tue, 8 May 2001 13:11:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Duplicate constraint names in 7.0.3" }, { "msg_contents": "DOH!\n\nI installed bison and there is a tiny little compile-stopper. Use the\nattached diff instead.\n\n(I forgot to declare 'i' :) )\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Tuesday, 8 May 2001 1:12 PM\n> To: Hackers\n> Cc: pgman@candle.pha.pa.us\n> Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n>\n>\n> Assuming that generating a context diff is a matter of going 'cvs diff -c\n> heap.c' then attached is a context diff.\n>\n> To jog people's memories, it's intended to fix the problem with the\n> following code creating duplicate constraint names:\n>\n> CREATE TABLE \"test\" (\n> \"a\" int4,\n> CHECK (a < 400),\n> CONSTRAINT \"$1\" CHECK (a > 5)\n> );\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Tuesday, 8 May 2001 12:42 PM\n> > To: Christopher Kings-Lynne\n> > Cc: Hackers\n> > Subject: Re: [HACKERS] Duplicate constraint names in 7.0.3\n> >\n> >\n> >\n> > Can you send a context diff please? Thanks.\n> >\n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > OK,\n> > >\n> > > I have modifed heap.c so that it won't automatically generate\n> duplicate\n> > > constraint names.\n> > >\n> > > I have _not_ compiled this yet, as it's a bit of a pain for me\n> > cos I don't\n> > > have bison, etc. 
However, it looks good to me, and if someone\n> > else wants to\n> > > test it and then maybe think about if the patch is necessary\n> > that's fine by\n> > > me.\n> > >\n> > > If no-one wants to test it, I will eventually get around to testing it\n> > > myself.\n> > >\n> > > Given that this is my first code patch for Postgres - I should\n> > treat it with\n> > > caution!\n> > >\n> > > Chris\n> > >\n> > > > -----Original Message-----\n> > > > From: pgsql-hackers-owner@postgresql.org\n> > > > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > > > Kings-Lynne\n> > > > Sent: Friday, 4 May 2001 12:33 PM\n> > > > To: Hackers\n> > > > Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n> > > >\n> > > >\n> > > > > A reasonable interpretation of DROP CONSTRAINT \"foo\" is to\n> > drop *all*\n> > > > > constraints named \"foo\" on the target table.\n> > > >\n> > > > Then it should probably be a good thing to avoid the automatic\n> > > > generation of\n> > > > duplicate names? I might take a look at that, actually...\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://www.postgresql.org/search.mpl\n> > > >\n> >\n> > [ Attachment, skipping... ]\n> >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to\n> majordomo@postgresql.org)\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill,\n> Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>", "msg_date": "Tue, 8 May 2001 13:23:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RE: Duplicate constraint names in 7.0.3" }, { "msg_contents": "\nApplied (Newer version). I quote this one to give it context.\n\nThanks.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> OK,\n> \n> I have modifed heap.c so that it won't automatically generate duplicate\n> constraint names.\n> \n> I have _not_ compiled this yet, as it's a bit of a pain for me cos I don't\n> have bison, etc. However, it looks good to me, and if someone else wants to\n> test it and then maybe think about if the patch is necessary that's fine by\n> me.\n> \n> If no-one wants to test it, I will eventually get around to testing it\n> myself.\n> \n> Given that this is my first code patch for Postgres - I should treat it with\n> caution!\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> > Kings-Lynne\n> > Sent: Friday, 4 May 2001 12:33 PM\n> > To: Hackers\n> > Subject: RE: [HACKERS] Duplicate constraint names in 7.0.3\n> >\n> >\n> > > A reasonable interpretation of DROP CONSTRAINT \"foo\" is to drop *all*\n> > > constraints named \"foo\" on the target table.\n> >\n> > Then it should probably be a good thing to avoid the automatic\n> > generation of\n> > duplicate names? I might take a look at that, actually...\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 17:13:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate constraint names in 7.0.3" } ]
[ { "msg_contents": "\nThanks a lot for your total and complete description of the process. (i\nshould have checked out the sprm first before asking). I empathize with\nwhat you said about packaging not being a simple task, i have been through\nthe agony.\n\nAbout putting your stuff into the postgres tree, i believe it would be a\n\"good\" thing other than bad to include it in pgsq. It can be put into the\ncontrib directory (because it isn't part of the \"core\" portable stuff). This\nsolution was done for the portable openssh cvs tree. not only redhat\npackaging stuff was included, but the solaris pkg mechanism was also in\nthere (and i also believe there were some others). It usually isn't a lot of\nfiles (ie. the spec file and maybe the initscript). Of course its up to the\ngods of the pgsql tree what they want to do with it, so i'm just going to\nraise this suggestion and shut up.\n\nanyways, getting back to the what brought me to ask about this, can you add\nthe fixes to these two small problems in your initscripts?\n\n1. `pidof` should be `pidof -s` (2 instances)\n2. restart) should be stop; sleep x; start\nideally, stop should actually wait till postgres fully stops. The sleep is\njust a temporary fix.\n\nI have a more thorough email i sent earlier, i can resend it to you if you\nwant.\n\n-rchit\n", "msg_date": "Thu, 3 May 2001 19:31:43 -0700 ", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "RE: Packaging 7.1.1" }, { "msg_contents": "Rachit Siamwalla wrote:\n> Thanks a lot for your total and complete description of the process. (i\n> should have checked out the sprm first before asking). I empathize with\n> what you said about packaging not being a simple task, i have been through\n> the agony.\n\nEmpathize is appropriate if you've been there. 
But, it's better than\ngoing six months to a year for a newer RPM -- the release lag was one\nofthe two triggers that caused me to go do this -- the other was the\nupgrading issue. I won't say any more about that right now --too tired.\n \n> About putting your stuff into the postgres tree, i believe it would be a\n> \"good\" thing other than bad to include it in pgsq. It can be put into the\n> contrib directory (because it isn't part of the \"core\" portable stuff). This\n\nWe'll see what transpires.\n\n> I have a more thorough email i sent earlier, i can resend it to you if you\n> want.\n\nHmmm.. lessee... I have Bruce's reply, which includes your message in\nits entirety, I think. But, just to be safe, resend directly to me, and\nadd the [HACKERS] part to the subject (so it will go to the correct mail\nfolder, otherwise I might miss it). I have a list of messages in an\n'RPMS for 7.1' subfolder of my mail folder 'Postgres' that I work\nthrough for each release.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 03 May 2001 23:28:42 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Packaging 7.1.1" }, { "msg_contents": "On Thu, 3 May 2001, Rachit Siamwalla wrote:\n\n> 1. `pidof` should be `pidof -s` (2 instances)\n> 2. restart) should be stop; sleep x; start\n> ideally, stop should actually wait till postgres fully stops. The sleep is\n> just a temporary fix.\n> \nPerhaps a naive question, but why not use the pg_ctl for starting and\nstopping?\n\nIt has a -w option to have it wait for the stop/start/restart to complete.\n\n\t-rocco\n\n", "msg_date": "Fri, 4 May 2001 10:41:17 -0400 (EDT)", "msg_from": "Rocco Altier <roccoa@routescape.com>", "msg_from_op": false, "msg_subject": "RE: Packaging 7.1.1" } ]
[ { "msg_contents": ">\n> > Just put a note in the installation docs that the place where the \n>database\n> > is initialised to should be on a non-Reiser, non-XFS mount...\n>\n>Sure, we can do that now.\n\nI still think this is not necessarily the right approach either. One\nmajor purpose of using a journaling fs is for fast boot up time after\ncrash. If you have a 100 GB database you may wish to have the data\non XFS. I do think that the WAL log should be on a separate disk and\non a non-journaling fs for performance.\n\nBest Regards,\nCarl Garland\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n", "msg_date": "Fri, 04 May 2001 00:58:27 -0400", "msg_from": "\"carl garland\" <carlhgarland@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Here is a radical idea...\n\nWhat is it that is causing Postgres trouble? It is the file system's attempts\nto maintain some integrity. So I proposed a simple \"dbfs\" sort of thing which\nwas the most basic sort of file system possible.\n\nI'm not sure, but I think we can test this hypothesis on the FAT32 file system\non Linux. As far as I know, FAT32 (FAT in general) is a very simple file system\nand does very little during operation, except read and write the files and\nmanage what's been allocated. Plus, the allocation table is very simple in\ncomparison all the other file systems.\n\nWould pgbench run on a system using ext2, Reiser, then FAT32 be sufficient to\nget a feeling for the type of performance Postgres would get, or am I just off\nthe wall?\n\nIf this idea has some merit, what would be the best way to test it? Move the\npg_xlog directory first, then try base? 
What's the best methodology to try?\n\n\ncarl garland wrote:\n> \n> >\n> > > Just put a note in the installation docs that the place where the\n> >database\n> > > is initialised to should be on a non-Reiser, non-XFS mount...\n> >\n> >Sure, we can do that now.\n> \n> I still think this is not necessarily the right approach either. One\n> major purpose of using a journaling fs is for fast boot up time after\n> crash. If you have a 100 GB database you may wish to have the data\n> on XFS. I do think that the WAL log should be on a separate disk and\n> on a non-journaling fs for performance.\n> \n> Best Regards,\n> Carl Garland\n> \n> _________________________________________________________________\n> Get your FREE download of MSN Explorer at http://explorer.msn.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Fri, 04 May 2001 06:43:06 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Before we get too involved in speculating, shouldn't we actually measure the\nperformance of 7.1 on XFS and Reiserfs? Since it's easy to disable fsync,\nwe can test whether that's the problem. I don't think that logging file\nsystems must intrinsically give bad performance on fsync since they only log\nmetadata changes.\n\nI don't have a machine with XFS installed and it will be at least a week\nbefore I could get around to a build. 
Any volunteers?\n\nKen Hirsch\n\n\n", "msg_date": "Fri, 4 May 2001 08:24:52 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Before we get too involved in speculating, shouldn't we actually measure the\n> performance of 7.1 on XFS and Reiserfs? Since it's easy to disable fsync,\n> we can test whether that's the problem. I don't think that logging file\n> systems must intrinsically give bad performance on fsync since they only log\n> metadata changes.\n> \n> I don't have a machine with XFS installed and it will be at least a week\n> before I could get around to a build. Any volunteers?\n\nThere have been multiple reports of poor PostgreSQL performance on\nReiser and xfs. I don't have numbers, though. Frankly, I think we need\nxfs and reiser experts involved to figure out our options here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 12:36:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> > Before we get too involved in speculating, shouldn't we actually measure\nthe\n> > performance of 7.1 on XFS and Reiserfs? Since it's easy to disable\nfsync,\n> > we can test whether that's the problem. I don't think that logging file\n> > systems must intrinsically give bad performance on fsync since they only\nlog\n> > metadata changes.\n> >\n> > I don't have a machine with XFS installed and it will be at least a week\n> > before I could get around to a build. Any volunteers?\n>\n> There have been multiple reports of poor PostgreSQL performance on\n> Reiser and xfs. I don't have numbers, though. 
Frankly, I think we need\n> xfs and reiser experts involved to figure out our options here.\n\nI've done some testing to see how Reiserfs performs\nvs ext2, and also various for various values of wal_sync_method while on a\nreiserfs partition. The attached graph shows the results. The y axis is\ntransactions per second and the x axis is the transaction number. It was\nclear that, at least for my specific app, ext2 was significantly faster.\n\nThe hardware I tested on has an Athalon 1 Ghz cpu and 512 MB ram. The\nharddrive is a 2 year old IDE drive. I'm running Red Hat 7 with all the\nlatest updates, and a freshly compiled 2.4.2 kernel with the latest Reiserfs\npatch, and of course PostgreSQL 7.1. The transactions were run in a loop,\n700 times per test, to insert sample data into 4 tables. I used a PHP script\nrunning on the same machine to do the inserts.\n\nI'd be happy to provide more detail or try a different variation if anyone\nis interested.\n\n- Joe", "msg_date": "Fri, 4 May 2001 11:07:01 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "\"Ken Hirsch\" <kenhirsch@myself.com> writes:\n\n> I don't have a machine with XFS installed and it will be at least a week\n> before I could get around to a build. Any volunteers?\n\nI think I could do that... any useful benchmarks to run?\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "04 May 2001 14:22:39 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> > There have been multiple reports of poor PostgreSQL performance on\n> > Reiser and xfs. I don't have numbers, though. 
Frankly, I think we need\n> > xfs and reiser experts involved to figure out our options here.\n> \n> I've done some testing to see how Reiserfs performs\n> vs ext2, and also various for various values of wal_sync_method while on a\n> reiserfs partition. The attached graph shows the results. The y axis is\n> transactions per second and the x axis is the transaction number. It was\n> clear that, at least for my specific app, ext2 was significantly faster.\n> \n> The hardware I tested on has an Athalon 1 Ghz cpu and 512 MB ram. The\n> harddrive is a 2 year old IDE drive. I'm running Red Hat 7 with all the\n> latest updates, and a freshly compiled 2.4.2 kernel with the latest Reiserfs\n> patch, and of course PostgreSQL 7.1. The transactions were run in a loop,\n> 700 times per test, to insert sample data into 4 tables. I used a PHP script\n> running on the same machine to do the inserts.\n> \n> I'd be happy to provide more detail or try a different variation if anyone\n> is interested.\n\nThis is hugely helpful.\n\nYikes, look at those lines. It shows a few things. \n\nFirst, under Reiser, nosync, fsync, and fdatasync are pretty much the\nsame. The big surprise here is that fsync doesn't seem to have any\neffect.\n\nSecond surprise is that open fsync, which synces on every write rather\nthan on end of transaction, was slower. I believe this should be slower\nif multiple WAL writes are being made in one transaction. fdatasync\nwould sync just at end of transaction, while each WAL write would be\nsynced by open fsync.\n\nAnd the largest surpise is that ext2 is faster, but not because of\nfsync, and almost double so. Keep in mind that WAL writes are no the\nonly write happening. 
Though in 7.1 we don't flush the data blocks to\ndisk, we do write to disk as the buffer cache fill up with dirty\nbuffers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 14:28:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Joe Conway <joe@conway-family.com> wrote:\n>\n> I've done some testing to see how Reiserfs performs\n> vs ext2, and also various for various values of wal_sync_method while on a\n> reiserfs partition. The attached graph shows the results. The y axis is\n> transactions per second and the x axis is the transaction number. It was\n> clear that, at least for my specific app, ext2 was significantly faster.\n\nThis is great, thanks a lot! Among other things it tells us, it appears\nthat fsync() is not the problem on Reiserfs. I don't know the details of\nReiserfs, but I think a lot of work has gone into optimizing it for very\nsmall files, so you can use the file system as a simple database for\nstrings, a la Windows registry. I don't remember hearing about optimizing\nfor large files and large block reads and writes.\n\nXFS, on the other hand, is used for very large files on SGI systems.\n\nI think the XFS and Reiserfs folks will be happy to look at the performance\nproblem, but it would be very helpful for them to have a prepackaged\nbenchmark (or two or three) to use. We should set up an FTP area to share\nthem. Joe, can you contribute yours? Does anybody else have anything?\n\nAlready, Trond Eivind Glomsr�d teg@redhat.com has volunteered to test on\nXFS. 
The easier we make it, the more help we'll get.\n\nKen Hirsch\n\n\n\n\n\n", "msg_date": "Fri, 4 May 2001 15:58:07 -0400", "msg_from": "\"Ken Hirsch\" <kahirsch@bellsouth.net>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> I think the XFS and Reiserfs folks will be happy to look at the\nperformance\n> problem, but it would be very helpful for them to have a prepackaged\n> benchmark (or two or three) to use. We should set up an FTP area to\nshare\n> them. Joe, can you contribute yours? Does anybody else have anything?\n>\n\nI don't mind contributing the script and schema that I used, but one thing I\nfailed to mention in my first post is that the first thing the script does\nis open connections to 256 databases (all on this same machine), and the\ntransactions are relatively evenly dispersed among the 256 connections. The\ntest was originally written to try out an idea to allow scalability by\npartitioning the data into separate databases (which could eventually each\nlive on its own server). If you are interested I can modify the test to use\nonly one database and rerun the same tests this weekend.\n\nJoe\n\n", "msg_date": "Fri, 4 May 2001 14:13:47 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsrød) writes:\n\n> \"Ken Hirsch\" <kenhirsch@myself.com> writes:\n> \n> > I don't have a machine with XFS installed and it will be at least a week\n> > before I could get around to a build. Any volunteers?\n> \n> I think I could do that... any useful benchmarks to run?\n\nIn lack of bigger benchmarks, I tried postgresql 7.1 on a Red Hat\nLinux 7.1 system with the SGI XFS modifications. 
The differences were\nvery small.\n\n\n\n\n\n\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.", "msg_date": "07 May 2001 18:07:16 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> teg@redhat.com (Trond Eivind Glomsrød) writes:\n> \n> > \"Ken Hirsch\" <kenhirsch@myself.com> writes:\n> > \n> > > I don't have a machine with XFS installed and it will be at least a week\n> > > before I could get around to a build. Any volunteers?\n> > \n> > I think I could do that... any useful benchmarks to run?\n> \n> In lack of bigger benchmarks, I tried postgresql 7.1 on a Red Hat\n> Linux 7.1 system with the SGI XFS modifications. The differences were\n> very small.\n> \n\nThanks. That is very helpful. Seems XFS is fine. According to Joe\nConway, reiser has some problems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 19:07:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Quoting Trond Eivind Glomsrød <teg@redhat.com>:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > > \n> > > When compared to the earlier ones (including XFS), you'll note that\n> ReiserFS\n> > > performance is rather poor in some of the tests - it takes 37 vs. 
13\n> > > seconds for 8192 inserts, when the inserts are different transactions.\n> > \n> > That is all the fsync delay, probably, and it should be using fdatasync()\n> > on that kernel.\n> \n> And it does seem to work that way with XFS...\n\nI'm concearned about this because we are going to switch our fist server to a\nJournaling FS (on Linux).\nSearching and asking I found out that for our short term work we need ReiserFS\n(it's for a proxy server).\nPut the interesting thing was that for large (very large) files, everybody\nrecomends XFS.\nThe drawback of XFS is that it's very, very sloooow when deleting files.\n\nSaludos... :-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Wed, 9 May 2001 14:21:10 +0300", "msg_from": "=?iso-8859-1?B?TWFydO1uIE1hcnF16XM=?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Quoting Bruce Momjian <pgman@candle.pha.pa.us>:\n\n> > I'm concearned about this because we are going to switch our\n> > fist server to a Journaling FS (on Linux). Searching and asking\n> > I found out that for our short term work we need ReiserFS (it's\n> > for a proxy server). Put the interesting thing was that for\n> > large (very large) files, everybody recomends XFS. 
The drawback\n> > of XFS is that it's very, very sloooow when deleting files.\n> \n> Why do all these file systems seem to have one major negative?\n\nIn the case of XFS they told me that it was slow deleting, but I guess that they\nwere trying to tell me that reiser would do the job on a proxy cache better than\nXFS.\nEverybody put their thumbs-up to XFS when talking about databases (because of\nthe large file size).\n\nSaludos... :-)\n\n-- \nEl mejor sistema operativo es aquel que te da de comer.\nCuida tu dieta.\n-----------------------------------------------------------------\nMartin Marques | mmarques@unl.edu.ar\nProgramador, Administrador | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n", "msg_date": "Wed, 9 May 2001 14:30:03 +0300", "msg_from": "=?iso-8859-1?B?TWFydO1uIE1hcnF16XM=?= <martin@bugs.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "teg@redhat.com (Trond Eivind Glomsrød) writes:\n\n> teg@redhat.com (Trond Eivind Glomsrød) writes:\n> \n> > \"Ken Hirsch\" <kenhirsch@myself.com> writes:\n> > \n> > > I don't have a machine with XFS installed and it will be at least a week\n> > > before I could get around to a build. Any volunteers?\n> > \n> > I think I could do that... any useful benchmarks to run?\n> \n> In lack of bigger benchmarks, I tried postgresql 7.1 on a Red Hat\n> Linux 7.1 system with the SGI XFS modifications. The differences were\n> very small.\n\nAnd here is the one for ReiserFS - same kernel, but recompiled to turn\noff debugging\n\n\n\n\nWhen compared to the earlier ones (including XFS), you'll note that ReiserFS\nperformance is rather poor in some of the tests - it takes 37 vs. 
13\nseconds for 8192 inserts, when the inserts are different transactions.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.", "msg_date": "09 May 2001 11:16:34 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> \n> When compared to the earlier ones (including XFS), you'll note that ReiserFS\n> performance is rather poor in some of the tests - it takes 37 vs. 13\nseconds for 8192 inserts, when the inserts are different transactions.\n\nThat is all the fsync delay, probably, and it should be using fdatasync()\non that kernel.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 12:14:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > \n> > When compared to the earlier ones (including XFS), you'll note that ReiserFS\n> > performance is rather poor in some of the tests - it takes 37 vs. 13\n> > seconds for 8192 inserts, when the inserts are different transactions.\n> \n> That is all the fsync delay, probably, and it should be using fdatasync()\n> on that kernel.\n\nAnd it does seem to work that way with XFS...\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "09 May 2001 12:15:35 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "> I'm concearned about this because we are going to switch our\n> 
Searching and asking\n> I found out that for our short term work we need ReiserFS (it's\n> for a proxy server). Put the interesting thing was that for\n> large (very large) files, everybody recomends XFS. The drawback\n> of XFS is that it's very, very sloooow when deleting files.\n\nWhy do all these file systems seem to have one major negative?\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 13:24:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Makes it more fun :) Kinda like a lottery ticket:\n\n- reliable (cherry)\n- fast (cherry)\n- resource hog (lemon)\n--\nRod Taylor\n BarChord Entertainment Inc.\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Mart�n Marqu�s\" <martin@bugs.unl.edu.ar>\nCc: \"Trond Eivind Glomsr�d\" <teg@redhat.com>;\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, May 09, 2001 1:24 PM\nSubject: Re: [HACKERS] Re: New Linux xfs/reiser file systems\n\n\n> > I'm concearned about this because we are going to switch our\n> > fist server to a Journaling FS (on Linux). Searching and asking\n> > I found out that for our short term work we need ReiserFS (it's\n> > for a proxy server). Put the interesting thing was that for\n> > large (very large) files, everybody recomends XFS. The drawback\n> > of XFS is that it's very, very sloooow when deleting files.\n>\n> Why do all these file systems seem to have one major negative?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Wed, 9 May 2001 13:59:55 -0400", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "Hello !\nI am forwarding the following from lkml\n\nIt seems that the only case when XFS is slow is the 'rm -rf linux' \n[which can be considered as a good sign for linux]. For all other \noperation XFS is the winner.\n\nYAS\n\n<MessageFromLKML>\nFrom: Ricardo Galli (gallir@uib.es)\nDate: Wed May 09 2001 - 20:45:46 EDT\n\n* Next message: clameter@lameter.com: \"USB broken in 2.4.4? Serial \nRicochet works, USB performance sucks.\"\n\n * Previous message: AmigaLinux A2232 Driver Project : \"New Amiga \nDriver\"\n * Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]\n\n > It would be great to see a table of ReiserFS/XFS/Ext2+index performance\n > results. 
Well, to make it really fair it should be Ext3+index so I'd\n > better add 'backport the patch to 2.2' or 'bug Stephen and friends to\n > hurry up' to my to-do list.\n\nYou can find a simple benchmark (an average of three samples) among reiser,\next2, xfs and fat32 under Linux:\n\nhttp://bulma.lug.net/body.phtml?nIdNoticia=626\n\nAlthough is Spanish, the tables are easy to understand.\n\nThe benchmark was carried up by Guillem Cantallops, student of the\nUniversity of Balearics Islands and member or the local LUG...\n\nBASIC WORDS ;-)\nEscritura: Writing\nLectura: Reading\nBorrado: Deletion\nCopia: Copy\nExtracción: Extraction\n\nRegards,\n\n--ricardo\nhttp://m3d.uib.es/~gallir/\n\n-\nTo unsubscribe from this list: send the line \"unsubscribe linux-kernel\" in\nthe body of a message to majordomo@vger.kernel.org\nMore majordomo info at http://vger.kernel.org/majordomo-info.html\nPlease read the FAQ at http://www.tux.org/lkml/\n</MessageFromLKML>\n\n\nBruce Momjian wrote:\n\n>>I'm concearned about this because we are going to switch our\n>>fist server to a Journaling FS (on Linux). Searching and asking\n>>I found out that for our short term work we need ReiserFS (it's\n>>for a proxy server). Put the interesting thing was that for\n>>large (very large) files, everybody recomends XFS. The drawback\n>>of XFS is that it's very, very sloooow when deleting files.\n>>\n> \n> Why do all these file systems seem to have one major negative?\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 10 May 2001 10:55:16 +0200", "msg_from": "Yaacov Akiba Slama <ya@slamail.org>", "msg_from_op": false, "msg_subject": "Re: New Linux xfs/reiser file systems" } ]
[ { "msg_contents": "\nHi,\n\nI have a problem with a case sensitive 'order by'.\nOn my development platform, the 'order by' clauses are insensitive\nBut on my production platform, they are case sensitive.\nI did the same installation on both platforms, from RPMs I downloaded\n(Postgresql 7.1 on Redhat 7.0).\nI tried to set LC_COLLATE=C before initdb, but nothing changed.\n\nIs there a configuration for that ? Is it a specific parameter on my system\n?\n\nAny help will be appreciated\n\nThank's a lot.\n\n\n\n", "msg_date": "Fri, 4 May 2001 09:55:40 +0200", "msg_from": "=?iso-8859-1?Q?Micha=EBl_Fiey?= <m.fiey@futuresoundtech.com>", "msg_from_op": true, "msg_subject": "Case sensitive order by" }, { "msg_contents": "=?iso-8859-1?Q?Micha=EBl_Fiey?= <m.fiey@futuresoundtech.com> writes:\n> I have a problem with a case sensitive 'order by'.\n> On my development platform, the 'order by' clauses are insensitive\n\nApparently you've discovered a feature that the developers didn't know\nexisted ;-). Seriously, I don't think I believe that. Could we see\nsome evidence? 
Also, what does the pg_controldata contrib program\n(I think this is included in the RPMs) show is in your pg_control file?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 09:57:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case sensitive order by " }, { "msg_contents": "On Fri, May 04, 2001 at 09:57:02AM -0400,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> =?iso-8859-1?Q?Micha=EBl_Fiey?= <m.fiey@futuresoundtech.com> writes:\n> > I have a problem with a case sensitive 'order by'.\n> > On my development platform, the 'order by' clauses are insensitive\n\nCould this be a locale issue?\n", "msg_date": "Fri, 4 May 2001 13:30:07 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Case sensitive order by" }, { "msg_contents": "You're right !\nLC_ALL=C explain the difference.\n\nBut I haven't found pg_controldata on my platform.\nIs it installed with Postgresql 7.1 RPM for Redhat 7.0 ?\n\nThank's for your help\n\nregards\n\n----- Original Message -----\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: Michaël Fiey <m.fiey@futuresoundtech.com>\nSent: Wednesday, May 09, 2001 4:28 PM\nSubject: Re: [GENERAL] Case sensitive order by\n\n\n> > On my prod platform :\n> > aaa\n> > bbb\n> > ccc\n> > AAA\n> > BBB\n> > CCC\n>\n> > And on my dev platform :\n> > aaa\n> > AAA\n> > bbb\n> > BBB\n> > ccc\n> > CCC\n>\n> Are you sure that both databases are running with the same locale\n> settings? pg_controldata can tell you for sure.\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 9 May 2001 17:36:53 +0200", "msg_from": "=?iso-8859-1?Q?Micha=EBl_Fiey?= <m.fiey@futuresoundtech.com>", "msg_from_op": true, "msg_subject": "Re: Case sensitive order by " }, { "msg_contents": "> But I haven't found pg_controldata on my platform.\n> Is it installed with Postgresql 7.1 RPM for Redhat 7.0 ?\n\nDunno. It's part of our contrib stuff. 
I thought there would be an RPM\nfor the contrib stuff for 7.1, but maybe not, or maybe you didn't\ninstall that RPM.\n\nIf you don't have it, just looking at $PGDATA/global/pg_control with\n\"strings\" would probably do well enough to let you spot the locale name...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 May 2001 11:45:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Case sensitive order by " }, { "msg_contents": "Hi,\n\nAs I previous searched a tool to convert Oracle database to PostgreSQL\nand really found nothing, there's now a piece of perl code I've written\nthat\ncan become a great tool to do this job.\n\nIt currently extract the database schema table definition of an Oracle\ndatabase\nand output a sql script to import into postgresql. You simply provide the\nDBI\nconnection to the Oracle DB and the script do its best. It normally handle\nunique, primary and foreign keys.\n\nI need help to go further because I really don't know about Oracle and I go\n\nvery slowly. Also I did not have so much Oracle database to test with.\n\nThings to do are:\n\n- Extract grants (coming soon)\n- More precision in type conversion based on length (I've no good DB to do\nthat)\n- Extract triggers and internal function.\n- Extract datas.\n- SQL queries converter.\n\nFor extracting data if someone know a way to dump content of an Oracle DB\nin ascii. I don't found anything than binary dump. Extracting them with\nperl/DBI\ncan be very slow but if there's no other way...\n\nYou can found this tools at:\n\n http://www.samse.fr/GPL/ora2pg/\n\n\nRegards\n\n", "msg_date": "Wed, 09 May 2001 18:20:41 +0200", "msg_from": "Gilles DAROLD <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Oracle to Pg tool" }, { "msg_contents": "Add on on ora2pg.\n\nTable grant extraction is done. It is based on group/users grants.\n\nOracle has ROLES that I understand as groups and users associated to\nthese roles. 
So I create a group for each role and alter it by adding the\nusers, and then set grants on each table.\n\nLet me know if I should stop sending new updates to the list.\n\nRegards\n\n\n\n", "msg_date": "Wed, 09 May 2001 22:40:41 +0200", "msg_from": "Gilles DAROLD <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "\nWhere do people want this? Should it be in /contrib or on its own web\npage?\n\nI have an Xbase conversion utility too. Where should that go?\n\n\n> Hi,\n> \n> As I previous searched a tool to convert Oracle database to PostgreSQL\n> and really found nothing, there's now a piece of perl code I've written\n> that\n> can become a great tool to do this job.\n> \n> It currently extract the database schema table definition of an Oracle\n> database\n> and output a sql script to import into postgresql. You simply provide the\n> DBI\n> connection to the Oracle DB and the script do its best. It normally handle\n> unique, primary and foreign keys.\n> \n> I need help to go further because I really don't know about Oracle and I go\n> \n> very slowly. Also I did not have so much Oracle database to test with.\n> \n> Things to do are:\n> \n> - Extract grants (coming soon)\n> - More precision in type conversion based on length (I've no good DB to do\n> that)\n> - Extract triggers and internal function.\n> - Extract datas.\n> - SQL queries converter.\n> \n> For extracting data if someone know a way to dump content of an Oracle DB\n> in ascii. I don't found anything than binary dump. 
Extracting them with\n> perl/DBI\n> can be very slow but if there's no other way...\n> \n> You can found this tools at:\n> \n> http://www.samse.fr/GPL/ora2pg/\n> \n> \n> Regards\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 17:02:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "On Wed, 9 May 2001, Bruce Momjian wrote:\n\n>\n> Where do people want this. Should it be in /contrib or on its own web\n> page?\n\nThis is already linked on the related page.\n\n> I have an Xbase conversion utility too. Where should that go?\n\nWhat's the URL?\n\n>\n>\n> > Hi,\n> >\n> > As I previous searched a tool to convert Oracle database to PostgreSQL\n> > and really found nothing, there's now a piece of perl code I've written\n> > that\n> > can become a great tool to do this job.\n> >\n> > It currently extract the database schema table definition of an Oracle\n> > database\n> > and output a sql script to import into postgresql. You simply provide the\n> > DBI\n> > connection to the Oracle DB and the script do its best. It normally handle\n> > unique, primary and foreign keys.\n> >\n> > I need help to go further because I really don't know about Oracle and I go\n> >\n> > very slowly. 
Also I did not have so much Oracle database to test with.\n> >\n> > Things to do are:\n> >\n> > - Extract grants (coming soon)\n> > - More precision in type conversion based on length (I've no good DB to do\n> > that)\n> > - Extract triggers and internal function.\n> > - Extract datas.\n> > - SQL queries converter.\n> >\n> > For extracting data if someone know a way to dump content of an Oracle DB\n> > in ascii. I don't found anything than binary dump. Extracting them with\n> > perl/DBI\n> > can be very slow but if there's no other way...\n> >\n> > You can found this tools at:\n> >\n> > http://www.samse.fr/GPL/ora2pg/\n> >\n> >\n> > Regards\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 9 May 2001 17:28:26 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "> On Wed, 9 May 2001, Bruce Momjian wrote:\n> \n> >\n> > Where do people want this. Should it be in /contrib or on its own web\n> > page?\n> \n> This is already linked on the related page.\n> \n> > I have an Xbase conversion utility too. Where should that go?\n> \n> What's the URL?\n\nIt doesn't have a URL. It is on our own FTP site. 
You can link to it,\nI guess, but it probably needs to hit a mirror that always will respond.\n\nThe URL is:\n\n\tftp://ftp.crimelabs.net/pub/postgresql/contrib/dbf2pg-3.1.tar.gz\n\nbut it is not on the mirrors yet. My problem is trying to get some\nrules on what goes in /contrib and what doesn't. My assumption is that\nloadable modules and stuff that deals with the backend internals go into\n/contrib. That would mean our mysql stuff would be removed, I guess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 17:37:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "Hi All,\n\nI'm willing to put all PostgreSQL related matter on the techdocs site,\nor link to it, as the case may be. Is that of any assistance?\n\nSome stuff is definitely better in /contrib, as it is \"already there\"\nwith PostgreSQL, some stuff might not be. Database conversion tools\nprobably are better in /contrib is my thought though... time to\nclarify rules?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\nBruce Momjian wrote:\n> \n> > On Wed, 9 May 2001, Bruce Momjian wrote:\n> >\n> > >\n> > > Where do people want this. Should it be in /contrib or on its own web\n> > > page?\n> >\n> > This is already linked on the related page.\n> >\n> > > I have an Xbase conversion utility too. Where should that go?\n> >\n> > What's the URL?\n> \n> It doesn't have a URL. It is on our own FTP site. You can link to it,\n> I guess, but it probably needs to hit a mirror that always will respond.\n> \n> The URL is:\n> \n> ftp://ftp.crimelabs.net/pub/postgresql/contrib/dbf2pg-3.1.tar.gz\n> \n> but it is not on the mirrors yet. My problem is trying to get some\n> rules on what goes in /contrib and what doesn't. 
My assumption is that\n> loadable modules and stuff that deals with the backend internals go into\n> /contrib. That would mean our mysql stuff would be removed, I guess.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 10 May 2001 09:38:23 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "[ Charset US-ASCII unsupported, converting... ]\n> Hi All,\n> \n> I'm willing to put all PostgreSQL related matter on the techdocs site,\n> or link to it, as the case may be. Is that of any assistance?\n> \n> Some stuff is definitely better in /contrib, as it is \"already there\"\n> with PostgreSQL, some stuff might not be. Database conversion tools\n> probably are better in /contrib is my thought though... time to\n> clarify rules?\n\nMy first guess is that loadable modules clearly should be in /contrib,\nand tools like pg_controldata, and things that work around missing\nfeatures in PostgreSQL.\n\nLooking in /contrib, that is pretty much what we have. Now, that leaves\nconversion tools as \"unknown\". You are voting they belong in /contrib? 
\nIf I get another \"yes\", I will add the xbase and Oracle tools in there\ntoo.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 May 2001 20:08:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "Hi,\n\nAnother point in favor of /contrib or some other directory like /tools is that it\ncentralizes tools for Pg. Also I can't be sure to always have a URL.\nThis one is dependent on the company I'm working for now.\n\nLife is moving.\n\nRegards\n\nBruce Momjian wrote:\n\n> [ Charset US-ASCII unsupported, converting... ]\n> > Hi All,\n> >\n> > I'm willing to put all PostgreSQL related matter on the techdocs site,\n> > or link to it, as the case may be. Is that of any assistance?\n> >\n> > Some stuff is definitely better in /contrib, as it is \"already there\"\n> > with PostgreSQL, some stuff might not be. Database conversion tools\n> > probably are better in /contrib is my thought though... time to\n> > clarify rules?\n>\n> My first guess is that loadable modules clearly should be in /contrib,\n> and tools like pg_controldata, and things that workaround missing\n> features in PostgreSQL.\n>\n> Looking in /contrib, that is pretty much what we have. Now, that leaves\n> conversion tools as \"unknown\". You are voting the belong in /contrib?\n> If I get another \"yes\", I will add the xbase and Oracle tools in there\n> too.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n\n", "msg_date": "Thu, 10 May 2001 10:17:36 +0200", "msg_from": "Gilles DAROLD <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "On Thu, 10 May 2001, Gilles DAROLD wrote:\n\n> Hi,\n>\n> Another point regarding /contrib or other directory like /tools is to\n> centralize tools for Pg. Also I can't be sure to always have an URL.\n> This one is dependant on the company I'm working now.\n\nThere's your \"yes\" Bruce. So when you move them there (hopefully to\nsomething like /contrib/tools) I'll change the info on the related\npage.\n\nVince.\n\n>\n> Life is moving.\n>\n> Regards\n>\n> Bruce Momjian wrote:\n>\n> > [ Charset US-ASCII unsupported, converting... ]\n> > > Hi All,\n> > >\n> > > I'm willing to put all PostgreSQL related matter on the techdocs site,\n> > > or link to it, as the case may be. Is that of any assistance?\n> > >\n> > > Some stuff is definitely better in /contrib, as it is \"already there\"\n> > > with PostgreSQL, some stuff might not be. Database conversion tools\n> > > probably are better in /contrib is my thought though... time to\n> > > clarify rules?\n> >\n> > My first guess is that loadable modules clearly should be in /contrib,\n> > and tools like pg_controldata, and things that workaround missing\n> > features in PostgreSQL.\n> >\n> > Looking in /contrib, that is pretty much what we have. Now, that leaves\n> > conversion tools as \"unknown\". You are voting the belong in /contrib?\n> > If I get another \"yes\", I will add the xbase and Oracle tools in there\n> > too.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 10 May 2001 06:32:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "\nOK, applied in /contrib/oracle for 7.2.\n\n> Hi,\n> \n> As I previous searched a tool to convert Oracle database to PostgreSQL\n> and really found nothing, there's now a piece of perl code I've written\n> that\n> can become a great tool to do this job.\n> \n> It currently extract the database schema table definition of an Oracle\n> database\n> and output a sql script to import into postgresql. You simply provide the\n> DBI\n> connection to the Oracle DB and the script do its best. It normally handle\n> unique, primary and foreign keys.\n> \n> I need help to go further because I really don't know about Oracle and I go\n> \n> very slowly. Also I did not have so much Oracle database to test with.\n> \n> Things to do are:\n> \n> - Extract grants (coming soon)\n> - More precision in type conversion based on length (I've no good DB to do\n> that)\n> - Extract triggers and internal function.\n> - Extract datas.\n> - SQL queries converter.\n> \n> For extracting data if someone know a way to dump content of an Oracle DB\n> in ascii. 
I don't found anything than binary dump. Extracting them with\n> perl/DBI\n> can be very slow but if there's no other way...\n> \n> You can found this tools at:\n> \n> http://www.samse.fr/GPL/ora2pg/\n> \n> \n> Regards\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 11:52:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle to Pg tool" }, { "msg_contents": "> On Thu, 10 May 2001, Gilles DAROLD wrote:\n> \n> > Hi,\n> >\n> > Another point regarding /contrib or other directory like /tools is to\n> > centralize tools for Pg. Also I can't be sure to always have an URL.\n> > This one is dependant on the company I'm working now.\n> \n> There's your \"yes\" Bruce. So when you move them there (hopefully to\n> something like /contrib/tools) I'll change the info on the related\n> page.\n\nDone. Should I make /contrib/tools, /contrib/conversion,\n/contrib/extensions, or just leave it alone?\n\nFYI, now that it is in /contrib, it really doesn't have a URL, except\nthe URL to the CVS:\n\n\thttp://www.ca.postgresql.org/docs/pgsql/contrib/\n\nWhen the web copy updates, it will be in:\n\n http://www.ca.postgresql.org/docs/pgsql/contrib/oracle\n\nbut as separate files, not as a tarball.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 11:55:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Oracle to Pg tool" }, { "msg_contents": "On Thu, 10 May 2001, Bruce Momjian wrote:\n\n> > On Thu, 10 May 2001, Gilles DAROLD wrote:\n> >\n> > > Hi,\n> > >\n> > > Another point regarding /contrib or other directory like /tools is to\n> > > centralize tools for Pg. Also I can't be sure to always have an URL.\n> > > This one is dependant on the company I'm working now.\n> >\n> > There's your \"yes\" Bruce. So when you move them there (hopefully to\n> > something like /contrib/tools) I'll change the info on the related\n> > page.\n>\n> Done. Should I make /contrib/tools, /contrib/conversion,\n> /contrib/extensions, or just leave it alone?\n\nNo, that should be ok.\n\n> FYI, now that it is in /contrib, it really doesn't have a URL, except\n> the URL to the CVS:\n>\n> \thttp://www.ca.postgresql.org/docs/pgsql/contrib/\n>\n> When the web copy updates, it will be in:\n>\n> http://www.ca.postgresql.org/docs/pgsql/contrib/oracle\n>\n> but as separate files, not as a tarball.\n\nRight, I should be able to just point to the directory but let me think\non that as I may cron a tarball creation.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 10 May 2001 12:08:43 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Oracle to Pg tool" }, { "msg_contents": "> > Done. 
Should I make /contrib/tools, /contrib/conversion,\n> > /contrib/extensions, or just leave it alone?\n> \n> No, that should be ok.\n\nOK.\n\n> > FYI, now that it is in /contrib, it really doesn't have a URL, except\n> > the URL to the CVS:\n> >\n> > \thttp://www.ca.postgresql.org/docs/pgsql/contrib/\n> >\n> > When the web copy updates, it will be in:\n> >\n> > http://www.ca.postgresql.org/docs/pgsql/contrib/oracle\n> >\n> > but as separate files, not as a tarball.\n> \n> Right, I should be able to just point to the directory but let me think\n> on that as I may cron a tarball creation.\n\nThere are only a few files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 May 2001 12:09:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Oracle to Pg tool" } ]
[ { "msg_contents": "G'day mate,\n\nSure, it's fine to mail me... Actually I haven't found the time to get\ninvolved in the JDBC driver (it now does what I need it to do, and I'm\nbehind schedule with a project of mine ;-) so your involvement is very\nwelcome!\n\nProbably the best thing is to access the CVS, but on the postgres FTP site\nthey also put a snapshot every so often. The opt package contains the jdbc\ndriver ('interface' in postgrespeak):\n\nftp://download.sourceforge.net/pub/mirrors/postgresql/dev/postgresql-opt-snapshot.tar.gz\n\nThe thing I modified was that the setXXX methods in PreparedStatement now have a\ndecent (and JDBC-spec-abiding) null parameter check. (I want, and the JDBC\nspec demands, that when I call setObject(null) or setDate(null), a NULL\nvalue gets sent to the database in the resulting query.)\n\nThe reason why PostgreSql is such a pain in the ass for Java developers is\nthat they have to use transactions to use BLOB's (when updating them, ok,\nbut _also_ when selecting them!). The solution in my opinion would be to\nhave transactions around ResultSets in autocommit mode... \nA similar solution would probably not be wise for PreparedStatements,\nbecause ps's get created ahead of use (could result in lots of lingering\nopen transactions).\n\nI'll send you my driver as well. (Note that you have to change the JDBC1\nimplementation as well; I haven't checked, but I should have kept them in\nsync.)\n\nGreetz from the Low Lands,\nJeroen\n\n\n> -----Original Message-----\n> From: Dmitri Colebatch [mailto:dim@nuix.com.au]\n> Sent: Friday, May 04, 2001 02:32\n> To: jeroen.habets@framfab.nl\n> Subject: postgres jdbc source code\n> \n> \n> Jeroen,\n> \n> Hi, hope its ok to email you - I found your email in the \n> postgres mail \n> archives. I'm trying to find the postgres jdbc driver source \n> code - I've had \n> a look through jdbc.postgresql.org without any luck.. 
I know \n> there's a lot of \n> stuff not implemented and would like to get stuck into it.... \n> do you know \n> where I can find the source (or could I get it off you - \n> email attachment \n> would be fine).\n> \n> cheers\n> dim\n>", "msg_date": "Fri, 4 May 2001 12:34:29 +0200 ", "msg_from": "Jeroen Habets <jeroen.habets@framfab.nl>", "msg_from_op": true, "msg_subject": "RE: postgres jdbc source code" } ]
[ { "msg_contents": "Where can I find a wonderful installer for postgres 7.1 on our windows2000 \nadvanced servers?\n\nWe use postgres on linux systems, but...\n\nI'm not able to find a binary for Postgres 7.1 on the net.\nI don't want to compile, I want a simple installer. There are 200 million \nWindows machines; if you make a maintained binary postgres version for windows, \nthere will probably be more developers and more bug fixing... It's critical.\n\nAlso, on the web site it would be wonderful if the download \nlinks (latest stable, jdbc, odbc, DBD::Pg, etc.) were always there on the navbar.\n\nthanks,\nvalter\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Fri, 04 May 2001 15:49:32 +0200", "msg_from": "\"V. M.\" <txian@hotmail.com>", "msg_from_op": true, "msg_subject": "Postgresql.exe 7.1 for M$ OS" }, { "msg_contents": "\"V. M.\" wrote:\n> \n> Where i can find a wonderful installer for postgres 7.1 on our windows2000\n> advanced servers ?\n> \n\nwww.cygwin.com\n-- \n\"Tout penseur avare de ses pensees est un penseur de Radin.\" --\nPierre Dac\n", "msg_date": "Fri, 04 May 2001 18:21:19 +0200", "msg_from": "Fabrice Scemama <fabrice@scemama.org>", "msg_from_op": false, "msg_subject": "Re: Postgresql.exe 7.1 for M$ OS" } ]
[ { "msg_contents": "\tMy postmaster is refusing to start. I don't know what's wrong. If\nanyone has pointers/tips/whatever, please tell me. \n\tOne of my backup scripts was wrong so I don't have current backups\n(that's what you get for trusting other people to do your backups). Any\nway to recover my databases? \n\tThe oddities started yesterday. Everytime I'd call a PL/pgSQL\nfunction, the backend would die. The function was working perfectly up to\nthat point so I know there's nothing wrong with it.\n\tI am running PG 7.1 from the Debian packages.\n\n\tHere's what my log says:\n\n[tons of these]\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\nServer process (pid 3599) exited with status 11 at Thu May 3 10:32:47 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 10:32:55 2001\nReinitializing shared memory and semaphores\nServer process (pid 3616) exited with status 11 at Thu May 3 10:35:21 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 10:35:21 2001\nReinitializing shared memory and semaphores\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\npq_recvbuf: unexpected EOF on client connection\nServer process (pid 5115) exited with status 11 at Thu May 3 16:03:09 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:03:12 2001 \nReinitializing shared memory and semaphores\nServer process (pid 5118) exited with status 11 at Thu May 3 16:03:42 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:03:42 2001\nReinitializing shared memory and semaphores\nServer process (pid 5122) exited with status 11 at Thu May 3 16:04:00 2001\nTerminating any 
active server processes...\nServer processes were terminated at Thu May 3 16:04:00 2001\nReinitializing shared memory and semaphores\nServer process (pid 5158) exited with status 11 at Thu May 3 16:08:43 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:08:45 2001 \nReinitializing shared memory and semaphores\nServer process (pid 5176) exited with status 11 at Thu May 3 16:15:32 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:15:33 2001\nReinitializing shared memory and semaphores\nThe Data Base System is starting up\nThe Data Base System is starting up\npq_recvbuf: unexpected EOF on client connection\nSmart Shutdown request at Thu May 3 16:17:59 2001\nServer process (pid 5271) exited with status 11 at Thu May 3 16:21:35 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:21:36 2001\nReinitializing shared memory and semaphores\nServer process (pid 5288) exited with status 11 at Thu May 3 16:26:47 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:26:49 2001\nReinitializing shared memory and semaphores\nServer process (pid 5293) exited with status 11 at Thu May 3 16:28:32 2001\nTerminating any active server processes...\nServer processes were terminated at Thu May 3 16:28:32 2001 \nReinitializing shared memory and semaphores\nThe Data Base System is starting up\nServer process (pid 7780) exited with status 512 at Fri May 4 04:06:34 2001\nTerminating any active server processes...\nServer processes were terminated at Fri May 4 04:06:34 2001\nReinitializing shared memory and semaphores\n/usr/lib/postgresql/bin/postmaster: Startup proc 7781 exited with status 512 - abort\n/usr/lib/postgresql/bin/postmaster: Startup proc 9347 exited with status 512 - abort\n/usr/lib/postgresql/bin/postmaster: Startup proc 9364 exited with status 512 - abort\n/usr/lib/postgresql/bin/postmaster: 
Startup proc 9380 exited with status 512 - abort\n\n\tThese last lines are me trying to restart the postmaster via pg_ctl\nwith debugging set to 5.\n\n\tAny help is very much appreciated.\n\n\t-Roberto\n\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \n", "msg_date": "Fri, 4 May 2001 14:08:07 -0600", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": true, "msg_subject": "Postmaster refuses to start " }, { "msg_contents": "Roberto Mello <rmello@cc.usu.edu> writes:\n> \tMy postmaster is refusing to start. I don't know what's wrong. If\n> anyone has pointers/tips/whatever, please tell me. \n\nPerhaps a gdb backtrace from one of the core files would yield clues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 May 2001 19:05:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postmaster refuses to start " } ]
[ { "msg_contents": "I see by the messages that 7.1.1 is in the final packaging. Anyone know\nwhen it will be released?\n\n-Tony\n\n\n", "msg_date": "Fri, 04 May 2001 13:53:48 -0700", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "7.1.1" }, { "msg_contents": "> I see by the messages that 7.1.1 is in the final packaging. Anyone know\n> when it will be released?\n\nOnly Marc knows. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 17:42:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.1" }, { "msg_contents": "On Fri, 4 May 2001, Bruce Momjian wrote:\n\n> > I see by the messages that 7.1.1 is in the final packaging. Anyone know\n> > when it will be released?\n>\n> Only Marc knows. :-)\n\nTomorrow aft ... sorry, got tied up with a client finishing his server\nmove to v7.1 this afternoon, and we hit problems with a programmer who\ndidn't realize that telling the scripts to connect to a specific host was\na good idea :)\n\n\n", "msg_date": "Fri, 4 May 2001 23:55:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.1" }, { "msg_contents": "\nOK, I have updated the file dates for a release tomorrow.\n\n > On Fri, 4 May 2001, Bruce Momjian wrote:\n> \n> > > I see by the messages that 7.1.1 is in the final packaging. Anyone know\n> > > when it will be released?\n> >\n> > Only Marc knows. :-)\n> \n> Tomorrow aft ... 
sorry, got tied up with a client finishing his server\n> move to v7.1 this afternoon, and we hit problems with a programmer who\n> didn't realize that telling the scripts to connect to a specific host was\n> a good idea :)\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 May 2001 22:57:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1.1" }, { "msg_contents": "\nthnks :)\n\nOn Fri, 4 May 2001, Bruce Momjian wrote:\n\n>\n> OK, I have updated the file dates for a release tomorrow.\n>\n> > On Fri, 4 May 2001, Bruce Momjian wrote:\n> >\n> > > > I see by the messages that 7.1.1 is in the final packaging. Anyone know\n> > > > when it will be released?\n> > >\n> > > Only Marc knows. :-)\n> >\n> > Tomorrow aft ... sorry, got tied up with a client finishing his server\n> > move to v7.1 this afternoon, and we hit problems with a programmer who\n> > didn't realize that telling the scripts to connect to a specific host was\n> > a good idea :)\n> >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sat, 5 May 2001 00:03:42 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.1.1" } ]
[ { "msg_contents": "\n I've been doing some research work using the GiST indexes,\nbut I persistently develop a problem where the system doesn't \nmake use of the indexes during the execution of a query. If\nI use the examples provided here:\n\n http://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n\nFor instance, and I place an elog( DEBUG, \"functionname\" )\nin each of the GiST accessor functions, I can witness when\nthe database is making use of the index. During the construction\nof the index, I never have a problem, although during query\nexecution, it seems that my indices aren't getting used at\nall, and the database is simply searching through all of\nthe entries in the database.\n\nThis is a terribly frustrating problem that I encountered\nonce before, but which mysteriously went away after fiddling\nwith the problem for a while. This time, the problem isn't\ngoing away, however. When I trace through the postgres \napplication I can see that it at least examines the opclass\nfor my specialized data types, and detects that there exists\nan index that could be used, but it seems to decide not to\nmake use of it regardless.\n\nIs there an easy way that I can force the use of an index\nduring a query?\n\n-David\n\n----------------------[=========]------------------------\nDavid T. McWherter udmcwher@mcs.drexel.edu\n\n vdiff\n=====\n /vee'dif/ v.,n. Visual diff. The operation of finding\ndifferences between two files by {eyeball search}. The term\n`optical diff' has also been reported, and is sometimes more\nspecifically used for the act of superimposing two nearly identical\nprintouts on one another and holding them up to a light to spot\ndifferences. Though this method is poor for detecting omissions in\nthe `rear' file, it can also be used with printouts of graphics, a\nclaim few if any diff programs can make. 
See {diff}.\n", "msg_date": "Sat, 5 May 2001 03:07:22 -0400", "msg_from": "David McWherter <udmcwher@mcs.drexel.edu>", "msg_from_op": true, "msg_subject": "GiST indexing problems..." }, { "msg_contents": "David,\n\ncould you provide more info (scheme, query, postgresql version)\n\n\tRegards,\n\n\t\tOleg\nOn Sat, 5 May 2001, David McWherter wrote:\n\n>\n> I've been doing some research work using the GiST indexes,\n> but I persistently develop a problem where the system doesn't\n> make use of the indexes during the execution of a query. If\n> I use the examples provided here:\n>\n> http://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n>\n> For instance, and I place an elog( DEBUG, \"functionname\" )\n> in each of the GiST accessor functions, I can witness when\n> the database is making use of the index. During the construction\n> of the index, I never have a problem, although during query\n> execution, it seems that my indices aren't getting used at\n> all, and the database is simply searching through all of\n> the entries in the database.\n>\n> This is a terribly frustrating problem that I encountered\n> once before, but which mysteriously went away after fiddling\n> with the problem for a while. This time, the problem isn't\n> going away, however. When I trace through the postgres\n> application I can see that it at least examines the opclass\n> for my specialized data types, and detects that there exists\n> an index that could be used, but it seems to decide not to\n> make use of it regardless.\n>\n> Is there an easy way that I can force the use of an index\n> during a query?\n>\n> -David\n>\n> ----------------------[=========]------------------------\n> David T. McWherter udmcwher@mcs.drexel.edu\n>\n> vdiff\n> =====\n> /vee'dif/ v.,n. Visual diff. The operation offinding\n> differences between two files by {eyeball search}. 
Theterm\n> `optical diff' has also been reported, and is sometimes more\n> specifically used for the act of superimposing two nearly identical\n> printouts on one another and holding them up to a light to spot\n> differences. Though this method is poor for detecting omissions in\n> the `rear' file, it can also be used with printouts of graphics, a\n> claim few if any diff programs can make. See {diff}.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 5 May 2001 14:00:19 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: GiST indexing problems..." }, { "msg_contents": "\nSure. My postgresql version is 7.0.2. 
\n\nMy database has a datatype called graph that looks like this:\n\nCREATE TYPE graph (\n internallength = VARIABLE,\n input = graph_in,\n output = graph_out\n);\nCREATE OPERATOR ~ ( \n leftarg = graph,\n rightarg = graph,\n procedure = graph_distance,\n commutator = ~\n );\n\nAnd it has a datatype 'graphrange':\n\nCREATE FUNCTION graph_inrange(graph, graphrange)\n RETURNS bool\n AS '/usr/remote/home_u/udmcwher/myprojs/pg_graph.2/graph.so'\n language 'c';\n\nCREATE TYPE graphrange (\n internallength = VARIABLE,\n input = graphrange_in,\n output = graphrange_out\n);\nCREATE OPERATOR << (\n leftarg = graph,\n rightarg = graphrange,\n procedure = graph_inrange\n);\n\nI have a bunch of GiST operators that are created like this:\n CREATE FUNCTION gist_graph_consistent(opaque,graphrange) \n RETURNS bool\n AS '/usr/remote/home_u/udmcwher/myprojs/pg_graph.2/graph.so'\n language 'c';\n /* the same for gist_graph_{compress,decompress,penalty,picksplit,union,same} */\n \n\n\nI've tried adding the parameters 'restrict = eqsel' and 'join = eqjoinsel'\nto the datatype operators, but that doesn't seem to change anything.\n\n\nI construct a new opclass like this:\n\nINSERT INTO pg_opclass (opcname,opcdeftype)\n values ( 'gist_graphrange_ops' );\n\nSELECT o.oid AS opoid, o.oprname\nINTO TABLE graph_ops_tmp\nFROM pg_operator o, pg_type t\nWHERE o.oprleft = t.oid \n and t.typname = 'graph';\nINSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy)\n SELECT am.oid, opcl.oid, c.opoid, 1\n FROM pg_am am, pg_opclass opcl, graph_ops_tmp c\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and c.oprname = '<<';\n\n\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 1\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_consistent';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 2\n FROM pg_am am, 
pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_union';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 3\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_compress';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 4\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_decompress';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 5\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_penalty';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 6\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graph_picksplit';\n\nINSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n SELECT am.oid, opcl.oid, pro.oid, 7\n FROM pg_am am, pg_opclass opcl, pg_proc pro\n WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n and proname = 'gist_graphrange_same';\n\nI construct a table like this:\n \nCREATE TABLE repos ( a graph, file varchar(512) );\nINSERT INTO repos VALUES ( import_graphfile('/tmp/test1'), '/tmp/test1' );\nINSERT INTO repos VALUES ( import_graphfile('/tmp/test2'), '/tmp/test2' );\n\nWhat this does is a little bit weird, it reads in the test1 and test2 datafiles\ninto the database, storing them as large objects. 
Then, it constructs\ngraph objects which have their oid's, and returns them from import_graphfile.\n\nI then try to construct an index like this:\n\nCREATE INDEX repos_index ON repos \n USING gist ( a gist_graphrange_ops ) ;\n\nI've also tried a:graph and a:graphrange, but I don't think it changes anything.\n\nMy queries look like:\n\n SELECT * from repos where a << '(oid-num,int-num)'::graphrange;\n\nThe function operator returns a boolean if a particular relation holds between\nthe graph object and the graphrange object.\n\nThe GiST compress operator will convert leaf GRAPH keys into \ngraphrange keys for internal use. Each of my GiST operators\ncall elog( DEBUG, \"function-name\" ) as they're called. When\nconstructing the index, compress,decompress,picksplit,union\nare called as expected. During the execution of the query,\nhowever, nothing happens.\n\nI've found the same exact results using the 'pggist' examples\n(a suite including intproc,boxproc,polyproc,textproc), \nand the examples found here: http://wit.mcs.anl.gov/~selkovjr/pg_extensions/contrib-7.0.tgz.\nThe 'cube' test suite at that site is somewhat straightforward\nto invoke, and shows the same results. \n\n-david\n\n\nOleg Bartunov writes:\n > David,\n > \n > could you provide more info (scheme, query, postgresql version)\n > \n > \tRegards,\n > \n > \t\tOleg\n > On Sat, 5 May 2001, David McWherter wrote:\n > \n > >\n > > I've been doing some research work using the GiST indexes,\n > > but I persistently develop a problem where the system doesn't\n > > make use of the indexes during the execution of a query. If\n > > I use the examples provided here:\n > >\n > > http://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n > >\n > > For instance, and I place an elog( DEBUG, \"functionname\" )\n > > in each of the GiST accessor functions, I can witness when\n > > the database is making use of the index. 
During the construction\n > > of the index, I never have a problem, although during query\n > > execution, it seems that my indices aren't getting used at\n > > all, and the database is simply searching through all of\n > > the entries in the database.\n > >\n > > This is a terribly frustrating problem that I encountered\n > > once before, but which mysteriously went away after fiddling\n > > with the problem for a while. This time, the problem isn't\n > > going away, however. When I trace through the postgres\n > > application I can see that it at least examines the opclass\n > > for my specialized data types, and detects that there exists\n > > an index that could be used, but it seems to decide not to\n > > make use of it regardless.\n > >\n > > Is there an easy way that I can force the use of an index\n > > during a query?\n > >\n > > -David\n > >\n > > ----------------------[=========]------------------------\n > > David T. McWherter udmcwher@mcs.drexel.edu\n > >\n > > vdiff\n > > =====\n > > /vee'dif/ v.,n. Visual diff. The operation offinding\n > > differences between two files by {eyeball search}. Theterm\n > > `optical diff' has also been reported, and is sometimes more\n > > specifically used for the act of superimposing two nearly identical\n > > printouts on one another and holding them up to a light to spot\n > > differences. Though this method is poor for detecting omissions in\n > > the `rear' file, it can also be used with printouts of graphics, a\n > > claim few if any diff programs can make. 
See {diff}.\n > >\n > > ---------------------------(end of broadcast)---------------------------\n > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n > >\n > \n > \tRegards,\n > \t\tOleg\n > _____________________________________________________________\n > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n > Sternberg Astronomical Institute, Moscow University (Russia)\n > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n > phone: +007(095)939-16-83, +007(095)939-23-83\n\n----------------------[=========]------------------------\nDavid T. McWherter udmcwher@mcs.drexel.edu\n\nIf God had meant for us to be in the Army, we would have been born with\ngreen, baggy skin.\n", "msg_date": "Sat, 5 May 2001 08:55:46 -0400", "msg_from": "David McWherter <udmcwher@mcs.drexel.edu>", "msg_from_op": true, "msg_subject": "Re: GiST indexing problems..." }, { "msg_contents": "David,\n\nGiST prior 7.1 was broken in several respects. Please,\ntry 7.1 and examples from contrib/intarray. It should works.\nbtw, you'll have compress function actually works.\n\n\tRegards,\n\n\t\tOleg\nOn Sat, 5 May 2001, David McWherter wrote:\n\n>\n> Sure. 
My postgresql version is 7.0.2.\n>\n> My database has a datatype called graph that looks like this:\n>\n> CREATE TYPE graph (\n> internallength = VARIABLE,\n> input = graph_in,\n> output = graph_out\n> );\n> CREATE OPERATOR ~ (\n> leftarg = graph,\n> rightarg = graph,\n> procedure = graph_distance,\n> commutator = ~\n> );\n>\n> And it has a datatype 'graphrange':\n>\n> CREATE FUNCTION graph_inrange(graph, graphrange)\n> RETURNS bool\n> AS '/usr/remote/home_u/udmcwher/myprojs/pg_graph.2/graph.so'\n> language 'c';\n>\n> CREATE TYPE graphrange (\n> internallength = VARIABLE,\n> input = graphrange_in,\n> output = graphrange_out\n> );\n> CREATE OPERATOR << (\n> leftarg = graph,\n> rightarg = graphrange,\n> procedure = graph_inrange\n> );\n>\n> I have a bunch of GiST operators that are created like this:\n> CREATE FUNCTION gist_graph_consistent(opaque,graphrange)\n> RETURNS bool\n> AS '/usr/remote/home_u/udmcwher/myprojs/pg_graph.2/graph.so'\n> language 'c';\n> /* the same for gist_graph_{compress,decompress,penalty,picksplit,union,same} */\n>\n>\n>\n> I've tried adding the parameters 'restrict = eqsel' and 'join = eqjoinsel'\n> to the datatype operators, but that doesn't seem to change anything.\n>\n>\n> I construct a new opclass like this:\n>\n> INSERT INTO pg_opclass (opcname,opcdeftype)\n> values ( 'gist_graphrange_ops' );\n>\n> SELECT o.oid AS opoid, o.oprname\n> INTO TABLE graph_ops_tmp\n> FROM pg_operator o, pg_type t\n> WHERE o.oprleft = t.oid\n> and t.typname = 'graph';\n> INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy)\n> SELECT am.oid, opcl.oid, c.opoid, 1\n> FROM pg_am am, pg_opclass opcl, graph_ops_tmp c\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and c.oprname = '<<';\n>\n>\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 1\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 
'gist_graph_consistent';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 2\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graph_union';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 3\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graph_compress';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 4\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graph_decompress';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 5\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graph_penalty';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 6\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graph_picksplit';\n>\n> INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)\n> SELECT am.oid, opcl.oid, pro.oid, 7\n> FROM pg_am am, pg_opclass opcl, pg_proc pro\n> WHERE amname = 'gist' and opcname = 'gist_graphrange_ops'\n> and proname = 'gist_graphrange_same';\n>\n> I construct a table like this:\n>\n> CREATE TABLE repos ( a graph, file varchar(512) );\n> INSERT INTO repos VALUES ( import_graphfile('/tmp/test1'), '/tmp/test1' );\n> INSERT INTO repos VALUES ( import_graphfile('/tmp/test2'), '/tmp/test2' );\n>\n> What this does is a little bit weird, it reads in the test1 and test2 datafiles\n> into the database, storing them as large objects. 
Then, it constructs\n> graph objects which have their oid's, and returns them from import_graphfile.\n>\n> I then try to construct an index like this:\n>\n> CREATE INDEX repos_index ON repos\n> USING gist ( a gist_graphrange_ops ) ;\n>\n> I've also tried a:graph and a:graphrange, but I don't think it changes anything.\n>\n> My queries look like:\n>\n> SELECT * from repos where a << '(oid-num,int-num)'::graphrange;\n>\n> The function operator returns a boolean if a particular relation holds between\n> the graph object and the graphrange object.\n>\n> The GiST compress operator will convert leaf GRAPH keys into\n> graphrange keys for internal use. Each of my GiST operators\n> call elog( DEBUG, \"function-name\" ) as they're called. When\n> constructing the index, compress,decompress,picksplit,union\n> are called as expected. During the execution of the query,\n> however, nothing happens.\n>\n> I've found the same exact results using the 'pggist' examples\n> (a suite including intproc,boxproc,polyproc,textproc),\n> and the examples found here: http://wit.mcs.anl.gov/~selkovjr/pg_extensions/contrib-7.0.tgz.\n> The 'cube' test suite at that site is somewhat straightforward\n> to invoke, and shows the same results.\n>\n> -david\n>\n>\n> Oleg Bartunov writes:\n> > David,\n> >\n> > could you provide more info (scheme, query, postgresql version)\n> >\n> > \tRegards,\n> >\n> > \t\tOleg\n> > On Sat, 5 May 2001, David McWherter wrote:\n> >\n> > >\n> > > I've been doing some research work using the GiST indexes,\n> > > but I persistently develop a problem where the system doesn't\n> > > make use of the indexes during the execution of a query. If\n> > > I use the examples provided here:\n> > >\n> > > http://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n> > >\n> > > For instance, and I place an elog( DEBUG, \"functionname\" )\n> > > in each of the GiST accessor functions, I can witness when\n> > > the database is making use of the index. 
During the construction\n> > > of the index, I never have a problem, although during query\n> > > execution, it seems that my indices aren't getting used at\n> > > all, and the database is simply searching through all of\n> > > the entries in the database.\n> > >\n> > > This is a terribly frustrating problem that I encountered\n> > > once before, but which mysteriously went away after fiddling\n> > > with the problem for a while. This time, the problem isn't\n> > > going away, however. When I trace through the postgres\n> > > application I can see that it at least examines the opclass\n> > > for my specialized data types, and detects that there exists\n> > > an index that could be used, but it seems to decide not to\n> > > make use of it regardless.\n> > >\n> > > Is there an easy way that I can force the use of an index\n> > > during a query?\n> > >\n> > > -David\n> > >\n> > > ----------------------[=========]------------------------\n> > > David T. McWherter udmcwher@mcs.drexel.edu\n> > >\n> > > vdiff\n> > > =====\n> > > /vee'dif/ v.,n. Visual diff. The operation offinding\n> > > differences between two files by {eyeball search}. Theterm\n> > > `optical diff' has also been reported, and is sometimes more\n> > > specifically used for the act of superimposing two nearly identical\n> > > printouts on one another and holding them up to a light to spot\n> > > differences. Though this method is poor for detecting omissions in\n> > > the `rear' file, it can also be used with printouts of graphics, a\n> > > claim few if any diff programs can make. 
See {diff}.\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n>\n> ----------------------[=========]------------------------\n> David T. McWherter udmcwher@mcs.drexel.edu\n>\n> If God had meant for us to be in the Army, we would have been born with\n> green, baggy skin.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 5 May 2001 17:00:50 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: GiST indexing problems..." }, { "msg_contents": "David McWherter <udmcwher@mcs.drexel.edu> writes:\n> I've tried adding the parameters 'restrict = eqsel' and 'join = eqjoinsel'\n> to the datatype operators, but that doesn't seem to change anything.\n\nYou might have better luck if you use area-related selectivity\nestimators. 
Your problem seems to be that the optimizer doesn't\nthink the index is worth using, and the cause almost certainly is\noverly pessimistic selectivity estimates for the indexable operators.\nareasel and friends are completely bogus, but at least they deliver\nsmall enough numbers to encourage use of the index ;-)\n\nAs Oleg says, the GiST support in 7.0.* is in pretty poor shape\n(it had been suffering from neglect for a long time). Try 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 May 2001 10:52:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST indexing problems... " }, { "msg_contents": "\nSo, I've migrated my code to do the TOAST'ing thing required of 7.1\nclients, and I've updated my operator to use the areaselectors:\n\t CREATE OPERATOR = (\n\t leftarg = graph,\n\t rightarg = graphrange,\n\t procedure = graph_inrange,\n\t commutator = '=',\n\t restrict = areasel,\n\t join = areajoinsel\n\t );\n\nBut I still get the issue that my queries don't seem to trigger the\nGiST indexes to be used. Perhaps the problem is that the system\njust thinks that the query doesn't need an index to increase \nperformance, i've only got about a dozen elements in the database\nright now for testing purposes.\n\n-David\n\nTom Lane writes:\n > David McWherter <udmcwher@mcs.drexel.edu> writes:\n > > I've tried adding the parameters 'restrict = eqsel' and 'join = eqjoinsel'\n > > to the datatype operators, but that doesn't seem to change anything.\n > \n > You might have better luck if you use area-related selectivity\n > estimators. 
Your problem seems to be that the optimizer doesn't\n > think the index is worth using, and the cause almost certainly is\n > overly pessimistic selectivity estimates for the indexable operators.\n > areasel and friends are completely bogus, but at least they deliver\n > small enough numbers to encourage use of the index ;-)\n > \n > As Oleg says, the GiST support in 7.0.* is in pretty poor shape\n > (it had been suffering from neglect for a long time). Try 7.1.\n > \n > \t\t\tregards, tom lane\n\n----------------------[=========]------------------------\nDavid T. McWherter udmcwher@mcs.drexel.edu\n\nThe truth is rarely pure, and never simple.\n\t\t-- Oscar Wilde\n", "msg_date": "Sat, 5 May 2001 11:20:20 -0400", "msg_from": "David McWherter <udmcwher@mcs.drexel.edu>", "msg_from_op": true, "msg_subject": "Re: GiST indexing problems... " }, { "msg_contents": "David McWherter <udmcwher@mcs.drexel.edu> writes:\n> But I still get the issue that my queries don't seem to trigger the\n> GiST indexes to be used. Perhaps the problem is that the system\n> just thinks that the query doesn't need an index to increase \n> performance, i've only got about a dozen elements in the database\n> right now for testing purposes.\n\nAh, so. You're right, you need more data.\n\nYou could try\n\tSET ENABLE_SEQSCAN TO OFF\nif you just want to force use of the index for testing purposes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 May 2001 11:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST indexing problems... " }, { "msg_contents": "\nBeautiful! That fixed my problem. 
One thing that might be\nuseful is to update the Index method-extension documentation\non the web site to reflect this problem a bit...if somebody\njust wants to get a working index, it can be a bit misleading.\nI'll probably go and see if I can construct a few words on\nit after my current workload subsides a bit.\n\n-David\n\nTom Lane writes:\n > David McWherter <udmcwher@mcs.drexel.edu> writes:\n > > But I still get the issue that my queries don't seem to trigger the\n > > GiST indexes to be used. Perhaps the problem is that the system\n > > just thinks that the query doesn't need an index to increase \n > > performance, i've only got about a dozen elements in the database\n > > right now for testing purposes.\n > \n > Ah, so. You're right, you need more data.\n > \n > You could try\n > \tSET ENABLE_SEQSCAN TO OFF\n > if you just want to force use of the index for testing purposes.\n > \n > \t\t\tregards, tom lane\n\n----------------------[=========]------------------------\nDavid T. McWherter udmcwher@mcs.drexel.edu\n\nNever pay a compliment as if expecting a receipt.\n", "msg_date": "Sat, 5 May 2001 11:29:32 -0400", "msg_from": "David McWherter <udmcwher@mcs.drexel.edu>", "msg_from_op": true, "msg_subject": "Re: GiST indexing problems... " } ]
[ { "msg_contents": "There's a TODO item to make elog(LOG) a separate level. I propose the\nname INFO. It would be identical to DEBUG in effect, only with a\ndifferent label. Additionally, all DEBUG logging should either be\ndisabled unless the debug_level is greater than zero, or alternatively\nsome elog(DEBUG) calls should be converted to INFO conditional on a\nconfiguration setting (like log_pid, for example).\n\nThe stricter distinction between DEBUG and INFO would also yield the\npossibility of optionally sending DEBUG output to the frontend, as has\nbeen requested a few times.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 5 May 2001 10:58:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "elog(LOG), elog(DEBUG)" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010505 02:06] wrote:\n> There's a TODO item to make elog(LOG) a separate level. I propose the\n> name INFO. It would be identical to DEBUG in effect, only with a\n> different label. Additionally, all DEBUG logging should either be\n> disabled unless the debug_level is greater than zero, or alternatively\n> some elog(DEBUG) calls should be converted to INFO conditional on a\n> configuration setting (like log_pid, for example).\n> \n> The stricter distinction between DEBUG and INFO would also yield the\n> possibility of optionally sending DEBUG output to the frontend, as has\n> been requested a few times.\n\nINFO makes sense as afaik it maps to syslog.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Sat, 5 May 2001 02:14:48 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: elog(LOG), elog(DEBUG)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> There's a TODO item to make elog(LOG) a separate level. I propose the\n> name INFO. It would be identical to DEBUG in effect, only with a\n> different label.\n\nThis conveys nothing to my mind. How should I determine whether a given\nelog call ought to use INFO or DEBUG?\n\n> The stricter distinction between DEBUG and INFO would also yield the\n> possibility of optionally sending DEBUG output to the frontend, as has\n> been requested a few times.\n\nIt's not a \"strict distinction\" unless we have a clear policy as to what\nthe different levels mean. I think setting and documenting that policy\nis the hard part of the task.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 May 2001 10:44:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(LOG), elog(DEBUG) " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > There's a TODO item to make elog(LOG) a separate level. I propose the\n> > name INFO. It would be identical to DEBUG in effect, only with a\n> > different label.\n>\n> This conveys nothing to my mind. How should I determine whether a given\n> elog call ought to use INFO or DEBUG?\n\nDEBUG is for messages intended to help locating and analyzing faults in\nthe source code (i.e., debugging). Normal users don't need this during\nnormal operation.\n\nINFO (or whatever the name) is for messages that administrator's might be\ninterested in for auditing and tuning.\n\nExample:\n\nelog(DEBUG, \"heapgettup(..., b=0x%x, nkeys=%d, key=0x%x\", buffer, nkeys, key);\n\nvs.\n\nelog(INFO, \"connection: host=%s user=%s database=%s\", ...);\n\nThere are maybe a dozen potential INFO messages, plus a few to be\nconverted fprintf's.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 5 May 2001 22:57:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: elog(LOG), elog(DEBUG) " }, { "msg_contents": "> Tom Lane writes:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > There's a TODO item to make elog(LOG) a separate level. I propose the\n> > > name INFO. It would be identical to DEBUG in effect, only with a\n> > > different label.\n> >\n> > This conveys nothing to my mind. How should I determine whether a given\n> > elog call ought to use INFO or DEBUG?\n> \n> DEBUG is for messages intended to help locating and analyzing faults in\n> the source code (i.e., debugging). Normal users don't need this during\n> normal operation.\n> \n> INFO (or whatever the name) is for messages that administrator's might be\n> interested in for auditing and tuning.\n\nSeems like a good idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 12:45:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(LOG), elog(DEBUG)" } ]
[ { "msg_contents": "Hello\n\nI see the following \n\nproba=> select * from pg_language;\nlanname |lanispl|lanpltrusted|lanplcallfoid|lancompiler \n--------+-------+------------+-------------+--------------\ninternal|f |f | 0|n/a \nlisp |f |f | 0|/usr/ucb/liszt\nC |f |f | 0|/bin/cc \nsql |f |f | 0|postgres \nplpgsql |t |t | 56702850|PL/pgSQL \n\nWould you mind to tell me is it possible to use Lisp\nas procedural language ? Which Lisp (e.g Emacs-list,\nCommon Lisp, etc.). If it is possible could you give\nme hints how I can do that ?\n\nI'm using PosgtreSQL 7.0, Slackware 7.0, also I have\nCommon Lisp (CMUCL 18c) installed.\n\n-- \nVladimir Zolotych gsmith@eurocom.od.ua\n", "msg_date": "Sat, 05 May 2001 16:54:07 +0300", "msg_from": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "Lisp as procedural language" }, { "msg_contents": "On Sat, May 05, 2001 at 04:54:07PM +0300, Vladimir V. Zolotych wrote:\n> I see the following \n> \n> proba=> select * from pg_language;\n\n> lisp |f |f | 0|/usr/ucb/liszt\n\n> Would you mind to tell me is it possible to use Lisp\n> as procedural language ? Which Lisp (e.g Emacs-list,\n> Common Lisp, etc.). If it is possible could you give\n> me hints how I can do that ?\n\nHuh? Seems like you already have using lisp? Ask your\nsysadmin where did he got it? And meybe you/he could\npost it to PostgreSQL lists too?\n\nOr did you simply inserted a new row into pg_language?\nWell, that's not the way it works. There needs to be a glue\nlayer between PostgreSQL and a language. You should study\ncode in pgsql/src/pl/{plperl,tcl} for how it is\nimplementer for Perl and Tcl. There is also plpgsql which\nis stand-alone module.\n\n> I'm using PosgtreSQL 7.0, Slackware 7.0, also I have\n> Common Lisp (CMUCL 18c) installed.\n\nOk, but you need a little bit more for that...\n\n-- \nmarko\n\n", "msg_date": "Sat, 5 May 2001 22:21:57 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "Vladimir V. Zolotych writes:\n\n> I see the following\n>\n> proba=> select * from pg_language;\n> lanname |lanispl|lanpltrusted|lanplcallfoid|lancompiler\n> --------+-------+------------+-------------+--------------\n> internal|f |f | 0|n/a\n> lisp |f |f | 0|/usr/ucb/liszt\n[...]\n\nThis must have been an artifact from the time when part of the Postgres\nsystem was written in Lisp. A Lisp procedural language never actually\nexisted in PostgreSQL.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 5 May 2001 23:39:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This must have been an artifact from the time when part of the Postgres\n> system was written in Lisp. A Lisp procedural language never actually\n> existed in PostgreSQL.\n\n[ Digs in archives... ] The pg_language entry that Vladimir refers to\nwas still present as late as Postgres 6.5 --- but I agree that it must\nhave been vestigial long before that. Certainly, at one time large\nchunks of Postgres *were* written in Lisp, and I imagine that the\npg_language entry did something useful when that was true. But it was\ndead code in Postgres 4.2 (1994), which is the oldest source I have;\nthere is no Lisp code remaining in 4.2.\n\nIt'd theoretically be possible to support Lisp in the same way as we\ncurrently support Tcl, Perl, etc. The hard part is to find a suitable\ninterpreter that is designed to be dynamically linked into other\napplications. Perl still hasn't got that quite right, and I imagine\nit's an even more foreign idea for most Lisp systems...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 01:12:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language " }, { "msg_contents": "On Sun, May 06, 2001 at 01:12:45AM -0400, Tom Lane wrote:\n> It'd theoretically be possible to support Lisp in the same way as we\n> currently support Tcl, Perl, etc. The hard part is to find a suitable\n> interpreter that is designed to be dynamically linked into other\n> applications. Perl still hasn't got that quite right, and I imagine\n> it's an even more foreign idea for most Lisp systems...\n\nlibrep for emacs-like-lisp and I remember seeing couple of\nScheme libs too (guile, cant remember more ATM) Not that I\nhave looked them closely.\n\n-- \nmarko\n\n", "msg_date": "Sun, 6 May 2001 12:26:48 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "\nCan someone explain why we have a lisp.sgml file in our docs? Seems it\ndescripes a 3rd party Emacs interface. I don't think we should start\ndistributing docs for software we don't distribute. Can I remove it?\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > This must have been an artifact from the time when part of the Postgres\n> > system was written in Lisp. A Lisp procedural language never actually\n> > existed in PostgreSQL.\n> \n> [ Digs in archives... ] The pg_language entry that Vladimir refers to\n> was still present as late as Postgres 6.5 --- but I agree that it must\n> have been vestigial long before that. Certainly, at one time large\n> chunks of Postgres *were* written in Lisp, and I imagine that the\n> pg_language entry did something useful when that was true. But it was\n> dead code in Postgres 4.2 (1994), which is the oldest source I have;\n> there is no Lisp code remaining in 4.2.\n> \n> It'd theoretically be possible to support Lisp in the same way as we\n> currently support Tcl, Perl, etc. The hard part is to find a suitable\n> interpreter that is designed to be dynamically linked into other\n> applications. Perl still hasn't got that quite right, and I imagine\n> it's an even more foreign idea for most Lisp systems...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 14:06:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can someone explain why we have a lisp.sgml file in our docs? Seems it\n> descripes a 3rd party Emacs interface. I don't think we should start\n> distributing docs for software we don't distribute. Can I remove it?\n\nOnly if you move the pointer to someplace more appropriate (don't we\nhave somewhere on the website with links to outside software?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 17:22:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can someone explain why we have a lisp.sgml file in our docs? Seems it\n> > descripes a 3rd party Emacs interface. I don't think we should start
Can I remove it?\n> \n> Only if you move the pointer to someplace more appropriate (don't we\n> have somewhere on the website with links to outside software?)\n\nWe sure do:\n\n\thttp://postgresql.readysetnet.com/interfaces.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 18:06:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "Bruce Momjian writes:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Can someone explain why we have a lisp.sgml file in our docs? Seems it\n> > > descripes a 3rd party Emacs interface. I don't think we should start\n> > > distributing docs for software we don't distribute. Can I remove it?\n> >\n> > Only if you move the pointer to someplace more appropriate (don't we\n> > have somewhere on the website with links to outside software?)\n>\n> We sure do:\n>\n> \thttp://postgresql.readysetnet.com/interfaces.html\n\nMight as well move pgadmin.sgml there too.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 8 May 2001 21:23:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > Can someone explain why we have a lisp.sgml file in our docs? Seems it\n> > > > descripes a 3rd party Emacs interface. I don't think we should start\n> > > > distributing docs for software we don't distribute. 
Can I remove it?\n> > >\n> > > Only if you move the pointer to someplace more appropriate (don't we\n> > > have somewhere on the website with links to outside software?)\n> >\n> > We sure do:\n> >\n> > \thttp://postgresql.readysetnet.com/interfaces.html\n> \n> Might as well move pgadmin.sgml there too.\n\nAgreed. Removed. \n\nHowever, I see no mention of pgadmin in our Interfaces or Enhancements\npage. Do we want to add it? There is lots of stuff on greatbridge.org\nnow, (including my pgmonitor :-). Is there stuff on pgsql.com too? Not\nsure how to get those listed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 May 2001 15:29:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lisp as procedural language" } ]
[ { "msg_contents": "\npostgresql=> \\h create table\nCommand: CREATE TABLE\nDescription: Creates a new table\nSyntax:\nCREATE [ TEMPORARY | TEMP ] TABLE table (\n column type\n [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n ^\n This should be a ] |\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 5 May 2001 11:22:35 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "typo in psql's help" }, { "msg_contents": "> postgresql=> \\h create table\n> Command: CREATE TABLE\n> Description: Creates a new table\n> Syntax:\n> CREATE [ TEMPORARY | TEMP ] TABLE table (\n> column type\n> [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n> [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n> ^\n> This should be a ] |\n\nVince, I can't find this anywhere. What version is this? I bet we\nalready fixed it. In fact, I think I remember seeing the fix a while\nago.\n\nMy pgsql \\h create table shows:\n\n---------------------------------------------------------------------------\n\nCREATE [ TEMPORARY | TEMP ] TABLE table_name (\n { column_name type [ column_constraint [ ... ] ]\n | table_constraint } [, ... ]\n ) [ INHERITS ( parent_table [, ... 
] ) ]\n\nwhere column_constraint can be:\n[ CONSTRAINT constraint_name ]\n{ NOT NULL | NULL | UNIQUE | PRIMARY KEY | DEFAULT value | CHECK (condition) |\n REFERENCES table [ ( column ) ] [ MATCH FULL | MATCH PARTIAL ]\n [ ON DELETE action ] [ ON UPDATE action ]\n [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]\n}\n\nand table_constraint can be:\n[ CONSTRAINT constraint_name ]\n{ UNIQUE ( column_name [, ... ] ) |\n PRIMARY KEY ( column_name [, ... ] ) |\n CHECK ( condition ) |\n FOREIGN KEY ( column_name [, ... ] ) REFERENCES table [ ( column [, ... ] ) ]\n [ MATCH FULL | MATCH PARTIAL ] [ ON DELETE action ] [ ON UPDATE action ]\n [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]\n}\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 12:06:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: typo in psql's help" }, { "msg_contents": "On Mon, 7 May 2001, Bruce Momjian wrote:\n\n> > postgresql=> \\h create table\n> > Command: CREATE TABLE\n> > Description: Creates a new table\n> > Syntax:\n> > CREATE [ TEMPORARY | TEMP ] TABLE table (\n> > column type\n> > [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n> > [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n> > ^\n> > This should be a ] |\n>\n> Vince, I can't find this anywhere. What version is this? I bet we\n> already fixed it. In fact, I think I remember seeing the fix a while\n> ago.\n\nYeah, I got a note from Peter saying it was fixed in 7.1. 
Silly me, I\nthought hub was running 7.1, psql must be 7.0.x.\n\n---\npostgresql=> select version();\n version\n-------------------------------------------------------------------\n PostgreSQL 7.1 on i386-unknown-freebsd4.2, compiled by GCC 2.95.2\n(1 row)\n\npostgresql=> \\h create table\nCommand: CREATE TABLE\nDescription: Creates a new table\nSyntax:\nCREATE [ TEMPORARY | TEMP ] TABLE table (\n column type\n [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n [, ... ]\n [, PRIMARY KEY ( column [, ...] ) ]\n [, CHECK ( condition ) ]\n [, table_constraint_clause ]\n ) [ INHERITS ( inherited_table [, ...] ) ]\n---\n\nas just seen on hub a few minutes ago.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 7 May 2001 12:45:44 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: typo in psql's help" }, { "msg_contents": "On Mon, 7 May 2001, Bruce Momjian wrote:\n\n> > postgresql=> \\h create table\n> > Command: CREATE TABLE\n> > Description: Creates a new table\n> > Syntax:\n> > CREATE [ TEMPORARY | TEMP ] TABLE table (\n> > column type\n> > [ NULL | NOT NULL ] [ UNIQUE ] [ DEFAULT value ]\n> > [column_constraint_clause | PRIMARY KEY } [ ... ] ]\n> > ^\n> > This should be a ] |\n>\n> Vince, I can't find this anywhere. What version is this? I bet we\n> already fixed it. In fact, I think I remember seeing the fix a while\n> ago.\n\nDid someone delete this one from the database? 
I just went to close it\nout and it's gone.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 7 May 2001 12:56:03 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: typo in psql's help" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Yeah, I got a note from Peter saying it was fixed in 7.1. Silly me, I\n> thought hub was running 7.1, psql must be 7.0.x.\n\nLooks like there's an older psql in your PATH. You could make sure with\n\"psql -V\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 17:03:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: typo in psql's help " }, { "msg_contents": "On Mon, 7 May 2001, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Yeah, I got a note from Peter saying it was fixed in 7.1. Silly me, I\n> > thought hub was running 7.1, psql must be 7.0.x.\n>\n> Looks like there's an older psql in your PATH. You could make sure with\n> \"psql -V\".\n\nYup. 
7.0.3.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 7 May 2001 21:46:40 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: typo in psql's help " } ]
[ { "msg_contents": "A small debate started with bad performance on ReiserFS. I pondered the likely\nadvantages to raw device access. It also occured to me that the FAT file system\nis about as close to a managed raw device as one could get. So I did some\ntests:\nThe hardware:\n\nA PII system running Linux 7.0, with 2.2.16-2.\n256M RAM\nIDE home hard disk.\nAdaptec 2740 with two SCSI drives\nA 9G Seagate ST19171W as /dev/sda1 mounted as /sda1\nA 4G Seagate ST15150W as /dev/sdb1 mounted as /sdb1\n/sda1 has a ext2 file system, and is used as \"base\" with a symlink.\n/sdb1 is either an ext2 or FAT file system used as \"pg_xlog\" with a symlink.\n\n\nIn a clean Postgres environment, I initialized pgbench as:\n./pgbench -i -s 10 -d pgbench\n\nI used this script to produce the results:\n\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 1\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 2\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 3\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 4\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 5\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 6\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 7\"\npsql -U mohawk pgbench -c \"checkpoint; \"\nsu mohawk -c \"./pgbench -d pgbench -t 32 -c 8\"\n\n(My postgres user is \"mohawk\")\n\nI had to modify xlog.c to use \"rename\" instead of link. 
And I had to explicitly\nset ownership of the FAT file system to the postgres user during mount.\n\nI ran the script twice as:\n\n./test.sh > ext2.log\n\n(Then rebuilt a fresh database and formatted sdb1 as fat)\n./test.sh > fat.log\n\nHere is a diff of the two runs:\n\n--- ext2.log\tSat May 5 12:58:07 2001\n+++ fat.log\tSat May 5 12:58:07 2001\n@@ -5,8 +5,8 @@\n number of clients: 1\n number of transactions per client: 32\n number of transactions actually processed: 32/32\n-tps = 18.697006(including connections establishing)\n-tps = 19.193225(excluding connections establishing)\n+tps = 37.439512(including connections establishing)\n+tps = 39.710461(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 2 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -14,8 +14,8 @@\n number of clients: 2\n number of transactions per client: 32\n number of transactions actually processed: 64/64\n-tps = 32.444226(including connections establishing)\n-tps = 33.499452(excluding connections establishing)\n+tps = 44.782177(including connections establishing)\n+tps = 46.799328(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 3 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -23,8 +23,8 @@\n number of clients: 3\n number of transactions per client: 32\n number of transactions actually processed: 96/96\n-tps = 43.042861(including connections establishing)\n-tps = 44.816086(excluding connections establishing)\n+tps = 55.416117(including connections establishing)\n+tps = 58.057013(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 4 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -32,8 +32,8 @@\n number of clients: 4\n number of transactions per client: 32\n number of transactions actually processed: 128/128\n-tps = 46.033959(including connections establishing)\n-tps = 47.681683(excluding connections 
establishing)\n+tps = 61.752368(including connections establishing)\n+tps = 64.796970(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 5 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -41,8 +41,8 @@\n number of clients: 5\n number of transactions per client: 32\n number of transactions actually processed: 160/160\n-tps = 49.980258(including connections establishing)\n-tps = 51.874653(excluding connections establishing)\n+tps = 63.124090(including connections establishing)\n+tps = 67.225563(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 6 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -50,8 +50,8 @@\n number of clients: 6\n number of transactions per client: 32\n number of transactions actually processed: 192/192\n-tps = 51.800192(including connections establishing)\n-tps = 53.752739(excluding connections establishing)\n+tps = 65.452545(including connections establishing)\n+tps = 68.741933(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 7 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -59,8 +59,8 @@\n number of clients: 7\n number of transactions per client: 32\n number of transactions actually processed: 224/224\n-tps = 52.652660(including connections establishing)\n-tps = 54.616802(excluding connections establishing)\n+tps = 66.525419(including connections establishing)\n+tps = 69.727409(excluding connections establishing)\n CHECKPOINT\n pghost: (null) pgport: (null) nclients: 8 nxacts: 32 dbName: pgbench\n transaction type: TPC-B (sort of)\n@@ -68,5 +68,5 @@\n number of clients: 8\n number of transactions per client: 32\n number of transactions actually processed: 256/256\n-tps = 55.440884(including connections establishing)\n-tps = 57.525931(excluding connections establishing)\n+tps = 67.331052(including connections establishing)\n+tps = 70.575482(excluding connections 
establishing)\n", "msg_date": "Sat, 05 May 2001 13:09:38 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "File system performance and pg_xlog" }, { "msg_contents": "On Sat, May 05, 2001 at 01:09:38PM -0400, mlw wrote:\n> A small debate started with bad performance on ReiserFS. I pondered the likely\n> advantages to raw device access. It also occured to me that the FAT file system\n> is about as close to a managed raw device as one could get. So I did some\n> tests:\n\n> /sdb1 is either an ext2 or FAT file system used as \"pg_xlog\" with a symlink.\n\nOne little thought: does mounting ext2 with 'noatime' makes any\ndifference? AFAIK fat does not have concept of atime, so then\nit would be more fair? Just a thought.\n\n-- \nmarko\n\n", "msg_date": "Sat, 5 May 2001 22:00:47 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "Marko Kreen wrote:\n> \n> On Sat, May 05, 2001 at 01:09:38PM -0400, mlw wrote:\n> > A small debate started with bad performance on ReiserFS. I pondered the likely\n> > advantages to raw device access. It also occured to me that the FAT file system\n> > is about as close to a managed raw device as one could get. So I did some\n> > tests:\n> \n> > /sdb1 is either an ext2 or FAT file system used as \"pg_xlog\" with a symlink.\n> \n> One little thought: does mounting ext2 with 'noatime' makes any\n> difference? AFAIK fat does not have concept of atime, so then\n> it would be more fair? Just a thought.\n> \n> --\n> marko\n\nI don't know, and I haven't tried that, but I suspect that it won't make much\ndifference. \n\nWhile I do not think that anyone would seriously consider using FAT for xlog,\nI'd have problems considering myself, it in a production environment, the\nnumbers do say something about the nature of WAL. A bunch of files, all the\nsame size, is practically what FAT does best. 
Plus there is no real overhead.\n\nThe very reasons why FAT is a POS file system are the same reasons it would\nwork great for WAL, with the only caveat being that fsync is implemented, and\nthe application (postgres) maintains its own data integrity.\n\nOddly enough, I did not see any performance improvement using FAT for the\n\"base\" directory. That may be the nature of the pg block size vs cluster size,\nfragmentation, and stuff. If I get some time I will investigate it a bit more.\n\nClearly not everyone would be interested in this. PG seems to be used for\neverything from a small personal db, to a system component db -- like on a web\nbox, to a full blown stand-alone server. The first two applications may not be\ninterested in this sort of stuff, but last category, the \"full blown server\"\nwould certainly want to squeeze as much out of their system as possible.\n\nI think a \"pgfs\" could easily be a derivative of FAT, or even FAT with some\nIoctls. It is simple, it is fast, it does not attempt to do things postgres\ndoesn't need.\n", "msg_date": "Sat, 05 May 2001 18:43:51 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "On Sat, May 05, 2001 at 06:43:51PM -0400, mlw wrote:\n> Marko Kreen wrote:\n> > On Sat, May 05, 2001 at 01:09:38PM -0400, mlw wrote:\n> > > A small debate started with bad performance on ReiserFS. I pondered the likely\n> > > advantages to raw device access. It also occured to me that the FAT file system\n> > > is about as close to a managed raw device as one could get. So I did some\n> > > tests:\n\n> I think a \"pgfs\" could easily be a derivative of FAT, or even FAT with some\n> Ioctls. It is simple, it is fast, it does not attempt to do things postgres\n> doesn't need.\n\nWell, my opinion too is that it is waste of resources to try\nimplement PostgreSQL-specific filesystem. 
As you already showed\nthat there are noticeable differences of different filesystems,\nthe Right Thing would be to make a FAQ/web-page/knowledge-base\nof comments on different filesystem in point of view of DB\n(PostgreSQL) server.\n\nAlso users will have different priorities:\nreliability/speed-of-reads/speed-of-writes - I mean different\nusers have them ordered differently - so it should be mentioned\nthis fs is good for this but bad on this, etc... It is good\nto put this part of db on this fs but not that part of db...\nSuggestions on mount flags to use...\n\nThere already exist bazillion filesystems, _some_ of them should\nbe usable for PostgreSQL too :)\n\nBesides resource waste there are others problems with app-level\nfs:\n\n* double-buffering and incompatibilities of avoiding that\n* a lot of code should be reimplemented that already exists\n in today's OS'es\n* you lose all of UNIX user-space tools\n* the speed difference will not be very big. Remeber: it _was_\n big on OS'es and fs' in year 1990. Today's fs are lot of\n better and there should be a os/fs combo that is 95% perfect.\n\n\n-- \nmarko\n\n", "msg_date": "Sun, 6 May 2001 02:27:16 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "* Marko Kreen <marko@l-t.ee> [010505 17:39] wrote:\n> \n> There already exist bazillion filesystems, _some_ of them should\n> be usable for PostgreSQL too :)\n> \n> Besides resource waste there are others problems with app-level\n> fs:\n> \n> * double-buffering and incompatibilities of avoiding that\n\nDepends on the OS, most Operating systems like FreeBSD and Solaris\noffer character device access, this means that the OS will DMA\ndirectly from the process's address space. 
Avoiding the double\ncopy is trivial except that one must align and size writes correctly,\ngenerally on 512 byte boundries and in 512 byte increments.\n\n> * a lot of code should be reimplemented that already exists\n> in today's OS'es\n\nThat's true.\n\n> * you lose all of UNIX user-space tools\n\nEven worse. :)\n\n> * the speed difference will not be very big. Remeber: it _was_\n> big on OS'es and fs' in year 1990. Today's fs are lot of\n> better and there should be a os/fs combo that is 95% perfect.\n\nWell, here's an idea, has anyone tried using the \"direct write\"\ninterface that some OS's offer? I doubt FreeBSD does, but I'm\npositive that Solaris offers it as well as possibly IRIX.\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n", "msg_date": "Sat, 5 May 2001 19:01:35 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Utilizing \"direct writes\" Re: File system performance and pg_xlog" }, { "msg_contents": "Marko Kreen wrote:\n> \n> On Sat, May 05, 2001 at 06:43:51PM -0400, mlw wrote:\n> > Marko Kreen wrote:\n> > > On Sat, May 05, 2001 at 01:09:38PM -0400, mlw wrote:\n> > > > A small debate started with bad performance on ReiserFS. I pondered the likely\n> > > > advantages to raw device access. It also occured to me that the FAT file system\n> > > > is about as close to a managed raw device as one could get. So I did some\n> > > > tests:\n> \n> > I think a \"pgfs\" could easily be a derivative of FAT, or even FAT with some\n> > Ioctls. It is simple, it is fast, it does not attempt to do things postgres\n> > doesn't need.\n> \n> Well, my opinion too is that it is waste of resources to try\n> implement PostgreSQL-specific filesystem. 
As you already showed\n> that there are noticeable differences of different filesystems,\n> the Right Thing would be to make a FAQ/web-page/knowledge-base\n> of comments on different filesystem in point of view of DB\n> (PostgreSQL) server.\n> \n> Also users will have different priorities:\n> reliability/speed-of-reads/speed-of-writes - I mean different\n> users have them ordered differently - so it should be mentioned\n> this fs is good for this but bad on this, etc... It is good\n> to put this part of db on this fs but not that part of db...\n> Suggestions on mount flags to use...\n\nI think it is simpler problem than that. Postgres, with fsync enabled, does a\nlot of work trying to maintain data integrity. It is logical to conclude that a\nfile system that does as little as possible would almost always perform better.\nRegardless of what the file system does, eventually it writes blocks of data to\nsectors on a disk.\n\nMany databases use their own data volume management. I am not suggesting that\nanyone create a new file system, but after performing some tests, I am really\nstarting to see why products like oracle manage their own table spaces.\n\nIf one looks at the FAT file system with an open mind and a clear understanding\nof how it will be used, some small modifications may make it the functional\nequivalent of a managed table space volume, at least under Linux.\n\nSome of the benchmark numbers are hovering around 20% improvement! That's\nnothing to sneeze at. I have a database loader that does a select nextval(..)\nfollowed by a begin, a series of inserts, followed by a commit.\n\nWith xlog on a FAT file system, I can get 53-60 sets per second. With Xlog\nsitting on ext2, I can get 40-45 sets per second. (Of the same data) These are\nnot insignificant improvements, and should be examined. 
If not from a Postgres\ndevelopment perspective, at least from a deployment perspective.\n\n> \n> There already exist bazillion filesystems, _some_ of them should\n> be usable for PostgreSQL too :)\n\nI agree.\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 05 May 2001 22:10:33 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "On Sat, May 05, 2001 at 07:01:35PM -0700, Alfred Perlstein wrote:\n> * Marko Kreen <marko@l-t.ee> [010505 17:39] wrote:\n> > * double-buffering and incompatibilities of avoiding that\n> \n> Depends on the OS, most Operating systems like FreeBSD and Solaris\n> offer character device access, this means that the OS will DMA\n> directly from the process's address space. Avoiding the double\n> copy is trivial except that one must align and size writes correctly,\n> generally on 512 byte boundries and in 512 byte increments.\n\nPostgreSQL must then also think about write ordering very hard,\natm this OS business.\n\n> > * the speed difference will not be very big. Remeber: it _was_\n> > big on OS'es and fs' in year 1990. Today's fs are lot of\n> > better and there should be a os/fs combo that is 95% perfect.\n> \n> Well, here's an idea, has anyone tried using the \"direct write\"\n> interface that some OS's offer? I doubt FreeBSD does, but I'm\n> positive that Solaris offers it as well as possibly IRIX.\n\nAnd how much it differs from using FAT? Thats the point I\nwant to make. 
There should already be a fs that is 90% close to\nthat.\n\n-- \nmarko\n\n", "msg_date": "Sun, 6 May 2001 12:34:48 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Utilizing \"direct writes\" Re: File system performance and pg_xlog" }, { "msg_contents": "On Sat, May 05, 2001 at 10:10:33PM -0400, mlw wrote:\n> I think it is simpler problem than that. Postgres, with fsync enabled, does a\n> lot of work trying to maintain data integrity. It is logical to conclude that a\n> file system that does as little as possible would almost always perform better.\n> Regardless of what the file system does, eventually it writes blocks of data to\n> sectors on a disk.\n\nBut there's more: when PostgreSQL today 'uses a fs' it also gets\nall the caching/optimizing algorithms in the os kernel 'for free'.\n\n> Many databases use their own data volume management. I am not suggesting that\n> anyone create a new file system, but after performing some tests, I am really\n> starting to see why products like oracle manage their own table spaces.\n> \n> If one looks at the FAT file system with an open mind and a clear understanding\n> of how it will be used, some small modifications may make it the functional\n> equivalent of a managed table space volume, at least under Linux.\n\nAre you talking about a new in-kernel fs? Let's see, how many\nos'es does PostgreSQL today support?\n\n> With xlog on a FAT file system, I can get 53-60 sets per second. With Xlog\n> sitting on ext2, I can get 40-45 sets per second. (Of the same data) These are\n> not insignificant improvements, and should be examined.
If not from a Postgres\n> development perspective, at least from a deployment perspective.\n\nYes, therefore I proposed a 'knowledge-base' where such things\ncould be mentioned.\n\n-- \nmarko\n\n", "msg_date": "Sun, 6 May 2001 12:41:53 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "* Marko Kreen <marko@l-t.ee> [010506 03:33] wrote:\n> On Sat, May 05, 2001 at 07:01:35PM -0700, Alfred Perlstein wrote:\n> > * Marko Kreen <marko@l-t.ee> [010505 17:39] wrote:\n> > > * double-buffering and incompatibilities of avoiding that\n> > \n> > Depends on the OS, most Operating systems like FreeBSD and Solaris\n> > offer character device access, this means that the OS will DMA\n> > directly from the process's address space. Avoiding the double\n> > copy is trivial except that one must align and size writes correctly,\n> > generally on 512 byte boundries and in 512 byte increments.\n> \n> PostgreSQL must then also think about write ordering very hard,\n> atm this OS business.\n\nDepends. :)\n\n> \n> > > * the speed difference will not be very big. Remeber: it _was_\n> > > big on OS'es and fs' in year 1990. Today's fs are lot of\n> > > better and there should be a os/fs combo that is 95% perfect.\n> > \n> > Well, here's an idea, has anyone tried using the \"direct write\"\n> > interface that some OS's offer? I doubt FreeBSD does, but I'm\n> > positive that Solaris offers it as well as possibly IRIX.\n> \n> And how much it differs from using FAT? Thats the point I\n> want to make. There should be already a fs that is 90% close\n> that.\n\nUsing FAT is totally up to the vendor's FAT implementation.\nSolaris FAT will cache data for a file as long as it's open\nwhich sort of defeats the purpose. 
Maybe Linux's caching\nmethods are less effective or have less overhead making FAT\nunder Linux a win.\n\nOne of the problems is that I don't think most vendors consider\ntheir FAT implementation to be \"mission critical\", it's possible\nthat bugs may be present.\n\nDoes anyone have that test suite that was just mentioned for\nbenching Postgresql? (I'd like to try FreeBSD FAT).\n\n-- \n-Alfred Perlstein - [alfred@freebsd.org]\nInstead of asking why a piece of software is using \"1970s technology,\"\nstart asking why software is ignoring 30 years of accumulated wisdom.\n", "msg_date": "Sun, 6 May 2001 08:27:47 -0700", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Utilizing \"direct writes\" Re: File system performance and pg_xlog" }, { "msg_contents": "Well, as my tests continue, and I try to understand the nature of how file\nsystem design affects Postgres, I did notice something disturbing.\n\nOn a single processor machine, Linux kernel 2.2x, and a good Adaptec SCSI\nsystem and disks, the results were a clear win. When you put WAL on FAT32 on\nits own disk, it ranges between 10% and 20% improvement.\n\nMy other machine, which is semi-production (I don't want to screw too much with\nthe OS, layout, etc.), has a Paradise ATA-66 and two ATA-100 disks, which perform\nquite well, usually. It is an SMP PIII 600, 512M RAM.\n\nUsing FAT32 was horrible, one tenth the performance of ext2. Perhaps this is\nbecause FAT has one HUGE spinlock, whereas ext2 has better granularity? I\ndon't know, maybe I will get off my butt and examine the code later.\n\nOne thing is perfectly clear: file systems have a huge impact. While it may not\nbe an argument for writing a \"pgfs,\" it is a clear indicator that optimal\nperformance is non-trivial and requires a bit of screwing around and\nunderstanding what's best.\n\nPersonally, I would fear a \"pgfs.\" Writing a kernel component would be a bad\nidea. 
FAT has potential, but I don't think kernel developers put any serious\nthought into it, so I don't think it is a mission critical component in most\ncases. Just the behavior that I saw with FAT on SMP Linux tells me to be\ncareful.\n\nPostgres is at the mercy of the file systems, and WAL makes it even more so. My gut\ntells me that this aspect of the project will refuse to be taken lightly.\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 06 May 2001 16:47:52 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: File system performance and pg_xlog (More info)" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n\n> On Sat, May 05, 2001 at 10:10:33PM -0400, mlw wrote:\n> > I think it is simpler problem than that. Postgres, with fsync enabled, does a\n> > lot of work trying to maintain data integrity. It is logical to conclude that a\n> > file system that does as little as possible would almost always perform better.\n> > Regardless of what the file system does, eventually it writes blocks of data to\n> > sectors on a disk.\n> \n> But there's more, when PostgreSQL today 'uses a fs' it also get\n> all the caching/optimizing algorithms in os kernel 'for free'.\n> \n> > > Many databases use their own data volume management. I am not suggesting that\n> > > anyone create a new file system, but after performing some tests, I am really\n> > > starting to see why products like oracle manage their own table spaces.\n> > > \n> > > If one looks at the FAT file system with an open mind and a clear understanding\n> > > of how it will be used, some small modifications may make it the functional\n> > > equivalent of a managed table space volume, at least under Linux.\n> \n> Are you talking about new in-kernel fs? 
Lets see, how many\n> os'es PostgreSQL today supports?\n\nIf you're using raw devices on Linux and get a win there, it's a win\nfor Postgresql on Linux. This is important for everyone using it on\nthis platform (probably a big chunk of the users). And who uses all\nthe new features and performance enhancements done in other ways?\n\nIt all comes down to if it actually would give a performance boost,\nhow much work it is and if someone wants to do it.\n> \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n", "msg_date": "07 May 2001 11:00:22 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> If you're using raw devices on Linux and get a win there, it's a win\n> for Postgresql on Linux. ...\n> It all comes down to if it actually would give a performance boost,\n> how much work it is and if someone wants to do it.\n\nNo, those are not the only considerations. If the feature is not\nportable then we also have to consider how much of a headache it'll be\nto maintain in parallel with a more portable approach. We might reject\nsuch a feature even if it's a clear win for Linux, if it creates enough\nproblems elsewhere. Postgres is *not* a Linux-only application, and I\ntrust it never will be.\n\n\t\t\tregards, tom lane\n\nPS: that's not meant to reject the idea out-of-hand; perhaps the\nbenefits will prove to be so large that we will want to do it\nanyway. I'm just trying to counter what appears to be a narrowly\nplatform-centric view of the issues.\n", "msg_date": "Mon, 07 May 2001 12:08:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog " }, { "msg_contents": "One big performance issue is that PostgreSQL 7.1 uses fdatasync if it is\navailable. 
However, according to RedHat, 2.2 Linux kernels have\nfdatasync, but it really just acts as fsync. In 2.4 kernels, fdatasync\nis really fdatasync, I think.\n\nThat is a major issue for people running performance tests. For\nexample, XFS may be slow on 2.2 kernels but not 2.4 kernels.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 12:09:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "> If one looks at the FAT file system with an open mind and a clear understanding\n> of how it will be used, some small modifications may make it the functional\n> equivalent of a managed table space volume, at least under Linux.\n\nCan I ask if we are talking FAT16 (DOS) or FAT32 (NT)?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 12:12:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> > If you're using raw devices on Linux and get a win there, it's a win\n> > for Postgresql on Linux. ...\n> > It all comes down to if it actually would give a performance boost,\n> > how much work it is and if someone wants to do it.\n> \n> No, those are not the only considerations. 
If the feature is not\n> portable then we also have to consider how much of a headache it'll be\n> to maintain in parallel with a more portable approach.\n\nCleanliness and code quality are obvious requirements.\n\n> We might reject such a feature even if it's a clear win for Linux,\n> if it creates enough problems elsewhere. Postgres is *not* a\n> Linux-only application, and I trust it never will be.\n\nNo, but if a Linux-specific approach gives a 100% performance boost,\nit's probably worth doing. At 1% it probably isn't. Same goes for\nFreeBSD and others.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "07 May 2001 12:26:31 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> That is a major issue for people running performance tests. For\n> example, XFS may be slow on 2.2 kernels but not 2.4 kernels.\n\nXFS is 2.4 only, AFAIK - even the installer modifications SGI did to\nRed Hat Linux 7 (which is shipped with a 2.2 kernel) include\ninstalling a 2.4pre kernel, AFAIR.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "07 May 2001 12:46:48 -0400", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "On Mon, May 07, 2001 at 12:12:43PM -0400, Bruce Momjian wrote:\n> > If one looks at the FAT file system with an open mind and a clear understanding\n> > of how it will be used, some small modifications may make it the functional\n> > equivalent of a managed table space volume, at least under Linux.\n> \n> Can I ask if we are talking FAT16 (DOS) or FAT32 (NT)?\n\nDoes not matter. The architecture is the same. FAT16 is not DOS-only,\nand FAT32 is not NT-only. 
And there is VFAT16 and VFAT32...\n\nPoint 1 in this discussion seems to be that storing WAL\nfiles on a FAT-like fs is better (less overhead) than on an\next2/ufs-like fs.\n\nPoint 2: as vendors do not think of FAT as a critical fs, it is\nprobably not very optimised for things like SMP; also reliability\n(this probably comes from FAT design itself (that's why it has\nprobably less overhead too...)).\n\nPoint 3: as FAT-like fs's are probably least-overhead\nfs's, could we get any better with a pgfs implementation?\n\nConclusion: ?\n\n-- \nmarko\n\n", "msg_date": "Mon, 7 May 2001 19:18:50 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > If one looks at the FAT file system with an open mind and a clear understanding\n> > of how it will be used, some small modifications may make it the functional\n> > equivalent of a managed table space volume, at least under Linux.\n>\n> Can I ask if we are talking FAT16 (DOS) or FAT32 (NT)\n\nI used FAT32 in my tests.\n\nOn a side note, FAT32 is actually DOS. It showed up in Windows 95b and wasn't\nsupported in NT until Win2K.\n\nI guess, what I have been trying to say, is that we all know it all comes down to\ndisk I/O at some point. Reducing the number of sequential disk I/O operations for\neach transaction will improve performance. Maybe choosing a simple file system will\naccomplish this.\n\n", "msg_date": "Mon, 07 May 2001 13:24:13 -0400", "msg_from": "\"Mark L. Woodward\" <mlw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: File system performance and pg_xlog" }, { "msg_contents": "> Personally, I would fear a \"pgfs.\" Writing a kernel component would be a bad\n> idea. FAT has potential, but I don't think kernel developers put any serious\n> thought into it, so I don't think it is a mission critical component in most\n> cases. 
Just the behavior that I saw with FAT on SMP Linux, tells me to be\n> careful.\n> \n> Postgres is at the mercy of the file systems, WAL make it even more so. My gut\n> tells me that this aspect of the project will refuse to be taken lightly.\n\n From a portability standpoint, I think if we go anywhere, it would be to\nwrite directly into device files representing sections of a disk.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 14:11:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: File system performance and pg_xlog (More info)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> >From a portability standpoint, I think if we go anywhere, it would be to\n> write directly into device files representing sections of a disk.\n\nThat makes sense to me. On \"traditional\" Unices, we could use the raw \ncharacter device for a partition (eg /dev/rdsk/* on Solaris), and on\nLinux we'd use /dev/raw*, which is a mapping to a specific partition\nestablished before PG startup. \n\nI guess there would need to be a system table that keeps track of \n(dev, offset, size) tuples for each WAL file.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n", "msg_date": "07 May 2001 14:51:18 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Re: File system performance and pg_xlog (More info)" }, { "msg_contents": "Doug McNaught wrote:\n> \n> That makes sense to me. 
On \"traditional\" Unices, we could use the raw\n> character device for a partition (eg /dev/rdsk/* on Solaris), and on\n> Linux we'd use /dev/raw*, which is a mapping to a specific partition\n> established before PG startup.\n\nSmall update - newer Linux kernels now support multiple raw devices\nthrough /dev/raw/raw*, though the mapping between raw (character)\nand block devices has to be recreated on each boot.\n\n--\nSteve Wampler- SOLIS Project, National Solar Observatory\nswampler@noao.edu\n", "msg_date": "Mon, 07 May 2001 13:34:55 -0700", "msg_from": "Steve Wampler <swampler@noao.edu>", "msg_from_op": false, "msg_subject": "Re: Re: File system performance and pg_xlog (More info)" }, { "msg_contents": "Steve Wampler wrote:\n> \n> Doug McNaught wrote:\n> >\n> > That makes sense to me. On \"traditional\" Unices, we could use the raw\n> > character device for a partition (eg /dev/rdsk/* on Solaris), and on\n> > Linux we'd use /dev/raw*, which is a mapping to a specific partition\n> > established before PG startup.\n> \n> Small update - newer Linux kernels now support multiple raw devices\n> through /dev/raw/raw*, though the mapping between raw (character)\n> and block devices has to be recreated on each boot.\n\nIt would be very easy to do a lot of experimenting, and perhaps even more\nefficient in the long run if we could:\n\npre-allocate table spaces: rather than only letting a table file grow, why not\nallow pre-allocated table files? I want xyz table to be 2.2G long. That way a\nfile system doesn't care about the periphery of a file.\n\nALTER [table index] name PREALLOCATE nn BLOCKS;\n\nVacuuming and space reuse would be an issue. You would probably have to\nimplement a defragment routine, or some sort of free block list.\nAfter the preallocated limit is hit, grow the file normally. 
\n\nSecond, allow tables and indexes to be created with arbitrary file names,\nsomething like:\n\ncreate table foo (this integer, that varchar) as file '/path/file';\ncreate index foo_ndx on foo (this) as file '/path2/file1';\n\nIf you do not specify a file, then it behaves as before.\n\nI suspect that these sorts of modifications are either easy or hard. There\nnever is a middle ground on changes like this. The file name one is probably\neasier than the preallocated block one.\n\n\n\n\n\n-- \nI'm not offering myself as an example; every life evolves by its own laws.\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Mon, 07 May 2001 17:41:28 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: File system performance and pg_xlog (More info)" } ]
[ { "msg_contents": "\nThis is just a quick announcement that we have now branched off v7.1.x\nfrom the main development tree, and are starting to dive into development\nof v7.2 ...\n\nThere have been several changes since v7.1 was released, including:\n\nFix for numeric MODULO operator (Tom)\npg_dump fixes (Philip)\npg_dump can dump 7.0 databases (Philip)\nreadline 4.2 fixes (Peter E)\nJOIN fixes (Tom)\nAIX, MSWIN, VAX,N32K fixes (Tom)\nMultibytes fixes (Tom)\nUnicode fixes (Tatsuo)\nOptimizer improvements (Tom)\nFix for whole tuples in functions (Tom)\nFix for pg_ctl and option strings with spaces (Peter E)\nODBC fixes (Hiroshi)\nEXTRACT can now take string argument (Thomas)\nPython fixes (Darcy)\n\nWith more details available in the ChangeLog file ...\n\nThis release does not require a dump/restore from v7.1, it is purely a\nmaintenance release ...\n\nAny bugs please report them to pgsql-bugs@postgresql.org ...\n\nRPMs and DEBs should be available soon ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Sat, 5 May 2001 17:36:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.1.1 Branched, Packaged and Released ..." }, { "msg_contents": "Does this mean that we have officially released 7.1.1? 
I could not\nfind any statements regarding 7.1.1 on the web pages...\n--\nTatsuo Ishii\n\n> This is just a quick announcement that we have now branched off v7.1.x\n> from the main development tree, and are starting to dive into development\n> of v7.2 ...\n> \n> There have been several changes since v7.1 was released, including:\n> \n> Fix for numeric MODULO operator (Tom)\n> pg_dump fixes (Philip)\n> pg_dump can dump 7.0 databases (Philip)\n> readline 4.2 fixes (Peter E)\n> JOIN fixes (Tom)\n> AIX, MSWIN, VAX,N32K fixes (Tom)\n> Multibytes fixes (Tom)\n> Unicode fixes (Tatsuo)\n> Optimizer improvements (Tom)\n> Fix for whole tuples in functions (Tom)\n> Fix for pg_ctl and option strings with spaces (Peter E)\n> ODBC fixes (Hiroshi)\n> EXTRACT can now take string argument (Thomas)\n> Python fixes (Darcy)\n> \n> With more details available in the ChangeLog file ...\n> \n> This release does not require a dump/restore from v7.1, it is purely a\n> maintaince release ...\n> \n> Any bugs please report them to pgsql-bugs@postgresql.org ...\n> \n> RPMs and DEBs should be available soon ...\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Mon, 07 May 2001 21:29:23 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ..." }, { "msg_contents": "\ntakes Vince a day or two to catch up ... yes, we are officially released,\nand Tom just dump'd some major stats changes into HEAD ...\n\nOn Mon, 7 May 2001, Tatsuo Ishii wrote:\n\n> Does this mean that we have officially released 7.1.1? 
I could not\n> find any statements regarding 7.1.1 on the web pages...\n> --\n> Tatsuo Ishii\n>\n> > This is just a quick announcement that we have now branched off v7.1.x\n> > from the main development tree, and are starting to dive into development\n> > of v7.2 ...\n> >\n> > There have been several changes since v7.1 was released, including:\n> >\n> > Fix for numeric MODULO operator (Tom)\n> > pg_dump fixes (Philip)\n> > pg_dump can dump 7.0 databases (Philip)\n> > readline 4.2 fixes (Peter E)\n> > JOIN fixes (Tom)\n> > AIX, MSWIN, VAX,N32K fixes (Tom)\n> > Multibytes fixes (Tom)\n> > Unicode fixes (Tatsuo)\n> > Optimizer improvements (Tom)\n> > Fix for whole tuples in functions (Tom)\n> > Fix for pg_ctl and option strings with spaces (Peter E)\n> > ODBC fixes (Hiroshi)\n> > EXTRACT can now take string argument (Thomas)\n> > Python fixes (Darcy)\n> >\n> > With more details available in the ChangeLog file ...\n> >\n> > This release does not require a dump/restore from v7.1, it is purely a\n> > maintaince release ...\n> >\n> > Any bugs please report them to pgsql-bugs@postgresql.org ...\n> >\n> > RPMs and DEBs should be available soon ...\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Mon, 7 May 2001 09:36:57 -0300 (ADT)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ..." }, { "msg_contents": "On Mon, 7 May 2001, The Hermit Hacker wrote:\n\n>\n> takes Vince a day or two to catch up ... 
yes, we are officially released,\n> and Tom just dump'd some major stats changes into HEAD ...\n\nBut this time Vince had all the info online in a matter of minutes\nafter receiving Marc's announcement. It does take a while for the\nmirrors to get the updates tho. www.ca.postgresql.org *always* has\nthe most current info.\n\n\n>\n> On Mon, 7 May 2001, Tatsuo Ishii wrote:\n>\n> > Does this mean that we have officially released 7.1.1? I could not\n> > find any statements regarding 7.1.1 on the web pages...\n> > --\n> > Tatsuo Ishii\n> >\n> > > This is just a quick announcement that we have now branched off v7.1.x\n> > > from the main development tree, and are starting to dive into development\n> > > of v7.2 ...\n> > >\n> > > There have been several changes since v7.1 was released, including:\n> > >\n> > > Fix for numeric MODULO operator (Tom)\n> > > pg_dump fixes (Philip)\n> > > pg_dump can dump 7.0 databases (Philip)\n> > > readline 4.2 fixes (Peter E)\n> > > JOIN fixes (Tom)\n> > > AIX, MSWIN, VAX,N32K fixes (Tom)\n> > > Multibytes fixes (Tom)\n> > > Unicode fixes (Tatsuo)\n> > > Optimizer improvements (Tom)\n> > > Fix for whole tuples in functions (Tom)\n> > > Fix for pg_ctl and option strings with spaces (Peter E)\n> > > ODBC fixes (Hiroshi)\n> > > EXTRACT can now take string argument (Thomas)\n> > > Python fixes (Darcy)\n> > >\n> > > With more details available in the ChangeLog file ...\n> > >\n> > > This release does not require a dump/restore from v7.1, it is purely a\n> > > maintaince release ...\n> > >\n> > > Any bugs please report them to pgsql-bugs@postgresql.org ...\n> > >\n> > > RPMs and DEBs should be available soon ...\n> > >\n> > > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org\n> > > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 7 May 2001 10:56:20 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ..." 
}, { "msg_contents": "Are the \"nested views permission problems\" fixed in this release?\nIf so, a dump IS necessary because of a change rule creation routines.\n\nThanks,\n\nLieven\n\nThe Hermit Hacker wrote:\n\n> This is just a quick announcement that we have now branched off v7.1.x\n> from the main development tree, and are starting to dive into development\n> of v7.2 ...\n>\n> There have been several changes since v7.1 was released, including:\n>\n> Fix for numeric MODULO operator (Tom)\n> pg_dump fixes (Philip)\n> pg_dump can dump 7.0 databases (Philip)\n> readline 4.2 fixes (Peter E)\n> JOIN fixes (Tom)\n> AIX, MSWIN, VAX,N32K fixes (Tom)\n> Multibytes fixes (Tom)\n> Unicode fixes (Tatsuo)\n> Optimizer improvements (Tom)\n> Fix for whole tuples in functions (Tom)\n> Fix for pg_ctl and option strings with spaces (Peter E)\n> ODBC fixes (Hiroshi)\n> EXTRACT can now take string argument (Thomas)\n> Python fixes (Darcy)\n>\n> With more details available in the ChangeLog file ...\n>\n> This release does not require a dump/restore from v7.1, it is purely a\n> maintaince release ...\n>\n> Any bugs please report them to pgsql-bugs@postgresql.org ...\n>\n> RPMs and DEBs should be available soon ...\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Mon, 07 May 2001 18:41:54 +0200", "msg_from": "Lieven Van Acker <lieven@elisa.be>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ..." }, { "msg_contents": "Lieven Van Acker <lieven@elisa.be> writes:\n> Are the \"nested views permission problems\" fixed in this release?\n> If so, a dump IS necessary because of a change rule creation routines.\n\nIf you're running into that issue, you might want to drop and recreate\nthe affected views/rules. 
That's a far cry from a database dump and\nreload, though. At least for them as has gigabytes of data ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 17:01:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ... " }, { "msg_contents": "> takes Vince a day or two to catch up ... yes, we are officially released,\n> and Tom just dump'd some major stats changes into HEAD ...\n> \n> On Mon, 7 May 2001, Tatsuo Ishii wrote:\n> \n> > Does this mean that we have officially released 7.1.1? I could not\n> > find any statements regarding 7.1.1 on the web pages...\n\nThanks. I'm just wondering which version should be targeted in my new\nbook. I think I could write based on 7.1.1.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 08 May 2001 07:09:25 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: v7.1.1 Branched, Packaged and Released ..." } ]
[ { "msg_contents": "PHP users tend to start with MySQL and stick there.\nPostgreSQL from release 7 is getting rave reviews for being equivalent\nin performance to MySQL in medium size web sites.\n\nPerhaps it is time for PHP programmers to dive straight in to\nPostgreSQL.\n\nWanted:\nPostgreSQL expert to rave about PostgreSQL advantages (and explain\nthem) to a bunch of PHP programmers at PHP Sydney User Group.\n\nMeeting details at http://phpsydney.com/ plus a contact page there for\na brave volunteer.\n\nI would love to use PostgreSQL all the time but my 2.7 remaining brain\ncells are insufficient to install PostgreSQL on NT and my Linux\nworkstation always refuses to talk to either the video card or the\nmouse. If I find a Linux workstation setup that does not require a\nscreen or a mouse......\n", "msg_date": "Sun, 06 May 2001 10:27:16 GMT", "msg_from": "com@com.com", "msg_from_op": true, "msg_subject": "Wanted Sydney Australia,\n\tsomeone to explain PostgreSQL to a bunch of programmers" } ]
[ { "msg_contents": "Right now anyone can look in pg_statistic and discover the min/max/most\ncommon values of other people's tables. That's not a lot of info, but\nit might still be more than you want them to find out. And the\nstatistical changes that I'm about to commit will allow a couple dozen\nvalues to be exposed, not only three values per column.\n\nIt seems to me that only superusers should be allowed to read the\npg_statistic table. Or am I overreacting? Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 13:14:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Isn't pg_statistic a security hole?" }, { "msg_contents": "Being a simple user, I still want\nto view the stats from the table,\nbut it should be limited only\nto the stuff I own. I don't wanna\nlet others see any of my info, however.\nThe SU's, of course, should be able to read\nall the stats.\n\n----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: <pgsql-hackers@postgresql.org>\nSent: Sunday, May 06, 2001 1:14 PM\nSubject: [HACKERS] Isn't pg_statistic a security hole?\n\n\n> Right now anyone can look in pg_statistic and discover the min/max/most\n> common values of other people's tables. That's not a lot of info, but\n> it might still be more than you want them to find out. And the\n> statistical changes that I'm about to commit will allow a couple dozen\n> values to be exposed, not only three values per column.\n> \n> It seems to me that only superusers should be allowed to read the\n> pg_statistic table. Or am I overreacting? Comments?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Sun, 6 May 2001 13:23:03 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole?" 
}, { "msg_contents": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> Being a simple user, I still want to view the stats from the table,\n> but it should be limited only to the stuff I own. I don't wanna let\n> others see any of my info, however. The SU's, of course, should be\n> able to read all the stats.\n\nThis is infeasible since we don't have a concept of per-row permissions.\nIt's all or nothing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 13:27:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "\nOn Sun, 6 May 2001, Tom Lane wrote:\n\n> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Being a simple user, I still want to view the stats from the table,\n> > but it should be limited only to the stuff I own. I don't wanna let\n> > others see any of my info, however. The SU's, of course, should be\n> > able to read all the stats.\n> \n> This is infeasible since we don't have a concept of per-row permissions.\n> It's all or nothing.\n\nMaybe make statistics readable only by superusers with a view that uses\nCURRENT_USER or something like that to only give the objects that\nhave owners of this user? Might be an ugly view, but...\n\n\n", "msg_date": "Sun, 6 May 2001 11:03:40 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Being a simple user, I still want to view the stats from the table,\n> > but it should be limited only to the stuff I own. I don't wanna let\n> > others see any of my info, however. 
The SU's, of course, should be\n> > able to read all the stats.\n>\n> This is infeasible since we don't have a concept of per-row permissions.\n> It's all or nothing.\n>\n\nYou can acheive the same effect using a view if the statistics table has the\nuser name included.\n\nJoe\n\ntest=# select version();\n version\n-----------------------------------------------------------\n PostgreSQL 7.1 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\ncreate table teststat(username name,stat_id int4,stat_val float, primary\nkey(username,stat_id));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'teststat_pkey'\nfor table 'teststat'\nCREATE\ninsert into teststat values('postgres',1,15.321);\nINSERT 1007064 1\ninsert into teststat values('foo',1,12.123);\nINSERT 1007065 1\nselect * from teststat;\n username | stat_id | stat_val\n----------+---------+----------\n postgres | 1 | 15.321\n foo | 1 | 12.123\n(2 rows)\n\ncreate view vw_teststat as (select * from teststat where\n(username=current_user or current_user='postgres'));\nCREATE\nselect current_user;\n current_user\n--------------\n postgres\n(1 row)\n\nselect * from vw_teststat;\n username | stat_id | stat_val\n----------+---------+----------\n postgres | 1 | 15.321\n foo | 1 | 12.123\n(2 rows)\n\ncreate user foo;\nCREATE USER\ngrant select on vw_teststat to foo;\nCHANGE\nYou are now connected as new user foo.\nselect current_user;\n current_user\n--------------\n foo\n(1 row)\n\nselect * from vw_teststat;\n username | stat_id | stat_val\n----------+---------+----------\n foo | 1 | 12.123\n(1 row)\n\n\n", "msg_date": "Sun, 6 May 2001 11:35:49 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole? 
" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n>> This is infeasible since we don't have a concept of per-row permissions.\n>> It's all or nothing.\n\n> Maybe make statistics readable only by superusers with a view that uses\n> CURRENT_USER or something like that to only give the objects that\n> have owners of this user? Might be an ugly view, but...\n\nHmm, that would work --- you could join against pg_class to find out the\nowner of the relation. While you were at it, maybe look up the\nattribute name in pg_attribute as well. Anyone want to propose a\nspecific view definition?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 May 2001 15:12:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "> Hmm, that would work --- you could join against pg_class to find out the\n> owner of the relation. While you were at it, maybe look up the\n> attribute name in pg_attribute as well. Anyone want to propose a\n> specific view definition?\n> \n\nHow does this work?\n\ncreate view pg_userstat as (\n select\n s.starelid\n ,s.staattnum\n ,s.staop\n ,s.stanullfrac\n ,s.stacommonfrac\n ,s.stacommonval\n ,s.staloval\n ,s.stahival\n ,c.relname\n ,a.attname\n ,sh.usename\n from \n pg_statistic as s\n ,pg_class as c\n ,pg_shadow as sh\n ,pg_attribute as a\n where\n (sh.usename=current_user or current_user='postgres')\n and sh.usesysid = c.relowner\n and a.attrelid = c.oid\n and c.oid = s.starelid\n);\n\n\n-- Joe\n\n", "msg_date": "Sun, 6 May 2001 13:01:58 -0700", "msg_from": "\"Joe Conway\" <joe@conway-family.com>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "Tom Lane wrote:\n> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Being a simple user, I still want to view the stats from the table,\n> > but it should be limited only to the stuff I own. 
I don't wanna let\n> > others see any of my info, however. The SU's, of course, should be\n> > able to read all the stats.\n>\n> This is infeasible since we don't have a concept of per-row permissions.\n> It's all or nothing.\n\n Can't we provide a view that shows those rows from\n pg_statistics that belong to the tables owned by the current\n user?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 7 May 2001 12:07:56 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole?" }, { "msg_contents": "> Right now anyone can look in pg_statistic and discover the min/max/most\n> common values of other people's tables. That's not a lot of info, but\n> it might still be more than you want them to find out. And the\n> statistical changes that I'm about to commit will allow a couple dozen\n> values to be exposed, not only three values per column.\n> \n> It seems to me that only superusers should be allowed to read the\n> pg_statistic table. Or am I overreacting? Comments?\n\nYou are not overreacting. Imagine a salary column. I can imagine\nmax/min being quite interesting.\n\nI doubt it is worth letting non-super users see values in that table. \nTheir only value is in debugging the optimizer, which seems like a\nsuper-user job anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 13:37:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> It seems to me that only superusers should be allowed to read the\n>> pg_statistic table. Or am I overreacting? Comments?\n\n> You are not overreacting. Imagine a salary column. I can imagine\n> max/min being quite interesting.\n\nA fine example, indeed ;-)\n\n> I doubt it is worth letting non-super users see values in that table. \n> Their only value is in debugging the optimizer, which seems like a\n> super-user job anyway.\n\nWell, mumble. I routinely ask people who're complaining of bad plans\nfor extracts from their pg_statistic table. I don't foresee that need\nvanishing any time soon :-(. The idea of a view seemed nice, in part\nbecause it could be set up to give all the useful info with a simple\n\n\tselect * from pg_statview where relname = 'foo';\n\nrather than the messy three-way join you have to type now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 18:54:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "> > I doubt it is worth letting non-super users see values in that table. \n> > Their only value is in debugging the optimizer, which seems like a\n> > super-user job anyway.\n> \n> Well, mumble. I routinely ask people who're complaining of bad plans\n> for extracts from their pg_statistic table. I don't foresee that need\n> vanishing any time soon :-(. 
The idea of a view seemed nice, in part\n> because it could be set up to give all the useful info with a simple\n> \n> \tselect * from pg_statview where relname = 'foo';\n> \n> rather than the messy three-way join you have to type now.\n\nSounds fine, but aren't most people who we ask for stats superusers?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 19:02:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Sounds fine, but aren't most people who we ask for stats superusers?\n\nAre they? I don't think we should assume that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 19:35:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Isn't pg_statistic a security hole? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Sounds fine, but aren't most people who we ask for stats superusers?\n> \n> Are they? I don't think we should assume that.\n\nOK, just asking.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 19:36:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Isn't pg_statistic a security hole?" } ]
[ { "msg_contents": "Sorry, forgot to post to the list...\n\n----- Original Message ----- \nFrom: Tom Lane <tgl@sss.pgh.pa.us>\n> \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca> writes:\n> > Being a simple user, I still want to view the stats from the table,\n> > but it should be limited only to the stuff I own. I don't wanna let\n> > others see any of my info, however. The SU's, of course, should be\n> > able to read all the stats.\n> \n> This is infeasible since we don't have a concept of per-row permissions.\n> It's all or nothing.\n\nHow hard is it to create a per-user stats table similar to pg_statistic?\nAnd then limit the original pg_statistic table only to superusers...\n\nOR\n\nwhen one queries the table, this \"one\" can be authenticated\nand even if there are no per-row permissions, it is possible\nto output only the rows WHERE the username is the same as the user who\nruns the query. Isn't it the same as\n\nSELECT * FROM pg_statistic\nWHERE 'user is myself'\n \n and this WHERE clause would just be appended by the system\nfor the current user to the original query.\n\nDoes it make any sense, is it sane? Cuz I'm not familiar\nwith PG internals at all...\n\nSerguei\n\n\n", "msg_date": "Sun, 6 May 2001 14:15:52 -0400", "msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": true, "msg_subject": "Fw: Isn't pg_statistic a security hole? " } ]
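The owner-restricted view discussed in the thread above can be sketched in SQL. This is only a sketch, adapted from Joe Conway's proposal earlier in the thread: it assumes the 7.1-era pg_statistic columns (the catalog layout changed with Tom's statistics rework) and assumes the superuser account is named "postgres". Note that it adds a join on the attribute number, which the version posted in the thread omitted (without it, every statistics row is paired with every column of its table):

```sql
-- Sketch: statistics view restricted to tables the current user owns
-- (assumptions: PostgreSQL 7.1 catalog columns, superuser named "postgres").
CREATE VIEW pg_userstat AS
SELECT c.relname,
       a.attname,
       s.stanullfrac,
       s.stacommonfrac,
       s.stacommonval,
       s.staloval,
       s.stahival
FROM pg_statistic s,
     pg_class c,
     pg_attribute a,
     pg_shadow sh
WHERE c.oid = s.starelid
  AND a.attrelid = c.oid
  AND a.attnum = s.staattnum              -- match the column, not just the table
  AND sh.usesysid = c.relowner
  AND (sh.usename = current_user OR current_user = 'postgres');

-- Then lock down the base catalog and expose only the view:
REVOKE ALL ON pg_statistic FROM PUBLIC;
GRANT SELECT ON pg_userstat TO PUBLIC;
```

With this in place, the "extract of your pg_statistic" that Tom asks bug reporters for becomes `SELECT * FROM pg_userstat WHERE relname = 'foo';`, which a non-superuser can run for their own tables.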
[ { "msg_contents": "\n> > I think it's worth noting that Oracle has been petitioning the\n> > kernel developers for better raw device support: in other words,\n> > the ability to write directly to the hard disk and bypassing the\n> > filesystem all together. \n> \n> But there could be other reasons why Oracle would want to do \n> raw stuff.\n\nThe reasons are: \n1. Most Unixen now have shared (between several machines) raw devices\n\tOracle needs this for their shared everything Parallel Server. Only 2 Unixen \n\tthat I know of have shared filesystems (IBM gpfs and Sun Veritas) (both are rather new)\n2. The allocation time for raw devices is by far better (near instantaneous) than\n\tcreating preallocated files in a fs. Providing 1 Tb of raw devices is a task \n\tof minutes, creating 1 Tb filsystems with preallocated 2 Gb files is a task of \n\thours at best.\n3. absolute control over writes and page location (you don't want interleaved pages)\n4. Efficient use of buffer memory. Usual use of filesystems buffers the disk pages twice,\n\tone copy in the db buffer pool, one in the OS file cache.\n5. async raw IO (most Unixes provide async raw IO on raw devices, only some provide \n\traw IO on filesystem files).\n\t(async IO has 2 advantages: CPU work can be done while waiting for IO and \n\tIO can complete within one OS timeslice (20 us). This is possible with modern \n\tdisk systems, that have large caches)\n\nAndreas\n", "msg_date": "Mon, 7 May 2001 12:10:03 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: New Linux xfs/reiser file systems" }, { "msg_contents": "\n> 2. The allocation time for raw devices is by far better (near\n> \tinstantaneous) than creating preallocated files in a\n> \tfs. Providing 1 Tb of raw devices is a task of minutes,\n> \tcreating 1 Tb filsystems with preallocated 2 Gb files is a\n> \ttask of hours at best.\n\nFilesystem dependent, surely? 
Veritas' VxFS can create filesystems\nquickly, and quickly preallocate space for the files. If you actually\nwant to write data into the files that would take longer. :)\n\nCreating a 1TB UFS filesystem might take a while, and UFS doesn't\nsupport pre-allocation of space as far as I know so creating 2GB files\nwould take time too. Perhaps hours. :-(\n\n> 3. absolute control over writes and page location (you don't want\n> interleaved pages)\n\nAs well as a filesystem, most large systems I'm familiar with use\nvolume management software (VxVM, LVM, ...) and their \"disks\" will be\nallocated space on disk arrays.\n\nThese additional layers aren't arguments against simplifying the\nfilesystem layer, but they sure will complicate measurement and\ntuning. :-)\n\nRegards,\n\nGiles\n", "msg_date": "Tue, 08 May 2001 08:32:31 +1000", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: AW: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "\n> > 2. The allocation time for raw devices is by far better (near\n> > \tinstantaneous) than creating preallocated files in a\n> > \tfs. Providing 1 Tb of raw devices is a task of minutes,\n> > \tcreating 1 Tb filsystems with preallocated 2 Gb files is a\n> > \ttask of hours at best.\n> \n> Filesystem dependent, surely? Veritas' VxFS can create filesystems\n> quickly, and quickly preallocate space for the files.\n\nAnd you are sure, that this does not create a sparse file, which is exactly \nwhat we do not want ? Can you name one other example ?\n\n> > 3. absolute control over writes and page location (you don't want\n> > interleaved pages)\n> \n> As well as a filesystem, most large systems I'm familiar with use\n> volume management software (VxVM, LVM, ...) and their \"disks\" will be\n> allocated space on disk arrays.\n\nOf course. My thinking has long switched to volume groups and logical \nvolumes. 
This however does not alter the fact, that one LV can be \nregarded as one mainly contiguous (is that the word ?) block on disk\nfor optimization issues. When reading a logical volume sequentially \nhead movement will be minimal.\n\nAndreas\n", "msg_date": "Tue, 8 May 2001 09:59:11 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "\n> > Filesystem dependent, surely? Veritas' VxFS can create filesystems\n> > quickly, and quickly preallocate space for the files.\n> \n> And you are sure, that this does not create a sparse file, which is exactly \n> what we do not want ? Can you name one other example ?\n\nhttp://docs.hp.com//hpux/onlinedocs/B3929-90011/00/00/35-con.html#s3-2\n\n Reservation: Preallocating Space to a File \n\n VxFS makes it possible to preallocate space to a file at the time\n of the request rather than when data is written into the\n file. This space cannot be allocated to other files in the file\n system. VxFS prevents any unexpected out-of-space condition on the\n file system by ensuring that a file's required space will be\n associated with the file before it is required.\n\nI can't name another example -- I'm not familiar with what IBM's JFS\nor SGI's XFS filesytems are capable of doing.\n\n> Of course. My thinking has long switched to volume groups and logical \n> volumes. This however does not alter the fact, that one LV can be \n> regarded as one mainly contiguous (is that the word ?) block on disk\n> for optimization issues. 
When reading a logical volume sequentially \n> head movement will be minimal.\n\nI'm no storage guru, but I'd certainly hope that sequential reads were\n\"efficient\" on just about any storage device.\n\nMy mild concern is that any model of storage system behaviour that\nincludes \"head movement\" is inadequate for anything but small systems,\nand is challenged for them by the presence of caches everywhere.\n\nA storage array such as those made by Hitachi and EMC will have SCSI\nLUNs (aka \"disks\") that are sized and configured by software inside\nthe storage device.\n\nGood performance on such storage systems might depend on keeping as\nmuch work up to it as possible, to let the device determine what order\nto service the requests. Attempts to minimise \"head movement\" may\nhurt, not help. But as I said, I'm no storage guru, and I'm not a\nperformance consultant either. :-)\n\nRegards,\n\nGiles\n\n\n\n\n", "msg_date": "Tue, 08 May 2001 19:02:26 +1000", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Re: New Linux xfs/reiser file systems " }, { "msg_contents": "On Tue, 8 May 2001 09:09:08 +0000 (UTC), giles@nemeton.com.au (Giles\nLean) wrote:\n\n>Good performance on such storage systems might depend on keeping as\n>much work up to it as possible, to let the device determine what order\n>to service the requests. Attempts to minimise \"head movement\" may\n>hurt, not help.\n\nLetting the device determine the sequence of IO increases throughput\nand reduces performance.\n\nIf you want the maximum throughput, so you can reduce the money you\nspend on storage, you que the requests and sort the ques based on the\nminimum work required to complete the aggregated requests.\n\nIf you want performance, you put your request first and make the que\nwait. 
Some storage systems allow the specification of two or more\npriorities so your IO can go first and everyone else goes second.\n\n\"lazy\" page writes and all the other tricks used to keep IO in memory\nhave the effect of reducing writes at the expense of data lost during\na power failure. Some storage devices were built with batteries to\nallow writes after power loss. If the batteries could maintain writes\nfor 5 seconds after power loss, writes could be held up for nearly 5\nseconds in the hope that many duplicate writes to the same location\ncould be dropped.\n\nI know a lot of storage systems from the hardware up and few\noutperform an equivalent system where the money was focused on more\nmemory in the computer. Most add-on storage systems offering\n\"spectacular\" performance make the most financial sense when they are\nattached to a computer that is at a physical limit of expansion. If\nyou have 4 Gb on a 32 bit computer, adding a storage system with 2 Gb\nof cache can be a sound investment. Adding the same 2 Gb cache to a 32\nbit system expanded to just 2 Gb usually costs more than adding the\nextra 2 Gb to the computer.\n\nOnce 64 bit computers with 32, 64 or 128 Gb of DDR become available,\nthe best approach will go back to heaps of RAM on the computer and\nnone on disk.\n\nIf you are looking at one of the 64 bit replacements for x86 style\nprocessors and equivalents, the best disk arrangement would be to have\nno file system or operating system intervention and have the whole\ndisk allocated to the processor page function, similar to the theory\nbehind AS/400s and equivalents. Each disk would be on a single fibre,\nservice 64 gigabytes and be mirrored on an adjacent disk. The only\nprocessing in the CPU would be ECC, the disk controller would perform\nthe RAID 1 processing and perform the IO in a pendulum sweep pattern\nwith just enough cache to handle one sweep.
You would, of course, need\npower supplies big enough to cover a few extra sweeps and something to\ntell the page processing to flush everything when the power is\ndropping.\n\nWhen you have multiple computers in a cluster, you could build an\nintermediate device to handle the page flow much the same as a network\nswitch.\n\nAll these technologies were tried and proves several times in the last\n30 years and work perfectly when the computer's maximum address space\nis larger than the total size of all open files. They worked perfectly\nwhen people had 100Mb databases on 200Mb disks in systems that could\naddress 4Gb. Doubling the number of bits in the address range puts 64\nbit systems out in front of both disks and memory again. There are\nalready 128 bit and 256 bit processors in use so systems could be\nplanned to stay ahead of disk design so you never have to worry about\na file system again.\n\nThe AMD slot A and Intel slot 1 could be sold the way you buy Turkish\npizza, by the foot. Just walk up to the hardware shop and ask for 300\nbits of address space. Shops could have specials, like an extra 100\nbits of address space for all orders over $20.\n\n", "msg_date": "Tue, 08 May 2001 11:38:39 GMT", "msg_from": "test@test.com", "msg_from_op": false, "msg_subject": "Re: AW: AW: Re: New Linux xfs/reiser file systems" } ]
[ { "msg_contents": "I've tried the pg_dump bundled in the new 7.1.1 release. I wanted to\ntest its feature of dumping a 7.0.X database.\n\nLet's say I have database A running 7.1.1, B running 7.0.2. Both servers\nhave the same database 'test', 'myview' is a view defined on both of\nthem. I want to dump data only, being a VIEW I expect zero rows.\n\n From host A:\n\npg_dump -da -t myview test\tOK\npg_dump -h B -a -t myview test\tOK\npg_dump -h B -da -t myview test\tAn INSERT for each row\n\nThis last behaviour is obviously wrong because you cannot re-INSERT into\nthe VIEW (no rules are defined).\n\n From host B:\n\npg_dump -da -t myview test\tOK\n\nSeems that there is a problem dumping 'INSERT-style' from a 7.0.X\ndatabase.\n\nRunning PostgreSQL 7.1.1 on alphaev67-dec-osf4.0f, compiled by cc -std\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Mon, 07 May 2001 13:19:48 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": true, "msg_subject": "A problem with new pg_dump" }, { "msg_contents": "At 13:19 7/05/01 +0300, Alessio Bragadini wrote:\n>\n>Seems that there is a problem dumping 'INSERT-style' from a 7.0.X\n>database.\n>\n\nIt's actually a more general problem - it looks like dumping views in 7.0\ndoes not work with the 7.1.1 pg_dump (it thinks they are tables because the\n7.1 check of pg_relkind='v' is not valid).\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 07 May 2001 23:04:14 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: A problem with new pg_dump" }, { "msg_contents": "At 23:04 7/05/01 +1000, Philip Warner wrote:\n>\n>It's actually a more general problem - it looks like dumping views in 7.0\n>does not work with the 7.1.1 pg_dump (it thinks they are tables because the\n>7.1 check of pg_relkind='v' is not valid).\n>\n\nThe attached patch should fix the problem. Assuming it tests out OK, can\nthis be back-patched, since 7.1.1 is already out?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/", "msg_date": "Mon, 07 May 2001 23:39:38 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: A problem with new pg_dump" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> The attached patch should fix the problem. Assuming it tests out OK, can\n> this be back-patched, since 7.1.1 is already out?\n\nYes, it should be back-patched into the REL7_1_STABLE branch once you're\nconfident of it. 
Probably there will be a 7.1.2 by and by ...\n\nDo you need a quick lecture on CVS branch management?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 11:22:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem with new pg_dump " }, { "msg_contents": "At 11:22 7/05/01 -0400, Tom Lane wrote:\n>\n>Do you need a quick lecture on CVS branch management?\n>\n\nThat would be sensible.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 08 May 2001 08:44:55 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: A problem with new pg_dump " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 11:22 7/05/01 -0400, Tom Lane wrote:\n>> Do you need a quick lecture on CVS branch management?\n\n> That would be sensible.\n\nOK, some quick notes for those with commit privileges:\n\nIf you just do basic \"cvs checkout\", \"cvs update\", \"cvs commit\", then\nyou'll always be dealing with the HEAD version of the files in CVS.\nThat's what you want for development, but if you need to patch past\nstable releases then you have to be able to access and update the\n\"branch\" portions of our CVS repository. We normally fork off a branch\nfor a stable release just before starting the development cycle for the\nnext release.\n\nThe first thing you have to know is the branch name for the branch you\nare interested in getting at. Unfortunately Marc has been less than\n100% consistent in naming the things. 
One way to check is to apply\n\"cvs log\" to any file that goes back a long time, for example HISTORY\nin the top directory:\n\n$ cvs log HISTORY | more\n\nRCS file: /home/projects/pgsql/cvsroot/pgsql/HISTORY,v\nWorking file: HISTORY\nhead: 1.106\nbranch:\nlocks: strict\naccess list:\nsymbolic names:\n REL7_1_STABLE: 1.106.0.2\n REL7_1_BETA: 1.79\n REL7_1_BETA3: 1.86\n REL7_1_BETA2: 1.86\n REL7_1: 1.102\n REL7_0_PATCHES: 1.70.0.2\n REL7_0: 1.70\n REL6_5_PATCHES: 1.52.0.2\n REL6_5: 1.52\n REL6_4: 1.44.0.2\n release-6-3: 1.33\n SUPPORT: 1.1.1.1\n PG95-DIST: 1.1.1\nkeyword substitution: kv\ntotal revisions: 129; selected revisions: 129\nMore---q\n\nUnfortunately \"cvs log\" isn't all that great about distinguishing\nbranches from tags --- it calls 'em all \"symbolic names\". (A \"tag\" just\nmarks a specific timepoint across all files --- it's essentially a\nsnapshot whereas a branch is a changeable fileset.) Rule of thumb is\nthat names attached to four-number versions where the third number is\nzero represent branches, the others are just tags. Here we can see that\nthe extant branches are\n\tREL7_1_STABLE\n\tREL7_0_PATCHES\n\tREL6_5_PATCHES\nThe next commit to the head will be revision 1.107, whereas any changes\ncommitted into the REL7_1_STABLE branch will have revision numbers like\n1.106.2.*, corresponding to the branch number 1.106.0.2 (don't ask where\nthe zero went...).\n\nOK, so how do you do work on a branch? By far the best way is to create\na separate checkout tree for the branch and do your work in that. Not\nonly is that the easiest way to deal with CVS, but you really need to\nhave the whole past tree available anyway to test your work. (And you\n*better* test your work. 
Never forget that dot-releases tend to go out\nwith very little beta testing --- so whenever you commit an update to a\nstable branch, you'd better be doubly sure that it's correct.)\n\nNormally, to checkout the head branch, you just cd to the place you\nwant to contain the toplevel \"pgsql\" directory and say\n\n\tcvs ... checkout pgsql\n\nTo get a past branch, you cd to whereever you want it and say\n\n\tcvs ... checkout -r BRANCHNAME pgsql\n\nFor example, just a couple days ago I did\n\n\tmkdir ~postgres/REL7_1\n\tcd ~postgres/REL7_1\n\tcvs ... checkout -r REL7_1_STABLE pgsql\n\nand now I have a maintenance copy of 7.1.*.\n\nWhen you've done a checkout in this way, the branch name is \"sticky\":\nCVS automatically knows that this directory tree is for the branch,\nand whenever you do \"cvs update\" or \"cvs commit\" in this tree, you'll\nfetch or store the latest version in the branch, not the head version.\nEasy as can be.\n\nSo, if you have a patch that needs to apply to both the head and a\nrecent stable branch, you have to make the edits and do the commit\ntwice, once in your development tree and once in your stable branch\ntree. This is kind of a pain, which is why we don't normally fork\nthe tree right away after a major release --- we wait for a dot-release\nor two, so that we won't have to double-patch the first wave of fixes.\n\nAny questions? 
(See the CVS manual for details on these commands,\nof course.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 May 2001 19:35:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Unfortunately \"cvs log\" isn't all that great about distinguishing\n> branches from tags --- it calls 'em all \"symbolic names\".\n\nMinor addition to this: you can distinguish branches and tags by using\n`cvs status -v'.\n\n(Historical note: CVS was originally implemented as shell scripts on\ntop of RCS. The .0 syntax was magic which CVS used to indicate a\nbranch as opposed to a revision tag. The output of `cvs log' is\nsimply the output of `rlog' on the underlying RCS file. `cvs status'\nis not based on an existing RCS command.)\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 734: Often statistics are used as a drunken man uses lampposts --\nfor support rather than illumination.\n", "msg_date": "07 May 2001 18:45:13 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "> Any questions? 
(See the CVS manual for details on these commands,\n> of course.)\n\nWould someone like to integrate this into the docs appendix which\nalready discusses the CVS repository?\n\n - Thomas\n", "msg_date": "Thu, 10 May 2001 14:21:08 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "\nI have added this to the developer's FAQ.\n\n---------------------------------------------------------------------------\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > At 11:22 7/05/01 -0400, Tom Lane wrote:\n> >> Do you need a quick lecture on CVS branch management?\n> \n> > That would be sensible.\n> \n> OK, some quick notes for those with commit privileges:\n> \n> If you just do basic \"cvs checkout\", \"cvs update\", \"cvs commit\", then\n> you'll always be dealing with the HEAD version of the files in CVS.\n> That's what you want for development, but if you need to patch past\n> stable releases then you have to be able to access and update the\n> \"branch\" portions of our CVS repository. We normally fork off a branch\n> for a stable release just before starting the development cycle for the\n> next release.\n> \n> The first thing you have to know is the branch name for the branch you\n> are interested in getting at. Unfortunately Marc has been less than\n> 100% consistent in naming the things. 
One way to check is to apply\n> \"cvs log\" to any file that goes back a long time, for example HISTORY\n> in the top directory:\n> \n> $ cvs log HISTORY | more\n> \n> RCS file: /home/projects/pgsql/cvsroot/pgsql/HISTORY,v\n> Working file: HISTORY\n> head: 1.106\n> branch:\n> locks: strict\n> access list:\n> symbolic names:\n> REL7_1_STABLE: 1.106.0.2\n> REL7_1_BETA: 1.79\n> REL7_1_BETA3: 1.86\n> REL7_1_BETA2: 1.86\n> REL7_1: 1.102\n> REL7_0_PATCHES: 1.70.0.2\n> REL7_0: 1.70\n> REL6_5_PATCHES: 1.52.0.2\n> REL6_5: 1.52\n> REL6_4: 1.44.0.2\n> release-6-3: 1.33\n> SUPPORT: 1.1.1.1\n> PG95-DIST: 1.1.1\n> keyword substitution: kv\n> total revisions: 129; selected revisions: 129\n> More---q\n> \n> Unfortunately \"cvs log\" isn't all that great about distinguishing\n> branches from tags --- it calls 'em all \"symbolic names\". (A \"tag\" just\n> marks a specific timepoint across all files --- it's essentially a\n> snapshot whereas a branch is a changeable fileset.) Rule of thumb is\n> that names attached to four-number versions where the third number is\n> zero represent branches, the others are just tags. Here we can see that\n> the extant branches are\n> \tREL7_1_STABLE\n> \tREL7_0_PATCHES\n> \tREL6_5_PATCHES\n> The next commit to the head will be revision 1.107, whereas any changes\n> committed into the REL7_1_STABLE branch will have revision numbers like\n> 1.106.2.*, corresponding to the branch number 1.106.0.2 (don't ask where\n> the zero went...).\n> \n> OK, so how do you do work on a branch? By far the best way is to create\n> a separate checkout tree for the branch and do your work in that. Not\n> only is that the easiest way to deal with CVS, but you really need to\n> have the whole past tree available anyway to test your work. (And you\n> *better* test your work. 
Never forget that dot-releases tend to go out\n> with very little beta testing --- so whenever you commit an update to a\n> stable branch, you'd better be doubly sure that it's correct.)\n> \n> Normally, to checkout the head branch, you just cd to the place you\n> want to contain the toplevel \"pgsql\" directory and say\n> \n> \tcvs ... checkout pgsql\n> \n> To get a past branch, you cd to wherever you want it and say\n> \n> \tcvs ... checkout -r BRANCHNAME pgsql\n> \n> For example, just a couple days ago I did\n> \n> \tmkdir ~postgres/REL7_1\n> \tcd ~postgres/REL7_1\n> \tcvs ... checkout -r REL7_1_STABLE pgsql\n> \n> and now I have a maintenance copy of 7.1.*.\n> \n> When you've done a checkout in this way, the branch name is \"sticky\":\n> CVS automatically knows that this directory tree is for the branch,\n> and whenever you do \"cvs update\" or \"cvs commit\" in this tree, you'll\n> fetch or store the latest version in the branch, not the head version.\n> Easy as can be.\n> \n> So, if you have a patch that needs to apply to both the head and a\n> recent stable branch, you have to make the edits and do the commit\n> twice, once in your development tree and once in your stable branch\n> tree. This is kind of a pain, which is why we don't normally fork\n> the tree right away after a major release --- we wait for a dot-release\n> or two, so that we won't have to double-patch the first wave of fixes.\n> \n> Any questions? (See the CVS manual for details on these commands,\n> of course.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 13:25:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "\nI have added a mention of 'cvs status -v' to the developer's FAQ, with\nyour name on it.\n\n---------------------------------------------------------------------------\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Unfortunately \"cvs log\" isn't all that great about distinguishing\n> > branches from tags --- it calls 'em all \"symbolic names\".\n> \n> Minor addition to this: you can distinguish branches and tags by using\n> `cvs status -v'.\n> \n> (Historical note: CVS was originally implemented as shell scripts on\n> top of RCS. The .0 syntax was magic which CVS used to indicate a\n> branch as opposed to a revision tag. The output of `cvs log' is\n> simply the output of `rlog' on the underlying RCS file. `cvs status'\n> is not based on an existing RCS command.)\n> \n> Ian\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 734: Often statistics are used as a drunken man uses lampposts --\n> for support rather than illumination.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 13:26:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "> > Any questions? 
(See the CVS manual for details on these commands,\n> > of course.)\n> \n> Would someone like to integrate this into the docs appendix which\n> already discusses the CVS repository?\n\nI added these to the developer's FAQ. They seem a little detailed for\nthe main docs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Nov 2001 13:26:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "Bruce Momjian writes:\n\n> I added these to the developer's FAQ. They seem a little detailed for\n> the main docs.\n\nI was always under the impression that a FAQ was an *abbreviated* version\nof some of the main docs. As in, FAQ = frequently asked questions, main\ndocs = all possible questions. So this reasoning doesn't make sense to\nme.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 28 Nov 2001 21:48:42 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > I added these to the developer's FAQ. They seem a little detailed for\n> > the main docs.\n> \n> I was always under the impression that a FAQ was an *abbreviated* version\n> of some of the main docs. As in, FAQ = frequently asked questions, main\n> docs = all possible questions. So this reasoning doesn't make sense to\n> me.\n\nI guess informal would be a better word for what I added. They are more\nimpressions or tips. Do we want to formalize them by putting them in\nthe docs? 
I am glad to add them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Nov 2001 16:00:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new pg_dump)" }, { "msg_contents": "On Wednesday 28 November 2001 03:48 pm, Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> > I added these to the developer's FAQ. They seem a little detailed for\n> > the main docs.\n\n> I was always under the impression that a FAQ was an *abbreviated* version\n> of some of the main docs. As in, FAQ = frequently asked questions, main\n> docs = all possible questions. So this reasoning doesn't make sense to\n> me.\n\nFAQ = questions from users on how the thing works, with answers gleaned \nfrom the developer's mailing list (this has been the definition for at least \nten years -- or more -- but, as I've only been internet-literate for a mere \nten years, I wouldn't have first-hand knowledge of accepted practice prior to \n1991. As I ran a C-News site beginning in 1991, I got up to speed on the \nJargon fairly quickly. Speaking of Jargon.... according to Jargoogle, FAQ is \n'officially':\n\"FAQ /F-A-Q/ or /fak/ n. \n\n[Usenet] 1. A Frequently Asked Question. 2. A compendium of accumulated lore, \nposted periodically to high-volume newsgroups in an attempt to forestall such \nquestions. Some people prefer the term `FAQ list' or `FAQL' /fa'kl/, \nreserving `FAQ' for sense 1. \n\nThis lexicon itself serves as a good example of a collection of one kind of \nlore, although it is far too big for a regular FAQ posting. Examples: \"What \nis the proper type of NULL?\" and \"What's that funny name for the # \ncharacter?\" are both Frequently Asked Questions. Several FAQs refer readers \nto this file. 
\"\n\nSo, while Bruce isn't doing the regular list posting of the dev FAQ, it still \nis a compendium in sense 2....)\n\ndocs = our take on the questions we think will be asked about how the thing \nworks, plus any FAQL's necessary.\n\nWhile it may seem to be hairsplitting, the traditional FAQ list is just \nexactly what Bruce has developed in the developers FAQ -- these are answers \nthat currently don't fit in our docs in an organized fashion. Now, maybe if \nthe docs were modified to include this information... (hint, hint)....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 28 Nov 2001 16:22:18 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new" }, { "msg_contents": "On Wed, 28 Nov 2001, Lamar Owen wrote:\n\n> > I was always under the impression that a FAQ was an *abbreviated* version\n> > of some of the main docs. As in, FAQ = frequently asked questions, main\n> > docs = all possible questions. So this reasoning doesn't make sense to\n> > me.\n>\n> FAQ = questions from users on how the thing works, with answers gleaned\n> from the developer's mailing list (this has been the definition for at least\n> ten years -- or more -- but, as I've only been internet-literate for a mere\n> ten years, I wouldn't have first-hand knowledge of accepted practice prior to\n> 1991. As I ran a C-News site beginning in 1991, I got up to speed on the\n> Jargon fairly quickly. Speaking of Jargon.... according to Jargoogle, FAQ is\n\nI've seen them (that I recall) back to at least '72 and the definition is\nstill the same as yours Lamar.\n\n> 'officially':\n> \"FAQ /F-A-Q/ or /fak/ n.\n\n'cept we pronounced it different. Two syllables, the second only being\nthe Q. I think you get the idea. 
I won't go into the rationale for it\ntho.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 28 Nov 2001 17:23:48 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: CVS branch management (was Re: A problem with new" } ]
[ { "msg_contents": "\n> > I don't have a machine with XFS installed and it will be at least a week\n> > before I could get around to a build. Any volunteers?\n> \n> I think I could do that... any useful benchmarks to run?\n\nLooks like we have expert help here :-) One very interesting question\nwould imho be, how do we best preallocate the log files ?\nThe current method is to prewrite 8k pages to the whole file, since\nonly writing 1 byte to the end of file triggered the sparse file handling.\nThis, although usually during off peak times, effectively doubles the writes \nfor WAL. \n\nAndreas\n", "msg_date": "Mon, 7 May 2001 14:56:22 +0200 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: New Linux xfs/reiser file systems" } ]
[ { "msg_contents": "I have run a simple PostgreSQL benchmark on my SGI system which uses\nXFS for its file system on all disks to compare the effect of fsync.\nThe benchmark was the loading of a database from 157 MB of pg_dump data\nincluding the construction of 11 Btree indexes covering nearly all\nof the data. The second column was just for the data load,\nand the third column is for the index creation.\nThe system is an SGI Indigo2 R10000 running Irix 6.5.7\nwith 384 MB RAM writing to Seagate 18GB 7200RPM narrow SCSI disks.\n\n\nFsync enabled\tElapsed load time\tElapsed indexing time\nYes\t\t 15:53\t\t\t9:16\nNo\t\t 10:33\t\t\t8:40\n\nThe CPU is not fully utilized for loading, and thus the system is I/O\nbound and the use of fsync has an impact. By contrast, the indexing\nprocess is CPU bound, and fsync is less important.\n\nThe performance penalty for using fsync is modest, and therefore,\nI do not believe that we should discourage people from using XFS\nbecause it is a journaling file system. The note advising against\ninstalling Postgres on XFS should be removed from the installation\nguide. Instead, we need to explore how to use XFS's features to improve\nPostgreSQL's performance. For example, the XFS filesystem journal can be\nplaced on a drive different from the data drive. This would substantially\nimprove write performance.\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n", "msg_date": "Mon, 7 May 2001 10:35:07 -0400 (EDT)", "msg_from": "bruc@stone.congenomics.com (Robert E. 
Bruccoleri)", "msg_from_op": true, "msg_subject": "Re: XFS File systems and PostgreSQL" }, { "msg_contents": "> The performance penalty for using fsync is modest, and therefore,\n> I do not believe that we should discourage people from using XFS\n> because it is a journaling file system. The note advising against\n> installing Postgres on XFS should be removed from the installation\n> guide. Instead, we need to explore how to use XFS's features to improve\n\nI don't believe any mention has been made in the docs yet. Seems we are\nstill exploring this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 May 2001 14:38:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: XFS File systems and PostgreSQL" } ]